Compare commits


25 Commits

Author SHA1 Message Date
Ghanshyam Mann
863815153e [goal] Deprecate the JSON formatted policy file
As per the community goal of migrating the policy file format
from JSON to YAML [1], we need to do two things:

1. Change the default value of the '[oslo_policy] policy_file'
config option from 'policy.json' to 'policy.yaml' with
upgrade checks.

2. Deprecate the JSON formatted policy file on the project side
via warning in doc and releasenotes.

Also replace policy.json with policy.yaml references in docs and tests.

[1]https://governance.openstack.org/tc/goals/selected/wallaby/migrate-policy-format-from-json-to-yaml.html

Change-Id: I207c02ba71fe60635fd3406c9c9364c11f259bae
2021-02-12 19:59:27 +00:00
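The migration this commit points at is handled by the `oslopolicy-convert-json-to-yaml` tool linked in [1]. As a rough illustration of the format change only (this is not the real tool, which also preserves comments and flags rules that override defaults), a flat JSON policy file maps to YAML like this:

```python
import json

def policy_json_to_yaml(json_text: str) -> str:
    # Minimal sketch: each JSON rule becomes one quoted YAML key/value line.
    # The real oslopolicy-convert-json-to-yaml tool does much more.
    rules = json.loads(json_text)
    return "\n".join(f'"{name}": "{check}"' for name, check in rules.items())

print(policy_json_to_yaml('{"admin_api": "role:admin"}'))
# → "admin_api": "role:admin"
```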
Ghanshyam Mann
76270c8383 Fix gate requirement checks job
The current requirements-check job is failing with
the error below:

ERROR: Requirement for package PrettyTable excludes a version not excluded in the global list.
  Local settings : {'<0.8'}
  Global settings: set()
  Unexpected     : set()
Validating test-requirements.txt

Keep PrettyTable the same as what we have in the openstack/requirements repo.

Change-Id: I63633d2932757ca23bcea69fd655a2499a5b6d31
2021-02-12 18:58:23 +00:00
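The error above means the local file carried a `<0.8` cap that the global requirements list does not; the fix presumably drops that cap so the entry matches openstack/requirements. A hypothetical test-requirements.txt excerpt (the exact version floor is an assumption, not taken from the patch):

```text
# test-requirements.txt (hypothetical excerpt)
# Was: PrettyTable<0.8  — local-only exclusion rejected by requirements-check
PrettyTable>=0.7.2  # BSD
```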
Zuul
58de9c405a Merge "Use common rpc pattern for all services" 2021-02-03 10:09:25 +00:00
Zuul
8f0126f1fe Merge "incorrect name in unit test" 2021-02-03 10:02:43 +00:00
sue
ec21898978 incorrect name in unit test
An incorrect name would mislead new developers.

Change-Id: I6ea228035df4437162b6c559ebb7bfb16853c520
2021-01-26 09:43:05 +08:00
Erik Olof Gunnar Andersson
e61f9b5e88 Use common rpc pattern for all services
There is a commonly shared and proven RPC pattern used
across most OpenStack services that is already implemented
in Watcher, but the functions are not used.

This patch makes use of the existing RPC classes and
removes some unnecessary code.

Change-Id: I57424561e0675a836d10b712ef1579a334f72018
2021-01-25 12:47:52 -08:00
Zuul
e91efbde01 Merge "remove bandit B322 check" 2021-01-25 06:40:08 +00:00
sue
63b6997c83 Drop lower-constraints
Lower-constraints testing is not a requirement of the OpenStack Python PTI
[0], and there is currently a discussion on the mailing list [1]
about dropping the test, with the oslo team already having done
so [2].

The new dependency resolver in pip fails due to incompatible
dependency versions in our lower-constraints file, meaning that
we were never providing any real guarantees with it.

To unblock the CI, I am disabling the lower-constraints job for now,
with the option to re-enable it in case we fix the constraints,
and based on the outcome of the mailing list discussions and
consensus.

[0]. https://governance.openstack.org/tc/reference/pti/python.html
[1]. http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019672.html
[2]. http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019659.html

Change-Id: I588fa809839cf3112dae24e356547100f7e89bc5
2021-01-21 03:28:56 +00:00
suzhengwei
262edc8cc9 remove bandit B322 check
The check for this call to input() has been removed.
The input method in Python 2 reads from standard input, then evaluates
and runs the resulting string as Python source code. This is similar to,
though in many ways worse than, using eval. On Python 2, use raw_input
instead; input is safe in Python 3.

Change-Id: I8654f0c197bfe88796b56e9d85f563cdded6e8a8
2021-01-04 07:36:44 +00:00
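The reasoning above is easy to verify: on Python 3, input() returns the raw string and never evaluates it, which is why the B322 check became obsolete. A quick sketch:

```python
import io
import sys

# Feed fake input; on Python 3, input() returns the text untouched.
# (Python 2's input() was effectively eval(raw_input()), hence bandit's B322.)
sys.stdin = io.StringIO("1 + 1\n")
value = input()
print(repr(value))  # → '1 + 1' — the literal string, not the number 2
```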
zhufl
204b276693 Fix missing self argument in instances_no_attached
instances_no_attached should have self as its first argument; this
patch adds it.

Change-Id: I010d9d1e9ddb8790c398bcf06d0772a0d17f57ec
2020-11-27 17:01:52 +08:00
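The bug class this commit fixes can be sketched as follows; the method body here is hypothetical and only the signature matters:

```python
class ComputeModel:
    # Bug pattern: defining an instance method without `self` makes Python
    # bind the instance to the first named parameter, so calls misbehave or
    # raise TypeError. The fix is simply adding `self` first.
    def instances_no_attached(self, instances):
        # Hypothetical body: keep only instances with no attached volumes.
        return [i for i in instances if not i.get("attached")]

model = ComputeModel()
print(model.instances_no_attached([{"id": 1, "attached": False},
                                   {"id": 2, "attached": True}]))
# → [{'id': 1, 'attached': False}]
```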
Zuul
f8a2877f24 Merge "Imported Translations from Zanata" 2020-11-10 09:43:45 +00:00
zhufl
af02bebca9 Fix parameter passed to IronicNodeNotFound exception
IronicNodeNotFound expects a uuid parameter for the error message,
not name.

Change-Id: I9fefa98fa9fe6f6491e5f621190cac7d376db6c9
2020-11-02 15:48:27 +08:00
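The pattern behind this fix: oslo-style exceptions interpolate a message template with keyword arguments, so the keyword passed must match the template key. A simplified sketch, not Watcher's actual class:

```python
class IronicNodeNotFound(Exception):
    # Simplified oslo-style message template keyed on 'uuid'; passing
    # name= instead would raise KeyError during interpolation.
    msg_fmt = "The ironic node %(uuid)s cannot be found."

    def __init__(self, **kwargs):
        super().__init__(self.msg_fmt % kwargs)

try:
    raise IronicNodeNotFound(uuid="f1a2")
except IronicNodeNotFound as exc:
    print(exc)  # → The ironic node f1a2 cannot be found.
```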
OpenStack Proposal Bot
3aaa20908d Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I1c07f65533761586bf9563376004eaf0897743cb
2020-10-29 10:29:55 +00:00
wu.chunyang
5097665be3 Remove the unused coding style modules
Python modules related to coding style checks (listed in blacklist.txt in
the openstack/requirements repo) are dropped from lower-constraints.txt,
as they are not needed during installation.

Change-Id: Iadf4581646131f87803c2cebbc66bd55fdb56685
2020-10-22 00:19:35 +08:00
root
09f6e3bde5 Remove usage of six
Remove the six library and replace the following items with Python 3 style code:
- six.string_types
- six.moves
- six.iteritems

Change-Id: I30358b3b08cc076ac59bd325d0e11a3e2deabde3
2020-10-12 05:41:00 +00:00
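The substitutions listed above can be sketched in plain Python 3 (a minimal illustration, not the actual patch):

```python
# Python 3 replacements for the removed six helpers:
#   six.string_types -> str
#   six.iteritems(d) -> d.items()
#   six.moves.range  -> range
d = {"cpu": 4}

assert isinstance("watcher", str)        # was: isinstance(x, six.string_types)
assert list(d.items()) == [("cpu", 4)]   # was: list(six.iteritems(d))
print(sum(range(3)))                     # was: six.moves.range — prints 3
```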
root
f488636fb8 Bump py37 to py38 in tox.ini
In the 'victoria' cycle, we should test py38 by default.

ref:
  https://governance.openstack.org/tc/reference/runtimes/victoria.html

Change-Id: I0a1d49d3f0b2401b5941cd510bc7627863947532
2020-10-12 03:24:16 +00:00
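A hypothetical tox.ini excerpt matching such a bump (the exact env list is an assumption, not taken from the patch):

```ini
# tox.ini (hypothetical excerpt) — Victoria tested runtimes default to py38
[tox]
envlist = py38,pep8
```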
Zuul
11cb88c2cd Merge "Remove six" 2020-10-10 02:31:22 +00:00
xuanyandong
16a0486655 Remove six
Replace the following items with Python 3 style code.

- six.string_types
- six.integer_types
- six.moves
- six.PY2

Implements: blueprint six-removal

Change-Id: I2a0624bd4b455c7e5a0617f1253efa05485dc673
2020-09-30 16:25:13 +08:00
Zuul
2454d4d199 Merge "Add Python3 wallaby unit tests" 2020-09-30 04:06:23 +00:00
Zuul
45dca00dee Merge "Implements base method for time series metrics" 2020-09-27 02:45:20 +00:00
Zuul
09b2383685 Merge "[goal] Migrate testing to ubuntu focal" 2020-09-25 07:06:43 +00:00
OpenStack Release Bot
f8797a7f70 Add Python3 wallaby unit tests
This is an automatically generated patch to ensure unit testing
is in place for all of the tested runtimes for wallaby.

See also the PTI in governance [1].

[1]: https://governance.openstack.org/tc/reference/project-testing-interface.html

Change-Id: I8951721c8c06ba6ebde9b68665c9aa791ab7ef9b
2020-09-22 14:12:54 +00:00
OpenStack Release Bot
da283b49b8 Update master for stable/victoria
Add file to the reno documentation build to show release notes for
stable/victoria.

Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/victoria.

Change-Id: I311548732398a680ba50a72273fb98bb16009be4
Sem-Ver: feature
2020-09-22 14:12:53 +00:00
Ghanshyam Mann
e21e5f609e [goal] Migrate testing to ubuntu focal
As per the Victoria cycle testing runtime and community goal [1],
we need to migrate upstream CI/CD to Ubuntu Focal (20.04).

Fixing:
- bug #1886298
Bump the lower constraints for required deps which added Python 3.8
support in their later versions.

- Move multinode jobs to the focal nodeset

Story: #2007865
Task: #40227

Closes-Bug: #1886298

[1] https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal

Depends-On: https://review.opendev.org/#/c/752294/

Change-Id: Iec953f3294087cd0b628b701ad3d684cea61c057
2020-09-17 10:59:59 +00:00
Dantali0n
cca0d9f7d7 Implements base method for time series metrics
Implements the base method as well as some basic implementations to
retrieve time series metrics. Ceilometer cannot be supported, as its
API documentation has been unavailable. Grafana will be supported in a
follow-up patch.

Partially Implements: blueprint time-series-framework

Change-Id: I55414093324c8cff379b28f5b855f41a9265c2d3
2020-08-26 16:01:15 +02:00
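The "base method" idea described above might be sketched like this; the class and method names here are hypothetical, not Watcher's actual API:

```python
import abc

class DataSourceBase(abc.ABC):
    """Hypothetical datasource base class for time series metrics."""

    @abc.abstractmethod
    def statistic_series(self, resource, metric, start, end, granularity):
        """Return {timestamp: value} samples for `metric` over the window."""

class FakeSource(DataSourceBase):
    # Toy implementation: one sample at the window start.
    def statistic_series(self, resource, metric, start, end, granularity):
        return {start: 0.5}

print(FakeSource().statistic_series("node-1", "host_cpu_usage", 0, 300, 300))
# → {0: 0.5}
```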
34 changed files with 771 additions and 267 deletions

View File

@@ -2,8 +2,7 @@
templates: templates:
- check-requirements - check-requirements
- openstack-cover-jobs - openstack-cover-jobs
- openstack-lower-constraints-jobs - openstack-python3-wallaby-jobs
- openstack-python3-victoria-jobs
- publish-openstack-docs-pti - publish-openstack-docs-pti
- release-notes-jobs-python3 - release-notes-jobs-python3
check: check:
@@ -102,7 +101,7 @@
- job: - job:
name: watcher-tempest-multinode name: watcher-tempest-multinode
parent: watcher-tempest-functional parent: watcher-tempest-functional
nodeset: openstack-two-node-bionic nodeset: openstack-two-node-focal
roles: roles:
- zuul: openstack/tempest - zuul: openstack/tempest
group-vars: group-vars:

View File

@@ -17,6 +17,14 @@
Policies Policies
======== ========
.. warning::
   JSON formatted policy file is deprecated since Watcher 6.0.0 (Wallaby).
   This `oslopolicy-convert-json-to-yaml`__ tool will migrate your existing
   JSON-formatted policy file to YAML in a backward-compatible way.

.. __: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html
Watcher's public API calls may be restricted to certain sets of users using a Watcher's public API calls may be restricted to certain sets of users using a
policy configuration file. This document explains exactly how policies are policy configuration file. This document explains exactly how policies are
configured and what they apply to. configured and what they apply to.

View File

@@ -56,9 +56,6 @@ Here is an example showing how you can write a plugin called ``NewStrategy``:
# filepath: thirdparty/new.py # filepath: thirdparty/new.py
# import path: thirdparty.new # import path: thirdparty.new
import abc import abc
import six
from watcher._i18n import _ from watcher._i18n import _
from watcher.decision_engine.strategy.strategies import base from watcher.decision_engine.strategy.strategies import base

View File

@@ -1,150 +0,0 @@
alabaster==0.7.10
alembic==0.9.8
amqp==2.2.2
appdirs==1.4.3
APScheduler==3.5.1
asn1crypto==0.24.0
automaton==1.14.0
beautifulsoup4==4.6.0
cachetools==2.0.1
certifi==2018.1.18
cffi==1.11.5
chardet==3.0.4
cliff==2.11.0
cmd2==0.8.1
contextlib2==0.5.5
coverage==4.5.1
croniter==0.3.20
cryptography==2.1.4
debtcollector==1.19.0
decorator==4.2.1
deprecation==2.0
doc8==0.8.0
docutils==0.14
dogpile.cache==0.6.5
dulwich==0.19.0
enum34==1.1.6
enum-compat==0.0.2
eventlet==0.20.0
extras==1.0.0
fasteners==0.14.1
fixtures==3.0.0
freezegun==0.3.10
futurist==1.8.0
gitdb2==2.0.3
GitPython==2.1.8
gnocchiclient==7.0.1
greenlet==0.4.13
idna==2.6
imagesize==1.0.0
iso8601==0.1.12
Jinja2==2.10
jmespath==0.9.3
jsonpatch==1.21
jsonpointer==2.0
jsonschema==3.2.0
keystoneauth1==3.4.0
keystonemiddleware==4.21.0
kombu==4.1.0
linecache2==1.0.0
logutils==0.3.5
lxml==4.1.1
Mako==1.0.7
MarkupSafe==1.0
mccabe==0.2.1
microversion_parse==0.2.1
monotonic==1.4
msgpack==0.5.6
munch==2.2.0
netaddr==0.7.19
netifaces==0.10.6
networkx==2.2
openstacksdk==0.12.0
os-api-ref===1.4.0
os-client-config==1.29.0
os-service-types==1.2.0
os-testr==1.0.0
osc-lib==1.10.0
os-resource-classes==0.4.0
oslo.cache==1.29.0
oslo.concurrency==3.26.0
oslo.config==5.2.0
oslo.context==2.21.0
oslo.db==4.35.0
oslo.i18n==3.20.0
oslo.log==3.37.0
oslo.messaging==8.1.2
oslo.middleware==3.35.0
oslo.policy==1.34.0
oslo.reports==1.27.0
oslo.serialization==2.25.0
oslo.service==1.30.0
oslo.upgradecheck==0.1.0
oslo.utils==3.36.0
oslo.versionedobjects==1.32.0
oslotest==3.3.0
packaging==17.1
Paste==2.0.3
PasteDeploy==1.5.2
pbr==3.1.1
pecan==1.3.2
pika==0.10.0
pika-pool==0.1.3
prettytable==0.7.2
psutil==5.4.3
pycadf==2.7.0
pycparser==2.18
Pygments==2.2.0
pyinotify==0.9.6
pyOpenSSL==17.5.0
pyparsing==2.2.0
pyperclip==1.6.0
python-ceilometerclient==2.9.0
python-cinderclient==3.5.0
python-dateutil==2.7.0
python-editor==1.0.3
python-glanceclient==2.9.1
python-ironicclient==2.5.0
python-keystoneclient==3.15.0
python-mimeparse==1.6.0
python-monascaclient==1.12.0
python-neutronclient==6.7.0
python-novaclient==14.1.0
python-openstackclient==3.14.0
python-subunit==1.2.0
pytz==2018.3
PyYAML==3.12
repoze.lru==0.7
requests==2.18.4
requestsexceptions==1.4.0
restructuredtext-lint==1.1.3
rfc3986==1.1.0
Routes==2.4.1
simplegeneric==0.8.1
simplejson==3.13.2
smmap2==2.0.3
snowballstemmer==1.2.1
SQLAlchemy==1.2.5
sqlalchemy-migrate==0.11.0
sqlparse==0.2.4
statsd==3.2.2
stestr==2.0.0
stevedore==1.28.0
taskflow==3.7.1
Tempita==0.5.2
tenacity==4.9.0
testresources==2.0.1
testscenarios==0.5.0
testtools==2.3.0
traceback2==1.4.0
tzlocal==1.5.1
ujson==1.35
unittest2==1.1.0
urllib3==1.22
vine==1.1.4
waitress==1.1.0
warlock==1.3.0
WebOb==1.8.5
WebTest==2.0.29
wrapt==1.10.11
WSME==0.9.2

View File

@@ -0,0 +1,20 @@
---
upgrade:
  - |
    The default value of the ``[oslo_policy] policy_file`` config option
    has been changed from ``policy.json`` to ``policy.yaml``. Operators
    utilizing customized or previously generated static policy JSON files
    (which are not needed by default) should generate new policy files or
    convert them to YAML format. Use the
    `oslopolicy-convert-json-to-yaml
    <https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html>`_
    tool to convert a JSON-formatted policy file to YAML in a
    backward-compatible way.
deprecations:
  - |
    Use of JSON policy files was deprecated by the ``oslo.policy`` library
    during the Victoria development cycle. As a result, this deprecation is
    being noted in the Wallaby cycle, with an anticipated future removal of
    support by ``oslo.policy``. As such, operators will need to convert to
    YAML policy files. Please see the upgrade notes for details on the
    migration of any custom policy files.

View File

@@ -21,6 +21,7 @@ Contents:
:maxdepth: 1 :maxdepth: 1
unreleased unreleased
victoria
ussuri ussuri
train train
stein stein

View File

@@ -1,14 +1,15 @@
# Andi Chandler <andi@gowling.com>, 2017. #zanata # Andi Chandler <andi@gowling.com>, 2017. #zanata
# Andi Chandler <andi@gowling.com>, 2018. #zanata # Andi Chandler <andi@gowling.com>, 2018. #zanata
# Andi Chandler <andi@gowling.com>, 2020. #zanata
msgid "" msgid ""
msgstr "" msgstr ""
"Project-Id-Version: python-watcher\n" "Project-Id-Version: python-watcher\n"
"Report-Msgid-Bugs-To: \n" "Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2018-11-08 01:22+0000\n" "POT-Creation-Date: 2020-10-27 04:13+0000\n"
"MIME-Version: 1.0\n" "MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n" "Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n" "Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-11-07 06:15+0000\n" "PO-Revision-Date: 2020-10-28 11:13+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n" "Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n" "Language-Team: English (United Kingdom)\n"
"Language: en_GB\n" "Language: en_GB\n"
@@ -54,6 +55,61 @@ msgstr "1.7.0"
msgid "1.9.0" msgid "1.9.0"
msgstr "1.9.0" msgstr "1.9.0"
msgid "2.0.0"
msgstr "2.0.0"
msgid "3.0.0"
msgstr "3.0.0"
msgid "4.0.0"
msgstr "4.0.0"
msgid "A ``watcher-status upgrade check`` has been added for this."
msgstr "A ``watcher-status upgrade check`` has been added for this."
msgid ""
"A new threadpool for the decision engine that contributors can use to "
"improve the performance of many operations, primarily I/O bound onces. The "
"amount of workers used by the decision engine threadpool can be configured "
"to scale according to the available infrastructure using the "
"`watcher_decision_engine.max_general_workers` config option. Documentation "
"for contributors to effectively use this threadpool is available online: "
"https://docs.openstack.org/watcher/latest/contributor/concurrency.html"
msgstr ""
"A new threadpool for the decision engine that contributors can use to "
"improve the performance of many operations, primarily I/O bound onces. The "
"amount of workers used by the decision engine threadpool can be configured "
"to scale according to the available infrastructure using the "
"`watcher_decision_engine.max_general_workers` config option. Documentation "
"for contributors to effectively use this threadpool is available online: "
"https://docs.openstack.org/watcher/latest/contributor/concurrency.html"
msgid ""
"API calls while building the Compute data model will be retried upon "
"failure. The amount of failures allowed before giving up and the time before "
"reattempting are configurable. The `api_call_retries` and "
"`api_query_timeout` parameters in the `[collector]` group can be used to "
"adjust these paremeters. 10 retries with a 1 second time in between "
"reattempts is the default."
msgstr ""
"API calls while building the Compute data model will be retried upon "
"failure. The amount of failures allowed before giving up and the time before "
"reattempting are configurable. The `api_call_retries` and "
"`api_query_timeout` parameters in the `[collector]` group can be used to "
"adjust these parameters. 10 retries with a 1 second time in between "
"reattempts is the default."
msgid ""
"Add a new webhook API and a new audit type EVENT, the microversion is 1.4. "
"Now Watcher user can create audit with EVENT type and the audit will be "
"triggered by webhook API. The user guide is available online: https://docs."
"openstack.org/watcher/latest/user/event_type_audit.html"
msgstr ""
"Add a new webhook API and a new audit type EVENT, the microversion is 1.4. "
"Now Watcher user can create audit with EVENT type and the audit will be "
"triggered by webhook API. The user guide is available online: https://docs."
"openstack.org/watcher/latest/user/event_type_audit.html"
msgid "Add a service supervisor to watch Watcher deamons." msgid "Add a service supervisor to watch Watcher deamons."
msgstr "Add a service supervisor to watch Watcher daemons." msgstr "Add a service supervisor to watch Watcher daemons."
@@ -67,6 +123,24 @@ msgstr ""
"Add description property for dynamic action. Admin can see detail " "Add description property for dynamic action. Admin can see detail "
"information of any specify action." "information of any specify action."
msgid ""
"Add force field to Audit. User can set --force to enable the new option when "
"launching audit. If force is True, audit will be executed despite of ongoing "
"actionplan. The new audit may create a wrong actionplan if they use the same "
"data model."
msgstr ""
"Add force field to Audit. User can set --force to enable the new option when "
"launching audit. If force is True, audit will be executed despite of ongoing "
"actionplan. The new audit may create a wrong actionplan if they use the same "
"data model."
msgid ""
"Add keystone_client Group for user to configure 'interface' and "
"'region_name' by watcher.conf. The default value of 'interface' is 'admin'."
msgstr ""
"Add keystone_client Group for user to configure 'interface' and "
"'region_name' by watcher.conf. The default value of 'interface' is 'admin'."
msgid "Add notifications related to Action object." msgid "Add notifications related to Action object."
msgstr "Add notifications related to Action object." msgstr "Add notifications related to Action object."
@@ -79,6 +153,25 @@ msgstr "Add notifications related to Audit object."
msgid "Add notifications related to Service object." msgid "Add notifications related to Service object."
msgstr "Add notifications related to Service object." msgstr "Add notifications related to Service object."
msgid ""
"Add show data model api for Watcher. New in version 1.3. User can use "
"'openstack optimize datamodel list' command to view the current data model "
"information in memory. User can also add '--audit <Audit_UUID>' to view "
"specific data model in memory filted by the scope in audit. User can also "
"add '--detail' to view detailed information about current data model. User "
"can also add '--type <type>' to specify the type of data model. Default type "
"is 'compute'. In the future, type 'storage' and 'baremetal' will be "
"supported."
msgstr ""
"Add show data model API for Watcher. New in version 1.3. User can use "
"'openstack optimize datamodel list' command to view the current data model "
"information in memory. User can also add '--audit <Audit_UUID>' to view "
"specific data model in memory filtered by the scope in audit. User can also "
"add '--detail' to view detailed information about current data model. User "
"can also add '--type <type>' to specify the type of data model. Default type "
"is 'compute'. In the future, type 'storage' and 'baremetal' will be "
"supported."
msgid "" msgid ""
"Add start_time and end_time fields in audits table. User can set the start " "Add start_time and end_time fields in audits table. User can set the start "
"time and/or end time when creating CONTINUOUS audit." "time and/or end time when creating CONTINUOUS audit."
@@ -93,6 +186,19 @@ msgstr ""
"Add superseded state for an action plan if the cluster data model has " "Add superseded state for an action plan if the cluster data model has "
"changed after it has been created." "changed after it has been created."
msgid ""
"Added Placement API helper to Watcher. Now Watcher can get information about "
"resource providers, it can be used for the data model and strategies. Config "
"group placement_client with options 'api_version', 'interface' and "
"'region_name' is also added. The default values for 'api_version' and "
"'interface' are 1.29 and 'public', respectively."
msgstr ""
"Added Placement API helper to Watcher. Now Watcher can get information about "
"resource providers, it can be used for the data model and strategies. Config "
"group placement_client with options 'api_version', 'interface' and "
"'region_name' is also added. The default values for 'api_version' and "
"'interface' are 1.29 and 'public', respectively."
msgid "Added SUSPENDED audit state" msgid "Added SUSPENDED audit state"
msgstr "Added SUSPENDED audit state" msgstr "Added SUSPENDED audit state"
@@ -107,6 +213,31 @@ msgstr ""
"scoring engine by different Strategies, which improve the code and data " "scoring engine by different Strategies, which improve the code and data "
"model re-use." "model re-use."
msgid ""
"Added a new config option 'action_execution_rule' which is a dict type. Its "
"key field is strategy name and the value is 'ALWAYS' or 'ANY'. 'ALWAYS' "
"means the callback function returns True as usual. 'ANY' means the return "
"depends on the result of previous action execution. The callback returns "
"True if previous action gets failed, and the engine continues to run the "
"next action. If previous action executes success, the callback returns False "
"then the next action will be ignored. For strategies that aren't in "
"'action_execution_rule', the callback always returns True. Please add the "
"next section in the watcher.conf file if your strategy needs this feature. "
"[watcher_workflow_engines.taskflow] action_execution_rule = {'your strategy "
"name': 'ANY'}"
msgstr ""
"Added a new config option 'action_execution_rule' which is a dict type. Its "
"key field is strategy name and the value is 'ALWAYS' or 'ANY'. 'ALWAYS' "
"means the callback function returns True as usual. 'ANY' means the return "
"depends on the result of previous action execution. The callback returns "
"True if previous action gets failed, and the engine continues to run the "
"next action. If previous action executes success, the callback returns False "
"then the next action will be ignored. For strategies that aren't in "
"'action_execution_rule', the callback always returns True. Please add the "
"next section in the watcher.conf file if your strategy needs this feature. "
"[watcher_workflow_engines.taskflow] action_execution_rule = {'your strategy "
"name': 'ANY'}"
msgid "" msgid ""
"Added a new strategy based on the airflow of servers. This strategy makes " "Added a new strategy based on the airflow of servers. This strategy makes "
"decisions to migrate VMs to make the airflow uniform." "decisions to migrate VMs to make the airflow uniform."
@@ -248,6 +379,15 @@ msgstr ""
"The strategy migrates many instances and volumes efficiently with minimum " "The strategy migrates many instances and volumes efficiently with minimum "
"downtime automatically." "downtime automatically."
msgid ""
"Added strategy \"node resource consolidation\". This strategy is used to "
"centralize VMs to as few nodes as possible by VM migration. User can set an "
"input parameter to decide how to select the destination node."
msgstr ""
"Added strategy \"node resource consolidation\". This strategy is used to "
"centralize VMs to as few nodes as possible by VM migration. User can set an "
"input parameter to decide how to select the destination node."
msgid "" msgid ""
"Added strategy to identify and migrate a Noisy Neighbor - a low priority VM " "Added strategy to identify and migrate a Noisy Neighbor - a low priority VM "
"that negatively affects peformance of a high priority VM by over utilizing " "that negatively affects peformance of a high priority VM by over utilizing "
@@ -284,6 +424,19 @@ msgstr ""
msgid "Adds baremetal data model in Watcher" msgid "Adds baremetal data model in Watcher"
msgstr "Adds baremetal data model in Watcher" msgstr "Adds baremetal data model in Watcher"
msgid ""
"All datasources can now be configured to retry retrieving a metric upon "
"encountering an error. Between each attempt will be a set amount of time "
"which can be adjusted from the configuration. These configuration options "
"can be found in the `[watcher_datasources]` group and are named "
"`query_max_retries` and `query_timeout`."
msgstr ""
"All datasources can now be configured to retry retrieving a metric upon "
"encountering an error. Between each attempt will be a set amount of time "
"which can be adjusted from the configuration. These configuration options "
"can be found in the `[watcher_datasources]` group and are named "
"`query_max_retries` and `query_timeout`."
msgid "" msgid ""
"Allow decision engine to pass strategy parameters, like optimization " "Allow decision engine to pass strategy parameters, like optimization "
"threshold, to selected strategy, also strategy to provide parameters info to " "threshold, to selected strategy, also strategy to provide parameters info to "
@@ -293,6 +446,34 @@ msgstr ""
"threshold, to selected strategy, also strategy to provide parameters info to " "threshold, to selected strategy, also strategy to provide parameters info to "
"end user." "end user."
msgid ""
"Allow using file to override metric map. Override the metric map of each "
"datasource as soon as it is created by the manager. This override comes from "
"a file whose path is provided by a setting in config file. The setting is "
"`watcher_decision_engine/metric_map_path`. The file contains a map per "
"datasource whose keys are the metric names as recognized by watcher and the "
"value is the real name of the metric in the datasource. This setting "
"defaults to `/etc/watcher/metric_map.yaml`, and presence of this file is "
"optional."
msgstr ""
"Allow using file to override metric map. Override the metric map of each "
"datasource as soon as it is created by the manager. This override comes from "
"a file whose path is provided by a setting in config file. The setting is "
"`watcher_decision_engine/metric_map_path`. The file contains a map per "
"datasource whose keys are the metric names as recognized by watcher and the "
"value is the real name of the metric in the datasource. This setting "
"defaults to `/etc/watcher/metric_map.yaml`, and presence of this file is "
"optional."
msgid ""
"An Watcher API WSGI application script ``watcher-api-wsgi`` is now "
"available. It is auto-generated by ``pbr`` and allows to run the API service "
"using WSGI server (for example Nginx and uWSGI)."
msgstr ""
"An Watcher API WSGI application script ``watcher-api-wsgi`` is now "
"available. It is auto-generated by ``pbr`` and allows to run the API service "
"using WSGI server (for example Nginx and uWSGI)."
msgid "" msgid ""
"Audits have 'name' field now, that is more friendly to end users. Audit's " "Audits have 'name' field now, that is more friendly to end users. Audit's "
"name can't exceed 63 characters." "name can't exceed 63 characters."
@@ -300,9 +481,25 @@ msgstr ""
"Audits have 'name' field now, that is more friendly to end users. Audit's " "Audits have 'name' field now, that is more friendly to end users. Audit's "
"name can't exceed 63 characters." "name can't exceed 63 characters."
msgid ""
"Baremetal Model gets Audit scoper with an ability to exclude Ironic nodes."
msgstr ""
"Baremetal Model gets Audit scope with an ability to exclude Ironic nodes."
msgid "Bug Fixes" msgid "Bug Fixes"
msgstr "Bug Fixes" msgstr "Bug Fixes"
msgid ""
"Ceilometer Datasource has been deprecated since its API has been deprecated "
"in Ocata cycle. Watcher has supported Ceilometer for some releases after "
"Ocata to let users migrate to Gnocchi/Monasca datasources. Since Train "
"release, Ceilometer support will be removed."
msgstr ""
"Ceilometer Datasource has been deprecated since its API has been deprecated "
"in Ocata cycle. Watcher has supported Ceilometer for some releases after "
"Ocata to let users migrate to Gnocchi/Monasca datasources. Since Train "
"release, Ceilometer support will be removed."
msgid "Centralize all configuration options for Watcher." msgid "Centralize all configuration options for Watcher."
msgstr "Centralise all configuration options for Watcher." msgstr "Centralise all configuration options for Watcher."
@@ -360,6 +557,52 @@ msgstr ""
"Now instances from particular project in OpenStack can be excluded from " "Now instances from particular project in OpenStack can be excluded from "
"audit defining scope in audit templates." "audit defining scope in audit templates."
msgid ""
"For a large cloud infrastructure, retrieving data from Nova may take a long "
"time. To avoid getting too much data from Nova, building the compute data "
"model according to the scope of audit."
msgstr ""
"For a large cloud infrastructure, retrieving data from Nova may take a long "
"time. To avoid getting too much data from Nova, building the compute data "
"model according to the scope of audit."
msgid ""
"Grafana has been added as datasource that can be used for collecting "
"metrics. The configuration options allow to specify what metrics and how "
"they are stored in grafana so that no matter how Grafana is configured it "
"can still be used. The configuration can be done via the typical "
"configuration file but it is recommended to configure most options in the "
"yaml file for metrics. For a complete walkthrough on configuring Grafana "
"see: https://docs.openstack.org/watcher/latest/datasources/grafana.html"
msgstr ""
"Grafana has been added as datasource that can be used for collecting "
"metrics. The configuration options allow to specify what metrics and how "
"they are stored in Grafana so that no matter how Grafana is configured it "
"can still be used. The configuration can be done via the typical "
"configuration file but it is recommended to configure most options in the "
"yaml file for metrics. For a complete walkthrough on configuring Grafana "
"see: https://docs.openstack.org/watcher/latest/datasources/grafana.html"
msgid ""
"If Gnocchi was configured to have a custom amount of retries and or a custom "
"timeout then the configuration needs to moved into the "
"`[watcher_datasources]` group instead of the `[gnocchi_client]` group."
msgstr ""
"If Gnocchi was configured to have a custom amount of retries and or a custom "
"timeout then the configuration needs to moved into the "
"`[watcher_datasources]` group instead of the `[gnocchi_client]` group."
msgid ""
"Improved interface for datasource baseclass that better defines expected "
"values and types for parameters and return types of all abstract methods. "
"This allows all strategies to work with every datasource provided the "
"metrics are configured for that given datasource."
msgstr ""
"Improved interface for datasource baseclass that better defines expected "
"values and types for parameters and return types of all abstract methods. "
"This allows all strategies to work with every datasource provided the "
"metrics are configured for that given datasource."
msgid ""
"Instance cold migration logic is now replaced with using Nova migrate "
"Server(migrate Action) API which has host option since v2.56."
@@ -367,6 +610,17 @@ msgstr ""
"Instance cold migration logic is now replaced with using Nova migrate "
"Server(migrate Action) API which has host option since v2.56."
msgid ""
"Many operations in the decision engine will block on I/O. Such I/O "
"operations can stall the execution of a sequential application "
"significantly. To reduce the potential bottleneck of many operations the "
"general purpose decision engine threadpool is introduced."
msgstr ""
"Many operations in the decision engine will block on I/O. Such I/O "
"operations can stall the execution of a sequential application "
"significantly. To reduce the potential bottleneck of many operations the "
"general purpose decision engine threadpool is introduced."
msgid "New Features"
msgstr "New Features"
@@ -389,6 +643,13 @@ msgstr ""
"Nova API version is now set to 2.56 by default. This needs the migrate "
"action of migration type cold with destination_node parameter to work."
msgid ""
"Now Watcher strategy can select specific planner beyond default. Strategy "
"can set planner property to specify its own planner."
msgstr ""
"Now Watcher strategy can select specific planner beyond default. Strategy "
"can set planner property to specify its own planner."
msgid "Ocata Series Release Notes"
msgstr "Ocata Series Release Notes"
@@ -429,12 +690,60 @@ msgstr ""
"resources will be called \"Audit scope\" and will be defined in each audit "
"template (which contains the audit settings)."
msgid ""
"Python 2.7 support has been dropped. Last release of Watcher to support "
"py2.7 is OpenStack Train. The minimum version of Python now supported by "
"Watcher is Python 3.6."
msgstr ""
"Python 2.7 support has been dropped. Last release of Watcher to support "
"py2.7 is OpenStack Train. The minimum version of Python now supported by "
"Watcher is Python 3.6."
msgid "Queens Series Release Notes"
msgstr "Queens Series Release Notes"
msgid "Rocky Series Release Notes"
msgstr "Rocky Series Release Notes"
msgid ""
"Several strategies have changed the `node` parameter to `compute_node` to be "
"better aligned with terminology. These strategies include "
"`basic_consolidation` and `workload_stabilzation`. The `node` parameter will "
"remain supported during Train release and will be removed in the subsequent "
"release."
msgstr ""
"Several strategies have changed the `node` parameter to `compute_node` to be "
"better aligned with terminology. These strategies include "
"`basic_consolidation` and `workload_stabilzation`. The `node` parameter will "
"remain supported during Train release and will be removed in the subsequent "
"release."
msgid ""
"Specific strategies can override this order and use datasources which are "
"not listed in the global preference."
msgstr ""
"Specific strategies can override this order and use datasources which are "
"not listed in the global preference."
msgid "Stein Series Release Notes"
msgstr "Stein Series Release Notes"
msgid ""
"The building of the compute (Nova) data model will be done using the "
"decision engine threadpool, thereby, significantly reducing the total time "
"required to build it."
msgstr ""
"The building of the compute (Nova) data model will be done using the "
"decision engine threadpool, thereby, significantly reducing the total time "
"required to build it."
msgid ""
"The configuration options for query retries in `[gnocchi_client]` are "
"deprecated and the option in `[watcher_datasources]` should now be used."
msgstr ""
"The configuration options for query retries in `[gnocchi_client]` are "
"deprecated and the option in `[watcher_datasources]` should now be used."
msgid ""
"The graph model describes how VMs are associated to compute hosts. This "
"allows for seeing relationships upfront between the entities and hence can "
@@ -455,6 +764,22 @@ msgstr ""
"was fixed. Before fixing, it booted an instance in the service project as a "
"migrated instance."
msgid ""
"The minimum required version of the ``[nova_client]/api_version`` value is "
"now enforced to be ``2.56`` which is available since the Queens version of "
"the nova compute service."
msgstr ""
"The minimum required version of the ``[nova_client]/api_version`` value is "
"now enforced to be ``2.56`` which is available since the Queens version of "
"the Nova compute service."
msgid ""
"The new strategy baseclass has significant changes in method parameters and "
"any out-of-tree strategies will have to be adopted."
msgstr ""
"The new strategy baseclass has significant changes in method parameters and "
"any out-of-tree strategies will have to be adopted."
msgid ""
"There is new ability to create Watcher continuous audits with cron interval. "
"It means you may use, for example, optional argument '--interval \"\\*/5 \\* "
@@ -468,9 +793,27 @@ msgstr ""
"best effort basis and therefore, we recommend you to use a minimal cron "
"interval of at least one minute."
msgid "Train Series Release Notes"
msgstr "Train Series Release Notes"
msgid "Upgrade Notes"
msgstr "Upgrade Notes"
msgid ""
"Using ``watcher/api/app.wsgi`` script is deprecated and it will be removed "
"in U release. Please switch to automatically generated ``watcher-api-wsgi`` "
"script instead."
msgstr ""
"Using ``watcher/api/app.wsgi`` script is deprecated and it will be removed "
"in U release. Please switch to automatically generated ``watcher-api-wsgi`` "
"script instead."
msgid "Ussuri Series Release Notes"
msgstr "Ussuri Series Release Notes"
msgid "Victoria Series Release Notes"
msgstr "Victoria Series Release Notes"
msgid ""
"Watcher can continuously optimize the OpenStack cloud for a specific "
"strategy or goal by triggering an audit periodically which generates an "
@@ -480,6 +823,15 @@ msgstr ""
"strategy or goal by triggering an audit periodically which generates an "
"action plan and run it automatically."
msgid ""
"Watcher can get resource information such as total, allocation ratio and "
"reserved information from Placement API. Now we add some new fields to the "
"Watcher Data Model:"
msgstr ""
"Watcher can get resource information such as total, allocation ratio and "
"reserved information from Placement API. Now we add some new fields to the "
"Watcher Data Model:"
msgid ""
"Watcher can now run specific actions in parallel improving the performances "
"dramatically when executing an action plan."
@@ -517,6 +869,15 @@ msgstr ""
"includes all instances. It filters excluded instances when migration during "
"the audit."
msgid ""
"Watcher now supports configuring which datasource to use and in which order. "
"This configuration is done by specifying datasources in the "
"watcher_datasources section:"
msgstr ""
"Watcher now supports configuring which datasource to use and in which order. "
"This configuration is done by specifying datasources in the "
"watcher_datasources section:"
msgid ""
"Watcher removes the support to Nova legacy notifications because of Nova "
"will deprecate them."
@@ -557,9 +918,15 @@ msgstr ""
"Watcher supports multiple metrics backend and relies on Ceilometer and "
"Monasca."
msgid "We also add some new propeties:"
msgstr "We also add some new properties:"
msgid "Welcome to watcher's Release Notes documentation!"
msgstr "Welcome to watcher's Release Notes documentation!"
msgid "``[watcher_datasources] datasources = gnocchi,monasca,ceilometer``"
msgstr "``[watcher_datasources] datasources = gnocchi,monasca,ceilometer``"
msgid ""
"all Watcher objects have been refactored to support OVO (oslo."
"versionedobjects) which was a prerequisite step in order to implement "
@@ -569,6 +936,21 @@ msgstr ""
"versionedobjects) which was a prerequisite step in order to implement "
"versioned notifications."
msgid ""
"disk_gb_capacity: The amount of disk, take allocation ratio into account, "
"but do not include reserved."
msgstr ""
"disk_gb_capacity: The amount of disk, take allocation ratio into account, "
"but do not include reserved."
msgid ""
"disk_gb_reserved: The amount of disk a node has reserved for its own use."
msgstr ""
"disk_gb_reserved: The amount of disk a node has reserved for its own use."
msgid "disk_ratio: Disk allocation ratio."
msgstr "disk_ratio: Disk allocation ratio."
msgid "instance.create.end"
msgstr "instance.create.end"
@@ -635,6 +1017,21 @@ msgstr "instance.unshelve.end"
msgid "instance.update"
msgstr "instance.update"
msgid ""
"memory_mb_capacity: The amount of memory, take allocation ratio into "
"account, but do not include reserved."
msgstr ""
"memory_mb_capacity: The amount of memory, take allocation ratio into "
"account, but do not include reserved."
msgid ""
"memory_mb_reserved: The amount of memory a node has reserved for its own use."
msgstr ""
"memory_mb_reserved: The amount of memory a node has reserved for its own use."
msgid "memory_ratio: Memory allocation ratio."
msgstr "memory_ratio: Memory allocation ratio."
msgid "new:"
msgstr "new:"
@@ -649,3 +1046,16 @@ msgstr "service.delete"
msgid "service.update"
msgstr "service.update"
msgid ""
"vcpu_capacity: The amount of vcpu, take allocation ratio into account, but "
"do not include reserved."
msgstr ""
"vcpu_capacity: The amount of vcpu, take allocation ratio into account, but "
"do not include reserved."
msgid "vcpu_ratio: CPU allocation ratio."
msgstr "vcpu_ratio: CPU allocation ratio."
msgid "vcpu_reserved: The amount of cpu a node has reserved for its own use."
msgstr "vcpu_reserved: The amount of CPU a node has reserved for its own use."

View File

@@ -0,0 +1,6 @@
=============================
Victoria Series Release Notes
=============================
.. release-notes::
:branch: stable/victoria

View File

@@ -1,4 +1,4 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
@@ -7,28 +7,28 @@ jsonpatch>=1.21 # BSD
keystoneauth1>=3.4.0 # Apache-2.0
jsonschema>=3.2.0 # MIT
keystonemiddleware>=4.21.0 # Apache-2.0
-lxml>=4.1.1 # BSD
+lxml>=4.5.1 # BSD
croniter>=0.3.20 # MIT License
os-resource-classes>=0.4.0
oslo.concurrency>=3.26.0 # Apache-2.0
oslo.cache>=1.29.0 # Apache-2.0
-oslo.config>=5.2.0 # Apache-2.0
+oslo.config>=6.8.0 # Apache-2.0
oslo.context>=2.21.0 # Apache-2.0
-oslo.db>=4.35.0 # Apache-2.0
+oslo.db>=4.44.0 # Apache-2.0
oslo.i18n>=3.20.0 # Apache-2.0
oslo.log>=3.37.0 # Apache-2.0
oslo.messaging>=8.1.2 # Apache-2.0
-oslo.policy>=1.34.0 # Apache-2.0
+oslo.policy>=3.6.0 # Apache-2.0
oslo.reports>=1.27.0 # Apache-2.0
oslo.serialization>=2.25.0 # Apache-2.0
oslo.service>=1.30.0 # Apache-2.0
-oslo.upgradecheck>=0.1.0 # Apache-2.0
+oslo.upgradecheck>=1.3.0 # Apache-2.0
oslo.utils>=3.36.0 # Apache-2.0
oslo.versionedobjects>=1.32.0 # Apache-2.0
PasteDeploy>=1.5.2 # MIT
pbr>=3.1.1 # Apache-2.0
pecan>=1.3.2 # BSD
-PrettyTable<0.8,>=0.7.2 # BSD
+PrettyTable>=0.7.2 # BSD
gnocchiclient>=7.0.1 # Apache-2.0
python-ceilometerclient>=2.9.0 # Apache-2.0
python-cinderclient>=3.5.0 # Apache-2.0
@@ -41,9 +41,9 @@ python-openstackclient>=3.14.0 # Apache-2.0
python-ironicclient>=2.5.0 # Apache-2.0
SQLAlchemy>=1.2.5 # MIT
stevedore>=1.28.0 # Apache-2.0
-taskflow>=3.7.1 # Apache-2.0
+taskflow>=3.8.0 # Apache-2.0
WebOb>=1.8.5 # MIT
WSME>=0.9.2 # MIT
-networkx>=2.2 # BSD
+networkx>=2.4 # BSD
microversion_parse>=0.2.1 # Apache-2.0
futurist>=1.8.0 # Apache-2.0

tox.ini
View File

@@ -1,6 +1,6 @@
[tox]
minversion = 2.0
-envlist = py36,py37,pep8
+envlist = py38,pep8
skipsdist = True
ignore_basepython_conflict = True
@@ -26,7 +26,7 @@ passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
commands =
doc8 doc/source/ CONTRIBUTING.rst HACKING.rst README.rst
flake8
-bandit -r watcher -x watcher/tests/* -n5 -ll -s B320,B322
+bandit -r watcher -x watcher/tests/* -n5 -ll -s B320
[testenv:venv]
setenv = PYTHONHASHSEED=0
@@ -132,9 +132,3 @@ commands = sphinx-build -a -W -E -d releasenotes/build/doctrees --keep-going -b
[testenv:bandit]
deps = -r{toxinidir}/test-requirements.txt
commands = bandit -r watcher -x watcher/tests/* -n5 -ll -s B320
[testenv:lower-constraints]
deps =
-c{toxinidir}/lower-constraints.txt
-r{toxinidir}/test-requirements.txt
-r{toxinidir}/requirements.txt

View File

@@ -19,8 +19,6 @@ Service mechanism provides ability to monitor Watcher services state.
"""
import datetime
import six
from oslo_config import cfg
from oslo_log import log
from oslo_utils import timeutils
@@ -70,7 +68,7 @@ class Service(base.APIBase):
service = objects.Service.get(pecan.request.context, id)
last_heartbeat = (service.last_seen_up or service.updated_at or
service.created_at)
-if isinstance(last_heartbeat, six.string_types):
+if isinstance(last_heartbeat, str):
# NOTE(russellb) If this service came in over rpc via
# conductor, then the timestamp will be a string and needs to be
# converted back to a datetime.

View File

@@ -15,7 +15,6 @@
from oslo_serialization import jsonutils
from oslo_utils import strutils
import six
import wsme
from wsme import types as wtypes
@@ -132,7 +131,7 @@ class JsonType(wtypes.UserType):
def __str__(self):
# These are the json serializable native types
-return ' | '.join(map(str, (wtypes.text, six.integer_types, float,
+return ' | '.join(map(str, (wtypes.text, int, float,
BooleanType, list, dict, None)))
@staticmethod

View File

@@ -14,6 +14,7 @@
import sys
from oslo_upgradecheck import common_checks
from oslo_upgradecheck import upgradecheck
from watcher._i18n import _
@@ -43,6 +44,10 @@ class Checks(upgradecheck.UpgradeCommands):
_upgrade_checks = (
# Added in Train.
(_('Minimum Nova API Version'), _minimum_nova_api_version),
# Added in Wallaby.
(_("Policy File JSON to YAML Migration"),
(common_checks.check_policy_json, {'conf': CONF})),
)

View File
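The diff above registers a new "Policy File JSON to YAML Migration" check alongside the existing Train-era one. A minimal standalone sketch of the oslo.upgradecheck pattern it follows (stubbed, with no oslo dependency; the result codes and the check's logic here are illustrative assumptions, not Watcher's actual implementation): each named check returns a result code, and the worst result across all checks decides the outcome of `watcher-status upgrade check`.

```python
# Illustrative result codes, mirroring the success/warning/failure levels
# oslo.upgradecheck reports (values here are assumptions for the sketch).
SUCCESS, WARNING, FAILURE = 0, 1, 2


def check_policy_json(policy_file='policy.yaml'):
    # Warn when the deprecated JSON-formatted policy file is still configured.
    return WARNING if policy_file.endswith('.json') else SUCCESS


# Checks are registered as (name, callable) pairs, as in the diff above.
_upgrade_checks = (
    ('Policy File JSON to YAML Migration', check_policy_json),
)

# The worst result across all checks becomes the command's overall status.
worst = max(func() for _name, func in _upgrade_checks)
print(worst)  # 0: all checks passed
```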

@@ -18,6 +18,7 @@
import sys
from oslo_config import cfg
from oslo_policy import opts
from oslo_policy import policy
from watcher.common import exception
@@ -26,6 +27,12 @@ from watcher.common import policies
_ENFORCER = None
CONF = cfg.CONF
# TODO(gmann): Remove setting the default value of config policy_file
# once oslo_policy change the default value to 'policy.yaml'.
# https://github.com/openstack/oslo.policy/blob/a626ad12fe5a3abd49d70e3e5b95589d279ab578/oslo_policy/opts.py#L49
DEFAULT_POLICY_FILE = 'policy.yaml'
opts.set_defaults(CONF, DEFAULT_POLICY_FILE)
# we can get a policy enforcer by this init.
# oslo policy support change policy rule dynamically.

View File

@@ -121,22 +121,40 @@ class RequestContextSerializer(messaging.Serializer):
def get_client(target, version_cap=None, serializer=None):
assert TRANSPORT is not None
serializer = RequestContextSerializer(serializer)
-return messaging.RPCClient(TRANSPORT,
-                           target,
-                           version_cap=version_cap,
-                           serializer=serializer)
+return messaging.RPCClient(
+    TRANSPORT,
+    target,
+    version_cap=version_cap,
+    serializer=serializer
+)
def get_server(target, endpoints, serializer=None):
assert TRANSPORT is not None
access_policy = dispatcher.DefaultRPCAccessPolicy
serializer = RequestContextSerializer(serializer)
-return messaging.get_rpc_server(TRANSPORT,
-                                target,
-                                endpoints,
-                                executor='eventlet',
-                                serializer=serializer,
-                                access_policy=access_policy)
+return messaging.get_rpc_server(
+    TRANSPORT,
+    target,
+    endpoints,
+    executor='eventlet',
+    serializer=serializer,
+    access_policy=access_policy
+)
def get_notification_listener(targets, endpoints, serializer=None, pool=None):
assert NOTIFICATION_TRANSPORT is not None
serializer = RequestContextSerializer(serializer)
return messaging.get_notification_listener(
NOTIFICATION_TRANSPORT,
targets,
endpoints,
allow_requeue=False,
executor='eventlet',
pool=pool,
serializer=serializer
)
def get_notifier(publisher_id):

View File

@@ -21,14 +21,12 @@ from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import _options
from oslo_log import log
-import oslo_messaging as om
+import oslo_messaging as messaging
from oslo_reports import guru_meditation_report as gmr
from oslo_reports import opts as gmr_opts
from oslo_service import service
from oslo_service import wsgi
from oslo_messaging.rpc import dispatcher
from watcher._i18n import _
from watcher.api import app
from watcher.common import config
@@ -183,11 +181,6 @@ class Service(service.ServiceBase):
]
self.notification_endpoints = self.manager.notification_endpoints
self.serializer = rpc.RequestContextSerializer(
base.WatcherObjectSerializer())
self._transport = None
self._notification_transport = None
self._conductor_client = None
self.conductor_topic_handler = None
@@ -201,27 +194,17 @@ class Service(service.ServiceBase):
self.notification_topics, self.notification_endpoints
)
@property
def transport(self):
if self._transport is None:
self._transport = om.get_rpc_transport(CONF)
return self._transport
@property
def notification_transport(self):
if self._notification_transport is None:
self._notification_transport = om.get_notification_transport(CONF)
return self._notification_transport
@property
def conductor_client(self):
if self._conductor_client is None:
-target = om.Target(
+target = messaging.Target(
topic=self.conductor_topic,
version=self.API_VERSION,
)
-self._conductor_client = om.RPCClient(
-    self.transport, target, serializer=self.serializer)
+self._conductor_client = rpc.get_client(
+    target,
+    serializer=base.WatcherObjectSerializer()
+)
return self._conductor_client
@conductor_client.setter
@@ -229,21 +212,18 @@ class Service(service.ServiceBase):
self.conductor_client = c
def build_topic_handler(self, topic_name, endpoints=()):
-access_policy = dispatcher.DefaultRPCAccessPolicy
-serializer = rpc.RequestContextSerializer(rpc.JsonPayloadSerializer())
-target = om.Target(
+target = messaging.Target(
topic=topic_name,
# For compatibility, we can override it with 'host' opt
server=CONF.host or socket.gethostname(),
version=self.api_version,
)
-return om.get_rpc_server(
-    self.transport, target, endpoints,
-    executor='eventlet', serializer=serializer,
-    access_policy=access_policy)
+return rpc.get_server(
+    target, endpoints,
+    serializer=rpc.JsonPayloadSerializer()
+)
def build_notification_handler(self, topic_names, endpoints=()):
serializer = rpc.RequestContextSerializer(rpc.JsonPayloadSerializer())
targets = []
for topic in topic_names:
kwargs = {}
@@ -251,11 +231,13 @@ class Service(service.ServiceBase):
exchange, topic = topic.split('.')
kwargs['exchange'] = exchange
kwargs['topic'] = topic
-targets.append(om.Target(**kwargs))
+targets.append(messaging.Target(**kwargs))
-return om.get_notification_listener(
-    self.notification_transport, targets, endpoints,
-    executor='eventlet', serializer=serializer,
-    allow_requeue=False, pool=CONF.host)
+return rpc.get_notification_listener(
+    targets, endpoints,
+    serializer=rpc.JsonPayloadSerializer(),
+    pool=CONF.host
+)
def start(self):
LOG.debug("Connecting to '%s'", CONF.transport_url)

View File

@@ -18,7 +18,6 @@ SQLAlchemy models for watcher service
from oslo_db.sqlalchemy import models
from oslo_serialization import jsonutils
import six.moves.urllib.parse as urlparse
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import DateTime
@@ -33,7 +32,7 @@ from sqlalchemy import String
from sqlalchemy import Text
from sqlalchemy.types import TypeDecorator, TEXT
from sqlalchemy import UniqueConstraint
import urllib.parse as urlparse
from watcher import conf
CONF = conf.CONF

View File

@@ -19,6 +19,8 @@ import time
from oslo_config import cfg
from oslo_log import log
from watcher.common import exception
CONF = cfg.CONF
LOG = log.getLogger(__name__)
@@ -54,6 +56,13 @@ class DataSourceBase(object):
instance_root_disk_size=None,
)
def _get_meter(self, meter_name):
"""Retrieve the meter from the metric map or raise error"""
meter = self.METRIC_MAP.get(meter_name)
if meter is None:
raise exception.MetricNotAvailable(metric=meter_name)
return meter
def query_retry(self, f, *args, **kwargs):
"""Attempts to retrieve metrics from the external service
@@ -122,6 +131,30 @@ class DataSourceBase(object):
pass
@abc.abstractmethod
def statistic_series(self, resource=None, resource_type=None,
meter_name=None, start_time=None, end_time=None,
granularity=300):
"""Retrieves metrics based on the specified parameters over a period
:param resource: Resource object as defined in watcher models such as
ComputeNode and Instance
:param resource_type: Indicates which type of object is supplied
to the resource parameter
:param meter_name: The desired metric to retrieve as key from
METRIC_MAP
:param start_time: The datetime to start retrieving metrics for
:type start_time: datetime.datetime
:param end_time: The datetime to limit the retrieval of metrics to
:type end_time: datetime.datetime
:param granularity: Interval between samples in measurements in
seconds
:return: Dictionary of key value pairs with timestamps and metric
values
"""
pass
@abc.abstractmethod
def get_host_cpu_usage(self, resource, period, aggregate,
granularity=None):

View File
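The new `_get_meter` helper above centralises the METRIC_MAP lookup that each datasource helper previously duplicated (as the ceilometer, gnocchi and monasca diffs below show). A standalone sketch of the pattern; the class names and metric entries here are illustrative, not Watcher's actual ones:

```python
class MetricNotAvailable(Exception):
    """Raised when a datasource has no mapping for the requested metric."""

    def __init__(self, metric):
        super().__init__("Metric %s is not available" % metric)
        self.metric = metric


class FakeDataSource:
    # Maps Watcher's generic metric names to backend-specific meter names
    # (entries are made up for this sketch).
    METRIC_MAP = {
        'host_cpu_usage': 'compute.node.cpu.percent',
        'instance_ram_usage': 'memory.resident',
    }

    def _get_meter(self, meter_name):
        """Retrieve the meter from the metric map or raise an error."""
        meter = self.METRIC_MAP.get(meter_name)
        if meter is None:
            raise MetricNotAvailable(metric=meter_name)
        return meter


ds = FakeDataSource()
print(ds._get_meter('host_cpu_usage'))  # compute.node.cpu.percent
```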

@@ -161,9 +161,7 @@ class CeilometerHelper(base.DataSourceBase):
end_time = datetime.datetime.utcnow()
start_time = end_time - datetime.timedelta(seconds=int(period))
-meter = self.METRIC_MAP.get(meter_name)
+meter = self._get_meter(meter_name)
if meter is None:
raise exception.MetricNotAvailable(metric=meter_name)
if aggregate == 'mean':
aggregate = 'avg'
@@ -194,6 +192,12 @@ class CeilometerHelper(base.DataSourceBase):
item_value *= 10
return item_value
def statistic_series(self, resource=None, resource_type=None,
meter_name=None, start_time=None, end_time=None,
granularity=300):
raise NotImplementedError(
_('Ceilometer helper does not support statistic series method'))
def get_host_cpu_usage(self, resource, period,
aggregate, granularity=None):

View File

@@ -23,7 +23,6 @@ from oslo_config import cfg
from oslo_log import log
from watcher.common import clients
from watcher.common import exception
from watcher.decision_engine.datasources import base
CONF = cfg.CONF
@@ -72,9 +71,7 @@ class GnocchiHelper(base.DataSourceBase):
stop_time = datetime.utcnow()
start_time = stop_time - timedelta(seconds=(int(period)))
-meter = self.METRIC_MAP.get(meter_name)
+meter = self._get_meter(meter_name)
if meter is None:
raise exception.MetricNotAvailable(metric=meter_name)
if aggregate == 'count':
aggregate = 'mean'
@@ -123,6 +120,52 @@ class GnocchiHelper(base.DataSourceBase):
return return_value
def statistic_series(self, resource=None, resource_type=None,
meter_name=None, start_time=None, end_time=None,
granularity=300):
meter = self._get_meter(meter_name)
resource_id = resource.uuid
if resource_type == 'compute_node':
resource_id = "%s_%s" % (resource.hostname, resource.hostname)
kwargs = dict(query={"=": {"original_resource_id": resource_id}},
limit=1)
resources = self.query_retry(
f=self.gnocchi.resource.search, **kwargs)
if not resources:
LOG.warning("The {0} resource {1} could not be "
"found".format(self.NAME, resource_id))
return
resource_id = resources[0]['id']
raw_kwargs = dict(
metric=meter,
start=start_time,
stop=end_time,
resource_id=resource_id,
granularity=granularity,
)
kwargs = {k: v for k, v in raw_kwargs.items() if k and v}
statistics = self.query_retry(
f=self.gnocchi.metric.get_measures, **kwargs)
return_value = None
if statistics:
# measure has structure [time, granularity, value]
if meter_name == 'host_airflow':
# Airflow from hardware.ipmi.node.airflow is reported as
# 1/10 th of actual CFM
return_value = {s[0]: s[2]*10 for s in statistics}
else:
return_value = {s[0]: s[2] for s in statistics}
return return_value
def get_host_cpu_usage(self, resource, period, aggregate,
granularity=300):

View File
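The `statistic_series` implementations above build a full `raw_kwargs` dict and then filter out the entries left unset before calling the client, so the backend library can apply its own defaults. A minimal sketch of that filtering step (function name and arguments are illustrative):

```python
def build_query(metric, start=None, stop=None, granularity=None):
    # Collect every candidate argument, then keep only the ones that were
    # actually supplied, mirroring the kwargs filter in the diff above.
    raw_kwargs = dict(metric=metric, start=start, stop=stop,
                      granularity=granularity)
    return {k: v for k, v in raw_kwargs.items() if k and v}


print(build_query('cpu_util', start='2021-01-01T00:00:00'))
# {'metric': 'cpu_util', 'start': '2021-01-01T00:00:00'}
```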

@@ -21,6 +21,7 @@ from urllib import parse as urlparse
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import clients
from watcher.common import exception
from watcher.decision_engine.datasources import base
@@ -188,6 +189,12 @@ class GrafanaHelper(base.DataSourceBase):
return result
def statistic_series(self, resource=None, resource_type=None,
meter_name=None, start_time=None, end_time=None,
granularity=300):
raise NotImplementedError(
_('Grafana helper does not support statistic series method'))
def get_host_cpu_usage(self, resource, period=300, def get_host_cpu_usage(self, resource, period=300,
aggregate="mean", granularity=None): aggregate="mean", granularity=None):
return self.statistic_aggregation( return self.statistic_aggregation(


@@ -21,7 +21,6 @@ import datetime
 from monascaclient import exc

 from watcher.common import clients
-from watcher.common import exception
 from watcher.decision_engine.datasources import base
@@ -90,9 +89,7 @@ class MonascaHelper(base.DataSourceBase):
         stop_time = datetime.datetime.utcnow()
         start_time = stop_time - datetime.timedelta(seconds=(int(period)))

-        meter = self.METRIC_MAP.get(meter_name)
-        if meter is None:
-            raise exception.MetricNotAvailable(metric=meter_name)
+        meter = self._get_meter(meter_name)

         if aggregate == 'mean':
             aggregate = 'avg'
@@ -121,6 +118,34 @@ class MonascaHelper(base.DataSourceBase):
         return cpu_usage

+    def statistic_series(self, resource=None, resource_type=None,
+                         meter_name=None, start_time=None, end_time=None,
+                         granularity=300):
+
+        meter = self._get_meter(meter_name)
+
+        raw_kwargs = dict(
+            name=meter,
+            start_time=start_time.isoformat(),
+            end_time=end_time.isoformat(),
+            dimensions={'hostname': resource.uuid},
+            statistics='avg',
+            group_by='*',
+        )
+
+        kwargs = {k: v for k, v in raw_kwargs.items() if k and v}
+
+        statistics = self.query_retry(
+            f=self.monasca.metrics.list_statistics, **kwargs)
+
+        result = {}
+        for stat in statistics:
+            v_index = stat['columns'].index('avg')
+            t_index = stat['columns'].index('timestamp')
+            result.update({r[t_index]: r[v_index] for r in stat['statistics']})
+
+        return result
+
     def get_host_cpu_usage(self, resource, period,
                            aggregate, granularity=None):
         return self.statistic_aggregation(
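Monasca's `list_statistics` returns rows plus a `columns` header naming the fields, so the new `statistic_series` looks up the timestamp and average columns by name rather than by fixed position before flattening everything into one dict. A standalone sketch of that column lookup, using a payload shaped like Monasca's output (simplified, not the real client):

```python
def flatten_statistics(stats):
    """Collapse Monasca-style statistics into one {timestamp: avg} dict.

    Each entry carries a 'columns' header that names the fields of its
    'statistics' rows, so the indices are resolved by name instead of
    assuming a fixed column order."""
    result = {}
    for stat in stats:
        t_index = stat['columns'].index('timestamp')
        v_index = stat['columns'].index('avg')
        result.update({row[t_index]: row[v_index]
                       for row in stat['statistics']})
    return result


payload = [{'columns': ['timestamp', 'avg'],
            'statistics': [['2016-07-29T12:45:00Z', 0.0],
                           ['2016-07-29T12:50:00Z', 0.9]]}]
series = flatten_statistics(payload)
```

Resolving indices per entry keeps the code correct even if different metrics report their columns in a different order.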


@@ -631,7 +631,7 @@ class BaremetalModelRoot(nx.DiGraph, base.Model):
             super(BaremetalModelRoot, self).remove_node(node.uuid)
         except nx.NetworkXError as exc:
             LOG.exception(exc)
-            raise exception.IronicNodeNotFound(name=node.uuid)
+            raise exception.IronicNodeNotFound(uuid=node.uuid)

     @lockutils.synchronized("baremetal_model")
     def get_all_ironic_nodes(self):
@@ -643,7 +643,7 @@ class BaremetalModelRoot(nx.DiGraph, base.Model):
         try:
             return self._get_by_uuid(uuid)
         except exception.BaremetalResourceNotFound:
-            raise exception.IronicNodeNotFound(name=uuid)
+            raise exception.IronicNodeNotFound(uuid=uuid)

     def _get_by_uuid(self, uuid):
         try:
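The `name=` → `uuid=` rename matters because oslo-style exceptions interpolate their message template from the keyword arguments; a kwarg that doesn't match the `%(uuid)s` placeholder makes interpolation fail instead of producing a usable message. A generic sketch of that pattern (a simplified stand-in, not Watcher's actual exception class):

```python
class NodeNotFound(Exception):
    # Message template interpolated from keyword arguments, in the style
    # of oslo/watcher exception classes (simplified sketch).
    msg_fmt = "The ironic node %(uuid)s could not be found"

    def __init__(self, **kwargs):
        super().__init__(self.msg_fmt % kwargs)


# Correct keyword: the placeholder is filled.
try:
    raise NodeNotFound(uuid="abc-123")
except NodeNotFound as exc:
    message = str(exc)

# Wrong keyword (the pre-fix name=): interpolation raises KeyError('uuid')
# instead of building the message.
try:
    NodeNotFound(name="abc-123")
    wrong_kwarg_failed = False
except KeyError:
    wrong_kwarg_failed = True
```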


@@ -18,8 +18,6 @@
 #
 from oslo_log import log

-import six
-
 from watcher._i18n import _
 from watcher.common import exception
 from watcher.decision_engine.model import element
@@ -103,7 +101,7 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):
     def get_instance_state_str(self, instance):
         """Get instance state in string format"""
-        if isinstance(instance.state, six.string_types):
+        if isinstance(instance.state, str):
             return instance.state
         elif isinstance(instance.state, element.InstanceState):
             return instance.state.value
@@ -116,7 +114,7 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):
     def get_node_status_str(self, node):
         """Get node status in string format"""
-        if isinstance(node.status, six.string_types):
+        if isinstance(node.status, str):
             return node.status
         elif isinstance(node.status, element.ServiceState):
             return node.status.value
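On Python 3, `six.string_types` is just `(str,)`, so swapping in `isinstance(..., str)` is behavior-preserving. A minimal sketch of the state-to-string pattern these two methods use — the enum here is illustrative, not Watcher's `element` classes:

```python
import enum


class InstanceState(enum.Enum):
    # Illustrative stand-in for watcher's element.InstanceState.
    ACTIVE = 'active'
    STOPPED = 'stopped'


def state_str(state):
    """Return the state as a plain string, accepting either a raw string
    or an enum member (mirrors get_instance_state_str)."""
    if isinstance(state, str):
        return state
    elif isinstance(state, InstanceState):
        return state.value
    raise ValueError('Unexpected instance state type')
```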


@@ -350,7 +350,7 @@ class ZoneMigration(base.ZoneMigrationBaseStrategy):
     def is_in_use(self, volume):
         return getattr(volume, 'status') == IN_USE

-    def instances_no_attached(instances):
+    def instances_no_attached(self, instances):
         return [i for i in instances
                 if not getattr(i, "os-extended-volumes:volumes_attached")]
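The fix adds the missing `self` parameter: without it, calling the method through an instance binds the instance itself to `instances` and the real argument has nowhere to go, so the call raises `TypeError`. A minimal reproduction with a generic class (not the actual strategy):

```python
class Broken:
    def instances_no_attached(instances):  # missing self
        return list(instances)


class Fixed:
    def instances_no_attached(self, instances):
        return list(instances)


# The broken form fails as soon as it is called on an instance:
# the method declares 1 positional argument but receives 2.
try:
    Broken().instances_no_attached([1, 2])
    broken_raises = False
except TypeError:
    broken_raises = True

fixed_result = Fixed().instances_no_attached([1, 2])
```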


@@ -128,22 +128,20 @@ def check_assert_called_once_with(logical_line, filename):
 @flake8ext
 def check_python3_xrange(logical_line):
     if re.search(r"\bxrange\s*\(", logical_line):
-        yield(0, "N325: Do not use xrange. Use range, or six.moves.range for "
-              "large loops.")
+        yield(0, "N325: Do not use xrange. Use range for large loops.")


 @flake8ext
 def check_no_basestring(logical_line):
     if re.search(r"\bbasestring\b", logical_line):
-        msg = ("N326: basestring is not Python3-compatible, use "
-               "six.string_types instead.")
+        msg = ("N326: basestring is not Python3-compatible, use str instead.")
         yield(0, msg)


 @flake8ext
 def check_python3_no_iteritems(logical_line):
     if re.search(r".*\.iteritems\(\)", logical_line):
-        msg = ("N327: Use six.iteritems() instead of dict.iteritems().")
+        msg = ("N327: Use dict.items() instead of dict.iteritems().")
         yield(0, msg)
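Each of these hacking checks is a generator that yields an `(offset, message)` tuple whenever a banned pattern appears in the logical line; flake8 collects the yielded tuples as violations. A standalone sketch of the same shape, stripped of the `@flake8ext` registration:

```python
import re


def check_python3_xrange(logical_line):
    """Flag any use of xrange(), which does not exist on Python 3.

    Yields (offset, message) tuples in the flake8 plugin convention;
    yielding nothing means the line is clean."""
    if re.search(r"\bxrange\s*\(", logical_line):
        yield (0, "N325: Do not use xrange. Use range for large loops.")


hits = list(check_python3_xrange("for i in xrange(10):"))
clean = list(check_python3_xrange("for i in range(10):"))
```

The `\b` word boundary keeps identifiers like `my_xrange` from triggering a false positive.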


@@ -1,14 +1,15 @@
 # Andi Chandler <andi@gowling.com>, 2017. #zanata
 # Andi Chandler <andi@gowling.com>, 2018. #zanata
+# Andi Chandler <andi@gowling.com>, 2020. #zanata
 msgid ""
 msgstr ""
 "Project-Id-Version: watcher VERSION\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2020-04-26 02:09+0000\n"
+"POT-Creation-Date: 2020-10-27 04:14+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"PO-Revision-Date: 2018-11-07 06:14+0000\n"
+"PO-Revision-Date: 2020-10-28 11:02+0000\n"
 "Last-Translator: Andi Chandler <andi@gowling.com>\n"
 "Language-Team: English (United Kingdom)\n"
 "Language: en_GB\n"
@@ -187,10 +188,18 @@ msgstr "Audit Templates"
 msgid "Audit parameter %(parameter)s are not allowed"
 msgstr "Audit parameter %(parameter)s are not allowed"

+#, python-format
+msgid "Audit state %(state)s is disallowed."
+msgstr "Audit state %(state)s is disallowed."
+
 #, python-format
 msgid "Audit type %(audit_type)s could not be found"
 msgstr "Audit type %(audit_type)s could not be found"

+#, python-format
+msgid "Audit type %(audit_type)s is disallowed."
+msgstr "Audit type %(audit_type)s is disallowed."
+
 #, python-format
 msgid "AuditTemplate %(audit_template)s could not be found"
 msgstr "AuditTemplate %(audit_template)s could not be found"
@@ -243,6 +252,9 @@ msgstr "Cannot overwrite UUID for an existing efficacy indicator."
 msgid "Cannot remove 'goal' attribute from an audit template"
 msgstr "Cannot remove 'goal' attribute from an audit template"

+msgid "Ceilometer helper does not support statistic series method"
+msgstr "Ceilometer helper does not support statistic series method"
+
 msgid "Cluster Maintaining"
 msgstr "Cluster Maintaining"
@@ -369,6 +381,9 @@ msgstr "Goal %(goal)s is invalid"
 msgid "Goals"
 msgstr "Goals"

+msgid "Grafana helper does not support statistic series method"
+msgstr "Grafana helper does not support statistic series method"
+
 msgid "Hardware Maintenance"
 msgstr "Hardware Maintenance"
@@ -434,10 +449,17 @@ msgstr "Limit should be positive"
 msgid "Maximum time since last check-in for up service."
 msgstr "Maximum time since last check-in for up service."

+#, python-format
+msgid "Metric: %(metric)s not available"
+msgstr "Metric: %(metric)s not available"
+
 #, python-format
 msgid "Migration of type '%(migration_type)s' is not supported."
 msgstr "Migration of type '%(migration_type)s' is not supported."

+msgid "Minimum Nova API Version"
+msgstr "Minimum Nova API Version"
+
 msgid ""
 "Name of this node. This can be an opaque identifier. It is not necessarily a "
 "hostname, FQDN, or IP address. However, the node name must be valid within "
@@ -451,10 +473,16 @@ msgstr ""
 msgid "No %(metric)s metric for %(host)s found."
 msgstr "No %(metric)s metric for %(host)s found."

+msgid "No datasources available"
+msgstr "No datasources available"
+
 #, python-format
 msgid "No strategy could be found to achieve the '%(goal)s' goal."
 msgstr "No strategy could be found to achieve the '%(goal)s' goal."

+msgid "Node Resource Consolidation strategy"
+msgstr "Node Resource Consolidation strategy"
+
 msgid "Noisy Neighbor"
 msgstr "Noisy Neighbour"
@@ -606,6 +634,10 @@ msgstr "Strategy %(strategy)s could not be found"
 msgid "Strategy %(strategy)s is invalid"
 msgstr "Strategy %(strategy)s is invalid"

+#, python-format
+msgid "The %(data_model_type)s data model could not be found"
+msgstr "The %(data_model_type)s data model could not be found"
+
 #, python-format
 msgid "The %(name)s %(id)s could not be found"
 msgstr "The %(name)s %(id)s could not be found"
@@ -675,6 +707,13 @@ msgstr "The instance '%(name)s' could not be found"
 msgid "The ironic node %(uuid)s could not be found"
 msgstr "The Ironic node %(uuid)s could not be found"

+#, python-format
+msgid "The mapped compute node for instance '%(uuid)s' could not be found."
+msgstr "The mapped compute node for instance '%(uuid)s' could not be found."
+
+msgid "The node status is not defined"
+msgstr "The node status is not defined"
+
 msgid "The number of VM migrations to be performed."
 msgstr "The number of VM migrations to be performed."
@@ -738,6 +777,10 @@ msgstr "The total number of audited instances in strategy."
 msgid "The total number of enabled compute nodes."
 msgstr "The total number of enabled compute nodes."

+#, python-format
+msgid "The value %(value)s for parameter %(parameter)s is invalid"
+msgstr "The value %(value)s for parameter %(parameter)s is invalid"
+
 msgid "The value of original standard deviation."
 msgstr "The value of original standard deviation."


@@ -14,14 +14,11 @@
 """Tests for the Pecan API hooks."""

-from unittest import mock
+from http import client as http_client

 from oslo_config import cfg
 import oslo_messaging as messaging
 from oslo_serialization import jsonutils
-import six
-from six.moves import http_client
+from unittest import mock

 from watcher.api.controllers import root
 from watcher.api import hooks
 from watcher.common import context
@@ -144,7 +141,7 @@ class TestNoExceptionTracebackHook(base.FunctionalTest):
         # we don't care about this garbage.
         expected_msg = ("Remote error: %s %s"
                         % (test_exc_type, self.MSG_WITHOUT_TRACE) +
-                        ("\n[u'" if six.PY2 else "\n['"))
+                        "\n['")
         actual_msg = jsonutils.loads(
             response.json['error_message'])['faultstring']
         self.assertEqual(expected_msg, actual_msg)


@@ -40,7 +40,7 @@ class TestApplierAPI(base.TestCase):
             'check_api_version',
             api_version=rpcapi.ApplierAPI().API_VERSION)

-    def test_execute_audit_without_error(self):
+    def test_execute_action_plan_without_error(self):
         with mock.patch.object(om.RPCClient, 'cast') as mock_cast:
             action_plan_uuid = utils.generate_uuid()
             self.api.launch_action_plan(self.context, action_plan_uuid)


@@ -83,4 +83,4 @@ class TestCancelOngoingActionPlans(db_base.DbTestCase):
         m_action_list.assert_called()
         m_plan_save.assert_called()
         m_action_save.assert_called()
-        self.assertEqual(self.action.state, objects.audit.State.CANCELLED)
+        self.assertEqual(self.action.state, objects.action.State.CANCELLED)


@@ -20,7 +20,6 @@ from unittest import mock
 from oslo_config import cfg
 import oslo_messaging as om

-from watcher.common import rpc
 from watcher.common import service
 from watcher import objects
 from watcher.tests import base
@@ -102,8 +101,6 @@ class TestService(base.TestCase):
     def test_init_service(self):
         dummy_service = service.Service(DummyManager)
-        self.assertIsInstance(dummy_service.serializer,
-                              rpc.RequestContextSerializer)
         self.assertIsInstance(
             dummy_service.conductor_topic_handler,
             om.rpc.server.RPCServer)


@@ -13,7 +13,7 @@
 # implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+from datetime import datetime
 from unittest import mock

 from oslo_config import cfg
@@ -59,6 +59,35 @@ class TestGnocchiHelper(base.BaseTestCase):
         )

         self.assertEqual(expected_result, result)

+    def test_gnocchi_statistic_series(self, mock_gnocchi):
+        gnocchi = mock.MagicMock()
+
+        expected_result = {
+            "2017-02-02T09:00:00.000000": 5.5,
+            "2017-02-02T09:03:60.000000": 5.8
+        }
+
+        expected_measures = [
+            ["2017-02-02T09:00:00.000000", 360, 5.5],
+            ["2017-02-02T09:03:60.000000", 360, 5.8]
+        ]
+
+        gnocchi.metric.get_measures.return_value = expected_measures
+        mock_gnocchi.return_value = gnocchi
+
+        start = datetime(year=2017, month=2, day=2, hour=9, minute=0)
+        end = datetime(year=2017, month=2, day=2, hour=9, minute=4)
+
+        helper = gnocchi_helper.GnocchiHelper()
+        result = helper.statistic_series(
+            resource=mock.Mock(id='16a86790-327a-45f9-bc82-45839f062fdc'),
+            resource_type='instance',
+            meter_name='instance_cpu_usage',
+            start_time=start,
+            end_time=end,
+            granularity=360,
+        )
+        self.assertEqual(expected_result, result)
+
     def test_statistic_aggregation_metric_unavailable(self, mock_gnocchi):
         helper = gnocchi_helper.GnocchiHelper()


@@ -13,7 +13,7 @@
 # implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+from datetime import datetime
 from unittest import mock

 from oslo_config import cfg
@@ -67,6 +67,43 @@ class TestMonascaHelper(base.BaseTestCase):
         )

         self.assertEqual(0.6, result)

+    def test_monasca_statistic_series(self, mock_monasca):
+        monasca = mock.MagicMock()
+        expected_stat = [{
+            'columns': ['timestamp', 'avg'],
+            'dimensions': {
+                'hostname': 'rdev-indeedsrv001',
+                'service': 'monasca'},
+            'id': '0',
+            'name': 'cpu.percent',
+            'statistics': [
+                ['2016-07-29T12:45:00Z', 0.0],
+                ['2016-07-29T12:50:00Z', 0.9],
+                ['2016-07-29T12:55:00Z', 0.9]]}]
+
+        expected_result = {
+            '2016-07-29T12:45:00Z': 0.0,
+            '2016-07-29T12:50:00Z': 0.9,
+            '2016-07-29T12:55:00Z': 0.9,
+        }
+
+        monasca.metrics.list_statistics.return_value = expected_stat
+        mock_monasca.return_value = monasca
+
+        start = datetime(year=2016, month=7, day=29, hour=12, minute=45)
+        end = datetime(year=2016, month=7, day=29, hour=12, minute=55)
+
+        helper = monasca_helper.MonascaHelper()
+        result = helper.statistic_series(
+            resource=mock.Mock(id='NODE_UUID'),
+            resource_type='compute_node',
+            meter_name='host_cpu_usage',
+            start_time=start,
+            end_time=end,
+            granularity=300,
+        )
+        self.assertEqual(expected_result, result)
+
     def test_statistic_aggregation_metric_unavailable(self, mock_monasca):
         helper = monasca_helper.MonascaHelper()


@@ -30,7 +30,7 @@ class PolicyFixture(fixtures.Fixture):
     def _setUp(self):
         self.policy_dir = self.useFixture(fixtures.TempDir())
         self.policy_file_name = os.path.join(self.policy_dir.path,
-                                             'policy.json')
+                                             'policy.yaml')
         with open(self.policy_file_name, 'w') as policy_file:
            policy_file.write(fake_policy.policy_data)
         policy_opts.set_defaults(CONF)
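In line with the Wallaby goal of migrating policy files from JSON to YAML, the fixture now writes its fake policy data to `policy.yaml` instead of the deprecated `policy.json`. A minimal sketch of the same file handling using only the standard library (the fixture machinery and the policy content are simplified placeholders):

```python
import os
import tempfile

# Illustrative policy data; the real fixture writes fake_policy.policy_data.
policy_data = '"admin_api": "role:admin"\n'

with tempfile.TemporaryDirectory() as policy_dir:
    # Build the path under a throwaway directory, mirroring the fixture's
    # use of fixtures.TempDir(), and use the new default file name.
    policy_file_name = os.path.join(policy_dir, 'policy.yaml')
    with open(policy_file_name, 'w') as policy_file:
        policy_file.write(policy_data)

    # Read it back to confirm the file round-trips intact.
    with open(policy_file_name) as f:
        round_trip = f.read()
```

Using a temporary directory keeps each test's policy file isolated, so tests cannot leak policy state into one another.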