Compare commits


18 Commits

Author SHA1 Message Date
OpenDev Sysadmins
5307f5a80e OpenDev Migration Patch
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:

http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
2019-04-19 19:40:46 +00:00
Ian Wienand
b5467a2a1f Replace openstack.org git:// URLs with https://
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.

This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.

This update should result in no functional change.
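A mechanical rewrite of this kind can be sketched in a few lines (the actual script used for the change is not part of the commit; the function below is an illustration of the pattern):

```python
import re

def rewrite_git_urls(text):
    """Replace openstack.org git:// URLs with their https:// equivalents."""
    return re.sub(r'git://git\.openstack\.org',
                  'https://git.openstack.org', text)

# Applied over every tracked file, this yields hunks like the devstack
# local.conf changes further down in this change set.
```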

For more information see the thread at

 http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html

Change-Id: I886b29ba8a1814cf876e70b5b20504a221d32fa1
2019-03-24 20:36:26 +00:00
Alexander Chadin
83411ec89f Fix stop_watcher function
Apache should be reloaded after watcher-api is disabled.

Change-Id: Ifee0e7701849348630568aa36b3f3c4c62d3382e
2018-12-10 13:55:44 +00:00
licanwei
08750536e7 optimize get_instances_by_node
We can set the host field in search_opts.
Refer to:
https://developer.openstack.org/api-ref/compute/?expanded=list-servers-detail#list-servers
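In novaclient terms the optimization looks roughly like this (the exact call site in watcher is not shown here; the signature is assumed from the compute API reference linked above):

```python
def get_instances_by_node(nova, node_name):
    # Before: list every server with all_tenants=True, then filter on the
    # host attribute in Python. After: pass host in search_opts so the
    # compute API does the filtering server-side.
    return nova.servers.list(search_opts={'all_tenants': True,
                                          'host': node_name})
```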

Change-Id: I36b27167d7223f3bf6bb05995210af41ad01fc6d
2018-11-06 13:39:14 +00:00
Tatiana Kholkina
9f7ccfe408 Use limit -1 for nova servers list
By default nova has a limit for returned items in a single response [1].
We should pass limit=-1 to get all items.

[1] https://docs.openstack.org/nova/rocky/configuration/config.html
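A sketch of the fix (the novaclient signature here is an assumption): nova truncates a single list response at a configurable maximum (max_limit, 1000 by default), so without limit=-1 large deployments silently get a partial server list.

```python
def list_all_servers(nova):
    # nova caps one list response at max_limit (1000 by default);
    # limit=-1 asks the client to page through everything.
    return nova.servers.list(search_opts={'all_tenants': True}, limit=-1)
```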

Change-Id: I1fabd909c4c0356ef5fcb7c51718fb4513e6befa
2018-10-16 08:37:45 +00:00
Tatiana Kholkina
fb2619e538 Provide region name while initialize clients
Add new option 'region_name' to config for each client section.
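In watcher.conf this would look like the following (section names follow watcher's existing per-client layout; RegionOne is a placeholder):

```ini
[nova_client]
api_version = 2.1
region_name = RegionOne

[cinder_client]
region_name = RegionOne
```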

Change-Id: Ifad8908852f4be69dd294a4c4ab28d2e1df265e8
Closes-Bug: #1787937
(cherry picked from commit 925b971377)
2018-09-21 12:31:04 +00:00
Nguyen Hai
6bd857fa0e import zuul job settings from project-config
This is a mechanically generated patch to complete step 1 of moving
the zuul job settings out of project-config and into each project
repository.

Because there will be a separate patch on each branch, the branch
specifiers for branch-specific jobs have been removed.

Because this patch is generated by a script, there may be some
cosmetic changes to the layout of the YAML file(s) as the contents are
normalized.

See the python3-first goal document for details:
https://governance.openstack.org/tc/goals/stein/python3-first.html

Change-Id: I35a8ce3dc54cb662ee9154e343cf50fe96f64807
Story: #2002586
Task: #24344
2018-08-19 00:59:08 +09:00
Clark Boylan
e0faeea608 Remove undefined job
The legacy-rally-dsvm-watcher-rally job does not exist, but it is listed
in the .zuul.yaml config. This is a zuul configuration error. Remove the
nonexistent job to fix zuul.

Change-Id: I1bbfd373ad12b98696ab2ddb78e56e6503cc4c4d
2018-07-03 13:27:12 -07:00
Zuul
61aca40e6e Merge "Update auth_uri option to www_authenticate_uri" into stable/queens 2018-06-05 07:49:22 +00:00
caoyuan
b293389734 Delete the unnecessary '-'
Fixes a typo.

Change-Id: I4ecdb827d94ef0ae88e2f37db9d1a53525140947
(cherry picked from commit 4844baa816)
2018-05-16 05:03:45 +00:00
caoyuan
050e6d58f1 Update auth_uri option to www_authenticate_uri
Option auth_uri from group keystone_authtoken is deprecated in Queens [1].
Use option www_authenticate_uri from group keystone_authtoken.

[1] https://review.openstack.org/#/c/508522/
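The resulting change in watcher.conf is a straight rename (the URL shown is a placeholder):

```ini
[keystone_authtoken]
# Deprecated since Queens:
#auth_uri = http://controller:5000
www_authenticate_uri = http://controller:5000
```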

Change-Id: I2ef330d7f9b632e9a81d22a8edec3c88eb532ff5
(cherry picked from commit 8c916930c8)
2018-05-15 07:57:53 +00:00
Zuul
7223d35c47 Merge "Imported Translations from Zanata" into stable/queens 2018-03-06 05:30:53 +00:00
Zuul
57f1971982 Merge "Add a hacking rule for string interpolation at logging" into stable/queens 2018-03-06 02:42:13 +00:00
OpenStack Proposal Bot
c9b2b2aa39 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Ia00d11dd76a27a5c052c7a512cadaefa168d0340
2018-03-03 07:22:16 +00:00
Andreas Jaeger
a42c31c221 Fix exception string format
The string %(action) is not valid: it is missing the conversion
specifier. Add s for string.

Note that the broken format also made the string untranslatable, since
our translation tools check for valid format specifiers and fail. In
this case the failure came from the source string itself.
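The failure mode can be seen directly in Python: without the trailing s the %-format specification is incomplete, and interpolating a string raises instead of rendering (a minimal reproduction, not watcher's actual message):

```python
def render(template, action):
    # %-style mapping interpolation, as used in the exception message.
    return template % {'action': action}

FIXED = 'The %(action)s is not allowed.'   # valid: conversion type 's'
BROKEN = 'The %(action) is not allowed.'   # missing conversion type
```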

Change-Id: I2e630928dc32542a8a7c02657a9f0ab1eaab62ff
2018-03-02 20:57:41 +00:00
ForestLee
403ec94bc1 Add a hacking rule for string interpolation at logging
String interpolation should be delayed to be handled by
the logging code, rather than being done at the point
of the logging call.
See the oslo i18n guideline
* https://docs.openstack.org/oslo.i18n/latest/user/guidelines.html#adding-variables-to-log-messages
and
* https://github.com/openstack-dev/hacking/blob/master/hacking/checks/other.py#L39
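The rule distinguishes the two call styles below (a minimal sketch; the logger name and message are illustrative):

```python
import logging

LOG = logging.getLogger('watcher.demo')
node = 'compute01'

# Flagged by the hacking check: interpolation happens eagerly, even when
# INFO is disabled, and translation tools cannot defer it.
# LOG.info('Migrating instances off %s' % node)

# Preferred: pass arguments separately so logging interpolates lazily,
# only if the record is actually emitted.
LOG.info('Migrating instances off %s', node)
```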
Closes-Bug: #1596829

Change-Id: Ibba5791669c137be1483805db657beb907030227
2018-02-28 12:13:10 +00:00
OpenStack Release Bot
3431b77388 Update UPPER_CONSTRAINTS_FILE for stable/queens
The new stable upper-constraints file is only available
after the openstack/requirements repository is branched.
This will happen around the RC1 timeframe.

Recheck and merge this change once the requirements
repository has been branched.

The CI system will work with this patch before the requirements
repository is branched because zuul configures the job to run
with a local copy of the file and defaults to the master branch.
However, accepting the patch will break the test configuration
on developers' local systems, so please wait until after the
requirements repository is branched to merge the patch.

Change-Id: I8ec196a62e7c0146f25045e643073f414ae69249
2018-02-08 16:34:03 +00:00
OpenStack Release Bot
eb4cacc00e Update .gitreview for stable/queens
Change-Id: I4ac0da37285c34471654bb5125c034b415c6031d
2018-02-08 16:33:58 +00:00
154 changed files with 1618 additions and 3303 deletions

View File

@@ -1,4 +1,5 @@
[gerrit]
host=review.openstack.org
host=review.opendev.org
port=29418
project=openstack/watcher.git
defaultbranch=stable/queens

View File

@@ -1,139 +1,45 @@
- project:
templates:
- openstack-python-jobs
- openstack-python35-jobs
- publish-openstack-sphinx-docs
- check-requirements
- release-notes-jobs
check:
jobs:
- watcher-tempest-functional
- watcher-tempest-dummy_optim
- watcher-tempest-actuator
- watcher-tempest-basic_optim
- watcher-tempest-workload_balancing
- watcherclient-tempest-functional
- legacy-rally-dsvm-watcher-rally
- openstack-tox-lower-constraints
- watcher-tempest-multinode
gate:
jobs:
- watcher-tempest-functional
- watcher-tempest-dummy_optim
- watcher-tempest-actuator
- watcher-tempest-basic_optim
- watcher-tempest-workload_balancing
- watcherclient-tempest-functional
- legacy-rally-dsvm-watcher-rally
- openstack-tox-lower-constraints
queue: watcher
- job:
name: watcher-tempest-dummy_optim
parent: watcher-tempest-multinode
vars:
tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_dummy_optim'
- job:
name: watcher-tempest-actuator
parent: watcher-tempest-multinode
vars:
tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_actuator'
- job:
name: watcher-tempest-basic_optim
parent: watcher-tempest-multinode
vars:
tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_basic_optim'
- job:
name: watcher-tempest-workload_balancing
parent: watcher-tempest-multinode
vars:
tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_workload_balancing'
- job:
name: watcher-tempest-multinode
parent: watcher-tempest-functional
voting: false
nodeset: openstack-two-node
pre-run: playbooks/pre.yaml
run: playbooks/orchestrate-tempest.yaml
roles:
- zuul: openstack/tempest
group-vars:
subnode:
devstack_local_conf:
post-config:
$NOVA_CONF:
libvirt:
live_migration_uri: 'qemu+ssh://root@%s/system'
devstack_services:
watcher-api: false
watcher-decision-engine: false
watcher-applier: false
# We need to add TLS support for watcher plugin
tls-proxy: false
ceilometer: false
ceilometer-acompute: false
ceilometer-acentral: false
ceilometer-anotification: false
watcher: false
gnocchi-api: false
gnocchi-metricd: false
rabbit: false
mysql: false
vars:
devstack_local_conf:
post-config:
$NOVA_CONF:
libvirt:
live_migration_uri: 'qemu+ssh://root@%s/system'
test-config:
$TEMPEST_CONFIG:
compute:
min_compute_nodes: 2
compute-feature-enabled:
live_migration: true
block_migration_for_live_migration: true
devstack_plugins:
ceilometer: https://git.openstack.org/openstack/ceilometer
- job:
name: watcher-tempest-functional
parent: devstack-tempest
timeout: 7200
name: watcher-tempest-base-multinode
parent: legacy-dsvm-base-multinode
run: playbooks/legacy/watcher-tempest-base-multinode/run.yaml
post-run: playbooks/legacy/watcher-tempest-base-multinode/post.yaml
timeout: 4200
required-projects:
- openstack/ceilometer
- openstack-infra/devstack-gate
- openstack/devstack-gate
- openstack/python-openstackclient
- openstack/python-watcherclient
- openstack/watcher
- openstack/watcher-tempest-plugin
- openstack/tempest
vars:
devstack_plugins:
watcher: https://git.openstack.org/openstack/watcher
devstack_services:
tls-proxy: false
watcher-api: true
watcher-decision-engine: true
watcher-applier: true
tempest: true
s-account: false
s-container: false
s-object: false
s-proxy: false
devstack_localrc:
TEMPEST_PLUGINS: '/opt/stack/watcher-tempest-plugin'
tempest_test_regex: 'watcher_tempest_plugin.tests.api'
tox_envlist: all
tox_environment:
# Do we really need to set this? It's cargo culted
PYTHONUNBUFFERED: 'true'
zuul_copy_output:
/etc/hosts: logs
nodeset: legacy-ubuntu-xenial-2-node
- job:
# This job is used in python-watcherclient repo
name: watcherclient-tempest-functional
parent: watcher-tempest-functional
name: watcher-tempest-multinode
parent: watcher-tempest-base-multinode
voting: false
- job:
# This job is used by python-watcherclient repo
name: watcherclient-tempest-functional
parent: legacy-dsvm-base
run: playbooks/legacy/watcherclient-tempest-functional/run.yaml
post-run: playbooks/legacy/watcherclient-tempest-functional/post.yaml
timeout: 4200
vars:
tempest_concurrency: 1
devstack_localrc:
TEMPEST_PLUGINS: '/opt/stack/python-watcherclient'
tempest_test_regex: 'watcherclient.tests.functional'
required-projects:
- openstack/devstack
- openstack/devstack-gate
- openstack/python-openstackclient
- openstack/python-watcherclient
- openstack/watcher

View File

@@ -8,4 +8,4 @@
watcher Style Commandments
==========================
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
Read the OpenStack Style Commandments https://docs.openstack.org/developer/hacking/

View File

@@ -2,8 +2,8 @@
Team and repository tags
========================
.. image:: https://governance.openstack.org/tc/badges/watcher.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
.. image:: https://governance.openstack.org/badges/watcher.svg
:target: https://governance.openstack.org/reference/tags/index.html
.. Change things from this point on
@@ -22,11 +22,10 @@ service for multi-tenant OpenStack-based clouds.
Watcher provides a robust framework to realize a wide range of cloud
optimization goals, including the reduction of data center
operating costs, increased system performance via intelligent virtual machine
migration, increased energy efficiency and more!
migration, increased energy efficiency-and more!
* Free software: Apache license
* Wiki: https://wiki.openstack.org/wiki/Watcher
* Source: https://github.com/openstack/watcher
* Source: https://github.com/openstack/watcher
* Bugs: https://bugs.launchpad.net/watcher
* Documentation: https://docs.openstack.org/watcher/latest/
* Release notes: https://docs.openstack.org/releasenotes/watcher/

View File

@@ -177,20 +177,16 @@ function create_watcher_conf {
iniset $WATCHER_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
iniset $WATCHER_CONF DEFAULT control_exchange watcher
iniset_rpc_backend watcher $WATCHER_CONF
iniset $WATCHER_CONF database connection $(database_connection_url watcher)
iniset $WATCHER_CONF api host "$WATCHER_SERVICE_HOST"
if is_service_enabled tls-proxy; then
iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT_INT"
# iniset $WATCHER_CONF api enable_ssl_api "True"
else
iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT"
fi
iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT"
iniset $WATCHER_CONF oslo_policy policy_file $WATCHER_POLICY_YAML
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_userid $RABBIT_USERID
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_host $RABBIT_HOST
iniset $WATCHER_CONF oslo_messaging_notifications driver "messagingv2"
iniset $NOVA_CONF oslo_messaging_notifications topics "notifications,watcher_notifications"
@@ -301,7 +297,8 @@ function start_watcher_api {
# Start proxies if enabled
if is_service_enabled tls-proxy; then
start_tls_proxy watcher '*' $WATCHER_SERVICE_PORT $WATCHER_SERVICE_HOST $WATCHER_SERVICE_PORT_INT
start_tls_proxy '*' $WATCHER_SERVICE_PORT $WATCHER_SERVICE_HOST $WATCHER_SERVICE_PORT_INT &
start_tls_proxy '*' $EC2_SERVICE_PORT $WATCHER_SERVICE_HOST $WATCHER_SERVICE_PORT_INT &
fi
}
@@ -317,6 +314,7 @@ function start_watcher {
function stop_watcher {
if [[ "$WATCHER_USE_MOD_WSGI" == "True" ]]; then
disable_apache_site watcher-api
restart_apache_server
else
stop_process watcher-api
fi

View File

@@ -35,7 +35,7 @@ VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
NOVA_INSTANCES_PATH=/opt/stack/data/instances
# Enable the Ceilometer plugin for the compute agent
enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
disable_service ceilometer-acentral,ceilometer-collector,ceilometer-api
LOGFILE=$DEST/logs/stack.sh.log

View File

@@ -25,13 +25,13 @@ MULTI_HOST=1
disable_service n-cpu
# Enable the Watcher Dashboard plugin
enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard
enable_plugin watcher-dashboard https://git.openstack.org/openstack/watcher-dashboard
# Enable the Watcher plugin
enable_plugin watcher git://git.openstack.org/openstack/watcher
enable_plugin watcher https://git.openstack.org/openstack/watcher
# Enable the Ceilometer plugin
enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
# This is the controller node, so disable the ceilometer compute agent
disable_service ceilometer-acompute

View File

@@ -20,7 +20,7 @@ It is used via a single directive in the .rst file
"""
from docutils.parsers.rst import Directive
from sphinx.util.compat import Directive
from docutils import nodes
from watcher.notifications import base as notification

View File

@@ -19,7 +19,7 @@ The source install instructions specifically avoid using platform specific
packages, instead using the source for the code and the Python Package Index
(PyPi_).
.. _PyPi: https://pypi.org/
.. _PyPi: https://pypi.python.org/pypi
It's expected that your system already has python2.7_, latest version of pip_,
and git_ available.

View File

@@ -42,7 +42,6 @@ extensions = [
'ext.versioned_notifications',
'oslo_config.sphinxconfiggen',
'openstackdocstheme',
'sphinx.ext.napoleon',
]
wsme_protocols = ['restjson']

View File

@@ -129,14 +129,10 @@ Configure the Identity service for the Watcher service
.. code-block:: bash
$ openstack endpoint create --region YOUR_REGION
watcher public http://WATCHER_API_PUBLIC_IP:9322
$ openstack endpoint create --region YOUR_REGION
watcher internal http://WATCHER_API_INTERNAL_IP:9322
$ openstack endpoint create --region YOUR_REGION
watcher admin http://WATCHER_API_ADMIN_IP:9322
$ openstack endpoint create --region YOUR_REGION watcher \
--publicurl http://WATCHER_API_PUBLIC_IP:9322 \
--internalurl http://WATCHER_API_INTERNAL_IP:9322 \
--adminurl http://WATCHER_API_ADMIN_IP:9322
.. _watcher-db_configuration:
@@ -169,7 +165,7 @@ You can easily generate and update a sample configuration file
named :ref:`watcher.conf.sample <watcher_sample_configuration_files>` by using
these following commands::
$ git clone git://git.openstack.org/openstack/watcher
$ git clone https://git.openstack.org/openstack/watcher
$ cd watcher/
$ tox -e genconfig
$ vi etc/watcher/watcher.conf.sample
@@ -221,7 +217,7 @@ so that the watcher service is configured for your needs.
# The SQLAlchemy connection string used to connect to the
# database (string value)
#connection=<None>
connection = mysql+pymysql://watcher:WATCHER_DBPASSWORD@DB_IP/watcher?charset=utf8
connection = mysql://watcher:WATCHER_DBPASSWORD@DB_IP/watcher?charset=utf8
#. Configure the Watcher Service to use the RabbitMQ message broker by
setting one or more of these options. Replace RABBIT_HOST with the
@@ -239,8 +235,21 @@ so that the watcher service is configured for your needs.
# option. (string value)
control_exchange = watcher
# ...
transport_url = rabbit://RABBITMQ_USER:RABBITMQ_PASSWORD@RABBIT_HOST
...
[oslo_messaging_rabbit]
# The username used by the message broker (string value)
rabbit_userid = RABBITMQ_USER
# The password of user used by the message broker (string value)
rabbit_password = RABBITMQ_PASSWORD
# The host where the message broker is installed (string value)
rabbit_host = RABBIT_HOST
# The port used by the message broker (string value)
#rabbit_port = 5672
#. Watcher API shall validate the token provided by every incoming request,
@@ -264,7 +273,7 @@ so that the watcher service is configured for your needs.
# Authentication URL (unknown value)
#auth_url = <None>
auth_url = http://IDENTITY_IP:5000
auth_url = http://IDENTITY_IP:35357
# Username (unknown value)
# Deprecated group/name - [DEFAULT]/username
@@ -310,7 +319,7 @@ so that the watcher service is configured for your needs.
# Authentication URL (unknown value)
#auth_url = <None>
auth_url = http://IDENTITY_IP:5000
auth_url = http://IDENTITY_IP:35357
# Username (unknown value)
# Deprecated group/name - [DEFAULT]/username
@@ -340,7 +349,7 @@ so that the watcher service is configured for your needs.
[nova_client]
# Version of Nova API to use in novaclient. (string value)
#api_version = 2.56
#api_version = 2.53
api_version = 2.1
#. Create the Watcher Service database tables::

View File

@@ -1,9 +1,5 @@
===================
Configuration Guide
===================
.. toctree::
:maxdepth: 2
:maxdepth: 1
configuring
watcher

View File

@@ -39,7 +39,7 @@ notifications of important events.
* https://launchpad.net
* https://launchpad.net/watcher
* https://launchpad.net/openstack
* https://launchpad.net/~openstack
Project Hosting Details
@@ -49,7 +49,7 @@ Bug tracker
https://launchpad.net/watcher
Mailing list (prefix subjects with ``[watcher]`` for faster responses)
http://lists.openstack.org/pipermail/openstack-dev/
https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Wiki
https://wiki.openstack.org/Watcher
@@ -65,7 +65,7 @@ IRC Channel
Weekly Meetings
On Wednesdays at 14:00 UTC on even weeks in the ``#openstack-meeting-4``
IRC channel, 08:00 UTC on odd weeks in the ``#openstack-meeting-alt``
IRC channel, 13:00 UTC on odd weeks in the ``#openstack-meeting-alt``
IRC channel (`meetings logs`_)
.. _changelog: http://eavesdrop.openstack.org/irclogs/%23openstack-watcher/

View File

@@ -19,7 +19,7 @@ model. To enable the Watcher plugin with DevStack, add the following to the
`[[local|localrc]]` section of your controller's `local.conf` to enable the
Watcher plugin::
enable_plugin watcher git://git.openstack.org/openstack/watcher
enable_plugin watcher https://git.openstack.org/openstack/watcher
For more detailed instructions, see `Detailed DevStack Instructions`_. Check
out the `DevStack documentation`_ for more information regarding DevStack.

View File

@@ -37,7 +37,7 @@ different version of the above, please document your configuration here!
.. _Python: https://www.python.org/
.. _git: https://git-scm.com/
.. _setuptools: https://pypi.org/project/setuptools
.. _setuptools: https://pypi.python.org/pypi/setuptools
.. _virtualenvwrapper: https://virtualenvwrapper.readthedocs.io/en/latest/install.html
Getting the latest code
@@ -69,8 +69,8 @@ itself.
These dependencies can be installed from PyPi_ using the Python tool pip_.
.. _PyPi: https://pypi.org/
.. _pip: https://pypi.org/project/pip
.. _PyPi: https://pypi.python.org/
.. _pip: https://pypi.python.org/pypi/pip
However, your system *may* need additional dependencies that `pip` (and by
extension, PyPi) cannot satisfy. These dependencies should be installed
@@ -123,10 +123,9 @@ You can re-activate this virtualenv for your current shell using:
$ workon watcher
For more information on virtual environments, see virtualenv_ and
virtualenvwrapper_.
For more information on virtual environments, see virtualenv_.
.. _virtualenv: https://pypi.org/project/virtualenv/
.. _virtualenv: https://www.virtualenv.org/

View File

@@ -79,7 +79,7 @@ requirements.txt file::
.. _cookiecutter: https://github.com/audreyr/cookiecutter
.. _OpenStack cookiecutter: https://github.com/openstack-dev/cookiecutter
.. _python-watcher: https://pypi.org/project/python-watcher
.. _python-watcher: https://pypi.python.org/pypi/python-watcher
Implementing a plugin for Watcher
=================================

View File

@@ -208,7 +208,7 @@ Here below is how to register ``DummyClusterDataModelCollector`` using pbr_:
watcher_cluster_data_model_collectors =
dummy = thirdparty.dummy:DummyClusterDataModelCollector
.. _pbr: https://docs.openstack.org/pbr/latest/
.. _pbr: http://docs.openstack.org/pbr/latest
Add new notification endpoints

View File

@@ -31,7 +31,7 @@ the following::
(watcher) $ tox -e pep8
.. _tox: https://tox.readthedocs.org/
.. _Gerrit: https://review.openstack.org/
.. _Gerrit: http://review.openstack.org/
You may pass options to the test programs using positional arguments. To run a
specific unit test, you can pass extra options to `os-testr`_ after putting

View File

@@ -274,7 +274,7 @@ In OpenStack Identity, a :ref:`project <project_definition>` must be owned by a
specific domain.
Please, read `the official OpenStack definition of a Project
<https://docs.openstack.org/doc-contrib-guide/common/glossary.html>`_.
<http://docs.openstack.org/glossary/content/glossary.html>`_.
.. _scoring_engine_definition:

View File

@@ -15,7 +15,7 @@ metrics receiver, complex event processor and profiler, optimization processor
and an action plan applier. This provides a robust framework to realize a wide
range of cloud optimization goals, including the reduction of data center
operating costs, increased system performance via intelligent virtual machine
migration, increased energy efficiency and more!
migration, increased energy efficiencyand more!
Watcher project consists of several source code repositories:

View File

@@ -27,7 +27,7 @@
[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
@@ -47,7 +47,7 @@
[watcher_clients_auth]
...
auth_type = password
auth_url = http://controller:5000
auth_url = http://controller:35357
username = watcher
password = WATCHER_PASS
project_domain_name = default

View File

@@ -28,10 +28,10 @@ optimization algorithms, data metrics and data profilers can be
developed and inserted into the Watcher framework.
Check the documentation for watcher optimization strategies at
`Strategies <https://docs.openstack.org/watcher/latest/strategies/index.html>`_.
https://docs.openstack.org/watcher/latest/strategies/index.html
Check watcher glossary at `Glossary
<https://docs.openstack.org/watcher/latest/glossary.html>`_.
Check watcher glossary at
https://docs.openstack.org/watcher/latest/glossary.html
This chapter assumes a working setup of OpenStack following the

View File

@@ -7,7 +7,9 @@ Service for the Watcher API
---------------------------
:Author: openstack@lists.launchpad.net
:Date:
:Copyright: OpenStack Foundation
:Version:
:Manual section: 1
:Manual group: cloud computing

View File

@@ -7,7 +7,9 @@ Service for the Watcher Applier
-------------------------------
:Author: openstack@lists.launchpad.net
:Date:
:Copyright: OpenStack Foundation
:Version:
:Manual section: 1
:Manual group: cloud computing

View File

@@ -7,7 +7,9 @@ Service for the Watcher Decision Engine
---------------------------------------
:Author: openstack@lists.launchpad.net
:Date:
:Copyright: OpenStack Foundation
:Version:
:Manual section: 1
:Manual group: cloud computing

View File

@@ -9,7 +9,7 @@ Synopsis
**goal**: ``unclassified``
.. watcher-term:: watcher.decision_engine.strategy.strategies.actuation.Actuator
.. watcher-term:: watcher.decision_engine.strategy.strategies.actuation
Requirements
------------

View File

@@ -9,7 +9,7 @@ Synopsis
**goal**: ``server_consolidation``
.. watcher-term:: watcher.decision_engine.strategy.strategies.basic_consolidation.BasicConsolidation
.. watcher-term:: watcher.decision_engine.strategy.strategies.basic_consolidation
Requirements
------------

View File

@@ -1,92 +0,0 @@
===========================
Host Maintenance Strategy
===========================
Synopsis
--------
**display name**: ``Host Maintenance Strategy``
**goal**: ``cluster_maintaining``
.. watcher-term:: watcher.decision_engine.strategy.strategies.host_maintenance.HostMaintenance
Requirements
------------
None.
Metrics
*******
None
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migration``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
==================== ====== ====================================
parameter type default Value description
==================== ====== ====================================
``maintenance_node`` String The name of the compute node which
need maintenance. Required.
``backup_node`` String The name of the compute node which
will backup the maintenance node.
Optional.
==================== ====== ====================================
Efficacy Indicator
------------------
None
Algorithm
---------
For more information on the Host Maintenance Strategy please refer
to: https://specs.openstack.org/openstack/watcher-specs/specs/queens/approved/cluster-maintenance-strategy.html
How to use it ?
---------------
.. code-block:: shell
$ openstack optimize audit create \
-g cluster_maintaining -s host_maintenance \
-p maintenance_node=compute01 \
-p backup_node=compute02 \
--auto-trigger
External Links
--------------
None.

View File

@@ -9,7 +9,11 @@ Synopsis
**goal**: ``thermal_optimization``
.. watcher-term:: watcher.decision_engine.strategy.strategies.outlet_temp_control
Outlet (Exhaust Air) temperature is a new thermal telemetry which can be
used to measure the host's thermal/workload status. This strategy makes
decisions to migrate workloads to the hosts with good thermal condition
(lowest outlet temperature) when the outlet temperature of source hosts
reach a configurable threshold.
Requirements
------------

View File

@@ -9,7 +9,7 @@ Synopsis
**goal**: ``saving_energy``
.. watcher-term:: watcher.decision_engine.strategy.strategies.saving_energy.SavingEnergy
.. watcher-term:: watcher.decision_engine.strategy.strategies.saving_energy
Requirements
------------
@@ -67,13 +67,13 @@ parameter type default description
Efficacy Indicator
------------------
None
Energy saving strategy efficacy indicator is unclassified.
https://github.com/openstack/watcher/blob/master/watcher/decision_engine/goal/goals.py#L215-L218
Algorithm
---------
For more information on the Energy Saving Strategy please refer to:
http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html
For more information on the Energy Saving Strategy please refer to:http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html
How to use it ?
---------------
@@ -91,10 +91,10 @@ step 2: Create audit to do optimization
$ openstack optimize audittemplate create \
at1 saving_energy --strategy saving_energy
$ openstack optimize audit create -a at1 \
-p free_used_percent=20.0
$ openstack optimize audit create -a at1
External Links
--------------
None
*Spec URL*
http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html

View File

@@ -1,87 +0,0 @@
========================
Storage capacity balance
========================
Synopsis
--------
**display name**: ``Storage Capacity Balance Strategy``
**goal**: ``workload_balancing``
.. watcher-term:: watcher.decision_engine.strategy.strategies.storage_capacity_balance.StorageCapacityBalance
Requirements
------------
Metrics
*******
None
Cluster data model
******************
Storage cluster data model is required:
.. watcher-term:: watcher.decision_engine.model.collector.cinder.CinderClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 25 35
:header-rows: 1
* - action
- description
* - ``volume_migrate``
- .. watcher-term:: watcher.applier.actions.volume_migration.VolumeMigrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameter is:
==================== ====== ============= =====================================
parameter type default Value description
==================== ====== ============= =====================================
``volume_threshold`` Number 80.0 Volume threshold for capacity balance
==================== ====== ============= =====================================
Efficacy Indicator
------------------
None
Algorithm
---------
For more information on the zone migration strategy please refer to:
http://specs.openstack.org/openstack/watcher-specs/specs/queens/implemented/storage-capacity-balance.html
How to use it ?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 workload_balancing --strategy storage_capacity_balance
$ openstack optimize audit create -a at1 \
-p volume_threshold=85.0
External Links
--------------
None

View File

@@ -9,7 +9,7 @@ Synopsis
**goal**: ``airflow_optimization``
.. watcher-term:: watcher.decision_engine.strategy.strategies.uniform_airflow.UniformAirflow
.. watcher-term:: watcher.decision_engine.strategy.strategies.uniform_airflow
Requirements
------------

View File

@@ -9,7 +9,7 @@ Synopsis
**goal**: ``vm_consolidation``
.. watcher-term:: watcher.decision_engine.strategy.strategies.vm_workload_consolidation.VMWorkloadConsolidation
.. watcher-term:: watcher.decision_engine.strategy.strategies.vm_workload_consolidation
Requirements
------------

View File

@@ -9,7 +9,7 @@ Synopsis
**goal**: ``workload_balancing``
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_stabilization.WorkloadStabilization
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_stabilization
Requirements
------------

View File

@@ -9,7 +9,7 @@ Synopsis
**goal**: ``workload_balancing``
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_balance.WorkloadBalance
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_balance
Requirements
------------


@@ -9,7 +9,7 @@ Synopsis
**goal**: ``hardware_maintenance``
.. watcher-term:: watcher.decision_engine.strategy.strategies.zone_migration.ZoneMigration
.. watcher-term:: watcher.decision_engine.strategy.strategies.zone_migration
Requirements
------------


@@ -39,22 +39,6 @@ named ``watcher``, or by using the `OpenStack CLI`_ ``openstack``.
If you want to deploy Watcher in Horizon, please refer to the `Watcher Horizon
plugin installation guide`_.
.. note::
Note that this guide uses the `OpenStack CLI`_ as the primary
interface. Nevertheless, you can use the `Watcher CLI`_ in the same
way by replacing
.. code:: bash
$ openstack optimize ...
with
.. code:: bash
$ watcher ...
.. _`installation guide`: https://docs.openstack.org/python-watcherclient/latest
.. _`Watcher Horizon plugin installation guide`: https://docs.openstack.org/watcher-dashboard/latest/install/installation.html
.. _`OpenStack CLI`: https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html
@@ -67,6 +51,10 @@ watcher binary without options.
.. code:: bash
$ watcher help
or::
$ openstack help optimize
How do I run an audit of my cluster?
@@ -76,6 +64,10 @@ First, you need to find the :ref:`goal <goal_definition>` you want to achieve:
.. code:: bash
$ watcher goal list
or::
$ openstack optimize goal list
.. note::
@@ -89,6 +81,10 @@ An :ref:`audit template <audit_template_definition>` defines an optimization
.. code:: bash
$ watcher audittemplate create my_first_audit_template <your_goal>
or::
$ openstack optimize audittemplate create my_first_audit_template <your_goal>
Although optional, you may want to actually set a specific strategy for your
@@ -97,6 +93,10 @@ following command:
.. code:: bash
$ watcher strategy list --goal <your_goal_uuid_or_name>
or::
$ openstack optimize strategy list --goal <your_goal_uuid_or_name>
You can use the following command to check strategy details, including which
@@ -104,12 +104,21 @@ parameters it supports and in which format:
.. code:: bash
$ watcher strategy show <your_strategy>
or::
$ openstack optimize strategy show <your_strategy>
The command to create your audit template would then be:
.. code:: bash
$ watcher audittemplate create my_first_audit_template <your_goal> \
--strategy <your_strategy>
or::
$ openstack optimize audittemplate create my_first_audit_template <your_goal> \
--strategy <your_strategy>
@@ -124,6 +133,10 @@ audit) that you want to use.
.. code:: bash
$ watcher audittemplate list
or::
$ openstack optimize audittemplate list
- Start an audit based on this :ref:`audit template
@@ -131,6 +144,10 @@ audit) that you want to use.
.. code:: bash
$ watcher audit create -a <your_audit_template>
or::
$ openstack optimize audit create -a <your_audit_template>
If your_audit_template was created by --strategy <your_strategy>, and it
@@ -139,6 +156,11 @@ format), you can append `-p` to input required parameters:
.. code:: bash
$ watcher audit create -a <your_audit_template> \
-p <your_strategy_para1>=5.5 -p <your_strategy_para2>=hi
or::
$ openstack optimize audit create -a <your_audit_template> \
-p <your_strategy_para1>=5.5 -p <your_strategy_para2>=hi
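Strategy parameters are validated before the audit is accepted. A minimal sketch of what such validation does is shown below; the `spec` format and helper are hypothetical, written for illustration only — Watcher itself validates strategy parameters with JSON Schema.

```python
def validate_params(params, spec):
    """Check audit parameters against a simple spec.

    Hypothetical helper for illustration only: Watcher itself
    validates strategy parameters with JSON Schema.
    spec maps parameter name -> allowed type(s).
    """
    errors = []
    for name, value in params.items():
        if name not in spec:
            errors.append("unknown parameter: %s" % name)
        elif not isinstance(value, spec[name]):
            errors.append("wrong type for parameter: %s" % name)
    return errors

# Example spec resembling the storage_capacity_balance parameter.
spec = {"volume_threshold": (int, float)}

print(validate_params({"volume_threshold": 85.0}, spec))  # []
print(validate_params({"granularity": 300}, spec))  # ['unknown parameter: granularity']
```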
@@ -151,13 +173,19 @@ Input parameters can cause audit creation failure when:
Watcher service will compute an :ref:`Action Plan <action_plan_definition>`
composed of a list of potential optimization :ref:`actions <action_definition>`
(instance migration, disabling of a compute node, ...) according to the
:ref:`goal <goal_definition>` to achieve.
:ref:`goal <goal_definition>` to achieve. You can see all of the goals
available in section ``[watcher_strategies]`` of the Watcher service
configuration file.
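The default planner orders the computed actions by a per-action-type weight before they are executed. A toy sketch of that idea follows; the weight values are made up for illustration and are not Watcher's defaults, though `migrate`, `change_nova_service_state` and `nop` are real Watcher action types.

```python
def plan(actions, weights):
    """Order actions by descending weight of their action type.

    Toy illustration of a weight-based planner; the weight values
    are invented for this example, not Watcher's defaults.
    """
    return sorted(actions, key=lambda a: weights[a["action_type"]], reverse=True)

weights = {"change_nova_service_state": 50, "migrate": 30, "nop": 0}
actions = [
    {"action_type": "migrate", "resource": "instance-1"},
    {"action_type": "change_nova_service_state", "resource": "node-1"},
    {"action_type": "nop", "resource": None},
]
ordered = plan(actions, weights)
print([a["action_type"] for a in ordered])
# ['change_nova_service_state', 'migrate', 'nop']
```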
- Wait until the Watcher audit has produced a new :ref:`action plan
<action_plan_definition>`, and get it:
.. code:: bash
$ watcher actionplan list --audit <the_audit_uuid>
or::
$ openstack optimize actionplan list --audit <the_audit_uuid>
- Have a look at the list of optimization :ref:`actions <action_definition>`
@@ -165,6 +193,10 @@ composed of a list of potential optimization :ref:`actions <action_definition>`
.. code:: bash
$ watcher action list --action-plan <the_action_plan_uuid>
or::
$ openstack optimize action list --action-plan <the_action_plan_uuid>
Once you have learned how to create an :ref:`Action Plan
@@ -175,6 +207,10 @@ cluster:
.. code:: bash
$ watcher actionplan start <the_action_plan_uuid>
or::
$ openstack optimize actionplan start <the_action_plan_uuid>
You can follow the states of the :ref:`actions <action_definition>` by
@@ -182,11 +218,19 @@ periodically calling:
.. code:: bash
$ watcher action list
or::
$ openstack optimize action list
You can also obtain more detailed information about a specific action:
.. code:: bash
$ watcher action show <the_action_uuid>
or::
$ openstack optimize action show <the_action_uuid>


@@ -1,165 +0,0 @@
alabaster==0.7.10
alembic==0.9.8
amqp==2.2.2
appdirs==1.4.3
APScheduler==3.5.1
asn1crypto==0.24.0
automaton==1.14.0
Babel==2.5.3
bandit==1.4.0
beautifulsoup4==4.6.0
cachetools==2.0.1
certifi==2018.1.18
cffi==1.11.5
chardet==3.0.4
cliff==2.11.0
cmd2==0.8.1
contextlib2==0.5.5
coverage==4.5.1
croniter==0.3.20
cryptography==2.1.4
debtcollector==1.19.0
decorator==4.2.1
deprecation==2.0
doc8==0.8.0
docutils==0.14
dogpile.cache==0.6.5
dulwich==0.19.0
enum34==1.1.6
enum-compat==0.0.2
eventlet==0.20.0
extras==1.0.0
fasteners==0.14.1
fixtures==3.0.0
flake8==2.5.5
freezegun==0.3.10
future==0.16.0
futurist==1.6.0
gitdb2==2.0.3
GitPython==2.1.8
gnocchiclient==7.0.1
greenlet==0.4.13
hacking==0.12.0
idna==2.6
imagesize==1.0.0
iso8601==0.1.12
Jinja2==2.10
jmespath==0.9.3
jsonpatch==1.21
jsonpointer==2.0
jsonschema==2.6.0
keystoneauth1==3.4.0
keystonemiddleware==4.21.0
kombu==4.1.0
linecache2==1.0.0
logutils==0.3.5
lxml==4.1.1
Mako==1.0.7
MarkupSafe==1.0
mccabe==0.2.1
mock==2.0.0
monotonic==1.4
mox3==0.25.0
msgpack==0.5.6
munch==2.2.0
netaddr==0.7.19
netifaces==0.10.6
networkx==1.11
openstackdocstheme==1.20.0
openstacksdk==0.12.0
os-api-ref===1.4.0
os-client-config==1.29.0
os-service-types==1.2.0
os-testr==1.0.0
osc-lib==1.10.0
oslo.cache==1.29.0
oslo.concurrency==3.26.0
oslo.config==5.2.0
oslo.context==2.20.0
oslo.db==4.35.0
oslo.i18n==3.20.0
oslo.log==3.37.0
oslo.messaging==5.36.0
oslo.middleware==3.35.0
oslo.policy==1.34.0
oslo.reports==1.27.0
oslo.serialization==2.25.0
oslo.service==1.30.0
oslo.utils==3.36.0
oslo.versionedobjects==1.32.0
oslotest==3.3.0
packaging==17.1
Paste==2.0.3
PasteDeploy==1.5.2
pbr==3.1.1
pecan==1.2.1
pep8==1.5.7
pika==0.10.0
pika-pool==0.1.3
prettytable==0.7.2
psutil==5.4.3
pycadf==2.7.0
pycparser==2.18
pyflakes==0.8.1
Pygments==2.2.0
pyinotify==0.9.6
pyOpenSSL==17.5.0
pyparsing==2.2.0
pyperclip==1.6.0
python-ceilometerclient==2.9.0
python-cinderclient==3.5.0
python-dateutil==2.7.0
python-editor==1.0.3
python-glanceclient==2.9.1
python-ironicclient==2.3.0
python-keystoneclient==3.15.0
python-mimeparse==1.6.0
python-monascaclient==1.10.0
python-neutronclient==6.7.0
python-novaclient==10.1.0
python-openstackclient==3.14.0
python-subunit==1.2.0
pytz==2018.3
PyYAML==3.12
reno==2.7.0
repoze.lru==0.7
requests==2.18.4
requestsexceptions==1.4.0
restructuredtext-lint==1.1.3
rfc3986==1.1.0
Routes==2.4.1
simplegeneric==0.8.1
simplejson==3.13.2
six==1.11.0
smmap2==2.0.3
snowballstemmer==1.2.1
Sphinx==1.6.5
sphinxcontrib-httpdomain==1.6.1
sphinxcontrib-pecanwsme==0.8.0
sphinxcontrib-websupport==1.0.1
SQLAlchemy==1.2.5
sqlalchemy-migrate==0.11.0
sqlparse==0.2.4
statsd==3.2.2
stestr==2.0.0
stevedore==1.28.0
taskflow==3.1.0
Tempita==0.5.2
tenacity==4.9.0
testrepository==0.0.20
testresources==2.0.1
testscenarios==0.5.0
testtools==2.3.0
traceback2==1.4.0
tzlocal==1.5.1
ujson==1.35
unittest2==1.1.0
urllib3==1.22
vine==1.1.4
voluptuous==0.11.1
waitress==1.1.0
warlock==1.3.0
WebOb==1.7.4
WebTest==2.0.29
wrapt==1.10.11
WSME==0.9.2


@@ -0,0 +1,15 @@
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs


@@ -0,0 +1,67 @@
- hosts: primary
name: Legacy Watcher tempest base multinode
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
TEMPEST_PLUGINS='/opt/stack/new/watcher-tempest-plugin'
enable_plugin ceilometer https://opendev.org/openstack/ceilometer
# Enable watcher devstack plugin.
enable_plugin watcher https://opendev.org/openstack/watcher
EOF
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
export DEVSTACK_SUBNODE_CONFIG=" "
export PYTHONUNBUFFERED=true
export DEVSTACK_GATE_TEMPEST=1
export DEVSTACK_GATE_NEUTRON=1
export DEVSTACK_GATE_TOPOLOGY="multinode"
export PROJECTS="openstack/watcher $PROJECTS"
export PROJECTS="openstack/python-watcherclient $PROJECTS"
export PROJECTS="openstack/watcher-tempest-plugin $PROJECTS"
export DEVSTACK_GATE_TEMPEST_REGEX="watcher_tempest_plugin"
export BRANCH_OVERRIDE=default
if [ "$BRANCH_OVERRIDE" != "default" ] ; then
export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
fi
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'


@@ -0,0 +1,80 @@
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*nose_results.html
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testr_results.html.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.testrepository/tmp*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testrepository.subunit.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}/tox'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.tox/*/log/*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs


@@ -0,0 +1,64 @@
- hosts: all
name: Legacy watcherclient-dsvm-functional
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
enable_plugin watcher https://opendev.org/openstack/watcher
EOF
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
ENABLED_SERVICES=tempest
ENABLED_SERVICES+=,watcher-api,watcher-decision-engine,watcher-applier
export ENABLED_SERVICES
export PYTHONUNBUFFERED=true
export BRANCH_OVERRIDE=default
export PROJECTS="openstack/watcher $PROJECTS"
export DEVSTACK_PROJECT_FROM_GIT=python-watcherclient
if [ "$BRANCH_OVERRIDE" != "default" ] ; then
export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
fi
function post_test_hook {
# Configure and run functional tests
$BASE/new/python-watcherclient/watcherclient/tests/functional/hooks/post_test_hook.sh
}
export -f post_test_hook
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'


@@ -1,14 +0,0 @@
- hosts: all
# This is the default strategy, however since orchestrate-devstack requires
# "linear", it is safer to enforce it in case this is running in an
# environment configured with a different default strategy.
strategy: linear
roles:
- orchestrate-devstack
- hosts: tempest
roles:
- setup-tempest-run-dir
- setup-tempest-data-dir
- acl-devstack-files
- run-tempest


@@ -1,3 +0,0 @@
- hosts: all
roles:
- add-hostnames-to-hosts


@@ -29,7 +29,7 @@ Useful links
* How to install: https://docs.openstack.org/rally/latest/install_and_upgrade/install.html
* How to set Rally up and launch your first scenario: https://rally.readthedocs.io/en/latest/quick_start/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html
* How to set Rally up and launch your first scenario: https://rally.readthedocs.io/en/latest/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html
* More about Rally: https://docs.openstack.org/rally/latest/


@@ -1,4 +0,0 @@
---
features:
- Audits now have a 'name' field, which is friendlier to end users.
An audit's name can't exceed 63 characters.


@@ -1,6 +0,0 @@
---
features:
- |
The ability to exclude instances from the audit scope based on project_id
has been added. Instances from a particular OpenStack project can now be
excluded from an audit by defining a scope in audit templates.


@@ -1,6 +0,0 @@
---
features:
- Watcher now builds the compute CDM over the whole scope of the
cluster, including all instances. It filters out excluded instances
from migration during the audit.


@@ -1,9 +0,0 @@
---
features:
- |
Added a strategy for maintaining one compute node without
interrupting the user's applications. If given a backup node, the
strategy will first migrate all instances from the maintenance node
to the backup node. If no backup node is provided, it will migrate
all instances, relying on nova-scheduler.


@@ -1,6 +0,0 @@
---
features:
- Watcher gained the ability to calculate multiple global efficacy
indicators during an audit's execution. Global efficacy can now be
calculated for many resource types (such as volumes, instances and
networks) if the strategy supports efficacy indicators.


@@ -1,5 +0,0 @@
---
features:
- Added notifications about the cancellation of an action plan.
Event-based plugins now know when an action plan cancellation has
started and completed.


@@ -1,14 +0,0 @@
---
features:
- |
Instance cold migration logic is now replaced with the Nova migrate
server (migrate action) API, which has a host option since v2.56.
upgrade:
- |
The Nova API version is now set to 2.56 by default. This is required
for the migrate action of migration type cold with the
destination_node parameter to work.
fixes:
- |
The migrate action of migration type cold with the destination_node
parameter has been fixed. Previously, it booted an instance in the
service project as the migrated instance.


@@ -21,7 +21,6 @@ Contents:
:maxdepth: 1
unreleased
queens
pike
ocata
newton


@@ -1,426 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2017. #zanata
# Andi Chandler <andi@gowling.com>, 2018. #zanata
msgid ""
msgstr ""
"Project-Id-Version: watcher\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2018-02-28 12:27+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-02-16 07:20+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en_GB\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "0.29.0"
msgstr "0.29.0"
msgid "0.34.0"
msgstr "0.34.0"
msgid "1.0.0"
msgstr "1.0.0"
msgid "1.1.0"
msgstr "1.1.0"
msgid "1.3.0"
msgstr "1.3.0"
msgid "1.4.0"
msgstr "1.4.0"
msgid "1.4.1"
msgstr "1.4.1"
msgid "1.5.0"
msgstr "1.5.0"
msgid "1.6.0"
msgstr "1.6.0"
msgid "1.7.0"
msgstr "1.7.0"
msgid "Add a service supervisor to watch Watcher deamons."
msgstr "Add a service supervisor to watch Watcher daemons."
msgid "Add action for compute node power on/off"
msgstr "Add action for compute node power on/off"
msgid ""
"Add description property for dynamic action. Admin can see detail "
"information of any specify action."
msgstr ""
"Add description property for dynamic action. Admin can see detail "
"information of any specify action."
msgid "Add notifications related to Action object."
msgstr "Add notifications related to Action object."
msgid "Add notifications related to Action plan object."
msgstr "Add notifications related to Action plan object."
msgid "Add notifications related to Audit object."
msgstr "Add notifications related to Audit object."
msgid "Add notifications related to Service object."
msgstr "Add notifications related to Service object."
msgid ""
"Add superseded state for an action plan if the cluster data model has "
"changed after it has been created."
msgstr ""
"Add superseded state for an action plan if the cluster data model has "
"changed after it has been created."
msgid "Added SUSPENDED audit state"
msgstr "Added SUSPENDED audit state"
msgid ""
"Added a generic scoring engine module, which will standarize interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgstr ""
"Added a generic scoring engine module, which will standardise interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgid ""
"Added a new strategy based on the airflow of servers. This strategy makes "
"decisions to migrate VMs to make the airflow uniform."
msgstr ""
"Added a new strategy based on the airflow of servers. This strategy makes "
"decisions to migrate VMs to make the airflow uniform."
msgid ""
"Added a standard way to both declare and fetch configuration options so that "
"whenever the administrator generates the Watcher configuration sample file, "
"it contains the configuration options of the plugins that are currently "
"available."
msgstr ""
"Added a standard way to both declare and fetch configuration options so that "
"whenever the administrator generates the Watcher configuration sample file, "
"it contains the configuration options of the plugins that are currently "
"available."
msgid ""
"Added a strategy based on the VM workloads of hypervisors. This strategy "
"makes decisions to migrate workloads to make the total VM workloads of each "
"hypervisor balanced, when the total VM workloads of hypervisor reaches "
"threshold."
msgstr ""
"Added a strategy based on the VM workloads of hypervisors. This strategy "
"makes decisions to migrate workloads to make the total VM workloads of each "
"hypervisor balanced, when the total VM workloads of hypervisor reaches "
"threshold."
msgid ""
"Added a strategy that monitors if there is a higher load on some hosts "
"compared to other hosts in the cluster and re-balances the work across hosts "
"to minimize the standard deviation of the loads in the cluster."
msgstr ""
"Added a strategy that monitors if there is a higher load on some hosts "
"compared to other hosts in the cluster and re-balances the work across hosts "
"to minimise the standard deviation of the loads in the cluster."
msgid ""
"Added a way to add a new action without having to amend the source code of "
"the default planner."
msgstr ""
"Added a way to add a new action without having to amend the source code of "
"the default planner."
msgid ""
"Added a way to check state of strategy before audit's execution. "
"Administrator can use \"watcher strategy state <strategy_name>\" command to "
"get information about metrics' availability, datasource's availability and "
"CDM's availability."
msgstr ""
"Added a way to check state of strategy before audit's execution. "
"Administrator can use \"watcher strategy state <strategy_name>\" command to "
"get information about metrics' availability, datasource's availability and "
"CDM's availability."
msgid ""
"Added a way to compare the efficacy of different strategies for a give "
"optimization goal."
msgstr ""
"Added a way to compare the efficacy of different strategies for a give "
"optimisation goal."
msgid ""
"Added a way to create periodic audit to be able to optimize continuously the "
"cloud infrastructure."
msgstr ""
"Added a way to create periodic audit to be able to continuously optimise the "
"cloud infrastructure."
msgid ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decison engine is running."
msgstr ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decision engine is running."
msgid ""
"Added an in-memory cache of the cluster model built up and kept fresh via "
"notifications from services of interest in addition to periodic syncing "
"logic."
msgstr ""
"Added an in-memory cache of the cluster model built up and kept fresh via "
"notifications from services of interest in addition to periodic syncing "
"logic."
msgid ""
"Added binding between apscheduler job and Watcher decision engine service. "
"It will allow to provide HA support in the future."
msgstr ""
"Added binding between apscheduler job and Watcher decision engine service. "
"It will allow to provide HA support in the future."
msgid "Added cinder cluster data model"
msgstr "Added cinder cluster data model"
msgid ""
"Added gnocchi support as data source for metrics. Administrator can change "
"data source for each strategy using config file."
msgstr ""
"Added Gnocchi support as data source for metrics. Administrator can change "
"data source for each strategy using config file."
msgid ""
"Added notifications about cancelling of action plan. Now event based plugins "
"know when action plan cancel started and completed."
msgstr ""
"Added notifications about cancelling of action plan. Now event based plugins "
"know when action plan cancel started and completed."
msgid "Added policies to handle user rights to access Watcher API."
msgstr "Added policies to handle user rights to access Watcher API."
msgid "Added storage capacity balance strategy."
msgstr "Added storage capacity balance strategy."
msgid ""
"Added strategy \"Zone migration\" and it's goal \"Hardware maintenance\". "
"The strategy migrates many instances and volumes efficiently with minimum "
"downtime automatically."
msgstr ""
"Added strategy \"Zone migration\" and it's goal \"Hardware maintenance\". "
"The strategy migrates many instances and volumes efficiently with minimum "
"downtime automatically."
msgid ""
"Added strategy to identify and migrate a Noisy Neighbor - a low priority VM "
"that negatively affects peformance of a high priority VM by over utilizing "
"Last Level Cache."
msgstr ""
"Added strategy to identify and migrate a Noisy Neighbour - a low priority VM "
"that negatively affects performance of a high priority VM by over utilising "
"Last Level Cache."
msgid ""
"Added the functionality to filter out instances which have metadata field "
"'optimize' set to False. For now, this is only available for the "
"basic_consolidation strategy (if \"check_optimize_metadata\" configuration "
"option is enabled)."
msgstr ""
"Added the functionality to filter out instances which have metadata field "
"'optimize' set to False. For now, this is only available for the "
"basic_consolidation strategy (if \"check_optimize_metadata\" configuration "
"option is enabled)."
msgid "Added using of JSONSchema instead of voluptuous to validate Actions."
msgstr "Added using of JSONSchema instead of voluptuous to validate Actions."
msgid "Added volume migrate action"
msgstr "Added volume migrate action"
msgid ""
"Adds audit scoper for storage data model, now watcher users can specify "
"audit scope for storage CDM in the same manner as compute scope."
msgstr ""
"Adds audit scoper for storage data model, now watcher users can specify "
"audit scope for storage CDM in the same manner as compute scope."
msgid "Adds baremetal data model in Watcher"
msgstr "Adds baremetal data model in Watcher"
msgid ""
"Allow decision engine to pass strategy parameters, like optimization "
"threshold, to selected strategy, also strategy to provide parameters info to "
"end user."
msgstr ""
"Allow decision engine to pass strategy parameters, like optimisation "
"threshold, to selected strategy, also strategy to provide parameters info to "
"end user."
msgid ""
"Audits have 'name' field now, that is more friendly to end users. Audit's "
"name can't exceed 63 characters."
msgstr ""
"Audits have 'name' field now, that is more friendly to end users. Audit's "
"name can't exceed 63 characters."
msgid "Centralize all configuration options for Watcher."
msgstr "Centralise all configuration options for Watcher."
msgid "Contents:"
msgstr "Contents:"
msgid ""
"Copy all audit templates parameters into audit instead of having a reference "
"to the audit template."
msgstr ""
"Copy all audit templates parameters into audit instead of having a reference "
"to the audit template."
msgid "Current Series Release Notes"
msgstr "Current Series Release Notes"
msgid ""
"Each CDM collector can have its own CDM scoper now. This changed Scope JSON "
"schema definition for the audit template POST data. Please see audit "
"template create help message in python-watcherclient."
msgstr ""
"Each CDM collector can have its own CDM scoper now. This changed Scope JSON "
"schema definition for the audit template POST data. Please see audit "
"template create help message in python-watcherclient."
msgid ""
"Enhancement of vm_workload_consolidation strategy by using 'memory.resident' "
"metric in place of 'memory.usage', as memory.usage shows the memory usage "
"inside guest-os and memory.resident represents volume of RAM used by "
"instance on host machine."
msgstr ""
"Enhancement of vm_workload_consolidation strategy by using 'memory.resident' "
"metric in place of 'memory.usage', as memory.usage shows the memory usage "
"inside guest-os and memory.resident represents volume of RAM used by "
"instance on host machine."
msgid ""
"Existing workload_balance strategy based on the VM workloads of CPU. This "
"feature improves the strategy. By the input parameter \"metrics\", it makes "
"decision to migrate a VM base on CPU or memory utilization."
msgstr ""
"Existing workload_balance strategy based on the VM workloads of CPU. This "
"feature improves the strategy. By the input parameter \"metrics\", it makes "
"decision to migrate a VM base on CPU or memory utilisation."
msgid "New Features"
msgstr "New Features"
msgid "Newton Series Release Notes"
msgstr "Newton Series Release Notes"
msgid "Ocata Series Release Notes"
msgstr "Ocata Series Release Notes"
msgid "Pike Series Release Notes"
msgstr "Pike Series Release Notes"
msgid ""
"Provide a notification mechanism into Watcher that supports versioning. "
"Whenever a Watcher object is created, updated or deleted, a versioned "
"notification will, if it's relevant, be automatically sent to notify in "
"order to allow an event-driven style of architecture within Watcher. "
"Moreover, it will also give other services and/or 3rd party softwares (e.g. "
"monitoring solutions or rules engines) the ability to react to such events."
msgstr ""
"Provide a notification mechanism into Watcher that supports versioning. "
"Whenever a Watcher object is created, updated or deleted, a versioned "
"notification will, if it's relevant, be automatically sent to notify in "
"order to allow an event-driven style of architecture within Watcher. "
"Moreover, it will also give other services and/or 3rd party software (e.g. "
"monitoring solutions or rules engines) the ability to react to such events."
msgid ""
"Provides a generic way to define the scope of an audit. The set of audited "
"resources will be called \"Audit scope\" and will be defined in each audit "
"template (which contains the audit settings)."
msgstr ""
"Provides a generic way to define the scope of an audit. The set of audited "
"resources will be called \"Audit scope\" and will be defined in each audit "
"template (which contains the audit settings)."
msgid "Queens Series Release Notes"
msgstr "Queens Series Release Notes"
msgid ""
"The graph model describes how VMs are associated to compute hosts. This "
"allows for seeing relationships upfront between the entities and hence can "
"be used to identify hot/cold spots in the data center and influence a "
"strategy decision."
msgstr ""
"The graph model describes how VMs are associated to compute hosts. This "
"allows for seeing relationships upfront between the entities and hence can "
"be used to identify hot/cold spots in the data centre and influence a "
"strategy decision."
msgid ""
"There is new ability to create Watcher continuous audits with cron interval. "
"It means you may use, for example, optional argument '--interval \"\\*/5 \\* "
"\\* \\* \\*\"' to launch audit every 5 minutes. These jobs are executed on a "
"best effort basis and therefore, we recommend you to use a minimal cron "
"interval of at least one minute."
msgstr ""
"There is new ability to create Watcher continuous audits with cron interval. "
"It means you may use, for example, optional argument '--interval \"\\*/5 \\* "
"\\* \\* \\*\"' to launch audit every 5 minutes. These jobs are executed on a "
"best effort basis and therefore, we recommend you to use a minimal cron "
"interval of at least one minute."
msgid ""
"Watcher can continuously optimize the OpenStack cloud for a specific "
"strategy or goal by triggering an audit periodically which generates an "
"action plan and run it automatically."
msgstr ""
"Watcher can continuously optimise the OpenStack cloud for a specific "
"strategy or goal by triggering an audit periodically which generates an "
"action plan and run it automatically."
msgid ""
"Watcher can now run specific actions in parallel improving the performances "
"dramatically when executing an action plan."
msgstr ""
"Watcher can now run specific actions in parallel improving the performance "
"dramatically when executing an action plan."
msgid "Watcher database can now be upgraded thanks to Alembic."
msgstr "Watcher database can now be upgraded thanks to Alembic."
msgid ""
"Watcher got an ability to calculate multiple global efficacy indicators "
"during audit's execution. Now global efficacy can be calculated for many "
"resource types (like volumes, instances, network) if strategy supports "
"efficacy indicators."
msgstr ""
"Watcher gained the ability to calculate multiple global efficacy indicators "
"during an audit's execution. Global efficacy can now be calculated for many "
"resource types (such as volumes, instances and networks) if the strategy "
"supports efficacy indicators."
msgid ""
"Watcher supports multiple metrics backend and relies on Ceilometer and "
"Monasca."
msgstr ""
"Watcher supports multiple metrics backends, relying on Ceilometer and "
"Monasca."
msgid "Welcome to watcher's Release Notes documentation!"
msgstr "Welcome to watcher's Release Notes documentation!"
msgid ""
"all Watcher objects have been refactored to support OVO (oslo."
"versionedobjects) which was a prerequisite step in order to implement "
"versioned notifications."
msgstr ""
"all Watcher objects have been refactored to support OVO (oslo."
"versionedobjects), which was a prerequisite step for implementing "
"versioned notifications."

View File

@@ -1,6 +0,0 @@
===================================
Queens Series Release Notes
===================================
.. release-notes::
:branch: stable/queens

View File

@@ -2,48 +2,48 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
apscheduler>=3.5.1 # MIT License
enum34>=1.1.6;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.21 # BSD
keystoneauth1>=3.4.0 # Apache-2.0
apscheduler>=3.0.5 # MIT License
enum34>=1.0.4;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch!=1.20,>=1.16 # BSD
keystoneauth1>=3.3.0 # Apache-2.0
jsonschema<3.0.0,>=2.6.0 # MIT
keystonemiddleware>=4.21.0 # Apache-2.0
lxml>=4.1.1 # BSD
croniter>=0.3.20 # MIT License
oslo.concurrency>=3.26.0 # Apache-2.0
oslo.cache>=1.29.0 # Apache-2.0
oslo.config>=5.2.0 # Apache-2.0
oslo.context>=2.20.0 # Apache-2.0
oslo.db>=4.35.0 # Apache-2.0
oslo.i18n>=3.20.0 # Apache-2.0
oslo.log>=3.37.0 # Apache-2.0
oslo.messaging>=5.36.0 # Apache-2.0
oslo.policy>=1.34.0 # Apache-2.0
oslo.reports>=1.27.0 # Apache-2.0
oslo.serialization>=2.25.0 # Apache-2.0
oslo.service>=1.30.0 # Apache-2.0
oslo.utils>=3.36.0 # Apache-2.0
oslo.versionedobjects>=1.32.0 # Apache-2.0
PasteDeploy>=1.5.2 # MIT
pbr>=3.1.1 # Apache-2.0
pecan>=1.2.1 # BSD
PrettyTable<0.8,>=0.7.2 # BSD
voluptuous>=0.11.1 # BSD License
gnocchiclient>=7.0.1 # Apache-2.0
python-ceilometerclient>=2.9.0 # Apache-2.0
python-cinderclient>=3.5.0 # Apache-2.0
python-glanceclient>=2.9.1 # Apache-2.0
python-keystoneclient>=3.15.0 # Apache-2.0
python-monascaclient>=1.10.0 # Apache-2.0
python-neutronclient>=6.7.0 # Apache-2.0
python-novaclient>=10.1.0 # Apache-2.0
python-openstackclient>=3.14.0 # Apache-2.0
python-ironicclient>=2.3.0 # Apache-2.0
six>=1.11.0 # MIT
SQLAlchemy>=1.2.5 # MIT
stevedore>=1.28.0 # Apache-2.0
taskflow>=3.1.0 # Apache-2.0
WebOb>=1.7.4 # MIT
WSME>=0.9.2 # MIT
networkx>=1.11 # BSD
keystonemiddleware>=4.17.0 # Apache-2.0
lxml!=3.7.0,>=3.4.1 # BSD
croniter>=0.3.4 # MIT License
oslo.concurrency>=3.25.0 # Apache-2.0
oslo.cache>=1.26.0 # Apache-2.0
oslo.config>=5.1.0 # Apache-2.0
oslo.context>=2.19.2 # Apache-2.0
oslo.db>=4.27.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.policy>=1.30.0 # Apache-2.0
oslo.reports>=1.18.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service!=1.28.1,>=1.24.0 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
oslo.versionedobjects>=1.31.2 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
PrettyTable<0.8,>=0.7.1 # BSD
voluptuous>=0.8.9 # BSD License
gnocchiclient>=3.3.1 # Apache-2.0
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient>=3.3.0 # Apache-2.0
python-glanceclient>=2.8.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.7.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
python-openstackclient>=3.12.0 # Apache-2.0
python-ironicclient>=2.2.0 # Apache-2.0
six>=1.10.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0
taskflow>=2.16.0 # Apache-2.0
WebOb>=1.7.1 # MIT
WSME>=0.8.0 # MIT
networkx<2.0,>=1.10 # BSD

View File

@@ -1,16 +0,0 @@
- name: Set up the list of hostnames and addresses
set_fact:
hostname_addresses: >
{% set hosts = {} -%}
{% for host, vars in hostvars.items() -%}
{% set _ = hosts.update({vars['ansible_hostname']: vars['nodepool']['private_ipv4']}) -%}
{% endfor -%}
{{- hosts -}}
- name: Add inventory hostnames to the hosts file
become: yes
lineinfile:
dest: /etc/hosts
state: present
insertafter: EOF
line: "{{ item.value }} {{ item.key }}"
with_dict: "{{ hostname_addresses }}"
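The removed playbook above gathers each node's private IPv4 keyed by its hostname and appends `address hostname` lines to /etc/hosts. A rough Python equivalent of that Jinja templating, using a hypothetical hostvars-style mapping:

```python
# Hypothetical stand-in for Ansible's hostvars structure.
hostvars = {
    'node0': {'ansible_hostname': 'primary',
              'nodepool': {'private_ipv4': '10.0.0.1'}},
    'node1': {'ansible_hostname': 'worker-0',
              'nodepool': {'private_ipv4': '10.0.0.2'}},
}

# Build hostname -> address, then render the lines the lineinfile
# task would append to /etc/hosts.
addresses = {v['ansible_hostname']: v['nodepool']['private_ipv4']
             for v in hostvars.values()}
hosts_lines = ['{} {}'.format(addr, name)
               for name, addr in sorted(addresses.items())]
print(hosts_lines)  # ['10.0.0.1 primary', '10.0.0.2 worker-0']
```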

View File

@@ -58,7 +58,6 @@ watcher_goals =
noisy_neighbor = watcher.decision_engine.goal.goals:NoisyNeighborOptimization
saving_energy = watcher.decision_engine.goal.goals:SavingEnergy
hardware_maintenance = watcher.decision_engine.goal.goals:HardwareMaintenance
cluster_maintaining = watcher.decision_engine.goal.goals:ClusterMaintaining
watcher_scoring_engines =
dummy_scorer = watcher.decision_engine.scoring.dummy_scorer:DummyScorer
@@ -81,7 +80,6 @@ watcher_strategies =
noisy_neighbor = watcher.decision_engine.strategy.strategies.noisy_neighbor:NoisyNeighbor
storage_capacity_balance = watcher.decision_engine.strategy.strategies.storage_capacity_balance:StorageCapacityBalance
zone_migration = watcher.decision_engine.strategy.strategies.zone_migration:ZoneMigration
host_maintenance = watcher.decision_engine.strategy.strategies.host_maintenance:HostMaintenance
watcher_actions =
migrate = watcher.applier.actions.migration:Migrate

View File

@@ -2,27 +2,25 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
coverage!=4.4 # Apache-2.0
doc8 # Apache-2.0
freezegun # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
freezegun>=0.3.6 # Apache-2.0
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
mock # BSD
oslotest # Apache-2.0
os-testr # Apache-2.0
testrepository # Apache-2.0/BSD
testscenarios # Apache-2.0/BSD
testtools # MIT
mock>=2.0.0 # BSD
oslotest>=3.2.0 # Apache-2.0
os-testr>=1.0.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=2.2.0 # MIT
# Doc requirements
openstackdocstheme # Apache-2.0
sphinx!=1.6.6,!=1.6.7 # BSD
sphinxcontrib-pecanwsme # Apache-2.0
openstackdocstheme>=1.18.1 # Apache-2.0
sphinx!=1.6.6,>=1.6.2 # BSD
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
# api-ref
os-api-ref # Apache-2.0
# releasenotes
reno # Apache-2.0
reno>=2.5.0 # Apache-2.0
# bandit
bandit>=1.1.0 # Apache-2.0

View File

@@ -7,7 +7,7 @@ skipsdist = True
usedevelop = True
whitelist_externals = find
rm
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
@@ -76,10 +76,3 @@ commands = sphinx-build -a -W -E -d releasenotes/build/doctrees -b html releasen
[testenv:bandit]
deps = -r{toxinidir}/test-requirements.txt
commands = bandit -r watcher -x tests -n5 -ll -s B320
[testenv:lower-constraints]
basepython = python3
deps =
-c{toxinidir}/lower-constraints.txt
-r{toxinidir}/test-requirements.txt
-r{toxinidir}/requirements.txt

View File

@@ -205,7 +205,7 @@ class ActionCollection(collection.Collection):
collection = ActionCollection()
collection.actions = [Action.convert_with_links(p, expand)
for p in actions]
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
@classmethod
@@ -232,10 +232,6 @@ class ActionsController(rest.RestController):
sort_key, sort_dir, expand=False,
resource_url=None,
action_plan_uuid=None, audit_uuid=None):
additional_fields = ['action_plan_uuid']
api_utils.validate_sort_key(sort_key, list(objects.Action.fields) +
additional_fields)
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
@@ -251,10 +247,7 @@ class ActionsController(rest.RestController):
if audit_uuid:
filters['audit_uuid'] = audit_uuid
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
sort_db_key = sort_key
actions = objects.Action.list(pecan.request.context,
limit,
@@ -262,15 +255,11 @@ class ActionsController(rest.RestController):
sort_dir=sort_dir,
filters=filters)
actions_collection = ActionCollection.convert_with_links(
actions, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(actions_collection.actions,
sort_key, sort_dir)
return actions_collection
return ActionCollection.convert_with_links(actions, limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
@wsme_pecan.wsexpose(ActionCollection, types.uuid, int,
wtypes.text, wtypes.text, types.uuid,

View File

@@ -305,6 +305,17 @@ class ActionPlanCollection(collection.Collection):
ap_collection = ActionPlanCollection()
ap_collection.action_plans = [ActionPlan.convert_with_links(
p, expand) for p in rpc_action_plans]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'audit_uuid':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
ap_collection.action_plans = sorted(
ap_collection.action_plans,
key=lambda action_plan: action_plan.audit_uuid,
reverse=reverse)
ap_collection.next = ap_collection.get_next(limit, url=url, **kwargs)
return ap_collection
@@ -320,25 +331,20 @@ class ActionPlansController(rest.RestController):
def __init__(self):
super(ActionPlansController, self).__init__()
self.applier_client = rpcapi.ApplierAPI()
from_actionsPlans = False
"""A flag to indicate if the requests to this controller are coming
from the top-level resource ActionPlan."""
_custom_actions = {
'start': ['POST'],
'detail': ['GET']
'detail': ['GET'],
}
def _get_action_plans_collection(self, marker, limit,
sort_key, sort_dir, expand=False,
resource_url=None, audit_uuid=None,
strategy=None):
additional_fields = ['audit_uuid', 'strategy_uuid', 'strategy_name']
api_utils.validate_sort_key(
sort_key, list(objects.ActionPlan.fields) + additional_fields)
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
@@ -357,10 +363,10 @@ class ActionPlansController(rest.RestController):
else:
filters['strategy_name'] = strategy
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
if sort_key == 'audit_uuid':
sort_db_key = None
else:
sort_db_key = sort_key
action_plans = objects.ActionPlan.list(
pecan.request.context,
@@ -368,15 +374,12 @@ class ActionPlansController(rest.RestController):
marker_obj, sort_key=sort_db_key,
sort_dir=sort_dir, filters=filters)
action_plans_collection = ActionPlanCollection.convert_with_links(
action_plans, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(action_plans_collection.action_plans,
sort_key, sort_dir)
return action_plans_collection
return ActionPlanCollection.convert_with_links(
action_plans, limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
@wsme_pecan.wsexpose(ActionPlanCollection, types.uuid, int, wtypes.text,
wtypes.text, types.uuid, wtypes.text)
@@ -537,7 +540,7 @@ class ActionPlansController(rest.RestController):
if action_plan_to_update[field] != patch_val:
action_plan_to_update[field] = patch_val
if (field == 'state' and
if (field == 'state'and
patch_val == objects.action_plan.State.PENDING):
launch_action_plan = True
@@ -554,39 +557,11 @@ class ActionPlansController(rest.RestController):
a.save()
if launch_action_plan:
self.applier_client.launch_action_plan(pecan.request.context,
action_plan.uuid)
applier_client = rpcapi.ApplierAPI()
applier_client.launch_action_plan(pecan.request.context,
action_plan.uuid)
action_plan_to_update = objects.ActionPlan.get_by_uuid(
pecan.request.context,
action_plan_uuid)
return ActionPlan.convert_with_links(action_plan_to_update)
@wsme_pecan.wsexpose(ActionPlan, types.uuid)
def start(self, action_plan_uuid, **kwargs):
"""Start an action_plan
:param action_plan_uuid: UUID of an action_plan.
"""
action_plan_to_start = api_utils.get_resource(
'ActionPlan', action_plan_uuid, eager=True)
context = pecan.request.context
policy.enforce(context, 'action_plan:start', action_plan_to_start,
action='action_plan:start')
if action_plan_to_start['state'] != \
objects.action_plan.State.RECOMMENDED:
raise Exception.StartError(
state=action_plan_to_start.state)
action_plan_to_start['state'] = objects.action_plan.State.PENDING
action_plan_to_start.save()
self.applier_client.launch_action_plan(pecan.request.context,
action_plan_uuid)
action_plan_to_start = objects.ActionPlan.get_by_uuid(
pecan.request.context, action_plan_uuid)
return ActionPlan.convert_with_links(action_plan_to_start)

View File

@@ -389,6 +389,17 @@ class AuditCollection(collection.Collection):
collection = AuditCollection()
collection.audits = [Audit.convert_with_links(p, expand)
for p in rpc_audits]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'goal_uuid':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
collection.audits = sorted(
collection.audits,
key=lambda audit: audit.goal_uuid,
reverse=reverse)
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
@@ -403,7 +414,6 @@ class AuditsController(rest.RestController):
"""REST controller for Audits."""
def __init__(self):
super(AuditsController, self).__init__()
self.dc_client = rpcapi.DecisionEngineAPI()
from_audits = False
"""A flag to indicate if the requests to this controller are coming
@@ -417,14 +427,8 @@ class AuditsController(rest.RestController):
sort_key, sort_dir, expand=False,
resource_url=None, goal=None,
strategy=None):
additional_fields = ["goal_uuid", "goal_name", "strategy_uuid",
"strategy_name"]
api_utils.validate_sort_key(
sort_key, list(objects.Audit.fields) + additional_fields)
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
marker_obj = None
if marker:
marker_obj = objects.Audit.get_by_uuid(pecan.request.context,
@@ -445,25 +449,23 @@ class AuditsController(rest.RestController):
# TODO(michaelgugino): add method to get goal by name.
filters['strategy_name'] = strategy
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
if sort_key == 'goal_uuid':
sort_db_key = 'goal_id'
elif sort_key == 'strategy_uuid':
sort_db_key = 'strategy_id'
else:
sort_db_key = sort_key
audits = objects.Audit.list(pecan.request.context,
limit,
marker_obj, sort_key=sort_db_key,
sort_dir=sort_dir, filters=filters)
audits_collection = AuditCollection.convert_with_links(
audits, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(audits_collection.audits, sort_key,
sort_dir)
return audits_collection
return AuditCollection.convert_with_links(audits, limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
@wsme_pecan.wsexpose(AuditCollection, types.uuid, int, wtypes.text,
wtypes.text, wtypes.text, wtypes.text, int)
@@ -576,7 +578,8 @@ class AuditsController(rest.RestController):
# trigger decision-engine to run the audit
if new_audit.audit_type == objects.audit.AuditType.ONESHOT.value:
self.dc_client.trigger_audit(context, new_audit.uuid)
dc_client = rpcapi.DecisionEngineAPI()
dc_client.trigger_audit(context, new_audit.uuid)
return Audit.convert_with_links(new_audit)
@@ -639,8 +642,8 @@ class AuditsController(rest.RestController):
context = pecan.request.context
audit_to_delete = api_utils.get_resource(
'Audit', audit, eager=True)
policy.enforce(context, 'audit:delete', audit_to_delete,
action='audit:delete')
policy.enforce(context, 'audit:update', audit_to_delete,
action='audit:update')
initial_state = audit_to_delete.state
new_state = objects.audit.State.DELETED

View File

@@ -474,13 +474,9 @@ class AuditTemplatesController(rest.RestController):
def _get_audit_templates_collection(self, filters, marker, limit,
sort_key, sort_dir, expand=False,
resource_url=None):
additional_fields = ["goal_uuid", "goal_name", "strategy_uuid",
"strategy_name"]
api_utils.validate_sort_key(
sort_key, list(objects.AuditTemplate.fields) + additional_fields)
api_utils.validate_search_filters(
filters, list(objects.AuditTemplate.fields) + additional_fields)
filters, list(objects.audit_template.AuditTemplate.fields) +
["goal_uuid", "goal_name", "strategy_uuid", "strategy_name"])
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
@@ -490,26 +486,19 @@ class AuditTemplatesController(rest.RestController):
pecan.request.context,
marker)
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
audit_templates = objects.AuditTemplate.list(
pecan.request.context, filters, limit, marker_obj,
sort_key=sort_db_key, sort_dir=sort_dir)
pecan.request.context,
filters,
limit,
marker_obj, sort_key=sort_key,
sort_dir=sort_dir)
audit_templates_collection = \
AuditTemplateCollection.convert_with_links(
audit_templates, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(
audit_templates_collection.audit_templates, sort_key,
sort_dir)
return audit_templates_collection
return AuditTemplateCollection.convert_with_links(audit_templates,
limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
@wsme_pecan.wsexpose(AuditTemplateCollection, wtypes.text, wtypes.text,
types.uuid, int, wtypes.text, wtypes.text)
@@ -688,8 +677,8 @@ class AuditTemplatesController(rest.RestController):
context = pecan.request.context
audit_template_to_delete = api_utils.get_resource('AuditTemplate',
audit_template)
policy.enforce(context, 'audit_template:delete',
policy.enforce(context, 'audit_template:update',
audit_template_to_delete,
action='audit_template:delete')
action='audit_template:update')
audit_template_to_delete.soft_delete()

View File

@@ -130,6 +130,17 @@ class GoalCollection(collection.Collection):
goal_collection = GoalCollection()
goal_collection.goals = [
Goal.convert_with_links(g, expand) for g in goals]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'strategy':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
goal_collection.goals = sorted(
goal_collection.goals,
key=lambda goal: goal.uuid,
reverse=reverse)
goal_collection.next = goal_collection.get_next(
limit, url=url, **kwargs)
return goal_collection
@@ -156,19 +167,17 @@ class GoalsController(rest.RestController):
def _get_goals_collection(self, marker, limit, sort_key, sort_dir,
expand=False, resource_url=None):
api_utils.validate_sort_key(
sort_key, list(objects.Goal.fields))
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Goal.fields
else None)
marker_obj = None
if marker:
marker_obj = objects.Goal.get_by_uuid(
pecan.request.context, marker)
sort_db_key = (sort_key if sort_key in objects.Goal.fields
else None)
goals = objects.Goal.list(pecan.request.context, limit, marker_obj,
sort_key=sort_db_key, sort_dir=sort_dir)

View File

@@ -123,6 +123,17 @@ class ScoringEngineCollection(collection.Collection):
collection = ScoringEngineCollection()
collection.scoring_engines = [ScoringEngine.convert_with_links(
se, expand) for se in scoring_engines]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'name':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
collection.goals = sorted(
collection.scoring_engines,
key=lambda se: se.name,
reverse=reverse)
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
@@ -149,8 +160,7 @@ class ScoringEngineController(rest.RestController):
def _get_scoring_engines_collection(self, marker, limit,
sort_key, sort_dir, expand=False,
resource_url=None):
api_utils.validate_sort_key(
sort_key, list(objects.ScoringEngine.fields))
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
@@ -161,8 +171,7 @@ class ScoringEngineController(rest.RestController):
filters = {}
sort_db_key = (sort_key if sort_key in objects.ScoringEngine.fields
else None)
sort_db_key = sort_key
scoring_engines = objects.ScoringEngine.list(
context=pecan.request.context,

View File

@@ -154,6 +154,17 @@ class ServiceCollection(collection.Collection):
service_collection = ServiceCollection()
service_collection.services = [
Service.convert_with_links(g, expand) for g in services]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'service':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
service_collection.services = sorted(
service_collection.services,
key=lambda service: service.id,
reverse=reverse)
service_collection.next = service_collection.get_next(
limit, url=url, marker_field='id', **kwargs)
return service_collection
@@ -180,19 +191,17 @@ class ServicesController(rest.RestController):
def _get_services_collection(self, marker, limit, sort_key, sort_dir,
expand=False, resource_url=None):
api_utils.validate_sort_key(
sort_key, list(objects.Service.fields))
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Service.fields
else None)
marker_obj = None
if marker:
marker_obj = objects.Service.get(
pecan.request.context, marker)
sort_db_key = (sort_key if sort_key in objects.Service.fields
else None)
services = objects.Service.list(
pecan.request.context, limit, marker_obj,
sort_key=sort_db_key, sort_dir=sort_dir)

View File

@@ -173,6 +173,17 @@ class StrategyCollection(collection.Collection):
strategy_collection = StrategyCollection()
strategy_collection.strategies = [
Strategy.convert_with_links(g, expand) for g in strategies]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'strategy':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
strategy_collection.strategies = sorted(
strategy_collection.strategies,
key=lambda strategy: strategy.uuid,
reverse=reverse)
strategy_collection.next = strategy_collection.get_next(
limit, url=url, **kwargs)
return strategy_collection
@@ -200,39 +211,28 @@ class StrategiesController(rest.RestController):
def _get_strategies_collection(self, filters, marker, limit, sort_key,
sort_dir, expand=False, resource_url=None):
additional_fields = ["goal_uuid", "goal_name"]
api_utils.validate_sort_key(
sort_key, list(objects.Strategy.fields) + additional_fields)
api_utils.validate_search_filters(
filters, list(objects.Strategy.fields) + additional_fields)
filters, list(objects.strategy.Strategy.fields) +
["goal_uuid", "goal_name"])
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Strategy.fields
else None)
marker_obj = None
if marker:
marker_obj = objects.Strategy.get_by_uuid(
pecan.request.context, marker)
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
strategies = objects.Strategy.list(
pecan.request.context, limit, marker_obj, filters=filters,
sort_key=sort_db_key, sort_dir=sort_dir)
strategies_collection = StrategyCollection.convert_with_links(
return StrategyCollection.convert_with_links(
strategies, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(strategies_collection.strategies,
sort_key, sort_dir)
return strategies_collection
@wsme_pecan.wsexpose(StrategyCollection, wtypes.text, wtypes.text,
int, wtypes.text, wtypes.text)
def get_all(self, goal=None, marker=None, limit=None,

View File

@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
from operator import attrgetter
import jsonpatch
from oslo_config import cfg
from oslo_utils import reflection
@@ -56,13 +54,6 @@ def validate_sort_dir(sort_dir):
"'asc' or 'desc'") % sort_dir)
def validate_sort_key(sort_key, allowed_fields):
# Very lightweight validation for now
if sort_key not in allowed_fields:
raise wsme.exc.ClientSideError(
_("Invalid sort key: %s") % sort_key)
def validate_search_filters(filters, allowed_fields):
# Very lightweight validation for now
# todo: improve this (e.g. https://www.parse.com/docs/rest/guide/#queries)
@@ -72,19 +63,6 @@ def validate_search_filters(filters, allowed_fields):
_("Invalid filter: %s") % filter_name)
def check_need_api_sort(sort_key, additional_fields):
return sort_key in additional_fields
def make_api_sort(sorting_list, sort_key, sort_dir):
# First sort by uuid field, then sort by sort_key
# sort() ensures stable sorting, so we can
# produce a lexicographical sort
reverse_direction = (sort_dir == 'desc')
sorting_list.sort(key=attrgetter('uuid'), reverse=reverse_direction)
sorting_list.sort(key=attrgetter(sort_key), reverse=reverse_direction)
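The removed `make_api_sort` helper leans on the fact that Python's `list.sort()` is stable: sorting by uuid first and by the requested key second gives a deterministic order whenever values tie on the key. A standalone sketch with hypothetical objects, not real Watcher resources:

```python
from operator import attrgetter
from types import SimpleNamespace

# Hypothetical resources with a uuid and an API-only sort field.
items = [
    SimpleNamespace(uuid='b', goal_name='dummy'),
    SimpleNamespace(uuid='a', goal_name='dummy'),
    SimpleNamespace(uuid='c', goal_name='airflow'),
]

reverse = False  # sort_dir == 'asc'
# Pass 1: order by uuid; pass 2: order by the requested key.
# list.sort() is stable, so items tying on goal_name keep uuid order.
items.sort(key=attrgetter('uuid'), reverse=reverse)
items.sort(key=attrgetter('goal_name'), reverse=reverse)
print([(i.goal_name, i.uuid) for i in items])
# [('airflow', 'c'), ('dummy', 'a'), ('dummy', 'b')]
```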
def apply_jsonpatch(doc, patch):
for p in patch:
if p['op'] == 'add' and p['path'].count('/') == 1:

View File

@@ -50,12 +50,6 @@ class Migrate(base.BaseAction):
source and the destination compute hostname (list of available compute
hosts is returned by this command: ``nova service-list --binary
nova-compute``).
.. note::
Nova API version must be 2.56 or above if `destination_node` parameter
is given.
"""
# input parameters constants

View File

@@ -75,7 +75,7 @@ class CinderHelper(object):
search_opts={'all_tenants': True})
def get_volume_type_by_backendname(self, backendname):
"""Return a list of volume type"""
"""Retrun a list of volume type"""
volume_type_list = self.get_volume_type_list()
volume_type = [volume_type.name for volume_type in volume_type_list

View File

@@ -83,8 +83,10 @@ class OpenStackClients(object):
novaclient_version = self._get_client_option('nova', 'api_version')
nova_endpoint_type = self._get_client_option('nova', 'endpoint_type')
nova_region_name = self._get_client_option('nova', 'region_name')
self._nova = nvclient.Client(novaclient_version,
endpoint_type=nova_endpoint_type,
region_name=nova_region_name,
session=self.session)
return self._nova
@@ -96,8 +98,10 @@ class OpenStackClients(object):
glanceclient_version = self._get_client_option('glance', 'api_version')
glance_endpoint_type = self._get_client_option('glance',
'endpoint_type')
glance_region_name = self._get_client_option('glance', 'region_name')
self._glance = glclient.Client(glanceclient_version,
interface=glance_endpoint_type,
region_name=glance_region_name,
session=self.session)
return self._glance
@@ -110,8 +114,11 @@ class OpenStackClients(object):
'api_version')
gnocchiclient_interface = self._get_client_option('gnocchi',
'endpoint_type')
gnocchiclient_region_name = self._get_client_option('gnocchi',
'region_name')
adapter_options = {
"interface": gnocchiclient_interface
"interface": gnocchiclient_interface,
"region_name": gnocchiclient_region_name
}
self._gnocchi = gnclient.Client(gnocchiclient_version,
@@ -127,8 +134,10 @@ class OpenStackClients(object):
cinderclient_version = self._get_client_option('cinder', 'api_version')
cinder_endpoint_type = self._get_client_option('cinder',
'endpoint_type')
cinder_region_name = self._get_client_option('cinder', 'region_name')
self._cinder = ciclient.Client(cinderclient_version,
endpoint_type=cinder_endpoint_type,
region_name=cinder_region_name,
session=self.session)
return self._cinder
@@ -141,9 +150,12 @@ class OpenStackClients(object):
'api_version')
ceilometer_endpoint_type = self._get_client_option('ceilometer',
'endpoint_type')
ceilometer_region_name = self._get_client_option('ceilometer',
'region_name')
self._ceilometer = ceclient.get_client(
ceilometerclient_version,
endpoint_type=ceilometer_endpoint_type,
region_name=ceilometer_region_name,
session=self.session)
return self._ceilometer
@@ -156,6 +168,8 @@ class OpenStackClients(object):
'monasca', 'api_version')
monascaclient_interface = self._get_client_option(
'monasca', 'interface')
monascaclient_region = self._get_client_option(
'monasca', 'region_name')
token = self.session.get_token()
watcher_clients_auth_config = CONF.get(_CLIENTS_AUTH_GROUP)
service_type = 'monitoring'
@@ -172,7 +186,8 @@ class OpenStackClients(object):
'password': watcher_clients_auth_config.password,
}
endpoint = self.session.get_endpoint(service_type=service_type,
interface=monascaclient_interface)
interface=monascaclient_interface,
region_name=monascaclient_region)
self._monasca = monclient.Client(
monascaclient_version, endpoint, **monasca_kwargs)
@@ -188,9 +203,11 @@ class OpenStackClients(object):
'api_version')
neutron_endpoint_type = self._get_client_option('neutron',
'endpoint_type')
neutron_region_name = self._get_client_option('neutron', 'region_name')
self._neutron = netclient.Client(neutronclient_version,
endpoint_type=neutron_endpoint_type,
region_name=neutron_region_name,
session=self.session)
self._neutron.format = 'json'
return self._neutron
@@ -202,7 +219,9 @@ class OpenStackClients(object):
ironicclient_version = self._get_client_option('ironic', 'api_version')
endpoint_type = self._get_client_option('ironic', 'endpoint_type')
ironic_region_name = self._get_client_option('ironic', 'region_name')
self._ironic = irclient.get_client(ironicclient_version,
os_endpoint_type=endpoint_type,
region_name=ironic_region_name,
session=self.session)
return self._ironic
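The hunks above thread `region_name` (alongside `endpoint_type`) into each lazily constructed client. The build-once-and-cache pattern itself can be sketched as follows; `make_client` and the option layout are hypothetical stand-ins, not the real novaclient/glanceclient signatures:

```python
class LazyClients:
    """Build each service client once, passing config options through."""

    def __init__(self, options, make_client):
        # options, e.g. {'nova': {'endpoint_type': 'public',
        #                         'region_name': 'RegionOne'}}
        self._options = options
        self._make_client = make_client
        self._cache = {}

    def get(self, service):
        # Construct on first use with endpoint_type and region_name
        # pulled from configuration, then reuse the cached client.
        if service not in self._cache:
            opts = self._options[service]
            self._cache[service] = self._make_client(
                service,
                endpoint_type=opts['endpoint_type'],
                region_name=opts['region_name'])
        return self._cache[service]
```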

View File

@@ -62,7 +62,6 @@ class RequestContext(context.RequestContext):
# safely ignore this as we don't use it.
kwargs.pop('user_identity', None)
kwargs.pop('global_request_id', None)
kwargs.pop('project', None)
if kwargs:
LOG.warning('Arguments dropped when creating context: %s',
str(kwargs))

View File

@@ -336,10 +336,6 @@ class DeleteError(Invalid):
msg_fmt = _("Couldn't delete when state is '%(state)s'.")
class StartError(Invalid):
msg_fmt = _("Couldn't start when state is '%(state)s'.")
# decision engine
class WorkflowExecutionException(WatcherException):
@@ -516,7 +512,3 @@ class NegativeLimitError(WatcherException):
class NotificationPayloadError(WatcherException):
_msg_fmt = _("Payload not populated when trying to send notification "
"\"%(class_name)s\"")
class InvalidPoolAttributeValue(Invalid):
msg_fmt = _("The %(name)s pool %(attribute)s is not integer")

View File

@@ -17,9 +17,9 @@
# limitations under the License.
#
import random
import time
from novaclient import api_versions
from oslo_log import log
import cinderclient.exceptions as ciexceptions
@@ -29,12 +29,9 @@ import novaclient.exceptions as nvexceptions
from watcher.common import clients
from watcher.common import exception
from watcher.common import utils
from watcher import conf
LOG = log.getLogger(__name__)
CONF = conf.CONF
class NovaHelper(object):
@@ -75,7 +72,8 @@ class NovaHelper(object):
raise exception.ComputeNodeNotFound(name=node_hostname)
def get_instance_list(self):
return self.nova.servers.list(search_opts={'all_tenants': True})
return self.nova.servers.list(search_opts={'all_tenants': True},
limit=-1)
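The change above passes `limit=-1` so the server list is not capped at Nova's default page size. Its effect can be sketched against a hypothetical marker-paginated fetcher (`fetch_page` is a stand-in, not a real novaclient call):

```python
def list_all(fetch_page, limit=-1):
    """Collect items from a marker-paginated API.

    limit=-1 is a sentinel meaning "no cap": keep following markers
    until the API stops returning one.
    """
    items, marker = [], None
    while True:
        page, marker = fetch_page(marker)
        items.extend(page)
        if marker is None:
            return items
        if limit >= 0 and len(items) >= limit:
            return items[:limit]
```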
def get_flavor_list(self):
return self.nova.flavors.list(**{'is_public': None})
@@ -133,24 +131,31 @@ class NovaHelper(object):
return volume.status == status
def watcher_non_live_migrate_instance(self, instance_id, dest_hostname,
keep_original_image_name=True,
retry=120):
"""This method migrates a given instance
This method uses the Nova built-in migrate()
action to do a migration of a given instance.
For migrating to a given dest_hostname, the Nova API
version must be 2.56 or higher.
using an image of this instance and creating a new instance
from this image. It saves some configuration information
about the original instance: security group, list of networks,
list of attached volumes, floating IP, ...
in order to apply the same settings to the new instance.
At the end of the process the original instance is deleted.
It returns True if the migration was successful,
False otherwise.
if destination hostname not given, this method calls nova api
to migrate the instance.
:param instance_id: the unique id of the instance to migrate.
:param dest_hostname: the name of the destination compute node, if
destination_node is None, nova scheduler choose
the destination host
:param keep_original_image_name: flag indicating whether the
image name from which the original instance was built must be
used as the name of the intermediate image used for migration.
If this flag is False, a temporary image name is built
"""
new_image_name = ""
LOG.debug(
"Trying a cold migrate of instance '%s' ", instance_id)
"Trying a non-live migrate of instance '%s' ", instance_id)
# Looking for the instance to migrate
instance = self.find_instance(instance_id)
@@ -158,43 +163,215 @@ class NovaHelper(object):
LOG.debug("Instance %s not found!", instance_id)
return False
else:
# NOTE: If destination node is None call Nova API to migrate
# instance
host_name = getattr(instance, "OS-EXT-SRV-ATTR:host")
LOG.debug(
"Instance %(instance)s found on host '%(host)s'.",
{'instance': instance_id, 'host': host_name})
previous_status = getattr(instance, 'status')
if dest_hostname is None:
previous_status = getattr(instance, 'status')
if (dest_hostname and
not self._check_nova_api_version(self.nova, "2.56")):
LOG.error("For migrating to a given dest_hostname, "
"Nova API version must be 2.56 or higher")
return False
instance.migrate()
instance = self.nova.servers.get(instance_id)
while (getattr(instance, 'status') not in
["VERIFY_RESIZE", "ERROR"] and retry):
instance = self.nova.servers.get(instance.id)
time.sleep(2)
retry -= 1
new_hostname = getattr(instance, 'OS-EXT-SRV-ATTR:host')
instance.migrate(host=dest_hostname)
instance = self.nova.servers.get(instance_id)
while (getattr(instance, 'status') not in
["VERIFY_RESIZE", "ERROR"] and retry):
instance = self.nova.servers.get(instance.id)
time.sleep(2)
retry -= 1
new_hostname = getattr(instance, 'OS-EXT-SRV-ATTR:host')
if (host_name != new_hostname and
instance.status == 'VERIFY_RESIZE'):
if not self.confirm_resize(instance, previous_status):
if (host_name != new_hostname and
instance.status == 'VERIFY_RESIZE'):
if not self.confirm_resize(instance, previous_status):
return False
LOG.debug(
"cold migration succeeded : "
"instance %s is now on host '%s'.", (
instance_id, new_hostname))
return True
else:
LOG.debug(
"cold migration for instance %s failed", instance_id)
return False
LOG.debug(
"cold migration succeeded: "
"instance %(instance)s is now on host '%(host)s'.",
{'instance': instance_id, 'host': new_hostname})
return True
if not keep_original_image_name:
# randint gives you an integral value
irand = random.randint(0, 1000)
# Building the temporary image name
# which will be used for the migration
new_image_name = "tmp-migrate-%s-%s" % (instance_id, irand)
else:
# Get the image name of the current instance.
# We'll use the same name for the new instance.
imagedict = getattr(instance, "image")
image_id = imagedict["id"]
image = self.glance.images.get(image_id)
new_image_name = getattr(image, "name")
instance_name = getattr(instance, "name")
flavor_name = instance.flavor.get('original_name')
keypair_name = getattr(instance, "key_name")
addresses = getattr(instance, "addresses")
floating_ip = ""
network_names_list = []
for network_name, network_conf_obj in addresses.items():
LOG.debug(
"cold migration for instance %s failed", instance_id)
"Extracting network configuration for network '%s'",
network_name)
network_names_list.append(network_name)
for net_conf_item in network_conf_obj:
if net_conf_item['OS-EXT-IPS:type'] == "floating":
floating_ip = net_conf_item['addr']
break
sec_groups_list = getattr(instance, "security_groups")
sec_groups = []
for sec_group_dict in sec_groups_list:
sec_groups.append(sec_group_dict['name'])
# Stopping the old instance properly so
# that no new data is sent to it and to its attached volumes
stopped_ok = self.stop_instance(instance_id)
if not stopped_ok:
LOG.debug("Could not stop instance: %s", instance_id)
return False
# Building the temporary image which will be used
# to re-build the same instance on another target host
image_uuid = self.create_image_from_instance(instance_id,
new_image_name)
if not image_uuid:
LOG.debug(
"Could not build temporary image of instance: %s",
instance_id)
return False
#
# We need to get the list of attached volumes and detach
# them from the instance in order to attach them later
# to the new instance
#
blocks = []
# Looks like this:
# os-extended-volumes:volumes_attached |
# [{u'id': u'c5c3245f-dd59-4d4f-8d3a-89d80135859a'}]
attached_volumes = getattr(instance,
"os-extended-volumes:volumes_attached")
for attached_volume in attached_volumes:
volume_id = attached_volume['id']
try:
volume = self.cinder.volumes.get(volume_id)
attachments_list = getattr(volume, "attachments")
device_name = attachments_list[0]['device']
# When a volume is attached to an instance
# it contains the following property :
# attachments = [{u'device': u'/dev/vdb',
# u'server_id': u'742cc508-a2f2-4769-a794-bcdad777e814',
# u'id': u'f6d62785-04b8-400d-9626-88640610f65e',
# u'host_name': None, u'volume_id':
# u'f6d62785-04b8-400d-9626-88640610f65e'}]
# boot_index indicates a number
# designating the boot order of the device.
# Use -1 for the boot volume,
# choose 0 for an attached volume.
block_device_mapping_v2_item = {"device_name": device_name,
"source_type": "volume",
"destination_type":
"volume",
"uuid": volume_id,
"boot_index": "0"}
blocks.append(
block_device_mapping_v2_item)
LOG.debug(
"Detaching volume %(volume)s from "
"instance: %(instance)s",
{'volume': volume_id, 'instance': instance_id})
# volume.detach()
self.nova.volumes.delete_server_volume(instance_id,
volume_id)
if not self.wait_for_volume_status(volume, "available", 5,
10):
LOG.debug(
"Could not detach volume %(volume)s "
"from instance: %(instance)s",
{'volume': volume_id, 'instance': instance_id})
return False
except ciexceptions.NotFound:
LOG.debug("Volume '%s' not found", volume_id)
return False
# We create the new instance from
# the intermediate image of the original instance
new_instance = self. \
create_instance(dest_hostname,
instance_name,
image_uuid,
flavor_name,
sec_groups,
network_names_list=network_names_list,
keypair_name=keypair_name,
create_new_floating_ip=False,
block_device_mapping_v2=blocks)
if not new_instance:
LOG.debug(
"Could not create new instance "
"for non-live migration of instance %s", instance_id)
return False
try:
LOG.debug(
"Detaching floating ip '%(floating_ip)s' "
"from instance %(instance)s",
{'floating_ip': floating_ip, 'instance': instance_id})
# We detach the floating ip from the current instance
instance.remove_floating_ip(floating_ip)
LOG.debug(
"Attaching floating ip '%(ip)s' to the new "
"instance %(id)s",
{'ip': floating_ip, 'id': new_instance.id})
# We attach the same floating ip to the new instance
new_instance.add_floating_ip(floating_ip)
except Exception as e:
LOG.debug(e)
new_host_name = getattr(new_instance, "OS-EXT-SRV-ATTR:host")
# Deleting the old instance (because no more useful)
delete_ok = self.delete_instance(instance_id)
if not delete_ok:
LOG.debug("Could not delete instance: %s", instance_id)
return False
LOG.debug(
"Instance %s has been successfully migrated "
"to new host '%s' and its new id is %s.",
instance_id, new_host_name, new_instance.id)
return True
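The migrate-and-poll pattern used above (call migrate(), then poll the server's status until it reaches VERIFY_RESIZE or ERROR, bounded by a retry budget) can be sketched standalone. FakeNova is hypothetical; the real code re-fetches via self.nova.servers.get() and sleeps between polls:

```python
# Minimal sketch of the migrate-and-poll loop. FakeNova pretends the resize
# finishes on the third status poll; real code sleeps between attempts.

class FakeNova:
    def __init__(self):
        self.status = 'ACTIVE'
        self._polls = 0

    def migrate(self):
        self.status = 'RESIZE'

    def get(self):
        self._polls += 1
        if self._polls >= 3:          # pretend the resize completes
            self.status = 'VERIFY_RESIZE'
        return self

def migrate_and_wait(server, retry=10):
    server.migrate()
    while server.status not in ('VERIFY_RESIZE', 'ERROR') and retry:
        server = server.get()         # re-fetch; real code sleeps here
        retry -= 1
    return server.status == 'VERIFY_RESIZE'

assert migrate_and_wait(FakeNova()) is True
```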
def resize_instance(self, instance_id, flavor, retry=120):
"""This method resizes given instance with specified flavor.
@@ -380,31 +557,21 @@ class NovaHelper(object):
"for the instance %s" % instance_id)
def enable_service_nova_compute(self, hostname):
if float(CONF.nova_client.api_version) < 2.53:
status = self.nova.services.enable(
host=hostname, binary='nova-compute').status == 'enabled'
if self.nova.services.enable(host=hostname,
binary='nova-compute'). \
status == 'enabled':
return True
else:
service_uuid = self.nova.services.list(host=hostname,
binary='nova-compute')[0].id
status = self.nova.services.enable(
service_uuid=service_uuid).status == 'enabled'
return status
return False
def disable_service_nova_compute(self, hostname, reason=None):
if float(CONF.nova_client.api_version) < 2.53:
status = self.nova.services.disable_log_reason(
host=hostname,
binary='nova-compute',
reason=reason).status == 'disabled'
if self.nova.services.disable_log_reason(host=hostname,
binary='nova-compute',
reason=reason). \
status == 'disabled':
return True
else:
service_uuid = self.nova.services.list(host=hostname,
binary='nova-compute')[0].id
status = self.nova.services.disable_log_reason(
service_uuid=service_uuid,
reason=reason).status == 'disabled'
return status
return False
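The 2.53 split that both functions above branch on: before that microversion a compute service is addressed by (host, binary), from 2.53 onward by its UUID. FakeServices is hypothetical and only mirrors the two call shapes used above:

```python
# Version-gated service enable, sketched with stand-in objects.

class FakeService:
    def __init__(self, status):
        self.status = status
        self.id = 'svc-uuid-1'

class FakeServices:
    def list(self, host, binary):
        return [FakeService('disabled')]

    def enable(self, host=None, binary=None, service_uuid=None):
        return FakeService('enabled')

def enable_compute(services, hostname, api_version):
    if float(api_version) < 2.53:
        # old API: key the service by host + binary
        svc = services.enable(host=hostname, binary='nova-compute')
    else:
        # 2.53+: look up the service UUID first, then enable by UUID
        uuid = services.list(host=hostname, binary='nova-compute')[0].id
        svc = services.enable(service_uuid=uuid)
    return svc.status == 'enabled'

assert enable_compute(FakeServices(), 'compute-1', '2.1') is True
assert enable_compute(FakeServices(), 'compute-1', '2.60') is True
```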
def set_host_offline(self, hostname):
# See API on https://developer.openstack.org/api-ref/compute/
@@ -705,8 +872,9 @@ class NovaHelper(object):
def get_instances_by_node(self, host):
return [instance for instance in
self.nova.servers.list(search_opts={"all_tenants": True})
if self.get_hostname(instance) == host]
self.nova.servers.list(search_opts={"all_tenants": True,
"host": host},
limit=-1)]
def get_hostname(self, instance):
return str(getattr(instance, 'OS-EXT-SRV-ATTR:host'))
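get_instances_by_node above now pushes the host filter into search_opts so Nova filters server-side, instead of listing every instance and comparing OS-EXT-SRV-ATTR:host locally. A dict-backed sketch (FakeServers is hypothetical):

```python
# Server-side vs. client-side filtering. The fake client applies the "host"
# search option itself, mirroring what Nova does when given search_opts.

class FakeServers:
    def __init__(self, placement):
        self._placement = placement   # {instance_name: host}

    def list(self, search_opts=None, limit=None):
        host = (search_opts or {}).get('host')
        return [i for i, h in self._placement.items()
                if host is None or h == host]

servers = FakeServers({'vm-a': 'node1', 'vm-b': 'node2', 'vm-c': 'node1'})
on_node1 = servers.list(search_opts={'all_tenants': True, 'host': 'node1'},
                        limit=-1)
assert sorted(on_node1) == ['vm-a', 'vm-c']
```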
@@ -757,12 +925,3 @@ class NovaHelper(object):
"Volume %s is now on host '%s'.",
new_volume.id, host_name)
return True
def _check_nova_api_version(self, client, version):
api_version = api_versions.APIVersion(version_str=version)
try:
api_versions.discover_version(client, api_version)
return True
except nvexceptions.UnsupportedVersion as e:
LOG.exception(e)
return False
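_check_nova_api_version above asks the live API whether it supports a microversion; the comparison itself is just (major, minor) tuple ordering, as this standalone sketch shows (parse_version is a stand-in for novaclient's api_versions.APIVersion):

```python
# Microversion comparison must be numeric per component, not lexicographic:
# "2.9" < "2.10" even though the strings sort the other way.

def parse_version(version_str):
    major, minor = version_str.split('.')
    return (int(major), int(minor))

def supports(server_max, wanted):
    return parse_version(server_max) >= parse_version(wanted)

assert supports('2.60', '2.56') is True
assert supports('2.53', '2.56') is False
assert supports('2.9', '2.10') is False   # numeric, not lexicographic
```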

View File

@@ -71,17 +71,6 @@ rules = [
'method': 'PATCH'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'start',
check_str=base.RULE_ADMIN_API,
description='Start an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_uuid}/action',
'method': 'POST'
}
]
)
]

View File

@@ -289,7 +289,7 @@ class Service(service.ServiceBase):
return api_manager_version
def launch(conf, service_, workers=1, restart_method='mutate'):
def launch(conf, service_, workers=1, restart_method='reload'):
return service.launch(conf, service_, workers, restart_method)

View File

@@ -30,7 +30,10 @@ CEILOMETER_CLIENT_OPTS = [
default='internalURL',
help='Type of endpoint to use in ceilometerclient.'
'Supported values: internalURL, publicURL, adminURL'
'The default is internalURL.')]
'The default is internalURL.'),
cfg.StrOpt('region_name',
help='Region in Identity service catalog to use for '
'communication with the OpenStack service.')]
def register_opts(conf):

View File

@@ -29,7 +29,10 @@ CINDER_CLIENT_OPTS = [
default='publicURL',
help='Type of endpoint to use in cinderclient.'
'Supported values: internalURL, publicURL, adminURL'
'The default is publicURL.')]
'The default is publicURL.'),
cfg.StrOpt('region_name',
help='Region in Identity service catalog to use for '
'communication with the OpenStack service.')]
def register_opts(conf):

View File

@@ -44,21 +44,18 @@ WATCHER_DECISION_ENGINE_OPTS = [
'execute strategies'),
cfg.IntOpt('action_plan_expiry',
default=24,
mutable=True,
help='An expiry timespan (hours). Watcher invalidates any '
'action plan whose creation time, offset by this '
'number of hours, is older than the current time.'),
cfg.IntOpt('check_periodic_interval',
default=30 * 60,
mutable=True,
help='Interval (in seconds) for checking action plan expiry.')
]
WATCHER_CONTINUOUS_OPTS = [
cfg.IntOpt('continuous_audit_interval',
default=10,
mutable=True,
help='Interval (in seconds) for checking newly created '
'continuous audits.')
]

View File

@@ -29,7 +29,10 @@ GLANCE_CLIENT_OPTS = [
default='publicURL',
help='Type of endpoint to use in glanceclient.'
'Supported values: internalURL, publicURL, adminURL'
'The default is publicURL.')]
'The default is publicURL.'),
cfg.StrOpt('region_name',
help='Region in Identity service catalog to use for '
'communication with the OpenStack service.')]
def register_opts(conf):

View File

@@ -30,13 +30,14 @@ GNOCCHI_CLIENT_OPTS = [
help='Type of endpoint to use in gnocchi client.'
'Supported values: internal, public, admin'
'The default is public.'),
cfg.StrOpt('region_name',
help='Region in Identity service catalog to use for '
'communication with the OpenStack service.'),
cfg.IntOpt('query_max_retries',
default=10,
mutable=True,
help='How many times Watcher retries a failed query'),
cfg.IntOpt('query_timeout',
default=1,
mutable=True,
help='How many seconds Watcher waits before retrying a query')]

View File

@@ -29,7 +29,10 @@ IRONIC_CLIENT_OPTS = [
default='publicURL',
help='Type of endpoint to use in ironicclient.'
'Supported values: internalURL, publicURL, adminURL'
'The default is publicURL.')]
'The default is publicURL.'),
cfg.StrOpt('region_name',
help='Region in Identity service catalog to use for '
'communication with the OpenStack service.')]
def register_opts(conf):

View File

@@ -29,7 +29,10 @@ MONASCA_CLIENT_OPTS = [
default='internal',
help='Type of interface used for monasca endpoint.'
'Supported values: internal, public, admin'
'The default is internal.')]
'The default is internal.'),
cfg.StrOpt('region_name',
help='Region in Identity service catalog to use for '
'communication with the OpenStack service.')]
def register_opts(conf):

View File

@@ -29,7 +29,10 @@ NEUTRON_CLIENT_OPTS = [
default='publicURL',
help='Type of endpoint to use in neutronclient.'
'Supported values: internalURL, publicURL, adminURL'
'The default is publicURL.')]
'The default is publicURL.'),
cfg.StrOpt('region_name',
help='Region in Identity service catalog to use for '
'communication with the OpenStack service.')]
def register_opts(conf):

View File

@@ -23,13 +23,16 @@ nova_client = cfg.OptGroup(name='nova_client',
NOVA_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2.56',
default='2.53',
help='Version of Nova API to use in novaclient.'),
cfg.StrOpt('endpoint_type',
default='publicURL',
help='Type of endpoint to use in novaclient.'
'Supported values: internalURL, publicURL, adminURL'
'The default is publicURL.')]
'The default is publicURL.'),
cfg.StrOpt('region_name',
help='Region in Identity service catalog to use for '
'communication with the OpenStack service.')]
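Each client option group in the diffs above gains the same region_name StrOpt. A minimal mimic of that registration pattern, with StrOpt and Conf as hypothetical stand-ins for oslo_config.cfg, just to keep the sketch self-contained:

```python
# Stand-ins for oslo.config's StrOpt / ConfigOpts, mimicking how a group of
# options (including the new region_name) is registered and read back.

class StrOpt:
    def __init__(self, name, default=None, help=''):
        self.name, self.default, self.help = name, default, help

class Conf(dict):
    def register_opts(self, opts, group):
        self.setdefault(group, {}).update(
            {o.name: o.default for o in opts})

NOVA_CLIENT_OPTS = [
    StrOpt('api_version', default='2.56',
           help='Version of Nova API to use in novaclient.'),
    StrOpt('region_name',
           help='Region in Identity service catalog to use for '
                'communication with the OpenStack service.'),
]

conf = Conf()
conf.register_opts(NOVA_CLIENT_OPTS, group='nova_client')
assert conf['nova_client']['api_version'] == '2.56'
assert conf['nova_client']['region_name'] is None   # unset until configured
```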
def register_opts(conf):

View File

@@ -25,7 +25,6 @@ from watcher._i18n import _
SERVICE_OPTS = [
cfg.IntOpt('periodic_interval',
default=60,
mutable=True,
help=_('Seconds between running periodic tasks.')),
cfg.HostAddressOpt('host',
default=socket.gethostname(),

View File

@@ -314,21 +314,6 @@ class Connection(api.BaseConnection):
query.delete()
def _get_model_list(self, model, add_filters_func, context, filters=None,
limit=None, marker=None, sort_key=None, sort_dir=None,
eager=False):
query = model_query(model)
if eager:
query = self._set_eager_options(model, query)
query = add_filters_func(query, filters)
if not context.show_deleted:
query = query.filter(model.deleted_at.is_(None))
return _paginate_query(model, limit, marker,
sort_key, sort_dir, query)
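The refactor above folds a dozen near-identical get_*_list methods into one generic helper parameterized by the model and its filter function. The same shape in plain Python, with dicts standing in for SQLAlchemy rows (data is hypothetical):

```python
# Generic list helper: apply the model-specific filter function, then the
# shared "hide soft-deleted rows" rule, mirroring _get_model_list above.

def get_model_list(rows, add_filters, filters=None, show_deleted=False):
    rows = add_filters(rows, filters or {})
    if not show_deleted:
        rows = [r for r in rows if r.get('deleted_at') is None]
    return rows

def add_goal_filters(rows, filters):
    return [r for r in rows
            if all(r.get(k) == v for k, v in filters.items())]

goals = [{'name': 'dummy', 'deleted_at': None},
         {'name': 'server_consolidation', 'deleted_at': '2018-01-01'}]

assert get_model_list(goals, add_goal_filters) == [goals[0]]
assert get_model_list(goals, add_goal_filters,
                      filters={'name': 'dummy'}) == [goals[0]]
```

Each concrete getter then reduces to one call with its own model and filter function, which is exactly what the `*args, **kwargs` forwarding in the diff achieves.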
# NOTE(erakli): _add_..._filters methods should be refactored to have same
# content. join_fieldmap should be filled with JoinMap instead of dict
def _add_goals_filters(self, query, filters):
if filters is None:
filters = {}
@@ -441,42 +426,18 @@ class Connection(api.BaseConnection):
query=query, model=models.EfficacyIndicator, filters=filters,
plain_fields=plain_fields, join_fieldmap=join_fieldmap)
def _add_scoring_engine_filters(self, query, filters):
if filters is None:
filters = {}
plain_fields = ['id', 'description']
return self._add_filters(
query=query, model=models.ScoringEngine, filters=filters,
plain_fields=plain_fields)
def _add_action_descriptions_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'action_type']
return self._add_filters(
query=query, model=models.ActionDescription, filters=filters,
plain_fields=plain_fields)
def _add_services_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'name', 'host']
return self._add_filters(
query=query, model=models.Service, filters=filters,
plain_fields=plain_fields)
# ### GOALS ### #
def get_goal_list(self, *args, **kwargs):
return self._get_model_list(models.Goal,
self._add_goals_filters,
*args, **kwargs)
def get_goal_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.Goal)
if eager:
query = self._set_eager_options(models.Goal, query)
query = self._add_goals_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Goal, limit, marker,
sort_key, sort_dir, query)
def create_goal(self, values):
# ensure defaults are present for new goals
@@ -532,10 +493,17 @@ class Connection(api.BaseConnection):
# ### STRATEGIES ### #
def get_strategy_list(self, *args, **kwargs):
return self._get_model_list(models.Strategy,
self._add_strategies_filters,
*args, **kwargs)
def get_strategy_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None,
eager=True):
query = model_query(models.Strategy)
if eager:
query = self._set_eager_options(models.Strategy, query)
query = self._add_strategies_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Strategy, limit, marker,
sort_key, sort_dir, query)
def create_strategy(self, values):
# ensure defaults are present for new strategies
@@ -591,10 +559,18 @@ class Connection(api.BaseConnection):
# ### AUDIT TEMPLATES ### #
def get_audit_template_list(self, *args, **kwargs):
return self._get_model_list(models.AuditTemplate,
self._add_audit_templates_filters,
*args, **kwargs)
def get_audit_template_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None,
eager=False):
query = model_query(models.AuditTemplate)
if eager:
query = self._set_eager_options(models.AuditTemplate, query)
query = self._add_audit_templates_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.AuditTemplate, limit, marker,
sort_key, sort_dir, query)
def create_audit_template(self, values):
# ensure defaults are present for new audit_templates
@@ -666,10 +642,17 @@ class Connection(api.BaseConnection):
# ### AUDITS ### #
def get_audit_list(self, *args, **kwargs):
return self._get_model_list(models.Audit,
self._add_audits_filters,
*args, **kwargs)
def get_audit_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.Audit)
if eager:
query = self._set_eager_options(models.Audit, query)
query = self._add_audits_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Audit, limit, marker,
sort_key, sort_dir, query)
def create_audit(self, values):
# ensure defaults are present for new audits
@@ -757,10 +740,16 @@ class Connection(api.BaseConnection):
# ### ACTIONS ### #
def get_action_list(self, *args, **kwargs):
return self._get_model_list(models.Action,
self._add_actions_filters,
*args, **kwargs)
def get_action_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.Action)
if eager:
query = self._set_eager_options(models.Action, query)
query = self._add_actions_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Action, limit, marker,
sort_key, sort_dir, query)
def create_action(self, values):
# ensure defaults are present for new actions
@@ -830,10 +819,18 @@ class Connection(api.BaseConnection):
# ### ACTION PLANS ### #
def get_action_plan_list(self, *args, **kwargs):
return self._get_model_list(models.ActionPlan,
self._add_action_plans_filters,
*args, **kwargs)
def get_action_plan_list(
self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.ActionPlan)
if eager:
query = self._set_eager_options(models.ActionPlan, query)
query = self._add_action_plans_filters(query, filters)
if not context.show_deleted:
query = query.filter(models.ActionPlan.deleted_at.is_(None))
return _paginate_query(models.ActionPlan, limit, marker,
sort_key, sort_dir, query)
def create_action_plan(self, values):
# ensure defaults are present for new audits
@@ -915,10 +912,18 @@ class Connection(api.BaseConnection):
# ### EFFICACY INDICATORS ### #
def get_efficacy_indicator_list(self, *args, **kwargs):
return self._get_model_list(models.EfficacyIndicator,
self._add_efficacy_indicators_filters,
*args, **kwargs)
def get_efficacy_indicator_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None,
eager=False):
query = model_query(models.EfficacyIndicator)
if eager:
query = self._set_eager_options(models.EfficacyIndicator, query)
query = self._add_efficacy_indicators_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.EfficacyIndicator, limit, marker,
sort_key, sort_dir, query)
def create_efficacy_indicator(self, values):
# ensure defaults are present for new efficacy indicators
@@ -987,10 +992,28 @@ class Connection(api.BaseConnection):
# ### SCORING ENGINES ### #
def get_scoring_engine_list(self, *args, **kwargs):
return self._get_model_list(models.ScoringEngine,
self._add_scoring_engine_filters,
*args, **kwargs)
def _add_scoring_engine_filters(self, query, filters):
if filters is None:
filters = {}
plain_fields = ['id', 'description']
return self._add_filters(
query=query, model=models.ScoringEngine, filters=filters,
plain_fields=plain_fields)
def get_scoring_engine_list(
self, context, columns=None, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None, eager=False):
query = model_query(models.ScoringEngine)
if eager:
query = self._set_eager_options(models.ScoringEngine, query)
query = self._add_scoring_engine_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.ScoringEngine, limit, marker,
sort_key, sort_dir, query)
def create_scoring_engine(self, values):
# ensure defaults are present for new scoring engines
@@ -1055,10 +1078,26 @@ class Connection(api.BaseConnection):
# ### SERVICES ### #
def get_service_list(self, *args, **kwargs):
return self._get_model_list(models.Service,
self._add_services_filters,
*args, **kwargs)
def _add_services_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'name', 'host']
return self._add_filters(
query=query, model=models.Service, filters=filters,
plain_fields=plain_fields)
def get_service_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.Service)
if eager:
query = self._set_eager_options(models.Service, query)
query = self._add_services_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Service, limit, marker,
sort_key, sort_dir, query)
def create_service(self, values):
try:
@@ -1103,10 +1142,27 @@ class Connection(api.BaseConnection):
# ### ACTION_DESCRIPTIONS ### #
def get_action_description_list(self, *args, **kwargs):
return self._get_model_list(models.ActionDescription,
self._add_action_descriptions_filters,
*args, **kwargs)
def _add_action_descriptions_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'action_type']
return self._add_filters(
query=query, model=models.ActionDescription, filters=filters,
plain_fields=plain_fields)
def get_action_description_list(self, context, filters=None, limit=None,
marker=None, sort_key=None,
sort_dir=None, eager=False):
query = model_query(models.ActionDescription)
if eager:
query = self._set_eager_options(models.ActionDescription, query)
query = self._add_action_descriptions_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.ActionDescription, limit, marker,
sort_key, sort_dir, query)
def create_action_description(self, values):
try:

View File

@@ -63,7 +63,6 @@ class AuditHandler(BaseAuditHandler):
self._strategy_context = default_context.DefaultStrategyContext()
self._planner_manager = planner_manager.PlannerManager()
self._planner = None
self.applier_client = rpcapi.ApplierAPI()
@property
def planner(self):
@@ -75,13 +74,6 @@ class AuditHandler(BaseAuditHandler):
def strategy_context(self):
return self._strategy_context
def do_execute(self, audit, request_context):
# execute the strategy
solution = self.strategy_context.execute_strategy(
audit, request_context)
return solution
def do_schedule(self, request_context, audit, solution):
try:
notifications.audit.send_action_notification(
@@ -126,8 +118,9 @@ class AuditHandler(BaseAuditHandler):
def post_execute(self, audit, solution, request_context):
action_plan = self.do_schedule(request_context, audit, solution)
if audit.auto_trigger:
self.applier_client.launch_action_plan(request_context,
action_plan.uuid)
applier_client = rpcapi.ApplierAPI()
applier_client.launch_action_plan(request_context,
action_plan.uuid)
def execute(self, audit, request_context):
try:

View File

@@ -71,8 +71,9 @@ class ContinuousAuditHandler(base.AuditHandler):
return False
def do_execute(self, audit, request_context):
solution = super(ContinuousAuditHandler, self)\
.do_execute(audit, request_context)
# execute the strategy
solution = self.strategy_context.execute_strategy(
audit, request_context)
if audit.audit_type == objects.audit.AuditType.CONTINUOUS.value:
a_plan_filters = {'audit_uuid': audit.uuid,

View File

@@ -20,6 +20,13 @@ from watcher import objects
class OneShotAuditHandler(base.AuditHandler):
def do_execute(self, audit, request_context):
# execute the strategy
solution = self.strategy_context.execute_strategy(
audit, request_context)
return solution
def post_execute(self, audit, solution, request_context):
super(OneShotAuditHandler, self).post_execute(audit, solution,
request_context)
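The handler refactor above moves strategy execution out of the base class into each concrete do_execute, and creates the applier RPC client on demand instead of in __init__. A compact mimic of the resulting shape (all classes here are hypothetical stand-ins):

```python
# Each concrete handler now runs the strategy itself instead of delegating
# to a base-class do_execute.

class StrategyContext:
    def execute_strategy(self, audit):
        return 'solution-for-%s' % audit

class BaseAuditHandler:
    strategy_context = StrategyContext()

    def do_execute(self, audit):
        raise NotImplementedError   # subclasses execute the strategy

    def execute(self, audit):
        return self.do_execute(audit)

class OneShotAuditHandler(BaseAuditHandler):
    def do_execute(self, audit):
        # execute the strategy directly, as in the diff above
        return self.strategy_context.execute_strategy(audit)

assert OneShotAuditHandler().execute('audit-1') == 'solution-for-audit-1'
```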

View File

@@ -241,28 +241,3 @@ class HardwareMaintenance(base.Goal):
def get_efficacy_specification(cls):
"""The efficacy spec for the current goal"""
return specs.HardwareMaintenance()
class ClusterMaintaining(base.Goal):
"""ClusterMaintenance
This goal is used to maintain compute nodes
without interrupting users' applications.
"""
@classmethod
def get_name(cls):
return "cluster_maintaining"
@classmethod
def get_display_name(cls):
return _("Cluster Maintaining")
@classmethod
def get_translatable_display_name(cls):
return "Cluster Maintaining"
@classmethod
def get_efficacy_specification(cls):
"""The efficacy spec for the current goal"""
return specs.Unclassified()

View File

@@ -222,21 +222,8 @@ class ModelBuilder(object):
:param pool: A storage pool
:type pool: :py:class:`~cinderlient.v2.capabilities.Capabilities`
:raises: exception.InvalidPoolAttributeValue
"""
# build up the storage pool.
attrs = ["total_volumes", "total_capacity_gb",
"free_capacity_gb", "provisioned_capacity_gb",
"allocated_capacity_gb"]
for attr in attrs:
try:
int(getattr(pool, attr))
except ValueError:
raise exception.InvalidPoolAttributeValue(
name=pool.name, attribute=attr)
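The int() check above, standalone: every capacity attribute must be castable to int or the pool is rejected. A plain ValueError replaces exception.InvalidPoolAttributeValue to keep the sketch self-contained (Pool here is hypothetical):

```python
# Validate that a storage pool's capacity fields are all integral, mirroring
# the try/int/except loop above.

class Pool:
    def __init__(self, name, total_volumes, total_capacity_gb):
        self.name = name
        self.total_volumes = total_volumes
        self.total_capacity_gb = total_capacity_gb

def validate_pool(pool, attrs=('total_volumes', 'total_capacity_gb')):
    for attr in attrs:
        try:
            int(getattr(pool, attr))
        except ValueError:
            raise ValueError('pool %s: %s is not an integer'
                             % (pool.name, attr))

validate_pool(Pool('pool1', '12', '500'))          # passes
try:
    validate_pool(Pool('pool2', 'twelve', '500'))  # rejected
    raise AssertionError('expected ValueError')
except ValueError as exc:
    assert 'total_volumes' in str(exc)
```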
node_attributes = {
"name": pool.name,
"total_volumes": pool.total_volumes,

View File

@@ -104,18 +104,6 @@ class NovaClusterDataModelCollector(base.BaseClusterDataModelCollector):
"items": {
"type": "object"
}
},
"projects": {
"type": "array",
"items": {
"type": "object",
"properties": {
"uuid": {
"type": "string"
}
},
"additionalProperties": False
}
}
},
"additionalProperties": False
@@ -349,7 +337,7 @@ class ModelBuilder(object):
Create an instance node for the graph using nova and the
`server` nova object.
:param instance: Nova VM object.
:return: An instance node for the graph.
:return: A instance node for the graph.
"""
flavor = instance.flavor
instance_attributes = {
@@ -360,8 +348,7 @@ class ModelBuilder(object):
"disk_capacity": flavor["disk"],
"vcpus": flavor["vcpus"],
"state": getattr(instance, "OS-EXT-STS:vm_state"),
"metadata": instance.metadata,
"project_id": instance.tenant_id}
"metadata": instance.metadata}
# node_attributes = dict()
# node_attributes["layer"] = "virtual"

View File

@@ -29,7 +29,7 @@ class InstanceState(enum.Enum):
STOPPED = 'stopped' # Instance is shut off, the disk image is still there.
RESCUED = 'rescued' # A rescue image is running with the original image
# attached.
RESIZED = 'resized' # an Instance with the new size is active.
RESIZED = 'resized' # a Instance with the new size is active.
SOFT_DELETED = 'soft-delete'
# still available to restore.
@@ -52,7 +52,6 @@ class Instance(compute_resource.ComputeResource):
"disk_capacity": wfields.NonNegativeIntegerField(),
"vcpus": wfields.NonNegativeIntegerField(),
"metadata": wfields.JsonField(),
"project_id": wfields.UUIDField(),
}
def accept(self, visitor):

View File

@@ -74,7 +74,7 @@ class Pool(storage_resource.StorageResource):
"free_capacity_gb": wfields.NonNegativeIntegerField(),
"provisioned_capacity_gb": wfields.NonNegativeIntegerField(),
"allocated_capacity_gb": wfields.NonNegativeIntegerField(),
"virtual_free": wfields.NonNegativeIntegerField(default=0),
"virtual_free": wfields.NonNegativeIntegerField(),
}
def accept(self, visitor):

View File

@@ -28,6 +28,6 @@ class StorageResource(base.Element):
VERSION = '1.0'
fields = {
"uuid": wfields.StringField(default=""),
"uuid": wfields.StringField(),
"human_id": wfields.StringField(default=""),
}

View File

@@ -16,7 +16,6 @@
OpenStack implementation of the cluster graph.
"""
import ast
from lxml import etree
import networkx as nx
from oslo_concurrency import lockutils
@@ -58,7 +57,7 @@ class ModelRoot(nx.DiGraph, base.Model):
@lockutils.synchronized("model_root")
def add_node(self, node):
self.assert_node(node)
super(ModelRoot, self).add_node(node.uuid, attr=node)
super(ModelRoot, self).add_node(node.uuid, node)
@lockutils.synchronized("model_root")
def remove_node(self, node):
@@ -73,7 +72,7 @@ class ModelRoot(nx.DiGraph, base.Model):
def add_instance(self, instance):
self.assert_instance(instance)
try:
super(ModelRoot, self).add_node(instance.uuid, attr=instance)
super(ModelRoot, self).add_node(instance.uuid, instance)
except nx.NetworkXError as exc:
LOG.exception(exc)
raise exception.InstanceNotFound(name=instance.uuid)
@@ -138,8 +137,8 @@ class ModelRoot(nx.DiGraph, base.Model):
@lockutils.synchronized("model_root")
def get_all_compute_nodes(self):
return {uuid: cn['attr'] for uuid, cn in self.nodes(data=True)
if isinstance(cn['attr'], element.ComputeNode)}
return {uuid: cn for uuid, cn in self.nodes(data=True)
if isinstance(cn, element.ComputeNode)}
@lockutils.synchronized("model_root")
def get_node_by_uuid(self, uuid):
@@ -157,7 +156,7 @@ class ModelRoot(nx.DiGraph, base.Model):
def _get_by_uuid(self, uuid):
try:
return self.node[uuid]['attr']
return self.node[uuid]
except Exception as exc:
LOG.exception(exc)
raise exception.ComputeResourceNotFound(name=uuid)
@@ -173,8 +172,8 @@ class ModelRoot(nx.DiGraph, base.Model):
@lockutils.synchronized("model_root")
def get_all_instances(self):
return {uuid: inst['attr'] for uuid, inst in self.nodes(data=True)
if isinstance(inst['attr'], element.Instance)}
return {uuid: inst for uuid, inst in self.nodes(data=True)
if isinstance(inst, element.Instance)}
@lockutils.synchronized("model_root")
def get_node_instances(self, node):
@@ -226,8 +225,6 @@ class ModelRoot(nx.DiGraph, base.Model):
for inst in root.findall('.//Instance'):
instance = element.Instance(**inst.attrib)
instance.watcher_exclude = ast.literal_eval(
inst.attrib["watcher_exclude"])
model.add_instance(instance)
parent = inst.getparent()
@@ -242,7 +239,7 @@ class ModelRoot(nx.DiGraph, base.Model):
@classmethod
def is_isomorphic(cls, G1, G2):
def node_match(node1, node2):
return node1['attr'].as_dict() == node2['attr'].as_dict()
return node1.as_dict() == node2.as_dict()
return nx.algorithms.isomorphism.isomorph.is_isomorphic(
G1, G2, node_match=node_match)
@@ -280,12 +277,12 @@ class StorageModelRoot(nx.DiGraph, base.Model):
@lockutils.synchronized("storage_model")
def add_node(self, node):
self.assert_node(node)
super(StorageModelRoot, self).add_node(node.host, attr=node)
super(StorageModelRoot, self).add_node(node.host, node)
@lockutils.synchronized("storage_model")
def add_pool(self, pool):
self.assert_pool(pool)
super(StorageModelRoot, self).add_node(pool.name, attr=pool)
super(StorageModelRoot, self).add_node(pool.name, pool)
@lockutils.synchronized("storage_model")
def remove_node(self, node):
@@ -338,7 +335,7 @@ class StorageModelRoot(nx.DiGraph, base.Model):
@lockutils.synchronized("storage_model")
def add_volume(self, volume):
self.assert_volume(volume)
super(StorageModelRoot, self).add_node(volume.uuid, attr=volume)
super(StorageModelRoot, self).add_node(volume.uuid, volume)
@lockutils.synchronized("storage_model")
def remove_volume(self, volume):
@@ -385,8 +382,8 @@ class StorageModelRoot(nx.DiGraph, base.Model):
@lockutils.synchronized("storage_model")
def get_all_storage_nodes(self):
return {host: cn['attr'] for host, cn in self.nodes(data=True)
if isinstance(cn['attr'], element.StorageNode)}
return {host: cn for host, cn in self.nodes(data=True)
if isinstance(cn, element.StorageNode)}
@lockutils.synchronized("storage_model")
def get_node_by_name(self, name):
@@ -415,14 +412,14 @@ class StorageModelRoot(nx.DiGraph, base.Model):
def _get_by_uuid(self, uuid):
try:
return self.node[uuid]['attr']
return self.node[uuid]
except Exception as exc:
LOG.exception(exc)
raise exception.StorageResourceNotFound(name=uuid)
def _get_by_name(self, name):
try:
return self.node[name]['attr']
return self.node[name]
except Exception as exc:
LOG.exception(exc)
raise exception.StorageResourceNotFound(name=name)
@@ -459,8 +456,8 @@ class StorageModelRoot(nx.DiGraph, base.Model):
@lockutils.synchronized("storage_model")
def get_all_volumes(self):
return {name: vol['attr'] for name, vol in self.nodes(data=True)
if isinstance(vol['attr'], element.Volume)}
return {name: vol for name, vol in self.nodes(data=True)
if isinstance(vol, element.Volume)}
@lockutils.synchronized("storage_model")
def get_pool_volumes(self, pool):
@@ -572,7 +569,7 @@ class BaremetalModelRoot(nx.DiGraph, base.Model):
@lockutils.synchronized("baremetal_model")
def add_node(self, node):
self.assert_node(node)
super(BaremetalModelRoot, self).add_node(node.uuid, attr=node)
super(BaremetalModelRoot, self).add_node(node.uuid, node)
@lockutils.synchronized("baremetal_model")
def remove_node(self, node):
@@ -585,8 +582,8 @@ class BaremetalModelRoot(nx.DiGraph, base.Model):
@lockutils.synchronized("baremetal_model")
def get_all_ironic_nodes(self):
return {uuid: cn['attr'] for uuid, cn in self.nodes(data=True)
if isinstance(cn['attr'], element.IronicNode)}
return {uuid: cn for uuid, cn in self.nodes(data=True)
if isinstance(cn, element.IronicNode)}
@lockutils.synchronized("baremetal_model")
def get_node_by_uuid(self, uuid):
@@ -597,7 +594,7 @@ class BaremetalModelRoot(nx.DiGraph, base.Model):
def _get_by_uuid(self, uuid):
try:
return self.node[uuid]['attr']
return self.node[uuid]
except Exception as exc:
LOG.exception(exc)
raise exception.BaremetalResourceNotFound(name=uuid)
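The recurring pattern in this file swaps the networkx node storage from a wrapped `{'attr': element}` dict to storing the element itself, so every lookup loses its `['attr']` unwrap. A self-contained sketch of the two conventions side by side (plain dicts stand in for `networkx.DiGraph`, and `ComputeNode` here is a hypothetical simplification of the Watcher element class):

```python
# Minimal stand-in illustrating the two node-storage conventions in the
# diff. Plain dicts replace networkx graphs; ComputeNode is hypothetical.

class ComputeNode:
    def __init__(self, uuid):
        self.uuid = uuid

    def as_dict(self):
        return {"uuid": self.uuid}


# Old convention: each node's data dict wraps the element under 'attr',
# so callers must unwrap with data['attr'] everywhere:
nodes_old = {"node-1": {"attr": ComputeNode("node-1")}}
compute_nodes_old = {uuid: data["attr"] for uuid, data in nodes_old.items()
                     if isinstance(data["attr"], ComputeNode)}

# New convention: the element itself is the stored value, so lookups and
# isinstance checks operate on it directly:
nodes_new = {"node-1": ComputeNode("node-1")}
compute_nodes_new = {uuid: obj for uuid, obj in nodes_new.items()
                     if isinstance(obj, ComputeNode)}

assert compute_nodes_old["node-1"].as_dict() == \
    compute_nodes_new["node-1"].as_dict()
```

Either way callers get the element back; the change only removes the one level of indirection, which is why every `get_all_*` comprehension, `_get_by_*` accessor, and `node_match` comparison in the hunks shrinks by one `['attr']`.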


@@ -76,7 +76,6 @@ class NovaNotification(base.NotificationEndpoint):
             'disk': disk_gb,
             'disk_capacity': disk_gb,
             'metadata': instance_metadata,
-            'tenant_id': instance_data['tenant_id']
         })
 
         try:

@@ -87,7 +87,6 @@ class ComputeScope(base.BaseScope):
         instances_to_exclude = kwargs.get('instances')
         nodes_to_exclude = kwargs.get('nodes')
         instance_metadata = kwargs.get('instance_metadata')
-        projects_to_exclude = kwargs.get('projects')
 
         for resource in resources:
             if 'instances' in resource:
@@ -106,9 +105,6 @@ class ComputeScope(base.BaseScope):
             elif 'instance_metadata' in resource:
                 instance_metadata.extend(
                     [metadata for metadata in resource['instance_metadata']])
-            elif 'projects' in resource:
-                projects_to_exclude.extend(
-                    [project['uuid'] for project in resource['projects']])
 
     def remove_nodes_from_model(self, nodes_to_remove, cluster_model):
         for node_uuid in nodes_to_remove:
@@ -148,13 +144,6 @@ class ComputeScope(base.BaseScope):
                 if str(value).lower() == str(metadata.get(key)).lower():
                     instances_to_remove.add(uuid)
 
-    def exclude_instances_with_given_project(
-            self, projects_to_exclude, cluster_model, instances_to_exclude):
-        all_instances = cluster_model.get_all_instances()
-        for uuid, instance in all_instances.items():
-            if instance.project_id in projects_to_exclude:
-                instances_to_exclude.add(uuid)
-
     def get_scoped_model(self, cluster_model):
         """Leave only nodes and instances proposed in the audit scope"""
         if not cluster_model:
@@ -165,7 +154,6 @@ class ComputeScope(base.BaseScope):
         nodes_to_remove = set()
         instances_to_exclude = []
         instance_metadata = []
-        projects_to_exclude = []
         compute_scope = []
 
         model_hosts = list(cluster_model.get_all_compute_nodes().keys())
@@ -189,8 +177,7 @@ class ComputeScope(base.BaseScope):
                 self.exclude_resources(
                     rule['exclude'], instances=instances_to_exclude,
                     nodes=nodes_to_exclude,
-                    instance_metadata=instance_metadata,
-                    projects=projects_to_exclude)
+                    instance_metadata=instance_metadata)
 
         instances_to_exclude = set(instances_to_exclude)
         if allowed_nodes:
@@ -203,10 +190,6 @@ class ComputeScope(base.BaseScope):
             self.exclude_instances_with_given_metadata(
                 instance_metadata, cluster_model, instances_to_exclude)
 
-        if projects_to_exclude:
-            self.exclude_instances_with_given_project(
-                projects_to_exclude, cluster_model, instances_to_exclude)
-
         self.update_exclude_instance_in_model(instances_to_exclude,
                                               cluster_model)
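For reference, the project-based exclusion logic that these hunks remove can be sketched in isolation. The `Instance` class and the flat dict of instances below are hypothetical simplifications of `cluster_model.get_all_instances()`:

```python
# Standalone sketch of the removed project-exclusion filter. Instance and
# the all_instances dict are simplified stand-ins for the Watcher model.

class Instance:
    def __init__(self, uuid, project_id):
        self.uuid = uuid
        self.project_id = project_id


def exclude_instances_with_given_project(projects_to_exclude, all_instances,
                                         instances_to_exclude):
    # Mark every instance owned by an excluded project for removal
    # from the scoped cluster model.
    for uuid, instance in all_instances.items():
        if instance.project_id in projects_to_exclude:
            instances_to_exclude.add(uuid)


all_instances = {
    "i-1": Instance("i-1", "proj-a"),
    "i-2": Instance("i-2", "proj-b"),
}
excluded = set()
exclude_instances_with_given_project({"proj-a"}, all_instances, excluded)
assert excluded == {"i-1"}
```

With this filter gone, an audit scope's `exclude` rules can no longer carry a `projects` entry; only instances, nodes, and instance metadata remain as exclusion criteria.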

Some files were not shown because too many files have changed in this diff.