Compare commits

...

47 Commits

Author SHA1 Message Date
limin0801
7c90c9c752 Watcher API supports strategy name when creating audit template
When creating an audit template directly with the `curl` command,
a strategy name is now accepted.

Closes-Bug: #1884174

Change-Id: I7c0ca760a7fa414faca03c5293df34a84aad6fac
(cherry picked from commit 3f7a508a2e)
2020-08-03 09:41:59 +00:00
Alexander Chadin
4c56eba5be Improve logs of Workload Stabilization strategy
This patch set refactors logs of workload stabilization
strategy to make them more readable and sensible.

Related-Bug: #1874416
Change-Id: I408988712bb7560728157f3b4e4f2b37572128c4
2020-05-14 03:31:22 +00:00
licanwei
a8974556ec Don't throw exception when missing metrics
When querying data from a datasource, it is possible that some data
is missing. If we throw an exception in that case, the Audit fails
because of it. We should remove the exception and leave the decision
to the strategy.
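A minimal sketch of the behavior change described in this commit message (the helper name and data shapes below are illustrative, not Watcher's actual datasource API): return None for a missing metric instead of raising, so the calling strategy decides how to handle the gap.

```python
# Illustrative sketch (assumed names, not Watcher's real datasource API):
# a missing metric yields None instead of an exception, so the calling
# strategy can skip the resource rather than failing the whole Audit.
def statistic_aggregation(samples_by_metric, meter_name):
    samples = samples_by_metric.get(meter_name)
    if not samples:
        # Previously: raise an exception here -> the whole Audit failed.
        return None
    return sum(samples) / len(samples)
```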

Change-Id: I1b0e6b78b3bba4df9ba16e093b3910aab1de922e
Closes-Bug: #1847434
Cannot be cherry-picked from master because of code refactoring.
(cherry picked from commit 306224f70c)
2019-10-21 03:26:36 +00:00
Stamatis Katsaounis
6a4a9af538 Fix issues on stable/rocky
This patch fixes issues present on the stable/rocky branch. Due to heavy
refactoring in later branches, backporting is not possible.

Change-Id: I896a7c49eea1b267099fc90d837458ec7bb7853d
Signed-off-by: Stamatis Katsaounis <skatsaounis@admin.grnet.gr>
2019-10-09 10:31:53 +03:00
Zuul
214ee82e45 Merge "pass default_config_dirs variable for config initialization." into stable/rocky 2019-09-18 02:00:25 +00:00
Guang Yee
5c7fcc22c0 fix test failure with ironic client
The watcher.tests.common.test_clients.TestClients.test_clients_ironic unit
test has been failing since the python-ironicclient 2.5.2 release, which
fixed a bug where the interface argument was ignored.

https://docs.openstack.org/releasenotes/python-ironicclient/rocky.html#relnotes-2-5-2-stable-rocky

Therefore, we need to adjust the ironic test case in watcher to account for
the interface argument.

Change-Id: Iedb27efc9f296054fcbd485b27736a789cee3496
2019-09-16 10:33:36 -07:00
Sumit Jamgade
4db5a58d0d pass default_config_dirs variable for config initialization.
Currently, default config files are passed for the initialization of CONF
from oslo_config, but default config dirs are not. As a result, watcher
components (e.g. the decision engine) are unable to load files from the
default directories supported by oslo_config (e.g.
/etc/watcher/watcher.conf.d). This is a shortcoming on watcher's side,
and it forces users to keep a separate config file for each component.

Without this default set, oslo_config would search for configuration
paths containing the string 'python-watcher' (e.g. /etc/python-watcher/...),
because project=python-watcher is passed a couple of lines below.

This patch adds the option after evaluating it with project set to
'watcher', similar to the evaluation of default_config_files, and also
allows it to be passed in as a function parameter.
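The effect of passing default config dirs can be sketched as follows (this is an illustration of the behavior the commit enables, not oslo.config's actual implementation): besides the main config files, every *.conf snippet under a directory such as /etc/watcher/watcher.conf.d is loaded, in sorted order.

```python
import glob
import os

# Illustrative sketch (not oslo.config's real code) of what passing
# default_config_dirs enables: config snippets from each directory are
# appended, in sorted order, after the main config files.
def discover_config_files(default_config_files, default_config_dirs):
    files = list(default_config_files)
    for config_dir in default_config_dirs:
        files.extend(sorted(glob.glob(os.path.join(config_dir, '*.conf'))))
    return files
```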

Change-Id: I013f9d03978f8716847f8d1ee6888629faf5779b
(cherry picked from commit dce23d7eb4)
(cherry picked from commit ec5780902f)
2019-09-16 15:29:29 +00:00
OpenDev Sysadmins
0a20d27860 OpenDev Migration Patch
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:

http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
2019-04-19 19:40:47 +00:00
Ian Wienand
dc9dba2fda Replace openstack.org git:// URLs with https://
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.

This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.

This update should result in no functional change.

For more information see the thread at

 http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html

Change-Id: I90059bb4fdbe0b5355170999896accde0dc3e59b
2019-03-24 20:36:26 +00:00
Zuul
6203a280ce Merge "Access to action's uuid by key" into stable/rocky 2019-03-16 04:23:08 +00:00
licanwei
294c3cd760 set watcherclient no voting
Change-Id: I817328148559dd35755a706085247f3547b188cb
2019-03-15 17:13:20 +08:00
Tatiana Kholkina
d226c5d0fb Access to action's uuid by key
Change-Id: I9fe992be8f54de51f0c8e0a9fcf7880c68360929
Closes-Bug: #1818962
(cherry picked from commit e830d3793e)
2019-03-15 06:30:13 +00:00
Sumit Jamgade
abbe182cf1 make ceilometer client import optional
On ImportError, set HAS_CEILCLIENT to False.

Without this, none of the watcher components can be started on master
or rocky, because ceilometerclient has been deprecated.

Using this variable, ceilometer support can be gradually removed
from master.

A backport to rocky will allow using watcher without ceilometerclient.
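The guarded-import pattern this commit describes looks like the following (the flag name HAS_CEILCLIENT comes from the commit message; the fallback binding of `ceclient` to None is an assumption for illustration):

```python
# Optional import guard: the module stays usable whether or not
# python-ceilometerclient is installed.
try:
    from ceilometerclient import client as ceclient  # noqa: F401
    HAS_CEILCLIENT = True
except ImportError:
    ceclient = None
    HAS_CEILCLIENT = False
```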

Change-Id: I3beb0fb8f0a8e8e0a22acaf6bdeca492836bbee2
2019-03-04 03:56:28 +00:00
Tatiana Kholkina
c1c0a472dd Provide two arguments to exception's message
Change-Id: I003c9e88abb08b11c22b008936413ee51f6096b1
Closes-Bug: #1817533
(cherry picked from commit 594039f794)
2019-02-27 07:06:39 +00:00
Alexander Chadin
bb0e959bd2 Fix stop_watcher function
Apache should be reloaded after watcher-api is disabled.

Change-Id: Ifee0e7701849348630568aa36b3f3c4c62d3382e
2018-12-10 13:55:52 +00:00
Tatiana Kholkina
41bfba5cac Fix accessing to optional cinder pool attributes
Leave storage pool arguments empty if they are not provided
by cinderclient.

Change-Id: I90435146b33465c8eef95a6104e53285f785b014
Closes-Bug: #1800468
(cherry picked from commit e8c08e2abb)
2018-11-08 09:26:35 +00:00
licanwei
ef66e75b77 optimize get_instances_by_node
We can set the host field in search_opts.
refer to:
https://developer.openstack.org/api-ref/compute/?expanded=list-servers-detail#list-servers

Change-Id: I36b27167d7223f3bf6bb05995210af41ad01fc6d
2018-11-06 10:40:23 +00:00
Tatiana Kholkina
f517cc662a Use limit -1 for nova servers list
By default nova has a limit for returned items in a single response [1].
We should pass limit=-1 to get all items.

[1] https://docs.openstack.org/nova/rocky/configuration/config.html
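The limit semantics can be sketched as follows (the paging logic is a stand-in for novaclient, not its real code; 1000 is nova's documented default for `[api]/max_limit`): a positive or absent limit caps a single response, while limit=-1 makes the client keep paging until the full list is returned.

```python
MAX_LIMIT = 1000  # nova's default [api]/max_limit

# Illustrative model of novaclient paging, not the real implementation.
def list_servers(all_servers, limit=None):
    if limit == -1:
        # limit=-1: follow pagination links until everything is fetched.
        return list(all_servers)
    cap = MAX_LIMIT if limit is None else min(limit, MAX_LIMIT)
    return list(all_servers)[:cap]
```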

Change-Id: I1fabd909c4c0356ef5fcb7c51718fb4513e6befa
2018-10-16 08:37:37 +00:00
Tatiana Kholkina
d99e8f33da Do not pass www_authenticate_uri to RequestContext
Change-Id: I0ee32031d714608c33643b12b1e217a04157f5b3
Closes-Bug: #1795613
(cherry picked from commit f0b96b8a37)
2018-10-04 05:51:25 +00:00
Tatiana Kholkina
ef14aec225 Provide region name while initialize clients
Add new option 'region_name' to config for each client section.

Change-Id: Ifad8908852f4be69dd294a4c4ab28d2e1df265e8
Closes-Bug: #1787937
(cherry picked from commit 925b971377)
2018-09-21 12:08:12 +00:00
Nguyen Hai
546012bed4 import zuul job settings from project-config
This is a mechanically generated patch to complete step 1 of moving
the zuul job settings out of project-config and into each project
repository.

Because there will be a separate patch on each branch, the branch
specifiers for branch-specific jobs have been removed.

Because this patch is generated by a script, there may be some
cosmetic changes to the layout of the YAML file(s) as the contents are
normalized.

See the python3-first goal document for details:
https://governance.openstack.org/tc/goals/stein/python3-first.html

Change-Id: I7a7838d04c7a22c1c80d211393c1e1be10f3e0b0
Story: #2002586
Task: #24344
2018-08-19 00:59:24 +09:00
OpenStack Release Bot
9be780b2b9 Update UPPER_CONSTRAINTS_FILE for stable/rocky
The new stable upper-constraints file is only available
after the openstack/requirements repository is branched.
This will happen around the RC1 timeframe.

Recheck and merge this change once the requirements
repository has been branched.

The CI system will work with this patch before the requirements
repository is branched because zuul configues the job to run
with a local copy of the file and defaults to the master branch.
However, accepting the patch will break the test configuration
on developers' local systems, so please wait until after the
requirements repository is branched to merge the patch.

Change-Id: I1070c8dff1b38f7335eb7cfc55fcca9b94383199
2018-08-08 14:10:47 +00:00
OpenStack Release Bot
f80e7098fd Update .gitreview for stable/rocky
Change-Id: I8c1d18f7f38b23f4763b4b57a455f42ac6adbde9
2018-08-08 14:10:45 +00:00
Zuul
b471b4ca36 Merge "Fix TypeError in LOG.debug" 2018-08-08 12:11:09 +00:00
Zuul
2be5bd1c3f Merge "fix unit test:test_execute_audit_with_interval_no_job" 2018-08-08 10:06:56 +00:00
licanwei
d79edb93d6 Fix TypeError in LOG.debug
Change-Id: I4a4050081d0a22cc66fdb311ef676d0ba802bb72
Closes-Bug: #1785962
2018-08-07 23:44:23 -07:00
Zuul
0c41f20df2 Merge "improve strategy doc" 2018-08-07 10:34:44 +00:00
Yumeng_Bao
249e3c9515 fix unit test:test_execute_audit_with_interval_no_job
The previous unit test did not actually test the situation where there is no job.

Change-Id: I3a0835932134fa6d888e0611a9232e1098d3fe53
2018-08-07 15:44:29 +08:00
licanwei
a229fec4a6 improve strategy doc
Change-Id: Id84e086f316ab50999b43c4b4c60a59ca454e79c
2018-08-06 18:21:39 -07:00
licanwei
5c2b3f0025 remove get_flavor_instance
As of nova API 2.47 (see [1]), flavor.id has been removed,
so we can remove the unused get_flavor_instance.

[1] https://developer.openstack.org/api-ref/compute/#show-server-details

Change-Id: I19a30950c298ee5cde8e71548428330c101bcad6
2018-08-06 01:10:53 +00:00
Zuul
cf9b158713 Merge "remove voluptuous" 2018-08-03 08:32:14 +00:00
Zuul
2cb7871df0 Merge "Update watcher-db-manage help doc" 2018-08-03 08:26:42 +00:00
Zuul
7c83042aa1 Merge "Add noisy neighbor strategy doc" 2018-08-03 08:15:52 +00:00
Zuul
7103e60786 Merge "only check decision engine service" 2018-08-03 08:15:51 +00:00
Zuul
343128fcb9 Merge "Fix unittest MismatchError" 2018-08-02 08:38:38 +00:00
Zuul
a739f81bfb Merge "remove extra'_' and space" 2018-08-02 08:19:58 +00:00
Zuul
d690b2b598 Merge "Fix AttributeError exception" 2018-08-01 07:28:43 +00:00
Zuul
4d1b9c1f04 Merge "Add apscheduler_jobs table to models" 2018-08-01 07:19:29 +00:00
licanwei
927d094907 Fix unittest MismatchError
Change-Id: I4030fb2c4ec89c6c653c2882be1052ed5cbd2cd7
Closes-Bug: #1784758
2018-07-31 22:47:39 -07:00
licanwei
57a4aae92b only check decision engine service
We just need to check the decision engine service status
when rescheduling continuous audits.
This is an update to [1].
[1] https://review.openstack.org/#/c/586033

Change-Id: I05a17f39b6ff80c6b9382248c72cac571191e395
2018-08-01 01:10:25 +00:00
chenke
abd129002c remove extra '_' and space
Change-Id: I85cdb0dd4e8f192181146b99f0416bf777a8279a
2018-07-31 20:07:40 +08:00
licanwei
b92a26345f remove voluptuous
We have replaced voluptuous with jsonschema in [1].
Now voluptuous can be removed.
[1]: https://review.openstack.org/#/c/561182/

Change-Id: I99c65ed79ef166839838559a808ee7607389e07a
2018-07-30 19:03:26 -07:00
licanwei
843cd493c2 Update watcher-db-manage help doc
Change-Id: I472204687da138f23f51a56e24cc95a9ae3359fb
2018-07-30 04:05:34 -07:00
Alexander Chadin
bad257f402 Fix strategies with additional time to initialize CDM
Change-Id: I995cfe99443744eb9f5746be5fce6302b6a7b834
2018-07-27 13:14:38 +00:00
licanwei
c4821ceedf Add apscheduler_jobs table to models
watcher-db-manage create_schema doesn't create apscheduler_jobs.

Change-Id: I57327317aab0186b0ff641111b90e6f958f1e5fe
Closes-Bug: #1783504
2018-07-26 20:00:34 -07:00
licanwei
abbb1317d3 Fix AttributeError exception
StartError is in exception, not Exception

Change-Id: Iff6ea38a2d0173173719f1cd840d9f3789fcf023
Closes-Bug: #1783924
2018-07-26 19:50:28 -07:00
licanwei
4a5175cbad Add noisy neighbor strategy doc
Change-Id: I84add2103fd12c7b0c7e36d57fdfc4fe43e933b1
2018-07-26 00:45:40 -07:00
49 changed files with 433 additions and 197 deletions

View File

@@ -1,4 +1,5 @@
 [gerrit]
-host=review.openstack.org
+host=review.opendev.org
 port=29418
 project=openstack/watcher.git
+defaultbranch=stable/rocky

View File

@@ -1,18 +1,24 @@
 - project:
+    templates:
+      - openstack-python-jobs
+      - openstack-python35-jobs
+      - publish-openstack-sphinx-docs
+      - check-requirements
+      - release-notes-jobs
     check:
       jobs:
-        - watcher-tempest-functional:
-            voting: false
+        - watcher-tempest-functional
         - watcher-tempest-dummy_optim
         - watcher-tempest-actuator
         - watcher-tempest-basic_optim
         - watcher-tempest-workload_balancing
-        - watcherclient-tempest-functional:
-            voting: false
+        - watcherclient-tempest-functional
+        - watcher-tempest-zone_migration
         - openstack-tox-lower-constraints
     gate:
+      queue: watcher
       jobs:
-        # - watcher-tempest-functional
+        - watcher-tempest-functional
         - openstack-tox-lower-constraints

 - job:
@@ -20,28 +26,35 @@
     parent: watcher-tempest-multinode
     voting: false
     vars:
-      tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_dummy_optim'
+      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_dummy_optim

 - job:
     name: watcher-tempest-actuator
     parent: watcher-tempest-multinode
     voting: false
     vars:
-      tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_actuator'
+      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_actuator

 - job:
     name: watcher-tempest-basic_optim
     parent: watcher-tempest-multinode
     voting: false
     vars:
-      tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_basic_optim'
+      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_basic_optim

 - job:
     name: watcher-tempest-workload_balancing
     parent: watcher-tempest-multinode
     voting: false
     vars:
-      tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_workload_balancing'
+      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_workload_balancing
+
+- job:
+    name: watcher-tempest-zone_migration
+    parent: watcher-tempest-multinode
+    voting: false
+    vars:
+      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_zone_migration

 - job:
     name: watcher-tempest-multinode
@@ -57,7 +70,14 @@
       post-config:
         $NOVA_CONF:
           libvirt:
-            live_migration_uri: 'qemu+ssh://root@%s/system'
+            live_migration_uri: qemu+ssh://root@%s/system
+        $WATCHER_CONF:
+          watcher_cluster_data_model_collectors.compute:
+            period: 120
+          watcher_cluster_data_model_collectors.baremetal:
+            period: 120
+          watcher_cluster_data_model_collectors.storage:
+            period: 120
       devstack_services:
         watcher-api: false
         watcher-decision-engine: true
@@ -78,7 +98,14 @@
       post-config:
         $NOVA_CONF:
           libvirt:
-            live_migration_uri: 'qemu+ssh://root@%s/system'
+            live_migration_uri: qemu+ssh://root@%s/system
+        $WATCHER_CONF:
+          watcher_cluster_data_model_collectors.compute:
+            period: 120
+          watcher_cluster_data_model_collectors.baremetal:
+            period: 120
+          watcher_cluster_data_model_collectors.storage:
+            period: 120
       test-config:
         $TEMPEST_CONFIG:
           compute:
@@ -87,15 +114,16 @@
           live_migration: true
           block_migration_for_live_migration: true
       devstack_plugins:
-        ceilometer: https://git.openstack.org/openstack/ceilometer
+        ceilometer: https://opendev.org/openstack/ceilometer

 - job:
     name: watcher-tempest-functional
     parent: devstack-tempest
+    voting: false
     timeout: 7200
     required-projects:
       - openstack/ceilometer
-      - openstack-infra/devstack-gate
+      - openstack/devstack-gate
       - openstack/python-openstackclient
       - openstack/python-watcherclient
       - openstack/watcher
@@ -103,7 +131,7 @@
       - openstack/tempest
     vars:
       devstack_plugins:
-        watcher: https://git.openstack.org/openstack/watcher
+        watcher: https://opendev.org/openstack/watcher
       devstack_services:
         tls-proxy: false
         watcher-api: true
@@ -115,8 +143,8 @@
         s-object: false
         s-proxy: false
       devstack_localrc:
-        TEMPEST_PLUGINS: '/opt/stack/watcher-tempest-plugin'
-      tempest_test_regex: 'watcher_tempest_plugin.tests.api'
+        TEMPEST_PLUGINS: /opt/stack/watcher-tempest-plugin
+      tempest_test_regex: watcher_tempest_plugin.tests.api
       tox_envlist: all
       tox_environment:
         # Do we really need to set this? It's cargo culted
@@ -128,9 +156,10 @@
     # This job is used in python-watcherclient repo
     name: watcherclient-tempest-functional
     parent: watcher-tempest-functional
+    voting: false
     timeout: 4200
     vars:
       tempest_concurrency: 1
       devstack_localrc:
-        TEMPEST_PLUGINS: '/opt/stack/python-watcherclient'
-      tempest_test_regex: 'watcherclient.tests.functional'
+        TEMPEST_PLUGINS: /opt/stack/python-watcherclient
+      tempest_test_regex: watcherclient.tests.functional

View File

@@ -317,6 +317,7 @@ function start_watcher {
 function stop_watcher {
     if [[ "$WATCHER_USE_MOD_WSGI" == "True" ]]; then
         disable_apache_site watcher-api
+        restart_apache_server
     else
         stop_process watcher-api
     fi

View File

@@ -35,7 +35,7 @@ VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
 NOVA_INSTANCES_PATH=/opt/stack/data/instances

 # Enable the Ceilometer plugin for the compute agent
-enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
+enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
 disable_service ceilometer-acentral,ceilometer-collector,ceilometer-api

 LOGFILE=$DEST/logs/stack.sh.log

View File

@@ -25,13 +25,13 @@ MULTI_HOST=1
 disable_service n-cpu

 # Enable the Watcher Dashboard plugin
-enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard
+enable_plugin watcher-dashboard https://git.openstack.org/openstack/watcher-dashboard

 # Enable the Watcher plugin
-enable_plugin watcher git://git.openstack.org/openstack/watcher
+enable_plugin watcher https://git.openstack.org/openstack/watcher

 # Enable the Ceilometer plugin
-enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
+enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer

 # This is the controller node, so disable the ceilometer compute agent
 disable_service ceilometer-acompute

View File

@@ -178,7 +178,7 @@ You can easily generate and update a sample configuration file
 named :ref:`watcher.conf.sample <watcher_sample_configuration_files>` by using
 these following commands::

-    $ git clone git://git.openstack.org/openstack/watcher
+    $ git clone https://git.openstack.org/openstack/watcher
     $ cd watcher/
     $ tox -e genconfig
     $ vi etc/watcher/watcher.conf.sample

@@ -239,10 +239,6 @@ so that the watcher service is configured for your needs.

     [DEFAULT]

-    # The messaging driver to use, defaults to rabbit. Other drivers
-    # include qpid and zmq. (string value)
-    #rpc_backend = rabbit
-
     # The default exchange under which topics are scoped. May be
     # overridden by an exchange name specified in the transport_url
     # option. (string value)

View File

@@ -19,7 +19,7 @@ model. To enable the Watcher plugin with DevStack, add the following to the
 `[[local|localrc]]` section of your controller's `local.conf` to enable the
 Watcher plugin::

-    enable_plugin watcher git://git.openstack.org/openstack/watcher
+    enable_plugin watcher https://git.openstack.org/openstack/watcher

 For more detailed instructions, see `Detailed DevStack Instructions`_. Check
 out the `DevStack documentation`_ for more information regarding DevStack.

View File

@@ -241,10 +241,9 @@ purge
    The maximum number of database objects we expect to be deleted. If exceeded,
    this will prevent any deletion.

-.. option:: -t, --audit-template
+.. option:: -t, --goal

-   Either the UUID or name of the soft deleted audit template to purge. This
-   will also include any related objects with it.
+   Either the UUID or name of the goal to purge.

 .. option:: -e, --exclude-orphans

View File

@@ -0,0 +1,97 @@
+==============
+Noisy neighbor
+==============
+
+Synopsis
+--------
+
+**display name**: ``Noisy Neighbor``
+
+**goal**: ``noisy_neighbor``
+
+.. watcher-term:: watcher.decision_engine.strategy.strategies.noisy_neighbor.NoisyNeighbor
+
+Requirements
+------------
+
+Metrics
+*******
+
+The *noisy_neighbor* strategy requires the following metrics:
+
+============================ ============ ======= =======================
+metric                       service name plugins comment
+============================ ============ ======= =======================
+``cpu_l3_cache``             ceilometer_  none    Intel CMT_ is required
+============================ ============ ======= =======================
+
+.. _CMT: http://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html
+.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
+
+Cluster data model
+******************
+
+Default Watcher's Compute cluster data model:
+
+.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
+
+Actions
+*******
+
+Default Watcher's actions:
+
+.. list-table::
+   :widths: 30 30
+   :header-rows: 1
+
+   * - action
+     - description
+   * - ``migration``
+     - .. watcher-term:: watcher.applier.actions.migration.Migrate
+
+Planner
+*******
+
+Default Watcher's planner:
+
+.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
+
+Configuration
+-------------
+
+Strategy parameter is:
+
+==================== ====== ============= ============================
+parameter            type   default Value description
+==================== ====== ============= ============================
+``cache_threshold``  Number 35.0          Performance drop in L3_cache
+                                          threshold for migration
+==================== ====== ============= ============================
+
+Efficacy Indicator
+------------------
+
+None
+
+Algorithm
+---------
+
+For more information on the noisy neighbor strategy please refer to:
+http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/noisy_neighbor_strategy.html
+
+How to use it ?
+---------------
+
+.. code-block:: shell
+
+    $ openstack optimize audittemplate create \
+      at1 noisy_neighbor --strategy noisy_neighbor
+
+    $ openstack optimize audit create -a at1 \
+      -p cache_threshold=45.0
+
+External Links
+--------------
+
+None

View File

@@ -155,7 +155,6 @@ ujson==1.35
 unittest2==1.1.0
 urllib3==1.22
 vine==1.1.4
-voluptuous==0.11.1
 waitress==1.1.0
 warlock==1.3.0
 WebOb==1.7.4

View File

@@ -28,7 +28,6 @@ PasteDeploy>=1.5.2 # MIT
 pbr>=3.1.1 # Apache-2.0
 pecan>=1.2.1 # BSD
 PrettyTable<0.8,>=0.7.2 # BSD
-voluptuous>=0.11.1 # BSD License
 gnocchiclient>=7.0.1 # Apache-2.0
 python-ceilometerclient>=2.9.0 # Apache-2.0
 python-cinderclient>=3.5.0 # Apache-2.0

View File

@@ -7,7 +7,7 @@ skipsdist = True
 usedevelop = True
 whitelist_externals = find
                       rm
-install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
+install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky} {opts} {packages}
 setenv =
     VIRTUAL_ENV={envdir}
 deps = -r{toxinidir}/test-requirements.txt

View File

@@ -581,7 +581,7 @@ class ActionPlansController(rest.RestController):
         if action_plan_to_start['state'] != \
                 objects.action_plan.State.RECOMMENDED:
-            raise Exception.StartError(
+            raise exception.StartError(
                 state=action_plan_to_start.state)
         action_plan_to_start['state'] = objects.action_plan.State.PENDING

View File

@@ -148,18 +148,23 @@ class AuditTemplatePostType(wtypes.Base):
                 "included and excluded together"))

         if audit_template.strategy:
-            available_strategies = objects.Strategy.list(
-                AuditTemplatePostType._ctx)
-            available_strategies_map = {
-                s.uuid: s for s in available_strategies}
-            if audit_template.strategy not in available_strategies_map:
+            try:
+                if (common_utils.is_uuid_like(audit_template.strategy) or
+                        common_utils.is_int_like(audit_template.strategy)):
+                    strategy = objects.Strategy.get(
+                        AuditTemplatePostType._ctx, audit_template.strategy)
+                else:
+                    strategy = objects.Strategy.get_by_name(
+                        AuditTemplatePostType._ctx, audit_template.strategy)
+            except Exception:
                 raise exception.InvalidStrategy(
                     strategy=audit_template.strategy)
-            strategy = available_strategies_map[audit_template.strategy]

             # Check that the strategy we indicate is actually related to the
             # specified goal
             if strategy.goal_id != goal.id:
+                available_strategies = objects.Strategy.list(
+                    AuditTemplatePostType._ctx)
                 choices = ["'%s' (%s)" % (s.uuid, s.name)
                            for s in available_strategies]
                 raise exception.InvalidStrategy(

View File

@@ -16,7 +16,6 @@

 from oslo_config import cfg
-from oslo_utils import importutils
 from pecan import hooks
 from six.moves import http_client

@@ -60,14 +59,8 @@ class ContextHook(hooks.PecanHook):
         roles = (headers.get('X-Roles', None) and
                  headers.get('X-Roles').split(','))

-        auth_url = headers.get('X-Auth-Url')
-        if auth_url is None:
-            importutils.import_module('keystonemiddleware.auth_token')
-            auth_url = cfg.CONF.keystone_authtoken.www_authenticate_uri
-
         state.request.context = context.make_context(
             auth_token=auth_token,
-            auth_url=auth_url,
             auth_token_info=auth_token_info,
             user=user,
             user_id=user_id,

View File

@@ -52,7 +52,8 @@ class APISchedulingService(scheduling.BackgroundSchedulerService):
                 self.services_status[service.id] = result
                 notifications.service.send_service_update(context, service,
                                                           state=result)
-                if result == failed_s:
+                if (result == failed_s) and (
+                        service.name == 'watcher-decision-engine'):
                     audit_filters = {
                         'audit_type': objects.audit.AuditType.CONTINUOUS.value,
                         'state': objects.audit.State.ONGOING,

View File

@@ -10,7 +10,7 @@
# License for the specific language governing permissions and limitations # License for the specific language governing permissions and limitations
# under the License. # under the License.
from ceilometerclient import client as ceclient
from cinderclient import client as ciclient from cinderclient import client as ciclient
from glanceclient import client as glclient from glanceclient import client as glclient
from gnocchiclient import client as gnclient from gnocchiclient import client as gnclient
@@ -25,6 +25,12 @@ from watcher.common import exception
from watcher import conf from watcher import conf
try:
from ceilometerclient import client as ceclient
HAS_CEILCLIENT = True
except ImportError:
HAS_CEILCLIENT = False
CONF = conf.CONF CONF = conf.CONF
_CLIENTS_AUTH_GROUP = 'watcher_clients_auth' _CLIENTS_AUTH_GROUP = 'watcher_clients_auth'
@@ -83,8 +89,10 @@ class OpenStackClients(object):
novaclient_version = self._get_client_option('nova', 'api_version') novaclient_version = self._get_client_option('nova', 'api_version')
nova_endpoint_type = self._get_client_option('nova', 'endpoint_type') nova_endpoint_type = self._get_client_option('nova', 'endpoint_type')
nova_region_name = self._get_client_option('nova', 'region_name')
self._nova = nvclient.Client(novaclient_version, self._nova = nvclient.Client(novaclient_version,
endpoint_type=nova_endpoint_type, endpoint_type=nova_endpoint_type,
region_name=nova_region_name,
session=self.session) session=self.session)
return self._nova return self._nova
@@ -96,8 +104,10 @@ class OpenStackClients(object):
glanceclient_version = self._get_client_option('glance', 'api_version') glanceclient_version = self._get_client_option('glance', 'api_version')
glance_endpoint_type = self._get_client_option('glance', glance_endpoint_type = self._get_client_option('glance',
'endpoint_type') 'endpoint_type')
glance_region_name = self._get_client_option('glance', 'region_name')
self._glance = glclient.Client(glanceclient_version, self._glance = glclient.Client(glanceclient_version,
interface=glance_endpoint_type, interface=glance_endpoint_type,
region_name=glance_region_name,
session=self.session) session=self.session)
return self._glance return self._glance
@@ -110,8 +120,11 @@ class OpenStackClients(object):
                                                       'api_version')
         gnocchiclient_interface = self._get_client_option('gnocchi',
                                                           'endpoint_type')
+        gnocchiclient_region_name = self._get_client_option('gnocchi',
+                                                            'region_name')
         adapter_options = {
-            "interface": gnocchiclient_interface
+            "interface": gnocchiclient_interface,
+            "region_name": gnocchiclient_region_name
         }
         self._gnocchi = gnclient.Client(gnocchiclient_version,
@@ -127,8 +140,10 @@ class OpenStackClients(object):
         cinderclient_version = self._get_client_option('cinder', 'api_version')
         cinder_endpoint_type = self._get_client_option('cinder',
                                                        'endpoint_type')
+        cinder_region_name = self._get_client_option('cinder', 'region_name')
         self._cinder = ciclient.Client(cinderclient_version,
                                        endpoint_type=cinder_endpoint_type,
+                                       region_name=cinder_region_name,
                                        session=self.session)
         return self._cinder
@@ -141,9 +156,12 @@ class OpenStackClients(object):
                                                          'api_version')
         ceilometer_endpoint_type = self._get_client_option('ceilometer',
                                                            'endpoint_type')
+        ceilometer_region_name = self._get_client_option('ceilometer',
+                                                         'region_name')
         self._ceilometer = ceclient.get_client(
             ceilometerclient_version,
             endpoint_type=ceilometer_endpoint_type,
+            region_name=ceilometer_region_name,
             session=self.session)
         return self._ceilometer
@@ -156,6 +174,8 @@ class OpenStackClients(object):
             'monasca', 'api_version')
         monascaclient_interface = self._get_client_option(
             'monasca', 'interface')
+        monascaclient_region = self._get_client_option(
+            'monasca', 'region_name')
         token = self.session.get_token()
         watcher_clients_auth_config = CONF.get(_CLIENTS_AUTH_GROUP)
         service_type = 'monitoring'

@@ -172,7 +192,8 @@ class OpenStackClients(object):
             'password': watcher_clients_auth_config.password,
         }
         endpoint = self.session.get_endpoint(service_type=service_type,
-                                             interface=monascaclient_interface)
+                                             interface=monascaclient_interface,
+                                             region_name=monascaclient_region)
         self._monasca = monclient.Client(
             monascaclient_version, endpoint, **monasca_kwargs)
@@ -188,9 +209,11 @@ class OpenStackClients(object):
                                                         'api_version')
         neutron_endpoint_type = self._get_client_option('neutron',
                                                         'endpoint_type')
+        neutron_region_name = self._get_client_option('neutron', 'region_name')
         self._neutron = netclient.Client(neutronclient_version,
                                          endpoint_type=neutron_endpoint_type,
+                                         region_name=neutron_region_name,
                                          session=self.session)
         self._neutron.format = 'json'
         return self._neutron
@@ -202,7 +225,9 @@ class OpenStackClients(object):
         ironicclient_version = self._get_client_option('ironic', 'api_version')
         endpoint_type = self._get_client_option('ironic', 'endpoint_type')
+        ironic_region_name = self._get_client_option('ironic', 'region_name')
         self._ironic = irclient.get_client(ironicclient_version,
                                            os_endpoint_type=endpoint_type,
                                            region_name=ironic_region_name,
                                            session=self.session)
         return self._ironic
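The hunks above apply one pattern to every client factory: read an optional `region_name` from per-service configuration and forward it to the client constructor. A minimal standalone sketch of that pattern — the config layout and `FakeClient` class here are illustrative stand-ins, not Watcher's real oslo.config API:

```python
# Illustrative per-service options; in Watcher these come from oslo.config
# groups such as [nova_client] and [glance_client].
SERVICE_OPTS = {
    "nova": {"endpoint_type": "publicURL", "region_name": "RegionOne"},
    "glance": {"endpoint_type": "publicURL", "region_name": None},
}


class FakeClient:
    """Stand-in for a python-*client class that accepts region_name."""

    def __init__(self, version, endpoint_type=None, region_name=None):
        self.version = version
        self.endpoint_type = endpoint_type
        self.region_name = region_name


def get_client_option(service, option):
    # Mirrors the role of OpenStackClients._get_client_option
    return SERVICE_OPTS[service][option]


def build_client(service, version="2"):
    endpoint_type = get_client_option(service, "endpoint_type")
    region_name = get_client_option(service, "region_name")
    return FakeClient(version,
                      endpoint_type=endpoint_type,
                      region_name=region_name)


nova = build_client("nova")
print(nova.region_name)  # -> RegionOne
```

When `region_name` is unset the client falls back to its own default region selection, which is why the option is added without a default.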

View File

@@ -21,12 +21,15 @@ from watcher.common import rpc
 from watcher import version


-def parse_args(argv, default_config_files=None):
+def parse_args(argv, default_config_files=None, default_config_dirs=None):
     default_config_files = (default_config_files or
                             cfg.find_config_files(project='watcher'))
+    default_config_dirs = (default_config_dirs or
+                           cfg.find_config_dirs(project='watcher'))
     rpc.set_defaults(control_exchange='watcher')
     cfg.CONF(argv[1:],
              project='python-watcher',
              version=version.version_info.release_string(),
+             default_config_dirs=default_config_dirs,
              default_config_files=default_config_files)
     rpc.init(cfg.CONF)
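The change above extends `parse_args()` with the same "explicit argument or discovered default" fallback it already used for config files. A minimal sketch of that fallback — the discovery helpers below mimic `cfg.find_config_files`/`cfg.find_config_dirs` rather than importing oslo.config:

```python
# Stand-ins for oslo.config's discovery helpers (illustrative paths only).
def find_config_files(project):
    return ["/etc/%s/%s.conf" % (project, project)]


def find_config_dirs(project):
    return ["/etc/%s/%s.conf.d" % (project, project)]


def parse_args(default_config_files=None, default_config_dirs=None):
    # If the caller passes an explicit value it wins; otherwise fall back
    # to project-based discovery, exactly as in the hunk above.
    default_config_files = (default_config_files or
                            find_config_files(project="watcher"))
    default_config_dirs = (default_config_dirs or
                           find_config_dirs(project="watcher"))
    return default_config_files, default_config_dirs


files, dirs = parse_args()
print(dirs)  # -> ['/etc/watcher/watcher.conf.d']
```

Passing `default_config_dirs` through to `cfg.CONF(...)` lets operators drop override snippets into a `*.conf.d` directory without touching the main config file.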

View File

@@ -23,9 +23,9 @@ class RequestContext(context.RequestContext):

     def __init__(self, user_id=None, project_id=None, is_admin=None,
                  roles=None, timestamp=None, request_id=None, auth_token=None,
-                 auth_url=None, overwrite=True, user_name=None,
-                 project_name=None, domain_name=None, domain_id=None,
-                 auth_token_info=None, **kwargs):
+                 overwrite=True, user_name=None, project_name=None,
+                 domain_name=None, domain_id=None, auth_token_info=None,
+                 **kwargs):
         """Stores several additional request parameters:

         :param domain_id: The ID of the domain.

@@ -70,7 +70,6 @@ class RequestContext(context.RequestContext):
         # FIXME(dims): user_id and project_id duplicate information that is
         # already present in the oslo_context's RequestContext. We need to
         # get rid of them.
-        self.auth_url = auth_url
         self.domain_name = domain_name
         self.domain_id = domain_id
         self.auth_token_info = auth_token_info

View File

@@ -150,7 +150,7 @@ class ResourceNotFound(ObjectNotFound):

 class InvalidParameter(Invalid):
     msg_fmt = _("%(parameter)s has to be of type %(parameter_type)s")


 class InvalidIdentity(Invalid):

@@ -514,9 +514,9 @@ class NegativeLimitError(WatcherException):

 class NotificationPayloadError(WatcherException):
-    _msg_fmt = _("Payload not populated when trying to send notification "
-                 "\"%(class_name)s\"")
+    msg_fmt = _("Payload not populated when trying to send notification "
+                "\"%(class_name)s\"")


 class InvalidPoolAttributeValue(Invalid):
     msg_fmt = _("The %(name)s pool %(attribute)s is not integer")

View File

@@ -22,7 +22,6 @@ import time
 from novaclient import api_versions
 from oslo_log import log

-import cinderclient.exceptions as ciexceptions
 import glanceclient.exc as glexceptions
 import novaclient.exceptions as nvexceptions

@@ -75,7 +74,8 @@ class NovaHelper(object):
             raise exception.ComputeNodeNotFound(name=node_hostname)

     def get_instance_list(self):
-        return self.nova.servers.list(search_opts={'all_tenants': True})
+        return self.nova.servers.list(search_opts={'all_tenants': True},
+                                      limit=-1)

     def get_flavor_list(self):
         return self.nova.flavors.list(**{'is_public': None})

@@ -705,31 +705,13 @@ class NovaHelper(object):
     def get_instances_by_node(self, host):
         return [instance for instance in
-                self.nova.servers.list(search_opts={"all_tenants": True})
-                if self.get_hostname(instance) == host]
+                self.nova.servers.list(search_opts={"all_tenants": True,
+                                                    "host": host},
+                                       limit=-1)]

     def get_hostname(self, instance):
         return str(getattr(instance, 'OS-EXT-SRV-ATTR:host'))

-    def get_flavor_instance(self, instance, cache):
-        fid = instance.flavor['id']
-        if fid in cache:
-            flavor = cache.get(fid)
-        else:
-            try:
-                flavor = self.nova.flavors.get(fid)
-            except ciexceptions.NotFound:
-                flavor = None
-            cache[fid] = flavor
-        attr_defaults = [('name', 'unknown-id-%s' % fid),
-                         ('vcpus', 0), ('ram', 0), ('disk', 0),
-                         ('ephemeral', 0), ('extra_specs', {})]
-        for attr, default in attr_defaults:
-            if not flavor:
-                instance.flavor[attr] = default
-                continue
-            instance.flavor[attr] = getattr(flavor, attr, default)
-
     def get_running_migration(self, instance_id):
         return self.nova.server_migrations.list(server=instance_id)
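The `get_instances_by_node` change above moves the host filter from the client to the API (`search_opts={"host": host}`), so Nova returns only matching servers instead of the helper listing everything and filtering locally. A standalone sketch of the difference — the in-memory "API" below is an illustrative stand-in for `nova.servers.list`:

```python
# Fake instance inventory standing in for the Nova API's backing data.
INSTANCES = [
    {"uuid": "a", "host": "node-1"},
    {"uuid": "b", "host": "node-2"},
    {"uuid": "c", "host": "node-1"},
]


def servers_list(search_opts=None, limit=None):
    # Stand-in for nova.servers.list: applies the "host" filter
    # server-side, so the caller never sees non-matching instances.
    # (In novaclient, limit=-1 means "no limit": fetch every page.)
    search_opts = search_opts or {}
    return [i for i in INSTANCES
            if "host" not in search_opts or i["host"] == search_opts["host"]]


def get_instances_by_node(host):
    return servers_list(search_opts={"all_tenants": True, "host": host},
                        limit=-1)


print([i["uuid"] for i in get_instances_by_node("node-1")])  # -> ['a', 'c']
```

Server-side filtering avoids transferring and paging through the whole instance list on large clouds, which is the point of the backported change.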

View File

@@ -265,8 +265,7 @@ class Service(service.ServiceBase):
                 allow_requeue=False, pool=CONF.host)

     def start(self):
-        LOG.debug("Connecting to '%s' (%s)",
-                  CONF.transport_url, CONF.rpc_backend)
+        LOG.debug("Connecting to '%s'", CONF.transport_url)
         if self.conductor_topic_handler:
             self.conductor_topic_handler.start()
         if self.notification_handler:

@@ -275,8 +274,7 @@ class Service(service.ServiceBase):
             self.heartbeat.start()

     def stop(self):
-        LOG.debug("Disconnecting from '%s' (%s)",
-                  CONF.transport_url, CONF.rpc_backend)
+        LOG.debug("Disconnecting from '%s'", CONF.transport_url)
         if self.conductor_topic_handler:
             self.conductor_topic_handler.stop()
         if self.notification_handler:

View File

@@ -30,7 +30,10 @@ CEILOMETER_CLIENT_OPTS = [
                default='internalURL',
                help='Type of endpoint to use in ceilometerclient.'
                     'Supported values: internalURL, publicURL, adminURL'
-                    'The default is internalURL.')]
+                    'The default is internalURL.'),
+    cfg.StrOpt('region_name',
+               help='Region in Identity service catalog to use for '
+                    'communication with the OpenStack service.')]


 def register_opts(conf):

View File

@@ -29,7 +29,10 @@ CINDER_CLIENT_OPTS = [
                default='publicURL',
                help='Type of endpoint to use in cinderclient.'
                     'Supported values: internalURL, publicURL, adminURL'
-                    'The default is publicURL.')]
+                    'The default is publicURL.'),
+    cfg.StrOpt('region_name',
+               help='Region in Identity service catalog to use for '
+                    'communication with the OpenStack service.')]


 def register_opts(conf):

View File

@@ -29,7 +29,10 @@ GLANCE_CLIENT_OPTS = [
                default='publicURL',
                help='Type of endpoint to use in glanceclient.'
                     'Supported values: internalURL, publicURL, adminURL'
-                    'The default is publicURL.')]
+                    'The default is publicURL.'),
+    cfg.StrOpt('region_name',
+               help='Region in Identity service catalog to use for '
+                    'communication with the OpenStack service.')]


 def register_opts(conf):

View File

@@ -30,6 +30,9 @@ GNOCCHI_CLIENT_OPTS = [
                help='Type of endpoint to use in gnocchi client.'
                     'Supported values: internal, public, admin'
                     'The default is public.'),
+    cfg.StrOpt('region_name',
+               help='Region in Identity service catalog to use for '
+                    'communication with the OpenStack service.'),
     cfg.IntOpt('query_max_retries',
                default=10,
                mutable=True,

View File

@@ -29,7 +29,10 @@ IRONIC_CLIENT_OPTS = [
                default='publicURL',
                help='Type of endpoint to use in ironicclient.'
                     'Supported values: internalURL, publicURL, adminURL'
-                    'The default is publicURL.')]
+                    'The default is publicURL.'),
+    cfg.StrOpt('region_name',
+               help='Region in Identity service catalog to use for '
+                    'communication with the OpenStack service.')]


 def register_opts(conf):

View File

@@ -29,7 +29,10 @@ MONASCA_CLIENT_OPTS = [
                default='internal',
                help='Type of interface used for monasca endpoint.'
                     'Supported values: internal, public, admin'
-                    'The default is internal.')]
+                    'The default is internal.'),
+    cfg.StrOpt('region_name',
+               help='Region in Identity service catalog to use for '
+                    'communication with the OpenStack service.')]


 def register_opts(conf):

View File

@@ -29,7 +29,10 @@ NEUTRON_CLIENT_OPTS = [
                default='publicURL',
                help='Type of endpoint to use in neutronclient.'
                     'Supported values: internalURL, publicURL, adminURL'
-                    'The default is publicURL.')]
+                    'The default is publicURL.'),
+    cfg.StrOpt('region_name',
+               help='Region in Identity service catalog to use for '
+                    'communication with the OpenStack service.')]


 def register_opts(conf):

View File

@@ -29,7 +29,10 @@ NOVA_CLIENT_OPTS = [
                default='publicURL',
                help='Type of endpoint to use in novaclient.'
                     'Supported values: internalURL, publicURL, adminURL'
-                    'The default is publicURL.')]
+                    'The default is publicURL.'),
+    cfg.StrOpt('region_name',
+               help='Region in Identity service catalog to use for '
+                    'communication with the OpenStack service.')]


 def register_opts(conf):

View File

@@ -19,6 +19,7 @@
 import datetime

 from ceilometerclient import exc
+from oslo_log import log
 from oslo_utils import timeutils

 from watcher._i18n import _

@@ -26,6 +27,8 @@ from watcher.common import clients
 from watcher.common import exception
 from watcher.datasource import base

+LOG = log.getLogger(__name__)
+

 class CeilometerHelper(base.DataSourceBase):

@@ -112,15 +115,15 @@ class CeilometerHelper(base.DataSourceBase):
                 self.osc.reset_clients()
                 self.ceilometer = self.osc.ceilometer()
                 return f(*args, **kargs)
-            except Exception:
-                raise
+            except Exception as e:
+                LOG.exception(e)

     def check_availability(self):
-        try:
-            self.query_retry(self.ceilometer.resources.list)
-        except Exception:
-            return 'not available'
-        return 'available'
+        status = self.query_retry(self.ceilometer.resources.list)
+        if status:
+            return 'available'
+        else:
+            return 'not available'

     def query_sample(self, meter_name, query, limit=1):
         return self.query_retry(f=self.ceilometer.samples.list,

@@ -138,9 +141,8 @@ class CeilometerHelper(base.DataSourceBase):

     def list_metrics(self):
         """List the user's meters."""
-        try:
-            meters = self.query_retry(f=self.ceilometer.meters.list)
-        except Exception:
+        meters = self.query_retry(f=self.ceilometer.meters.list)
+        if not meters:
             return set()
         else:
             return meters
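The datasource changes above (here and in the Gnocchi/Monasca helpers below) all follow one idea from the "Don't throw exception when missing metrics" commit: `query_retry` logs and returns `None` instead of raising, and each caller decides what an empty result means. A minimal sketch of that contract, with illustrative function names:

```python
import logging

LOG = logging.getLogger(__name__)


def query_retry(f, *args, **kwargs):
    # Mirrors the new behavior: swallow the error, log it, return None,
    # and let the caller (e.g. the strategy) decide what to do.
    try:
        return f(*args, **kwargs)
    except Exception as e:
        LOG.exception(e)


def list_metrics(meters_list):
    # Caller-side handling: an empty/None result becomes an empty set
    # instead of a failed Audit.
    meters = query_retry(f=meters_list)
    if not meters:
        return set()
    return meters


def broken():
    raise RuntimeError("datasource down")


assert list_metrics(broken) == set()
assert list_metrics(lambda: ["cpu_util"]) == ["cpu_util"]
```

The trade-off is that a datasource outage now surfaces as missing data in the strategy rather than an aborted Audit, so the strategies must tolerate `None` values.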

View File

@@ -24,7 +24,6 @@ from oslo_config import cfg
 from oslo_log import log

 from watcher.common import clients
-from watcher.common import exception
 from watcher.common import utils as common_utils
 from watcher.datasource import base

@@ -49,27 +48,25 @@ class GnocchiHelper(base.DataSourceBase):
             except Exception as e:
                 LOG.exception(e)
                 time.sleep(CONF.gnocchi_client.query_timeout)
-        raise exception.DataSourceNotAvailable(datasource='gnocchi')

     def check_availability(self):
-        try:
-            self.query_retry(self.gnocchi.status.get)
-        except Exception:
-            return 'not available'
-        return 'available'
+        status = self.query_retry(self.gnocchi.status.get)
+        if status:
+            return 'available'
+        else:
+            return 'not available'

     def list_metrics(self):
         """List the user's meters."""
-        try:
-            response = self.query_retry(f=self.gnocchi.metric.list)
-        except Exception:
+        response = self.query_retry(f=self.gnocchi.metric.list)
+        if not response:
             return set()
         else:
             return set([metric['name'] for metric in response])

     def statistic_aggregation(self, resource_id=None, meter_name=None,
                               period=300, granularity=300, dimensions=None,
-                              aggregation='avg', group_by='*'):
+                              aggregation='mean', group_by='*'):
         """Representing a statistic aggregate by operators

         :param resource_id: id of resource to list statistics for.

@@ -95,7 +92,9 @@ class GnocchiHelper(base.DataSourceBase):
                 f=self.gnocchi.resource.search, **kwargs)

             if not resources:
-                raise exception.ResourceNotFound(name=resource_id)
+                LOG.warning("The {0} resource {1} could not be "
+                            "found".format(self.NAME, resource_id))
+                return

             resource_id = resources[0]['id']

View File

@@ -29,7 +29,12 @@ class DataSourceManager(object):
         self._monasca = None
         self._gnocchi = None
         self.metric_map = base.DataSourceBase.METRIC_MAP
-        self.datasources = self.config.datasources
+        if hasattr(self.config, 'datasources'):
+            self.datasources = self.config.datasources
+        elif hasattr(self.config, 'datasource'):
+            self.datasources = [self.config.datasource]
+        else:
+            self.datasources = []

     @property
     def ceilometer(self):

View File

@@ -19,11 +19,14 @@
 import datetime

 from monascaclient import exc
+from oslo_log import log

 from watcher.common import clients
 from watcher.common import exception
 from watcher.datasource import base

+LOG = log.getLogger(__name__)
+

 class MonascaHelper(base.DataSourceBase):

@@ -42,8 +45,8 @@ class MonascaHelper(base.DataSourceBase):
                 self.osc.reset_clients()
                 self.monasca = self.osc.monasca()
                 return f(*args, **kwargs)
-            except Exception:
-                raise
+            except Exception as e:
+                LOG.exception(e)

     def _format_time_params(self, start_time, end_time, period):
         """Format time-related params to the correct Monasca format

@@ -67,11 +70,11 @@ class MonascaHelper(base.DataSourceBase):
         return start_timestamp, end_timestamp, period

     def check_availability(self):
-        try:
-            self.query_retry(self.monasca.metrics.list)
-        except Exception:
-            return 'not available'
-        return 'available'
+        status = self.query_retry(self.monasca.metrics.list)
+        if status:
+            return 'available'
+        else:
+            return 'not available'

     def list_metrics(self):
         # TODO(alexchadin): this method should be implemented in accordance to

View File

@@ -23,8 +23,10 @@ from sqlalchemy import Boolean
 from sqlalchemy import Column
 from sqlalchemy import DateTime
 from sqlalchemy.ext.declarative import declarative_base
+from sqlalchemy import Float
 from sqlalchemy import ForeignKey
 from sqlalchemy import Integer
+from sqlalchemy import LargeBinary
 from sqlalchemy import Numeric
 from sqlalchemy import orm
 from sqlalchemy import String

@@ -296,3 +298,23 @@ class ActionDescription(Base):
     id = Column(Integer, primary_key=True)
     action_type = Column(String(255), nullable=False)
     description = Column(String(255), nullable=False)
+
+
+class APScheulerJob(Base):
+    """Represents apscheduler jobs"""
+
+    __tablename__ = 'apscheduler_jobs'
+    __table_args__ = (
+        UniqueConstraint('id',
+                         name="uniq_apscheduler_jobs0id"),
+        table_args()
+    )
+    id = Column(String(191), nullable=False, primary_key=True)
+    next_run_time = Column(Float(25), index=True)
+    job_state = Column(LargeBinary, nullable=False)
+    tag = Column(JSONEncodedDict(), nullable=True)
+    service_id = Column(Integer, ForeignKey('services.id'),
+                        nullable=False)
+    service = orm.relationship(
+        Service, foreign_keys=service_id, lazy=None)

View File

@@ -230,21 +230,17 @@ class ModelBuilder(object):
                  "free_capacity_gb", "provisioned_capacity_gb",
                  "allocated_capacity_gb"]

+        node_attributes = {"name": pool.name}
         for attr in attrs:
             try:
-                int(getattr(pool, attr))
+                node_attributes[attr] = int(getattr(pool, attr))
+            except AttributeError:
+                LOG.debug("Attribute %s for pool %s is not provided",
+                          attr, pool.name)
             except ValueError:
                 raise exception.InvalidPoolAttributeValue(
                     name=pool.name, attribute=attr)

-        node_attributes = {
-            "name": pool.name,
-            "total_volumes": pool.total_volumes,
-            "total_capacity_gb": pool.total_capacity_gb,
-            "free_capacity_gb": pool.free_capacity_gb,
-            "provisioned_capacity_gb": pool.provisioned_capacity_gb,
-            "allocated_capacity_gb": pool.allocated_capacity_gb}
-
         storage_pool = element.Pool(**node_attributes)
         return storage_pool

View File

@@ -279,7 +279,7 @@ class ChangeNovaServiceStateActionValidator(BaseActionValidator):

     def validate_parents(self, resource_action_map, action):
         host_name = action['input_parameters']['resource_id']
-        self._mapping(resource_action_map, host_name, action.uuid,
+        self._mapping(resource_action_map, host_name, action['uuid'],
                       'change_nova_service_state')
         return []

View File

@@ -31,6 +31,8 @@ LOG = log.getLogger(__name__)
 class SavingEnergy(base.SavingEnergyBaseStrategy):
     """Saving Energy Strategy

+    *Description*
+
     Saving Energy Strategy together with VM Workload Consolidation Strategy
     can perform the Dynamic Power Management (DPM) functionality, which tries
     to save power by dynamically consolidating workloads even further during

@@ -51,19 +53,29 @@ class SavingEnergy(base.SavingEnergyBaseStrategy):
     the given number and there are spare unused nodes(in poweroff state),
     randomly select some nodes(unused,poweroff) and power on them.

+    *Requirements*
+
     In this policy, in order to calculate the min_free_hosts_num,
     users must provide two parameters:

     * One parameter("min_free_hosts_num") is a constant int number.
       This number should be int type and larger than zero.

     * The other parameter("free_used_percent") is a percentage number, which
       describes the quotient of min_free_hosts_num/nodes_with_VMs_num,
       where nodes_with_VMs_num is the number of nodes with VMs running on it.
       This parameter is used to calculate a dynamic min_free_hosts_num.
       The nodes with VMs refer to those nodes with VMs running on it.

     Then choose the larger one as the final min_free_hosts_num.
+
+    *Limitations*
+
+    * at least 2 physical compute hosts
+
+    *Spec URL*
+
+    http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html
     """

     def __init__(self, config, osc=None):

@@ -113,16 +125,16 @@ class SavingEnergy(base.SavingEnergyBaseStrategy):
                 "properties": {
                     "free_used_percent": {
                         "description": ("a rational number, which describes the"
-                                        "quotient of"
+                                        " quotient of"
                                         " min_free_hosts_num/nodes_with_VMs_num"
-                                        "where nodes_with_VMs_num is the number"
-                                        "of nodes with VMs"),
+                                        " where nodes_with_VMs_num is the number"
+                                        " of nodes with VMs"),
                         "type": "number",
                         "default": 10.0
                     },
                     "min_free_hosts_num": {
                         "description": ("minimum number of hosts without VMs"
-                                        "but still powered on"),
+                                        " but still powered on"),
                         "type": "number",
                         "default": 1
                     },

View File

@@ -41,6 +41,15 @@ class StorageCapacityBalance(base.WorkloadStabilizationBaseStrategy):

     * You must have at least 2 cinder volume pools to run
       this strategy.
+
+    *Limitations*
+
+    * Volume migration depends on the storage device.
+      It may take a long time.
+
+    *Spec URL*
+
+    http://specs.openstack.org/openstack/watcher-specs/specs/queens/implemented/storage-capacity-balance.html
     """

     def __init__(self, config, osc=None):

View File

@@ -254,7 +254,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
:param instance: instance for which statistic is gathered. :param instance: instance for which statistic is gathered.
:return: dict :return: dict
""" """
LOG.debug('get_instance_load started') LOG.debug('Getting load for %s', instance.uuid)
instance_load = {'uuid': instance.uuid, 'vcpus': instance.vcpus} instance_load = {'uuid': instance.uuid, 'vcpus': instance.vcpus}
for meter in self.metrics: for meter in self.metrics:
avg_meter = self.datasource_backend.statistic_aggregation( avg_meter = self.datasource_backend.statistic_aggregation(
@@ -269,6 +269,10 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
return return
if meter == 'cpu_util': if meter == 'cpu_util':
avg_meter /= float(100) avg_meter /= float(100)
LOG.debug('Load of %(metric)s for %(instance)s is %(value)s',
{'metric': meter,
'instance': instance.uuid,
'value': avg_meter})
instance_load[meter] = avg_meter instance_load[meter] = avg_meter
return instance_load return instance_load
@@ -293,6 +297,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
for node_id, node in self.get_available_nodes().items(): for node_id, node in self.get_available_nodes().items():
hosts_load[node_id] = {} hosts_load[node_id] = {}
hosts_load[node_id]['vcpus'] = node.vcpus hosts_load[node_id]['vcpus'] = node.vcpus
LOG.debug('Getting load for %s', node_id)
for metric in self.metrics: for metric in self.metrics:
resource_id = '' resource_id = ''
avg_meter = None avg_meter = None
@@ -315,6 +320,10 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
avg_meter /= oslo_utils.units.Ki avg_meter /= oslo_utils.units.Ki
if meter_name == 'compute.node.cpu.percent': if meter_name == 'compute.node.cpu.percent':
avg_meter /= 100 avg_meter /= 100
LOG.debug('Load of %(metric)s for %(node)s is %(value)s',
{'metric': metric,
'node': node_id,
                      'value': avg_meter})
                 hosts_load[node_id][metric] = avg_meter
         return hosts_load
@@ -442,12 +451,15 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
         normalized_load = self.normalize_hosts_load(hosts_load)
         for metric in self.metrics:
             metric_sd = self.get_sd(normalized_load, metric)
-            LOG.info("Standard deviation for %s is %s.",
-                     (metric, metric_sd))
+            LOG.info("Standard deviation for %(metric)s is %(sd)s.",
+                     {'metric': metric, 'sd': metric_sd})
             if metric_sd > float(self.thresholds[metric]):
-                LOG.info("Standard deviation of %s exceeds"
-                         " appropriate threshold %s.",
-                         (metric, metric_sd))
+                LOG.info("Standard deviation of %(metric)s exceeds"
+                         " appropriate threshold %(threshold)s by %(sd)s.",
+                         {'metric': metric,
+                          'threshold': float(self.thresholds[metric]),
+                          'sd': metric_sd})
+                LOG.info("Launching workload optimization...")
                 return self.simulate_migrations(hosts_load)

     def add_migration(self,
@@ -523,12 +535,23 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
                 if weighted_sd < min_sd:
                     min_sd = weighted_sd
                     hosts_load = instance_load[-1]
+                    LOG.info("Migration of %(instance_uuid)s from %(s_host)s "
+                             "to %(host)s reduces standard deviation to "
+                             "%(min_sd)s.",
+                             {'instance_uuid': instance_host['instance'],
+                              's_host': instance_host['s_host'],
+                              'host': instance_host['host'],
+                              'min_sd': min_sd})
                     self.migrate(instance_host['instance'],
                                  instance_host['s_host'],
                                  instance_host['host'])
                 for metric, value in zip(self.metrics, instance_load[:-1]):
                     if value < float(self.thresholds[metric]):
+                        LOG.info("At least one of metrics' values fell "
+                                 "below the threshold values. "
+                                 "Workload Stabilization has successfully "
+                                 "completed optimization process.")
                         balanced = True
                         break
                 if balanced:
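The log changes above replace positional `%s` placeholders fed by a tuple with named placeholders filled from a single dict, the style commonly recommended in OpenStack logging guidelines because the values stay labelled even if a translation reorders the message. A minimal sketch using only the stdlib `logging` module (logger name and sample values are illustrative):

```python
import logging

logging.basicConfig(format="%(message)s", level=logging.INFO)
LOG = logging.getLogger("workload_stabilization")

metric, metric_sd, threshold = "cpu_util", 0.42, 0.2  # sample values

# Named placeholders are resolved from one dict argument; formatting is
# deferred until the record is actually emitted, so suppressed levels
# cost almost nothing.
LOG.info("Standard deviation of %(metric)s exceeds"
         " appropriate threshold %(threshold)s by %(sd)s.",
         {'metric': metric, 'threshold': threshold, 'sd': metric_sd})
```

`logging` special-cases a lone mapping argument, so the dict is applied directly to the `%(name)s` placeholders when the record is rendered.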
@@ -309,10 +309,10 @@ class ZoneMigration(base.ZoneMigrationBaseStrategy):
         else:
             self.instances_migration(targets, action_counter)

-        LOG.debug("action total: %s, pools: %s, nodes %s ", (
+        LOG.debug("action total: %s, pools: %s, nodes %s ",
             action_counter.total_count,
             action_counter.per_pool_count,
-            action_counter.per_node_count))
+            action_counter.per_node_count)

     def post_execute(self):
         """Post-execution phase
@@ -416,7 +416,7 @@ class ZoneMigration(base.ZoneMigrationBaseStrategy):
         src_type = volume.volume_type
         dst_pool, dst_type = self.get_dst_pool_and_type(pool, src_type)
         LOG.debug(src_type)
-        LOG.debug("%s %s", (dst_pool, dst_type))
+        LOG.debug("%s %s", dst_pool, dst_type)

         if self.is_available(volume):
             if src_type == dst_type:
@@ -640,8 +640,8 @@ class ActionCounter(object):
         if not self.is_total_max() and not self.is_pool_max(pool):
             self.per_pool_count[pool] += 1
             self.total_count += 1
-            LOG.debug("total: %s, per_pool: %s", (
-                self.total_count, self.per_pool_count))
+            LOG.debug("total: %s, per_pool: %s",
+                      self.total_count, self.per_pool_count)
             return True
         return False
@@ -657,8 +657,8 @@ class ActionCounter(object):
         if not self.is_total_max() and not self.is_node_max(node):
             self.per_node_count[node] += 1
             self.total_count += 1
-            LOG.debug("total: %s, per_node: %s", (
-                self.total_count, self.per_node_count))
+            LOG.debug("total: %s, per_node: %s",
+                      self.total_count, self.per_node_count)
             return True
         return False
@@ -677,7 +677,7 @@ class ActionCounter(object):
         if pool not in self.per_pool_count:
             self.per_pool_count[pool] = 0
         LOG.debug("the number of parallel per pool %s is %s ",
-                  (pool, self.per_pool_count[pool]))
+                  pool, self.per_pool_count[pool])
         LOG.debug("per pool limit is %s", self.per_pool_limit)
         return self.per_pool_count[pool] >= self.per_pool_limit
@@ -721,7 +721,7 @@ class BaseFilter(object):
         for k, v in six.iteritems(targets):
             if not self.is_allowed(k):
                 continue
-            LOG.debug("filter:%s with the key: %s", (cond, k))
+            LOG.debug("filter:%s with the key: %s", cond, k)
             targets[k] = self.exec_filter(v, cond)

         LOG.debug(targets)
@@ -775,7 +775,7 @@ class ProjectSortFilter(SortMovingToFrontFilter):
         """
         project_id = self.get_project_id(item)
-        LOG.debug("project_id: %s, sort_key: %s", (project_id, sort_key))
+        LOG.debug("project_id: %s, sort_key: %s", project_id, sort_key)
         return project_id == sort_key

     def get_project_id(self, item):
@@ -809,7 +809,7 @@ class ComputeHostSortFilter(SortMovingToFrontFilter):
         """
         host = self.get_host(item)
-        LOG.debug("host: %s, sort_key: %s", (host, sort_key))
+        LOG.debug("host: %s, sort_key: %s", host, sort_key)
         return host == sort_key

     def get_host(self, item):
@@ -837,7 +837,7 @@ class StorageHostSortFilter(SortMovingToFrontFilter):
         """
         host = self.get_host(item)
-        LOG.debug("host: %s, sort_key: %s", (host, sort_key))
+        LOG.debug("host: %s, sort_key: %s", host, sort_key)
         return host == sort_key

     def get_host(self, item):
@@ -909,9 +909,9 @@ class ComputeSpecSortFilter(BaseFilter):
         :returns: memory size of item
         """
-        LOG.debug("item: %s, flavors: %s", (item, flavors))
+        LOG.debug("item: %s, flavors: %s", item, flavors)
         for flavor in flavors:
-            LOG.debug("item.flavor: %s, flavor: %s", (item.flavor, flavor))
+            LOG.debug("item.flavor: %s, flavor: %s", item.flavor, flavor)
             if item.flavor.get('id') == flavor.id:
                 LOG.debug("flavor.ram: %s", flavor.ram)
                 return flavor.ram
@@ -924,9 +924,9 @@ class ComputeSpecSortFilter(BaseFilter):
         :returns: vcpu number of item
         """
-        LOG.debug("item: %s, flavors: %s", (item, flavors))
+        LOG.debug("item: %s, flavors: %s", item, flavors)
         for flavor in flavors:
-            LOG.debug("item.flavor: %s, flavor: %s", (item.flavor, flavor))
+            LOG.debug("item.flavor: %s, flavor: %s", item.flavor, flavor)
             if item.flavor.get('id') == flavor.id:
                 LOG.debug("flavor.vcpus: %s", flavor.vcpus)
                 return flavor.vcpus
@@ -939,9 +939,9 @@ class ComputeSpecSortFilter(BaseFilter):
         :returns: disk size of item
         """
-        LOG.debug("item: %s, flavors: %s", (item, flavors))
+        LOG.debug("item: %s, flavors: %s", item, flavors)
         for flavor in flavors:
-            LOG.debug("item.flavor: %s, flavor: %s", (item.flavor, flavor))
+            LOG.debug("item.flavor: %s, flavor: %s", item.flavor, flavor)
             if item.flavor.get('id') == flavor.id:
                 LOG.debug("flavor.disk: %s", flavor.disk)
                 return flavor.disk
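Every `LOG.debug` change in this file fixes the same bug: the format values were wrapped in a tuple passed as a single argument, so the remaining `%s` placeholders had nothing to consume and formatting raised an error when the record was emitted. A small sketch of the broken and corrected calls, using the stdlib `logging` module with illustrative values:

```python
import logging

logging.basicConfig(format="%(message)s", level=logging.DEBUG)
LOG = logging.getLogger("zone_migration")

total_count, per_pool_count = 3, {"pool0": 2}

# Broken: (total_count, per_pool_count) is ONE argument, a tuple, so the
# second %s is left unfilled and rendering the record raises
# "not enough arguments for format string".
# LOG.debug("total: %s, per_pool: %s", (total_count, per_pool_count))

# Fixed, as in the patch: each value is its own argument.
LOG.debug("total: %s, per_pool: %s", total_count, per_pool_count)
```

Note that the error surfaces only when the record is actually formatted, which is why such calls can sit unnoticed until debug logging is enabled.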
@@ -222,7 +222,6 @@ class TestContextHook(base.FunctionalTest):
             user_id=headers['X-User-Id'],
             domain_id=headers['X-User-Domain-Id'],
             domain_name=headers['X-User-Domain-Name'],
-            auth_url=cfg.CONF.keystone_authtoken.www_authenticate_uri,
             project=headers['X-Project-Name'],
             project_id=headers['X-Project-Id'],
             show_deleted=None,
@@ -243,7 +242,6 @@ class TestContextHook(base.FunctionalTest):
             user_id=headers['X-User-Id'],
             domain_id=headers['X-User-Domain-Id'],
             domain_name=headers['X-User-Domain-Name'],
-            auth_url=cfg.CONF.keystone_authtoken.www_authenticate_uri,
             project=headers['X-Project-Name'],
             project_id=headers['X-Project-Id'],
             show_deleted=None,
@@ -265,7 +263,6 @@ class TestContextHook(base.FunctionalTest):
             user_id=headers['X-User-Id'],
             domain_id=headers['X-User-Domain-Id'],
             domain_name=headers['X-User-Domain-Name'],
-            auth_url=cfg.CONF.keystone_authtoken.www_authenticate_uri,
             project=headers['X-Project-Name'],
             project_id=headers['X-Project-Id'],
             show_deleted=None,
@@ -555,6 +555,35 @@ class TestPost(FunctionalTestWithSetup):
             response.json['created_at']).replace(tzinfo=None)
         self.assertEqual(test_time, return_created_at)

+    @mock.patch.object(timeutils, 'utcnow')
+    def test_create_audit_template_with_strategy_name(self, mock_utcnow):
+        audit_template_dict = post_get_test_audit_template(
+            goal=self.fake_goal1.uuid,
+            strategy=self.fake_strategy1.name)
+        test_time = datetime.datetime(2000, 1, 1, 0, 0)
+        mock_utcnow.return_value = test_time
+
+        response = self.post_json('/audit_templates', audit_template_dict)
+        self.assertEqual('application/json', response.content_type)
+        self.assertEqual(201, response.status_int)
+        # Check location header
+        self.assertIsNotNone(response.location)
+        expected_location = \
+            '/v1/audit_templates/%s' % response.json['uuid']
+        self.assertEqual(urlparse.urlparse(response.location).path,
+                         expected_location)
+        self.assertTrue(utils.is_uuid_like(response.json['uuid']))
+        self.assertNotIn('updated_at', response.json.keys)
+        self.assertNotIn('deleted_at', response.json.keys)
+        self.assertEqual(self.fake_goal1.uuid, response.json['goal_uuid'])
+        self.assertEqual(self.fake_strategy1.uuid,
+                         response.json['strategy_uuid'])
+        self.assertEqual(self.fake_strategy1.name,
+                         response.json['strategy_name'])
+        return_created_at = timeutils.parse_isotime(
+            response.json['created_at']).replace(tzinfo=None)
+        self.assertEqual(test_time, return_created_at)
+
     def test_create_audit_template_validation_with_aggregates(self):
         scope = [{'compute': [{'host_aggregates': [{'id': '*'}]},
                   {'availability_zones': [{'name': 'AZ1'},
@@ -60,9 +60,6 @@ class TestCase(BaseTestCase):
         cfg.CONF.set_override("auth_type", "admin_token",
                               group='keystone_authtoken')
-        cfg.CONF.set_override("www_authenticate_uri",
-                              "http://127.0.0.1/identity",
-                              group='keystone_authtoken')
         app_config_path = os.path.join(os.path.dirname(__file__), 'config.py')
         self.app = testing.load_test_app(app_config_path)
@@ -120,6 +120,7 @@ class TestClients(base.TestCase):
         mock_call.assert_called_once_with(
             CONF.nova_client.api_version,
             endpoint_type=CONF.nova_client.endpoint_type,
+            region_name=CONF.nova_client.region_name,
             session=mock_session)

     @mock.patch.object(clients.OpenStackClients, 'session')
@@ -155,6 +156,7 @@ class TestClients(base.TestCase):
         mock_call.assert_called_once_with(
             CONF.glance_client.api_version,
             interface=CONF.glance_client.endpoint_type,
+            region_name=CONF.glance_client.region_name,
             session=mock_session)

     @mock.patch.object(clients.OpenStackClients, 'session')
@@ -191,7 +193,8 @@ class TestClients(base.TestCase):
         mock_call.assert_called_once_with(
             CONF.gnocchi_client.api_version,
             adapter_options={
-                "interface": CONF.gnocchi_client.endpoint_type},
+                "interface": CONF.gnocchi_client.endpoint_type,
+                "region_name": CONF.gnocchi_client.region_name},
             session=mock_session)

     @mock.patch.object(clients.OpenStackClients, 'session')
@@ -229,6 +232,7 @@ class TestClients(base.TestCase):
         mock_call.assert_called_once_with(
             CONF.cinder_client.api_version,
             endpoint_type=CONF.cinder_client.endpoint_type,
+            region_name=CONF.cinder_client.region_name,
             session=mock_session)

     @mock.patch.object(clients.OpenStackClients, 'session')
@@ -266,6 +270,7 @@ class TestClients(base.TestCase):
             CONF.ceilometer_client.api_version,
             None,
             endpoint_type=CONF.ceilometer_client.endpoint_type,
+            region_name=CONF.ceilometer_client.region_name,
             session=mock_session)

     @mock.patch.object(clients.OpenStackClients, 'session')
@@ -314,6 +319,7 @@ class TestClients(base.TestCase):
         mock_call.assert_called_once_with(
             CONF.neutron_client.api_version,
             endpoint_type=CONF.neutron_client.endpoint_type,
+            region_name=CONF.neutron_client.region_name,
             session=mock_session)

     @mock.patch.object(clients.OpenStackClients, 'session')
@@ -404,6 +410,7 @@ class TestClients(base.TestCase):
         mock_call.assert_called_once_with(
             CONF.ironic_client.api_version,
             endpoint_override=ironic_url,
+            interface='publicURL',
             max_retries=None,
             os_ironic_api_version=None,
             retry_interval=None,
@@ -321,19 +321,6 @@ class TestNovaHelper(base.TestCase):
         instance = nova_util.create_instance(self.source_node)
         self.assertIsNotNone(instance)

-    def test_get_flavor_instance(self, mock_glance, mock_cinder,
-                                 mock_neutron, mock_nova):
-        nova_util = nova_helper.NovaHelper()
-        instance = self.fake_server(self.instance_uuid)
-        flavor = {'id': 1, 'name': 'm1.tiny', 'ram': 512, 'vcpus': 1,
-                  'disk': 0, 'ephemeral': 0}
-        instance.flavor = flavor
-        nova_util.nova.flavors.get.return_value = flavor
-        cache = flavor
-        nova_util.get_flavor_instance(instance, cache)
-        self.assertEqual(instance.flavor['name'], cache['name'])
-
     @staticmethod
     def fake_volume(**kwargs):
         volume = mock.MagicMock()
@@ -385,11 +385,10 @@ class TestContinuousAuditHandler(base.DbTestCase):
         audit_handler = continuous.ContinuousAuditHandler()
         self.audits[0].next_run_time = (datetime.datetime.now() -
                                         datetime.timedelta(seconds=1800))
-        m_is_inactive.return_value = False
-        m_get_jobs.return_value = None
+        m_is_inactive.return_value = True
+        m_get_jobs.return_value = []
         audit_handler.execute_audit(self.audits[0], self.context)
-        m_execute.assert_called_once_with(self.audits[0], self.context)
         self.assertIsNotNone(self.audits[0].next_run_time)

     @mock.patch.object(objects.service.Service, 'list')
@@ -451,11 +451,10 @@ class TestSyncer(base.DbTestCase):
             self._find_created_modified_unmodified_ids(
                 before_action_plans, after_action_plans))

-        dummy_1_spec = [
-            {'description': 'Dummy indicator', 'name': 'dummy',
-             'schema': jsonutils.dumps({'minimum': 0, 'type': 'integer'}),
-             'unit': '%'}]
-        dummy_2_spec = []
+        dummy_1_spec = jsonutils.loads(
+            self.goal1_spec.serialize_indicators_specs())
+        dummy_2_spec = jsonutils.loads(
+            self.goal2_spec.serialize_indicators_specs())
         self.assertEqual(
             [dummy_1_spec, dummy_2_spec],
             [g.efficacy_specification for g in after_goals])
@@ -20,7 +20,6 @@ fakeAuthTokenHeaders = {'X-User-Id': u'773a902f022949619b5c2f32cd89d419',
'X-Auth-Token': u'5588aebbcdc24e17a061595f80574376', 'X-Auth-Token': u'5588aebbcdc24e17a061595f80574376',
'X-Forwarded-For': u'10.10.10.10, 11.11.11.11', 'X-Forwarded-For': u'10.10.10.10, 11.11.11.11',
'X-Service-Catalog': u'{test: 12345}', 'X-Service-Catalog': u'{test: 12345}',
'X-Auth-Url': 'fake_auth_url',
'X-Identity-Status': 'Confirmed', 'X-Identity-Status': 'Confirmed',
'X-User-Domain-Name': 'domain', 'X-User-Domain-Name': 'domain',
'X-Project-Domain-Id': 'project_domain_id', 'X-Project-Domain-Id': 'project_domain_id',