Compare commits

...

247 Commits

Author SHA1 Message Date
Zuul
3591d9fa0a Merge "Replace port 35357 with 5000 for test_clients.py" 2018-06-15 05:10:32 +00:00
Alexander Chadin
44fc7d5799 Restore requirements versions
Change-Id: I7704778324d7597d5df2de6b77f6b914d948d6fa
2018-06-13 15:08:11 +03:00
Zuul
a330576eae Merge "Update storage CDM collector" 2018-06-06 12:16:43 +00:00
Zuul
70d05214c7 Merge "add doc for host_maintenance" 2018-06-06 08:23:13 +00:00
suzhengwei
ca9644f4d8 add doc for host_maintenance
Change-Id: If9a112d33d7586d828024dbace1863ecc04408d9
2018-06-05 17:34:01 +08:00
inspurericzhang
44061cf333 Update pypi url to new url
Pypi url update to "https://pypi.org/"

Change-Id: I0bc9d7fc6111cb32db212d6ef3dab144fdd31c17
2018-05-25 17:11:56 +08:00
Zuul
18bf1f4e8d Merge "add strategy host_maintenance" 2018-05-23 09:18:24 +00:00
Zuul
f2df0da0b2 Merge "Trivial: update url to new url" 2018-05-23 07:46:28 +00:00
Hidekazu Nakamura
3c83077724 Update storage CDM collector
Storage CDM cannot be built in some environments, such as
one using VMwareVcVmdkDriver, since some attributes of
the Storage CDM's pool element can be 'unknown'.

This patch updates the storage CDM collector to raise a
Watcher-specific exception if some attributes of the storage
CDM's pool element are 'unknown'.

Change-Id: If75a909025c8d764e4de6e20f058b84e23123c1a
Closes-Bug: #1751206
2018-05-23 10:51:26 +09:00
caoyuan
d8872a743b Replace port 35357 with 5000 for test_clients.py
Now that the v2.0 API has been removed, we don't have a reason to
include deployment instructions for two separate applications on
different ports.

Related-bug: #1754104

Change-Id: I98fae626d39cb62ad51c86435c1a2c60be5c1fb9
2018-05-15 12:43:48 +00:00
Hidekazu Nakamura
7556d19638 Add Cinder Cluster Data Model Collector test case
This patch adds Cinder Data Model Collector test case.

Change-Id: Ifaf7cd4a962da287f740a12e4c382a1ca92750d6
2018-05-15 20:30:31 +09:00
suzhengwei
58276ec79e add strategy host_maintenance
Maintain one compute node without interrupting the user's
applications.
It will first try to migrate all instances from the maintenance
node to one backup node. If that is not possible, it will migrate
all instances, relying on the nova-scheduler.

Change-Id: I29ecb65745d5e6ecab41508e9a91b29b39a3f0a8
Implements:blueprint cluster-maintaining
2018-05-14 11:33:59 +00:00
XiaojueGuan
36ad9e12da Trivial: update url to new url
Change-Id: Ia238564c5c41aaf015d9d2f5839703a035c76fce
2018-05-13 21:39:50 +08:00
Hidekazu Nakamura
cdb1975530 Fix to reuse RabbitMQ connection
Currently the number of RabbitMQ connections gradually increases
with a CONTINUOUS audit using the auto-trigger option.
This patch fixes watcher to reuse the RabbitMQ connection.

Change-Id: I818fc1ce982f67bac08c815821f1ad67f8f3c893
2018-05-10 14:21:23 +09:00
Zuul
6efffd6d89 Merge "Updated tests on bug, when get list returns deleted items" 2018-05-09 08:40:18 +00:00
Zuul
95ec79626b Merge "Grouped _add_*_filters methods together" 2018-05-09 08:20:37 +00:00
Zuul
00aa77651b Merge "Replace of private _create methods in tests" 2018-05-09 08:20:36 +00:00
Zuul
7d62175b23 Merge "Added _get_model_list base method for all get_*_list methods" 2018-05-09 08:20:36 +00:00
Zuul
5107cfa30f Merge "Refactor watcher API for Action Plan Start" 2018-05-09 06:16:38 +00:00
deepak_mourya
ff57eb73f9 Refactor watcher API for Action Plan Start
Currently, the REST API to start an action plan in watcher
is the same as the one used to update an action plan:

PATCH /v1/action_plans

https://docs.openstack.org/watcher/latest/api/v1.html

We need to make it easier to understand, like:

POST /v1/action_plans/{action_plan_uuid}/start

The action should be 'start' in the above case.
Change-Id: I5353e4aa58d1675d8afb94bea35d9b953514129a
Closes-Bug: #1756274
2018-05-08 07:28:45 +00:00
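As a hedged sketch of the new-style call described in this commit (the endpoint host, port, and UUID below are placeholders for illustration, not values from the Watcher docs), a client would build a POST against the per-action-plan start route:

```python
import urllib.request

def build_start_request(api_root, action_plan_uuid):
    """Build the POST request that starts an action plan via the
    new-style route POST /v1/action_plans/{uuid}/start."""
    url = "%s/v1/action_plans/%s/start" % (api_root, action_plan_uuid)
    return urllib.request.Request(url, method="POST")

# Placeholder endpoint and UUID, for illustration only.
req = build_start_request("http://controller:9322",
                          "9f2a3a3c-0000-0000-0000-000000000000")
```

The request object is built but not sent here; the point is that the action ("start") is now explicit in the URL rather than inferred from a PATCH body.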
Zuul
4c035a7cbd Merge "Update auth_url in install docs" 2018-05-08 05:57:39 +00:00
Zuul
b5d9eb6acb Merge "Exclude Project By Audit Scope" 2018-05-08 05:01:57 +00:00
Hidekazu Nakamura
904b72cf5e Update auth_url in install docs
Beginning with the Queens release, the keystone install guide
recommends running all interfaces on the same port. This patch
updates the install guide to reflect that change.

Change-Id: Ice155d0b80d2f2ed6c1a9a9738be2184b6e9e76c
Closes-bug: #1754104
2018-05-07 11:42:10 +09:00
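For illustration (hypothetical controller hostname, not text from the install guide itself), consolidating keystone onto one port means auth_url entries change along these lines:

```ini
[keystone_authtoken]
# Before Queens: separate admin endpoint on its own port
# auth_url = http://controller:35357
# Queens and later: all keystone interfaces on the same port
auth_url = http://controller:5000
```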
Egor Panfilov
d23e7f0f8c Updated tests on bug, when get list returns deleted items
In I4d2f44fa149aee564c62a69822c6ad79de5bba8a we introduced a new
_get_model_list method that provides a unified way of retrieving models
from the db. This commit adds tests that check for bug 1761956, where
selecting with the filter() method could return deleted entities.

Change-Id: I12df4af70bcc25654a0fb276ea7145d772d891e2
Related-Bug: 1761956
2018-05-05 14:30:00 +03:00
Zuul
55cbb15fbc Merge "Moved do_execute method to AuditHandler class" 2018-05-04 06:08:17 +00:00
wu.chunyang
3a5b42302c Fix the openstack endpoint create failure
Change-Id: Ic05950c47bf5ad26e91051ac5e1d766db0f5ccae
2018-04-27 22:44:13 +08:00
Zuul
4fdb22cba2 Merge "Update the default value for nova api_version" 2018-04-27 06:10:54 +00:00
Zuul
431f17d999 Merge "add unittest for execute_audit in audit/continuous.py" 2018-04-25 08:24:25 +00:00
caoyuan
b586612d25 Update the default value for nova api_version
refer to https://github.com/openstack/watcher/blob/master/watcher/conf/nova_client.py#L26

Change-Id: If7c12d49c68e1bfc30327d465b9d5bafe82882e0
2018-04-24 23:15:37 +08:00
Egor Panfilov
ad1593bb36 Moved do_execute method to AuditHandler class
Both Continuous and Oneshot audits performed the same action in
do_execute, so it makes sense to move it to the base
class.

TrivialFix

Change-Id: Ic0353f010509ce45f94126e4db0e629417128ded
2018-04-23 20:38:06 +03:00
Zuul
bbd0ae5b16 Merge "Fix typo in StorageCapacityBalance" 2018-04-23 07:59:51 +00:00
Zuul
5a30f814bf Merge "add strategy doc:storage capacity balance" 2018-04-23 05:46:08 +00:00
Egor Panfilov
7f6a300ea0 Fix typo in StorageCapacityBalance
TrivialFix

Change-Id: If1fb33276fc08945aa45e6baecaeebca3ba070fe
2018-04-22 18:00:53 +03:00
Egor Panfilov
93a8ba804f Grouped _add_*_filters methods together
TrivialFix

Change-Id: I148dc19140aede8cc905b0bdc2753b82d8484363
2018-04-22 00:52:27 +03:00
Egor Panfilov
415bab4bc9 Replace private _create methods in tests
Methods that are already implemented in the utils module are removed
from the test classes.

TrivialFix

Change-Id: I38d806e23c162805b7d362b68bf3fe18da123ee3
2018-04-21 22:32:25 +03:00
aditi
fc388d8292 Exclude Project By Audit Scope
This patch adds project_id to the compute CDM. It also adds logic for
excluding projects by project_id in the audit scope.

Change-Id: Ife228e3d1855b65abee637516470e463ba8a2815
Implements: blueprint audit-scope-exclude-project
2018-04-20 08:47:07 +00:00
Zuul
5b70c28047 Merge "amend delete action policy" 2018-04-20 03:08:52 +00:00
licanwei
b290ad7368 add strategy doc:storage capacity balance
Change-Id: Ifa37156e641b840ae560e1f7c8a0dd4bca7662ba
2018-04-19 19:55:37 -07:00
Alexander Chadin
8c8e58e7d9 Update requirements
Change-Id: Iee6ca0a49f8b1d67dd0d88f9a2cf9863b2c6c7bf
2018-04-19 11:10:39 +03:00
Zuul
391bb92bd2 Merge "Replace cold migration to use Nova migration API" 2018-04-18 13:36:53 +00:00
licanwei
171654c0ea add unittest for execute_audit in audit/continuous.py
Change-Id: I20b9cb9b4b175a1befdbe23f7c187bec6a195dac
2018-04-17 04:19:12 -07:00
suzhengwei
0157fa7dad amend delete action policy
Change-Id: I545b969a3f0a3451b880840108484ca7ef3fabf9
2018-04-17 16:18:14 +08:00
Zuul
3912075c19 Merge "Enable mutable config in Watcher" 2018-04-16 11:39:39 +00:00
Zuul
d42a89f70f Merge "Update auth_uri option to www_authenticate_uri" 2018-04-16 02:16:26 +00:00
Zuul
6bb25d2c36 Merge "Trivial fix of saving_energy strategy doc" 2018-04-13 11:35:29 +00:00
Hidekazu Nakamura
4179c3527c Replace cold migration to use Nova migration API
Since Nova API v2.56, the Nova migrate server (migrate action) API
has a host option.
This patch changes the cold migration implementation to use that API.

Change-Id: Idd6ebc94f81ad5d65256c80885f2addc1aaeaae1
Implements: blueprint replace-cold-migrate-to-use-nova-migration-api
2018-04-13 10:53:26 +09:00
ShangXiao
3b1356346a Add release notes link to README
Add the release notes URL link to README.rst.

Change-Id: Ia068ce4847a99be4ec5fb336e6b8e283a061d614
2018-04-12 00:35:31 -07:00
licanwei
67be974861 Trivial fix of saving_energy strategy doc
Change-Id: Ie7b85b8e57a679be8f8fc05c0c24e707b0dd575d
2018-04-11 22:58:56 -07:00
caoyuan
8c916930c8 Update auth_uri option to www_authenticate_uri
Option auth_uri from group keystone_authtoken is deprecated[1].
Use option www_authenticate_uri from group keystone_authtoken.

[1]https://review.openstack.org/#/c/508522/

Change-Id: I2ef330d7f9b632e9a81d22a8edec3c88eb532ff5
2018-04-12 04:15:01 +00:00
Zuul
b537979e45 Merge "Several fixes of strategies docs" 2018-04-11 09:12:22 +00:00
Egor Panfilov
aa74817686 Added _get_model_list base method for all get_*_list methods
When we call audittemplate list without filters, it returns all Audit
Templates that are not deleted, as expected. If we add any filter to the
query and context.show_deleted is None (we request only current ATs),
query.filter_by adds the filter to the joined table (for example goals,
resulting in a query like JOIN goals ... WHERE ... goals.deleted_at IS
NULL), not to the model's table (AuditTemplate in our case).

We change the filter_by call to filter, explicitly pointing to the
model that we want to filter on.

Also, we moved the query-generating code to a new method,
_get_model_list(). As a result, we applied the same fix to all of the
other models.

Change-Id: I4d2f44fa149aee564c62a69822c6ad79de5bba8a
Closes-bug: 1761956
2018-04-10 14:10:44 +03:00
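The joined-table pitfall this commit describes can be reproduced in miniature with stdlib sqlite3 (the schema below is a toy, not Watcher's actual tables): putting the soft-delete filter on the joined table lets deleted rows leak through, while filtering on the model's own column does not.

```python
import sqlite3

# Toy reproduction of the bug class: filtering deleted_at on the joined
# table instead of the model's own table returns soft-deleted rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE goals (id INTEGER PRIMARY KEY, deleted_at TEXT);
    CREATE TABLE audit_templates (
        id INTEGER PRIMARY KEY, goal_id INTEGER, deleted_at TEXT);
    INSERT INTO goals VALUES (1, NULL);                       -- live goal
    INSERT INTO audit_templates VALUES (1, 1, '2018-01-01');  -- soft-deleted
""")

wrong = conn.execute("""
    SELECT audit_templates.id FROM audit_templates
    JOIN goals ON goals.id = audit_templates.goal_id
    WHERE goals.deleted_at IS NULL            -- filter landed on joined table
""").fetchall()

right = conn.execute("""
    SELECT audit_templates.id FROM audit_templates
    JOIN goals ON goals.id = audit_templates.goal_id
    WHERE audit_templates.deleted_at IS NULL  -- filter on the model's table
""").fetchall()
```

Here `wrong` still contains the deleted template, while `right` is empty, which is the behavior the filter_by-to-filter change restores.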
Zuul
831e58df10 Merge "Trivial fix of user guide doc" 2018-04-09 05:42:03 +00:00
Egor Panfilov
3dd03b2d45 Trivial fix of user guide doc
Removed duplicates of the same commands and removed an erroneous
sentence about the conf file.

Change-Id: I630924ed2860d0df70524d4f9f7d3ddb07a3dcc0
2018-04-07 12:56:30 +03:00
Zuul
2548f0bbba Merge "zuulv3 optimization" 2018-04-03 13:06:48 +00:00
Alexander Chadin
39d7ce9ee8 zuulv3 optimization
This patch set improves inheritance of watcher jobs.

Change-Id: I65335cd0b25a355c46bfea8a962f63b8ac02ebf2
2018-04-03 09:25:04 +00:00
Zuul
1f8c073cb3 Merge "filter exclude instances during migration" 2018-04-02 12:09:47 +00:00
Zuul
0353a0ac77 Merge "Fix sort of *list command output" 2018-03-30 07:42:52 +00:00
Zuul
921584ac4b Merge "add lower-constraints job" 2018-03-29 09:04:18 +00:00
Egor Panfilov
65a09ce32d Enable mutable config in Watcher
New releases of oslo.config support a 'mutable' parameter to Opts.
Configuration options are mutable if their oslo.config Opt's
mutable=True is set. This mutable setting is respected when the oslo
method mutate_config_files is called instead of reload_config_files.
Icec3e664f3fe72614e373b2938e8dee53cf8bc5e allows services to tell
oslo.service that they want mutate_config_files to be called by
specifying the restart_method='mutate' parameter, which is what this
patch does.

The default mutable configuration options (set by oslo.config Opts'
mutable=True) are:
- [DEFAULT]/pin_release_version
- [DEFAULT]/debug
- [DEFAULT]/log_config_append

Concrete parameters made mutable in Watcher:

* watcher_decision_engine.action_plan_expiry
* watcher_decision_engine.check_periodic_interval
* watcher_decision_engine.continuous_audit_interval
* gnocchi_client.query_max_retries
* gnocchi_client.query_timeout
* DEFAULT.periodic_interval

Change-Id: If28f2de094d99471a3ab756c947e29ae3d8a28a2
Implements: bp mutable-config
2018-03-28 23:44:47 +03:00
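As a toy illustration of the mutate semantics (plain Python, not oslo.config's real API): on a mutate call, only options registered as mutable pick up new values, while immutable options keep their startup values until a full restart.

```python
# Minimal mock of mutable-config behavior; names are illustrative.
class Conf:
    def __init__(self):
        self._opts = {}   # name -> (value, mutable)

    def register(self, name, value, mutable=False):
        self._opts[name] = (value, mutable)

    def __getattr__(self, name):
        return self._opts[name][0]

    def mutate(self, new_values):
        """Apply new values, but only to options flagged mutable."""
        for name, (value, mutable) in self._opts.items():
            if mutable and name in new_values:
                self._opts[name] = (new_values[name], True)

conf = Conf()
conf.register("debug", False, mutable=True)
conf.register("db_url", "sqlite://", mutable=False)
conf.mutate({"debug": True, "db_url": "mysql://"})
# debug changed in place; db_url kept its startup value
```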
Egor Panfilov
92dad3be2d Several fixes of strategies docs
Removed duplicates of strategy descriptions and added references to
those descriptions instead of module descriptions.

Change-Id: Ife396ddce5c3cc926cc111f1ff1abd3a42c22561
2018-03-28 22:53:17 +03:00
Zuul
d86fee294f Merge "Remove obsolete playbooks of legacy jobs" 2018-03-28 14:41:25 +00:00
Zuul
95a01c4e12 Merge "set one worker for watcherclient-tempest-functional job" 2018-03-28 09:58:26 +00:00
Alexander Chadin
b9456e242e set one worker for watcherclient-tempest-functional job
Change-Id: I88646707ddfeff91a33bf25ee348bcb0981a2df4
2018-03-28 11:15:31 +03:00
Zuul
4e49ad64c0 Merge "Replaced deprecated oslo_messaging_rabbit section" 2018-03-28 03:40:14 +00:00
Alexander Chadin
184b1b1ce6 Remove obsolete playbooks of legacy jobs
This patch set removes playbooks of legacy jobs.

Change-Id: Ia8c36e261486709c3077b2705a97106b946519c2
2018-03-27 12:48:21 +03:00
OpenStack Proposal Bot
f49d0555e7 Updated from global requirements
Change-Id: Ia731e87abd108b07193f869322ba32b0c130c26e
2018-03-26 08:31:14 +00:00
Doug Hellmann
9d8a0feab4 add lower-constraints job
Create a tox environment for running the unit tests against the lower
bounds of the dependencies.

Create a lower-constraints.txt to be used to enforce the lower bounds
in those tests.

Add openstack-tox-lower-constraints job to the zuul configuration.

See http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html
for more details.

Change-Id: Ia0547a12c756388dc521c5eed7897140fe0dfd61
Depends-On: https://review.openstack.org/555034
Signed-off-by: Doug Hellmann <doug@doughellmann.com>
2018-03-25 14:50:34 -04:00
Egor Panfilov
52a5c99fc5 Replaced deprecated oslo_messaging_rabbit section
Added creation of [DEFAULT]/transport_url value
in devstack.

Also fixed the same topic in the docs.

Change-Id: I9ad9475c4fccf023daac40c0b1e841eeeb22f040
Closes-Bug: 1738329
2018-03-25 12:50:13 +03:00
Zuul
cfaab0cbdc Merge "ZuulV3 jobs" 2018-03-23 22:29:17 +00:00
Alexander Chadin
6bb0432ee7 ZuulV3 jobs
This patch set removes legacy-* jobs and migrates
tempest functional job to ZuulV3 syntax.

Change-Id: I87771737cc713eae20b4d6aaaefefc5e40875666
Implements: blueprint migrate-to-zuulv3
2018-03-23 20:40:23 +00:00
melissaml
99837d6339 Delete the unnecessary '-'
Fix a typo

Change-Id: Ibeaf5454c3a8f10f338022fd24d98ef484efd370
2018-03-22 00:05:39 +08:00
Egor Panfilov
3075723da9 Fix sort of *list command output
When sorting the output of a *list command ("audittemplate list",
"strategy list", etc.) by a sort key that does not belong to the
specific model, the sort key was passed to the db, which caused an
error (HTTP 500). We added a check for such keys, and now, if one
of them is given, we sort on the API side instead of the db side.

We removed an excess sort and changed all sorting routines to a
unified approach.

Also added sort tests for every model.

Change-Id: I41faea1622605ee4fa8dc48cd572876d75be8383
Closes-Bug: 1662887
2018-03-20 13:16:13 +00:00
Zuul
b0bdeea7cf Merge "Remove version/date from CLI documentation" 2018-03-20 09:36:45 +00:00
Zuul
5eaad33709 Merge "Adding driver to mysql connection URL" 2018-03-20 09:32:10 +00:00
Ken'ichi Ohmichi
24b6432490 Remove version/date from CLI documentation
This patch removes the unnecessary maintenance of a date and version
from the CLI documentation.

NOTE: Cinder/Nova teams also did the same removal with
      the commit Idf78bbed44f942bb6976ccf4da67c748d9283ed9
      and the commit I0a9dd49e68f2d47c58a46b107c77975e7b2aeaf7

Change-Id: I6a0faeb596f1ee3a3b67d1d37a14e1507aa40eba
2018-03-19 15:04:32 -07:00
Vu Cong Tuan
ca61594511 Adding driver to mysql connection URL
With the current URL [1], the default driver will be used.
To ensure compatibility, it is better to include the exact driver [2].

[1] connection = mysql://
[2] connection = mysql+pymysql://

Change-Id: I4f7b3ccbecfb2f1e2b3d125179dbd5c6fbf5e6b9
2018-03-19 17:02:08 +07:00
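In SQLAlchemy-style connection URLs the scheme is "dialect+driver"; with no "+driver" part, the library falls back to its default DBAPI for that dialect, which is why pinning pymysql is more predictable. A small stdlib sketch of how the scheme decomposes (the parsing helper is illustrative, not SQLAlchemy's own):

```python
# Split a SQLAlchemy-style URL scheme into dialect and explicit driver.
def dialect_and_driver(url):
    scheme = url.split("://", 1)[0]
    dialect, _, driver = scheme.partition("+")
    return dialect, driver or "<default DBAPI>"

# Bare "mysql://" leaves driver choice to the library's default;
# "mysql+pymysql://" names the DBAPI explicitly.
implicit = dialect_and_driver("mysql://watcher:pw@db/watcher")
explicit = dialect_and_driver("mysql+pymysql://watcher:pw@db/watcher")
```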
OpenStack Proposal Bot
bd57077bfe Updated from global requirements
Change-Id: I3aa816dadb10cb52b431edb928b789df4dca337d
2018-03-15 09:40:18 +00:00
Zuul
56bcba2dc0 Merge "ignore useless WARNING log message" 2018-03-13 07:54:49 +00:00
licanwei
73928412b3 ignore useless WARNING log message
Remove the useless 'project' field from the context.

Change-Id: I0d00969dd4b993dfbe6f4623c27457ed2589ae3f
Closes-Bug: #1755347
2018-03-12 21:03:12 -07:00
Zuul
29f41b7dff Merge "Change the outdated links to the latest links in README" 2018-03-13 02:18:36 +00:00
Zuul
02f86ffe02 Merge "Updated from global requirements" 2018-03-12 11:14:42 +00:00
Zuul
20c6bf1b5a Merge "Add the missing markups for the hyperlink titles" 2018-03-12 11:13:26 +00:00
Zuul
083f070d17 Merge "Revert "Update OpenStack Installation Tutorial to Rocky"" 2018-03-12 01:30:48 +00:00
OpenStack Proposal Bot
4022b59d79 Updated from global requirements
Change-Id: I16aebdcc8b83d7f85034845da2a2de0470d12ce6
2018-03-10 14:00:34 +00:00
caoyuan
3d1cb11ea6 Add the missing markups for the hyperlink titles
Change-Id: If037d1ad76cfea89cc5a132b60eeda8e17afb1c4
2018-03-10 17:19:02 +08:00
caoyuan
d0b1dacec1 Change the outdated links to the latest links in README
1. Update the link
2. Remove the unnecessary space

Change-Id: I0fcf5a878d789ecd2f2a23cad314c32b6bb5ba51
2018-03-10 16:26:28 +08:00
Alexander Chadin
45a06445f3 basic_cons fix
Change-Id: I0856fadc3aaece3be286af9047339ce63d54be29
2018-03-09 14:52:57 +03:00
Andreas Jaeger
2f173bba56 Revert "Update OpenStack Installation Tutorial to Rocky"
The change is wrong. We link on purpose to the unversioned guide and update it once Rocky is released.

This reverts commit e771ae9e95.

Change-Id: I0f981a8473a47d18ce20be74a8e2d12d22f40061
2018-03-09 11:10:16 +00:00
Zuul
cb497d2642 Merge "Add parameter aggregation_method for basic_consolidation" 2018-03-09 10:52:08 +00:00
Zuul
e1fd686272 Merge "Update OpenStack Installation Tutorial to Rocky" 2018-03-09 10:04:16 +00:00
Zuul
8f7127a874 Merge "Delete the unnecessary '-'" 2018-03-09 10:04:15 +00:00
Zuul
3a529a0f7b Merge "Fix Uuid and virtual_free elements load error" 2018-03-09 10:00:19 +00:00
Alexander Chadin
5c81f1bd7f Add parameter aggregation_method for basic_consolidation
This parameter is required to fix tempest multinode test.

Change-Id: I4014fb7a76ce74e1426378183ecef0308bc56ce7
2018-03-09 12:50:46 +03:00
Zuul
e0c019002a Merge "Imported Translations from Zanata" 2018-03-09 06:50:04 +00:00
Zuul
cc24ef6e08 Merge "Fix exception string format" 2018-03-09 06:47:03 +00:00
OpenStack Proposal Bot
7e27abc5db Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Ic6546e38367df64e2b3819ccf79261a1c4d38a2c
2018-03-08 07:27:37 +00:00
caoyuan
4844baa816 Delete the unnecessary '-'
fix a typo

Change-Id: I4ecdb827d94ef0ae88e2f37db9d1a53525140947
2018-03-08 12:50:00 +08:00
caoyuan
e771ae9e95 Update OpenStack Installation Tutorial to Rocky
Since the rocky branch has been created, the OpenStack Installation
Tutorial should use it.

Change-Id: I40d2b1fdf2bac9a5515d10cf0b33f25c1153155a
2018-03-08 12:46:46 +08:00
Alexander Chadin
a2488045ea Add parameter aggregation_method for work_stab
This parameter is required to fix tempest multinode test.

Change-Id: Id0c6a01b831a6b15694fdb811a1f53f8c6303120
2018-03-07 11:38:40 +00:00
Alexander Chadin
cce5ebd3f0 basic_consolidation trivial fix
This fix adds usage of granularity parameter.
Should be merged ASAP.

Change-Id: I469ee056b32f95aba02100450c65945ee9877b23
2018-03-06 14:42:51 +00:00
Hidekazu Nakamura
a7ab77078e Fix Uuid and virtual_free elements load error
A NotImplementedError is reported in the decision-engine log file
when we activate the storage data model and see a Guru Meditation
Report. This patch fixes the error by adding default values.

Change-Id: I06386f8295f7758cbb633612eee8b19225905c92
Closes-Bug: #1750300
2018-03-06 16:55:11 +09:00
Zuul
9af32bce5b Merge "Complete schema of workload_stabilization strategy" 2018-03-06 01:17:37 +00:00
Zuul
4cf35e7e62 Merge "Updated Hacking doc" 2018-03-06 01:06:55 +00:00
Andreas Jaeger
6f27e50cf0 Fix exception string format
The string %(action) is not valid; it is missing the conversion
specifier. Add 's' for string.

Note that this leads to an untranslatable string, since our translation
tools check for valid formats and fail. In this case the failure comes
from an error in the source code.

Change-Id: I2e630928dc32542a8a7c02657a9f0ab1eaab62ff
2018-03-03 17:09:59 +00:00
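The bug class this commit fixes can be reproduced with plain %-formatting: a mapping key with no conversion type fails to format, and the trailing 's' fixes it. (The exact Watcher message may differ; the string below is illustrative.)

```python
params = {"action": "MIGRATE"}

# "%(action)" at the end of the string has no conversion type,
# so %-formatting raises ValueError ("incomplete format").
too_short_failed = False
try:
    "Invalid action: %(action)" % params
except ValueError:
    too_short_failed = True

# "%(action)s" is the valid form.
ok = "Invalid action: %(action)s" % params
```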
Zuul
bd8c5c684c Merge "Add the missing title of Configuration Guide" 2018-03-03 13:17:46 +00:00
OpenStack Proposal Bot
1834db853b Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Id81df7aff5aa53071bc6f15a9178c5fcaffabf56
2018-03-03 12:13:49 +00:00
zhang.lei
59ef0d24d1 Add the missing title of Configuration Guide
There is currently no title for the Configuration Guide [1]; this
patch adds it.

backport: pike

[1] https://docs.openstack.org/watcher/pike/configuration/

Change-Id: I82d1b14b9a943cc1a2a22187ff30c75680f9f5d6
2018-03-03 12:13:19 +00:00
Alexander Chadin
c53817c33d Fix change_nova_service_state action
The function signature was changed in version 2.53 [1],
and this patch is required to fix watcher-tempest-plugin.
If all tests are ok, I'll merge it ASAP.

[1]: https://developer.openstack.org/api-ref/compute/#enable-scheduling-for-a-compute-service

Change-Id: Ie03519dac2a55263e278344fd00f103067f90f27
2018-03-03 10:14:26 +00:00
Zuul
b33b7a0474 Merge "Add a hacking rule for string interpolation at logging" 2018-02-28 12:24:37 +00:00
wangqi
033bc072c0 Updated Hacking doc
Change-Id: Ib9ec1d7dd17786e084b7e889e845b959b1398909
2018-02-28 03:58:07 +00:00
Zuul
f32ed6bc79 Merge "Fix old url links in doc" 2018-02-27 14:20:09 +00:00
Zuul
707590143b Merge "[Trivialfix]Modify a grammatical error" 2018-02-26 01:47:50 +00:00
Zuul
b2663de513 Merge "Add support for networkx v2.0" 2018-02-23 09:23:47 +00:00
ShangXiao
dd210292ae [Trivialfix]Modify a grammatical error
Modify a grammatical error in basic_consolidation.py.

Change-Id: I9770121b0b0064c3ddfb582e5eaf6ee52ae8d6bb
2018-02-23 09:16:18 +00:00
ShangXiao
abb9155eb4 Fix old url links in doc
Replace the old http url links with the latest https ones according
to the official OpenStack website.

Change-Id: I1abd79bb80dae44ee2ba5946b8a375c7096b39d6
2018-02-23 00:19:24 -08:00
ForestLee
f607ae8ec0 Add a hacking rule for string interpolation at logging
String interpolation should be delayed to be handled by
the logging code, rather than being done at the point
of the logging call.
See the oslo i18n guideline
* https://docs.openstack.org/oslo.i18n/latest/user/guidelines.html#adding-variables-to-log-messages
and
* https://github.com/openstack-dev/hacking/blob/master/hacking/checks/other.py#L39
Closes-Bug: #1596829

Change-Id: Ibba5791669c137be1483805db657beb907030227
2018-02-23 10:41:00 +03:00
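The delayed-interpolation rule above can be demonstrated with stdlib logging: when a record is filtered out, the lazy form never formats its arguments, while eager %-interpolation pays the formatting cost at the call site regardless. (A toy sentinel class, not Watcher code.)

```python
import logging

class Expensive:
    """Records whether it was ever rendered to a string."""
    rendered = False
    def __str__(self):
        Expensive.rendered = True
        return "expensive-value"

logging.basicConfig(level=logging.WARNING)   # INFO records are dropped
LOG = logging.getLogger("demo")

LOG.info("value: %s", Expensive())           # delayed interpolation
lazy_rendered = Expensive.rendered           # never formatted

LOG.info("value: %s" % Expensive())          # eager interpolation
eager_rendered = Expensive.rendered          # formatted even though dropped
```

The lazy call leaves `rendered` False; the eager call flips it to True even though the log line is discarded, which is exactly what the hacking rule guards against.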
Alexander Chadin
b3ded34244 Complete schema of workload_stabilization strategy
This patch set completes the schema by adding restrictions
to different types of schema properties.

It also makes workload_stabilization strategy more
user friendly by setting cpu_util as default metric.

Change-Id: If34cf4b7ee2f70dc9a86309cb94a90b19e3d9bec
2018-02-23 07:13:40 +00:00
Zuul
bdfb074aa4 Merge "workload_stabilization trivial fix" 2018-02-23 06:40:51 +00:00
Zuul
b3be5f16fc Merge "Fix grammar errors" 2018-02-23 05:48:54 +00:00
suzhengwei
dad60fb878 filter exclude instances during migration
Change-Id: Ib5e0d285de0f25515702890778aca5e6417befaf
Implements:blueprint compute-cdm-include-all-instances
2018-02-23 03:13:46 +00:00
baiwenteng
fb66a9f2c3 Fix grammar errors
Replace 'a instance' with 'an instance' in
watcher/decision_engine/model/collector/nova.py
watcher/decision_engine/model/element/instance.py

Change-Id: I39020f3e7b460dea768f7e38fef9ae9e2a4b7357
2018-02-21 13:18:42 +00:00
Alexander Chadin
dc9ef6f49c workload_stabilization trivial fix
This fix allows comparing the metric name by value,
not by object.

Change-Id: I57c50ff97efa43efe4fd81875e481b25e9a18cc6
2018-02-20 13:53:02 +03:00
Zuul
8e8a43ed48 Merge "Updated from global requirements" 2018-02-19 07:30:56 +00:00
OpenStack Proposal Bot
5ac65b7bfc Updated from global requirements
Change-Id: I998ce5743e58a8c6bf754a15e491d7bce44e7264
2018-02-17 10:30:58 +00:00
OpenStack Proposal Bot
7b9b726577 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Iba37807905b24db36d506c0dc08c3dff0a3c38cf
2018-02-17 07:41:55 +00:00
Alexander Chadin
c81cd675a5 Add support for networkx v2.0
Closes-Bug: #1718576
Change-Id: I1628e4c395591b87c7993294c065476a1f8191bb
2018-02-15 15:17:34 +03:00
Zuul
ab926bf6c5 Merge "Updated from global requirements" 2018-02-15 10:05:54 +00:00
Zuul
08c688ed11 Merge "Fix some dead link in docs" 2018-02-15 08:34:18 +00:00
OpenStack Proposal Bot
e399d96661 Updated from global requirements
Change-Id: Ibed48beff0bf4537644641fd149e39d54a21d475
2018-02-14 12:37:35 +00:00
Zuul
ba54b30d4a Merge "Update meeting time on odd weeks" 2018-02-14 11:04:22 +00:00
watanabe isao
44d9183d36 Fix some dead links in docs
Change-Id: I729266a789d38f831d726c769fd7ac8d111dee26
2018-02-14 16:45:13 +09:00
Zuul
f6f3c00206 Merge "Imported Translations from Zanata" 2018-02-12 10:27:48 +00:00
Alexander Chadin
cc87b823fa Update meeting time on odd weeks
Change-Id: Ib07fea7a0bb9dc7c6f50655eeb05443ccf312ebd
2018-02-12 12:43:47 +03:00
Zuul
ba2395f7e7 Merge "fix misspelling of 'return'" 2018-02-12 09:39:32 +00:00
Zuul
b546ce8777 Merge "Add missing release notes" 2018-02-11 01:53:49 +00:00
pangliye
0900eaa9df fix misspelling of 'return'
[trivial_fix]

Change-Id: I3df27dc419d8ae48650648e9f696ea6a182915bf
2018-02-11 01:17:32 +00:00
Alexander Chadin
9fb5b2a4e7 Add missing release notes
Change-Id: I6559398d26869ed092eedf5648eea23d89bcb81c
2018-02-09 11:45:05 +03:00
OpenStack Proposal Bot
d80edea218 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Idf4f8689bedc48f500e9cebb953c036675729571
2018-02-09 07:38:05 +00:00
OpenStack Release Bot
26d6074689 Update reno for stable/queens
Change-Id: I32b3883d0a7d47434b5f21efcf2d053d0e40a448
2018-02-08 16:34:08 +00:00
Zuul
40a653215f Merge "Zuul: Remove project name" 2018-02-07 07:24:53 +00:00
Zuul
1492f5d8dc Merge "Replace Chinese double quotes with English double quotes" 2018-02-07 07:22:41 +00:00
Zuul
76263f149a Merge "Fix issues with aggregate and granularity attributes" 2018-02-06 06:05:50 +00:00
James E. Blair
028006d15d Zuul: Remove project name
Zuul no longer requires the project-name for in-repo configuration.
Omitting it makes forking or renaming projects easier.

Change-Id: Ib3be82015be1d6853c44cf53faacb238237ad701
2018-02-05 14:18:38 -08:00
Alexander Chadin
d27ba8cc2a Fix issues with aggregate and granularity attributes
This patch set fixes issues that have appeared after merging
watcher-multi-datasource and strategy-requirements patches.
It is final commit in watcher-multi-datasource blueprint.

Partially-Implements: blueprint watcher-multi-datasource
Change-Id: I25b4cb0e1b85379ff0c4da9d0c1474380d75ce3a
2018-02-05 11:08:48 +00:00
chengebj5238
33750ce7a9 Replace Chinese double quotes with English double quotes
Change-Id: I566ce10064c3dc51b875fc973c0ad9b58449001c
2018-02-05 17:59:08 +08:00
Zuul
cb8d1a98d6 Merge "Fix get_compute_node_by_hostname in nova_helper" 2018-02-05 06:47:10 +00:00
Hidekazu Nakamura
f32252d510 Fix get_compute_node_by_hostname in nova_helper
If the hostname is different from the uuid in the Compute CDM, the
get_compute_node_by_hostname method returns an empty result.
This patch set fixes it to return a compute node even if the hostname
is different from the uuid.

Change-Id: I6cbc0be1a79cc238f480caed9adb8dc31256754a
Closes-Bug: #1746162
2018-02-02 14:26:20 +09:00
Zuul
4849f8dde9 Merge "Add zone migration strategy document" 2018-02-02 04:51:26 +00:00
Hidekazu Nakamura
0cafdcdee9 Add zone migration strategy document
This patch set adds zone migration strategy document.

Change-Id: Ifd9d85d635977900929efd376f0d7990a6fec627
2018-02-02 09:35:58 +09:00
OpenStack Proposal Bot
3a70225164 Updated from global requirements
Change-Id: Ifb8d8d6cb1248eaf8715c84539d74fa04dd753dd
2018-02-01 07:36:19 +00:00
Zuul
892c766ac4 Merge "Fixed AttributeError in storage_model" 2018-01-31 13:58:53 +00:00
Zuul
63a3fd84ae Merge "Remove redundant import alias" 2018-01-31 12:45:21 +00:00
Zuul
287ace1dcc Merge "Update zone_migration comment" 2018-01-31 06:14:15 +00:00
Zuul
4b302e415e Merge "Zuul: Remove project name" 2018-01-30 12:22:41 +00:00
licanwei
f24744c910 Fixed AttributeError in storage_model
self.audit.scope should be self.audit_scope

Closes-Bug: #1746191

Change-Id: I0cce165a2bc1afd4c9e09c51e4d3250ee70d3705
2018-01-30 00:32:19 -08:00
Zuul
d9a85eda2c Merge "Imported Translations from Zanata" 2018-01-29 14:12:36 +00:00
Zuul
82c8633e42 Merge "[Doc] Add actuator strategy doc" 2018-01-29 14:12:35 +00:00
Hidekazu Nakamura
d3f23795f5 Update zone_migration comment
This patch updates zone_migration comment for document and
removes unnecessary TODO.

Change-Id: Ib1eadad6496fe202e406108f432349c82696ea88
2018-01-29 17:48:48 +09:00
Hoang Trung Hieu
e7f4456a80 Zuul: Remove project name
Zuul no longer requires the project-name for in-repo configuration[1].
Omitting it makes forking or renaming projects easier.

[1] https://docs.openstack.org/infra/manual/drivers.html#consistent-naming-for-jobs-with-zuul-v3

Change-Id: Iddf89707289a22ea322c14d1b11f58840871304d
2018-01-29 07:24:44 +00:00
OpenStack Proposal Bot
a36a309e2e Updated from global requirements
Change-Id: I29ebfe2e3398dab6f2e22f3d97c16b72843f1e34
2018-01-29 00:42:54 +00:00
Hidekazu Nakamura
8e3affd9ac [Doc] Add actuator strategy doc
This patch adds actuator strategy document.

Change-Id: I5f0415754c83e4f152155988625ada2208d6c35a
2018-01-28 20:00:05 +09:00
OpenStack Proposal Bot
71e979cae0 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Ie34aafe6d9b54bb97469844d21de38d7c6249031
2018-01-28 07:16:20 +00:00
Luong Anh Tuan
6edfd34a53 Remove redundant import alias
This patch removes redundant import aliases and adds a pep8 hacking
check that forbids redundant import aliases.

Co-Authored-By: Dao Cong Tien <tiendc@vn.fujitsu.com>

Change-Id: I3207cb9f0eb4b4a029b7e822b9c59cf48d1e0f9d
Closes-Bug: #1745527
2018-01-26 09:11:43 +07:00
Alexander Chadin
0c8c32e69e Fix strategy state
Change-Id: I003bb3b41aac69cc40a847f52a50c7bc4cc8d020
2018-01-25 15:41:34 +03:00
Alexander Chadin
9138b7bacb Add datasources to strategies
This patch set add datasources instead of datasource.

Change-Id: I94f17ae3a0b6a8990293dc9e33be1a2bd3432a14
2018-01-24 20:51:38 +03:00
Zuul
072822d920 Merge "Add baremetal strategy validation" 2018-01-24 14:59:14 +00:00
Zuul
f67ce8cca5 Merge "Add zone migration strategy" 2018-01-24 14:56:07 +00:00
Zuul
9e6f768263 Merge "Strategy requirements" 2018-01-24 14:53:47 +00:00
Zuul
ba9c89186b Merge "Update unreachable link" 2018-01-24 14:21:49 +00:00
Alexander Chadin
16e7d9c13b Add baremetal strategy validation
This patch set adds validation of baremetal model.

It also fixes PEP issues with storage capacity balance
strategy.

Change-Id: I53e37d91fa6c65f7c3d290747169007809100304
Depends-On: I177b443648301eb50da0da63271ecbfd9408bd4f
2018-01-24 14:35:52 +03:00
Zuul
c3536406bd Merge "Audit scoper for storage CDM" 2018-01-24 10:57:37 +00:00
Alexander Chadin
0c66fe2e65 Strategy requirements
This patch set adds a /state resource to the strategy API
which allows retrieving strategy requirements.

Partially-Implements: blueprint check-strategy-requirements
Change-Id: I177b443648301eb50da0da63271ecbfd9408bd4f
2018-01-24 13:39:42 +03:00
Zuul
74933bf0ba Merge "Fix workload_stabilization unavailable nodes and instances" 2018-01-24 10:35:25 +00:00
Hidekazu Nakamura
1dae83da57 Add zone migration strategy
This patch adds hardware maintenance goal, efficacy and zone
migration strategy.

Change-Id: I5bfee421780233ffeea8c1539aba720ae554983d
Implements: blueprint zone-migration-strategy
2018-01-24 19:33:22 +09:00
Zuul
5ec8932182 Merge "Add storage capacity balance Strategy" 2018-01-24 10:22:25 +00:00
Alexander Chadin
701b258dc7 Fix workload_stabilization unavailable nodes and instances
This patch set excludes nodes and instances from auditing
if appropriate metrics aren't available.

Change-Id: I87c6c249e3962f45d082f92d7e6e0be04e101799
Closes-Bug: #1736982
2018-01-24 11:37:43 +03:00
gaofei
f7fcdf14d0 Update unreachable link
Change-Id: I74bbe5a8c4ca9df550f1279aa80a836d6a2f8a93
2018-01-24 14:40:43 +08:00
OpenStack Proposal Bot
47ba6c0808 Updated from global requirements
Change-Id: I4cbf5308061707e28c202f22e8a9bf8492742040
2018-01-24 01:42:12 +00:00
Zuul
5b5fbbedb4 Merge "Fix compute api ref link" 2018-01-23 15:16:19 +00:00
Zuul
a1c575bfc5 Merge "check audit name length" 2018-01-23 11:21:14 +00:00
deepak_mourya
27e887556d Fix compute api ref link
This is to fix some compute api ref link.

Change-Id: Id5acc4d0f635f3d19b916721b6839a0eef544b2a
2018-01-23 09:23:55 +00:00
Alexander Chadin
891f6bc241 Adapt workload_balance strategy to multiple datasource backend
This patch set:
1. Removes nova, ceilometer and gnocchi properties.
2. Adds using of datasource_backend properties along with
   statistic_aggregation method.
3. Changes type of datasource config.

Change-Id: I09d2dce00378f0ee5381d7c85006752aea6975d2
Partially-Implements: blueprint watcher-multi-datasource
2018-01-23 11:51:02 +03:00
Alexander Chadin
5dd6817d47 Adapt noisy_neighbor strategy to multiple datasource backend
Partially-Implements: blueprint watcher-multi-datasource
Change-Id: Ibcd5d0776280bb68ed838f88ebfcde27fc1a3d35
2018-01-23 11:51:02 +03:00
Alexander Chadin
7cdcb4743e Adapt basic_consolidation strategy to multiple datasource backend
Change-Id: Ie30308fd08ed1fd103b70f58f1d17b3749a6fe04
2018-01-23 11:51:02 +03:00
licanwei
6d03c4c543 check audit name length
No more than 63 characters

Change-Id: I52adbd7e9f12dd4a8b6977756d788ee0e5d6391a
Closes-Bug: #1744231
2018-01-23 00:47:26 -08:00
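The 63-character limit described in the commit above can be sketched as a simple validator; the function name and the way the limit is enforced here are illustrative only, not Watcher's actual API-layer validation.

```python
# Hypothetical sketch of the audit-name length check described above;
# Watcher's real validation lives in its API schema, not here.
MAX_AUDIT_NAME_LENGTH = 63


def validate_audit_name(name):
    """Reject audit names longer than 63 characters."""
    if len(name) > MAX_AUDIT_NAME_LENGTH:
        raise ValueError(
            "Audit name exceeds %d characters" % MAX_AUDIT_NAME_LENGTH)
    return name
```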
aditi
bcc129cf94 Audit scoper for storage CDM
This patch adds an audit scoper for the Storage CDM.

Change-Id: I0c5b3b652027e1394fd7744d904397ce87ed35a1
Implements: blueprint audit-scoper-for-storage-data-model
2018-01-23 13:53:31 +05:30
Zuul
40cff311c6 Merge "Adapt workload_stabilization strategy to new datasource backend" 2018-01-23 01:08:32 +00:00
OpenStack Proposal Bot
1a48a7fc57 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I19a628bc7a0623e2f1ff8ab8794658bfe25801f5
2018-01-20 07:21:59 +00:00
Zuul
652aa54586 Merge "Update link address" 2018-01-19 11:40:25 +00:00
zhangdebo
42a3886ded Update link address
Link to new measurements is out of date and should be updated.
Change
https://docs.openstack.org/ceilometer/latest/contributor/new_meters.html#meters
to
https://docs.openstack.org/ceilometer/latest/contributor/measurements.html#new-measurements

Change-Id: Idc77e29a69a1f1eb9f8827fa74c9fde79e5619df
2018-01-19 07:59:15 +00:00
licanwei
3430493de1 Fix tempest devstack error
Devstack failed because mysql wasn't enabled.

Change-Id: Ifc1c00f2dddd0f3d67c6672d3b9d3d4bd78a4a90
Closes-Bug: #1744224
2018-01-18 23:33:08 -08:00
licanwei
f5bcf9d355 Add storage capacity balance Strategy
This patch adds Storage Capacity Balance Strategy to balance the
storage capacity through volume migration.

Change-Id: I52ea7ce00deb609a2f668db330f1fbc1c9932613
Implements: blueprint storage-workload-balance
2018-01-18 22:18:18 -08:00
Zuul
d809523bef Merge "Add baremetal data model" 2018-01-18 10:38:12 +00:00
Zuul
bfe3c28986 Merge "Fix compute scope test bug" 2018-01-18 09:37:24 +00:00
OpenStack Proposal Bot
3c8caa3d0a Updated from global requirements
Change-Id: I4814a236f5d015ee25b9de95dd1f3f97e375d382
2018-01-18 03:39:36 +00:00
Zuul
766d064dd0 Merge "Update pike install supermark to queens" 2018-01-17 12:34:35 +00:00
Alexander Chadin
ce196b68c4 Adapt workload_stabilization strategy to new datasource backend
This patch set:
1. Removes nova, ceilometer and gnocchi properties.
2. Adds using of datasource_backend properties along with
   statistic_aggregation method.
3. Changes type of datasource config.

Change-Id: I4a2f05772248fddd97a41e27be4094eb59ee0bdb
Partially-Implements: blueprint watcher-multi-datasource
2018-01-17 13:01:05 +03:00
OpenStack Proposal Bot
42130c42a1 Updated from global requirements
Change-Id: I4ef734eeaeee414c3e6340490f1146d537370127
2018-01-16 12:57:22 +00:00
caoyuan
1a8639d256 Update pike install supermark to queens
Change-Id: If981c77518d0605b4113f4bb4345d152545ffc52
2018-01-15 11:56:36 +00:00
zhang.lei
1702fe1a83 Add the title of API Guide
Currently, the title of the API Guide is missing.[1] We should add a
title just like other projects.[2]

[1] https://docs.openstack.org/watcher/latest/api
[2] https://developer.openstack.org/api-ref/application-catalog

Change-Id: I012d746e99a68fc5f259a189188d9cea00d5a4f7
2018-01-13 08:04:36 +00:00
aditi
354ebd35cc Fix compute scope test bug
We were excluding 'INSTANCE_6' from scope, which belongs to 'NODE_3'
in scenario_1.xml [1]. But NODE_3 had already been removed from the
model because it is not in scope.

So this patch adds 'AZ3' to fake_scope.

[1] https://github.com/openstack/watcher/blob/master/watcher/tests/decision_engine/model/data/scenario_1.xml
Closes-Bug: #1737901

Change-Id: Ib1aaca7045908418ad0c23b718887cd89db98a83
2018-01-12 16:17:25 +05:30
Zuul
7297603f65 Merge "reset job interval when audit was updated" 2018-01-11 09:12:38 +00:00
Zuul
9626cb1356 Merge "check actionplan state when deleting actionplan" 2018-01-11 09:12:37 +00:00
Zuul
9e027940d7 Merge "use current weighted sd as min_sd when starting to simulate migrations" 2018-01-11 08:48:43 +00:00
Zuul
3754938d96 Merge "Set apscheduler logs to WARN level" 2018-01-11 05:39:10 +00:00
Zuul
8a7f930a64 Merge "update audit API description" 2018-01-11 05:32:50 +00:00
Zuul
f7e506155b Merge "Fix configuration doc link" 2018-01-10 17:02:26 +00:00
Yumeng_Bao
54da2a75fb Add baremetal data model
Change-Id: I57b7bb53b3bc84ad383ae485069274f5c5362c50
Implements: blueprint build-baremetal-data-model-in-watcher
2018-01-10 14:46:41 +08:00
Zuul
5cbb9aca7e Merge "bug fix remove volume migration type 'cold'" 2018-01-10 06:15:01 +00:00
Alexander Chadin
bd79882b16 Set apscheduler logs to WARN level
This patch set sets the apscheduler log level to WARN.

Closes-Bug: #1742153
Change-Id: Idbb4b3e16187afc5c3969096deaf3248fcef2164
2018-01-09 16:30:14 +03:00
licanwei
960c50ba45 Fix configuration doc link
Change-Id: I7b144194287514144948f8547bc45d6bc4551a52
2018-01-07 23:36:20 -08:00
licanwei
9411f85cd2 update audit API description
Change-Id: I1d3eb9364fb5597788a282d275c71f5a314a0923
2018-01-02 23:51:05 -08:00
licanwei
b4370f0461 update action API description
POST/PATCH/DELETE actions APIs aren't permitted.

Change-Id: I4126bcc6bf6fe2628748d1f151617a38be06efd8
2017-12-28 22:06:33 -08:00
Zuul
97799521f9 Merge "correct audit parameter typo" 2017-12-28 10:54:57 +00:00
suzhengwei
96fa7f33ac use current weighted sd as min_sd when starting to simulate migrations
If a fixed value (usually 1 or 2) is used as the min_sd when starting
to simulate migrations, the first simulate_migration case will always
be below min_sd and enter the solution, even when the migration would
increase the weighted sd. This is unreasonable and makes instances
migrate back and forth among hosts.

Change-Id: I7813c4c92c380c489c349444b85187c5611d9c92
Closes-Bug: #1739723
2017-12-27 15:00:57 +03:00
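The fix above can be illustrated with a toy simulation loop: instead of seeding the search with a fixed threshold, the current weighted standard deviation becomes the starting minimum, so a candidate migration is only accepted if it actually lowers it. The function and data shapes here are illustrative, not Watcher's real strategy code.

```python
import statistics


def pick_best_migration(current_loads, candidates):
    """Accept a candidate only if it lowers the standard deviation.

    current_loads: per-host load values in the current state.
    candidates: hypothetical per-host load lists after each migration.

    Seeding min_sd with the *current* sd (instead of a fixed 1 or 2)
    prevents accepting a first candidate that makes things worse.
    """
    min_sd = statistics.pstdev(current_loads)  # current state is the baseline
    best = None
    for loads in candidates:
        sd = statistics.pstdev(loads)
        if sd < min_sd:
            min_sd = sd
            best = loads
    return best
```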
Zuul
1c2d0aa1f2 Merge "Updated from global requirements" 2017-12-27 10:00:01 +00:00
licanwei
070aed7076 correct audit parameter typo
Change-Id: Id98294a093ac9a704791cdbf52046ce1377f1796
2017-12-25 23:52:43 -08:00
Zuul
2b402d3cbf Merge "Fix watcher audit list command" 2017-12-26 04:49:49 +00:00
Zuul
cca3e75ac1 Merge "Add Datasource Abstraction" 2017-12-26 03:02:36 +00:00
OpenStack Proposal Bot
6f27275f44 Updated from global requirements
Change-Id: I26c1f4be398496b88b69094ec1804b07f7c1d7f1
2017-12-23 10:18:41 +00:00
Alexander Chadin
95548af426 Fix watcher audit list command
This patch set adds a data migration that fills unnamed audits
with a name like strategy.name + '-' + audit.created_at.

Closes-Bug: #1738758
Change-Id: I1d65b3110166e9f64ce5b80a34672d24d629807d
2017-12-22 08:43:28 +00:00
licanwei
cdc847d352 check actionplan state when deleting actionplan
If actionplan is 'ONGOING' or 'PENDING',
don't delete it.

Change-Id: I8bfa31a70bba0a7adb1bfd09fc22e6a66b9ebf3a
Closes-Bug: #1738360
2017-12-21 22:32:09 -08:00
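The guard described above amounts to refusing deletion while the action plan is still in flight. This is a minimal sketch with a dict standing in for the database and a generic exception; Watcher's actual check uses its own API layer and exception types.

```python
# Illustrative sketch of the state guard described above.
UNDELETABLE_STATES = ("ONGOING", "PENDING")


def delete_action_plan(plan, store):
    """Delete an action plan unless it is still running or queued."""
    if plan["state"] in UNDELETABLE_STATES:
        raise RuntimeError(
            "Cannot delete action plan in state %s" % plan["state"])
    del store[plan["uuid"]]
```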
Zuul
b69244f8ef Merge "TrivialFix: remove redundant import alias" 2017-12-21 15:43:42 +00:00
Dao Cong Tien
cbd6d88025 TrivialFix: remove redundant import alias
Change-Id: Idf53683def6588e626144ecc3b74033d57ab3f87
2017-12-21 20:09:07 +07:00
Zuul
028d7c939c Merge "check audit state when deleting audit" 2017-12-20 09:04:02 +00:00
licanwei
a8fa969379 check audit state when deleting audit
If the audit is 'ONGOING' or 'PENDING', don't delete it.

Change-Id: Iac714e7e78e7bb5b52f401e5b2ad0e1d8af8bb45
Closes-Bug: #1738358
2017-12-19 18:09:42 -08:00
licanwei
80ee4b29f5 reset job interval when audit was updated
When we update an existing audit's interval, the 'execute_audit'
job still runs at the old interval.
We need to update the job's interval as well.

Change-Id: I402efaa6b2fd3a454717c3df9746c827927ffa91
Closes-Bug: #1738140
2017-12-19 17:57:37 -08:00
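The bug above boils down to the scheduled job keeping its old interval after the audit record changes, so the fix is to reschedule the job alongside the update. This toy scheduler only illustrates the idea; Watcher uses APScheduler, whose API differs from this sketch.

```python
# Toy scheduler sketch, not the real APScheduler API.
class PeriodicJob:
    def __init__(self, interval):
        self.interval = interval


class Scheduler:
    def __init__(self):
        self.jobs = {}

    def add_job(self, job_id, interval):
        self.jobs[job_id] = PeriodicJob(interval)

    def update_audit_interval(self, job_id, new_interval):
        # Without this reschedule step, the 'execute_audit' job keeps
        # firing at the old interval after the audit is updated.
        job = self.jobs[job_id]
        if job.interval != new_interval:
            job.interval = new_interval
```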
Zuul
e562c9173c Merge "Updated from global requirements" 2017-12-19 16:38:39 +00:00
OpenStack Proposal Bot
ec0c359037 Updated from global requirements
Change-Id: I96d4a5a7e2b05df3f06d7c08f64cd9bcf83ff99b
2017-12-19 01:52:42 +00:00
Andreas Jaeger
3b6bef180b Fix releasenotes build
Remove a stray import of watcher project that breaks releasenotes build.

Change-Id: I4d107449b88adb19a3f269b2f33221addef0d9d6
2017-12-18 15:39:25 +01:00
Zuul
640e4e1fea Merge "Update getting scoped storage CDM" 2017-12-18 14:31:39 +00:00
Zuul
eeb817cd6e Merge "listen to 'compute.instance.rebuild.end' event" 2017-12-18 13:12:26 +00:00
Hidekazu Nakamura
c6afa7c320 Update getting scoped storage CDM
Now that CDM scoping has been implemented, getting the scoped storage
model has to be updated.
This patch updates retrieval of the storage cluster data model.

Change-Id: Iefc22b54995aa8d2f3a7b3698575f6eb800d4289
2017-12-16 15:20:58 +00:00
OpenStack Proposal Bot
9ccd17e40b Updated from global requirements
Change-Id: I0af2c9fd266f925af5e3e8731b37a00dab91d6a8
2017-12-15 22:24:15 +00:00
Zuul
2a7e0d652c Merge "'get_volume_type_by_backendname' returns a list" 2017-12-14 06:18:04 +00:00
Zuul
a94e35b60e Merge "Fix 'unable to exclude instance'" 2017-12-14 05:38:34 +00:00
Zuul
72e3d5c7f9 Merge "Add and identify excluded instances in compute CDM" 2017-12-13 13:34:33 +00:00
aditi
be56441e55 Fix 'unable to exclude instance'
Change-Id: I1599a86a2ba7d3af755fb1412a5e38516c736957
Closes-Bug: #1736129
2017-12-12 10:29:35 +00:00
Zuul
aa2b213a45 Merge "Register default policies in code" 2017-12-12 03:38:13 +00:00
Zuul
668513d771 Merge "Updated from global requirements" 2017-12-12 02:57:47 +00:00
Lance Bragstad
0242d33adb Register default policies in code
This commit registers all policies formally kept in policy.json as
defaults in code. This is an effort to make policy management easier
for operators. More information on this initiative can be found
below:

  https://governance.openstack.org/tc/goals/queens/policy-in-code.html

bp policy-and-docs-in-code

Change-Id: Ibab08f8e1c95b86e08737c67a39c293566dbabc7
2017-12-11 15:19:10 +03:00
suzhengwei
c38dc9828b listen to 'compute.instance.rebuild.end' event
In an integrated cloud environment, many solutions may relocate
compute resources. Watcher should listen to all notifications that
represent compute resource changes in order to keep the compute CDM
up to date. Otherwise the compute CDM becomes stale and Watcher
cannot work reliably.

Change-Id: I793131dd8f24f1ac5f5a6a070bb4fe7980c8dfb2
Implements:blueprint listen-all-necessary-notifications
2017-12-08 16:18:35 +08:00
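Listening for an additional event type amounts to registering one more handler in the notification endpoint's dispatch table. This sketch uses a plain dict dispatch with made-up payload fields; it is not oslo.messaging's actual endpoint classes.

```python
# Minimal dispatch sketch, not the real oslo.messaging endpoint API.
class ComputeModelUpdater:
    def __init__(self):
        self.handlers = {
            "compute.instance.update": self.refresh_instance,
            # The fix: rebuilds also change instance state, so the
            # compute CDM must be refreshed on this event too.
            "compute.instance.rebuild.end": self.refresh_instance,
        }
        self.refreshed = []

    def refresh_instance(self, payload):
        self.refreshed.append(payload["instance_id"])

    def info(self, event_type, payload):
        handler = self.handlers.get(event_type)
        if handler:
            handler(payload)
```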
OpenStack Proposal Bot
4ce1a9096b Updated from global requirements
Change-Id: I04a2a04de3b32570bb0afaf9eb736976e888a031
2017-12-07 13:53:09 +00:00
Yumeng_Bao
02163d64aa bug fix remove volume migration type 'cold'
Migration action 'cold' is not intuitive for the developers and users,
so this patch replaces it with 'migrate' and 'retype'.

Change-Id: I58acac741499f47e79630a6031d44088681e038a
Closes-Bug: #1733247
2017-12-06 18:03:25 +08:00
suzhengwei
d91f0bff22 Add and identify excluded instances in compute CDM
Change-Id: If03893c5e9b6a37e1126ad91e4f3bfafe0f101d9
Implements:blueprint compute-cdm-include-all-instances
2017-12-06 17:43:42 +08:00
aditi
e401cb7c9d Add Datasource Abstraction
This patch set adds a datasource abstraction layer.

Change-Id: Id828e427b998aa34efa07e04e615c82c5730d3c9
Partially-Implements: blueprint watcher-multi-datasource
2017-12-05 17:33:04 +03:00
licanwei
fa31341bbb 'get_volume_type_by_backendname' returns a list
A storage pool can have many volume types, so
'get_volume_type_by_backendname' should return a list of types.

Closes-Bug: #1733257
Change-Id: I877d5886259e482089ed0f9944d97bb99f375824
2017-11-26 23:28:56 -08:00
225 changed files with 11309 additions and 2653 deletions

View File

@@ -1,39 +1,139 @@
- project:
name: openstack/watcher
check:
jobs:
- watcher-tempest-multinode
- watcher-tempest-functional
- watcher-tempest-dummy_optim
- watcher-tempest-actuator
- watcher-tempest-basic_optim
- watcher-tempest-workload_balancing
- watcherclient-tempest-functional
- legacy-rally-dsvm-watcher-rally
- openstack-tox-lower-constraints
gate:
jobs:
- watcher-tempest-functional
- watcher-tempest-dummy_optim
- watcher-tempest-actuator
- watcher-tempest-basic_optim
- watcher-tempest-workload_balancing
- watcherclient-tempest-functional
- legacy-rally-dsvm-watcher-rally
- openstack-tox-lower-constraints
- job:
name: watcher-tempest-base-multinode
parent: legacy-dsvm-base-multinode
run: playbooks/legacy/watcher-tempest-base-multinode/run.yaml
post-run: playbooks/legacy/watcher-tempest-base-multinode/post.yaml
timeout: 4200
name: watcher-tempest-dummy_optim
parent: watcher-tempest-multinode
vars:
tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_dummy_optim'
- job:
name: watcher-tempest-actuator
parent: watcher-tempest-multinode
vars:
tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_actuator'
- job:
name: watcher-tempest-basic_optim
parent: watcher-tempest-multinode
vars:
tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_basic_optim'
- job:
name: watcher-tempest-workload_balancing
parent: watcher-tempest-multinode
vars:
tempest_test_regex: 'watcher_tempest_plugin.tests.scenario.test_execute_workload_balancing'
- job:
name: watcher-tempest-multinode
parent: watcher-tempest-functional
voting: false
nodeset: openstack-two-node
pre-run: playbooks/pre.yaml
run: playbooks/orchestrate-tempest.yaml
roles:
- zuul: openstack/tempest
group-vars:
subnode:
devstack_local_conf:
post-config:
$NOVA_CONF:
libvirt:
live_migration_uri: 'qemu+ssh://root@%s/system'
devstack_services:
watcher-api: false
watcher-decision-engine: false
watcher-applier: false
# We need to add TLS support for watcher plugin
tls-proxy: false
ceilometer: false
ceilometer-acompute: false
ceilometer-acentral: false
ceilometer-anotification: false
watcher: false
gnocchi-api: false
gnocchi-metricd: false
rabbit: false
mysql: false
vars:
devstack_local_conf:
post-config:
$NOVA_CONF:
libvirt:
live_migration_uri: 'qemu+ssh://root@%s/system'
test-config:
$TEMPEST_CONFIG:
compute:
min_compute_nodes: 2
compute-feature-enabled:
live_migration: true
block_migration_for_live_migration: true
devstack_plugins:
ceilometer: https://git.openstack.org/openstack/ceilometer
- job:
name: watcher-tempest-functional
parent: devstack-tempest
timeout: 7200
required-projects:
- openstack/ceilometer
- openstack-infra/devstack-gate
- openstack/python-openstackclient
- openstack/python-watcherclient
- openstack/watcher
- openstack/watcher-tempest-plugin
nodeset: legacy-ubuntu-xenial-2-node
- openstack/tempest
vars:
devstack_plugins:
watcher: https://git.openstack.org/openstack/watcher
devstack_services:
tls-proxy: false
watcher-api: true
watcher-decision-engine: true
watcher-applier: true
tempest: true
s-account: false
s-container: false
s-object: false
s-proxy: false
devstack_localrc:
TEMPEST_PLUGINS: '/opt/stack/watcher-tempest-plugin'
tempest_test_regex: 'watcher_tempest_plugin.tests.api'
tox_envlist: all
tox_environment:
# Do we really need to set this? It's cargo culted
PYTHONUNBUFFERED: 'true'
zuul_copy_output:
/etc/hosts: logs
- job:
name: watcher-tempest-multinode
parent: watcher-tempest-base-multinode
voting: false
- job:
# This job is used by python-watcherclient repo
# This job is used in python-watcherclient repo
name: watcherclient-tempest-functional
parent: legacy-dsvm-base
run: playbooks/legacy/watcherclient-tempest-functional/run.yaml
post-run: playbooks/legacy/watcherclient-tempest-functional/post.yaml
parent: watcher-tempest-functional
voting: false
timeout: 4200
required-projects:
- openstack-dev/devstack
- openstack-infra/devstack-gate
- openstack/python-openstackclient
- openstack/python-watcherclient
- openstack/watcher
vars:
tempest_concurrency: 1
devstack_localrc:
TEMPEST_PLUGINS: '/opt/stack/python-watcherclient'
tempest_test_regex: 'watcherclient.tests.functional'

View File

@@ -8,4 +8,4 @@
watcher Style Commandments
==========================
Read the OpenStack Style Commandments https://docs.openstack.org/developer/hacking/
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/

View File

@@ -2,8 +2,8 @@
Team and repository tags
========================
.. image:: https://governance.openstack.org/badges/watcher.svg
:target: https://governance.openstack.org/reference/tags/index.html
.. image:: https://governance.openstack.org/tc/badges/watcher.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
.. Change things from this point on
@@ -22,10 +22,11 @@ service for multi-tenant OpenStack-based clouds.
Watcher provides a robust framework to realize a wide range of cloud
optimization goals, including the reduction of data center
operating costs, increased system performance via intelligent virtual machine
migration, increased energy efficiency-and more!
migration, increased energy efficiency and more!
* Free software: Apache license
* Wiki: https://wiki.openstack.org/wiki/Watcher
* Source: https://github.com/openstack/watcher
* Source: https://github.com/openstack/watcher
* Bugs: https://bugs.launchpad.net/watcher
* Documentation: https://docs.openstack.org/watcher/latest/
* Release notes: https://docs.openstack.org/releasenotes/watcher/

View File

@@ -42,7 +42,7 @@ WATCHER_AUTH_CACHE_DIR=${WATCHER_AUTH_CACHE_DIR:-/var/cache/watcher}
WATCHER_CONF_DIR=/etc/watcher
WATCHER_CONF=$WATCHER_CONF_DIR/watcher.conf
WATCHER_POLICY_JSON=$WATCHER_CONF_DIR/policy.json
WATCHER_POLICY_YAML=$WATCHER_CONF_DIR/policy.yaml.sample
WATCHER_DEVSTACK_DIR=$WATCHER_DIR/devstack
WATCHER_DEVSTACK_FILES_DIR=$WATCHER_DEVSTACK_DIR/files
@@ -106,7 +106,25 @@ function configure_watcher {
# Put config files in ``/etc/watcher`` for everyone to find
sudo install -d -o $STACK_USER $WATCHER_CONF_DIR
install_default_policy watcher
local project=watcher
local project_uc
project_uc=$(echo watcher|tr a-z A-Z)
local conf_dir="${project_uc}_CONF_DIR"
# eval conf dir to get the variable
conf_dir="${!conf_dir}"
local project_dir="${project_uc}_DIR"
# eval project dir to get the variable
project_dir="${!project_dir}"
local sample_conf_dir="${project_dir}/etc/${project}"
local sample_policy_dir="${project_dir}/etc/${project}/policy.d"
local sample_policy_generator="${project_dir}/etc/${project}/oslo-policy-generator/watcher-policy-generator.conf"
# first generate policy.yaml
oslopolicy-sample-generator --config-file $sample_policy_generator
# then optionally copy over policy.d
if [[ -d $sample_policy_dir ]]; then
cp -r $sample_policy_dir $conf_dir/policy.d
fi
# Rebuild the config file from scratch
create_watcher_conf
@@ -159,15 +177,19 @@ function create_watcher_conf {
iniset $WATCHER_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
iniset $WATCHER_CONF DEFAULT control_exchange watcher
iniset_rpc_backend watcher $WATCHER_CONF
iniset $WATCHER_CONF database connection $(database_connection_url watcher)
iniset $WATCHER_CONF api host "$WATCHER_SERVICE_HOST"
iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT"
iniset $WATCHER_CONF oslo_policy policy_file $WATCHER_POLICY_JSON
if is_service_enabled tls-proxy; then
iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT_INT"
# iniset $WATCHER_CONF api enable_ssl_api "True"
else
iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT"
fi
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_userid $RABBIT_USERID
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_host $RABBIT_HOST
iniset $WATCHER_CONF oslo_policy policy_file $WATCHER_POLICY_YAML
iniset $WATCHER_CONF oslo_messaging_notifications driver "messagingv2"
@@ -279,8 +301,7 @@ function start_watcher_api {
# Start proxies if enabled
if is_service_enabled tls-proxy; then
start_tls_proxy '*' $WATCHER_SERVICE_PORT $WATCHER_SERVICE_HOST $WATCHER_SERVICE_PORT_INT &
start_tls_proxy '*' $EC2_SERVICE_PORT $WATCHER_SERVICE_HOST $WATCHER_SERVICE_PORT_INT &
start_tls_proxy watcher '*' $WATCHER_SERVICE_PORT $WATCHER_SERVICE_HOST $WATCHER_SERVICE_PORT_INT
fi
}

View File

@@ -3,6 +3,9 @@
# Make sure rabbit is enabled
enable_service rabbit
# Make sure mysql is enabled
enable_service mysql
# Enable Watcher services
enable_service watcher-api
enable_service watcher-decision-engine

View File

@@ -20,7 +20,7 @@ It is used via a single directive in the .rst file
"""
from sphinx.util.compat import Directive
from docutils.parsers.rst import Directive
from docutils import nodes
from watcher.notifications import base as notification

View File

@@ -19,7 +19,7 @@ The source install instructions specifically avoid using platform specific
packages, instead using the source for the code and the Python Package Index
(PyPi_).
.. _PyPi: https://pypi.python.org/pypi
.. _PyPi: https://pypi.org/
It's expected that your system already has python2.7_, latest version of pip_,
and git_ available.

View File

@@ -1,3 +1,7 @@
==================================================
OpenStack Infrastructure Optimization Service APIs
==================================================
.. toctree::
:maxdepth: 1

View File

@@ -42,6 +42,7 @@ extensions = [
'ext.versioned_notifications',
'oslo_config.sphinxconfiggen',
'openstackdocstheme',
'sphinx.ext.napoleon',
]
wsme_protocols = ['restjson']

View File

@@ -129,10 +129,14 @@ Configure the Identity service for the Watcher service
.. code-block:: bash
$ openstack endpoint create --region YOUR_REGION watcher \
--publicurl http://WATCHER_API_PUBLIC_IP:9322 \
--internalurl http://WATCHER_API_INTERNAL_IP:9322 \
--adminurl http://WATCHER_API_ADMIN_IP:9322
$ openstack endpoint create --region YOUR_REGION
watcher public http://WATCHER_API_PUBLIC_IP:9322
$ openstack endpoint create --region YOUR_REGION
watcher internal http://WATCHER_API_INTERNAL_IP:9322
$ openstack endpoint create --region YOUR_REGION
watcher admin http://WATCHER_API_ADMIN_IP:9322
.. _watcher-db_configuration:
@@ -200,8 +204,8 @@ configuration file, in order:
Although some configuration options are mentioned here, it is recommended that
you review all the `available options
<https://git.openstack.org/cgit/openstack/watcher/tree/etc/watcher/watcher.conf.sample>`_
you review all the :ref:`available options
<watcher_sample_configuration_files>`
so that the watcher service is configured for your needs.
#. The Watcher Service stores information in a database. This guide uses the
@@ -217,7 +221,7 @@ so that the watcher service is configured for your needs.
# The SQLAlchemy connection string used to connect to the
# database (string value)
#connection=<None>
connection = mysql://watcher:WATCHER_DBPASSWORD@DB_IP/watcher?charset=utf8
connection = mysql+pymysql://watcher:WATCHER_DBPASSWORD@DB_IP/watcher?charset=utf8
#. Configure the Watcher Service to use the RabbitMQ message broker by
setting one or more of these options. Replace RABBIT_HOST with the
@@ -235,21 +239,8 @@ so that the watcher service is configured for your needs.
# option. (string value)
control_exchange = watcher
...
[oslo_messaging_rabbit]
# The username used by the message broker (string value)
rabbit_userid = RABBITMQ_USER
# The password of user used by the message broker (string value)
rabbit_password = RABBITMQ_PASSWORD
# The host where the message brokeris installed (string value)
rabbit_host = RABBIT_HOST
# The port used bythe message broker (string value)
#rabbit_port = 5672
# ...
transport_url = rabbit://RABBITMQ_USER:RABBITMQ_PASSWORD@RABBIT_HOST
#. Watcher API shall validate the token provided by every incoming request,
@@ -273,7 +264,7 @@ so that the watcher service is configured for your needs.
# Authentication URL (unknown value)
#auth_url = <None>
auth_url = http://IDENTITY_IP:35357
auth_url = http://IDENTITY_IP:5000
# Username (unknown value)
# Deprecated group/name - [DEFAULT]/username
@@ -319,7 +310,7 @@ so that the watcher service is configured for your needs.
# Authentication URL (unknown value)
#auth_url = <None>
auth_url = http://IDENTITY_IP:35357
auth_url = http://IDENTITY_IP:5000
# Username (unknown value)
# Deprecated group/name - [DEFAULT]/username
@@ -349,7 +340,7 @@ so that the watcher service is configured for your needs.
[nova_client]
# Version of Nova API to use in novaclient. (string value)
#api_version = 2.53
#api_version = 2.56
api_version = 2.1
#. Create the Watcher Service database tables::
@@ -391,7 +382,7 @@ Ceilometer is designed to collect measurements from OpenStack services and from
other external components. If you would like to add new meters to the currently
existing ones, you need to follow the documentation below:
#. https://docs.openstack.org/ceilometer/latest/contributor/new_meters.html#meters
#. https://docs.openstack.org/ceilometer/latest/contributor/measurements.html#new-measurements
The Ceilometer collector uses a pluggable storage system, meaning that you can
pick any database system you prefer.

View File

@@ -1,5 +1,9 @@
===================
Configuration Guide
===================
.. toctree::
:maxdepth: 1
:maxdepth: 2
configuring
watcher

View File

@@ -39,7 +39,7 @@ notifications of important events.
* https://launchpad.net
* https://launchpad.net/watcher
* https://launchpad.net/~openstack
* https://launchpad.net/openstack
Project Hosting Details
@@ -49,7 +49,7 @@ Bug tracker
https://launchpad.net/watcher
Mailing list (prefix subjects with ``[watcher]`` for faster responses)
https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
http://lists.openstack.org/pipermail/openstack-dev/
Wiki
https://wiki.openstack.org/Watcher
@@ -65,7 +65,7 @@ IRC Channel
Weekly Meetings
On Wednesdays at 14:00 UTC on even weeks in the ``#openstack-meeting-4``
IRC channel, 13:00 UTC on odd weeks in the ``#openstack-meeting-alt``
IRC channel, 08:00 UTC on odd weeks in the ``#openstack-meeting-alt``
IRC channel (`meetings logs`_)
.. _changelog: http://eavesdrop.openstack.org/irclogs/%23openstack-watcher/

View File

@@ -37,7 +37,7 @@ different version of the above, please document your configuration here!
.. _Python: https://www.python.org/
.. _git: https://git-scm.com/
.. _setuptools: https://pypi.python.org/pypi/setuptools
.. _setuptools: https://pypi.org/project/setuptools
.. _virtualenvwrapper: https://virtualenvwrapper.readthedocs.io/en/latest/install.html
Getting the latest code
@@ -69,8 +69,8 @@ itself.
These dependencies can be installed from PyPi_ using the Python tool pip_.
.. _PyPi: https://pypi.python.org/
.. _pip: https://pypi.python.org/pypi/pip
.. _PyPi: https://pypi.org/
.. _pip: https://pypi.org/project/pip
However, your system *may* need additional dependencies that `pip` (and by
extension, PyPi) cannot satisfy. These dependencies should be installed
@@ -123,9 +123,10 @@ You can re-activate this virtualenv for your current shell using:
$ workon watcher
For more information on virtual environments, see virtualenv_.
For more information on virtual environments, see virtualenv_ and
virtualenvwrapper_.
.. _virtualenv: https://www.virtualenv.org/
.. _virtualenv: https://pypi.org/project/virtualenv/

View File

@@ -79,7 +79,7 @@ requirements.txt file::
.. _cookiecutter: https://github.com/audreyr/cookiecutter
.. _OpenStack cookiecutter: https://github.com/openstack-dev/cookiecutter
.. _python-watcher: https://pypi.python.org/pypi/python-watcher
.. _python-watcher: https://pypi.org/project/python-watcher
Implementing a plugin for Watcher
=================================

View File

@@ -208,7 +208,7 @@ Here below is how to register ``DummyClusterDataModelCollector`` using pbr_:
watcher_cluster_data_model_collectors =
dummy = thirdparty.dummy:DummyClusterDataModelCollector
.. _pbr: http://docs.openstack.org/pbr/latest
.. _pbr: https://docs.openstack.org/pbr/latest/
Add new notification endpoints

View File

@@ -263,7 +263,7 @@ requires new metrics not covered by Ceilometer, you can add them through a
`Ceilometer plugin`_.
.. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/decision_engine/cluster/history/ceilometer.py
.. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/datasource/ceilometer.py
.. _`Ceilometer developer guide`: https://docs.openstack.org/ceilometer/latest/contributor/architecture.html#storing-accessing-the-data
.. _`Ceilometer`: https://docs.openstack.org/ceilometer/latest
.. _`Monasca`: https://github.com/openstack/monasca-api/blob/master/docs/monasca-api-spec.md

View File

@@ -31,7 +31,7 @@ the following::
(watcher) $ tox -e pep8
.. _tox: https://tox.readthedocs.org/
.. _Gerrit: http://review.openstack.org/
.. _Gerrit: https://review.openstack.org/
You may pass options to the test programs using positional arguments. To run a
specific unit test, you can pass extra options to `os-testr`_ after putting

View File

@@ -267,14 +267,14 @@ the same goal and same workload of the :ref:`Cluster <cluster_definition>`.
Project
=======
:ref:`Projects <project_definition>` represent the base unit of ownership
:ref:`Projects <project_definition>` represent the base unit of "ownership"
in OpenStack, in that all :ref:`resources <managed_resource_definition>` in
OpenStack should be owned by a specific :ref:`project <project_definition>`.
In OpenStack Identity, a :ref:`project <project_definition>` must be owned by a
specific domain.
Please, read `the official OpenStack definition of a Project
<http://docs.openstack.org/glossary/content/glossary.html>`_.
<https://docs.openstack.org/doc-contrib-guide/common/glossary.html>`_.
.. _scoring_engine_definition:

View File

@@ -15,7 +15,7 @@ metrics receiver, complex event processor and profiler, optimization processor
and an action plan applier. This provides a robust framework to realize a wide
range of cloud optimization goals, including the reduction of data center
operating costs, increased system performance via intelligent virtual machine
migration, increased energy efficiencyand more!
migration, increased energy efficiency and more!
Watcher project consists of several source code repositories:

View File

@@ -26,8 +26,8 @@
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
@@ -47,7 +47,7 @@
[watcher_clients_auth]
...
auth_type = password
auth_url = http://controller:35357
auth_url = http://controller:5000
username = watcher
password = WATCHER_PASS
project_domain_name = default

View File

@@ -10,7 +10,7 @@ Infrastructure Optimization service
verify.rst
next-steps.rst
The Infrastructure Optimization service (watcher) provides
The Infrastructure Optimization service (Watcher) provides
flexible and scalable resource optimization service for
multi-tenant OpenStack-based clouds.
@@ -21,19 +21,19 @@ applier. This provides a robust framework to realize a wide
range of cloud optimization goals, including the reduction
of data center operating costs, increased system performance
via intelligent virtual machine migration, increased energy
efficiencyand more!
efficiency and more!
Watcher also supports a pluggable architecture by which custom
optimization algorithms, data metrics and data profilers can be
developed and inserted into the Watcher framework.
Check the documentation for watcher optimization strategies at
https://docs.openstack.org/watcher/latest/strategies/index.html
`Strategies <https://docs.openstack.org/watcher/latest/strategies/index.html>`_.
Check watcher glossary at
https://docs.openstack.org/watcher/latest/glossary.html
Check watcher glossary at `Glossary
<https://docs.openstack.org/watcher/latest/glossary.html>`_.
This chapter assumes a working setup of OpenStack following the
`OpenStack Installation Tutorial
<https://docs.openstack.org/pike/install/>`_.
<https://docs.openstack.org/queens/install/>`_.


@@ -6,4 +6,4 @@ Next steps
Your OpenStack environment now includes the watcher service.
To add additional services, see
https://docs.openstack.org/pike/install/.
https://docs.openstack.org/queens/install/.


@@ -7,9 +7,7 @@ Service for the Watcher API
---------------------------
:Author: openstack@lists.launchpad.net
:Date:
:Copyright: OpenStack Foundation
:Version:
:Manual section: 1
:Manual group: cloud computing


@@ -7,9 +7,7 @@ Service for the Watcher Applier
-------------------------------
:Author: openstack@lists.launchpad.net
:Date:
:Copyright: OpenStack Foundation
:Version:
:Manual section: 1
:Manual group: cloud computing


@@ -7,9 +7,7 @@ Service for the Watcher Decision Engine
---------------------------------------
:Author: openstack@lists.launchpad.net
:Date:
:Copyright: OpenStack Foundation
:Version:
:Manual section: 1
:Manual group: cloud computing


@@ -0,0 +1,86 @@
=============
Actuator
=============
Synopsis
--------
**display name**: ``Actuator``
**goal**: ``unclassified``
.. watcher-term:: watcher.decision_engine.strategy.strategies.actuation.Actuator
Requirements
------------
Metrics
*******
None
Cluster data model
******************
None
Actions
*******
Default Watcher's actions.
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
==================== ====== ===================== =============================
parameter type default Value description
==================== ====== ===================== =============================
``actions`` array None Actions to be executed.
==================== ====== ===================== =============================
The elements of actions array are:
==================== ====== ===================== =============================
parameter type default Value description
==================== ====== ===================== =============================
``action_type`` string None Action name defined in
setup.cfg (mandatory)
``resource_id`` string None Resource_id of the action.
``input_parameters`` object None Input_parameters of the
action (mandatory).
==================== ====== ===================== =============================
Efficacy Indicator
------------------
None
Algorithm
---------
This strategy creates an action plan with a predefined set of actions.
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 unclassified --strategy actuator
$ openstack optimize audit create -a at1 \
-p actions='[{"action_type": "migrate", "resource_id": "56a40802-6fde-4b59-957c-c84baec7eaed", "input_parameters": {"migration_type": "live", "source_node": "s01"}}]'
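The value passed via ``-p actions=`` must be valid JSON. As a hypothetical pre-flight step (not part of Watcher), the array can be built in a shell variable and checked locally before creating the audit; the resource ID and node name below are just the placeholder values from the example above:

```shell
# Hypothetical helper: validate the actions array locally before
# passing it to `openstack optimize audit create`.
actions='[{"action_type": "migrate",
           "resource_id": "56a40802-6fde-4b59-957c-c84baec7eaed",
           "input_parameters": {"migration_type": "live",
                                "source_node": "s01"}}]'
# python3 -m json.tool exits non-zero on malformed JSON.
echo "$actions" | python3 -m json.tool > /dev/null && echo "actions JSON is valid"
```

This catches quoting mistakes (a common source of audit creation failures) before the request ever reaches the Watcher API.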
External Links
--------------
None


@@ -9,7 +9,7 @@ Synopsis
**goal**: ``server_consolidation``
.. watcher-term:: watcher.decision_engine.strategy.strategies.basic_consolidation
.. watcher-term:: watcher.decision_engine.strategy.strategies.basic_consolidation.BasicConsolidation
Requirements
------------


@@ -0,0 +1,92 @@
===========================
Host Maintenance Strategy
===========================
Synopsis
--------
**display name**: ``Host Maintenance Strategy``
**goal**: ``cluster_maintaining``
.. watcher-term:: watcher.decision_engine.strategy.strategies.host_maintenance.HostMaintenance
Requirements
------------
None.
Metrics
*******
None
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migration``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
==================== ====== ====================================
parameter type default Value description
==================== ====== ====================================
``maintenance_node`` String The name of the compute node which
needs maintenance. Required.
``backup_node`` String The name of the compute node which
will back up the maintenance node.
Optional.
==================== ====== ====================================
Efficacy Indicator
------------------
None
Algorithm
---------
For more information on the Host Maintenance Strategy please refer
to: https://specs.openstack.org/openstack/watcher-specs/specs/queens/approved/cluster-maintenance-strategy.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audit create \
-g cluster_maintaining -s host_maintenance \
-p maintenance_node=compute01 \
-p backup_node=compute02 \
--auto-trigger
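A minimal local sanity check, sketched here as a hypothetical helper (not part of Watcher): the backup node, when given, must be a different compute node than the one under maintenance. The node names are the placeholder values from the command above:

```shell
# Hypothetical pre-flight check: reject a backup node that is the same
# host as the node being taken down for maintenance.
maintenance_node=compute01
backup_node=compute02
if [ "$maintenance_node" = "$backup_node" ]; then
    echo "backup_node must differ from maintenance_node" >&2
    exit 1
fi
echo "parameters ok"
```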
External Links
--------------
None.


@@ -9,11 +9,7 @@ Synopsis
**goal**: ``thermal_optimization``
Outlet (Exhaust Air) temperature is a new thermal telemetry which can be
used to measure the host's thermal/workload status. This strategy makes
decisions to migrate workloads to the hosts with good thermal condition
(lowest outlet temperature) when the outlet temperature of source hosts
reach a configurable threshold.
.. watcher-term:: watcher.decision_engine.strategy.strategies.outlet_temp_control
Requirements
------------


@@ -9,7 +9,7 @@ Synopsis
**goal**: ``saving_energy``
.. watcher-term:: watcher.decision_engine.strategy.strategies.saving_energy
.. watcher-term:: watcher.decision_engine.strategy.strategies.saving_energy.SavingEnergy
Requirements
------------
@@ -67,13 +67,13 @@ parameter type default description
Efficacy Indicator
------------------
Energy saving strategy efficacy indicator is unclassified.
https://github.com/openstack/watcher/blob/master/watcher/decision_engine/goal/goals.py#L215-L218
None
Algorithm
---------
For more information on the Energy Saving Strategy please refer to:http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html
For more information on the Energy Saving Strategy please refer to:
http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html
How to use it?
---------------
@@ -91,10 +91,10 @@ step 2: Create audit to do optimization
$ openstack optimize audittemplate create \
at1 saving_energy --strategy saving_energy
$ openstack optimize audit create -a at1
$ openstack optimize audit create -a at1 \
-p free_used_percent=20.0
External Links
--------------
*Spec URL*
http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html
None


@@ -0,0 +1,87 @@
========================
Storage capacity balance
========================
Synopsis
--------
**display name**: ``Storage Capacity Balance Strategy``
**goal**: ``workload_balancing``
.. watcher-term:: watcher.decision_engine.strategy.strategies.storage_capacity_balance.StorageCapacityBalance
Requirements
------------
Metrics
*******
None
Cluster data model
******************
Storage cluster data model is required:
.. watcher-term:: watcher.decision_engine.model.collector.cinder.CinderClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 25 35
:header-rows: 1
* - action
- description
* - ``volume_migrate``
- .. watcher-term:: watcher.applier.actions.volume_migration.VolumeMigrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameter is:
==================== ====== ============= =====================================
parameter type default Value description
==================== ====== ============= =====================================
``volume_threshold`` Number 80.0 Volume threshold for capacity balance
==================== ====== ============= =====================================
Efficacy Indicator
------------------
None
Algorithm
---------
For more information on the storage capacity balance strategy please refer to:
http://specs.openstack.org/openstack/watcher-specs/specs/queens/implemented/storage-capacity-balance.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 workload_balancing --strategy storage_capacity_balance
$ openstack optimize audit create -a at1 \
-p volume_threshold=85.0
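Since ``volume_threshold`` is a capacity percentage, a value outside (0, 100] is almost certainly a mistake. A hypothetical local range check (not part of Watcher) could be run before creating the audit:

```shell
# Hypothetical range check: volume_threshold is interpreted as a
# percentage of pool capacity, so validate it locally with awk.
volume_threshold=85.0
if awk -v t="$volume_threshold" 'BEGIN { exit !(t > 0 && t <= 100) }'; then
    echo "volume_threshold=$volume_threshold is a plausible percentage"
else
    echo "volume_threshold=$volume_threshold is out of range" >&2
    exit 1
fi
```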
External Links
--------------
None


@@ -9,7 +9,7 @@ Synopsis
**goal**: ``airflow_optimization``
.. watcher-term:: watcher.decision_engine.strategy.strategies.uniform_airflow
.. watcher-term:: watcher.decision_engine.strategy.strategies.uniform_airflow.UniformAirflow
Requirements
------------


@@ -9,7 +9,7 @@ Synopsis
**goal**: ``vm_consolidation``
.. watcher-term:: watcher.decision_engine.strategy.strategies.vm_workload_consolidation
.. watcher-term:: watcher.decision_engine.strategy.strategies.vm_workload_consolidation.VMWorkloadConsolidation
Requirements
------------


@@ -9,7 +9,7 @@ Synopsis
**goal**: ``workload_balancing``
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_stabilization
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_stabilization.WorkloadStabilization
Requirements
------------


@@ -9,7 +9,7 @@ Synopsis
**goal**: ``workload_balancing``
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_balance
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_balance.WorkloadBalance
Requirements
------------


@@ -0,0 +1,154 @@
==============
Zone migration
==============
Synopsis
--------
**display name**: ``Zone migration``
**goal**: ``hardware_maintenance``
.. watcher-term:: watcher.decision_engine.strategy.strategies.zone_migration.ZoneMigration
Requirements
------------
Metrics
*******
None
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Storage cluster data model is also required:
.. watcher-term:: watcher.decision_engine.model.collector.cinder.CinderClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migrate``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
* - ``volume_migrate``
- .. watcher-term:: watcher.applier.actions.volume_migration.VolumeMigrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
======================== ======== ============= ==============================
parameter type default Value description
======================== ======== ============= ==============================
``compute_nodes`` array None Compute nodes to migrate.
``storage_pools`` array None Storage pools to migrate.
``parallel_total`` integer 6 The number of actions to be
run in parallel in total.
``parallel_per_node`` integer 2 The number of actions to be
run in parallel per compute
node.
``parallel_per_pool`` integer 2 The number of actions to be
run in parallel per storage
pool.
``priority`` object None List prioritizes instances
and volumes.
``with_attached_volume`` boolean False False: Instances will migrate
after all volumes migrate.
True: An instance will migrate
after the attached volumes
migrate.
======================== ======== ============= ==============================
The elements of compute_nodes array are:
============= ======= =============== =============================
parameter type default Value description
============= ======= =============== =============================
``src_node`` string None Compute node from which
instances migrate (mandatory).
``dst_node`` string None Compute node to which
instances migrate.
============= ======= =============== =============================
The elements of storage_pools array are:
============= ======= =============== ==============================
parameter type default Value description
============= ======= =============== ==============================
``src_pool`` string None Storage pool from which
volumes migrate (mandatory).
``dst_pool`` string None Storage pool to which
volumes migrate.
``src_type`` string None Source volume type(mandatory).
``dst_type`` string None Destination volume type
(mandatory).
============= ======= =============== ==============================
The elements of priority object are:
================ ======= =============== ======================
parameter type default Value description
================ ======= =============== ======================
``project`` array None Project names.
``compute_node`` array None Compute node names.
``storage_pool`` array None Storage pool names.
``compute`` enum None Instance attributes.
|compute|
``storage`` enum None Volume attributes.
|storage|
================ ======= =============== ======================
.. |compute| replace:: ["vcpu_num", "mem_size", "disk_size", "created_at"]
.. |storage| replace:: ["size", "created_at"]
Efficacy Indicator
------------------
.. watcher-func::
:format: literal_block
watcher.decision_engine.goal.efficacy.specs.HardwareMaintenance.get_global_efficacy_indicator
Algorithm
---------
For more information on the zone migration strategy please refer
to: http://specs.openstack.org/openstack/watcher-specs/specs/queens/implemented/zone-migration-strategy.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 hardware_maintenance --strategy zone_migration
$ openstack optimize audit create -a at1 \
-p compute_nodes='[{"src_node": "s01", "dst_node": "d01"}]'
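Because this strategy takes several JSON-valued parameters, a hypothetical helper (not part of Watcher) can build them in variables and validate them locally before passing them with ``-p``. The node names and project name below are placeholders; the ``compute`` values come from the enum listed in the priority table above:

```shell
# Hypothetical helper: compose the compute_nodes and priority parameters
# and verify each one parses as JSON before creating the audit.
compute_nodes='[{"src_node": "s01", "dst_node": "d01"}]'
priority='{"project": ["pj1"], "compute": ["vcpu_num", "created_at"]}'
for p in "$compute_nodes" "$priority"; do
    echo "$p" | python3 -m json.tool > /dev/null || exit 1
done
echo "zone migration parameters are valid JSON"
```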
External Links
--------------
None


@@ -39,6 +39,22 @@ named ``watcher``, or by using the `OpenStack CLI`_ ``openstack``.
If you want to deploy Watcher in Horizon, please refer to the `Watcher Horizon
plugin installation guide`_.
.. note::
Note that this guide uses the `OpenStack CLI`_ as the primary interface.
Nevertheless, you can use the `Watcher CLI`_ in the same way by replacing
.. code:: bash
$ openstack optimize ...
with
.. code:: bash
$ watcher ...
.. _`installation guide`: https://docs.openstack.org/python-watcherclient/latest
.. _`Watcher Horizon plugin installation guide`: https://docs.openstack.org/watcher-dashboard/latest/install/installation.html
.. _`OpenStack CLI`: https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html
@@ -51,10 +67,6 @@ watcher binary without options.
.. code:: bash
$ watcher help
or::
$ openstack help optimize
How do I run an audit of my cluster?
@@ -64,10 +76,6 @@ First, you need to find the :ref:`goal <goal_definition>` you want to achieve:
.. code:: bash
$ watcher goal list
or::
$ openstack optimize goal list
.. note::
@@ -81,10 +89,6 @@ An :ref:`audit template <audit_template_definition>` defines an optimization
.. code:: bash
$ watcher audittemplate create my_first_audit_template <your_goal>
or::
$ openstack optimize audittemplate create my_first_audit_template <your_goal>
Although optional, you may want to actually set a specific strategy for your
@@ -93,10 +97,6 @@ following command:
.. code:: bash
$ watcher strategy list --goal <your_goal_uuid_or_name>
or::
$ openstack optimize strategy list --goal <your_goal_uuid_or_name>
You can use the following command to check strategy details including which
@@ -104,21 +104,12 @@ parameters of which format it supports:
.. code:: bash
$ watcher strategy show <your_strategy>
or::
$ openstack optimize strategy show <your_strategy>
The command to create your audit template would then be:
.. code:: bash
$ watcher audittemplate create my_first_audit_template <your_goal> \
--strategy <your_strategy>
or::
$ openstack optimize audittemplate create my_first_audit_template <your_goal> \
--strategy <your_strategy>
@@ -133,10 +124,6 @@ audit) that you want to use.
.. code:: bash
$ watcher audittemplate list
or::
$ openstack optimize audittemplate list
- Start an audit based on this :ref:`audit template
@@ -144,10 +131,6 @@ or::
.. code:: bash
$ watcher audit create -a <your_audit_template>
or::
$ openstack optimize audit create -a <your_audit_template>
If your_audit_template was created by --strategy <your_strategy>, and it
@@ -156,11 +139,6 @@ format), your can append `-p` to input required parameters:
.. code:: bash
$ watcher audit create -a <your_audit_template> \
-p <your_strategy_para1>=5.5 -p <your_strategy_para2>=hi
or::
$ openstack optimize audit create -a <your_audit_template> \
-p <your_strategy_para1>=5.5 -p <your_strategy_para2>=hi
@@ -173,19 +151,13 @@ Input parameter could cause audit creation failure, when:
Watcher service will compute an :ref:`Action Plan <action_plan_definition>`
composed of a list of potential optimization :ref:`actions <action_definition>`
(instance migration, disabling of a compute node, ...) according to the
:ref:`goal <goal_definition>` to achieve. You can see all of the goals
available in section ``[watcher_strategies]`` of the Watcher service
configuration file.
:ref:`goal <goal_definition>` to achieve.
- Wait until the Watcher audit has produced a new :ref:`action plan
<action_plan_definition>`, and get it:
.. code:: bash
$ watcher actionplan list --audit <the_audit_uuid>
or::
$ openstack optimize actionplan list --audit <the_audit_uuid>
- Have a look on the list of optimization :ref:`actions <action_definition>`
@@ -193,10 +165,6 @@ or::
.. code:: bash
$ watcher action list --action-plan <the_action_plan_uuid>
or::
$ openstack optimize action list --action-plan <the_action_plan_uuid>
Once you have learned how to create an :ref:`Action Plan
@@ -207,10 +175,6 @@ cluster:
.. code:: bash
$ watcher actionplan start <the_action_plan_uuid>
or::
$ openstack optimize actionplan start <the_action_plan_uuid>
You can follow the states of the :ref:`actions <action_definition>` by
@@ -218,19 +182,11 @@ periodically calling:
.. code:: bash
$ watcher action list
or::
$ openstack optimize action list
You can also obtain more detailed information about a specific action:
.. code:: bash
$ watcher action show <the_action_uuid>
or::
$ openstack optimize action show <the_action_uuid>


@@ -0,0 +1,3 @@
[DEFAULT]
output_file = /etc/watcher/policy.yaml.sample
namespace = watcher


@@ -1,45 +0,0 @@
{
"admin_api": "role:admin or role:administrator",
"show_password": "!",
"default": "rule:admin_api",
"action:detail": "rule:default",
"action:get": "rule:default",
"action:get_all": "rule:default",
"action_plan:delete": "rule:default",
"action_plan:detail": "rule:default",
"action_plan:get": "rule:default",
"action_plan:get_all": "rule:default",
"action_plan:update": "rule:default",
"audit:create": "rule:default",
"audit:delete": "rule:default",
"audit:detail": "rule:default",
"audit:get": "rule:default",
"audit:get_all": "rule:default",
"audit:update": "rule:default",
"audit_template:create": "rule:default",
"audit_template:delete": "rule:default",
"audit_template:detail": "rule:default",
"audit_template:get": "rule:default",
"audit_template:get_all": "rule:default",
"audit_template:update": "rule:default",
"goal:detail": "rule:default",
"goal:get": "rule:default",
"goal:get_all": "rule:default",
"scoring_engine:detail": "rule:default",
"scoring_engine:get": "rule:default",
"scoring_engine:get_all": "rule:default",
"strategy:detail": "rule:default",
"strategy:get": "rule:default",
"strategy:get_all": "rule:default",
"service:detail": "rule:default",
"service:get": "rule:default",
"service:get_all": "rule:default"
}

lower-constraints.txt Normal file

@@ -0,0 +1,165 @@
alabaster==0.7.10
alembic==0.9.8
amqp==2.2.2
appdirs==1.4.3
APScheduler==3.5.1
asn1crypto==0.24.0
automaton==1.14.0
Babel==2.5.3
bandit==1.4.0
beautifulsoup4==4.6.0
cachetools==2.0.1
certifi==2018.1.18
cffi==1.11.5
chardet==3.0.4
cliff==2.11.0
cmd2==0.8.1
contextlib2==0.5.5
coverage==4.5.1
croniter==0.3.20
cryptography==2.1.4
debtcollector==1.19.0
decorator==4.2.1
deprecation==2.0
doc8==0.8.0
docutils==0.14
dogpile.cache==0.6.5
dulwich==0.19.0
enum34==1.1.6
enum-compat==0.0.2
eventlet==0.20.0
extras==1.0.0
fasteners==0.14.1
fixtures==3.0.0
flake8==2.5.5
freezegun==0.3.10
future==0.16.0
futurist==1.6.0
gitdb2==2.0.3
GitPython==2.1.8
gnocchiclient==7.0.1
greenlet==0.4.13
hacking==0.12.0
idna==2.6
imagesize==1.0.0
iso8601==0.1.12
Jinja2==2.10
jmespath==0.9.3
jsonpatch==1.21
jsonpointer==2.0
jsonschema==2.6.0
keystoneauth1==3.4.0
keystonemiddleware==4.21.0
kombu==4.1.0
linecache2==1.0.0
logutils==0.3.5
lxml==4.1.1
Mako==1.0.7
MarkupSafe==1.0
mccabe==0.2.1
mock==2.0.0
monotonic==1.4
mox3==0.25.0
msgpack==0.5.6
munch==2.2.0
netaddr==0.7.19
netifaces==0.10.6
networkx==1.11
openstackdocstheme==1.20.0
openstacksdk==0.12.0
os-api-ref===1.4.0
os-client-config==1.29.0
os-service-types==1.2.0
os-testr==1.0.0
osc-lib==1.10.0
oslo.cache==1.29.0
oslo.concurrency==3.26.0
oslo.config==5.2.0
oslo.context==2.20.0
oslo.db==4.35.0
oslo.i18n==3.20.0
oslo.log==3.37.0
oslo.messaging==5.36.0
oslo.middleware==3.35.0
oslo.policy==1.34.0
oslo.reports==1.27.0
oslo.serialization==2.25.0
oslo.service==1.30.0
oslo.utils==3.36.0
oslo.versionedobjects==1.32.0
oslotest==3.3.0
packaging==17.1
Paste==2.0.3
PasteDeploy==1.5.2
pbr==3.1.1
pecan==1.2.1
pep8==1.5.7
pika==0.10.0
pika-pool==0.1.3
prettytable==0.7.2
psutil==5.4.3
pycadf==2.7.0
pycparser==2.18
pyflakes==0.8.1
Pygments==2.2.0
pyinotify==0.9.6
pyOpenSSL==17.5.0
pyparsing==2.2.0
pyperclip==1.6.0
python-ceilometerclient==2.9.0
python-cinderclient==3.5.0
python-dateutil==2.7.0
python-editor==1.0.3
python-glanceclient==2.9.1
python-ironicclient==2.3.0
python-keystoneclient==3.15.0
python-mimeparse==1.6.0
python-monascaclient==1.10.0
python-neutronclient==6.7.0
python-novaclient==10.1.0
python-openstackclient==3.14.0
python-subunit==1.2.0
pytz==2018.3
PyYAML==3.12
reno==2.7.0
repoze.lru==0.7
requests==2.18.4
requestsexceptions==1.4.0
restructuredtext-lint==1.1.3
rfc3986==1.1.0
Routes==2.4.1
simplegeneric==0.8.1
simplejson==3.13.2
six==1.11.0
smmap2==2.0.3
snowballstemmer==1.2.1
Sphinx==1.6.5
sphinxcontrib-httpdomain==1.6.1
sphinxcontrib-pecanwsme==0.8.0
sphinxcontrib-websupport==1.0.1
SQLAlchemy==1.2.5
sqlalchemy-migrate==0.11.0
sqlparse==0.2.4
statsd==3.2.2
stestr==2.0.0
stevedore==1.28.0
taskflow==3.1.0
Tempita==0.5.2
tenacity==4.9.0
testrepository==0.0.20
testresources==2.0.1
testscenarios==0.5.0
testtools==2.3.0
traceback2==1.4.0
tzlocal==1.5.1
ujson==1.35
unittest2==1.1.0
urllib3==1.22
vine==1.1.4
voluptuous==0.11.1
waitress==1.1.0
warlock==1.3.0
WebOb==1.7.4
WebTest==2.0.29
wrapt==1.10.11
WSME==0.9.2


@@ -1,15 +0,0 @@
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs


@@ -1,67 +0,0 @@
- hosts: primary
name: Legacy Watcher tempest base multinode
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack-infra/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
git://git.openstack.org \
openstack-infra/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
TEMPEST_PLUGINS='/opt/stack/new/watcher-tempest-plugin'
enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
# Enable watcher devstack plugin.
enable_plugin watcher git://git.openstack.org/openstack/watcher
EOF
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
export DEVSTACK_SUBNODE_CONFIG=" "
export PYTHONUNBUFFERED=true
export DEVSTACK_GATE_TEMPEST=1
export DEVSTACK_GATE_NEUTRON=1
export DEVSTACK_GATE_TOPOLOGY="multinode"
export PROJECTS="openstack/watcher $PROJECTS"
export PROJECTS="openstack/python-watcherclient $PROJECTS"
export PROJECTS="openstack/watcher-tempest-plugin $PROJECTS"
export DEVSTACK_GATE_TEMPEST_REGEX="watcher_tempest_plugin"
export BRANCH_OVERRIDE=default
if [ "$BRANCH_OVERRIDE" != "default" ] ; then
export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
fi
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'


@@ -1,80 +0,0 @@
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*nose_results.html
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testr_results.html.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.testrepository/tmp*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testrepository.subunit.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}/tox'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.tox/*/log/*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs


@@ -1,64 +0,0 @@
- hosts: all
name: Legacy watcherclient-dsvm-functional
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack-infra/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
git://git.openstack.org \
openstack-infra/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
enable_plugin watcher git://git.openstack.org/openstack/watcher
EOF
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
ENABLED_SERVICES=tempest
ENABLED_SERVICES+=,watcher-api,watcher-decision-engine,watcher-applier
export ENABLED_SERVICES
export PYTHONUNBUFFERED=true
export BRANCH_OVERRIDE=default
export PROJECTS="openstack/watcher $PROJECTS"
export DEVSTACK_PROJECT_FROM_GIT=python-watcherclient
if [ "$BRANCH_OVERRIDE" != "default" ] ; then
export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
fi
function post_test_hook {
# Configure and run functional tests
$BASE/new/python-watcherclient/watcherclient/tests/functional/hooks/post_test_hook.sh
}
export -f post_test_hook
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'


@@ -0,0 +1,14 @@
- hosts: all
# This is the default strategy, however since orchestrate-devstack requires
# "linear", it is safer to enforce it in case this is running in an
# environment configured with a different default strategy.
strategy: linear
roles:
- orchestrate-devstack
- hosts: tempest
roles:
- setup-tempest-run-dir
- setup-tempest-data-dir
- acl-devstack-files
- run-tempest

playbooks/pre.yaml Normal file

@@ -0,0 +1,3 @@
- hosts: all
roles:
- add-hostnames-to-hosts


@@ -29,7 +29,7 @@ Useful links
* How to install: https://docs.openstack.org/rally/latest/install_and_upgrade/install.html
* How to set Rally up and launch your first scenario: https://rally.readthedocs.io/en/latest/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html
* How to set Rally up and launch your first scenario: https://rally.readthedocs.io/en/latest/quick_start/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html
* More about Rally: https://docs.openstack.org/rally/latest/


@@ -0,0 +1,4 @@
---
features:
- Audits now have a 'name' field, which is friendlier to end users.
An audit's name can't exceed 63 characters.


@@ -0,0 +1,5 @@
---
features:
- |
Adds an audit scoper for the storage data model; Watcher users can now
specify an audit scope for the storage CDM in the same manner as the
compute scope.


@@ -0,0 +1,6 @@
---
features:
- |
The ability to exclude instances from the audit scope based on project_id
has been added. Instances from a particular OpenStack project can now be
excluded by defining the scope in audit templates.


@@ -0,0 +1,4 @@
---
features:
- |
Adds a baremetal data model to Watcher


@@ -0,0 +1,6 @@
---
features:
- Added a way to check the state of a strategy before an audit's execution.
Administrators can use the "watcher strategy state <strategy_name>" command
to get information about the availability of metrics, datasources and CDMs.


@@ -0,0 +1,6 @@
---
features:
- When building the compute CDM, Watcher now uses the whole scope of the
cluster, including all instances. Instances excluded by the scope are
filtered out of migration during the audit.


@@ -0,0 +1,9 @@
---
features:
- |
Added a strategy for maintaining one compute node without interrupting
the user's applications. If given a backup node, the strategy will first
migrate all instances from the maintenance node to the backup node. If no
backup node is provided, it will migrate all instances, relying on the
nova scheduler.


@@ -0,0 +1,6 @@
---
features:
- Watcher can now calculate multiple global efficacy indicators during an
audit's execution. Global efficacy can be calculated for many resource
types (such as volumes, instances and networks) if the strategy supports
efficacy indicators.


@@ -0,0 +1,5 @@
---
features:
- Added notifications about the cancellation of an action plan.
Event-based plugins now know when an action plan
cancellation has started and completed.


@@ -0,0 +1,14 @@
---
features:
- |
Instance cold migration logic now uses the Nova migrate server
(migrate action) API, which has a host option since microversion 2.56.
upgrade:
- |
The Nova API version is now set to 2.56 by default. This is required
for the migrate action of migration type cold with the
destination_node parameter to work.
fixes:
- |
The migrate action of migration type cold with the destination_node
parameter has been fixed. Previously, it booted the migrated instance
in the service project.
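The microversion gate described above can be sketched as follows (a pure illustration under the stated assumption that the `host` option only exists from compute API microversion 2.56 onward; Watcher's actual client code differs):

```python
def cold_migrate_kwargs(microversion, destination_node=None):
    """Build keyword arguments for a Nova cold-migration call.

    Raises ValueError if a destination host is requested on a
    microversion that predates 2.56, where the migrate server API
    gained its ``host`` option.
    """
    major, minor = (int(part) for part in microversion.split("."))
    kwargs = {}
    if destination_node is not None:
        if (major, minor) < (2, 56):
            raise ValueError(
                "destination_node requires Nova API microversion >= 2.56")
        kwargs["host"] = destination_node
    return kwargs
```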


@@ -0,0 +1,4 @@
---
features:
- |
Added storage capacity balance strategy.


@@ -0,0 +1,6 @@
---
features:
- |
Added the "Zone migration" strategy and its goal "Hardware
maintenance". The strategy automatically migrates many instances and
volumes efficiently with minimum downtime.


@@ -24,7 +24,6 @@
import os
import sys
from watcher import version as watcher_version
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the


@@ -21,6 +21,7 @@ Contents:
:maxdepth: 1
unreleased
queens
pike
ocata
newton


@@ -1,26 +1,23 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
# Andi Chandler <andi@gowling.com>, 2017. #zanata
# Andi Chandler <andi@gowling.com>, 2018. #zanata
msgid ""
msgstr ""
"Project-Id-Version: watcher 1.4.1.dev113\n"
"Project-Id-Version: watcher\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-10-23 04:03+0000\n"
"POT-Creation-Date: 2018-02-28 12:27+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2017-10-21 06:22+0000\n"
"PO-Revision-Date: 2018-02-16 07:20+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.9.6\n"
"Language: en_GB\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "0.29.0"
msgstr "0.29.0"
msgid "0.33.0"
msgstr "0.33.0"
msgid "0.34.0"
msgstr "0.34.0"
@@ -39,6 +36,15 @@ msgstr "1.4.0"
msgid "1.4.1"
msgstr "1.4.1"
msgid "1.5.0"
msgstr "1.5.0"
msgid "1.6.0"
msgstr "1.6.0"
msgid "1.7.0"
msgstr "1.7.0"
msgid "Add a service supervisor to watch Watcher deamons."
msgstr "Add a service supervisor to watch Watcher daemons."
@@ -74,17 +80,6 @@ msgstr ""
msgid "Added SUSPENDED audit state"
msgstr "Added SUSPENDED audit state"
msgid ""
"Added a generic scoring engine module, which will standardize interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgstr ""
"Added a generic scoring engine module, which will standardize interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgid ""
"Added a generic scoring engine module, which will standarize interactions "
"with scoring engines through the common API. It is possible to use the "
@@ -141,6 +136,17 @@ msgstr ""
"Added a way to add a new action without having to amend the source code of "
"the default planner."
msgid ""
"Added a way to check state of strategy before audit's execution. "
"Administrator can use \"watcher strategy state <strategy_name>\" command to "
"get information about metrics' availability, datasource's availability and "
"CDM's availability."
msgstr ""
"Added a way to check state of strategy before audit's execution. "
"Administrator can use \"watcher strategy state <strategy_name>\" command to "
"get information about metrics' availability, datasource's availability and "
"CDM's availability."
msgid ""
"Added a way to compare the efficacy of different strategies for a give "
"optimization goal."
@@ -155,13 +161,6 @@ msgstr ""
"Added a way to create periodic audit to be able to continuously optimise the "
"cloud infrastructure."
msgid ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decision engine is running."
msgstr ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decision engine is running."
msgid ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decison engine is running."
@@ -195,13 +194,233 @@ msgstr ""
"Added Gnocchi support as data source for metrics. Administrator can change "
"data source for each strategy using config file."
msgid ""
"Added notifications about cancelling of action plan. Now event based plugins "
"know when action plan cancel started and completed."
msgstr ""
"Added notifications about cancelling of action plan. Now event based plugins "
"know when action plan cancel started and completed."
msgid "Added policies to handle user rights to access Watcher API."
msgstr "Added policies to handle user rights to access Watcher API."
#, fuzzy
msgid "Added storage capacity balance strategy."
msgstr "Added storage capacity balance strategy."
msgid ""
"Added strategy \"Zone migration\" and it's goal \"Hardware maintenance\". "
"The strategy migrates many instances and volumes efficiently with minimum "
"downtime automatically."
msgstr ""
"Added strategy \"Zone migration\" and it's goal \"Hardware maintenance\". "
"The strategy migrates many instances and volumes efficiently with minimum "
"downtime automatically."
msgid ""
"Added strategy to identify and migrate a Noisy Neighbor - a low priority VM "
"that negatively affects peformance of a high priority VM by over utilizing "
"Last Level Cache."
msgstr ""
"Added strategy to identify and migrate a Noisy Neighbour - a low priority VM "
"that negatively affects performance of a high priority VM by over utilising "
"Last Level Cache."
msgid ""
"Added the functionality to filter out instances which have metadata field "
"'optimize' set to False. For now, this is only available for the "
"basic_consolidation strategy (if \"check_optimize_metadata\" configuration "
"option is enabled)."
msgstr ""
"Added the functionality to filter out instances which have metadata field "
"'optimize' set to False. For now, this is only available for the "
"basic_consolidation strategy (if \"check_optimize_metadata\" configuration "
"option is enabled)."
msgid "Added using of JSONSchema instead of voluptuous to validate Actions."
msgstr "Added using of JSONSchema instead of voluptuous to validate Actions."
msgid "Added volume migrate action"
msgstr "Added volume migrate action"
msgid ""
"Adds audit scoper for storage data model, now watcher users can specify "
"audit scope for storage CDM in the same manner as compute scope."
msgstr ""
"Adds audit scoper for storage data model, now watcher users can specify "
"audit scope for storage CDM in the same manner as compute scope."
msgid "Adds baremetal data model in Watcher"
msgstr "Adds baremetal data model in Watcher"
msgid ""
"Allow decision engine to pass strategy parameters, like optimization "
"threshold, to selected strategy, also strategy to provide parameters info to "
"end user."
msgstr ""
"Allow decision engine to pass strategy parameters, like optimisation "
"threshold, to selected strategy, also strategy to provide parameters info to "
"end user."
msgid ""
"Audits have 'name' field now, that is more friendly to end users. Audit's "
"name can't exceed 63 characters."
msgstr ""
"Audits have 'name' field now, that is more friendly to end users. Audit's "
"name can't exceed 63 characters."
msgid "Centralize all configuration options for Watcher."
msgstr "Centralise all configuration options for Watcher."
msgid "Contents:"
msgstr "Contents:"
#, fuzzy
msgid ""
"Copy all audit templates parameters into audit instead of having a reference "
"to the audit template."
msgstr ""
"Copy all audit templates parameters into audit instead of having a reference "
"to the audit template."
msgid "Current Series Release Notes"
msgstr "Current Series Release Notes"
msgid ""
"Each CDM collector can have its own CDM scoper now. This changed Scope JSON "
"schema definition for the audit template POST data. Please see audit "
"template create help message in python-watcherclient."
msgstr ""
"Each CDM collector can have its own CDM scoper now. This changed Scope JSON "
"schema definition for the audit template POST data. Please see audit "
"template create help message in python-watcherclient."
msgid ""
"Enhancement of vm_workload_consolidation strategy by using 'memory.resident' "
"metric in place of 'memory.usage', as memory.usage shows the memory usage "
"inside guest-os and memory.resident represents volume of RAM used by "
"instance on host machine."
msgstr ""
"Enhancement of vm_workload_consolidation strategy by using 'memory.resident' "
"metric in place of 'memory.usage', as memory.usage shows the memory usage "
"inside guest-os and memory.resident represents volume of RAM used by "
"instance on host machine."
msgid ""
"Existing workload_balance strategy based on the VM workloads of CPU. This "
"feature improves the strategy. By the input parameter \"metrics\", it makes "
"decision to migrate a VM base on CPU or memory utilization."
msgstr ""
"Existing workload_balance strategy based on the VM workloads of CPU. This "
"feature improves the strategy. By the input parameter \"metrics\", it makes "
"decision to migrate a VM base on CPU or memory utilisation."
msgid "New Features"
msgstr "New Features"
msgid "Newton Series Release Notes"
msgstr "Newton Series Release Notes"
msgid "Ocata Series Release Notes"
msgstr "Ocata Series Release Notes"
msgid "Pike Series Release Notes"
msgstr "Pike Series Release Notes"
msgid ""
"Provide a notification mechanism into Watcher that supports versioning. "
"Whenever a Watcher object is created, updated or deleted, a versioned "
"notification will, if it's relevant, be automatically sent to notify in "
"order to allow an event-driven style of architecture within Watcher. "
"Moreover, it will also give other services and/or 3rd party softwares (e.g. "
"monitoring solutions or rules engines) the ability to react to such events."
msgstr ""
"Provide a notification mechanism into Watcher that supports versioning. "
"Whenever a Watcher object is created, updated or deleted, a versioned "
"notification will, if it's relevant, be automatically sent to notify in "
"order to allow an event-driven style of architecture within Watcher. "
"Moreover, it will also give other services and/or 3rd party software (e.g. "
"monitoring solutions or rules engines) the ability to react to such events."
msgid ""
"Provides a generic way to define the scope of an audit. The set of audited "
"resources will be called \"Audit scope\" and will be defined in each audit "
"template (which contains the audit settings)."
msgstr ""
"Provides a generic way to define the scope of an audit. The set of audited "
"resources will be called \"Audit scope\" and will be defined in each audit "
"template (which contains the audit settings)."
msgid "Queens Series Release Notes"
msgstr "Queens Series Release Notes"
msgid ""
"The graph model describes how VMs are associated to compute hosts. This "
"allows for seeing relationships upfront between the entities and hence can "
"be used to identify hot/cold spots in the data center and influence a "
"strategy decision."
msgstr ""
"The graph model describes how VMs are associated to compute hosts. This "
"allows for seeing relationships upfront between the entities and hence can "
"be used to identify hot/cold spots in the data centre and influence a "
"strategy decision."
msgid ""
"There is new ability to create Watcher continuous audits with cron interval. "
"It means you may use, for example, optional argument '--interval \"\\*/5 \\* "
"\\* \\* \\*\"' to launch audit every 5 minutes. These jobs are executed on a "
"best effort basis and therefore, we recommend you to use a minimal cron "
"interval of at least one minute."
msgstr ""
"There is new ability to create Watcher continuous audits with cron interval. "
"It means you may use, for example, optional argument '--interval \"\\*/5 \\* "
"\\* \\* \\*\"' to launch audit every 5 minutes. These jobs are executed on a "
"best effort basis and therefore, we recommend you to use a minimal cron "
"interval of at least one minute."
msgid ""
"Watcher can continuously optimize the OpenStack cloud for a specific "
"strategy or goal by triggering an audit periodically which generates an "
"action plan and run it automatically."
msgstr ""
"Watcher can continuously optimise the OpenStack cloud for a specific "
"strategy or goal by triggering an audit periodically which generates an "
"action plan and run it automatically."
msgid ""
"Watcher can now run specific actions in parallel improving the performances "
"dramatically when executing an action plan."
msgstr ""
"Watcher can now run specific actions in parallel improving the performance "
"dramatically when executing an action plan."
msgid "Watcher database can now be upgraded thanks to Alembic."
msgstr "Watcher database can now be upgraded thanks to Alembic."
msgid ""
"Watcher got an ability to calculate multiple global efficacy indicators "
"during audit's execution. Now global efficacy can be calculated for many "
"resource types (like volumes, instances, network) if strategy supports "
"efficacy indicators."
msgstr ""
"Watcher got an ability to calculate multiple global efficacy indicators "
"during audit's execution. Now global efficacy can be calculated for many "
"resource types (like volumes, instances, network) if strategy supports "
"efficacy indicators."
msgid ""
"Watcher supports multiple metrics backend and relies on Ceilometer and "
"Monasca."
msgstr ""
"Watcher supports multiple metrics backend and relies on Ceilometer and "
"Monasca."
msgid "Welcome to watcher's Release Notes documentation!"
msgstr "Welcome to watcher's Release Notes documentation!"
msgid ""
"all Watcher objects have been refactored to support OVO (oslo."
"versionedobjects) which was a prerequisite step in order to implement "
"versioned notifications."
msgstr ""
"all Watcher objects have been refactored to support OVO (oslo."
"versionedobjects) which was a prerequisite step in order to implement "
"versioned notifications."


@@ -1,33 +0,0 @@
# Gérald LONLAS <g.lonlas@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: watcher 1.0.1.dev51\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-03-21 11:57+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-10-22 06:44+0000\n"
"Last-Translator: Gérald LONLAS <g.lonlas@gmail.com>\n"
"Language-Team: French\n"
"Language: fr\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n > 1)\n"
msgid "0.29.0"
msgstr "0.29.0"
msgid "Contents:"
msgstr "Contenu :"
msgid "Current Series Release Notes"
msgstr "Note de la release actuelle"
msgid "New Features"
msgstr "Nouvelles fonctionnalités"
msgid "Newton Series Release Notes"
msgstr "Note de release pour Newton"
msgid "Welcome to watcher's Release Notes documentation!"
msgstr "Bienvenue dans la documentation de la note de Release de Watcher"


@@ -0,0 +1,6 @@
===================================
Queens Series Release Notes
===================================
.. release-notes::
:branch: stable/queens


@@ -2,48 +2,48 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
apscheduler>=3.0.5 # MIT License
enum34>=1.0.4;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch!=1.20,>=1.16 # BSD
keystoneauth1>=3.3.0 # Apache-2.0
apscheduler>=3.5.1 # MIT License
enum34>=1.1.6;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.21 # BSD
keystoneauth1>=3.4.0 # Apache-2.0
jsonschema<3.0.0,>=2.6.0 # MIT
keystonemiddleware>=4.17.0 # Apache-2.0
lxml!=3.7.0,>=3.4.1 # BSD
croniter>=0.3.4 # MIT License
oslo.concurrency>=3.20.0 # Apache-2.0
oslo.cache>=1.26.0 # Apache-2.0
oslo.config>=5.1.0 # Apache-2.0
oslo.context>=2.19.2 # Apache-2.0
oslo.db>=4.27.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.30.0 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.policy>=1.23.0 # Apache-2.0
oslo.reports>=1.18.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service>=1.24.0 # Apache-2.0
oslo.utils>=3.31.0 # Apache-2.0
oslo.versionedobjects>=1.28.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
PrettyTable<0.8,>=0.7.1 # BSD
voluptuous>=0.8.9 # BSD License
gnocchiclient>=3.3.1 # Apache-2.0
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient>=3.2.0 # Apache-2.0
python-glanceclient>=2.8.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.7.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
python-openstackclient>=3.12.0 # Apache-2.0
python-ironicclient>=1.14.0 # Apache-2.0
six>=1.10.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0
taskflow>=2.7.0 # Apache-2.0
WebOb>=1.7.1 # MIT
WSME>=0.8.0 # MIT
networkx<2.0,>=1.10 # BSD
keystonemiddleware>=4.21.0 # Apache-2.0
lxml>=4.1.1 # BSD
croniter>=0.3.20 # MIT License
oslo.concurrency>=3.26.0 # Apache-2.0
oslo.cache>=1.29.0 # Apache-2.0
oslo.config>=5.2.0 # Apache-2.0
oslo.context>=2.20.0 # Apache-2.0
oslo.db>=4.35.0 # Apache-2.0
oslo.i18n>=3.20.0 # Apache-2.0
oslo.log>=3.37.0 # Apache-2.0
oslo.messaging>=5.36.0 # Apache-2.0
oslo.policy>=1.34.0 # Apache-2.0
oslo.reports>=1.27.0 # Apache-2.0
oslo.serialization>=2.25.0 # Apache-2.0
oslo.service>=1.30.0 # Apache-2.0
oslo.utils>=3.36.0 # Apache-2.0
oslo.versionedobjects>=1.32.0 # Apache-2.0
PasteDeploy>=1.5.2 # MIT
pbr>=3.1.1 # Apache-2.0
pecan>=1.2.1 # BSD
PrettyTable<0.8,>=0.7.2 # BSD
voluptuous>=0.11.1 # BSD License
gnocchiclient>=7.0.1 # Apache-2.0
python-ceilometerclient>=2.9.0 # Apache-2.0
python-cinderclient>=3.5.0 # Apache-2.0
python-glanceclient>=2.9.1 # Apache-2.0
python-keystoneclient>=3.15.0 # Apache-2.0
python-monascaclient>=1.10.0 # Apache-2.0
python-neutronclient>=6.7.0 # Apache-2.0
python-novaclient>=10.1.0 # Apache-2.0
python-openstackclient>=3.14.0 # Apache-2.0
python-ironicclient>=2.3.0 # Apache-2.0
six>=1.11.0 # MIT
SQLAlchemy>=1.2.5 # MIT
stevedore>=1.28.0 # Apache-2.0
taskflow>=3.1.0 # Apache-2.0
WebOb>=1.7.4 # MIT
WSME>=0.9.2 # MIT
networkx>=1.11 # BSD


@@ -0,0 +1,16 @@
- name: Set up the list of hostnames and addresses
set_fact:
hostname_addresses: >
{% set hosts = {} -%}
{% for host, vars in hostvars.items() -%}
{% set _ = hosts.update({vars['ansible_hostname']: vars['nodepool']['private_ipv4']}) -%}
{% endfor -%}
{{- hosts -}}
- name: Add inventory hostnames to the hosts file
become: yes
lineinfile:
dest: /etc/hosts
state: present
insertafter: EOF
line: "{{ item.value }} {{ item.key }}"
with_dict: "{{ hostname_addresses }}"


@@ -32,6 +32,12 @@ setup-hooks =
oslo.config.opts =
watcher = watcher.conf.opts:list_opts
oslo.policy.policies =
watcher = watcher.common.policies:list_rules
oslo.policy.enforcer =
watcher = watcher.common.policy:get_enforcer
console_scripts =
watcher-api = watcher.cmd.api:main
watcher-db-manage = watcher.cmd.dbmanage:main
@@ -51,6 +57,8 @@ watcher_goals =
airflow_optimization = watcher.decision_engine.goal.goals:AirflowOptimization
noisy_neighbor = watcher.decision_engine.goal.goals:NoisyNeighborOptimization
saving_energy = watcher.decision_engine.goal.goals:SavingEnergy
hardware_maintenance = watcher.decision_engine.goal.goals:HardwareMaintenance
cluster_maintaining = watcher.decision_engine.goal.goals:ClusterMaintaining
watcher_scoring_engines =
dummy_scorer = watcher.decision_engine.scoring.dummy_scorer:DummyScorer
@@ -71,6 +79,9 @@ watcher_strategies =
workload_balance = watcher.decision_engine.strategy.strategies.workload_balance:WorkloadBalance
uniform_airflow = watcher.decision_engine.strategy.strategies.uniform_airflow:UniformAirflow
noisy_neighbor = watcher.decision_engine.strategy.strategies.noisy_neighbor:NoisyNeighbor
storage_capacity_balance = watcher.decision_engine.strategy.strategies.storage_capacity_balance:StorageCapacityBalance
zone_migration = watcher.decision_engine.strategy.strategies.zone_migration:ZoneMigration
host_maintenance = watcher.decision_engine.strategy.strategies.host_maintenance:HostMaintenance
watcher_actions =
migrate = watcher.applier.actions.migration:Migrate
@@ -91,6 +102,7 @@ watcher_planners =
watcher_cluster_data_model_collectors =
compute = watcher.decision_engine.model.collector.nova:NovaClusterDataModelCollector
storage = watcher.decision_engine.model.collector.cinder:CinderClusterDataModelCollector
baremetal = watcher.decision_engine.model.collector.ironic:BaremetalClusterDataModelCollector
[pbr]


@@ -2,25 +2,27 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
coverage!=4.4,>=4.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
freezegun>=0.3.6 # Apache-2.0
coverage!=4.4 # Apache-2.0
doc8 # Apache-2.0
freezegun # Apache-2.0
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
mock>=2.0.0 # BSD
oslotest>=1.10.0 # Apache-2.0
os-testr>=1.0.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=2.2.0 # MIT
mock # BSD
oslotest # Apache-2.0
os-testr # Apache-2.0
testrepository # Apache-2.0/BSD
testscenarios # Apache-2.0/BSD
testtools # MIT
# Doc requirements
openstackdocstheme>=1.17.0 # Apache-2.0
sphinx>=1.6.2 # BSD
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
openstackdocstheme # Apache-2.0
sphinx!=1.6.6,!=1.6.7 # BSD
sphinxcontrib-pecanwsme # Apache-2.0
# api-ref
os-api-ref # Apache-2.0
# releasenotes
reno>=2.5.0 # Apache-2.0
reno # Apache-2.0
# bandit
bandit>=1.1.0 # Apache-2.0

tox.ini

@@ -46,12 +46,16 @@ sitepackages = False
commands =
oslo-config-generator --config-file etc/watcher/oslo-config-generator/watcher.conf
[testenv:genpolicy]
commands =
oslopolicy-sample-generator --config-file etc/watcher/oslo-policy-generator/watcher-policy-generator.conf
[flake8]
filename = *.py,app.wsgi
show-source=True
ignore= H105,E123,E226,N320,H202
builtins= _
enable-extensions = H106,H203
enable-extensions = H106,H203,H904
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build,*sqlalchemy/alembic/versions/*,demo/,releasenotes
[testenv:wheel]
@@ -72,3 +76,10 @@ commands = sphinx-build -a -W -E -d releasenotes/build/doctrees -b html releasen
[testenv:bandit]
deps = -r{toxinidir}/test-requirements.txt
commands = bandit -r watcher -x tests -n5 -ll -s B320
[testenv:lower-constraints]
basepython = python3
deps =
-c{toxinidir}/lower-constraints.txt
-r{toxinidir}/test-requirements.txt
-r{toxinidir}/requirements.txt


@@ -205,7 +205,7 @@ class ActionCollection(collection.Collection):
collection = ActionCollection()
collection.actions = [Action.convert_with_links(p, expand)
for p in actions]
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
@classmethod
@@ -232,6 +232,10 @@ class ActionsController(rest.RestController):
sort_key, sort_dir, expand=False,
resource_url=None,
action_plan_uuid=None, audit_uuid=None):
additional_fields = ['action_plan_uuid']
api_utils.validate_sort_key(sort_key, list(objects.Action.fields) +
additional_fields)
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
@@ -247,7 +251,10 @@ class ActionsController(rest.RestController):
if audit_uuid:
filters['audit_uuid'] = audit_uuid
sort_db_key = sort_key
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
actions = objects.Action.list(pecan.request.context,
limit,
@@ -255,11 +262,15 @@ class ActionsController(rest.RestController):
sort_dir=sort_dir,
filters=filters)
return ActionCollection.convert_with_links(actions, limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
actions_collection = ActionCollection.convert_with_links(
actions, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(actions_collection.actions,
sort_key, sort_dir)
return actions_collection
@wsme_pecan.wsexpose(ActionCollection, types.uuid, int,
wtypes.text, wtypes.text, types.uuid,
@@ -341,7 +352,7 @@ class ActionsController(rest.RestController):
@wsme_pecan.wsexpose(Action, body=Action, status_code=201)
def post(self, action):
"""Create a new action.
"""Create a new action (forbidden).
:param action: a action within the request body.
"""
@@ -364,7 +375,7 @@ class ActionsController(rest.RestController):
@wsme.validate(types.uuid, [ActionPatchType])
@wsme_pecan.wsexpose(Action, types.uuid, body=[ActionPatchType])
def patch(self, action_uuid, patch):
"""Update an existing action.
"""Update an existing action (forbidden).
:param action_uuid: UUID of a action.
:param patch: a json PATCH document to apply to this action.
@@ -401,7 +412,7 @@ class ActionsController(rest.RestController):
@wsme_pecan.wsexpose(None, types.uuid, status_code=204)
def delete(self, action_uuid):
"""Delete a action.
"""Delete an action (forbidden).
:param action_uuid: UUID of a action.
"""


@@ -305,17 +305,6 @@ class ActionPlanCollection(collection.Collection):
ap_collection = ActionPlanCollection()
ap_collection.action_plans = [ActionPlan.convert_with_links(
p, expand) for p in rpc_action_plans]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'audit_uuid':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
ap_collection.action_plans = sorted(
ap_collection.action_plans,
key=lambda action_plan: action_plan.audit_uuid,
reverse=reverse)
ap_collection.next = ap_collection.get_next(limit, url=url, **kwargs)
return ap_collection
@@ -331,20 +320,25 @@ class ActionPlansController(rest.RestController):
def __init__(self):
super(ActionPlansController, self).__init__()
self.applier_client = rpcapi.ApplierAPI()
from_actionsPlans = False
"""A flag to indicate if the requests to this controller are coming
from the top-level resource ActionPlan."""
_custom_actions = {
'detail': ['GET'],
'start': ['POST'],
'detail': ['GET']
}
def _get_action_plans_collection(self, marker, limit,
sort_key, sort_dir, expand=False,
resource_url=None, audit_uuid=None,
strategy=None):
additional_fields = ['audit_uuid', 'strategy_uuid', 'strategy_name']
api_utils.validate_sort_key(
sort_key, list(objects.ActionPlan.fields) + additional_fields)
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
@@ -363,10 +357,10 @@ class ActionPlansController(rest.RestController):
else:
filters['strategy_name'] = strategy
if sort_key == 'audit_uuid':
sort_db_key = None
else:
sort_db_key = sort_key
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
action_plans = objects.ActionPlan.list(
pecan.request.context,
@@ -374,12 +368,15 @@ class ActionPlansController(rest.RestController):
marker_obj, sort_key=sort_db_key,
sort_dir=sort_dir, filters=filters)
return ActionPlanCollection.convert_with_links(
action_plans, limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
action_plans_collection = ActionPlanCollection.convert_with_links(
action_plans, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(action_plans_collection.action_plans,
sort_key, sort_dir)
return action_plans_collection
@wsme_pecan.wsexpose(ActionPlanCollection, types.uuid, int, wtypes.text,
wtypes.text, types.uuid, wtypes.text)
@@ -460,6 +457,15 @@ class ActionPlansController(rest.RestController):
policy.enforce(context, 'action_plan:delete', action_plan,
action='action_plan:delete')
allowed_states = (ap_objects.State.SUCCEEDED,
ap_objects.State.RECOMMENDED,
ap_objects.State.FAILED,
ap_objects.State.SUPERSEDED,
ap_objects.State.CANCELLED)
if action_plan.state not in allowed_states:
raise exception.DeleteError(
state=action_plan.state)
action_plan.soft_delete()
@wsme.validate(types.uuid, [ActionPlanPatchType])
@@ -531,7 +537,7 @@ class ActionPlansController(rest.RestController):
if action_plan_to_update[field] != patch_val:
action_plan_to_update[field] = patch_val
if (field == 'state'and
if (field == 'state' and
patch_val == objects.action_plan.State.PENDING):
launch_action_plan = True
@@ -548,11 +554,39 @@ class ActionPlansController(rest.RestController):
a.save()
if launch_action_plan:
applier_client = rpcapi.ApplierAPI()
applier_client.launch_action_plan(pecan.request.context,
action_plan.uuid)
self.applier_client.launch_action_plan(pecan.request.context,
action_plan.uuid)
action_plan_to_update = objects.ActionPlan.get_by_uuid(
pecan.request.context,
action_plan_uuid)
return ActionPlan.convert_with_links(action_plan_to_update)
@wsme_pecan.wsexpose(ActionPlan, types.uuid)
def start(self, action_plan_uuid, **kwargs):
"""Start an action_plan
:param action_plan_uuid: UUID of an action_plan.
"""
action_plan_to_start = api_utils.get_resource(
'ActionPlan', action_plan_uuid, eager=True)
context = pecan.request.context
policy.enforce(context, 'action_plan:start', action_plan_to_start,
action='action_plan:start')
if action_plan_to_start['state'] != \
objects.action_plan.State.RECOMMENDED:
raise exception.StartError(
state=action_plan_to_start.state)
action_plan_to_start['state'] = objects.action_plan.State.PENDING
action_plan_to_start.save()
self.applier_client.launch_action_plan(pecan.request.context,
action_plan_uuid)
action_plan_to_start = objects.ActionPlan.get_by_uuid(
pecan.request.context, action_plan_uuid)
return ActionPlan.convert_with_links(action_plan_to_start)


@@ -37,6 +37,8 @@ import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from oslo_log import log
from watcher._i18n import _
from watcher.api.controllers import base
from watcher.api.controllers import link
@@ -49,6 +51,8 @@ from watcher.common import utils
from watcher.decision_engine import rpcapi
from watcher import objects
LOG = log.getLogger(__name__)
class AuditPostType(wtypes.Base):
@@ -129,6 +133,11 @@ class AuditPostType(wtypes.Base):
goal = objects.Goal.get(context, self.goal)
self.name = "%s-%s" % (goal.name,
datetime.datetime.utcnow().isoformat())
# No more than 63 characters
if len(self.name) > 63:
LOG.warning("Audit: %s length exceeds 63 characters",
self.name)
self.name = self.name[0:63]
return Audit(
name=self.name,
@@ -166,10 +175,10 @@ class AuditPatchType(types.JsonPatchType):
class Audit(base.APIBase):
"""API representation of a audit.
"""API representation of an audit.
This class enforces type checking and value constraints, and converts
between the internal object model and the API representation of a audit.
between the internal object model and the API representation of an audit.
"""
_goal_uuid = None
_goal_name = None
@@ -264,19 +273,19 @@ class Audit(base.APIBase):
goal_uuid = wsme.wsproperty(
wtypes.text, _get_goal_uuid, _set_goal_uuid, mandatory=True)
"""Goal UUID the audit template refers to"""
"""Goal UUID the audit refers to"""
goal_name = wsme.wsproperty(
wtypes.text, _get_goal_name, _set_goal_name, mandatory=False)
"""The name of the goal this audit template refers to"""
"""The name of the goal this audit refers to"""
strategy_uuid = wsme.wsproperty(
wtypes.text, _get_strategy_uuid, _set_strategy_uuid, mandatory=False)
"""Strategy UUID the audit template refers to"""
"""Strategy UUID the audit refers to"""
strategy_name = wsme.wsproperty(
wtypes.text, _get_strategy_name, _set_strategy_name, mandatory=False)
"""The name of the strategy this audit template refers to"""
"""The name of the strategy this audit refers to"""
parameters = {wtypes.text: types.jsontype}
"""The strategy parameters for this audit"""
@@ -380,17 +389,6 @@ class AuditCollection(collection.Collection):
collection = AuditCollection()
collection.audits = [Audit.convert_with_links(p, expand)
for p in rpc_audits]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'goal_uuid':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
collection.audits = sorted(
collection.audits,
key=lambda audit: audit.goal_uuid,
reverse=reverse)
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
@@ -405,6 +403,7 @@ class AuditsController(rest.RestController):
"""REST controller for Audits."""
def __init__(self):
super(AuditsController, self).__init__()
self.dc_client = rpcapi.DecisionEngineAPI()
from_audits = False
"""A flag to indicate if the requests to this controller are coming
@@ -418,8 +417,14 @@ class AuditsController(rest.RestController):
sort_key, sort_dir, expand=False,
resource_url=None, goal=None,
strategy=None):
additional_fields = ["goal_uuid", "goal_name", "strategy_uuid",
"strategy_name"]
api_utils.validate_sort_key(
sort_key, list(objects.Audit.fields) + additional_fields)
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
marker_obj = None
if marker:
marker_obj = objects.Audit.get_by_uuid(pecan.request.context,
@@ -440,23 +445,25 @@ class AuditsController(rest.RestController):
# TODO(michaelgugino): add method to get goal by name.
filters['strategy_name'] = strategy
if sort_key == 'goal_uuid':
sort_db_key = 'goal_id'
elif sort_key == 'strategy_uuid':
sort_db_key = 'strategy_id'
else:
sort_db_key = sort_key
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
audits = objects.Audit.list(pecan.request.context,
limit,
marker_obj, sort_key=sort_db_key,
sort_dir=sort_dir, filters=filters)
return AuditCollection.convert_with_links(audits, limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
audits_collection = AuditCollection.convert_with_links(
audits, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(audits_collection.audits, sort_key,
sort_dir)
return audits_collection
@wsme_pecan.wsexpose(AuditCollection, types.uuid, int, wtypes.text,
wtypes.text, wtypes.text, wtypes.text, int)
@@ -511,7 +518,7 @@ class AuditsController(rest.RestController):
def get_one(self, audit):
"""Retrieve information about the given audit.
:param audit_uuid: UUID or name of an audit.
:param audit: UUID or name of an audit.
"""
if self.from_audits:
raise exception.OperationNotPermitted
@@ -526,7 +533,7 @@ class AuditsController(rest.RestController):
def post(self, audit_p):
"""Create a new audit.
:param audit_p: a audit within the request body.
:param audit_p: an audit within the request body.
"""
context = pecan.request.context
policy.enforce(context, 'audit:create',
@@ -556,7 +563,7 @@ class AuditsController(rest.RestController):
if no_schema and audit.parameters:
raise exception.Invalid(_('Specify parameters but no predefined '
'strategy for audit template, or no '
'strategy for audit, or no '
'parameter spec in predefined strategy'))
audit_dict = audit.as_dict()
@@ -569,8 +576,7 @@ class AuditsController(rest.RestController):
# trigger decision-engine to run the audit
if new_audit.audit_type == objects.audit.AuditType.ONESHOT.value:
dc_client = rpcapi.DecisionEngineAPI()
dc_client.trigger_audit(context, new_audit.uuid)
self.dc_client.trigger_audit(context, new_audit.uuid)
return Audit.convert_with_links(new_audit)
@@ -579,7 +585,7 @@ class AuditsController(rest.RestController):
def patch(self, audit, patch):
"""Update an existing audit.
:param auditd: UUID or name of a audit.
:param audit: UUID or name of an audit.
:param patch: a json PATCH document to apply to this audit.
"""
if self.from_audits:
@@ -633,7 +639,14 @@ class AuditsController(rest.RestController):
context = pecan.request.context
audit_to_delete = api_utils.get_resource(
'Audit', audit, eager=True)
policy.enforce(context, 'audit:update', audit_to_delete,
action='audit:update')
policy.enforce(context, 'audit:delete', audit_to_delete,
action='audit:delete')
initial_state = audit_to_delete.state
new_state = objects.audit.State.DELETED
if not objects.audit.AuditStateTransitionManager(
).check_transition(initial_state, new_state):
raise exception.DeleteError(
state=initial_state)
audit_to_delete.soft_delete()
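One behavioural change in this file worth calling out: audit names generated from a goal are now capped at 63 characters. A standalone sketch of that rule (the helper name and goal name below are made up for illustration):

```python
import datetime

def build_audit_name(goal_name):
    # Mirrors AuditPostType.as_audit(): "<goal>-<UTC timestamp>",
    # truncated so the generated name never exceeds 63 characters.
    name = "%s-%s" % (goal_name, datetime.datetime.utcnow().isoformat())
    if len(name) > 63:
        name = name[:63]
    return name

print(len(build_audit_name("g" * 100)))  # → 63
```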

View File

@@ -474,9 +474,13 @@ class AuditTemplatesController(rest.RestController):
def _get_audit_templates_collection(self, filters, marker, limit,
sort_key, sort_dir, expand=False,
resource_url=None):
additional_fields = ["goal_uuid", "goal_name", "strategy_uuid",
"strategy_name"]
api_utils.validate_sort_key(
sort_key, list(objects.AuditTemplate.fields) + additional_fields)
api_utils.validate_search_filters(
filters, list(objects.audit_template.AuditTemplate.fields) +
["goal_uuid", "goal_name", "strategy_uuid", "strategy_name"])
filters, list(objects.AuditTemplate.fields) + additional_fields)
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
@@ -486,19 +490,26 @@ class AuditTemplatesController(rest.RestController):
pecan.request.context,
marker)
audit_templates = objects.AuditTemplate.list(
pecan.request.context,
filters,
limit,
marker_obj, sort_key=sort_key,
sort_dir=sort_dir)
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
return AuditTemplateCollection.convert_with_links(audit_templates,
limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
audit_templates = objects.AuditTemplate.list(
pecan.request.context, filters, limit, marker_obj,
sort_key=sort_db_key, sort_dir=sort_dir)
audit_templates_collection = \
AuditTemplateCollection.convert_with_links(
audit_templates, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(
audit_templates_collection.audit_templates, sort_key,
sort_dir)
return audit_templates_collection
@wsme_pecan.wsexpose(AuditTemplateCollection, wtypes.text, wtypes.text,
types.uuid, int, wtypes.text, wtypes.text)
@@ -677,8 +688,8 @@ class AuditTemplatesController(rest.RestController):
context = pecan.request.context
audit_template_to_delete = api_utils.get_resource('AuditTemplate',
audit_template)
policy.enforce(context, 'audit_template:update',
policy.enforce(context, 'audit_template:delete',
audit_template_to_delete,
action='audit_template:update')
action='audit_template:delete')
audit_template_to_delete.soft_delete()

View File

@@ -130,17 +130,6 @@ class GoalCollection(collection.Collection):
goal_collection = GoalCollection()
goal_collection.goals = [
Goal.convert_with_links(g, expand) for g in goals]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'strategy':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
goal_collection.goals = sorted(
goal_collection.goals,
key=lambda goal: goal.uuid,
reverse=reverse)
goal_collection.next = goal_collection.get_next(
limit, url=url, **kwargs)
return goal_collection
@@ -167,17 +156,19 @@ class GoalsController(rest.RestController):
def _get_goals_collection(self, marker, limit, sort_key, sort_dir,
expand=False, resource_url=None):
api_utils.validate_sort_key(
sort_key, list(objects.Goal.fields))
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Goal.fields
else None)
marker_obj = None
if marker:
marker_obj = objects.Goal.get_by_uuid(
pecan.request.context, marker)
sort_db_key = (sort_key if sort_key in objects.Goal.fields
else None)
goals = objects.Goal.list(pecan.request.context, limit, marker_obj,
sort_key=sort_db_key, sort_dir=sort_dir)
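The controllers in this series all derive the database sort key the same way: a sort key is pushed down to the DB query only when it is a native object field. The pattern in isolation (the field tuple is an illustrative subset, not the real `objects.Goal.fields`):

```python
GOAL_FIELDS = ('uuid', 'name', 'display_name')  # illustrative subset

def pick_sort_db_key(sort_key, object_fields):
    # Native object fields sort in the database query; anything else
    # (e.g. API-level fields such as goal_name) must be sorted later
    # in the API layer.
    return sort_key if sort_key in object_fields else None

print(pick_sort_db_key('name', GOAL_FIELDS))      # → name
print(pick_sort_db_key('strategy', GOAL_FIELDS))  # → None
```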

View File

@@ -123,17 +123,6 @@ class ScoringEngineCollection(collection.Collection):
collection = ScoringEngineCollection()
collection.scoring_engines = [ScoringEngine.convert_with_links(
se, expand) for se in scoring_engines]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'name':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
collection.goals = sorted(
collection.scoring_engines,
key=lambda se: se.name,
reverse=reverse)
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
@@ -160,7 +149,8 @@ class ScoringEngineController(rest.RestController):
def _get_scoring_engines_collection(self, marker, limit,
sort_key, sort_dir, expand=False,
resource_url=None):
api_utils.validate_sort_key(
sort_key, list(objects.ScoringEngine.fields))
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
@@ -171,7 +161,8 @@ class ScoringEngineController(rest.RestController):
filters = {}
sort_db_key = sort_key
sort_db_key = (sort_key if sort_key in objects.ScoringEngine.fields
else None)
scoring_engines = objects.ScoringEngine.list(
context=pecan.request.context,

View File

@@ -154,17 +154,6 @@ class ServiceCollection(collection.Collection):
service_collection = ServiceCollection()
service_collection.services = [
Service.convert_with_links(g, expand) for g in services]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'service':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
service_collection.services = sorted(
service_collection.services,
key=lambda service: service.id,
reverse=reverse)
service_collection.next = service_collection.get_next(
limit, url=url, marker_field='id', **kwargs)
return service_collection
@@ -191,17 +180,19 @@ class ServicesController(rest.RestController):
def _get_services_collection(self, marker, limit, sort_key, sort_dir,
expand=False, resource_url=None):
api_utils.validate_sort_key(
sort_key, list(objects.Service.fields))
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Service.fields
else None)
marker_obj = None
if marker:
marker_obj = objects.Service.get(
pecan.request.context, marker)
sort_db_key = (sort_key if sort_key in objects.Service.fields
else None)
services = objects.Service.list(
pecan.request.context, limit, marker_obj,
sort_key=sort_db_key, sort_dir=sort_dir)

View File

@@ -41,6 +41,7 @@ from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import exception
from watcher.common import policy
from watcher.common import utils as common_utils
from watcher.decision_engine import rpcapi
from watcher import objects
@@ -172,17 +173,6 @@ class StrategyCollection(collection.Collection):
strategy_collection = StrategyCollection()
strategy_collection.strategies = [
Strategy.convert_with_links(g, expand) for g in strategies]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'strategy':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
strategy_collection.strategies = sorted(
strategy_collection.strategies,
key=lambda strategy: strategy.uuid,
reverse=reverse)
strategy_collection.next = strategy_collection.get_next(
limit, url=url, **kwargs)
return strategy_collection
@@ -205,32 +195,44 @@ class StrategiesController(rest.RestController):
_custom_actions = {
'detail': ['GET'],
'state': ['GET'],
}
def _get_strategies_collection(self, filters, marker, limit, sort_key,
sort_dir, expand=False, resource_url=None):
additional_fields = ["goal_uuid", "goal_name"]
api_utils.validate_sort_key(
sort_key, list(objects.Strategy.fields) + additional_fields)
api_utils.validate_search_filters(
filters, list(objects.strategy.Strategy.fields) +
["goal_uuid", "goal_name"])
filters, list(objects.Strategy.fields) + additional_fields)
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Strategy.fields
else None)
marker_obj = None
if marker:
marker_obj = objects.Strategy.get_by_uuid(
pecan.request.context, marker)
need_api_sort = api_utils.check_need_api_sort(sort_key,
additional_fields)
sort_db_key = (sort_key if not need_api_sort
else None)
strategies = objects.Strategy.list(
pecan.request.context, limit, marker_obj, filters=filters,
sort_key=sort_db_key, sort_dir=sort_dir)
return StrategyCollection.convert_with_links(
strategies_collection = StrategyCollection.convert_with_links(
strategies, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
if need_api_sort:
api_utils.make_api_sort(strategies_collection.strategies,
sort_key, sort_dir)
return strategies_collection
@wsme_pecan.wsexpose(StrategyCollection, wtypes.text, wtypes.text,
int, wtypes.text, wtypes.text)
def get_all(self, goal=None, marker=None, limit=None,
@@ -288,6 +290,26 @@ class StrategiesController(rest.RestController):
return self._get_strategies_collection(
filters, marker, limit, sort_key, sort_dir, expand, resource_url)
@wsme_pecan.wsexpose(wtypes.text, wtypes.text)
def state(self, strategy):
"""Retrieve a inforamation about strategy requirements.
:param strategy: name of the strategy.
"""
context = pecan.request.context
policy.enforce(context, 'strategy:state', action='strategy:state')
parents = pecan.request.path.split('/')[:-1]
if parents[-2] != "strategies":
raise exception.HTTPNotFound
rpc_strategy = api_utils.get_resource('Strategy', strategy)
de_client = rpcapi.DecisionEngineAPI()
strategy_state = de_client.get_strategy_info(context,
rpc_strategy.name)
strategy_state.extend([{
'type': 'Name', 'state': rpc_strategy.name,
'mandatory': '', 'comment': ''}])
return strategy_state
@wsme_pecan.wsexpose(Strategy, wtypes.text)
def get_one(self, strategy):
"""Retrieve information about the given strategy.

View File

@@ -13,6 +13,8 @@
# License for the specific language governing permissions and limitations
# under the License.
from operator import attrgetter
import jsonpatch
from oslo_config import cfg
from oslo_utils import reflection
@@ -54,6 +56,13 @@ def validate_sort_dir(sort_dir):
"'asc' or 'desc'") % sort_dir)
def validate_sort_key(sort_key, allowed_fields):
# Very lightweight validation for now
if sort_key not in allowed_fields:
raise wsme.exc.ClientSideError(
_("Invalid sort key: %s") % sort_key)
def validate_search_filters(filters, allowed_fields):
# Very lightweight validation for now
# todo: improve this (e.g. https://www.parse.com/docs/rest/guide/#queries)
@@ -63,6 +72,19 @@ def validate_search_filters(filters, allowed_fields):
_("Invalid filter: %s") % filter_name)
def check_need_api_sort(sort_key, additional_fields):
return sort_key in additional_fields
def make_api_sort(sorting_list, sort_key, sort_dir):
# First sort by the uuid field, then by sort_key.
# sort() is stable, so the two passes yield a
# lexicographical (sort_key, uuid) ordering.
reverse_direction = (sort_dir == 'desc')
sorting_list.sort(key=attrgetter('uuid'), reverse=reverse_direction)
sorting_list.sort(key=attrgetter(sort_key), reverse=reverse_direction)
def apply_jsonpatch(doc, patch):
for p in patch:
if p['op'] == 'add' and p['path'].count('/') == 1:
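The two-pass sort in `make_api_sort` above relies on `list.sort()` being stable: sorting by `uuid` first and by `sort_key` second makes `uuid` the tie-breaker within equal `sort_key` values. A self-contained sketch (the `Resource` type is a stand-in for an API object):

```python
from collections import namedtuple
from operator import attrgetter

Resource = namedtuple('Resource', ['uuid', 'goal_name'])

def make_api_sort(sorting_list, sort_key, sort_dir):
    # First sort by uuid, then by sort_key; stability of list.sort()
    # keeps the uuid order within equal sort_key values.
    reverse_direction = (sort_dir == 'desc')
    sorting_list.sort(key=attrgetter('uuid'), reverse=reverse_direction)
    sorting_list.sort(key=attrgetter(sort_key), reverse=reverse_direction)

items = [Resource('b', 'dummy'), Resource('a', 'dummy'),
         Resource('c', 'basic')]
make_api_sort(items, 'goal_name', 'asc')
print([r.uuid for r in items])  # → ['c', 'a', 'b']
```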

View File

@@ -63,7 +63,7 @@ class ContextHook(hooks.PecanHook):
auth_url = headers.get('X-Auth-Url')
if auth_url is None:
importutils.import_module('keystonemiddleware.auth_token')
auth_url = cfg.CONF.keystone_authtoken.auth_uri
auth_url = cfg.CONF.keystone_authtoken.www_authenticate_uri
state.request.context = context.make_context(
auth_token=auth_token,

View File

@@ -50,6 +50,12 @@ class Migrate(base.BaseAction):
source and the destination compute hostname (list of available compute
hosts is returned by this command: ``nova service-list --binary
nova-compute``).
.. note::
Nova API version must be 2.56 or above if `destination_node` parameter
is given.
"""
# input parameters constants
@@ -113,8 +119,10 @@ class Migrate(base.BaseAction):
dest_hostname=destination)
except nova_helper.nvexceptions.ClientException as e:
LOG.debug("Nova client exception occurred while live "
"migrating instance %s.Exception: %s" %
(self.instance_uuid, e))
"migrating instance "
"%(instance)s.Exception: %(exception)s",
{'instance': self.instance_uuid, 'exception': e})
except Exception as e:
LOG.exception(e)
LOG.critical("Unexpected error occurred. Migration failed for "

View File

@@ -36,13 +36,16 @@ class VolumeMigrate(base.BaseAction):
By using this action, you will be able to migrate cinder volume.
Migration type 'swap' can only be used for migrating attached volume.
Migration type 'cold' can only be used for migrating detached volume.
Migration type 'migrate' can be used for migrating detached volume to
the pool of same volume type.
Migration type 'retype' can be used for changing volume type of
detached volume.
The action schema is::
schema = Schema({
'resource_id': str, # should be a UUID
'migration_type': str, # choices -> "swap", "cold"
'migration_type': str, # choices -> "swap", "migrate", "retype"
'destination_node': str,
'destination_type': str,
})
@@ -60,7 +63,8 @@ class VolumeMigrate(base.BaseAction):
MIGRATION_TYPE = 'migration_type'
SWAP = 'swap'
COLD = 'cold'
RETYPE = 'retype'
MIGRATE = 'migrate'
DESTINATION_NODE = "destination_node"
DESTINATION_TYPE = "destination_type"
@@ -85,7 +89,7 @@ class VolumeMigrate(base.BaseAction):
},
'migration_type': {
'type': 'string',
"enum": ["swap", "cold"]
"enum": ["swap", "retype", "migrate"]
},
'destination_node': {
"anyof": [
@@ -127,20 +131,6 @@ class VolumeMigrate(base.BaseAction):
def destination_type(self):
return self.input_parameters.get(self.DESTINATION_TYPE)
def _cold_migrate(self, volume, dest_node, dest_type):
if not self.cinder_util.can_cold(volume, dest_node):
raise exception.Invalid(
message=(_("Invalid state for cold migration")))
if dest_node:
return self.cinder_util.migrate(volume, dest_node)
elif dest_type:
return self.cinder_util.retype(volume, dest_type)
else:
raise exception.Invalid(
message=(_("destination host or destination type is "
"required when migration type is cold")))
def _can_swap(self, volume):
"""Judge volume can be swapped"""
@@ -212,12 +202,14 @@ class VolumeMigrate(base.BaseAction):
try:
volume = self.cinder_util.get_volume(volume_id)
if self.migration_type == self.COLD:
return self._cold_migrate(volume, dest_node, dest_type)
elif self.migration_type == self.SWAP:
if self.migration_type == self.SWAP:
if dest_node:
LOG.warning("dest_node is ignored")
return self._swap_volume(volume, dest_type)
elif self.migration_type == self.RETYPE:
return self.cinder_util.retype(volume, dest_type)
elif self.migration_type == self.MIGRATE:
return self.cinder_util.migrate(volume, dest_node)
else:
raise exception.Invalid(
message=(_("Migration of type '%(migration_type)s' is not "

View File

@@ -40,10 +40,10 @@ def main():
if host == '127.0.0.1':
LOG.info('serving on 127.0.0.1:%(port)s, '
'view at %(protocol)s://127.0.0.1:%(port)s' %
'view at %(protocol)s://127.0.0.1:%(port)s',
dict(protocol=protocol, port=port))
else:
LOG.info('serving on %(protocol)s://%(host)s:%(port)s' %
LOG.info('serving on %(protocol)s://%(host)s:%(port)s',
dict(protocol=protocol, host=host, port=port))
api_schedule = scheduling.APISchedulingService()

View File

@@ -22,7 +22,7 @@ import sys
from oslo_log import log
from watcher.common import service as service
from watcher.common import service
from watcher import conf
from watcher.decision_engine import sync

View File

@@ -70,16 +70,18 @@ class CinderHelper(object):
def get_volume_type_list(self):
return self.cinder.volume_types.list()
def get_volume_snapshots_list(self):
return self.cinder.volume_snapshots.list(
search_opts={'all_tenants': True})
def get_volume_type_by_backendname(self, backendname):
"""Return a list of volume type"""
volume_type_list = self.get_volume_type_list()
volume_type = [volume_type for volume_type in volume_type_list
volume_type = [volume_type.name for volume_type in volume_type_list
if volume_type.extra_specs.get(
'volume_backend_name') == backendname]
if volume_type:
return volume_type[0].name
else:
return ""
return volume_type
def get_volume(self, volume):
@@ -111,23 +113,6 @@ class CinderHelper(object):
return True
return False
def can_cold(self, volume, host=None):
"""Judge volume can be migrated"""
can_cold = False
status = self.get_volume(volume).status
snapshot = self._has_snapshot(volume)
same_host = False
if host and getattr(volume, 'os-vol-host-attr:host') == host:
same_host = True
if (status == 'available' and
snapshot is False and
same_host is False):
can_cold = True
return can_cold
def get_deleting_volume(self, volume):
volume = self.get_volume(volume)
all_volume = self.get_volume_list()
@@ -154,13 +139,13 @@ class CinderHelper(object):
volume = self.get_volume(volume.id)
time.sleep(retry_interval)
retry -= 1
LOG.debug("retry count: %s" % retry)
LOG.debug("Waiting to complete deletion of volume %s" % volume.id)
LOG.debug("retry count: %s", retry)
LOG.debug("Waiting to complete deletion of volume %s", volume.id)
if self._can_get_volume(volume.id):
LOG.error("Volume deletion error: %s" % volume.id)
LOG.error("Volume deletion error: %s", volume.id)
return False
LOG.debug("Volume %s was deleted successfully." % volume.id)
LOG.debug("Volume %s was deleted successfully.", volume.id)
return True
def check_migrated(self, volume, retry_interval=10):
@@ -194,8 +179,7 @@ class CinderHelper(object):
LOG.error(error_msg)
return False
LOG.debug(
"Volume migration succeeded : "
"volume %s is now on host '%s'." % (
"Volume migration succeeded : volume %s is now on host '%s'.", (
volume.id, host_name))
return True
@@ -204,13 +188,13 @@ class CinderHelper(object):
volume = self.get_volume(volume)
dest_backend = self.backendname_from_poolname(dest_node)
dest_type = self.get_volume_type_by_backendname(dest_backend)
if volume.volume_type != dest_type:
if volume.volume_type not in dest_type:
raise exception.Invalid(
message=(_("Volume type must be same for migrating")))
source_node = getattr(volume, 'os-vol-host-attr:host')
LOG.debug("Volume %s found on host '%s'."
% (volume.id, source_node))
LOG.debug("Volume %s found on host '%s'.",
(volume.id, source_node))
self.cinder.volumes.migrate_volume(
volume, dest_node, False, True)
@@ -226,8 +210,8 @@ class CinderHelper(object):
source_node = getattr(volume, 'os-vol-host-attr:host')
LOG.debug(
"Volume %s found on host '%s'." % (
volume.id, source_node))
"Volume %s found on host '%s'.",
(volume.id, source_node))
self.cinder.volumes.retype(
volume, dest_type, "on-demand")
@@ -249,14 +233,14 @@ class CinderHelper(object):
LOG.debug('Waiting volume creation of {0}'.format(new_volume))
time.sleep(retry_interval)
retry -= 1
LOG.debug("retry count: %s" % retry)
LOG.debug("retry count: %s", retry)
if getattr(new_volume, 'status') != 'available':
error_msg = (_("Failed to create volume '%(volume)s. ") %
{'volume': new_volume.id})
raise Exception(error_msg)
LOG.debug("Volume %s was created successfully." % new_volume)
LOG.debug("Volume %s was created successfully.", new_volume)
return new_volume
def delete_volume(self, volume):
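Also in this file, `get_volume_type_by_backendname` now returns the list of matching type names instead of a single name. The filtering in isolation (`FakeVolumeType` is illustrative, not a Watcher class):

```python
class FakeVolumeType(object):
    def __init__(self, name, backend):
        self.name = name
        self.extra_specs = {'volume_backend_name': backend}

def get_volume_type_by_backendname(volume_type_list, backendname):
    # Names of every volume type bound to the given backend.
    return [vt.name for vt in volume_type_list
            if vt.extra_specs.get('volume_backend_name') == backendname]

types = [FakeVolumeType('fast', 'ssd'), FakeVolumeType('slow', 'hdd'),
         FakeVolumeType('fast2', 'ssd')]
print(get_volume_type_by_backendname(types, 'ssd'))  # → ['fast', 'fast2']
```

Callers such as `migrate()` accordingly switch from an equality check to a membership test (`volume.volume_type not in dest_type`).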

View File

@@ -62,6 +62,7 @@ class RequestContext(context.RequestContext):
# safely ignore this as we don't use it.
kwargs.pop('user_identity', None)
kwargs.pop('global_request_id', None)
kwargs.pop('project', None)
if kwargs:
LOG.warning('Arguments dropped when creating context: %s',
str(kwargs))

View File

@@ -305,7 +305,7 @@ class ActionFilterCombinationProhibited(Invalid):
class UnsupportedActionType(UnsupportedError):
msg_fmt = _("Provided %(action_type) is not supported yet")
msg_fmt = _("Provided %(action_type)s is not supported yet")
class EfficacyIndicatorNotFound(ResourceNotFound):
@@ -332,6 +332,14 @@ class PatchError(Invalid):
msg_fmt = _("Couldn't apply patch '%(patch)s'. Reason: %(reason)s")
class DeleteError(Invalid):
msg_fmt = _("Couldn't delete when state is '%(state)s'.")
class StartError(Invalid):
msg_fmt = _("Couldn't start when state is '%(state)s'.")
# decision engine
class WorkflowExecutionException(WatcherException):
@@ -362,6 +370,14 @@ class ClusterEmpty(WatcherException):
msg_fmt = _("The list of compute node(s) in the cluster is empty")
class ComputeClusterEmpty(WatcherException):
msg_fmt = _("The list of compute node(s) in the cluster is empty")
class StorageClusterEmpty(WatcherException):
msg_fmt = _("The list of storage node(s) in the cluster is empty")
class MetricCollectorNotDefined(WatcherException):
msg_fmt = _("The metrics resource collector is not defined")
@@ -405,6 +421,10 @@ class UnsupportedDataSource(UnsupportedError):
"by strategy %(strategy)s")
class DataSourceNotAvailable(WatcherException):
msg_fmt = _("Datasource %(datasource)s is not available.")
class NoSuchMetricForHost(WatcherException):
msg_fmt = _("No %(metric)s metric for %(host)s found.")
@@ -469,6 +489,14 @@ class VolumeNotFound(StorageResourceNotFound):
msg_fmt = _("The volume '%(name)s' could not be found")
class BaremetalResourceNotFound(WatcherException):
msg_fmt = _("The baremetal resource '%(name)s' could not be found")
class IronicNodeNotFound(BaremetalResourceNotFound):
msg_fmt = _("The ironic node %(uuid)s could not be found")
class LoadingError(WatcherException):
msg_fmt = _("Error loading plugin '%(name)s'")
@@ -488,3 +516,7 @@ class NegativeLimitError(WatcherException):
class NotificationPayloadError(WatcherException):
_msg_fmt = _("Payload not populated when trying to send notification "
"\"%(class_name)s\"")
class InvalidPoolAttributeValue(Invalid):
msg_fmt = _("The %(name)s pool %(attribute)s is not integer")

View File

@@ -0,0 +1,49 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corporation
#
# Authors: Yumeng Bao <bao.yumeng@zte.com.cn>
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from oslo_log import log
from watcher.common import clients
from watcher.common import exception
from watcher.common import utils
LOG = log.getLogger(__name__)
class IronicHelper(object):
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
self.osc = osc if osc else clients.OpenStackClients()
self.ironic = self.osc.ironic()
def get_ironic_node_list(self):
return self.ironic.node.list()
def get_ironic_node_by_uuid(self, node_uuid):
"""Get ironic node by node UUID"""
try:
node = self.ironic.node.get(utils.Struct(uuid=node_uuid))
if not node:
raise exception.IronicNodeNotFound(uuid=node_uuid)
except Exception as exc:
LOG.exception(exc)
raise exception.IronicNodeNotFound(uuid=node_uuid)
# We need to pass an object with an 'uuid' attribute to make it work
return node
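The helper passes `utils.Struct(uuid=node_uuid)` because, as the comment notes, `ironic.node.get` needs an object carrying a `uuid` attribute. A minimal stand-in for that Struct type (a sketch, not the Watcher implementation):

```python
class Struct(object):
    # Turns keyword arguments into attributes, so callers can hand
    # the client "an object with a uuid attribute".
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

node_ref = Struct(uuid='1234-abcd')
print(node_ref.uuid)  # → 1234-abcd
```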

View File

@@ -17,9 +17,9 @@
# limitations under the License.
#
import random
import time
from novaclient import api_versions
from oslo_log import log
import cinderclient.exceptions as ciexceptions
@@ -29,9 +29,12 @@ import novaclient.exceptions as nvexceptions
from watcher.common import clients
from watcher.common import exception
from watcher.common import utils
from watcher import conf
LOG = log.getLogger(__name__)
CONF = conf.CONF
class NovaHelper(object):
@@ -52,14 +55,21 @@ class NovaHelper(object):
return self.nova.hypervisors.get(utils.Struct(id=node_id))
def get_compute_node_by_hostname(self, node_hostname):
"""Get compute node by ID (*not* UUID)"""
# We need to pass an object with an 'id' attribute to make it work
"""Get compute node by hostname"""
try:
compute_nodes = self.nova.hypervisors.search(node_hostname)
if len(compute_nodes) != 1:
hypervisors = [hv for hv in self.get_compute_node_list()
if hv.service['host'] == node_hostname]
if len(hypervisors) != 1:
# TODO(hidekazu)
# this may occur if VMware vCenter driver is used
raise exception.ComputeNodeNotFound(name=node_hostname)
else:
compute_nodes = self.nova.hypervisors.search(
hypervisors[0].hypervisor_hostname)
if len(compute_nodes) != 1:
raise exception.ComputeNodeNotFound(name=node_hostname)
return self.get_compute_node_by_id(compute_nodes[0].id)
return self.get_compute_node_by_id(compute_nodes[0].id)
except Exception as exc:
LOG.exception(exc)
raise exception.ComputeNodeNotFound(name=node_hostname)
@@ -67,6 +77,9 @@ class NovaHelper(object):
def get_instance_list(self):
return self.nova.servers.list(search_opts={'all_tenants': True})
def get_flavor_list(self):
return self.nova.flavors.list(**{'is_public': None})
def get_service(self, service_id):
return self.nova.services.find(id=service_id)
@@ -96,7 +109,7 @@ class NovaHelper(object):
return True
else:
LOG.debug("confirm resize failed for the "
"instance %s" % instance.id)
"instance %s", instance.id)
return False
def wait_for_volume_status(self, volume, status, timeout=60,
@@ -120,240 +133,68 @@ class NovaHelper(object):
return volume.status == status
def watcher_non_live_migrate_instance(self, instance_id, dest_hostname,
keep_original_image_name=True,
retry=120):
"""This method migrates a given instance
using an image of this instance and creating a new instance
from this image. It saves some configuration information
about the original instance : security group, list of networks,
list of attached volumes, floating IP, ...
in order to apply the same settings to the new instance.
At the end of the process the original instance is deleted.
This method uses the Nova built-in migrate()
action to do a migration of a given instance.
For migrating to a given dest_hostname, the Nova API version
must be 2.56 or higher.
It returns True if the migration was successful,
False otherwise.
If the destination hostname is not given, this method calls the
Nova API to migrate the instance.
:param instance_id: the unique id of the instance to migrate.
:param keep_original_image_name: flag indicating whether the
image name from which the original instance was built must be
used as the name of the intermediate image used for migration.
If this flag is False, a temporary image name is built
:param dest_hostname: the name of the destination compute node; if
None, the Nova scheduler chooses the
destination host
"""
new_image_name = ""
LOG.debug(
"Trying a non-live migrate of instance '%s' " % instance_id)
"Trying a cold migrate of instance '%s' ", instance_id)
# Looking for the instance to migrate
instance = self.find_instance(instance_id)
if not instance:
LOG.debug("Instance %s not found !" % instance_id)
LOG.debug("Instance %s not found !", instance_id)
return False
else:
# NOTE: If destination node is None call Nova API to migrate
# instance
host_name = getattr(instance, "OS-EXT-SRV-ATTR:host")
LOG.debug(
"Instance %s found on host '%s'." % (instance_id, host_name))
"Instance %(instance)s found on host '%(host)s'.",
{'instance': instance_id, 'host': host_name})
if dest_hostname is None:
previous_status = getattr(instance, 'status')
previous_status = getattr(instance, 'status')
instance.migrate()
instance = self.nova.servers.get(instance_id)
while (getattr(instance, 'status') not in
["VERIFY_RESIZE", "ERROR"] and retry):
instance = self.nova.servers.get(instance.id)
time.sleep(2)
retry -= 1
new_hostname = getattr(instance, 'OS-EXT-SRV-ATTR:host')
if (dest_hostname and
not self._check_nova_api_version(self.nova, "2.56")):
LOG.error("For migrating a given dest_hostname,"
"Nova API version must be 2.56 or higher")
return False
if (host_name != new_hostname and
instance.status == 'VERIFY_RESIZE'):
if not self.confirm_resize(instance, previous_status):
return False
LOG.debug(
"cold migration succeeded : "
"instance %s is now on host '%s'." % (
instance_id, new_hostname))
return True
else:
LOG.debug(
"cold migration for instance %s failed" % instance_id)
instance.migrate(host=dest_hostname)
instance = self.nova.servers.get(instance_id)
while (getattr(instance, 'status') not in
["VERIFY_RESIZE", "ERROR"] and retry):
instance = self.nova.servers.get(instance.id)
time.sleep(2)
retry -= 1
new_hostname = getattr(instance, 'OS-EXT-SRV-ATTR:host')
if (host_name != new_hostname and
instance.status == 'VERIFY_RESIZE'):
if not self.confirm_resize(instance, previous_status):
return False
if not keep_original_image_name:
# randrange gives you an integral value
irand = random.randint(0, 1000)
# Building the temporary image name
# which will be used for the migration
new_image_name = "tmp-migrate-%s-%s" % (instance_id, irand)
LOG.debug(
"cold migration succeeded : "
"instance %(instance)s is now on host '%(host)s'.",
{'instance': instance_id, 'host': new_hostname})
return True
else:
# Get the image name of the current instance.
# We'll use the same name for the new instance.
imagedict = getattr(instance, "image")
image_id = imagedict["id"]
image = self.glance.images.get(image_id)
new_image_name = getattr(image, "name")
instance_name = getattr(instance, "name")
flavor_name = instance.flavor.get('original_name')
keypair_name = getattr(instance, "key_name")
addresses = getattr(instance, "addresses")
floating_ip = ""
network_names_list = []
for network_name, network_conf_obj in addresses.items():
LOG.debug(
"Extracting network configuration for network '%s'" %
network_name)
network_names_list.append(network_name)
for net_conf_item in network_conf_obj:
if net_conf_item['OS-EXT-IPS:type'] == "floating":
floating_ip = net_conf_item['addr']
break
sec_groups_list = getattr(instance, "security_groups")
sec_groups = []
for sec_group_dict in sec_groups_list:
sec_groups.append(sec_group_dict['name'])
# Stopping the old instance properly so
# that no new data is sent to it and to its attached volumes
stopped_ok = self.stop_instance(instance_id)
if not stopped_ok:
LOG.debug("Could not stop instance: %s" % instance_id)
"cold migration for instance %s failed", instance_id)
return False
# Building the temporary image which will be used
# to re-build the same instance on another target host
image_uuid = self.create_image_from_instance(instance_id,
new_image_name)
if not image_uuid:
LOG.debug(
"Could not build temporary image of instance: %s" %
instance_id)
return False
#
# We need to get the list of attached volumes and detach
# them from the instance in order to attache them later
# to the new instance
#
blocks = []
# Looks like this :
# os-extended-volumes:volumes_attached |
# [{u'id': u'c5c3245f-dd59-4d4f-8d3a-89d80135859a'}]
attached_volumes = getattr(instance,
"os-extended-volumes:volumes_attached")
for attached_volume in attached_volumes:
volume_id = attached_volume['id']
try:
volume = self.cinder.volumes.get(volume_id)
attachments_list = getattr(volume, "attachments")
device_name = attachments_list[0]['device']
# When a volume is attached to an instance
# it contains the following property :
# attachments = [{u'device': u'/dev/vdb',
# u'server_id': u'742cc508-a2f2-4769-a794-bcdad777e814',
# u'id': u'f6d62785-04b8-400d-9626-88640610f65e',
# u'host_name': None, u'volume_id':
# u'f6d62785-04b8-400d-9626-88640610f65e'}]
# boot_index indicates a number
# designating the boot order of the device.
# Use -1 for the boot volume,
# choose 0 for an attached volume.
block_device_mapping_v2_item = {"device_name": device_name,
"source_type": "volume",
"destination_type":
"volume",
"uuid": volume_id,
"boot_index": "0"}
blocks.append(
block_device_mapping_v2_item)
LOG.debug("Detaching volume %s from instance: %s" % (
volume_id, instance_id))
# volume.detach()
self.nova.volumes.delete_server_volume(instance_id,
volume_id)
if not self.wait_for_volume_status(volume, "available", 5,
10):
LOG.debug(
"Could not detach volume %s from instance: %s" % (
volume_id, instance_id))
return False
except ciexceptions.NotFound:
LOG.debug("Volume '%s' not found " % image_id)
return False
# We create the new instance from
# the intermediate image of the original instance
new_instance = self. \
create_instance(dest_hostname,
instance_name,
image_uuid,
flavor_name,
sec_groups,
network_names_list=network_names_list,
keypair_name=keypair_name,
create_new_floating_ip=False,
block_device_mapping_v2=blocks)
if not new_instance:
LOG.debug(
"Could not create new instance "
"for non-live migration of instance %s" % instance_id)
return False
try:
LOG.debug("Detaching floating ip '%s' from instance %s" % (
floating_ip, instance_id))
# We detach the floating ip from the current instance
instance.remove_floating_ip(floating_ip)
LOG.debug(
"Attaching floating ip '%s' to the new instance %s" % (
floating_ip, new_instance.id))
# We attach the same floating ip to the new instance
new_instance.add_floating_ip(floating_ip)
except Exception as e:
LOG.debug(e)
new_host_name = getattr(new_instance, "OS-EXT-SRV-ATTR:host")
# Deleting the old instance (because no more useful)
delete_ok = self.delete_instance(instance_id)
if not delete_ok:
LOG.debug("Could not delete instance: %s" % instance_id)
return False
LOG.debug(
"Instance %s has been successfully migrated "
"to new host '%s' and its new id is %s." % (
instance_id, new_host_name, new_instance.id))
return True
def resize_instance(self, instance_id, flavor, retry=120):
"""This method resizes given instance with specified flavor.
@@ -366,8 +207,10 @@ class NovaHelper(object):
:param instance_id: the unique id of the instance to resize.
:param flavor: the name or ID of the flavor to resize to.
"""
-        LOG.debug("Trying a resize of instance %s to flavor '%s'" % (
-            instance_id, flavor))
+        LOG.debug(
+            "Trying a resize of instance %(instance)s to "
+            "flavor '%(flavor)s'",
+            {'instance': instance_id, 'flavor': flavor})
# Looking for the instance to resize
instance = self.find_instance(instance_id)
@@ -384,17 +227,17 @@ class NovaHelper(object):
"instance %s. Exception: %s", instance_id, e)
if not flavor_id:
-            LOG.debug("Flavor not found: %s" % flavor)
+            LOG.debug("Flavor not found: %s", flavor)
return False
if not instance:
-            LOG.debug("Instance not found: %s" % instance_id)
+            LOG.debug("Instance not found: %s", instance_id)
return False
instance_status = getattr(instance, 'OS-EXT-STS:vm_state')
LOG.debug(
-            "Instance %s is in '%s' status." % (instance_id,
-                                                instance_status))
+            "Instance %(id)s is in '%(status)s' status.",
+            {'id': instance_id, 'status': instance_status})
instance.resize(flavor=flavor_id)
while getattr(instance,
@@ -432,17 +275,20 @@ class NovaHelper(object):
                              destination_node is None, the Nova scheduler
                              chooses the destination host
"""
-        LOG.debug("Trying to live migrate instance %s " % (instance_id))
+        LOG.debug(
+            "Trying a live migrate instance %(instance)s ",
+            {'instance': instance_id})
# Looking for the instance to migrate
instance = self.find_instance(instance_id)
if not instance:
-            LOG.debug("Instance not found: %s" % instance_id)
+            LOG.debug("Instance not found: %s", instance_id)
return False
else:
host_name = getattr(instance, 'OS-EXT-SRV-ATTR:host')
LOG.debug(
-                "Instance %s found on host '%s'." % (instance_id, host_name))
+                "Instance %(instance)s found on host '%(host)s'.",
+                {'instance': instance_id, 'host': host_name})
# From nova api version 2.25(Mitaka release), the default value of
# block_migration is None which is mapped to 'auto'.
@@ -464,7 +310,7 @@ class NovaHelper(object):
if host_name != new_hostname and instance.status == 'ACTIVE':
LOG.debug(
"Live migration succeeded : "
-                "instance %s is now on host '%s'." % (
+                "instance %s is now on host '%s'.", (
instance_id, new_hostname))
return True
else:
@@ -475,7 +321,7 @@ class NovaHelper(object):
and retry:
instance = self.nova.servers.get(instance.id)
if not getattr(instance, 'OS-EXT-STS:task_state'):
-                LOG.debug("Instance task state: %s is null" % instance_id)
+                LOG.debug("Instance task state: %s is null", instance_id)
break
LOG.debug(
'Waiting the migration of {0} to {1}'.format(
@@ -491,13 +337,13 @@ class NovaHelper(object):
LOG.debug(
"Live migration succeeded : "
-                "instance %s is now on host '%s'." % (
-                    instance_id, host_name))
+                "instance %(instance)s is now on host '%(host)s'.",
+                {'instance': instance_id, 'host': host_name})
return True
def abort_live_migrate(self, instance_id, source, destination, retry=240):
-        LOG.debug("Aborting live migration of instance %s" % instance_id)
+        LOG.debug("Aborting live migration of instance %s", instance_id)
migration = self.get_running_migration(instance_id)
if migration:
migration_id = getattr(migration[0], "id")
@@ -510,7 +356,7 @@ class NovaHelper(object):
LOG.exception(e)
else:
LOG.debug(
-                "No running migrations found for instance %s" % instance_id)
+                "No running migrations found for instance %s", instance_id)
while retry:
instance = self.nova.servers.get(instance_id)
@@ -534,24 +380,34 @@ class NovaHelper(object):
"for the instance %s" % instance_id)
def enable_service_nova_compute(self, hostname):
-        if self.nova.services.enable(host=hostname,
-                                     binary='nova-compute'). \
-                status == 'enabled':
-            return True
-        else:
-            return False
+        if float(CONF.nova_client.api_version) < 2.53:
+            status = self.nova.services.enable(
+                host=hostname, binary='nova-compute').status == 'enabled'
+        else:
+            service_uuid = self.nova.services.list(
+                host=hostname, binary='nova-compute')[0].id
+            status = self.nova.services.enable(
+                service_uuid=service_uuid).status == 'enabled'
+        return status
def disable_service_nova_compute(self, hostname, reason=None):
-        if self.nova.services.disable_log_reason(host=hostname,
-                                                 binary='nova-compute',
-                                                 reason=reason). \
-                status == 'disabled':
-            return True
-        else:
-            return False
+        if float(CONF.nova_client.api_version) < 2.53:
+            status = self.nova.services.disable_log_reason(
+                host=hostname,
+                binary='nova-compute',
+                reason=reason).status == 'disabled'
+        else:
+            service_uuid = self.nova.services.list(
+                host=hostname, binary='nova-compute')[0].id
+            status = self.nova.services.disable_log_reason(
+                service_uuid=service_uuid,
+                reason=reason).status == 'disabled'
+        return status
def set_host_offline(self, hostname):
-        # See API on http://developer.openstack.org/api-ref-compute-v2.1.html
+        # See API on https://developer.openstack.org/api-ref/compute/
# especially the PUT request
# regarding this resource : /v2.1/os-hosts/{host_name}
#
@@ -575,7 +431,7 @@ class NovaHelper(object):
host = self.nova.hosts.get(hostname)
if not host:
-            LOG.debug("host not found: %s" % hostname)
+            LOG.debug("host not found: %s", hostname)
return False
else:
host[0].update(
@@ -597,18 +453,19 @@ class NovaHelper(object):
key-value pairs to associate to the image as metadata.
"""
LOG.debug(
-            "Trying to create an image from instance %s ..." % instance_id)
+            "Trying to create an image from instance %s ...", instance_id)
# Looking for the instance
instance = self.find_instance(instance_id)
if not instance:
-            LOG.debug("Instance not found: %s" % instance_id)
+            LOG.debug("Instance not found: %s", instance_id)
return None
else:
host_name = getattr(instance, 'OS-EXT-SRV-ATTR:host')
LOG.debug(
-                "Instance %s found on host '%s'." % (instance_id, host_name))
+                "Instance %(instance)s found on host '%(host)s'.",
+                {'instance': instance_id, 'host': host_name})
# We need to wait for an appropriate status
# of the instance before we can build an image from it
@@ -635,14 +492,15 @@ class NovaHelper(object):
if not image:
break
status = image.status
-                LOG.debug("Current image status: %s" % status)
+                LOG.debug("Current image status: %s", status)
if not image:
-                LOG.debug("Image not found: %s" % image_uuid)
+                LOG.debug("Image not found: %s", image_uuid)
else:
LOG.debug(
-                    "Image %s successfully created for instance %s" % (
-                        image_uuid, instance_id))
+                    "Image %(image)s successfully created for "
+                    "instance %(instance)s",
+                    {'image': image_uuid, 'instance': instance_id})
return image_uuid
return None
@@ -651,16 +509,16 @@ class NovaHelper(object):
:param instance_id: the unique id of the instance to delete.
"""
-        LOG.debug("Trying to remove instance %s ..." % instance_id)
+        LOG.debug("Trying to remove instance %s ...", instance_id)
instance = self.find_instance(instance_id)
if not instance:
-            LOG.debug("Instance not found: %s" % instance_id)
+            LOG.debug("Instance not found: %s", instance_id)
return False
else:
self.nova.servers.delete(instance_id)
-            LOG.debug("Instance %s removed." % instance_id)
+            LOG.debug("Instance %s removed.", instance_id)
return True
def stop_instance(self, instance_id):
@@ -668,21 +526,21 @@ class NovaHelper(object):
:param instance_id: the unique id of the instance to stop.
"""
-        LOG.debug("Trying to stop instance %s ..." % instance_id)
+        LOG.debug("Trying to stop instance %s ...", instance_id)
instance = self.find_instance(instance_id)
if not instance:
-            LOG.debug("Instance not found: %s" % instance_id)
+            LOG.debug("Instance not found: %s", instance_id)
return False
elif getattr(instance, 'OS-EXT-STS:vm_state') == "stopped":
-            LOG.debug("Instance has been stopped: %s" % instance_id)
+            LOG.debug("Instance has been stopped: %s", instance_id)
return True
else:
self.nova.servers.stop(instance_id)
if self.wait_for_instance_state(instance, "stopped", 8, 10):
-                LOG.debug("Instance %s stopped." % instance_id)
+                LOG.debug("Instance %s stopped.", instance_id)
return True
else:
return False
@@ -723,11 +581,11 @@ class NovaHelper(object):
return False
while instance.status not in status_list and retry:
-            LOG.debug("Current instance status: %s" % instance.status)
+            LOG.debug("Current instance status: %s", instance.status)
time.sleep(sleep)
instance = self.nova.servers.get(instance.id)
retry -= 1
-        LOG.debug("Current instance status: %s" % instance.status)
+        LOG.debug("Current instance status: %s", instance.status)
return instance.status in status_list
def create_instance(self, node_id, inst_name="test", image_id=None,
@@ -743,26 +601,26 @@ class NovaHelper(object):
It returns the unique id of the created instance.
"""
LOG.debug(
-            "Trying to create new instance '%s' "
-            "from image '%s' with flavor '%s' ..." % (
-                inst_name, image_id, flavor_name))
+            "Trying to create new instance '%(inst)s' "
+            "from image '%(image)s' with flavor '%(flavor)s' ...",
+            {'inst': inst_name, 'image': image_id, 'flavor': flavor_name})
try:
self.nova.keypairs.findall(name=keypair_name)
except nvexceptions.NotFound:
-            LOG.debug("Key pair '%s' not found " % keypair_name)
+            LOG.debug("Key pair '%s' not found ", keypair_name)
return
try:
image = self.glance.images.get(image_id)
except glexceptions.NotFound:
-            LOG.debug("Image '%s' not found " % image_id)
+            LOG.debug("Image '%s' not found ", image_id)
return
try:
flavor = self.nova.flavors.find(name=flavor_name)
except nvexceptions.NotFound:
-            LOG.debug("Flavor '%s' not found " % flavor_name)
+            LOG.debug("Flavor '%s' not found ", flavor_name)
return
# Make sure all security groups exist
@@ -770,7 +628,7 @@ class NovaHelper(object):
group_id = self.get_security_group_id_from_name(sec_group_name)
if not group_id:
-                LOG.debug("Security group '%s' not found " % sec_group_name)
+                LOG.debug("Security group '%s' not found ", sec_group_name)
return
net_list = list()
@@ -779,7 +637,7 @@ class NovaHelper(object):
nic_id = self.get_network_id_from_name(network_name)
if not nic_id:
-                LOG.debug("Network '%s' not found " % network_name)
+                LOG.debug("Network '%s' not found ", network_name)
return
net_obj = {"net-id": nic_id}
net_list.append(net_obj)
@@ -805,14 +663,16 @@ class NovaHelper(object):
if create_new_floating_ip and instance.status == 'ACTIVE':
LOG.debug(
"Creating a new floating IP"
-                " for instance '%s'" % instance.id)
+                " for instance '%s'", instance.id)
# Creating floating IP for the new instance
floating_ip = self.nova.floating_ips.create()
instance.add_floating_ip(floating_ip)
-            LOG.debug("Instance %s associated to Floating IP '%s'" % (
-                instance.id, floating_ip.ip))
+            LOG.debug(
+                "Instance %(instance)s associated to "
+                "Floating IP '%(ip)s'",
+                {'instance': instance.id, 'ip': floating_ip.ip})
return instance
@@ -886,7 +746,7 @@ class NovaHelper(object):
LOG.debug('Waiting volume update to {0}'.format(new_volume))
time.sleep(retry_interval)
retry -= 1
-            LOG.debug("retry count: %s" % retry)
+            LOG.debug("retry count: %s", retry)
if getattr(new_volume, 'status') != "in-use":
LOG.error("Volume update retry timeout or error")
return False
@@ -894,5 +754,15 @@ class NovaHelper(object):
host_name = getattr(new_volume, "os-vol-host-attr:host")
LOG.debug(
"Volume update succeeded : "
-            "Volume %s is now on host '%s'." % (new_volume.id, host_name))
+            "Volume %s is now on host '%s'.",
+            (new_volume.id, host_name))
return True
def _check_nova_api_version(self, client, version):
api_version = api_versions.APIVersion(version_str=version)
try:
api_versions.discover_version(client, api_version)
return True
except nvexceptions.UnsupportedVersion as e:
LOG.exception(e)
return False
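The helper above delegates microversion discovery to novaclient, while the service enable/disable paths compare `float(CONF.nova_client.api_version)` against 2.53. As a hedged aside, float comparison of microversion strings is fragile (it would rank "2.9" above "2.56"); a minimal standalone sketch of a safer tuple-based comparison, using a hypothetical `supports` helper that is not part of watcher or novaclient:

```python
# Hypothetical sketch (not watcher code): compare Nova API
# microversions as integer tuples. float("2.9") > float("2.56")
# would give the wrong answer; tuples order minor versions correctly.
def supports(current, required):
    cur = tuple(int(p) for p in current.split("."))
    req = tuple(int(p) for p in required.split("."))
    return cur >= req

print(supports("2.60", "2.56"))  # True
print(supports("2.9", "2.56"))   # False: 2.9 predates 2.56
```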


@@ -0,0 +1,37 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
from watcher.common.policies import action
from watcher.common.policies import action_plan
from watcher.common.policies import audit
from watcher.common.policies import audit_template
from watcher.common.policies import base
from watcher.common.policies import goal
from watcher.common.policies import scoring_engine
from watcher.common.policies import service
from watcher.common.policies import strategy
def list_rules():
return itertools.chain(
base.list_rules(),
action.list_rules(),
action_plan.list_rules(),
audit.list_rules(),
audit_template.list_rules(),
goal.list_rules(),
scoring_engine.list_rules(),
service.list_rules(),
strategy.list_rules(),
)
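The `list_rules` entry point above simply concatenates each policy module's rule list. A minimal self-contained sketch of the same `itertools.chain` aggregation pattern (the example lists are placeholders, not watcher's real rule objects):

```python
import itertools

# Each "module" exposes its own rules; chain() concatenates them
# lazily, in registration order, into one iterable.
base_rules = ['rule:admin_api']
action_rules = ['action:get', 'action:detail']
audit_rules = ['audit:create']

all_rules = list(itertools.chain(base_rules, action_rules, audit_rules))
print(all_rules)
```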


@@ -0,0 +1,57 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
ACTION = 'action:%s'
rules = [
policy.DocumentedRuleDefault(
name=ACTION % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of actions with detail.',
operations=[
{
'path': '/v1/actions/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION % 'get',
check_str=base.RULE_ADMIN_API,
description='Retrieve information about a given action.',
operations=[
{
'path': '/v1/actions/{action_id}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of all actions.',
operations=[
{
'path': '/v1/actions',
'method': 'GET'
}
]
)
]
def list_rules():
return rules
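Each policy file builds its rule names from a `%s` template such as `ACTION = 'action:%s'`. A quick standalone illustration of how those names expand:

```python
# The policy names above come from %-templating the resource prefix
# with the operation name.
ACTION = 'action:%s'

names = [ACTION % op for op in ('detail', 'get', 'get_all')]
print(names)
```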


@@ -0,0 +1,90 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
ACTION_PLAN = 'action_plan:%s'
rules = [
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'delete',
check_str=base.RULE_ADMIN_API,
description='Delete an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_uuid}',
'method': 'DELETE'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of action plans with detail.',
operations=[
{
'path': '/v1/action_plans/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'get',
check_str=base.RULE_ADMIN_API,
description='Get an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_id}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all action plans.',
operations=[
{
'path': '/v1/action_plans',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'update',
check_str=base.RULE_ADMIN_API,
description='Update an action plans.',
operations=[
{
'path': '/v1/action_plans/{action_plan_uuid}',
'method': 'PATCH'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'start',
check_str=base.RULE_ADMIN_API,
description='Start an action plans.',
operations=[
{
'path': '/v1/action_plans/{action_plan_uuid}/action',
'method': 'POST'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,90 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
AUDIT = 'audit:%s'
rules = [
policy.DocumentedRuleDefault(
name=AUDIT % 'create',
check_str=base.RULE_ADMIN_API,
description='Create a new audit.',
operations=[
{
'path': '/v1/audits',
'method': 'POST'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'delete',
check_str=base.RULE_ADMIN_API,
description='Delete an audit.',
operations=[
{
'path': '/v1/audits/{audit_uuid}',
'method': 'DELETE'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve audit list with details.',
operations=[
{
'path': '/v1/audits/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'get',
check_str=base.RULE_ADMIN_API,
description='Get an audit.',
operations=[
{
'path': '/v1/audits/{audit_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all audits.',
operations=[
{
'path': '/v1/audits',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'update',
check_str=base.RULE_ADMIN_API,
description='Update an audit.',
operations=[
{
'path': '/v1/audits/{audit_uuid}',
'method': 'PATCH'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,90 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
AUDIT_TEMPLATE = 'audit_template:%s'
rules = [
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'create',
check_str=base.RULE_ADMIN_API,
description='Create an audit template.',
operations=[
{
'path': '/v1/audit_templates',
'method': 'POST'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'delete',
check_str=base.RULE_ADMIN_API,
description='Delete an audit template.',
operations=[
{
'path': '/v1/audit_templates/{audit_template_uuid}',
'method': 'DELETE'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of audit templates with details.',
operations=[
{
'path': '/v1/audit_templates/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'get',
check_str=base.RULE_ADMIN_API,
description='Get an audit template.',
operations=[
{
'path': '/v1/audit_templates/{audit_template_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get a list of all audit templates.',
operations=[
{
'path': '/v1/audit_templates',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'update',
check_str=base.RULE_ADMIN_API,
description='Update an audit template.',
operations=[
{
'path': '/v1/audit_templates/{audit_template_uuid}',
'method': 'PATCH'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,32 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
RULE_ADMIN_API = 'rule:admin_api'
ROLE_ADMIN_OR_ADMINISTRATOR = 'role:admin or role:administrator'
ALWAYS_DENY = '!'
rules = [
policy.RuleDefault(
name='admin_api',
check_str=ROLE_ADMIN_OR_ADMINISTRATOR
),
policy.RuleDefault(
name='show_password',
check_str=ALWAYS_DENY
)
]
def list_rules():
return rules


@@ -0,0 +1,57 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
GOAL = 'goal:%s'
rules = [
policy.DocumentedRuleDefault(
name=GOAL % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of goals with detail.',
operations=[
{
'path': '/v1/goals/detail',
                'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=GOAL % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a goal.',
operations=[
{
'path': '/v1/goals/{goal_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=GOAL % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all goals.',
operations=[
{
'path': '/v1/goals',
'method': 'GET'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,66 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
SCORING_ENGINE = 'scoring_engine:%s'
rules = [
# FIXME(lbragstad): Find someone from watcher to double check this
# information. This API isn't listed in watcher's API reference
# documentation.
policy.DocumentedRuleDefault(
name=SCORING_ENGINE % 'detail',
check_str=base.RULE_ADMIN_API,
description='List scoring engines with details.',
operations=[
{
'path': '/v1/scoring_engines/detail',
'method': 'GET'
}
]
),
# FIXME(lbragstad): Find someone from watcher to double check this
# information. This API isn't listed in watcher's API reference
# documentation.
policy.DocumentedRuleDefault(
name=SCORING_ENGINE % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a scoring engine.',
operations=[
{
'path': '/v1/scoring_engines/{scoring_engine_id}',
'method': 'GET'
}
]
),
# FIXME(lbragstad): Find someone from watcher to double check this
# information. This API isn't listed in watcher's API reference
# documentation.
policy.DocumentedRuleDefault(
name=SCORING_ENGINE % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all scoring engines.',
operations=[
{
'path': '/v1/scoring_engines',
'method': 'GET'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,57 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
SERVICE = 'service:%s'
rules = [
policy.DocumentedRuleDefault(
name=SERVICE % 'detail',
check_str=base.RULE_ADMIN_API,
description='List services with detail.',
operations=[
{
'path': '/v1/services/',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=SERVICE % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a specific service.',
operations=[
{
'path': '/v1/services/{service_id}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=SERVICE % 'get_all',
check_str=base.RULE_ADMIN_API,
description='List all services.',
operations=[
{
'path': '/v1/services/',
'method': 'GET'
}
]
),
]
def list_rules():
return rules


@@ -0,0 +1,68 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
STRATEGY = 'strategy:%s'
rules = [
policy.DocumentedRuleDefault(
name=STRATEGY % 'detail',
check_str=base.RULE_ADMIN_API,
description='List strategies with detail.',
operations=[
{
'path': '/v1/strategies/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=STRATEGY % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a strategy.',
operations=[
{
'path': '/v1/strategies/{strategy_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=STRATEGY % 'get_all',
check_str=base.RULE_ADMIN_API,
description='List all strategies.',
operations=[
{
'path': '/v1/strategies',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=STRATEGY % 'state',
check_str=base.RULE_ADMIN_API,
description='Get state of strategy.',
operations=[
{
                'path': '/v1/strategies/{strategy_uuid}/state',
'method': 'GET'
}
]
)
]
def list_rules():
return rules


@@ -15,11 +15,13 @@
"""Policy Engine For Watcher."""
import sys
from oslo_config import cfg
from oslo_policy import policy
from watcher.common import exception
from watcher.common import policies
_ENFORCER = None
CONF = cfg.CONF
@@ -56,6 +58,7 @@ def init(policy_file=None, rules=None,
default_rule=default_rule,
use_conf=use_conf,
overwrite=overwrite)
_ENFORCER.register_defaults(policies.list_rules())
return _ENFORCER
@@ -92,3 +95,23 @@ def enforce(context, rule=None, target=None,
'user_id': context.user_id}
return enforcer.enforce(rule, target, credentials,
do_raise=do_raise, exc=exc, *args, **kwargs)
def get_enforcer():
# This method is for use by oslopolicy CLI scripts. Those scripts need the
# 'output-file' and 'namespace' options, but having those in sys.argv means
# loading the Watcher config options will fail as those are not expected
# to be present. So we pass in an arg list with those stripped out.
conf_args = []
# Start at 1 because cfg.CONF expects the equivalent of sys.argv[1:]
i = 1
while i < len(sys.argv):
if sys.argv[i].strip('-') in ['namespace', 'output-file']:
i += 2
continue
conf_args.append(sys.argv[i])
i += 1
cfg.CONF(conf_args, project='watcher')
init()
return _ENFORCER
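The argv-stripping loop in `get_enforcer` can be exercised on its own. A self-contained sketch of the same logic as a free function (`strip_cli_opts` is a name invented for this example):

```python
# Standalone sketch of the filtering done in get_enforcer(): drop the
# oslopolicy CLI's own options (and their values) before passing the
# remaining args to the project's config loader.
def strip_cli_opts(argv, skip=('namespace', 'output-file')):
    conf_args = []
    i = 1  # mimic starting from sys.argv[1:]
    while i < len(argv):
        if argv[i].strip('-') in skip:
            i += 2  # skip the option and its value
            continue
        conf_args.append(argv[i])
        i += 1
    return conf_args

print(strip_cli_opts(
    ['prog', '--namespace', 'watcher', '--config-file', 'w.conf']))
```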


@@ -69,7 +69,8 @@ _DEFAULT_LOG_LEVELS = ['amqp=WARN', 'amqplib=WARN', 'qpid.messaging=INFO',
'keystoneclient=INFO', 'stevedore=INFO',
'eventlet.wsgi.server=WARN', 'iso8601=WARN',
'paramiko=WARN', 'requests=WARN', 'neutronclient=WARN',
-                        'glanceclient=WARN', 'watcher.openstack.common=WARN']
+                        'glanceclient=WARN', 'watcher.openstack.common=WARN',
+                        'apscheduler=WARN']
Singleton = service.Singleton
@@ -288,7 +289,7 @@ class Service(service.ServiceBase):
return api_manager_version
-def launch(conf, service_, workers=1, restart_method='reload'):
+def launch(conf, service_, workers=1, restart_method='mutate'):
return service.launch(conf, service_, workers, restart_method)

Some files were not shown because too many files have changed in this diff.