Compare commits


166 Commits
1.3.0 ... 1.5.0

Author SHA1 Message Date
caoyuan
84fb7423f1 Update the nova api_version default value to 2.53
the api_version has been updated, so the doc should reflect the new
default value

refer to https://github.com/openstack/watcher/blob/master/watcher/conf/nova_client.py#L26

Change-Id: I9717294d43203315d0593a4fee8c2ff2caf6f0d0
2017-10-17 14:40:59 +08:00
caoyuan
ee5b01d33b Correct the instance migration link
1. Update the instance migration link
2. Remove the unnecessary install-guide link

Configuring and installing Ceilometer via
https://docs.openstack.org/ceilometer/latest is enough, so the other
link is removed.

Change-Id: I2bf408de1023750a3d1f2c9e25293649d99ac428
2017-10-17 10:07:55 +08:00
Zuul
b35feb5432 Merge "optimize update_audit_state" 2017-10-17 01:19:31 +00:00
Jenkins
8343f4bf46 Merge "Add documentation about saving energy strategy" 2017-10-13 06:51:13 +00:00
Jenkins
cd3f792eef Merge "Invoke version_string in watcher/version.py directly" 2017-10-13 04:51:43 +00:00
Jenkins
7f7f7a9fd2 Merge "Add saving energy strategy description" 2017-10-13 04:50:00 +00:00
Jenkins
d5b778b730 Merge "Fix _build_instance_node for building Compute CDM" 2017-10-13 04:05:56 +00:00
Jenkins
62a902df7c Merge "Fix the telemetry-measurements hyperlink for strategies" 2017-10-13 02:52:06 +00:00
Jenkins
5b6f65630d Merge "Update OpenStack Installation Tutorial to pike" 2017-10-13 02:23:52 +00:00
OpenStack Proposal Bot
e5031ef04a Updated from global requirements
Change-Id: I973ef2867d95fe4f70a43c9015f7571188dc13cd
2017-10-12 22:07:21 +00:00
Jenkins
ab2408ea67 Merge "writing convention: do not use “-y” for package install" 2017-10-12 07:04:58 +00:00
Yumeng_Bao
4c0d2ab4b2 Add saving energy strategy description
Change-Id: Id99d48bd2ca2a2539366d8dc1f7627d7eb472a10
2017-10-12 11:39:02 +08:00
Yumeng_Bao
cf8d7bb2f4 Add documentation about saving energy strategy
Change-Id: I9746239c83ea7bff364ad6939e4174748be2d299
Closes-Bug: #1713402
2017-10-11 14:24:30 +08:00
Jenkins
b3d60cb13d Merge "Optimize the import format by pep8" 2017-10-10 14:33:45 +00:00
Jenkins
ffdc3b554d Merge "Remove explicitly enable neutron" 2017-10-10 13:21:35 +00:00
Jenkins
9933d61065 Merge "Fix Action 'change_node_power_state' FAILED" 2017-10-10 13:19:15 +00:00
Jenkins
b4bc1599e6 Merge "Fix TypeError in function chunkify" 2017-10-10 13:17:26 +00:00
Jenkins
280188a762 Merge "Remove installation guide for openSUSE and SLES" 2017-10-10 13:14:41 +00:00
Jenkins
3942f44e56 Merge "Notification Cancel Action Plan" 2017-10-10 13:12:02 +00:00
Jenkins
6208caba0c Merge "Fix action plan state change when action failed" 2017-10-10 12:06:07 +00:00
Jenkins
f3b3e82313 Merge "Remove the unnecessary word" 2017-10-10 11:02:37 +00:00
zhuzeyu
be69ebd8bd Invoke version_string in watcher/version.py directly
version_string is already defined in version.py,
so we don't need to generate the version in other files; just call it.

Change-Id: I7d8294860523eedad92e213ad00569829e120c39
2017-10-10 11:00:26 +00:00
Hidekazu Nakamura
7d33bf8813 Fix _build_instance_node for building Compute CDM
As of Nova API microversion 2.47, response of GET /servers/detail has flavor
which contains a subset of the actual flavor information used to create the
server instance, represented as a nested dictionary.

Since the current Watcher Nova default API version is 2.53 (Pike),
this patch follows the API response change.

Change-Id: Ia575950f0702afa1d093f03ca8ddedd3c410b7de
Closes-Bug: #1722462
2017-10-10 17:21:05 +09:00
zhengwei6082
b3fa8a0f86 writing convention: do not use “-y” for package install
Adhering to coding conventions. Refer to ``Code conventions`` at
https://docs.openstack.org/contributor-guide/ for details.

Change-Id: Ic8b166e17ab0d1cbbf2bb6b831f5e53cae6797ba
2017-10-10 06:13:52 +00:00
caoyuan
9d3cc28d2d Update OpenStack Installation Tutorial to pike
Since Pike is released, the OpenStack Installation Tutorial should
be updated to Pike

Change-Id: I565f721cb2acbc692c790707ef6b0d167d6a7b09
2017-10-10 10:37:45 +08:00
aditi
eed2e128b0 Remove explicitly enable neutron
This patch removes the explicitly enabled neutron from local.conf,
as devstack now uses neutron by default

Change-Id: Icf6bd944dd2262ff23cbcceb762a9ba80f471dbb
2017-10-10 01:59:46 +00:00
caoyuan
7091fe435f Fix the telemetry-measurements hyperlink for strategies
Change-Id: Ie38950967665bdc81eb75f54bc1b3b0a4630fe65
2017-10-09 10:41:54 +08:00
licanwei
7f9b562bbd optimize update_audit_state
save the state only if it differs from audit.state

Change-Id: Ida5156f2e63be55e8dd7de452965270c007ab4ab
2017-10-07 00:59:42 -07:00
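The guard described in the commit above can be sketched as follows; the class and method names here are illustrative stand-ins, not Watcher's actual API:

```python
class Audit:
    """Illustrative stand-in for Watcher's audit object."""

    def __init__(self, state):
        self.state = state
        self.saves = 0

    def save(self):
        self.saves += 1  # in Watcher this would be a database write


def update_audit_state(audit, new_state):
    # Write only when the new state actually differs from audit.state,
    # avoiding a redundant save on every call.
    if audit.state != new_state:
        audit.state = new_state
        audit.save()
    return audit.state


audit = Audit("PENDING")
update_audit_state(audit, "ONGOING")  # state changes: one save
update_audit_state(audit, "ONGOING")  # unchanged: no extra save
```

The second call is a no-op, which is exactly the redundant write the patch eliminates.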
caoyuan
f445fc451e Optimize the import format by pep8
Change-Id: Ic96759df51f5572fb6047df4b38bb411ecba8e20
2017-10-05 22:34:13 +08:00
Jenkins
fa7749ac8f Merge "Use Property setters" 2017-10-05 04:21:05 +00:00
Zuul
e6c06c1bdf Merge "Add exception log when migrate action failed" 2017-10-03 17:06:20 +00:00
caoyuan
f461b8c567 Remove the unnecessary word
Change-Id: I7f76f89ae17ffdacde421509dda29b7b7d3f5a4a
2017-10-03 21:06:27 +08:00
LiXiangyu
c717be12a6 Fix TypeError in function chunkify
This patch fixes a TypeError from range() in the chunkify function:
range() expects an integer step argument but got a str.

Change-Id: I2acde859e014baa4c4c59caa6f4ea938c7c4c3bf
2017-10-02 12:25:20 +00:00
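The failure mode is easy to reproduce: range() raises TypeError when its step is a string, so a chunk size that arrives as text (e.g. from configuration) must be cast first. A hypothetical sketch of the fixed helper (not Watcher's actual implementation):

```python
def chunkify(seq, chunk_size):
    """Split seq into lists of at most chunk_size items.

    chunk_size may arrive as a string, so cast it before
    handing it to range() as the step argument; a str step
    raises "TypeError: 'str' object cannot be interpreted
    as an integer".
    """
    size = int(chunk_size)
    return [seq[i:i + size] for i in range(0, len(seq), size)]


chunks = chunkify(list(range(7)), "3")  # works even with a str size
# chunks == [[0, 1, 2], [3, 4, 5], [6]]
```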
Hidekazu Nakamura
5814914aef Fix action plan state change when action failed
Since default workflow engine action container do_execute method
does not raise exception when action failed, workflow engine action
container execute method never raise exception and
action plan state becomes always SUCCEEDED.

This patch fixes default workflow engine action container do_execute
method to raise exception when action does not return True.

Change-Id: I7eeef69dbdfb5d40e3cf0b1004cbfe199a16bf7b
Closes-Bug: #1719793
2017-10-02 07:09:17 +00:00
Hidekazu Nakamura
fb3c2355a6 Remove installation guide for openSUSE and SLES
Since packages for openSUSE and SLES are not provided,
this patch removes the installation guide for openSUSE and SLES.

Change-Id: Ic15d8c4b262e935c7acaef41e18960d0b259d5c9
Closes-Bug: #1715032
2017-10-02 13:52:29 +09:00
aditi
d4e6e82dd2 Notification Cancel Action Plan
This patch adds Notifications for cancel action plan
operation.

Change-Id: I5a89a80729349e3db43ca35ff9fbe8579e86b3b1
Implements: blueprint notifications-actionplan-cancel
2017-09-29 14:44:30 +09:00
Hidekazu Nakamura
816765374d Fix migrate action failure
The disk_over_commit flag was removed in Nova API microversion 2.25 (Mitaka).

Since the current Watcher Nova default API version is 2.53 (Pike),
this patch removes the disk_over_commit flag.

Change-Id: Ib141505b9e8cb41997b29c1762e387b1f84f5143
Closes-Bug: #1720054
2017-09-28 14:06:07 +09:00
Hidekazu Nakamura
35e502f666 Add exception log when migrate action failed
As of now we cannot know what happened when a migrate action
fails critically.
This patch adds an exception log for that case.

Change-Id: I54d0bc54ee1df6f13754771775c58255f53f5008
2017-09-28 11:56:29 +09:00
Jenkins
ee36bb8180 Merge "[Doc] Fix host option" 2017-09-28 02:06:09 +00:00
Jenkins
0213bee63b Merge "Fix Watcher DB schema creation" 2017-09-27 08:36:31 +00:00
Alexander Chadin
f516a9c3b9 [Doc] Fix host option
Change-Id: I599856d2d02396f02f91ac4a607520ff60d7b033
2017-09-27 08:10:44 +00:00
aditi
8e89d5489c Use Property setters
At various places in watcher code, we are using property getters
to set property, in this way the property setters defined are
never used, this patch fixes to use property setters to set
property.

Change-Id: Idb274887f383523cea39277b166ec9b46ebcda85
2017-09-27 10:43:43 +09:00
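The pattern this commit fixes, shown on a toy class (not Watcher's actual code): assigning through the property invokes the setter, so any validation it performs actually runs, whereas writing to the backing attribute directly would bypass it.

```python
class Action:
    """Toy example of the property getter/setter pattern."""

    def __init__(self, state="PENDING"):
        self._state = state

    @property
    def state(self):
        return self._state

    @state.setter
    def state(self, value):
        # Validation lives in the setter; writing self._state
        # directly would silently skip this check.
        if not isinstance(value, str):
            raise TypeError("state must be a string")
        self._state = value


a = Action()
a.state = "SUCCEEDED"  # goes through the setter and is validated
```

Using `a.state = ...` everywhere (instead of `a._state = ...`) is what makes the setters defined in the codebase actually take effect.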
Jenkins
773b20a05f Merge "cleanup test-requirements" 2017-09-26 00:09:48 +00:00
Jenkins
03d6580819 Merge "Update the description for controller node" 2017-09-25 11:08:48 +00:00
Jenkins
afa73238c4 Merge "Update the "IAAS" to "IaaS"" 2017-09-25 10:54:50 +00:00
caoyuan
2467780f9d Update the description for controller node
1. change the controller node description to link
2. correct the link for compute node

Change-Id: Idfdde7f01c38a26dc4962e94431a760a0ed51f82
2017-09-25 03:21:44 +00:00
OpenStack Proposal Bot
25854aabd8 Updated from global requirements
Change-Id: I86b2ed25f98f022597d58335461efc9e0ff61b26
2017-09-24 12:31:19 +00:00
melissaml
e4f4588e69 cleanup test-requirements
python-subunit is not used directly anywhere; it is a dependency of
both testrepository and os-testr
(it was probably used by some tox wrapper script before).

Change-Id: I89279430554bc522817c4e2685afef0d95c641dd
2017-09-24 04:40:32 +08:00
caoyuan
1465aa0c5f Update the "IAAS" to "IaaS"
Infrastructure-as-a-Service should be abbreviated as IaaS

Change-Id: I845fed0c4a1f073dbdea1e8f0e9cdc1655aa3622
2017-09-21 16:38:30 +08:00
caoyuan
e6e0b3dbaa Correct the link for watcher cli
Change-Id: Ic844804278af3abdf5bbb05ea5ef9a1c630da628
2017-09-20 22:17:18 +08:00
Jenkins
d8274e062e Merge "Update the documentation link for doc migration" 2017-09-19 00:06:20 +00:00
lingyongxu
28b9766693 Update the documentation link for doc migration
This patch is proposed according to direction 10 of the doc
migration (https://etherpad.openstack.org/p/doc-migration-tracking).

Change-Id: I4eb594115e350e28f9136f7003692a1ec0abfcf6
2017-09-18 09:19:33 +00:00
OpenStack Proposal Bot
998e86f6c7 Updated from global requirements
Change-Id: I464b3573c2dbab3d97efbec0280298b0331a3cef
2017-09-16 23:26:49 +00:00
Alexander Chadin
a5e7fd90c2 Fix Watcher DB schema creation
This patch set replaces create_schema with upgrade to fix
the apscheduler creation issue. It also fixes pep8 warnings in
d09a5945e4a0_add_action_description_table.py

Change-Id: Ica842d585ee3a9cd67e45eb1d7bb1916573d7c9c
2017-09-15 15:30:38 +03:00
Jenkins
a99a9ae69e Merge "Utils: fix usage of strtime" 2017-09-14 07:28:24 +00:00
licanwei
6e6e5907ee Fix Action 'change_node_power_state' FAILED
The return value of ironic_client.node.set_power_state is None, so it's
useless to return the result.
We should instead check the node state until it changes or a timeout occurs.

Change-Id: I31f75a2c4a721ce4481e6ae7fb83d154a443dad9
Closes-Bug: #1713655
2017-09-13 23:59:35 -07:00
Jenkins
c887499b4d Merge "Fix incorrect config section name of configure doc" 2017-09-14 02:47:17 +00:00
Jenkins
58e4bf2727 Merge "Remove redundant right parenthesis" 2017-09-14 02:44:43 +00:00
OpenStack Proposal Bot
1df395d31d Updated from global requirements
Change-Id: I52665af336d0a0c765d034368a554005505cf30a
2017-09-14 00:11:17 +00:00
Jenkins
f811c8af48 Merge "Remove the unused rootwrap config" 2017-09-13 15:50:59 +00:00
Jenkins
e447393f18 Merge "iso8601.is8601.Utc No Longer Exists" 2017-09-13 13:13:59 +00:00
gaozx
a25be6498c Fix incorrect config section name of configure doc
Change-Id: I3d1e602f3a4beace516c56979b3b21b5683c1b0a
2017-09-13 16:38:50 +08:00
Jenkins
8e372ee153 Merge "Update the documentation link for doc migration" 2017-09-13 07:41:06 +00:00
aditi
7bc984b84a Fix Gate Failure
This Patch fixes gate failure, encountered in recent version
of oslo_messaging.

Change-Id: I6d8ab882a7c157dcf4f78c805a4ce2d9b1fa3f14
Closes-Bug: #1716476
2017-09-12 16:30:22 +09:00
gaozx
eeee32ad36 Remove redundant right parenthesis
Change-Id: Ibdb295d8d5ff8e49b0bebdb71c9c856f49c3881e
2017-09-11 09:48:42 +08:00
chenghuiyu
3a7fc7a8e5 Utils: fix usage of strtime
As oslo_utils.timeutils.strtime() is deprecated in
version '1.6', and will be removed in a future version.

For more informations:
https://docs.openstack.org/oslo.utils/latest/reference/timeutils.html

Change-Id: I1aca257fbe8b08c3478c5da9639835033b19144a
Partial-Bug: #1715325
2017-09-07 16:56:01 +08:00
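Since strtime() is deprecated, the usual replacement is formatting the datetime directly. A minimal sketch, assuming the commonly used `%Y-%m-%dT%H:%M:%S.%f` format (the helper name and default format here are assumptions, not Watcher's actual code):

```python
import datetime

# Format assumed to match the old strtime() default output.
_TIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"


def format_time(at=None):
    """Render a datetime as a string without the deprecated
    oslo_utils.timeutils.strtime() helper."""
    if at is None:
        at = datetime.datetime.utcnow()
    return at.strftime(_TIME_FORMAT)


stamp = format_time(datetime.datetime(2017, 9, 7, 16, 56, 1))
# stamp == "2017-09-07T16:56:01.000000"
```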
zhengwei6082
63697d5a6e Update the documentation link for doc migration
This patch is proposed according to direction 10 of the doc
migration (https://etherpad.openstack.org/p/doc-migration-tracking).

Change-Id: Idf2f369fb68c19efa54a06731bb33dc6fa949790
2017-09-07 09:56:40 +08:00
Sampath Priyankara
887fa746ae iso8601.is8601.Utc No Longer Exists
iso8601.UTC is the correct datetime UTC field object.
iso8601 >= 0.1.12 includes only iso8601.UTC for python3,
while both UTC and Utc() exist for python2. Versions earlier
than 0.1.12 included both UTC and Utc() for both python2 and 3.

Change-Id: I0f8796fba6725eea013b3f8d9ad33c10a402c524
Closes-Bug: #1715486
2017-09-07 10:42:05 +09:00
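A common way to handle this kind of rename is a compatibility lookup that prefers the new name and falls back to the old one. The sketch below demonstrates the pattern against stand-in module objects rather than the real iso8601 package, so it runs regardless of which iso8601 release is installed:

```python
import types


def get_utc(iso8601_module):
    """Return the UTC object, preferring the new iso8601.UTC name
    and falling back to the legacy Utc() class on old releases."""
    utc = getattr(iso8601_module, "UTC", None)
    if utc is None:
        utc = iso8601_module.Utc()  # pre-0.1.12 spelling
    return utc


# Stand-ins for an old and a new iso8601 module:
old = types.SimpleNamespace(Utc=lambda: "utc-instance")
new = types.SimpleNamespace(UTC="utc-singleton")
```

In the actual fix the simpler route is to just use `iso8601.UTC`, which exists on both python2 and python3 for 0.1.12 and later.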
Jenkins
e74095da1f Merge "Remove unused efficacy indicators" 2017-09-06 09:50:55 +00:00
zhurong
65c7cd0e02 Remove the unused rootwrap config
Watcher doesn't use rootwrap but still has the rootwrap config,
so remove the unused rootwrap config.

Change-Id: Icc32fc958ca8deb08d7b0e5404cbbe19b3ae98c7
2017-09-06 14:46:09 +08:00
Hidekazu Nakamura
5df54ea3fb Remove unused efficacy indicators
AverageCpuLoad and MigrationEfficacy efficacy indicators are not used.
This patch removes unused indicators.

Change-Id: I2b21defd442c135d26f8fd45f6faf9f67c770bde
2017-09-06 12:05:25 +09:00
Yaguo Zhou
51dba60e01 Replace DbMigrationError with DBMigrationError
because DbMigrationError is deprecated

Change-Id: I75ef338d2e22924997804632d26ae3588c4f719b
2017-09-05 23:31:05 +08:00
Jenkins
a9f33467fb Merge "Replace default gnocchi endpoint type" 2017-09-05 10:48:43 +00:00
Jenkins
4640d88adf Merge "Fix DEFAULT_SCHEMA to validate host_aggreates" 2017-09-05 08:45:35 +00:00
zhengwei6082
154aca3948 Replace default gnocchi endpoint type
The default gnocchi endpoint type is publicURL in gnocchiclient.
This patch changes the default gnocchi endpoint type from
internalURL to publicURL;
see https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/adapter.py#L347-L351

Change-Id: I0ba2bde46de3025964affe23ef16cce9e5b4670f
2017-09-05 02:11:42 +00:00
Jenkins
fa7afc89ab Merge "Updated from global requirements" 2017-09-04 08:30:54 +00:00
Jenkins
790548fff0 Merge "Modify display_name in strategy documentation" 2017-09-04 08:28:21 +00:00
Hidekazu Nakamura
a2fa13c8ff Fix gnocchiclient creation
Gnocchiclient uses keystoneauth1.adapter so that adapter_options
need to be given.
This patch fixes gnocchiclient creation.

Change-Id: I6b5d8ee775929f4b3fd30be3321b378d19085547
Closes-Bug: #1714871
2017-09-04 14:52:11 +09:00
Hidekazu Nakamura
4c3c84dee9 Fix DEFAULT_SCHEMA to validate host_aggreates
The audit scope JSON schema should restrict the key of host_aggregates
to "id" or "name", but that is not working now.
This patch fixes DEFAULT_SCHEMA to validate host_aggregates.

Change-Id: Iea42da41d61435780e247736599a56c026f47914
Closes-Bug: #1714448
2017-09-04 09:50:49 +09:00
OpenStack Proposal Bot
8f585c3def Updated from global requirements
Change-Id: If2d709b80f1783a5b14c9eda4d15da13c9ba5234
2017-09-02 12:15:38 +00:00
Jenkins
c9a43d8da4 Merge "Update default Nova API version to 2.53(Pike)" 2017-09-02 09:00:07 +00:00
Jenkins
2ea7d61ac8 Merge "Restrict existing strategies to their default scope" 2017-09-02 09:00:02 +00:00
Yumeng_Bao
bbfd6711fc Modify display_name in strategy documentation
The display_name in the documentation of each strategy should look like [1].
[1]: https://github.com/openstack/watcher/blob/master/watcher/decision_engine/strategy/strategies/workload_balance.py#L143

Change-Id: I31b16dbb81d824e0189fcf96ea7f6e57a289e59a
2017-09-01 14:48:23 +08:00
shangxiaobj
162aaa75ee [Trivialfix]Fix typos in watcher
Fix the typos in watcher.

Change-Id: I3ab77e2a1f862d3790065de4a6ff6c3ef42f226d
2017-08-31 20:47:57 -07:00
suzhengwei
4cb2b45e3a Restrict existing strategies to their default scope
Different strategies have different default scopes; restricting them to
their default scopes avoids usage problems.
1) workload_balancing/thermal_optimization/airflow_optimization goals
   act on enabled nodes, so restrict the default scope to compute nodes
   with up state and enabled status.
2) The server_consolidation goal acts on enabled or disabled nodes, so
   restrict the default scope to compute nodes with up state and
   enabled/disabled status.

Change-Id: I7437dee699ee2d3dd227a047196d4d8db811b81e
Closes-Bug: #1714002
2017-09-01 11:21:35 +08:00
Jenkins
50935af15f Merge "Fix to use . to source script files" 2017-09-01 01:31:39 +00:00
Hidekazu Nakamura
cf92ece936 Update default Nova API version to 2.53(Pike)
Services are now identified by uuid instead of database id to ensure
uniqueness across cells.
GET /os-services returns a uuid in the id field of the response
from API microversion 2.53 (the maximum in Pike).

This patch set updates the default Nova API version to 2.53.

Change-Id: Ib9fefb794eda3c9e75c6a2f5cfdb0e682b8955f3
Closes-Bug: #1709544
2017-08-30 14:39:31 +09:00
zhengwei6082
b7c4a0467c Fix to use . to source script files
Adhering to coding conventions. Refer to ``Code conventions`` at
https://docs.openstack.org/contributor-guide/ for details.

Change-Id: I54b93214c0e718465a0ea4ebe063061ef7d6e4b2
2017-08-29 18:01:09 +08:00
Jenkins
c12f132699 Merge "Remove unnecessary dict.keys() method calls (api)" 2017-08-29 06:30:49 +00:00
melissaml
0329dafec9 Fix to use "." to source script files
Adhering to coding conventions. Refer to ``Code conventions`` at
https://docs.openstack.org/contributor-guide/ for details.

Change-Id: I23ff70c9caefc870b3cc9d61cd8c18b534f2ffaf
2017-08-29 02:32:28 +08:00
zhengwei6082
e73ead4807 Update the documentation link for doc migration
Change-Id: If429e5023a252d9b86b227488f73cac863b3c658
2017-08-25 16:58:31 +08:00
Jenkins
cb90f60cc1 Merge "Updated from global requirements" 2017-08-25 06:23:39 +00:00
Jenkins
d7994a2466 Merge "Fix KeyError exception" 2017-08-24 12:40:32 +00:00
OpenStack Proposal Bot
62822fa933 Updated from global requirements
Change-Id: I31aaf37bf1ea73c26d9578abe167b43a24bb6c96
2017-08-24 11:51:24 +00:00
Jenkins
3f0ff1ed7e Merge "Updated from global requirements" 2017-08-24 10:09:43 +00:00
Jenkins
8e3b5c90a6 Merge "Remove watcher_tempest_plugin" 2017-08-24 10:09:24 +00:00
OpenStack Proposal Bot
1c5e254124 Updated from global requirements
Change-Id: I48a42877f8157873f4ea376c72d170d978d5e090
2017-08-24 06:03:14 +00:00
Viktor Varga
39e200e5eb Remove unnecessary dict.keys() method calls (api)
Since iter(dict) is equivalent to iter(dict.keys()), it is unnecessary
to call the keys() method of a dict, the dictionary itself is enough
to be referenced. The shorter form is also considered to be more
Pythonic.

This patch removes the unnecessary dict.keys() method calls in api.
This is a part of a larger patch series that removes dict.keys()
method calls.

TrivialFix

Change-Id: I29000f1f05b90d70109fa01393e97e1ebf450c63
2017-08-23 12:51:54 +02:00
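The equivalence the patch relies on, illustrated with a throwaway dict:

```python
d = {"cpu": 80, "ram": 65}

# iter(d) already walks the keys, so these two are equivalent:
with_keys = [k for k in d.keys()]
without_keys = [k for k in d]  # shorter, more Pythonic

# Membership tests likewise need no .keys():
assert "cpu" in d
```

The shorter form also avoids building an intermediate view object on python2, though on python3 the difference is purely stylistic.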
Jenkins
2650b89fe5 Merge "Update the documentation for doc migration" 2017-08-22 10:10:33 +00:00
zhengwei6082
d5bcd37478 Update the documentation for doc migration
Change-Id: I22dc18e6f2f7471f5c804d4d19c631f81a6e196b
2017-08-22 09:06:06 +00:00
Alexander Chadin
0c4b439c5e Remove watcher_tempest_plugin
In accordance with the Queens global goal [1], this patch set
removes the watcher tempest plugin from the watcher repository since
it is stored as an independent repository [2]. The Jenkins job
gate-watcher-dsvm-multinode-ubuntu-xenial-nv has been modified;
it now uses the watcher-tempest-plugin repo.

[1]: https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
[2]: http://git.openstack.org/cgit/openstack/watcher-tempest-plugin/

Change-Id: I4d1207fbd73ee2519a6d40342f5fd3c5d3ee8bc7
2017-08-21 17:41:56 +03:00
OpenStack Proposal Bot
0e43504e44 Updated from global requirements
Change-Id: Idd00808c17ecbca5925f91ea2f1257d097af7892
2017-08-18 11:45:04 +00:00
licanwei
322843b21c Fix KeyError exception
During the strategy sync process,
if a goal_id can't be found in the goals table,
a KeyError exception will be thrown.

Change-Id: I62800ac5c69f4f5c7820908f2e777094a51a5541
Closes-Bug: #1711086
2017-08-17 04:25:53 -07:00
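A sketch of the defensive lookup such a fix typically uses (the names below are hypothetical, not Watcher's actual sync code): indexing a dict raises KeyError on a missing key, while dict.get() lets the caller handle the miss explicitly.

```python
# Hypothetical goals table keyed by goal_id.
goals = {1: "dummy", 2: "server_consolidation"}


def goal_name(goal_id):
    # goals[goal_id] would raise KeyError for an unknown id;
    # .get() returns None instead, which we handle explicitly.
    name = goals.get(goal_id)
    if name is None:
        return "<unknown goal %s>" % goal_id
    return name
```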
Jenkins
1b413f5536 Merge "Remove pbr warnerrors" 2017-08-17 03:07:46 +00:00
Alexander Chadin
f76a628d1f Remove pbr warnerrors
This change removes the now unused "warnerrors" setting,
which is replaced by "warning-is-error" in sphinx
releases >= 1.5 [1].

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113085.html

Change-Id: I32f078169668be08737e47cd15edbdfba42904dc
2017-08-16 11:54:24 +03:00
Jenkins
3e6ea71cbc Merge "Adjust the action state judgment logic" 2017-08-15 09:10:53 +00:00
Jenkins
e5c3df0c2f Merge "workload balance base on cpu or ram util" 2017-08-15 08:37:58 +00:00
Jenkins
6005d6ebdd Merge "Fix gnocchi repository URL in local.conf.controller" 2017-08-14 09:14:54 +00:00
licanwei
965af1b6fd Adjust the action state judgment logic
The action state is set to SUCCEEDED only when True is returned;
some actions (such as migrate) will return None if an exception is raised.

Change-Id: I52e7a1ffb68f54594f2b00d9843e8e0a4c985667
2017-08-14 02:12:35 -07:00
Jenkins
daf428ad69 Merge "Removed unnecessary setUp calls in tests" 2017-08-11 09:15:01 +00:00
OpenStack Release Bot
ab64dab646 Update reno for stable/pike
Change-Id: I1c1c855b80ad10e343d1c34e17ed11d8255e9fea
2017-08-11 01:09:58 +00:00
Jenkins
eaa09a4cfc Merge "Fix failure to load storage plugin" 2017-08-10 12:41:52 +00:00
suzhengwei
5c86a54d20 workload balance base on cpu or ram util
Based on the input parameter "metrics", the strategy decides whether
to migrate a VM based on cpu or memory utilization.

Change-Id: I35cce3495c8dacad64ea6c6ee71082a85e9e0a83
2017-08-09 07:04:10 +00:00
Jenkins
e78f2d073f Merge "[Doc] Fix db creation" 2017-08-09 06:26:57 +00:00
Jenkins
47004b7c67 Merge "change ram util metric" 2017-08-09 06:26:48 +00:00
Jenkins
9ecd22f4c8 Merge "Fix exception.ComputeNodeNotFound" 2017-08-08 08:03:28 +00:00
Jenkins
daee2336a4 Merge "get_config_opts method was overwritten" 2017-08-08 01:05:56 +00:00
Jenkins
893b730a44 Merge "Replace map/filter lambda with comprehensions" 2017-08-08 00:39:51 +00:00
Alexander Chadin
d5b6e0a54f [Doc] Fix db creation
This patch set fixes command to create db schema.

Closes-Bug: #1709048
Change-Id: I1214313307fe0375d42e1a22562cd16ae867795d
2017-08-07 15:02:41 +00:00
Fanis Kalimullin
13b89c8dd2 get_config_opts method was overwritten
The outlet_temperature strategy relies on a datasource config parameter,
which can be either "ceilometer" or "gnocchi". This patch overrides the
get_config_opts method of the base class to allow specifying the datasource.

Change-Id: I551401039e26816568a04c7f2151d5b3c7ed269a
Closes-Bug: #1709024
2017-08-07 11:05:19 +00:00
Jenkins
7a300832b2 Merge "Fix compute CDM to include disabled compute node" 2017-08-07 10:23:46 +00:00
Viktor Varga
d218e6f107 Replace map/filter lambda with comprehensions
List comprehensions and generator expressions are considered to be more
Pythonic (and usually more readable) than map and filter with lambda.
This patch replaces four usages of [map|filter](lambda ...) with the
appropriate list comprehension or generator expression.

TrivialFix

Change-Id: Ifda9030bb8aa196cb7a5977a57ef46dfefd70fa6
2017-08-07 13:22:40 +03:00
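The before/after shape of this change, on throwaway data (the real patch touches four call sites, not shown here):

```python
numbers = [1, 2, 3, 4, 5]

# map/filter with lambda ...
doubled_old = list(map(lambda n: n * 2, numbers))
evens_old = list(filter(lambda n: n % 2 == 0, numbers))

# ... replaced by the equivalent comprehensions:
doubled = [n * 2 for n in numbers]
evens = [n for n in numbers if n % 2 == 0]
```

The comprehensions produce identical results and read more directly, which is the whole motivation of the patch.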
suzhengwei
d2f70f9d6f change ram util metric
The 'memory.usage' metric depends on the balloon driver and shows the
memory usage inside the guest OS, while 'memory.resident' represents
the volume of RAM used by the instance on the physical machine.
So it is more reasonable to use 'memory.resident' when calculating
node utilization by gathering the instances' utilization on the node.

Change-Id: I12dff5176bcf6cb103aa44cafd54f9ecd7170864
2017-08-07 16:04:19 +08:00
Jenkins
4951854f76 Merge "Change exception class from monascaclient" 2017-08-07 08:02:22 +00:00
Jenkins
ffbd263888 Merge "[Doc] Update software version" 2017-08-07 07:41:02 +00:00
Hidekazu Nakamura
985c6c49f9 Fix failure to load storage plugin
Watcher fails to load the storage plugin when Cinder is not installed
among the OpenStack services.

This patch set adds a collector_plugins parameter under the collector
section in watcher.conf. If a plugin name is in collector_plugins,
the plugin is loaded.

Change-Id: Ie3c3543216c925d49b772bf5fe3773ca7d5ae437
Closes-Bug: #1707603
2017-08-07 16:40:40 +09:00
Jenkins
adac2c0c16 Merge "Fix ironic client input parameter" 2017-08-07 07:39:53 +00:00
Jenkins
f700ca4e0f Merge "Fix incorrect action status in notifications" 2017-08-07 07:21:08 +00:00
licanwei
5b741b2a4d Fix exception.ComputeNodeNotFound
compute_model.get_node_by_uuid(node_uuid) will raise an exception
when the compute node isn't in the compute model.

Closes-Bug: #1709004

Change-Id: I667a9dbfcf67f9f895976aadd5300bbea2ffe6f0
2017-08-06 23:16:16 -07:00
OpenStack Proposal Bot
382f641b22 Updated from global requirements
Change-Id: Ie647221a3ab055e7b150d65ffb9287b44ef834cb
2017-08-07 00:56:18 +00:00
Tomasz Trębski
5da5db8b56 Change exception class from monascaclient
monascaclient was recently migrated to use the 'osc' library.
Due to that, the exception classes have changed. This commit
adjusts the exception class to the recently released
monascaclient==1.7.0

Depends-On: Ie647221a3ab055e7b150d65ffb9287b44ef834cb
Change-Id: Icfef345c4269ac4cb682049f22a43fdab3d39845
2017-08-04 08:55:10 +00:00
Hidekazu Nakamura
5cc4716a95 Fix gnocchi repository URL in local.conf.controller
This patch set updates the gnocchi repository URL in local.conf.controller
because it moved from under openstack to its own repository.

Change-Id: I53c6efcb40b26f83bc1867564b9067ae5f50938d
2017-08-04 09:23:02 +09:00
Jenkins
c4888fee63 Merge "Update the documentation for doc migration" 2017-08-03 03:21:24 +00:00
Jenkins
76f85591ea Merge "[Doc] Add Configure Cinder Notifications" 2017-08-02 10:25:13 +00:00
Jenkins
b006cadd22 Merge "Ignore autogenerated sample config file" 2017-08-02 10:23:15 +00:00
Jenkins
1fd2053001 Merge "[Doc] Add cinder to architecture diagram" 2017-08-02 10:22:12 +00:00
Jenkins
6a920fd307 Merge "Fix show db version in README" 2017-08-02 08:20:36 +00:00
Jenkins
514eeb75ef Merge "Update State diagram of Action Plan" 2017-08-02 07:08:52 +00:00
licanwei
b43633fa6d Fix ironic client input parameter
The correct parameter is 'os_endpoint_type'

Change-Id: I80b03af8c55ec1d89ff1fbdd9894115b819ccde4
2017-08-01 22:35:01 -07:00
licanwei
d5a7d7674c Fix show db version in README
watcher-db-manage version: Print the current version

Change-Id: Ie08eb682879b2c071f724a6847094650047bde34
2017-08-01 21:54:48 -07:00
Gábor Antal
b532355232 Removed unnecessary setUp calls in tests
TrivialFix

Change-Id: I057d03466b058a42be8ec57dbc42cbd67b75cc3c
2017-08-01 10:50:34 +02:00
Jenkins
bce87b3d05 Merge "Modification of statistic_aggregation method" 2017-08-01 08:12:33 +00:00
Hidekazu Nakamura
783627626c Fix compute CDM to include disabled compute node
Currently the compute CDM excludes disabled compute nodes.
This patch set fixes the compute CDM to include them.

Change-Id: I8236bb73e0d9bb242251c2abfb59ad5693087afa
Closes-Bug: #1685787
2017-08-01 16:48:47 +09:00
aditi
3043e57066 Update State diagram of Action Plan
This patch updates the state machine diagram for the action plan. It
includes the new state "cancelling", which is introduced by the action
plan cancel operation.

Change-Id: I0af59f2164922c56d59fbad8018e2aecfef97098
2017-08-01 04:49:14 +00:00
Jenkins
be8b163a62 Merge "Added Actuator Strategy" 2017-08-01 00:30:05 +00:00
mergalievibragim
4f38595e4e Modification of statistic_aggregation method
This patch adds fetching the resource_id by the resource's original_id
to the statistic_aggregation method.

Closes-Bug: #1707653 
Change-Id: I70b9346146f810e2236ccdb31de4c3fedf200568
2017-07-31 14:03:18 +00:00
aditi
30def6f35b Fix incorrect action status in notifications
This patch fixes incorrect action status in action execution
notification.

Change-Id: I1859f6183e2b4f8f380b8c9a13e3e0b7feb4b8e2
Closes-Bug: #1706860
2017-07-31 11:06:47 +00:00
Vincent Françoise
0b31828a01 Added Actuator Strategy
This strategy allows us to create action plans with an explicit
set of actions.

Co-Authored-By: Mikhail Kizilov <kizilov.mikhail@gmail.com>
Change-Id: I7b04b9936ce5f3b5b38f319da7f8737e0f3eea88
Closes-Bug: #1659243
2017-07-31 10:52:07 +00:00
Jenkins
b5ac97bc2d Merge "Fix continuous audit fails once it fails" 2017-07-31 07:41:57 +00:00
Hidekazu Nakamura
398974a7b0 [Doc] Update software version
1. Update python version from 3.4 to 3.5
2. Update Ubuntu version from 14.04 to 16.04
3. Update Fedora version from 19+ to 24+

Change-Id: Ic5e9bbd126e10697300c6ffd51ff55d0b815d5ca
2017-07-31 15:12:41 +09:00
Hidekazu Nakamura
3a29b4e710 Fix continuous audit fails once it fails
Currently a continuous audit fails permanently once it fails,
because the continuous audit tries to remove its job
even if the job does not exist.

This patch set fixes it.

Change-Id: Ic461408c97d71e14c57e368f8436b26fe355fa4e
Closes-Bug: #1706857
2017-07-31 11:01:04 +09:00
OpenStack Proposal Bot
8024dbf913 Updated from global requirements
Change-Id: If6105a3b911757ac3204e9c73e793b5cee58c1a8
2017-07-28 13:02:45 +00:00
Jenkins
529b0d34ee Merge "Fix Hardcoded availability zone in nova-helper" 2017-07-28 08:38:30 +00:00
aditi
dac0924194 Fix Hardcoded availability zone in nova-helper
This patch fixes the hardcoded value of the availability zone
in nova-helper. Now the Nova API is used to get the availability zone
of the destination node.

Change-Id: I4c5a34946ed404df5bbfe34ce99873d32772dbf4
2017-07-28 03:55:13 +00:00
Jenkins
3bb66b645c Merge "Saving Energy Strategy" 2017-07-27 12:32:21 +00:00
Jenkins
63cebc0bfa Merge "dynamic action description" 2017-07-27 12:09:44 +00:00
Yumeng Bao
5a28ac772a Saving Energy Strategy
Add strategy to trigger "power on" and "power off" actions in watcher.

Change-Id: I7ebcd2a0282e3cc7b9b01cf8c744468ce16c56bb
Implements: blueprint strategy-to-trigger-power-on-and-power-off-actions
Co-Authored-By: licanwei <li.canwei2@zte.com.cn>
2017-07-27 19:04:26 +08:00
Jenkins
fe7ad9e42b Merge "Add volume migrate action" 2017-07-27 09:40:14 +00:00
Jenkins
711de94855 Merge "Add release notes for Pike" 2017-07-27 09:40:03 +00:00
licanwei
a24b7f0b61 dynamic action description
Add a new table to save the mapping.
Add logic to update the table when an action is loaded.
Add logic to show the action description.

Change-Id: Ia008a8715bcc666ab0fefe444ef612394c775e91
Implements: blueprint dynamic-action-description
2017-07-26 20:42:01 -07:00
Hidekazu Nakamura
c03668cb02 [Doc] Add cinder to architecture diagram
Cinder data model was added in Pike cycle.
This patch set adds cinder to architecture diagram.

Change-Id: Ibf590996494f4e6ebcc59b26fbd562d079cea9ef
2017-07-26 21:50:33 +09:00
Alexander Chadin
aab18245eb Add release notes for Pike
This patch set adds release notes for Pike release.

Change-Id: I4a962ed3d20ca746a470a7ee8b2de2cf703f94f5
2017-07-26 10:45:15 +03:00
Hidekazu Nakamura
c12178920b [Doc] Add Configure Cinder Notifications
Cinder data model was added in Pike cycle and that needs
configuration in cinder.conf for refreshing the model in
real time.

This patch set adds a Configure Cinder Notifications section
explaining the configuration.

Change-Id: I41cc870e2d47c56fd7c9fcdd6f03c95fa939c3f2
2017-07-26 16:31:29 +09:00
zhengwei6082
f733fbeecd Update the documentation for doc migration
Change-ID: Ic3dc2a93caac99f1dbe3547350a87fc01d0d4181
2017-07-26 15:26:13 +08:00
Hidekazu Nakamura
bff76de6f1 Add volume migrate action
This patch adds volume migrate action.

Change-Id: I9f46931d2a7edff4c727d674ec315924b9ae30c2
Implements: blueprint volume-migrate-action
2017-07-21 11:27:37 +09:00
Yumeng Bao
22ee0aa8f7 Ignore autogenerated sample config file
Change-Id: Ief43668feb06c136c87a2218e9d7671c7809dcbc
2017-07-10 19:16:36 +08:00
204 changed files with 4261 additions and 3518 deletions

.gitignore (vendored, 3 changes)

@@ -72,3 +72,6 @@ releasenotes/build
 # Desktop Service Store
 *.DS_Store
+# Autogenerated sample config file
+etc/watcher/watcher.conf.sample

devstack/local.conf.controller

@@ -24,13 +24,6 @@ MULTI_HOST=1
 # This is the controller node, so disable nova-compute
 disable_service n-cpu
-# Disable nova-network and use neutron instead
-disable_service n-net
-ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3,neutron
-# Enable remote console access
-enable_service n-cauth
 # Enable the Watcher Dashboard plugin
 enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard

@@ -42,11 +35,12 @@ enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
 # This is the controller node, so disable the ceilometer compute agent
 disable_service ceilometer-acompute
 # Enable the ceilometer api explicitly(bug:1667678)
 enable_service ceilometer-api
 # Enable the Gnocchi plugin
-enable_plugin gnocchi https://git.openstack.org/openstack/gnocchi
+enable_plugin gnocchi https://github.com/gnocchixyz/gnocchi
 LOGFILE=$DEST/logs/stack.sh.log
 LOGDAYS=2

@@ -7,7 +7,7 @@ _XTRACE_WATCHER_PLUGIN=$(set +o | grep xtrace)
 set -o xtrace
 echo_summary "watcher's plugin.sh was called..."
-source $DEST/watcher/devstack/lib/watcher
+. $DEST/watcher/devstack/lib/watcher
 # Show all of defined environment variables
 (set -o posix; set)

@@ -22,7 +22,7 @@ from docutils import nodes
 from docutils.parsers import rst
 from docutils import statemachine
-from watcher.version import version_info
+from watcher.version import version_string
 class BaseWatcherDirective(rst.Directive):
@@ -169,4 +169,4 @@ class WatcherFunc(BaseWatcherDirective):
 def setup(app):
     app.add_directive('watcher-term', WatcherTerm)
     app.add_directive('watcher-func', WatcherFunc)
-    return {'version': version_info.version_string()}
+    return {'version': version_string}
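The hunks above replace a bound method call with a plain module attribute. A minimal sketch of the two shapes, with stand-in values rather than watcher's actual pbr-backed `version.py`:

```python
# Old shape: version.py exposed an object whose method builds the string.
class VersionInfo:
    def version_string(self):
        # Stand-in value; the real module derives this via pbr.
        return "1.5.0"

version_info = VersionInfo()

# New shape: version.py resolves the string once and exports it directly,
# so consumers write `version_string` instead of
# `version_info.version_string()`.
version_string = version_info.version_string()

print(version_string)
```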

@@ -0,0 +1,41 @@
{
    "priority": "INFO",
    "payload": {
        "watcher_object.namespace": "watcher",
        "watcher_object.version": "1.0",
        "watcher_object.name": "ActionCancelPayload",
        "watcher_object.data": {
            "uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
            "input_parameters": {
                "param2": 2,
                "param1": 1
            },
            "fault": null,
            "created_at": "2016-10-18T09:52:05Z",
            "updated_at": null,
            "state": "CANCELLED",
            "action_plan": {
                "watcher_object.namespace": "watcher",
                "watcher_object.version": "1.0",
                "watcher_object.name": "TerseActionPlanPayload",
                "watcher_object.data": {
                    "uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
                    "global_efficacy": {},
                    "created_at": "2016-10-18T09:52:05Z",
                    "updated_at": null,
                    "state": "CANCELLING",
                    "audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
                    "strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
                    "deleted_at": null
                }
            },
            "parents": [],
            "action_type": "nop",
            "deleted_at": null
        }
    },
    "event_type": "action.cancel.end",
    "publisher_id": "infra-optim:node0",
    "timestamp": "2017-01-01 00:00:00.000000",
    "message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}
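Every field in these sample payloads is wrapped in a `watcher_object.*` envelope (namespace, version, name, data). The nesting can be stripped with a small recursive helper; `unwrap` below is an illustrative sketch, not part of Watcher, and the payload literal is abridged from the sample above:

```python
import json

def unwrap(obj):
    """Recursively strip the watcher_object.* envelope, returning plain dicts."""
    if isinstance(obj, dict):
        if "watcher_object.data" in obj:
            # Drop the namespace/version/name siblings, keep only the data.
            return unwrap(obj["watcher_object.data"])
        return {k: unwrap(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [unwrap(v) for v in obj]
    return obj

sample = json.loads("""
{
  "priority": "INFO",
  "event_type": "action.cancel.end",
  "payload": {
    "watcher_object.namespace": "watcher",
    "watcher_object.version": "1.0",
    "watcher_object.name": "ActionCancelPayload",
    "watcher_object.data": {
      "state": "CANCELLED",
      "action_plan": {
        "watcher_object.namespace": "watcher",
        "watcher_object.version": "1.0",
        "watcher_object.name": "TerseActionPlanPayload",
        "watcher_object.data": {"state": "CANCELLING"}
      }
    }
  }
}
""")

action = unwrap(sample["payload"])
print(action["state"])                 # CANCELLED
print(action["action_plan"]["state"])  # CANCELLING
```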

@@ -0,0 +1,51 @@
{
    "priority": "ERROR",
    "payload": {
        "watcher_object.namespace": "watcher",
        "watcher_object.version": "1.0",
        "watcher_object.name": "ActionCancelPayload",
        "watcher_object.data": {
            "uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
            "input_parameters": {
                "param2": 2,
                "param1": 1
            },
            "fault": {
                "watcher_object.namespace": "watcher",
                "watcher_object.version": "1.0",
                "watcher_object.name": "ExceptionPayload",
                "watcher_object.data": {
                    "module_name": "watcher.tests.notifications.test_action_notification",
                    "exception": "WatcherException",
                    "exception_message": "TEST",
                    "function_name": "test_send_action_cancel_with_error"
                }
            },
            "created_at": "2016-10-18T09:52:05Z",
            "updated_at": null,
            "state": "FAILED",
            "action_plan": {
                "watcher_object.namespace": "watcher",
                "watcher_object.version": "1.0",
                "watcher_object.name": "TerseActionPlanPayload",
                "watcher_object.data": {
                    "uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
                    "global_efficacy": {},
                    "created_at": "2016-10-18T09:52:05Z",
                    "updated_at": null,
                    "state": "CANCELLING",
                    "audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
                    "strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
                    "deleted_at": null
                }
            },
            "parents": [],
            "action_type": "nop",
            "deleted_at": null
        }
    },
    "event_type": "action.cancel.error",
    "publisher_id": "infra-optim:node0",
    "timestamp": "2017-01-01 00:00:00.000000",
    "message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}

@@ -0,0 +1,41 @@
{
    "priority": "INFO",
    "payload": {
        "watcher_object.namespace": "watcher",
        "watcher_object.version": "1.0",
        "watcher_object.name": "ActionCancelPayload",
        "watcher_object.data": {
            "uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
            "input_parameters": {
                "param2": 2,
                "param1": 1
            },
            "fault": null,
            "created_at": "2016-10-18T09:52:05Z",
            "updated_at": null,
            "state": "CANCELLING",
            "action_plan": {
                "watcher_object.namespace": "watcher",
                "watcher_object.version": "1.0",
                "watcher_object.name": "TerseActionPlanPayload",
                "watcher_object.data": {
                    "uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
                    "global_efficacy": {},
                    "created_at": "2016-10-18T09:52:05Z",
                    "updated_at": null,
                    "state": "CANCELLING",
                    "audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
                    "strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
                    "deleted_at": null
                }
            },
            "parents": [],
            "action_type": "nop",
            "deleted_at": null
        }
    },
    "event_type": "action.cancel.start",
    "publisher_id": "infra-optim:node0",
    "timestamp": "2017-01-01 00:00:00.000000",
    "message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}

@@ -0,0 +1,55 @@
{
    "event_type": "action_plan.cancel.end",
    "payload": {
        "watcher_object.namespace": "watcher",
        "watcher_object.name": "ActionPlanCancelPayload",
        "watcher_object.version": "1.0",
        "watcher_object.data": {
            "created_at": "2016-10-18T09:52:05Z",
            "deleted_at": null,
            "audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
            "audit": {
                "watcher_object.namespace": "watcher",
                "watcher_object.name": "TerseAuditPayload",
                "watcher_object.version": "1.0",
                "watcher_object.data": {
                    "created_at": "2016-10-18T09:52:05Z",
                    "deleted_at": null,
                    "uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
                    "goal_uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
                    "strategy_uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
                    "scope": [],
                    "audit_type": "ONESHOT",
                    "state": "SUCCEEDED",
                    "parameters": {},
                    "interval": null,
                    "updated_at": null
                }
            },
            "uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
            "fault": null,
            "state": "CANCELLED",
            "global_efficacy": {},
            "strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
            "strategy": {
                "watcher_object.namespace": "watcher",
                "watcher_object.name": "StrategyPayload",
                "watcher_object.version": "1.0",
                "watcher_object.data": {
                    "created_at": "2016-10-18T09:52:05Z",
                    "deleted_at": null,
                    "name": "TEST",
                    "uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
                    "parameters_spec": {},
                    "display_name": "test strategy",
                    "updated_at": null
                }
            },
            "updated_at": null
        }
    },
    "priority": "INFO",
    "message_id": "3984dc2b-8aef-462b-a220-8ae04237a56e",
    "timestamp": "2016-10-18 09:52:05.219414",
    "publisher_id": "infra-optim:node0"
}

@@ -0,0 +1,65 @@
{
    "event_type": "action_plan.cancel.error",
    "publisher_id": "infra-optim:node0",
    "priority": "ERROR",
    "message_id": "9a45c5ae-0e21-4300-8fa0-5555d52a66d9",
    "payload": {
        "watcher_object.version": "1.0",
        "watcher_object.namespace": "watcher",
        "watcher_object.name": "ActionPlanCancelPayload",
        "watcher_object.data": {
            "fault": {
                "watcher_object.version": "1.0",
                "watcher_object.namespace": "watcher",
                "watcher_object.name": "ExceptionPayload",
                "watcher_object.data": {
                    "exception_message": "TEST",
                    "module_name": "watcher.tests.notifications.test_action_plan_notification",
                    "function_name": "test_send_action_plan_cancel_with_error",
                    "exception": "WatcherException"
                }
            },
            "uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
            "created_at": "2016-10-18T09:52:05Z",
            "strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
            "strategy": {
                "watcher_object.version": "1.0",
                "watcher_object.namespace": "watcher",
                "watcher_object.name": "StrategyPayload",
                "watcher_object.data": {
                    "uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
                    "created_at": "2016-10-18T09:52:05Z",
                    "name": "TEST",
                    "updated_at": null,
                    "display_name": "test strategy",
                    "parameters_spec": {},
                    "deleted_at": null
                }
            },
            "updated_at": null,
            "deleted_at": null,
            "audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
            "audit": {
                "watcher_object.version": "1.0",
                "watcher_object.namespace": "watcher",
                "watcher_object.name": "TerseAuditPayload",
                "watcher_object.data": {
                    "parameters": {},
                    "uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
                    "goal_uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
                    "strategy_uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
                    "created_at": "2016-10-18T09:52:05Z",
                    "scope": [],
                    "updated_at": null,
                    "audit_type": "ONESHOT",
                    "interval": null,
                    "deleted_at": null,
                    "state": "SUCCEEDED"
                }
            },
            "global_efficacy": {},
            "state": "CANCELLING"
        }
    },
    "timestamp": "2016-10-18 09:52:05.219414"
}

@@ -0,0 +1,55 @@
{
    "event_type": "action_plan.cancel.start",
    "payload": {
        "watcher_object.namespace": "watcher",
        "watcher_object.name": "ActionPlanCancelPayload",
        "watcher_object.version": "1.0",
        "watcher_object.data": {
            "created_at": "2016-10-18T09:52:05Z",
            "deleted_at": null,
            "audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
            "audit": {
                "watcher_object.namespace": "watcher",
                "watcher_object.name": "TerseAuditPayload",
                "watcher_object.version": "1.0",
                "watcher_object.data": {
                    "created_at": "2016-10-18T09:52:05Z",
                    "deleted_at": null,
                    "uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
                    "goal_uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
                    "strategy_uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
                    "scope": [],
                    "audit_type": "ONESHOT",
                    "state": "SUCCEEDED",
                    "parameters": {},
                    "interval": null,
                    "updated_at": null
                }
            },
            "uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
            "fault": null,
            "state": "CANCELLING",
            "global_efficacy": {},
            "strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
            "strategy": {
                "watcher_object.namespace": "watcher",
                "watcher_object.name": "StrategyPayload",
                "watcher_object.version": "1.0",
                "watcher_object.data": {
                    "created_at": "2016-10-18T09:52:05Z",
                    "deleted_at": null,
                    "name": "TEST",
                    "uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
                    "parameters_spec": {},
                    "display_name": "test strategy",
                    "updated_at": null
                }
            },
            "updated_at": null
        }
    },
    "priority": "INFO",
    "message_id": "3984dc2b-8aef-462b-a220-8ae04237a56e",
    "timestamp": "2016-10-18 09:52:05.219414",
    "publisher_id": "infra-optim:node0"
}

@@ -127,8 +127,8 @@ Here is single Dockerfile snippet you can use to run your Docker container:
 RUN apt-get update
 RUN apt-get dist-upgrade -y
-RUN apt-get install -y vim net-tools
-RUN apt-get install -yt experimental watcher-api
+RUN apt-get install vim net-tools
+RUN apt-get install experimental watcher-api
 CMD ["/usr/bin/watcher-api"]

@@ -119,7 +119,7 @@ The watcher command-line interface (CLI) can be used to interact with the
 Watcher system in order to control it or to know its current status.
 Please, read `the detailed documentation about Watcher CLI
-<https://factory.b-com.com/www/watcher/doc/python-watcherclient/>`_.
+<https://docs.openstack.org/python-watcherclient/latest/cli/>`_.
 .. _archi_watcher_dashboard_definition:
@@ -130,7 +130,7 @@ The Watcher Dashboard can be used to interact with the Watcher system through
 Horizon in order to control it or to know its current status.
 Please, read `the detailed documentation about Watcher Dashboard
-<http://docs.openstack.org/developer/watcher-dashboard/>`_.
+<https://docs.openstack.org/watcher-dashboard/latest>`_.
 .. _archi_watcher_database_definition:
@@ -170,7 +170,7 @@ Unless specified, it then selects the most appropriate :ref:`strategy
 goal.
 The :ref:`Strategy <strategy_definition>` is then dynamically loaded (via
-`stevedore <http://docs.openstack.org/developer/stevedore/>`_). The
+`stevedore <https://docs.openstack.org/stevedore/latest>`_). The
 :ref:`Watcher Decision Engine <watcher_decision_engine_definition>` executes
 the strategy.

@@ -72,7 +72,7 @@ copyright = u'OpenStack Foundation'
 # The full version, including alpha/beta/rc tags.
 release = watcher_version.version_info.release_string()
 # The short X.Y version.
-version = watcher_version.version_info.version_string()
+version = watcher_version.version_string
 # A list of ignored prefixes for module index sorting.
 modindex_common_prefix = ['watcher.']

@@ -15,7 +15,7 @@ Service overview
 ================
 The Watcher system is a collection of services that provides support to
-optimize your IAAS platform. The Watcher service may, depending upon
+optimize your IaaS platform. The Watcher service may, depending upon
 configuration, interact with several other OpenStack services. This includes:
 - the OpenStack Identity service (`keystone`_) for request authentication and
@@ -27,7 +27,7 @@ configuration, interact with several other OpenStack services. This includes:
 The Watcher service includes the following components:
-- ``watcher-decision-engine``: runs audit on part of your IAAS and return an
+- ``watcher-decision-engine``: runs audit on part of your IaaS and return an
   action plan in order to optimize resource placement.
 - ``watcher-api``: A RESTful API that processes application requests by sending
   them to the watcher-decision-engine over RPC.
@@ -349,7 +349,7 @@ so that the watcher service is configured for your needs.
     [nova_client]
     # Version of Nova API to use in novaclient. (string value)
-    #api_version = 2
+    #api_version = 2.53
     api_version = 2.1
 #. Create the Watcher Service database tables::
@@ -366,15 +366,14 @@ Configure Nova compute
 Please check your hypervisor configuration to correctly handle
 `instance migration`_.
-.. _`instance migration`: http://docs.openstack.org/admin-guide/compute-live-migration-usage.html
+.. _`instance migration`: https://docs.openstack.org/nova/latest/admin/migration.html
 Configure Measurements
 ======================
 You can configure and install Ceilometer by following the documentation below :
-#. http://docs.openstack.org/developer/ceilometer
-#. http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-nova.html
+#. https://docs.openstack.org/ceilometer/latest
 The built-in strategy 'basic_consolidation' provided by watcher requires
 "**compute.node.cpu.percent**" and "**cpu_util**" measurements to be collected
@@ -386,13 +385,13 @@ the OpenStack site.
 You can use 'ceilometer meter-list' to list the available meters.
 For more information:
-http://docs.openstack.org/developer/ceilometer/measurements.html
+https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html
 Ceilometer is designed to collect measurements from OpenStack services and from
 other external components. If you would like to add new meters to the currently
 existing ones, you need to follow the documentation below:
-#. http://docs.openstack.org/developer/ceilometer/new_meters.html
+#. https://docs.openstack.org/ceilometer/latest/contributor/new_meters.html#meters
 The Ceilometer collector uses a pluggable storage system, meaning that you can
 pick any database system you prefer.
@@ -430,6 +429,26 @@ to Watcher receives Nova notifications in ``watcher_notifications`` as well.
 * Restart the Nova services.
+Configure Cinder Notifications
+==============================
+Watcher can also consume notifications generated by the Cinder services, in
+order to build or update, in real time, its cluster data model related to
+storage resources. To do so, you have to update the Cinder configuration
+file on controller and volume nodes, in order to let Watcher receive Cinder
+notifications in a dedicated ``watcher_notifications`` channel.
+* In the file ``/etc/cinder/cinder.conf``, update the section
+  ``[oslo_messaging_notifications]``, by redefining the list of topics
+  into which Cinder services will publish events ::
+    [oslo_messaging_notifications]
+    driver = messagingv2
+    topics = notifications,watcher_notifications
+* Restart the Cinder services.
 Workers
 =======
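The `[oslo_messaging_notifications]` fragment shown in the Cinder section parses with the standard INI machinery, so a deployment script can sanity-check it before restarting services. A minimal sketch using stdlib `configparser`, with the file content inlined for illustration rather than read from `/etc/cinder/cinder.conf`:

```python
import configparser

# The cinder.conf fragment from the Configure Cinder Notifications section.
CONF = """
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications,watcher_notifications
"""

parser = configparser.ConfigParser()
parser.read_string(CONF)

# The topics option is a comma-separated list; Watcher listens on the
# dedicated watcher_notifications topic.
topics = [t.strip() for t in
          parser.get("oslo_messaging_notifications", "topics").split(",")]

print(topics)  # ['notifications', 'watcher_notifications']
```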

@@ -24,8 +24,8 @@ Watcher plugin::
 For more detailed instructions, see `Detailed DevStack Instructions`_. Check
 out the `DevStack documentation`_ for more information regarding DevStack.
-.. _PluginModelDocs: http://docs.openstack.org/developer/devstack/plugins.html
-.. _DevStack documentation: http://docs.openstack.org/developer/devstack/
+.. _PluginModelDocs: https://docs.openstack.org/devstack/latest/plugins.html
+.. _DevStack documentation: https://docs.openstack.org/devstack/latest
 Detailed DevStack Instructions
 ==============================

@@ -4,7 +4,7 @@
 https://creativecommons.org/licenses/by/3.0/
-.. _watcher_developement_environment:
+.. _watcher_development_environment:
 =========================================
 Set up a development environment manually
@@ -25,7 +25,7 @@ Prerequisites
 This document assumes you are using Ubuntu or Fedora, and that you have the
 following tools available on your system:
-- Python_ 2.7 and 3.4
+- Python_ 2.7 and 3.5
 - git_
 - setuptools_
 - pip_
@@ -77,13 +77,13 @@ extension, PyPi) cannot satisfy. These dependencies should be installed
 prior to using `pip`, and the installation method may vary depending on
 your platform.
-* Ubuntu 14.04::
+* Ubuntu 16.04::
     $ sudo apt-get install python-dev libssl-dev libmysqlclient-dev libffi-dev
-* Fedora 19+::
+* Fedora 24+::
-    $ sudo yum install openssl-devel libffi-devel mysql-devel
+    $ sudo dnf install redhat-rpm-config gcc python-devel libxml2-devel
 * CentOS 7::

@@ -178,7 +178,7 @@ Here below is how you would proceed to register ``DummyAction`` using pbr_:
     watcher_actions =
         dummy = thirdparty.dummy:DummyAction
-.. _pbr: http://docs.openstack.org/developer/pbr/
+.. _pbr: https://docs.openstack.org/pbr/latest
 Using action plugins
@@ -217,3 +217,11 @@ which is only able to process the Watcher built-in actions. Therefore, you will
 either have to use an existing third-party planner or :ref:`implement another
 planner <implement_planner_plugin>` that will be able to take into account your
 new action plugin.
+Test your new action
+====================
+In order to test your new action via a manual test or a Tempest test, you can
+use the :py:class:`~.Actuator` strategy and pass it one or more actions to
+execute. This way, you can isolate your action to see if it works as expected.

@@ -22,7 +22,7 @@ Pre-requisites
 We assume that you have set up a working Watcher development environment. So if
 this not already the case, you can check out our documentation which explains
 how to set up a :ref:`development environment
-<watcher_developement_environment>`.
+<watcher_development_environment>`.
 .. _development environment:
@@ -34,7 +34,7 @@ First off, we need to create the project structure. To do so, we can use
 generate the skeleton of our project::
     $ virtualenv thirdparty
-    $ source thirdparty/bin/activate
+    $ . thirdparty/bin/activate
     $ pip install cookiecutter
     $ cookiecutter https://github.com/openstack-dev/cookiecutter

@@ -198,7 +198,7 @@ Here below is how to register ``DummyClusterDataModelCollector`` using pbr_:
     watcher_cluster_data_model_collectors =
         dummy = thirdparty.dummy:DummyClusterDataModelCollector
-.. _pbr: http://docs.openstack.org/developer/pbr/
+.. _pbr: http://docs.openstack.org/pbr/latest
 Add new notification endpoints

@@ -127,7 +127,7 @@ To get a better understanding on how to implement a more advanced goal, have
 a look at the
 :py:class:`watcher.decision_engine.goal.goals.ServerConsolidation` class.
-.. _pbr: http://docs.openstack.org/developer/pbr/
+.. _pbr: https://docs.openstack.org/pbr/latest
 .. _implement_efficacy_specification:

@@ -145,7 +145,7 @@ Here below is how you would proceed to register ``DummyPlanner`` using pbr_:
     watcher_planners =
         dummy = third_party.dummy:DummyPlanner
-.. _pbr: http://docs.openstack.org/developer/pbr/
+.. _pbr: https://docs.openstack.org/pbr/latest
 Using planner plugins

@@ -190,7 +190,7 @@ the :py:class:`~.DummyScoringContainer` and the way it is configured in
     watcher_scoring_engine_containers =
         new_scoring_container = thirdparty.new:NewContainer
-.. _pbr: http://docs.openstack.org/developer/pbr/
+.. _pbr: https://docs.openstack.org/pbr/latest/
 Using scoring engine plugins

@@ -219,7 +219,7 @@ Here below is how you would proceed to register ``NewStrategy`` using pbr_:
 To get a better understanding on how to implement a more advanced strategy,
 have a look at the :py:class:`~.BasicConsolidation` class.
-.. _pbr: http://docs.openstack.org/developer/pbr/
+.. _pbr: https://docs.openstack.org/pbr/latest
 Using strategy plugins
 ======================
@@ -264,11 +264,11 @@ requires new metrics not covered by Ceilometer, you can add them through a
 .. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/decision_engine/cluster/history/ceilometer.py
-.. _`Ceilometer developer guide`: http://docs.openstack.org/developer/ceilometer/architecture.html#storing-the-data
+.. _`Ceilometer developer guide`: https://docs.openstack.org/ceilometer/latest/contributor/architecture.html#storing-accessing-the-data
-.. _`Ceilometer`: http://docs.openstack.org/developer/ceilometer/
+.. _`Ceilometer`: https://docs.openstack.org/ceilometer/latest
 .. _`Monasca`: https://github.com/openstack/monasca-api/blob/master/docs/monasca-api-spec.md
-.. _`here`: http://docs.openstack.org/developer/ceilometer/install/dbreco.html#choosing-a-database-backend
+.. _`here`: https://docs.openstack.org/ceilometer/latest/contributor/install/dbreco.html#choosing-a-database-backend
-.. _`Ceilometer plugin`: http://docs.openstack.org/developer/ceilometer/plugins.html
+.. _`Ceilometer plugin`: https://docs.openstack.org/ceilometer/latest/contributor/plugins.html
 .. _`Ceilosca`: https://github.com/openstack/monasca-ceilometer/blob/master/ceilosca/ceilometer/storage/impl_monasca.py
 Read usage metrics using the Watcher Datasource Helper

@@ -41,10 +41,18 @@ you can run the desired test::
     $ workon watcher
     (watcher) $ tox -e py27 -- -r watcher.tests.api
-.. _os-testr: http://docs.openstack.org/developer/os-testr/
+.. _os-testr: https://docs.openstack.org/os-testr/latest
 When you're done, deactivate the virtualenv::
     $ deactivate
-.. include:: ../../../watcher_tempest_plugin/README.rst
+.. _tempest_tests:
+Tempest tests
+=============
+Tempest tests for Watcher has been migrated to the external repo
+`watcher-tempest-plugin`_.
+.. _watcher-tempest-plugin: https://github.com/openstack/watcher-tempest-plugin

@@ -83,7 +83,7 @@ Audit Template
 Availability Zone
 =================
-Please, read `the official OpenStack definition of an Availability Zone <http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs>`_.
+Please, read `the official OpenStack definition of an Availability Zone <https://docs.openstack.org/nova/latest/user/aggregates.html#availability-zones-azs>`_.
 .. _cluster_definition:
@@ -115,15 +115,8 @@ Cluster Data Model (CDM)
 Controller Node
 ===============
-A controller node is a machine that typically runs the following core OpenStack
-services:
-- Keystone: for identity and service management
-- Cinder scheduler: for volumes management
-- Glance controller: for image management
-- Neutron controller: for network management
-- Nova controller: for global compute resources management with services
-  such as nova-scheduler, nova-conductor and nova-network.
+Please, read `the official OpenStack definition of a Controller Node
+<https://docs.openstack.org/nova/latest/install/overview.html#controller>`_.
 In many configurations, Watcher will reside on a controller node even if it
 can potentially be hosted on a dedicated machine.
@@ -134,7 +127,7 @@ Compute node
 ============
 Please, read `the official OpenStack definition of a Compute Node
-<http://docs.openstack.org/ops-guide/arch-compute-nodes.html>`_.
+<https://docs.openstack.org/nova/latest/install/overview.html#compute>`_.
 .. _customer_definition:
@@ -167,7 +160,7 @@ Host Aggregate
 ==============
 Please, read `the official OpenStack definition of a Host Aggregate
-<http://docs.openstack.org/developer/nova/aggregates.html>`_.
+<https://docs.openstack.org/nova/latest/user/aggregates.html>`_.
 .. _instance_definition:
@@ -206,18 +199,18 @@ the Watcher system can act on.
 Here are some examples of
 :ref:`Managed resource types <managed_resource_definition>`:
-- `Nova Host Aggregates <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::HostAggregate>`_
-- `Nova Servers <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server>`_
-- `Cinder Volumes <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Cinder::Volume>`_
-- `Neutron Routers <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Router>`_
-- `Neutron Networks <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Net>`_
-- `Neutron load-balancers <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer>`_
-- `Sahara Hadoop Cluster <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Sahara::Cluster>`_
+- `Nova Host Aggregates <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::HostAggregate>`_
+- `Nova Servers <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Server>`_
+- `Cinder Volumes <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Cinder::Volume>`_
+- `Neutron Routers <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Neutron::Router>`_
+- `Neutron Networks <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Neutron::Net>`_
+- `Neutron load-balancers <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Neutron::LoadBalancer>`_
+- `Sahara Hadoop Cluster <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Sahara::Cluster>`_
 - ...
-It can be any of the `the official list of available resource types defined in
+It can be any of `the official list of available resource types defined in
 OpenStack for HEAT
-<http://docs.openstack.org/developer/heat/template_guide/openstack.html>`_.
+<https://docs.openstack.org/heat/latest/template_guide/openstack.html>`_.
 .. _efficacy_indicator_definition:

@@ -7,7 +7,9 @@ ONGOING --> FAILED: Something failed while executing\nthe Action Plan in the Watcher Applier
 ONGOING --> SUCCEEDED: The Watcher Applier executed\nthe Action Plan successfully
 FAILED --> DELETED : Administrator removes\nAction Plan
 SUCCEEDED --> DELETED : Administrator removes\nAction Plan
-ONGOING --> CANCELLED : Administrator cancels\nAction Plan
+ONGOING --> CANCELLING : Administrator cancels\nAction Plan
+CANCELLING --> CANCELLED : The Watcher Applier cancelled\nthe Action Plan successfully
+CANCELLING --> FAILED : Something failed while cancelling\nthe Action Plan in the Watcher Applier
 RECOMMENDED --> CANCELLED : Administrator cancels\nAction Plan
 RECOMMENDED --> SUPERSEDED : The Watcher Decision Engine supersedes\nAction Plan
 PENDING --> CANCELLED : Administrator cancels\nAction Plan



@@ -339,6 +339,34 @@
style="fill:#ffffff;fill-rule:evenodd;stroke:#000000;stroke-width:1pt" style="fill:#ffffff;fill-rule:evenodd;stroke:#000000;stroke-width:1pt"
transform="matrix(-0.8,0,0,-0.8,4.8,0)" /> transform="matrix(-0.8,0,0,-0.8,4.8,0)" />
</marker> </marker>
<marker
inkscape:stockid="EmptyTriangleInL"
orient="auto"
refY="0"
refX="0"
id="EmptyTriangleInL-6"
style="overflow:visible">
<path
inkscape:connector-curvature="0"
id="path7091-2"
d="m 5.77,0 -8.65,5 0,-10 8.65,5 z"
style="fill:#ffffff;fill-rule:evenodd;stroke:#000000;stroke-width:1pt"
transform="matrix(-0.8,0,0,-0.8,4.8,0)" />
</marker>
<marker
inkscape:stockid="EmptyTriangleInL"
orient="auto"
refY="0"
refX="0"
id="EmptyTriangleInL-12"
style="overflow:visible">
<path
inkscape:connector-curvature="0"
id="path7091-70"
d="m 5.77,0 -8.65,5 0,-10 8.65,5 z"
style="fill:#ffffff;fill-rule:evenodd;stroke:#000000;stroke-width:1pt"
transform="matrix(-0.8,0,0,-0.8,4.8,0)" />
</marker>
</defs> </defs>
<sodipodi:namedview <sodipodi:namedview
inkscape:document-units="mm" inkscape:document-units="mm"
@@ -348,13 +376,13 @@
inkscape:pageopacity="0.0" inkscape:pageopacity="0.0"
inkscape:pageshadow="2" inkscape:pageshadow="2"
inkscape:zoom="1.4142136" inkscape:zoom="1.4142136"
inkscape:cx="261.24633" inkscape:cx="665.19215"
inkscape:cy="108.90512" inkscape:cy="108.90512"
inkscape:current-layer="g5356" inkscape:current-layer="g4866-2-3"
id="namedview4950" id="namedview4950"
showgrid="true" showgrid="true"
inkscape:window-width="1215" inkscape:window-width="1211"
inkscape:window-height="776" inkscape:window-height="698"
inkscape:window-x="65" inkscape:window-x="65"
inkscape:window-y="24" inkscape:window-y="24"
inkscape:window-maximized="1"> inkscape:window-maximized="1">
@@ -381,6 +409,12 @@
<g <g
id="g5356" id="g5356"
transform="translate(-15.096057,-107.16694)"> transform="translate(-15.096057,-107.16694)">
<path
sodipodi:nodetypes="cc"
inkscape:connector-curvature="0"
id="path3284-4-2-3-77-5-9"
d="m 813.66791,753.1462 0,-92.21768"
style="display:inline;fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#EmptyTriangleInL-6)" />
<rect <rect
y="377.8927" y="377.8927"
x="96.920677" x="96.920677"
@@ -875,8 +909,8 @@
sodipodi:nodetypes="cc" sodipodi:nodetypes="cc"
inkscape:connector-curvature="0" inkscape:connector-curvature="0"
id="path5110-9" id="path5110-9"
d="m 472.18905,726.66568 221.85496,0" d="m 472.18905,726.66568 331.45651,0"
style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;display:inline" /> style="display:inline;fill:none;stroke:#000000;stroke-width:1.22230256px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
<path <path
sodipodi:nodetypes="cc" sodipodi:nodetypes="cc"
inkscape:connector-curvature="0" inkscape:connector-curvature="0"
@@ -919,8 +953,8 @@
sodipodi:nodetypes="cc" sodipodi:nodetypes="cc"
inkscape:connector-curvature="0" inkscape:connector-curvature="0"
id="path3284-4-2-3-4-6" id="path3284-4-2-3-4-6"
d="m 540.57926,651.7922 179.16488,0" d="m 543.75943,651.7922 280.63651,0"
style="fill:none;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:1.99999999, 1.99999999;stroke-dashoffset:0;marker-start:url(#TriangleInL);display:inline" /> style="display:inline;fill:none;stroke:#000000;stroke-width:1.25154257;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:2.50308524, 2.50308524;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#TriangleInL)" />
<rect <rect
y="262.01205" y="262.01205"
x="451.89563" x="451.89563"
@@ -1402,6 +1436,48 @@
id="path5110-9-6" id="path5110-9-6"
d="m 192.18905,726.66568 221.85496,0" d="m 192.18905,726.66568 221.85496,0"
style="display:inline;fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" /> style="display:inline;fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
<g
id="g4866-2-3"
style="display:inline"
transform="matrix(1.7775787,0,0,1.7775787,991.15946,596.08131)">
<rect
style="display:inline;fill:#ffffff;stroke:#000000;stroke-width:0.562563;stroke-opacity:1"
id="rect4267-4-7-7-6"
width="49.81258"
height="24.243191"
x="-116.67716"
y="88.977051" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:11.73851585px;line-height:125%;font-family:Sans;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;display:inline;fill:#000000;fill-opacity:1;stroke:none"
x="-91.899979"
y="104.01585"
id="text5037-4-6-9-7"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
x="-91.899979"
y="104.01585"
style="font-size:11.2512598px;text-align:center;text-anchor:middle"
id="tspan5184-3-5-5">cinder</tspan></text>
</g>
<path
sodipodi:nodetypes="cc"
inkscape:connector-curvature="0"
id="path3284-4-2-3-4-9-3"
d="m 824.37881,651.58554 0,102.98987"
style="display:inline;fill:none;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.99999999, 1.99999999;stroke-dashoffset:0;stroke-opacity:1;marker-start:none" />
<circle
r="2.6672709"
cy="693.98395"
cx="823.72699"
id="path13407-89-5"
style="color:#000000;display:inline;overflow:visible;visibility:visible;fill:#ececec;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;enable-background:accumulate" />
<path
sodipodi:nodetypes="cc"
inkscape:connector-curvature="0"
id="path3284-4-2-3-7-9"
d="m 804.16781,752.35205 0,-26.2061"
style="display:inline;fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#EmptyTriangleInL-12)" />
</g> </g>
</g> </g>
</svg> </svg>



@@ -39,12 +39,12 @@
Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service. Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.
* Watcher interacts with other OpenStack projects via project clients, in order to instantiate these * Watcher interacts with other OpenStack projects via project clients, in order to instantiate these
clients, Watcher requests new session from Identity service. In the `[watcher_client_auth]` section, clients, Watcher requests new session from Identity service. In the `[watcher_clients_auth]` section,
configure the identity service access to interact with other OpenStack project clients. configure the identity service access to interact with other OpenStack project clients.
.. code-block:: ini .. code-block:: ini
[watcher_client_auth] [watcher_clients_auth]
... ...
auth_type = password auth_type = password
auth_url = http://controller:35357 auth_url = http://controller:35357
@@ -56,6 +56,16 @@
Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service. Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.
* In the `[api]` section, configure host option.
.. code-block:: ini
[api]
...
host = controller
Replace controller with the IP address of the management network interface on your controller node, typically 10.0.0.11 for the first node in the example architecture.
* In the `[oslo_messaging_notifications]` section, configure the messaging driver. * In the `[oslo_messaging_notifications]` section, configure the messaging driver.
.. code-block:: ini .. code-block:: ini
@@ -68,4 +78,4 @@
.. code-block:: ini .. code-block:: ini
su -s /bin/sh -c "watcher-db-manage" watcher su -s /bin/sh -c "watcher-db-manage --config-file /etc/watcher/watcher.conf upgrade"


@@ -36,4 +36,4 @@ https://docs.openstack.org/watcher/latest/glossary.html
This chapter assumes a working setup of OpenStack following the This chapter assumes a working setup of OpenStack following the
`OpenStack Installation Tutorial `OpenStack Installation Tutorial
<https://docs.openstack.org/project-install-guide/ocata/>`_. <https://docs.openstack.org/pike/install/>`_.


@@ -1,35 +0,0 @@
.. _install-obs:
Install and configure for openSUSE and SUSE Linux Enterprise
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Infrastructure
Optimization service for openSUSE Leap 42.1 and
SUSE Linux Enterprise Server 12 SP1.
.. include:: common_prerequisites.rst
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# zypper --quiet --non-interactive install
.. include:: common_configure.rst
Finalize installation
---------------------
Start the Infrastructure Optimization services and configure them to start when
the system boots:
.. code-block:: console
# systemctl enable openstack-watcher-api.service
# systemctl start openstack-watcher-api.service


@@ -15,6 +15,5 @@ Note that installation and configuration vary by distribution.
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2
install-obs.rst
install-rdo.rst install-rdo.rst
install-ubuntu.rst install-ubuntu.rst


@@ -6,4 +6,4 @@ Next steps
Your OpenStack environment now includes the watcher service. Your OpenStack environment now includes the watcher service.
To add additional services, see To add additional services, see
https://docs.openstack.org/project-install-guide/ocata/. https://docs.openstack.org/pike/install/.


@@ -5,7 +5,7 @@ Basic Offline Server Consolidation
Synopsis Synopsis
-------- --------
**display name**: ``basic`` **display name**: ``Basic offline consolidation``
**goal**: ``server_consolidation`` **goal**: ``server_consolidation``
@@ -26,7 +26,7 @@ metric service name plugins comment
``cpu_util`` ceilometer_ none ``cpu_util`` ceilometer_ none
============================ ============ ======= ======= ============================ ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute .. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
Cluster data model Cluster data model
****************** ******************


@@ -5,7 +5,7 @@ Outlet Temperature Based Strategy
Synopsis Synopsis
-------- --------
**display name**: ``outlet_temperature`` **display name**: ``Outlet temperature based strategy``
**goal**: ``thermal_optimization`` **goal**: ``thermal_optimization``
@@ -33,7 +33,7 @@ metric service name plugins comment
``hardware.ipmi.node.outlet_temperature`` ceilometer_ IPMI ``hardware.ipmi.node.outlet_temperature`` ceilometer_ IPMI
========================================= ============ ======= ======= ========================================= ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#ipmi-based-meters .. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#ipmi-based-meters
Cluster data model Cluster data model
****************** ******************


@@ -0,0 +1,100 @@
======================
Saving Energy Strategy
======================
Synopsis
--------
**display name**: ``Saving Energy Strategy``
**goal**: ``saving_energy``
.. watcher-term:: watcher.decision_engine.strategy.strategies.saving_energy
Requirements
------------
This feature uses Ironic to perform the power on/off actions, so it
requires the Ironic component to be configured and the compute nodes
to be managed by Ironic.
Ironic installation: https://docs.openstack.org/ironic/latest/install/index.html
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``change_node_power_state``
- .. watcher-term:: watcher.applier.actions.change_node_power_state.ChangeNodePowerState
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
====================== ====== ======= ======================================
parameter type default Value description
====================== ====== ======= ======================================
``free_used_percent`` Number 10.0 a rational number describing the
quotient of min_free_hosts_num/nodes_with_VMs_num
``min_free_hosts_num`` Int 1 an integer describing the minimum
number of free compute nodes
====================== ====== ======= ======================================
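The interaction of these two parameters can be illustrated with a small sketch. This is an illustrative interpretation of the parameter descriptions above, not Watcher's actual implementation; the function name and the rounding behavior are assumptions:

```python
import math

def free_nodes_target(nodes_with_vms_num, free_used_percent=10.0,
                      min_free_hosts_num=1):
    """How many powered-on but VM-free compute nodes to keep.

    free_used_percent expresses min_free_hosts_num/nodes_with_VMs_num
    as a percentage; min_free_hosts_num acts as a lower bound.
    """
    from_percent = math.ceil(nodes_with_vms_num * free_used_percent / 100.0)
    return max(int(from_percent), min_free_hosts_num)

print(free_nodes_target(30))  # with the defaults: max(ceil(3.0), 1) = 3
```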
Efficacy Indicator
------------------
The energy saving strategy's efficacy indicator is unclassified.
https://github.com/openstack/watcher/blob/master/watcher/decision_engine/goal/goals.py#L215-L218
Algorithm
---------
For more information on the Energy Saving Strategy, please refer to: http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html
How to use it ?
---------------
Step 1: Add compute node info into Ironic node management
.. code-block:: shell
$ ironic node-create -d pxe_ipmitool -i ipmi_address=10.43.200.184 \
ipmi_username=root -i ipmi_password=nomoresecret -e compute_node_id=3
Step 2: Create an audit to run the optimization
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 saving_energy --strategy saving_energy
$ openstack optimize audit create -a at1
External Links
--------------
*Spec URL*
http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html


@@ -33,7 +33,7 @@ power ceilometer_ kwapi_ one point every 60s
======================= ============ ======= ======= ======================= ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute .. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
.. _monasca: https://github.com/openstack/monasca-agent/blob/master/docs/Libvirt.md .. _monasca: https://github.com/openstack/monasca-agent/blob/master/docs/Libvirt.md
.. _kwapi: https://kwapi.readthedocs.io/en/latest/index.html .. _kwapi: https://kwapi.readthedocs.io/en/latest/index.html


@@ -5,7 +5,7 @@ Uniform Airflow Migration Strategy
Synopsis Synopsis
-------- --------
**display name**: ``uniform_airflow`` **display name**: ``Uniform airflow migration strategy``
**goal**: ``airflow_optimization`` **goal**: ``airflow_optimization``


@@ -5,7 +5,7 @@ VM Workload Consolidation Strategy
Synopsis Synopsis
-------- --------
**display name**: ``vm_workload_consolidation`` **display name**: ``VM Workload Consolidation Strategy``
**goal**: ``vm_consolidation`` **goal**: ``vm_consolidation``
@@ -22,7 +22,7 @@ The *vm_workload_consolidation* strategy requires the following metrics:
============================ ============ ======= ======= ============================ ============ ======= =======
metric service name plugins comment metric service name plugins comment
============================ ============ ======= ======= ============================ ============ ======= =======
``memory`` ceilometer_ none ``memory`` ceilometer_ none
``disk.root.size`` ceilometer_ none ``disk.root.size`` ceilometer_ none
============================ ============ ======= ======= ============================ ============ ======= =======
@@ -32,11 +32,11 @@ the strategy if available:
============================ ============ ======= ======= ============================ ============ ======= =======
metric service name plugins comment metric service name plugins comment
============================ ============ ======= ======= ============================ ============ ======= =======
``memory.usage`` ceilometer_ none ``memory.resident`` ceilometer_ none
``cpu_util`` ceilometer_ none ``cpu_util`` ceilometer_ none
============================ ============ ======= ======= ============================ ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute .. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
Cluster data model Cluster data model
****************** ******************


@@ -5,7 +5,7 @@ Watcher Overload standard deviation algorithm
Synopsis Synopsis
-------- --------
**display name**: ``workload_stabilization`` **display name**: ``Workload stabilization``
**goal**: ``workload_balancing`` **goal**: ``workload_balancing``
@@ -28,7 +28,7 @@ metric service name plugins comment
``memory.resident`` ceilometer_ none ``memory.resident`` ceilometer_ none
============================ ============ ======= ======= ============================ ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute .. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
.. _SNMP: http://docs.openstack.org/admin-guide/telemetry-measurements.html .. _SNMP: http://docs.openstack.org/admin-guide/telemetry-measurements.html
Cluster data model Cluster data model
@@ -100,7 +100,7 @@ parameter type default Value description
into which the samples are into which the samples are
grouped for aggregation. grouped for aggregation.
Watcher uses only the last Watcher uses only the last
period of all recieved ones. period of all received ones.
==================== ====== ===================== ============================= ==================== ====== ===================== =============================
.. |metrics| replace:: ["cpu_util", "memory.resident"] .. |metrics| replace:: ["cpu_util", "memory.resident"]


@@ -5,7 +5,7 @@ Workload Balance Migration Strategy
Synopsis Synopsis
-------- --------
**display name**: ``workload_balance`` **display name**: ``Workload Balance Migration Strategy``
**goal**: ``workload_balancing`` **goal**: ``workload_balancing``
@@ -25,9 +25,10 @@ The *workload_balance* strategy requires the following metrics:
metric service name plugins comment metric service name plugins comment
======================= ============ ======= ======= ======================= ============ ======= =======
``cpu_util`` ceilometer_ none ``cpu_util`` ceilometer_ none
``memory.resident`` ceilometer_ none
======================= ============ ======= ======= ======================= ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute .. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
Cluster data model Cluster data model
@@ -66,6 +67,9 @@ Strategy parameters are:
============== ====== ============= ==================================== ============== ====== ============= ====================================
parameter type default Value description parameter type default Value description
============== ====== ============= ==================================== ============== ====== ============= ====================================
``metrics`` String 'cpu_util' Workload balance based on cpu or ram
utilization. choice: ['cpu_util',
'memory.resident']
``threshold`` Number 25.0 Workload threshold for migration ``threshold`` Number 25.0 Workload threshold for migration
``period`` Number 300 Aggregate time period of ceilometer ``period`` Number 300 Aggregate time period of ceilometer
============== ====== ============= ==================================== ============== ====== ============= ====================================
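A much simplified sketch of how such a threshold-based selection works. This is illustrative only, not Watcher's actual algorithm; the host names and the selection rule are assumptions:

```python
def hosts_over_threshold(host_workloads, threshold=25.0):
    """Hosts whose chosen metric (e.g. cpu_util or memory.resident,
    as a percentage) exceeds the migration threshold, most loaded
    host first."""
    over = {host: load for host, load in host_workloads.items()
            if load > threshold}
    return sorted(over, key=over.get, reverse=True)

# Only node2 and node3 exceed the default 25.0 threshold.
print(hosts_over_threshold({"node1": 10.0, "node2": 40.0, "node3": 26.5}))
# ['node2', 'node3']
```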
@@ -90,7 +94,7 @@ How to use it ?
at1 workload_balancing --strategy workload_balance at1 workload_balancing --strategy workload_balance
$ openstack optimize audit create -a at1 -p threshold=26.0 \ $ openstack optimize audit create -a at1 -p threshold=26.0 \
-p period=310 -p period=310 -p metrics=cpu_util
External Links External Links
-------------- --------------


@@ -39,10 +39,10 @@ named ``watcher``, or by using the `OpenStack CLI`_ ``openstack``.
If you want to deploy Watcher in Horizon, please refer to the `Watcher Horizon If you want to deploy Watcher in Horizon, please refer to the `Watcher Horizon
plugin installation guide`_. plugin installation guide`_.
.. _`installation guide`: http://docs.openstack.org/developer/python-watcherclient .. _`installation guide`: https://docs.openstack.org/python-watcherclient/latest
.. _`Watcher Horizon plugin installation guide`: http://docs.openstack.org/developer/watcher-dashboard/deploy/installation.html .. _`Watcher Horizon plugin installation guide`: https://docs.openstack.org/watcher-dashboard/latest/install/installation.html
.. _`OpenStack CLI`: http://docs.openstack.org/developer/python-openstackclient/man/openstack.html .. _`OpenStack CLI`: https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html
.. _`Watcher CLI`: http://docs.openstack.org/developer/python-watcherclient/index.html .. _`Watcher CLI`: https://docs.openstack.org/python-watcherclient/latest/cli/index.html
Seeing what the Watcher CLI can do ? Seeing what the Watcher CLI can do ?
------------------------------------ ------------------------------------


@@ -27,7 +27,7 @@ Structure
Useful links Useful links
------------ ------------
* How to install: http://docs.openstack.org/developer/rally/install.html * How to install: https://docs.openstack.org/rally/latest/install_and_upgrade/install.html
* How to set Rally up and launch your first scenario: https://rally.readthedocs.io/en/latest/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html * How to set Rally up and launch your first scenario: https://rally.readthedocs.io/en/latest/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html


@@ -0,0 +1,3 @@
---
features:
- Add notifications related to Action object.


@@ -0,0 +1,6 @@
---
features:
- Added the functionality to filter out instances which have metadata field
'optimize' set to False. For now, this is only available for the
basic_consolidation strategy (if "check_optimize_metadata" configuration
option is enabled).


@@ -0,0 +1,4 @@
---
features:
- Added binding between apscheduler job and Watcher decision engine service.
It will allow HA support to be provided in the future.


@@ -0,0 +1,8 @@
---
features:
- Enhanced the vm_workload_consolidation strategy by using the
  'memory.resident' metric in place of 'memory.usage': memory.usage
  shows the memory usage inside the guest OS, while memory.resident
  represents the volume of RAM used by the instance on the host
  machine.


@@ -0,0 +1,7 @@
---
features:
- Watcher continuous audits can now be created with a cron interval.
  For example, the optional argument '--interval "\*/5 \* \* \* \*"'
  launches an audit every 5 minutes. These jobs are executed on a
  best-effort basis, so we recommend a minimum cron interval of at
  least one minute.
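The "\*/5 \* \* \* \*" semantics (fire at each wall-clock minute divisible by 5) can be sketched in plain Python. This is an illustrative simplification, not how Watcher schedules audits (Watcher relies on the croniter and apscheduler libraries listed in its requirements):

```python
from datetime import datetime, timedelta

def next_five_minute_run(now):
    """Next firing time of a "*/5 * * * *" cron schedule: the next
    wall-clock minute divisible by 5, after `now` (seconds zeroed)."""
    base = now.replace(second=0, microsecond=0)
    return base + timedelta(minutes=5 - base.minute % 5)

print(next_five_minute_run(datetime(2017, 10, 17, 14, 2, 30)))
# 2017-10-17 14:05:00
```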


@@ -0,0 +1,4 @@
---
features:
- Added a description property for dynamic actions. Administrators can
  see detailed information about any specified action.


@@ -0,0 +1,4 @@
---
features:
- Added Gnocchi support as a data source for metrics. Administrators
  can change the data source for each strategy using the config file.


@@ -0,0 +1,3 @@
---
features:
- Switched from voluptuous to JSONSchema for validating Actions.


@@ -0,0 +1,5 @@
---
features:
- Added a strategy to identify and migrate a noisy neighbor: a
  low-priority VM that negatively affects the performance of a
  high-priority VM by over-utilizing the Last Level Cache.


@@ -0,0 +1,3 @@
---
features:
- Add notifications related to Service object.


@@ -0,0 +1,4 @@
---
features:
- |
Added volume migrate action


@@ -0,0 +1,7 @@
---
features:
- The existing workload_balance strategy was based on VM CPU workload.
  This feature improves the strategy: through the input parameter
  "metrics", it decides whether to migrate a VM based on CPU or memory
  utilization.


@@ -22,7 +22,8 @@
# All configuration values have a default; values that are commented out # All configuration values have a default; values that are commented out
# serve to show the default. # serve to show the default.
import sys, os import os
import sys
from watcher import version as watcher_version from watcher import version as watcher_version
# If extensions (or modules to document with autodoc) are in another directory, # If extensions (or modules to document with autodoc) are in another directory,
@@ -63,7 +64,7 @@ copyright = u'2016, Watcher developers'
# The short X.Y version. # The short X.Y version.
version = watcher_version.version_info.release_string() version = watcher_version.version_info.release_string()
# The full version, including alpha/beta/rc tags. # The full version, including alpha/beta/rc tags.
release = watcher_version.version_info.version_string() release = watcher_version.version_string
# The language for content autogenerated by Sphinx. Refer to documentation # The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages. # for a list of supported languages.


@@ -21,6 +21,7 @@ Contents:
:maxdepth: 1 :maxdepth: 1
unreleased unreleased
pike
ocata ocata
newton newton


@@ -0,0 +1,6 @@
===================================
Pike Series Release Notes
===================================
.. release-notes::
:branch: stable/pike


@@ -2,48 +2,48 @@
# of appearance. Changing the order has an impact on the overall integration # of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later. # process, which may cause wedges in the gate later.
apscheduler # MIT License apscheduler>=3.0.5 # MIT License
enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD enum34>=1.0.4;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.1 # BSD jsonpatch>=1.16 # BSD
keystoneauth1>=3.0.1 # Apache-2.0 keystoneauth1>=3.2.0 # Apache-2.0
jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT jsonschema<3.0.0,>=2.6.0 # MIT
keystonemiddleware>=4.12.0 # Apache-2.0 keystonemiddleware>=4.17.0 # Apache-2.0
lxml!=3.7.0,>=2.3 # BSD lxml!=3.7.0,>=3.4.1 # BSD
croniter>=0.3.4 # MIT License croniter>=0.3.4 # MIT License
oslo.concurrency>=3.8.0 # Apache-2.0 oslo.concurrency>=3.20.0 # Apache-2.0
oslo.cache>=1.5.0 # Apache-2.0 oslo.cache>=1.26.0 # Apache-2.0
oslo.config!=4.3.0,!=4.4.0,>=4.0.0 # Apache-2.0 oslo.config>=4.6.0 # Apache-2.0
oslo.context>=2.14.0 # Apache-2.0 oslo.context!=2.19.1,>=2.14.0 # Apache-2.0
oslo.db>=4.24.0 # Apache-2.0 oslo.db>=4.27.0 # Apache-2.0
oslo.i18n!=3.15.2,>=2.1.0 # Apache-2.0 oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.22.0 # Apache-2.0 oslo.log>=3.30.0 # Apache-2.0
oslo.messaging!=5.25.0,>=5.24.2 # Apache-2.0 oslo.messaging>=5.29.0 # Apache-2.0
oslo.policy>=1.23.0 # Apache-2.0 oslo.policy>=1.23.0 # Apache-2.0
oslo.reports>=0.6.0 # Apache-2.0 oslo.reports>=1.18.0 # Apache-2.0
oslo.serialization!=2.19.1,>=1.10.0 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0 oslo.service>=1.24.0 # Apache-2.0
oslo.utils>=3.20.0 # Apache-2.0 oslo.utils>=3.28.0 # Apache-2.0
oslo.versionedobjects>=1.17.0 # Apache-2.0 oslo.versionedobjects>=1.28.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT PasteDeploy>=1.5.0 # MIT
pbr!=2.1.0,>=2.0.0 # Apache-2.0 pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
PrettyTable<0.8,>=0.7.1 # BSD PrettyTable<0.8,>=0.7.1 # BSD
voluptuous>=0.8.9 # BSD License voluptuous>=0.8.9 # BSD License
gnocchiclient>=2.7.0 # Apache-2.0 gnocchiclient>=3.3.1 # Apache-2.0
python-ceilometerclient>=2.5.0 # Apache-2.0 python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient>=3.0.0 # Apache-2.0 python-cinderclient>=3.2.0 # Apache-2.0
python-glanceclient>=2.7.0 # Apache-2.0 python-glanceclient>=2.8.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0 python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.1.0 # Apache-2.0 python-monascaclient>=1.7.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0 python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=9.0.0 # Apache-2.0 python-novaclient>=9.1.0 # Apache-2.0
python-openstackclient!=3.10.0,>=3.3.0 # Apache-2.0 python-openstackclient>=3.12.0 # Apache-2.0
python-ironicclient>=1.14.0 # Apache-2.0 python-ironicclient>=1.14.0 # Apache-2.0
six>=1.9.0 # MIT six>=1.9.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0 stevedore>=1.20.0 # Apache-2.0
taskflow>=2.7.0 # Apache-2.0 taskflow>=2.7.0 # Apache-2.0
WebOb>=1.7.1 # MIT WebOb>=1.7.1 # MIT
WSME>=0.8 # MIT WSME>=0.8.0 # MIT
networkx>=1.10 # BSD networkx<2.0,>=1.10 # BSD


@@ -21,7 +21,6 @@ classifier =
[files] [files]
packages = packages =
watcher watcher
watcher_tempest_plugin
data_files = data_files =
etc/ = etc/* etc/ = etc/*
@@ -40,9 +39,6 @@ console_scripts =
watcher-applier = watcher.cmd.applier:main watcher-applier = watcher.cmd.applier:main
watcher-sync = watcher.cmd.sync:main watcher-sync = watcher.cmd.sync:main
tempest.test_plugins =
watcher_tests = watcher_tempest_plugin.plugin:WatcherTempestPlugin
watcher.database.migration_backend = watcher.database.migration_backend =
sqlalchemy = watcher.db.sqlalchemy.migration sqlalchemy = watcher.db.sqlalchemy.migration
@@ -54,6 +50,7 @@ watcher_goals =
workload_balancing = watcher.decision_engine.goal.goals:WorkloadBalancing workload_balancing = watcher.decision_engine.goal.goals:WorkloadBalancing
airflow_optimization = watcher.decision_engine.goal.goals:AirflowOptimization airflow_optimization = watcher.decision_engine.goal.goals:AirflowOptimization
noisy_neighbor = watcher.decision_engine.goal.goals:NoisyNeighborOptimization noisy_neighbor = watcher.decision_engine.goal.goals:NoisyNeighborOptimization
saving_energy = watcher.decision_engine.goal.goals:SavingEnergy
watcher_scoring_engines =
    dummy_scorer = watcher.decision_engine.scoring.dummy_scorer:DummyScorer
@@ -65,8 +62,10 @@ watcher_strategies =
    dummy = watcher.decision_engine.strategy.strategies.dummy_strategy:DummyStrategy
    dummy_with_scorer = watcher.decision_engine.strategy.strategies.dummy_with_scorer:DummyWithScorer
    dummy_with_resize = watcher.decision_engine.strategy.strategies.dummy_with_resize:DummyWithResize
+    actuator = watcher.decision_engine.strategy.strategies.actuation:Actuator
    basic = watcher.decision_engine.strategy.strategies.basic_consolidation:BasicConsolidation
    outlet_temperature = watcher.decision_engine.strategy.strategies.outlet_temp_control:OutletTempControl
+    saving_energy = watcher.decision_engine.strategy.strategies.saving_energy:SavingEnergy
    vm_workload_consolidation = watcher.decision_engine.strategy.strategies.vm_workload_consolidation:VMWorkloadConsolidation
    workload_stabilization = watcher.decision_engine.strategy.strategies.workload_stabilization:WorkloadStabilization
    workload_balance = watcher.decision_engine.strategy.strategies.workload_balance:WorkloadBalance
@@ -80,6 +79,7 @@ watcher_actions =
    change_nova_service_state = watcher.applier.actions.change_nova_service_state:ChangeNovaServiceState
    resize = watcher.applier.actions.resize:Resize
    change_node_power_state = watcher.applier.actions.change_node_power_state:ChangeNodePowerState
+    volume_migrate = watcher.applier.actions.volume_migration:VolumeMigrate
watcher_workflow_engines =
    taskflow = watcher.applier.workflow_engine.default:DefaultWorkFlowEngine
@@ -94,13 +94,11 @@ watcher_cluster_data_model_collectors =
[pbr]
-warnerrors = true
autodoc_index_modules = true
autodoc_exclude_modules =
    watcher.db.sqlalchemy.alembic.env
    watcher.db.sqlalchemy.alembic.versions.*
    watcher.tests.*
-    watcher_tempest_plugin.*
    watcher.doc


@@ -3,25 +3,24 @@
# process, which may cause wedges in the gate later.
coverage!=4.4,>=4.0 # Apache-2.0
-doc8 # Apache-2.0
+doc8>=0.6.0 # Apache-2.0
freezegun>=0.3.6 # Apache-2.0
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
-mock>=2.0 # BSD
+mock>=2.0.0 # BSD
oslotest>=1.10.0 # Apache-2.0
-os-testr>=0.8.0 # Apache-2.0
+os-testr>=1.0.0 # Apache-2.0
-python-subunit>=0.0.18 # Apache-2.0/BSD
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
# Doc requirements
-openstackdocstheme>=1.11.0 # Apache-2.0
+openstackdocstheme>=1.17.0 # Apache-2.0
sphinx>=1.6.2 # BSD
-sphinxcontrib-pecanwsme>=0.8 # Apache-2.0
+sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
# releasenotes
-reno!=2.3.1,>=1.8.0 # Apache-2.0
+reno>=2.5.0 # Apache-2.0
# bandit
bandit>=1.1.0 # Apache-2.0


@@ -118,6 +118,9 @@ class Action(base.APIBase):
    action_type = wtypes.text
    """Action type"""

+    description = wtypes.text
+    """Action description"""
+
    input_parameters = types.jsontype
    """One or more key/value pairs """
@@ -141,6 +144,7 @@ class Action(base.APIBase):
            setattr(self, field, kwargs.get(field, wtypes.Unset))
        self.fields.append('action_plan_id')
+        self.fields.append('description')
        setattr(self, 'action_plan_uuid', kwargs.get('action_plan_id',
                                                     wtypes.Unset))
@@ -162,6 +166,14 @@ class Action(base.APIBase):
    @classmethod
    def convert_with_links(cls, action, expand=True):
        action = Action(**action.as_dict())
+        try:
+            obj_action_desc = objects.ActionDescription.get_by_type(
+                pecan.request.context, action.action_type)
+            description = obj_action_desc.description
+        except exception.ActionDescriptionNotFound:
+            description = ""
+        setattr(action, 'description', description)
        return cls._convert_with_links(action, pecan.request.host_url, expand)

    @classmethod


@@ -448,7 +448,7 @@ class AuditTemplatesController(rest.RestController):
                                      sort_key, sort_dir, expand=False,
                                      resource_url=None):
        api_utils.validate_search_filters(
-            filters, list(objects.audit_template.AuditTemplate.fields.keys()) +
+            filters, list(objects.audit_template.AuditTemplate.fields) +
            ["goal_uuid", "goal_name", "strategy_uuid", "strategy_name"])
        limit = api_utils.validate_limit(limit)
        api_utils.validate_sort_dir(sort_dir)


@@ -170,7 +170,7 @@ class GoalsController(rest.RestController):
        limit = api_utils.validate_limit(limit)
        api_utils.validate_sort_dir(sort_dir)
-        sort_db_key = (sort_key if sort_key in objects.Goal.fields.keys()
+        sort_db_key = (sort_key if sort_key in objects.Goal.fields
                       else None)
        marker_obj = None


@@ -104,7 +104,7 @@ class Service(base.APIBase):
    def __init__(self, **kwargs):
        super(Service, self).__init__()
-        fields = list(objects.Service.fields.keys()) + ['status']
+        fields = list(objects.Service.fields) + ['status']
        self.fields = []
        for field in fields:
            self.fields.append(field)
@@ -194,7 +194,7 @@ class ServicesController(rest.RestController):
        limit = api_utils.validate_limit(limit)
        api_utils.validate_sort_dir(sort_dir)
-        sort_db_key = (sort_key if sort_key in objects.Service.fields.keys()
+        sort_db_key = (sort_key if sort_key in objects.Service.fields
                       else None)
        marker_obj = None


@@ -210,12 +210,12 @@ class StrategiesController(rest.RestController):
    def _get_strategies_collection(self, filters, marker, limit, sort_key,
                                   sort_dir, expand=False, resource_url=None):
        api_utils.validate_search_filters(
-            filters, list(objects.strategy.Strategy.fields.keys()) +
+            filters, list(objects.strategy.Strategy.fields) +
            ["goal_uuid", "goal_name"])
        limit = api_utils.validate_limit(limit)
        api_utils.validate_sort_dir(sort_dir)
-        sort_db_key = (sort_key if sort_key in objects.Strategy.fields.keys()
+        sort_db_key = (sort_key if sort_key in objects.Strategy.fields
                       else None)
        marker_obj = None


@@ -57,7 +57,7 @@ def validate_sort_dir(sort_dir):
def validate_search_filters(filters, allowed_fields):
    # Very lightweight validation for now
    # todo: improve this (e.g. https://www.parse.com/docs/rest/guide/#queries)
-    for filter_name in filters.keys():
+    for filter_name in filters:
        if filter_name not in allowed_fields:
            raise wsme.exc.ClientSideError(
                _("Invalid filter: %s") % filter_name)
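These `.keys()` cleanups across the API controllers rely on the fact that a Python dict supports membership tests and iteration directly. A stand-alone sketch of the filter check above, with made-up filter names:

```python
# A dict iterates over its keys directly, so ``.keys()`` is redundant.
# Filter names below are illustrative, not Watcher's real fields.
filters = {'goal_uuid': 'x', 'bogus': 'y'}
allowed_fields = ['goal_uuid', 'goal_name']

# ``for name in filters`` behaves exactly like ``for name in filters.keys()``
invalid = [name for name in filters if name not in allowed_fields]
```

The same equivalence covers the `sort_key in objects.Goal.fields` checks in the neighbouring hunks.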


@@ -52,8 +52,8 @@ class AuthTokenMiddleware(auth_token.AuthProtocol):
        # The information whether the API call is being performed against the
        # public API is required for some other components. Saving it to the
        # WSGI environment is reasonable thereby.
-        env['is_public_api'] = any(map(lambda pattern: re.match(pattern, path),
-                                       self.public_api_routes))
+        env['is_public_api'] = any(re.match(pattern, path)
+                                   for pattern in self.public_api_routes)

        if env['is_public_api']:
            return self._app(env, start_response)
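The rewritten check can be exercised in isolation; the route patterns below are illustrative, not Watcher's actual `public_api_routes`:

```python
import re

# Generator-expression form of the public-API check; any() short-circuits
# on the first pattern that matches.
public_api_routes = [r'^/$', r'^/v1/?$']  # assumed example routes

def is_public(path):
    return any(re.match(pattern, path) for pattern in public_api_routes)
```

Compared with `map()` plus a `lambda`, the generator expression avoids building the intermediate callable and reads as plain iteration.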


@@ -42,7 +42,7 @@ class APISchedulingService(scheduling.BackgroundSchedulerService):
        services = objects.service.Service.list(context)
        for service in services:
            result = self.get_service_status(context, service.id)
-            if service.id not in self.services_status.keys():
+            if service.id not in self.services_status:
                self.services_status[service.id] = result
                continue
            if self.services_status[service.id] != result:


@@ -54,6 +54,7 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
            applier.execute(self.action_plan_uuid)
            action_plan.state = objects.action_plan.State.SUCCEEDED
+            action_plan.save()
            notifications.action_plan.send_action_notification(
                self.ctx, action_plan,
                action=fields.NotificationAction.EXECUTION,
@@ -63,17 +64,32 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
            LOG.exception(e)
            action_plan.state = objects.action_plan.State.CANCELLED
            self._update_action_from_pending_to_cancelled()
+            action_plan.save()
+            notifications.action_plan.send_cancel_notification(
+                self.ctx, action_plan,
+                action=fields.NotificationAction.CANCEL,
+                phase=fields.NotificationPhase.END)
        except Exception as e:
            LOG.exception(e)
-            action_plan.state = objects.action_plan.State.FAILED
-            notifications.action_plan.send_action_notification(
-                self.ctx, action_plan,
-                action=fields.NotificationAction.EXECUTION,
-                priority=fields.NotificationPriority.ERROR,
-                phase=fields.NotificationPhase.ERROR)
-        finally:
-            action_plan.save()
+            action_plan = objects.ActionPlan.get_by_uuid(
+                self.ctx, self.action_plan_uuid, eager=True)
+            if action_plan.state == objects.action_plan.State.CANCELLING:
+                action_plan.state = objects.action_plan.State.FAILED
+                action_plan.save()
+                notifications.action_plan.send_cancel_notification(
+                    self.ctx, action_plan,
+                    action=fields.NotificationAction.CANCEL,
+                    priority=fields.NotificationPriority.ERROR,
+                    phase=fields.NotificationPhase.ERROR)
+            else:
+                action_plan.state = objects.action_plan.State.FAILED
+                action_plan.save()
+                notifications.action_plan.send_action_notification(
+                    self.ctx, action_plan,
+                    action=fields.NotificationAction.EXECUTION,
+                    priority=fields.NotificationPriority.ERROR,
+                    phase=fields.NotificationPhase.ERROR)

    def _update_action_from_pending_to_cancelled(self):
        filters = {'action_plan_uuid': self.action_plan_uuid,


@@ -18,6 +18,7 @@
#
import enum
+import time

from watcher._i18n import _
from watcher.applier.actions import base
@@ -87,25 +88,39 @@ class ChangeNodePowerState(base.BaseAction):
            target_state = NodeState.POWERON.value
        return self._node_manage_power(target_state)

-    def _node_manage_power(self, state):
+    def _node_manage_power(self, state, retry=60):
        if state is None:
            raise exception.IllegalArgumentException(
                message=_("The target state is not defined"))
-        result = False
        ironic_client = self.osc.ironic()
        nova_client = self.osc.nova()
+        current_state = ironic_client.node.get(self.node_uuid).power_state
+        # power state is 'power on' or 'power off'; if the node is already
+        # in the target state, just return True
+        if state in current_state:
+            return True
        if state == NodeState.POWEROFF.value:
            node_info = ironic_client.node.get(self.node_uuid).to_dict()
            compute_node_id = node_info['extra']['compute_node_id']
            compute_node = nova_client.hypervisors.get(compute_node_id)
            compute_node = compute_node.to_dict()
            if (compute_node['running_vms'] == 0):
-                result = ironic_client.node.set_power_state(
-                    self.node_uuid, state)
+                ironic_client.node.set_power_state(
+                    self.node_uuid, state)
        else:
-            result = ironic_client.node.set_power_state(self.node_uuid, state)
-        return result
+            ironic_client.node.set_power_state(self.node_uuid, state)
+
+        ironic_node = ironic_client.node.get(self.node_uuid)
+        while ironic_node.power_state == current_state and retry:
+            time.sleep(10)
+            retry -= 1
+            ironic_node = ironic_client.node.get(self.node_uuid)
+        if retry > 0:
+            return True
+        else:
+            return False

    def pre_condition(self):
        pass
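The new polling loop in `_node_manage_power` can be sketched on its own. The fake client below stands in for python-ironicclient, and the sleep is dropped so the sketch runs instantly:

```python
import itertools

class FakeNode:
    """Stand-in for an Ironic node resource (illustrative only)."""
    def __init__(self, power_state):
        self.power_state = power_state

class FakeNodeAPI:
    """Yields 'power on' twice, then 'power off', mimicking a node that
    finishes its power transition on the third poll."""
    def __init__(self):
        self._states = itertools.chain(['power on', 'power on'],
                                       itertools.repeat('power off'))

    def get(self, node_uuid):
        return FakeNode(next(self._states))

def wait_for_power_change(node_api, node_uuid, current_state, retry=60):
    # Poll until the node leaves its original power state or the retry
    # budget runs out (the real action sleeps 10s between polls).
    node = node_api.get(node_uuid)
    while node.power_state == current_state and retry:
        retry -= 1
        node = node_api.get(node_uuid)
    return retry > 0
```

This mirrors the committed logic, including its quirk that success is judged by the remaining retry count rather than by the final observed state.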


@@ -124,7 +124,8 @@ class Migrate(base.BaseAction):
            LOG.debug("Nova client exception occurred while live "
                      "migrating instance %s. Exception: %s" %
                      (self.instance_uuid, e))
-        except Exception:
+        except Exception as e:
+            LOG.exception(e)
            LOG.critical("Unexpected error occurred. Migration failed for "
                         "instance %s. Leaving instance on previous "
                         "host.", self.instance_uuid)


@@ -0,0 +1,252 @@
# Copyright 2017 NEC Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import jsonschema
from oslo_log import log

from cinderclient import client as cinder_client

from watcher._i18n import _
from watcher.applier.actions import base
from watcher.common import cinder_helper
from watcher.common import exception
from watcher.common import keystone_helper
from watcher.common import nova_helper
from watcher.common import utils
from watcher import conf

CONF = conf.CONF
LOG = log.getLogger(__name__)
class VolumeMigrate(base.BaseAction):
    """Migrates a volume to a destination node or type

    By using this action, you will be able to migrate a cinder volume.
    Migration type 'swap' can only be used for migrating an attached volume.
    Migration type 'cold' can only be used for migrating a detached volume.

    The action schema is::

        schema = Schema({
            'resource_id': str,  # should be a UUID
            'migration_type': str,  # choices -> "swap", "cold"
            'destination_node': str,
            'destination_type': str,
        })

    The `resource_id` is the UUID of the cinder volume to migrate.
    The `destination_node` is the destination block storage pool name
    (the list of available pools is returned by ``cinder get-pools``);
    it is mandatory when cold-migrating a detached volume to a pool with
    the same volume type.
    The `destination_type` is the destination block storage type name
    (the list of available types is returned by ``cinder type-list``);
    it is mandatory when migrating a detached volume or swapping an
    attached volume to a different volume type.
    """

    MIGRATION_TYPE = 'migration_type'
    SWAP = 'swap'
    COLD = 'cold'
    DESTINATION_NODE = "destination_node"
    DESTINATION_TYPE = "destination_type"

    def __init__(self, config, osc=None):
        super(VolumeMigrate, self).__init__(config)
        self.temp_username = utils.random_string(10)
        self.temp_password = utils.random_string(10)
        self.cinder_util = cinder_helper.CinderHelper(osc=self.osc)
        self.nova_util = nova_helper.NovaHelper(osc=self.osc)

    @property
    def schema(self):
        return {
            'type': 'object',
            'properties': {
                'resource_id': {
                    'type': 'string',
                    'minLength': 1,
                    'pattern': ('^([a-fA-F0-9]){8}-([a-fA-F0-9]){4}-'
                                '([a-fA-F0-9]){4}-([a-fA-F0-9]){4}-'
                                '([a-fA-F0-9]){12}$')
                },
                'migration_type': {
                    'type': 'string',
                    'enum': ['swap', 'cold']
                },
                'destination_node': {
                    'anyOf': [
                        {'type': 'string', 'minLength': 1},
                        {'type': 'null'}
                    ]
                },
                'destination_type': {
                    'anyOf': [
                        {'type': 'string', 'minLength': 1},
                        {'type': 'null'}
                    ]
                }
            },
            'required': ['resource_id', 'migration_type'],
            'additionalProperties': False,
        }
    def validate_parameters(self):
        try:
            jsonschema.validate(self.input_parameters, self.schema)
            return True
        except jsonschema.ValidationError as e:
            raise e

    @property
    def volume_id(self):
        return self.input_parameters.get(self.RESOURCE_ID)

    @property
    def migration_type(self):
        return self.input_parameters.get(self.MIGRATION_TYPE)

    @property
    def destination_node(self):
        return self.input_parameters.get(self.DESTINATION_NODE)

    @property
    def destination_type(self):
        return self.input_parameters.get(self.DESTINATION_TYPE)

    def _cold_migrate(self, volume, dest_node, dest_type):
        if not self.cinder_util.can_cold(volume, dest_node):
            raise exception.Invalid(
                message=(_("Invalid state for cold migration")))
        if dest_node:
            return self.cinder_util.migrate(volume, dest_node)
        elif dest_type:
            return self.cinder_util.retype(volume, dest_type)
        else:
            raise exception.Invalid(
                message=(_("destination host or destination type is "
                           "required when migration type is cold")))

    def _can_swap(self, volume):
        """Judge whether the volume can be swapped"""
        if not volume.attachments:
            return False
        instance_id = volume.attachments[0]['server_id']
        instance_status = self.nova_util.find_instance(instance_id).status
        if (volume.status == 'in-use' and
                instance_status in ('ACTIVE', 'PAUSED', 'RESIZED')):
            return True
        return False

    def _create_user(self, volume, user):
        """Create a user from the volume attributes and user information"""
        keystone_util = keystone_helper.KeystoneHelper(osc=self.osc)
        project_id = getattr(volume, 'os-vol-tenant-attr:tenant_id')
        user['project'] = project_id
        user['domain'] = keystone_util.get_project(project_id).domain_id
        user['roles'] = ['admin']
        return keystone_util.create_user(user)

    def _get_cinder_client(self, session):
        """Get a cinder client from the session"""
        return cinder_client.Client(
            CONF.cinder_client.api_version,
            session=session,
            endpoint_type=CONF.cinder_client.endpoint_type)

    def _swap_volume(self, volume, dest_type):
        """Swap volume to dest_type

        Limitation note: only for compute libvirt driver
        """
        if not dest_type:
            raise exception.Invalid(
                message=(_("destination type is required when "
                           "migration type is swap")))
        if not self._can_swap(volume):
            raise exception.Invalid(
                message=(_("Invalid state for swapping volume")))
        user_info = {
            'name': self.temp_username,
            'password': self.temp_password}
        user = self._create_user(volume, user_info)
        keystone_util = keystone_helper.KeystoneHelper(osc=self.osc)
        try:
            session = keystone_util.create_session(
                user.id, self.temp_password)
            temp_cinder = self._get_cinder_client(session)

            # swap volume
            new_volume = self.cinder_util.create_volume(
                temp_cinder, volume, dest_type)
            self.nova_util.swap_volume(volume, new_volume)

            # delete old volume
            self.cinder_util.delete_volume(volume)
        finally:
            keystone_util.delete_user(user)
        return True

    def _migrate(self, volume_id, dest_node, dest_type):
        try:
            volume = self.cinder_util.get_volume(volume_id)
            if self.migration_type == self.COLD:
                return self._cold_migrate(volume, dest_node, dest_type)
            elif self.migration_type == self.SWAP:
                if dest_node:
                    LOG.warning("dest_node is ignored")
                return self._swap_volume(volume, dest_type)
            else:
                raise exception.Invalid(
                    message=(_("Migration of type '%(migration_type)s' is "
                               "not supported.") %
                             {'migration_type': self.migration_type}))
        except exception.Invalid as ei:
            LOG.exception(ei)
            return False
        except Exception as e:
            LOG.critical("Unexpected exception occurred.")
            LOG.exception(e)
            return False

    def execute(self):
        return self._migrate(self.volume_id,
                             self.destination_node,
                             self.destination_type)

    def revert(self):
        LOG.warning("revert not supported")

    def abort(self):
        pass

    def pre_condition(self):
        pass

    def post_condition(self):
        pass

    def get_description(self):
        return "Moving a volume to destination_node or destination_type"
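A hypothetical `input_parameters` payload for this action, checked against the UUID pattern and `migration_type` enum from the schema above (the volume UUID and type name are invented for illustration):

```python
import re

# Invented example parameters for a VolumeMigrate action
params = {
    'resource_id': '5e21a152-cfcc-45bd-9b1d-0b2a6bf924f4',
    'migration_type': 'swap',
    'destination_type': 'lvm-ssd',  # assumed volume type name
}

# Same UUID pattern as the action schema
UUID_RE = ('^([a-fA-F0-9]){8}-([a-fA-F0-9]){4}-([a-fA-F0-9]){4}-'
           '([a-fA-F0-9]){4}-([a-fA-F0-9]){12}$')

valid = (re.match(UUID_RE, params['resource_id']) is not None
         and params['migration_type'] in ('swap', 'cold'))
```

In the action itself, `validate_parameters()` runs the full check with `jsonschema.validate`; this sketch only exercises the two constraints by hand.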

watcher/applier/sync.py (new file)

@@ -0,0 +1,44 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from watcher.applier.loading import default
from watcher.common import context
from watcher.common import exception
from watcher import objects
class Syncer(object):
    """Syncs all available actions with the Watcher DB"""

    def sync(self):
        ctx = context.make_context()
        action_loader = default.DefaultActionLoader()
        available_actions = action_loader.list_available()
        for action_type in available_actions.keys():
            load_action = action_loader.load(action_type)
            load_description = load_action.get_description()
            try:
                action_desc = objects.ActionDescription.get_by_type(
                    ctx, action_type)
                if action_desc.description != load_description:
                    action_desc.description = load_description
                    action_desc.save()
            except exception.ActionDescriptionNotFound:
                obj_action_desc = objects.ActionDescription(ctx)
                obj_action_desc.action_type = action_type
                obj_action_desc.description = load_description
                obj_action_desc.create()
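The sync loop above is a get-or-create upsert: update the stored description when it differs, create the row when it is missing. The same logic with a plain dict standing in for the ActionDescription table (both descriptions are illustrative):

```python
# Dict stands in for the DB table: action_type -> description
stored = {'migrate': 'outdated text'}
loaded = {
    'migrate': 'Moving an instance to another host',
    'volume_migrate': 'Moving a volume to destination_node or '
                      'destination_type',
}

for action_type, description in loaded.items():
    if stored.get(action_type) != description:
        # update an existing entry, or create a missing one
        stored[action_type] = description
```

After the loop the store exactly mirrors the loaded plugins, which is what lets the new `description` field on the Action API resolve every known action type.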


@@ -57,6 +57,7 @@ class BaseWorkFlowEngine(loadable.Loadable):
        self._applier_manager = applier_manager
        self._action_factory = factory.ActionFactory()
        self._osc = None
+        self._is_notified = False

    @classmethod
    def get_config_opts(cls):
@@ -90,6 +91,18 @@ class BaseWorkFlowEngine(loadable.Loadable):
                                             eager=True)
        db_action.state = state
        db_action.save()
+        return db_action
+
+    def notify_cancel_start(self, action_plan_uuid):
+        action_plan = objects.ActionPlan.get_by_uuid(self.context,
+                                                     action_plan_uuid,
+                                                     eager=True)
+        if not self._is_notified:
+            self._is_notified = True
+            notifications.action_plan.send_cancel_notification(
+                self._context, action_plan,
+                action=fields.NotificationAction.CANCEL,
+                phase=fields.NotificationPhase.START)

    @abc.abstractmethod
    def execute(self, actions):
@@ -149,19 +162,21 @@ class BaseTaskFlowActionContainer(flow_task.Task):
                    self.engine.context, self._db_action.action_plan_id)
                if action_plan.state in CANCEL_STATE:
                    raise exception.ActionPlanCancelled(uuid=action_plan.uuid)
-                self.do_pre_execute()
+                db_action = self.do_pre_execute()
                notifications.action.send_execution_notification(
-                    self.engine.context, self._db_action,
+                    self.engine.context, db_action,
                    fields.NotificationAction.EXECUTION,
                    fields.NotificationPhase.START)
            except exception.ActionPlanCancelled as e:
                LOG.exception(e)
+                self.engine.notify_cancel_start(action_plan.uuid)
                raise
            except Exception as e:
                LOG.exception(e)
-                self.engine.notify(self._db_action, objects.action.State.FAILED)
+                db_action = self.engine.notify(self._db_action,
+                                               objects.action.State.FAILED)
                notifications.action.send_execution_notification(
-                    self.engine.context, self._db_action,
+                    self.engine.context, db_action,
                    fields.NotificationAction.EXECUTION,
                    fields.NotificationPhase.ERROR,
                    priority=fields.NotificationPriority.ERROR)
@@ -169,19 +184,19 @@ class BaseTaskFlowActionContainer(flow_task.Task):
    def execute(self, *args, **kwargs):
        def _do_execute_action(*args, **kwargs):
            try:
-                self.do_execute(*args, **kwargs)
+                db_action = self.do_execute(*args, **kwargs)
                notifications.action.send_execution_notification(
-                    self.engine.context, self._db_action,
+                    self.engine.context, db_action,
                    fields.NotificationAction.EXECUTION,
                    fields.NotificationPhase.END)
            except Exception as e:
                LOG.exception(e)
                LOG.error('The workflow engine has failed '
                          'to execute the action: %s', self.name)
-                self.engine.notify(self._db_action,
-                                   objects.action.State.FAILED)
+                db_action = self.engine.notify(self._db_action,
+                                               objects.action.State.FAILED)
                notifications.action.send_execution_notification(
-                    self.engine.context, self._db_action,
+                    self.engine.context, db_action,
                    fields.NotificationAction.EXECUTION,
                    fields.NotificationPhase.ERROR,
                    priority=fields.NotificationPriority.ERROR)
@@ -216,6 +231,7 @@ class BaseTaskFlowActionContainer(flow_task.Task):
            # taskflow will call revert for the action,
            # we will redirect it to abort.
            except eventlet.greenlet.GreenletExit:
+                self.engine.notify_cancel_start(action_plan_object.uuid)
                raise exception.ActionPlanCancelled(uuid=action_plan_object.uuid)

            except Exception as e:
except Exception as e: except Exception as e:
@@ -227,9 +243,10 @@ class BaseTaskFlowActionContainer(flow_task.Task):
                self.do_post_execute()
            except Exception as e:
                LOG.exception(e)
-                self.engine.notify(self._db_action, objects.action.State.FAILED)
+                db_action = self.engine.notify(self._db_action,
+                                               objects.action.State.FAILED)
                notifications.action.send_execution_notification(
-                    self.engine.context, self._db_action,
+                    self.engine.context, db_action,
                    fields.NotificationAction.EXECUTION,
                    fields.NotificationPhase.ERROR,
                    priority=fields.NotificationPriority.ERROR)
@@ -238,7 +255,7 @@ class BaseTaskFlowActionContainer(flow_task.Task):
        action_plan = objects.ActionPlan.get_by_id(
            self.engine.context, self._db_action.action_plan_id, eager=True)
        # NOTE: check if revert cause by cancel action plan or
-        # some other exception occured during action plan execution
+        # some other exception occurred during action plan execution
        # if due to some other exception keep the flow intact.
        if action_plan.state not in CANCEL_STATE:
            self.do_revert()
@@ -246,15 +263,42 @@ class BaseTaskFlowActionContainer(flow_task.Task):
        action_object = objects.Action.get_by_uuid(
            self.engine.context, self._db_action.uuid, eager=True)
-        if action_object.state == objects.action.State.ONGOING:
-            action_object.state = objects.action.State.CANCELLING
-            action_object.save()
-            self.abort()
-        elif action_object.state == objects.action.State.PENDING:
-            action_object.state = objects.action.State.CANCELLED
-            action_object.save()
-        else:
-            pass
+        try:
+            if action_object.state == objects.action.State.ONGOING:
+                action_object.state = objects.action.State.CANCELLING
+                action_object.save()
+                notifications.action.send_cancel_notification(
+                    self.engine.context, action_object,
+                    fields.NotificationAction.CANCEL,
+                    fields.NotificationPhase.START)
+                action_object = self.abort()
+                notifications.action.send_cancel_notification(
+                    self.engine.context, action_object,
+                    fields.NotificationAction.CANCEL,
+                    fields.NotificationPhase.END)
+            if action_object.state == objects.action.State.PENDING:
+                notifications.action.send_cancel_notification(
+                    self.engine.context, action_object,
+                    fields.NotificationAction.CANCEL,
+                    fields.NotificationPhase.START)
+                action_object.state = objects.action.State.CANCELLED
+                action_object.save()
+                notifications.action.send_cancel_notification(
+                    self.engine.context, action_object,
+                    fields.NotificationAction.CANCEL,
+                    fields.NotificationPhase.END)
+        except Exception as e:
+            LOG.exception(e)
+            action_object.state = objects.action.State.FAILED
+            action_object.save()
+            notifications.action.send_cancel_notification(
+                self.engine.context, action_object,
+                fields.NotificationAction.CANCEL,
+                fields.NotificationPhase.ERROR,
+                priority=fields.NotificationPriority.ERROR)

    def abort(self, *args, **kwargs):
-        self.do_abort(*args, **kwargs)
+        return self.do_abort(*args, **kwargs)


@@ -34,7 +34,7 @@ class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
    """Taskflow as a workflow engine for Watcher

    Full documentation on taskflow at
-    http://docs.openstack.org/developer/taskflow/
+    https://docs.openstack.org/taskflow/latest
    """
@@ -45,7 +45,7 @@ class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
        # (or whether the execution of v should be ignored,
        # and therefore not executed). It is expected to take as single
        # keyword argument history which will be the execution results of
-        # all u decideable links that have v as a target. It is expected
+        # all u decidable links that have v as a target. It is expected
        # to return a single boolean
        # (True to allow v execution or False to not).
        return True
@@ -111,21 +111,26 @@ class TaskFlowActionContainer(base.BaseTaskFlowActionContainer):
         super(TaskFlowActionContainer, self).__init__(name, db_action, engine)

     def do_pre_execute(self):
-        self.engine.notify(self._db_action, objects.action.State.ONGOING)
+        db_action = self.engine.notify(self._db_action,
+                                       objects.action.State.ONGOING)
         LOG.debug("Pre-condition action: %s", self.name)
         self.action.pre_condition()
+        return db_action

     def do_execute(self, *args, **kwargs):
         LOG.debug("Running action: %s", self.name)

-        # NOTE: For result is False, set action state fail
+        # NOTE:Some actions(such as migrate) will return None when exception
+        # Only when True is returned, the action state is set to SUCCEEDED
         result = self.action.execute()
-        if result is False:
-            self.engine.notify(self._db_action,
-                               objects.action.State.FAILED)
+        if result is True:
+            return self.engine.notify(self._db_action,
+                                      objects.action.State.SUCCEEDED)
         else:
             self.engine.notify(self._db_action,
-                               objects.action.State.SUCCEEDED)
+                               objects.action.State.FAILED)
+            raise exception.ActionExecutionFailure(
+                action_id=self._db_action.uuid)

     def do_post_execute(self):
         LOG.debug("Post-condition action: %s", self.name)
@@ -146,14 +151,15 @@ class TaskFlowActionContainer(base.BaseTaskFlowActionContainer):
             result = self.action.abort()
             if result:
                 # Aborted the action.
-                self.engine.notify(self._db_action,
-                                   objects.action.State.CANCELLED)
+                return self.engine.notify(self._db_action,
+                                          objects.action.State.CANCELLED)
             else:
-                self.engine.notify(self._db_action,
-                                   objects.action.State.SUCCEEDED)
+                return self.engine.notify(self._db_action,
+                                          objects.action.State.SUCCEEDED)
         except Exception as e:
-            self.engine.notify(self._db_action, objects.action.State.FAILED)
             LOG.exception(e)
+            return self.engine.notify(self._db_action,
+                                      objects.action.State.FAILED)


class TaskFlowNop(flow_task.Task):
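The `do_execute` change above tightens the success contract: only an explicit `True` counts as success, while `False` or `None` (e.g. an action that swallowed its own exception) marks the action FAILED and raises. A minimal sketch of that decision in isolation — the names below are illustrative stand-ins, not Watcher code:

```python
class ActionExecutionFailure(Exception):
    """Stand-in for watcher.common.exception.ActionExecutionFailure."""
    pass


def conclude(result):
    # Only an explicit True is a success; False *and* None both fail,
    # so an action returning nothing can no longer slip through.
    if result is True:
        return 'SUCCEEDED'
    raise ActionExecutionFailure()
```

This is why the hunk compares with `is True` rather than truthiness: a `None` return is treated the same as `False`.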


@@ -23,6 +23,7 @@ import sys
 from oslo_log import log as logging

 from watcher.applier import manager
+from watcher.applier import sync
 from watcher.common import service as watcher_service
 from watcher import conf
@@ -37,6 +38,9 @@ def main():
     applier_service = watcher_service.Service(manager.ApplierManager)

+    syncer = sync.Syncer()
+    syncer.sync()
+
     # Only 1 process
     launcher = watcher_service.launch(CONF, applier_service)
     launcher.wait()


@@ -12,12 +12,18 @@
 # limitations under the License.
 #
+import time

 from oslo_log import log

+from cinderclient import exceptions as cinder_exception
+from cinderclient.v2.volumes import Volume
+
+from watcher._i18n import _
 from watcher.common import clients
 from watcher.common import exception
+from watcher import conf

+CONF = conf.CONF

 LOG = log.getLogger(__name__)
@@ -34,9 +40,8 @@ class CinderHelper(object):
     def get_storage_node_by_name(self, name):
         """Get storage node by name(host@backendname)"""
         try:
-            storages = list(filter(lambda storage:
-                                   storage.host == name,
-                                   self.get_storage_node_list()))
+            storages = [storage for storage in self.get_storage_node_list()
+                        if storage.host == name]
             if len(storages) != 1:
                 raise exception.StorageNodeNotFound(name=name)
             return storages[0]
@@ -50,9 +55,8 @@ class CinderHelper(object):
     def get_storage_pool_by_name(self, name):
         """Get pool by name(host@backend#poolname)"""
         try:
-            pools = list(filter(lambda pool:
-                                pool.name == name,
-                                self.get_storage_pool_list()))
+            pools = [pool for pool in self.get_storage_pool_list()
+                     if pool.name == name]
             if len(pools) != 1:
                 raise exception.PoolNotFound(name=name)
             return pools[0]
@@ -69,11 +73,197 @@ class CinderHelper(object):
     def get_volume_type_by_backendname(self, backendname):
         volume_type_list = self.get_volume_type_list()

-        volume_type = list(filter(
-            lambda volume_type:
-            volume_type.extra_specs.get(
-                'volume_backend_name') == backendname, volume_type_list))
+        volume_type = [volume_type for volume_type in volume_type_list
+                       if volume_type.extra_specs.get(
+                           'volume_backend_name') == backendname]
         if volume_type:
             return volume_type[0].name
         else:
             return ""
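The hunks above swap `list(filter(lambda ...))` for list comprehensions. Both build exactly the same list; the comprehension simply reads better. A stand-in `Storage` class (illustrative, not the real Cinder object) shows the equivalence:

```python
# Minimal stand-in for the objects returned by get_storage_node_list().
class Storage(object):
    def __init__(self, host):
        self.host = host


nodes = [Storage('hostA@lvm'), Storage('hostB@ceph')]
name = 'hostA@lvm'

# Old style, as removed by the diff above.
filtered = list(filter(lambda s: s.host == name, nodes))
# New style, as introduced by the diff above.
comprehended = [s for s in nodes if s.host == name]

assert filtered == comprehended
```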
def get_volume(self, volume):
if isinstance(volume, Volume):
volume = volume.id
try:
volume = self.cinder.volumes.get(volume)
return volume
except cinder_exception.NotFound:
return self.cinder.volumes.find(name=volume)
def backendname_from_poolname(self, poolname):
"""Get backendname from poolname"""
        # poolname is formatted as host@backend#pool since Ocata,
        # but may also be just a host
backend = poolname.split('#')[0]
backendname = ""
try:
backendname = backend.split('@')[1]
except IndexError:
pass
return backendname
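The parsing rule used by `backendname_from_poolname` is easy to exercise on its own; a standalone sketch (the function body mirrors the helper above, the inputs are illustrative):

```python
def backendname_from_poolname(poolname):
    """Extract the backend from 'host@backend#pool'; '' when absent."""
    # Drop the '#pool' suffix, if any.
    backend = poolname.split('#')[0]
    try:
        # Keep the part after 'host@'.
        return backend.split('@')[1]
    except IndexError:
        # Plain host, no backend component.
        return ""
```

So `'host1@lvm#pool1'` yields `'lvm'`, while a bare `'host1'` yields the empty string.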
def _has_snapshot(self, volume):
"""Judge volume has a snapshot"""
volume = self.get_volume(volume)
if volume.snapshot_id:
return True
return False
def can_cold(self, volume, host=None):
"""Judge volume can be migrated"""
can_cold = False
status = self.get_volume(volume).status
snapshot = self._has_snapshot(volume)
same_host = False
if host and getattr(volume, 'os-vol-host-attr:host') == host:
same_host = True
if (status == 'available' and
snapshot is False and
same_host is False):
can_cold = True
return can_cold
def get_deleting_volume(self, volume):
volume = self.get_volume(volume)
all_volume = self.get_volume_list()
for _volume in all_volume:
if getattr(_volume, 'os-vol-mig-status-attr:name_id') == volume.id:
return _volume
return False
def _can_get_volume(self, volume_id):
"""Check to get volume with volume_id"""
try:
volume = self.get_volume(volume_id)
if not volume:
raise Exception
except cinder_exception.NotFound:
return False
else:
return True
def check_volume_deleted(self, volume, retry=120, retry_interval=10):
"""Check volume has been deleted"""
volume = self.get_volume(volume)
while self._can_get_volume(volume.id) and retry:
volume = self.get_volume(volume.id)
time.sleep(retry_interval)
retry -= 1
LOG.debug("retry count: %s" % retry)
LOG.debug("Waiting to complete deletion of volume %s" % volume.id)
if self._can_get_volume(volume.id):
LOG.error("Volume deletion error: %s" % volume.id)
return False
LOG.debug("Volume %s was deleted successfully." % volume.id)
return True
def check_migrated(self, volume, retry_interval=10):
volume = self.get_volume(volume)
while getattr(volume, 'migration_status') == 'migrating':
volume = self.get_volume(volume.id)
LOG.debug('Waiting the migration of {0}'.format(volume))
time.sleep(retry_interval)
if getattr(volume, 'migration_status') == 'error':
host_name = getattr(volume, 'os-vol-host-attr:host')
error_msg = (("Volume migration error : "
"volume %(volume)s is now on host '%(host)s'.") %
{'volume': volume.id, 'host': host_name})
LOG.error(error_msg)
return False
host_name = getattr(volume, 'os-vol-host-attr:host')
if getattr(volume, 'migration_status') == 'success':
# check original volume deleted
deleting_volume = self.get_deleting_volume(volume)
if deleting_volume:
delete_id = getattr(deleting_volume, 'id')
if not self.check_volume_deleted(delete_id):
return False
else:
host_name = getattr(volume, 'os-vol-host-attr:host')
error_msg = (("Volume migration error : "
"volume %(volume)s is now on host '%(host)s'.") %
{'volume': volume.id, 'host': host_name})
LOG.error(error_msg)
return False
LOG.debug(
"Volume migration succeeded : "
"volume %s is now on host '%s'." % (
volume.id, host_name))
return True
def migrate(self, volume, dest_node):
"""Migrate volume to dest_node"""
volume = self.get_volume(volume)
dest_backend = self.backendname_from_poolname(dest_node)
dest_type = self.get_volume_type_by_backendname(dest_backend)
if volume.volume_type != dest_type:
raise exception.Invalid(
message=(_("Volume type must be same for migrating")))
source_node = getattr(volume, 'os-vol-host-attr:host')
LOG.debug("Volume %s found on host '%s'."
% (volume.id, source_node))
self.cinder.volumes.migrate_volume(
volume, dest_node, False, True)
return self.check_migrated(volume)
def retype(self, volume, dest_type):
"""Retype volume to dest_type with on-demand option"""
volume = self.get_volume(volume)
if volume.volume_type == dest_type:
raise exception.Invalid(
message=(_("Volume type must be different for retyping")))
source_node = getattr(volume, 'os-vol-host-attr:host')
LOG.debug(
"Volume %s found on host '%s'." % (
volume.id, source_node))
self.cinder.volumes.retype(
volume, dest_type, "on-demand")
return self.check_migrated(volume)
def create_volume(self, cinder, volume,
dest_type, retry=120, retry_interval=10):
"""Create volume of volume with dest_type using cinder"""
volume = self.get_volume(volume)
LOG.debug("start creating new volume")
new_volume = cinder.volumes.create(
getattr(volume, 'size'),
name=getattr(volume, 'name'),
volume_type=dest_type,
availability_zone=getattr(volume, 'availability_zone'))
while getattr(new_volume, 'status') != 'available' and retry:
new_volume = cinder.volumes.get(new_volume.id)
LOG.debug('Waiting volume creation of {0}'.format(new_volume))
time.sleep(retry_interval)
retry -= 1
LOG.debug("retry count: %s" % retry)
if getattr(new_volume, 'status') != 'available':
error_msg = (_("Failed to create volume '%(volume)s. ") %
{'volume': new_volume.id})
raise Exception(error_msg)
LOG.debug("Volume %s was created successfully." % new_volume)
return new_volume
def delete_volume(self, volume):
"""Delete volume"""
volume = self.get_volume(volume)
self.cinder.volumes.delete(volume)
result = self.check_volume_deleted(volume)
if not result:
error_msg = (_("Failed to delete volume '%(volume)s. ") %
{'volume': volume.id})
raise Exception(error_msg)


@@ -110,8 +110,12 @@ class OpenStackClients(object):
                                                        'api_version')
         gnocchiclient_interface = self._get_client_option('gnocchi',
                                                           'endpoint_type')
+        adapter_options = {
+            "interface": gnocchiclient_interface
+        }
         self._gnocchi = gnclient.Client(gnocchiclient_version,
-                                        interface=gnocchiclient_interface,
+                                        adapter_options=adapter_options,
                                         session=self.session)
         return self._gnocchi
@@ -199,6 +203,6 @@ class OpenStackClients(object):
         ironicclient_version = self._get_client_option('ironic', 'api_version')
         endpoint_type = self._get_client_option('ironic', 'endpoint_type')
         self._ironic = irclient.get_client(ironicclient_version,
-                                           ironic_url=endpoint_type,
+                                           os_endpoint_type=endpoint_type,
                                            session=self.session)
         return self._ironic


@@ -15,8 +15,6 @@ from oslo_log import log as logging
 from oslo_utils import timeutils
 import six

-from watcher.common import utils
-
 LOG = logging.getLogger(__name__)
@@ -102,7 +100,7 @@ class RequestContext(context.RequestContext):
             'domain_name': getattr(self, 'domain_name', None),
             'auth_token_info': getattr(self, 'auth_token_info', None),
             'is_admin': getattr(self, 'is_admin', None),
-            'timestamp': utils.strtime(self.timestamp) if hasattr(
+            'timestamp': self.timestamp.isoformat() if hasattr(
                 self, 'timestamp') else None,
             'request_id': getattr(self, 'request_id', None),
         })
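The change above follows the removal of `strtime` from oslo.utils: the context now serializes its timestamp with the stdlib `datetime` API directly, which produces an ISO 8601 string. A quick illustration:

```python
import datetime

# Naive datetime without microseconds serializes without a fraction...
ts = datetime.datetime(2017, 10, 17, 14, 40, 59)
assert ts.isoformat() == '2017-10-17T14:40:59'

# ...and with microseconds, a six-digit fraction is appended.
ts_us = datetime.datetime(2017, 10, 17, 14, 40, 59, 123)
assert ts_us.isoformat() == '2017-10-17T14:40:59.000123'
```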


@@ -426,6 +426,19 @@ class CronFormatIsInvalid(WatcherException):
    msg_fmt = _("Provided cron is invalid: %(message)s")
class ActionDescriptionAlreadyExists(Conflict):
msg_fmt = _("An action description with type %(action_type)s is "
"already exist.")
class ActionDescriptionNotFound(ResourceNotFound):
msg_fmt = _("The action description %(action_id)s cannot be found.")
class ActionExecutionFailure(WatcherException):
msg_fmt = _("The action %(action_id)s execution failed.")
# Model
class ComputeResourceNotFound(WatcherException):


@@ -0,0 +1,124 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from oslo_log import log
from keystoneauth1.exceptions import http as ks_exceptions
from keystoneauth1 import loading
from keystoneauth1 import session
from watcher._i18n import _
from watcher.common import clients
from watcher.common import exception
from watcher import conf
CONF = conf.CONF
LOG = log.getLogger(__name__)
class KeystoneHelper(object):
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
self.osc = osc if osc else clients.OpenStackClients()
self.keystone = self.osc.keystone()
def get_role(self, name_or_id):
try:
role = self.keystone.roles.get(name_or_id)
return role
except ks_exceptions.NotFound:
roles = self.keystone.roles.list(name=name_or_id)
if len(roles) == 0:
raise exception.Invalid(
message=(_("Role not Found: %s") % name_or_id))
if len(roles) > 1:
raise exception.Invalid(
message=(_("Role name seems ambiguous: %s") % name_or_id))
return roles[0]
def get_user(self, name_or_id):
try:
user = self.keystone.users.get(name_or_id)
return user
except ks_exceptions.NotFound:
users = self.keystone.users.list(name=name_or_id)
if len(users) == 0:
raise exception.Invalid(
message=(_("User not Found: %s") % name_or_id))
if len(users) > 1:
raise exception.Invalid(
message=(_("User name seems ambiguous: %s") % name_or_id))
return users[0]
def get_project(self, name_or_id):
try:
project = self.keystone.projects.get(name_or_id)
return project
except ks_exceptions.NotFound:
projects = self.keystone.projects.list(name=name_or_id)
if len(projects) == 0:
raise exception.Invalid(
message=(_("Project not Found: %s") % name_or_id))
if len(projects) > 1:
                raise exception.Invalid(
                    message=(_("Project name seems ambiguous: %s") %
                             name_or_id))
return projects[0]
def get_domain(self, name_or_id):
try:
domain = self.keystone.domains.get(name_or_id)
return domain
except ks_exceptions.NotFound:
domains = self.keystone.domains.list(name=name_or_id)
if len(domains) == 0:
raise exception.Invalid(
message=(_("Domain not Found: %s") % name_or_id))
if len(domains) > 1:
raise exception.Invalid(
message=(_("Domain name seems ambiguous: %s") %
name_or_id))
return domains[0]
def create_session(self, user_id, password):
user = self.get_user(user_id)
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
auth_url=CONF.watcher_clients_auth.auth_url,
password=password,
user_id=user_id,
project_id=user.default_project_id)
return session.Session(auth=auth)
def create_user(self, user):
project = self.get_project(user['project'])
domain = self.get_domain(user['domain'])
_user = self.keystone.users.create(
user['name'],
password=user['password'],
domain=domain,
project=project,
)
for role in user['roles']:
role = self.get_role(role)
self.keystone.roles.grant(
role.id, user=_user.id, project=project.id)
return _user
def delete_user(self, user):
try:
user = self.get_user(user)
self.keystone.users.delete(user)
except exception.Invalid:
pass
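The `get_role`/`get_user`/`get_project`/`get_domain` helpers above share one lookup pattern: try a direct get by ID, fall back to a list-by-name, and treat zero or multiple matches as an error. A generic sketch of that pattern — `NotFound` and the callables are simplified stand-ins for the keystoneclient managers:

```python
class NotFound(Exception):
    """Stand-in for keystoneauth1's HTTP NotFound exception."""
    pass


def find_one(get_by_id, list_by_name, name_or_id):
    """Resolve name_or_id the way the keystone helpers above do."""
    try:
        return get_by_id(name_or_id)
    except NotFound:
        matches = list_by_name(name_or_id)
        if len(matches) == 0:
            raise ValueError("not found: %s" % name_or_id)
        if len(matches) > 1:
            raise ValueError("name seems ambiguous: %s" % name_or_id)
        return matches[0]
```

Rejecting ambiguous names instead of picking the first match is the safer design when the caller may pass either an ID or a display name.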


@@ -70,9 +70,6 @@ class NovaHelper(object):
     def get_service(self, service_id):
         return self.nova.services.find(id=service_id)

-    def get_flavor(self, flavor_id):
-        return self.nova.flavors.get(flavor_id)
-
     def get_aggregate_list(self):
         return self.nova.aggregates.list()
@@ -82,6 +79,9 @@ class NovaHelper(object):
     def get_availability_zone_list(self):
         return self.nova.availability_zones.list(detailed=True)

+    def get_service_list(self):
+        return self.nova.services.list(binary='nova-compute')
+
     def find_instance(self, instance_id):
         return self.nova.servers.get(instance_id)
@@ -451,8 +451,7 @@ class NovaHelper(object):
             "Instance %s found on host '%s'." % (instance_id, host_name))

         instance.live_migrate(host=dest_hostname,
-                              block_migration=block_migration,
-                              disk_over_commit=True)
+                              block_migration=block_migration)

         instance = self.nova.servers.get(instance_id)
@@ -525,10 +524,10 @@ class NovaHelper(object):
         instance_host = getattr(instance, 'OS-EXT-SRV-ATTR:host')
         instance_status = getattr(instance, 'status')

-        # Abort live migration successfull, action is cancelled
+        # Abort live migration successful, action is cancelled
         if instance_host == source and instance_status == 'ACTIVE':
             return True
-        # Nova Unable to abort live migration, action is succeded
+        # Nova Unable to abort live migration, action is succeeded
         elif instance_host == destination and instance_status == 'ACTIVE':
             return False
@@ -787,6 +786,9 @@ class NovaHelper(object):
                 net_obj = {"net-id": nic_id}
                 net_list.append(net_obj)

+        # get availability zone of destination host
+        azone = self.nova.services.list(host=node_id,
+                                        binary='nova-compute')[0].zone
         instance = self.nova.servers.create(
             inst_name, image,
             flavor=flavor,
@@ -794,7 +796,7 @@ class NovaHelper(object):
             security_groups=sec_group_list,
             nics=net_list,
             block_device_mapping_v2=block_device_mapping_v2,
-            availability_zone="nova:%s" % node_id)
+            availability_zone="%s:%s" % (azone, node_id))

         # Poll at 5 second intervals, until the status is no longer 'BUILD'
         if instance:
@@ -864,3 +866,27 @@ class NovaHelper(object):

     def get_running_migration(self, instance_id):
         return self.nova.server_migrations.list(server=instance_id)
def swap_volume(self, old_volume, new_volume,
retry=120, retry_interval=10):
"""Swap old_volume for new_volume"""
attachments = old_volume.attachments
instance_id = attachments[0]['server_id']
# do volume update
self.nova.volumes.update_server_volume(
instance_id, old_volume.id, new_volume.id)
while getattr(new_volume, 'status') != 'in-use' and retry:
new_volume = self.cinder.volumes.get(new_volume.id)
LOG.debug('Waiting volume update to {0}'.format(new_volume))
time.sleep(retry_interval)
retry -= 1
LOG.debug("retry count: %s" % retry)
if getattr(new_volume, 'status') != "in-use":
LOG.error("Volume update retry timeout or error")
return False
host_name = getattr(new_volume, "os-vol-host-attr:host")
LOG.debug(
"Volume update succeeded : "
"Volume %s is now on host '%s'." % (new_volume.id, host_name))
return True
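`swap_volume` above polls until the volume reaches `in-use` or the retry budget runs out — the same loop shape used by `check_volume_deleted` and `check_migrated` earlier in this change set. The pattern in isolation (a sketch; `fetch` is a stand-in for `cinder.volumes.get`, and the `time.sleep` between polls is omitted for brevity):

```python
def wait_for_status(fetch, wanted, retry=120):
    """Poll fetch() until the object's status equals wanted, or the
    retry budget is exhausted. Returns True on success."""
    obj = fetch()
    while obj.status != wanted and retry:
        obj = fetch()  # re-read the resource on each iteration
        retry -= 1
    return obj.status == wanted
```

Decrementing a counter rather than looping on wall-clock time keeps the bound deterministic, at the cost of the total timeout being `retry * retry_interval` seconds.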


@@ -49,7 +49,7 @@ def init(policy_file=None, rules=None,
     """
     global _ENFORCER
     if not _ENFORCER:
-        # http://docs.openstack.org/developer/oslo.policy/usage.html
+        # https://docs.openstack.org/oslo.policy/latest/admin/index.html
         _ENFORCER = policy.Enforcer(CONF,
                                     policy_file=policy_file,
                                     rules=rules,


@@ -17,14 +17,15 @@
 """Utilities and helper functions."""

 import datetime
+import random
 import re
+import string

 from croniter import croniter
 from jsonschema import validators
 from oslo_log import log as logging
 from oslo_utils import strutils
-from oslo_utils import timeutils
 from oslo_utils import uuidutils
 import six
@@ -63,7 +64,6 @@ class Struct(dict):
 generate_uuid = uuidutils.generate_uuid
 is_uuid_like = uuidutils.is_uuid_like
 is_int_like = strutils.is_int_like
-strtime = timeutils.strtime


 def is_cron_like(value):
@@ -158,3 +158,8 @@ StrictDefaultValidatingDraft4Validator = extend_with_default(
     extend_with_strict_schema(validators.Draft4Validator))

 Draft4Validator = validators.Draft4Validator
def random_string(n):
return ''.join([random.choice(
string.ascii_letters + string.digits) for i in range(n)])
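The `random_string` helper added above builds an n-character alphanumeric string; an equivalent standalone form, runnable as-is:

```python
import random
import string


def random_string(n):
    # One uniform pick per position, from [A-Za-z0-9].
    return ''.join([random.choice(
        string.ascii_letters + string.digits) for i in range(n)])


token = random_string(16)
assert len(token) == 16
assert all(c in string.ascii_letters + string.digits for c in token)
```

Note that `random.choice` is not cryptographically secure; this helper suits generated identifiers and throwaway passwords in test setups, not long-lived secrets.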


@@ -24,6 +24,7 @@ from watcher.conf import applier
 from watcher.conf import ceilometer_client
 from watcher.conf import cinder_client
 from watcher.conf import clients_auth
+from watcher.conf import collector
 from watcher.conf import db
 from watcher.conf import decision_engine
 from watcher.conf import exception
@@ -36,13 +37,11 @@ from watcher.conf import nova_client
 from watcher.conf import paths
 from watcher.conf import planner
 from watcher.conf import service
-from watcher.conf import utils

 CONF = cfg.CONF

 service.register_opts(CONF)
 api.register_opts(CONF)
-utils.register_opts(CONF)
 paths.register_opts(CONF)
 exception.register_opts(CONF)
 db.register_opts(CONF)
@@ -58,3 +57,4 @@ ceilometer_client.register_opts(CONF)
 neutron_client.register_opts(CONF)
 clients_auth.register_opts(CONF)
 ironic_client.register_opts(CONF)
+collector.register_opts(CONF)


@@ -30,7 +30,6 @@ from watcher.conf import neutron_client as conf_neutron_client
 from watcher.conf import nova_client as conf_nova_client
 from watcher.conf import paths
 from watcher.conf import planner as conf_planner
-from watcher.conf import utils


 def list_opts():
@@ -39,8 +38,7 @@ def list_opts():
         ('DEFAULT',
          (conf_api.AUTH_OPTS +
           exception.EXC_LOG_OPTS +
-          paths.PATH_OPTS +
-          utils.UTILS_OPTS)),
+          paths.PATH_OPTS)),
         ('api', conf_api.API_SERVICE_OPTS),
         ('database', db.SQL_OPTS),
         ('watcher_planner', conf_planner.WATCHER_PLANNER_OPTS),


@@ -1,7 +1,4 @@
-# -*- encoding: utf-8 -*-
-# Copyright (c) 2016 Intel Corp
-#
-# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
+# Copyright (c) 2017 NEC Corporation
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -18,19 +15,23 @@
 from oslo_config import cfg

-UTILS_OPTS = [
-    cfg.StrOpt('rootwrap_config',
-               default="/etc/watcher/rootwrap.conf",
-               help='Path to the rootwrap configuration file to use for '
-                    'running commands as root.'),
-    cfg.StrOpt('tempdir',
-               help='Explicitly specify the temporary working directory.'),
+collector = cfg.OptGroup(name='collector',
+                         title='Defines the parameters of '
+                               'the module model collectors')
+
+COLLECTOR_OPTS = [
+    cfg.ListOpt('collector_plugins',
+                default=['compute'],
+                help='The cluster data model plugin names'),
 ]


 def register_opts(conf):
-    conf.register_opts(UTILS_OPTS)
+    conf.register_group(collector)
+    conf.register_opts(COLLECTOR_OPTS,
+                       group=collector)


 def list_opts():
-    return [('DEFAULT', UTILS_OPTS)]
+    return [('collector', COLLECTOR_OPTS)]
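With the new `[collector]` group registered above, operators select cluster data model collector plugins in `watcher.conf`. A sketch of the resulting configuration — the `storage` value is illustrative and depends on which collector plugins are actually installed:

```ini
[collector]
collector_plugins = compute,storage
```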


@@ -26,10 +26,10 @@ GNOCCHI_CLIENT_OPTS = [
                default='1',
                help='Version of Gnocchi API to use in gnocchiclient.'),
     cfg.StrOpt('endpoint_type',
-               default='internalURL',
+               default='public',
                help='Type of endpoint to use in gnocchi client.'
-                    'Supported values: internalURL, publicURL, adminURL'
-                    'The default is internalURL.'),
+                    'Supported values: internal, public, admin'
+                    'The default is public.'),
     cfg.IntOpt('query_max_retries',
                default=10,
                help='How many times Watcher is trying to query again'),


@@ -23,7 +23,7 @@ nova_client = cfg.OptGroup(name='nova_client',
 NOVA_CLIENT_OPTS = [
     cfg.StrOpt('api_version',
-               default='2',
+               default='2.53',
                help='Version of Nova API to use in novaclient.'),
     cfg.StrOpt('endpoint_type',
                default='publicURL',


@@ -24,6 +24,7 @@ from oslo_log import log
 from watcher.common import clients
 from watcher.common import exception
+from watcher.common import utils as common_utils

 CONF = cfg.CONF
 LOG = log.getLogger(__name__)
@@ -72,6 +73,17 @@ class GnocchiHelper(object):
             raise exception.InvalidParameter(parameter='stop_time',
                                              parameter_type=datetime)

+        if not common_utils.is_uuid_like(resource_id):
+            kwargs = dict(query={"=": {"original_resource_id": resource_id}},
+                          limit=1)
+            resources = self.query_retry(
+                f=self.gnocchi.resource.search, **kwargs)
+
+            if not resources:
+                raise exception.ResourceNotFound(name=resource_id)
+
+            resource_id = resources[0]['id']
+
         raw_kwargs = dict(
             metric=metric,
             start=start_time,
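The hunk above gates the Gnocchi resource search on whether the given `resource_id` already looks like a UUID. Watcher delegates that check to oslo.utils' `uuidutils.is_uuid_like`; a plain-stdlib stand-in with the same intent:

```python
import uuid


def is_uuid_like(value):
    """Return True if value parses as a UUID (simplified stand-in for
    oslo_utils.uuidutils.is_uuid_like)."""
    try:
        return str(uuid.UUID(value)).replace('-', '') == \
            value.replace('-', '').lower()
    except (TypeError, ValueError, AttributeError):
        return False
```

A non-UUID name such as `instance-00000001` therefore triggers the `original_resource_id` search fallback, while a real UUID skips it.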

View File

@@ -33,7 +33,7 @@ class MonascaHelper(object):
     def query_retry(self, f, *args, **kwargs):
         try:
             return f(*args, **kwargs)
-        except exc.HTTPUnauthorized:
+        except exc.Unauthorized:
             self.osc.reset_clients()
             self.monasca = self.osc.monasca()
             return f(*args, **kwargs)


@@ -20,7 +20,7 @@ You can upgrade to the latest database version via::
 To check the current database version::

-    $ watcher-db-manage --config-file /path/to/watcher.conf current
+    $ watcher-db-manage --config-file /path/to/watcher.conf version

 To create a script to run the migration offline::


@@ -0,0 +1,36 @@
"""add action description table
Revision ID: d09a5945e4a0
Revises: d098df6021e2
Create Date: 2017-07-13 20:33:01.473711
"""
from alembic import op
import oslo_db
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'd09a5945e4a0'
down_revision = 'd098df6021e2'
def upgrade():
op.create_table(
'action_descriptions',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('deleted_at', sa.DateTime(), nullable=True),
sa.Column('deleted', oslo_db.sqlalchemy.types.SoftDeleteInteger(),
nullable=True),
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('action_type', sa.String(length=255), nullable=False),
sa.Column('description', sa.String(length=255), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('action_type',
name='uniq_action_description0action_type')
)
def downgrade():
op.drop_table('action_descriptions')


@@ -1127,3 +1127,74 @@ class Connection(api.BaseConnection):
            return self._soft_delete(models.Service, service_id)
        except exception.ResourceNotFound:
            raise exception.ServiceNotFound(service=service_id)
# ### ACTION_DESCRIPTIONS ### #
def _add_action_descriptions_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'action_type']
return self._add_filters(
query=query, model=models.ActionDescription, filters=filters,
plain_fields=plain_fields)
def get_action_description_list(self, context, filters=None, limit=None,
marker=None, sort_key=None,
sort_dir=None, eager=False):
query = model_query(models.ActionDescription)
if eager:
query = self._set_eager_options(models.ActionDescription, query)
query = self._add_action_descriptions_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.ActionDescription, limit, marker,
sort_key, sort_dir, query)
def create_action_description(self, values):
try:
action_description = self._create(models.ActionDescription, values)
except db_exc.DBDuplicateEntry:
raise exception.ActionDescriptionAlreadyExists(
action_type=values['action_type'])
return action_description
def _get_action_description(self, context, fieldname, value, eager):
try:
return self._get(context, model=models.ActionDescription,
fieldname=fieldname, value=value, eager=eager)
except exception.ResourceNotFound:
raise exception.ActionDescriptionNotFound(action_id=value)
def get_action_description_by_id(self, context,
action_id, eager=False):
return self._get_action_description(
context, fieldname="id", value=action_id, eager=eager)
def get_action_description_by_type(self, context,
action_type, eager=False):
return self._get_action_description(
context, fieldname="action_type", value=action_type, eager=eager)
def destroy_action_description(self, action_id):
try:
return self._destroy(models.ActionDescription, action_id)
except exception.ResourceNotFound:
raise exception.ActionDescriptionNotFound(
action_id=action_id)
def update_action_description(self, action_id, values):
try:
return self._update(models.ActionDescription,
action_id, values)
except exception.ResourceNotFound:
raise exception.ActionDescriptionNotFound(
action_id=action_id)
def soft_delete_action_description(self, action_id):
try:
return self._soft_delete(models.ActionDescription, action_id)
except exception.ResourceNotFound:
raise exception.ActionDescriptionNotFound(
action_id=action_id)
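The pattern of the new DB API methods — delegate to a generic helper, then translate storage-level errors into domain exceptions — can be sketched with an in-memory stand-in (the class, storage dict, and exceptions below are hypothetical; the real methods go through SQLAlchemy via `_create`/`_get`):

```python
# Minimal sketch of the create/lookup flow; not Watcher code.

class ActionDescriptionAlreadyExists(Exception):
    """Stand-in for exception.ActionDescriptionAlreadyExists."""

class ActionDescriptionNotFound(Exception):
    """Stand-in for exception.ActionDescriptionNotFound."""

class FakeConnection:
    def __init__(self):
        self._rows = {}  # action_type -> description (unique key)

    def create_action_description(self, values):
        # Mirrors the db_exc.DBDuplicateEntry branch above.
        if values['action_type'] in self._rows:
            raise ActionDescriptionAlreadyExists(values['action_type'])
        self._rows[values['action_type']] = values['description']
        return values

    def get_action_description_by_type(self, action_type):
        # Mirrors the ResourceNotFound -> ActionDescriptionNotFound mapping.
        try:
            return self._rows[action_type]
        except KeyError:
            raise ActionDescriptionNotFound(action_type)

conn = FakeConnection()
conn.create_action_description({'action_type': 'nop', 'description': 'no-op'})
print(conn.get_action_description_by_type('nop'))  # no-op
```

Callers therefore never see raw database exceptions, only Watcher's domain-specific ones.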


@@ -69,7 +69,7 @@ def create_schema(config=None, engine=None):
     # schema, it will only add the new tables, but leave
     # existing as is. So we should avoid of this situation.
     if version(engine=engine) is not None:
-        raise db_exc.DbMigrationError(
+        raise db_exc.DBMigrationError(
             _("Watcher database schema is already under version control; "
               "use upgrade() instead"))


@@ -278,3 +278,17 @@ class Service(Base):
     name = Column(String(255), nullable=False)
     host = Column(String(255), nullable=False)
     last_seen_up = Column(DateTime, nullable=True)
+
+
+class ActionDescription(Base):
+    """Represents an action description"""
+
+    __tablename__ = 'action_descriptions'
+    __table_args__ = (
+        UniqueConstraint('action_type',
+                         name="uniq_action_description0action_type"),
+        table_args()
+    )
+    id = Column(Integer, primary_key=True)
+    action_type = Column(String(255), nullable=False)
+    description = Column(String(255), nullable=False)


@@ -96,9 +96,10 @@ class AuditHandler(BaseAuditHandler):
             raise
 
     def update_audit_state(self, audit, state):
-        LOG.debug("Update audit state: %s", state)
-        audit.state = state
-        audit.save()
+        if audit.state != state:
+            LOG.debug("Update audit state: %s", state)
+            audit.state = state
+            audit.save()
 
     def check_ongoing_action_plans(self, request_context):
         a_plan_filters = {'state': objects.action_plan.State.ONGOING}
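The point of the `update_audit_state` change is that `save()` (a database write plus notification in the real object) is now skipped when the state has not actually changed. A minimal sketch with a hypothetical stand-in `Audit` class:

```python
# Illustrative only: Audit here just counts save() calls instead of hitting a DB.

class Audit:
    def __init__(self, state):
        self.state = state
        self.saves = 0

    def save(self):
        self.saves += 1

def update_audit_state(audit, state):
    if audit.state != state:  # the guard added by this commit
        audit.state = state
        audit.save()

audit = Audit('PENDING')
update_audit_state(audit, 'ONGOING')  # state changes -> one save
update_audit_state(audit, 'ONGOING')  # same state -> no extra write
print(audit.saves)  # 1
```

Without the guard, the second call would issue a redundant write for a no-op transition.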


@@ -62,10 +62,11 @@ class ContinuousAuditHandler(base.AuditHandler):
         if objects.audit.AuditStateTransitionManager().is_inactive(audit):
             # if audit isn't in active states, audit's job must be removed to
             # prevent using of inactive audit in future.
-            [job for job in self.scheduler.get_jobs()
-             if job.name == 'execute_audit' and
-             job.args[0].uuid == audit.uuid][0].remove()
-            return True
+            if self.scheduler.get_jobs():
+                [job for job in self.scheduler.get_jobs()
+                 if job.name == 'execute_audit' and
+                 job.args[0].uuid == audit.uuid][0].remove()
+                return True
         return False
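The guard matters because indexing `[...][0]` on an empty list comprehension raises `IndexError`; checking `get_jobs()` first lets the handler fall through to `return False` when the scheduler holds no jobs. A sketch with hypothetical `Job`/`Scheduler` stand-ins (the real scheduler is APScheduler):

```python
# Illustrative stand-ins; not Watcher's actual scheduler classes.

class Job:
    def __init__(self, name):
        self.name = name
        self.removed = False

    def remove(self):
        self.removed = True

class Scheduler:
    def __init__(self, jobs):
        self._jobs = jobs

    def get_jobs(self):
        return self._jobs

def cancel_audit_job(scheduler):
    # Same shape as the patched code: only index into the list when
    # the scheduler actually has jobs.
    if scheduler.get_jobs():
        [job for job in scheduler.get_jobs()
         if job.name == 'execute_audit'][0].remove()
        return True
    return False

assert cancel_audit_job(Scheduler([])) is False  # no IndexError on empty
job = Job('execute_audit')
assert cancel_audit_job(Scheduler([job])) is True
assert job.removed
```

Note the guard only covers the empty-scheduler case; it assumes that when jobs exist, a matching `execute_audit` job is among them.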


@@ -22,7 +22,8 @@ ThermalOptimization = goals.ThermalOptimization
 Unclassified = goals.Unclassified
 WorkloadBalancing = goals.WorkloadBalancing
 NoisyNeighbor = goals.NoisyNeighborOptimization
+SavingEnergy = goals.SavingEnergy
 
 __all__ = ("Dummy", "ServerConsolidation", "ThermalOptimization",
            "Unclassified", "WorkloadBalancing",
            "NoisyNeighborOptimization", "SavingEnergy")
