Compare commits


110 Commits

Author SHA1 Message Date
OpenStack Proposal Bot
dbff4326e3 Updated from global requirements
Change-Id: I3df338013a22ba11e83929660e1c63d87fb30d01
2017-04-12 13:29:03 +00:00
Jenkins
13a99b4c09 Merge "Prevent the migration of VM with 'optimize' False in VM metadata" 2017-04-12 12:00:05 +00:00
Jenkins
a8994bc227 Merge "Added suspended audit state" 2017-04-12 05:24:01 +00:00
Hidekazu Nakamura
c6845c0136 Added suspended audit state
New audit state SUSPENDED is added in this patch set. If audit
state with continuous mode is changed from ONGOING to SUSPENDED,
audit's job is removed and the audit is not executed.
If audit state changed from SUSPENDED to ONGOING in reverse,
audit is executed again periodically.

Change-Id: I32257f56a40c0352a7c24f3fb80ad95ec28dc614
Implements: blueprint suspended-audit-state
2017-04-11 20:50:24 +09:00
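The transition logic described in the commit above can be sketched in stdlib Python; the `Scheduler` class and function names here are illustrative stand-ins, not Watcher's actual API:

```python
# Hypothetical sketch of the SUSPENDED transition described above;
# Scheduler and on_audit_state_change are illustrative names only.
ONGOING, SUSPENDED = "ONGOING", "SUSPENDED"

class Scheduler:
    """Tracks which audits have a periodic job registered."""
    def __init__(self):
        self.jobs = set()

    def add_job(self, audit_id):
        self.jobs.add(audit_id)

    def remove_job(self, audit_id):
        self.jobs.discard(audit_id)

def on_audit_state_change(scheduler, audit_id, old_state, new_state):
    # ONGOING -> SUSPENDED: drop the periodic job so the audit stops running.
    if old_state == ONGOING and new_state == SUSPENDED:
        scheduler.remove_job(audit_id)
    # SUSPENDED -> ONGOING: re-register the job so the audit runs periodically again.
    elif old_state == SUSPENDED and new_state == ONGOING:
        scheduler.add_job(audit_id)
```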
Jenkins
02c8e7d89c Merge "Add gnocchi support in uniform_airflow strategy" 2017-04-11 10:52:30 +00:00
Jenkins
0db41f1862 Merge "Updated from global requirements" 2017-04-11 01:30:46 +00:00
Santhosh Fernandes
b02cf3bea5 Add gnocchi support in uniform_airflow strategy
This patch adds gnocchi support in uniform_airflow strategy
and adds unit tests corresponding to that change.

Change-Id: I347d0129da94a2fc88229d09297765795c5eeb1a
Partially-Implements: bp gnocchi-watcher
2017-04-10 19:14:37 +05:30
XieYingYun
4eeaa0ab6b Add Apache License Content in index.rst
Add Apache License 2.0 Content which is necessary
for ./releasenotes/source/index.rst.

Change-Id: Ieda2bda1eadc26a1e3420aecd3bee1b64ae3e23f
2017-04-10 18:45:36 +08:00
Jenkins
84d6d4aadd Merge "Optimize the link address" 2017-04-10 08:28:17 +00:00
Jenkins
4511b36496 Merge "Optimize the link address" 2017-04-10 08:22:14 +00:00
Jenkins
9a5c017a9a Merge "correct syntax error" 2017-04-10 08:21:24 +00:00
Jenkins
4690e402ce Merge "Add gnocchi support in workload_balance strategy" 2017-04-10 08:20:21 +00:00
M V P Nitesh
442e569686 Optimize the link address
Use https instead of http to ensure safety.

Change-Id: I41035ccb7b46c3f4ffc54cedb2392bfb64e2ac4c
2017-04-10 11:59:19 +05:30
Yumeng_Bao
5d948e8aa1 correct syntax error
Change-Id: Ied06d39da39b955a90e890156f8c11e329cf864e
2017-04-10 10:57:19 +08:00
OpenStack Proposal Bot
f83a92fc70 Updated from global requirements
Change-Id: I3aee3f27ef116b4cb636b090caad1806e3447744
2017-04-07 14:34:49 +00:00
Jenkins
a06c462050 Merge "Add gnocchi support in VM-Workload-Consolidation strategy" 2017-04-07 13:37:59 +00:00
Jenkins
74e9349c1f Merge "Add gnocchi support in outlet_temp_control strategy" 2017-04-07 12:49:33 +00:00
Jenkins
7c048c761e Merge "Add gnocchi support in workload_stabilization strategy" 2017-04-07 10:12:48 +00:00
Jenkins
4006b4af7a Merge "Run Watcher-API behind mod-wsgi" 2017-04-07 08:42:30 +00:00
XieYingYun
1d05444f67 Optimize the link address
Use https instead of http to ensure safety, so that our account/password
information is not exposed.

Change-Id: I1f164848f164d9694c0cfc802cc3980459bdf12f
2017-04-07 10:55:59 +08:00
Santhosh Fernandes
6de94cca2d Add gnocchi support in outlet_temp_control strategy
This patch adds gnocchi support in outlet_temp_control strategy
and adds unit tests corresponding to that change.
Partially-Implements: bp gnocchi-watcher

Change-Id: I2c2e9a86c470f3402adc3dbb7eb9995c643d5b37
2017-04-05 19:26:54 +05:30
Jenkins
9b5d17b412 Merge "oslo messaging notifications driver update" 2017-04-05 12:37:56 +00:00
Jenkins
719b96f2a8 Merge "fixed syntax error in json" 2017-04-05 12:33:18 +00:00
the.bling
e9f83417eb fixed syntax error in json
Change-Id: I6dc02d53291e42fb1de6811f439f5814c49af769
2017-04-05 16:37:41 +05:30
Yumeng_Bao
a139cca260 Replace py34 with py35
Change-Id: I562e53718ed269545a425cf30c59442ec72566f8
2017-04-05 11:25:01 +08:00
Jenkins
c04d3cc5e0 Merge "Use tox to generate a sample configuration file" 2017-04-05 01:52:41 +00:00
Santhosh Fernandes
e549e43e9e Add gnocchi support in workload_balance strategy
This patch adds gnocchi support in workload_balance strategy
and adds unit tests corresponding to that change.

Change-Id: I9bc56c7b91b5c3fd0cfe97d75c3bace50ab22532
Partially-Implements: bp gnocchi-watcher
2017-04-03 15:52:42 +05:30
Jenkins
c1a2c79514 Merge "Updated from global requirements" 2017-04-03 07:26:01 +00:00
Jenkins
f79cad99cb Merge "Add gnocchi plugin support for devstack" 2017-04-03 07:23:33 +00:00
Santhosh Fernandes
8663c3a5c5 Add gnocchi plugin support for devstack
This patch enables gnocchi plugin for devstack.

Partially-Implements: bp gnocchi-watcher
Closes-Bug: 1662515
Change-Id: I6614ce6999c9681bd6fafc6c85a3755b5ce8e2dd
2017-04-03 09:52:07 +05:30
OpenStack Proposal Bot
19d9b83665 Updated from global requirements
Change-Id: I4b82c80745c1cb54b66a5cb4a7fcecd7fbf964a3
2017-04-01 15:35:28 +00:00
aditi
af22899ebe Run Watcher-API behind mod-wsgi
This patch adds support to run watcher-api
with mod-wsgi. It provides:
1. wsgi app script files, to run watcher-api under apache.
2. an updated devstack plugin to run watcher-api with
   mod-wsgi by default.
3. documentation on deploying watcher-api behind wsgi.

Change-Id: I8f1026f0b09fd774376220c2d117ee66f72421b8
Closes-Bug: #1675376
2017-04-01 19:03:17 +05:30
Yumeng_Bao
fc33e181fc oslo messaging notifications driver update
Replace "messaging" with "messagingv2".

Change-Id: Iab53b5ba8cd58a0655dc35489081fb22d5834363
2017-04-01 18:34:34 +08:00
Yumeng_Bao
a3ee163480 Use tox to generate a sample configuration file
There is no sample configuration file if we don't generate one.

Change-Id: Ibe9005db4fa24b63914fa5f24fe4b144867781cd
2017-04-01 18:26:28 +08:00
Santhosh Fernandes
4642a92e78 Add gnocchi support in VM-Workload-Consolidation strategy
This patch adds gnocchi support in VM-Workload-Consolidation strategy
and adds unit tests corresponding to that change.

Change-Id: I4aab158a6b7c92cb9fe8979bb8bd6338c4686b11
Partially-Implements: bp gnocchi-watcher
2017-03-29 19:33:55 +05:30
OpenStack Proposal Bot
69c53da298 Updated from global requirements
Change-Id: I07bc3672692faec1dab41e98f7cf6b8fab821a2a
2017-03-29 13:44:56 +00:00
Jenkins
49924e1915 Merge "Use HostAddressOpt for opts that accept IP and hostnames" 2017-03-29 12:05:35 +00:00
Santhosh Fernandes
d53abb7af5 Fix for remove verbose option
Commit [1] removed the verbose option, which caused our unit tests to fail.
This patch fixes the issue.

[1] https://review.openstack.org/#/c/444217/

Change-Id: I784a7f855f42de462e8fc8f829f5526e1483dab4
2017-03-29 11:17:43 +05:30
jeremy.zhang
4d3727eafb Use HostAddressOpt for opts that accept IP and hostnames
Some configuration options were accepting both IP addresses
and hostnames. Since there was no specific OSLO opt type to
support this, we were using ``StrOpt``. The change [1] that
added support for ``HostAddressOpt`` type was merged in Ocata
and became available for use with oslo version 3.22.

This patch changes the opt type of configuration options to use
this more relevant opt type - HostAddressOpt.

[1] I77bdb64b7e6e56ce761d76696bc4448a9bd325eb

Change-Id: Idec43189ff8edc539027ba0b0369e54ae883cd2b
2017-03-28 15:22:25 +08:00
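The real change above uses oslo.config's `HostAddressOpt`; as a rough stdlib sketch of the kind of validation that opt type performs (accept either an IP address or a hostname), one could write — the helper name and regex here are illustrative assumptions, not oslo.config code:

```python
import ipaddress
import re

# Rough stdlib sketch of HostAddressOpt-style validation: a value is valid
# if it parses as an IP address or looks like an RFC 1123 hostname.
_HOSTNAME_RE = re.compile(
    r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))*$")

def is_host_address(value):
    try:
        ipaddress.ip_address(value)  # matches 192.0.2.1, ::1, ...
        return True
    except ValueError:
        pass  # not an IP literal; fall through to the hostname check
    return bool(_HOSTNAME_RE.match(value))
```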
Jenkins
4a3c15185c Merge "stale the action plan" 2017-03-27 22:46:30 +00:00
Jenkins
2c6ab9a926 Merge "Add period input parameter to vm workload consolidation and outlet temp control strategy." 2017-03-27 22:46:07 +00:00
Santhosh Fernandes
0750b93827 Add gnocchi support in workload_stabilization strategy
This patch adds gnocchi support in workload_stabilization strategy
and adds unit tests corresponding to that change.

Change-Id: I96bd758962bbf67d60e19a99a19451fb80e447b2
Partially-Implements: bp gnocchi-watcher
2017-03-27 23:56:41 +05:30
Pradeep Kumar Singh
a2cb02a861 Prevent the migration of VM with 'optimize' False in VM metadata
This patch adds the functionality to filter out VMs which have
metadata field 'optimize' set to False. This patch implements the
functionality for basic_consolidation strategy.

Change-Id: Iaf7b63e09534e4a67406e7f092242558b78c0bde
Partially-Implements: BP audit-tag-vm-metadata
2017-03-27 10:17:37 +00:00
Santhosh Fernandes
377889859d Add period input parameter to vm workload consolidation and
outlet temp control strategy.

Closes-Bug: #1614021
Change-Id: Iec975e4a4a39168a65ae89ca75a5ca9445c14f9d
2017-03-27 15:06:36 +05:30
Jenkins
54f0758fc3 Merge "Remove log translations" 2017-03-27 08:36:22 +00:00
Jenkins
a644600a18 Merge "Add endpoint_type option for openstack clients." 2017-03-27 07:49:04 +00:00
Jenkins
ca3d367ac7 Merge "Updated from global requirements" 2017-03-27 00:53:08 +00:00
Margarita Shakhova
cde60d2ead Add endpoint_type option for openstack clients.
Interface type 'internalURL' is used as the default value.

Change-Id: Ia1acbfbfd2a1eecd85e5aa1d2e19665d411c4c58
Closes-Bug: #1671405
2017-03-24 21:12:11 +00:00
Jenkins
f106076d70 Merge "Imported Translations from Zanata" 2017-03-24 20:03:43 +00:00
OpenStack Proposal Bot
e75dbfd776 Updated from global requirements
Change-Id: I74bfd82a835c43d94be6279adbe582f2a0bfa302
2017-03-24 16:38:30 +00:00
Santhosh Fernandes
18aa50c58e Add gnocchi support in basic_consolidation strategy
This patch adds gnocchi support in basic_consolidation strategy
and adds unit tests corresponding to that change.

Change-Id: Ia1ee55fca8eadffbd244c0247577805b6856369d
Partially-Implements: bp gnocchi-watcher
2017-03-24 09:09:42 +00:00
Jenkins
2c2120526c Merge "Add Gnocchi datasource" 2017-03-24 09:00:33 +00:00
Jenkins
9e505d3d36 Merge "exception when running 'watcher service list'" 2017-03-24 08:17:51 +00:00
OpenStack Proposal Bot
54ce5f796f Imported Translations from Zanata
For more information about this automatic import see:
http://docs.openstack.org/developer/i18n/reviewing-translation-import.html

Change-Id: I3090e188c8506781fe7260cb99727fab02df1044
2017-03-24 07:41:20 +00:00
yanxubin
f605888e32 Remove log translations
Log messages are no longer being translated. This removes all use of
the _LE, _LI, and _LW translation markers to simplify logging and to
avoid confusion with new contributions.

See:
http://lists.openstack.org/pipermail/openstack-i18n/2016-November/002574.html
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113365.html

Change-Id: I3552767976807a9851af69b1fa4f86ac25943025
2017-03-24 09:46:19 +08:00
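The convention after the change above can be shown with a small sketch; the logger name and function below are hypothetical examples, not lines from the actual patch:

```python
import logging

# After removing translation markers, messages go to the logger directly,
# with no _LE/_LI/_LW wrappers around the format string.
LOG = logging.getLogger("watcher.example")

def report_failure(node):
    # Before: LOG.error(_LE("Failed to migrate from node %s"), node)
    # After: a plain string; %-interpolation is left to the logging module.
    LOG.error("Failed to migrate from node %s", node)
```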
Alexander Chadin
0b213a8734 Add Gnocchi datasource
This patch set adds Gnocchi datasource support

Implements: blueprint gnocchi-watcher

Change-Id: I41653149435fd355548071de855586004371c4fc
2017-03-23 13:54:03 +03:00
licanwei
38a3cbc84a exception when running 'watcher service list'
We have two controllers in HA in our OpenStack environment.
Each controller runs watcher-applier and watcher-decision-engine,
so there are two entries with the same name in the services table.
In this case, objects.Service.get_by_name(context, name)
raises a MultipleResultsFound exception.
We should use objects.Service.get(context, id) instead of
objects.Service.get_by_name(context, name).

Change-Id: Ic3ce784590d6c2a648cb3b28299744deed281332
Closes-Bug: #1674196
2017-03-22 12:12:31 +08:00
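The ambiguity fixed above can be sketched with stdlib stand-ins; these helpers and the list-of-dicts "table" are illustrative, not Watcher's objects layer:

```python
# Illustrative stand-ins for the lookup behaviour described above.
class MultipleResultsFound(Exception):
    pass

def get_by_name(services, name):
    matches = [s for s in services if s["name"] == name]
    if len(matches) > 1:
        # Two HA controllers each register the same service names,
        # so a name lookup can be ambiguous.
        raise MultipleResultsFound(name)
    return matches[0]

def get(services, service_id):
    # IDs are unique, so lookup by id is always unambiguous.
    return next(s for s in services if s["id"] == service_id)
```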
Jenkins
eb038e4af0 Merge "set eager=True for actionplan.list" 2017-03-21 10:19:41 +00:00
Jenkins
b5eccceaed Merge "Remove old oslo.messaging transport aliases" 2017-03-21 09:24:23 +00:00
Jenkins
f7b655b712 Merge "Local copy of scenario test base class" 2017-03-20 13:10:33 +00:00
ChangBo Guo(gcb)
1386ce690f Remove old oslo.messaging transport aliases
Those are remnants from the oslo-incubator times. Also, oslo.messaging
deprecated [1] transport aliases in 5.2.0, which is the minimum
version supported for stable/newton. The patch that bumped the minimum
version for Watcher landed more than 3 months ago, so we can proceed with
ripping those aliases out of the code base.

[1] I314cefa5fb1803fa7e21e3e34300e5ced31bba89

Change-Id: Ie3008cc54b0eb3d1d02f55f388bd1c3b109d126d
Closes-Bug: #1424728
2017-03-20 13:23:45 +08:00
licanwei
38e4b48d70 stale the action plan
Check the creation time of the action plan,
and set its state to SUPERSEDED if it has expired.

Change-Id: I900e8dc5011dec4cffd58913b9c5083a6131d70d
Implements: blueprint stale-action-plan
2017-03-18 13:46:34 +08:00
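A minimal sketch of the expiry check described above; the threshold name, default value, and the dict-shaped action plan are assumptions for illustration, not Watcher's actual model:

```python
import datetime

SUPERSEDED = "SUPERSEDED"

def check_expired(action_plan, now, max_age_hours=24):
    """Mark an action plan SUPERSEDED once it is older than the threshold."""
    age = now - action_plan["created_at"]
    if age > datetime.timedelta(hours=max_age_hours):
        action_plan["state"] = SUPERSEDED
    return action_plan["state"]
```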
Andrea Frittoli
56ca542bef Local copy of scenario test base class
The scenario tests base class from Tempest is not a stable interface
and it's going to be refactored on Tempest side, as notified in

http://lists.openstack.org/pipermail/openstack-dev/2017-February/112938.html

Maintain a local copy of the base class, taken from Tempest with head of
master at c5f1064759fe6c75a4bc5dc251ed1661845936cb.

Change-Id: Idfa5ebe18c794c51e406156fb120d128478d4f1e
2017-03-17 17:31:46 +00:00
Jenkins
dafe2ad04b Merge "Use https instead of http" 2017-03-17 09:04:48 +00:00
licanwei
6044b04d33 set eager=True for actionplan.list
actionplan.save() sends a send_update notification.
This notification needs eagerly loaded objects,
so we should set eager=True for actionplan.list.

Change-Id: Iafe35b9782fb0cc52ba5121c155f62c61ef70e1f
Closes-Bug: #1673679
2017-03-17 16:16:26 +08:00
Yumeng_Bao
6d81ac15b8 Use https instead of http
Currently http is used to access git, but http is not safe enough
since the access information contains our account/password information.
Use https instead of http to ensure safety.

For other services such as pypi or python, it is also better to use
https.

Change-Id: I706a4a1873c6bbc05385057757fc5962344f9371
2017-03-17 07:44:27 +00:00
OpenStack Proposal Bot
db077e872e Updated from global requirements
Change-Id: Id1f3bee0c6cfb5621a7c23d7865247d6ef4217d1
2017-03-16 18:24:19 +00:00
OpenStack Proposal Bot
51bf7fedb6 Updated from global requirements
Change-Id: Id125d2e27aba257fb026491fc505245f4d7555ba
2017-03-15 12:54:16 +00:00
OpenStack Proposal Bot
bf0fd48659 Updated from global requirements
Change-Id: Id458895286cdaffe66676b68a4f4fd7169c24347
2017-03-15 05:21:56 +00:00
Jenkins
ba98c88303 Merge "Add Action Notification" 2017-03-08 09:58:46 +00:00
OpenStack Proposal Bot
77b406748c Updated from global requirements
Change-Id: I35c23139feb449af69470f8f9e90f82dda516585
2017-03-07 02:08:03 +00:00
Jenkins
d21198da8f Merge "[Fix gate]Update test requirement" 2017-03-06 03:30:47 +00:00
Jenkins
12a7b7171b Merge "Add Apache License content in conf.py file" 2017-03-04 23:16:39 +00:00
Jenkins
d45ab13619 Merge "Adding instance metadata into cluster data model" 2017-03-04 01:54:03 +00:00
Prudhvi Rao Shedimbi
e2d2fc6227 Adding instance metadata into cluster data model
This patch adds instance metadata in the cluster data model. This
is needed for Noisy Neighbor strategy.

Change-Id: Ia92a9f97ba1457ba844cc37a4d443ca4354069e3
2017-03-03 14:52:42 +00:00
yuhui_inspur
3c564ee3d8 Add Apache License content in conf.py file
Change-Id: I84478f02bc7f04ef57334a738fa0b1f3ca0cac45
2017-03-02 23:27:32 -08:00
ricolin
f9ce21a9a9 [Fix gate]Update test requirement
Since pbr has already landed and the old version of hacking does not
seem to work well with pbr>=2, we should update it to match the global
requirement.
Partial-Bug: #1668848

Change-Id: I5de155e6ff255f4ae65deff991cff754f5777a8d
2017-03-03 11:43:53 +08:00
Jenkins
03a2c0142a Merge "Optimize audit process" 2017-03-02 23:25:39 +00:00
Hidekazu Nakamura
5afcf7a7f4 Remove unused PNG files in image_src directory
This patch removes unused PNG files in image_src directory.

Change-Id: Ia65ae07932d238277731a26fddb798b92d06e958
2017-03-01 10:26:14 +09:00
Jenkins
63faf4695e Merge "Fix no endpoints of ceilometer in devstack environment setup." 2017-02-28 01:55:53 +00:00
Jenkins
97800d1553 Merge "Switch to use test_utils.call_until_true" 2017-02-27 23:40:58 +00:00
OpenStack Proposal Bot
8f85169c15 Updated from global requirements
Change-Id: I693764145996f4c941de4b129a73c36e0db839d6
2017-02-27 01:21:44 +00:00
ericxiett
68e4bc4d87 Fix no endpoints of ceilometer in devstack environment setup.
There are no endpoints for `ceilometer` with
devstack/local.conf.controller. The service `ceilometer-api` should
be enabled explicitly.

Change-Id: I2218a98182001bef65fbc17ae305cfadf341930e
Closes-Bug: #1667678
2017-02-24 21:40:19 +08:00
ericxiett
9e7f7f54f3 Fix some typos in vm_workload_consolidation.py.
Change-Id: I1da1ed89f3e278af05d227d5011c8984218c026f
2017-02-23 16:40:00 +08:00
licanwei
fceab5299b Optimize audit process
In the current audit process, after executing the strategy, we check
whether there is a currently running action plan, and if so, the new
action plan is set to SUPERSEDED.
We can optimize the process by performing this check in pre_execute():
if any action plan is running, no further processing is performed.

Change-Id: I7377b53a2374b1dc177d256a0f800a86b1a2a16b
Closes-Bug: #1663150
2017-02-17 15:01:05 +08:00
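The reordering described above can be sketched as follows; `pre_execute` and `run_audit` here are hypothetical simplifications of the audit flow, not Watcher's actual interfaces:

```python
# Sketch of the optimized flow: do the "is an action plan already
# running?" check in pre_execute(), before the (expensive) strategy runs.
def pre_execute(running_action_plans):
    # Returns False to short-circuit the audit when a plan is in flight.
    return not running_action_plans

def run_audit(running_action_plans, strategy):
    if not pre_execute(running_action_plans):
        return None  # skip strategy execution entirely
    return strategy()
```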
Jenkins
dddbb63633 Merge "Adding additional details to notification logs" 2017-02-16 13:25:14 +00:00
Jenkins
b788dfab71 Merge "Fix spelling error in NotificationEndpoint classes." 2017-02-16 13:24:42 +00:00
Jenkins
7824b41e12 Merge "Add SUPERSEDED description" 2017-02-16 13:23:47 +00:00
Jenkins
86ded4d952 Merge "Reactivate watcher dashboard plugin in devstack/local.conf.controller" 2017-02-16 13:04:28 +00:00
Jenkins
5d6e731c42 Merge "Fix that remove 'strategy' attribute does not work." 2017-02-15 14:09:08 +00:00
Jenkins
cba6713bdc Merge "Add checking audit state" 2017-02-15 14:08:59 +00:00
Yumeng_Bao
24ab0469ec Reactivate watcher dashboard plugin in devstack/local.conf.controller
Since the watcher dashboard can now be successfully installed by
devstack, we should enable this again. Many of us get the local.conf
from here, so this change is necessary: we can enable the watcher
dashboard plugin by default.

Change-Id: Iad5081a97515b3f831d2f468dc514a942e6d3420
2017-02-15 17:32:47 +08:00
Jenkins
4d71bb112c Merge "Remove support for py34" 2017-02-14 10:28:34 +00:00
licanwei
fd374d1d30 Add SUPERSEDED description
Add SUPERSEDED description

Change-Id: I05dc12451ecde338d94f99be522a38e7fb042528
2017-02-13 12:23:32 +08:00
Alexander Chadin
25789c9c5a Add Action Notification
This patch set adds the following action notifications:

- action.create
- action.update
- action.delete
- action.execution.start
- action.execution.end
- action.execution.error

Partially Implements: blueprint action-versioned-notifications-api

Change-Id: If0bc25bfb7cb1bff3bfa2c5d5fb9ad48b0794168
2017-02-10 11:43:35 +03:00
Ken'ichi Ohmichi
a9b3534e97 Switch to use test_utils.call_until_true
test.call_until_true has been deprecated since Newton on Tempest side,
and now Tempest provides test_utils.call_until_true as the stable
library method. So this patch switches to use the stable method before
removing old test.call_until_true on Tempest side.

Change-Id: Iba2130aca93c8e6bccb4f8ed169424c791ebc127
Needed-by: Ide11a7434a4714e5d2211af1803333535f557370
2017-02-09 10:50:11 -08:00
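As a rough stdlib sketch of what tempest's `test_utils.call_until_true` provides (poll a predicate until it returns True or a duration elapses) — this is an illustrative reimplementation, not tempest's code:

```python
import time

def call_until_true(func, duration, sleep_for):
    """Call func repeatedly until it returns True or duration elapses."""
    now = time.time()
    deadline = now + duration
    while now < deadline:
        if func():
            return True
        time.sleep(sleep_for)  # back off before polling again
        now = time.time()
    return False
```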
Chris Spencer
f80c0c732a Adding additional details to notification logs
Improves tracking of independent workflows. A log adapter
was added to provide an easy means of prepending publisher
ID and event type information.

Change-Id: I5d2d8a369f99497b05c2a683989e656554d01b4f
Closes-Bug: 1642623
2017-02-09 11:26:53 -07:00
Hidekazu Nakamura
0d83354c57 Add checking audit state
This patch adds checking audit state when updating an existing audit
in accordance with audit state machine.

Closes-Bug: #1662406

Change-Id: I20610c83169b77f141974a5cebe33818a4bf0728
2017-02-09 14:23:14 +09:00
licanwei
67d44eb118 Fix the mapping between the instance and the node
The arguments to the add_edge function should be instance.uuid
and node.uuid, not instance and node.

Change-Id: Ida694f9158d3eb26e7f31062a18844472ea3c6fa
Closes-Bug: #1662810
2017-02-08 17:41:47 +08:00
Cao Xuan Hoang
8c1757f86d Remove support for py34
The gating on python 3.4 is restricted to <= Mitaka. This is due to
the change from Ubuntu Trusty to Xenial, where only python3.5 is
available. There is no need to continue to keep these settings.

Change-Id: I3b3f0b08f6f27322b8a9d99eb25984ccd6bfe7a6
2017-02-08 11:09:30 +07:00
ericxiett
e55c73be0e Fix that remove 'strategy' attribute does not work.
The 'strategy_id' attribute is not exposed to the API, but the
'PATCH' API always updates this attribute, so the change does not
work when the 'remove' op is chosen.
This patch sets a value for the 'strategy_id' attribute.

Change-Id: I1597fb5d4985bb8271ad3cea7ea5f0adb7de65f4
Closes-Bug: #1662395
2017-02-08 10:57:26 +08:00
Chris Spencer
04c9e0362e Fix spelling error in NotificationEndpoint classes.
Change-Id: I47dc2d73b8e7c4adaa9622de932c0d8abcd17d87
2017-02-07 15:36:50 -07:00
Jenkins
8ceb710b59 Merge "Fix incorrect auto trigger flag" 2017-02-07 16:04:33 +00:00
Hidekazu Nakamura
58711aaaec Fix log level error to warning
When an action plan is currently running, a new action plan is set to
SUPERSEDED and an error log is reported. This patch changes the log
level from error to warning.

Change-Id: I931218843d8f09340bd5363256164807d514446b
Closes-Bug: #1662450
2017-02-07 17:48:56 +09:00
licanwei
3ad5261d2a Fix incorrect auto trigger flag
'watcher audit list' returns an incorrect auto trigger flag:
the auto_trigger field is incorrectly set to False.

Change-Id: Iba4a0bda1acf18cbfde6f1dcdb0985a4c3f7b5bb
Closes-Bug: #1662051
2017-02-06 14:42:44 +08:00
Jenkins
26c7726c00 Merge "Using items() instead of six.iteritems()" 2017-02-03 09:33:24 +00:00
zhuzeyu
5b2cdb53a8 Using items() instead of six.iteritems()
We should not use six.iteritems(); see the following doc:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/066391.html

Change-Id: I32aff3ad0cc936c6b623db313b6f6a0790cbc7fb
2017-02-03 17:01:40 +08:00
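The pattern replaced by the change above is straightforward: on Python 3, `dict.items()` already returns a lazy view, so wrapping with `six.iteritems()` is unnecessary. The dict and helper below are illustrative examples, not code from the patch:

```python
d = {"cpu_util": 0.4, "memory.resident": 512}

# Before: for name, value in six.iteritems(d): ...
# After: plain items(), which is a lazy view on Python 3.
def summarize(metrics):
    return sorted("%s=%s" % (name, value) for name, value in metrics.items())
```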
Jenkins
4b5dce51dc Merge "Use RPC cast() to be asynchronous" 2017-02-03 08:48:56 +00:00
OpenStack Release Bot
b6a96e04aa Update reno for stable/ocata
Change-Id: I6c75072180fcbd449d9049ae1f2258a53a236ebd
2017-02-02 18:23:37 +00:00
Vincent Françoise
65f9646eae Use RPC cast() to be asynchronous
Change-Id: I54814dc37a79eb06386923f946d85a67894c7646
2017-02-01 14:40:10 +01:00
147 changed files with 4023 additions and 1359 deletions


@@ -1,13 +1,13 @@
 If you would like to contribute to the development of OpenStack,
 you must follow the steps in this page:
-http://docs.openstack.org/infra/manual/developers.html
+https://docs.openstack.org/infra/manual/developers.html
 Once those steps have been completed, changes to OpenStack
 should be submitted for review via the Gerrit tool, following
 the workflow documented at:
-http://docs.openstack.org/infra/manual/developers.html#development-workflow
+https://docs.openstack.org/infra/manual/developers.html#development-workflow
 Pull requests submitted through GitHub will be ignored.


@@ -8,4 +8,4 @@
 watcher Style Commandments
 ==========================
-Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/
+Read the OpenStack Style Commandments https://docs.openstack.org/developer/hacking/


@@ -2,8 +2,8 @@
 Team and repository tags
 ========================
-.. image:: http://governance.openstack.org/badges/watcher.svg
-    :target: http://governance.openstack.org/reference/tags/index.html
+.. image:: https://governance.openstack.org/badges/watcher.svg
+    :target: https://governance.openstack.org/reference/tags/index.html
 .. Change things from this point on
@@ -25,7 +25,7 @@ operating costs, increased system performance via intelligent virtual machine
 migration, increased energy efficiency-and more!
 * Free software: Apache license
-* Wiki: http://wiki.openstack.org/wiki/Watcher
+* Wiki: https://wiki.openstack.org/wiki/Watcher
 * Source: https://github.com/openstack/watcher
-* Bugs: http://bugs.launchpad.net/watcher
-* Documentation: http://docs.openstack.org/developer/watcher/
+* Bugs: https://bugs.launchpad.net/watcher
+* Documentation: https://docs.openstack.org/developer/watcher/


@@ -0,0 +1,42 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This is an example Apache2 configuration file for using the
# Watcher API through mod_wsgi. This version assumes you are
# running devstack to configure the software.
Listen %WATCHER_SERVICE_PORT%
<VirtualHost *:%WATCHER_SERVICE_PORT%>
WSGIDaemonProcess watcher-api user=%USER% processes=%APIWORKERS% threads=1 display-name=%{GROUP}
WSGIScriptAlias / %WATCHER_WSGI_DIR%/app.wsgi
WSGIApplicationGroup %{GLOBAL}
WSGIProcessGroup watcher-api
WSGIPassAuthorization On
ErrorLogFormat "%M"
ErrorLog /var/log/%APACHE_NAME%/watcher-api.log
CustomLog /var/log/%APACHE_NAME%/watcher-api-access.log combined
<Directory %WATCHER_WSGI_DIR%>
WSGIProcessGroup watcher-api
WSGIApplicationGroup %{GLOBAL}
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>


@@ -44,6 +44,9 @@ WATCHER_CONF_DIR=/etc/watcher
WATCHER_CONF=$WATCHER_CONF_DIR/watcher.conf
WATCHER_POLICY_JSON=$WATCHER_CONF_DIR/policy.json
WATCHER_DEVSTACK_DIR=$WATCHER_DIR/devstack
WATCHER_DEVSTACK_FILES_DIR=$WATCHER_DEVSTACK_DIR/files
NOVA_CONF_DIR=/etc/nova
NOVA_CONF=$NOVA_CONF_DIR/nova.conf
@@ -51,6 +54,13 @@ if is_ssl_enabled_service "watcher" || is_service_enabled tls-proxy; then
WATCHER_SERVICE_PROTOCOL="https"
fi
WATCHER_USE_MOD_WSGI=$(trueorfalse TRUE WATCHER_USE_MOD_WSGI)
if is_suse; then
WATCHER_WSGI_DIR=${WATCHER_WSGI_DIR:-/srv/www/htdocs/watcher}
else
WATCHER_WSGI_DIR=${WATCHER_WSGI_DIR:-/var/www/watcher}
fi
# Public facing bits
WATCHER_SERVICE_HOST=${WATCHER_SERVICE_HOST:-$HOST_IP}
WATCHER_SERVICE_PORT=${WATCHER_SERVICE_PORT:-9322}
@@ -74,10 +84,21 @@ function is_watcher_enabled {
return 1
}
#_cleanup_watcher_apache_wsgi - Remove wsgi files,
#disable and remove apache vhost file
function _cleanup_watcher_apache_wsgi {
sudo rm -rf $WATCHER_WSGI_DIR
sudo rm -f $(apache_site_config_for watcher-api)
restart_apache_server
}
# cleanup_watcher() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_watcher {
sudo rm -rf $WATCHER_STATE_PATH $WATCHER_AUTH_CACHE_DIR
if [[ "$WATCHER_USE_MOD_WSGI" == "True" ]]; then
_cleanup_watcher_apache_wsgi
fi
}
# configure_watcher() - Set config files, create data dirs, etc
@@ -108,6 +129,28 @@ function create_watcher_accounts {
"$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT"
}
# _config_watcher_apache_wsgi() - Set WSGI config files of watcher
function _config_watcher_apache_wsgi {
local watcher_apache_conf
if [[ "$WATCHER_USE_MOD_WSGI" == "True" ]]; then
sudo mkdir -p $WATCHER_WSGI_DIR
sudo cp $WATCHER_DIR/watcher/api/app.wsgi $WATCHER_WSGI_DIR/app.wsgi
watcher_apache_conf=$(apache_site_config_for watcher-api)
sudo cp $WATCHER_DEVSTACK_FILES_DIR/apache-watcher-api.template $watcher_apache_conf
sudo sed -e "
s|%WATCHER_SERVICE_PORT%|$WATCHER_SERVICE_PORT|g;
s|%WATCHER_WSGI_DIR%|$WATCHER_WSGI_DIR|g;
s|%USER%|$STACK_USER|g;
s|%APIWORKERS%|$API_WORKERS|g;
s|%APACHE_NAME%|$APACHE_NAME|g;
" -i $watcher_apache_conf
enable_apache_site watcher-api
tail_log watcher-access /var/log/$APACHE_NAME/watcher-api-access.log
tail_log watcher-api /var/log/$APACHE_NAME/watcher-api.log
fi
}
# create_watcher_conf() - Create a new watcher.conf file
function create_watcher_conf {
# (Re)create ``watcher.conf``
@@ -126,7 +169,7 @@ function create_watcher_conf {
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_host $RABBIT_HOST
-    iniset $WATCHER_CONF oslo_messaging_notifications driver "messaging"
+    iniset $WATCHER_CONF oslo_messaging_notifications driver "messagingv2"
iniset $NOVA_CONF oslo_messaging_notifications topics "notifications,watcher_notifications"
iniset $NOVA_CONF notifications notify_on_state_change "vm_and_task_state"
@@ -154,9 +197,13 @@ function create_watcher_conf {
setup_colorized_logging $WATCHER_CONF DEFAULT
else
# Show user_name and project_name instead of user_id and project_id
-        iniset $WATCHER_CONF DEFAULT logging_context_format_string "%(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s"
+        iniset $WATCHER_CONF DEFAULT logging_context_format_string "%(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(project_domain)s %(user_name)s %(project_name)s] %(instance)s%(message)s"
fi
#config apache files
if [[ "$WATCHER_USE_MOD_WSGI" == "True" ]]; then
_config_watcher_apache_wsgi
fi
# Register SSL certificates if provided
if is_ssl_enabled_service watcher; then
ensure_certificates WATCHER
@@ -205,19 +252,26 @@ function install_watcherclient {
function install_watcher {
git_clone $WATCHER_REPO $WATCHER_DIR $WATCHER_BRANCH
setup_develop $WATCHER_DIR
if [[ "$WATCHER_USE_MOD_WSGI" == "True" ]]; then
install_apache_wsgi
fi
}
# start_watcher_api() - Start the API process ahead of other things
function start_watcher_api {
# Get right service port for testing
local service_port=$WATCHER_SERVICE_PORT
local service_protocol=$WATCHER_SERVICE_PROTOCOL
if is_service_enabled tls-proxy; then
service_port=$WATCHER_SERVICE_PORT_INT
service_protocol="http"
fi
run_process watcher-api "$WATCHER_BIN_DIR/watcher-api --config-file $WATCHER_CONF"
if [[ "$WATCHER_USE_MOD_WSGI" == "True" ]]; then
restart_apache_server
else
run_process watcher-api "$WATCHER_BIN_DIR/watcher-api --config-file $WATCHER_CONF"
fi
echo "Waiting for watcher-api to start..."
if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$WATCHER_SERVICE_HOST:$service_port; then
die $LINENO "watcher-api did not start"
@@ -240,7 +294,12 @@ function start_watcher {
# stop_watcher() - Stop running processes (non-screen)
function stop_watcher {
for serv in watcher-api watcher-decision-engine watcher-applier; do
if [[ "$WATCHER_USE_MOD_WSGI" == "True" ]]; then
disable_apache_site watcher-api
else
stop_process watcher-api
fi
for serv in watcher-decision-engine watcher-applier; do
stop_process $serv
done
}


@@ -17,6 +17,10 @@ NETWORK_GATEWAY=10.254.1.1 # Change this for your network
MULTI_HOST=1
#Set this to FALSE if do not want to run watcher-api behind mod-wsgi
#WATCHER_USE_MOD_WSGI=TRUE
# This is the controller node, so disable nova-compute
disable_service n-cpu
@@ -28,7 +32,7 @@ ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3,neutron
enable_service n-cauth
# Enable the Watcher Dashboard plugin
-# enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard
+enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard
# Enable the Watcher plugin
enable_plugin watcher git://git.openstack.org/openstack/watcher
@@ -38,6 +42,11 @@ enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
# This is the controller node, so disable the ceilometer compute agent
disable_service ceilometer-acompute
# Enable the ceilometer api explicitly(bug:1667678)
enable_service ceilometer-api
# Enable the Gnocchi plugin
enable_plugin gnocchi https://git.openstack.org/openstack/gnocchi
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2


@@ -0,0 +1,40 @@
{
"priority": "INFO",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionCreatePayload",
"watcher_object.data": {
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"input_parameters": {
"param2": 2,
"param1": 1
},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "PENDING",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "TerseActionPlanPayload",
"watcher_object.data": {
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"global_efficacy": {},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null
}
},
"parents": [],
"action_type": "nop",
"deleted_at": null
}
},
"publisher_id": "infra-optim:node0",
"timestamp": "2017-01-01 00:00:00.000000",
"event_type": "action.create",
"message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}
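The versioned-object samples above use dotted ``watcher_object.*`` strings as literal JSON keys, so consumers unpack them with plain dict lookups rather than attribute access. A minimal sketch, using a trimmed-down copy of the action.create sample (field names taken from the payload above):

```python
import json

# Trimmed copy of the action.create notification sample shown above.
sample = """
{
  "event_type": "action.create",
  "payload": {
    "watcher_object.name": "ActionCreatePayload",
    "watcher_object.data": {
      "state": "PENDING",
      "action_type": "nop",
      "uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d"
    }
  }
}
"""

msg = json.loads(sample)
# "watcher_object.data" is a single literal key, not a nested path.
data = msg["payload"]["watcher_object.data"]
print(msg["event_type"], data["state"], data["action_type"])
# action.create PENDING nop
```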


@@ -0,0 +1,40 @@
{
"priority": "INFO",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionDeletePayload",
"watcher_object.data": {
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"input_parameters": {
"param2": 2,
"param1": 1
},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "DELETED",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "TerseActionPlanPayload",
"watcher_object.data": {
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"global_efficacy": {},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null
}
},
"parents": [],
"action_type": "nop",
"deleted_at": null
}
},
"publisher_id": "infra-optim:node0",
"timestamp": "2017-01-01 00:00:00.000000",
"event_type": "action.delete",
"message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}


@@ -0,0 +1,41 @@
{
"priority": "INFO",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionExecutionPayload",
"watcher_object.data": {
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"input_parameters": {
"param2": 2,
"param1": 1
},
"fault": null,
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "SUCCEEDED",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "TerseActionPlanPayload",
"watcher_object.data": {
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"global_efficacy": {},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null
}
},
"parents": [],
"action_type": "nop",
"deleted_at": null
}
},
"event_type": "action.execution.end",
"publisher_id": "infra-optim:node0",
"timestamp": "2017-01-01 00:00:00.000000",
"message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}


@@ -0,0 +1,51 @@
{
"priority": "ERROR",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionExecutionPayload",
"watcher_object.data": {
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"input_parameters": {
"param2": 2,
"param1": 1
},
"fault": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ExceptionPayload",
"watcher_object.data": {
"module_name": "watcher.tests.notifications.test_action_notification",
"exception": "WatcherException",
"exception_message": "TEST",
"function_name": "test_send_action_execution_with_error"
}
},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "FAILED",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "TerseActionPlanPayload",
"watcher_object.data": {
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"global_efficacy": {},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null
}
},
"parents": [],
"action_type": "nop",
"deleted_at": null
}
},
"event_type": "action.execution.error",
"publisher_id": "infra-optim:node0",
"timestamp": "2017-01-01 00:00:00.000000",
"message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}


@@ -0,0 +1,41 @@
{
"priority": "INFO",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionExecutionPayload",
"watcher_object.data": {
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"input_parameters": {
"param2": 2,
"param1": 1
},
"fault": null,
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "TerseActionPlanPayload",
"watcher_object.data": {
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"global_efficacy": {},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null
}
},
"parents": [],
"action_type": "nop",
"deleted_at": null
}
},
"event_type": "action.execution.start",
"publisher_id": "infra-optim:node0",
"timestamp": "2017-01-01 00:00:00.000000",
"message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}


@@ -0,0 +1,49 @@
{
"priority": "INFO",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionUpdatePayload",
"watcher_object.data": {
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"input_parameters": {
"param2": 2,
"param1": 1
},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state_update": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionStateUpdatePayload",
"watcher_object.data": {
"old_state": "PENDING",
"state": "ONGOING"
}
},
"state": "ONGOING",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "TerseActionPlanPayload",
"watcher_object.data": {
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"global_efficacy": {},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null
}
},
"parents": [],
"action_type": "nop",
"deleted_at": null
}
},
"event_type": "action.update",
"publisher_id": "infra-optim:node0",
"timestamp": "2017-01-01 00:00:00.000000",
"message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}


@@ -407,6 +407,9 @@ be one of the following:
- **CANCELLED** : the :ref:`Audit <audit_definition>` was in **PENDING** or
**ONGOING** state and was cancelled by the
:ref:`Administrator <administrator_definition>`
- **SUSPENDED** : the :ref:`Audit <audit_definition>` was in **ONGOING**
state and was suspended by the
:ref:`Administrator <administrator_definition>`
The following diagram shows the different possible states of an
:ref:`Audit <audit_definition>` and what event makes the state change to a new state.


@@ -0,0 +1,49 @@
..
Except where otherwise noted, this document is licensed under Creative
Commons Attribution 3.0 License. You can view the license at:
https://creativecommons.org/licenses/by/3.0/
Installing API behind mod_wsgi
==============================
#. Install the Apache Service::
Fedora 21/RHEL7/CentOS7:
sudo yum install httpd
Fedora 22 (or higher):
sudo dnf install httpd
Debian/Ubuntu:
apt-get install apache2
#. Copy ``etc/apache2/watcher.conf`` under the apache sites::
Fedora/RHEL7/CentOS7:
sudo cp etc/apache2/watcher /etc/httpd/conf.d/watcher.conf
Debian/Ubuntu:
sudo cp etc/apache2/watcher /etc/apache2/sites-available/watcher.conf
#. Edit ``<apache-configuration-dir>/watcher.conf`` according to your
installation and environment.
* Modify the ``WSGIDaemonProcess`` directive to set the ``user`` and
``group`` values to an appropriate user on your server.
* Modify the ``WSGIScriptAlias`` directive to point to the
watcher/api/app.wsgi script.
* Modify the ``Directory`` directive to set the path to the Watcher API
code.
* Modify the ``ErrorLog`` and ``CustomLog`` directives to redirect the logs
to the right directory.
#. Enable the apache watcher site and reload::
Fedora/RHEL7/CentOS7:
sudo systemctl reload httpd
Debian/Ubuntu:
sudo a2ensite watcher
sudo service apache2 reload


@@ -18,14 +18,14 @@ The source install instructions specifically avoid using platform specific
packages, instead using the source for the code and the Python Package Index
(PyPi_).
.. _PyPi: http://pypi.python.org/pypi
.. _PyPi: https://pypi.python.org/pypi
It's expected that your system already has python2.7_, the latest version of
pip_, and git_ available.
.. _python2.7: http://www.python.org
.. _pip: http://www.pip-installer.org/en/latest/installing.html
.. _git: http://git-scm.com/
.. _python2.7: https://www.python.org
.. _pip: https://pip.pypa.io/en/latest/installing/
.. _git: https://git-scm.com/
Your system will also need some additional system libraries:


@@ -92,6 +92,12 @@ Detailed DevStack Instructions
Note: if you want to use a specific branch, specify WATCHER_BRANCH in the
local.conf file. By default it will use the master branch.
Note: watcher-api runs under apache/httpd by default; set the variable
WATCHER_USE_MOD_WSGI=FALSE if you do not wish to run it under apache/httpd.
For a development environment it is suggested to set WATCHER_USE_MOD_WSGI
to FALSE. For a production environment it is suggested to keep the default
value of TRUE.
#. Start stacking from the controller node::
./devstack/stack.sh


@@ -16,8 +16,8 @@ for development purposes.
To install Watcher from packaging, refer instead to Watcher `User
Documentation`_.
.. _`Git Repository`: http://git.openstack.org/cgit/openstack/watcher
.. _`User Documentation`: http://docs.openstack.org/developer/watcher/
.. _`Git Repository`: https://git.openstack.org/cgit/openstack/watcher
.. _`User Documentation`: https://docs.openstack.org/developer/watcher/
Prerequisites
=============
@@ -35,10 +35,10 @@ following tools available on your system:
**Reminder**: If you're successfully using a different platform, or a
different version of the above, please document your configuration here!
.. _Python: http://www.python.org/
.. _git: http://git-scm.com/
.. _setuptools: http://pypi.python.org/pypi/setuptools
.. _virtualenvwrapper: https://virtualenvwrapper.readthedocs.org/en/latest/install.html
.. _Python: https://www.python.org/
.. _git: https://git-scm.com/
.. _setuptools: https://pypi.python.org/pypi/setuptools
.. _virtualenvwrapper: https://virtualenvwrapper.readthedocs.io/en/latest/install.html
Getting the latest code
=======================
@@ -175,11 +175,12 @@ The HTML files are available into ``doc/build`` directory.
Configure the Watcher services
==============================
Watcher services require a configuration file. There is a sample configuration
file that can be used to get started:
Watcher services require a configuration file. Use tox to generate
a sample configuration file that can be used to get started:
.. code-block:: bash
$ tox -e genconfig
$ cp etc/watcher.conf.sample etc/watcher.conf
Most of the default configuration should be enough to get you going, but you


@@ -14,7 +14,7 @@ Unit tests
==========
All unit tests should be run using `tox`_. To run the same unit tests that are
executing onto `Gerrit`_ which includes ``py34``, ``py27`` and ``pep8``, you
executing onto `Gerrit`_ which includes ``py35``, ``py27`` and ``pep8``, you
can issue the following command::
$ workon watcher
@@ -26,7 +26,7 @@ If you want to only run one of the aforementioned, you can then issue one of
the following::
$ workon watcher
(watcher) $ tox -e py34
(watcher) $ tox -e py35
(watcher) $ tox -e py27
(watcher) $ tox -e pep8

Binary file not shown.


@@ -4,11 +4,14 @@
PENDING --> ONGOING: Audit request is received\nby the Watcher Decision Engine
ONGOING --> FAILED: Audit fails\n(no solution found, technical error, ...)
ONGOING --> SUCCEEDED: The Watcher Decision Engine\ncould find at least one Solution
ONGOING --> SUSPENDED: Administrator wants to\nsuspend the Audit
SUSPENDED --> ONGOING: Administrator wants to\nresume the Audit
FAILED --> DELETED : Administrator wants to\narchive/delete the Audit
SUCCEEDED --> DELETED : Administrator wants to\narchive/delete the Audit
PENDING --> CANCELLED : Administrator cancels\nthe Audit
ONGOING --> CANCELLED : Administrator cancels\nthe Audit
CANCELLED --> DELETED : Administrator wants to\narchive/delete the Audit
SUSPENDED --> DELETED: Administrator wants to\narchive/delete the Audit
DELETED --> [*]
@enduml
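The transitions in the PlantUML source above can be read as a simple lookup table. A minimal sketch of such a table, with transitions transcribed directly from the diagram (the real logic lives in Watcher's ``AuditStateTransitionManager``; this standalone mapping is only an illustration):

```python
# Allowed Audit state transitions, transcribed from the diagram above.
# DELETED is terminal ([*] in the PlantUML source).
ALLOWED_TRANSITIONS = {
    'PENDING': {'ONGOING', 'CANCELLED'},
    'ONGOING': {'FAILED', 'SUCCEEDED', 'SUSPENDED', 'CANCELLED'},
    'SUSPENDED': {'ONGOING', 'DELETED'},
    'FAILED': {'DELETED'},
    'SUCCEEDED': {'DELETED'},
    'CANCELLED': {'DELETED'},
    'DELETED': set(),
}


def check_transition(initial, new):
    """Return True if moving from `initial` to `new` is allowed."""
    return new in ALLOWED_TRANSITIONS.get(initial, set())
```

For example, suspending an ongoing audit and later resuming it are both allowed, while suspending a pending audit is not.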

Binary file not shown.


Binary file not shown.


@@ -56,6 +56,7 @@ Getting Started
dev/devstack
deploy/configuration
deploy/conf-files
deploy/apache-mod-wsgi
dev/notifications
dev/testing
dev/rally_link


@@ -72,6 +72,9 @@ Strategy parameter is:
parameter type default Value description
============== ====== ============= ====================================
``threshold`` Number 35.0 Temperature threshold for migration
``period`` Number 30 The time interval in seconds for
getting statistic aggregation from
metric data source
============== ====== ============= ====================================
Efficacy Indicator


@@ -70,6 +70,20 @@ Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.default.DefaultPlanner
Configuration
-------------
Strategy parameter is:
====================== ====== ============= ===================================
parameter type default Value description
====================== ====== ============= ===================================
``period`` Number 3600 The time interval in seconds
for getting statistic aggregation
from metric data source
====================== ====== ============= ===================================
Efficacy Indicator
------------------

etc/apache2/watcher Normal file

@@ -0,0 +1,33 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This is an example Apache2 configuration file for using
# Watcher API through mod_wsgi
Listen 9322
<VirtualHost *:9322>
WSGIDaemonProcess watcher-api user=stack group=stack processes=2 threads=2 display-name=%{GROUP}
WSGIScriptAlias / /opt/stack/watcher/watcher/api/app.wsgi
WSGIProcessGroup watcher-api
ErrorLog /var/log/httpd/watcher_error.log
LogLevel info
CustomLog /var/log/httpd/watcher_access.log combined
<Directory /opt/stack/watcher/watcher/api>
WSGIProcessGroup watcher-api
WSGIApplicationGroup %{GLOBAL}
AllowOverride All
Require all granted
</Directory>
</VirtualHost>


@@ -1,4 +1,4 @@
---
features:
- Add superseded state for an action plan if the cluster data model has
changed after it has been created.
- Check the creation time of the action plan,
and set its state to SUPERSEDED if it has expired.


@@ -0,0 +1,4 @@
---
features:
- |
Added SUSPENDED audit state


@@ -1,3 +1,16 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# watcher documentation build configuration file, created by
# sphinx-quickstart on Fri Jun 3 11:37:52 2016.
#


@@ -1,3 +1,17 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
=================================================
Welcome to watcher's Release Notes documentation!
=================================================
@@ -7,5 +21,6 @@ Contents:
:maxdepth: 1
unreleased
ocata
newton


@@ -0,0 +1,33 @@
# Gérald LONLAS <g.lonlas@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: watcher 1.0.1.dev51\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-03-21 11:57+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-10-22 06:44+0000\n"
"Last-Translator: Gérald LONLAS <g.lonlas@gmail.com>\n"
"Language-Team: French\n"
"Language: fr\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n > 1)\n"
msgid "0.29.0"
msgstr "0.29.0"
msgid "Contents:"
msgstr "Contenu :"
msgid "Current Series Release Notes"
msgstr "Note de la release actuelle"
msgid "New Features"
msgstr "Nouvelles fonctionnalités"
msgid "Newton Series Release Notes"
msgstr "Note de release pour Newton"
msgid "Welcome to watcher's Release Notes documentation!"
msgstr "Bienvenue dans la documentation de la note de Release de Watcher"


@@ -0,0 +1,6 @@
===================================
Ocata Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/ocata


@@ -10,36 +10,37 @@ keystonemiddleware>=4.12.0 # Apache-2.0
lxml!=3.7.0,>=2.3 # BSD
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.cache>=1.5.0 # Apache-2.0
oslo.config!=3.18.0,>=3.14.0 # Apache-2.0
oslo.context>=2.9.0 # Apache-2.0
oslo.db>=4.15.0 # Apache-2.0
oslo.config>=3.22.0 # Apache-2.0
oslo.context>=2.12.0 # Apache-2.0
oslo.db>=4.19.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.log>=3.11.0 # Apache-2.0
oslo.messaging>=5.14.0 # Apache-2.0
oslo.log>=3.22.0 # Apache-2.0
oslo.messaging>=5.19.0 # Apache-2.0
oslo.policy>=1.17.0 # Apache-2.0
oslo.reports>=0.6.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0
oslo.utils>=3.18.0 # Apache-2.0
oslo.utils>=3.20.0 # Apache-2.0
oslo.versionedobjects>=1.17.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr>=1.8 # Apache-2.0
pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
PrettyTable<0.8,>=0.7.1 # BSD
voluptuous>=0.8.9 # BSD License
gnocchiclient>=2.7.0 # Apache-2.0
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient!=1.7.0,!=1.7.1,>=1.6.0 # Apache-2.0
python-cinderclient>=2.0.1 # Apache-2.0
python-glanceclient>=2.5.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.1.0 # Apache-2.0
python-neutronclient>=5.1.0 # Apache-2.0
python-novaclient!=7.0.0,>=6.0.0 # Apache-2.0
python-novaclient>=7.1.0 # Apache-2.0
python-openstackclient>=3.3.0 # Apache-2.0
six>=1.9.0 # MIT
SQLAlchemy<1.1.0,>=1.0.10 # MIT
stevedore>=1.17.1 # Apache-2.0
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0
taskflow>=2.7.0 # Apache-2.0
WebOb>=1.6.0 # MIT
WebOb>=1.7.1 # MIT
WSME>=0.8 # MIT
networkx>=1.10 # BSD


@@ -16,7 +16,6 @@ classifier =
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.4
Programming Language :: Python :: 3.5
[files]


@@ -25,5 +25,5 @@ except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=1.8'],
setup_requires=['pbr>=2.0.0'],
pbr=True)


@@ -5,7 +5,7 @@
coverage>=4.0 # Apache-2.0
doc8 # Apache-2.0
freezegun>=0.3.6 # Apache-2.0
hacking<0.11,>=0.10.2
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
mock>=2.0 # BSD
oslotest>=1.10.0 # Apache-2.0
os-testr>=0.8.0 # Apache-2.0
@@ -16,7 +16,7 @@ testtools>=1.4.0 # MIT
# Doc requirements
oslosphinx>=4.7.0 # Apache-2.0
sphinx!=1.3b1,<1.4,>=1.2.1 # BSD
sphinx>=1.5.1 # BSD
sphinxcontrib-pecanwsme>=0.8 # Apache-2.0
# releasenotes


@@ -1,6 +1,6 @@
[tox]
minversion = 1.8
envlist = py35,py34,py27,pep8
envlist = py35,py27,pep8
skipsdist = True
[testenv]
@@ -45,7 +45,7 @@ commands =
[flake8]
show-source=True
ignore=
ignore= H105,E123,E226,N320
builtins= _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build,*sqlalchemy/alembic/versions/*,demo/,releasenotes

watcher/api/app.wsgi Normal file

@@ -0,0 +1,40 @@
# -*- mode: python -*-
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Use this file for deploying the API service under Apache2 mod_wsgi.
"""
import sys
from oslo_config import cfg
import oslo_i18n as i18n
from oslo_log import log
from watcher.api import app
from watcher.common import service
CONF = cfg.CONF
i18n.install('watcher')
service.prepare_service(sys.argv)
LOG = log.getLogger(__name__)
LOG.debug("Configuration:")
CONF.log_opt_values(LOG, log.DEBUG)
application = app.VersionSelectorApplication()


@@ -129,8 +129,15 @@ class AuditPatchType(types.JsonPatchType):
@staticmethod
def validate(patch):
serialized_patch = {'path': patch.path, 'op': patch.op}
if patch.path in AuditPatchType.mandatory_attrs():
def is_new_state_none(p):
return p.path == '/state' and p.op == 'replace' and p.value is None
serialized_patch = {'path': patch.path,
'op': patch.op,
'value': patch.value}
if (patch.path in AuditPatchType.mandatory_attrs() or
is_new_state_none(patch)):
msg = _("%(field)s can't be updated.")
raise exception.PatchError(
patch=serialized_patch,
@@ -294,7 +301,7 @@ class Audit(base.APIBase):
audit.unset_fields_except(['uuid', 'audit_type', 'state',
'goal_uuid', 'interval', 'scope',
'strategy_uuid', 'goal_name',
'strategy_name'])
'strategy_name', 'auto_trigger'])
audit.links = [link.Link.make_link('self', url,
'audits', audit.uuid),
@@ -557,6 +564,18 @@ class AuditsController(rest.RestController):
try:
audit_dict = audit_to_update.as_dict()
initial_state = audit_dict['state']
new_state = api_utils.get_patch_value(patch, 'state')
if not api_utils.check_audit_state_transition(
patch, initial_state):
error_message = _("State transition not allowed: "
"(%(initial_state)s -> %(new_state)s)")
raise exception.PatchError(
patch=patch,
reason=error_message % dict(
initial_state=initial_state, new_state=new_state))
audit = Audit(**api_utils.apply_jsonpatch(audit_dict, patch))
except api_utils.JSONPATCH_EXCEPTIONS as e:
raise exception.PatchError(patch=patch, reason=e)


@@ -333,6 +333,7 @@ class AuditTemplate(base.APIBase):
self.fields.append('goal_id')
self.fields.append('strategy_id')
setattr(self, 'strategy_id', kwargs.get('strategy_id', wtypes.Unset))
# goal_uuid & strategy_uuid are not part of
# objects.AuditTemplate.fields because they're API-only attributes.


@@ -30,7 +30,6 @@ import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from watcher._i18n import _LW
from watcher.api.controllers import base
from watcher.api.controllers import link
from watcher.api.controllers.v1 import collection
@@ -56,8 +55,8 @@ class Service(base.APIBase):
def _get_status(self):
return self._status
def _set_status(self, name):
service = objects.Service.get_by_name(pecan.request.context, name)
def _set_status(self, id):
service = objects.Service.get(pecan.request.context, id)
last_heartbeat = (service.last_seen_up or service.updated_at
or service.created_at)
if isinstance(last_heartbeat, six.string_types):
@@ -72,9 +71,9 @@ class Service(base.APIBase):
elapsed = timeutils.delta_seconds(last_heartbeat, timeutils.utcnow())
is_up = abs(elapsed) <= CONF.service_down_time
if not is_up:
LOG.warning(_LW('Seems service %(name)s on host %(host)s is down. '
'Last heartbeat was %(lhb)s.'
'Elapsed time is %(el)s'),
LOG.warning('Seems service %(name)s on host %(host)s is down. '
'Last heartbeat was %(lhb)s.'
'Elapsed time is %(el)s',
{'name': service.name,
'host': service.host,
'lhb': str(last_heartbeat), 'el': str(elapsed)})
@@ -108,7 +107,7 @@ class Service(base.APIBase):
for field in fields:
self.fields.append(field)
setattr(self, field, kwargs.get(
field if field != 'status' else 'name', wtypes.Unset))
field if field != 'status' else 'id', wtypes.Unset))
@staticmethod
def _convert_with_links(service, url, expand=True):


@@ -73,6 +73,21 @@ def apply_jsonpatch(doc, patch):
return jsonpatch.apply_patch(doc, jsonpatch.JsonPatch(patch))
def get_patch_value(patch, key):
for p in patch:
if p['op'] == 'replace' and p['path'] == '/%s' % key:
return p['value']
def check_audit_state_transition(patch, initial):
is_transition_valid = True
state_value = get_patch_value(patch, "state")
if state_value is not None:
is_transition_valid = objects.audit.AuditStateTransitionManager(
).check_transition(initial, state_value)
return is_transition_valid
def as_filters_dict(**filters):
filters_dict = {}
for filter_name, filter_value in filters.items():


@@ -27,7 +27,7 @@ from oslo_serialization import jsonutils
import six
import webob
from watcher._i18n import _, _LE
from watcher._i18n import _
LOG = log.getLogger(__name__)
@@ -79,7 +79,7 @@ class ParsableErrorMiddleware(object):
et.ElementTree.Element(
'error_message', text='\n'.join(app_iter)))]
except et.ElementTree.ParseError as err:
LOG.error(_LE('Error parsing HTTP response: %s'), err)
LOG.error('Error parsing HTTP response: %s', err)
body = ['<error_message>%s'
'</error_message>' % state['status_code']]
state['headers'].append(('Content-Type', 'application/xml'))


@@ -21,7 +21,7 @@ from oslo_log import log
import six
import voluptuous
from watcher._i18n import _, _LC
from watcher._i18n import _
from watcher.applier.actions import base
from watcher.common import exception
from watcher.common import nova_helper
@@ -120,9 +120,9 @@ class Migrate(base.BaseAction):
"migrating instance %s.Exception: %s" %
(self.instance_uuid, e))
except Exception:
LOG.critical(_LC("Unexpected error occurred. Migration failed for "
"instance %s. Leaving instance on previous "
"host."), self.instance_uuid)
LOG.critical("Unexpected error occurred. Migration failed for "
"instance %s. Leaving instance on previous "
"host.", self.instance_uuid)
return result
@@ -134,9 +134,9 @@ class Migrate(base.BaseAction):
dest_hostname=destination)
except Exception as exc:
LOG.exception(exc)
LOG.critical(_LC("Unexpected error occurred. Migration failed for "
"instance %s. Leaving instance on previous "
"host."), self.instance_uuid)
LOG.critical("Unexpected error occurred. Migration failed for "
"instance %s. Leaving instance on previous "
"host.", self.instance_uuid)
return result


@@ -21,7 +21,7 @@ from oslo_log import log
import six
import voluptuous
from watcher._i18n import _, _LC
from watcher._i18n import _
from watcher.applier.actions import base
from watcher.common import nova_helper
from watcher.common import utils
@@ -86,8 +86,8 @@ class Resize(base.BaseAction):
except Exception as exc:
LOG.exception(exc)
LOG.critical(
_LC("Unexpected error occurred. Resizing failed for "
"instance %s."), self.instance_uuid)
"Unexpected error occurred. Resizing failed for "
"instance %s.", self.instance_uuid)
return result
def execute(self):


@@ -36,7 +36,7 @@ class ApplierAPI(service.Service):
if not utils.is_uuid_like(action_plan_uuid):
raise exception.InvalidUuidOrName(name=action_plan_uuid)
return self.conductor_client.call(
self.conductor_client.cast(
context, 'launch_action_plan', action_plan_uuid=action_plan_uuid)


@@ -18,12 +18,19 @@
import abc
from oslo_log import log
import six
from taskflow import task as flow_task
from watcher.applier.actions import factory
from watcher.common import clients
from watcher.common.loader import loadable
from watcher import notifications
from watcher import objects
from watcher.objects import fields
LOG = log.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
@@ -72,11 +79,95 @@ class BaseWorkFlowEngine(loadable.Loadable):
return self._action_factory
def notify(self, action, state):
db_action = objects.Action.get_by_uuid(self.context, action.uuid)
db_action = objects.Action.get_by_uuid(self.context, action.uuid,
eager=True)
db_action.state = state
db_action.save()
# NOTE(v-francoise): Implement notifications for action
@abc.abstractmethod
def execute(self, actions):
raise NotImplementedError()
class BaseTaskFlowActionContainer(flow_task.Task):
def __init__(self, name, db_action, engine, **kwargs):
super(BaseTaskFlowActionContainer, self).__init__(name=name)
self._db_action = db_action
self._engine = engine
self.loaded_action = None
@property
def engine(self):
return self._engine
@property
def action(self):
if self.loaded_action is None:
action = self.engine.action_factory.make_action(
self._db_action,
osc=self._engine.osc)
self.loaded_action = action
return self.loaded_action
@abc.abstractmethod
def do_pre_execute(self):
raise NotImplementedError()
@abc.abstractmethod
def do_execute(self, *args, **kwargs):
raise NotImplementedError()
@abc.abstractmethod
def do_post_execute(self):
raise NotImplementedError()
# NOTE(alexchadin): taskflow does 3 method calls (pre_execute, execute,
# post_execute) independently. We want to support notifications in base
# class, so child's methods should be named with `do_` prefix and wrapped.
def pre_execute(self):
try:
self.do_pre_execute()
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.START)
except Exception as e:
LOG.exception(e)
self.engine.notify(self._db_action, objects.action.State.FAILED)
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.ERROR,
priority=fields.NotificationPriority.ERROR)
def execute(self, *args, **kwargs):
try:
self.do_execute(*args, **kwargs)
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.END)
except Exception as e:
LOG.exception(e)
LOG.error('The workflow engine has failed '
'to execute the action: %s', self.name)
self.engine.notify(self._db_action, objects.action.State.FAILED)
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.ERROR,
priority=fields.NotificationPriority.ERROR)
raise
def post_execute(self):
try:
self.do_post_execute()
except Exception as e:
LOG.exception(e)
self.engine.notify(self._db_action, objects.action.State.FAILED)
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.ERROR,
priority=fields.NotificationPriority.ERROR)
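The NOTE in the diff above describes a template-method pattern: taskflow calls ``pre_execute``/``execute``/``post_execute`` itself, so the base class keeps those names, wraps notification handling around them, and delegates the real work to ``do_``-prefixed hooks that subclasses implement. A minimal self-contained sketch of that shape (``Notifier`` is a stand-in for the ``watcher.notifications`` calls, not the real API):

```python
class Notifier(object):
    """Stand-in for the notification sender used in the diff above."""

    def __init__(self):
        self.events = []

    def send(self, phase):
        self.events.append(phase)


class BaseActionContainer(object):
    def __init__(self, notifier):
        self.notifier = notifier

    # taskflow invokes this name; the wrapper adds notifications.
    def pre_execute(self):
        try:
            self.do_pre_execute()
            self.notifier.send('start')
        except Exception:
            self.notifier.send('error')

    def do_pre_execute(self):
        raise NotImplementedError()


class NopContainer(BaseActionContainer):
    def do_pre_execute(self):
        pass  # child-specific pre-conditions go here


notifier = Notifier()
NopContainer(notifier).pre_execute()
print(notifier.events)  # ['start']
```

The same wrapping is repeated for ``execute`` and ``post_execute`` in the real class, with an error-priority notification on failure.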


@@ -22,7 +22,6 @@ from taskflow import engines
from taskflow.patterns import graph_flow as gf
from taskflow import task as flow_task
from watcher._i18n import _LE, _LW, _LC
from watcher.applier.workflow_engine import base
from watcher.common import exception
from watcher import objects
@@ -95,69 +94,35 @@ class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
raise exception.WorkflowExecutionException(error=e)
class TaskFlowActionContainer(flow_task.Task):
class TaskFlowActionContainer(base.BaseTaskFlowActionContainer):
def __init__(self, db_action, engine):
name = "action_type:{0} uuid:{1}".format(db_action.action_type,
db_action.uuid)
super(TaskFlowActionContainer, self).__init__(name=name)
self._db_action = db_action
self._engine = engine
self.loaded_action = None
super(TaskFlowActionContainer, self).__init__(name, db_action, engine)
@property
def action(self):
if self.loaded_action is None:
action = self.engine.action_factory.make_action(
self._db_action,
osc=self._engine.osc)
self.loaded_action = action
return self.loaded_action
def do_pre_execute(self):
self.engine.notify(self._db_action, objects.action.State.ONGOING)
LOG.debug("Pre-condition action: %s", self.name)
self.action.pre_condition()
@property
def engine(self):
return self._engine
def do_execute(self, *args, **kwargs):
LOG.debug("Running action: %s", self.name)
def pre_execute(self):
try:
self.engine.notify(self._db_action, objects.action.State.ONGOING)
LOG.debug("Pre-condition action: %s", self.name)
self.action.pre_condition()
except Exception as e:
LOG.exception(e)
self.engine.notify(self._db_action, objects.action.State.FAILED)
raise
self.action.execute()
self.engine.notify(self._db_action, objects.action.State.SUCCEEDED)
def execute(self, *args, **kwargs):
try:
LOG.debug("Running action: %s", self.name)
self.action.execute()
self.engine.notify(self._db_action, objects.action.State.SUCCEEDED)
except Exception as e:
LOG.exception(e)
LOG.error(_LE('The workflow engine has failed '
'to execute the action: %s'), self.name)
self.engine.notify(self._db_action, objects.action.State.FAILED)
raise
def post_execute(self):
try:
LOG.debug("Post-condition action: %s", self.name)
self.action.post_condition()
except Exception as e:
LOG.exception(e)
self.engine.notify(self._db_action, objects.action.State.FAILED)
raise
def do_post_execute(self):
LOG.debug("Post-condition action: %s", self.name)
self.action.post_condition()
def revert(self, *args, **kwargs):
LOG.warning(_LW("Revert action: %s"), self.name)
LOG.warning("Revert action: %s", self.name)
try:
# TODO(jed): do we need to update the states in case of failure?
self.action.revert()
except Exception as e:
LOG.exception(e)
LOG.critical(_LC("Oops! We need a disaster recover plan."))
LOG.critical("Oops! We need a disaster recovery plan.")
class TaskFlowNop(flow_task.Task):
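The refactor above moves the try/except and notification plumbing into BaseTaskFlowActionContainer, so concrete containers only implement do_pre_execute / do_execute / do_post_execute. A minimal sketch of that template-method split, with a plain events list standing in for the engine notifications (names here are illustrative, not the real Watcher API):

```python
class BaseContainer:
    def __init__(self):
        self.events = []  # stands in for notifications + engine.notify

    def execute(self):
        # The base class owns error handling and phase reporting...
        try:
            self.do_execute()
            self.events.append("END")
        except Exception:
            self.events.append("ERROR")
            raise

    def do_execute(self):
        # ...while subclasses provide only the `do_`-prefixed body.
        raise NotImplementedError


class ActionContainer(BaseContainer):
    def do_execute(self):
        self.events.append("ran action")
```

With this split, a failing do_execute still records the ERROR phase before the exception propagates, exactly once, no matter which concrete container raised it.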

View File

@@ -22,7 +22,6 @@ import sys
from oslo_config import cfg
from oslo_log import log as logging
from watcher._i18n import _LI
from watcher.common import service
from watcher import conf
@@ -39,11 +38,11 @@ def main():
server = service.WSGIService('watcher-api', CONF.api.enable_ssl_api)
if host == '127.0.0.1':
LOG.info(_LI('serving on 127.0.0.1:%(port)s, '
'view at %(protocol)s://127.0.0.1:%(port)s') %
LOG.info('serving on 127.0.0.1:%(port)s, '
'view at %(protocol)s://127.0.0.1:%(port)s' %
dict(protocol=protocol, port=port))
else:
LOG.info(_LI('serving on %(protocol)s://%(host)s:%(port)s') %
LOG.info('serving on %(protocol)s://%(host)s:%(port)s' %
dict(protocol=protocol, host=host, port=port))
launcher = service.launch(CONF, server, workers=server.workers)

View File

@@ -22,7 +22,6 @@ import sys
from oslo_log import log as logging
from watcher._i18n import _LI
from watcher.applier import manager
from watcher.common import service as watcher_service
from watcher import conf
@@ -34,7 +33,7 @@ CONF = conf.CONF
def main():
watcher_service.prepare_service(sys.argv, CONF)
LOG.info(_LI('Starting Watcher Applier service in PID %s'), os.getpid())
LOG.info('Starting Watcher Applier service in PID %s', os.getpid())
applier_service = watcher_service.Service(manager.ApplierManager)

View File

@@ -22,7 +22,6 @@ import sys
from oslo_log import log as logging
from watcher._i18n import _LI
from watcher.common import service as watcher_service
from watcher import conf
from watcher.decision_engine import gmr
@@ -38,7 +37,7 @@ def main():
watcher_service.prepare_service(sys.argv, CONF)
gmr.register_gmr_plugins()
LOG.info(_LI('Starting Watcher Decision Engine service in PID %s'),
LOG.info('Starting Watcher Decision Engine service in PID %s',
os.getpid())
syncer = sync.Syncer()

View File

@@ -22,7 +22,6 @@ import sys
from oslo_log import log as logging
from watcher._i18n import _LI
from watcher.common import service as service
from watcher import conf
from watcher.decision_engine import sync
@@ -32,10 +31,10 @@ CONF = conf.CONF
def main():
LOG.info(_LI('Watcher sync started.'))
LOG.info('Watcher sync started.')
service.prepare_service(sys.argv, CONF)
syncer = sync.Syncer()
syncer.sync()
LOG.info(_LI('Watcher sync finished.'))
LOG.info('Watcher sync finished.')

View File

@@ -13,6 +13,7 @@
from ceilometerclient import client as ceclient
from cinderclient import client as ciclient
from glanceclient import client as glclient
from gnocchiclient import client as gnclient
from keystoneauth1 import loading as ka_loading
from keystoneclient import client as keyclient
from monascaclient import client as monclient
@@ -39,6 +40,7 @@ class OpenStackClients(object):
self._keystone = None
self._nova = None
self._glance = None
self._gnocchi = None
self._cinder = None
self._ceilometer = None
self._monasca = None
@@ -78,7 +80,9 @@ class OpenStackClients(object):
return self._nova
novaclient_version = self._get_client_option('nova', 'api_version')
nova_endpoint_type = self._get_client_option('nova', 'endpoint_type')
self._nova = nvclient.Client(novaclient_version,
endpoint_type=nova_endpoint_type,
session=self.session)
return self._nova
@@ -88,17 +92,37 @@ class OpenStackClients(object):
return self._glance
glanceclient_version = self._get_client_option('glance', 'api_version')
glance_endpoint_type = self._get_client_option('glance',
'endpoint_type')
self._glance = glclient.Client(glanceclient_version,
interface=glance_endpoint_type,
session=self.session)
return self._glance
@exception.wrap_keystone_exception
def gnocchi(self):
if self._gnocchi:
return self._gnocchi
gnocchiclient_version = self._get_client_option('gnocchi',
'api_version')
gnocchiclient_interface = self._get_client_option('gnocchi',
'endpoint_type')
self._gnocchi = gnclient.Client(gnocchiclient_version,
interface=gnocchiclient_interface,
session=self.session)
return self._gnocchi
@exception.wrap_keystone_exception
def cinder(self):
if self._cinder:
return self._cinder
cinderclient_version = self._get_client_option('cinder', 'api_version')
cinder_endpoint_type = self._get_client_option('cinder',
'endpoint_type')
self._cinder = ciclient.Client(cinderclient_version,
endpoint_type=cinder_endpoint_type,
session=self.session)
return self._cinder
@@ -109,8 +133,12 @@ class OpenStackClients(object):
ceilometerclient_version = self._get_client_option('ceilometer',
'api_version')
self._ceilometer = ceclient.get_client(ceilometerclient_version,
session=self.session)
ceilometer_endpoint_type = self._get_client_option('ceilometer',
'endpoint_type')
self._ceilometer = ceclient.get_client(
ceilometerclient_version,
endpoint_type=ceilometer_endpoint_type,
session=self.session)
return self._ceilometer
@exception.wrap_keystone_exception
@@ -120,6 +148,8 @@ class OpenStackClients(object):
monascaclient_version = self._get_client_option(
'monasca', 'api_version')
monascaclient_interface = self._get_client_option(
'monasca', 'interface')
token = self.session.get_token()
watcher_clients_auth_config = CONF.get(_CLIENTS_AUTH_GROUP)
service_type = 'monitoring'
@@ -135,7 +165,8 @@ class OpenStackClients(object):
'username': watcher_clients_auth_config.username,
'password': watcher_clients_auth_config.password,
}
endpoint = self.session.get_endpoint(service_type=service_type)
endpoint = self.session.get_endpoint(service_type=service_type,
interface=monascaclient_interface)
self._monasca = monclient.Client(
monascaclient_version, endpoint, **monasca_kwargs)
@@ -149,7 +180,11 @@ class OpenStackClients(object):
neutronclient_version = self._get_client_option('neutron',
'api_version')
neutron_endpoint_type = self._get_client_option('neutron',
'endpoint_type')
self._neutron = netclient.Client(neutronclient_version,
endpoint_type=neutron_endpoint_type,
session=self.session)
self._neutron.format = 'json'
return self._neutron
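Every accessor in OpenStackClients, including the new gnocchi() one, follows the same memoize-on-first-use shape: build the client once with the configured version and interface, then return the cached instance. A standalone sketch, with a hypothetical factory callable standing in for gnclient.Client and hard-coded values where the real code reads config options:

```python
class OpenStackClients:
    def __init__(self, factory):
        self._factory = factory  # hypothetical stand-in for gnclient.Client
        self._gnocchi = None

    def gnocchi(self):
        # Construct on first call only; subsequent calls reuse the cache.
        if self._gnocchi is None:
            self._gnocchi = self._factory(version="1",
                                          interface="internalURL")
        return self._gnocchi
```

Repeated calls hand back the same object, so each service client is created at most once per OpenStackClients instance.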

View File

@@ -15,7 +15,6 @@ from oslo_log import log as logging
from oslo_utils import timeutils
import six
from watcher._i18n import _LW
from watcher.common import utils
LOG = logging.getLogger(__name__)
@@ -65,7 +64,7 @@ class RequestContext(context.RequestContext):
# safely ignore this as we don't use it.
kwargs.pop('user_identity', None)
if kwargs:
LOG.warning(_LW('Arguments dropped when creating context: %s'),
LOG.warning('Arguments dropped when creating context: %s',
str(kwargs))
# FIXME(dims): user_id and project_id duplicate information that is

View File

@@ -29,7 +29,7 @@ from keystoneclient import exceptions as keystone_exceptions
from oslo_log import log as logging
import six
from watcher._i18n import _, _LE
from watcher._i18n import _
from watcher import conf
@@ -83,9 +83,9 @@ class WatcherException(Exception):
except Exception:
# kwargs doesn't match a variable in msg_fmt
# log the issue and the kwargs
LOG.exception(_LE('Exception in string format operation'))
LOG.exception('Exception in string format operation')
for name, value in kwargs.items():
LOG.error(_LE("%(name)s: %(value)s"),
LOG.error("%(name)s: %(value)s",
{'name': name, 'value': value})
if CONF.fatal_exception_format_errors:
@@ -130,7 +130,7 @@ class OperationNotPermitted(NotAuthorized):
msg_fmt = _("Operation not permitted")
class Invalid(WatcherException):
class Invalid(WatcherException, ValueError):
msg_fmt = _("Unacceptable parameters")
code = 400
@@ -149,6 +149,10 @@ class ResourceNotFound(ObjectNotFound):
code = 404
class InvalidParameter(Invalid):
msg_fmt = _("%(parameter)s has to be of type %(parameter_type)s")
class InvalidIdentity(Invalid):
msg_fmt = _("Expected a uuid or int but received %(identity)s")
@@ -182,6 +186,10 @@ class EagerlyLoadedActionPlanRequired(InvalidActionPlan):
msg_fmt = _("Action plan %(action_plan)s was not eagerly loaded")
class EagerlyLoadedActionRequired(InvalidActionPlan):
msg_fmt = _("Action %(action)s was not eagerly loaded")
class InvalidUUID(Invalid):
msg_fmt = _("Expected a uuid but received %(uuid)s")
@@ -267,9 +275,7 @@ class ActionPlanReferenced(Invalid):
class ActionPlanIsOngoing(Conflict):
msg_fmt = _("Action Plan %(action_plan)s is currently running. "
"New Action Plan %(new_action_plan)s will be set as "
"SUPERSEDED")
msg_fmt = _("Action Plan %(action_plan)s is currently running.")
class ActionNotFound(ResourceNotFound):
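Deriving Invalid from both WatcherException and ValueError lets callers trap Watcher validation failures with a plain except ValueError. A simplified sketch of the pattern (the base-class message formatting is reduced to its essentials here):

```python
class WatcherException(Exception):
    msg_fmt = "An unknown exception occurred"

    def __init__(self, **kwargs):
        # Interpolate keyword arguments into the class message template.
        super().__init__(self.msg_fmt % kwargs if kwargs else self.msg_fmt)


class Invalid(WatcherException, ValueError):
    msg_fmt = "Unacceptable parameters"


class InvalidParameter(Invalid):
    msg_fmt = "%(parameter)s has to be of type %(parameter_type)s"
```

Code that knows nothing about Watcher's exception hierarchy can still catch these as ordinary ValueErrors.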

View File

@@ -17,7 +17,6 @@ from oslo_config import cfg
from oslo_log import log
import oslo_messaging as messaging
from watcher._i18n import _LE
from watcher.common import context as watcher_context
from watcher.common import exception
@@ -32,7 +31,6 @@ __all__ = [
'get_client',
'get_server',
'get_notifier',
'TRANSPORT_ALIASES',
]
CONF = cfg.CONF
@@ -46,16 +44,6 @@ ALLOWED_EXMODS = [
]
EXTRA_EXMODS = []
# NOTE(lucasagomes): The watcher.openstack.common.rpc entries are for
# backwards compat with IceHouse rpc_backend configuration values.
TRANSPORT_ALIASES = {
'watcher.openstack.common.rpc.impl_kombu': 'rabbit',
'watcher.openstack.common.rpc.impl_qpid': 'qpid',
'watcher.openstack.common.rpc.impl_zmq': 'zmq',
'watcher.rpc.impl_kombu': 'rabbit',
'watcher.rpc.impl_qpid': 'qpid',
'watcher.rpc.impl_zmq': 'zmq',
}
JsonPayloadSerializer = messaging.JsonPayloadSerializer
@@ -64,12 +52,10 @@ def init(conf):
global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER
exmods = get_allowed_exmods()
TRANSPORT = messaging.get_transport(conf,
allowed_remote_exmods=exmods,
aliases=TRANSPORT_ALIASES)
allowed_remote_exmods=exmods)
NOTIFICATION_TRANSPORT = messaging.get_notification_transport(
conf,
allowed_remote_exmods=exmods,
aliases=TRANSPORT_ALIASES)
allowed_remote_exmods=exmods)
serializer = RequestContextSerializer(JsonPayloadSerializer())
if not conf.notification_level:
@@ -87,7 +73,7 @@ def initialized():
def cleanup():
global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER
if NOTIFIER is None:
LOG.exception(_LE("RPC cleanup: NOTIFIER is None"))
LOG.exception("RPC cleanup: NOTIFIER is None")
TRANSPORT.cleanup()
NOTIFICATION_TRANSPORT.cleanup()
TRANSPORT = NOTIFICATION_TRANSPORT = NOTIFIER = None

View File

@@ -25,7 +25,6 @@ from oslo_utils import timeutils
from oslo_utils import uuidutils
import six
from watcher._i18n import _LW
from watcher.common import exception
from watcher import conf
@@ -73,9 +72,9 @@ def safe_rstrip(value, chars=None):
"""
if not isinstance(value, six.string_types):
LOG.warning(_LW(
LOG.warning(
"Failed to remove trailing character. Returning original object."
"Supplied object is not a string: %s,"), value)
"Supplied object is not a string: %s", value)
return value
return value.rstrip(chars) or value
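The behavior of safe_rstrip can be sketched without the oslo/six plumbing: non-strings come back untouched, and the `or value` fallback keeps the original string when stripping would leave it empty:

```python
def safe_rstrip(value, chars=None):
    # Only strings can be stripped; anything else is returned unchanged.
    if not isinstance(value, str):
        return value
    # `or value` keeps the original when rstrip would produce "".
    return value.rstrip(chars) or value
```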

View File

@@ -28,6 +28,7 @@ from watcher.conf import db
from watcher.conf import decision_engine
from watcher.conf import exception
from watcher.conf import glance_client
from watcher.conf import gnocchi_client
from watcher.conf import monasca_client
from watcher.conf import neutron_client
from watcher.conf import nova_client
@@ -50,6 +51,7 @@ decision_engine.register_opts(CONF)
monasca_client.register_opts(CONF)
nova_client.register_opts(CONF)
glance_client.register_opts(CONF)
gnocchi_client.register_opts(CONF)
cinder_client.register_opts(CONF)
ceilometer_client.register_opts(CONF)
neutron_client.register_opts(CONF)

View File

@@ -32,9 +32,10 @@ API_SERVICE_OPTS = [
cfg.PortOpt('port',
default=9322,
help='The port for the watcher API server'),
cfg.StrOpt('host',
default='127.0.0.1',
help='The listen IP address for the watcher API server'),
cfg.HostAddressOpt('host',
default='127.0.0.1',
help='The listen IP address for the watcher API server'
),
cfg.IntOpt('max_limit',
default=1000,
help='The maximum number of items returned in a single '

View File

@@ -25,7 +25,12 @@ CEILOMETER_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help='Version of Ceilometer API to use in '
'ceilometerclient.')]
'ceilometerclient.'),
cfg.StrOpt('endpoint_type',
default='internalURL',
help='Type of endpoint to use in ceilometerclient. '
'Supported values: internalURL, publicURL, adminURL. '
'The default is internalURL.')]
def register_opts(conf):

View File

@@ -24,7 +24,12 @@ cinder_client = cfg.OptGroup(name='cinder_client',
CINDER_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help='Version of Cinder API to use in cinderclient.')]
help='Version of Cinder API to use in cinderclient.'),
cfg.StrOpt('endpoint_type',
default='internalURL',
help='Type of endpoint to use in cinderclient. '
'Supported values: internalURL, publicURL, adminURL. '
'The default is internalURL.')]
def register_opts(conf):

View File

@@ -42,6 +42,15 @@ WATCHER_DECISION_ENGINE_OPTS = [
required=True,
help='The maximum number of threads that can be used to '
'execute strategies'),
cfg.IntOpt('action_plan_expiry',
default=24,
help='Expiry timespan (hours). Watcher invalidates any '
'action plan whose creation time, offset by this '
'number of hours, is older than the current time.'),
cfg.IntOpt('check_periodic_interval',
default=30*60,
help='Interval (in seconds) for checking action plan expiry.')
]
WATCHER_CONTINUOUS_OPTS = [

View File

@@ -24,7 +24,12 @@ glance_client = cfg.OptGroup(name='glance_client',
GLANCE_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help='Version of Glance API to use in glanceclient.')]
help='Version of Glance API to use in glanceclient.'),
cfg.StrOpt('endpoint_type',
default='internalURL',
help='Type of endpoint to use in glanceclient. '
'Supported values: internalURL, publicURL, adminURL. '
'The default is internalURL.')]
def register_opts(conf):

View File

@@ -0,0 +1,47 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 Servionica
#
# Authors: Alexander Chadin <a.chadin@servionica.ru>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
gnocchi_client = cfg.OptGroup(name='gnocchi_client',
title='Configuration Options for Gnocchi')
GNOCCHI_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='1',
help='Version of Gnocchi API to use in gnocchiclient.'),
cfg.StrOpt('endpoint_type',
default='internalURL',
help='Type of endpoint to use in gnocchi client. '
'Supported values: internalURL, publicURL, adminURL. '
'The default is internalURL.'),
cfg.IntOpt('query_max_retries',
default=10,
help='How many times Watcher retries a failed query'),
cfg.IntOpt('query_timeout',
default=1,
help='How many seconds Watcher waits between query retries')]
def register_opts(conf):
conf.register_group(gnocchi_client)
conf.register_opts(GNOCCHI_CLIENT_OPTS, group=gnocchi_client)
def list_opts():
return [('gnocchi_client', GNOCCHI_CLIENT_OPTS)]
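With the new group registered, a deployer can tune it from watcher.conf; the values below simply restate the defaults defined above:

```ini
[gnocchi_client]
api_version = 1
endpoint_type = internalURL
query_max_retries = 10
query_timeout = 1
```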

View File

@@ -24,7 +24,12 @@ monasca_client = cfg.OptGroup(name='monasca_client',
MONASCA_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2_0',
help='Version of Monasca API to use in monascaclient.')]
help='Version of Monasca API to use in monascaclient.'),
cfg.StrOpt('interface',
default='internal',
help='Type of interface used for monasca endpoint. '
'Supported values: internal, public, admin. '
'The default is internal.')]
def register_opts(conf):

View File

@@ -24,7 +24,12 @@ neutron_client = cfg.OptGroup(name='neutron_client',
NEUTRON_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2.0',
help='Version of Neutron API to use in neutronclient.')]
help='Version of Neutron API to use in neutronclient.'),
cfg.StrOpt('endpoint_type',
default='internalURL',
help='Type of endpoint to use in neutronclient. '
'Supported values: internalURL, publicURL, adminURL. '
'The default is internalURL.')]
def register_opts(conf):

View File

@@ -24,7 +24,12 @@ nova_client = cfg.OptGroup(name='nova_client',
NOVA_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help='Version of Nova API to use in novaclient.')]
help='Version of Nova API to use in novaclient.'),
cfg.StrOpt('endpoint_type',
default='internalURL',
help='Type of endpoint to use in novaclient. '
'Supported values: internalURL, publicURL, adminURL. '
'The default is internalURL.')]
def register_opts(conf):

View File

@@ -26,13 +26,14 @@ SERVICE_OPTS = [
cfg.IntOpt('periodic_interval',
default=60,
help=_('Seconds between running periodic tasks.')),
cfg.StrOpt('host',
default=socket.gethostname(),
help=_('Name of this node. This can be an opaque identifier. '
'It is not necessarily a hostname, FQDN, or IP address. '
'However, the node name must be valid within '
'an AMQP key, and if using ZeroMQ, a valid '
'hostname, FQDN, or IP address.')),
cfg.HostAddressOpt('host',
default=socket.gethostname(),
help=_('Name of this node. This can be an opaque '
'identifier. It is not necessarily a hostname, '
'FQDN, or IP address. However, the node name '
'must be valid within an AMQP key, and if using '
'ZeroMQ, a valid hostname, FQDN, or IP address.')
),
cfg.IntOpt('service_down_time',
default=90,
help=_('Maximum time since last check-in for up service.'))

View File

@@ -0,0 +1,92 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 Servionica
#
# Authors: Alexander Chadin <a.chadin@servionica.ru>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from datetime import datetime
import time
from oslo_config import cfg
from oslo_log import log
from watcher.common import clients
from watcher.common import exception
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class GnocchiHelper(object):
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
self.osc = osc if osc else clients.OpenStackClients()
self.gnocchi = self.osc.gnocchi()
def query_retry(self, f, *args, **kwargs):
for i in range(CONF.gnocchi_client.query_max_retries):
try:
return f(*args, **kwargs)
except Exception as e:
LOG.exception(e)
time.sleep(CONF.gnocchi_client.query_timeout)
raise
def statistic_aggregation(self,
resource_id,
metric,
granularity,
start_time=None,
stop_time=None,
aggregation='mean'):
"""Retrieve an aggregated statistic value for a resource metric
:param metric: metric name of which we want the statistics
:param resource_id: id of resource to list statistics for
:param start_time: Start datetime from which metrics will be used
:param stop_time: End datetime from which metrics will be used
:param granularity: frequency of marking metric point, in seconds
:param aggregation: Should be chosen in accordance with policy
aggregations
:return: value of aggregated metric
"""
if start_time is not None and not isinstance(start_time, datetime):
raise exception.InvalidParameter(parameter='start_time',
parameter_type=datetime)
if stop_time is not None and not isinstance(stop_time, datetime):
raise exception.InvalidParameter(parameter='stop_time',
parameter_type=datetime)
raw_kwargs = dict(
metric=metric,
start=start_time,
stop=stop_time,
resource_id=resource_id,
granularity=granularity,
aggregation=aggregation,
)
kwargs = {k: v for k, v in raw_kwargs.items() if k and v}
statistics = self.query_retry(
f=self.gnocchi.metric.get_measures, **kwargs)
if statistics:
# return value of latest measure
# measure has structure [time, granularity, value]
return statistics[-1][2]
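query_retry's loop can be sketched independently of Gnocchi; here max_retries and timeout are plain parameters standing in for the gnocchi_client.query_max_retries and query_timeout options, and the last exception is re-raised once the retries are exhausted:

```python
import time


def query_retry(f, *args, max_retries=10, timeout=1, **kwargs):
    """Call f, retrying on any exception, sleeping between attempts."""
    last_exc = None
    for _ in range(max_retries):
        try:
            return f(*args, **kwargs)
        except Exception as exc:
            last_exc = exc
            time.sleep(timeout)
    # All attempts failed: surface the most recent error to the caller.
    raise last_exc
```

A call that succeeds on the third attempt returns normally and stops retrying immediately.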

View File

@@ -27,7 +27,7 @@ from oslo_utils import strutils
import prettytable as ptable
from six.moves import input
from watcher._i18n import _, _LI
from watcher._i18n import _
from watcher._i18n import lazy_translation_enabled
from watcher.common import context
from watcher.common import exception
@@ -231,7 +231,7 @@ class PurgeCommand(object):
if action.action_plan_id not in action_plan_ids]
LOG.debug("Orphans found:\n%s", orphans)
LOG.info(_LI("Orphans found:\n%s"), orphans.get_count_table())
LOG.info("Orphans found:\n%s", orphans.get_count_table())
return orphans
@@ -403,13 +403,13 @@ class PurgeCommand(object):
return to_be_deleted
def do_delete(self):
LOG.info(_LI("Deleting..."))
LOG.info("Deleting...")
# Reversed to avoid errors with foreign keys
for entry in reversed(list(self._objects_map)):
entry.destroy()
def execute(self):
LOG.info(_LI("Starting purge command"))
LOG.info("Starting purge command")
self._objects_map = self.find_objects_to_delete()
if (self.max_number is not None and
@@ -424,15 +424,15 @@ class PurgeCommand(object):
if not self.dry_run and self.confirmation_prompt():
self.do_delete()
print(_("Purge results summary%s:") % _orphans_note)
LOG.info(_LI("Purge results summary%s:"), _orphans_note)
LOG.info("Purge results summary%s:", _orphans_note)
else:
LOG.debug(self._objects_map)
print(_("Here below is a table containing the objects "
"that can be purged%s:") % _orphans_note)
LOG.info(_LI("\n%s"), self._objects_map.get_count_table())
LOG.info("\n%s", self._objects_map.get_count_table())
print(self._objects_map.get_count_table())
LOG.info(_LI("Purge process completed"))
LOG.info("Purge process completed")
def purge(age_in_days, max_number, goal, exclude_orphans, dry_run):
@@ -457,11 +457,11 @@ def purge(age_in_days, max_number, goal, exclude_orphans, dry_run):
if max_number and max_number < 0:
raise exception.NegativeLimitError
LOG.info(_LI("[options] age_in_days = %s"), age_in_days)
LOG.info(_LI("[options] max_number = %s"), max_number)
LOG.info(_LI("[options] goal = %s"), goal)
LOG.info(_LI("[options] exclude_orphans = %s"), exclude_orphans)
LOG.info(_LI("[options] dry_run = %s"), dry_run)
LOG.info("[options] age_in_days = %s", age_in_days)
LOG.info("[options] max_number = %s", max_number)
LOG.info("[options] goal = %s", goal)
LOG.info("[options] exclude_orphans = %s", exclude_orphans)
LOG.info("[options] dry_run = %s", dry_run)
uuid = PurgeCommand.get_goal_uuid(goal)

View File

@@ -102,23 +102,24 @@ class AuditHandler(BaseAuditHandler):
audit.state = state
audit.save()
@staticmethod
def check_ongoing_action_plans(request_context):
a_plan_filters = {'state': objects.action_plan.State.ONGOING}
ongoing_action_plans = objects.ActionPlan.list(
request_context, filters=a_plan_filters)
if ongoing_action_plans:
raise exception.ActionPlanIsOngoing(
action_plan=ongoing_action_plans[0].uuid)
def pre_execute(self, audit, request_context):
LOG.debug("Trigger audit %s", audit.uuid)
self.check_ongoing_action_plans(request_context)
# change state of the audit to ONGOING
self.update_audit_state(audit, objects.audit.State.ONGOING)
def post_execute(self, audit, solution, request_context):
action_plan = self.do_schedule(request_context, audit, solution)
a_plan_filters = {'state': objects.action_plan.State.ONGOING}
ongoing_action_plans = objects.ActionPlan.list(
request_context, filters=a_plan_filters)
if ongoing_action_plans:
action_plan.state = objects.action_plan.State.SUPERSEDED
action_plan.save()
raise exception.ActionPlanIsOngoing(
action_plan=ongoing_action_plans[0].uuid,
new_action_plan=action_plan.uuid)
elif audit.auto_trigger:
if audit.auto_trigger:
applier_client = rpcapi.ApplierAPI()
applier_client.launch_action_plan(request_context,
action_plan.uuid)
@@ -129,7 +130,7 @@ class AuditHandler(BaseAuditHandler):
solution = self.do_execute(audit, request_context)
self.post_execute(audit, solution, request_context)
except exception.ActionPlanIsOngoing as e:
LOG.exception(e)
LOG.warning(e)
if audit.audit_type == objects.audit.AuditType.ONESHOT.value:
self.update_audit_state(audit, objects.audit.State.CANCELLED)
except Exception as e:
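The new check_ongoing_action_plans guard rejects an audit before any work is done when another plan is still ONGOING, instead of superseding the freshly built plan afterwards. A sketch with action plans modeled as plain dicts (the real code queries objects.ActionPlan.list with a state filter):

```python
class ActionPlanIsOngoing(Exception):
    def __init__(self, action_plan):
        super().__init__(
            "Action Plan %s is currently running." % action_plan)


def check_ongoing_action_plans(action_plans):
    # Refuse to start a new audit while any plan is still ONGOING.
    ongoing = [p for p in action_plans if p["state"] == "ONGOING"]
    if ongoing:
        raise ActionPlanIsOngoing(action_plan=ongoing[0]["uuid"])
```

Raising in pre_execute means no new (and immediately SUPERSEDED) plan is ever created while another one runs.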

View File

@@ -49,9 +49,7 @@ class ContinuousAuditHandler(base.AuditHandler):
def _is_audit_inactive(self, audit):
audit = objects.Audit.get_by_uuid(
self.context_show_deleted, audit.uuid)
if audit.state in (objects.audit.State.CANCELLED,
objects.audit.State.DELETED,
objects.audit.State.FAILED):
if objects.audit.AuditStateTransitionManager().is_inactive(audit):
# if the audit is no longer active, its job must be removed to
# prevent an inactive audit from being run in the future.
job_to_delete = [job for job in self.jobs
@@ -72,7 +70,7 @@ class ContinuousAuditHandler(base.AuditHandler):
a_plan_filters = {'audit_uuid': audit.uuid,
'state': objects.action_plan.State.RECOMMENDED}
action_plans = objects.ActionPlan.list(
request_context, filters=a_plan_filters)
request_context, filters=a_plan_filters, eager=True)
for plan in action_plans:
plan.state = objects.action_plan.State.CANCELLED
plan.save()

View File

@@ -215,8 +215,7 @@ class ModelBuilder(object):
compute_node = self.model.get_node_by_uuid(
cnode_uuid)
# Connect the instance to its compute node
self.model.add_edge(
instance, compute_node, label='RUNS_ON')
self.model.map_instance(instance, compute_node)
except exception.ComputeNodeNotFound:
continue
@@ -236,7 +235,8 @@ class ModelBuilder(object):
"disk": flavor.disk,
"disk_capacity": flavor.disk,
"vcpus": flavor.vcpus,
"state": getattr(instance, "OS-EXT-STS:vm_state")}
"state": getattr(instance, "OS-EXT-STS:vm_state"),
"metadata": instance.metadata}
# node_attributes = dict()
# node_attributes["layer"] = "virtual"

View File

@@ -48,6 +48,7 @@ class Instance(compute_resource.ComputeResource):
"disk": wfields.IntegerField(),
"disk_capacity": wfields.NonNegativeIntegerField(),
"vcpus": wfields.NonNegativeIntegerField(),
"metadata": wfields.JsonField(),
}
def accept(self, visitor):

View File

@@ -235,3 +235,10 @@ class ModelRoot(nx.DiGraph, base.Model):
model.add_instance(instance)
return model
@classmethod
def is_isomorphic(cls, G1, G2):
def node_match(node1, node2):
return node1.as_dict() == node2.as_dict()
return nx.algorithms.isomorphism.isomorph.is_isomorphic(
G1, G2, node_match=node_match)

View File

@@ -17,8 +17,6 @@
# limitations under the License.
from oslo_log import log
from watcher._i18n import _LI, _LW
from watcher.common import exception
from watcher.common import nova_helper
from watcher.decision_engine.model import element
@@ -45,8 +43,8 @@ class NovaNotification(base.NotificationEndpoint):
if node_uuid:
self.get_or_create_node(node_uuid)
except exception.ComputeNodeNotFound:
LOG.warning(_LW("Could not find compute node %(node)s for "
"instance %(instance)s"),
LOG.warning("Could not find compute node %(node)s for "
"instance %(instance)s",
dict(node=node_uuid, instance=instance_uuid))
try:
instance = self.cluster_data_model.get_instance_by_uuid(
@@ -67,6 +65,7 @@ class NovaNotification(base.NotificationEndpoint):
memory_mb = instance_flavor_data['memory_mb']
num_cores = instance_flavor_data['vcpus']
disk_gb = instance_flavor_data['root_gb']
instance_metadata = data['nova_object.data']['metadata']
instance.update({
'state': instance_data['state'],
@@ -76,6 +75,7 @@ class NovaNotification(base.NotificationEndpoint):
'vcpus': num_cores,
'disk': disk_gb,
'disk_capacity': disk_gb,
'metadata': instance_metadata,
})
try:
@@ -91,6 +91,7 @@ class NovaNotification(base.NotificationEndpoint):
memory_mb = data['memory_mb']
num_cores = data['vcpus']
disk_gb = data['root_gb']
instance_metadata = data['metadata']
instance.update({
'state': data['state'],
@@ -100,6 +101,7 @@ class NovaNotification(base.NotificationEndpoint):
'vcpus': num_cores,
'disk': disk_gb,
'disk_capacity': disk_gb,
'metadata': instance_metadata,
})
try:
@@ -198,18 +200,18 @@ class NovaNotification(base.NotificationEndpoint):
try:
self.cluster_data_model.delete_instance(instance, node)
except Exception:
LOG.info(_LI("Instance %s already deleted"), instance.uuid)
LOG.info("Instance %s already deleted", instance.uuid)
class VersionnedNotificationEndpoint(NovaNotification):
class VersionedNotificationEndpoint(NovaNotification):
publisher_id_regex = r'^nova-compute.*'
class UnversionnedNotificationEndpoint(NovaNotification):
class UnversionedNotificationEndpoint(NovaNotification):
publisher_id_regex = r'^compute.*'
class ServiceUpdated(VersionnedNotificationEndpoint):
class ServiceUpdated(VersionedNotificationEndpoint):
@property
def filter_rule(self):
@@ -220,8 +222,10 @@ class ServiceUpdated(VersionnedNotificationEndpoint):
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
LOG.info(_LI("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s") %
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -235,7 +239,7 @@ class ServiceUpdated(VersionnedNotificationEndpoint):
LOG.exception(exc)
class InstanceCreated(VersionnedNotificationEndpoint):
class InstanceCreated(VersionedNotificationEndpoint):
@property
def filter_rule(self):
@@ -262,14 +266,15 @@ class InstanceCreated(VersionnedNotificationEndpoint):
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
LOG.info(_LI("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s") %
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
LOG.debug(payload)
instance_data = payload['nova_object.data']
instance_uuid = instance_data['uuid']
node_uuid = instance_data.get('host')
instance = self.get_or_create_instance(instance_uuid, node_uuid)
@@ -277,7 +282,7 @@ class InstanceCreated(VersionnedNotificationEndpoint):
self.update_instance(instance, payload)
class InstanceUpdated(VersionnedNotificationEndpoint):
class InstanceUpdated(VersionedNotificationEndpoint):
@staticmethod
def _match_not_new_instance_state(data):
@@ -296,8 +301,10 @@ class InstanceUpdated(VersionnedNotificationEndpoint):
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
LOG.info(_LI("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s") %
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -310,7 +317,7 @@ class InstanceUpdated(VersionnedNotificationEndpoint):
self.update_instance(instance, payload)
class InstanceDeletedEnd(VersionnedNotificationEndpoint):
class InstanceDeletedEnd(VersionedNotificationEndpoint):
@property
def filter_rule(self):
@@ -321,8 +328,10 @@ class InstanceDeletedEnd(VersionnedNotificationEndpoint):
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
LOG.info(_LI("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s") %
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -343,7 +352,7 @@ class InstanceDeletedEnd(VersionnedNotificationEndpoint):
self.delete_instance(instance, node)
class LegacyInstanceUpdated(UnversionnedNotificationEndpoint):
class LegacyInstanceUpdated(UnversionedNotificationEndpoint):
@property
def filter_rule(self):
@@ -354,8 +363,10 @@ class LegacyInstanceUpdated(UnversionnedNotificationEndpoint):
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
LOG.info(_LI("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s") %
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -368,7 +379,7 @@ class LegacyInstanceUpdated(UnversionnedNotificationEndpoint):
self.legacy_update_instance(instance, payload)
class LegacyInstanceCreatedEnd(UnversionnedNotificationEndpoint):
class LegacyInstanceCreatedEnd(UnversionedNotificationEndpoint):
@property
def filter_rule(self):
@@ -379,8 +390,10 @@ class LegacyInstanceCreatedEnd(UnversionnedNotificationEndpoint):
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
LOG.info(_LI("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s") %
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -393,7 +406,7 @@ class LegacyInstanceCreatedEnd(UnversionnedNotificationEndpoint):
self.legacy_update_instance(instance, payload)
class LegacyInstanceDeletedEnd(UnversionnedNotificationEndpoint):
class LegacyInstanceDeletedEnd(UnversionedNotificationEndpoint):
@property
def filter_rule(self):
@@ -404,8 +417,10 @@ class LegacyInstanceDeletedEnd(UnversionnedNotificationEndpoint):
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
LOG.info(_LI("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s") %
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -424,7 +439,7 @@ class LegacyInstanceDeletedEnd(UnversionnedNotificationEndpoint):
self.delete_instance(instance, node)
class LegacyLiveMigratedEnd(UnversionnedNotificationEndpoint):
class LegacyLiveMigratedEnd(UnversionedNotificationEndpoint):
@property
def filter_rule(self):
@@ -435,8 +450,10 @@ class LegacyLiveMigratedEnd(UnversionnedNotificationEndpoint):
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
LOG.info(_LI("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s") %
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -22,7 +22,6 @@ from oslo_config import cfg
from oslo_config import types
from oslo_log import log
from watcher._i18n import _LW
from watcher.common import utils
from watcher.decision_engine.planner import base
from watcher import objects
@@ -84,18 +83,6 @@ class WeightPlanner(base.BasePlanner):
default=cls.parallelization),
]
@staticmethod
def format_action(action_plan_id, action_type,
input_parameters=None, parents=()):
return {
'uuid': utils.generate_uuid(),
'action_plan_id': int(action_plan_id),
'action_type': action_type,
'input_parameters': input_parameters,
'state': objects.action.State.PENDING,
'parents': parents or None,
}
@staticmethod
def chunkify(lst, n):
"""Yield successive n-sized chunks from lst."""
@@ -164,11 +151,11 @@ class WeightPlanner(base.BasePlanner):
context, action_plan.id, solution.efficacy_indicators)
if len(action_graph.nodes()) == 0:
LOG.warning(_LW("The action plan is empty"))
LOG.warning("The action plan is empty")
action_plan.state = objects.action_plan.State.SUCCEEDED
action_plan.save()
self.create_scheduled_actions(action_plan, action_graph)
self.create_scheduled_actions(action_graph)
return action_plan
def get_sorted_actions_by_weight(self, context, action_plan, solution):
@@ -187,7 +174,7 @@ class WeightPlanner(base.BasePlanner):
return reversed(sorted(weighted_actions.items(), key=lambda x: x[0]))
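The weight ordering above sorts the weight buckets ascending and then reverses, so the heaviest actions come first; a minimal sketch with hypothetical action names:

```python
# Actions grouped under a numeric weight (the names are illustrative only).
weighted_actions = {1: ['migrate-a'], 3: ['disable-node'], 2: ['migrate-b']}

# Sort buckets by weight, then reverse so the highest weight is scheduled first.
ordered = list(reversed(sorted(weighted_actions.items(), key=lambda x: x[0])))
```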
def create_scheduled_actions(self, action_plan, graph):
def create_scheduled_actions(self, graph):
for action in graph.nodes():
LOG.debug("Creating the %s in the Watcher database",
action.action_type)
@@ -20,7 +20,6 @@ from oslo_config import cfg
from oslo_config import types
from oslo_log import log
from watcher._i18n import _LW
from watcher.common import clients
from watcher.common import exception
from watcher.common import nova_helper
@@ -117,7 +116,7 @@ class WorkloadStabilizationPlanner(base.BasePlanner):
scheduled = sorted(to_schedule, key=lambda weight: (weight[0]),
reverse=True)
if len(scheduled) == 0:
LOG.warning(_LW("The action plan is empty"))
LOG.warning("The action plan is empty")
action_plan.state = objects.action_plan.State.SUCCEEDED
action_plan.save()
else:
@@ -37,7 +37,7 @@ class DecisionEngineAPI(service.Service):
if not utils.is_uuid_like(audit_uuid):
raise exception.InvalidUuidOrName(name=audit_uuid)
return self.conductor_client.call(
self.conductor_client.cast(
context, 'trigger_audit', audit_uuid=audit_uuid)
@@ -19,12 +19,17 @@ import datetime
import eventlet
from oslo_log import log
from watcher.common import context
from watcher.common import exception
from watcher.common import scheduling
from watcher.decision_engine.model.collector import manager
from watcher import objects
from watcher import conf
LOG = log.getLogger(__name__)
CONF = conf.CONF
class DecisionEngineSchedulingService(scheduling.BackgroundSchedulerService):
@@ -73,9 +78,20 @@ class DecisionEngineSchedulingService(scheduling.BackgroundSchedulerService):
return _sync
def add_checkstate_job(self):
# 30 minutes interval
interval = CONF.watcher_decision_engine.check_periodic_interval
ap_manager = objects.action_plan.StateManager()
if CONF.watcher_decision_engine.action_plan_expiry != 0:
self.add_job(ap_manager.check_expired, 'interval',
args=[context.make_context()],
seconds=interval,
next_run_time=datetime.datetime.now())
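The check-state job above is only registered when `action_plan_expiry` is non-zero; that conditional registration can be sketched against a stub exposing an APScheduler-style `add_job()` interface (the stub and helper names here are hypothetical, not Watcher's):

```python
class StubScheduler:
    """Minimal stand-in for an APScheduler-style scheduler."""

    def __init__(self):
        self.jobs = []

    def add_job(self, func, trigger, **kwargs):
        # Record the registration instead of actually scheduling anything.
        self.jobs.append((func, trigger, kwargs))


def add_checkstate_job(scheduler, check_expired, interval, expiry):
    # Skip scheduling entirely when action plans never expire (expiry == 0).
    if expiry != 0:
        scheduler.add_job(check_expired, 'interval', seconds=interval)


enabled = StubScheduler()
add_checkstate_job(enabled, lambda: None, 1800, expiry=24)

disabled = StubScheduler()
add_checkstate_job(disabled, lambda: None, 1800, expiry=0)
```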
def start(self):
"""Start service."""
self.add_sync_jobs()
self.add_checkstate_job()
super(DecisionEngineSchedulingService, self).start()
def stop(self):
@@ -16,7 +16,6 @@
from oslo_log import log
from watcher._i18n import _LW
from watcher.common import exception
from watcher.common import nova_helper
from watcher.decision_engine.scope import base
@@ -170,9 +169,9 @@ class DefaultScope(base.BaseScope):
node_name = cluster_model.get_node_by_instance_uuid(
instance_uuid).uuid
except exception.ComputeResourceNotFound:
LOG.warning(_LW("The following instance %s cannot be found. "
"It might be deleted from CDM along with node"
" instance was hosted on."),
LOG.warning("The following instance %s cannot be found. "
"It might be deleted from CDM along with node"
" instance was hosted on.",
instance_uuid)
continue
self.remove_instance(
@@ -39,6 +39,8 @@ which are dynamically loaded by Watcher at launch time.
import abc
import six
from oslo_utils import strutils
from watcher.common import clients
from watcher.common import context
from watcher.common import exception
@@ -264,6 +266,22 @@ class BaseStrategy(loadable.Loadable):
def state_collector(self, s):
self._cluster_state_collector = s
def filter_instances_by_audit_tag(self, instances):
if not self.config.check_optimize_metadata:
return instances
instances_to_migrate = []
for instance in instances:
optimize = True
if instance.metadata:
try:
optimize = strutils.bool_from_string(
instance.metadata.get('optimize'))
except ValueError:
optimize = False
if optimize:
instances_to_migrate.append(instance)
return instances_to_migrate
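The `filter_instances_by_audit_tag` method above keeps an instance unless its `optimize` metadata parses to false; a self-contained sketch, using a simplified stand-in for `oslo_utils.strutils.bool_from_string` and dict-shaped instances (both simplifications are mine, not Watcher's):

```python
def _bool_from_string(value, default=False):
    # Minimal stand-in for oslo_utils.strutils.bool_from_string.
    truthy = ('1', 't', 'true', 'on', 'y', 'yes')
    falsy = ('0', 'f', 'false', 'off', 'n', 'no')
    text = str(value).strip().lower()
    if text in truthy:
        return True
    if text in falsy:
        return False
    return default


def filter_instances_by_optimize_tag(instances, check_metadata=True):
    """Keep instances whose 'optimize' metadata is absent or truthy."""
    if not check_metadata:
        return instances
    instances_to_migrate = []
    for instance in instances:
        optimize = True
        metadata = instance.get('metadata') or {}
        if metadata:
            optimize = _bool_from_string(metadata.get('optimize'))
        if optimize:
            instances_to_migrate.append(instance)
    return instances_to_migrate


vms = [{'name': 'a', 'metadata': {'optimize': 'False'}},
       {'name': 'b', 'metadata': {}},
       {'name': 'c', 'metadata': {'optimize': 'yes'}}]
names = [vm['name'] for vm in filter_instances_by_optimize_tag(vms)]
```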
@six.add_metaclass(abc.ABCMeta)
class DummyBaseStrategy(BaseStrategy):
@@ -35,12 +35,15 @@ migration is possible on your OpenStack cluster.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _, _LE, _LI, _LW
from watcher._i18n import _
from watcher.common import exception
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.datasource import monasca as mon
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -61,6 +64,9 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
monasca=dict(
host_cpu_usage='cpu.percent',
instance_cpu_usage='vm.cpu.utilization_perc'),
gnocchi=dict(
host_cpu_usage='compute.node.cpu.percent',
instance_cpu_usage='cpu_util'),
)
MIGRATION = "migrate"
@@ -87,6 +93,7 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
self._ceilometer = None
self._monasca = None
self._gnocchi = None
# TODO(jed): improve threshold overbooking?
self.threshold_mem = 1
@@ -105,6 +112,10 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
def period(self):
return self.input_parameters.get('period', 7200)
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@classmethod
def get_display_name(cls):
return _("Basic offline consolidation")
@@ -132,6 +143,12 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
"type": "number",
"default": 7200
},
"granularity": {
"description": "The time between two measures in an "
"aggregated timeseries of a metric.",
"type": "number",
"default": 300
},
},
}
@@ -142,7 +159,12 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
"datasource",
help="Data source to use in order to query the needed metrics",
default="ceilometer",
choices=["ceilometer", "monasca"]),
choices=["ceilometer", "monasca", "gnocchi"]),
cfg.BoolOpt(
"check_optimize_metadata",
help="Check optimize metadata field in instance before "
"migration",
default=False),
]
@property
@@ -165,6 +187,16 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
def monasca(self, monasca):
self._monasca = monasca
@property
def gnocchi(self):
if self._gnocchi is None:
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
def check_migration(self, source_node, destination_node,
instance_to_migrate):
"""Check if the migration is possible
@@ -260,6 +292,19 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
period=self.period,
aggregate='avg',
)
elif self.config.datasource == "gnocchi":
resource_id = "%s_%s" % (node.uuid, node.hostname)
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
return self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=metric_name,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
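Unlike the ceilometer path, which passes a `period` directly, the gnocchi branch above derives an explicit `[start_time, stop_time]` window from that period before calling `statistic_aggregation`. The window computation can be sketched with the standard library alone (the helper name is mine):

```python
import datetime


def gnocchi_query_window(period_seconds):
    """Return the (start_time, stop_time) window covering the last period."""
    stop_time = datetime.datetime.utcnow()
    start_time = stop_time - datetime.timedelta(seconds=int(period_seconds))
    return start_time, stop_time


# With the strategy's default period of 7200s, the window spans two hours.
start, stop = gnocchi_query_window(7200)
```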
elif self.config.datasource == "monasca":
statistics = self.monasca.statistic_aggregation(
meter_name=metric_name,
@@ -289,6 +334,18 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
period=self.period,
aggregate='avg'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
return self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=metric_name,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean',
)
elif self.config.datasource == "monasca":
statistics = self.monasca.statistic_aggregation(
meter_name=metric_name,
@@ -319,11 +376,11 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
if host_avg_cpu_util is None:
resource_id = "%s_%s" % (node.uuid, node.hostname)
LOG.error(
_LE("No values returned by %(resource_id)s "
"for %(metric_name)s") % dict(
resource_id=resource_id,
metric_name=self.METRIC_NAMES[
self.config.datasource]['host_cpu_usage']))
"No values returned by %(resource_id)s "
"for %(metric_name)s" % dict(
resource_id=resource_id,
metric_name=self.METRIC_NAMES[
self.config.datasource]['host_cpu_usage']))
host_avg_cpu_util = 100
total_cores_used = node.vcpus * (host_avg_cpu_util / 100.0)
@@ -339,11 +396,11 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
instance_cpu_utilization = self.get_instance_cpu_usage(instance)
if instance_cpu_utilization is None:
LOG.error(
_LE("No values returned by %(resource_id)s "
"for %(metric_name)s") % dict(
resource_id=instance.uuid,
metric_name=self.METRIC_NAMES[
self.config.datasource]['instance_cpu_usage']))
"No values returned by %(resource_id)s "
"for %(metric_name)s" % dict(
resource_id=instance.uuid,
metric_name=self.METRIC_NAMES[
self.config.datasource]['instance_cpu_usage']))
instance_cpu_utilization = 100
total_cores_used = instance.vcpus * (instance_cpu_utilization / 100.0)
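Both the host and instance scores above convert a percentage CPU utilization into an estimated count of busy cores; the arithmetic is simply:

```python
def cores_used(vcpus, cpu_util_percent):
    """Estimated number of busy cores at the given utilization percentage."""
    return vcpus * (cpu_util_percent / 100.0)


# An 8-vCPU node at 25% average CPU utilization is using about 2 cores.
used = cores_used(8, 25.0)
```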
@@ -385,9 +442,10 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
def node_and_instance_score(self, sorted_scores):
"""Get List of VMs from node"""
node_to_release = sorted_scores[len(sorted_scores) - 1][0]
instances_to_migrate = self.compute_model.get_node_instances(
instances = self.compute_model.get_node_instances(
self.compute_model.get_node_by_uuid(node_to_release))
instances_to_migrate = self.filter_instances_by_audit_tag(instances)
instance_score = []
for instance in instances_to_migrate:
if instance.state == element.InstanceState.ACTIVE.value:
@@ -439,7 +497,7 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
return unsuccessful_migration + 1
def pre_execute(self):
LOG.info(_LI("Initializing Server Consolidation"))
LOG.info("Initializing Server Consolidation")
if not self.compute_model:
raise exception.ClusterStateNotDefined()
@@ -461,9 +519,9 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
LOG.debug("Compute node(s) BFD %s", sorted_scores)
# Get Node to be released
if len(scores) == 0:
LOG.warning(_LW(
LOG.warning(
"The workloads of the compute nodes"
" of the cluster is zero"))
" of the cluster is zero")
return
while sorted_scores and (
@@ -28,11 +28,14 @@ Outlet (Exhaust Air) Temperature is one of the important thermal
telemetries to measure thermal/workload status of server.
"""
import datetime
from oslo_log import log
from watcher._i18n import _, _LW, _LI
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -71,9 +74,15 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
""" # noqa
# The meter to report outlet temperature in ceilometer
METER_NAME = "hardware.ipmi.node.outlet_temperature"
MIGRATION = "migrate"
METRIC_NAMES = dict(
ceilometer=dict(
host_outlet_temp='hardware.ipmi.node.outlet_temperature'),
gnocchi=dict(
host_outlet_temp='hardware.ipmi.node.outlet_temperature'),
)
def __init__(self, config, osc=None):
"""Outlet temperature control using live migration
@@ -83,8 +92,8 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
:type osc: :py:class:`~.OpenStackClients` instance, optional
"""
super(OutletTempControl, self).__init__(config, osc)
self._meter = self.METER_NAME
self._ceilometer = None
self._gnocchi = None
@classmethod
def get_name(cls):
@@ -98,6 +107,10 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
def get_translatable_display_name(cls):
return "Outlet temperature based strategy"
@property
def period(self):
return self.input_parameters.get('period', 30)
@classmethod
def get_schema(cls):
# Mandatory default setting for each element
@@ -108,6 +121,18 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
"type": "number",
"default": 35.0
},
"period": {
"description": "The time interval in seconds for "
"getting statistic aggregation",
"type": "number",
"default": 30
},
"granularity": {
"description": "The time between two measures in an "
"aggregated timeseries of a metric.",
"type": "number",
"default": 300
},
},
}
@@ -121,6 +146,20 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, g):
self._gnocchi = g
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
def calc_used_resource(self, node):
"""Calculate the used vcpus, memory and disk based on VM flavors"""
instances = self.compute_model.get_node_instances(node)
@@ -143,17 +182,34 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
hosts_need_release = []
hosts_target = []
metric_name = self.METRIC_NAMES[
self.config.datasource]['host_outlet_temp']
for node in nodes.values():
resource_id = node.uuid
outlet_temp = None
outlet_temp = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=self._meter,
period="30",
aggregate='avg')
if self.config.datasource == "ceilometer":
outlet_temp = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=metric_name,
period=self.period,
aggregate='avg'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
outlet_temp = self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=metric_name,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
# some hosts may not have outlet temp meters, remove from target
if outlet_temp is None:
LOG.warning(_LW("%s: no outlet temp data"), resource_id)
LOG.warning("%s: no outlet temp data", resource_id)
continue
LOG.debug("%s: outlet temperature %f" % (resource_id, outlet_temp))
@@ -176,13 +232,13 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
# select the first active instance to migrate
if (instance.state !=
element.InstanceState.ACTIVE.value):
LOG.info(_LI("Instance not active, skipped: %s"),
LOG.info("Instance not active, skipped: %s",
instance.uuid)
continue
return mig_source_node, instance
except wexc.InstanceNotFound as e:
LOG.exception(e)
LOG.info(_LI("Instance not found"))
LOG.info("Instance not found")
return None
@@ -233,7 +289,7 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
return self.solution
if len(hosts_target) == 0:
LOG.warning(_LW("No hosts under outlet temp threshold found"))
LOG.warning("No hosts under outlet temp threshold found")
return self.solution
# choose the server with highest outlet t
@@ -254,7 +310,7 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
if len(dest_servers) == 0:
# TODO(zhenzanz): maybe to warn that there's no resource
# for instance.
LOG.info(_LI("No proper target host could be found"))
LOG.info("No proper target host could be found")
return self.solution
dest_servers = sorted(dest_servers, key=lambda x: (x["outlet_temp"]))
@@ -42,12 +42,15 @@ airflow is higher than the specified threshold.
- It assumes that live migrations are possible.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _, _LI, _LW
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -80,15 +83,28 @@ class UniformAirflow(base.BaseStrategy):
- It assumes that live migrations are possible.
"""
# The meter to report Airflow of physical server in ceilometer
METER_NAME_AIRFLOW = "hardware.ipmi.node.airflow"
# The meter to report inlet temperature of physical server in ceilometer
METER_NAME_INLET_T = "hardware.ipmi.node.temperature"
# The meter to report system power of physical server in ceilometer
METER_NAME_POWER = "hardware.ipmi.node.power"
# choose 300 seconds as the default duration of meter aggregation
PERIOD = 300
METRIC_NAMES = dict(
ceilometer=dict(
# The meter to report Airflow of physical server in ceilometer
host_airflow='hardware.ipmi.node.airflow',
# The meter to report inlet temperature of physical server
# in ceilometer
host_inlet_temp='hardware.ipmi.node.temperature',
# The meter to report system power of physical server in ceilometer
host_power='hardware.ipmi.node.power'),
gnocchi=dict(
# The meter to report Airflow of physical server in gnocchi
host_airflow='hardware.ipmi.node.airflow',
# The meter to report inlet temperature of physical server
# in gnocchi
host_inlet_temp='hardware.ipmi.node.temperature',
# The meter to report system power of physical server in gnocchi
host_power='hardware.ipmi.node.power'),
)
MIGRATION = "migrate"
def __init__(self, config, osc=None):
@@ -101,10 +117,14 @@ class UniformAirflow(base.BaseStrategy):
super(UniformAirflow, self).__init__(config, osc)
# The migration plan will be triggered when the airflow reaches
# threshold
self.meter_name_airflow = self.METER_NAME_AIRFLOW
self.meter_name_inlet_t = self.METER_NAME_INLET_T
self.meter_name_power = self.METER_NAME_POWER
self.meter_name_airflow = self.METRIC_NAMES[
self.config.datasource]['host_airflow']
self.meter_name_inlet_t = self.METRIC_NAMES[
self.config.datasource]['host_inlet_temp']
self.meter_name_power = self.METRIC_NAMES[
self.config.datasource]['host_power']
self._ceilometer = None
self._gnocchi = None
self._period = self.PERIOD
@property
@@ -117,6 +137,16 @@ class UniformAirflow(base.BaseStrategy):
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, g):
self._gnocchi = g
@classmethod
def get_name(cls):
return "uniform_airflow"
@@ -133,6 +163,10 @@ class UniformAirflow(base.BaseStrategy):
def get_goal_name(cls):
return "airflow_optimization"
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@classmethod
def get_schema(cls):
# Mandatory default setting for each element
@@ -161,9 +195,25 @@ class UniformAirflow(base.BaseStrategy):
"type": "number",
"default": 300
},
"granularity": {
"description": "The time between two measures in an "
"aggregated timeseries of a metric.",
"type": "number",
"default": 300
},
},
}
@classmethod
def get_config_opts(cls):
return [
cfg.StrOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
default="ceilometer",
choices=["ceilometer", "gnocchi"])
]
def calculate_used_resource(self, node):
"""Compute the used vcpus, memory and disk based on instance flavors"""
instances = self.compute_model.get_node_instances(node)
@@ -188,16 +238,35 @@ class UniformAirflow(base.BaseStrategy):
source_instances = self.compute_model.get_node_instances(
source_node)
if source_instances:
inlet_t = self.ceilometer.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_inlet_t,
period=self._period,
aggregate='avg')
power = self.ceilometer.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_power,
period=self._period,
aggregate='avg')
if self.config.datasource == "ceilometer":
inlet_t = self.ceilometer.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_inlet_t,
period=self._period,
aggregate='avg')
power = self.ceilometer.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_power,
period=self._period,
aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self._period))
inlet_t = self.gnocchi.statistic_aggregation(
resource_id=source_node.uuid,
metric=self.meter_name_inlet_t,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean')
power = self.gnocchi.statistic_aggregation(
resource_id=source_node.uuid,
metric=self.meter_name_power,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean')
if (power < self.threshold_power and
inlet_t < self.threshold_inlet_t):
# hardware issue, migrate all instances from this node
@@ -210,13 +279,13 @@ class UniformAirflow(base.BaseStrategy):
if (instance.state !=
element.InstanceState.ACTIVE.value):
LOG.info(
_LI("Instance not active, skipped: %s"),
"Instance not active, skipped: %s",
instance.uuid)
continue
instances_tobe_migrate.append(instance)
return source_node, instances_tobe_migrate
else:
LOG.info(_LI("Instance not found on node: %s"),
LOG.info("Instance not found on node: %s",
source_node.uuid)
def filter_destination_hosts(self, hosts, instances_to_migrate):
@@ -257,8 +326,8 @@ class UniformAirflow(base.BaseStrategy):
break
# check if all instances have target hosts
if len(destination_hosts) != len(instances_to_migrate):
LOG.warning(_LW("Not all target hosts could be found; it might "
"be because there is not enough resource"))
LOG.warning("Not all target hosts could be found; it might "
"be because there is not enough resource")
return None
return destination_hosts
@@ -271,17 +340,30 @@ class UniformAirflow(base.BaseStrategy):
overload_hosts = []
nonoverload_hosts = []
for node_id in nodes:
airflow = None
node = self.compute_model.get_node_by_uuid(
node_id)
resource_id = node.uuid
airflow = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=self.meter_name_airflow,
period=self._period,
aggregate='avg')
if self.config.datasource == "ceilometer":
airflow = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=self.meter_name_airflow,
period=self._period,
aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self._period))
airflow = self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=self.meter_name_airflow,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean')
# some hosts may not have airflow meter, remove from target
if airflow is None:
LOG.warning(_LW("%s: no airflow data"), resource_id)
LOG.warning("%s: no airflow data", resource_id)
continue
LOG.debug("%s: airflow %f" % (resource_id, airflow))
@@ -316,9 +398,9 @@ class UniformAirflow(base.BaseStrategy):
return self.solution
if not target_nodes:
LOG.warning(_LW("No hosts currently have airflow under %s, "
"therefore there are no possible target "
"hosts for any migration"),
LOG.warning("No hosts currently have airflow under %s, "
"therefore there are no possible target "
"hosts for any migration",
self.threshold_airflow)
return self.solution
@@ -337,8 +419,8 @@ class UniformAirflow(base.BaseStrategy):
destination_hosts = self.filter_destination_hosts(
target_nodes, instances_src)
if not destination_hosts:
LOG.warning(_LW("No target host could be found; it might "
"be because there is not enough resources"))
LOG.warning("No target host could be found; it might "
"be because there is not enough resources")
return self.solution
# generate solution to migrate the instance to the dest server,
for info in destination_hosts:
@@ -52,13 +52,16 @@ correctly on all compute nodes within the cluster.
This strategy assumes it is possible to live migrate any VM from
an active compute node to any other active compute node.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
import six
from watcher._i18n import _, _LE, _LI
from watcher._i18n import _
from watcher.common import exception
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -68,12 +71,33 @@ LOG = log.getLogger(__name__)
class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
"""VM Workload Consolidation Strategy"""
HOST_CPU_USAGE_METRIC_NAME = 'compute.node.cpu.percent'
INSTANCE_CPU_USAGE_METRIC_NAME = 'cpu_util'
METRIC_NAMES = dict(
ceilometer=dict(
cpu_util_metric='cpu_util',
ram_util_metric='memory.usage',
ram_alloc_metric='memory',
disk_alloc_metric='disk.root.size'),
gnocchi=dict(
cpu_util_metric='cpu_util',
ram_util_metric='memory.usage',
ram_alloc_metric='memory',
disk_alloc_metric='disk.root.size'),
)
MIGRATION = "migrate"
CHANGE_NOVA_SERVICE_STATE = "change_nova_service_state"
def __init__(self, config, osc=None):
super(VMWorkloadConsolidation, self).__init__(config, osc)
self._ceilometer = None
self._gnocchi = None
self.number_of_migrations = 0
self.number_of_released_nodes = 0
self.ceilometer_instance_data_cache = dict()
# self.ceilometer_instance_data_cache = dict()
self.datasource_instance_data_cache = dict()
@classmethod
def get_name(cls):
@@ -87,6 +111,10 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
def get_translatable_display_name(cls):
return "VM Workload Consolidation Strategy"
@property
def period(self):
return self.input_parameters.get('period', 3600)
@property
def ceilometer(self):
if self._ceilometer is None:
@@ -97,6 +125,50 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
def ceilometer(self, ceilometer):
self._ceilometer = ceilometer
@property
def gnocchi(self):
if self._gnocchi is None:
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@classmethod
def get_schema(cls):
# Mandatory default setting for each element
return {
"properties": {
"period": {
"description": "The time interval in seconds for "
"getting statistic aggregation",
"type": "number",
"default": 3600
},
"granularity": {
"description": "The time between two measures in an "
"aggregated timeseries of a metric.",
"type": "number",
"default": 300
},
}
}
@classmethod
def get_config_opts(cls):
return [
cfg.StrOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
default="ceilometer",
choices=["ceilometer", "gnocchi"])
]
def get_state_str(self, state):
"""Get resource state in string format.
@@ -107,10 +179,10 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
elif isinstance(state, (element.InstanceState, element.ServiceState)):
return state.value
else:
LOG.error(_LE('Unexpexted resource state type, '
'state=%(state)s, state_type=%(st)s.') % dict(
state=state,
st=type(state)))
LOG.error('Unexpected resource state type, '
'state=%(state)s, state_type=%(st)s.' %
dict(state=state,
st=type(state)))
raise exception.WatcherException
def add_action_enable_compute_node(self, node):
@@ -121,7 +193,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
"""
params = {'state': element.ServiceState.ENABLED.value}
self.solution.add_action(
action_type='change_nova_service_state',
action_type=self.CHANGE_NOVA_SERVICE_STATE,
resource_id=node.uuid,
input_parameters=params)
self.number_of_released_nodes -= 1
@@ -134,7 +206,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
"""
params = {'state': element.ServiceState.DISABLED.value}
self.solution.add_action(
action_type='change_nova_service_state',
action_type=self.CHANGE_NOVA_SERVICE_STATE,
resource_id=node.uuid,
input_parameters=params)
self.number_of_released_nodes += 1
@@ -149,15 +221,15 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
"""
instance_state_str = self.get_state_str(instance.state)
if instance_state_str != element.InstanceState.ACTIVE.value:
# Watcher curently only supports live VM migration and block live
# Watcher currently only supports live VM migration and block live
# VM migration which both requires migrated VM to be active.
# When supported, the cold migration may be used as a fallback
# migration mechanism to move non active VMs.
LOG.error(
_LE('Cannot live migrate: instance_uuid=%(instance_uuid)s, '
'state=%(instance_state)s.') % dict(
instance_uuid=instance.uuid,
instance_state=instance_state_str))
'Cannot live migrate: instance_uuid=%(instance_uuid)s, '
'state=%(instance_state)s.' % dict(
instance_uuid=instance.uuid,
instance_state=instance_state_str))
return
migration_type = 'live'
@@ -171,13 +243,13 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
params = {'migration_type': migration_type,
'source_node': source_node.uuid,
'destination_node': destination_node.uuid}
self.solution.add_action(action_type='migrate',
self.solution.add_action(action_type=self.MIGRATION,
resource_id=instance.uuid,
input_parameters=params)
self.number_of_migrations += 1
def disable_unused_nodes(self):
"""Generate actions for disablity of unused nodes.
"""Generate actions for disabling unused nodes.
:return: None
"""
@@ -187,62 +259,101 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
element.ServiceState.DISABLED.value):
self.add_action_disable_node(node)
def get_instance_utilization(self, instance,
period=3600, aggr='avg'):
def get_instance_utilization(self, instance):
"""Collect cpu, ram and disk utilization statistics of a VM.
:param instance: instance object
:param period: seconds
:param aggr: string
:return: dict(cpu(number of vcpus used), ram(MB used), disk(B used))
"""
if instance.uuid in self.ceilometer_instance_data_cache.keys():
return self.ceilometer_instance_data_cache.get(instance.uuid)
instance_cpu_util = None
instance_ram_util = None
instance_disk_util = None
cpu_util_metric = 'cpu_util'
ram_util_metric = 'memory.usage'
if instance.uuid in self.datasource_instance_data_cache.keys():
return self.datasource_instance_data_cache.get(instance.uuid)
ram_alloc_metric = 'memory'
disk_alloc_metric = 'disk.root.size'
instance_cpu_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=cpu_util_metric,
period=period, aggregate=aggr)
cpu_util_metric = self.METRIC_NAMES[
self.config.datasource]['cpu_util_metric']
ram_util_metric = self.METRIC_NAMES[
self.config.datasource]['ram_util_metric']
ram_alloc_metric = self.METRIC_NAMES[
self.config.datasource]['ram_alloc_metric']
disk_alloc_metric = self.METRIC_NAMES[
self.config.datasource]['disk_alloc_metric']
if self.config.datasource == "ceilometer":
instance_cpu_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=cpu_util_metric,
period=self.period, aggregate='avg')
instance_ram_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=ram_util_metric,
period=self.period, aggregate='avg')
if not instance_ram_util:
instance_ram_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=ram_alloc_metric,
period=self.period, aggregate='avg')
instance_disk_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=disk_alloc_metric,
period=self.period, aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
instance_cpu_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=cpu_util_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
instance_ram_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=ram_util_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
if not instance_ram_util:
instance_ram_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=ram_alloc_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
instance_disk_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=disk_alloc_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
if instance_cpu_util:
total_cpu_utilization = (
instance.vcpus * (instance_cpu_util / 100.0))
else:
total_cpu_utilization = instance.vcpus
instance_ram_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=ram_util_metric,
period=period, aggregate=aggr)
if not instance_ram_util:
instance_ram_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=ram_alloc_metric,
period=period, aggregate=aggr)
instance_disk_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=disk_alloc_metric,
period=period, aggregate=aggr)
if not instance_ram_util or not instance_disk_util:
LOG.error(
_LE('No values returned by %s for memory.usage '
'or disk.root.size'), instance.uuid)
'No values returned by %s for memory.usage '
'or disk.root.size', instance.uuid)
raise exception.NoDataFound
self.ceilometer_instance_data_cache[instance.uuid] = dict(
self.datasource_instance_data_cache[instance.uuid] = dict(
cpu=total_cpu_utilization, ram=instance_ram_util,
disk=instance_disk_util)
return self.ceilometer_instance_data_cache.get(instance.uuid)
return self.datasource_instance_data_cache.get(instance.uuid)
def get_node_utilization(self, node, period=3600, aggr='avg'):
def get_node_utilization(self, node):
"""Collect cpu, ram and disk utilization statistics of a node.
:param node: node object
:param period: seconds
:param aggr: string
:return: dict(cpu(number of cores used), ram(MB used), disk(B used))
"""
@@ -252,7 +363,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
node_cpu_util = 0
for instance in node_instances:
instance_util = self.get_instance_utilization(
instance, period, aggr)
instance)
node_cpu_util += instance_util['cpu']
node_ram_util += instance_util['ram']
node_disk_util += instance_util['disk']
@@ -357,7 +468,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
"""
migrate_actions = (
a for a in self.solution.actions if a[
'action_type'] == 'migrate')
'action_type'] == self.MIGRATION)
instance_to_be_migrated = (
a['input_parameters']['resource_id'] for a in migrate_actions)
instance_uuids = list(set(instance_to_be_migrated))
@@ -387,11 +498,11 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
Offload phase performing first-fit based bin packing to offload
overloaded nodes. This is done in a fashion of moving
the least CPU utilized VM first, as live migrating these
generaly causes less troubles. This phase results in a cluster
generally causes less troubles. This phase results in a cluster
with no overloaded nodes.
* This phase is able to enable disabled nodes (if needed
and any available) in case the resource capacity provided by
active nodes is not able to accomodate all the load.
active nodes is not able to accommodate all the load.
As the offload phase is later followed by the consolidation phase,
the node enabler in this phase doesn't necessarily result
in more enabled nodes in the final solution.
@@ -424,9 +535,9 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
Consolidation phase performing first-fit based bin packing.
First, nodes with the lowest cpu utilization are consolidated
by moving their load to nodes with the highest cpu utilization
which can accomodate the load. In this phase the most cpu utilizied
VMs are prioritizied as their load is more difficult to accomodate
in the system than less cpu utilizied VMs which can be later used
which can accommodate the load. In this phase the most cpu utilized
VMs are prioritized as their load is more difficult to accommodate
in the system than less cpu utilized VMs which can be later used
to fill smaller CPU capacity gaps.
:param cc: dictionary containing resource capacity coefficients
@@ -475,7 +586,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
:param original_model: root_model object
"""
LOG.info(_LI('Executing Smart Strategy'))
LOG.info('Executing Smart Strategy')
rcu = self.get_relative_cluster_utilization()
cc = {'cpu': 1.0, 'ram': 1.0, 'disk': 1.0}
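Every gnocchi branch added in this strategy derives the same UTC time window from the configured `period` before calling `statistic_aggregation`. A minimal, stdlib-only sketch of that windowing (the function name is illustrative, not a Watcher API):

```python
import datetime

def gnocchi_window(period_seconds):
    """Compute the [start_time, stop_time] UTC window queried by the
    gnocchi branches above: stop is "now", start is `period` seconds
    earlier."""
    stop_time = datetime.datetime.utcnow()
    start_time = stop_time - datetime.timedelta(seconds=int(period_seconds))
    return start_time, stop_time

start, stop = gnocchi_window(3600)
assert (stop - start).total_seconds() == 3600.0
```

The real code then passes this window along with `granularity` (default 300 seconds) to `GnocchiHelper.statistic_aggregation` with `aggregation='mean'`.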

View File

@@ -46,12 +46,15 @@ hosts nodes.
algorithm with `CONTINUOUS` audits.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _, _LE, _LI, _LW
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -104,6 +107,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
# reaches threshold
self._meter = self.METER_NAME
self._ceilometer = None
self._gnocchi = None
@property
def ceilometer(self):
@@ -115,6 +119,16 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
@classmethod
def get_name(cls):
return "workload_balance"
@@ -127,6 +141,10 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
def get_translatable_display_name(cls):
return "Workload Balance Migration Strategy"
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@classmethod
def get_schema(cls):
# Mandatory default setting for each element
@@ -142,9 +160,25 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
"type": "number",
"default": 300
},
"granularity": {
"description": "The time between two measures in an "
"aggregated timeseries of a metric.",
"type": "number",
"default": 300
},
},
}
@classmethod
def get_config_opts(cls):
return [
cfg.StrOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
default="ceilometer",
choices=["ceilometer", "gnocchi"])
]
def calculate_used_resource(self, node):
"""Calculate the used vcpus, memory and disk based on VM flavors"""
instances = self.compute_model.get_node_instances(node)
@@ -187,14 +221,14 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
min_delta = current_delta
instance_id = instance.uuid
except wexc.InstanceNotFound:
LOG.error(_LE("Instance not found; error: %s"),
LOG.error("Instance not found; error: %s",
instance_id)
if instance_id:
return (source_node,
self.compute_model.get_instance_by_uuid(
instance_id))
else:
LOG.info(_LI("VM not found from node: %s"),
LOG.info("VM not found from node: %s",
source_node.uuid)
def filter_destination_hosts(self, hosts, instance_to_migrate,
@@ -251,15 +285,30 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
instances = self.compute_model.get_node_instances(node)
node_workload = 0.0
for instance in instances:
cpu_util = None
try:
cpu_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=self._meter,
period=self._period,
aggregate='avg')
if self.config.datasource == "ceilometer":
cpu_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=self._meter,
period=self._period,
aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self._period))
cpu_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=self._meter,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
except Exception as exc:
LOG.exception(exc)
LOG.error(_LE("Can not get cpu_util from Ceilometer"))
LOG.error("Can not get cpu_util from %s",
self.config.datasource)
continue
if cpu_util is None:
LOG.debug("Instance (%s): cpu_util is None", instance.uuid)
@@ -289,7 +338,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
This can be used to fetch some pre-requisites or data.
"""
LOG.info(_LI("Initializing Workload Balance Strategy"))
LOG.info("Initializing Workload Balance Strategy")
if not self.compute_model:
raise wexc.ClusterStateNotDefined()
@@ -314,9 +363,9 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
return self.solution
if not target_nodes:
LOG.warning(_LW("No hosts current have CPU utilization under %s "
"percent, therefore there are no possible target "
"hosts for any migration"),
LOG.warning("No hosts current have CPU utilization under %s "
"percent, therefore there are no possible target "
"hosts for any migration",
self.threshold)
return self.solution
@@ -337,8 +386,8 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
# pick up the lowest one as dest server
if not destination_hosts:
# for instance.
LOG.warning(_LW("No proper target host could be found, it might "
"be because of there's no enough CPU/Memory/DISK"))
LOG.warning("No proper target host could be found, it might "
"be because of there's no enough CPU/Memory/DISK")
return self.solution
destination_hosts = sorted(destination_hosts,
key=lambda x: (x["cpu_util"]))
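Stripped of the client calls, the datasource switch added to `calculate_host_workload` above is just an if/elif on `config.datasource`. A minimal sketch with stub query callables (the names below are illustrative stand-ins, not Watcher helpers):

```python
def query_cpu_util(datasource, ceilometer_query, gnocchi_query):
    """Dispatch to the configured datasource, mirroring the diff above.

    ceilometer_query / gnocchi_query stand in for the real helper
    objects; only the branching logic is taken from the source.
    """
    if datasource == "ceilometer":
        return ceilometer_query(period=300, aggregate='avg')
    elif datasource == "gnocchi":
        return gnocchi_query(granularity=300, aggregation='mean')
    raise ValueError("unknown datasource: %s" % datasource)

assert query_cpu_util("ceilometer", lambda **kw: 42.0, lambda **kw: 7.0) == 42.0
assert query_cpu_util("gnocchi", lambda **kw: 42.0, lambda **kw: 7.0) == 7.0
```

In the strategy itself the valid values are constrained up front by the `choices=["ceilometer", "gnocchi"]` argument of the `datasource` config option, so the fallthrough branch is never reached.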

View File

@@ -28,6 +28,7 @@ It assumes that live migrations are possible in your cluster.
"""
import copy
import datetime
import itertools
import math
import random
@@ -38,9 +39,10 @@ from oslo_config import cfg
from oslo_log import log
import oslo_utils
from watcher._i18n import _LI, _LW, _
from watcher._i18n import _
from watcher.common import exception
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -72,6 +74,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
"""
super(WorkloadStabilization, self).__init__(config, osc)
self._ceilometer = None
self._gnocchi = None
self._nova = None
self.weights = None
self.metrics = None
@@ -93,6 +96,10 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
def get_translatable_display_name(cls):
return "Workload stabilization"
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@classmethod
def get_schema(cls):
return {
@@ -149,10 +156,26 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
" ones.",
"type": "object",
"default": {"instance": 720, "node": 600}
}
},
"granularity": {
"description": "The time between two measures in an "
"aggregated timeseries of a metric.",
"type": "number",
"default": 300
},
}
}
@classmethod
def get_config_opts(cls):
return [
cfg.StrOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
default="ceilometer",
choices=["ceilometer", "gnocchi"])
]
@property
def ceilometer(self):
if self._ceilometer is None:
@@ -173,6 +196,16 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
def transform_instance_cpu(self, instance_load, host_vcpus):
"""Transform instance cpu utilization to overall host cpu utilization.
@@ -186,7 +219,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
@MEMOIZE
def get_instance_load(self, instance):
"""Gathering instance load through ceilometer statistic.
"""Gathering instance load through ceilometer/gnocchi statistic.
:param instance: instance for which statistic is gathered.
:return: dict
@@ -194,18 +227,31 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
LOG.debug('get_instance_load started')
instance_load = {'uuid': instance.uuid, 'vcpus': instance.vcpus}
for meter in self.metrics:
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=meter,
period=self.periods['instance'],
aggregate='min'
)
avg_meter = None
if self.config.datasource == "ceilometer":
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=meter,
period=self.periods['instance'],
aggregate='min'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.periods['instance']))
avg_meter = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=meter,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
if avg_meter is None:
LOG.warning(
_LW("No values returned by %(resource_id)s "
"for %(metric_name)s") % dict(
resource_id=instance.uuid,
metric_name=meter))
"No values returned by %(resource_id)s "
"for %(metric_name)s" % dict(
resource_id=instance.uuid, metric_name=meter))
avg_meter = 0
if meter == 'cpu_util':
avg_meter /= float(100)
@@ -233,21 +279,34 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
for node_id, node in self.get_available_nodes().items():
hosts_load[node_id] = {}
hosts_load[node_id]['vcpus'] = node.vcpus
for metric in self.metrics:
resource_id = ''
avg_meter = None
meter_name = self.instance_metrics[metric]
if re.match('^compute.node', meter_name) is not None:
resource_id = "%s_%s" % (node.uuid, node.hostname)
else:
resource_id = node_id
if self.config.datasource == "ceilometer":
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=self.instance_metrics[metric],
period=self.periods['node'],
aggregate='avg'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.periods['node']))
avg_meter = self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=self.instance_metrics[metric],
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=self.instance_metrics[metric],
period=self.periods['node'],
aggregate='avg'
)
if avg_meter is None:
raise exception.NoSuchMetricForHost(
metric=meter_name,
@@ -399,7 +458,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
return self.solution
def pre_execute(self):
LOG.info(_LI("Initializing Workload Stabilization"))
LOG.info("Initializing Workload Stabilization")
if not self.compute_model:
raise exception.ClusterStateNotDefined()
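`get_instance_load` above treats a missing sample as zero load and converts `cpu_util` from a percentage into a 0..1 fraction before weighting. That post-processing in isolation:

```python
def normalize_meter(meter, avg_meter):
    """Mirror the post-processing in get_instance_load: a missing
    sample counts as zero, and cpu_util (a percentage) becomes a
    0..1 fraction."""
    if avg_meter is None:
        avg_meter = 0
    if meter == 'cpu_util':
        avg_meter /= float(100)
    return avg_meter

assert normalize_meter('cpu_util', 50.0) == 0.5
assert normalize_meter('memory.resident', 512) == 512
assert normalize_meter('cpu_util', None) == 0.0
```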

View File

@@ -19,7 +19,6 @@ import collections
from oslo_log import log
from watcher._i18n import _LI, _LW
from watcher.common import context
from watcher.decision_engine.loading import default
from watcher.decision_engine.scoring import scoring_factory
@@ -136,7 +135,7 @@ class Syncer(object):
for goal_name, goal_map in goals_map.items():
if goal_map in self.available_goals_map:
LOG.info(_LI("Goal %s already exists"), goal_name)
LOG.info("Goal %s already exists", goal_name)
continue
self.goal_mapping.update(self._sync_goal(goal_map))
@@ -145,14 +144,14 @@ class Syncer(object):
if (strategy_map in self.available_strategies_map and
strategy_map.goal_name not in
[g.name for g in self.goal_mapping.values()]):
LOG.info(_LI("Strategy %s already exists"), strategy_name)
LOG.info("Strategy %s already exists", strategy_name)
continue
self.strategy_mapping.update(self._sync_strategy(strategy_map))
for se_name, se_map in scoringengines_map.items():
if se_map in self.available_scoringengines_map:
LOG.info(_LI("Scoring Engine %s already exists"),
LOG.info("Scoring Engine %s already exists",
se_name)
continue
@@ -177,7 +176,7 @@ class Syncer(object):
indicator._asdict()
for indicator in goal_map.efficacy_specification]
goal.create()
LOG.info(_LI("Goal %s created"), goal_name)
LOG.info("Goal %s created", goal_name)
# Updating the internal states
self.available_goals_map[goal] = goal_map
@@ -208,7 +207,7 @@ class Syncer(object):
strategy.goal_id = objects.Goal.get_by_name(self.ctx, goal_name).id
strategy.parameters_spec = parameters_spec
strategy.create()
LOG.info(_LI("Strategy %s created"), strategy_name)
LOG.info("Strategy %s created", strategy_name)
# Updating the internal states
self.available_strategies_map[strategy] = strategy_map
@@ -233,7 +232,7 @@ class Syncer(object):
scoringengine.description = scoringengine_map.description
scoringengine.metainfo = scoringengine_map.metainfo
scoringengine.create()
LOG.info(_LI("Scoring Engine %s created"), scoringengine_name)
LOG.info("Scoring Engine %s created", scoringengine_name)
# Updating the internal states
self.available_scoringengines_map[scoringengine] = \
@@ -270,17 +269,17 @@ class Syncer(object):
# and soft delete stale audits and action plans
for stale_audit_template in self.stale_audit_templates_map.values():
stale_audit_template.save()
LOG.info(_LI("Audit Template '%s' synced"),
LOG.info("Audit Template '%s' synced",
stale_audit_template.name)
for stale_audit in self.stale_audits_map.values():
stale_audit.save()
LOG.info(_LI("Stale audit '%s' synced and cancelled"),
LOG.info("Stale audit '%s' synced and cancelled",
stale_audit.uuid)
for stale_action_plan in self.stale_action_plans_map.values():
stale_action_plan.save()
LOG.info(_LI("Stale action plan '%s' synced and cancelled"),
LOG.info("Stale action plan '%s' synced and cancelled",
stale_action_plan.uuid)
def _find_stale_audit_templates_due_to_goal(self):
@@ -395,15 +394,15 @@ class Syncer(object):
invalid_ats = objects.AuditTemplate.list(self.ctx, filters=filters)
for at in invalid_ats:
LOG.warning(
_LW("Audit Template '%(audit_template)s' references a "
"goal that does not exist"), audit_template=at.uuid)
"Audit Template '%(audit_template)s' references a "
"goal that does not exist", audit_template=at.uuid)
stale_audits = objects.Audit.list(
self.ctx, filters=filters, eager=True)
for audit in stale_audits:
LOG.warning(
_LW("Audit '%(audit)s' references a "
"goal that does not exist"), audit=audit.uuid)
"Audit '%(audit)s' references a "
"goal that does not exist", audit=audit.uuid)
if audit.id not in self.stale_audits_map:
audit.state = objects.audit.State.CANCELLED
self.stale_audits_map[audit.id] = audit
@@ -422,8 +421,8 @@ class Syncer(object):
invalid_ats = objects.AuditTemplate.list(self.ctx, filters=filters)
for at in invalid_ats:
LOG.info(
_LI("Audit Template '%(audit_template)s' references a "
"strategy that does not exist"),
"Audit Template '%(audit_template)s' references a "
"strategy that does not exist",
audit_template=at.uuid)
# In this case we can reset the strategy ID to None
# so the audit template can still achieve the same goal
@@ -438,8 +437,8 @@ class Syncer(object):
self.ctx, filters=filters, eager=True)
for audit in stale_audits:
LOG.warning(
_LW("Audit '%(audit)s' references a "
"strategy that does not exist"), audit=audit.uuid)
"Audit '%(audit)s' references a "
"strategy that does not exist", audit=audit.uuid)
if audit.id not in self.stale_audits_map:
audit.state = objects.audit.State.CANCELLED
self.stale_audits_map[audit.id] = audit
@@ -451,8 +450,8 @@ class Syncer(object):
self.ctx, filters=filters, eager=True)
for action_plan in stale_action_plans:
LOG.warning(
_LW("Action Plan '%(action_plan)s' references a "
"strategy that does not exist"),
"Action Plan '%(action_plan)s' references a "
"strategy that does not exist",
action_plan=action_plan.uuid)
if action_plan.id not in self.stale_action_plans_map:
action_plan.state = objects.action_plan.State.CANCELLED
@@ -467,7 +466,7 @@ class Syncer(object):
se for se in self.available_scoringengines
if se.name not in self.discovered_map['scoringengines']]
for se in removed_se:
LOG.info(_LI("Scoring Engine %s removed"), se.name)
LOG.info("Scoring Engine %s removed", se.name)
se.soft_delete()
def _discover(self):
@@ -526,9 +525,9 @@ class Syncer(object):
for matching_goal in matching_goals:
if (matching_goal.efficacy_specification == goal_efficacy_spec and
matching_goal.display_name == goal_display_name):
LOG.info(_LI("Goal %s unchanged"), goal_name)
LOG.info("Goal %s unchanged", goal_name)
else:
LOG.info(_LI("Goal %s modified"), goal_name)
LOG.info("Goal %s modified", goal_name)
matching_goal.soft_delete()
stale_goals.append(matching_goal)
@@ -545,9 +544,9 @@ class Syncer(object):
matching_strategy.goal_id not in self.goal_mapping and
matching_strategy.parameters_spec ==
ast.literal_eval(parameters_spec)):
LOG.info(_LI("Strategy %s unchanged"), strategy_name)
LOG.info("Strategy %s unchanged", strategy_name)
else:
LOG.info(_LI("Strategy %s modified"), strategy_name)
LOG.info("Strategy %s modified", strategy_name)
matching_strategy.soft_delete()
stale_strategies.append(matching_strategy)
@@ -563,9 +562,9 @@ class Syncer(object):
for matching_scoringengine in matching_scoringengines:
if (matching_scoringengine.description == se_description and
matching_scoringengine.metainfo == se_metainfo):
LOG.info(_LI("Scoring Engine %s unchanged"), se_name)
LOG.info("Scoring Engine %s unchanged", se_name)
else:
LOG.info(_LI("Scoring Engine %s modified"), se_name)
LOG.info("Scoring Engine %s modified", se_name)
matching_scoringengine.soft_delete()
stale_scoringengines.append(matching_scoringengine)
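The sync.py hunks above drop the `_LI`/`_LW` translation markers but keep the arguments separate from the format string, so oslo.log, like stdlib logging, still interpolates lazily, only when the record is actually emitted. A stdlib-only sketch of that calling convention:

```python
import logging

records = []

class _Capture(logging.Handler):
    # collect formatted messages so the output can be checked
    def emit(self, record):
        records.append(record.getMessage())

LOG = logging.getLogger("sync_demo")
LOG.addHandler(_Capture())
LOG.setLevel(logging.INFO)

# message and arguments stay separate; no eager % interpolation
LOG.info("Goal %s created", "server_consolidation")
LOG.debug("Strategy %s modified", "basic")  # below level: never formatted

assert records == ["Goal server_consolidation created"]
```

This is why the conversions keep the `LOG.info("... %s", name)` form rather than collapsing to `LOG.info("... %s" % name)`: pre-formatting would defeat the lazy interpolation.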

View File

@@ -16,7 +16,6 @@ import os
import re
import pep8
import six
def flake8ext(f):
@@ -61,7 +60,7 @@ def _regex_for_level(level, hint):
log_translation_hint = re.compile(
'|'.join('(?:%s)' % _regex_for_level(level, hint)
for level, hint in six.iteritems(_all_log_levels)))
for level, hint in _all_log_levels.items()))
log_warn = re.compile(
r"(.)*LOG\.(warn)\(\s*('|\"|_)")
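The hacking check above now builds its pattern with `dict.items()` instead of `six.iteritems()`, which is equivalent on Python 3. A self-contained sketch of that regex construction (`_regex_for_level` here is an illustrative stand-in for the real helper, which the diff does not show):

```python
import re

_all_log_levels = {'error': '_LE', 'info': '_LI', 'warning': '_LW'}

def _regex_for_level(level, hint):
    # illustrative stand-in: flag LOG.<level>( calls wrapping <hint>(
    return r"LOG\.%s\(\s*%s\(" % (level, hint)

log_translation_hint = re.compile(
    '|'.join('(?:%s)' % _regex_for_level(level, hint)
             for level, hint in _all_log_levels.items()))

assert log_translation_hint.search("LOG.error(_LE('boom'))")
assert log_translation_hint.search("LOG.info( _LI('hi'))")
assert not log_translation_hint.search("LOG.error('boom')")
```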

View File

@@ -1,640 +0,0 @@
# French translations for python-watcher.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the python-watcher
# project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2015.
#
msgid ""
msgstr ""
"Project-Id-Version: python-watcher 0.21.1.dev32\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2016-02-09 09:07+0100\n"
"PO-Revision-Date: 2015-12-11 15:42+0100\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: fr\n"
"Language-Team: fr <LL@li.org>\n"
"Plural-Forms: nplurals=2; plural=(n > 1)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.1.1\n"
#: watcher/api/controllers/v1/action_plan.py:102
#, python-format
msgid "Invalid state: %(state)s"
msgstr "État invalide : %(state)s"
#: watcher/api/controllers/v1/action_plan.py:422
#, python-format
msgid "State transition not allowed: (%(initial_state)s -> %(new_state)s)"
msgstr "Transition d'état non autorisée : (%(initial_state)s -> %(new_state)s)"
#: watcher/api/controllers/v1/audit.py:359
msgid "The audit template UUID or name specified is invalid"
msgstr "Le nom ou UUID de l'audit template est invalide"
#: watcher/api/controllers/v1/types.py:148
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s n'est pas sérialisable en JSON"
#: watcher/api/controllers/v1/types.py:184
#, python-format
msgid "Wrong type. Expected '%(type)s', got '%(value)s'"
msgstr "Type incorrect. '%(type)s' attendu, '%(value)s' obtenu"
#: watcher/api/controllers/v1/types.py:223
#, python-format
msgid "'%s' is an internal attribute and can not be updated"
msgstr "'%s' est un attribut interne et ne peut pas être modifié"
#: watcher/api/controllers/v1/types.py:227
#, python-format
msgid "'%s' is a mandatory attribute and can not be removed"
msgstr "'%s' est un attribut obligatoire et ne peut pas être enlevé"
#: watcher/api/controllers/v1/types.py:232
msgid "'add' and 'replace' operations needs value"
msgstr "Les opérations 'add' et 'replace' recquièrent une valeur"
#: watcher/api/controllers/v1/utils.py:36
msgid "Limit must be positive"
msgstr "Limit doit être positif"
#: watcher/api/controllers/v1/utils.py:47
#, python-format
msgid "Invalid sort direction: %s. Acceptable values are 'asc' or 'desc'"
msgstr "Ordre de tri invalide : %s. Les valeurs acceptées sont 'asc' or 'desc'"
#: watcher/api/controllers/v1/utils.py:57
#, python-format
msgid "Adding a new attribute (%s) to the root of the resource is not allowed"
msgstr ""
#: watcher/api/middleware/auth_token.py:45
msgid "Cannot compile public API routes"
msgstr "Ne peut pas compiler les chemins d'API publique"
#: watcher/api/middleware/parsable_error.py:52
#, python-format
msgid "ErrorDocumentMiddleware received an invalid status %s"
msgstr ""
#: watcher/api/middleware/parsable_error.py:79
#, python-format
msgid "Error parsing HTTP response: %s"
msgstr ""
#: watcher/applier/actions/change_nova_service_state.py:69
msgid "The target state is not defined"
msgstr ""
#: watcher/applier/actions/migration.py:43
msgid "The parameter resource_id is invalid."
msgstr "Le paramètre resource_id est invalide"
#: watcher/applier/actions/migration.py:86
#, python-format
msgid "Migration of type %(migration_type)s is not supported."
msgstr ""
#: watcher/applier/workflow_engine/default.py:128
#, python-format
msgid "The WorkFlow Engine has failed to execute the action %s"
msgstr "Le moteur de workflow a echoué lors de l'éxécution de l'action %s"
#: watcher/applier/workflow_engine/default.py:146
#, python-format
msgid "Revert action %s"
msgstr "Annulation de l'action %s"
#: watcher/applier/workflow_engine/default.py:152
msgid "Oops! We need disaster recover plan"
msgstr "Oops! Nous avons besoin d'un plan de reprise d'activité"
#: watcher/cmd/api.py:46 watcher/cmd/applier.py:39
#: watcher/cmd/decisionengine.py:40
#, python-format
msgid "Starting server in PID %s"
msgstr "Démarre le serveur avec pour PID %s"
#: watcher/cmd/api.py:51
#, python-format
msgid "serving on 0.0.0.0:%(port)s, view at http://127.0.0.1:%(port)s"
msgstr "Sert sur 0.0.0.0:%(port)s, accessible à http://127.0.0.1:%(port)s"
#: watcher/cmd/api.py:55
#, python-format
msgid "serving on http://%(host)s:%(port)s"
msgstr "Sert sur http://%(host)s:%(port)s"
#: watcher/common/clients.py:29
msgid "Version of Nova API to use in novaclient."
msgstr ""
#: watcher/common/clients.py:34
msgid "Version of Glance API to use in glanceclient."
msgstr ""
#: watcher/common/clients.py:39
msgid "Version of Cinder API to use in cinderclient."
msgstr ""
#: watcher/common/clients.py:44
msgid "Version of Ceilometer API to use in ceilometerclient."
msgstr ""
#: watcher/common/clients.py:50
msgid "Version of Neutron API to use in neutronclient."
msgstr ""
#: watcher/common/exception.py:59
#, python-format
msgid "Unexpected keystone client error occurred: %s"
msgstr ""
#: watcher/common/exception.py:72
msgid "An unknown exception occurred"
msgstr ""
#: watcher/common/exception.py:92
msgid "Exception in string format operation"
msgstr ""
#: watcher/common/exception.py:122
msgid "Not authorized"
msgstr ""
#: watcher/common/exception.py:127
msgid "Operation not permitted"
msgstr ""
#: watcher/common/exception.py:131
msgid "Unacceptable parameters"
msgstr ""
#: watcher/common/exception.py:136
#, python-format
msgid "The %(name)s %(id)s could not be found"
msgstr ""
#: watcher/common/exception.py:140
#, fuzzy
msgid "Conflict"
msgstr "Conflit"
#: watcher/common/exception.py:145
#, python-format
msgid "The %(name)s resource %(id)s could not be found"
msgstr "La ressource %(name)s / %(id)s est introuvable"
#: watcher/common/exception.py:150
#, python-format
msgid "Expected an uuid or int but received %(identity)s"
msgstr ""
#: watcher/common/exception.py:154
#, python-format
msgid "Goal %(goal)s is not defined in Watcher configuration file"
msgstr ""
#: watcher/common/exception.py:158
#, python-format
msgid "Expected a uuid but received %(uuid)s"
msgstr ""
#: watcher/common/exception.py:162
#, python-format
msgid "Expected a logical name but received %(name)s"
msgstr ""
#: watcher/common/exception.py:166
#, python-format
msgid "Expected a logical name or uuid but received %(name)s"
msgstr ""
#: watcher/common/exception.py:170
#, python-format
msgid "AuditTemplate %(audit_template)s could not be found"
msgstr ""
#: watcher/common/exception.py:174
#, python-format
msgid "An audit_template with UUID %(uuid)s or name %(name)s already exists"
msgstr ""
#: watcher/common/exception.py:179
#, python-format
msgid "AuditTemplate %(audit_template)s is referenced by one or multiple audit"
msgstr ""
#: watcher/common/exception.py:184
#, python-format
msgid "Audit %(audit)s could not be found"
msgstr ""
#: watcher/common/exception.py:188
#, python-format
msgid "An audit with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:192
#, python-format
msgid "Audit %(audit)s is referenced by one or multiple action plans"
msgstr ""
#: watcher/common/exception.py:197
#, python-format
msgid "ActionPlan %(action_plan)s could not be found"
msgstr ""
#: watcher/common/exception.py:201
#, python-format
msgid "An action plan with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:205
#, python-format
msgid "Action Plan %(action_plan)s is referenced by one or multiple actions"
msgstr ""
#: watcher/common/exception.py:210
#, python-format
msgid "Action %(action)s could not be found"
msgstr ""
#: watcher/common/exception.py:214
#, python-format
msgid "An action with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:218
#, python-format
msgid "Action plan %(action_plan)s is referenced by one or multiple goals"
msgstr ""
#: watcher/common/exception.py:223
msgid "Filtering actions on both audit and action-plan is prohibited"
msgstr ""
#: watcher/common/exception.py:232
#, python-format
msgid "Couldn't apply patch '%(patch)s'. Reason: %(reason)s"
msgstr ""
#: watcher/common/exception.py:239
msgid "Illegal argument"
msgstr ""
#: watcher/common/exception.py:243
msgid "No such metric"
msgstr ""
#: watcher/common/exception.py:247
msgid "No rows were returned"
msgstr ""
#: watcher/common/exception.py:251
#, python-format
msgid "%(client)s connection failed. Reason: %(reason)s"
msgstr ""
#: watcher/common/exception.py:255
msgid "'Keystone API endpoint is missing''"
msgstr ""
#: watcher/common/exception.py:259
msgid "The list of hypervisor(s) in the cluster is empty"
msgstr ""
#: watcher/common/exception.py:263
msgid "The metrics resource collector is not defined"
msgstr ""
#: watcher/common/exception.py:267
msgid "the cluster state is not defined"
msgstr ""
#: watcher/common/exception.py:273
#, python-format
msgid "The instance '%(name)s' is not found"
msgstr "L'instance '%(name)s' n'a pas été trouvée"
#: watcher/common/exception.py:277
msgid "The hypervisor is not found"
msgstr ""
#: watcher/common/exception.py:281
#, fuzzy, python-format
msgid "Error loading plugin '%(name)s'"
msgstr "Erreur lors du chargement du module '%(name)s'"
#: watcher/common/exception.py:285
#, fuzzy, python-format
msgid "The identifier '%(name)s' is a reserved word"
msgstr ""
#: watcher/common/service.py:83
#, python-format
msgid "Created RPC server for service %(service)s on host %(host)s."
msgstr ""
#: watcher/common/service.py:92
#, python-format
msgid "Service error occurred when stopping the RPC server. Error: %s"
msgstr ""
#: watcher/common/service.py:97
#, python-format
msgid "Service error occurred when cleaning up the RPC manager. Error: %s"
msgstr ""
#: watcher/common/service.py:101
#, python-format
msgid "Stopped RPC server for service %(service)s on host %(host)s."
msgstr ""
#: watcher/common/service.py:106
#, python-format
msgid ""
"Got signal SIGUSR1. Not deregistering on next shutdown of service "
"%(service)s on host %(host)s."
msgstr ""
#: watcher/common/utils.py:53
#, python-format
msgid ""
"Failed to remove trailing character. Returning original object.Supplied "
"object is not a string: %s,"
msgstr ""
#: watcher/common/messaging/messaging_handler.py:98
msgid "No endpoint defined; can only publish events"
msgstr ""
#: watcher/common/messaging/messaging_handler.py:101
msgid "Messaging configuration error"
msgstr ""
#: watcher/db/sqlalchemy/api.py:256
msgid ""
"Multiple audit templates exist with the same name. Please use the audit "
"template uuid instead"
msgstr ""
#: watcher/db/sqlalchemy/api.py:278
msgid "Cannot overwrite UUID for an existing Audit Template."
msgstr ""
#: watcher/db/sqlalchemy/api.py:388
msgid "Cannot overwrite UUID for an existing Audit."
msgstr ""
#: watcher/db/sqlalchemy/api.py:480
msgid "Cannot overwrite UUID for an existing Action."
msgstr ""
#: watcher/db/sqlalchemy/api.py:590
msgid "Cannot overwrite UUID for an existing Action Plan."
msgstr ""
#: watcher/db/sqlalchemy/migration.py:73
msgid ""
"Watcher database schema is already under version control; use upgrade() "
"instead"
msgstr ""
#: watcher/decision_engine/model/model_root.py:37
#: watcher/decision_engine/model/model_root.py:42
msgid "'obj' argument type is not valid"
msgstr ""
#: watcher/decision_engine/planner/default.py:72
msgid "The action plan is empty"
msgstr ""
#: watcher/decision_engine/strategy/selection/default.py:60
#, python-format
msgid "Incorrect mapping: could not find associated strategy for '%s'"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:269
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:316
#, python-format
msgid "No values returned by %(resource_id)s for %(metric_name)s"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:426
msgid "Initializing Server Consolidation"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:470
msgid "The workloads of the compute nodes of the cluster is zero"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:127
#, python-format
msgid "%s: no outlet temp data"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:151
#, python-format
msgid "VM not active, skipped: %s"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:208
msgid "No hosts under outlet temp threshold found"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:231
msgid "No proper target host could be found"
msgstr ""
#: watcher/objects/base.py:70
#, python-format
msgid "Error setting %(attr)s"
msgstr ""
#: watcher/objects/base.py:108
msgid "Invalid version string"
msgstr ""
#: watcher/objects/base.py:172
#, python-format
msgid "Unable to instantiate unregistered object type %(objtype)s"
msgstr ""
#: watcher/objects/base.py:299
#, python-format
msgid "Cannot load '%(attrname)s' in the base class"
msgstr ""
#: watcher/objects/base.py:308
msgid "Cannot save anything in the base class"
msgstr ""
#: watcher/objects/base.py:340
#, python-format
msgid "%(objname)s object has no attribute '%(attrname)s'"
msgstr ""
#: watcher/objects/base.py:390
#, python-format
msgid "'%(objclass)s' object has no attribute '%(attrname)s'"
msgstr ""
#: watcher/objects/utils.py:40
msgid "A datetime.datetime is required here"
msgstr ""
#: watcher/objects/utils.py:105
#, python-format
msgid "An object of class %s is required here"
msgstr ""
#~ msgid "Cannot compile public API routes: %s"
#~ msgstr ""
#~ msgid "An exception occurred without a description."
#~ msgstr ""
#~ msgid "no rows were returned"
#~ msgstr ""
#~ msgid ""
#~ msgstr ""
#~ msgid "An unknown exception occurred."
#~ msgstr ""
#~ msgid "Not authorized."
#~ msgstr ""
#~ msgid "Operation not permitted."
#~ msgstr ""
#~ msgid "Unacceptable parameters."
#~ msgstr ""
#~ msgid "The %(name)s %(id)s could not be found."
#~ msgstr ""
#~ msgid "The %(name)s resource %(id)s could not be found."
#~ msgstr ""
#~ msgid "Expected an uuid or int but received %(identity)s."
#~ msgstr ""
#~ msgid "Goal %(goal)s is not defined in Watcher configuration file."
#~ msgstr ""
#~ msgid "Expected a uuid but received %(uuid)s."
#~ msgstr ""
#~ msgid "Expected a logical name but received %(name)s."
#~ msgstr ""
#~ msgid "Expected a logical name or uuid but received %(name)s."
#~ msgstr ""
#~ msgid "AuditTemplate %(audit_template)s could not be found."
#~ msgstr ""
#~ msgid "An audit_template with UUID %(uuid)s or name %(name)s already exists."
#~ msgstr ""
#~ msgid "Audit %(audit)s could not be found."
#~ msgstr ""
#~ msgid "An audit with UUID %(uuid)s already exists."
#~ msgstr ""
#~ msgid "Audit %(audit)s is referenced by one or multiple action plans."
#~ msgstr ""
#~ msgid "ActionPlan %(action plan)s could not be found."
#~ msgstr ""
#~ msgid "An action plan with UUID %(uuid)s already exists."
#~ msgstr ""
#~ msgid "Action Plan %(action_plan)s is referenced by one or multiple actions."
#~ msgstr ""
#~ msgid "Action %(action)s could not be found."
#~ msgstr ""
#~ msgid "An action with UUID %(uuid)s already exists."
#~ msgstr ""
#~ msgid "Action plan %(action_plan)s is referenced by one or multiple goals."
#~ msgstr ""
#~ msgid "Filtering actions on both audit and action-plan is prohibited."
#~ msgstr ""
#~ msgid "The list of hypervisor(s) in the cluster is empty.'"
#~ msgstr ""
#~ msgid "The metrics resource collector is not defined.'"
#~ msgstr ""
#~ msgid "The VM could not be found."
#~ msgstr ""
#~ msgid "The hypervisor could not be found."
#~ msgstr ""
#~ msgid "The Meta-Action could not be found."
#~ msgstr ""
#~ msgid "'hypervisor' argument type is not valid"
#~ msgstr ""
#~ msgid "'vm' argument type is not valid"
#~ msgstr ""
#~ msgid "The Meta-Action could not be found"
#~ msgstr ""
#~ msgid "The VM could not be found"
#~ msgstr ""
#~ msgid "The hypervisor could not be found"
#~ msgstr ""
#~ msgid "Trigger a rollback"
#~ msgstr ""
#~ msgid "The WorkFlow Engine has failedto execute the action %s"
#~ msgstr ""
#~ msgid "ActionPlan %(action plan)s could not be found"
#~ msgstr ""
#~ msgid "Description must be an instance of str"
#~ msgstr ""
#~ msgid "An exception occurred without a description"
#~ msgstr ""
#~ msgid "Description cannot be empty"
#~ msgstr ""
#~ msgid "The hypervisor state is invalid."
#~ msgstr "L'état de l'hyperviseur est invalide."
#~ msgid "%(err)s"
#~ msgstr "%(err)s"
#~ msgid "No Keystone service catalog loaded"
#~ msgstr ""
#~ msgid "Cannot overwrite UUID for an existing AuditTemplate."
#~ msgstr ""
#~ msgid ""
#~ "This identifier is reserved word and "
#~ "cannot be used as variables '%(name)s'"
#~ msgstr ""
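The msgid/msgstr pairs above are resolved at runtime through gettext; a minimal sketch of that lookup, where the domain name and locale directory are illustrative rather than taken from Watcher's actual configuration:

```python
import gettext

# Hedged sketch: how a French catalogue like the one above would be
# looked up at runtime. 'watcher' and 'locale' are assumed names; with
# fallback=True, a missing compiled .mo file yields the msgid unchanged.
translation = gettext.translation(
    'watcher', localedir='locale', languages=['fr'], fallback=True)
_ = translation.gettext

message = _("Revert action %s") % "abc"
```

With no compiled catalogue present, `message` is simply the English msgid with the placeholder filled in.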

View File

@@ -20,6 +20,7 @@
# need to be changed after we moved these function inside the package
# Todo(gibi): remove these imports after legacy notifications using these are
# transformed to versioned notifications
from watcher.notifications import action # noqa
from watcher.notifications import action_plan # noqa
from watcher.notifications import audit # noqa
from watcher.notifications import exception # noqa

View File

@@ -0,0 +1,302 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 Servionica
#
# Authors: Alexander Chadin <a.chadin@servionica.ru>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from watcher.common import context as wcontext
from watcher.common import exception
from watcher.notifications import action_plan as ap_notifications
from watcher.notifications import base as notificationbase
from watcher.notifications import exception as exception_notifications
from watcher import objects
from watcher.objects import base
from watcher.objects import fields as wfields
CONF = cfg.CONF
@base.WatcherObjectRegistry.register_notification
class ActionPayload(notificationbase.NotificationPayloadBase):
SCHEMA = {
'uuid': ('action', 'uuid'),
'action_type': ('action', 'action_type'),
'input_parameters': ('action', 'input_parameters'),
'state': ('action', 'state'),
'parents': ('action', 'parents'),
'created_at': ('action', 'created_at'),
'updated_at': ('action', 'updated_at'),
'deleted_at': ('action', 'deleted_at'),
}
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'uuid': wfields.UUIDField(),
'action_type': wfields.StringField(nullable=False),
'input_parameters': wfields.DictField(nullable=False, default={}),
'state': wfields.StringField(nullable=False),
'parents': wfields.ListOfUUIDsField(nullable=False, default=[]),
'action_plan_uuid': wfields.UUIDField(),
'action_plan': wfields.ObjectField('TerseActionPlanPayload'),
'created_at': wfields.DateTimeField(nullable=True),
'updated_at': wfields.DateTimeField(nullable=True),
'deleted_at': wfields.DateTimeField(nullable=True),
}
def __init__(self, action, **kwargs):
super(ActionPayload, self).__init__(**kwargs)
self.populate_schema(action=action)
@base.WatcherObjectRegistry.register_notification
class ActionStateUpdatePayload(notificationbase.NotificationPayloadBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'old_state': wfields.StringField(nullable=True),
'state': wfields.StringField(nullable=True),
}
@base.WatcherObjectRegistry.register_notification
class ActionCreatePayload(ActionPayload):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {}
def __init__(self, action, action_plan):
super(ActionCreatePayload, self).__init__(
action=action,
action_plan=action_plan)
@base.WatcherObjectRegistry.register_notification
class ActionUpdatePayload(ActionPayload):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'state_update': wfields.ObjectField('ActionStateUpdatePayload'),
}
def __init__(self, action, state_update, action_plan):
super(ActionUpdatePayload, self).__init__(
action=action,
state_update=state_update,
action_plan=action_plan)
@base.WatcherObjectRegistry.register_notification
class ActionExecutionPayload(ActionPayload):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'fault': wfields.ObjectField('ExceptionPayload', nullable=True),
}
def __init__(self, action, action_plan, **kwargs):
super(ActionExecutionPayload, self).__init__(
action=action,
action_plan=action_plan,
**kwargs)
@base.WatcherObjectRegistry.register_notification
class ActionDeletePayload(ActionPayload):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {}
def __init__(self, action, action_plan):
super(ActionDeletePayload, self).__init__(
action=action,
action_plan=action_plan)
@notificationbase.notification_sample('action-execution-error.json')
@notificationbase.notification_sample('action-execution-end.json')
@notificationbase.notification_sample('action-execution-start.json')
@base.WatcherObjectRegistry.register_notification
class ActionExecutionNotification(notificationbase.NotificationBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'payload': wfields.ObjectField('ActionExecutionPayload')
}
@notificationbase.notification_sample('action-create.json')
@base.WatcherObjectRegistry.register_notification
class ActionCreateNotification(notificationbase.NotificationBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'payload': wfields.ObjectField('ActionCreatePayload')
}
@notificationbase.notification_sample('action-update.json')
@base.WatcherObjectRegistry.register_notification
class ActionUpdateNotification(notificationbase.NotificationBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'payload': wfields.ObjectField('ActionUpdatePayload')
}
@notificationbase.notification_sample('action-delete.json')
@base.WatcherObjectRegistry.register_notification
class ActionDeleteNotification(notificationbase.NotificationBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'payload': wfields.ObjectField('ActionDeletePayload')
}
def _get_action_plan_payload(action):
action_plan = None
strategy_uuid = None
audit = None
try:
action_plan = action.action_plan
audit = objects.Audit.get(wcontext.make_context(show_deleted=True),
action_plan.audit_id)
if audit.strategy_id:
strategy_uuid = objects.Strategy.get(
wcontext.make_context(show_deleted=True),
audit.strategy_id).uuid
except NotImplementedError:
raise exception.EagerlyLoadedActionRequired(action=action.uuid)
action_plan_payload = ap_notifications.TerseActionPlanPayload(
action_plan=action_plan,
audit_uuid=audit.uuid, strategy_uuid=strategy_uuid)
return action_plan_payload
def send_create(context, action, service='infra-optim', host=None):
"""Emit an action.create notification."""
action_plan_payload = _get_action_plan_payload(action)
versioned_payload = ActionCreatePayload(
action=action,
action_plan=action_plan_payload,
)
notification = ActionCreateNotification(
priority=wfields.NotificationPriority.INFO,
event_type=notificationbase.EventType(
object='action',
action=wfields.NotificationAction.CREATE),
publisher=notificationbase.NotificationPublisher(
host=host or CONF.host,
binary=service),
payload=versioned_payload)
notification.emit(context)
def send_update(context, action, service='infra-optim',
host=None, old_state=None):
"""Emit an action.update notification."""
action_plan_payload = _get_action_plan_payload(action)
state_update = ActionStateUpdatePayload(
old_state=old_state,
state=action.state if old_state else None)
versioned_payload = ActionUpdatePayload(
action=action,
state_update=state_update,
action_plan=action_plan_payload,
)
notification = ActionUpdateNotification(
priority=wfields.NotificationPriority.INFO,
event_type=notificationbase.EventType(
object='action',
action=wfields.NotificationAction.UPDATE),
publisher=notificationbase.NotificationPublisher(
host=host or CONF.host,
binary=service),
payload=versioned_payload)
notification.emit(context)
def send_delete(context, action, service='infra-optim', host=None):
"""Emit an action.delete notification."""
action_plan_payload = _get_action_plan_payload(action)
versioned_payload = ActionDeletePayload(
action=action,
action_plan=action_plan_payload,
)
notification = ActionDeleteNotification(
priority=wfields.NotificationPriority.INFO,
event_type=notificationbase.EventType(
object='action',
action=wfields.NotificationAction.DELETE),
publisher=notificationbase.NotificationPublisher(
host=host or CONF.host,
binary=service),
payload=versioned_payload)
notification.emit(context)
def send_execution_notification(context, action, notification_action, phase,
priority=wfields.NotificationPriority.INFO,
service='infra-optim', host=None):
"""Emit an action execution notification."""
action_plan_payload = _get_action_plan_payload(action)
fault = None
if phase == wfields.NotificationPhase.ERROR:
fault = exception_notifications.ExceptionPayload.from_exception()
versioned_payload = ActionExecutionPayload(
action=action,
action_plan=action_plan_payload,
fault=fault,
)
notification = ActionExecutionNotification(
priority=priority,
event_type=notificationbase.EventType(
object='action',
action=notification_action,
phase=phase),
publisher=notificationbase.NotificationPublisher(
host=host or CONF.host,
binary=service),
payload=versioned_payload)
notification.emit(context)

View File

@@ -32,14 +32,12 @@ CONF = cfg.CONF
@base.WatcherObjectRegistry.register_notification
class ActionPlanPayload(notificationbase.NotificationPayloadBase):
class TerseActionPlanPayload(notificationbase.NotificationPayloadBase):
SCHEMA = {
'uuid': ('action_plan', 'uuid'),
'state': ('action_plan', 'state'),
'global_efficacy': ('action_plan', 'global_efficacy'),
'audit_uuid': ('audit', 'uuid'),
'strategy_uuid': ('strategy', 'uuid'),
'created_at': ('action_plan', 'created_at'),
'updated_at': ('action_plan', 'updated_at'),
@@ -54,20 +52,50 @@ class ActionPlanPayload(notificationbase.NotificationPayloadBase):
'state': wfields.StringField(),
'global_efficacy': wfields.FlexibleDictField(nullable=True),
'audit_uuid': wfields.UUIDField(),
'strategy_uuid': wfields.UUIDField(),
'audit': wfields.ObjectField('TerseAuditPayload'),
'strategy': wfields.ObjectField('StrategyPayload'),
'strategy_uuid': wfields.UUIDField(nullable=True),
'created_at': wfields.DateTimeField(nullable=True),
'updated_at': wfields.DateTimeField(nullable=True),
'deleted_at': wfields.DateTimeField(nullable=True),
}
def __init__(self, action_plan, audit=None, strategy=None, **kwargs):
super(TerseActionPlanPayload, self).__init__(audit=audit,
strategy=strategy,
**kwargs)
self.populate_schema(action_plan=action_plan)
@base.WatcherObjectRegistry.register_notification
class ActionPlanPayload(TerseActionPlanPayload):
SCHEMA = {
'uuid': ('action_plan', 'uuid'),
'state': ('action_plan', 'state'),
'global_efficacy': ('action_plan', 'global_efficacy'),
'created_at': ('action_plan', 'created_at'),
'updated_at': ('action_plan', 'updated_at'),
'deleted_at': ('action_plan', 'deleted_at'),
}
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'audit': wfields.ObjectField('TerseAuditPayload'),
'strategy': wfields.ObjectField('StrategyPayload'),
}
def __init__(self, action_plan, audit, strategy, **kwargs):
if not kwargs.get('audit_uuid'):
kwargs['audit_uuid'] = audit.uuid
if strategy and not kwargs.get('strategy_uuid'):
kwargs['strategy_uuid'] = strategy.uuid
super(ActionPlanPayload, self).__init__(
audit=audit, strategy=strategy, **kwargs)
self.populate_schema(
action_plan=action_plan, audit=audit, strategy=strategy)
action_plan, audit=audit, strategy=strategy, **kwargs)
@base.WatcherObjectRegistry.register_notification

View File

@@ -198,7 +198,7 @@ class NotificationBase(NotificationObject):
def notification_sample(sample):
"""Provide a notification sample of the decatorated notification.
"""Provide a notification sample of the decorated notification.
Class decorator to attach the notification sample information
to the notification object for documentation generation purposes.

View File

@@ -17,6 +17,7 @@
from watcher.common import exception
from watcher.common import utils
from watcher.db import api as db_api
from watcher import notifications
from watcher import objects
from watcher.objects import base
from watcher.objects import fields as wfields
@@ -134,6 +135,8 @@ class Action(base.WatcherPersistentObject, base.WatcherObject,
# notifications containing information about the related relationships
self._from_db_object(self, db_action, eager=True)
notifications.action.send_create(self.obj_context, self)
def destroy(self):
"""Delete the Action from the DB"""
self.dbapi.destroy_action(self.uuid)
@@ -150,6 +153,7 @@ class Action(base.WatcherPersistentObject, base.WatcherObject,
db_obj = self.dbapi.update_action(self.uuid, updates)
obj = self._from_db_object(self, db_obj, eager=False)
self.obj_refresh(obj)
notifications.action.send_update(self.obj_context, self)
self.obj_reset_changes()
@base.remotable
@@ -173,3 +177,5 @@ class Action(base.WatcherPersistentObject, base.WatcherObject,
obj = self._from_db_object(
self.__class__(self._context), db_obj, eager=False)
self.obj_refresh(obj)
notifications.action.send_delete(self.obj_context, self)

View File

@@ -67,16 +67,23 @@ state may be one of the following:
- **CANCELLED** : the :ref:`Action Plan <action_plan_definition>` was in
**PENDING** or **ONGOING** state and was cancelled by the
:ref:`Administrator <administrator_definition>`
- **SUPERSEDED** : the :ref:`Action Plan <action_plan_definition>` was in
**RECOMMENDED** state and was superseded by the
:ref:`Administrator <administrator_definition>`
"""
import datetime
from watcher.common import exception
from watcher.common import utils
from watcher import conf
from watcher.db import api as db_api
from watcher import notifications
from watcher import objects
from watcher.objects import base
from watcher.objects import fields as wfields
CONF = conf.CONF
class State(object):
RECOMMENDED = 'RECOMMENDED'
@@ -289,7 +296,8 @@ class ActionPlan(base.WatcherPersistentObject, base.WatcherObject,
"""Soft Delete the Action plan from the DB"""
related_actions = objects.Action.list(
context=self._context,
filters={"action_plan_uuid": self.uuid})
filters={"action_plan_uuid": self.uuid},
eager=True)
# Cascade soft_delete of related actions
for related_action in related_actions:
@@ -314,3 +322,18 @@ class ActionPlan(base.WatcherPersistentObject, base.WatcherObject,
notifications.action_plan.send_delete(self._context, self)
_notify()
class StateManager(object):
def check_expired(self, context):
action_plan_expiry = (
CONF.watcher_decision_engine.action_plan_expiry)
date_created = datetime.datetime.utcnow() - datetime.timedelta(
hours=action_plan_expiry)
filters = {'state__eq': State.RECOMMENDED,
'created_at__lt': date_created}
action_plans = objects.ActionPlan.list(
context, filters=filters, eager=True)
for action_plan in action_plans:
action_plan.state = State.SUPERSEDED
action_plan.save()
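The `StateManager.check_expired` hunk above supersedes any action plan left in **RECOMMENDED** state for longer than the configured `action_plan_expiry` window (in hours). The cutoff computation can be sketched standalone; the default of 24 hours and the function name here are assumptions for illustration:

```python
import datetime

# Hedged sketch of the expiry check: a RECOMMENDED action plan whose
# created_at predates (now - expiry window) is considered stale and
# would be moved to SUPERSEDED. 24 is an assumed default.
ACTION_PLAN_EXPIRY_HOURS = 24

def is_expired(created_at, now=None, expiry_hours=ACTION_PLAN_EXPIRY_HOURS):
    """Return True if a plan created at `created_at` has outlived the window."""
    now = now or datetime.datetime.utcnow()
    cutoff = now - datetime.timedelta(hours=expiry_hours)
    return created_at < cutoff

now = datetime.datetime(2017, 4, 12, 12, 0)
fresh = datetime.datetime(2017, 4, 12, 0, 0)   # 12 hours old
stale = datetime.datetime(2017, 4, 10, 12, 0)  # 48 hours old
```

This mirrors the `created_at__lt` filter in the hunk, which pushes the comparison down to the database instead of evaluating it in Python.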

View File

@@ -46,6 +46,9 @@ be one of the following:
- **CANCELLED** : the :ref:`Audit <audit_definition>` was in **PENDING** or
**ONGOING** state and was cancelled by the
:ref:`Administrator <administrator_definition>`
- **SUSPENDED** : the :ref:`Audit <audit_definition>` was in **ONGOING**
state and was suspended by the
:ref:`Administrator <administrator_definition>`
"""
import enum
@@ -66,6 +69,7 @@ class State(object):
CANCELLED = 'CANCELLED'
DELETED = 'DELETED'
PENDING = 'PENDING'
SUSPENDED = 'SUSPENDED'
class AuditType(enum.Enum):
@@ -296,3 +300,25 @@ class Audit(base.WatcherPersistentObject, base.WatcherObject,
notifications.audit.send_delete(self._context, self)
_notify()
class AuditStateTransitionManager(object):
TRANSITIONS = {
State.PENDING: [State.ONGOING, State.CANCELLED],
State.ONGOING: [State.FAILED, State.SUCCEEDED,
State.CANCELLED, State.SUSPENDED],
State.FAILED: [State.DELETED],
State.SUCCEEDED: [State.DELETED],
State.CANCELLED: [State.DELETED],
State.SUSPENDED: [State.ONGOING, State.DELETED],
}
INACTIVE_STATES = (State.CANCELLED, State.DELETED,
State.FAILED, State.SUSPENDED)
def check_transition(self, initial, new):
return new in self.TRANSITIONS.get(initial, [])
def is_inactive(self, audit):
return audit.state in self.INACTIVE_STATES
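The `AuditStateTransitionManager` above encodes the legal state machine as a plain dict: a transition is allowed only if the target state appears in the list keyed by the current state. A self-contained sketch of the same table, showing how the new SUSPENDED state can resume to ONGOING:

```python
# Hedged sketch mirroring the transition table from the patch above.
TRANSITIONS = {
    'PENDING': ['ONGOING', 'CANCELLED'],
    'ONGOING': ['FAILED', 'SUCCEEDED', 'CANCELLED', 'SUSPENDED'],
    'FAILED': ['DELETED'],
    'SUCCEEDED': ['DELETED'],
    'CANCELLED': ['DELETED'],
    'SUSPENDED': ['ONGOING', 'DELETED'],
}

def check_transition(initial, new):
    # Unknown initial states map to an empty list, so nothing is allowed.
    return new in TRANSITIONS.get(initial, [])
```

Suspending an ongoing audit and later resuming it are both legal, while reviving a finished audit is not.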

View File

@@ -17,6 +17,7 @@
import ast
import six
from oslo_serialization import jsonutils
from oslo_versionedobjects import fields
@@ -52,6 +53,10 @@ class DictField(fields.AutoTypedField):
AUTO_TYPE = fields.Dict(fields.FieldType())
class ListOfUUIDsField(fields.AutoTypedField):
AUTO_TYPE = fields.List(fields.UUID())
class FlexibleDict(fields.FieldType):
@staticmethod
def coerce(obj, attr, value):
@@ -92,8 +97,26 @@ class FlexibleListOfDictField(fields.AutoTypedField):
super(FlexibleListOfDictField, self)._null(obj, attr)
class Json(fields.FieldType):
def coerce(self, obj, attr, value):
if isinstance(value, six.string_types):
loaded = jsonutils.loads(value)
return loaded
return value
def from_primitive(self, obj, attr, value):
return self.coerce(obj, attr, value)
def to_primitive(self, obj, attr, value):
return jsonutils.dumps(value)
class JsonField(fields.AutoTypedField):
AUTO_TYPE = Json()
# ### Notification fields ### #
class BaseWatcherEnum(Enum):
ALL = ()

View File

@@ -11,6 +11,7 @@
# limitations under the License.
import datetime
import itertools
import mock
from oslo_config import cfg
@@ -267,7 +268,7 @@ class TestPatch(api_base.FunctionalTest):
test_time = datetime.datetime(2000, 1, 1, 0, 0)
mock_utcnow.return_value = test_time
new_state = objects.audit.State.SUCCEEDED
new_state = objects.audit.State.CANCELLED
response = self.get_json('/audits/%s' % self.audit.uuid)
self.assertNotEqual(new_state, response['state'])
@@ -343,6 +344,115 @@ class TestPatch(api_base.FunctionalTest):
self.assertTrue(response.json['error_message'])
ALLOWED_TRANSITIONS = [
{"original_state": key, "new_state": value}
for key, values in (
objects.audit.AuditStateTransitionManager.TRANSITIONS.items())
for value in values]
class TestPatchStateTransitionDenied(api_base.FunctionalTest):
STATES = [
ap_state for ap_state in objects.audit.State.__dict__
if not ap_state.startswith("_")
]
scenarios = [
(
"%s -> %s" % (original_state, new_state),
{"original_state": original_state,
"new_state": new_state},
)
for original_state, new_state
in list(itertools.product(STATES, STATES))
if original_state != new_state
and {"original_state": original_state,
"new_state": new_state} not in ALLOWED_TRANSITIONS
]
def setUp(self):
super(TestPatchStateTransitionDenied, self).setUp()
obj_utils.create_test_goal(self.context)
obj_utils.create_test_strategy(self.context)
obj_utils.create_test_audit_template(self.context)
self.audit = obj_utils.create_test_audit(self.context,
state=self.original_state)
p = mock.patch.object(db_api.BaseConnection, 'update_audit')
self.mock_audit_update = p.start()
self.mock_audit_update.side_effect = self._simulate_rpc_audit_update
self.addCleanup(p.stop)
def _simulate_rpc_audit_update(self, audit):
audit.save()
return audit
def test_replace_denied(self):
response = self.get_json('/audits/%s' % self.audit.uuid)
self.assertNotEqual(self.new_state, response['state'])
response = self.patch_json(
'/audits/%s' % self.audit.uuid,
[{'path': '/state', 'value': self.new_state,
'op': 'replace'}],
expect_errors=True)
self.assertEqual('application/json', response.content_type)
self.assertEqual(400, response.status_code)
self.assertTrue(response.json['error_message'])
response = self.get_json('/audits/%s' % self.audit.uuid)
self.assertEqual(self.original_state, response['state'])
class TestPatchStateTransitionOk(api_base.FunctionalTest):
scenarios = [
(
"%s -> %s" % (transition["original_state"],
transition["new_state"]),
transition
)
for transition in ALLOWED_TRANSITIONS
]
def setUp(self):
super(TestPatchStateTransitionOk, self).setUp()
obj_utils.create_test_goal(self.context)
obj_utils.create_test_strategy(self.context)
obj_utils.create_test_audit_template(self.context)
self.audit = obj_utils.create_test_audit(self.context,
state=self.original_state)
p = mock.patch.object(db_api.BaseConnection, 'update_audit')
self.mock_audit_update = p.start()
self.mock_audit_update.side_effect = self._simulate_rpc_audit_update
self.addCleanup(p.stop)
def _simulate_rpc_audit_update(self, audit):
audit.save()
return audit
@mock.patch('oslo_utils.timeutils.utcnow')
def test_replace_ok(self, mock_utcnow):
test_time = datetime.datetime(2000, 1, 1, 0, 0)
mock_utcnow.return_value = test_time
response = self.get_json('/audits/%s' % self.audit.uuid)
self.assertNotEqual(self.new_state, response['state'])
response = self.patch_json(
'/audits/%s' % self.audit.uuid,
[{'path': '/state', 'value': self.new_state,
'op': 'replace'}])
self.assertEqual('application/json', response.content_type)
self.assertEqual(200, response.status_code)
response = self.get_json('/audits/%s' % self.audit.uuid)
self.assertEqual(self.new_state, response['state'])
return_updated_at = timeutils.parse_isotime(
response['updated_at']).replace(tzinfo=None)
self.assertEqual(test_time, return_updated_at)
class TestPost(api_base.FunctionalTest):
def setUp(self):

Some files were not shown because too many files have changed in this diff