Compare commits


102 Commits

Author SHA1 Message Date
Jenkins
8a38c4f479 Merge "Add release notes for Newton blueprints" 2016-08-30 07:36:57 +00:00
Jenkins
107cc76cdb Merge "TrivialFix: Remove cfg import unused" 2016-08-29 14:44:40 +00:00
Alexandr Stavitskiy
6e8dc5297e Merge scoring base files
Merge scoring_engine.py and scoring_container.py to base.py

Change-Id: I5cada2c9f7832827c1bccfdea1b0a2138b18bfc9
Closes-Bug: #1617376
2016-08-29 14:59:47 +03:00
Antoine Cabot
7cce4b9ed4 Add release notes for Newton blueprints
Change-Id: Id643dae85b1df86796d27fa885aa7c7303d4f4d8
2016-08-29 12:56:02 +02:00
Cao Xuan Hoang
deb5cb3fc2 TrivialFix: Remove cfg import unused
This patch removes the unused cfg import in
watcher/api/controllers/v1/goal.py
watcher/api/controllers/v1/strategy.py
watcher/decision_engine/cluster/history/ceilometer.py
watcher/decision_engine/model/collector/manager.py
watcher/decision_engine/strategy/selection/default.py
watcher/tests/common/test_ceilometer_helper.py
watcher/tests/decision_engine/fake_strategies.py
watcher/tests/decision_engine/strategy/selector/test_strategy_selector.py

Change-Id: I0f30a056c3efa49faed857b6d1001a2367d384ac
2016-08-29 15:01:28 +07:00
Jenkins
3ed44383ab Merge "TrivialFix: Remove logging import unused" 2016-08-29 07:48:35 +00:00
Cao Xuan Hoang
720884cd55 TrivialFix: Remove logging import unused
This patch removes the unused logging import in
watcher/applier/manager.py
watcher/applier/rpcapi.py
watcher/decision_engine/goal/base.py
watcher/decision_engine/model/notification/base.py
watcher/decision_engine/model/notification/filtering.py

Change-Id: I0b967e4931223b3b7e9459fb1483ed8185a1a7a0
2016-08-29 12:46:02 +07:00
gecong1973
249cd11533 Remove unused LOG
Delete the unused LOG variable in some files

Change-Id: I38f2a91d2e6b24b14e1d46fd8e9a5f87ea2f3171
2016-08-29 10:03:33 +08:00
Jenkins
f42106ca34 Merge "Add unit tests for continuous.py" 2016-08-26 17:44:30 +00:00
Jenkins
359debb6b0 Merge "Update configuration section for notifications" 2016-08-26 17:22:29 +00:00
Jenkins
dbcab41fae Merge "Doc on how to add notification endpoints" 2016-08-26 17:22:22 +00:00
Jenkins
28ff52f8ba Merge "Notification and CDM partial update" 2016-08-26 17:01:23 +00:00
Jenkins
781eb64c6a Merge "Added start/end date params on ceilometer queries" 2016-08-26 16:29:33 +00:00
Jenkins
45a0bda1ae Merge "Fix loading of plugin configuration parameters" 2016-08-26 16:26:17 +00:00
Jenkins
190d5ae899 Merge "Remove unreachable line" 2016-08-26 16:16:44 +00:00
David TARDIVEL
e4ba59e130 Update configuration section for notifications
Watcher now consumes notifications sent by Nova services.
We have to configure Nova to publish its notifications into
the dedicated Watcher notification queue.

Change-Id: I29f2fa12dfe3a7ce0b014778109a08bbe78b4679
Partially-Implements: blueprint cluster-model-objects-wrapper
2016-08-26 15:57:54 +00:00
Vincent Françoise
f238167edc Doc on how to add notification endpoints
In this changeset, I updated the CDMC plugin documentation to explain
how to implement and register new notification endpoints.

Change-Id: Ib8c014e82051647edef5c1272f63429f76673227
Partially-Implements: blueprint cluster-model-objects-wrapper
2016-08-26 17:46:50 +02:00
Vincent Françoise
77b7fae41e Notification and CDM partial update
In this changeset, I implemented the notification handling (Rx only)
system for consuming incoming notifications, especially the Nova
ones. The notification handlers also contain the logic which
incrementally updates the Compute model.

Change-Id: Ia036a5a2be6caa64b7f180de38821b57c624300c
Partially-implements: blueprint cluster-model-objects-wrapper
2016-08-26 17:46:48 +02:00
Jenkins
8c23c08776 Merge "Check unspecified parameters create audit" 2016-08-26 14:53:17 +00:00
Viacheslav Samarin
103f541abd Remove unreachable line
This patch set removes unreachable line from nova_helper.py

Change-Id: I0befe271cc244b73fb9f4d79cc1d04b951b67135
Closes-Bug: #1617354
2016-08-26 17:43:59 +03:00
Jenkins
5fe6281e5c Merge "Correct watcher reraising of exception" 2016-08-26 14:16:45 +00:00
Vincent Françoise
c61793811e Added start/end date params on ceilometer queries
In this changeset, I added the start_time and end_time params
to the Ceilometer helper which can drastically reduce the execution
time of the queries.

Change-Id: I39cb3eef584acfca1b50ff6ec1b65f38750802d2
2016-08-26 14:16:19 +00:00
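The idea behind this change is that bounding the sampled window avoids scanning the whole metering history. As a rough, hypothetical sketch (not Watcher's actual helper), the Ceilometer API accepts a list of field/op/value filters, so time-bounded query construction looks like this:

```python
import datetime

def build_time_bounded_query(start_time=None, end_time=None):
    """Sketch of building Ceilometer query filters with timestamp bounds.

    Each filter is a dict with 'field', 'op' and 'value' keys; 'ge'/'le'
    narrow the sampled window so queries run much faster.
    """
    query = []
    if start_time is not None:
        query.append({"field": "timestamp", "op": "ge",
                      "value": start_time.isoformat()})
    if end_time is not None:
        query.append({"field": "timestamp", "op": "le",
                      "value": end_time.isoformat()})
    return query

end = datetime.datetime(2016, 8, 26, 12, 0, 0)
start = end - datetime.timedelta(minutes=10)
q = build_time_bounded_query(start, end)
```

When neither bound is given, the helper returns an empty filter list and the query falls back to the full history.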
gengchc2
ecea228c4c Correct watcher reraising of exception
When an exception is caught and re-thrown, 'raise' should be called
without any arguments, because a bare 'raise' reports the place where
the exception originally occurred instead of the place where
it was re-raised.

Change-Id: I662583cd3cda2424d5a510cae7e815c97a51c2fe
2016-08-26 12:52:13 +00:00
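This is the standard Python idiom: a bare `raise` inside an `except` block re-raises the active exception with its original traceback intact. A minimal illustration (hypothetical functions, not Watcher code):

```python
import traceback

def inner():
    raise ValueError("boom")  # original failure site

def rethrow():
    try:
        inner()
    except ValueError:
        # A bare 'raise' re-raises the active exception and keeps the
        # original traceback, so the report still points at inner().
        raise

try:
    rethrow()
except ValueError:
    tb = traceback.format_exc()
```

The formatted traceback still names `inner` as the failure site; re-raising with `raise exc` on Python 2 (or constructing a new exception) would lose that information.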
jinquanni
1fafcc5ef1 Check unspecified parameters create audit
Currently, creating an audit with unspecified parameters will succeed.
This is not reasonable; we should return a FAILED status to
notify the admin user.

Change-Id: Ifbcb3b8d9e736607b05b1eb408ec0f41bdf58a2f
Closes-Bug: #1599879
2016-08-26 19:18:20 +08:00
David TARDIVEL
32c13d00fe Fix loading of plugin configuration parameters
When we load a plugin, we need to reload the watcher
configuration data once, in order to include the plugin parameters in
the cfg.CONF data dict. To reload the conf, we just call self.conf().
But every time we call this method, cfg.CONF is cleaned, and we
lose the previously loaded parameters. This raised a
RequiredOptError exception and the plugin was not correctly loaded.

To fix it, we just have to pass the watcher configuration
filename as an argument to self.conf().

Change-Id: Ic2384b68f6d604640127fe06893d0f808aee34b7
Closes-Bug: #1617240
2016-08-26 12:08:45 +02:00
Tomasz Kaczynski
a1cb142009 Add Scoring Module implementation
This change adds the main logic for the scoring module,
defines entry points for the scoring engine plugins and provides
a watcher-sync tool to enable Watcher database synchronization
without needing to restart any Watcher service.

Partially-Implements: blueprint scoring-module
Change-Id: If10daae969ec27a7008af5173359992e957dcd5e
2016-08-26 07:13:39 +00:00
Jenkins
ab10201c72 Merge "Added strategy ID + Action Plan syncing" 2016-08-25 21:08:08 +00:00
Jenkins
42b45a5949 Merge "Fixed flaky tempest test" 2016-08-25 15:50:38 +00:00
licanwei
da67b246a5 Add unit tests for continuous.py
Add unit tests for the functions 'execute_audit'
and 'is_audit_inactive'

Change-Id: If9eb4d95166a486a59cc1535106944243bb7eb30
Closes-Bug: #1616759
2016-08-25 16:52:58 +08:00
OpenStack Proposal Bot
6d41c23e50 Updated from global requirements
Change-Id: I4501b494acc27f9471ea0ba1cf0151242ce17002
2016-08-25 05:06:56 +00:00
Vincent Françoise
7b228348a0 Fixed flaky tempest test
Fixed the flaky test_delete_audit test which was failing if the delete
query was sent before the audit reaches an 'idle' state.

Change-Id: I244643b1f7c9b31baa5c25753e62dd3da0a53544
2016-08-24 14:45:51 +02:00
Jenkins
eb421709d9 Merge "Updated from global requirements" 2016-08-24 11:55:02 +00:00
Andreas Jaeger
e741728eb8 Remove pot files
We do not store pot files in repositories anymore;
instead, they are published at
http://tarballs.openstack.org/translation-source/watcher/master/ after
each commit and thus always accurate.

Remove the outdated and obsolete file.

Change-Id: I24f18c0a62f2c5339479d07904fb2ce0a888c696
2016-08-24 08:40:53 +02:00
OpenStack Proposal Bot
35201c8358 Updated from global requirements
Change-Id: I430ed2e1a4e2c29e35f57c6362589eb0ed36465c
2016-08-24 01:40:02 +00:00
Vincent Françoise
6be758bc5a Added strategy ID + Action Plan syncing
In this changeset, I implemented the logic which cancels
any audit or action plan whose goal has been re-synced
(upon restarting the Decision Engine).

Partially Implements: blueprint efficacy-indicator

Change-Id: I95d2739eb552d4a7a02c822b11844591008f648e
2016-08-22 10:08:05 +02:00
Jenkins
5f205d5602 Merge "Fixes to get cluster data model" 2016-08-22 07:51:46 +00:00
Prashanth Hari
9450a7079b Fixes to get cluster data model
Due to the recent code refactor in the cluster data model,
the compute model is failing on the
'get_latest_cluster_data_model' attribute

Change-Id: Iba848db6d03cf1b682c4000ca48cf846b0ffa79b
Closes-Bug: #1614296
2016-08-19 15:44:09 -04:00
OpenStack Proposal Bot
64f45add5f Updated from global requirements
Change-Id: I1651d322933e085773210920de257572eb142089
2016-08-19 09:12:46 +00:00
Jenkins
3a804ae045 Merge "Fix double self._goal definition" 2016-08-18 17:00:13 +00:00
Jenkins
638fd557dc Merge "Scheduler of decision_engine fix" 2016-08-18 15:44:05 +00:00
Alexandr Stavitskiy
4e3593a71d Fix double self._goal definition
This patch set removes second self._goal definition.

Change-Id: I343b82e1584ab30129e6f15cc9c15cee07250497
Closes-Bug: #1614568
2016-08-18 17:53:09 +03:00
Viacheslav Samarin
01164b0790 Scheduler of decision_engine fix
This patch set renames 'OS-EXT-STS:instance_state' to 'OS-EXT-STS:vm_state'
so that the decision_engine scheduler works correctly.

Change-Id: I20805a079a991d5f3b8565f52d5f7280c2389bee
Closes-Bug: #1614511
2016-08-18 16:49:20 +03:00
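The extended server attribute exposed by the Nova API is `OS-EXT-STS:vm_state`; a lookup on the non-existent `OS-EXT-STS:instance_state` key always misses, which is why the scheduler silently filtered out every instance. A minimal sketch with hypothetical server data:

```python
# Hypothetical server records mimicking the Nova API's extended
# status attribute; not actual Watcher data.
servers = [
    {"name": "vm-1", "OS-EXT-STS:vm_state": "active"},
    {"name": "vm-2", "OS-EXT-STS:vm_state": "stopped"},
]

# Correct key: matches the attribute Nova actually returns.
active = [s["name"] for s in servers
          if s.get("OS-EXT-STS:vm_state") == "active"]

# Wrong key (the pre-fix behavior): never matches anything.
missed = [s["name"] for s in servers
          if s.get("OS-EXT-STS:instance_state") == "active"]
```

Using `.get()` on the wrong key returns `None` for every server, so `missed` stays empty while `active` correctly contains only the running instance.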
OpenStack Proposal Bot
9933955c7d Updated from global requirements
Change-Id: Id07d8af7e1c717d52005e73bf43baa67f60e2242
2016-08-18 05:58:34 +00:00
Jenkins
efdf6c93fc Merge "Modify libvirt_opts="-d -l" to libvirtd_opts="-d -l"" 2016-08-17 06:28:26 +00:00
Nguyen Hung Phuong
17d2d75abb Clean imports in code
In some parts of the code we import objects.
The OpenStack style guidelines recommend importing
only modules. We need to fix that.

Change-Id: I268c5045f00d25b4dfbf77c1f599c7baca8373ed
Partial-Bug: #1543101
2016-08-15 13:43:28 +07:00
zte-hanrong
2e55f4ebee Modify libvirt_opts="-d -l" to libvirtd_opts="-d -l"
Change-Id: Iebb5ab722e416e83abfe48e7b66633645c63f94b
2016-08-13 16:19:54 +08:00
Jenkins
196a7a5457 Merge "Add unit tests for nova_helper.py" 2016-08-09 11:33:17 +00:00
Jenkins
1f72a35896 Merge "Updated from global requirements" 2016-08-09 11:31:04 +00:00
Joe Cropper
ea01031268 Rename (pre/post)condition to (pre/post)_condition
This patch updates the applier's abstract methods to be consistent
with other abstract methods of similar nature.

Also included are a few other miscellaneous changes.

Change-Id: Ia1527c00332011412aba2ab326ec986f1e773001
Closes-bug: 1606634
2016-08-08 08:25:41 -05:00
licanwei
6144551809 Add unit tests for nova_helper.py
The nova helper does not have sufficient code coverage.
This commit raises its coverage from 39% to 89%.
The unit tests also found two bugs.

Closes-Bug: #1607198
Change-Id: Iebb693cd09e512ce44702eddca8ead0c7310b263
Closes-Bug: #1599849
2016-08-08 19:39:58 +08:00
OpenStack Proposal Bot
1b2672a49b Updated from global requirements
Change-Id: Ia64e68758d3335e3ba4ca3c15f00ca6cbb25daa3
2016-08-08 10:49:38 +00:00
Jenkins
27b3c5254d Merge "Removed unused function in uniform airflow" 2016-08-05 01:43:19 +00:00
Jenkins
d69efcbd0f Merge "Updated from global requirements" 2016-08-04 16:02:38 +00:00
Vincent Françoise
31de0e319f Removed unused function in uniform airflow
In this changeset, I removed 2 unused methods.

Change-Id: I2e85ed63739f9bb436d110b54fe2b9ef10962205
2016-08-04 14:55:23 +02:00
zhangyanxian
8145906798 Update the home-page info with the developer documentation
Since there are so many components in OpenStack,
describing the URL in the configuration file
makes it more convenient to find the developer documentation
than going through the unified portal

Change-Id: Id6bc554db8bb0cd71851773b9cea71aada4eb9e2
2016-08-04 06:11:42 +00:00
OpenStack Proposal Bot
9d2d2183f5 Updated from global requirements
Change-Id: I4ac7f94a726eb6a1dc1ff8f930f517df0731779d
2016-08-04 02:43:22 +00:00
Vincent Françoise
31c37342cd Refactored the compute model and its elements
In this changeset, I refactored the whole Watcher codebase to
adopt a naming convention about the various elements of the
Compute model so that it reflects the same naming convention
adopted by Nova.

Change-Id: I28adba5e1f27175f025330417b072686134d5f51
Partially-Implements: blueprint cluster-model-objects-wrapper
2016-08-03 12:10:43 +02:00
Jenkins
dbde1afea0 Merge "use parameters to set the threshold" 2016-08-03 08:28:16 +00:00
Jenkins
739b667cbc Merge "Merged metrics_engine package into decision_engine" 2016-08-02 16:34:48 +00:00
Jenkins
671c691189 Merge "Updated DE architecture doc + 'period' param" 2016-08-02 16:34:11 +00:00
Jenkins
32be5de2f9 Merge "Added DE Background Scheduler w/ model sync jobs" 2016-08-02 16:33:51 +00:00
Jenkins
cf01dad222 Merge "Cluster data model collector plugin documentation" 2016-08-02 16:28:56 +00:00
Jenkins
027ce5916c Merge "Loadable Cluster Data Model Collectors" 2016-08-02 16:28:51 +00:00
Jenkins
39227f6e86 Merge "Use more specific asserts" 2016-08-02 14:08:14 +00:00
Gábor Antal
cc2e805780 Use more specific asserts
In many places, more specific asserts can be used.
I replaced the generic asserts with more specific ones
where possible.

This change enhances readability, and on failure a more
useful message is displayed

Change-Id: I86a6baeae2cd36610a2be10ae5085555246c368a
2016-08-02 14:42:29 +02:00
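For illustration, a minimal hypothetical test (not one of Watcher's) showing the difference between a generic assert and its specific counterparts:

```python
import io
import unittest

class TestSpecificAsserts(unittest.TestCase):
    def test_generic_vs_specific(self):
        result = [1, 2, 3]
        # Generic form: on failure it only reports "False is not true".
        self.assertTrue(len(result) == 3)
        # Specific forms report both operands on failure (e.g. "4 != 3"),
        # which is far more useful when reading a gate log.
        self.assertEqual(3, len(result))
        self.assertIn(2, result)
        self.assertIsInstance(result, list)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSpecificAsserts)
outcome = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

When `assertEqual(3, len(result))` fails, unittest prints both values; `assertTrue(len(result) == 3)` collapses everything into a bare boolean first, discarding the information.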
Vincent Françoise
0a6841f510 Merged metrics_engine package into decision_engine
In this changeset, I merged the metrics_engine package into
the decision_engine one alongside the required changes to make
the tests pass.

Change-Id: Iac1cd266a854212bf4fa8b21c744b076c3b834a8
Partially-Implements: blueprint cluster-model-objects-wrapper
2016-08-02 12:07:35 +02:00
Vincent Françoise
4f8591cb02 Updated DE architecture doc + 'period' param
In this changeset, I updated the architecture documentation
about the Decision Engine by adding a new sequence diagram
outlining the CDM synchronization workflow.
I also explained the default 'period' parameter used in the
CDMC plugin.

Change-Id: I09790281ba9117e302ab8e66a887667929c6c261
Partially-Implements: blueprint cluster-model-objects-wrapper
2016-08-02 12:07:35 +02:00
Vincent Françoise
06c6c4691b Added DE Background Scheduler w/ model sync jobs
In this changeset, I implemented a background scheduler service
for Watcher and more particularly for the Decision Engine where
I made it create 2 types of job per cluster data model collector
plugin:

- An initial job that is asynchronously executed upon starting the
  Decision Engine
- A periodic job that gets triggered at a configurable time
  interval

Change-Id: I3f5442f81933a19565217b894bd86c186e339762
Partially-Implements: blueprint cluster-model-objects-wrapper
2016-08-02 12:07:35 +02:00
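The two job types can be sketched with the standard library alone; this is an illustrative stand-in, not Watcher's actual scheduler service (which is driven per collector plugin):

```python
import threading
import time

class ModelSyncScheduler:
    """Illustrative stand-in for a background scheduler running the two
    job types: an initial sync executed asynchronously at startup, plus
    a periodic sync re-armed at a configurable interval."""

    def __init__(self, interval, sync_fn):
        self.interval = interval
        self.sync_fn = sync_fn
        self._stopped = threading.Event()

    def start(self):
        # Initial job: fires once, asynchronously, as the service starts.
        threading.Thread(target=self.sync_fn).start()
        # Periodic job: fires every `interval` seconds until stopped.
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        # Event.wait(timeout) returns False on timeout and True once
        # set, so the loop ticks every interval and exits on stop().
        while not self._stopped.wait(self.interval):
            self.sync_fn()

    def stop(self):
        self._stopped.set()

calls = []
scheduler = ModelSyncScheduler(0.05, lambda: calls.append(time.time()))
scheduler.start()
time.sleep(0.25)
scheduler.stop()
```

After a quarter second with a 0.05 s interval, `calls` holds the initial run plus several periodic ones.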
Vincent Françoise
b94677c3ef Cluster data model collector plugin documentation
In this changeset, I wrote down the documentation on how to implement
a cluster data model collector plugin for Watcher.

This documentation corresponds to part 1 of the associated
specification.

Change-Id: Iac72b933df95252163033cd559d13348075a9b16
Partially-Implements: blueprint cluster-model-objects-wrapper
2016-08-02 12:07:35 +02:00
Vincent Françoise
5a2a94fbec Loadable Cluster Data Model Collectors
In this changeset, I made BaseClusterDataModelCollector instances
pluggable. This corresponds to "part 1" of the work items detailed
in the specifications.

Change-Id: Iab1c7e264add9e2cbbbb767e3fd6e99a0c22c691
Partially-Implements: blueprint cluster-model-objects-wrapper
2016-08-02 12:07:35 +02:00
Jenkins
553c5a6c4b Merge "Add scoring engines to database and API layers" 2016-08-02 08:09:39 +00:00
Jenkins
5ffb7df20c Merge "Implement goal_id, strategy_id and host_aggregate into Audit api" 2016-08-02 07:59:46 +00:00
OpenStack Proposal Bot
61e581f45e Updated from global requirements
Change-Id: I5d8a4f1e4c3b72e248181ba4da5bd0e6f17c164d
2016-08-01 18:49:15 +00:00
Tomasz Kaczynski
26d84e353e Add scoring engines to database and API layers
A Scoring Module needs to expose a list of available
scoring engines through API and Watcher CLI. This list
is stored in database and synchronized by Decision Engine.

Partially-Implements: blueprint scoring-module
Change-Id: I32168adeaf34fd12a731204c5b58fe68434ad087
APIImpact
2016-08-01 12:40:33 +00:00
Jenkins
b2656b92c4 Merge "Add installation from Debian packages section" 2016-07-29 01:55:50 +00:00
Michael Gugino
52ffc2c8e2 Implement goal_id, strategy_id and host_aggregate into Audit api
Modify the API controller for audit objects to allow
creating audit objects by specifying an
audit_template uuid/id and/or a goal_id.

strategy_id is optional.

Partially Implements: blueprint persistent-audit-parameters
Change-Id: I7b3eae4d0752a11208f5f92ee13ab1018d8521ad
2016-07-28 20:12:52 -04:00
junjie huang
5dee934565 use parameters to set the threshold
Use parameters to set the threshold for the
Uniform Airflow and Workload Balance strategies.

Change-Id: I5a547929eb1e2413468e9a5de16d3fd42cabadf9
Closes-Bug: #1600707
2016-07-28 11:28:41 +00:00
OpenStack Proposal Bot
0769e53f93 Updated from global requirements
Change-Id: I60b1bc77b3819d8fadddbd13a55b00da8e206b2f
2016-07-28 06:09:16 +00:00
Jenkins
8db3427665 Merge "Add hacking checks to watcher" 2016-07-27 09:10:06 +00:00
Jenkins
35dfa40b08 Merge "Add Python 3.5 classifier and venv" 2016-07-27 09:09:55 +00:00
Jenkins
a98636cd09 Merge "Fixed Basic optim tempest test" 2016-07-26 09:45:18 +00:00
Vincent Françoise
107bd0be54 Fixed Basic optim tempest test
In this changeset, I made some fixes in order to make the
multinode test pass on the gate.

Change-Id: I2433748a78c87b15893ea69964561955b478eebd
2016-07-26 10:43:07 +02:00
Jenkins
95917bc147 Merge "Fix 2 occurrences of typo: "occured" --> "occurred"" 2016-07-26 07:23:23 +00:00
Jenkins
cc1f66cd6a Merge "Update docs links to docs.openstack.org" 2016-07-26 06:29:41 +00:00
Ralf Rantzau
926b790747 Fix 2 occurrences of typo: "occured" --> "occurred"
Change-Id: Ie25223e8aebdc05bc0f19b9df4805dfb942f1818
2016-07-25 14:21:21 -07:00
Drew Thorstensen
7d704dbeec Add hacking checks to watcher
The hacking checks enforce, during the pep8 run, functional validations
of the code to ensure deeper filtering and code consistency. This change
set adds the hacking checks to the watcher project. These checks were
seeded from the neutron project, which had a good set of base defaults.

This change set also updates the watcher project to be compliant with
these new hacking checks.

Change-Id: I6f4566d384a7400bddf228aa127a53e6ecc82c2e
2016-07-25 07:52:45 -04:00
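In the hacking framework used by OpenStack projects, a check is a plain function that receives a logical line and yields `(offset, message)` tuples for each violation. A hypothetical example in that style (the check name and `N533` code are made up for illustration):

```python
def check_log_warn_deprecated(logical_line):
    """N533 (illustrative code) - LOG.warn is deprecated.

    Hacking checks yield (offset, message) tuples; pep8 reports each
    one at the given column of the offending logical line.
    """
    if "LOG.warn(" in logical_line:
        yield (logical_line.find("LOG.warn("),
               "N533: LOG.warn is deprecated, use LOG.warning")

violations = list(check_log_warn_deprecated("    LOG.warn('disk full')"))
clean = list(check_log_warn_deprecated("    LOG.warning('disk full')"))
```

Registered through the project's `tox.ini`/`flake8` local-check plumbing, such a function runs on every logical line during the pep8 job.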
Tin Lam
5c08095417 Update docs links to docs.openstack.org
Changed reference from https://factory.b-com.com to
http://docs.openstack.org.

Change-Id: Ifb8b09ff9df5c67e49b594b7373f11082e00029a
Partial-Bug: #1595180
2016-07-21 13:37:16 -05:00
Jenkins
65d3a4d75e Merge "Updated from global requirements" 2016-07-21 13:29:16 +00:00
Jenkins
6727d53072 Merge "Fix dict.keys() PY3 compatible" 2016-07-21 13:02:16 +00:00
Swapnil Kulkarni (coolsvap)
b579a41f30 Remove discover from test-requirements
It's only needed for Python < 2.7, which is not supported

Change-Id: Ib3f24755d50ef9466e5bb35c10ea57ac28ac7a34
2016-07-21 07:47:17 +00:00
OpenStack Proposal Bot
051810dfa2 Updated from global requirements
Change-Id: Ie246370342759d9e3935b7a6e802c94e33ad3d6a
2016-07-21 04:12:17 +00:00
Jenkins
8ab80894c3 Merge "Fix typos and messages in strategies" 2016-07-20 23:58:26 +00:00
Joe Cropper
6730202151 Fix typos and messages in strategies
This patch fixes various typos and other nits in the strategies.  It
also updates some of the log messages to be a little more operator
friendly.

Change-Id: Ic9268c6d7376dad215a1a40798485b1d836ba7ae
Closes-Bug: #1604248
2016-07-20 14:19:45 -07:00
Jenkins
a6d0eaa4c4 Merge "Update executor to eventlet" 2016-07-19 19:41:22 +00:00
Jenkins
2e04ba6b64 Merge "There are some spelling errors in the code." 2016-07-19 19:41:13 +00:00
Muzammil Mueen
fd7c41fba2 Remove unused columns parameters in watcher/db/api
In watcher/db/api.py, some abstract methods are specifying a 'columns'
parameter that is actually ignored in db/sqlalchemy/api.py. Since we
do not need this parameter, the signatures of these methods
were realigned by removing the 'columns' parameter (and its
docstring) from each of the following methods.

get_audit_template_list
get_audit_list
get_action_list
get_action_plan_list

Closes-Bug: #1597641
Change-Id: If706e24d5714f0139fd135bdc41d17d0e431e302
2016-07-19 09:51:41 -07:00
licanwei
aef1eba5df test_context_hook_before_method failed
The gate-watcher-python27 job was failing.
The reason is that oslo_context.context.__init__()
added the parameter 'domain_name', causing a mismatch
in the file 'watcher/common/context.py'.

Change-Id: If0be05943e7c89788d6ccba3385474ccb036e6f5
Closes-Bug: #1604214
2016-07-19 10:58:14 +08:00
weiweigu
cd60336e20 Fix dict.keys() PY3 compatible
dict.keys()[0] raises a TypeError in PY3,
as dict.keys() no longer returns a list
but a view object.

Change-Id: If15a153c9db9b654e761f8ad50d5d66a427efa4e
Closes-Bug: #1583419
2016-07-14 13:47:29 +08:00
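The incompatibility and the portable replacements can be shown in a few lines:

```python
d = {"a": 1}

# Python 2: d.keys() returned a list, so d.keys()[0] worked.
# Python 3: d.keys() returns a view object, which is not indexable.
try:
    d.keys()[0]
except TypeError:
    indexable = False
else:
    indexable = True

# Portable replacements that work on both Python 2 and 3:
first_key = list(d)[0]          # materialize the keys as a list
first_key_iter = next(iter(d))  # or take the first key lazily
```

`next(iter(d))` avoids building an intermediate list, which matters when the dict is large and only one key is needed.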
zhangyanxian
e7a1ba7e74 There are some spelling errors in the code.
Change-Id: Ifb88a13783fc12258489abb4caabca1b1038a77d
2016-07-13 02:36:47 +00:00
Yatin Kumbhare
1b0801a0c3 Add Python 3.5 classifier and venv
Now that there is a passing gate job, we can claim support for
Python 3.5 in the classifier. This patch also adds the convenience
py35 venv.

Change-Id: Idf6cd632bcb6f4f61dba65fedc9309d0184f46b7
2016-07-12 21:23:43 +05:30
David TARDIVEL
ff4375337e Add installation from Debian packages section
Debian experimental packages are now available for Watcher.
I added a new section to help users quickly deploy these
packages and test them.

Change-Id: Ib920b7dbf968c36941c0d30c9001bc150df543f8
2016-07-06 15:58:50 +00:00
David TARDIVEL
21d1610071 Update executor to eventlet
The default 'blocking' executor handles only one message at a time.
The 'eventlet' executor is recommended.

Change-Id: Id738d0462fbb3c7fd6c78ee2f0dd0f1e79131ca7
Closes-Bug: #1517843
2016-07-06 17:40:55 +02:00
243 changed files with 9644 additions and 4323 deletions

View File

@@ -1,7 +1,9 @@
 [run]
 branch = True
 source = watcher
-omit = watcher/tests/*
+omit =
+    watcher/tests/*
+    watcher/hacking/*
 
 [report]
 ignore_errors = True

View File

@@ -44,6 +44,9 @@ WATCHER_CONF_DIR=/etc/watcher
 WATCHER_CONF=$WATCHER_CONF_DIR/watcher.conf
 WATCHER_POLICY_JSON=$WATCHER_CONF_DIR/policy.json
 
+NOVA_CONF_DIR=/etc/nova
+NOVA_CONF=$NOVA_CONF_DIR/nova.conf
+
 if is_ssl_enabled_service "watcher" || is_service_enabled tls-proxy; then
     WATCHER_SERVICE_PROTOCOL="https"
 fi
@@ -123,6 +126,8 @@ function create_watcher_conf {
     iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
     iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_host $RABBIT_HOST
 
+    iniset $NOVA_CONF oslo_messaging_notifications topics "notifications,watcher_notifications"
+
     configure_auth_token_middleware $WATCHER_CONF watcher $WATCHER_AUTH_CACHE_DIR
     configure_auth_token_middleware $WATCHER_CONF watcher $WATCHER_AUTH_CACHE_DIR "watcher_clients_auth"

View File

@@ -139,7 +139,7 @@ The Watcher Dashboard can be used to interact with the Watcher system through
 Horizon in order to control it or to know its current status.
 
 Please, read `the detailed documentation about Watcher Dashboard
-<https://factory.b-com.com/www/watcher/doc/watcher-dashboard/>`_.
+<http://docs.openstack.org/developer/watcher-dashboard/>`_.
 
 .. _archi_watcher_database_definition:
 
@@ -176,30 +176,42 @@ associated :ref:`Audit Template <audit_template_definition>` and knows the
 :ref:`Goal <goal_definition>` to achieve.
 
 It then selects the most appropriate :ref:`Strategy <strategy_definition>`
-depending on how Watcher was configured for this :ref:`Goal <goal_definition>`.
+from the list of available strategies achieving this goal.
 
 The :ref:`Strategy <strategy_definition>` is then dynamically loaded (via
-`stevedore <https://github.com/openstack/stevedore/>`_). The
-:ref:`Watcher Decision Engine <watcher_decision_engine_definition>` calls the
-**execute()** method of the :ref:`Strategy <strategy_definition>` class which
-generates a solution composed of a set of :ref:`Actions <action_definition>`.
+`stevedore <http://docs.openstack.org/developer/stevedore/>`_). The
+:ref:`Watcher Decision Engine <watcher_decision_engine_definition>` executes
+the strategy.
+
+In order to compute the potential :ref:`Solution <solution_definition>` for the
+Audit, the :ref:`Strategy <strategy_definition>` relies on different sets of
+data:
+
+- :ref:`Cluster data models <cluster_data_model_definition>` that are
+  periodically synchronized through pluggable cluster data model collectors.
+  These models contain the current state of various
+  :ref:`Managed resources <managed_resource_definition>` (e.g., the data stored
+  in the Nova database). These models gives a strategy the ability to reason on
+  the current state of a given :ref:`cluster <cluster_definition>`.
+- The data stored in the :ref:`Cluster History Database
+  <cluster_history_db_definition>` which provides information about the past of
+  the :ref:`Cluster <cluster_definition>`.
+
+Here below is a sequence diagram showing how the Decision Engine builds and
+maintains the :ref:`cluster data models <cluster_data_model_definition>` that
+are used by the strategies.
+
+.. image:: ./images/sequence_architecture_cdmc_sync.png
+   :width: 100%
+
+The execution of a strategy then yields a solution composed of a set of
+:ref:`Actions <action_definition>` as well as a set of :ref:`efficacy
+indicators <efficacy_indicator_definition>`.
 
 These :ref:`Actions <action_definition>` are scheduled in time by the
 :ref:`Watcher Planner <watcher_planner_definition>` (i.e., it generates an
 :ref:`Action Plan <action_plan_definition>`).
 
-In order to compute the potential :ref:`Solution <solution_definition>` for the
-Audit, the :ref:`Strategy <strategy_definition>` relies on two sets of data:
-
-- the current state of the
-  :ref:`Managed resources <managed_resource_definition>`
-  (e.g., the data stored in the Nova database)
-- the data stored in the
-  :ref:`Cluster History Database <cluster_history_db_definition>`
-  which provides information about the past of the
-  :ref:`Cluster <cluster_definition>`
-
 .. _data_model:
 
 Data model
@@ -216,8 +228,6 @@ Here below is a diagram representing the main objects in Watcher from a
 database perspective:
 
 .. image:: ./images/watcher_db_schema_diagram.png
-   :width: 100%
-
 
 .. _sequence_diagrams:
@@ -398,7 +408,7 @@ be one of the following:
   :ref:`Watcher Decision Engine <watcher_decision_engine_definition>`
 - **SUCCEEDED** : the :ref:`Audit <audit_definition>` has been executed
   successfully and at least one solution was found
-- **FAILED** : an error occured while executing the
+- **FAILED** : an error occurred while executing the
   :ref:`Audit <audit_definition>`
 - **DELETED** : the :ref:`Audit <audit_definition>` is still stored in the
   :ref:`Watcher database <watcher_database_definition>` but is not returned
@@ -434,7 +444,7 @@ state may be one of the following:
 - **SUCCEEDED** : the :ref:`Action Plan <action_plan_definition>` has been
   executed successfully (i.e. all :ref:`Actions <action_definition>` that it
   contains have been executed successfully)
-- **FAILED** : an error occured while executing the
+- **FAILED** : an error occurred while executing the
   :ref:`Action Plan <action_plan_definition>`
 - **DELETED** : the :ref:`Action Plan <action_plan_definition>` is still
   stored in the :ref:`Watcher database <watcher_database_definition>` but is

View File

@@ -19,12 +19,13 @@ from watcher import version as watcher_version
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = [
     'oslo_config.sphinxconfiggen',
+    'oslosphinx',
     'sphinx.ext.autodoc',
     'sphinx.ext.viewcode',
     'sphinxcontrib.httpdomain',
     'sphinxcontrib.pecanwsme.rest',
+    'stevedore.sphinxext',
     'wsmeext.sphinxext',
-    'oslosphinx',
     'watcher.doc',
 ]

View File

@@ -403,6 +403,35 @@ own storage driver using whatever technology you want.
 
 For more information : https://wiki.openstack.org/wiki/Gnocchi
 
+Configure Nova Notifications
+============================
+
+Watcher can consume notifications generated by the Nova services, in order to
+build or update, in real time, its cluster data model related to computing
+resources.
+
+Nova publishes, by default, notifications on ``notifications`` AMQP queue
+(configurable) and ``versioned_notifications`` AMQP queue (not
+configurable). ``notifications`` queue is mainly used by ceilometer, so we can
+not use it. And some events, related to nova-compute service state, are only
+sent into the ``versioned_notifications`` queue.
+
+By default, Watcher listens to AMQP queues named ``watcher_notifications``
+and ``versioned_notifications``. So you have to update the Nova
+configuration file on controller and compute nodes, in order
+to Watcher receives Nova notifications in ``watcher_notifications`` as well.
+
+* In the file ``/etc/nova/nova.conf``, update the section
+  ``[oslo_messaging_notifications]``, by redefining the list of topics
+  into which Nova services will publish events ::
+
+    [oslo_messaging_notifications]
+    driver = messaging
+    topics = notifications,watcher_notifications
+
+* Restart the Nova services.
+
 Workers
 =======

View File

@@ -108,3 +108,54 @@ installed on your system.
 
 Once installed, you still need to declare Watcher as a new service into
 Keystone and to configure its different modules, which you can find described
 in :doc:`configuration`.
+
+Installing from packages: Debian (experimental)
+-----------------------------------------------
+
+Experimental Debian packages are available on `Debian repositories`_. The best
+way to use them is to install them into a Docker_ container.
+
+Here is single Dockerfile snippet you can use to run your Docker container:
+
+.. code-block:: bash
+
+    FROM debian:experimental
+    MAINTAINER David TARDIVEL <david.tardivel@b-com.com>
+    RUN apt-get update
+    RUN apt-get dist-upgrade -y
+    RUN apt-get install -y vim net-tools
+    RUN apt-get install -yt experimental watcher-api
+    CMD ["/usr/bin/watcher-api"]
+
+Build your container from this Dockerfile:
+
+.. code-block:: bash
+
+    $ docker build -t watcher/api .
+
+To run your container, execute this command:
+
+.. code-block:: bash
+
+    $ docker run -d -p 9322:9322 watcher/api
+
+Check in your logs Watcher API is started
+
+.. code-block:: bash
+
+    $ docker logs <container ID>
+
+You can run similar container with Watcher Decision Engine (package
+``watcher-decision-engine``) and with the Watcher Applier (package
+``watcher-applier``).
+
+.. _Docker: https://www.docker.com/
+.. _`Debian repositories`: https://packages.debian.org/experimental/allpackages


@@ -39,10 +39,10 @@ named ``watcher``, or by using the `OpenStack CLI`_ ``openstack``.
If you want to deploy Watcher in Horizon, please refer to the `Watcher Horizon
plugin installation guide`_.

.. _`installation guide`: http://docs.openstack.org/developer/python-watcherclient
.. _`Watcher Horizon plugin installation guide`: http://docs.openstack.org/developer/watcher-dashboard/deploy/installation.html
.. _`OpenStack CLI`: http://docs.openstack.org/developer/python-openstackclient/man/openstack.html
.. _`Watcher CLI`: http://docs.openstack.org/developer/python-watcherclient/index.html

Seeing what the Watcher CLI can do?
------------------------------------
@@ -172,7 +172,7 @@ Input parameter could cause audit creation failure, when:
Watcher service will compute an :ref:`Action Plan <action_plan_definition>`
composed of a list of potential optimization :ref:`actions <action_definition>`
(instance migration, disabling of a compute node, ...) according to the
:ref:`goal <goal_definition>` to achieve. You can see all of the goals
available in section ``[watcher_strategies]`` of the Watcher service
configuration file.


@@ -160,7 +160,7 @@ Edit `/etc/libvirt/libvirtd.conf` to make sure the following values are set::
Edit `/etc/default/libvirt-bin`::

    libvirtd_opts="-d -l"

Restart the libvirt service::


@@ -80,12 +80,9 @@ Here is an example showing how you can write a plugin called ``DummyAction``:
        pass

This implementation is the most basic one. So in order to get a better
understanding on how to implement a more advanced action, have a look at the
:py:class:`~watcher.applier.actions.migration.Migrate` class.

Input validation
----------------
@@ -117,12 +114,15 @@ tune the action to its needs. To do so, you can implement the
    def execute(self):
        assert self.config.test_opt == 0

    @classmethod
    def get_config_opts(cls):
        return super(
            DummyAction, cls).get_config_opts() + [
            cfg.StrOpt('test_opt', help="Demo Option.", default=0),
            # Some more options ...
        ]

The configuration options defined within this class method will be included
within the global ``watcher.conf`` configuration file under a section named by
convention: ``{namespace}.{plugin_name}``. In our case, the ``watcher.conf``


@@ -88,10 +88,13 @@ Now that the project skeleton has been created, you can start the
implementation of your plugin. As of now, you can implement the following
plugins for Watcher:

- A :ref:`goal plugin <implement_goal_plugin>`
- A :ref:`strategy plugin <implement_strategy_plugin>`
- An :ref:`action plugin <implement_action_plugin>`
- A :ref:`planner plugin <implement_planner_plugin>`
- A workflow engine plugin
- A :ref:`cluster data model collector plugin
  <implement_cluster_data_model_collector_plugin>`

If you want to learn more on how to implement them, you can refer to their
dedicated documentation.


@@ -0,0 +1,229 @@
..
Except where otherwise noted, this document is licensed under Creative
Commons Attribution 3.0 License. You can view the license at:
https://creativecommons.org/licenses/by/3.0/
.. _implement_cluster_data_model_collector_plugin:
========================================
Build a new cluster data model collector
========================================
Watcher Decision Engine has an external cluster data model (CDM) plugin
interface which gives anyone the ability to integrate an external cluster data
model collector (CDMC) in order to extend the initial set of cluster data model
collectors Watcher provides.
This section gives some guidelines on how to implement and integrate custom
cluster data model collectors within Watcher.
Creating a new plugin
=====================
In order to create a new model, you have to:

- Extend the :py:class:`~.base.BaseClusterDataModelCollector` class.
- Implement its :py:meth:`~.BaseClusterDataModelCollector.execute` abstract
  method to return the entire cluster data model that this method should
  build.
- Implement its
  :py:meth:`~.BaseClusterDataModelCollector.notification_endpoints` abstract
  property to return the list of all the
  :py:class:`~.base.NotificationEndpoint` instances that will be responsible
  for handling incoming notifications in order to incrementally update your
  cluster data model.
First of all, you have to extend the :class:`~.BaseClusterDataModelCollector`
base class which defines the :py:meth:`~.BaseClusterDataModelCollector.execute`
abstract method you will have to implement. This method is responsible for
building an entire cluster data model.
Here is an example showing how you can write a plugin called
``DummyClusterDataModelCollector``:
.. code-block:: python

    # Filepath = <PROJECT_DIR>/thirdparty/dummy.py
    # Import path = thirdparty.dummy

    from watcher.decision_engine.model import model_root
    from watcher.decision_engine.model.collector import base


    class DummyClusterDataModelCollector(base.BaseClusterDataModelCollector):

        def execute(self):
            model = model_root.ModelRoot()
            # Do something here...
            return model

        @property
        def notification_endpoints(self):
            return []
This implementation is the most basic one. So in order to get a better
understanding on how to implement a more advanced cluster data model collector,
have a look at the :py:class:`~.NovaClusterDataModelCollector` class.
Define configuration parameters
===============================
At this point, you have a fully functional cluster data model collector.
By default, cluster data model collectors define a ``period`` option (see
:py:meth:`~.BaseClusterDataModelCollector.get_config_opts`) that corresponds
to the interval of time between each synchronization of the in-memory model.
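The ``period`` option drives a simple time-based refresh loop. A toy sketch
of that scheduling logic (the class and method names here are illustrative,
not part of Watcher's API):

```python
class ToyCollector:
    """Rebuilds its in-memory model every `period` seconds (illustrative)."""

    def __init__(self, period=3600):
        self.period = period
        self.synced_at = None  # timestamp of the last synchronization

    def needs_sync(self, now):
        # Sync if we never synced, or if the period has elapsed since then.
        return self.synced_at is None or now - self.synced_at >= self.period

    def synchronize(self, now):
        # A real collector would rebuild the cluster data model here.
        self.synced_at = now


collector = ToyCollector(period=2)
assert collector.needs_sync(0)      # never synced yet
collector.synchronize(0)
assert not collector.needs_sync(1)  # period not elapsed
assert collector.needs_sync(2)      # period elapsed, time to resync
```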
However, in more complex implementations, you may want to define some
configuration options so one can tune the cluster data model collector to
their needs. To do so, you can implement the
:py:meth:`~.Loadable.get_config_opts` class method as follows:
.. code-block:: python

    from oslo_config import cfg

    from watcher.decision_engine.model import model_root
    from watcher.decision_engine.model.collector import base


    class DummyClusterDataModelCollector(base.BaseClusterDataModelCollector):

        def execute(self):
            model = model_root.ModelRoot()
            # Do something here...
            return model

        @property
        def notification_endpoints(self):
            return []

        @classmethod
        def get_config_opts(cls):
            return super(
                DummyClusterDataModelCollector, cls).get_config_opts() + [
                cfg.StrOpt('test_opt', help="Demo Option.", default=0),
                # Some more options ...
            ]
The configuration options defined within this class method will be included
within the global ``watcher.conf`` configuration file under a section named by
convention: ``{namespace}.{plugin_name}`` (see section :ref:`Register a new
entry point <register_new_cdmc_entrypoint>`). The namespace for CDMC plugins is
``watcher_cluster_data_model_collectors``, so in our case, the ``watcher.conf``
configuration would have to be modified as follows:

.. code-block:: ini

    [watcher_cluster_data_model_collectors.dummy]
    # Option used for testing.
    test_opt = test_value
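The section name above is just the two parts joined by a dot. A minimal
sketch of that naming convention (the helper function is illustrative, not
part of Watcher):

```python
def config_section(namespace, plugin_name):
    # Section name convention described above: "{namespace}.{plugin_name}"
    return "{0}.{1}".format(namespace, plugin_name)


section = config_section("watcher_cluster_data_model_collectors", "dummy")
print(section)  # watcher_cluster_data_model_collectors.dummy
```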
The configuration options you define within this method will then be
injected in each instantiated object via the ``config`` parameter of the
:py:meth:`~.BaseClusterDataModelCollector.__init__` method.
Abstract Plugin Class
=====================
Here below is the abstract ``BaseClusterDataModelCollector`` class that every
single cluster data model collector should implement:
.. autoclass:: watcher.decision_engine.model.collector.base.BaseClusterDataModelCollector
    :members:
    :special-members: __init__
    :noindex:
.. _register_new_cdmc_entrypoint:
Register a new entry point
==========================
In order for the Watcher Decision Engine to load your new cluster data model
collector, the latter must be registered as a named entry point under the
``watcher_cluster_data_model_collectors`` entry point namespace of your
``setup.py`` file. If you are using pbr_, this entry point should be placed in
your ``setup.cfg`` file.
The name you give to your entry point has to be unique.
Here below is how to register ``DummyClusterDataModelCollector`` using pbr_:
.. code-block:: ini

    [entry_points]
    watcher_cluster_data_model_collectors =
        dummy = thirdparty.dummy:DummyClusterDataModelCollector
.. _pbr: http://docs.openstack.org/developer/pbr/
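The entry point line follows the standard ``name = module.path:ClassName``
syntax. A small sketch of how such a line decomposes (the parsing helper is
illustrative; pbr/setuptools and stevedore do this for real):

```python
def parse_entry_point(spec):
    # Splits "name = module.path:ClassName" into its three parts.
    name, target = (part.strip() for part in spec.split("=", 1))
    module_path, class_name = target.split(":")
    return name, module_path, class_name


entry = parse_entry_point(
    "dummy = thirdparty.dummy:DummyClusterDataModelCollector")
print(entry)
```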
Add new notification endpoints
==============================
At this point, you have a fully functional cluster data model collector.
However, this CDMC is only refreshed periodically via a background scheduler.
As you may sometimes execute a strategy with a stale CDM due to a high activity
on your infrastructure, you can define some notification endpoints that will be
responsible for incrementally updating the CDM based on notifications emitted
by other services such as Nova. To do so, you can implement and register a new
``DummyEndpoint`` notification endpoint regarding a ``dummy`` event as shown
below:
.. code-block:: python

    from watcher.decision_engine.model import model_root
    from watcher.decision_engine.model.collector import base
    from watcher.decision_engine.model.notification import filtering


    class DummyNotification(base.NotificationEndpoint):

        @property
        def filter_rule(self):
            return filtering.NotificationFilter(
                publisher_id=r'.*',
                event_type=r'^dummy$',
            )

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Do some CDM modifications here...
            pass


    class DummyClusterDataModelCollector(base.BaseClusterDataModelCollector):

        def execute(self):
            model = model_root.ModelRoot()
            # Do something here...
            return model

        @property
        def notification_endpoints(self):
            return [DummyNotification(self)]
Note that if the event you are trying to listen to is published by a new
service, you may have to also add a new topic Watcher will have to subscribe to
in the ``notification_topics`` option of the ``[watcher_decision_engine]``
section.
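The ``filter_rule`` above pairs a regular expression with each notification
field. A toy approximation of that matching logic (the dict-based rule and
helper are illustrative, not Watcher's actual filtering class):

```python
import re


def rule_matches(rule, publisher_id, event_type):
    # A notification passes the filter only if every field matches its regex.
    return (re.search(rule["publisher_id"], publisher_id) is not None
            and re.search(rule["event_type"], event_type) is not None)


rule = {"publisher_id": r".*", "event_type": r"^dummy$"}
assert rule_matches(rule, "nova-compute:host-1", "dummy")
assert not rule_matches(rule, "nova-compute:host-1", "dummy.start")
```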
Using cluster data model collector plugins
==========================================
The Watcher Decision Engine service will automatically discover any installed
plugins when it is restarted. If a Python package containing a custom plugin is
installed within the same environment as Watcher, Watcher will automatically
make that plugin available for use.
At this point, you can use your new cluster data model plugin in your
:ref:`strategy plugin <implement_strategy_plugin>` by using the
:py:attr:`~.BaseStrategy.collector_manager` property as follows:

.. code-block:: python

    # [...]
    dummy_collector = self.collector_manager.get_cluster_model_collector(
        "dummy")  # "dummy" is the name of the entry point we declared earlier
    dummy_model = dummy_collector.get_latest_cluster_data_model()

    # Do some stuff with this model


@@ -91,8 +91,10 @@ tune the planner to its needs. To do so, you can implement the
        assert self.config.test_opt == 0
        # [...]

    @classmethod
    def get_config_opts(cls):
        return super(
            DummyPlanner, cls).get_config_opts() + [
            cfg.StrOpt('test_opt', help="Demo Option.", default=0),
            # Some more options ...
        ]


@@ -257,7 +257,7 @@ pluggable backend.
Finally, if your strategy requires new metrics not covered by Ceilometer, you
can add them through a Ceilometer `plugin`_.

.. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/decision_engine/cluster/history/ceilometer.py
.. _`Ceilometer developer guide`: http://docs.openstack.org/developer/ceilometer/architecture.html#storing-the-data
.. _`here`: http://docs.openstack.org/developer/ceilometer/install/dbreco.html#choosing-a-database-backend
.. _`plugin`: http://docs.openstack.org/developer/ceilometer/plugins.html
@@ -296,16 +296,15 @@ Read usage metrics using the Watcher Cluster History Helper
Here below is the abstract ``BaseClusterHistory`` class of the Helper.

.. autoclass:: watcher.decision_engine.cluster.history.base.BaseClusterHistory
    :members:
    :noindex:

The following code snippet shows how to create a Cluster History class:

.. code-block:: py

    from watcher.decision_engine.cluster.history import ceilometer as ceil

    query_history = ceil.CeilometerClusterHistory()
@@ -313,7 +312,7 @@ Using that you can now query the values for that specific metric:
.. code-block:: py

    query_history.statistic_aggregation(resource_id=compute_node.uuid,
                                        meter_name='compute.node.cpu.percent',
                                        period="7200",
                                        aggregate='avg'
                                        )


@@ -18,32 +18,43 @@ use the :ref:`Guru Meditation Reports <watcher_gmr>` to display them.
Goals
=====

.. list-plugins:: watcher_goals
    :detailed:

.. _watcher_strategies:

Strategies
==========

.. list-plugins:: watcher_strategies
    :detailed:

.. _watcher_actions:

Actions
=======

.. list-plugins:: watcher_actions
    :detailed:

.. _watcher_workflow_engines:

Workflow Engines
================

.. list-plugins:: watcher_workflow_engines
    :detailed:

.. _watcher_planners:

Planners
========

.. list-plugins:: watcher_planners
    :detailed:

Cluster Data Model Collectors
=============================

.. list-plugins:: watcher_cluster_data_model_collectors
    :detailed:


@@ -99,14 +99,14 @@ The :ref:`Cluster <cluster_definition>` may be divided in one or several
Cluster Data Model
==================

.. watcher-term:: watcher.decision_engine.model.collector.base

.. _cluster_history_definition:

Cluster History
===============

.. watcher-term:: watcher.decision_engine.cluster.history.base

.. _controller_node_definition:


@@ -0,0 +1,41 @@
@startuml
skinparam maxMessageSize 100
actor "Administrator"
== Initialization ==
"Administrator" -> "Decision Engine" : Start all services
"Decision Engine" -> "Background Task Scheduler" : Start
activate "Background Task Scheduler"
"Background Task Scheduler" -> "Cluster Model Collector Loader"\
: List available cluster data models
"Cluster Model Collector Loader" --> "Background Task Scheduler"\
: list of BaseClusterModelCollector instances
loop for every available cluster data model collector
"Background Task Scheduler" -> "Background Task Scheduler"\
: add periodic synchronization job
create "Jobs Pool"
"Background Task Scheduler" -> "Jobs Pool" : Create sync job
end
deactivate "Background Task Scheduler"
hnote over "Background Task Scheduler" : Idle
== Job workflow ==
"Background Task Scheduler" -> "Jobs Pool" : Trigger synchronization job
"Jobs Pool" -> "Nova Cluster Data Model Collector" : synchronize
activate "Nova Cluster Data Model Collector"
"Nova Cluster Data Model Collector" -> "Nova API"\
: Fetch needed data to build the cluster data model
"Nova API" --> "Nova Cluster Data Model Collector" : Needed data
"Nova Cluster Data Model Collector" -> "Nova Cluster Data Model Collector"\
: Build an in-memory cluster data model
]o<-- "Nova Cluster Data Model Collector" : Done
deactivate "Nova Cluster Data Model Collector"
@enduml


@@ -10,45 +10,41 @@ activate "Decision Engine"
"AMQP Bus" <[#blue]- "Decision Engine" : notify new audit state = ONGOING

"Decision Engine" -> "Database" : get audit parameters (goal, strategy, ...)
"Decision Engine" <-- "Database" : audit parameters (goal, strategy, ...)

"Decision Engine" --> "Decision Engine"\
    : select appropriate optimization strategy (via the Strategy Selector)

create Strategy
"Decision Engine" -> "Strategy" : execute strategy
activate "Strategy"

"Strategy" -> "Cluster Data Model Collector" : get cluster data model
"Cluster Data Model Collector" --> "Strategy"\
    : copy of the in-memory cluster data model

loop while enough history data for the strategy
    "Strategy" -> "Ceilometer API" : get necessary metrics
    "Strategy" <-- "Ceilometer API" : aggregated metrics
end

"Strategy" -> "Strategy"\
    : compute/set needed actions for the solution so it achieves its goal
"Strategy" -> "Strategy" : compute/set efficacy indicators for the solution
"Strategy" -> "Strategy" : compute/set the solution global efficacy

"Decision Engine" <-- "Strategy"\
    : solution (unordered actions, efficacy indicators and global efficacy)
deactivate "Strategy"

create "Planner"
"Decision Engine" -> "Planner" : load actions scheduler
"Planner" --> "Decision Engine" : planner plugin

"Decision Engine" -> "Planner" : schedule actions
activate "Planner"
"Planner" -> "Planner"\
    : schedule actions according to scheduling rules/policies
"Decision Engine" <-- "Planner" : new action plan
deactivate "Planner"

"Decision Engine" -> "Database" : save new action plan in database
"Decision Engine" -> "Database" : update audit.state = SUCCEEDED
"AMQP Bus" <[#blue]- "Decision Engine" : notify new audit state = SUCCEEDED

deactivate "Decision Engine"
hnote over "Decision Engine" : Idle

@enduml


@@ -75,6 +75,7 @@ Plugins
   dev/plugin/base-setup
   dev/plugin/goal-plugin
   dev/plugin/strategy-plugin
   dev/plugin/cdmc-plugin
   dev/plugin/action-plugin
   dev/plugin/planner-plugin
   dev/plugins


@@ -31,6 +31,10 @@
    "goal:get": "rule:default",
    "goal:get_all": "rule:default",

    "scoring_engine:detail": "rule:default",
    "scoring_engine:get": "rule:default",
    "scoring_engine:get_all": "rule:default",

    "strategy:detail": "rule:default",
    "strategy:get": "rule:default",
    "strategy:get_all": "rule:default"


@@ -0,0 +1,8 @@
---
features:
- Added a standard way to both declare and fetch
configuration options so that whenever the
administrator generates the Watcher
configuration sample file, it contains the
configuration options of the plugins that are
currently available.


@@ -0,0 +1,7 @@
---
features:
- Added a generic scoring engine module, which
will standardize interactions with scoring engines
through the common API. It is possible to use the
scoring engine from different Strategies, which
improves code and data model re-use.


@@ -0,0 +1,6 @@
---
features:
- Added an in-memory cache of the cluster model
built up and kept fresh via notifications from
services of interest in addition to periodic
syncing logic.


@@ -0,0 +1,4 @@
---
features:
- Added a way to add a new action without having to
amend the source code of the default planner.


@@ -0,0 +1,4 @@
---
features:
- Added a way to create periodic audits to be able to
continuously optimize the cloud infrastructure.


@@ -0,0 +1,4 @@
---
features:
- Added a way to compare the efficacy of different
strategies for a given optimization goal.


@@ -0,0 +1,5 @@
---
features:
- Added a way to return the list of available goals
depending on which strategies have been deployed on
the node where the decision engine is running.


@@ -0,0 +1,5 @@
---
features:
- Allow the decision engine to pass strategy parameters,
like an optimization threshold, to the selected strategy,
and allow the strategy to provide parameter info to the end user.


@@ -0,0 +1,6 @@
---
features:
- Copy all audit template parameters into the
audit instead of having a reference to the
audit template.


@@ -0,0 +1,7 @@
---
features:
- Added a strategy that monitors if there is a higher
load on some hosts compared to other hosts in the
cluster and re-balances the work across hosts to
minimize the standard deviation of the loads in
the cluster.


@@ -0,0 +1,5 @@
---
features:
- Added a new strategy based on the airflow
of servers. This strategy makes decisions
to migrate VMs to make the airflow uniform.


@@ -0,0 +1,4 @@
---
features:
- Added policies to handle user rights
to access Watcher API.


@@ -0,0 +1,7 @@
---
features:
- Added a strategy based on the VM workloads of
hypervisors. This strategy makes decisions to
migrate workloads to make the total VM workloads
of each hypervisor balanced, when the total VM
workload of a hypervisor reaches a threshold.


@@ -5,35 +5,36 @@
apscheduler # MIT License
enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.1 # BSD
keystoneauth1>=2.10.0 # Apache-2.0
keystonemiddleware!=4.1.0,!=4.5.0,>=4.0.0 # Apache-2.0
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.cache>=1.5.0 # Apache-2.0
oslo.config>=3.14.0 # Apache-2.0
oslo.context>=2.9.0 # Apache-2.0
oslo.db>=4.10.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.log>=1.14.0 # Apache-2.0
oslo.messaging>=5.2.0 # Apache-2.0
oslo.policy>=1.9.0 # Apache-2.0
oslo.reports>=0.6.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0
oslo.utils>=3.16.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr>=1.6 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,>=1.0.0 # BSD
PrettyTable<0.8,>=0.7 # BSD
voluptuous>=0.8.9 # BSD License
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient!=1.7.0,!=1.7.1,>=1.6.0 # Apache-2.0
python-glanceclient!=2.4.0,>=2.3.0 # Apache-2.0
python-keystoneclient!=2.1.0,>=2.0.0 # Apache-2.0
python-neutronclient>=5.1.0 # Apache-2.0
python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
python-openstackclient>=2.1.0 # Apache-2.0
six>=1.9.0 # MIT
SQLAlchemy<1.1.0,>=1.0.10 # MIT
stevedore>=1.16.0 # Apache-2.0
taskflow>=1.26.0 # Apache-2.0
WebOb>=1.2.3 # MIT
WSME>=0.8 # MIT


@@ -5,7 +5,7 @@ description-file =
    README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://docs.openstack.org/developer/watcher/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
@@ -17,6 +17,7 @@ classifier =
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.4
    Programming Language :: Python :: 3.5

[files]
packages =
@@ -38,6 +39,7 @@ console_scripts =
    watcher-db-manage = watcher.cmd.dbmanage:main
    watcher-decision-engine = watcher.cmd.decisionengine:main
    watcher-applier = watcher.cmd.applier:main
    watcher-sync = watcher.cmd.sync:main

tempest.test_plugins =
    watcher_tests = watcher_tempest_plugin.plugin:WatcherTempestPlugin
@@ -53,8 +55,15 @@ watcher_goals =
    workload_balancing = watcher.decision_engine.goal.goals:WorkloadBalancing
    airflow_optimization = watcher.decision_engine.goal.goals:AirflowOptimization

watcher_scoring_engines =
    dummy_scorer = watcher.decision_engine.scoring.dummy_scorer:DummyScorer

watcher_scoring_engine_containers =
    dummy_scoring_container = watcher.decision_engine.scoring.dummy_scoring_container:DummyScoringContainer

watcher_strategies =
    dummy = watcher.decision_engine.strategy.strategies.dummy_strategy:DummyStrategy
    dummy_with_scorer = watcher.decision_engine.strategy.strategies.dummy_with_scorer:DummyWithScorer
    basic = watcher.decision_engine.strategy.strategies.basic_consolidation:BasicConsolidation
    outlet_temperature = watcher.decision_engine.strategy.strategies.outlet_temp_control:OutletTempControl
    vm_workload_consolidation = watcher.decision_engine.strategy.strategies.vm_workload_consolidation:VMWorkloadConsolidation
@@ -74,6 +83,9 @@ watcher_workflow_engines =
watcher_planners = watcher_planners =
default = watcher.decision_engine.planner.default:DefaultPlanner default = watcher.decision_engine.planner.default:DefaultPlanner
watcher_cluster_data_model_collectors =
compute = watcher.decision_engine.model.collector.nova:NovaClusterDataModelCollector
[pbr] [pbr]
warnerrors = true warnerrors = true
autodoc_index_modules = true autodoc_index_modules = true
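Each namespace in the setup.cfg hunk above (`watcher_goals`, `watcher_scoring_engines`, `watcher_strategies`, …) maps plugin names to classes via setuptools entry points, which Watcher's loaders resolve by name at runtime. A minimal standalone sketch of that namespace/name lookup, using a plain dict registry and hypothetical names in place of installed entry points:

```python
# Hypothetical stand-in for an entry-point registry such as the
# "watcher_strategies" namespace declared in setup.cfg.
class DummyStrategy:
    """Placeholder plugin class."""
    name = "dummy"

REGISTRY = {
    "watcher_strategies": {"dummy": DummyStrategy},
}

def load_plugin(namespace, name):
    """Resolve the class registered under namespace/name, as an
    entry-point loader would."""
    try:
        return REGISTRY[namespace][name]
    except KeyError:
        raise LookupError("no plugin %r in namespace %r" % (name, namespace))

strategy_cls = load_plugin("watcher_strategies", "dummy")
```

With real packaging, the same lookup would go through the installed distribution's entry-point metadata rather than an in-process dict.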


@@ -3,7 +3,6 @@
 # process, which may cause wedges in the gate later.
 coverage>=3.6 # Apache-2.0
-discover # BSD
 doc8 # Apache-2.0
 freezegun # Apache-2.0
 hacking<0.11,>=0.10.2
@@ -24,4 +23,4 @@ sphinxcontrib-pecanwsme>=0.8 # Apache-2.0
 reno>=1.8.0 # Apache2
 # bandit
-bandit>=1.0.1 # Apache-2.0
+bandit>=1.1.0 # Apache-2.0


@@ -1,6 +1,6 @@
 [tox]
 minversion = 1.6
-envlist = py34,py27,pep8
+envlist = py35,py34,py27,pep8
 skipsdist = True

 [testenv]
@@ -54,6 +54,7 @@ commands = python setup.py bdist_wheel
 [hacking]
 import_exceptions = watcher._i18n
+local-check-factory = watcher.hacking.checks.factory

 [doc8]
 extension=.rst


@@ -34,6 +34,7 @@ from watcher.api.controllers.v1 import action_plan
 from watcher.api.controllers.v1 import audit
 from watcher.api.controllers.v1 import audit_template
 from watcher.api.controllers.v1 import goal
+from watcher.api.controllers.v1 import scoring_engine
 from watcher.api.controllers.v1 import strategy
@@ -101,6 +102,9 @@ class V1(APIBase):
     action_plans = [link.Link]
     """Links to the action plans resource"""

+    scoring_engines = [link.Link]
+    """Links to the Scoring Engines resource"""
+
     links = [link.Link]
     """Links that point to a specific URL for this version and documentation"""
@@ -147,6 +151,14 @@ class V1(APIBase):
                              'action_plans', '',
                              bookmark=True)
             ]

+        v1.scoring_engines = [link.Link.make_link(
+            'self', pecan.request.host_url, 'scoring_engines', ''),
+            link.Link.make_link('bookmark',
+                                pecan.request.host_url,
+                                'scoring_engines', '',
+                                bookmark=True)
+            ]
+
         return v1
@@ -158,6 +170,7 @@ class Controller(rest.RestController):
     actions = action.ActionsController()
     action_plans = action_plan.ActionPlansController()
     goals = goal.GoalsController()
+    scoring_engines = scoring_engine.ScoringEngineController()
     strategies = strategy.StrategiesController()

     @wsme_pecan.wsexpose(V1)


@@ -27,7 +27,7 @@ of the OpenStack :ref:`Cluster <cluster_definition>` such as:
 - Live migration of an instance from one compute node to another compute
   node with Nova
 - Changing the power level of a compute node (ACPI level, ...)
-- Changing the current state of an hypervisor (enable or disable) with Nova
+- Changing the current state of a compute node (enable or disable) with Nova

 In most cases, an :ref:`Action <action_definition>` triggers some concrete
 commands on an existing OpenStack module (Nova, Neutron, Cinder, Ironic, etc.).
@@ -151,8 +151,6 @@ class Action(base.APIBase):
         self.fields = []
         fields = list(objects.Action.fields)
-        # audit_template_uuid is not part of objects.Audit.fields
-        # because it's an API-only attribute.
         fields.append('action_plan_uuid')
         fields.append('next_uuid')
         for field in fields:


@@ -73,6 +73,7 @@ from watcher.api.controllers.v1 import utils as api_utils
 from watcher.applier import rpcapi
 from watcher.common import exception
 from watcher.common import policy
+from watcher.common import utils
 from watcher import objects
 from watcher.objects import action_plan as ap_objects
@@ -117,6 +118,8 @@ class ActionPlan(base.APIBase):
     """

     _audit_uuid = None
+    _strategy_uuid = None
+    _strategy_name = None
     _first_action_uuid = None
     _efficacy_indicators = None
@@ -177,6 +180,43 @@ class ActionPlan(base.APIBase):
         elif value and self._efficacy_indicators != value:
             self._efficacy_indicators = value

+    def _get_strategy(self, value):
+        if value == wtypes.Unset:
+            return None
+        strategy = None
+        try:
+            if utils.is_uuid_like(value) or utils.is_int_like(value):
+                strategy = objects.Strategy.get(
+                    pecan.request.context, value)
+            else:
+                strategy = objects.Strategy.get_by_name(
+                    pecan.request.context, value)
+        except exception.StrategyNotFound:
+            pass
+        if strategy:
+            self.strategy_id = strategy.id
+        return strategy
+
+    def _get_strategy_uuid(self):
+        return self._strategy_uuid
+
+    def _set_strategy_uuid(self, value):
+        if value and self._strategy_uuid != value:
+            self._strategy_uuid = None
+            strategy = self._get_strategy(value)
+            if strategy:
+                self._strategy_uuid = strategy.uuid
+
+    def _get_strategy_name(self):
+        return self._strategy_name
+
+    def _set_strategy_name(self, value):
+        if value and self._strategy_name != value:
+            self._strategy_name = None
+            strategy = self._get_strategy(value)
+            if strategy:
+                self._strategy_name = strategy.name
+
     uuid = wtypes.wsattr(types.uuid, readonly=True)
     """Unique UUID for this action plan"""
@@ -189,6 +229,14 @@ class ActionPlan(base.APIBase):
                               mandatory=True)
     """The UUID of the audit this port belongs to"""

+    strategy_uuid = wsme.wsproperty(
+        wtypes.text, _get_strategy_uuid, _set_strategy_uuid, mandatory=False)
+    """Strategy UUID the action plan refers to"""
+
+    strategy_name = wsme.wsproperty(
+        wtypes.text, _get_strategy_name, _set_strategy_name, mandatory=False)
+    """The name of the strategy this action plan refers to"""
+
     efficacy_indicators = wsme.wsproperty(
         types.jsontype, _get_efficacy_indicators, _set_efficacy_indicators,
         mandatory=True)
@@ -219,6 +267,10 @@ class ActionPlan(base.APIBase):
             self.fields.append('efficacy_indicators')
         setattr(self, 'audit_uuid', kwargs.get('audit_id', wtypes.Unset))

+        fields.append('strategy_uuid')
+        setattr(self, 'strategy_uuid', kwargs.get('strategy_id', wtypes.Unset))
+        fields.append('strategy_name')
+        setattr(self, 'strategy_name', kwargs.get('strategy_id', wtypes.Unset))
         setattr(self, 'first_action_uuid',
                 kwargs.get('first_action_id', wtypes.Unset))
@@ -227,7 +279,8 @@ class ActionPlan(base.APIBase):
         if not expand:
             action_plan.unset_fields_except(
                 ['uuid', 'state', 'efficacy_indicators', 'global_efficacy',
-                 'updated_at', 'audit_uuid', 'first_action_uuid'])
+                 'updated_at', 'audit_uuid', 'strategy_uuid', 'strategy_name',
+                 'first_action_uuid'])

         action_plan.links = [
             link.Link.make_link(
@@ -275,8 +328,8 @@ class ActionPlanCollection(collection.Collection):
     @staticmethod
     def convert_with_links(rpc_action_plans, limit, url=None, expand=False,
                            **kwargs):
-        collection = ActionPlanCollection()
-        collection.action_plans = [ActionPlan.convert_with_links(
+        ap_collection = ActionPlanCollection()
+        ap_collection.action_plans = [ActionPlan.convert_with_links(
             p, expand) for p in rpc_action_plans]

         if 'sort_key' in kwargs:
@@ -284,13 +337,13 @@ class ActionPlanCollection(collection.Collection):
             if kwargs['sort_key'] == 'audit_uuid':
                 if 'sort_dir' in kwargs:
                     reverse = True if kwargs['sort_dir'] == 'desc' else False
-                collection.action_plans = sorted(
-                    collection.action_plans,
+                ap_collection.action_plans = sorted(
+                    ap_collection.action_plans,
                     key=lambda action_plan: action_plan.audit_uuid,
                     reverse=reverse)

-        collection.next = collection.get_next(limit, url=url, **kwargs)
-        return collection
+        ap_collection.next = ap_collection.get_next(limit, url=url, **kwargs)
+        return ap_collection

     @classmethod
     def sample(cls):
@@ -301,6 +354,7 @@ class ActionPlanCollection(collection.Collection):
 class ActionPlansController(rest.RestController):
     """REST controller for Actions."""
+
     def __init__(self):
         super(ActionPlansController, self).__init__()
@@ -314,7 +368,8 @@ class ActionPlansController(rest.RestController):
     def _get_action_plans_collection(self, marker, limit,
                                      sort_key, sort_dir, expand=False,
-                                     resource_url=None, audit_uuid=None):
+                                     resource_url=None, audit_uuid=None,
+                                     strategy=None):
         limit = api_utils.validate_limit(limit)
         api_utils.validate_sort_dir(sort_dir)
@@ -328,6 +383,12 @@ class ActionPlansController(rest.RestController):
         if audit_uuid:
             filters['audit_uuid'] = audit_uuid

+        if strategy:
+            if utils.is_uuid_like(strategy):
+                filters['strategy_uuid'] = strategy
+            else:
+                filters['strategy_name'] = strategy
+
         if sort_key == 'audit_uuid':
             sort_db_key = None
         else:
@@ -347,9 +408,9 @@ class ActionPlansController(rest.RestController):
             sort_dir=sort_dir)

     @wsme_pecan.wsexpose(ActionPlanCollection, types.uuid, int, wtypes.text,
-                         wtypes.text, types.uuid)
+                         wtypes.text, types.uuid, wtypes.text)
     def get_all(self, marker=None, limit=None,
-                sort_key='id', sort_dir='asc', audit_uuid=None):
+                sort_key='id', sort_dir='asc', audit_uuid=None, strategy=None):
         """Retrieve a list of action plans.

         :param marker: pagination marker for large data sets.
@@ -358,18 +419,20 @@ class ActionPlansController(rest.RestController):
         :param sort_dir: direction to sort. "asc" or "desc". Default: asc.
         :param audit_uuid: Optional UUID of an audit, to get only actions
                            for that audit.
+        :param strategy: strategy UUID or name to filter by
         """
         context = pecan.request.context
         policy.enforce(context, 'action_plan:get_all',
                        action='action_plan:get_all')

         return self._get_action_plans_collection(
-            marker, limit, sort_key, sort_dir, audit_uuid=audit_uuid)
+            marker, limit, sort_key, sort_dir,
+            audit_uuid=audit_uuid, strategy=strategy)

     @wsme_pecan.wsexpose(ActionPlanCollection, types.uuid, int, wtypes.text,
-                         wtypes.text, types.uuid)
+                         wtypes.text, types.uuid, wtypes.text)
     def detail(self, marker=None, limit=None,
-               sort_key='id', sort_dir='asc', audit_uuid=None):
+               sort_key='id', sort_dir='asc', audit_uuid=None, strategy=None):
         """Retrieve a list of action_plans with detail.

         :param marker: pagination marker for large data sets.
@@ -378,6 +441,7 @@ class ActionPlansController(rest.RestController):
         :param sort_dir: direction to sort. "asc" or "desc". Default: asc.
         :param audit_uuid: Optional UUID of an audit, to get only actions
                            for that audit.
+        :param strategy: strategy UUID or name to filter by
         """
         context = pecan.request.context
         policy.enforce(context, 'action_plan:detail',
@@ -391,9 +455,8 @@ class ActionPlansController(rest.RestController):
             expand = True
             resource_url = '/'.join(['action_plans', 'detail'])
         return self._get_action_plans_collection(
-            marker, limit,
-            sort_key, sort_dir, expand,
-            resource_url, audit_uuid=audit_uuid)
+            marker, limit, sort_key, sort_dir, expand,
+            resource_url, audit_uuid=audit_uuid, strategy=strategy)

     @wsme_pecan.wsexpose(ActionPlan, types.uuid)
     def get_one(self, action_plan_uuid):
@@ -491,8 +554,8 @@ class ActionPlansController(rest.RestController):
             if action_plan_to_update[field] != patch_val:
                 action_plan_to_update[field] = patch_val
-                if (field == 'state'
-                        and patch_val == objects.action_plan.State.PENDING):
+                if (field == 'state' and
+                        patch_val == objects.action_plan.State.PENDING):
                     launch_action_plan = True
         action_plan_to_update.save()
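The new `strategy` query parameter is dispatched on its shape: if it looks like a UUID it becomes a `strategy_uuid` filter, otherwise a `strategy_name` filter. A standalone sketch of that branching, using a simplified stand-in for `watcher.common.utils.is_uuid_like` (the real helper normalizes more input forms):

```python
import uuid

def is_uuid_like(value):
    # Simplified stand-in: only canonical hyphenated UUIDs match.
    try:
        return str(uuid.UUID(value)) == value
    except (TypeError, ValueError, AttributeError):
        return False

def build_strategy_filters(strategy):
    """Mirror the controller logic: UUID -> strategy_uuid filter,
    anything else -> strategy_name filter."""
    filters = {}
    if strategy:
        if is_uuid_like(strategy):
            filters['strategy_uuid'] = strategy
        else:
            filters['strategy_name'] = strategy
    return filters
```

So `GET /v1/action_plans?strategy=basic` filters by name, while passing a UUID filters by `strategy_uuid`.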


@@ -52,7 +52,11 @@ from watcher import objects
 class AuditPostType(wtypes.Base):

-    audit_template_uuid = wtypes.wsattr(types.uuid, mandatory=True)
+    audit_template_uuid = wtypes.wsattr(types.uuid, mandatory=False)
+
+    goal = wtypes.wsattr(wtypes.text, mandatory=False)
+
+    strategy = wtypes.wsattr(wtypes.text, mandatory=False)

     audit_type = wtypes.wsattr(wtypes.text, mandatory=True)
@@ -65,25 +69,56 @@ class AuditPostType(wtypes.Base):
                              default={})

     interval = wsme.wsattr(int, mandatory=False)

-    def as_audit(self):
+    host_aggregate = wsme.wsattr(wtypes.IntegerType(minimum=1),
+                                 mandatory=False)
+
+    def as_audit(self, context):
         audit_type_values = [val.value for val in objects.audit.AuditType]
         if self.audit_type not in audit_type_values:
             raise exception.AuditTypeNotFound(audit_type=self.audit_type)

         if (self.audit_type == objects.audit.AuditType.ONESHOT.value and
-                self.interval != wtypes.Unset):
+                self.interval not in (wtypes.Unset, None)):
             raise exception.AuditIntervalNotAllowed(audit_type=self.audit_type)

         if (self.audit_type == objects.audit.AuditType.CONTINUOUS.value and
-                self.interval == wtypes.Unset):
+                self.interval in (wtypes.Unset, None)):
             raise exception.AuditIntervalNotSpecified(
                 audit_type=self.audit_type)

+        # If audit_template_uuid was provided, we will provide any
+        # variables not included in the request, but not override
+        # those variables that were included.
+        if self.audit_template_uuid:
+            try:
+                audit_template = objects.AuditTemplate.get(
+                    context, self.audit_template_uuid)
+            except exception.AuditTemplateNotFound:
+                raise exception.Invalid(
+                    message=_('The audit template UUID or name specified is '
+                              'invalid'))
+            at2a = {
+                'goal': 'goal_id',
+                'strategy': 'strategy_id',
+                'host_aggregate': 'host_aggregate'
+            }
+            to_string_fields = set(['goal', 'strategy'])
+            for k in at2a:
+                if not getattr(self, k):
+                    try:
+                        at_attr = getattr(audit_template, at2a[k])
+                        if at_attr and (k in to_string_fields):
+                            at_attr = str(at_attr)
+                        setattr(self, k, at_attr)
+                    except AttributeError:
+                        pass
         return Audit(
-            audit_template_id=self.audit_template_uuid,
             audit_type=self.audit_type,
             deadline=self.deadline,
             parameters=self.parameters,
+            goal_id=self.goal,
+            host_aggregate=self.host_aggregate,
+            strategy_id=self.strategy,
             interval=self.interval)
@@ -110,45 +145,84 @@ class Audit(base.APIBase):
     This class enforces type checking and value constraints, and converts
     between the internal object model and the API representation of a audit.
     """

-    _audit_template_uuid = None
-    _audit_template_name = None
+    _goal_uuid = None
+    _goal_name = None
+    _strategy_uuid = None
+    _strategy_name = None

-    def _get_audit_template(self, value):
+    def _get_goal(self, value):
         if value == wtypes.Unset:
             return None
-        audit_template = None
+        goal = None
         try:
             if utils.is_uuid_like(value) or utils.is_int_like(value):
-                audit_template = objects.AuditTemplate.get(
+                goal = objects.Goal.get(
                     pecan.request.context, value)
             else:
-                audit_template = objects.AuditTemplate.get_by_name(
+                goal = objects.Goal.get_by_name(
                     pecan.request.context, value)
-        except exception.AuditTemplateNotFound:
+        except exception.GoalNotFound:
             pass
-        if audit_template:
-            self.audit_template_id = audit_template.id
-        return audit_template
+        if goal:
+            self.goal_id = goal.id
+        return goal

-    def _get_audit_template_uuid(self):
-        return self._audit_template_uuid
+    def _get_goal_uuid(self):
+        return self._goal_uuid

-    def _set_audit_template_uuid(self, value):
-        if value and self._audit_template_uuid != value:
-            self._audit_template_uuid = None
-            audit_template = self._get_audit_template(value)
-            if audit_template:
-                self._audit_template_uuid = audit_template.uuid
+    def _set_goal_uuid(self, value):
+        if value and self._goal_uuid != value:
+            self._goal_uuid = None
+            goal = self._get_goal(value)
+            if goal:
+                self._goal_uuid = goal.uuid

-    def _get_audit_template_name(self):
-        return self._audit_template_name
+    def _get_goal_name(self):
+        return self._goal_name

-    def _set_audit_template_name(self, value):
-        if value and self._audit_template_name != value:
-            self._audit_template_name = None
-            audit_template = self._get_audit_template(value)
-            if audit_template:
-                self._audit_template_name = audit_template.name
+    def _set_goal_name(self, value):
+        if value and self._goal_name != value:
+            self._goal_name = None
+            goal = self._get_goal(value)
+            if goal:
+                self._goal_name = goal.name
+
+    def _get_strategy(self, value):
+        if value == wtypes.Unset:
+            return None
+        strategy = None
+        try:
+            if utils.is_uuid_like(value) or utils.is_int_like(value):
+                strategy = objects.Strategy.get(
+                    pecan.request.context, value)
+            else:
+                strategy = objects.Strategy.get_by_name(
+                    pecan.request.context, value)
+        except exception.StrategyNotFound:
+            pass
+        if strategy:
+            self.strategy_id = strategy.id
+        return strategy
+
+    def _get_strategy_uuid(self):
+        return self._strategy_uuid
+
+    def _set_strategy_uuid(self, value):
+        if value and self._strategy_uuid != value:
+            self._strategy_uuid = None
+            strategy = self._get_strategy(value)
+            if strategy:
+                self._strategy_uuid = strategy.uuid
+
+    def _get_strategy_name(self):
+        return self._strategy_name
+
+    def _set_strategy_name(self, value):
+        if value and self._strategy_name != value:
+            self._strategy_name = None
+            strategy = self._get_strategy(value)
+            if strategy:
+                self._strategy_name = strategy.name

     uuid = types.uuid
     """Unique UUID for this audit"""
@@ -162,17 +236,21 @@ class Audit(base.APIBase):
     state = wtypes.text
     """This audit state"""

-    audit_template_uuid = wsme.wsproperty(wtypes.text,
-                                          _get_audit_template_uuid,
-                                          _set_audit_template_uuid,
-                                          mandatory=True)
-    """The UUID of the audit template this audit refers to"""
+    goal_uuid = wsme.wsproperty(
+        wtypes.text, _get_goal_uuid, _set_goal_uuid, mandatory=True)
+    """Goal UUID the audit template refers to"""

-    audit_template_name = wsme.wsproperty(wtypes.text,
-                                          _get_audit_template_name,
-                                          _set_audit_template_name,
-                                          mandatory=False)
-    """The name of the audit template this audit refers to"""
+    goal_name = wsme.wsproperty(
+        wtypes.text, _get_goal_name, _set_goal_name, mandatory=False)
+    """The name of the goal this audit template refers to"""
+
+    strategy_uuid = wsme.wsproperty(
+        wtypes.text, _get_strategy_uuid, _set_strategy_uuid, mandatory=False)
+    """Strategy UUID the audit template refers to"""
+
+    strategy_name = wsme.wsproperty(
+        wtypes.text, _get_strategy_name, _set_strategy_name, mandatory=False)
+    """The name of the strategy this audit template refers to"""

     parameters = {wtypes.text: types.jsontype}
     """The strategy parameters for this audit"""
@@ -183,10 +261,12 @@ class Audit(base.APIBase):
     interval = wsme.wsattr(int, mandatory=False)
     """Launch audit periodically (in seconds)"""

+    host_aggregate = wtypes.IntegerType(minimum=1)
+    """ID of the Nova host aggregate targeted by the audit template"""
+
     def __init__(self, **kwargs):
         self.fields = []
         fields = list(objects.Audit.fields)
         for k in fields:
             # Skip fields we do not expose.
             if not hasattr(self, k):
@@ -194,27 +274,28 @@ class Audit(base.APIBase):
             self.fields.append(k)
             setattr(self, k, kwargs.get(k, wtypes.Unset))

-        self.fields.append('audit_template_id')
+        self.fields.append('goal_id')
+        self.fields.append('strategy_id')

-        fields.append('audit_template_uuid')
-        setattr(self, 'audit_template_uuid', kwargs.get('audit_template_id',
-                                                        wtypes.Unset))
-        fields.append('audit_template_name')
-        setattr(self, 'audit_template_name', kwargs.get('audit_template_id',
-                                                        wtypes.Unset))
+        fields.append('goal_uuid')
+        setattr(self, 'goal_uuid', kwargs.get('goal_id',
+                                              wtypes.Unset))
+        fields.append('goal_name')
+        setattr(self, 'goal_name', kwargs.get('goal_id',
+                                              wtypes.Unset))
+        fields.append('strategy_uuid')
+        setattr(self, 'strategy_uuid', kwargs.get('strategy_id',
+                                                  wtypes.Unset))
+        fields.append('strategy_name')
+        setattr(self, 'strategy_name', kwargs.get('strategy_id',
+                                                  wtypes.Unset))

     @staticmethod
     def _convert_with_links(audit, url, expand=True):
         if not expand:
             audit.unset_fields_except(['uuid', 'audit_type', 'deadline',
-                                       'state', 'audit_template_uuid',
-                                       'audit_template_name', 'interval'])
+                                       'state', 'goal_uuid', 'interval',
+                                       'strategy_uuid', 'host_aggregate',
+                                       'goal_name', 'strategy_name'])

-        # The numeric ID should not be exposed to
-        # the user, it's internal only.
-        audit.audit_template_id = wtypes.Unset

         audit.links = [link.Link.make_link('self', url,
                                            'audits', audit.uuid),
@@ -240,7 +321,10 @@ class Audit(base.APIBase):
             deleted_at=None,
             updated_at=datetime.datetime.utcnow(),
             interval=7200)
-        sample._audit_template_uuid = '7ae81bb3-dec3-4289-8d6c-da80bd8001ae'
+        sample.goal_id = '7ae81bb3-dec3-4289-8d6c-da80bd8001ae'
+        sample.strategy_id = '7ae81bb3-dec3-4289-8d6c-da80bd8001ff'
+        sample.host_aggregate = 1
         return cls._convert_with_links(sample, 'http://localhost:9322', expand)
@@ -263,12 +347,12 @@ class AuditCollection(collection.Collection):
         if 'sort_key' in kwargs:
             reverse = False
-            if kwargs['sort_key'] == 'audit_template_uuid':
+            if kwargs['sort_key'] == 'goal_uuid':
                 if 'sort_dir' in kwargs:
                     reverse = True if kwargs['sort_dir'] == 'desc' else False
                 collection.audits = sorted(
                     collection.audits,
-                    key=lambda audit: audit.audit_template_uuid,
+                    key=lambda audit: audit.goal_uuid,
                     reverse=reverse)

         collection.next = collection.get_next(limit, url=url, **kwargs)
@@ -296,24 +380,34 @@ class AuditsController(rest.RestController):
     def _get_audits_collection(self, marker, limit,
                                sort_key, sort_dir, expand=False,
-                               resource_url=None, audit_template=None):
+                               resource_url=None, goal=None,
+                               strategy=None, host_aggregate=None):
         limit = api_utils.validate_limit(limit)
         api_utils.validate_sort_dir(sort_dir)

         marker_obj = None
         if marker:
             marker_obj = objects.Audit.get_by_uuid(pecan.request.context,
                                                    marker)

         filters = {}
-        if audit_template:
-            if utils.is_uuid_like(audit_template):
-                filters['audit_template_uuid'] = audit_template
+        if goal:
+            if utils.is_uuid_like(goal):
+                filters['goal_uuid'] = goal
             else:
-                filters['audit_template_name'] = audit_template
+                # TODO(michaelgugino): add method to get goal by name.
+                filters['goal_name'] = goal

-        if sort_key == 'audit_template_uuid':
-            sort_db_key = None
+        if strategy:
+            if utils.is_uuid_like(strategy):
+                filters['strategy_uuid'] = strategy
+            else:
+                # TODO(michaelgugino): add method to get goal by name.
+                filters['strategy_name'] = strategy
+
+        if sort_key == 'goal_uuid':
+            sort_db_key = 'goal_id'
+        elif sort_key == 'strategy_uuid':
+            sort_db_key = 'strategy_id'
         else:
             sort_db_key = sort_key
@@ -328,33 +422,39 @@ class AuditsController(rest.RestController):
             sort_key=sort_key,
             sort_dir=sort_dir)

-    @wsme_pecan.wsexpose(AuditCollection, wtypes.text, types.uuid, int,
-                         wtypes.text, wtypes.text)
-    def get_all(self, audit_template=None, marker=None, limit=None,
-                sort_key='id', sort_dir='asc'):
+    @wsme_pecan.wsexpose(AuditCollection, types.uuid, int, wtypes.text,
+                         wtypes.text, wtypes.text, wtypes.text, int)
+    def get_all(self, marker=None, limit=None,
+                sort_key='id', sort_dir='asc', goal=None,
+                strategy=None, host_aggregate=None):
         """Retrieve a list of audits.

-        :param audit_template: Optional UUID or name of an audit
-                               template, to get only audits for that
-                               audit template.
         :param marker: pagination marker for large data sets.
         :param limit: maximum number of resources to return in a single result.
         :param sort_key: column to sort results by. Default: id.
         :param sort_dir: direction to sort. "asc" or "desc". Default: asc.
+        :param goal: goal UUID or name to filter by
+        :param strategy: strategy UUID or name to filter by
+        :param host_aggregate: Optional host_aggregate
        """
         context = pecan.request.context
         policy.enforce(context, 'audit:get_all',
                        action='audit:get_all')

         return self._get_audits_collection(marker, limit, sort_key,
-                                           sort_dir,
-                                           audit_template=audit_template)
+                                           sort_dir, goal=goal,
+                                           strategy=strategy,
+                                           host_aggregate=host_aggregate)

     @wsme_pecan.wsexpose(AuditCollection, wtypes.text, types.uuid, int,
                          wtypes.text, wtypes.text)
-    def detail(self, audit_template=None, marker=None, limit=None,
+    def detail(self, goal=None, marker=None, limit=None,
                sort_key='id', sort_dir='asc'):
         """Retrieve a list of audits with detail.

-        :param audit_template: Optional UUID or name of an audit
+        :param goal: goal UUID or name to filter by
         :param marker: pagination marker for large data sets.
         :param limit: maximum number of resources to return in a single result.
         :param sort_key: column to sort results by. Default: id.
@@ -373,7 +473,7 @@ class AuditsController(rest.RestController):
         return self._get_audits_collection(marker, limit,
                                            sort_key, sort_dir, expand,
                                            resource_url,
-                                           audit_template=audit_template)
+                                           goal=goal)

     @wsme_pecan.wsexpose(Audit, types.uuid)
     def get_one(self, audit_uuid):
@@ -399,28 +499,27 @@ class AuditsController(rest.RestController):
         context = pecan.request.context
         policy.enforce(context, 'audit:create',
                        action='audit:create')
+        audit = audit_p.as_audit(context)

-        audit = audit_p.as_audit()
         if self.from_audits:
             raise exception.OperationNotPermitted

-        if not audit._audit_template_uuid:
+        if not audit._goal_uuid:
             raise exception.Invalid(
-                message=_('The audit template UUID or name specified is '
-                          'invalid'))
+                message=_('A valid goal_id or audit_template_id '
+                          'must be provided'))

-        audit_template = objects.AuditTemplate.get(pecan.request.context,
-                                                   audit._audit_template_uuid)
-        strategy_id = audit_template.strategy_id
+        strategy_uuid = audit.strategy_uuid
         no_schema = True
-        if strategy_id is not None:
+        if strategy_uuid is not None:
             # validate parameter when predefined strategy in audit template
-            strategy = objects.Strategy.get(pecan.request.context, strategy_id)
+            strategy = objects.Strategy.get(pecan.request.context,
+                                            strategy_uuid)
             schema = strategy.parameters_spec
             if schema:
                 # validate input parameter with default value feedback
                 no_schema = False
-                utils.DefaultValidatingDraft4Validator(schema).validate(
+                utils.StrictDefaultValidatingDraft4Validator(schema).validate(
                     audit.parameters)

         if no_schema and audit.parameters:
@@ -429,7 +528,7 @@ class AuditsController(rest.RestController):
                 'parameter spec in predefined strategy'))

         audit_dict = audit.as_dict()
-        context = pecan.request.context
         new_audit = objects.Audit(context, **audit_dict)
         new_audit.create(context)
@@ -463,6 +562,7 @@ class AuditsController(rest.RestController):
         audit_to_update = objects.Audit.get_by_uuid(pecan.request.context,
                                                     audit_uuid)
         try:
             audit_dict = audit_to_update.as_dict()
             audit = Audit(**api_utils.apply_jsonpatch(audit_dict, patch))


@@ -35,7 +35,7 @@ class Collection(base.APIBase):
 """Return whether collection has more items."""
 return len(self.collection) and len(self.collection) == limit
-def get_next(self, limit, url=None, **kwargs):
+def get_next(self, limit, url=None, marker_field="uuid", **kwargs):
 """Return a link to the next subset of the collection."""
 if not self.has_next(limit):
 return wtypes.Unset
@@ -44,7 +44,7 @@ class Collection(base.APIBase):
 q_args = ''.join(['%s=%s&' % (key, kwargs[key]) for key in kwargs])
 next_args = '?%(args)slimit=%(limit)d&marker=%(marker)s' % {
 'args': q_args, 'limit': limit,
-'marker': getattr(self.collection[-1], "uuid")}
+'marker': getattr(self.collection[-1], marker_field)}
 return link.Link.make_link('next', pecan.request.host_url,
 resource_url, next_args).href
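The query-string logic of the new `get_next` can be sketched in isolation. This is a minimal stand-in, assuming a list of plain dicts in place of the API objects (the real code uses `getattr` on wsme objects and builds a full `Link`):

```python
def next_args(items, limit, marker_field="uuid", **kwargs):
    # Mirrors Collection.get_next: extra filters first, then the limit and
    # a marker taken from the last element of the current page. The new
    # marker_field argument lets resources without a uuid (e.g. scoring
    # engines listed by name) paginate too.
    q_args = ''.join('%s=%s&' % (key, kwargs[key]) for key in kwargs)
    return '?%(args)slimit=%(limit)d&marker=%(marker)s' % {
        'args': q_args, 'limit': limit,
        'marker': items[-1][marker_field]}

print(next_args([{'uuid': 'a1'}, {'uuid': 'b2'}], 2))  # ?limit=2&marker=b2
```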


@@ -32,8 +32,6 @@ Here are some examples of :ref:`Goals <goal_definition>`:
 modification, ...
 """
-from oslo_config import cfg
 import pecan
 from pecan import rest
 import wsme
@@ -49,8 +47,6 @@ from watcher.common import exception
 from watcher.common import policy
 from watcher import objects
-CONF = cfg.CONF
 class Goal(base.APIBase):
 """API representation of a goal.


@@ -0,0 +1,246 @@
# -*- encoding: utf-8 -*-
# Copyright 2016 Intel
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A :ref:`Scoring Engine <scoring_engine_definition>` is an instance of a data
model to which learning data was applied.
Because multiple algorithms may be used to build a particular data model
(and therefore a scoring engine), the usage of a scoring engine may vary.
The metainfo field is meant to hold any information the user of a given
scoring engine might need.
"""
import pecan
from pecan import rest
import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from watcher.api.controllers import base
from watcher.api.controllers import link
from watcher.api.controllers.v1 import collection
from watcher.api.controllers.v1 import types
from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import exception
from watcher.common import policy
from watcher import objects
class ScoringEngine(base.APIBase):
"""API representation of a scoring engine.
This class enforces type checking and value constraints, and converts
between the internal object model and the API representation of a scoring
engine.
"""
uuid = types.uuid
"""Unique UUID of the scoring engine"""
name = wtypes.text
"""The name of the scoring engine"""
description = wtypes.text
"""A human readable description of the Scoring Engine"""
metainfo = wtypes.text
"""Metadata associated with the scoring engine"""
links = wsme.wsattr([link.Link], readonly=True)
"""A list containing a self link and associated action links"""
def __init__(self, **kwargs):
super(ScoringEngine, self).__init__()
self.fields = []
self.fields.append('uuid')
self.fields.append('name')
self.fields.append('description')
self.fields.append('metainfo')
setattr(self, 'uuid', kwargs.get('uuid', wtypes.Unset))
setattr(self, 'name', kwargs.get('name', wtypes.Unset))
setattr(self, 'description', kwargs.get('description', wtypes.Unset))
setattr(self, 'metainfo', kwargs.get('metainfo', wtypes.Unset))
@staticmethod
def _convert_with_links(se, url, expand=True):
if not expand:
se.unset_fields_except(
['uuid', 'name', 'description', 'metainfo'])
se.links = [link.Link.make_link('self', url,
'scoring_engines', se.uuid),
link.Link.make_link('bookmark', url,
'scoring_engines', se.uuid,
bookmark=True)]
return se
@classmethod
def convert_with_links(cls, scoring_engine, expand=True):
scoring_engine = ScoringEngine(**scoring_engine.as_dict())
return cls._convert_with_links(
scoring_engine, pecan.request.host_url, expand)
@classmethod
def sample(cls, expand=True):
sample = cls(uuid='81bbd3c7-3b08-4d12-a268-99354dbf7b71',
name='sample-se-123',
description='Sample Scoring Engine 123 just for testing')
return cls._convert_with_links(sample, 'http://localhost:9322', expand)
class ScoringEngineCollection(collection.Collection):
"""API representation of a collection of scoring engines."""
scoring_engines = [ScoringEngine]
"""A list containing scoring engine objects"""
def __init__(self, **kwargs):
super(ScoringEngineCollection, self).__init__()
self._type = 'scoring_engines'
@staticmethod
def convert_with_links(scoring_engines, limit, url=None, expand=False,
**kwargs):
collection = ScoringEngineCollection()
collection.scoring_engines = [ScoringEngine.convert_with_links(
se, expand) for se in scoring_engines]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'name':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
collection.scoring_engines = sorted(
collection.scoring_engines,
key=lambda se: se.name,
reverse=reverse)
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
@classmethod
def sample(cls):
sample = cls()
sample.scoring_engines = [ScoringEngine.sample(expand=False)]
return sample
class ScoringEngineController(rest.RestController):
"""REST controller for Scoring Engines."""
def __init__(self):
super(ScoringEngineController, self).__init__()
from_scoring_engines = False
"""A flag to indicate if the requests to this controller are coming
from the top-level resource Scoring Engines."""
_custom_actions = {
'detail': ['GET'],
}
def _get_scoring_engines_collection(self, marker, limit,
sort_key, sort_dir, expand=False,
resource_url=None):
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
marker_obj = None
if marker:
marker_obj = objects.ScoringEngine.get_by_uuid(
pecan.request.context, marker)
filters = {}
sort_db_key = sort_key
scoring_engines = objects.ScoringEngine.list(
context=pecan.request.context,
limit=limit,
marker=marker_obj,
sort_key=sort_db_key,
sort_dir=sort_dir,
filters=filters)
return ScoringEngineCollection.convert_with_links(
scoring_engines,
limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
@wsme_pecan.wsexpose(ScoringEngineCollection, wtypes.text,
int, wtypes.text, wtypes.text)
def get_all(self, marker=None, limit=None, sort_key='id',
sort_dir='asc'):
"""Retrieve a list of Scoring Engines.
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: name.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
"""
context = pecan.request.context
policy.enforce(context, 'scoring_engine:get_all',
action='scoring_engine:get_all')
return self._get_scoring_engines_collection(
marker, limit, sort_key, sort_dir)
@wsme_pecan.wsexpose(ScoringEngineCollection, wtypes.text,
int, wtypes.text, wtypes.text)
def detail(self, marker=None, limit=None, sort_key='id', sort_dir='asc'):
"""Retrieve a list of Scoring Engines with detail.
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: name.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
"""
context = pecan.request.context
policy.enforce(context, 'scoring_engine:detail',
action='scoring_engine:detail')
parent = pecan.request.path.split('/')[:-1][-1]
if parent != "scoring_engines":
raise exception.HTTPNotFound
expand = True
resource_url = '/'.join(['scoring_engines', 'detail'])
return self._get_scoring_engines_collection(
marker, limit, sort_key, sort_dir, expand, resource_url)
@wsme_pecan.wsexpose(ScoringEngine, wtypes.text)
def get_one(self, scoring_engine):
"""Retrieve information about the given Scoring Engine.
:param scoring_engine: The name of the Scoring Engine.
"""
context = pecan.request.context
policy.enforce(context, 'scoring_engine:get',
action='scoring_engine:get')
if self.from_scoring_engines:
raise exception.OperationNotPermitted
rpc_scoring_engine = api_utils.get_resource(
'ScoringEngine', scoring_engine)
return ScoringEngine.convert_with_links(rpc_scoring_engine)
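The parent-resource check in `detail()` above hinges on one slightly cryptic expression; a standalone sketch of what it computes:

```python
def parent_resource(path):
    # Equivalent of pecan.request.path.split('/')[:-1][-1]:
    # drop the last path segment, then take the new last one.
    # For /v1/scoring_engines/detail this yields 'scoring_engines',
    # which is what the controller compares against.
    return path.split('/')[:-1][-1]

print(parent_resource('/v1/scoring_engines/detail'))  # scoring_engines
```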


@@ -27,8 +27,6 @@ Some strategies may provide better optimization results but may take more time
 to find an optimal :ref:`Solution <solution_definition>`.
 """
-from oslo_config import cfg
 import pecan
 from pecan import rest
 import wsme
@@ -45,8 +43,6 @@ from watcher.common import policy
 from watcher.common import utils as common_utils
 from watcher import objects
-CONF = cfg.CONF
 class Strategy(base.APIBase):
 """API representation of a strategy.


@@ -15,7 +15,7 @@
 # License for the specific language governing permissions and limitations
 # under the License.
-import json
+from oslo_serialization import jsonutils
 from oslo_utils import strutils
 import six
 import wsme
@@ -118,7 +118,7 @@ class JsonType(wtypes.UserType):
 @staticmethod
 def validate(value):
 try:
-json.dumps(value)
+jsonutils.dumps(value, default=None)
 except TypeError:
 raise exception.Invalid(_('%s is not JSON serializable') % value)
 else:
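Passing `default=None` matters here: `jsonutils.dumps` normally falls back to a permissive serializer, which would make this check never raise. With the fallback disabled it behaves like the stdlib encoder, which is what `JsonType.validate` relies on. A stdlib-only sketch of the same check:

```python
import json

def is_json_serializable(value):
    # json.dumps with no default= fallback raises TypeError for values
    # it cannot encode, mirroring jsonutils.dumps(value, default=None).
    try:
        json.dumps(value)
    except TypeError:
        return False
    return True

print(is_json_serializable({'a': 1}))  # True
print(is_json_serializable(object()))  # False
```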


@@ -20,10 +20,10 @@ response with one formatted so the client can parse it.
 Based on pecan.middleware.errordocument
 """
-import json
 from xml import etree as et
 from oslo_log import log
+from oslo_serialization import jsonutils
 import six
 import webob
@@ -84,7 +84,8 @@ class ParsableErrorMiddleware(object):
 else:
 if six.PY3:
 app_iter = [i.decode('utf-8') for i in app_iter]
-body = [json.dumps({'error_message': '\n'.join(app_iter)})]
+body = [jsonutils.dumps(
+{'error_message': '\n'.join(app_iter)})]
 if six.PY3:
 body = [item.encode('utf-8') for item in body]
 state['headers'].append(('Content-Type', 'application/json'))
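The Python 3 branch above decodes the response chunks, joins them into one error document, and re-encodes the result. A self-contained sketch, using stdlib `json` where the patch uses `jsonutils`:

```python
import json

def wrap_error_body(app_iter):
    # Decode each byte chunk, join into one message, wrap as a JSON
    # error document, and re-encode for the WSGI response body.
    chunks = [i.decode('utf-8') for i in app_iter]
    body = [json.dumps({'error_message': '\n'.join(chunks)})]
    return [item.encode('utf-8') for item in body]

print(wrap_error_body([b'boom']))  # [b'{"error_message": "boom"}']
```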


@@ -28,11 +28,11 @@ LOG = log.getLogger(__name__)
 class DefaultActionPlanHandler(base.BaseActionPlanHandler):
-def __init__(self, context, applier_manager, action_plan_uuid):
+def __init__(self, context, service, action_plan_uuid):
 super(DefaultActionPlanHandler, self).__init__()
 self.ctx = context
+self.service = service
 self.action_plan_uuid = action_plan_uuid
-self.applier_manager = applier_manager
 def notify(self, uuid, event_type, state):
 action_plan = ap_objects.ActionPlan.get_by_uuid(self.ctx, uuid)
@@ -43,8 +43,7 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
 ev.data = {}
 payload = {'action_plan__uuid': uuid,
 'action_plan_state': state}
-self.applier_manager.status_topic_handler.publish_event(
-ev.type.name, payload)
+self.service.publish_status_event(ev.type.name, payload)
 def execute(self):
 try:
@@ -52,10 +51,9 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
 self.notify(self.action_plan_uuid,
 event_types.EventTypes.LAUNCH_ACTION_PLAN,
 ap_objects.State.ONGOING)
-applier = default.DefaultApplier(self.ctx, self.applier_manager)
+applier = default.DefaultApplier(self.ctx, self.service)
 applier.execute(self.action_plan_uuid)
 state = ap_objects.State.SUCCEEDED
 except Exception as e:
 LOG.exception(e)
 state = ap_objects.State.FAILED


@@ -15,8 +15,6 @@
 # implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-#
 import abc
@@ -28,7 +26,7 @@ from watcher.common.loader import loadable
 @six.add_metaclass(abc.ABCMeta)
 class BaseAction(loadable.Loadable):
-# NOTE(jed) by convention we decided
+# NOTE(jed): by convention we decided
 # that the attribute "resource_id" is the unique id of
 # the resource to which the Action applies to allow us to use it in the
 # watcher dashboard and will be nested in input_parameters
@@ -99,7 +97,7 @@ class BaseAction(loadable.Loadable):
 raise NotImplementedError()
 @abc.abstractmethod
-def precondition(self):
+def pre_condition(self):
 """Hook: called before the execution of an action
 This method can be used to perform some initializations or to make
@@ -110,7 +108,7 @@ class BaseAction(loadable.Loadable):
 raise NotImplementedError()
 @abc.abstractmethod
-def postcondition(self):
+def post_condition(self):
 """Hook: called after the execution of an action
 This function is called regardless of whether an action succeded or
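The renamed hooks give every action a uniform lifecycle (pre_condition, execute, post_condition, with revert on failure). A minimal sketch of that contract, using plain `abc` instead of Watcher's `loadable.Loadable` base:

```python
import abc

class BaseAction(abc.ABC):
    @abc.abstractmethod
    def pre_condition(self):
        """Called before execute(); validate prerequisites here."""

    @abc.abstractmethod
    def execute(self):
        """Perform the action; return True on success."""

    @abc.abstractmethod
    def post_condition(self):
        """Called after execute(), whether it succeeded or not."""

class Nop(BaseAction):
    # Smallest possible concrete action: every hook is a no-op.
    def pre_condition(self):
        pass

    def execute(self):
        return True

    def post_condition(self):
        pass

print(Nop().execute())  # True
```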


@@ -16,6 +16,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
 import six
 import voluptuous
@@ -23,7 +24,7 @@ from watcher._i18n import _
 from watcher.applier.actions import base
 from watcher.common import exception
 from watcher.common import nova_helper
-from watcher.decision_engine.model import hypervisor_state as hstate
+from watcher.decision_engine.model import element
 class ChangeNovaServiceState(base.BaseAction):
@@ -57,7 +58,7 @@ class ChangeNovaServiceState(base.BaseAction):
 voluptuous.Length(min=1)),
 voluptuous.Required(self.STATE):
 voluptuous.Any(*[state.value
-for state in list(hstate.HypervisorState)]),
+for state in list(element.ServiceState)]),
 })
 @property
@@ -70,17 +71,17 @@ class ChangeNovaServiceState(base.BaseAction):
 def execute(self):
 target_state = None
-if self.state == hstate.HypervisorState.DISABLED.value:
+if self.state == element.ServiceState.DISABLED.value:
 target_state = False
-elif self.state == hstate.HypervisorState.ENABLED.value:
+elif self.state == element.ServiceState.ENABLED.value:
 target_state = True
 return self._nova_manage_service(target_state)
 def revert(self):
 target_state = None
-if self.state == hstate.HypervisorState.DISABLED.value:
+if self.state == element.ServiceState.DISABLED.value:
 target_state = True
-elif self.state == hstate.HypervisorState.ENABLED.value:
+elif self.state == element.ServiceState.ENABLED.value:
 target_state = False
 return self._nova_manage_service(target_state)
@@ -95,8 +96,8 @@ class ChangeNovaServiceState(base.BaseAction):
 else:
 return nova.disable_service_nova_compute(self.host)
-def precondition(self):
+def pre_condition(self):
 pass
-def postcondition(self):
+def post_condition(self):
 pass


@@ -44,12 +44,12 @@ class Migrate(base.BaseAction):
 schema = Schema({
 'resource_id': str, # should be a UUID
 'migration_type': str, # choices -> "live", "cold"
-'dst_hypervisor': str,
-'src_hypervisor': str,
+'destination_node': str,
+'source_node': str,
 })
 The `resource_id` is the UUID of the server to migrate.
-The `src_hypervisor` and `dst_hypervisor` parameters are respectively the
+The `source_node` and `destination_node` parameters are respectively the
 source and the destination compute hostname (list of available compute
 hosts is returned by this command: ``nova service-list --binary
 nova-compute``).
@@ -59,28 +59,28 @@ class Migrate(base.BaseAction):
 MIGRATION_TYPE = 'migration_type'
 LIVE_MIGRATION = 'live'
 COLD_MIGRATION = 'cold'
-DST_HYPERVISOR = 'dst_hypervisor'
-SRC_HYPERVISOR = 'src_hypervisor'
+DESTINATION_NODE = 'destination_node'
+SOURCE_NODE = 'source_node'
 def check_resource_id(self, value):
 if (value is not None and
 len(value) > 0 and not
 utils.is_uuid_like(value)):
-raise voluptuous.Invalid(_("The parameter"
-" resource_id is invalid."))
+raise voluptuous.Invalid(_("The parameter "
+"resource_id is invalid."))
 @property
 def schema(self):
 return voluptuous.Schema({
 voluptuous.Required(self.RESOURCE_ID): self.check_resource_id,
-voluptuous.Required(self.MIGRATION_TYPE,
-default=self.LIVE_MIGRATION):
-voluptuous.Any(*[self.LIVE_MIGRATION,
-self.COLD_MIGRATION]),
-voluptuous.Required(self.DST_HYPERVISOR):
+voluptuous.Required(
+self.MIGRATION_TYPE, default=self.LIVE_MIGRATION):
+voluptuous.Any(
+*[self.LIVE_MIGRATION, self.COLD_MIGRATION]),
+voluptuous.Required(self.DESTINATION_NODE):
 voluptuous.All(voluptuous.Any(*six.string_types),
 voluptuous.Length(min=1)),
-voluptuous.Required(self.SRC_HYPERVISOR):
+voluptuous.Required(self.SOURCE_NODE):
 voluptuous.All(voluptuous.Any(*six.string_types),
 voluptuous.Length(min=1)),
 })
@@ -94,12 +94,12 @@ class Migrate(base.BaseAction):
 return self.input_parameters.get(self.MIGRATION_TYPE)
 @property
-def dst_hypervisor(self):
-return self.input_parameters.get(self.DST_HYPERVISOR)
+def destination_node(self):
+return self.input_parameters.get(self.DESTINATION_NODE)
 @property
-def src_hypervisor(self):
-return self.input_parameters.get(self.SRC_HYPERVISOR)
+def source_node(self):
+return self.input_parameters.get(self.SOURCE_NODE)
 def _live_migrate_instance(self, nova, destination):
 result = None
@@ -116,11 +116,11 @@ class Migrate(base.BaseAction):
 dest_hostname=destination,
 block_migration=True)
 else:
-LOG.debug("Nova client exception occured while live migrating "
-"instance %s.Exception: %s" %
+LOG.debug("Nova client exception occurred while live "
+"migrating instance %s.Exception: %s" %
 (self.instance_uuid, e))
 except Exception:
-LOG.critical(_LC("Unexpected error occured. Migration failed for"
+LOG.critical(_LC("Unexpected error occurred. Migration failed for "
 "instance %s. Leaving instance on previous "
 "host."), self.instance_uuid)
@@ -134,7 +134,7 @@ class Migrate(base.BaseAction):
 dest_hostname=destination)
 except Exception as exc:
 LOG.exception(exc)
-LOG.critical(_LC("Unexpected error occured. Migration failed for"
+LOG.critical(_LC("Unexpected error occurred. Migration failed for "
 "instance %s. Leaving instance on previous "
 "host."), self.instance_uuid)
@@ -152,23 +152,23 @@ class Migrate(base.BaseAction):
 return self._cold_migrate_instance(nova, destination)
 else:
 raise exception.Invalid(
-message=(_('Migration of type %(migration_type)s is not '
-'supported.') %
+message=(_("Migration of type '%(migration_type)s' is not "
+"supported.") %
 {'migration_type': self.migration_type}))
 else:
 raise exception.InstanceNotFound(name=self.instance_uuid)
 def execute(self):
-return self.migrate(destination=self.dst_hypervisor)
+return self.migrate(destination=self.destination_node)
 def revert(self):
-return self.migrate(destination=self.src_hypervisor)
+return self.migrate(destination=self.source_node)
-def precondition(self):
-# todo(jed) check if the instance exist/ check if the instance is on
-# the src_hypervisor
+def pre_condition(self):
+# TODO(jed): check if the instance exists / check if the instance is on
+# the source_node
 pass
-def postcondition(self):
-# todo(jed) we can image to check extra parameters (nework reponse,ect)
+def post_condition(self):
+# TODO(jed): check extra parameters (network response, etc.)
 pass
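The `check_resource_id` validator above accepts only non-empty, UUID-like values. A stdlib approximation, with a hypothetical `is_uuid_like` standing in for `oslo_utils.utils.is_uuid_like` and `ValueError` standing in for `voluptuous.Invalid`:

```python
import uuid

def is_uuid_like(value):
    # Rough stand-in for oslo_utils' is_uuid_like: parseable as a UUID.
    try:
        uuid.UUID(value)
        return True
    except (ValueError, AttributeError, TypeError):
        return False

def check_resource_id(value):
    # Same condition as Migrate.check_resource_id: a present, non-empty,
    # non-UUID value is rejected.
    if value is not None and len(value) > 0 and not is_uuid_like(value):
        raise ValueError("The parameter resource_id is invalid.")
    return value

check_resource_id('cae81432-1631-4d4e-b29c-6f3acdcaacf8')  # accepted
```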


@@ -53,15 +53,15 @@ class Nop(base.BaseAction):
 return self.input_parameters.get(self.MESSAGE)
 def execute(self):
-LOG.debug("executing action NOP message:%s ", self.message)
+LOG.debug("Executing action NOP message: %s ", self.message)
 return True
 def revert(self):
-LOG.debug("revert action NOP")
+LOG.debug("Revert action NOP")
 return True
-def precondition(self):
+def pre_condition(self):
 pass
-def postcondition(self):
+def post_condition(self):
 pass


@@ -16,8 +16,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-import time
+import time
 from oslo_log import log
 import voluptuous
@@ -53,16 +53,16 @@ class Sleep(base.BaseAction):
 return int(self.input_parameters.get(self.DURATION))
 def execute(self):
-LOG.debug("Starting action Sleep duration:%s ", self.duration)
+LOG.debug("Starting action sleep with duration: %s ", self.duration)
 time.sleep(self.duration)
 return True
 def revert(self):
-LOG.debug("revert action Sleep")
+LOG.debug("Revert action sleep")
 return True
-def precondition(self):
+def pre_condition(self):
 pass
-def postcondition(self):
+def post_condition(self):
 pass


@@ -18,11 +18,9 @@
 #
 from oslo_config import cfg
-from oslo_log import log
 from watcher.applier.messaging import trigger
-LOG = log.getLogger(__name__)
 CONF = cfg.CONF
@@ -68,6 +66,8 @@ class ApplierManager(object):
 conductor_endpoints = [trigger.TriggerActionPlan]
 status_endpoints = []
+notification_endpoints = []
+notification_topics = []
 def __init__(self):
 self.publisher_id = CONF.watcher_applier.publisher_id


@@ -17,16 +17,13 @@
 # limitations under the License.
 #
 from oslo_config import cfg
-from oslo_log import log
 from watcher.applier import manager
 from watcher.common import exception
-from watcher.common.messaging import notification_handler as notification
 from watcher.common import service
 from watcher.common import utils
-LOG = log.getLogger(__name__)
 CONF = cfg.CONF
 CONF.register_group(manager.opt_group)
 CONF.register_opts(manager.APPLIER_MANAGER_OPTS, manager.opt_group)
@@ -51,7 +48,9 @@ class ApplierAPIManager(object):
 API_VERSION = '1.0'
 conductor_endpoints = []
-status_endpoints = [notification.NotificationHandler]
+status_endpoints = []
+notification_endpoints = []
+notification_topics = []
 def __init__(self):
 self.publisher_id = CONF.watcher_applier.publisher_id


@@ -82,8 +82,7 @@ class BaseWorkFlowEngine(loadable.Loadable):
 ev.data = {}
 payload = {'action_uuid': action.uuid,
 'action_state': state}
-self.applier_manager.status_topic_handler.publish_event(
-ev.type.name, payload)
+self.applier_manager.publish_status_event(ev.type.name, payload)
 @abc.abstractmethod
 def execute(self, actions):


@@ -109,8 +109,8 @@ class TaskFlowActionContainer(task.Task):
 try:
 self.engine.notify(self._db_action,
 obj_action.State.ONGOING)
-LOG.debug("Precondition action %s", self.name)
-self.action.precondition()
+LOG.debug("Pre-condition action: %s", self.name)
+self.action.pre_condition()
 except Exception as e:
 LOG.exception(e)
 self.engine.notify(self._db_action,
@@ -119,15 +119,15 @@ class TaskFlowActionContainer(task.Task):
 def execute(self, *args, **kwargs):
 try:
-LOG.debug("Running action %s", self.name)
+LOG.debug("Running action: %s", self.name)
 self.action.execute()
 self.engine.notify(self._db_action,
 obj_action.State.SUCCEEDED)
 except Exception as e:
 LOG.exception(e)
-LOG.error(_LE('The WorkFlow Engine has failed '
-'to execute the action %s'), self.name)
+LOG.error(_LE('The workflow engine has failed '
+'to execute the action: %s'), self.name)
 self.engine.notify(self._db_action,
 obj_action.State.FAILED)
@@ -135,8 +135,8 @@ class TaskFlowActionContainer(task.Task):
 def post_execute(self):
 try:
-LOG.debug("postcondition action %s", self.name)
-self.action.postcondition()
+LOG.debug("Post-condition action: %s", self.name)
+self.action.post_condition()
 except Exception as e:
 LOG.exception(e)
 self.engine.notify(self._db_action,
@@ -144,19 +144,19 @@ class TaskFlowActionContainer(task.Task):
 raise
 def revert(self, *args, **kwargs):
-LOG.warning(_LW("Revert action %s"), self.name)
+LOG.warning(_LW("Revert action: %s"), self.name)
 try:
-# todo(jed) do we need to update the states in case of failure ?
+# TODO(jed): do we need to update the states in case of failure?
 self.action.revert()
 except Exception as e:
 LOG.exception(e)
-LOG.critical(_LC("Oops! We need disaster recover plan"))
+LOG.critical(_LC("Oops! We need a disaster recover plan."))
 class TaskFlowNop(task.Task):
-"""This class is use in case of the workflow have only one Action.
-We need at least two atoms to create a link
+"""This class is used in case of the workflow have only one Action.
+We need at least two atoms to create a link.
 """
 def execute(self):
 pass


@@ -46,6 +46,5 @@ def main():
 LOG.info(_LI('serving on %(protocol)s://%(host)s:%(port)s') %
 dict(protocol=protocol, host=host, port=port))
-launcher = service.process_launcher()
-launcher.launch_service(server, workers=server.workers)
+launcher = service.launch(CONF, server, workers=server.workers)
 launcher.wait()


@@ -22,7 +22,6 @@ import sys
 from oslo_config import cfg
 from oslo_log import log as logging
-from oslo_service import service
 from watcher._i18n import _LI
 from watcher.applier import manager
@@ -38,5 +37,7 @@ def main():
 LOG.info(_LI('Starting Watcher Applier service in PID %s'), os.getpid())
 applier_service = watcher_service.Service(manager.ApplierManager)
-launcher = service.launch(CONF, applier_service)
+# Only 1 process
+launcher = watcher_service.launch(CONF, applier_service)
 launcher.wait()


@@ -59,7 +59,7 @@ class DBCommand(object):
     @staticmethod
     def purge():
         purge.purge(CONF.command.age_in_days, CONF.command.max_number,
-                    CONF.command.audit_template, CONF.command.exclude_orphans,
+                    CONF.command.goal, CONF.command.exclude_orphans,
                     CONF.command.dry_run)
@@ -115,8 +115,8 @@ def add_command_parsers(subparsers):
                         "Prevents the deletion if exceeded. No limit if "
                         "set to None.",
                         type=int, default=None, nargs='?')
-    parser.add_argument('-t', '--audit-template',
-                        help="UUID or name of the audit template to purge.",
+    parser.add_argument('-t', '--goal',
+                        help="UUID or name of the goal to purge.",
                         type=str, default=None, nargs='?')
     parser.add_argument('-e', '--exclude-orphans', action='store_true',
                         help="Flag to indicate whether or not you want to "


@@ -22,11 +22,11 @@ import sys
 from oslo_config import cfg
 from oslo_log import log as logging
-from oslo_service import service

 from watcher._i18n import _LI
 from watcher.common import service as watcher_service
 from watcher.decision_engine import manager
+from watcher.decision_engine import scheduling
 from watcher.decision_engine import sync

 LOG = logging.getLogger(__name__)
@@ -43,5 +43,10 @@ def main():
     syncer.sync()

     de_service = watcher_service.Service(manager.DecisionEngineManager)
-    launcher = service.launch(CONF, de_service)
+    bg_schedulder_service = scheduling.DecisionEngineSchedulingService()
+
+    # Only 1 process
+    launcher = watcher_service.launch(CONF, de_service)
+    launcher.launch_service(bg_schedulder_service)
     launcher.wait()

watcher/cmd/sync.py (new file, 39 lines)

@@ -0,0 +1,39 @@
+# -*- encoding: utf-8 -*-
+#
+# Copyright (c) 2016 Intel
+#
+# Authors: Tomasz Kaczynski <tomasz.kaczynski@intel.com>
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Script for the sync tool."""
+
+import sys
+
+from oslo_log import log as logging
+
+from watcher._i18n import _LI
+from watcher.common import service as service
+from watcher.decision_engine import sync
+
+LOG = logging.getLogger(__name__)
+
+
+def main():
+    LOG.info(_LI('Watcher sync started.'))
+
+    service.prepare_service(sys.argv)
+    syncer = sync.Syncer()
+    syncer.sync()
+
+    LOG.info(_LI('Watcher sync finished.'))


@@ -15,11 +15,15 @@
 # implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+#
+import datetime

 from ceilometerclient import exc
+from oslo_utils import timeutils

+from watcher._i18n import _
 from watcher.common import clients
+from watcher.common import exception


 class CeilometerHelper(object):
@@ -29,18 +33,20 @@ class CeilometerHelper(object):
         self.ceilometer = self.osc.ceilometer()

     def build_query(self, user_id=None, tenant_id=None, resource_id=None,
-                    user_ids=None, tenant_ids=None, resource_ids=None):
+                    user_ids=None, tenant_ids=None, resource_ids=None,
+                    start_time=None, end_time=None):
         """Returns query built from given parameters.

         This query can be then used for querying resources, meters and
         statistics.

-        :Parameters:
-          - `user_id`: user_id, has a priority over list of ids
-          - `tenant_id`: tenant_id, has a priority over list of ids
-          - `resource_id`: resource_id, has a priority over list of ids
-          - `user_ids`: list of user_ids
-          - `tenant_ids`: list of tenant_ids
-          - `resource_ids`: list of resource_ids
+        :param user_id: user_id, has a priority over list of ids
+        :param tenant_id: tenant_id, has a priority over list of ids
+        :param resource_id: resource_id, has a priority over list of ids
+        :param user_ids: list of user_ids
+        :param tenant_ids: list of tenant_ids
+        :param resource_ids: list of resource_ids
+        :param start_time: datetime from which measurements should be collected
+        :param end_time: datetime until which measurements should be collected
         """
         user_ids = user_ids or []
@@ -63,6 +69,32 @@ class CeilometerHelper(object):
         for r_id in resource_ids:
             query.append({"field": "resource_id", "op": "eq", "value": r_id})

+        start_timestamp = None
+        end_timestamp = None
+        if start_time:
+            start_timestamp = start_time
+            if isinstance(start_time, datetime.datetime):
+                start_timestamp = start_time.isoformat()
+        if end_time:
+            end_timestamp = end_time
+            if isinstance(end_time, datetime.datetime):
+                end_timestamp = end_time.isoformat()
+
+        if (start_timestamp and end_timestamp and
+                timeutils.parse_isotime(start_timestamp) >
+                timeutils.parse_isotime(end_timestamp)):
+            raise exception.Invalid(
+                _("Invalid query: %(start_time)s > %(end_time)s") % dict(
+                    start_time=start_timestamp, end_time=end_timestamp))
+
+        if start_timestamp:
+            query.append({"field": "timestamp", "op": "ge",
+                          "value": start_timestamp})
+        if end_timestamp:
+            query.append({"field": "timestamp", "op": "le",
+                          "value": end_timestamp})

         return query

     def query_retry(self, f, *args, **kargs):
@@ -112,7 +144,10 @@ class CeilometerHelper(object):
         :return:
         """
-        query = self.build_query(resource_id=resource_id)
+        start_time = (datetime.datetime.utcnow() -
+                      datetime.timedelta(seconds=int(period)))
+        query = self.build_query(
+            resource_id=resource_id, start_time=start_time)
         statistic = self.query_retry(f=self.ceilometer.statistics.list,
                                      meter_name=meter_name,
                                      q=query,
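The timestamp handling that this patch adds to build_query() can be sketched in isolation. The function name and shape below are illustrative, not Watcher's API; datetimes are converted to ISO-8601 strings and emitted as ge/le timestamp filters, with inverted ranges rejected:

```python
import datetime

def build_time_filters(start_time=None, end_time=None):
    """Sketch of the patched build_query() time-window logic."""
    def to_iso(value):
        # datetime objects become ISO-8601 strings; strings pass through.
        if isinstance(value, datetime.datetime):
            return value.isoformat()
        return value

    start, end = to_iso(start_time), to_iso(end_time)
    # Same-format ISO strings compare lexicographically, so this mirrors
    # the parse_isotime() comparison in the real helper.
    if start and end and start > end:
        raise ValueError("Invalid query: %s > %s" % (start, end))

    query = []
    if start:
        query.append({"field": "timestamp", "op": "ge", "value": start})
    if end:
        query.append({"field": "timestamp", "op": "le", "value": end})
    return query
```

statistic_aggregation() then only has to compute `utcnow() - timedelta(seconds=period)` and pass it as start_time to get a bounded query.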


@@ -11,16 +11,24 @@
 # under the License.

 from oslo_context import context
+from oslo_log import log as logging
+from oslo_utils import timeutils
+import six
+
+from watcher._i18n import _LW
+from watcher.common import utils
+
+LOG = logging.getLogger(__name__)


 class RequestContext(context.RequestContext):
     """Extends security contexts from the OpenStack common library."""

-    def __init__(self, auth_token=None, auth_url=None, domain_id=None,
-                 domain_name=None, user=None, user_id=None, project=None,
-                 project_id=None, is_admin=False, is_public_api=False,
-                 read_only=False, show_deleted=False, request_id=None,
-                 trust_id=None, auth_token_info=None, roles=None):
+    def __init__(self, user_id=None, project_id=None, is_admin=None,
+                 roles=None, timestamp=None, request_id=None, auth_token=None,
+                 auth_url=None, overwrite=True, user_name=None,
+                 project_name=None, domain_name=None, domain_id=None,
+                 auth_token_info=None, **kwargs):
         """Stores several additional request parameters:

         :param domain_id: The ID of the domain.
@@ -29,46 +37,84 @@ class RequestContext(context.RequestContext):
            without authentication.

         """
-        self.is_public_api = is_public_api
-        self.user_id = user_id
-        self.project = project
-        self.project_id = project_id
-        self.domain_id = domain_id
-        self.domain_name = domain_name
-        self.auth_url = auth_url
-        self.auth_token_info = auth_token_info
-        self.trust_id = trust_id
-
-        super(RequestContext, self).__init__(auth_token=auth_token,
-                                             user=user, tenant=project,
-                                             is_admin=is_admin,
-                                             read_only=read_only,
-                                             show_deleted=show_deleted,
-                                             request_id=request_id,
-                                             roles=roles)
+        user = kwargs.pop('user', None)
+        tenant = kwargs.pop('tenant', None)
+        super(RequestContext, self).__init__(
+            auth_token=auth_token,
+            user=user_id or user,
+            tenant=project_id or tenant,
+            domain=kwargs.pop('domain', None) or domain_name or domain_id,
+            user_domain=kwargs.pop('user_domain', None),
+            project_domain=kwargs.pop('project_domain', None),
+            is_admin=is_admin,
+            read_only=kwargs.pop('read_only', False),
+            show_deleted=kwargs.pop('show_deleted', False),
+            request_id=request_id,
+            resource_uuid=kwargs.pop('resource_uuid', None),
+            is_admin_project=kwargs.pop('is_admin_project', None),
+            overwrite=overwrite,
+            roles=roles)
+
+        self.remote_address = kwargs.pop('remote_address', None)
+        self.instance_lock_checked = kwargs.pop('instance_lock_checked', None)
+        self.read_deleted = kwargs.pop('read_deleted', None)
+        self.service_catalog = kwargs.pop('service_catalog', None)
+        self.quota_class = kwargs.pop('quota_class', None)
+
+        # oslo_context's RequestContext.to_dict() generates this field, we can
+        # safely ignore this as we don't use it.
+        kwargs.pop('user_identity', None)
+        if kwargs:
+            LOG.warning(_LW('Arguments dropped when creating context: %s'),
                        str(kwargs))
+
+        # FIXME(dims): user_id and project_id duplicate information that is
+        # already present in the oslo_context's RequestContext. We need to
+        # get rid of them.
+        self.auth_url = auth_url
+        self.domain_name = domain_name
+        self.domain_id = domain_id
+        self.auth_token_info = auth_token_info
+        self.user_id = user_id
+        self.project_id = project_id
+        if not timestamp:
+            timestamp = timeutils.utcnow()
+        if isinstance(timestamp, six.string_types):
+            timestamp = timeutils.parse_isotime(timestamp)
+        self.timestamp = timestamp
+        self.user_name = user_name
+        self.project_name = project_name
+        self.is_admin = is_admin
+        # if self.is_admin is None:
+        #     self.is_admin = policy.check_is_admin(self)

     def to_dict(self):
-        return {'auth_token': self.auth_token,
-                'auth_url': self.auth_url,
-                'domain_id': self.domain_id,
-                'domain_name': self.domain_name,
-                'user': self.user,
-                'user_id': self.user_id,
-                'project': self.project,
-                'project_id': self.project_id,
-                'is_admin': self.is_admin,
-                'is_public_api': self.is_public_api,
-                'read_only': self.read_only,
-                'show_deleted': self.show_deleted,
-                'request_id': self.request_id,
-                'trust_id': self.trust_id,
-                'auth_token_info': self.auth_token_info,
-                'roles': self.roles}
+        values = super(RequestContext, self).to_dict()
+        # FIXME(dims): defensive hasattr() checks need to be
+        # removed once we figure out why we are seeing stack
+        # traces
+        values.update({
+            'user_id': getattr(self, 'user_id', None),
+            'user_name': getattr(self, 'user_name', None),
+            'project_id': getattr(self, 'project_id', None),
+            'project_name': getattr(self, 'project_name', None),
+            'domain_id': getattr(self, 'domain_id', None),
+            'domain_name': getattr(self, 'domain_name', None),
+            'auth_token_info': getattr(self, 'auth_token_info', None),
+            'is_admin': getattr(self, 'is_admin', None),
+            'timestamp': utils.strtime(self.timestamp) if hasattr(
                self, 'timestamp') else None,
+            'request_id': getattr(self, 'request_id', None),
+        })
+        return values

     @classmethod
     def from_dict(cls, values):
         return cls(**values)

+    def __str__(self):
+        return "<Context %s>" % self.to_dict()


 def make_context(*args, **kwargs):
     return RequestContext(*args, **kwargs)
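The key contract this rewrite preserves is that from_dict(ctx.to_dict()) must keep working even though oslo_context's to_dict() emits derived fields (such as 'user_identity') that __init__ does not accept, which is why __init__ now takes **kwargs and pops what it recognizes. A minimal sketch of that round trip, with simplified field names and no oslo dependency:

```python
class Context:
    """Minimal sketch of the to_dict()/from_dict() round-trip contract."""

    def __init__(self, user_id=None, project_id=None, **kwargs):
        self.user_id = user_id
        self.project_id = project_id
        # Derived field emitted by to_dict(); safe to drop on the way in.
        kwargs.pop('user_identity', None)
        # Anything left over would be LOG.warning'd in the real class.
        self.dropped = dict(kwargs)

    def to_dict(self):
        return {'user_id': self.user_id,
                'project_id': self.project_id,
                'user_identity': '%s %s' % (self.user_id, self.project_id)}

    @classmethod
    def from_dict(cls, values):
        return cls(**values)
```

Without the **kwargs/pop handling, from_dict() would raise TypeError on the extra 'user_identity' key.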


@@ -86,15 +86,16 @@ class WatcherException(Exception):
         if not message:
             try:
                 message = self.msg_fmt % kwargs
-            except Exception as e:
+            except Exception:
                 # kwargs doesn't match a variable in msg_fmt
                 # log the issue and the kwargs
                 LOG.exception(_LE('Exception in string format operation'))
                 for name, value in kwargs.items():
-                    LOG.error("%s: %s", name, value)
+                    LOG.error(_LE("%(name)s: %(value)s"),
+                              {'name': name, 'value': value})

                 if CONF.fatal_exception_format_errors:
-                    raise e
+                    raise
                 else:
                     # at least get the core msg_fmt out if something happened
                     message = self.msg_fmt
@@ -104,12 +105,12 @@ class WatcherException(Exception):
     def __str__(self):
         """Encode to utf-8 then wsme api can consume it as well"""
         if not six.PY3:
-            return unicode(self.args[0]).encode('utf-8')
+            return six.text_type(self.args[0]).encode('utf-8')
         else:
             return self.args[0]

     def __unicode__(self):
-        return unicode(self.args[0])
+        return six.text_type(self.args[0])

     def format_message(self):
         if self.__class__.__name__.endswith('_Remote'):
@@ -213,6 +214,10 @@ class AuditTypeNotFound(Invalid):
     msg_fmt = _("Audit type %(audit_type)s could not be found")


+class AuditParameterNotAllowed(Invalid):
+    msg_fmt = _("Audit parameter %(parameter)s are not allowed")
+
+
 class AuditNotFound(ResourceNotFound):
     msg_fmt = _("Audit %(audit)s could not be found")
@@ -273,6 +278,14 @@ class EfficacyIndicatorAlreadyExists(Conflict):
     msg_fmt = _("An action with UUID %(uuid)s already exists")


+class ScoringEngineAlreadyExists(Conflict):
+    msg_fmt = _("A scoring engine with UUID %(uuid)s already exists")
+
+
+class ScoringEngineNotFound(ResourceNotFound):
+    msg_fmt = _("ScoringEngine %(scoring_engine)s could not be found")
+
+
 class HTTPNotFound(ResourceNotFound):
     pass
@@ -308,13 +321,17 @@ class KeystoneFailure(WatcherException):
 class ClusterEmpty(WatcherException):
-    msg_fmt = _("The list of hypervisor(s) in the cluster is empty")
+    msg_fmt = _("The list of compute node(s) in the cluster is empty")


 class MetricCollectorNotDefined(WatcherException):
     msg_fmt = _("The metrics resource collector is not defined")


+class ClusterDataModelCollectionError(WatcherException):
+    msg_fmt = _("The cluster data model '%(cdm)s' could not be built")
+
+
 class ClusterStateNotDefined(WatcherException):
     msg_fmt = _("The cluster state is not defined")
@@ -333,7 +350,7 @@ class GlobalEfficacyComputationError(WatcherException):
                "goal using the '%(strategy)s' strategy.")


-class NoMetricValuesForVM(WatcherException):
+class NoMetricValuesForInstance(WatcherException):
     msg_fmt = _("No values returned by %(resource_id)s for %(metric_name)s.")
@@ -344,11 +361,11 @@ class NoSuchMetricForHost(WatcherException):

 # Model
 class InstanceNotFound(WatcherException):
-    msg_fmt = _("The instance '%(name)s' is not found")
+    msg_fmt = _("The instance '%(name)s' could not be found")


-class HypervisorNotFound(WatcherException):
-    msg_fmt = _("The hypervisor is not found")
+class ComputeNodeNotFound(WatcherException):
+    msg_fmt = _("The compute node %(name)s could not be found")


 class LoadingError(WatcherException):
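The pattern being touched up here — format msg_fmt with kwargs, and fall back to the raw format string rather than crash when the kwargs don't match — can be sketched without the oslo logging and config plumbing (fatal_exception_format_errors handling omitted):

```python
class WatcherException(Exception):
    """Sketch of the msg_fmt formatting-fallback pattern."""
    msg_fmt = "An unknown exception occurred"

    def __init__(self, message=None, **kwargs):
        if not message:
            try:
                message = self.msg_fmt % kwargs
            except Exception:
                # kwargs doesn't match a variable in msg_fmt;
                # at least get the core msg_fmt out.
                message = self.msg_fmt
        super().__init__(message)


class AuditNotFound(WatcherException):
    msg_fmt = "Audit %(audit)s could not be found"
```

A bad keyword thus degrades to the unformatted template instead of raising KeyError out of the exception constructor itself.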


@@ -61,7 +61,7 @@ class DefaultLoader(base.BaseLoader):
         return driver

     def _reload_config(self):
-        self.conf()
+        self.conf(default_config_files=self.conf.default_config_files)

     def get_entry_name(self, name):
         return ".".join([self.namespace, name])


@@ -18,6 +18,8 @@ import abc
 import six

+from watcher.common import service


 @six.add_metaclass(abc.ABCMeta)
 class Loadable(object):
@@ -28,6 +30,35 @@ class Loadable(object):
     """

     def __init__(self, config):
+        super(Loadable, self).__init__()
+        self.config = config
+
+    @classmethod
+    @abc.abstractmethod
+    def get_config_opts(cls):
+        """Defines the configuration options to be associated to this loadable
+
+        :return: A list of configuration options relative to this Loadable
+        :rtype: list of :class:`oslo_config.cfg.Opt` instances
+        """
+        raise NotImplementedError
+
+
+LoadableSingletonMeta = type(
+    "LoadableSingletonMeta", (abc.ABCMeta, service.Singleton), {})
+
+
+@six.add_metaclass(LoadableSingletonMeta)
+class LoadableSingleton(object):
+    """Generic interface for dynamically loading a driver as a singleton.
+
+    This defines the contract in order to let the loader manager inject
+    the configuration parameters during the loading. Classes inheriting from
+    this class will be singletons.
+    """
+
+    def __init__(self, config):
+        super(LoadableSingleton, self).__init__()
         self.config = config

     @classmethod
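The LoadableSingletonMeta trick above builds a combined metaclass with type() so that a class can be both abstract (ABCMeta) and a singleton. A self-contained sketch — the Singleton metaclass here is an illustrative stand-in for watcher.common.service.Singleton, whose implementation is not shown in this diff:

```python
import abc


class Singleton(type):
    """Illustrative stand-in for watcher.common.service.Singleton."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # First instantiation is cached; later calls return the same object.
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(
                *args, **kwargs)
        return cls._instances[cls]


# Same trick as the patch: a metaclass that is both an ABCMeta (so
# @abc.abstractmethod keeps working) and a Singleton.
LoadableSingletonMeta = type(
    "LoadableSingletonMeta", (abc.ABCMeta, Singleton), {})


class MyDriver(metaclass=LoadableSingletonMeta):
    def __init__(self, config=None):
        self.config = config
```

ABCMeta does not override __call__, so Singleton.__call__ wins in the merged MRO and every MyDriver(...) call yields the one cached instance.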


@@ -79,6 +79,7 @@ class MessagingHandler(threading.Thread):
     def build_server(self, target):
         return om.get_rpc_server(self.__transport, target,
                                  self.__endpoints,
+                                 executor='eventlet',
                                  serializer=self.__serializer)

     def _configure(self):
@@ -109,7 +110,6 @@ class MessagingHandler(threading.Thread):
     def stop(self):
         LOG.debug('Stopped server')
-        self.__server.wait()
         self.__server.stop()

     def publish_event(self, event_type, payload, request_id=None):


@@ -40,7 +40,7 @@ class NovaHelper(object):
         self.nova = self.osc.nova()
         self.glance = self.osc.glance()

-    def get_hypervisors_list(self):
+    def get_compute_node_list(self):
         return self.nova.hypervisors.list()

     def find_instance(self, instance_id):
@@ -54,7 +54,27 @@ class NovaHelper(object):
                 break
         return instance

-    def watcher_non_live_migrate_instance(self, instance_id, hypervisor_id,
+    def wait_for_volume_status(self, volume, status, timeout=60,
+                               poll_interval=1):
+        """Wait until volume reaches given status.
+
+        :param volume: volume resource
+        :param status: expected status of volume
+        :param timeout: timeout in seconds
+        :param poll_interval: poll interval in seconds
+        """
+        start_time = time.time()
+        while time.time() - start_time < timeout:
+            volume = self.cinder.volumes.get(volume.id)
+            if volume.status == status:
+                break
+            time.sleep(poll_interval)
+        else:
+            raise Exception("Volume %s did not reach status %s after %d s"
+                            % (volume.id, status, timeout))
+        return volume.status == status
+
+    def watcher_non_live_migrate_instance(self, instance_id, node_id,
                                           keep_original_image_name=True):
         """This method migrates a given instance
@@ -73,7 +93,6 @@ class NovaHelper(object):
            used as the name of the intermediate image used for migration.
            If this flag is False, a temporary image name is built
         """
-
         new_image_name = ""
         LOG.debug(
@@ -218,7 +237,7 @@ class NovaHelper(object):
         # We create the new instance from
         # the intermediate image of the original instance
         new_instance = self. \
-            create_instance(hypervisor_id,
+            create_instance(node_id,
                             instance_name,
                             image_uuid,
                             flavor_name,
@@ -278,7 +297,6 @@ class NovaHelper(object):
         :param dest_hostname: the name of the destination compute node.
         :param block_migration: No shared storage is required.
         """
-
         LOG.debug("Trying a live migrate of instance %s to host '%s'" % (
             instance_id, dest_hostname))
@@ -319,8 +337,6 @@ class NovaHelper(object):
                 return True

-        return False
-
     def enable_service_nova_compute(self, hostname):
         if self.nova.services.enable(host=hostname,
                                      binary='nova-compute'). \
@@ -358,7 +374,7 @@ class NovaHelper(object):
         # Sets the compute host's ability to accept new instances.
         # host_maintenance_mode(self, host, mode):
         # Start/Stop host maintenance window.
-        # On start, it triggers guest VMs evacuation.
+        # On start, it triggers guest instances evacuation.

         host = self.nova.hosts.get(hostname)
         if not host:
@@ -407,6 +423,8 @@ class NovaHelper(object):
                                           metadata)
         image = self.glance.images.get(image_uuid)
+        if not image:
+            return None

         # Waiting for the new image to be officially in ACTIVE state
         # in order to make sure it can be used
@@ -417,6 +435,8 @@ class NovaHelper(object):
             retry -= 1
             # Retrieve the instance again so the status field updates
             image = self.glance.images.get(image_uuid)
+            if not image:
+                break
             status = image.status
             LOG.debug("Current image status: %s" % status)
@@ -434,7 +454,6 @@ class NovaHelper(object):
         :param instance_id: the unique id of the instance to delete.
         """
-
         LOG.debug("Trying to remove instance %s ..." % instance_id)
         instance = self.find_instance(instance_id)
@@ -452,7 +471,6 @@ class NovaHelper(object):
         :param instance_id: the unique id of the instance to stop.
         """
-
         LOG.debug("Trying to stop instance %s ..." % instance_id)
         instance = self.find_instance(instance_id)
@@ -463,32 +481,31 @@ class NovaHelper(object):
         else:
             self.nova.servers.stop(instance_id)

-        if self.wait_for_vm_state(instance, "stopped", 8, 10):
+        if self.wait_for_instance_state(instance, "stopped", 8, 10):
             LOG.debug("Instance %s stopped." % instance_id)
             return True
         else:
             return False

-    def wait_for_vm_state(self, server, vm_state, retry, sleep):
-        """Waits for server to be in a specific vm_state
+    def wait_for_instance_state(self, server, state, retry, sleep):
+        """Waits for server to be in a specific state

-        The vm_state can be one of the following :
+        The state can be one of the following :
         active, stopped

         :param server: server object.
-        :param vm_state: for which state we are waiting for
+        :param state: for which state we are waiting for
         :param retry: how many times to retry
         :param sleep: seconds to sleep between the retries
         """
         if not server:
             return False

-        while getattr(server, 'OS-EXT-STS:vm_state') != vm_state and retry:
+        while getattr(server, 'OS-EXT-STS:vm_state') != state and retry:
             time.sleep(sleep)
             server = self.nova.servers.get(server)
             retry -= 1
-        return getattr(server, 'OS-EXT-STS:vm_state') == vm_state
+        return getattr(server, 'OS-EXT-STS:vm_state') == state

     def wait_for_instance_status(self, instance, status_list, retry, sleep):
         """Waits for instance to be in a specific status
@@ -502,7 +519,6 @@ class NovaHelper(object):
         :param retry: how many times to retry
         :param sleep: seconds to sleep between the retries
         """
-
         if not instance:
             return False
@@ -514,7 +530,7 @@ class NovaHelper(object):
             LOG.debug("Current instance status: %s" % instance.status)
         return instance.status in status_list

-    def create_instance(self, hypervisor_id, inst_name="test", image_id=None,
+    def create_instance(self, node_id, inst_name="test", image_id=None,
                         flavor_name="m1.tiny",
                         sec_group_list=["default"],
                         network_names_list=["demo-net"], keypair_name="mykeys",
@@ -526,7 +542,6 @@ class NovaHelper(object):
            it with the new instance
            It returns the unique id of the created instance.
         """
-
         LOG.debug(
             "Trying to create new instance '%s' "
             "from image '%s' with flavor '%s' ..." % (
@@ -570,15 +585,14 @@ class NovaHelper(object):
             net_obj = {"net-id": nic_id}
             net_list.append(net_obj)

-        instance = self.nova.servers. \
-            create(inst_name,
-                   image, flavor=flavor,
-                   key_name=keypair_name,
-                   security_groups=sec_group_list,
-                   nics=net_list,
-                   block_device_mapping_v2=block_device_mapping_v2,
-                   availability_zone="nova:" +
-                   hypervisor_id)
+        instance = self.nova.servers.create(
+            inst_name, image,
+            flavor=flavor,
+            key_name=keypair_name,
+            security_groups=sec_group_list,
+            nics=net_list,
+            block_device_mapping_v2=block_device_mapping_v2,
+            availability_zone="nova:%s" % node_id)

         # Poll at 5 second intervals, until the status is no longer 'BUILD'
         if instance:
@@ -609,13 +623,13 @@ class NovaHelper(object):
         return network_id

-    def get_vms_by_hypervisor(self, host):
-        return [vm for vm in
-                self.nova.servers.list(search_opts={"all_tenants": True})
-                if self.get_hostname(vm) == host]
+    def get_instances_by_node(self, host):
+        return [instance for instance in
+                self.nova.servers.list(search_opts={"all_tenants": True})
+                if self.get_hostname(instance) == host]

-    def get_hostname(self, vm):
-        return str(getattr(vm, 'OS-EXT-SRV-ATTR:host'))
+    def get_hostname(self, instance):
+        return str(getattr(instance, 'OS-EXT-SRV-ATTR:host'))

     def get_flavor_instance(self, instance, cache):
         fid = instance.flavor['id']
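The new wait_for_volume_status() uses Python's while/else: the else clause runs only when the loop exhausts its timeout without break, which is where the timeout exception is raised. A generic, dependency-free sketch of that pattern — get_status, clock and sleep are injected here purely for testability, whereas the Watcher helper polls cinder directly:

```python
import time


def wait_for_status(get_status, wanted, timeout=60, poll_interval=1,
                    clock=time.time, sleep=time.sleep):
    """Poll until get_status() returns the wanted status or timeout expires."""
    start_time = clock()
    while clock() - start_time < timeout:
        if get_status() == wanted:
            break  # success: skips the else clause below
        sleep(poll_interval)
    else:
        # Reached only when the while condition went False (timeout),
        # never after a break.
        raise Exception("did not reach status %r after %d s"
                        % (wanted, timeout))
    return True
```

This is the same shape as wait_for_instance_state()/wait_for_instance_status(), except those bound the wait by retry count rather than wall-clock time.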


@@ -13,11 +13,11 @@
 # License for the specific language governing permissions and limitations
 # under the License.

 from oslo_config import cfg
+from oslo_log import log
 import oslo_messaging as messaging
-from oslo_serialization import jsonutils

+from watcher._i18n import _LE
 from watcher.common import context as watcher_context
 from watcher.common import exception
@@ -36,7 +36,9 @@ __all__ = [
 ]

 CONF = cfg.CONF
+LOG = log.getLogger(__name__)
 TRANSPORT = None
+NOTIFICATION_TRANSPORT = None
 NOTIFIER = None

 ALLOWED_EXMODS = [
@@ -55,23 +57,36 @@ TRANSPORT_ALIASES = {
     'watcher.rpc.impl_zmq': 'zmq',
 }

+JsonPayloadSerializer = messaging.JsonPayloadSerializer
+

 def init(conf):
-    global TRANSPORT, NOTIFIER
+    global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER
     exmods = get_allowed_exmods()
     TRANSPORT = messaging.get_transport(conf,
                                         allowed_remote_exmods=exmods,
                                         aliases=TRANSPORT_ALIASES)
+    NOTIFICATION_TRANSPORT = messaging.get_notification_transport(
+        conf,
+        allowed_remote_exmods=exmods,
+        aliases=TRANSPORT_ALIASES)
     serializer = RequestContextSerializer(JsonPayloadSerializer())
-    NOTIFIER = messaging.Notifier(TRANSPORT, serializer=serializer)
+    NOTIFIER = messaging.Notifier(NOTIFICATION_TRANSPORT,
+                                  serializer=serializer)
+
+
+def initialized():
+    return None not in [TRANSPORT, NOTIFIER]


 def cleanup():
-    global TRANSPORT, NOTIFIER
-    assert TRANSPORT is not None
-    assert NOTIFIER is not None
+    global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER
+    if NOTIFIER is None:
+        LOG.exception(_LE("RPC cleanup: NOTIFIER is None"))
     TRANSPORT.cleanup()
-    TRANSPORT = NOTIFIER = None
+    NOTIFICATION_TRANSPORT.cleanup()
+    TRANSPORT = NOTIFICATION_TRANSPORT = NOTIFIER = None


 def set_defaults(control_exchange):
@@ -90,12 +105,6 @@ def get_allowed_exmods():
     return ALLOWED_EXMODS + EXTRA_EXMODS


-class JsonPayloadSerializer(messaging.NoOpSerializer):
-
-    @staticmethod
-    def serialize_entity(context, entity):
-        return jsonutils.to_primitive(entity, convert_instances=True)
-
-
 class RequestContextSerializer(messaging.Serializer):

     def __init__(self, base):
@@ -118,10 +127,6 @@ class RequestContextSerializer(messaging.Serializer):
         return watcher_context.RequestContext.from_dict(context)

-def get_transport_url(url_str=None):
-    return messaging.TransportURL.parse(CONF, url_str, TRANSPORT_ALIASES)
-
-
 def get_client(target, version_cap=None, serializer=None):
     assert TRANSPORT is not None
     serializer = RequestContextSerializer(serializer)

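The hunk above splits the single RPC transport into an RPC transport plus a dedicated notification transport, adds an `initialized()` probe, and replaces the hard `assert`s in `cleanup()` with a logged warning. A minimal stand-alone sketch of that module-global lifecycle pattern — `FakeTransport` and the placeholder notifier are hypothetical stand-ins for the oslo.messaging objects, not Watcher's real API:

```python
import logging

LOG = logging.getLogger(__name__)

TRANSPORT = None
NOTIFICATION_TRANSPORT = None
NOTIFIER = None


class FakeTransport(object):
    """Hypothetical stand-in for an oslo.messaging transport."""

    def __init__(self):
        self.closed = False

    def cleanup(self):
        self.closed = True


def init():
    global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER
    # RPC and notifications each get their own transport, mirroring the diff
    TRANSPORT = FakeTransport()
    NOTIFICATION_TRANSPORT = FakeTransport()
    NOTIFIER = object()  # placeholder for messaging.Notifier


def initialized():
    # same membership test as the patched rpc.py
    return None not in [TRANSPORT, NOTIFIER]


def cleanup():
    global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER
    if NOTIFIER is None:
        # the patch logs instead of asserting, so cleanup stays best-effort
        LOG.warning("RPC cleanup: NOTIFIER is None")
    TRANSPORT.cleanup()
    NOTIFICATION_TRANSPORT.cleanup()
    TRANSPORT = NOTIFICATION_TRANSPORT = NOTIFIER = None
```

Replacing the asserts with a log line means a double `cleanup()` or a partially initialized module no longer kills the process during shutdown.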

@@ -0,0 +1,44 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
#
# Authors: Vincent FRANCOISE <vincent.francoise@b-com.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from apscheduler import events
from apscheduler.schedulers import background
from oslo_service import service

job_events = events


class BackgroundSchedulerService(service.ServiceBase,
                                 background.BackgroundScheduler):

    def start(self):
        """Start service."""
        background.BackgroundScheduler.start(self)

    def stop(self):
        """Stop service."""
        self.shutdown()

    def wait(self):
        """Wait for service to complete."""

    def reset(self):
        """Reset service.

        Called in case service running in daemon mode receives SIGHUP.
        """

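The new `BackgroundSchedulerService` adapts APScheduler's `BackgroundScheduler` to the `oslo_service` `ServiceBase` interface purely through multiple inheritance: `start()` delegates to the scheduler, `stop()` maps to `shutdown()`, and `wait()`/`reset()` are no-ops. A dependency-free sketch of the same adapter idea, with hypothetical `ServiceBase`/`Scheduler` classes standing in for the oslo_service and apscheduler bases:

```python
class ServiceBase(object):
    """Hypothetical stand-in for oslo_service.service.ServiceBase."""

    def start(self):
        raise NotImplementedError

    def stop(self):
        raise NotImplementedError

    def wait(self):
        raise NotImplementedError

    def reset(self):
        raise NotImplementedError


class Scheduler(object):
    """Hypothetical stand-in for apscheduler's BackgroundScheduler."""

    def __init__(self):
        self.running = False

    def start(self):
        self.running = True

    def shutdown(self):
        self.running = False


class BackgroundSchedulerService(ServiceBase, Scheduler):
    def start(self):
        """Start service: delegate to the scheduler base explicitly."""
        Scheduler.start(self)

    def stop(self):
        """Stop service: the scheduler calls this 'shutdown' instead."""
        self.shutdown()

    def wait(self):
        """Wait for service to complete (nothing to join here)."""

    def reset(self):
        """Reset service in case of SIGHUP."""
```

The explicit `Scheduler.start(self)` call matters: both bases define `start`, and the subclass must pick the scheduler's implementation rather than the abstract one.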

@@ -27,7 +27,7 @@ from oslo_reports import opts as gmr_opts
 from oslo_service import service
 from oslo_service import wsgi

-from watcher._i18n import _
+from watcher._i18n import _, _LI
 from watcher.api import app
 from watcher.common import config
 from watcher.common.messaging.events import event_dispatcher as dispatcher
@@ -62,6 +62,8 @@ _DEFAULT_LOG_LEVELS = ['amqp=WARN', 'amqplib=WARN', 'qpid.messaging=INFO',
                        'paramiko=WARN', 'requests=WARN', 'neutronclient=WARN',
                        'glanceclient=WARN', 'watcher.openstack.common=WARN']

+Singleton = service.Singleton
+

 class WSGIService(service.ServiceBase):
     """Provides ability to launch Watcher API from wsgi app."""
@@ -109,8 +111,10 @@ class Service(service.ServiceBase, dispatcher.EventDispatcher):
         self.publisher_id = self.manager.publisher_id
         self.api_version = self.manager.API_VERSION

         self.conductor_topic = self.manager.conductor_topic
         self.status_topic = self.manager.status_topic
+        self.notification_topics = self.manager.notification_topics

         self.conductor_endpoints = [
             ep(self) for ep in self.manager.conductor_endpoints
@@ -118,28 +122,52 @@ class Service(service.ServiceBase, dispatcher.EventDispatcher):
         self.status_endpoints = [
             ep(self.publisher_id) for ep in self.manager.status_endpoints
         ]
+        self.notification_endpoints = self.manager.notification_endpoints

         self.serializer = rpc.RequestContextSerializer(
             base.WatcherObjectSerializer())

-        self.conductor_topic_handler = self.build_topic_handler(
-            self.conductor_topic, self.conductor_endpoints)
-        self.status_topic_handler = self.build_topic_handler(
-            self.status_topic, self.status_endpoints)
+        self._transport = None
+        self._notification_transport = None
         self._conductor_client = None
         self._status_client = None

+        self.conductor_topic_handler = None
+        self.status_topic_handler = None
+        self.notification_handler = None
+
+        if self.conductor_topic and self.conductor_endpoints:
+            self.conductor_topic_handler = self.build_topic_handler(
+                self.conductor_topic, self.conductor_endpoints)
+        if self.status_topic and self.status_endpoints:
+            self.status_topic_handler = self.build_topic_handler(
+                self.status_topic, self.status_endpoints)
+        if self.notification_topics and self.notification_endpoints:
+            self.notification_handler = self.build_notification_handler(
+                self.notification_topics, self.notification_endpoints
+            )
+
+    @property
+    def transport(self):
+        if self._transport is None:
+            self._transport = om.get_transport(CONF)
+        return self._transport
+
+    @property
+    def notification_transport(self):
+        if self._notification_transport is None:
+            self._notification_transport = om.get_notification_transport(CONF)
+        return self._notification_transport
+
     @property
     def conductor_client(self):
         if self._conductor_client is None:
-            transport = om.get_transport(CONF)
             target = om.Target(
                 topic=self.conductor_topic,
                 version=self.API_VERSION,
             )
             self._conductor_client = om.RPCClient(
-                transport, target, serializer=self.serializer)
+                self.transport, target, serializer=self.serializer)
         return self._conductor_client

     @conductor_client.setter
@@ -149,13 +177,12 @@ class Service(service.ServiceBase, dispatcher.EventDispatcher):
     @property
     def status_client(self):
         if self._status_client is None:
-            transport = om.get_transport(CONF)
             target = om.Target(
                 topic=self.status_topic,
                 version=self.API_VERSION,
             )
             self._status_client = om.RPCClient(
-                transport, target, serializer=self.serializer)
+                self.transport, target, serializer=self.serializer)
         return self._status_client

     @status_client.setter
@@ -167,17 +194,33 @@ class Service(service.ServiceBase, dispatcher.EventDispatcher):
             self.publisher_id, topic_name, [self.manager] + list(endpoints),
             self.api_version, self.serializer)

+    def build_notification_handler(self, topic_names, endpoints=()):
+        serializer = rpc.RequestContextSerializer(rpc.JsonPayloadSerializer())
+        targets = [om.Target(topic=topic_name) for topic_name in topic_names]
+        return om.get_notification_listener(
+            self.notification_transport, targets, endpoints,
+            executor='eventlet', serializer=serializer,
+            allow_requeue=False)
+
     def start(self):
         LOG.debug("Connecting to '%s' (%s)",
                   CONF.transport_url, CONF.rpc_backend)
-        self.conductor_topic_handler.start()
-        self.status_topic_handler.start()
+        if self.conductor_topic_handler:
+            self.conductor_topic_handler.start()
+        if self.status_topic_handler:
+            self.status_topic_handler.start()
+        if self.notification_handler:
+            self.notification_handler.start()

     def stop(self):
         LOG.debug("Disconnecting from '%s' (%s)",
                   CONF.transport_url, CONF.rpc_backend)
-        self.conductor_topic_handler.stop()
-        self.status_topic_handler.stop()
+        if self.conductor_topic_handler:
+            self.conductor_topic_handler.stop()
+        if self.status_topic_handler:
+            self.status_topic_handler.stop()
+        if self.notification_handler:
+            self.notification_handler.stop()

     def reset(self):
         """Reset a service in case it received a SIGHUP."""
@@ -188,9 +231,14 @@ class Service(service.ServiceBase, dispatcher.EventDispatcher):
     def publish_control(self, event, payload):
         return self.conductor_topic_handler.publish_event(event, payload)

-    def publish_status(self, event, payload, request_id=None):
-        return self.status_topic_handler.publish_event(
-            event, payload, request_id)
+    def publish_status_event(self, event, payload, request_id=None):
+        if self.status_topic_handler:
+            return self.status_topic_handler.publish_event(
+                event, payload, request_id)
+        else:
+            LOG.info(
+                _LI("No status notifier declared: notification '%s' not sent"),
+                event)

     def get_version(self):
         return self.api_version
@@ -206,11 +254,11 @@ class Service(service.ServiceBase, dispatcher.EventDispatcher):
             'request_id': ctx['request_id'],
             'msg': message
         }
-        self.publish_status(evt, payload)
+        self.publish_status_event(evt, payload)


-def process_launcher(conf=cfg.CONF):
-    return service.ProcessLauncher(conf)
+def launch(conf, service_, workers=1, restart_method='reload'):
+    return service.launch(conf, service_, workers, restart_method)


 def prepare_service(argv=(), conf=cfg.CONF):

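The refactored `Service` class stops building transports eagerly in `__init__` and instead creates them lazily behind `transport`/`notification_transport` properties, so a process that never issues an RPC never opens a connection, and both RPC clients share one transport instead of each calling `om.get_transport(CONF)`. The memoized-property pattern in isolation — `get_transport` here is a hypothetical stand-in for `om.get_transport(CONF)`:

```python
CALLS = {'count': 0}


def get_transport():
    """Hypothetical stand-in for om.get_transport(CONF): counts creations."""
    CALLS['count'] += 1
    return object()


class Service(object):
    def __init__(self):
        # no transport is created yet; the attribute stays None until first use
        self._transport = None

    @property
    def transport(self):
        # create on first access, then reuse the cached instance
        if self._transport is None:
            self._transport = get_transport()
        return self._transport
```

Besides saving a connection per client, lazy creation also keeps `__init__` cheap enough to construct a `Service` in unit tests without a running message bus.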

@@ -19,6 +19,7 @@
 from jsonschema import validators
 from oslo_config import cfg
 from oslo_log import log as logging

+from watcher.common import exception
+
 import re
 import six
@@ -132,6 +133,10 @@ def get_cls_import_path(cls):
     return module + '.' + cls.__name__


+def strtime(at):
+    return at.strftime("%Y-%m-%dT%H:%M:%S.%f")
+
+
 # Default value feedback extension as jsonschema doesn't support it
 def extend_with_default(validator_class):
     validate_properties = validator_class.VALIDATORS["properties"]
@@ -142,13 +147,29 @@ def extend_with_default(validator_class):
             instance.setdefault(prop, subschema["default"])

         for error in validate_properties(
-            validator, properties, instance, schema,
+            validator, properties, instance, schema
         ):
             yield error

-    return validators.extend(
-        validator_class, {"properties": set_defaults},
-    )
+    return validators.extend(validator_class,
+                             {"properties": set_defaults})

-DefaultValidatingDraft4Validator = extend_with_default(
-    validators.Draft4Validator)

+# Parameter strict check extension as jsonschema doesn't support it
+def extend_with_strict_schema(validator_class):
+    validate_properties = validator_class.VALIDATORS["properties"]
+
+    def strict_schema(validator, properties, instance, schema):
+        for para in instance.keys():
+            if para not in properties.keys():
+                raise exception.AuditParameterNotAllowed(parameter=para)
+
+        for error in validate_properties(
+            validator, properties, instance, schema
+        ):
+            yield error
+
+    return validators.extend(validator_class, {"properties": strict_schema})
+
+
+StrictDefaultValidatingDraft4Validator = extend_with_default(
+    extend_with_strict_schema(validators.Draft4Validator))

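The two extensions above both wrap jsonschema's `"properties"` validator: `extend_with_default` injects declared defaults into the instance being validated, and the new `extend_with_strict_schema` rejects any parameter that is not declared in the schema, then they are stacked to build `StrictDefaultValidatingDraft4Validator`. A library-free sketch of the combined behaviour — `validate_strict_with_defaults` is a hypothetical helper illustrating the semantics, not Watcher's API:

```python
def validate_strict_with_defaults(schema_properties, instance):
    """Reject unknown keys, then fill in missing defaults in place.

    schema_properties maps a property name to its subschema dict,
    mirroring the "properties" section of a JSON schema.
    """
    # strict check: every supplied parameter must be declared in the schema
    for para in instance:
        if para not in schema_properties:
            raise ValueError("parameter %r not allowed" % para)

    # default feedback: properties left out by the caller get their default
    for prop, subschema in schema_properties.items():
        if "default" in subschema:
            instance.setdefault(prop, subschema["default"])

    return instance
```

Stacking the two wrappers in the real code gives audit parameters both behaviours at once: unknown parameters raise `AuditParameterNotAllowed`, while omitted ones are silently defaulted.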

@@ -44,7 +44,6 @@ class BaseConnection(object):
         :param context: The security context
         :param filters: Filters to apply. Defaults to None.
         :param limit: Maximum number of goals to return.
         :param marker: the last item of the previous page; we return the next
                        result set.
@@ -229,7 +228,6 @@ class BaseConnection(object):
         :param context: The security context
         :param filters: Filters to apply. Defaults to None.
         :param limit: Maximum number of audit templates to return.
         :param marker: the last item of the previous page; we return the next
                        result set.
@@ -325,7 +323,6 @@ class BaseConnection(object):
         :param context: The security context
         :param filters: Filters to apply. Defaults to None.
         :param limit: Maximum number of audits to return.
         :param marker: the last item of the previous page; we return the next
                        result set.
@@ -410,7 +407,6 @@ class BaseConnection(object):
         :param context: The security context
         :param filters: Filters to apply. Defaults to None.
         :param limit: Maximum number of actions to return.
         :param marker: the last item of the previous page; we return the next
                        result set.
@@ -491,7 +487,6 @@ class BaseConnection(object):
         :param context: The security context
         :param filters: Filters to apply. Defaults to None.
         :param limit: Maximum number of audits to return.
         :param marker: the last item of the previous page; we return the next
                        result set.
@@ -640,3 +635,83 @@ class BaseConnection(object):
         :raises: :py:class:`~.EfficacyIndicatorNotFound`
         :raises: :py:class:`~.Invalid`
         """
+
+    @abc.abstractmethod
+    def get_scoring_engine_list(
+            self, context, columns=None, filters=None, limit=None,
+            marker=None, sort_key=None, sort_dir=None):
+        """Get specific columns for matching scoring engines.
+
+        Return a list of the specified columns for all scoring engines that
+        match the specified filters.
+
+        :param context: The security context
+        :param columns: List of column names to return.
+                        Defaults to 'id' column when columns == None.
+        :param filters: Filters to apply. Defaults to None.
+        :param limit: Maximum number of scoring engines to return.
+        :param marker: the last item of the previous page; we return the next
+                       result set.
+        :param sort_key: Attribute by which results should be sorted.
+        :param sort_dir: direction in which results should be sorted.
+                         (asc, desc)
+        :returns: A list of tuples of the specified columns.
+        """
+
+    @abc.abstractmethod
+    def create_scoring_engine(self, values):
+        """Create a new scoring engine.
+
+        :param values: A dict containing several items used to identify
+                       and track the scoring engine.
+        :returns: A scoring engine.
+        :raises: :py:class:`~.ScoringEngineAlreadyExists`
+        """
+
+    @abc.abstractmethod
+    def get_scoring_engine_by_id(self, context, scoring_engine_id):
+        """Return a scoring engine by its id.
+
+        :param context: The security context
+        :param scoring_engine_id: The id of a scoring engine.
+        :returns: A scoring engine.
+        :raises: :py:class:`~.ScoringEngineNotFound`
+        """
+
+    @abc.abstractmethod
+    def get_scoring_engine_by_uuid(self, context, scoring_engine_uuid):
+        """Return a scoring engine by its uuid.
+
+        :param context: The security context
+        :param scoring_engine_uuid: The uuid of a scoring engine.
+        :returns: A scoring engine.
+        :raises: :py:class:`~.ScoringEngineNotFound`
+        """
+
+    @abc.abstractmethod
+    def get_scoring_engine_by_name(self, context, scoring_engine_name):
+        """Return a scoring engine by its name.
+
+        :param context: The security context
+        :param scoring_engine_name: The name of a scoring engine.
+        :returns: A scoring engine.
+        :raises: :py:class:`~.ScoringEngineNotFound`
+        """
+
+    @abc.abstractmethod
+    def destroy_scoring_engine(self, scoring_engine_id):
+        """Destroy a scoring engine.
+
+        :param scoring_engine_id: The id of a scoring engine.
+        :raises: :py:class:`~.ScoringEngineNotFound`
+        """
+
+    @abc.abstractmethod
+    def update_scoring_engine(self, scoring_engine_id, values):
+        """Update properties of a scoring engine.
+
+        :param scoring_engine_id: The id of a scoring engine.
+        :returns: A scoring engine.
+        :raises: :py:class:`~.ScoringEngineNotFound`
+        :raises: :py:class:`~.Invalid`
+        """


@@ -145,27 +145,27 @@ class PurgeCommand(object):
         return expiry_date

     @classmethod
-    def get_audit_template_uuid(cls, uuid_or_name):
+    def get_goal_uuid(cls, uuid_or_name):
         if uuid_or_name is None:
             return

         query_func = None
         if not utils.is_uuid_like(uuid_or_name):
-            query_func = objects.AuditTemplate.get_by_name
+            query_func = objects.Goal.get_by_name
         else:
-            query_func = objects.AuditTemplate.get_by_uuid
+            query_func = objects.Goal.get_by_uuid

         try:
-            audit_template = query_func(cls.ctx, uuid_or_name)
+            goal = query_func(cls.ctx, uuid_or_name)
         except Exception as exc:
             LOG.exception(exc)
-            raise exception.AuditTemplateNotFound(audit_template=uuid_or_name)
+            raise exception.GoalNotFound(goal=uuid_or_name)

-        if not audit_template.deleted_at:
+        if not goal.deleted_at:
             raise exception.NotSoftDeletedStateError(
-                name=_('Audit Template'), id=uuid_or_name)
+                name=_('Goal'), id=uuid_or_name)

-        return audit_template.uuid
+        return goal.uuid

     def _find_goals(self, filters=None):
         return objects.Goal.list(self.ctx, filters=filters)
@@ -209,18 +209,19 @@ class PurgeCommand(object):
             (audit_template.strategy_id and
              audit_template.strategy_id not in strategy_ids)]

-        audit_template_ids = [at.id for at in audit_templates
-                              if at not in orphans.audit_templates]
         orphans.audits = [
             audit for audit in audits
-            if audit.audit_template_id not in audit_template_ids]
+            if audit.goal_id not in goal_ids or
+            (audit.strategy_id and
+             audit.strategy_id not in strategy_ids)]

         # Objects with orphan parents are themselves orphans
         audit_ids = [audit.id for audit in audits
                      if audit not in orphans.audits]
         orphans.action_plans = [
             ap for ap in action_plans
-            if ap.audit_id not in audit_ids]
+            if ap.audit_id not in audit_ids or
+            ap.strategy_id not in strategy_ids]

         # Objects with orphan parents are themselves orphans
         action_plan_ids = [ap.id for ap in action_plans
@@ -270,6 +271,7 @@ class PurgeCommand(object):
             related_objs = WatcherObjectsMap()
             related_objs.strategies = self._find_strategies(filters)
             related_objs.audit_templates = self._find_audit_templates(filters)
+            related_objs.audits = self._find_audits(filters)
             objects_map += related_objs

         for strategy in objects_map.strategies:
@@ -278,13 +280,6 @@ class PurgeCommand(object):
             filters.update(dict(strategy_id=strategy.id))
             related_objs = WatcherObjectsMap()
             related_objs.audit_templates = self._find_audit_templates(filters)
-            objects_map += related_objs
-
-        for audit_template in objects_map.audit_templates:
-            filters = {}
-            filters.update(base_filters)
-            filters.update(dict(audit_template_id=audit_template.id))
-            related_objs = WatcherObjectsMap()
             related_objs.audits = self._find_audits(filters)
             objects_map += related_objs
@@ -355,12 +350,9 @@ class PurgeCommand(object):
         ]

         # audits
-        audit_template_ids = [
-            audit_template.id
-            for audit_template in related_objs.audit_templates]
         related_objs.audits = [
             audit for audit in self._objects_map.audits
-            if audit.audit_template_id in audit_template_ids
+            if audit.goal_id in goal_ids
         ]

         # action plans
@@ -438,12 +430,12 @@ class PurgeCommand(object):
             print(_("Here below is a table containing the objects "
                     "that can be purged%s:") % _orphans_note)

-        LOG.info("\n%s", self._objects_map.get_count_table())
+        LOG.info(_LI("\n%s"), self._objects_map.get_count_table())
         print(self._objects_map.get_count_table())
         LOG.info(_LI("Purge process completed"))


-def purge(age_in_days, max_number, audit_template, exclude_orphans, dry_run):
+def purge(age_in_days, max_number, goal, exclude_orphans, dry_run):
     """Removes soft deleted objects from the database

     :param age_in_days: Number of days since deletion (from today)
@@ -452,8 +444,8 @@ def purge(age_in_days, max_number, audit_template, exclude_orphans, dry_run):
     :param max_number: Max number of objects expected to be deleted.
                        Prevents the deletion if exceeded. No limit if set to None.
     :type max_number: int
-    :param audit_template: UUID or name of the audit template to purge.
-    :type audit_template: str
+    :param goal: UUID or name of the goal to purge.
+    :type goal: str
     :param exclude_orphans: Flag to indicate whether or not you want to
                             exclude orphans from deletion (default: False).
     :type exclude_orphans: bool
@@ -465,13 +457,13 @@ def purge(age_in_days, max_number, audit_template, exclude_orphans, dry_run):
     if max_number and max_number < 0:
         raise exception.NegativeLimitError

-    LOG.info("[options] age_in_days = %s", age_in_days)
-    LOG.info("[options] max_number = %s", max_number)
-    LOG.info("[options] audit_template = %s", audit_template)
-    LOG.info("[options] exclude_orphans = %s", exclude_orphans)
-    LOG.info("[options] dry_run = %s", dry_run)
+    LOG.info(_LI("[options] age_in_days = %s"), age_in_days)
+    LOG.info(_LI("[options] max_number = %s"), max_number)
+    LOG.info(_LI("[options] goal = %s"), goal)
+    LOG.info(_LI("[options] exclude_orphans = %s"), exclude_orphans)
+    LOG.info(_LI("[options] dry_run = %s"), dry_run)

-    uuid = PurgeCommand.get_audit_template_uuid(audit_template)
+    uuid = PurgeCommand.get_goal_uuid(goal)
     cmd = PurgeCommand(age_in_days, max_number, uuid,
                        exclude_orphans, dry_run)

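After this change an audit is orphaned when its goal no longer exists, or when it references a strategy that no longer exists — its audit template is no longer part of the rule. The new filtering logic in isolation, using plain namedtuples instead of Watcher objects:

```python
from collections import namedtuple

# minimal stand-in for the Audit object fields the purge logic reads
Audit = namedtuple('Audit', ['id', 'goal_id', 'strategy_id'])


def find_orphan_audits(audits, goal_ids, strategy_ids):
    """Mirror the patched rule: missing goal OR (set but missing) strategy."""
    return [audit for audit in audits
            if audit.goal_id not in goal_ids or
            (audit.strategy_id and
             audit.strategy_id not in strategy_ids)]
```

Note the strategy check is guarded by truthiness: an audit with no strategy (`strategy_id=None`) can only be orphaned through its goal, matching the nullable `strategy_id` column on the `Audit` model.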

@@ -330,10 +330,13 @@ class Connection(api.BaseConnection):
         if filters is None:
             filters = {}

-        plain_fields = ['uuid', 'audit_type', 'state', 'audit_template_id']
+        plain_fields = ['uuid', 'audit_type', 'state', 'goal_id',
+                        'strategy_id']
         join_fieldmap = {
-            'audit_template_uuid': ("uuid", models.AuditTemplate),
-            'audit_template_name': ("name", models.AuditTemplate),
+            'goal_uuid': ("uuid", models.Goal),
+            'goal_name': ("name", models.Goal),
+            'strategy_uuid': ("uuid", models.Strategy),
+            'strategy_name': ("name", models.Strategy),
         }

         return self._add_filters(
@@ -344,10 +347,15 @@ class Connection(api.BaseConnection):
         if filters is None:
             filters = {}

-        plain_fields = ['uuid', 'state', 'audit_id']
-        join_fieldmap = {
-            'audit_uuid': ("uuid", models.Audit),
-        }
+        plain_fields = ['uuid', 'state', 'audit_id', 'strategy_id']
+        join_fieldmap = JoinMap(
+            audit_uuid=NaturalJoinFilter(
+                join_fieldname="uuid", join_model=models.Audit),
+            strategy_uuid=NaturalJoinFilter(
+                join_fieldname="uuid", join_model=models.Strategy),
+            strategy_name=NaturalJoinFilter(
+                join_fieldname="name", join_model=models.Strategy),
+        )

         return self._add_filters(
             query=query, model=models.ActionPlan, filters=filters,
@@ -974,3 +982,86 @@ class Connection(api.BaseConnection):
         except exception.ResourceNotFound:
             raise exception.EfficacyIndicatorNotFound(
                 efficacy_indicator=efficacy_indicator_id)
+
+    # ### SCORING ENGINES ### #
+
+    def _add_scoring_engine_filters(self, query, filters):
+        if filters is None:
+            filters = {}
+
+        plain_fields = ['id', 'description']
+
+        return self._add_filters(
+            query=query, model=models.ScoringEngine, filters=filters,
+            plain_fields=plain_fields)
+
+    def get_scoring_engine_list(
+            self, context, columns=None, filters=None, limit=None,
+            marker=None, sort_key=None, sort_dir=None):
+        query = model_query(models.ScoringEngine)
+        query = self._add_scoring_engine_filters(query, filters)
+        if not context.show_deleted:
+            query = query.filter_by(deleted_at=None)
+
+        return _paginate_query(models.ScoringEngine, limit, marker,
+                               sort_key, sort_dir, query)
+
+    def create_scoring_engine(self, values):
+        # ensure defaults are present for new scoring engines
+        if not values.get('uuid'):
+            values['uuid'] = utils.generate_uuid()
+
+        scoring_engine = models.ScoringEngine()
+        scoring_engine.update(values)
+
+        try:
+            scoring_engine.save()
+        except db_exc.DBDuplicateEntry:
+            raise exception.ScoringEngineAlreadyExists(uuid=values['uuid'])
+        return scoring_engine
+
+    def _get_scoring_engine(self, context, fieldname, value):
+        try:
+            return self._get(context, model=models.ScoringEngine,
+                             fieldname=fieldname, value=value)
+        except exception.ResourceNotFound:
+            raise exception.ScoringEngineNotFound(scoring_engine=value)
+
+    def get_scoring_engine_by_id(self, context, scoring_engine_id):
+        return self._get_scoring_engine(
+            context, fieldname="id", value=scoring_engine_id)
+
+    def get_scoring_engine_by_uuid(self, context, scoring_engine_uuid):
+        return self._get_scoring_engine(
+            context, fieldname="uuid", value=scoring_engine_uuid)
+
+    def get_scoring_engine_by_name(self, context, scoring_engine_name):
+        return self._get_scoring_engine(
+            context, fieldname="name", value=scoring_engine_name)
+
+    def destroy_scoring_engine(self, scoring_engine_id):
+        try:
+            return self._destroy(models.ScoringEngine, scoring_engine_id)
+        except exception.ResourceNotFound:
+            raise exception.ScoringEngineNotFound(
+                scoring_engine=scoring_engine_id)
+
+    def update_scoring_engine(self, scoring_engine_id, values):
+        if 'id' in values:
+            raise exception.Invalid(
+                message=_("Cannot overwrite ID for an existing "
+                          "Scoring Engine."))
+
+        try:
+            return self._update(
+                models.ScoringEngine, scoring_engine_id, values)
+        except exception.ResourceNotFound:
+            raise exception.ScoringEngineNotFound(
+                scoring_engine=scoring_engine_id)
+
+    def soft_delete_scoring_engine(self, scoring_engine_id):
+        try:
+            return self._soft_delete(models.ScoringEngine, scoring_engine_id)
+        except exception.ResourceNotFound:
+            raise exception.ScoringEngineNotFound(
+                scoring_engine=scoring_engine_id)


@@ -16,11 +16,10 @@
 SQLAlchemy models for watcher service
 """

-import json
-
 from oslo_config import cfg
 from oslo_db import options as db_options
 from oslo_db.sqlalchemy import models
+from oslo_serialization import jsonutils
 import six.moves.urllib.parse as urlparse
 from sqlalchemy import Column
 from sqlalchemy import DateTime
@@ -30,6 +29,7 @@ from sqlalchemy import Integer
 from sqlalchemy import Numeric
 from sqlalchemy import schema
 from sqlalchemy import String
+from sqlalchemy import Text
 from sqlalchemy.types import TypeDecorator, TEXT

 from watcher.common import paths
@@ -70,12 +70,12 @@ class JsonEncodedType(TypeDecorator):
                                 % (self.__class__.__name__,
                                    self.type.__name__,
                                    type(value).__name__))
-        serialized_value = json.dumps(value)
+        serialized_value = jsonutils.dumps(value)
         return serialized_value

     def process_result_value(self, value, dialect):
         if value is not None:
-            value = json.loads(value)
+            value = jsonutils.loads(value)
         return value
@@ -174,10 +174,11 @@ class Audit(Base):
     audit_type = Column(String(20))
     state = Column(String(20), nullable=True)
     deadline = Column(DateTime, nullable=True)
-    audit_template_id = Column(Integer, ForeignKey('audit_templates.id'),
-                               nullable=False)
     parameters = Column(JSONEncodedDict, nullable=True)
     interval = Column(Integer, nullable=True)
+    host_aggregate = Column(Integer, nullable=True)
+    goal_id = Column(Integer, ForeignKey('goals.id'), nullable=False)
+    strategy_id = Column(Integer, ForeignKey('strategies.id'), nullable=True)


 class Action(Base):
@@ -210,7 +211,8 @@ class ActionPlan(Base):
     id = Column(Integer, primary_key=True)
     uuid = Column(String(36))
     first_action_id = Column(Integer)
-    audit_id = Column(Integer, ForeignKey('audits.id'), nullable=True)
+    audit_id = Column(Integer, ForeignKey('audits.id'), nullable=False)
+    strategy_id = Column(Integer, ForeignKey('strategies.id'), nullable=False)
     state = Column(String(20), nullable=True)
     global_efficacy = Column(JSONEncodedDict, nullable=True)
@@ -231,3 +233,21 @@ class EfficacyIndicator(Base):
     value = Column(Numeric())
     action_plan_id = Column(Integer, ForeignKey('action_plans.id'),
                             nullable=False)
+
+
+class ScoringEngine(Base):
+    """Represents a scoring engine."""
+
+    __tablename__ = 'scoring_engines'
+    __table_args__ = (
+        schema.UniqueConstraint('uuid', name='uniq_scoring_engines0uuid'),
+        table_args()
+    )
+    id = Column(Integer, primary_key=True)
+    uuid = Column(String(36), nullable=False)
+    name = Column(String(63), nullable=False)
+    description = Column(String(255), nullable=True)
+    # Metainfo might contain some additional information about the data model.
+    # The format might vary between different models (e.g. be JSON, XML or
+    # even some custom format), the blob type should cover all scenarios.
+    metainfo = Column(Text, nullable=True)

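`JsonEncodedType` swaps stdlib `json` for oslo_serialization's `jsonutils` but keeps the same contract: type-check and serialize on the way into the database, deserialize on the way out, passing `None` through untouched. The round-trip logic without SQLAlchemy — this class only mimics the two `TypeDecorator` hooks, with stdlib `json` standing in for `jsonutils`:

```python
import json


class JsonEncodedDict(object):
    """Minimal stand-in for the SQLAlchemy TypeDecorator in models.py."""

    type = dict  # the Python type this column accepts

    def process_bind_param(self, value, dialect=None):
        # called when writing to the DB: validate, then serialize to text
        if value is None:
            return None
        if not isinstance(value, self.type):
            raise TypeError("%s expected, got %s"
                            % (self.type.__name__, type(value).__name__))
        return json.dumps(value)

    def process_result_value(self, value, dialect=None):
        # called when reading from the DB: deserialize, passing None through
        if value is not None:
            value = json.loads(value)
        return value
```

In the real model, subclassing `TypeDecorator(TEXT)` lets SQLAlchemy call these two hooks automatically around every bind and fetch, so callers only ever see dicts.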

@@ -78,8 +78,7 @@ class AuditHandler(BaseAuditHandler):
         event.data = {}
         payload = {'audit_uuid': audit_uuid,
                    'audit_status': status}
-        self.messaging.status_topic_handler.publish_event(
-            event.type.name, payload)
+        self.messaging.publish_status_event(event.type.name, payload)

     def update_audit_state(self, request_context, audit, state):
         LOG.debug("Update audit state: %s", state)


@@ -33,8 +33,8 @@ CONF = cfg.CONF
 WATCHER_CONTINUOUS_OPTS = [
     cfg.IntOpt('continuous_audit_interval',
                default=10,
-               help='Interval, in seconds, for checking new created'
-                    'continuous audit.')
+               help='Interval (in seconds) for checking newly created '
+                    'continuous audits.')
 ]

 CONF.register_opts(WATCHER_CONTINUOUS_OPTS, 'watcher_decision_engine')
@@ -64,7 +64,7 @@ class ContinuousAuditHandler(base.AuditHandler):
             # if audit isn't in active states, audit's job must be removed to
             # prevent using of inactive audit in future.
-            job_to_delete = [job for job in self.jobs
-                             if job.keys()[0] == audit.uuid][0]
+            job_to_delete = [job for job in self.jobs
+                             if list(job.keys())[0] == audit.uuid][0]
             self.jobs.remove(job_to_delete)
             job_to_delete[audit.uuid].remove()
@@ -74,8 +74,8 @@ class ContinuousAuditHandler(base.AuditHandler):
     def do_execute(self, audit, request_context):
         # execute the strategy
-        solution = self.strategy_context.execute_strategy(audit.uuid,
-                                                          request_context)
+        solution = self.strategy_context.execute_strategy(
+            audit, request_context)

         if audit.audit_type == audit_objects.AuditType.CONTINUOUS.value:
             a_plan_filters = {'audit_uuid': audit.uuid,

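The `list(job.keys())[0]` change in the hunk above is the standard Python 3 compatibility fix: `dict.keys()` returns a non-indexable view object on Python 3, so it has to be materialized before subscripting. A standalone sketch of the lookup pattern (the job and audit values are made up for illustration):

```python
# Each scheduled job is stored as a single-entry dict mapping an audit
# UUID to its scheduler job, mirroring the structure in continuous.py.
jobs = [
    {"audit-uuid-1": "job-1"},
    {"audit-uuid-2": "job-2"},
]

audit_uuid = "audit-uuid-2"

# On Python 3, job.keys()[0] raises "TypeError: 'dict_keys' object is
# not subscriptable"; wrapping it in list() works on both 2 and 3.
job_to_delete = [job for job in jobs
                 if list(job.keys())[0] == audit_uuid][0]

jobs.remove(job_to_delete)
print(job_to_delete[audit_uuid])  # -> job-2
```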

@@ -20,7 +20,7 @@ from watcher.decision_engine.audit import base
 class OneShotAuditHandler(base.AuditHandler):
     def do_execute(self, audit, request_context):
         # execute the strategy
-        solution = self.strategy_context.execute_strategy(audit.uuid,
-                                                          request_context)
+        solution = self.strategy_context.execute_strategy(
+            audit, request_context)
         return solution


@@ -17,13 +17,9 @@
 # limitations under the License.
 #

-from oslo_config import cfg
 from watcher.common import ceilometer_helper
-from watcher.metrics_engine.cluster_history import base
+from watcher.decision_engine.cluster.history import base

-CONF = cfg.CONF

 class CeilometerClusterHistory(base.BaseClusterHistory):


@@ -17,12 +17,8 @@
 import abc

 import six
-from oslo_log import log

 from watcher.common.loader import loadable

-LOG = log.getLogger(__name__)

 @six.add_metaclass(abc.ABCMeta)
 class Goal(loadable.Loadable):


@@ -24,7 +24,7 @@ calculating its :ref:`global efficacy <efficacy_definition>`.
""" """
import abc import abc
import json from oslo_serialization import jsonutils
import six import six
import voluptuous import voluptuous
@@ -81,4 +81,4 @@ class EfficacySpecification(object):
             for indicator in self.indicators_specs]

     def serialize_indicators_specs(self):
-        return json.dumps(self.get_indicators_specs_dicts())
+        return jsonutils.dumps(self.get_indicators_specs_dicts())

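The swap from `json` to `oslo_serialization.jsonutils` above is behavior-preserving for plain dict/list payloads; `jsonutils.dumps` is a drop-in for `json.dumps` that additionally copes with types (such as `datetime`) that stdlib `json` rejects. A minimal stand-in for the serialization pattern, using stdlib `json` so the sketch is self-contained (the class and field names are illustrative, not Watcher's actual API):

```python
import json


class EfficacySpecification:
    """Illustrative stand-in for an efficacy specification holder."""

    def __init__(self, indicators_specs):
        self.indicators_specs = indicators_specs

    def get_indicators_specs_dicts(self):
        return [spec for spec in self.indicators_specs]

    def serialize_indicators_specs(self):
        # The patch replaces json.dumps with jsonutils.dumps here; the
        # call signature is identical, so this line is a drop-in swap.
        return json.dumps(self.get_indicators_specs_dicts())


spec = EfficacySpecification([{"name": "released_nodes_ratio", "unit": "%"}])
print(spec.serialize_indicators_specs())
```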

@@ -118,11 +118,11 @@ class ReleasedComputeNodesCount(IndicatorSpecification):
                                voluptuous.Range(min=0), required=True)

-class VmMigrationsCount(IndicatorSpecification):
+class InstanceMigrationsCount(IndicatorSpecification):
     def __init__(self):
-        super(VmMigrationsCount, self).__init__(
-            name="vm_migrations_count",
-            description=_("The number of migrations to be performed."),
+        super(InstanceMigrationsCount, self).__init__(
+            name="instance_migrations_count",
+            description=_("The number of VM migrations to be performed."),
             unit=None,
         )


@@ -34,14 +34,14 @@ class ServerConsolidation(base.EfficacySpecification):
     def get_indicators_specifications(self):
         return [
             indicators.ReleasedComputeNodesCount(),
-            indicators.VmMigrationsCount(),
+            indicators.InstanceMigrationsCount(),
         ]

     def get_global_efficacy_indicator(self, indicators_map):
         value = 0
-        if indicators_map.vm_migrations_count > 0:
+        if indicators_map.instance_migrations_count > 0:
             value = (float(indicators_map.released_compute_nodes_count) /
-                     float(indicators_map.vm_migrations_count)) * 100
+                     float(indicators_map.instance_migrations_count)) * 100
         return efficacy.Indicator(
             name="released_nodes_ratio",

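The global efficacy computed above is just the ratio of released compute nodes to planned instance migrations, expressed as a percentage. A self-contained sketch of the same arithmetic, with made-up numbers:

```python
def released_nodes_ratio(released_compute_nodes_count,
                         instance_migrations_count):
    # Mirrors get_global_efficacy_indicator(): the ratio is only
    # computed when at least one migration is planned; otherwise the
    # efficacy stays at 0 (avoiding a division by zero).
    value = 0
    if instance_migrations_count > 0:
        value = (float(released_compute_nodes_count) /
                 float(instance_migrations_count)) * 100
    return value


# 2 nodes released for 8 planned migrations -> 25.0 % efficacy
print(released_nodes_ratio(2, 8))
# no migrations planned -> 0
print(released_nodes_ratio(0, 0))
```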

@@ -84,11 +84,11 @@ class ServerConsolidation(base.Goal):
     @classmethod
     def get_display_name(cls):
-        return _("Server consolidation")
+        return _("Server Consolidation")

     @classmethod
     def get_translatable_display_name(cls):
-        return "Server consolidation"
+        return "Server Consolidation"

     @classmethod
     def get_efficacy_specification(cls):
@@ -108,11 +108,11 @@ class ThermalOptimization(base.Goal):
     @classmethod
     def get_display_name(cls):
-        return _("Thermal optimization")
+        return _("Thermal Optimization")

     @classmethod
     def get_translatable_display_name(cls):
-        return "Thermal optimization"
+        return "Thermal Optimization"

     @classmethod
     def get_efficacy_specification(cls):
@@ -132,11 +132,11 @@ class WorkloadBalancing(base.Goal):
     @classmethod
     def get_display_name(cls):
-        return _("Workload balancing")
+        return _("Workload Balancing")

     @classmethod
     def get_translatable_display_name(cls):
-        return "Workload balancing"
+        return "Workload Balancing"

     @classmethod
     def get_efficacy_specification(cls):
@@ -145,6 +145,10 @@ class WorkloadBalancing(base.Goal):
 class AirflowOptimization(base.Goal):
+    """AirflowOptimization
+
+    This goal is used to optimize the airflow within a cloud infrastructure.
+    """

     @classmethod
     def get_name(cls):
@@ -152,11 +156,11 @@ class AirflowOptimization(base.Goal):
     @classmethod
     def get_display_name(cls):
-        return _("Airflow optimization")
+        return _("Airflow Optimization")

     @classmethod
     def get_translatable_display_name(cls):
-        return "Airflow optimization"
+        return "Airflow Optimization"

     @classmethod
     def get_efficacy_specification(cls):


@@ -2,6 +2,8 @@
 # Copyright (c) 2015 b<>com
 #
 # Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
+#          Vincent FRANCOISE <vincent.francoise@b-com.com>
+#          Tomasz Kaczynski <tomasz.kaczynski@intel.com>
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -39,3 +41,21 @@ class DefaultPlannerLoader(default.DefaultLoader):
     def __init__(self):
         super(DefaultPlannerLoader, self).__init__(
             namespace='watcher_planners')
+
+
+class ClusterDataModelCollectorLoader(default.DefaultLoader):
+    def __init__(self):
+        super(ClusterDataModelCollectorLoader, self).__init__(
+            namespace='watcher_cluster_data_model_collectors')
+
+
+class DefaultScoringLoader(default.DefaultLoader):
+    def __init__(self):
+        super(DefaultScoringLoader, self).__init__(
+            namespace='watcher_scoring_engines')
+
+
+class DefaultScoringContainerLoader(default.DefaultLoader):
+    def __init__(self):
+        super(DefaultScoringContainerLoader, self).__init__(
+            namespace='watcher_scoring_engine_containers')


@@ -15,7 +15,6 @@
 # implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-#

 """
 This component is responsible for computing a set of potential optimization
@@ -40,6 +39,7 @@ See :doc:`../architecture` for more details on this component.
 from oslo_config import cfg

 from watcher.decision_engine.messaging import audit_endpoint
+from watcher.decision_engine.model.collector import manager

 CONF = cfg.CONF
@@ -47,26 +47,29 @@ CONF = cfg.CONF
 WATCHER_DECISION_ENGINE_OPTS = [
     cfg.StrOpt('conductor_topic',
                default='watcher.decision.control',
-               help='The topic name used for'
-                    'control events, this topic '
-                    'used for rpc call '),
+               help='The topic name used for '
+                    'control events, this topic '
+                    'used for RPC calls'),
     cfg.StrOpt('status_topic',
                default='watcher.decision.status',
                help='The topic name used for '
-                    'status events, this topic '
+                    'status events; this topic '
                     'is used so as to notify'
                     'the others components '
                     'of the system'),
+    cfg.ListOpt('notification_topics',
+                default=['versioned_notifications', 'watcher_notifications'],
+                help='The topic names from which notification events '
+                     'will be listened to'),
     cfg.StrOpt('publisher_id',
                default='watcher.decision.api',
-               help='The identifier used by watcher '
+               help='The identifier used by the Watcher '
                     'module on the message broker'),
     cfg.IntOpt('max_workers',
                default=2,
                required=True,
-               help='The maximum number of threads that can be used to '
-                    'execute strategies',
-               ),
+               help='The maximum number of threads that can be used to '
+                    'execute strategies'),
 ]

 decision_engine_opt_group = cfg.OptGroup(name='watcher_decision_engine',
                                          title='Defines the parameters of '
@@ -79,11 +82,19 @@ class DecisionEngineManager(object):
     API_VERSION = '1.0'

-    conductor_endpoints = [audit_endpoint.AuditEndpoint]
-    status_endpoints = []
-
     def __init__(self):
+        self.api_version = self.API_VERSION
         self.publisher_id = CONF.watcher_decision_engine.publisher_id
         self.conductor_topic = CONF.watcher_decision_engine.conductor_topic
         self.status_topic = CONF.watcher_decision_engine.status_topic
-        self.api_version = self.API_VERSION
+        self.notification_topics = (
+            CONF.watcher_decision_engine.notification_topics)
+        self.conductor_endpoints = [audit_endpoint.AuditEndpoint]
+        self.status_endpoints = []
+        self.collector_manager = manager.CollectorManager()
+        self.notification_endpoints = (
+            self.collector_manager.get_notification_endpoints())


@@ -30,6 +30,7 @@ LOG = log.getLogger(__name__)
 class AuditEndpoint(object):
     def __init__(self, messaging):
         self._messaging = messaging
         self._executor = futures.ThreadPoolExecutor(


@@ -0,0 +1,178 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
#
# Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
# Vincent FRANCOISE <vincent.francoise@b-com.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
A :ref:`Cluster Data Model <cluster_data_model_definition>` is a logical
representation of the current state and topology of the :ref:`Cluster
<cluster_definition>` :ref:`Managed resources <managed_resource_definition>`.
It is represented as a set of :ref:`Managed resources
<managed_resource_definition>` (which may be a simple tree or a flat list of
key-value pairs) which enables Watcher :ref:`Strategies <strategy_definition>`
to know the current relationships between the different :ref:`resources
<managed_resource_definition>` of the :ref:`Cluster <cluster_definition>`
during an :ref:`Audit <audit_definition>` and enables the :ref:`Strategy
<strategy_definition>` to request information such as:
- What compute nodes are in a given :ref:`Availability Zone
<availability_zone_definition>` or a given :ref:`Host Aggregate
<host_aggregates_definition>`?
- What :ref:`Instances <instance_definition>` are hosted on a given compute
node?
- What is the current load of a compute node?
- What is the current free memory of a compute node?
- What is the network link between two compute nodes?
- What is the available bandwidth on a given network link?
- What is the current space available on a given virtual disk of a given
:ref:`Instance <instance_definition>`?
- What is the current state of a given :ref:`Instance <instance_definition>`?
- ...
In a word, this data model enables the :ref:`Strategy <strategy_definition>`
to know:
- the current topology of the :ref:`Cluster <cluster_definition>`
- the current capacity for each :ref:`Managed resource
<managed_resource_definition>`
- the current amount of used/free space for each :ref:`Managed resource
<managed_resource_definition>`
- the current state of each :ref:`Managed resources
<managed_resource_definition>`
In the Watcher project, we aim at providing some generic and basic
:ref:`Cluster Data Models <cluster_data_model_definition>` for each :ref:`Goal
<goal_definition>`, usable in the associated :ref:`Strategies
<strategy_definition>` through a plugin-based mechanism and directly
accessible from the strategy classes in order to:
- simplify the development of a new :ref:`Strategy <strategy_definition>` for a
given :ref:`Goal <goal_definition>` when there already are some existing
:ref:`Strategies <strategy_definition>` associated to the same :ref:`Goal
<goal_definition>`
- avoid duplicating the same code in several :ref:`Strategies
<strategy_definition>` associated to the same :ref:`Goal <goal_definition>`
- have a better consistency between the different :ref:`Strategies
<strategy_definition>` for a given :ref:`Goal <goal_definition>`
- avoid any strong coupling with any external :ref:`Cluster Data Model
<cluster_data_model_definition>` (the proposed data model acts as a pivot
data model)
There may be various :ref:`generic and basic Cluster Data Models
<cluster_data_model_definition>` proposed in Watcher helpers, each of them
being adapted to achieving a given :ref:`Goal <goal_definition>`:
- For example, for a :ref:`Goal <goal_definition>` which aims at optimizing
the network :ref:`resources <managed_resource_definition>` the :ref:`Strategy
<strategy_definition>` may need to know which :ref:`resources
<managed_resource_definition>` are communicating together.
- Whereas for a :ref:`Goal <goal_definition>` which aims at optimizing thermal
and power conditions, the :ref:`Strategy <strategy_definition>` may need to
know the location of each compute node in the racks and the location of each
rack in the room.
Note however that a developer can use his/her own :ref:`Cluster Data Model
<cluster_data_model_definition>` if the proposed data model does not fit
his/her needs as long as the :ref:`Strategy <strategy_definition>` is able to
produce a :ref:`Solution <solution_definition>` for the requested :ref:`Goal
<goal_definition>`. For example, a developer could rely on the Nova Data Model
to optimize some compute resources.
The :ref:`Cluster Data Model <cluster_data_model_definition>` may be persisted
in any appropriate storage system (SQL database, NoSQL database, JSON file,
XML File, In Memory Database, ...). As of now, an in-memory model is built and
maintained in the background in order to accelerate the execution of
strategies.
"""
import abc
import copy
import threading
from oslo_config import cfg
import six
from watcher.common import clients
from watcher.common.loader import loadable
from watcher.decision_engine.model import model_root
@six.add_metaclass(abc.ABCMeta)
class BaseClusterDataModelCollector(loadable.LoadableSingleton):
STALE_MODEL = model_root.ModelRoot(stale=True)
def __init__(self, config, osc=None):
super(BaseClusterDataModelCollector, self).__init__(config)
self.osc = osc if osc else clients.OpenStackClients()
self._cluster_data_model = None
self.lock = threading.RLock()
@property
def cluster_data_model(self):
if self._cluster_data_model is None:
self.lock.acquire()
self._cluster_data_model = self.execute()
self.lock.release()
return self._cluster_data_model
@cluster_data_model.setter
def cluster_data_model(self, model):
self.lock.acquire()
self._cluster_data_model = model
self.lock.release()
@abc.abstractproperty
def notification_endpoints(self):
"""Associated notification endpoints
:return: Associated notification endpoints
:rtype: List of :py:class:`~.EventsNotificationEndpoint` instances
"""
raise NotImplementedError()
def set_cluster_data_model_as_stale(self):
self.cluster_data_model = self.STALE_MODEL
@abc.abstractmethod
def execute(self):
"""Build a cluster data model"""
raise NotImplementedError()
@classmethod
def get_config_opts(cls):
return [
cfg.IntOpt(
'period',
default=3600,
help='The time interval (in seconds) between each '
'synchronization of the model'),
]
def get_latest_cluster_data_model(self):
return copy.deepcopy(self.cluster_data_model)
def synchronize(self):
"""Synchronize the cluster data model
Whenever called this synchronization will perform a drop-in replacement
with the existing cluster data model
"""
self.cluster_data_model = self.execute()

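The `cluster_data_model` property above builds the model lazily on first access and guards updates with a re-entrant lock; `synchronize()` then swaps in a fresh model wholesale. A minimal sketch of that pattern (a toy class, not Watcher's API; unlike the original, it also performs the `None` check while holding the lock, which closes a small check-then-act window):

```python
import threading


class LazyModelHolder:
    """Minimal sketch of the collector's lazy, lock-guarded model cache.

    The model (here just a dict) is built on first access and replaced
    wholesale by synchronize(), mirroring BaseClusterDataModelCollector.
    """

    def __init__(self):
        self._model = None
        self.lock = threading.RLock()
        self.build_count = 0

    def execute(self):
        # Stand-in for the expensive model build (e.g. querying Nova).
        self.build_count += 1
        return {"build": self.build_count}

    @property
    def model(self):
        # RLock is re-entrant, so the holding thread may recurse safely.
        with self.lock:
            if self._model is None:
                self._model = self.execute()
        return self._model

    @model.setter
    def model(self, new_model):
        with self.lock:
            self._model = new_model

    def synchronize(self):
        # Drop-in replacement of the whole model, as in the collector.
        self.model = self.execute()


holder = LazyModelHolder()
print(holder.model)   # built once on first access
print(holder.model)   # cached: no rebuild
holder.synchronize()  # forced rebuild
print(holder.model)
```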

@@ -0,0 +1,61 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
#
# Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
# Vincent FRANCOISE <vincent.francoise@b-com.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from watcher.common import utils
from watcher.decision_engine.loading import default
class CollectorManager(object):
def __init__(self):
self.collector_loader = default.ClusterDataModelCollectorLoader()
self._collectors = None
self._notification_endpoints = None
def get_collectors(self):
if self._collectors is None:
collectors = utils.Struct()
available_collectors = self.collector_loader.list_available()
for collector_name in available_collectors:
collector = self.collector_loader.load(collector_name)
collectors[collector_name] = collector
self._collectors = collectors
return self._collectors
def get_notification_endpoints(self):
if self._notification_endpoints is None:
endpoints = []
for collector in self.get_collectors().values():
endpoints.extend(collector.notification_endpoints)
self._notification_endpoints = endpoints
return self._notification_endpoints
def get_cluster_model_collector(self, name, osc=None):
"""Retrieve cluster data model collector
:param name: name of the cluster data model collector plugin
:type name: str
:param osc: an OpenStackClients instance
:type osc: :py:class:`~.OpenStackClients` instance
:returns: cluster data model collector plugin
:rtype: :py:class:`~.BaseClusterDataModelCollector`
"""
return self.collector_loader.load(name, osc=osc)

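`CollectorManager` memoizes both the loaded collector plugins and the flattened list of their notification endpoints, so plugin discovery happens at most once per process. A toy sketch of that caching shape (all class names here are illustrative stand-ins, not real Watcher or stevedore APIs):

```python
class FakeCollector:
    """Illustrative stand-in for a cluster data model collector plugin."""

    def __init__(self, name, endpoints):
        self.name = name
        self.notification_endpoints = endpoints


class CollectorManagerSketch:
    """Mirrors CollectorManager's lazy caching of collectors/endpoints."""

    def __init__(self, available):
        self._available = available      # name -> collector (toy registry)
        self._collectors = None
        self._notification_endpoints = None
        self.load_count = 0

    def get_collectors(self):
        # Loaded once, then memoized, exactly like the real manager.
        if self._collectors is None:
            self.load_count += 1
            self._collectors = dict(self._available)
        return self._collectors

    def get_notification_endpoints(self):
        # Flatten each collector's endpoint list into a single list.
        if self._notification_endpoints is None:
            endpoints = []
            for collector in self.get_collectors().values():
                endpoints.extend(collector.notification_endpoints)
            self._notification_endpoints = endpoints
        return self._notification_endpoints


nova = FakeCollector("nova", ["service.update", "instance.update"])
mgr = CollectorManagerSketch({"nova": nova})
print(mgr.get_notification_endpoints())
print(mgr.get_notification_endpoints())  # cached: registry read only once
```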

@@ -0,0 +1,113 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
#
# Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher.common import nova_helper
from watcher.decision_engine.model.collector import base
from watcher.decision_engine.model import element
from watcher.decision_engine.model import model_root
from watcher.decision_engine.model.notification import nova
LOG = log.getLogger(__name__)
class NovaClusterDataModelCollector(base.BaseClusterDataModelCollector):
"""nova
*Description*
This Nova cluster data model collector creates an in-memory representation
of the resources exposed by the compute service.
*Spec URL*
<None>
"""
def __init__(self, config, osc=None):
super(NovaClusterDataModelCollector, self).__init__(config, osc)
self.wrapper = nova_helper.NovaHelper(osc=self.osc)
@property
def notification_endpoints(self):
"""Associated notification endpoints
:return: Associated notification endpoints
:rtype: List of :py:class:`~.EventsNotificationEndpoint` instances
"""
return [
nova.ServiceUpdated(self),
nova.InstanceCreated(self),
nova.InstanceUpdated(self),
nova.InstanceDeletedEnd(self),
nova.LegacyInstanceCreatedEnd(self),
nova.LegacyInstanceUpdated(self),
nova.LegacyInstanceDeletedEnd(self),
nova.LegacyLiveMigratedEnd(self),
]
def execute(self):
"""Build the compute cluster data model"""
LOG.debug("Building latest Nova cluster data model")
model = model_root.ModelRoot()
mem = element.Resource(element.ResourceType.memory)
num_cores = element.Resource(element.ResourceType.cpu_cores)
disk = element.Resource(element.ResourceType.disk)
disk_capacity = element.Resource(element.ResourceType.disk_capacity)
model.create_resource(mem)
model.create_resource(num_cores)
model.create_resource(disk)
model.create_resource(disk_capacity)
flavor_cache = {}
nodes = self.wrapper.get_compute_node_list()
for n in nodes:
service = self.wrapper.nova.services.find(id=n.service['id'])
# create node in cluster_model_collector
node = element.ComputeNode()
node.uuid = service.host
node.hostname = n.hypervisor_hostname
# set capacity
mem.set_capacity(node, n.memory_mb)
disk.set_capacity(node, n.free_disk_gb)
disk_capacity.set_capacity(node, n.local_gb)
num_cores.set_capacity(node, n.vcpus)
node.state = n.state
node.status = n.status
model.add_node(node)
instances = self.wrapper.get_instances_by_node(str(service.host))
for v in instances:
# create VM in cluster_model_collector
instance = element.Instance()
instance.uuid = v.id
# nova/nova/compute/instance_states.py
instance.state = getattr(v, 'OS-EXT-STS:vm_state')
# set capacity
self.wrapper.get_flavor_instance(v, flavor_cache)
mem.set_capacity(instance, v.flavor['ram'])
disk.set_capacity(instance, v.flavor['disk'])
num_cores.set_capacity(instance, v.flavor['vcpus'])
model.map_instance(instance, node)
return model


@@ -0,0 +1,38 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
#
# Authors: Vincent FRANCOISE <vincent.francoise@b-com.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from watcher.decision_engine.model.element import disk_info
from watcher.decision_engine.model.element import instance
from watcher.decision_engine.model.element import node
from watcher.decision_engine.model.element import resource
ServiceState = node.ServiceState
ComputeNode = node.ComputeNode
InstanceState = instance.InstanceState
Instance = instance.Instance
DiskInfo = disk_info.DiskInfo
ResourceType = resource.ResourceType
Resource = resource.Resource
__all__ = [
'ServiceState', 'ComputeNode', 'InstanceState', 'Instance',
'DiskInfo', 'ResourceType', 'Resource']


@@ -1,11 +1,13 @@
 # -*- encoding: utf-8 -*-
-# Copyright (c) 2015 b<>com
+# Copyright (c) 2016 b<>com
+#
+# Authors: Vincent FRANCOISE <vincent.francoise@b-com.com>
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
 #
 #    http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,11 +16,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import enum
+import abc
+
+import six

-class HypervisorState(enum.Enum):
-    ONLINE = 'up'
-    OFFLINE = 'down'
-    ENABLED = 'enabled'
-    DISABLED = 'disabled'
+@six.add_metaclass(abc.ABCMeta)
+class Element(object):
+
+    @abc.abstractmethod
+    def accept(self, visitor):
+        raise NotImplementedError()


@@ -14,8 +14,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import abc
+
+import six
+
+from watcher.decision_engine.model.element import base

-class ComputeResource(object):
+@six.add_metaclass(abc.ABCMeta)
+class ComputeResource(base.Element):

     def __init__(self):
         self._uuid = ""

Some files were not shown because too many files have changed in this diff.