Compare commits


116 Commits

Author SHA1 Message Date
Jenkins
3d398b4d22 Merge "Documentation for plugins-parameters" 2016-05-30 09:41:53 +00:00
Jenkins
585fbeb9ee Merge "Watcher plugins table in Guru meditation reports" 2016-05-30 09:41:49 +00:00
Jenkins
e9f237dc80 Merge "Enabled config parameters to plugins" 2016-05-30 09:40:30 +00:00
Vincent Françoise
38f6700144 Documentation for plugins-parameters
In this changeset, I updated the documentation to explain how to
add configuration options for each type of plugin.

Partially Implements: plugins-parameters

Change-Id: Ifd373da64207110492b4a62f1cb7f13b029a45d2
2016-05-30 10:28:29 +02:00
junjie huang
30bdf29002 Workload balance migration strategy implementation
This is one of the algorithms from the Intel thermal POC.
It is based on the VM workloads of the hypervisors.

Change-Id: I45ab0cf0f05786e6f68025bdd315f38381900a68
blueprint: workload-balance-migration-strategy
2016-05-30 07:52:05 +00:00
Vincent Françoise
e6f147d81d Watcher plugins table in Guru meditation reports
In this changeset, I added the list of all the available plugins
for the current instance of any given Watcher service.

Partially Implements: blueprint plugins-parameters

Change-Id: I58c9724a229712b0322a578f0f89a61b38dfd80a
2016-05-30 09:48:37 +02:00
Vincent Françoise
5aa6b16238 Enabled config parameters to plugins
In this changeset, I added the possibility for all plugins to define
configuration parameters for themselves.

Partially Implements: blueprint plugins-parameters

Change-Id: I676b2583b3b4841c64c862b2b0c234b4eb5fd0fd
2016-05-30 09:48:34 +02:00
Jenkins
dcb5c1f9fc Merge "Add Overload standard deviation strategy" 2016-05-30 07:10:17 +00:00
Jenkins
4ba01cbbcf Merge "Update Watcher documentation" 2016-05-27 15:58:01 +00:00
Jenkins
d91d72d2c2 Merge "Add goal name as filter for strategy list cmd" 2016-05-27 15:51:22 +00:00
Jenkins
083bc2bed4 Merge "Add goal_name & strategy_name in /audit_templates" 2016-05-27 15:51:14 +00:00
Alexander Chadin
9d3671af37 Add Overload standard deviation strategy
The main purpose of this strategy is to choose the VM:dest_host pair that
best minimizes the standard deviation of load across the cluster.

Change-Id: I95a31b7bcab83411ef6b6e1e01818ca21ef96883
Implements: blueprint watcher-overload-sd
2016-05-27 16:16:36 +03:00
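The selection rule this strategy describes can be sketched as follows (a minimal illustration with hypothetical names, not the actual Watcher strategy class): for every candidate VM and destination host, simulate the move and keep the pair that yields the lowest cluster-wide standard deviation of load.

```python
import statistics


def best_migration(host_loads, vm_loads, src_host):
    """Pick the (vm, dest_host) pair whose migration minimizes the
    population standard deviation of per-host load in the cluster."""
    best = None
    best_sd = statistics.pstdev(host_loads.values())  # current cluster sd
    for vm, load in vm_loads.items():
        for dest in host_loads:
            if dest == src_host:
                continue
            # Simulate moving this VM's load from src_host to dest.
            after = dict(host_loads)
            after[src_host] -= load
            after[dest] += load
            sd = statistics.pstdev(after.values())
            if sd < best_sd:
                best_sd, best = sd, (vm, dest)
    return best, best_sd
```

For example, with one overloaded host and two idle ones, the sketch picks the migration that evens out the load the most.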
Jenkins
3b88e37680 Merge "Added cold VM migration support" 2016-05-27 12:33:46 +00:00
David TARDIVEL
6eee64502f Add goal name as filter for strategy list cmd
This changeset adds the possibility to use the goal name as a
strategy list filter.

Change-Id: Ibaf45e694f115308f19e9bcd3023fe2e6d1750cd
2016-05-27 11:20:55 +02:00
David TARDIVEL
b9231f65cc Update Watcher documentation
We introduced a new Watcher plugin for the OpenStack CLI. This patchset
updates the Watcher documentation and schemas accordingly.

Partially Implements: blueprint openstackclient-plugin

Change-Id: Ib00469c8645fff21f5ba95951379827dbd359c69
2016-05-27 09:40:06 +02:00
OpenStack Proposal Bot
b9b505a518 Updated from global requirements
Change-Id: Ia90ce890fac40ddb6d38effd022ca71e9a7fc52f
2016-05-26 17:07:34 +00:00
cima
388ef9f11c Added cold VM migration support
Cold migration enables migrating VMs which are not in an active state
(e.g. stopped). Cold migration can also be used to migrate an active VM,
although the VM is shut down and hence inaccessible while migrating.

Change-Id: I89ad0a04d41282431c9773f6ae7feb41573368e3
Closes-Bug: #1564297
2016-05-24 13:26:45 +02:00
Jenkins
4e3caaa157 Merge "Fixed flaky tempest test" 2016-05-24 09:25:15 +00:00
Vincent Françoise
8c6bf734af Add goal_name & strategy_name in /audit_templates
In this changeset, I added both the 'goal_name' and the 'strategy_name'
fields.

Change-Id: Ic164df84d4e23ec75b2b2f4b358cf827d0ad7fa5
Related-Bug: #1573582
2016-05-24 11:09:48 +02:00
Vincent Françoise
277a749ca0 Fix lazy translation issue with watcher-db-manage
In this changeset, I fix the issue caused by the use of lazy
translations within the 'watcher-db-manage purge' subcommand.
The problem comes from the PrettyTable dependency, which performs
addition operations to format its tables, while the __add__ magic
method is not supported by oslo_i18n._message.Message objects.

Change-Id: Idd590e882c697957cfaf1849c3d51b52797230f6
Closes-Bug: #1584652
2016-05-23 14:35:17 +02:00
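The failure mode and the fix can be reproduced with a stand-in class (`LazyMessage` and `safe_cell` are illustrative names only; the real objects live in oslo_i18n and PrettyTable):

```python
class LazyMessage:
    """Stand-in for oslo_i18n's Message: supports str() but not '+'."""

    def __init__(self, msgid):
        self.msgid = msgid

    def __str__(self):
        return self.msgid

    # No __add__ defined: LazyMessage("x") + "y" raises TypeError,
    # which is what broke PrettyTable's column formatting.


def safe_cell(value):
    # Coerce any lazily translated value to a plain string before
    # handing it to a library that concatenates with '+'.
    return str(value)
```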
Vincent Françoise
8401b5e479 Fixed flaky tempest test
In this changeset, I fixed the test_create_audit_with_no_state
tempest test which was randomly failing because of a race condition.

Change-Id: Ibda49944c79fcd406fa81870dbbff6064b5dc4fa
2016-05-23 14:32:44 +02:00
Vincent Françoise
78689fbe3b Removed telemetry tag from tempest tests
Since telemetry was removed from tempest, this changeset removes the
telemetry tags from the watcher integration tests

Change-Id: I6229ee23740c3d92a66fc04c8de8b0ed25911022
2016-05-23 09:22:31 +00:00
OpenStack Proposal Bot
22abaa9c3a Updated from global requirements
Change-Id: I2506d35432748691fb53f8540aac43d1656a67a3
2016-05-21 15:53:55 +00:00
Alexander Chadin
fb82131d85 Fix for statistic_aggregation
This patch set fixes the aggregate parameter of the statistic_aggregation
function so that it can be used with any aggregate function.

Change-Id: If586d656aadd3d098a1610a97a2f315e70351de5
Closes-Bug: #1583610
2016-05-19 16:41:07 +03:00
Jenkins
f6f5079adb Merge "Watcher DB class diagram" 2016-05-18 12:36:17 +00:00
Jenkins
f045f5d816 Merge "Updated from global requirements" 2016-05-13 07:06:35 +00:00
Jenkins
b77541deb2 Merge "[nova_helper] get keypair name by every admin users" 2016-05-13 06:33:42 +00:00
OpenStack Proposal Bot
3b9d72439c Updated from global requirements
Change-Id: I0f4fe97bdfa872074964a10535db868354d926da
2016-05-11 17:29:55 +00:00
Jenkins
89aa2d54df Merge "Added .pot file" 2016-05-11 15:39:13 +00:00
Jenkins
86f4cee588 Merge "Remove [watcher_goals] config section" 2016-05-11 15:32:36 +00:00
Jenkins
04ac509821 Merge "Remove watcher_goals section from devstack plugin" 2016-05-11 15:32:35 +00:00
Jenkins
4ba9d2cb73 Merge "Documentation update for get-goal-from-strategy" 2016-05-11 15:32:28 +00:00
Jenkins
a71c9be860 Merge "Updated purge to now include goals and strategies" 2016-05-11 15:32:22 +00:00
Jenkins
c2cb1a1f8e Merge "Syncer now syncs stale audit templates" 2016-05-11 15:32:17 +00:00
Jenkins
79bdcf7baf Merge "Add strategy_id & goal_id fields in audit template" 2016-05-11 15:32:10 +00:00
Jenkins
de1b1a9938 Merge "Refactored Strategy selector to select from DB" 2016-05-11 15:32:07 +00:00
Jenkins
031ebdecde Merge "Added /strategies endpoint in Watcher API" 2016-05-11 15:32:01 +00:00
Jenkins
daabe671c7 Merge "Add Goal in BaseStrategy + Goal API reads from DB" 2016-05-11 15:31:58 +00:00
Jenkins
26bc3d139d Merge "DB sync for Strategies" 2016-05-11 15:31:39 +00:00
Jenkins
4d2536b9b2 Merge "Added Strategy model" 2016-05-11 15:30:46 +00:00
Jenkins
4388780e66 Merge "Added Goal object + goal syncing" 2016-05-11 15:29:21 +00:00
Jenkins
d03a9197b0 Merge "Added Goal model into Watcher DB" 2016-05-11 15:28:28 +00:00
Jean-Emile DARTOIS
43f5ab18ba Fix documentation watcher sql database
This changeset fixes the documentation issue with the watcher-db-manage parameters.

Change-Id: I668edd85e3ea40c2a309caacbf68cf35bfd680f7
Closes-Bug: #1580617
2016-05-11 15:58:46 +02:00
Vincent Françoise
209176c3d7 Watcher DB class diagram
In this changeset, I added a class diagram representing the
database schema of Watcher.

Change-Id: I2257010d0040a3f40279ec9db2967f0e69384b62
2016-05-11 15:52:54 +02:00
Vincent Françoise
1a21867735 Added .pot file
In this changeset, I just generated the .pot file for all the new
translations that were added during the implementation of this BP.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: I2192508afda037510f8f91092c5cfde0115dae1d
2016-05-11 15:48:09 +02:00
Vincent Françoise
5f6a97148f Remove [watcher_goals] config section
In this changeset, I remove the now unused [watcher_goals] section.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: I91e4e1ac3a58bb6f3e30b11449cf1a6eb18cd0ca
2016-05-11 15:48:09 +02:00
Vincent Françoise
e6b23a0856 Remove watcher_goals section from devstack plugin
In this changeset, I removed the now useless [watcher_goals] section
from the devstack plugin.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: Iaa986f426dc47f6cbd04e74f16b67670e3563967
2016-05-11 15:48:09 +02:00
Vincent Françoise
f9a1b9d3ce Documentation update for get-goal-from-strategy
In this changeset, I updated the Watcher documentation to reflect
the changes that are introduced by this blueprint.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: I40be39624097365220bf7d94cbe177bbf5bbe0ed
2016-05-11 15:48:02 +02:00
Vincent Françoise
ff611544fb Updated purge to now include goals and strategies
In this changeset, I updated the purge script to now take into
account the registered goals and strategies.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: I2f1d58bb812fa45bc4bc6467760a071d8612e6a4
2016-05-11 15:31:02 +02:00
Vincent Françoise
18e5c7d844 Syncer now syncs stale audit templates
In this changeset, I introduce the syncing of audit templates.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: Ie394c12fe51f73eff95465fd5140d82ebd212599
2016-05-11 15:31:02 +02:00
Vincent Françoise
2966b93777 Add strategy_id & goal_id fields in audit template
In this changeset, I updated the 'goal_id' field of the AuditTemplate
model to become a mandatory foreign key to the Goal model. I also
added the 'strategy_id' field to the AuditTemplate model as an
optional foreign key to the Strategy model.

This changeset also includes an update of the /audit_template
Watcher API endpoint to reflect the previous changes.

As this changeset changes the API, this should be merged alongside the
related changeset from python-watcherclient.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: Ic0573d036d1bbd7820f8eb963e47912d6b3ed1a9
2016-05-11 15:31:02 +02:00
Vincent Françoise
e67b532110 Refactored Strategy selector to select from DB
In this changeset, I refactored the strategy selector to now
look into the Watcher DB instead of looking into the configuration
file.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: I2bcb63542f6237f26796a3e5a781c8b62820cf6f
2016-05-11 15:31:01 +02:00
Vincent Françoise
81765b9aa5 Added /strategies endpoint in Watcher API
In this changeset, I added the /strategies endpoint to the Watcher
API service.
This also includes the related Tempest tests.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: I1b70836e0df2082ab0016ecc207e89fdcb0fc8b9
2016-05-11 15:31:01 +02:00
Vincent Françoise
673642e436 Add Goal in BaseStrategy + Goal API reads from DB
In this changeset, I changed the Strategy base class to add new
abstract class methods. I also added an abstract strategy class
per Goal type (dummy, server consolidation, thermal optimization).

This changeset also includes an update of the /goals Watcher API
endpoint to now use the new Goal model (DB entries) instead of
reading from the configuration file.

Partially Implements: blueprint get-goal-from-strategy
Change-Id: Iecfed58c72f3f9df4e9d27e50a3a274a1fc0a75f
2016-05-11 15:31:00 +02:00
Jenkins
1026a896e2 Merge "Log "https" if using SSL" 2016-05-11 13:24:43 +00:00
Vincent Françoise
a3ac26870a DB sync for Strategies
In this changeset, I added the ability to synchronize the strategies
into the Watcher DB so that they can later be served through the Watcher
API.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: Ifeaa1f6e1f4ff7d7efc1b221cf57797a49dc5bc5
2016-05-11 15:19:40 +02:00
Vincent Françoise
192d8e262c Added Strategy model
In this changeset, I add the Strategy model as well as the DB
functionalities we need to manipulate strategies.

This changeset implies a DB schema update.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: I438a8788844fbc514edfe1e9e3136f46ba5a82f2
2016-05-11 15:19:40 +02:00
Vincent Françoise
3b5ef15db6 Added Goal object + goal syncing
In this changeset, I added the Goal object into Watcher along with
a sync module that is responsible for syncing the goals with the
Watcher DB.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: Ia3a2032dd9023d668c6f32ebbce44f8c1d77b0a3
2016-05-11 15:19:40 +02:00
Vincent Françoise
be9058f3e3 Added Goal model into Watcher DB
In this changeset, I added the Goal model into Watcher.
This implies a change to the Watcher DB schema.

Partially Implements: blueprint get-goal-from-strategy

Change-Id: I5b5b0ffc7cff8affb59f17743e1af0e1277c2878
2016-05-11 15:19:40 +02:00
Jenkins
91951f3b01 Merge "Refactored DE and Applier to use oslo.service" 2016-05-11 13:14:52 +00:00
Jenkins
57a2af2685 Merge "Refactored Watcher API service" 2016-05-11 13:14:29 +00:00
Yosef Hoffman
76e3d2e2f6 Log "https" if using SSL
When starting the Watcher API service, the URI it serves is shown in
a log message. In this log message (in watcher/cmd/api.py), take into
account the case where SSL has been enabled with CONF.api.enable_ssl_api
set to True and format the log message accordingly.

Change-Id: I98541810139d9d4319ac89f21a5e0bc25454ee62
Closes-Bug: #1580044
2016-05-10 11:44:56 -04:00
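A minimal sketch of the scheme selection (`api_url` is a hypothetical helper written for illustration; the real code lives in watcher/cmd/api.py and reads CONF.api.enable_ssl_api):

```python
def api_url(host, port, enable_ssl_api):
    """Build the URI shown in the startup log line.

    Chooses the scheme based on the SSL flag, mirroring the fix
    described in the commit message.
    """
    scheme = "https" if enable_ssl_api else "http"
    return "%s://%s:%s" % (scheme, host, port)
```

The log call would then interpolate `api_url(...)` instead of a hard-coded `http://` prefix.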
Jenkins
bd5a969a26 Merge "Remove using of UUID field in POST methods of Watcher API" 2016-04-29 02:57:34 +00:00
zhangguoqing
d61bf5f053 [nova_helper] get keypair name by every admin users
Since the bug #1182965 has been fixed, allow admin users
to view any keypair.

Change-Id: I9cf948701515afd45e6720cfd15cfac6b5866aa5
2016-04-25 21:20:16 +08:00
Alexander Chadin
aaaf3f1c84 Remove using of UUID field in POST methods of Watcher API
This patch set removes the possibility of using the UUID field
in POST methods of the Watcher API.

Closes-Bug: #1572625

Change-Id: I88a8aa5346e937e3e9409b55da3316cbe1ed832a
2016-04-25 16:05:59 +03:00
Vincent Françoise
eb861f86ab Refactored DE and Applier to use oslo.service
In this PS, I have refactored the Decision Engine and the Applier
to use the oslo service utility.

Change-Id: If29158cc9b5e5e50f6c69d67c232cceeb07084f2
Closes-Bug: #1541850
2016-04-22 10:33:21 +02:00
Vincent Françoise
a9e7251d0d Refactored Watcher API service
This patchset introduces the use of oslo.service to run the
Watcher API service.

Change-Id: I6c38a3c1a2b4dc47388876e4c0ba61b7447690bd
Related-Bug: #1541850
2016-04-22 10:33:21 +02:00
OpenStack Proposal Bot
4ff373197c Updated from global requirements
Change-Id: I04865b9e63d6fc805802b6057ba9750116849c98
2016-04-19 12:30:29 +00:00
Jenkins
87087e9add Merge "Removed unused 'alarm' field" 2016-04-19 08:02:21 +00:00
Larry Rensing
408d6d4650 Removed unused 'alarm' field
The 'alarm' field is currently unused, so it has been removed.

Change-Id: I02fa15b06ed49dbc5dd63de54a9cde601413983c
Closes-Bug: #1550261
2016-04-18 14:12:12 +00:00
Alexander Chadin
e52dc4f8aa Add parameters verification when Audit is being created
We have to check the Audit Type and Audit State to make sure
these parameters hold valid values.

Also, we provide defaults and constraints for the following attributes:

- 'audit_template' is required and should be either UUID or text field
- 'state' is readonly so it raises an error if submitted in POST
  and is set by default to PENDING
- 'deadline' is optional and should be a datetime
- 'type' is a required text field

Change-Id: I2a7e0deec0ee2040e86400b500bb0efd8eade564
Closes-Bug: #1532843
Closes-Bug: #1533210
2016-04-14 15:43:26 +03:00
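Two of these checks can be sketched as follows (`validate_audit_post` and the ONESHOT/CONTINUOUS type set are assumptions for illustration, not the actual Watcher API code):

```python
# Assumed audit types for the sketch; the real set lives in Watcher.
VALID_AUDIT_TYPES = {"ONESHOT", "CONTINUOUS"}


def validate_audit_post(payload):
    """Reject invalid audit-creation requests (illustrative sketch)."""
    if "state" in payload:
        # 'state' is read-only: submitting it in a POST is an error.
        raise ValueError("'state' is read-only; it defaults to PENDING")
    if not payload.get("audit_template"):
        raise ValueError("'audit_template' (UUID or name) is required")
    if payload.get("type") not in VALID_AUDIT_TYPES:
        raise ValueError("'type' must be one of %s"
                         % sorted(VALID_AUDIT_TYPES))
    # 'state' is set by default to PENDING.
    return dict(payload, state="PENDING")
```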
Jenkins
0f14b7635d Merge "correct the available disk, memory calculating Source data are misused in outlet temperature strategy. This patch fixes it." 2016-04-12 07:08:15 +00:00
junjie huang
bb77641aad correct the available disk, memory calculating
Source data are misused in outlet temperature strategy. This patch
fixes it.

Change-Id: I8ad5c974d7674ddfe6c4c9e3a6e3029d34a400db
Closes-bug: #1569114
2016-04-11 17:53:54 +00:00
Vincent Françoise
77228a0b0a Upgrade Watcher Tempest tests for multinode
Change-Id: I4b84ba9814227776232c8ab883cdaaf411930ee6
2016-04-11 16:49:10 +02:00
Jenkins
1157a8db30 Merge "Fix for deleting audit template" 2016-04-08 18:58:07 +00:00
Jenkins
18354d1b4e Merge "Update .coveragerc to ignore abstract methods" 2016-04-08 18:57:11 +00:00
Larry Rensing
8387cd10de Update .coveragerc to ignore abstract methods
Due to importing modules rather than functions and decorators directly,
@abc.abstract and 'raise NotImplementedError' were added to the
.coveragerc file.  Since abstract methods are not testable, this will
give us a more accurate representation of our coverage.

Change-Id: Id5ed5e1f5e142d10f41ad18d20228399226ec20d
Co-Authored-By: Jin Li <jl7351@att.com>
Closes-Bug: #1563717
2016-04-08 16:59:57 +00:00
OpenStack Proposal Bot
0449bae747 Updated from global requirements
Change-Id: Ieead2a4c784c248bd6af821f5e1e84c5e6cd3b5a
2016-04-07 17:25:39 +00:00
Alexander Chadin
3e07844844 Fix for deleting audit template
We need to update sqlalchemy/api and sqlalchemy/models (and appropriate tests)
to support deleting audit templates and recreating them with the same names.

Change-Id: Icf54cf1ed989a3f2ad689e25be4474b16a3a3eb2
Related-Bug: #1510179
2016-04-07 11:27:53 +03:00
zhangguoqing
a52d92be87 Remove unused logging import and LOG global var
In some modules, the global LOG variable and the logging import are
no longer used. This patch removes the unused logging imports and
LOG variables.

Change-Id: I794ee719d76f04e70154cf67f726152fbb1ba15a
2016-04-06 10:34:39 +08:00
OpenStack Proposal Bot
96683a6133 Updated from global requirements
Change-Id: Ib98e484bb216eb31b64931db735ced8d1de738a4
2016-04-05 13:44:07 +00:00
Jenkins
46d5094add Merge "Added information on plugin mechanism to glossary" 2016-04-05 07:52:55 +00:00
Jenkins
783c7c0177 Merge "Disabled PATCH, POST and DELETE for /actions" 2016-04-05 07:30:23 +00:00
Jenkins
6d0717199c Merge "Invalid states for Action Plan in the glossary" 2016-04-05 07:27:12 +00:00
cima
8b77e78f3d Added missing support for resource states in unicode format in VM workload consolidation strategy
Unicode resource states are now handled in the same fashion as resource
states specified as plain strings.

Change-Id: I35ffa09015283b51c935515436735aecbe83a9d6
Closes-Bug: #1565764
2016-04-04 15:17:35 +02:00
Vincent Françoise
22c9c4df87 Disabled PATCH, POST and DELETE for /actions
I removed the POST, PATCH and DELETE verbs from the actions
controller as they should only be modified internally.

Change-Id: Ia72484249240f829423056f66c5c0f9632d02106
Closes-Bug: #1533281
2016-03-30 10:10:28 +02:00
Jenkins
99ff6d3348 Merge "Integrated consolidation strategy with watcher" 2016-03-29 15:36:28 +00:00
Tin Lam
c67f83cce0 Added information on plugin mechanism to glossary
Added extra information regarding the plugin mechanism for:
action, strategy, and Watcher planner.

Change-Id: I9a7523282e229b83c16b06e3806ff795a0699c78
Closes-Bug: #1558470
2016-03-24 18:42:17 -05:00
Larry Rensing
397bb3497e Invalid states for Action Plan in the glossary
The list of possible states for Action Plan objects was outdated, and
was updated to match the state machine diagram.  A reference to the
state machines for Audits and Action Plans was added to the glossary,
and the descriptions of each state were moved to the sections containing
the state machines within the Architecture page.

Change-Id: I27043ad864c02fff50fb31868b27dc4b4897dbd4
Closes-Bug: #1558464
2016-03-24 15:14:42 +00:00
Bruno Grazioli
4c924fc505 Integrated consolidation strategy with watcher
This patch adds a new load consolidation strategy based on a heuristic
algorithm which focuses on measured CPU utilization and tries to
minimize hosts which have too much or too little load.
A new goal "vm_workload_consolidation" was added which executes
the strategy "VM_WORKLOAD_CONSOLIDATION".
This work depends on the implementation of the fix for the bug:
https://bugs.launchpad.net/watcher/+bug/1553124

Change-Id: Ide05bddb5c85a3df05b94658ee5bd98f32e554b0
Implements: blueprint basic-cloud-consolidation-integration
2016-03-24 12:00:01 +01:00
jaugustine
4c5ecc808d Added oslo.context to requirements.txt
Added missing dependency oslo.context to requirements.txt

Change-Id: I88c42fd2381bad55ff499e096a93dcc2cc1d44e5
Closes-Bug: #1560976
2016-03-23 10:45:23 -05:00
Jenkins
64b5a7c3e4 Merge "Updated action-plugin doc to refer to Voluptuous" 2016-03-22 15:17:37 +00:00
Jenkins
40bb92f749 Merge "Remove true/false return from action.execute()" 2016-03-22 01:13:43 +00:00
Jenkins
92bd06cf94 Merge "Remove the watcher sample configuration file" 2016-03-21 13:10:28 +00:00
David TARDIVEL
c9e0dfd3f5 Remove the watcher sample configuration file
Watcher sample configuration file groups parameters from
various projects and the watcher project ones. This makes it
tricky to review updates on configuration parameters.

It is inconvenient for developers to add, remove, or change
configuration options because they then also need to update the
config.sample file.

The sample configuration file should be available into HTML doc.

This patchset:
. removes the file /etc/watcher/watcher.conf.sample
. adds an admin script tool to be able to build it, by using tox
. includes a new section 'Watcher sample configuration files' into
  the doc source files
. uses sphinx extension oslo_config.sphinxgenconfig

Change-Id: If2180de3614663f9cbc5396961a8d2175e28e315
Closes-Bug: #1541734
2016-03-21 11:47:29 +01:00
Vincent Françoise
446fe1307a Updated action-plugin doc to refer to Voluptuous
In this patchset, I added a small subsection which highlights the fact
that actions are using Voluptuous Schemas to validate their input
parameters.

Change-Id: I96a6060cf167468e4a3f7c8d8cd78330a20572e3
Closes-Bug: #1545643
2016-03-21 11:33:31 +01:00
Larry Rensing
2836f460e3 Rename variable vm_avg_cpu_util
The variable vm_avg_cpu_util was renamed to host_avg_cpu_util for
clarity, as it was really referring to the host average cpu util.

Change-Id: I7aaef9eb2c8421d01715c86afa36ab67f2fd5f30
Closes-Bug: #1559113
2016-03-18 10:45:08 -05:00
Jenkins
cb9bb7301b Merge "renamed "efficiency" with "efficacy" Closes-Bug:#1558468" 2016-03-18 11:09:12 +00:00
Jenkins
cb644fcef9 Merge "Renamed api.py to base.py in metrics engine" 2016-03-18 10:59:59 +00:00
sai
0a7c87eebf renamed "efficiency" with "efficacy"
Closes-Bug:#1558468

Change-Id: Iaf5f113b0aeb02904e76e7d1e729a93df3554275
2016-03-18 00:06:04 -05:00
Tin Lam
d7f4f42772 Remove true/false return from action.execute()
In watcher/applier/workflow_engine/default.py, we are checking the
return value of action.execute(). As the "TODO" above it indicates
(line 118), we should get rid of this and only flag an action as
failed if an exception was raised during its execute(). We will
need to update the related unit tests.

Change-Id: Ia8ff7abd9994c3504e733ccd1d629cafe9d4b839
Closes-Bug: #1548383
2016-03-16 18:18:55 -05:00
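The new behaviour can be sketched like this (`run_action` is a hypothetical wrapper, not the actual workflow engine code):

```python
def run_action(action):
    """Run one action; flag it FAILED only if execute() raises.

    The return value of execute() is deliberately ignored, matching
    the behaviour described in the commit message.
    """
    try:
        action.execute()  # return value no longer inspected
        return "SUCCEEDED"
    except Exception:  # any exception marks the action as failed
        return "FAILED"
```

A falsy return value from execute() therefore no longer counts as a failure; only a raised exception does.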
OpenStack Proposal Bot
bdc0eb196a Updated from global requirements
Change-Id: I568d88a71c47e16daa2f34b7dec97137eb7519b8
2016-03-16 13:34:18 +00:00
Jenkins
59427eb0d9 Merge "Refactored check for invalid goal" 2016-03-16 00:25:39 +00:00
Jenkins
b6801b192a Merge "Updated from global requirements" 2016-03-15 21:10:26 +00:00
Jenkins
0a6c2c16a4 Merge "Added Disk Capacity in cluster-data-model" 2016-03-15 16:28:53 +00:00
Vincent Françoise
9a44941c66 Documentation on purge command
This patchset adds a new entry for the purge command to the Watcher
documentation.

Change-Id: Ifb74b379bccd59ff736bf186bdaaf74de77098f1
Implements: blueprint db-purge-engine
2016-03-14 15:49:45 +01:00
Vincent Françoise
a6508a0013 Added purge script for soft deleted objects
This patchset implements the purge script as specified in its
related blueprint:

- The '--age-in-days' option allows you to specify the number of
  days before expiry
- The '--max-number' option allows you to specify a limit on the number
  of objects to delete
- The '--audit-template' option allows you to only delete objects
  related to the specified audit template UUID or name
- The '--dry-run' option to go through the purge procedure without
  actually deleting anything
- The '--exclude-orphans' option which allows you to exclude from the
  purge any object that does not have a parent (e.g. an audit without
  a related audit template)

A prompt has been added to also propose to narrow down the number of
deletions to be below the specified limit.

Change-Id: I3ce83ab95277c109df67a6b5b920a878f6e59d3f
Implements: blueprint db-purge-engine
2016-03-14 15:49:45 +01:00
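The option surface listed above can be sketched with argparse (option names come from the commit message; defaults and help texts are assumptions, and the real watcher-db-manage command is built on its own CLI framework):

```python
import argparse


def build_purge_parser():
    """Argparse sketch mirroring the purge options described above."""
    parser = argparse.ArgumentParser(prog="watcher-db-manage purge")
    parser.add_argument("--age-in-days", type=int, default=None,
                        help="only purge objects soft-deleted at least "
                             "this many days ago")
    parser.add_argument("--max-number", type=int, default=None,
                        help="upper bound on the number of objects to delete")
    parser.add_argument("--audit-template", default=None,
                        help="restrict the purge to one audit template "
                             "(UUID or name)")
    parser.add_argument("--dry-run", action="store_true",
                        help="walk through the purge without deleting")
    parser.add_argument("--exclude-orphans", action="store_true",
                        help="skip objects that have no parent")
    return parser
```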
Vincent Françoise
c3db66ca09 Added Mixin-related filters on DB queries
As a pre-requisite for being able to query the database for objects
that are expired, I need a way to express date comparison on the
'deleted_at' field which is common for every Watcher object. As they
are coming from mixins, I decided to implement these filters with a
syntax borrowed from the Django ORM where the field is suffixed by the
comparison operator you want to apply:

- The '__lt' suffix stands for 'less than'
- The '__lte' suffix stands for 'less than or equal to'
- The '__gt' suffix stands for 'greater than'
- The '__gte' suffix stands for 'greater than or equal to'
- The '__eq' suffix stands for 'equal to'

I also added a 'uuid' filter to later on be able to filter by uuid.

Partially Implements: blueprint db-purge-engine

Change-Id: I763f330c1b8ea8395990d2276b71e87f5b3f3ddc
2016-03-14 15:46:58 +01:00
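The suffix convention can be illustrated with an in-memory sketch (`split_filter` and `apply_filters` are hypothetical helpers; the real implementation translates these suffixes into SQLAlchemy comparisons):

```python
import operator

# Map each Django-style suffix to its comparison operator.
SUFFIXES = {
    "__lt": operator.lt,
    "__lte": operator.le,
    "__gt": operator.gt,
    "__gte": operator.ge,
    "__eq": operator.eq,
}


def split_filter(key):
    """Split 'deleted_at__lte' into ('deleted_at', operator.le).

    A bare field name means plain equality.
    """
    for suffix, op in SUFFIXES.items():
        if key.endswith(suffix):
            return key[: -len(suffix)], op
    return key, operator.eq


def apply_filters(rows, filters):
    # Filter in-memory rows the way the DB layer would apply
    # the equivalent SQL comparisons.
    matched = []
    for row in rows:
        if all(op(row[field], value)
               for key, value in filters.items()
               for field, op in [split_filter(key)]):
            matched.append(row)
    return matched
```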
Jenkins
5d0fe553c4 Merge "Fixed wrongly used assertEqual method" 2016-03-14 14:43:34 +00:00
OpenStack Proposal Bot
8b8239c3d8 Updated from global requirements
Change-Id: Ib9bec0311e8ad6680487631e6707bafe4a259be6
2016-03-09 16:53:45 +00:00
Larry Rensing
920bd502ec Refactored check for invalid goal
When creating a new audit template, the verification of its goal's
existence was previously done in watcher/objects/audit_template.py.
This check was moved to api/controllers/v1/audit_template.py rather
than being kept in the DAO class.

Change-Id: I6efb0657f64c46a56914a946ec78013b9e47331b
Closes-Bug: #1536191
2016-03-09 10:48:16 -06:00
Gábor Antal
c68d33f341 Renamed api.py to base.py in metrics engine
In watcher/metrics_engine/cluster_history/api.py,
we can find the BaseClusterHistory abstract base class.

To follow the same naming convention observed throughout the rest
of the project, I renamed watcher/metrics_engine/cluster_history/api.py
to watcher/metrics_engine/cluster_history/base.py

Change-Id: If18f8db7f0982e47c1998a469c54952670c262f5
Closes-Bug: #1548398
2016-03-09 12:02:05 +01:00
Vincent Françoise
8e8fdbd809 Re-generated the watcher.pot
There are translations that are missing from watcher.pot.
This patchset includes them.

Change-Id: Ia418066b5653b4c81885d3eb150613ba357f9b7b
Related-Bug: #1510189
2016-03-09 11:20:16 +01:00
Bruno Grazioli
681536c8c7 Added Disk Capacity in cluster-data-model
Fetched information of total disk capacity from nova and added a new
resource 'disk_capacity' to NovaClusterModelCollector cluster. Also a
new resource type 'disk_capacity' was added to ResourceType.
https://bugs.launchpad.net/watcher/+bug/1553124

Change-Id: I85750f25c6d2693432da8e5e3a3d0861320f4787
Closes-Bug: #1553124
2016-03-09 08:33:31 +00:00
Tin Lam
083b170083 Removing unicode from README.rst
Removing unicode from README.rst to unblock
the failing python34 gate.

Change-Id: I60da31e22b6a09540c9d6fca659a1b21d931a0b7
Closes-Bug: #1554347
2016-03-08 17:07:53 -06:00
Gábor Antal
c440cdd69f Fixed wrongly used assertEqual method
In several places, assertEqual is used the following way:
  assertEqual(observed, expected)
However, the correct way to use assertEqual is:
  assertEqual(expected, observed)

Change-Id: I5a7442f4adf98bf7bc73cef1d17d20da39d9a7f8
Closes-Bug: #1551861
2016-03-01 18:20:37 +01:00
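The corrected convention, shown as a runnable example: expected value first, observed value second, so that failure messages read the right way around.

```python
import unittest


class TestArgumentOrder(unittest.TestCase):
    def test_order(self):
        observed = sorted([3, 1, 2])
        # Expected value first, observed value second: a failure then
        # reports the expected [1, 2, 3] against what was observed.
        self.assertEqual([1, 2, 3], observed)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArgumentOrder)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```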
188 changed files with 11381 additions and 2924 deletions


@@ -4,6 +4,7 @@ source = watcher
 omit = watcher/tests/*
 [report]
 ignore_errors = True
 exclude_lines =
-@abstract
+@abc.abstract
+raise NotImplementedError

.gitignore

@@ -44,6 +44,8 @@ output/*/index.html
 # Sphinx
 doc/build
 doc/source/api
+doc/source/samples
+doc/source/watcher.conf.sample
 # pbr generates these
 AUTHORS


@@ -10,12 +10,12 @@ Watcher
 OpenStack Watcher provides a flexible and scalable resource optimization
 service for multi-tenant OpenStack-based clouds.
-Watcher provides a complete optimization loopincluding everything from a
+Watcher provides a complete optimization loop-including everything from a
 metrics receiver, complex event processor and profiler, optimization processor
 and an action plan applier. This provides a robust framework to realize a wide
 range of cloud optimization goals, including the reduction of data center
 operating costs, increased system performance via intelligent virtual machine
-migration, increased energy efficiencyand more!
+migration, increased energy efficiency-and more!
 * Free software: Apache license
 * Wiki: http://wiki.openstack.org/wiki/Watcher


@@ -42,7 +42,3 @@ LOGDAYS=2
 [[post-config|$NOVA_CONF]]
 [DEFAULT]
 compute_monitors=cpu.virt_driver
-[[post-config|$WATCHER_CONF]]
-[watcher_goals]
-goals=BASIC_CONSOLIDATION:basic,DUMMY:dummy


@@ -155,6 +155,7 @@ by the :ref:`Watcher API <archi_watcher_api_definition>` or the
 - :ref:`Action plans <action_plan_definition>`
 - :ref:`Actions <action_definition>`
 - :ref:`Goals <goal_definition>`
+- :ref:`Strategies <strategy_definition>`
The Watcher domain being here "*optimization of some resources provided by an
OpenStack system*".
@@ -196,8 +197,6 @@ Audit, the :ref:`Strategy <strategy_definition>` relies on two sets of data:
which provides information about the past of the
:ref:`Cluster <cluster_definition>`
-So far, only one :ref:`Strategy <strategy_definition>` can be associated to a
-given :ref:`Goal <goal_definition>` via the main Watcher configuration file.
.. _data_model:
@@ -211,6 +210,14 @@ view (Goals, Audits, Action Plans, ...):
.. image:: ./images/functional_data_model.svg
:width: 100%
+Here below is a class diagram representing the main objects in Watcher from a
+database perspective:
+.. image:: ./images/watcher_class_diagram.png
+:width: 100%
.. _sequence_diagrams:
Sequence diagrams
@@ -230,13 +237,15 @@ following parameters:
- A name
- A goal to achieve
- An optional strategy
.. image:: ./images/sequence_create_audit_template.png
:width: 100%
-The `Watcher API`_ just makes sure that the goal exists (i.e. it is declared
-in the Watcher configuration file) and stores a new audit template in the
-:ref:`Watcher Database <watcher_database_definition>`.
+The `Watcher API`_ makes sure that both the specified goal (mandatory) and
+its associated strategy (optional) are registered inside the :ref:`Watcher
+Database <watcher_database_definition>` before storing a new audit template in
+the :ref:`Watcher Database <watcher_database_definition>`.
.. _sequence_diagrams_create_and_launch_audit:
@@ -260,12 +269,11 @@ the Audit in the
The :ref:`Watcher Decision Engine <watcher_decision_engine_definition>` reads
the Audit parameters from the
:ref:`Watcher Database <watcher_database_definition>`. It instantiates the
-appropriate :ref:`Strategy <strategy_definition>` (using entry points)
-associated to the :ref:`Goal <goal_definition>` of the
-:ref:`Audit <audit_definition>` (it uses the information of the Watcher
-configuration file to find the mapping between the
-:ref:`Goal <goal_definition>` and the :ref:`Strategy <strategy_definition>`
-python class).
+appropriate :ref:`strategy <strategy_definition>` (using entry points)
+given both the :ref:`goal <goal_definition>` and the strategy associated to the
+parent :ref:`audit template <audit_template_definition>` of the :ref:`Audit
+<audit_definition>`. If no strategy is associated to the audit template, the
+strategy is dynamically selected by the Decision Engine.
The :ref:`Watcher Decision Engine <watcher_decision_engine_definition>` also
builds the :ref:`Cluster Data Model <cluster_data_model_definition>`. This
@@ -361,6 +369,28 @@ State Machine diagrams
Audit State Machine
-------------------
An :ref:`Audit <audit_definition>` has a life-cycle and its current state may
be one of the following:
- **PENDING** : a request for an :ref:`Audit <audit_definition>` has been
submitted (either manually by the
:ref:`Administrator <administrator_definition>` or automatically via some
event handling mechanism) and is in the queue for being processed by the
:ref:`Watcher Decision Engine <watcher_decision_engine_definition>`
- **ONGOING** : the :ref:`Audit <audit_definition>` is currently being
processed by the
:ref:`Watcher Decision Engine <watcher_decision_engine_definition>`
- **SUCCEEDED** : the :ref:`Audit <audit_definition>` has been executed
successfully and at least one solution was found
- **FAILED** : an error occurred while executing the
:ref:`Audit <audit_definition>`
- **DELETED** : the :ref:`Audit <audit_definition>` is still stored in the
:ref:`Watcher database <watcher_database_definition>` but is not returned
any more through the Watcher APIs.
- **CANCELLED** : the :ref:`Audit <audit_definition>` was in **PENDING** or
**ONGOING** state and was cancelled by the
:ref:`Administrator <administrator_definition>`
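As an illustration of the life-cycle above, the allowed transitions can be sketched as a small lookup table. This is not Watcher code, and the exact transition set is an assumption inferred from the state descriptions:

```python
# Illustrative sketch only: audit state transitions inferred from the
# state descriptions above (not actual Watcher code).
AUDIT_TRANSITIONS = {
    "PENDING": {"ONGOING", "CANCELLED", "DELETED"},
    "ONGOING": {"SUCCEEDED", "FAILED", "CANCELLED"},
    "SUCCEEDED": {"DELETED"},
    "FAILED": {"DELETED"},
    "CANCELLED": {"DELETED"},
    "DELETED": set(),
}

def can_transition(current, target):
    """Return True if an audit may move from `current` to `target`."""
    return target in AUDIT_TRANSITIONS.get(current, set())
```

For example, ``can_transition("PENDING", "ONGOING")`` holds while ``can_transition("SUCCEEDED", "ONGOING")`` does not.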
The following diagram shows the different possible states of an
:ref:`Audit <audit_definition>` and what event makes the state change to a new
value:
@@ -373,6 +403,31 @@ value:
Action Plan State Machine
-------------------------
An :ref:`Action Plan <action_plan_definition>` has a life-cycle and its current
state may be one of the following:
- **RECOMMENDED** : the :ref:`Action Plan <action_plan_definition>` is waiting
for a validation from the :ref:`Administrator <administrator_definition>`
- **PENDING** : a request for an :ref:`Action Plan <action_plan_definition>`
has been submitted (due to an
:ref:`Administrator <administrator_definition>` executing an
:ref:`Audit <audit_definition>`) and is in the queue for
being processed by the :ref:`Watcher Applier <watcher_applier_definition>`
- **ONGOING** : the :ref:`Action Plan <action_plan_definition>` is currently
being processed by the :ref:`Watcher Applier <watcher_applier_definition>`
- **SUCCEEDED** : the :ref:`Action Plan <action_plan_definition>` has been
executed successfully (i.e. all :ref:`Actions <action_definition>` that it
contains have been executed successfully)
- **FAILED** : an error occurred while executing the
:ref:`Action Plan <action_plan_definition>`
- **DELETED** : the :ref:`Action Plan <action_plan_definition>` is still
stored in the :ref:`Watcher database <watcher_database_definition>` but is
not returned any more through the Watcher APIs.
- **CANCELLED** : the :ref:`Action Plan <action_plan_definition>` was in
**PENDING** or **ONGOING** state and was cancelled by the
:ref:`Administrator <administrator_definition>`
The following diagram shows the different possible states of an
:ref:`Action Plan <action_plan_definition>` and what event makes the state
change to a new value:


@@ -18,6 +18,7 @@ from watcher import version as watcher_version
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'oslo_config.sphinxconfiggen',
'sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinxcontrib.httpdomain',
@@ -28,7 +29,8 @@ extensions = [
]
wsme_protocols = ['restjson']
config_generator_config_file = '../../etc/watcher/watcher-config-generator.conf'
sample_config_basename = 'watcher'
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.


@@ -0,0 +1 @@
../../etc/watcher/watcher-config-generator.conf


@@ -0,0 +1,14 @@
.. _watcher_sample_configuration_files:
==================================
Watcher sample configuration files
==================================
watcher.conf
~~~~~~~~~~~~
The ``watcher.conf`` file contains most of the options to configure the
Watcher services.
.. literalinclude:: ../watcher.conf.sample
:language: ini


@@ -163,6 +163,16 @@ Configure the Watcher service
The Watcher service is configured via its configuration file. This file
is typically located at ``/etc/watcher/watcher.conf``.
You can easily generate and update a sample configuration file
named :ref:`watcher.conf.sample <watcher_sample_configuration_files>` by using
the following commands::
$ git clone git://git.openstack.org/openstack/watcher
$ cd watcher/
$ tox -econfig
$ vi etc/watcher/watcher.conf.sample
The configuration file is organized into the following sections:
* ``[DEFAULT]`` - General configuration
@@ -172,8 +182,6 @@ The configuration file is organized into the following sections:
* ``[watcher_clients_auth]`` - Keystone auth configuration for clients
* ``[watcher_applier]`` - Watcher Applier module configuration
* ``[watcher_decision_engine]`` - Watcher Decision Engine module configuration
* ``[watcher_goals]`` - Goals mapping configuration
* ``[watcher_strategies]`` - Strategy configuration
* ``[oslo_messaging_rabbit]`` - Oslo Messaging RabbitMQ driver configuration
* ``[ceilometer_client]`` - Ceilometer client configuration
* ``[cinder_client]`` - Cinder client configuration

doc/source/deploy/gmr.rst Normal file

@@ -0,0 +1,52 @@
..
Except where otherwise noted, this document is licensed under Creative
Commons Attribution 3.0 License. You can view the license at:
https://creativecommons.org/licenses/by/3.0/
.. _watcher_gmr:
=======================
Guru Meditation Reports
=======================
Watcher contains a mechanism whereby developers and system administrators can
generate a report about the state of a running Watcher service. This report
is called a *Guru Meditation Report* (*GMR* for short).
Generating a GMR
================
A *GMR* can be generated by sending the *USR2* signal to any Watcher process
with support (see below). The *GMR* will then be written to standard error
for that particular process.
For example, suppose that ``watcher-api`` has process id ``8675``, and was run
with ``2>/var/log/watcher/watcher-api-err.log``. Then, ``kill -USR2 8675``
will trigger the Guru Meditation report to be printed to
``/var/log/watcher/watcher-api-err.log``.
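Under the hood, this facility (provided to Watcher by the oslo.reports library) amounts to registering a handler for the ``USR2`` signal. Here is a stdlib-only sketch of the idea, not Watcher's actual implementation:

```python
import os
import signal
import sys
import traceback

REPORTS = []  # records each report trigger, for illustration

def dump_report(signum, frame):
    # A minimal "report": dump the current stack to stderr, similar in
    # spirit to the Threads section of a real GMR.
    REPORTS.append(signum)
    sys.stderr.write("==== report for pid %d ====\n" % os.getpid())
    traceback.print_stack(frame, file=sys.stderr)

# Register the handler; `kill -USR2 <pid>` now triggers a report.
signal.signal(signal.SIGUSR2, dump_report)
```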
Structure of a GMR
==================
The *GMR* is designed to be extensible; any particular service may add its
own sections. However, the base *GMR* consists of several sections:
Package
Shows information about the package to which this process belongs, including
version information.
Threads
Shows stack traces and thread ids for each of the threads within this
process.
Green Threads
Shows stack traces for each of the green threads within this process (green
threads don't have thread ids).
Configuration
Lists all the configuration options currently accessible via the CONF object
for the current process.
Plugins
Lists all the plugins currently accessible by the Watcher service.


@@ -32,17 +32,17 @@ This guide assumes you have a working installation of Watcher. If you get
Please refer to the `installation guide`_.
In order to use Watcher, you have to configure credentials suitable for the
watcher command-line tools.
If you need help on a specific command, you can use:
.. code:: bash
$ watcher help COMMAND
You can interact with Watcher either by using our dedicated `Watcher CLI`_
named ``watcher``, or by using the `OpenStack CLI`_ ``openstack``.
If you want to deploy Watcher in Horizon, please refer to the `Watcher Horizon
plugin installation guide`_.
.. _`installation guide`: https://factory.b-com.com/www/watcher/doc/python-watcherclient
.. _`Watcher Horizon plugin installation guide`: https://factory.b-com.com/www/watcher/doc/watcher-dashboard/deploy/installation.html
.. _`OpenStack CLI`: http://docs.openstack.org/developer/python-openstackclient/man/openstack.html
.. _`Watcher CLI`: https://factory.b-com.com/www/watcher/doc/python-watcherclient/index.html
Seeing what the Watcher CLI can do?
------------------------------------
@@ -51,23 +51,66 @@ watcher binary without options.
.. code:: bash
$ watcher
$ watcher help
or::
$ openstack help optimize
How do I run an audit of my cluster?
-------------------------------------
First, you need to create an :ref:`audit template <audit_template_definition>`.
An :ref:`audit template <audit_template_definition>` defines an optimization
:ref:`goal <goal_definition>` to achieve (i.e. the settings of your audit).
This goal should be declared in the Watcher service configuration file
**/etc/watcher/watcher.conf**.
First, you need to find the :ref:`goal <goal_definition>` you want to achieve:
.. code:: bash
$ watcher audit-template-create my_first_audit DUMMY
$ watcher goal list
If you get "*You must provide a username via either --os-username or via
env[OS_USERNAME]*" you may have to verify your credentials.
or::
$ openstack optimize goal list
.. note::
If you get "*You must provide a username via either --os-username or via
env[OS_USERNAME]*" you may have to verify your credentials.
Then, you can create an :ref:`audit template <audit_template_definition>`.
An :ref:`audit template <audit_template_definition>` defines an optimization
:ref:`goal <goal_definition>` to achieve (i.e. the settings of your audit).
.. code:: bash
$ watcher audittemplate create my_first_audit_template <your_goal>
or::
$ openstack optimize audittemplate create my_first_audit_template <your_goal>
Although optional, you may want to actually set a specific strategy for your
audit template. If so, you can search for its UUID or name using the
following command:
.. code:: bash
$ watcher strategy list --goal-uuid <your_goal_uuid>
or::
$ openstack optimize strategy list --goal-uuid <your_goal_uuid>
The command to create your audit template would then be:
.. code:: bash
$ watcher audittemplate create my_first_audit_template <your_goal> \
--strategy <your_strategy>
or::
$ openstack optimize audittemplate create my_first_audit_template <your_goal> \
--strategy <your_strategy>
Then, you can create an audit. An audit is a request for optimizing your
cluster depending on the specified :ref:`goal <goal_definition>`.
@@ -76,19 +119,26 @@ You can launch an audit on your cluster by referencing the
:ref:`audit template <audit_template_definition>` (i.e. the settings of your
audit) that you want to use.
- Get the :ref:`audit template <audit_template_definition>` UUID:
- Get the :ref:`audit template <audit_template_definition>` UUID or name:
.. code:: bash
$ watcher audit-template-list
$ watcher audittemplate list
or::
$ openstack optimize audittemplate list
- Start an audit based on this :ref:`audit template
<audit_template_definition>` settings:
.. code:: bash
$ watcher audit-create -a <your_audit_template_uuid>
$ watcher audit create -a <your_audit_template>
or::
$ openstack optimize audit create -a <your_audit_template>
Watcher service will compute an :ref:`Action Plan <action_plan_definition>`
composed of a list of potential optimization :ref:`actions <action_definition>`
@@ -102,15 +152,22 @@ configuration file.
.. code:: bash
$ watcher action-plan-list --audit <the_audit_uuid>
$ watcher actionplan list --audit <the_audit_uuid>
or::
$ openstack optimize actionplan list --audit <the_audit_uuid>
- Have a look on the list of optimization :ref:`actions <action_definition>`
contained in this new :ref:`action plan <action_plan_definition>`:
.. code:: bash
$ watcher action-list --action-plan <the_action_plan_uuid>
$ watcher action list --action-plan <the_action_plan_uuid>
or::
$ openstack optimize action list --action-plan <the_action_plan_uuid>
Once you have learned how to create an :ref:`Action Plan
<action_plan_definition>`, it's time to go further by applying it to your
@@ -120,18 +177,30 @@ cluster:
.. code:: bash
$ watcher action-plan-start <the_action_plan_uuid>
$ watcher actionplan start <the_action_plan_uuid>
or::
$ openstack optimize actionplan start <the_action_plan_uuid>
You can follow the states of the :ref:`actions <action_definition>` by
periodically calling:
.. code:: bash
$ watcher action-list
$ watcher action list
or::
$ openstack optimize action list
You can also obtain more detailed information about a specific action:
.. code:: bash
$ watcher action-show <the_action_uuid>
$ watcher action show <the_action_uuid>
or::
$ openstack optimize action show <the_action_uuid>


@@ -205,7 +205,7 @@ place:
$ workon watcher
(watcher) $ watcher-db-manage --create_schema
(watcher) $ watcher-db-manage create_schema
Running Watcher services


@@ -50,14 +50,16 @@ Here is an example showing how you can write a plugin called ``DummyAction``:
# Filepath = <PROJECT_DIR>/thirdparty/dummy.py
# Import path = thirdparty.dummy
import voluptuous
from watcher.applier.actions import base
class DummyAction(baseBaseAction):
class DummyAction(base.BaseAction):
@property
def schema(self):
return Schema({})
return voluptuous.Schema({})
def execute(self):
# Does nothing
@@ -83,6 +85,58 @@ To get a better understanding on how to implement a more advanced action,
have a look at the :py:class:`~watcher.applier.actions.migration.Migrate`
class.
Input validation
----------------
As you can see in the previous example, we are using `Voluptuous`_ to validate
the input parameters of an action. So if you want to learn more about how to
work with `Voluptuous`_, you can have a look at their `documentation`_:
.. _Voluptuous: https://github.com/alecthomas/voluptuous
.. _documentation: https://github.com/alecthomas/voluptuous/blob/master/README.md
Define configuration parameters
===============================
At this point, you have a fully functional action. However, in more complex
implementations, you may want to define some configuration options so one can
tune the action to its needs. To do so, you can implement the
:py:meth:`~.Loadable.get_config_opts` class method as follows:
.. code-block:: python
    from oslo_config import cfg

    class DummyAction(base.BaseAction):

        # [...]

        def execute(self):
            assert self.config.test_opt == "test_value"

        @classmethod
        def get_config_opts(cls):
            return [
                cfg.StrOpt('test_opt', help="Demo Option.", default="test_value"),
                # Some more options ...
            ]
The configuration options defined within this class method will be included
within the global ``watcher.conf`` configuration file under a section named by
convention: ``{namespace}.{plugin_name}``. In our case, the ``watcher.conf``
configuration would have to be modified as follows:
.. code-block:: ini
[watcher_actions.dummy]
# Option used for testing.
test_opt = test_value
The configuration options you define within this method will then be
injected into each instantiated object via the ``config`` parameter of the
:py:meth:`~.BaseAction.__init__` method.
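As a stdlib-only illustration of the ``{namespace}.{plugin_name}`` convention described above (not Watcher code — Watcher resolves this through oslo.config), here is how such a section could be derived and read:

```python
import configparser

def config_section(namespace, plugin_name):
    # Plugin options live under a "{namespace}.{plugin_name}" section.
    return "{0}.{1}".format(namespace, plugin_name)

conf = configparser.ConfigParser()
conf.read_string("""
[watcher_actions.dummy]
test_opt = test_value
""")

section = config_section("watcher_actions", "dummy")
value = conf.get(section, "test_opt")
```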
Abstract Plugin Class
=====================
@@ -91,6 +145,7 @@ should implement:
.. autoclass:: watcher.applier.actions.base.BaseAction
:members:
:special-members: __init__
:noindex:
.. py:attribute:: schema


@@ -69,6 +69,49 @@ examples, have a look at the implementation of planners already provided by
Watcher like :py:class:`~.DefaultPlanner`. A list with all available planner
plugins can be found :ref:`here <watcher_planners>`.
Define configuration parameters
===============================
At this point, you have a fully functional planner. However, in more complex
implementations, you may want to define some configuration options so one can
tune the planner to its needs. To do so, you can implement the
:py:meth:`~.Loadable.get_config_opts` class method as follows:
.. code-block:: python
    from oslo_config import cfg

    class DummyPlanner(base.BasePlanner):

        # [...]

        def schedule(self, context, audit_uuid, solution):
            assert self.config.test_opt == "test_value"
            # [...]

        @classmethod
        def get_config_opts(cls):
            return [
                cfg.StrOpt('test_opt', help="Demo Option.", default="test_value"),
                # Some more options ...
            ]
The configuration options defined within this class method will be included
within the global ``watcher.conf`` configuration file under a section named by
convention: ``{namespace}.{plugin_name}``. In our case, the ``watcher.conf``
configuration would have to be modified as follows:
.. code-block:: ini
[watcher_planners.dummy]
# Option used for testing.
test_opt = test_value
The configuration options you define within this method will then be
injected into each instantiated object via the ``config`` parameter of the
:py:meth:`~.BasePlanner.__init__` method.
Abstract Plugin Class
=====================
@@ -77,6 +120,7 @@ should implement:
.. autoclass:: watcher.decision_engine.planner.base.BasePlanner
:members:
:special-members: __init__
:noindex:


@@ -15,7 +15,9 @@ plugin interface which gives anyone the ability to integrate an external
strategy in order to make use of placement algorithms.
This section gives some guidelines on how to implement and integrate custom
strategies with Watcher.
strategies with Watcher. If you wish to create a third-party package for your
plugin, you can refer to our :ref:`documentation for third-party package
creation <plugin-base_setup>`.
Pre-requisites
@@ -26,64 +28,217 @@ configured so that it would provide you all the metrics you need to be able to
use your strategy.
Creating a new plugin
=====================
Create a new plugin
===================
First of all you have to:
In order to create a new strategy, you have to:
- Extend :py:class:`~.BaseStrategy`
- Implement its :py:meth:`~.BaseStrategy.execute` method
- Extend the :py:class:`~.UnclassifiedStrategy` class
- Implement its :py:meth:`~.BaseStrategy.get_name` class method to return the
**unique** ID of the new strategy you want to create. This unique ID should
be the same as the name of :ref:`the entry point we will declare later on
<strategy_plugin_add_entrypoint>`.
- Implement its :py:meth:`~.BaseStrategy.get_display_name` class method to
return the translated display name of the strategy you want to create.
Note: Do not use a variable to return the translated string so it can be
automatically collected by the translation tool.
- Implement its :py:meth:`~.BaseStrategy.get_translatable_display_name`
class method to return the translation key (actually the english display
name) of your new strategy. The value returned should be the same as the
string translated in :py:meth:`~.BaseStrategy.get_display_name`.
- Implement its :py:meth:`~.BaseStrategy.execute` method to return the
solution you computed within your strategy.
Here is an example showing how you can write a plugin called ``DummyStrategy``:
Here is an example showing how you can write a plugin called ``NewStrategy``:
.. code-block:: python
import uuid
import abc
class DummyStrategy(BaseStrategy):
import six
DEFAULT_NAME = "dummy"
DEFAULT_DESCRIPTION = "Dummy Strategy"
from watcher._i18n import _
from watcher.decision_engine.strategy.strategies import base
def __init__(self, name=DEFAULT_NAME, description=DEFAULT_DESCRIPTION):
super(DummyStrategy, self).__init__(name, description)
def execute(self, model):
migration_type = 'live'
src_hypervisor = 'compute-host-1'
dst_hypervisor = 'compute-host-2'
instance_id = uuid.uuid4()
parameters = {'migration_type': migration_type,
'src_hypervisor': src_hypervisor,
'dst_hypervisor': dst_hypervisor}
self.solution.add_action(action_type="migration",
resource_id=instance_id,
class NewStrategy(base.UnclassifiedStrategy):
def __init__(self, osc=None):
super(NewStrategy, self).__init__(osc)
def execute(self, original_model):
self.solution.add_action(action_type="nop",
input_parameters=parameters)
# Do some more stuff here ...
return self.solution
@classmethod
def get_name(cls):
return "new_strategy"
@classmethod
def get_display_name(cls):
return _("New strategy")
@classmethod
def get_translatable_display_name(cls):
return "New strategy"
As you can see in the above example, the :py:meth:`~.BaseStrategy.execute`
method returns a :py:class:`~.BaseSolution` instance as required. This solution
is what wraps the abstract set of actions the strategy recommends to you. This
solution is then processed by a :ref:`planner <planner_definition>` to produce
an action plan which shall contain the sequenced flow of actions to be
an action plan which contains the sequenced flow of actions to be
executed by the :ref:`Watcher Applier <watcher_applier_definition>`.
Please note that your strategy class will be instantiated without any
parameter. Therefore, you should make sure not to make any of them required in
your ``__init__`` method.
Please note that your strategy class is expected to have the same constructor
signature as :py:class:`~.BaseStrategy` so that Watcher can instantiate your
strategy. Therefore, you should ensure that your ``__init__`` signature is
identical to the :py:class:`~.BaseStrategy` one.
Create a new goal
=================
As stated before, the ``NewStrategy`` class extends a class called
:py:class:`~.UnclassifiedStrategy`. This class actually implements a set of
abstract methods which are defined within the :py:class:`~.BaseStrategy` parent
class.
Once you are confident in your strategy plugin, the next step is to classify
your strategy by assigning it a proper goal. To do so, you can reuse one of the
existing goals defined in Watcher. As of now, four goal-oriented abstract
classes are defined in Watcher:
- :py:class:`~.UnclassifiedStrategy` which is the one I mentioned up until now.
- :py:class:`~.DummyBaseStrategy` which is used by :py:class:`~.DummyStrategy`
for testing purposes.
- :py:class:`~.ServerConsolidationBaseStrategy`
- :py:class:`~.ThermalOptimizationBaseStrategy`
If none of the above actually correspond to the goal your new strategy
achieves, you can define a brand new one. To do so, you need to:
- Extend the :py:class:`~.BaseStrategy` class to make your new goal-oriented
strategy abstract class :
- Implement its :py:meth:`~.BaseStrategy.get_goal_name` class method to
return the **unique** ID of the goal you want to achieve.
- Implement its :py:meth:`~.BaseStrategy.get_goal_display_name` class method
to return the translated display name of the goal you want to achieve.
Note: Do not use a variable to return the translated string so it can be
automatically collected by the translation tool.
- Implement its :py:meth:`~.BaseStrategy.get_translatable_goal_display_name`
class method to return the goal translation key (actually the english
display name). The value returned should be the same as the string translated
in :py:meth:`~.BaseStrategy.get_goal_display_name`.
Here is an example showing how you can define a new ``NEW_GOAL`` goal and
modify your ``NewStrategy`` plugin so it now achieves the latter:
.. code-block:: python
    import abc

    import six

    from watcher._i18n import _
    from watcher.decision_engine.strategy.strategies import base


    @six.add_metaclass(abc.ABCMeta)
    class NewGoalBaseStrategy(base.BaseStrategy):

        @classmethod
        def get_goal_name(cls):
            return "NEW_GOAL"

        @classmethod
        def get_goal_display_name(cls):
            return _("New goal")

        @classmethod
        def get_translatable_goal_display_name(cls):
            return "New goal"


    class NewStrategy(NewGoalBaseStrategy):

        def __init__(self, config, osc=None):
            super(NewStrategy, self).__init__(config, osc)

        def execute(self, original_model):
            parameters = {}  # input parameters of the action, if any
            self.solution.add_action(action_type="nop",
                                     input_parameters=parameters)
            # Do some more stuff here ...
            return self.solution

        @classmethod
        def get_name(cls):
            return "new_strategy"

        @classmethod
        def get_display_name(cls):
            return _("New strategy")

        @classmethod
        def get_translatable_display_name(cls):
            return "New strategy"
Define configuration parameters
===============================
At this point, you have a fully functional strategy. However, in more complex
implementations, you may want to define some configuration options so one can
tune the strategy to its needs. To do so, you can implement the
:py:meth:`~.Loadable.get_config_opts` class method as follows:
.. code-block:: python
    from oslo_config import cfg

    class NewStrategy(NewGoalBaseStrategy):

        # [...]

        def execute(self, original_model):
            assert self.config.test_opt == "test_value"
            # [...]

        @classmethod
        def get_config_opts(cls):
            return [
                cfg.StrOpt('test_opt', help="Demo Option.", default="test_value"),
                # Some more options ...
            ]
The configuration options defined within this class method will be included
within the global ``watcher.conf`` configuration file under a section named by
convention: ``{namespace}.{plugin_name}``. In our case, the ``watcher.conf``
configuration would have to be modified as follows:
.. code-block:: ini
[watcher_strategies.new_strategy]
# Option used for testing.
test_opt = test_value
The configuration options you define within this method will then be
injected into each instantiated object via the ``config`` parameter of the
:py:meth:`~.BaseStrategy.__init__` method.
Abstract Plugin Class
=====================
Here below is the abstract :py:class:`~.BaseStrategy` class that every single
strategy should implement:
Here below is the abstract :py:class:`~.BaseStrategy` class:
.. autoclass:: watcher.decision_engine.strategy.strategies.base.BaseStrategy
:members:
:special-members: __init__
:noindex:
.. _strategy_plugin_add_entrypoint:
Add a new entry point
=====================
@@ -93,7 +248,9 @@ strategy must be registered as a named entry point under the
``watcher_strategies`` entry point of your ``setup.py`` file. If you are using
pbr_, this entry point should be placed in your ``setup.cfg`` file.
The name you give to your entry point has to be unique.
The name you give to your entry point has to be unique and should be the same
as the value returned by the :py:meth:`~.BaseStrategy.get_name` class method of
your strategy.
Here below is how you would proceed to register ``DummyStrategy`` using pbr_:
@@ -101,7 +258,7 @@ Here below is how you would proceed to register ``DummyStrategy`` using pbr_:
[entry_points]
watcher_strategies =
dummy = thirdparty.dummy:DummyStrategy
dummy_strategy = thirdparty.dummy:DummyStrategy
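For illustration, here is how the ``name = module:Class`` form of such an entry point breaks down (plugin discovery itself is handled for you by setuptools/stevedore; this is only a sketch of the format):

```python
# Sketch: anatomy of the entry point string declared above.
entry_point = "dummy_strategy = thirdparty.dummy:DummyStrategy"

# Left-hand side: the unique plugin name; right-hand side: import target.
name, target = (part.strip() for part in entry_point.split("=", 1))
module_path, class_name = target.split(":", 1)
```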
To get a better understanding on how to implement a more advanced strategy,
@@ -117,16 +274,10 @@ plugins when it is restarted. If a Python package containing a custom plugin is
installed within the same environment as Watcher, Watcher will automatically
make that plugin available for use.
At this point, Watcher will use your new strategy if you reference it in the
``goals`` under the ``[watcher_goals]`` section of your ``watcher.conf``
configuration file. For example, if you want to use a ``dummy`` strategy you
just installed, you would have to associate it to a goal like this:
.. code-block:: ini
[watcher_goals]
goals = BALANCE_LOAD:basic,MINIMIZE_ENERGY_CONSUMPTION:dummy
At this point, Watcher will scan and register inside the :ref:`Watcher Database
<watcher_database_definition>` all the strategies (alongside the goals they
should satisfy) you implemented upon restarting the :ref:`Watcher Decision
Engine <watcher_decision_engine_definition>`.
You should take care when installing strategy plugins. By their very nature,
there are no guarantees that utilizing them as is will be supported, as
@@ -148,7 +299,6 @@ for various types of backends. A list of the available backends is located
here_. The Ceilosca project is a good example of how to create your own
pluggable backend.
Finally, if your strategy requires new metrics not covered by Ceilometer, you
can add them through a Ceilometer `plugin`_.
@@ -191,7 +341,7 @@ Read usage metrics using the Watcher Cluster History Helper
Here below is the abstract ``BaseClusterHistory`` class of the Helper.
.. autoclass:: watcher.metrics_engine.cluster_history.api.BaseClusterHistory
.. autoclass:: watcher.metrics_engine.cluster_history.base.BaseClusterHistory
:members:
:noindex:


@@ -9,6 +9,10 @@
Available Plugins
=================
In this section we present all the plugins that are shipped along with Watcher.
If you want to know which plugins your Watcher services have access to, you can
use the :ref:`Guru Meditation Reports <watcher_gmr>` to display them.
.. _watcher_strategies:
Strategies


@@ -99,14 +99,14 @@ The :ref:`Cluster <cluster_definition>` may be divided in one or several
Cluster Data Model
==================
.. watcher-term:: watcher.metrics_engine.cluster_model_collector.api
.. watcher-term:: watcher.metrics_engine.cluster_model_collector.base
.. _cluster_history_definition:
Cluster History
===============
.. watcher-term:: watcher.metrics_engine.cluster_history.api
.. watcher-term:: watcher.metrics_engine.cluster_history.base
.. _controller_node_definition:
@@ -213,27 +213,27 @@ Here are some examples of
It can be any of the `the official list of available resource types defined in OpenStack for HEAT <http://docs.openstack.org/developer/heat/template_guide/openstack.html>`_.
.. _efficiency_definition:
.. _efficacy_definition:
Optimization Efficiency
=======================
Optimization Efficacy
=====================
The :ref:`Optimization Efficiency <efficiency_definition>` is the objective
The :ref:`Optimization Efficacy <efficacy_definition>` is the objective
measure of how much of the :ref:`Goal <goal_definition>` has been achieved in
respect with constraints and :ref:`SLAs <sla_definition>` defined by the
:ref:`Customer <customer_definition>`.
The way efficiency is evaluated will depend on the
:ref:`Goal <goal_definition>` to achieve.
The way efficacy is evaluated will depend on the :ref:`Goal <goal_definition>`
to achieve.
Of course, the efficiency will be relevant only as long as the
Of course, the efficacy will be relevant only as long as the
:ref:`Action Plan <action_plan_definition>` is relevant
(i.e., the current state of the :ref:`Cluster <cluster_definition>`
has not changed in a way that a new :ref:`Audit <audit_definition>` would need
to be launched).
For example, if the :ref:`Goal <goal_definition>` is to lower the energy
consumption, the :ref:`Efficiency <efficiency_definition>` will be computed
consumption, the :ref:`Efficacy <efficacy_definition>` will be computed
using several indicators (KPIs):
- the percentage of energy gain (which must be the highest possible)
@@ -244,7 +244,7 @@ using several indicators (KPIs):
All those indicators (KPIs) are computed within a given timeframe, which is the
time taken to execute the whole :ref:`Action Plan <action_plan_definition>`.
The efficiency also enables the :ref:`Administrator <administrator_definition>`
The efficacy also enables the :ref:`Administrator <administrator_definition>`
to objectively compare different :ref:`Strategies <strategy_definition>` for
the same goal and same workload of the :ref:`Cluster <cluster_definition>`.
@@ -323,7 +323,7 @@ Solution
Strategy
========
.. watcher-term:: watcher.decision_engine.strategy.strategies.base
.. watcher-term:: watcher.api.controllers.v1.strategy
.. _watcher_applier_definition:


@@ -0,0 +1,14 @@
plantuml
========
To build an image from a source file, you have to download the plantuml JAR
file available at http://plantuml.com/download.html.
Then, run this command to build your image:
.. code-block:: shell
$ java -jar /path/to/plantuml.jar doc/source/image_src/plantuml/my_image.txt
$ mv doc/source/image_src/plantuml/my_image.png doc/source/images/
$ ls doc/source/images/
my_image.png


@@ -3,7 +3,7 @@
actor Administrator
Administrator -> "Watcher CLI" : watcher audit-create -a <audit_template_uuid>
Administrator -> "Watcher CLI" : watcher audit create -a <audit_template>
"Watcher CLI" -> "Watcher API" : POST audit(parameters)
"Watcher API" -> "Watcher Database" : create new audit in database (status=PENDING)
@@ -14,7 +14,7 @@ Administrator -> "Watcher CLI" : watcher audit-create -a <audit_template_uuid>
Administrator <-- "Watcher CLI" : new audit uuid
"Watcher API" -> "AMQP Bus" : trigger_audit(new_audit.uuid)
"AMQP Bus" -> "Watcher Decision Engine" : trigger_audit(new_audit.uuid)
"AMQP Bus" -> "Watcher Decision Engine" : trigger_audit(new_audit.uuid) (status=ONGOING)
ref over "Watcher Decision Engine"
Trigger audit in the


@@ -2,15 +2,21 @@
actor Administrator
Administrator -> "Watcher CLI" : watcher audit-template-create <name> <goal>
Administrator -> "Watcher CLI" : watcher audittemplate create <name> <goal> \
[--strategy-uuid <strategy>]
"Watcher CLI" -> "Watcher API" : POST audit_template(parameters)
"Watcher API" -> "Watcher API" : make sure goal exist in configuration
"Watcher API" -> "Watcher Database" : create new audit_template in database
"Watcher API" -> "Watcher Database" : Request if goal exists in database
"Watcher API" <-- "Watcher Database" : OK
"Watcher API" <-- "Watcher Database" : new audit template uuid
"Watcher CLI" <-- "Watcher API" : return new audit template URL in HTTP Location Header
Administrator <-- "Watcher CLI" : new audit template uuid
"Watcher API" -> "Watcher Database" : Request if strategy exists in database (if provided)
"Watcher API" <-- "Watcher Database" : OK
"Watcher API" -> "Watcher Database" : Create new audit_template in database
"Watcher API" <-- "Watcher Database" : New audit template UUID
"Watcher CLI" <-- "Watcher API" : Return new audit template URL in HTTP Location Header
Administrator <-- "Watcher CLI" : New audit template UUID
@enduml

@@ -2,10 +2,10 @@
actor Administrator
Administrator -> "Watcher CLI" : watcher action-plan-start <action_plan_uuid>
Administrator -> "Watcher CLI" : watcher actionplan start <action_plan_uuid>
"Watcher CLI" -> "Watcher API" : PATCH action_plan(state=TRIGGERED)
"Watcher API" -> "Watcher Database" : action_plan.state=TRIGGERED
"Watcher CLI" -> "Watcher API" : PATCH action_plan(state=PENDING)
"Watcher API" -> "Watcher Database" : action_plan.state=PENDING
"Watcher CLI" <-- "Watcher API" : HTTP 200

@@ -0,0 +1,87 @@
@startuml
abstract class Base {
// Timestamp mixin
DateTime created_at
DateTime updated_at
// Soft Delete mixin
DateTime deleted_at
Integer deleted // default = 0
}
class Strategy {
**Integer id** // primary_key
String uuid // length = 36
String name // length = 63, nullable = false
String display_name // length = 63, nullable = false
<i>Integer goal_id</i> // ForeignKey('goals.id'), nullable = false
}
class Goal {
**Integer id** // primary_key
String uuid // length = 36
String name // length = 63, nullable = false
String display_name // length = 63, nullable = false
}
class AuditTemplate {
**Integer id** // primary_key
String uuid // length = 36
String name // length = 63, nullable = true
String description // length = 255, nullable = true
Integer host_aggregate // nullable = true
<i>Integer goal_id</i> // ForeignKey('goals.id'), nullable = false
<i>Integer strategy_id</i> // ForeignKey('strategies.id'), nullable = true
JsonString extra
String version // length = 15, nullable = true
}
class Audit {
**Integer id** // primary_key
String uuid // length = 36
String type // length = 20
String state // length = 20, nullable = true
DateTime deadline // nullable = true
<i>Integer audit_template_id</i> // ForeignKey('audit_templates.id') \
nullable = false
}
class Action {
**Integer id** // primary_key
String uuid // length = 36, nullable = false
<i>Integer action_plan_id</i> // ForeignKey('action_plans.id'), nullable = false
String action_type // length = 255, nullable = false
JsonString input_parameters // nullable = true
String state // length = 20, nullable = true
String next // length = 36, nullable = true
}
class ActionPlan {
**Integer id** // primary_key
String uuid // length = 36
Integer first_action_id //
<i>Integer audit_id</i> // ForeignKey('audits.id'), nullable = true
String state // length = 20, nullable = true
}
"Base" <|-- "Strategy"
"Base" <|-- "Goal"
"Base" <|-- "AuditTemplate"
"Base" <|-- "Audit"
"Base" <|-- "Action"
"Base" <|-- "ActionPlan"
"Goal" <.. "Strategy" : Foreign Key
"Goal" <.. "AuditTemplate" : Foreign Key
"Strategy" <.. "AuditTemplate" : Foreign Key
"AuditTemplate" <.. "Audit" : Foreign Key
"ActionPlan" <.. "Action" : Foreign Key
"Audit" <.. "ActionPlan" : Foreign Key
@enduml
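As a rough illustration of the diagram above, the goal/strategy/audit-template foreign keys can be reproduced with plain SQLite (a minimal, hypothetical subset of the real schema, which Watcher manages through SQLAlchemy):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when enabled
conn.executescript("""
CREATE TABLE goals (
    id INTEGER PRIMARY KEY,
    uuid TEXT,
    name TEXT NOT NULL
);
CREATE TABLE strategies (
    id INTEGER PRIMARY KEY,
    uuid TEXT,
    name TEXT NOT NULL,
    goal_id INTEGER NOT NULL REFERENCES goals(id)
);
CREATE TABLE audit_templates (
    id INTEGER PRIMARY KEY,
    uuid TEXT,
    name TEXT,
    goal_id INTEGER NOT NULL REFERENCES goals(id),
    strategy_id INTEGER REFERENCES strategies(id)  -- nullable, as in the diagram
);
""")
conn.execute("INSERT INTO goals (id, name) VALUES (1, 'server_consolidation')")
conn.execute("INSERT INTO strategies (id, name, goal_id) VALUES (1, 'basic', 1)")
# An audit template must reference a goal but may omit the strategy.
conn.execute("INSERT INTO audit_templates (id, name, goal_id) VALUES (1, 'at1', 1)")
row = conn.execute(
    "SELECT g.name FROM audit_templates t JOIN goals g ON t.goal_id = g.id"
).fetchone()
print(row[0])  # server_consolidation
```

The nullable ``strategy_id`` mirrors the new behaviour where an audit template can be created with only a goal.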

@@ -55,9 +55,9 @@ Getting Started
dev/environment
dev/devstack
deploy/configuration
deploy/conf-files
dev/testing
API References
--------------
@@ -90,6 +90,7 @@ Introduction
deploy/installation
deploy/user-guide
deploy/gmr
Watcher Manual Pages
====================

@@ -52,7 +52,7 @@ run the following::
Show the program's version number and exit.
.. option:: upgrade, downgrade, stamp, revision, version, create_schema
.. option:: upgrade, downgrade, stamp, revision, version, create_schema, purge
The :ref:`command <db-manage_cmds>` to run.
@@ -219,3 +219,42 @@ version
Show help for version and exit.
This command will output the current database version.
purge
-----
.. program:: purge
.. option:: -h, --help
Show help for purge and exit.
.. option:: -d, --age-in-days
The number of days (counting back from today) beyond which soft deleted
objects are considered expired and should hence be erased. By default, all
soft deleted objects are considered expired. This option can be useful because
removing a large number of objects at once may cause performance issues.
.. option:: -n, --max-number
The maximum number of database objects we expect to be deleted. If this
number is exceeded, no deletion is performed.
.. option:: -t, --audit-template
Either the UUID or name of the soft deleted audit template to purge. This
will also purge any objects related to it.
.. option:: -e, --exclude-orphans
A flag indicating that orphan objects should be excluded from deletion.
.. option:: --dry-run
A flag indicating that a dry run should be performed: the objects that would
be deleted are shown instead of actually being deleted.
This command will purge the current database by removing both its soft deleted
and orphan objects.
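The expiry semantics of ``--age-in-days`` and ``--max-number`` described above can be sketched as follows (a hypothetical helper for illustration, not Watcher's actual implementation):

```python
from datetime import datetime, timedelta

def select_expired(objects, age_in_days=None, max_number=None, now=None):
    """Return the soft deleted objects considered expired.

    objects: dicts with a 'deleted_at' datetime (None if not soft deleted).
    With no age_in_days, every soft deleted object is expired; if the
    result exceeds max_number, nothing is deleted at all.
    """
    now = now or datetime.utcnow()
    cutoff = None if age_in_days is None else now - timedelta(days=age_in_days)
    expired = [
        obj for obj in objects
        if obj["deleted_at"] is not None
        and (cutoff is None or obj["deleted_at"] < cutoff)
    ]
    if max_number is not None and len(expired) > max_number:
        return []  # exceeding the cap prevents any deletion
    return expired
```

Note that ``--max-number`` acts as a safety valve rather than a limit: exceeding it aborts the purge entirely instead of deleting the first N objects.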

@@ -0,0 +1,4 @@
To generate the sample ``watcher.conf`` file, run the following command from
the top level of the watcher directory::

    tox -econfig

@@ -0,0 +1,9 @@
[DEFAULT]
output_file = etc/watcher/watcher.conf.sample
wrap_width = 79
namespace = watcher
namespace = keystonemiddleware.auth_token
namespace = oslo.log
namespace = oslo.db
namespace = oslo.messaging

@@ -1,962 +0,0 @@
[DEFAULT]
#
# From oslo.log
#
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is
# ignored if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# If set to true, the logging level will be set to DEBUG instead of
# the default INFO level. (boolean value)
#debug = false
# The format for an instance that is passed with the log message.
# (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message.
# (string value)
#instance_uuid_format = "[instance: %(uuid)s] "
# If set to false, the logging level will be set to WARNING instead of
# the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
# The name of a logging configuration file. This file is appended to
# any existing logging configuration files. For details about logging
# configuration files, see the Python logging module documentation.
# Note that when logging configuration files are used all logging
# configuration is defined in the configuration file and other logging
# configuration options are ignored (for example, log_format). (string
# value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# DEPRECATED. A logging.Formatter log message format string which may
# use any of the available logging.LogRecord attributes. This option
# is deprecated. Please use logging_context_format_string and
# logging_default_format_string instead. This option is ignored if
# log_config_append is set. (string value)
#log_format = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default
# is set, logging will go to stderr as defined by use_stderr. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths.
# This option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is
# moved or removed this handler will open a new log file with
# specified path instantaneously. It makes sense only if log_file
# option is specified and Linux platform is used. This option is
# ignored if log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and
# will be changed later to honor RFC5424. This option is ignored if
# log_config_append is set. (boolean value)
#use_syslog = false
# Enables or disables syslog rfc5424 format for logging. If enabled,
# prefixes the MSG part of the syslog message with APP-NAME (RFC5424).
# The format without the APP-NAME is deprecated in Kilo, and will be
# removed in Mitaka, along with this option. This option is ignored if
# log_config_append is set. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#use_syslog_rfc_format = true
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if
# log_config_append is set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined.
# (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the
# message is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string
# value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
#
# From oslo.messaging
#
# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
#rpc_zmq_min_port = 49152
# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30
# A URL representing the messaging driver to use and its full
# configuration. If not set, we fall back to the rpc_backend option
# and driver specific configuration. (string value)
#transport_url = <None>
# Number of retries to find free port number before fail with
# ZMQBindError. (integer value)
#rpc_zmq_bind_port_retries = 100
# The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
#rpc_backend = rabbit
# Host to locate redis. (string value)
#host = 127.0.0.1
# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve to this
# address. (string value)
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
#rpc_zmq_matchmaker = redis
# The default exchange under which topics are scoped. May be
# overridden by an exchange name specified in the transport_url
# option. (string value)
#control_exchange = openstack
# Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
#port = 6379
# Type of concurrency used. Either "native" or "eventlet" (string
# value)
#rpc_zmq_concurrency = eventlet
# Password for Redis server (optional). (string value)
#password =
# Number of ZeroMQ contexts, defaults to 1. (integer value)
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic.
# Default is unlimited. (integer value)
#rpc_zmq_topic_backlog = <None>
# List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
#sentinel_hosts =
# Directory for holding IPC sockets. (string value)
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address.
# Must match "host" option, if running Nova. (string value)
#rpc_zmq_host = localhost
# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq
# Seconds to wait before a cast expires (TTL). Only supported by
# impl_zmq. (integer value)
#rpc_cast_timeout = 30
# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 500
# The default number of seconds that poll should wait. Poll raises
# timeout exception when timeout expired. (integer value)
#rpc_poll_timeout = 1
# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000
# Expiration timeout in seconds of a name service record about
# existing target ( < 0 means no timeout). (integer value)
#zmq_target_expire = 120
# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 1000
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
#rpc_zmq_max_port = 65536
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
# (boolean value)
#use_pub_sub = true
[api]
#
# From watcher
#
# The port for the watcher API server (integer value)
#port = 9322
# The maximum number of items returned in a single response from a
# collection resource. (integer value)
#max_limit = 1000
# The listen IP for the watcher API server (string value)
#host = 0.0.0.0
[ceilometer_client]
#
# From watcher
#
# Version of Ceilometer API to use in ceilometerclient. (string value)
#api_version = 2
[cinder_client]
#
# From watcher
#
# Version of Cinder API to use in cinderclient. (string value)
#api_version = 2
[database]
#
# From oslo.db
#
# If set, use this value for max_overflow with SQLAlchemy. (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = <None>
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# The SQLAlchemy connection string to use to connect to the database.
# (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
# If db_inc_retry_interval is set, the maximum seconds between retries
# of a database operation. (integer value)
#db_max_retry_interval = 10
# Interval between retries of opening a SQL connection. (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
#sqlite_db = oslo.sqlite
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# If set, use this value for pool_timeout with SQLAlchemy. (integer
# value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
# Maximum number of database connection retries during startup. Set to
# -1 to specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Maximum retries in case of connection error or deadlock error before
# error is raised. Set to -1 to specify an infinite retry count.
# (integer value)
#db_max_retries = 20
# Enable the experimental use of database reconnect on connection
# lost. (boolean value)
#use_db_reconnect = false
# The SQLAlchemy connection string to use to connect to the slave
# database. (string value)
#slave_connection = <None>
# Minimum number of SQL connections to keep open in a pool. (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = <None>
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The SQL mode to be used for MySQL sessions. This option, including
# the default, overrides any server-set SQL mode. To use whatever SQL
# mode is set by the server configuration, set this to no value.
# Example: mysql_sql_mode= (string value)
#mysql_sql_mode = TRADITIONAL
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# Verbosity of SQL debugging information: 0=None, 100=Everything.
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# If True, increases the interval between retries of a database
# operation up to db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
[glance_client]
#
# From watcher
#
# Version of Glance API to use in glanceclient. (string value)
#api_version = 2
[keystone_authtoken]
#
# From keystonemiddleware.auth_token
#
# (Optional, mandatory if memcache_security_strategy is defined) This
# string is used for key derivation. (string value)
#memcache_secret_key = <None>
# In order to prevent excessive effort spent validating tokens, the
# middleware caches previously-seen tokens for a configurable duration
# (in seconds). Set to -1 to disable caching completely. (integer
# value)
#token_cache_time = 300
# Determines the frequency at which the list of revoked tokens is
# retrieved from the Identity service (in seconds). A high number of
# revocation events combined with a low cache duration may
# significantly reduce performance. (integer value)
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be
# authenticated or authenticated and encrypted. If MAC, token data is
# authenticated (with HMAC) in the cache. If ENCRYPT, token data is
# encrypted and authenticated in the cache. If the value is not one of
# these options or empty, auth_token will raise an exception on
# initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None
# (Optional) Number of seconds memcached server is considered dead
# before it is tried again. (integer value)
#memcache_pool_dead_retry = 300
# (Optional) Maximum total number of open connections to every
# memcached server. (integer value)
#memcache_pool_maxsize = 10
# Complete public Identity API endpoint. (string value)
#auth_uri = <None>
# (Optional) Socket timeout in seconds for communicating with a
# memcached server. (integer value)
#memcache_pool_socket_timeout = 3
# (Optional) Number of seconds a connection to memcached is held
# unused in the pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60
# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>
# (Optional) Number of seconds that an operation will wait to get a
# memcached client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10
# Do not handle authorization requests within the middleware, but
# delegate the authorization decision to downstream WSGI components.
# (boolean value)
#delay_auth_decision = false
# (Optional) Use the advanced (eventlet safe) memcached client pool.
# The advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false
# Request timeout value for communicating with Identity API server.
# (integer value)
#http_connect_timeout = <None>
# (Optional) Indicate whether to set the X-Service-Catalog header. If
# False, middleware will not ask for service catalog on token
# validation and will not set the X-Service-Catalog header. (boolean
# value)
#include_service_catalog = true
# How many times are we trying to reconnect when communicating with
# Identity API Server. (integer value)
#http_request_max_retries = 3
# Used to control the use and type of token binding. Can be set to:
# "disabled" to not check token binding. "permissive" (default) to
# validate binding information if the bind type is of a form known to
# the server and ignore it if not. "strict" like "permissive" but if
# the bind type is unknown the token will be rejected. "required" any
# form of token binding is needed to be allowed. Finally the name of a
# binding method that must be present in tokens. (string value)
#enforce_token_bind = permissive
# Env key for the swift cache. (string value)
#cache = <None>
# If true, the revocation list will be checked for cached tokens. This
# requires that PKI tokens are configured on the identity server.
# (boolean value)
#check_revocations_for_cached = false
# Required if identity server requires client certificate (string
# value)
#certfile = <None>
# Hash algorithms to use for hashing PKI tokens. This may be a single
# algorithm or multiple. The algorithms are those supported by Python
# standard hashlib.new(). The hashes will be tried in the order given,
# so put the preferred one first for performance. The result of the
# first hash will be stored in the cache. This will typically be set
# to multiple values only while migrating from a less secure algorithm
# to a more secure one. Once all the old tokens are expired this
# option should be set to a single value for better performance. (list
# value)
#hash_algorithms = md5
# Required if identity server requires client certificate (string
# value)
#keyfile = <None>
# A PEM encoded Certificate Authority to use when verifying HTTPs
# connections. Defaults to system CAs. (string value)
#cafile = <None>
# Authentication type to load (unknown value)
# Deprecated group/name - [DEFAULT]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (unknown
# value)
#auth_section = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# The region in which the identity server can be found. (string value)
#region_name = <None>
# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>
# Optionally specify a list of memcached server(s) to use for caching.
# If left undefined, tokens will instead be cached in-process. (list
# value)
# Deprecated group/name - [DEFAULT]/memcache_servers
#memcached_servers = <None>
[matchmaker_redis]
#
# From oslo.messaging
#
# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000
# Password for Redis server (optional). (string value)
#password =
# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 1000
# List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
#sentinel_hosts =
# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq
# Host to locate redis. (string value)
#host = 127.0.0.1
# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 500
# Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
#port = 6379
[neutron_client]
#
# From watcher
#
# Version of Neutron API to use in neutronclient. (string value)
#api_version = 2
[nova_client]
#
# From watcher
#
# Version of Nova API to use in novaclient. (string value)
#api_version = 2
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =
# Private key PEM file used to sign cert_file certificate (string
# value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =
# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =
# Name for the AMQP container (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>
# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =
# address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0
# Identifying certificate PEM file to present to clients (string
# value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =
# address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false
# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false
# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =
# Path to directory that contains the SASL configuration (string
# value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =
# address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
[oslo_messaging_notifications]
#
# From oslo.messaging
#
# The Drivers(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =
# A URL representing the messaging driver to use for notifications. If
# not set, we fall back to the same configuration used for RPC.
# (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# How often times during the heartbeat_timeout_threshold we check the
# heartbeat. (integer value)
#heartbeat_rate = 2
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
# (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false
# The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
#rabbit_userid = guest
# The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
#rabbit_host = localhost
# The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
#rabbit_password = guest
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN
# Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
#rabbit_max_retries = 0
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false
# The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
#rabbit_virtual_host = /
# SSL version to use (valid only if SSL enabled). Valid values are
# TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be
# available on some distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =
# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =
# Maximum interval of RabbitMQ connection retries. Default is 30
# seconds. (integer value)
#rabbit_interval_max = 30
# How long to backoff for between retries when connecting to RabbitMQ.
# (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2
# SSL certification authority file (valid only if SSL enabled).
# (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =
# Positive integer representing duration in seconds for queue TTL
# (x-expires). Queues which are unused for the duration of the TTL are
# automatically deleted. The parameter affects only reply and fanout
# queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 600
# How long to wait before reconnecting in response to an AMQP consumer
# cancel notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0
# Use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false
# How long to wait for a missing client before abandoning sending it its
# replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60
# Determines how the next RabbitMQ node is chosen in case the one we
# are currently connected to becomes unavailable. Takes effect only if
# more than one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin
# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0
# The RabbitMQ broker port where a single node is used. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
#rabbit_port = 5672
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =
# Number of seconds after which the Rabbit broker is considered down
# if the heartbeat's keep-alive fails (0 disables the heartbeat).
# EXPERIMENTAL (integer value)
#heartbeat_timeout_threshold = 60
# RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
#rabbit_hosts = $rabbit_host:$rabbit_port
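Taken together, the RabbitMQ options above are typically only overridden for HA deployments. A minimal sketch, assuming two broker nodes (hostnames and the section name are placeholders for a real deployment):

```ini
# Illustrative HA RabbitMQ overrides; hostnames are placeholders.
rabbit_hosts = rabbit1:5672,rabbit2:5672
rabbit_ha_queues = true
kombu_failover_strategy = shuffle
rabbit_retry_backoff = 2
```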
[watcher_applier]
#
# From watcher
#
# The topic name used for status events; this topic is used to notify
# the other components of the system. (string value)
#status_topic = watcher.applier.status
# Select the engine to use to execute the workflow (string value)
#workflow_engine = taskflow
# The topic name used for control events; this topic is used for RPC
# calls. (string value)
#conductor_topic = watcher.applier.control
# Number of workers for applier, default value is 1. (integer value)
# Minimum value: 1
#workers = 1
# The identifier used by the watcher module on the message broker
# (string value)
#publisher_id = watcher.applier.api
[watcher_clients_auth]
#
# From watcher
#
# Optional domain name to use with v3 API and v2 parameters. It will
# be used for both the user and project domain in v3 and ignored in v2
# authentication. (unknown value)
#default_domain_name = <None>
# Authentication URL (unknown value)
#auth_url = <None>
# Domain ID to scope to (unknown value)
#domain_id = <None>
# Domain name to scope to (unknown value)
#domain_name = <None>
# Project ID to scope to (unknown value)
# Deprecated group/name - [DEFAULT]/tenant-id
#project_id = <None>
# Project name to scope to (unknown value)
# Deprecated group/name - [DEFAULT]/tenant-name
#project_name = <None>
# Domain ID containing project (unknown value)
#project_domain_id = <None>
# PEM encoded client certificate cert file (string value)
#certfile = <None>
# Domain name containing project (unknown value)
#project_domain_name = <None>
# Trust ID (unknown value)
#trust_id = <None>
# Optional domain ID to use with v3 and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2
# authentication. (unknown value)
#default_domain_id = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# User id (unknown value)
#user_id = <None>
# PEM encoded client certificate key file (string value)
#keyfile = <None>
# Username (unknown value)
# Deprecated group/name - [DEFAULT]/username
#username = <None>
# User's domain id (unknown value)
#user_domain_id = <None>
# User's domain name (unknown value)
#user_domain_name = <None>
# Timeout value for http requests (integer value)
#timeout = <None>
# User's password (unknown value)
#password = <None>
# Authentication type to load (unknown value)
# Deprecated group/name - [DEFAULT]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (unknown
# value)
#auth_section = <None>
# PEM encoded Certificate Authority to use when verifying HTTPS
# connections. (string value)
#cafile = <None>
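The [watcher_clients_auth] options above map onto the standard keystoneauth1 plugin-loading options. A minimal sketch of a password-based configuration, with placeholder values:

```ini
[watcher_clients_auth]
# Illustrative keystoneauth1 password-plugin setup; all values are
# placeholders for a real deployment.
auth_type = password
auth_url = http://controller:5000/v3
username = watcher
password = WATCHER_PASSWORD
project_name = service
user_domain_name = Default
project_domain_name = Default
```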
[watcher_decision_engine]
#
# From watcher
#
# The maximum number of threads that can be used to execute strategies
# (integer value)
#max_workers = 2
# The topic name used for status events; this topic is used to notify
# the other components of the system. (string value)
#status_topic = watcher.decision.status
# The topic name used for control events; this topic is used for RPC
# calls. (string value)
#conductor_topic = watcher.decision.control
# The identifier used by the watcher module on the message broker
# (string value)
#publisher_id = watcher.decision.api
[watcher_goals]
#
# From watcher
#
# Goals used for the optimization. Maps each goal to an associated
# strategy (for example: BASIC_CONSOLIDATION:basic,
# MY_GOAL:my_strategy_1) (dict value)
#goals = DUMMY:dummy
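The goals option takes GOAL:strategy pairs; a sketch combining the documented default with the examples from the option's own help text:

```ini
[watcher_goals]
# Each entry maps a goal to the strategy that achieves it, using the
# example names given in the option's help text above.
goals = DUMMY:dummy,BASIC_CONSOLIDATION:basic,MY_GOAL:my_strategy_1
```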
[watcher_planner]
#
# From watcher
#
# The selected planner used to schedule the actions (string value)
#planner = default


@@ -5,29 +5,34 @@
enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.1 # BSD
keystoneauth1>=2.1.0 # Apache-2.0
keystonemiddleware!=4.1.0,>=4.0.0 # Apache-2.0
oslo.config>=3.7.0 # Apache-2.0
keystonemiddleware!=4.1.0,!=4.5.0,>=4.0.0 # Apache-2.0
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.cache>=1.5.0 # Apache-2.0
oslo.config>=3.9.0 # Apache-2.0
oslo.context>=2.2.0 # Apache-2.0
oslo.db>=4.1.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.log>=1.14.0 # Apache-2.0
oslo.messaging>=4.0.0 # Apache-2.0
oslo.messaging>=4.5.0 # Apache-2.0
oslo.policy>=0.5.0 # Apache-2.0
oslo.service>=1.0.0 # Apache-2.0
oslo.reports>=0.6.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0
oslo.utils>=3.5.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr>=1.6 # Apache-2.0
pecan>=1.0.0 # BSD
voluptuous>=0.8.6 # BSD License
PrettyTable<0.8,>=0.7 # BSD
voluptuous>=0.8.9 # BSD License
python-ceilometerclient>=2.2.1 # Apache-2.0
python-cinderclient>=1.3.1 # Apache-2.0
python-glanceclient>=1.2.0 # Apache-2.0
python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
python-neutronclient>=2.6.0 # Apache-2.0
python-cinderclient!=1.7.0,>=1.6.0 # Apache-2.0
python-glanceclient>=2.0.0 # Apache-2.0
python-keystoneclient!=1.8.0,!=2.1.0,>=1.7.0 # Apache-2.0
python-neutronclient>=4.2.0 # Apache-2.0
python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
python-openstackclient>=2.1.0 # Apache-2.0
six>=1.9.0 # MIT
SQLAlchemy<1.1.0,>=1.0.10 # MIT
stevedore>=1.5.0 # Apache-2.0
stevedore>=1.10.0 # Apache-2.0
taskflow>=1.26.0 # Apache-2.0
WebOb>=1.2.3 # MIT
WSME>=0.8 # MIT


@@ -49,6 +49,9 @@ watcher_strategies =
dummy = watcher.decision_engine.strategy.strategies.dummy_strategy:DummyStrategy
basic = watcher.decision_engine.strategy.strategies.basic_consolidation:BasicConsolidation
outlet_temp_control = watcher.decision_engine.strategy.strategies.outlet_temp_control:OutletTempControl
vm_workload_consolidation = watcher.decision_engine.strategy.strategies.vm_workload_consolidation:VMWorkloadConsolidation
workload_stabilization = watcher.decision_engine.strategy.strategies.workload_stabilization:WorkloadStabilization
workload_balance = watcher.decision_engine.strategy.strategies.workload_balance:WorkloadBalance
watcher_actions =
migrate = watcher.applier.actions.migration:Migrate
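Each entry-point line above follows the setuptools `name = module:attribute` convention that stevedore resolves when loading Watcher plugins. A minimal stdlib sketch of that parsing (illustrative only; the real loading goes through stevedore):

```python
def parse_entry_point(spec):
    """Split a setuptools entry-point line into (name, module, attr)."""
    name, _, target = (part.strip() for part in spec.partition('='))
    module, _, attr = target.partition(':')
    return name, module, attr

# Example with the workload_balance line from the setup.cfg hunk above.
print(parse_entry_point(
    "workload_balance = "
    "watcher.decision_engine.strategy.strategies.workload_balance"
    ":WorkloadBalance"))
```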


@@ -5,8 +5,9 @@
coverage>=3.6 # Apache-2.0
discover # BSD
doc8 # Apache-2.0
freezegun # Apache-2.0
hacking<0.11,>=0.10.2
mock>=1.2 # BSD
mock>=2.0 # BSD
oslotest>=1.10.0 # Apache-2.0
os-testr>=0.4.1 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD


@@ -40,12 +40,7 @@ commands = oslo_debug_helper -t watcher/tests {posargs}
[testenv:config]
sitepackages = False
commands =
oslo-config-generator --namespace watcher \
--namespace keystonemiddleware.auth_token \
--namespace oslo.log \
--namespace oslo.db \
--namespace oslo.messaging \
--output-file etc/watcher/watcher.conf.sample
oslo-config-generator --config-file etc/watcher/watcher-config-generator.conf
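The namespaces that used to be passed on the oslo-config-generator command line now live in a generator config file. That file's content is not shown in this change, but given the removed flags it would plausibly look like this (a reconstruction, not the actual file):

```ini
[DEFAULT]
output_file = etc/watcher/watcher.conf.sample
namespace = watcher
namespace = keystonemiddleware.auth_token
namespace = oslo.log
namespace = oslo.db
namespace = oslo.messaging
```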
[flake8]
show-source=True


@@ -15,6 +15,7 @@
# limitations under the License.
#
import oslo_i18n
from oslo_i18n import _lazy
# The domain is the name of the App which is used to generate the folder
# containing the translation files (i.e. the .pot file and the various locales)
@@ -42,5 +43,9 @@ _LE = _translators.log_error
_LC = _translators.log_critical
def lazy_translation_enabled():
return _lazy.USE_LAZY
def get_available_languages():
return oslo_i18n.get_available_languages(DOMAIN)


@@ -19,24 +19,38 @@
from oslo_config import cfg
import pecan
from watcher._i18n import _
from watcher.api import acl
from watcher.api import config as api_config
from watcher.api import middleware
from watcher.decision_engine.strategy.selection import default \
as strategy_selector
# Register options for the service
API_SERVICE_OPTS = [
cfg.IntOpt('port',
default=9322,
help='The port for the watcher API server'),
cfg.PortOpt('port',
default=9322,
help=_('The port for the watcher API server')),
cfg.StrOpt('host',
default='0.0.0.0',
help='The listen IP for the watcher API server'),
help=_('The listen IP for the watcher API server')),
cfg.IntOpt('max_limit',
default=1000,
help='The maximum number of items returned in a single '
'response from a collection resource.')
help=_('The maximum number of items returned in a single '
'response from a collection resource')),
cfg.IntOpt('workers',
min=1,
help=_('Number of workers for Watcher API service. '
'The default is equal to the number of CPUs available '
'if that can be determined, else a default worker '
'count of 1 is returned.')),
cfg.BoolOpt('enable_ssl_api',
default=False,
help=_("Enable the integrated stand-alone API to service "
"requests via HTTPS instead of HTTP. If there is a "
"front-end service performing HTTPS offloading from "
"the service, this option should be False; note, you "
"will want to change public API endpoint to represent "
"SSL termination URL with 'public_endpoint' option.")),
]
CONF = cfg.CONF
@@ -45,7 +59,6 @@ opt_group = cfg.OptGroup(name='api',
CONF.register_group(opt_group)
CONF.register_opts(API_SERVICE_OPTS, opt_group)
CONF.register_opts(strategy_selector.WATCHER_GOALS_OPTS)
def get_pecan_config():
@@ -68,3 +81,12 @@ def setup_app(config=None):
)
return acl.install(app, CONF, config.app.acl_public_routes)
class VersionSelectorApplication(object):
def __init__(self):
pc = get_pecan_config()
self.v1 = setup_app(config=pc)
def __call__(self, environ, start_response):
return self.v1(environ, start_response)


@@ -34,6 +34,7 @@ from watcher.api.controllers.v1 import action_plan
from watcher.api.controllers.v1 import audit
from watcher.api.controllers.v1 import audit_template
from watcher.api.controllers.v1 import goal
from watcher.api.controllers.v1 import strategy
class APIBase(wtypes.Base):
@@ -157,6 +158,7 @@ class Controller(rest.RestController):
actions = action.ActionsController()
action_plans = action_plan.ActionPlansController()
goals = goal.GoalsController()
strategies = strategy.StrategiesController()
@wsme_pecan.wsexpose(V1)
def get(self):


@@ -49,6 +49,10 @@ be one of the following:
- **CANCELLED** : the :ref:`Action <action_definition>` was in **PENDING** or
**ONGOING** state and was cancelled by the
:ref:`Administrator <administrator_definition>`
:ref:`Some default implementations are provided <watcher_planners>`, but it is
possible to :ref:`develop new implementations <implement_action_plugin>` which
are dynamically loaded by Watcher at launch time.
"""
import datetime
@@ -115,7 +119,7 @@ class Action(base.APIBase):
self.action_next_uuid = None
# raise e
uuid = types.uuid
uuid = wtypes.wsattr(types.uuid, readonly=True)
"""Unique UUID for this action"""
action_plan_uuid = wsme.wsproperty(types.uuid, _get_action_plan_uuid,
@@ -126,9 +130,6 @@ class Action(base.APIBase):
state = wtypes.text
"""This audit state"""
alarm = types.uuid
"""An alarm UUID related to this action"""
action_type = wtypes.text
"""Action type"""
@@ -190,7 +191,6 @@ class Action(base.APIBase):
sample = cls(uuid='27e3153e-d5bf-4b7e-b517-fb518e17f34c',
description='action description',
state='PENDING',
alarm=None,
created_at=datetime.datetime.utcnow(),
deleted_at=None,
updated_at=datetime.datetime.utcnow())
@@ -359,6 +359,10 @@ class ActionsController(rest.RestController):
:param action: an action within the request body.
"""
# FIXME: blueprint edit-action-plan-flow
raise exception.OperationNotPermitted(
_("Cannot create an action directly"))
if self.from_actions:
raise exception.OperationNotPermitted
@@ -379,6 +383,10 @@ class ActionsController(rest.RestController):
:param action_uuid: UUID of an action.
:param patch: a JSON PATCH document to apply to this action.
"""
# FIXME: blueprint edit-action-plan-flow
raise exception.OperationNotPermitted(
_("Cannot modify an action directly"))
if self.from_actions:
raise exception.OperationNotPermitted
@@ -411,6 +419,9 @@ class ActionsController(rest.RestController):
:param action_uuid: UUID of an action.
"""
# FIXME: blueprint edit-action-plan-flow
raise exception.OperationNotPermitted(
_("Cannot delete an action directly"))
action_to_delete = objects.Action.get_by_uuid(
pecan.request.context,


@@ -49,24 +49,9 @@ standard workflow model description formats such as
`Business Process Model and Notation 2.0 (BPMN 2.0) <http://www.omg.org/spec/BPMN/2.0/>`_
or `Unified Modeling Language (UML) <http://www.uml.org/>`_.
An :ref:`Action Plan <action_plan_definition>` has a life-cycle and its current
state may be one of the following:
- **RECOMMENDED** : the :ref:`Action Plan <action_plan_definition>` is waiting
for a validation from the :ref:`Administrator <administrator_definition>`
- **ONGOING** : the :ref:`Action Plan <action_plan_definition>` is currently
being processed by the :ref:`Watcher Applier <watcher_applier_definition>`
- **SUCCEEDED** : the :ref:`Action Plan <action_plan_definition>` has been
executed successfully (i.e. all :ref:`Actions <action_definition>` that it
contains have been executed successfully)
- **FAILED** : an error occurred while executing the
:ref:`Action Plan <action_plan_definition>`
- **DELETED** : the :ref:`Action Plan <action_plan_definition>` is still
stored in the :ref:`Watcher database <watcher_database_definition>` but is
not returned any more through the Watcher APIs.
- **CANCELLED** : the :ref:`Action Plan <action_plan_definition>` was in
**PENDING** or **ONGOING** state and was cancelled by the
:ref:`Administrator <administrator_definition>`
To see the life-cycle and description of
:ref:`Action Plan <action_plan_definition>` states, visit :ref:`the Action Plan state
machine <action_plan_state_machine>`.
""" # noqa
import datetime
@@ -158,7 +143,7 @@ class ActionPlan(base.APIBase):
except exception.ActionNotFound:
self._first_action_uuid = None
uuid = types.uuid
uuid = wtypes.wsattr(types.uuid, readonly=True)
"""Unique UUID for this action plan"""
first_action_uuid = wsme.wsproperty(


@@ -25,28 +25,8 @@ on a given :ref:`Cluster <cluster_definition>`.
For each :ref:`Audit <audit_definition>`, the Watcher system generates an
:ref:`Action Plan <action_plan_definition>`.
An :ref:`Audit <audit_definition>` has a life-cycle and its current state may
be one of the following:
- **PENDING** : a request for an :ref:`Audit <audit_definition>` has been
submitted (either manually by the
:ref:`Administrator <administrator_definition>` or automatically via some
event handling mechanism) and is in the queue for being processed by the
:ref:`Watcher Decision Engine <watcher_decision_engine_definition>`
- **ONGOING** : the :ref:`Audit <audit_definition>` is currently being
processed by the
:ref:`Watcher Decision Engine <watcher_decision_engine_definition>`
- **SUCCEEDED** : the :ref:`Audit <audit_definition>` has been executed
successfully (note that it may not necessarily produce a
:ref:`Solution <solution_definition>`).
- **FAILED** : an error occurred while executing the
:ref:`Audit <audit_definition>`
- **DELETED** : the :ref:`Audit <audit_definition>` is still stored in the
:ref:`Watcher database <watcher_database_definition>` but is not returned
any more through the Watcher APIs.
- **CANCELLED** : the :ref:`Audit <audit_definition>` was in **PENDING** or
**ONGOING** state and was cancelled by the
:ref:`Administrator <administrator_definition>`
To see the life-cycle and description of an :ref:`Audit <audit_definition>`
states, visit :ref:`the Audit State machine <audit_state_machine>`.
"""
import datetime
@@ -69,6 +49,28 @@ from watcher.decision_engine import rpcapi
from watcher import objects
class AuditPostType(wtypes.Base):
audit_template_uuid = wtypes.wsattr(types.uuid, mandatory=True)
type = wtypes.wsattr(wtypes.text, mandatory=True)
deadline = wtypes.wsattr(datetime.datetime, mandatory=False)
state = wsme.wsattr(wtypes.text, readonly=True,
default=objects.audit.State.PENDING)
def as_audit(self):
audit_type_values = [val.value for val in objects.audit.AuditType]
if self.type not in audit_type_values:
raise exception.AuditTypeNotFound(audit_type=self.type)
return Audit(
audit_template_id=self.audit_template_uuid,
type=self.type,
deadline=self.deadline)
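AuditPostType.as_audit() rejects any `type` value outside the objects.audit.AuditType enum before building the Audit. A self-contained sketch of that check (the enum members here are illustrative stand-ins, not necessarily Watcher's actual values):

```python
import enum


class AuditType(enum.Enum):
    # Illustrative members; the real set comes from objects.audit.AuditType.
    ONESHOT = "ONESHOT"
    CONTINUOUS = "CONTINUOUS"


def check_audit_type(audit_type):
    # Mirrors the membership test in AuditPostType.as_audit().
    allowed = [member.value for member in AuditType]
    if audit_type not in allowed:
        raise ValueError("Audit type %s not found" % audit_type)
    return audit_type
```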
class AuditPatchType(types.JsonPatchType):
@staticmethod
@@ -345,12 +347,13 @@ class AuditsController(rest.RestController):
audit_uuid)
return Audit.convert_with_links(rpc_audit)
@wsme_pecan.wsexpose(Audit, body=Audit, status_code=201)
def post(self, audit):
@wsme_pecan.wsexpose(Audit, body=AuditPostType, status_code=201)
def post(self, audit_p):
"""Create a new audit.
:param audit: an audit within the request body.
:param audit_p: an audit within the request body.
"""
audit = audit_p.as_audit()
if self.from_audits:
raise exception.OperationNotPermitted


@@ -56,22 +56,157 @@ import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from watcher._i18n import _
from watcher.api.controllers import base
from watcher.api.controllers import link
from watcher.api.controllers.v1 import collection
from watcher.api.controllers.v1 import types
from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import context as context_utils
from watcher.common import exception
from watcher.common import utils as common_utils
from watcher import objects
class AuditTemplatePostType(wtypes.Base):
_ctx = context_utils.make_context()
name = wtypes.wsattr(wtypes.text, mandatory=True)
"""Name of this audit template"""
description = wtypes.wsattr(wtypes.text, mandatory=False)
"""Short description of this audit template"""
deadline = wsme.wsattr(datetime.datetime, mandatory=False)
"""deadline of the audit template"""
host_aggregate = wsme.wsattr(wtypes.IntegerType(minimum=1),
mandatory=False)
"""ID of the Nova host aggregate targeted by the audit template"""
extra = wtypes.wsattr({wtypes.text: types.jsontype}, mandatory=False)
"""The metadata of the audit template"""
goal = wtypes.wsattr(wtypes.text, mandatory=True)
"""Goal UUID or name of the audit template"""
strategy = wtypes.wsattr(wtypes.text, mandatory=False)
"""Strategy UUID or name of the audit template"""
version = wtypes.text
"""Internal version of the audit template"""
def as_audit_template(self):
return AuditTemplate(
name=self.name,
description=self.description,
deadline=self.deadline,
host_aggregate=self.host_aggregate,
extra=self.extra,
goal_id=self.goal, # Dirty trick ...
goal=self.goal,
strategy_id=self.strategy, # Dirty trick ...
strategy_uuid=self.strategy,
version=self.version,
)
@staticmethod
def validate(audit_template):
available_goals = objects.Goal.list(AuditTemplatePostType._ctx)
available_goal_uuids_map = {g.uuid: g for g in available_goals}
available_goal_names_map = {g.name: g for g in available_goals}
if audit_template.goal in available_goal_uuids_map:
goal = available_goal_uuids_map[audit_template.goal]
elif audit_template.goal in available_goal_names_map:
goal = available_goal_names_map[audit_template.goal]
else:
raise exception.InvalidGoal(goal=audit_template.goal)
if audit_template.strategy:
available_strategies = objects.Strategy.list(
AuditTemplatePostType._ctx)
available_strategies_map = {
s.uuid: s for s in available_strategies}
if audit_template.strategy not in available_strategies_map:
raise exception.InvalidStrategy(
strategy=audit_template.strategy)
strategy = available_strategies_map[audit_template.strategy]
# Check that the strategy we indicate is actually related to the
# specified goal
if strategy.goal_id != goal.id:
choices = ["'%s' (%s)" % (s.uuid, s.name)
for s in available_strategies]
raise exception.InvalidStrategy(
message=_(
"'%(strategy)s' strategy does not relate to the "
"'%(goal)s' goal. Possible choices: %(choices)s")
% dict(strategy=strategy.name, goal=goal.name,
choices=", ".join(choices)))
audit_template.strategy = strategy.uuid
# We force the UUID so that we do not need to query the DB with the
# name afterwards
audit_template.goal = goal.uuid
return audit_template
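The validate() helper above resolves a goal given either its UUID or its name by building two lookup maps. A stripped-down sketch of the same pattern, using plain records instead of objects.Goal:

```python
import collections

# Illustrative stand-in for objects.Goal records.
Goal = collections.namedtuple('Goal', ['uuid', 'name'])


def resolve_goal(goal_ref, available_goals):
    # Same dual lookup as AuditTemplatePostType.validate(): try UUIDs
    # first, then names, and fail loudly on anything else.
    by_uuid = {g.uuid: g for g in available_goals}
    by_name = {g.name: g for g in available_goals}
    if goal_ref in by_uuid:
        return by_uuid[goal_ref]
    if goal_ref in by_name:
        return by_name[goal_ref]
    raise ValueError("Invalid goal: %s" % goal_ref)
```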
class AuditTemplatePatchType(types.JsonPatchType):
_ctx = context_utils.make_context()
@staticmethod
def mandatory_attrs():
return []
@staticmethod
def validate(patch):
if patch.path == "/goal" and patch.op != "remove":
AuditTemplatePatchType._validate_goal(patch)
elif patch.path == "/goal" and patch.op == "remove":
raise exception.OperationNotPermitted(
_("Cannot remove 'goal' attribute "
"from an audit template"))
if patch.path == "/strategy":
AuditTemplatePatchType._validate_strategy(patch)
return types.JsonPatchType.validate(patch)
@staticmethod
def _validate_goal(patch):
patch.path = "/goal_id"
goal = patch.value
if goal:
available_goals = objects.Goal.list(
AuditTemplatePatchType._ctx)
available_goal_uuids_map = {g.uuid: g for g in available_goals}
available_goal_names_map = {g.name: g for g in available_goals}
if goal in available_goal_uuids_map:
patch.value = available_goal_uuids_map[goal].id
elif goal in available_goal_names_map:
patch.value = available_goal_names_map[goal].id
else:
raise exception.InvalidGoal(goal=goal)
@staticmethod
def _validate_strategy(patch):
patch.path = "/strategy_id"
strategy = patch.value
if strategy:
available_strategies = objects.Strategy.list(
AuditTemplatePatchType._ctx)
available_strategy_uuids_map = {
s.uuid: s for s in available_strategies}
available_strategy_names_map = {
s.name: s for s in available_strategies}
if strategy in available_strategy_uuids_map:
patch.value = available_strategy_uuids_map[strategy].id
elif strategy in available_strategy_names_map:
patch.value = available_strategy_names_map[strategy].id
else:
raise exception.InvalidStrategy(strategy=strategy)
class AuditTemplate(base.APIBase):
"""API representation of an audit template.
@@ -80,7 +215,90 @@ class AuditTemplate(base.APIBase):
between the internal object model and the API representation of an
audit template.
"""
uuid = types.uuid
_goal_uuid = None
_goal_name = None
_strategy_uuid = None
_strategy_name = None
def _get_goal(self, value):
if value == wtypes.Unset:
return None
goal = None
try:
if (common_utils.is_uuid_like(value) or
common_utils.is_int_like(value)):
goal = objects.Goal.get(
pecan.request.context, value)
else:
goal = objects.Goal.get_by_name(
pecan.request.context, value)
except exception.GoalNotFound:
pass
if goal:
self.goal_id = goal.id
return goal
def _get_strategy(self, value):
if value == wtypes.Unset:
return None
strategy = None
try:
if (common_utils.is_uuid_like(value) or
common_utils.is_int_like(value)):
strategy = objects.Strategy.get(
pecan.request.context, value)
else:
strategy = objects.Strategy.get_by_name(
pecan.request.context, value)
except exception.StrategyNotFound:
pass
if strategy:
self.strategy_id = strategy.id
return strategy
def _get_goal_uuid(self):
return self._goal_uuid
def _set_goal_uuid(self, value):
if value and self._goal_uuid != value:
self._goal_uuid = None
goal = self._get_goal(value)
if goal:
self._goal_uuid = goal.uuid
def _get_strategy_uuid(self):
return self._strategy_uuid
def _set_strategy_uuid(self, value):
if value and self._strategy_uuid != value:
self._strategy_uuid = None
strategy = self._get_strategy(value)
if strategy:
self._strategy_uuid = strategy.uuid
def _get_goal_name(self):
return self._goal_name
def _set_goal_name(self, value):
if value and self._goal_name != value:
self._goal_name = None
goal = self._get_goal(value)
if goal:
self._goal_name = goal.name
def _get_strategy_name(self):
return self._strategy_name
def _set_strategy_name(self, value):
if value and self._strategy_name != value:
self._strategy_name = None
strategy = self._get_strategy(value)
if strategy:
self._strategy_name = strategy.name
uuid = wtypes.wsattr(types.uuid, readonly=True)
"""Unique UUID for this audit template"""
name = wtypes.text
@@ -98,8 +316,21 @@ class AuditTemplate(base.APIBase):
extra = {wtypes.text: types.jsontype}
"""The metadata of the audit template"""
goal = wtypes.text
"""Goal type of the audit template"""
goal_uuid = wsme.wsproperty(
wtypes.text, _get_goal_uuid, _set_goal_uuid, mandatory=True)
"""Goal UUID the audit template refers to"""
goal_name = wsme.wsproperty(
wtypes.text, _get_goal_name, _set_goal_name, mandatory=False)
"""The name of the goal this audit template refers to"""
strategy_uuid = wsme.wsproperty(
wtypes.text, _get_strategy_uuid, _set_strategy_uuid, mandatory=False)
"""Strategy UUID the audit template refers to"""
strategy_name = wsme.wsproperty(
wtypes.text, _get_strategy_name, _set_strategy_name, mandatory=False)
"""The name of the strategy this audit template refers to"""
version = wtypes.text
"""Internal version of the audit template"""
@@ -112,20 +343,43 @@ class AuditTemplate(base.APIBase):
def __init__(self, **kwargs):
super(AuditTemplate, self).__init__()
self.fields = []
for field in objects.AuditTemplate.fields:
fields = list(objects.AuditTemplate.fields)
for k in fields:
# Skip fields we do not expose.
if not hasattr(self, field):
if not hasattr(self, k):
continue
self.fields.append(field)
setattr(self, field, kwargs.get(field, wtypes.Unset))
self.fields.append(k)
setattr(self, k, kwargs.get(k, wtypes.Unset))
self.fields.append('goal_id')
self.fields.append('strategy_id')
# goal_uuid & strategy_uuid are not part of
# objects.AuditTemplate.fields because they're API-only attributes.
self.fields.append('goal_uuid')
self.fields.append('goal_name')
self.fields.append('strategy_uuid')
self.fields.append('strategy_name')
setattr(self, 'goal_uuid', kwargs.get('goal_id', wtypes.Unset))
setattr(self, 'goal_name', kwargs.get('goal_id', wtypes.Unset))
setattr(self, 'strategy_uuid',
kwargs.get('strategy_id', wtypes.Unset))
setattr(self, 'strategy_name',
kwargs.get('strategy_id', wtypes.Unset))
@staticmethod
def _convert_with_links(audit_template, url, expand=True):
if not expand:
audit_template.unset_fields_except(['uuid', 'name',
'host_aggregate', 'goal'])
audit_template.unset_fields_except(
['uuid', 'name', 'host_aggregate', 'goal_uuid', 'goal_name',
'strategy_uuid', 'strategy_name'])
# The numeric ID should not be exposed to
# the user, it's internal only.
audit_template.goal_id = wtypes.Unset
audit_template.strategy_id = wtypes.Unset
audit_template.links = [link.Link.make_link('self', url,
'audit_templates',
@@ -133,8 +387,7 @@ class AuditTemplate(base.APIBase):
link.Link.make_link('bookmark', url,
'audit_templates',
audit_template.uuid,
bookmark=True)
]
bookmark=True)]
return audit_template
@classmethod
@@ -149,7 +402,8 @@ class AuditTemplate(base.APIBase):
name='My Audit Template',
description='Description of my audit template',
host_aggregate=5,
goal='DUMMY',
goal_uuid='83e44733-b640-40e2-8d8a-7dd3be7134e6',
strategy_uuid='367d826e-b6a4-4b70-bc44-c3f6fe1c9986',
extra={'automatic': True},
created_at=datetime.datetime.utcnow(),
deleted_at=None,
@@ -170,12 +424,12 @@ class AuditTemplateCollection(collection.Collection):
@staticmethod
def convert_with_links(rpc_audit_templates, limit, url=None, expand=False,
**kwargs):
collection = AuditTemplateCollection()
collection.audit_templates = \
[AuditTemplate.convert_with_links(p, expand)
for p in rpc_audit_templates]
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
at_collection = AuditTemplateCollection()
at_collection.audit_templates = [
AuditTemplate.convert_with_links(p, expand)
for p in rpc_audit_templates]
at_collection.next = at_collection.get_next(limit, url=url, **kwargs)
return at_collection
@classmethod
def sample(cls):
@@ -201,7 +455,8 @@ class AuditTemplatesController(rest.RestController):
sort_key, sort_dir, expand=False,
resource_url=None):
api_utils.validate_search_filters(
filters, objects.audit_template.AuditTemplate.fields.keys())
filters, list(objects.audit_template.AuditTemplate.fields.keys()) +
["goal_uuid", "goal_name", "strategy_uuid", "strategy_name"])
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
@@ -225,30 +480,43 @@ class AuditTemplatesController(rest.RestController):
sort_key=sort_key,
sort_dir=sort_dir)
@wsme_pecan.wsexpose(AuditTemplateCollection, wtypes.text,
@wsme_pecan.wsexpose(AuditTemplateCollection, wtypes.text, wtypes.text,
types.uuid, int, wtypes.text, wtypes.text)
def get_all(self, goal=None, marker=None, limit=None,
sort_key='id', sort_dir='asc'):
def get_all(self, goal=None, strategy=None, marker=None,
limit=None, sort_key='id', sort_dir='asc'):
"""Retrieve a list of audit templates.
:param goal: goal name to filter by (case sensitive)
:param goal: goal UUID or name to filter by
:param strategy: strategy UUID or name to filter by
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
"""
filters = api_utils.as_filters_dict(goal=goal)
filters = {}
if goal:
if common_utils.is_uuid_like(goal):
filters['goal_uuid'] = goal
else:
filters['goal_name'] = goal
if strategy:
if common_utils.is_uuid_like(strategy):
filters['strategy_uuid'] = strategy
else:
filters['strategy_name'] = strategy
return self._get_audit_templates_collection(
filters, marker, limit, sort_key, sort_dir)
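get_all() and detail() dispatch each filter value to a *_uuid or *_name key depending on whether it looks like a UUID. A self-contained sketch of that dispatch, with a simplified stand-in for watcher.common.utils.is_uuid_like:

```python
import uuid


def is_uuid_like(value):
    # Simplified stand-in for watcher.common.utils.is_uuid_like; only
    # accepts the canonical hyphenated form.
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (TypeError, ValueError, AttributeError):
        return False


def build_filters(goal=None, strategy=None):
    # Mirrors the filter dispatch in get_all()/detail().
    filters = {}
    if goal:
        key = 'goal_uuid' if is_uuid_like(goal) else 'goal_name'
        filters[key] = goal
    if strategy:
        key = 'strategy_uuid' if is_uuid_like(strategy) else 'strategy_name'
        filters[key] = strategy
    return filters
```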
@wsme_pecan.wsexpose(AuditTemplateCollection, wtypes.text, types.uuid, int,
wtypes.text, wtypes.text)
def detail(self, goal=None, marker=None, limit=None,
sort_key='id', sort_dir='asc'):
@wsme_pecan.wsexpose(AuditTemplateCollection, wtypes.text, wtypes.text,
types.uuid, int, wtypes.text, wtypes.text)
def detail(self, goal=None, strategy=None, marker=None,
limit=None, sort_key='id', sort_dir='asc'):
"""Retrieve a list of audit templates with detail.
:param goal: goal name to filter by (case sensitive)
:param goal: goal UUID or name to filter by
:param strategy: strategy UUID or name to filter by
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
@@ -259,7 +527,18 @@ class AuditTemplatesController(rest.RestController):
if parent != "audit_templates":
raise exception.HTTPNotFound
filters = api_utils.as_filters_dict(goal=goal)
filters = {}
if goal:
if common_utils.is_uuid_like(goal):
filters['goal_uuid'] = goal
else:
filters['goal_name'] = goal
if strategy:
if common_utils.is_uuid_like(strategy):
filters['strategy_uuid'] = strategy
else:
filters['strategy_name'] = strategy
expand = True
resource_url = '/'.join(['audit_templates', 'detail'])
@@ -287,17 +566,21 @@ class AuditTemplatesController(rest.RestController):
return AuditTemplate.convert_with_links(rpc_audit_template)
@wsme_pecan.wsexpose(AuditTemplate, body=AuditTemplate, status_code=201)
def post(self, audit_template):
@wsme.validate(types.uuid, AuditTemplatePostType)
@wsme_pecan.wsexpose(AuditTemplate, body=AuditTemplatePostType,
status_code=201)
def post(self, audit_template_postdata):
"""Create a new audit template.
:param audit template: an audit template within the request body.
:param audit_template_postdata: the audit template POST data
from the request body.
"""
if self.from_audit_templates:
raise exception.OperationNotPermitted
audit_template_dict = audit_template.as_dict()
context = pecan.request.context
audit_template = audit_template_postdata.as_audit_template()
audit_template_dict = audit_template.as_dict()
new_audit_template = objects.AuditTemplate(context,
**audit_template_dict)
new_audit_template.create(context)


@@ -44,7 +44,7 @@ class Collection(base.APIBase):
q_args = ''.join(['%s=%s&' % (key, kwargs[key]) for key in kwargs])
next_args = '?%(args)slimit=%(limit)d&marker=%(marker)s' % {
'args': q_args, 'limit': limit,
'marker': self.collection[-1].uuid}
'marker': getattr(self.collection[-1], "uuid")}
return link.Link.make_link('next', pecan.request.host_url,
resource_url, next_args).href


@@ -46,61 +46,64 @@ from watcher.api.controllers.v1 import collection
from watcher.api.controllers.v1 import types
from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import exception
from watcher.common import utils as common_utils
from watcher import objects
CONF = cfg.CONF
class Goal(base.APIBase):
"""API representation of a action.
"""API representation of a goal.
This class enforces type checking and value constraints, and converts
between the internal object model and the API representation of a action.
between the internal object model and the API representation of a goal.
"""
uuid = types.uuid
"""Unique UUID for this goal"""
name = wtypes.text
"""Name of the goal"""
strategy = wtypes.text
"""The strategy associated with the goal"""
uuid = types.uuid
"""Unused field"""
display_name = wtypes.text
"""Localized name of the goal"""
links = wsme.wsattr([link.Link], readonly=True)
"""A list containing a self link and associated action links"""
"""A list containing a self link and associated audit template links"""
def __init__(self, **kwargs):
super(Goal, self).__init__()
self.fields = []
self.fields.append('uuid')
self.fields.append('name')
self.fields.append('strategy')
setattr(self, 'name', kwargs.get('name',
wtypes.Unset))
setattr(self, 'strategy', kwargs.get('strategy',
wtypes.Unset))
self.fields.append('display_name')
setattr(self, 'uuid', kwargs.get('uuid', wtypes.Unset))
setattr(self, 'name', kwargs.get('name', wtypes.Unset))
setattr(self, 'display_name', kwargs.get('display_name', wtypes.Unset))
@staticmethod
def _convert_with_links(goal, url, expand=True):
if not expand:
goal.unset_fields_except(['name', 'strategy'])
goal.unset_fields_except(['uuid', 'name', 'display_name'])
goal.links = [link.Link.make_link('self', url,
'goals', goal.name),
'goals', goal.uuid),
link.Link.make_link('bookmark', url,
'goals', goal.name,
'goals', goal.uuid,
bookmark=True)]
return goal
@classmethod
def convert_with_links(cls, goal, expand=True):
goal = Goal(**goal)
goal = Goal(**goal.as_dict())
return cls._convert_with_links(goal, pecan.request.host_url, expand)
@classmethod
def sample(cls, expand=True):
sample = cls(name='27e3153e-d5bf-4b7e-b517-fb518e17f34c',
strategy='action description')
sample = cls(uuid='27e3153e-d5bf-4b7e-b517-fb518e17f34c',
name='DUMMY',
display_name='Dummy strategy')
return cls._convert_with_links(sample, 'http://localhost:9322', expand)
@@ -117,27 +120,28 @@ class GoalCollection(collection.Collection):
@staticmethod
def convert_with_links(goals, limit, url=None, expand=False,
**kwargs):
collection = GoalCollection()
collection.goals = [Goal.convert_with_links(g, expand) for g in goals]
goal_collection = GoalCollection()
goal_collection.goals = [
Goal.convert_with_links(g, expand) for g in goals]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'strategy':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
collection.goals = sorted(
collection.goals,
key=lambda goal: goal.name,
goal_collection.goals = sorted(
goal_collection.goals,
key=lambda goal: goal.uuid,
reverse=reverse)
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
goal_collection.next = goal_collection.get_next(
limit, url=url, **kwargs)
return goal_collection
@classmethod
def sample(cls):
sample = cls()
sample.actions = [Goal.sample(expand=False)]
sample.goals = [Goal.sample(expand=False)]
return sample
@@ -154,51 +158,49 @@ class GoalsController(rest.RestController):
'detail': ['GET'],
}
def _get_goals_collection(self, limit,
sort_key, sort_dir, expand=False,
resource_url=None, goal_name=None):
def _get_goals_collection(self, marker, limit, sort_key, sort_dir,
expand=False, resource_url=None):
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
goals = []
sort_db_key = (sort_key if sort_key in objects.Goal.fields.keys()
else None)
if not goal_name and goal_name in CONF.watcher_goals.goals.keys():
goals.append({'name': goal_name, 'strategy': goals[goal_name]})
else:
for name, strategy in CONF.watcher_goals.goals.items():
goals.append({'name': name, 'strategy': strategy})
marker_obj = None
if marker:
marker_obj = objects.Goal.get_by_uuid(
pecan.request.context, marker)
return GoalCollection.convert_with_links(goals[:limit], limit,
goals = objects.Goal.list(pecan.request.context, limit, marker_obj,
sort_key=sort_db_key, sort_dir=sort_dir)
return GoalCollection.convert_with_links(goals, limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
@wsme_pecan.wsexpose(GoalCollection, int, wtypes.text, wtypes.text)
def get_all(self, limit=None,
sort_key='name', sort_dir='asc'):
@wsme_pecan.wsexpose(GoalCollection, wtypes.text,
int, wtypes.text, wtypes.text)
def get_all(self, marker=None, limit=None, sort_key='id', sort_dir='asc'):
"""Retrieve a list of goals.
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
to get only actions for that goal.
"""
return self._get_goals_collection(limit, sort_key, sort_dir)
return self._get_goals_collection(marker, limit, sort_key, sort_dir)
@wsme_pecan.wsexpose(GoalCollection, wtypes.text, int,
wtypes.text, wtypes.text)
def detail(self, goal_name=None, limit=None,
sort_key='name', sort_dir='asc'):
"""Retrieve a list of actions with detail.
def detail(self, marker=None, limit=None, sort_key='id', sort_dir='asc'):
"""Retrieve a list of goals with detail.
:param goal_name: name of a goal, to get only goals for that
action.
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
to get only goals for that goal.
"""
# NOTE(lucasagomes): /detail should only work against collections
parent = pecan.request.path.split('/')[:-1][-1]
@@ -206,21 +208,23 @@ class GoalsController(rest.RestController):
raise exception.HTTPNotFound
expand = True
resource_url = '/'.join(['goals', 'detail'])
return self._get_goals_collection(limit, sort_key, sort_dir,
expand, resource_url, goal_name)
return self._get_goals_collection(marker, limit, sort_key, sort_dir,
expand, resource_url)
@wsme_pecan.wsexpose(Goal, wtypes.text)
def get_one(self, goal_name):
def get_one(self, goal):
"""Retrieve information about the given goal.
:param goal_name: name of the goal.
:param goal: UUID or name of the goal.
"""
if self.from_goals:
raise exception.OperationNotPermitted
goals = CONF.watcher_goals.goals
goal = {}
if goal_name in goals.keys():
goal = {'name': goal_name, 'strategy': goals[goal_name]}
if common_utils.is_uuid_like(goal):
get_goal_func = objects.Goal.get_by_uuid
else:
get_goal_func = objects.Goal.get_by_name
return Goal.convert_with_links(goal)
rpc_goal = get_goal_func(pecan.request.context, goal)
return Goal.convert_with_links(rpc_goal)


@@ -0,0 +1,281 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A :ref:`Strategy <strategy_definition>` is an algorithm implementation which is
able to find a :ref:`Solution <solution_definition>` for a given
:ref:`Goal <goal_definition>`.
There may be several potential strategies which are able to achieve the same
:ref:`Goal <goal_definition>`. This is why it is possible to configure which
specific :ref:`Strategy <strategy_definition>` should be used for each goal.
Some strategies may provide better optimization results but may take more time
to find an optimal :ref:`Solution <solution_definition>`.
"""
from oslo_config import cfg
import pecan
from pecan import rest
import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from watcher.api.controllers import base
from watcher.api.controllers import link
from watcher.api.controllers.v1 import collection
from watcher.api.controllers.v1 import types
from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import exception
from watcher.common import utils as common_utils
from watcher import objects
CONF = cfg.CONF
class Strategy(base.APIBase):
"""API representation of a strategy.
This class enforces type checking and value constraints, and converts
between the internal object model and the API representation of a strategy.
"""
_goal_uuid = None
def _get_goal(self, value):
if value == wtypes.Unset:
return None
goal = None
try:
if (common_utils.is_uuid_like(value) or
common_utils.is_int_like(value)):
goal = objects.Goal.get(pecan.request.context, value)
else:
goal = objects.Goal.get_by_name(pecan.request.context, value)
except exception.GoalNotFound:
pass
if goal:
self.goal_id = goal.id
return goal
def _get_goal_uuid(self):
return self._goal_uuid
def _set_goal_uuid(self, value):
if value and self._goal_uuid != value:
self._goal_uuid = None
goal = self._get_goal(value)
if goal:
self._goal_uuid = goal.uuid
uuid = types.uuid
"""Unique UUID for this strategy"""
name = wtypes.text
"""Name of the strategy"""
display_name = wtypes.text
"""Localized name of the strategy"""
links = wsme.wsattr([link.Link], readonly=True)
"""A list containing a self link and associated goal links"""
goal_uuid = wsme.wsproperty(wtypes.text, _get_goal_uuid, _set_goal_uuid,
mandatory=True)
"""The UUID of the goal this audit refers to"""
def __init__(self, **kwargs):
super(Strategy, self).__init__()
self.fields = []
self.fields.append('uuid')
self.fields.append('name')
self.fields.append('display_name')
self.fields.append('goal_uuid')
setattr(self, 'uuid', kwargs.get('uuid', wtypes.Unset))
setattr(self, 'name', kwargs.get('name', wtypes.Unset))
setattr(self, 'display_name', kwargs.get('display_name', wtypes.Unset))
setattr(self, 'goal_uuid', kwargs.get('goal_id', wtypes.Unset))
@staticmethod
def _convert_with_links(strategy, url, expand=True):
if not expand:
strategy.unset_fields_except(
['uuid', 'name', 'display_name', 'goal_uuid'])
strategy.links = [
link.Link.make_link('self', url, 'strategies', strategy.uuid),
link.Link.make_link('bookmark', url, 'strategies', strategy.uuid,
bookmark=True)]
return strategy
@classmethod
def convert_with_links(cls, strategy, expand=True):
strategy = Strategy(**strategy.as_dict())
return cls._convert_with_links(
strategy, pecan.request.host_url, expand)
@classmethod
def sample(cls, expand=True):
sample = cls(uuid='27e3153e-d5bf-4b7e-b517-fb518e17f34c',
name='DUMMY',
display_name='Dummy strategy')
return cls._convert_with_links(sample, 'http://localhost:9322', expand)
class StrategyCollection(collection.Collection):
"""API representation of a collection of strategies."""
strategies = [Strategy]
"""A list containing strategies objects"""
def __init__(self, **kwargs):
super(StrategyCollection, self).__init__()
self._type = 'strategies'
@staticmethod
def convert_with_links(strategies, limit, url=None, expand=False,
**kwargs):
strategy_collection = StrategyCollection()
strategy_collection.strategies = [
Strategy.convert_with_links(g, expand) for g in strategies]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'strategy':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
strategy_collection.strategies = sorted(
strategy_collection.strategies,
key=lambda strategy: strategy.uuid,
reverse=reverse)
strategy_collection.next = strategy_collection.get_next(
limit, url=url, **kwargs)
return strategy_collection
@classmethod
def sample(cls):
sample = cls()
sample.strategies = [Strategy.sample(expand=False)]
return sample
class StrategiesController(rest.RestController):
"""REST controller for Strategies."""
def __init__(self):
super(StrategiesController, self).__init__()
from_strategies = False
"""A flag to indicate if the requests to this controller are coming
from the top-level resource Strategies."""
_custom_actions = {
'detail': ['GET'],
}
def _get_strategies_collection(self, filters, marker, limit, sort_key,
sort_dir, expand=False, resource_url=None):
api_utils.validate_search_filters(
filters, list(objects.strategy.Strategy.fields.keys()) +
["goal_uuid", "goal_name"])
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Strategy.fields.keys()
else None)
marker_obj = None
if marker:
marker_obj = objects.Strategy.get_by_uuid(
pecan.request.context, marker)
strategies = objects.Strategy.list(
pecan.request.context, limit, marker_obj, filters=filters,
sort_key=sort_db_key, sort_dir=sort_dir)
return StrategyCollection.convert_with_links(
strategies, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
@wsme_pecan.wsexpose(StrategyCollection, wtypes.text, wtypes.text,
int, wtypes.text, wtypes.text)
def get_all(self, goal=None, marker=None, limit=None,
sort_key='id', sort_dir='asc'):
"""Retrieve a list of strategies.
:param goal: goal UUID or name to filter by.
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
"""
filters = {}
if goal:
if common_utils.is_uuid_like(goal):
filters['goal_uuid'] = goal
else:
filters['goal_name'] = goal
return self._get_strategies_collection(
filters, marker, limit, sort_key, sort_dir)
@wsme_pecan.wsexpose(StrategyCollection, wtypes.text, wtypes.text, int,
wtypes.text, wtypes.text)
def detail(self, goal=None, marker=None, limit=None,
sort_key='id', sort_dir='asc'):
"""Retrieve a list of strategies with detail.
:param goal: goal UUID or name to filter by.
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
"""
# NOTE(lucasagomes): /detail should only work against collections
parent = pecan.request.path.split('/')[:-1][-1]
if parent != "strategies":
raise exception.HTTPNotFound
expand = True
resource_url = '/'.join(['strategies', 'detail'])
filters = {}
if goal:
if common_utils.is_uuid_like(goal):
filters['goal_uuid'] = goal
else:
filters['goal_name'] = goal
return self._get_strategies_collection(
filters, marker, limit, sort_key, sort_dir, expand, resource_url)
@wsme_pecan.wsexpose(Strategy, wtypes.text)
def get_one(self, strategy):
"""Retrieve information about the given strategy.
:param strategy: UUID or name of the strategy.
"""
if self.from_strategies:
raise exception.OperationNotPermitted
if common_utils.is_uuid_like(strategy):
get_strategy_func = objects.Strategy.get_by_uuid
else:
get_strategy_func = objects.Strategy.get_by_name
rpc_strategy = get_strategy_func(pecan.request.context, strategy)
return Strategy.convert_with_links(rpc_strategy)


@@ -31,11 +31,6 @@ class UuidOrNameType(wtypes.UserType):
basetype = wtypes.text
name = 'uuid_or_name'
# FIXME(lucasagomes): When used with wsexpose decorator WSME will try
# to get the name of the type by accessing it's __name__ attribute.
# Remove this __name__ attribute once it's fixed in WSME.
# https://bugs.launchpad.net/wsme/+bug/1265590
__name__ = name
@staticmethod
def validate(value):
@@ -55,11 +50,6 @@ class NameType(wtypes.UserType):
basetype = wtypes.text
name = 'name'
# FIXME(lucasagomes): When used with wsexpose decorator WSME will try
# to get the name of the type by accessing it's __name__ attribute.
# Remove this __name__ attribute once it's fixed in WSME.
# https://bugs.launchpad.net/wsme/+bug/1265590
__name__ = name
@staticmethod
def validate(value):
@@ -79,11 +69,6 @@ class UuidType(wtypes.UserType):
basetype = wtypes.text
name = 'uuid'
# FIXME(lucasagomes): When used with wsexpose decorator WSME will try
# to get the name of the type by accessing it's __name__ attribute.
# Remove this __name__ attribute once it's fixed in WSME.
# https://bugs.launchpad.net/wsme/+bug/1265590
__name__ = name
@staticmethod
def validate(value):
@@ -103,11 +88,6 @@ class BooleanType(wtypes.UserType):
basetype = wtypes.text
name = 'boolean'
# FIXME(lucasagomes): When used with wsexpose decorator WSME will try
# to get the name of the type by accessing it's __name__ attribute.
# Remove this __name__ attribute once it's fixed in WSME.
# https://bugs.launchpad.net/wsme/+bug/1265590
__name__ = name
@staticmethod
def validate(value):
@@ -129,11 +109,6 @@ class JsonType(wtypes.UserType):
basetype = wtypes.text
name = 'json'
# FIXME(lucasagomes): When used with wsexpose decorator WSME will try
# to get the name of the type by accessing it's __name__ attribute.
# Remove this __name__ attribute once it's fixed in WSME.
# https://bugs.launchpad.net/wsme/+bug/1265590
__name__ = name
def __str__(self):
# These are the json serializable native types


@@ -53,16 +53,15 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
event_types.EventTypes.LAUNCH_ACTION_PLAN,
ap_objects.State.ONGOING)
applier = default.DefaultApplier(self.ctx, self.applier_manager)
result = applier.execute(self.action_plan_uuid)
applier.execute(self.action_plan_uuid)
state = ap_objects.State.SUCCEEDED
except Exception as e:
LOG.exception(e)
result = False
state = ap_objects.State.FAILED
finally:
if result is True:
status = ap_objects.State.SUCCEEDED
else:
status = ap_objects.State.FAILED
# update state
self.notify(self.action_plan_uuid,
event_types.EventTypes.LAUNCH_ACTION_PLAN,
status)
state)


@@ -23,18 +23,26 @@ import abc
import six
from watcher.common import clients
from watcher.common.loader import loadable
@six.add_metaclass(abc.ABCMeta)
class BaseAction(object):
class BaseAction(loadable.Loadable):
# NOTE(jed) by convention we decided
# that the attribute "resource_id" is the unique id of
# the resource to which the Action applies to allow us to use it in the
# watcher dashboard and will be nested in input_parameters
RESOURCE_ID = 'resource_id'
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
def __init__(self, config, osc=None):
"""Constructor
:param config: A mapping containing the configuration of this action
:type config: dict
:param osc: an OpenStackClients instance, defaults to None
:type osc: :py:class:`~.OpenStackClients` instance, optional
"""
super(BaseAction, self).__init__(config)
self._input_parameters = {}
self._osc = osc
@@ -56,6 +64,15 @@ class BaseAction(object):
def resource_id(self):
return self.input_parameters[self.RESOURCE_ID]
@classmethod
def get_config_opts(cls):
"""Defines the configuration options to be associated to this loadable
:return: A list of configuration options relative to this Loadable
:rtype: list of :class:`oslo_config.cfg.Opt` instances
"""
return []
@abc.abstractmethod
def execute(self):
"""Executes the main logic of the action


@@ -17,12 +17,8 @@
from __future__ import unicode_literals
from oslo_log import log
from watcher.common.loader import default
LOG = log.getLogger(__name__)
class DefaultActionLoader(default.DefaultLoader):
def __init__(self):


@@ -31,27 +31,24 @@ LOG = log.getLogger(__name__)
class Migrate(base.BaseAction):
"""Live-Migrates a server to a destination nova-compute host
"""Migrates a server to a destination nova-compute host
This action will allow you to migrate a server to another compute
destination host. As of now, only live migration can be performed using
this action.
.. If either host uses shared storage, you can use ``live``
.. as ``migration_type``. If both source and destination hosts provide
.. local disks, you can set the block_migration parameter to True (not
.. supported for yet).
destination host.
Migration type 'live' can only be used for migrating active VMs.
Migration type 'cold' can be used for migrating non-active VMs
as well as active VMs, which will be shut down while migrating.
The action schema is::
schema = Schema({
'resource_id': str, # should be a UUID
'migration_type': str, # choices -> "live" only
'migration_type': str, # choices -> "live", "cold"
'dst_hypervisor': str,
'src_hypervisor': str,
})
The `resource_id` is the UUID of the server to migrate. Only live migration
is supported.
The `resource_id` is the UUID of the server to migrate.
The `src_hypervisor` and `dst_hypervisor` parameters are respectively the
source and the destination compute hostname (list of available compute
hosts is returned by this command: ``nova service-list --binary
@@ -61,6 +58,7 @@ class Migrate(base.BaseAction):
# input parameters constants
MIGRATION_TYPE = 'migration_type'
LIVE_MIGRATION = 'live'
COLD_MIGRATION = 'cold'
DST_HYPERVISOR = 'dst_hypervisor'
SRC_HYPERVISOR = 'src_hypervisor'
@@ -77,7 +75,8 @@ class Migrate(base.BaseAction):
voluptuous.Required(self.RESOURCE_ID): self.check_resource_id,
voluptuous.Required(self.MIGRATION_TYPE,
default=self.LIVE_MIGRATION):
voluptuous.Any(*[self.LIVE_MIGRATION]),
voluptuous.Any(*[self.LIVE_MIGRATION,
self.COLD_MIGRATION]),
voluptuous.Required(self.DST_HYPERVISOR):
voluptuous.All(voluptuous.Any(*six.string_types),
voluptuous.Length(min=1)),
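A stdlib-only sketch of the validation the voluptuous schema above performs — resource id and both hypervisors required, `migration_type` defaulting to live and restricted to the two supported values (a simplified stand-in; the real action uses `voluptuous.Schema`):

```python
LIVE_MIGRATION = 'live'
COLD_MIGRATION = 'cold'

def validate_migrate_params(params):
    # Approximates the voluptuous schema: resource_id and both
    # hypervisors are required; migration_type defaults to 'live'.
    params = dict(params)
    for key in ('resource_id', 'src_hypervisor', 'dst_hypervisor'):
        if not params.get(key):
            raise ValueError('%s is required' % key)
    params.setdefault('migration_type', LIVE_MIGRATION)
    if params['migration_type'] not in (LIVE_MIGRATION, COLD_MIGRATION):
        raise ValueError('unsupported migration_type: %(migration_type)s'
                         % params)
    return params
```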
@@ -127,14 +126,30 @@ class Migrate(base.BaseAction):
return result
def _cold_migrate_instance(self, nova, destination):
result = None
try:
result = nova.watcher_non_live_migrate_instance(
instance_id=self.instance_uuid,
dest_hostname=destination)
except Exception as exc:
LOG.exception(exc)
LOG.critical(_LC("Unexpected error occurred. Migration failed for "
"instance %s. Leaving instance on previous "
"host."), self.instance_uuid)
return result
def migrate(self, destination):
nova = nova_helper.NovaHelper(osc=self.osc)
LOG.debug("Migrate instance %s to %s", self.instance_uuid,
destination)
instance = nova.find_instance(self.instance_uuid)
if instance:
if self.migration_type == 'live':
if self.migration_type == self.LIVE_MIGRATION:
return self._live_migrate_instance(nova, destination)
elif self.migration_type == self.COLD_MIGRATION:
return self._cold_migrate_instance(nova, destination)
else:
raise exception.Invalid(
message=(_('Migration of type %(migration_type)s is not '


@@ -21,7 +21,6 @@ from oslo_config import cfg
from oslo_log import log
from watcher.applier.messaging import trigger
from watcher.common.messaging import messaging_core
LOG = log.getLogger(__name__)
CONF = cfg.CONF
@@ -63,17 +62,15 @@ CONF.register_group(opt_group)
CONF.register_opts(APPLIER_MANAGER_OPTS, opt_group)
class ApplierManager(messaging_core.MessagingCore):
def __init__(self):
super(ApplierManager, self).__init__(
CONF.watcher_applier.publisher_id,
CONF.watcher_applier.conductor_topic,
CONF.watcher_applier.status_topic,
api_version=self.API_VERSION,
)
self.conductor_topic_handler.add_endpoint(
trigger.TriggerActionPlan(self))
class ApplierManager(object):
def join(self):
self.conductor_topic_handler.join()
self.status_topic_handler.join()
API_VERSION = '1.0'
conductor_endpoints = [trigger.TriggerActionPlan]
status_endpoints = []
def __init__(self):
self.publisher_id = CONF.watcher_applier.publisher_id
self.conductor_topic = CONF.watcher_applier.conductor_topic
self.status_topic = CONF.watcher_applier.status_topic
self.api_version = self.API_VERSION


@@ -18,48 +18,43 @@
#
from oslo_config import cfg
from oslo_log import log
import oslo_messaging as om
from watcher.applier.manager import APPLIER_MANAGER_OPTS
from watcher.applier.manager import opt_group
from watcher.applier import manager
from watcher.common import exception
from watcher.common.messaging import messaging_core
from watcher.common.messaging import notification_handler as notification
from watcher.common import service
from watcher.common import utils
LOG = log.getLogger(__name__)
CONF = cfg.CONF
CONF.register_group(opt_group)
CONF.register_opts(APPLIER_MANAGER_OPTS, opt_group)
CONF.register_group(manager.opt_group)
CONF.register_opts(manager.APPLIER_MANAGER_OPTS, manager.opt_group)
class ApplierAPI(messaging_core.MessagingCore):
class ApplierAPI(service.Service):
def __init__(self):
super(ApplierAPI, self).__init__(
CONF.watcher_applier.publisher_id,
CONF.watcher_applier.conductor_topic,
CONF.watcher_applier.status_topic,
api_version=self.API_VERSION,
)
self.handler = notification.NotificationHandler(self.publisher_id)
self.handler.register_observer(self)
self.status_topic_handler.add_endpoint(self.handler)
transport = om.get_transport(CONF)
target = om.Target(
topic=CONF.watcher_applier.conductor_topic,
version=self.API_VERSION,
)
self.client = om.RPCClient(transport, target,
serializer=self.serializer)
super(ApplierAPI, self).__init__(ApplierAPIManager)
def launch_action_plan(self, context, action_plan_uuid=None):
if not utils.is_uuid_like(action_plan_uuid):
raise exception.InvalidUuidOrName(name=action_plan_uuid)
return self.client.call(
return self.conductor_client.call(
context.to_dict(), 'launch_action_plan',
action_plan_uuid=action_plan_uuid)
class ApplierAPIManager(object):
API_VERSION = '1.0'
conductor_endpoints = []
status_endpoints = [notification.NotificationHandler]
def __init__(self):
self.publisher_id = CONF.watcher_applier.publisher_id
self.conductor_topic = CONF.watcher_applier.conductor_topic
self.status_topic = CONF.watcher_applier.status_topic
self.api_version = self.API_VERSION


@@ -23,18 +23,38 @@ import six
from watcher.applier.actions import factory
from watcher.applier.messaging import event_types
from watcher.common import clients
from watcher.common.loader import loadable
from watcher.common.messaging.events import event
from watcher import objects
@six.add_metaclass(abc.ABCMeta)
class BaseWorkFlowEngine(object):
def __init__(self, context=None, applier_manager=None):
class BaseWorkFlowEngine(loadable.Loadable):
def __init__(self, config, context=None, applier_manager=None):
"""Constructor
:param config: A mapping containing the configuration of this
workflow engine
:type config: dict
:param context: the request context, defaults to None
:param applier_manager: the applier manager, defaults to None
"""
super(BaseWorkFlowEngine, self).__init__(config)
self._context = context
self._applier_manager = applier_manager
self._action_factory = factory.ActionFactory()
self._osc = None
@classmethod
def get_config_opts(cls):
"""Defines the configuration options to be associated to this loadable
:return: A list of configuration options relative to this Loadable
:rtype: list of :class:`oslo_config.cfg.Opt` instances
"""
return []
@property
def context(self):
return self._context


@@ -22,6 +22,7 @@ from taskflow import task
from watcher._i18n import _LE, _LW, _LC
from watcher.applier.workflow_engine import base
from watcher.common import exception
from watcher.objects import action as obj_action
LOG = log.getLogger(__name__)
@@ -77,10 +78,9 @@ class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
e = engines.load(flow)
e.run()
return True
except Exception as e:
LOG.exception(e)
return False
raise exception.WorkflowExecutionException(error=e)
class TaskFlowActionContainer(task.Task):
@@ -121,14 +121,9 @@ class TaskFlowActionContainer(task.Task):
try:
LOG.debug("Running action %s", self.name)
# todo(jed) remove return (true or false) raise an Exception
result = self.action.execute()
if result is not True:
self.engine.notify(self._db_action,
obj_action.State.FAILED)
else:
self.engine.notify(self._db_action,
obj_action.State.SUCCEEDED)
self.action.execute()
self.engine.notify(self._db_action,
obj_action.State.SUCCEEDED)
except Exception as e:
LOG.exception(e)
LOG.error(_LE('The WorkFlow Engine has failed '


@@ -17,12 +17,8 @@
from __future__ import unicode_literals
from oslo_log import log
from watcher.common.loader import default
LOG = log.getLogger(__name__)
class DefaultWorkFlowEngineLoader(default.DefaultLoader):
def __init__(self):


@@ -17,19 +17,14 @@
"""Starter script for the Watcher API service."""
import logging as std_logging
import os
import sys
from wsgiref import simple_server
from oslo_config import cfg
from oslo_log import log as logging
from watcher._i18n import _
from watcher.api import app as api_app
from watcher._i18n import _LI
from watcher.common import service
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
@@ -37,22 +32,20 @@ CONF = cfg.CONF
def main():
service.prepare_service(sys.argv)
app = api_app.setup_app()
# Create the WSGI server and start it
host, port = cfg.CONF.api.host, cfg.CONF.api.port
srv = simple_server.make_server(host, port, app)
LOG.info(_('Starting server in PID %s') % os.getpid())
LOG.debug("Watcher configuration:")
cfg.CONF.log_opt_values(LOG, std_logging.DEBUG)
protocol = "http" if not CONF.api.enable_ssl_api else "https"
# Build and start the WSGI app
server = service.WSGIService(
'watcher-api', CONF.api.enable_ssl_api)
if host == '0.0.0.0':
LOG.info(_('serving on 0.0.0.0:%(port)s, '
'view at http://127.0.0.1:%(port)s') %
dict(port=port))
LOG.info(_LI('serving on 0.0.0.0:%(port)s, '
'view at %(protocol)s://127.0.0.1:%(port)s') %
dict(protocol=protocol, port=port))
else:
LOG.info(_('serving on http://%(host)s:%(port)s') %
dict(host=host, port=port))
LOG.info(_LI('serving on %(protocol)s://%(host)s:%(port)s') %
dict(protocol=protocol, host=host, port=port))
srv.serve_forever()
launcher = service.process_launcher()
launcher.launch_service(server, workers=server.workers)
launcher.wait()


@@ -17,29 +17,26 @@
"""Starter script for the Applier service."""
import logging as std_logging
import os
import sys
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import service
from watcher import _i18n
from watcher._i18n import _LI
from watcher.applier import manager
from watcher.common import service
from watcher.common import service as watcher_service
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
_LI = _i18n._LI
def main():
service.prepare_service(sys.argv)
watcher_service.prepare_service(sys.argv)
LOG.info(_LI('Starting server in PID %s') % os.getpid())
LOG.debug("Configuration:")
cfg.CONF.log_opt_values(LOG, std_logging.DEBUG)
LOG.info(_LI('Starting Watcher Applier service in PID %s'), os.getpid())
server = manager.ApplierManager()
server.connect()
server.join()
applier_service = watcher_service.Service(manager.ApplierManager)
launcher = service.launch(CONF, applier_service)
launcher.wait()


@@ -25,7 +25,7 @@ from oslo_config import cfg
from watcher.common import service
from watcher.db import migration
from watcher.db import purge
CONF = cfg.CONF
@@ -56,6 +56,12 @@ class DBCommand(object):
def create_schema():
migration.create_schema()
@staticmethod
def purge():
purge.purge(CONF.command.age_in_days, CONF.command.max_number,
CONF.command.audit_template, CONF.command.exclude_orphans,
CONF.command.dry_run)
def add_command_parsers(subparsers):
parser = subparsers.add_parser(
@@ -96,6 +102,33 @@ def add_command_parsers(subparsers):
help="Create the database schema.")
parser.set_defaults(func=DBCommand.create_schema)
parser = subparsers.add_parser(
'purge',
help="Purge the database.")
parser.add_argument('-d', '--age-in-days',
help="Number of days since deletion (from today) "
"to exclude from the purge. If None, everything "
"will be purged.",
type=int, default=None, nargs='?')
parser.add_argument('-n', '--max-number',
help="Max number of objects expected to be deleted. "
"Prevents the deletion if exceeded. No limit if "
"set to None.",
type=int, default=None, nargs='?')
parser.add_argument('-t', '--audit-template',
help="UUID or name of the audit template to purge.",
type=str, default=None, nargs='?')
parser.add_argument('-e', '--exclude-orphans', action='store_true',
help="Flag to indicate whether or not you want to "
"exclude orphans from deletion (default: False).",
default=False)
parser.add_argument('--dry-run', action='store_true',
help="Flag to indicate whether or not you want to "
"perform a dry run (no deletion).",
default=False)
parser.set_defaults(func=DBCommand.purge)
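The new `purge` sub-command above wires five flags into oslo.config's sub-command parser. As a rough sketch of how those flags parse, here is a plain-`argparse` equivalent (the `watcher-db-manage` prog name and the stand-alone parser are assumptions; the real code goes through `cfg.SubCommandOpt`):

```python
import argparse

# Hedged sketch: plain-argparse equivalent of the purge sub-command above.
parser = argparse.ArgumentParser(prog='watcher-db-manage')
subparsers = parser.add_subparsers(dest='command')
purge = subparsers.add_parser('purge', help="Purge the database.")
purge.add_argument('-d', '--age-in-days', type=int, default=None, nargs='?')
purge.add_argument('-n', '--max-number', type=int, default=None, nargs='?')
purge.add_argument('-t', '--audit-template', type=str, default=None, nargs='?')
purge.add_argument('-e', '--exclude-orphans', action='store_true', default=False)
purge.add_argument('--dry-run', action='store_true', default=False)

# Flags not supplied keep their None/False defaults, so e.g. omitting
# --max-number means "no deletion limit".
args = parser.parse_args(['purge', '-d', '30', '--dry-run'])
```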
command_opt = cfg.SubCommandOpt('command',
title='Command',
@@ -114,6 +147,7 @@ def main():
valid_commands = set([
'upgrade', 'downgrade', 'revision',
'version', 'stamp', 'create_schema',
'purge',
])
if not set(sys.argv).intersection(valid_commands):
sys.argv.append('upgrade')


@@ -17,30 +17,31 @@
"""Starter script for the Decision Engine manager service."""
import logging as std_logging
import os
import sys
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import service
from watcher import _i18n
from watcher.common import service
from watcher._i18n import _LI
from watcher.common import service as watcher_service
from watcher.decision_engine import manager
from watcher.decision_engine import sync
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
_LI = _i18n._LI
def main():
service.prepare_service(sys.argv)
watcher_service.prepare_service(sys.argv)
LOG.info(_LI('Starting server in PID %s') % os.getpid())
LOG.debug("Configuration:")
cfg.CONF.log_opt_values(LOG, std_logging.DEBUG)
LOG.info(_LI('Starting Watcher Decision Engine service in PID %s'),
os.getpid())
server = manager.DecisionEngineManager()
server.connect()
server.join()
syncer = sync.Syncer()
syncer.sync()
de_service = watcher_service.Service(manager.DecisionEngineManager)
launcher = service.launch(CONF, de_service)
launcher.wait()


@@ -123,7 +123,7 @@ class CeilometerHelper(object):
item_value = None
if statistic:
item_value = statistic[-1]._info.get('aggregate').get('avg')
item_value = statistic[-1]._info.get('aggregate').get(aggregate)
return item_value
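The one-line fix above replaces a hard-coded `'avg'` lookup with the caller-supplied `aggregate` key. A minimal stand-in (plain dicts instead of Ceilometer statistics objects; the values are hypothetical) shows the behavior change:

```python
# Stand-in for a ceilometer statistics response (hypothetical values).
statistic = [{'aggregate': {'avg': 1.5, 'max': 4.0}}]

def last_value_old(statistic, aggregate):
    # Pre-fix behavior: always reads 'avg', ignoring the requested aggregate.
    return statistic[-1]['aggregate'].get('avg')

def last_value_new(statistic, aggregate):
    # Post-fix behavior: honors the requested aggregate.
    return statistic[-1]['aggregate'].get(aggregate)

old = last_value_old(statistic, 'max')   # 1.5 -- the wrong statistic
new = last_value_new(statistic, 'max')   # 4.0 -- the requested one
```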
def get_last_sample_values(self, resource_id, meter_name, limit=1):


@@ -22,6 +22,8 @@ from watcher import version
def parse_args(argv, default_config_files=None):
default_config_files = (default_config_files or
cfg.find_config_files(project='watcher'))
rpc.set_defaults(control_exchange='watcher')
cfg.CONF(argv[1:],
project='python-watcher',


@@ -147,11 +147,15 @@ class ResourceNotFound(ObjectNotFound):
class InvalidIdentity(Invalid):
msg_fmt = _("Expected an uuid or int but received %(identity)s")
msg_fmt = _("Expected a uuid or int but received %(identity)s")
class InvalidGoal(Invalid):
msg_fmt = _("Goal %(goal)s is not defined in Watcher configuration file")
msg_fmt = _("Goal %(goal)s is invalid")
class InvalidStrategy(Invalid):
msg_fmt = _("Strategy %(strategy)s is invalid")
class InvalidUUID(Invalid):
@@ -166,12 +170,28 @@ class InvalidUuidOrName(Invalid):
msg_fmt = _("Expected a logical name or uuid but received %(name)s")
class GoalNotFound(ResourceNotFound):
msg_fmt = _("Goal %(goal)s could not be found")
class GoalAlreadyExists(Conflict):
msg_fmt = _("A goal with UUID %(uuid)s already exists")
class StrategyNotFound(ResourceNotFound):
msg_fmt = _("Strategy %(strategy)s could not be found")
class StrategyAlreadyExists(Conflict):
msg_fmt = _("A strategy with UUID %(uuid)s already exists")
class AuditTemplateNotFound(ResourceNotFound):
msg_fmt = _("AuditTemplate %(audit_template)s could not be found")
class AuditTemplateAlreadyExists(Conflict):
msg_fmt = _("An audit_template with UUID %(uuid)s or name %(name)s "
msg_fmt = _("An audit_template with UUID or name %(audit_template)s "
"already exists")
@@ -180,6 +200,10 @@ class AuditTemplateReferenced(Invalid):
"multiple audit")
class AuditTypeNotFound(Invalid):
msg_fmt = _("Audit type %(audit_type)s could not be found")
class AuditNotFound(ResourceNotFound):
msg_fmt = _("Audit %(audit)s could not be found")
@@ -234,6 +258,9 @@ class PatchError(Invalid):
# decision engine
class WorkflowExecutionException(WatcherException):
msg_fmt = _('Workflow execution error: %(error)s')
class IllegalArgumentException(WatcherException):
msg_fmt = _('Illegal argument')
@@ -264,7 +291,19 @@ class MetricCollectorNotDefined(WatcherException):
class ClusterStateNotDefined(WatcherException):
msg_fmt = _("the cluster state is not defined")
msg_fmt = _("The cluster state is not defined")
class NoAvailableStrategyForGoal(WatcherException):
msg_fmt = _("No strategy could be found to achieve the '%(goal)s' goal.")
class NoMetricValuesForVM(WatcherException):
msg_fmt = _("No values returned by %(resource_id)s for %(metric_name)s.")
class NoSuchMetricForHost(WatcherException):
msg_fmt = _("No %(metric)s metric for %(host)s found.")
# Model
@@ -283,3 +322,11 @@ class LoadingError(WatcherException):
class ReservedWord(WatcherException):
msg_fmt = _("The identifier '%(name)s' is a reserved word")
class NotSoftDeletedStateError(WatcherException):
msg_fmt = _("The %(name)s resource %(id)s is not soft deleted")
class NegativeLimitError(WatcherException):
msg_fmt = _("Limit should be positive")


@@ -1,5 +1,5 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
# Copyright (c) 2016 b<>com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,33 +16,82 @@
from __future__ import unicode_literals
from oslo_config import cfg
from oslo_log import log
from stevedore.driver import DriverManager
from stevedore import ExtensionManager
from stevedore import driver as drivermanager
from stevedore import extension as extensionmanager
from watcher.common import exception
from watcher.common.loader.base import BaseLoader
from watcher.common.loader import base
from watcher.common import utils
LOG = log.getLogger(__name__)
class DefaultLoader(BaseLoader):
def __init__(self, namespace):
class DefaultLoader(base.BaseLoader):
def __init__(self, namespace, conf=cfg.CONF):
"""Entry point loader for Watcher using Stevedore
:param namespace: namespace of the entry point(s) to load or list
:type namespace: str
:param conf: ConfigOpts instance, defaults to cfg.CONF
"""
super(DefaultLoader, self).__init__()
self.namespace = namespace
self.conf = conf
def load(self, name, **kwargs):
try:
LOG.debug("Loading in namespace %s => %s ", self.namespace, name)
driver_manager = DriverManager(namespace=self.namespace,
name=name)
loaded = driver_manager.driver
driver_manager = drivermanager.DriverManager(
namespace=self.namespace,
name=name,
invoke_on_load=False,
)
driver_cls = driver_manager.driver
config = self._load_plugin_config(name, driver_cls)
driver = driver_cls(config, **kwargs)
except Exception as exc:
LOG.exception(exc)
raise exception.LoadingError(name=name)
return loaded(**kwargs)
return driver
def _reload_config(self):
self.conf()
def get_entry_name(self, name):
return ".".join([self.namespace, name])
def _load_plugin_config(self, name, driver_cls):
"""Load the config of the plugin"""
config = utils.Struct()
config_opts = driver_cls.get_config_opts()
if not config_opts:
return config
group_name = self.get_entry_name(name)
self.conf.register_opts(config_opts, group=group_name)
# Finalise the opt import by re-checking the configuration
# against the provided config files
self._reload_config()
config_group = self.conf.get(group_name)
if not config_group:
raise exception.LoadingError(name=name)
config.update({
name: value for name, value in config_group.items()
})
return config
def list_available(self):
extension_manager = ExtensionManager(namespace=self.namespace)
extension_manager = extensionmanager.ExtensionManager(
namespace=self.namespace)
return {ext.name: ext.plugin for ext in extension_manager.extensions}
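The reworked loader above registers each plugin's options under a config group named `<namespace>.<name>` and injects the resulting values into the plugin at construction time. A minimal sketch of that flow, with plain dicts and `(name, default)` tuples standing in for stevedore and `oslo_config.cfg.Opt` (all names here are hypothetical):

```python
class Struct(dict):
    """Dict with attribute access (mirrors watcher.common.utils.Struct)."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

class DummyStrategy(object):
    """Hypothetical plugin exposing its options as (name, default) pairs."""
    @classmethod
    def get_config_opts(cls):
        return [('period', 60), ('threshold', 0.8)]

    def __init__(self, config):
        self.config = config

def load(namespace, name, plugin_cls):
    # The real loader registers the opts under an oslo.config group named
    # "<namespace>.<name>" and re-reads the config files before injecting.
    group_name = '.'.join([namespace, name])
    config = Struct()
    for opt_name, default in plugin_cls.get_config_opts():
        config[opt_name] = default
    return group_name, plugin_cls(config)

group, strategy = load('watcher_strategies', 'dummy', DummyStrategy)
```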


@@ -1,5 +1,5 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
# Copyright (c) 2016 b<>com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,16 +13,29 @@
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class FakeLoadable(object):
@classmethod
def namespace(cls):
return "TESTING"
class Loadable(object):
"""Generic interface for dynamically loading a driver/entry point.
This defines the contract that lets the loader inject the
configuration parameters during loading.
"""
def __init__(self, config):
self.config = config
@classmethod
def get_name(cls):
return 'fake'
@abc.abstractmethod
def get_config_opts(cls):
"""Defines the configuration options to be associated to this loadable
:return: A list of configuration options relative to this Loadable
:rtype: list of :class:`oslo_config.cfg.Opt` instances
"""
raise NotImplementedError


@@ -1,122 +0,0 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from oslo_log import log
import oslo_messaging as om
from watcher.common.messaging.events import event_dispatcher as dispatcher
from watcher.common.messaging import messaging_handler
from watcher.common import rpc
from watcher.objects import base
LOG = log.getLogger(__name__)
CONF = cfg.CONF
class MessagingCore(dispatcher.EventDispatcher):
API_VERSION = '1.0'
def __init__(self, publisher_id, conductor_topic, status_topic,
api_version=API_VERSION):
super(MessagingCore, self).__init__()
self.serializer = rpc.RequestContextSerializer(
base.WatcherObjectSerializer())
self.publisher_id = publisher_id
self.api_version = api_version
self.conductor_topic = conductor_topic
self.status_topic = status_topic
self.conductor_topic_handler = self.build_topic_handler(
conductor_topic)
self.status_topic_handler = self.build_topic_handler(status_topic)
self._conductor_client = None
self._status_client = None
@property
def conductor_client(self):
if self._conductor_client is None:
transport = om.get_transport(CONF)
target = om.Target(
topic=self.conductor_topic,
version=self.API_VERSION,
)
self._conductor_client = om.RPCClient(
transport, target, serializer=self.serializer)
return self._conductor_client
@conductor_client.setter
def conductor_client(self, c):
self.conductor_client = c
@property
def status_client(self):
if self._status_client is None:
transport = om.get_transport(CONF)
target = om.Target(
topic=self.status_topic,
version=self.API_VERSION,
)
self._status_client = om.RPCClient(
transport, target, serializer=self.serializer)
return self._status_client
@status_client.setter
def status_client(self, c):
self.status_client = c
def build_topic_handler(self, topic_name):
return messaging_handler.MessagingHandler(
self.publisher_id, topic_name, self,
self.api_version, self.serializer)
def connect(self):
LOG.debug("Connecting to '%s' (%s)",
CONF.transport_url, CONF.rpc_backend)
self.conductor_topic_handler.start()
self.status_topic_handler.start()
def disconnect(self):
LOG.debug("Disconnecting from '%s' (%s)",
CONF.transport_url, CONF.rpc_backend)
self.conductor_topic_handler.stop()
self.status_topic_handler.stop()
def publish_control(self, event, payload):
return self.conductor_topic_handler.publish_event(event, payload)
def publish_status(self, event, payload, request_id=None):
return self.status_topic_handler.publish_event(
event, payload, request_id)
def get_version(self):
return self.api_version
def check_api_version(self, context):
api_manager_version = self.conductor_client.call(
context.to_dict(), 'check_api_version',
api_version=self.api_version)
return api_manager_version
def response(self, evt, ctx, message):
payload = {
'request_id': ctx['request_id'],
'msg': message
}
self.publish_status(evt, payload)


@@ -38,7 +38,7 @@ CONF = cfg.CONF
class MessagingHandler(threading.Thread):
def __init__(self, publisher_id, topic_name, endpoint, version,
def __init__(self, publisher_id, topic_name, endpoints, version,
serializer=None):
super(MessagingHandler, self).__init__()
self.publisher_id = publisher_id
@@ -50,10 +50,10 @@ class MessagingHandler(threading.Thread):
self.__server = None
self.__notifier = None
self.__transport = None
self.add_endpoint(endpoint)
self.add_endpoints(endpoints)
def add_endpoint(self, endpoint):
self.__endpoints.append(endpoint)
def add_endpoints(self, endpoints):
self.__endpoints.extend(endpoints)
def remove_endpoint(self, endpoint):
if endpoint in self.__endpoints:


@@ -15,14 +15,12 @@
# limitations under the License.
import eventlet
from oslo_log import log
import oslo_messaging as messaging
from watcher.common.messaging.utils import observable
eventlet.monkey_patch()
LOG = log.getLogger(__name__)
class NotificationHandler(observable.Observable):


@@ -87,7 +87,6 @@ class NovaHelper(object):
return False
else:
host_name = getattr(instance, "OS-EXT-SRV-ATTR:host")
# https://bugs.launchpad.net/nova/+bug/1182965
LOG.debug(
"Instance %s found on host '%s'." % (instance_id, host_name))
@@ -532,16 +531,12 @@ class NovaHelper(object):
"Trying to create new instance '%s' "
"from image '%s' with flavor '%s' ..." % (
inst_name, image_id, flavor_name))
# TODO(jed) wait feature
# Allow admin users to view any keypair
# https://bugs.launchpad.net/nova/+bug/1182965
if not self.nova.keypairs.findall(name=keypair_name):
LOG.debug("Key pair '%s' not found with user '%s'" % (
keypair_name, self.user))
try:
self.nova.keypairs.findall(name=keypair_name)
except nvexceptions.NotFound:
LOG.debug("Key pair '%s' not found " % keypair_name)
return
else:
LOG.debug("Key pair '%s' found with user '%s'" % (
keypair_name, self.user))
try:
image = self.nova.images.get(image_id)


@@ -15,108 +15,46 @@
# under the License.
import logging
import signal
import socket
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import _options
from oslo_log import log
import oslo_messaging as messaging
import oslo_messaging as om
from oslo_reports import guru_meditation_report as gmr
from oslo_reports import opts as gmr_opts
from oslo_service import service
from oslo_utils import importutils
from oslo_service import wsgi
from watcher._i18n import _LE
from watcher._i18n import _LI
from watcher._i18n import _
from watcher.api import app
from watcher.common import config
from watcher.common import context
from watcher.common.messaging.events import event_dispatcher as dispatcher
from watcher.common.messaging import messaging_handler
from watcher.common import rpc
from watcher.objects import base as objects_base
from watcher.objects import base
from watcher import opts
from watcher import version
service_opts = [
cfg.IntOpt('periodic_interval',
default=60,
help='Seconds between running periodic tasks.'),
help=_('Seconds between running periodic tasks.')),
cfg.StrOpt('host',
default=socket.getfqdn(),
help='Name of this node. This can be an opaque identifier. '
'It is not necessarily a hostname, FQDN, or IP address. '
'However, the node name must be valid within '
'an AMQP key, and if using ZeroMQ, a valid '
'hostname, FQDN, or IP address.'),
help=_('Name of this node. This can be an opaque identifier. '
'It is not necessarily a hostname, FQDN, or IP address. '
'However, the node name must be valid within '
'an AMQP key, and if using ZeroMQ, a valid '
'hostname, FQDN, or IP address.')),
]
cfg.CONF.register_opts(service_opts)
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class RPCService(service.Service):
def __init__(self, host, manager_module, manager_class):
super(RPCService, self).__init__()
self.host = host
manager_module = importutils.try_import(manager_module)
manager_class = getattr(manager_module, manager_class)
self.manager = manager_class(host, manager_module.MANAGER_TOPIC)
self.topic = self.manager.topic
self.rpcserver = None
self.deregister = True
def start(self):
super(RPCService, self).start()
admin_context = context.RequestContext('admin', 'admin', is_admin=True)
target = messaging.Target(topic=self.topic, server=self.host)
endpoints = [self.manager]
serializer = objects_base.IronicObjectSerializer()
self.rpcserver = rpc.get_server(target, endpoints, serializer)
self.rpcserver.start()
self.handle_signal()
self.manager.init_host()
self.tg.add_dynamic_timer(
self.manager.periodic_tasks,
periodic_interval_max=cfg.CONF.periodic_interval,
context=admin_context)
LOG.info(_LI('Created RPC server for service %(service)s on host '
'%(host)s.'),
{'service': self.topic, 'host': self.host})
def stop(self):
try:
self.rpcserver.stop()
self.rpcserver.wait()
except Exception as e:
LOG.exception(_LE('Service error occurred when stopping the '
'RPC server. Error: %s'), e)
try:
self.manager.del_host(deregister=self.deregister)
except Exception as e:
LOG.exception(_LE('Service error occurred when cleaning up '
'the RPC manager. Error: %s'), e)
super(RPCService, self).stop(graceful=True)
LOG.info(_LI('Stopped RPC server for service %(service)s on host '
'%(host)s.'),
{'service': self.topic, 'host': self.host})
def _handle_signal(self):
LOG.info(_LI('Got signal SIGUSR1. Not deregistering on next shutdown '
'of service %(service)s on host %(host)s.'),
{'service': self.topic, 'host': self.host})
self.deregister = False
def handle_signal(self):
"""Add a signal handler for SIGUSR1.
The handler ensures that the manager is not deregistered when it is
shutdown.
"""
signal.signal(signal.SIGUSR1, self._handle_signal)
_DEFAULT_LOG_LEVELS = ['amqp=WARN', 'amqplib=WARN', 'qpid.messaging=INFO',
'oslo.messaging=INFO', 'sqlalchemy=WARN',
'keystoneclient=INFO', 'stevedore=INFO',
@@ -125,10 +63,165 @@ _DEFAULT_LOG_LEVELS = ['amqp=WARN', 'amqplib=WARN', 'qpid.messaging=INFO',
'glanceclient=WARN', 'watcher.openstack.common=WARN']
def prepare_service(argv=[], conf=cfg.CONF):
class WSGIService(service.ServiceBase):
"""Provides the ability to launch the Watcher API from a WSGI app."""
def __init__(self, name, use_ssl=False):
"""Initialize, but do not start the WSGI server.
:param name: The name of the WSGI server given to the loader.
:param use_ssl: Wraps the socket in an SSL context if True.
"""
self.name = name
self.app = app.VersionSelectorApplication()
self.workers = (CONF.api.workers or
processutils.get_worker_count())
self.server = wsgi.Server(CONF, name, self.app,
host=CONF.api.host,
port=CONF.api.port,
use_ssl=use_ssl,
logger_name=name)
def start(self):
"""Start serving this service using loaded configuration"""
self.server.start()
def stop(self):
"""Stop serving this API"""
self.server.stop()
def wait(self):
"""Wait for the service to stop serving this API"""
self.server.wait()
def reset(self):
"""Reset server greenpool size to default"""
self.server.reset()
class Service(service.ServiceBase, dispatcher.EventDispatcher):
API_VERSION = '1.0'
def __init__(self, manager_class):
super(Service, self).__init__()
self.manager = manager_class()
self.publisher_id = self.manager.publisher_id
self.api_version = self.manager.API_VERSION
self.conductor_topic = self.manager.conductor_topic
self.status_topic = self.manager.status_topic
self.conductor_endpoints = [
ep(self) for ep in self.manager.conductor_endpoints
]
self.status_endpoints = [
ep(self.publisher_id) for ep in self.manager.status_endpoints
]
self.serializer = rpc.RequestContextSerializer(
base.WatcherObjectSerializer())
self.conductor_topic_handler = self.build_topic_handler(
self.conductor_topic, self.conductor_endpoints)
self.status_topic_handler = self.build_topic_handler(
self.status_topic, self.status_endpoints)
self._conductor_client = None
self._status_client = None
@property
def conductor_client(self):
if self._conductor_client is None:
transport = om.get_transport(CONF)
target = om.Target(
topic=self.conductor_topic,
version=self.API_VERSION,
)
self._conductor_client = om.RPCClient(
transport, target, serializer=self.serializer)
return self._conductor_client
@conductor_client.setter
def conductor_client(self, c):
self._conductor_client = c
@property
def status_client(self):
if self._status_client is None:
transport = om.get_transport(CONF)
target = om.Target(
topic=self.status_topic,
version=self.API_VERSION,
)
self._status_client = om.RPCClient(
transport, target, serializer=self.serializer)
return self._status_client
@status_client.setter
def status_client(self, c):
self._status_client = c
def build_topic_handler(self, topic_name, endpoints=()):
return messaging_handler.MessagingHandler(
self.publisher_id, topic_name, [self.manager] + list(endpoints),
self.api_version, self.serializer)
def start(self):
LOG.debug("Connecting to '%s' (%s)",
CONF.transport_url, CONF.rpc_backend)
self.conductor_topic_handler.start()
self.status_topic_handler.start()
def stop(self):
LOG.debug("Disconnecting from '%s' (%s)",
CONF.transport_url, CONF.rpc_backend)
self.conductor_topic_handler.stop()
self.status_topic_handler.stop()
def reset(self):
"""Reset a service in case it received a SIGHUP."""
def wait(self):
"""Wait for service to complete."""
def publish_control(self, event, payload):
return self.conductor_topic_handler.publish_event(event, payload)
def publish_status(self, event, payload, request_id=None):
return self.status_topic_handler.publish_event(
event, payload, request_id)
def get_version(self):
return self.api_version
def check_api_version(self, context):
api_manager_version = self.conductor_client.call(
context.to_dict(), 'check_api_version',
api_version=self.api_version)
return api_manager_version
def response(self, evt, ctx, message):
payload = {
'request_id': ctx['request_id'],
'msg': message
}
self.publish_status(evt, payload)
def process_launcher(conf=cfg.CONF):
return service.ProcessLauncher(conf)
def prepare_service(argv=(), conf=cfg.CONF):
log.register_options(conf)
gmr_opts.set_defaults(conf)
config.parse_args(argv)
cfg.set_defaults(_options.log_opts,
default_log_levels=_DEFAULT_LOG_LEVELS)
log.setup(conf, 'python-watcher')
conf.log_opt_values(LOG, logging.DEBUG)
gmr.TextGuruMeditation.register_section(_('Plugins'), opts.show_plugins)
gmr.TextGuruMeditation.setup_autorun(version)


@@ -41,6 +41,29 @@ CONF.register_opts(UTILS_OPTS)
LOG = logging.getLogger(__name__)
class Struct(dict):
"""Specialized dict where you access an item like an attribute
>>> struct = Struct()
>>> struct['a'] = 1
>>> struct.b = 2
>>> assert struct.a == 1
>>> assert struct['b'] == 2
"""
def __getattr__(self, name):
try:
return self[name]
except KeyError:
raise AttributeError(name)
def __setattr__(self, name, value):
try:
self[name] = value
except KeyError:
raise AttributeError(name)
def safe_rstrip(value, chars=None):
"""Removes trailing characters from a string if that does not make it empty
@@ -95,6 +118,14 @@ def is_hostname_safe(hostname):
:returns: True if valid. False if not.
"""
m = '^[a-z0-9]([a-z0-9\-]{0,61}[a-z0-9])?$'
m = r'^[a-z0-9]([a-z0-9\-]{0,61}[a-z0-9])?$'
return (isinstance(hostname, six.string_types) and
(re.match(m, hostname) is not None))
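The hunk above only turns the hostname pattern into a raw string so the `\-` escape reaches the regex engine literally. A self-contained version of the same check (with `str` standing in for `six.string_types`) behaves like this:

```python
import re

# Same pattern as above, as a raw string: a lowercase alphanumeric label of
# up to 63 characters that may contain, but not start or end with, a hyphen.
HOSTNAME_PATTERN = r'^[a-z0-9]([a-z0-9\-]{0,61}[a-z0-9])?$'

def is_hostname_safe(hostname):
    return (isinstance(hostname, str) and
            re.match(HOSTNAME_PATTERN, hostname) is not None)
```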
def get_cls_import_path(cls):
"""Return the import path of a given class"""
module = cls.__module__
if module is None or module == str.__module__:
return cls.__name__
return module + '.' + cls.__name__
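The new `get_cls_import_path` helper resolves a class to its dotted import path, collapsing builtins to the bare class name. A quick usage sketch (the function body is reproduced so the snippet stands alone):

```python
def get_cls_import_path(cls):
    """Return the import path of a given class (as added above)."""
    module = cls.__module__
    if module is None or module == str.__module__:
        return cls.__name__
    return module + '.' + cls.__name__

from collections import OrderedDict
path = get_cls_import_path(OrderedDict)    # 'collections.OrderedDict'
builtin = get_cls_import_path(int)         # builtins collapse to the bare name
```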


@@ -34,6 +34,192 @@ def get_instance():
class BaseConnection(object):
"""Base class for storage system connections."""
@abc.abstractmethod
def get_goal_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None):
"""Get specific columns for matching goals.
Return a list of the specified columns for all goals that
match the specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of goals to return.
:param marker: The last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: Direction in which results should be sorted.
(asc, desc)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_goal(self, values):
"""Create a new goal.
:param values: A dict containing several items used to identify
and track the goal. For example:
::
{
'uuid': utils.generate_uuid(),
'name': 'DUMMY',
'display_name': 'Dummy',
}
:returns: A goal
:raises: :py:class:`~.GoalAlreadyExists`
"""
@abc.abstractmethod
def get_goal_by_id(self, context, goal_id):
"""Return a goal given its ID.
:param context: The security context
:param goal_id: The ID of a goal
:returns: A goal
:raises: :py:class:`~.GoalNotFound`
"""
@abc.abstractmethod
def get_goal_by_uuid(self, context, goal_uuid):
"""Return a goal given its UUID.
:param context: The security context
:param goal_uuid: The UUID of a goal
:returns: A goal
:raises: :py:class:`~.GoalNotFound`
"""
@abc.abstractmethod
def get_goal_by_name(self, context, goal_name):
"""Return a goal given its name.
:param context: The security context
:param goal_name: The name of a goal
:returns: A goal
:raises: :py:class:`~.GoalNotFound`
"""
@abc.abstractmethod
def destroy_goal(self, goal_uuid):
"""Destroy a goal.
:param goal_uuid: The UUID of a goal
:raises: :py:class:`~.GoalNotFound`
"""
@abc.abstractmethod
def update_goal(self, goal_uuid, values):
"""Update properties of a goal.
:param goal_uuid: The UUID of a goal
:param values: A dict containing several items used to identify
and track the goal. For example:
::
{
'uuid': utils.generate_uuid(),
'name': 'DUMMY',
'display_name': 'Dummy',
}
:returns: A goal
:raises: :py:class:`~.GoalNotFound`
:raises: :py:class:`~.Invalid`
"""
@abc.abstractmethod
def get_strategy_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None):
"""Get specific columns for matching strategies.
Return a list of the specified columns for all strategies that
match the specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of strategies to return.
:param marker: The last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: Direction in which results should be sorted.
(asc, desc)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_strategy(self, values):
"""Create a new strategy.
:param values: A dict containing items used to identify
and track the strategy. For example:
::
{
'id': 1,
'uuid': utils.generate_uuid(),
'name': 'my_strategy',
'display_name': 'My strategy',
'goal_uuid': utils.generate_uuid(),
}
:returns: A strategy
:raises: :py:class:`~.StrategyAlreadyExists`
"""
@abc.abstractmethod
def get_strategy_by_id(self, context, strategy_id):
"""Return a strategy given its ID.
:param context: The security context
:param strategy_id: The ID of a strategy
:returns: A strategy
:raises: :py:class:`~.StrategyNotFound`
"""
@abc.abstractmethod
def get_strategy_by_uuid(self, context, strategy_uuid):
"""Return a strategy given its UUID.
:param context: The security context
:param strategy_uuid: The UUID of a strategy
:returns: A strategy
:raises: :py:class:`~.StrategyNotFound`
"""
@abc.abstractmethod
def get_strategy_by_name(self, context, strategy_name):
"""Return a strategy given its name.
:param context: The security context
:param strategy_name: The name of a strategy
:returns: A strategy
:raises: :py:class:`~.StrategyNotFound`
"""
@abc.abstractmethod
def destroy_strategy(self, strategy_uuid):
"""Destroy a strategy.
:param strategy_uuid: The UUID of a strategy
:raises: :py:class:`~.StrategyNotFound`
"""
@abc.abstractmethod
def update_strategy(self, strategy_uuid, values):
"""Update properties of a strategy.
:param strategy_uuid: The UUID of a strategy
:returns: A strategy
:raises: :py:class:`~.StrategyNotFound`
:raises: :py:class:`~.Invalid`
"""
@abc.abstractmethod
def get_audit_template_list(self, context, columns=None, filters=None,
limit=None, marker=None, sort_key=None,
@@ -75,7 +261,7 @@ class BaseConnection(object):
'extra': {'automatic': True}
}
:returns: An audit template.
:raises: AuditTemplateAlreadyExists
:raises: :py:class:`~.AuditTemplateAlreadyExists`
"""
@abc.abstractmethod
@@ -85,7 +271,7 @@ class BaseConnection(object):
:param context: The security context
:param audit_template_id: The id of an audit template.
:returns: An audit template.
:raises: AuditTemplateNotFound
:raises: :py:class:`~.AuditTemplateNotFound`
"""
@abc.abstractmethod
@@ -95,7 +281,7 @@ class BaseConnection(object):
:param context: The security context
:param audit_template_uuid: The uuid of an audit template.
:returns: An audit template.
:raises: AuditTemplateNotFound
:raises: :py:class:`~.AuditTemplateNotFound`
"""
def get_audit_template_by_name(self, context, audit_template_name):
@@ -104,7 +290,7 @@ class BaseConnection(object):
:param context: The security context
:param audit_template_name: The name of an audit template.
:returns: An audit template.
:raises: AuditTemplateNotFound
:raises: :py:class:`~.AuditTemplateNotFound`
"""
@abc.abstractmethod
@@ -112,7 +298,7 @@ class BaseConnection(object):
"""Destroy an audit_template.
:param audit_template_id: The id or uuid of an audit template.
:raises: AuditTemplateNotFound
:raises: :py:class:`~.AuditTemplateNotFound`
"""
@abc.abstractmethod
@@ -121,8 +307,8 @@ class BaseConnection(object):
:param audit_template_id: The id or uuid of an audit template.
:returns: An audit template.
:raises: AuditTemplateNotFound
:raises: Invalid
:raises: :py:class:`~.AuditTemplateNotFound`
:raises: :py:class:`~.Invalid`
"""
@abc.abstractmethod
@@ -130,7 +316,7 @@ class BaseConnection(object):
"""Soft delete an audit_template.
:param audit_template_id: The id or uuid of an audit template.
:raises: AuditTemplateNotFound
:raises: :py:class:`~.AuditTemplateNotFound`
"""
@abc.abstractmethod
@@ -171,7 +357,7 @@ class BaseConnection(object):
'deadline': None
}
:returns: An audit.
:raises: AuditAlreadyExists
:raises: :py:class:`~.AuditAlreadyExists`
"""
@abc.abstractmethod
@@ -181,7 +367,7 @@ class BaseConnection(object):
:param context: The security context
:param audit_id: The id of an audit.
:returns: An audit.
:raises: AuditNotFound
:raises: :py:class:`~.AuditNotFound`
"""
@abc.abstractmethod
@@ -191,7 +377,7 @@ class BaseConnection(object):
:param context: The security context
:param audit_uuid: The uuid of an audit.
:returns: An audit.
:raises: AuditNotFound
:raises: :py:class:`~.AuditNotFound`
"""
@abc.abstractmethod
@@ -199,7 +385,7 @@ class BaseConnection(object):
"""Destroy an audit and all associated action plans.
:param audit_id: The id or uuid of an audit.
:raises: AuditNotFound
:raises: :py:class:`~.AuditNotFound`
"""
@abc.abstractmethod
@@ -208,8 +394,8 @@ class BaseConnection(object):
:param audit_id: The id or uuid of an audit.
:returns: An audit.
:raises: AuditNotFound
:raises: Invalid
:raises: :py:class:`~.AuditNotFound`
:raises: :py:class:`~.Invalid`
"""
def soft_delete_audit(self, audit_id):
@@ -217,7 +403,7 @@ class BaseConnection(object):
:param audit_id: The id or uuid of an audit.
:returns: An audit.
:raises: AuditNotFound
:raises: :py:class:`~.AuditNotFound`
"""
@abc.abstractmethod
@@ -259,7 +445,7 @@ class BaseConnection(object):
'aggregate': 'nova aggregate name or uuid'
}
:returns: An action.
:raises: ActionAlreadyExists
:raises: :py:class:`~.ActionAlreadyExists`
"""
@abc.abstractmethod
@@ -269,7 +455,7 @@ class BaseConnection(object):
:param context: The security context
:param action_id: The id of an action.
:returns: An action.
:raises: ActionNotFound
:raises: :py:class:`~.ActionNotFound`
"""
@abc.abstractmethod
@@ -279,7 +465,7 @@ class BaseConnection(object):
:param context: The security context
:param action_uuid: The uuid of an action.
:returns: An action.
:raises: ActionNotFound
:raises: :py:class:`~.ActionNotFound`
"""
@abc.abstractmethod
@@ -287,8 +473,8 @@ class BaseConnection(object):
"""Destroy an action and all associated interfaces.
:param action_id: The id or uuid of an action.
:raises: ActionNotFound
:raises: ActionReferenced
:raises: :py:class:`~.ActionNotFound`
:raises: :py:class:`~.ActionReferenced`
"""
@abc.abstractmethod
@@ -297,9 +483,9 @@ class BaseConnection(object):
:param action_id: The id or uuid of an action.
:returns: An action.
:raises: ActionNotFound
:raises: ActionReferenced
:raises: Invalid
:raises: :py:class:`~.ActionNotFound`
:raises: :py:class:`~.ActionReferenced`
:raises: :py:class:`~.Invalid`
"""
@abc.abstractmethod
@@ -332,7 +518,7 @@ class BaseConnection(object):
:param values: A dict containing several items used to identify
and track the action plan.
:returns: An action plan.
:raises: ActionPlanAlreadyExists
:raises: :py:class:`~.ActionPlanAlreadyExists`
"""
@abc.abstractmethod
@@ -342,7 +528,7 @@ class BaseConnection(object):
:param context: The security context
:param action_plan_id: The id of an action plan.
:returns: An action plan.
:raises: ActionPlanNotFound
:raises: :py:class:`~.ActionPlanNotFound`
"""
@abc.abstractmethod
@@ -352,7 +538,7 @@ class BaseConnection(object):
:param context: The security context
:param action_plan_uuid: The uuid of an action plan.
:returns: An action plan.
:raises: ActionPlanNotFound
:raises: :py:class:`~.ActionPlanNotFound`
"""
@abc.abstractmethod
@@ -360,8 +546,8 @@ class BaseConnection(object):
"""Destroy an action plan and all associated interfaces.
:param action_plan_id: The id or uuid of an action plan.
:raises: ActionPlanNotFound
:raises: ActionPlanReferenced
:raises: :py:class:`~.ActionPlanNotFound`
:raises: :py:class:`~.ActionPlanReferenced`
"""
@abc.abstractmethod
@@ -370,7 +556,7 @@ class BaseConnection(object):
:param action_plan_id: The id or uuid of an action plan.
:returns: An action plan.
:raises: ActionPlanNotFound
:raises: ActionPlanReferenced
:raises: Invalid
:raises: :py:class:`~.ActionPlanNotFound`
:raises: :py:class:`~.ActionPlanReferenced`
:raises: :py:class:`~.Invalid`
"""

watcher/db/purge.py Normal file

@@ -0,0 +1,484 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import print_function
import collections
import datetime
import itertools
import sys
from oslo_log import log
from oslo_utils import strutils
import prettytable as ptable
from six.moves import input
from watcher._i18n import _, _LI
from watcher._i18n import lazy_translation_enabled
from watcher.common import context
from watcher.common import exception
from watcher.common import utils
from watcher import objects
LOG = log.getLogger(__name__)
class WatcherObjectsMap(object):
"""Wrapper to deal with watcher objects per type
This wrapper object contains a list of watcher objects per type.
Its main use is to simplify merging watcher objects while avoiding
duplicates, and to represent the relationships between these
objects.
"""
# This is for generating the .pot translations
keymap = collections.OrderedDict([
("goals", _("Goals")),
("strategies", _("Strategies")),
("audit_templates", _("Audit Templates")),
("audits", _("Audits")),
("action_plans", _("Action Plans")),
("actions", _("Actions")),
])
def __init__(self):
for attr_name in self.keys():
setattr(self, attr_name, [])
def values(self):
return (getattr(self, key) for key in self.keys())
@classmethod
def keys(cls):
return cls.keymap.keys()
def __iter__(self):
return itertools.chain(*self.values())
def __add__(self, other):
new_map = self.__class__()
# Merge the 2 items dicts into a new object (and avoid dupes)
for attr_name, initials, others in zip(self.keys(), self.values(),
other.values()):
# Creates a copy
merged = initials[:]
initials_ids = [item.id for item in initials]
non_dupes = [item for item in others
if item.id not in initials_ids]
merged += non_dupes
setattr(new_map, attr_name, merged)
return new_map
def __str__(self):
out = ""
for key, vals in zip(self.keys(), self.values()):
ids = [val.id for val in vals]
out += "%(key)s: %(val)s" % (dict(key=key, val=ids))
out += "\n"
return out
def __len__(self):
return sum(len(getattr(self, key)) for key in self.keys())
def get_count_table(self):
headers = list(self.keymap.values())
headers.append(_("Total")) # We also add a total count
translated_headers = [
h.translate() if lazy_translation_enabled() else h
for h in headers
]
counters = [len(cat_vals) for cat_vals in self.values()] + [len(self)]
table = ptable.PrettyTable(field_names=translated_headers)
table.add_row(counters)
return table.get_string()
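The duplicate-avoiding merge performed by `WatcherObjectsMap.__add__` can be reduced to one list operation; a minimal standalone sketch (the `Obj` type and `merge_by_id` helper are illustrative, not Watcher code):

```python
import collections

# Illustrative stand-in for a Watcher object; only .id matters here.
Obj = collections.namedtuple("Obj", ["id"])

def merge_by_id(initials, others):
    """Concatenate two lists, skipping items whose id is already present."""
    initials_ids = {item.id for item in initials}
    return initials[:] + [item for item in others
                          if item.id not in initials_ids]

merged = merge_by_id([Obj(1), Obj(2)], [Obj(2), Obj(3)])
# ids 1, 2, 3 -- the second Obj(2) is dropped as a duplicate
```

`__add__` applies this per attribute (`goals`, `strategies`, ...) so that repeatedly adding related-object maps never double-counts an entry.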
class PurgeCommand(object):
"""Purges the DB by removing soft deleted entries
The workflow for this purge is the following:
# Find soft deleted objects which are expired
# Find orphan objects
# Find their related objects whether they are expired or not
# Merge them together
# If it does not exceed the limit, destroy them all
"""
ctx = context.make_context(show_deleted=True)
def __init__(self, age_in_days=None, max_number=None,
uuid=None, exclude_orphans=False, dry_run=None):
self.age_in_days = age_in_days
self.max_number = max_number
self.uuid = uuid
self.exclude_orphans = exclude_orphans
self.dry_run = dry_run
self._delete_up_to_max = None
self._objects_map = WatcherObjectsMap()
def get_expiry_date(self):
if not self.age_in_days:
return None
today = datetime.datetime.today()
expiry_date = today - datetime.timedelta(days=self.age_in_days)
return expiry_date
@classmethod
def get_audit_template_uuid(cls, uuid_or_name):
if uuid_or_name is None:
return
query_func = None
if not utils.is_uuid_like(uuid_or_name):
query_func = objects.AuditTemplate.get_by_name
else:
query_func = objects.AuditTemplate.get_by_uuid
try:
audit_template = query_func(cls.ctx, uuid_or_name)
except Exception as exc:
LOG.exception(exc)
raise exception.AuditTemplateNotFound(audit_template=uuid_or_name)
if not audit_template.deleted_at:
raise exception.NotSoftDeletedStateError(
name=_('Audit Template'), id=uuid_or_name)
return audit_template.uuid
def _find_goals(self, filters=None):
return objects.Goal.list(self.ctx, filters=filters)
def _find_strategies(self, filters=None):
return objects.Strategy.list(self.ctx, filters=filters)
def _find_audit_templates(self, filters=None):
return objects.AuditTemplate.list(self.ctx, filters=filters)
def _find_audits(self, filters=None):
return objects.Audit.list(self.ctx, filters=filters)
def _find_action_plans(self, filters=None):
return objects.ActionPlan.list(self.ctx, filters=filters)
def _find_actions(self, filters=None):
return objects.Action.list(self.ctx, filters=filters)
def _find_orphans(self):
orphans = WatcherObjectsMap()
filters = dict(deleted=False)
goals = objects.Goal.list(self.ctx, filters=filters)
strategies = objects.Strategy.list(self.ctx, filters=filters)
audit_templates = objects.AuditTemplate.list(self.ctx, filters=filters)
audits = objects.Audit.list(self.ctx, filters=filters)
action_plans = objects.ActionPlan.list(self.ctx, filters=filters)
actions = objects.Action.list(self.ctx, filters=filters)
goal_ids = set(g.id for g in goals)
orphans.strategies = [
strategy for strategy in strategies
if strategy.goal_id not in goal_ids]
strategy_ids = [s.id for s in (s for s in strategies
if s not in orphans.strategies)]
orphans.audit_templates = [
audit_template for audit_template in audit_templates
if audit_template.goal_id not in goal_ids or
(audit_template.strategy_id and
audit_template.strategy_id not in strategy_ids)]
audit_template_ids = [at.id for at in audit_templates
if at not in orphans.audit_templates]
orphans.audits = [
audit for audit in audits
if audit.audit_template_id not in audit_template_ids]
# Objects with orphan parents are themselves orphans
audit_ids = [audit.id for audit in audits
if audit not in orphans.audits]
orphans.action_plans = [
ap for ap in action_plans
if ap.audit_id not in audit_ids]
# Objects with orphan parents are themselves orphans
action_plan_ids = [ap.id for ap in action_plans
if ap not in orphans.action_plans]
orphans.actions = [
action for action in actions
if action.action_plan_id not in action_plan_ids]
LOG.debug("Orphans found:\n%s", orphans)
LOG.info(_LI("Orphans found:\n%s"), orphans.get_count_table())
return orphans
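The rule applied level by level above is: a child whose parent id is absent from the surviving (non-orphan) parents is itself an orphan, and the result cascades downward. A hedged sketch with illustrative names:

```python
import collections

# Illustrative parent/child row; Watcher's real objects carry more fields.
Row = collections.namedtuple("Row", ["id", "parent_id"])

def find_orphans(parents, children):
    """Children whose parent_id does not match any surviving parent."""
    parent_ids = {p.id for p in parents}
    return [c for c in children if c.parent_id not in parent_ids]

goals = [Row(1, None)]
strategies = [Row(10, 1), Row(11, 99)]  # parent 99 does not exist
orphan_strategies = find_orphans(goals, strategies)
# Cascade: only non-orphan strategies count as parents for the next level.
surviving = [s for s in strategies if s not in orphan_strategies]
```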
def _find_soft_deleted_objects(self):
to_be_deleted = WatcherObjectsMap()
expiry_date = self.get_expiry_date()
filters = dict(deleted=True)
if self.uuid:
filters["uuid"] = self.uuid
if expiry_date:
filters.update(dict(deleted_at__lt=expiry_date))
to_be_deleted.goals.extend(self._find_goals(filters))
to_be_deleted.strategies.extend(self._find_strategies(filters))
to_be_deleted.audit_templates.extend(
self._find_audit_templates(filters))
to_be_deleted.audits.extend(self._find_audits(filters))
to_be_deleted.action_plans.extend(
self._find_action_plans(filters))
to_be_deleted.actions.extend(self._find_actions(filters))
soft_deleted_objs = self._find_related_objects(
to_be_deleted, base_filters=dict(deleted=True))
LOG.debug("Soft deleted objects:\n%s", soft_deleted_objs)
return soft_deleted_objs
def _find_related_objects(self, objects_map, base_filters=None):
base_filters = base_filters or {}
for goal in objects_map.goals:
filters = {}
filters.update(base_filters)
filters.update(dict(goal_id=goal.id))
related_objs = WatcherObjectsMap()
related_objs.strategies = self._find_strategies(filters)
related_objs.audit_templates = self._find_audit_templates(filters)
objects_map += related_objs
for strategy in objects_map.strategies:
filters = {}
filters.update(base_filters)
filters.update(dict(strategy_id=strategy.id))
related_objs = WatcherObjectsMap()
related_objs.audit_templates = self._find_audit_templates(filters)
objects_map += related_objs
for audit_template in objects_map.audit_templates:
filters = {}
filters.update(base_filters)
filters.update(dict(audit_template_id=audit_template.id))
related_objs = WatcherObjectsMap()
related_objs.audits = self._find_audits(filters)
objects_map += related_objs
for audit in objects_map.audits:
filters = {}
filters.update(base_filters)
filters.update(dict(audit_id=audit.id))
related_objs = WatcherObjectsMap()
related_objs.action_plans = self._find_action_plans(filters)
objects_map += related_objs
for action_plan in objects_map.action_plans:
filters = {}
filters.update(base_filters)
filters.update(dict(action_plan_id=action_plan.id))
related_objs = WatcherObjectsMap()
related_objs.actions = self._find_actions(filters)
objects_map += related_objs
return objects_map
def confirmation_prompt(self):
print(self._objects_map.get_count_table())
raw_val = input(
_("There are %(count)d objects set for deletion. "
"Continue? [y/N]") % dict(count=len(self._objects_map)))
return strutils.bool_from_string(raw_val)
def delete_up_to_max_prompt(self, objects_map):
print(objects_map.get_count_table())
print(_("The number of objects (%(num)s) to delete from the database "
"exceeds the maximum number of objects (%(max_number)s) "
"specified.") % dict(max_number=self.max_number,
num=len(objects_map)))
raw_val = input(
_("Do you want to delete objects up to the specified maximum "
"number? [y/N]"))
self._delete_up_to_max = strutils.bool_from_string(raw_val)
return self._delete_up_to_max
def _aggregate_objects(self):
"""Objects aggregated on a 'per goal' basis"""
# todo: aggregate orphans as well
aggregate = []
for goal in self._objects_map.goals:
related_objs = WatcherObjectsMap()
# goals
related_objs.goals = [goal]
# strategies
goal_ids = [goal.id]
related_objs.strategies = [
strategy for strategy in self._objects_map.strategies
if strategy.goal_id in goal_ids
]
# audit templates
strategy_ids = [
strategy.id for strategy in related_objs.strategies]
related_objs.audit_templates = [
at for at in self._objects_map.audit_templates
if at.goal_id in goal_ids or
(at.strategy_id and at.strategy_id in strategy_ids)
]
# audits
audit_template_ids = [
audit_template.id
for audit_template in related_objs.audit_templates]
related_objs.audits = [
audit for audit in self._objects_map.audits
if audit.audit_template_id in audit_template_ids
]
# action plans
audit_ids = [audit.id for audit in related_objs.audits]
related_objs.action_plans = [
action_plan for action_plan in self._objects_map.action_plans
if action_plan.audit_id in audit_ids
]
# actions
action_plan_ids = [
action_plan.id for action_plan in related_objs.action_plans
]
related_objs.actions = [
action for action in self._objects_map.actions
if action.action_plan_id in action_plan_ids
]
aggregate.append(related_objs)
return aggregate
def _get_objects_up_to_limit(self):
aggregated_objects = self._aggregate_objects()
to_be_deleted_subset = WatcherObjectsMap()
for aggregate in aggregated_objects:
if len(aggregate) + len(to_be_deleted_subset) <= self.max_number:
to_be_deleted_subset += aggregate
else:
break
LOG.debug(to_be_deleted_subset)
return to_be_deleted_subset
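`_get_objects_up_to_limit` greedily keeps whole per-goal aggregates while the running total stays within `max_number`, and stops at the first aggregate that would overflow; a minimal sketch of that selection:

```python
def take_up_to_limit(aggregates, max_number):
    """Keep whole aggregates as long as the running total fits the cap."""
    selected = []
    total = 0
    for aggregate in aggregates:
        if total + len(aggregate) > max_number:
            break  # never split an aggregate across the limit
        selected.append(aggregate)
        total += len(aggregate)
    return selected

# With a cap of 4, the 3-item aggregate would overflow, so we stop early.
kept = take_up_to_limit([[1, 2], [3, 4, 5], [6]], 4)
```

Stopping rather than skipping keeps the deletion set consistent: an aggregate is a goal plus everything hanging off it, so it is deleted in full or not at all.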
def find_objects_to_delete(self):
"""Finds all the objects to be purged
:returns: A mapping with all the Watcher objects to be purged
:rtype: :py:class:`~.WatcherObjectsMap` instance
"""
to_be_deleted = self._find_soft_deleted_objects()
if not self.exclude_orphans:
to_be_deleted += self._find_orphans()
LOG.debug("Objects to be deleted:\n%s", to_be_deleted)
return to_be_deleted
def do_delete(self):
LOG.info(_LI("Deleting..."))
# Reversed to avoid errors with foreign keys
for entry in reversed(list(self._objects_map)):
entry.destroy()
def execute(self):
LOG.info(_LI("Starting purge command"))
self._objects_map = self.find_objects_to_delete()
if (self.max_number is not None and
len(self._objects_map) > self.max_number):
if self.delete_up_to_max_prompt(self._objects_map):
self._objects_map = self._get_objects_up_to_limit()
else:
return
_orphans_note = (_(" (orphans excluded)") if self.exclude_orphans
else _(" (may include orphans)"))
if not self.dry_run and self.confirmation_prompt():
self.do_delete()
print(_("Purge results summary%s:") % _orphans_note)
LOG.info(_LI("Purge results summary%s:"), _orphans_note)
else:
LOG.debug(self._objects_map)
print(_("Below is a table of the objects "
"that can be purged%s:") % _orphans_note)
LOG.info("\n%s", self._objects_map.get_count_table())
print(self._objects_map.get_count_table())
LOG.info(_LI("Purge process completed"))
def purge(age_in_days, max_number, audit_template, exclude_orphans, dry_run):
"""Removes soft deleted objects from the database
:param age_in_days: Number of days since deletion (from today)
to exclude from the purge. If None, everything will be purged.
:type age_in_days: int
:param max_number: Max number of objects expected to be deleted.
Prevents the deletion if exceeded. No limit if set to None.
:type max_number: int
:param audit_template: UUID or name of the audit template to purge.
:type audit_template: str
:param exclude_orphans: Flag to indicate whether or not you want to
exclude orphans from deletion (default: False).
:type exclude_orphans: bool
:param dry_run: Flag to indicate whether or not you want to perform
a dry run (no deletion).
:type dry_run: bool
"""
try:
if max_number and max_number < 0:
raise exception.NegativeLimitError
LOG.info("[options] age_in_days = %s", age_in_days)
LOG.info("[options] max_number = %s", max_number)
LOG.info("[options] audit_template = %s", audit_template)
LOG.info("[options] exclude_orphans = %s", exclude_orphans)
LOG.info("[options] dry_run = %s", dry_run)
uuid = PurgeCommand.get_audit_template_uuid(audit_template)
cmd = PurgeCommand(age_in_days, max_number, uuid,
exclude_orphans, dry_run)
cmd.execute()
except Exception as exc:
LOG.exception(exc)
print(exc)
sys.exit(1)


@@ -21,7 +21,6 @@ from oslo_config import cfg
from oslo_db import exception as db_exc
from oslo_db.sqlalchemy import session as db_session
from oslo_db.sqlalchemy import utils as db_utils
from oslo_log import log
from sqlalchemy.orm import exc
from watcher import _i18n
@@ -29,10 +28,12 @@ from watcher.common import exception
from watcher.common import utils
from watcher.db import api
from watcher.db.sqlalchemy import models
from watcher.objects import action as action_objects
from watcher.objects import action_plan as ap_objects
from watcher.objects import audit as audit_objects
from watcher.objects import utils as objutils
CONF = cfg.CONF
LOG = log.getLogger(__name__)
_ = _i18n._
_FACADE = None
@@ -105,25 +106,213 @@ class Connection(api.BaseConnection):
"""SqlAlchemy connection."""
def __init__(self):
pass
super(Connection, self).__init__()
def __add_soft_delete_mixin_filters(self, query, filters, model):
if 'deleted' in filters:
if bool(filters['deleted']):
query = query.filter(model.deleted != 0)
else:
query = query.filter(model.deleted == 0)
if 'deleted_at__eq' in filters:
query = query.filter(
model.deleted_at == objutils.datetime_or_str_or_none(
filters['deleted_at__eq']))
if 'deleted_at__gt' in filters:
query = query.filter(
model.deleted_at > objutils.datetime_or_str_or_none(
filters['deleted_at__gt']))
if 'deleted_at__gte' in filters:
query = query.filter(
model.deleted_at >= objutils.datetime_or_str_or_none(
filters['deleted_at__gte']))
if 'deleted_at__lt' in filters:
query = query.filter(
model.deleted_at < objutils.datetime_or_str_or_none(
filters['deleted_at__lt']))
if 'deleted_at__lte' in filters:
query = query.filter(
model.deleted_at <= objutils.datetime_or_str_or_none(
filters['deleted_at__lte']))
return query
def __add_timestamp_mixin_filters(self, query, filters, model):
if 'created_at__eq' in filters:
query = query.filter(
model.created_at == objutils.datetime_or_str_or_none(
filters['created_at__eq']))
if 'created_at__gt' in filters:
query = query.filter(
model.created_at > objutils.datetime_or_str_or_none(
filters['created_at__gt']))
if 'created_at__gte' in filters:
query = query.filter(
model.created_at >= objutils.datetime_or_str_or_none(
filters['created_at__gte']))
if 'created_at__lt' in filters:
query = query.filter(
model.created_at < objutils.datetime_or_str_or_none(
filters['created_at__lt']))
if 'created_at__lte' in filters:
query = query.filter(
model.created_at <= objutils.datetime_or_str_or_none(
filters['created_at__lte']))
if 'updated_at__eq' in filters:
query = query.filter(
model.updated_at == objutils.datetime_or_str_or_none(
filters['updated_at__eq']))
if 'updated_at__gt' in filters:
query = query.filter(
model.updated_at > objutils.datetime_or_str_or_none(
filters['updated_at__gt']))
if 'updated_at__gte' in filters:
query = query.filter(
model.updated_at >= objutils.datetime_or_str_or_none(
filters['updated_at__gte']))
if 'updated_at__lt' in filters:
query = query.filter(
model.updated_at < objutils.datetime_or_str_or_none(
filters['updated_at__lt']))
if 'updated_at__lte' in filters:
query = query.filter(
model.updated_at <= objutils.datetime_or_str_or_none(
filters['updated_at__lte']))
return query
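The `field__operator` key convention handled above (`deleted_at__lt`, `created_at__gte`, ...) maps a suffix to a comparison operator. An in-memory sketch of the same convention, using plain dicts instead of SQLAlchemy columns:

```python
import operator

# Suffix -> comparison, mirroring the __eq/__gt/__gte/__lt/__lte handling.
_OPS = {"eq": operator.eq, "gt": operator.gt, "gte": operator.ge,
        "lt": operator.lt, "lte": operator.le}

def apply_filters(rows, filters):
    """Filter a list of dicts with 'field__op' keys (plain keys mean eq)."""
    for key, value in filters.items():
        field, _, op = key.partition("__")
        cmp = _OPS[op] if op else operator.eq
        rows = [row for row in rows if cmp(row[field], value)]
    return rows

rows = [{"created_at": 1}, {"created_at": 5}]
older = apply_filters(rows, {"created_at__lt": 3})
```

In the real code each branch builds a `query.filter(...)` clause instead, but the key-to-operator dispatch is the same idea.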
def __add_simple_filter(self, query, model, fieldname, value):
return query.filter(getattr(model, fieldname) == value)
def __add_join_filter(self, query, model, join_model, fieldname, value):
query = query.join(join_model)
return self.__add_simple_filter(query, join_model, fieldname, value)
def _add_filters(self, query, model, filters=None,
plain_fields=None, join_fieldmap=None):
"""Generic way to add filters to a Watcher model
:param query: a :py:class:`sqlalchemy.orm.query.Query` instance
:param model: the model class the filters should relate to
:param filters: dict with the following structure {"fieldname": value}
:param plain_fields: a list of field names to filter on by equality
:param join_fieldmap: a dict mapping each filter key to a
(fieldname, joined model) tuple
"""
filters = filters or {}
plain_fields = plain_fields or ()
join_fieldmap = join_fieldmap or {}
for fieldname, value in filters.items():
if fieldname in plain_fields:
query = self.__add_simple_filter(
query, model, fieldname, value)
elif fieldname in join_fieldmap:
join_field, join_model = join_fieldmap[fieldname]
query = self.__add_join_filter(
query, model, join_model, join_field, value)
query = self.__add_soft_delete_mixin_filters(query, filters, model)
query = self.__add_timestamp_mixin_filters(query, filters, model)
return query
def _get(self, context, model, fieldname, value):
query = model_query(model)
query = query.filter(getattr(model, fieldname) == value)
if not context.show_deleted:
query = query.filter(model.deleted_at.is_(None))
try:
obj = query.one()
except exc.NoResultFound:
raise exception.ResourceNotFound(name=model.__name__, id=value)
return obj
def _update(self, model, id_, values):
session = get_session()
with session.begin():
query = model_query(model, session=session)
query = add_identity_filter(query, id_)
try:
ref = query.with_lockmode('update').one()
except exc.NoResultFound:
raise exception.ResourceNotFound(name=model.__name__, id=id_)
ref.update(values)
return ref
def _soft_delete(self, model, id_):
session = get_session()
with session.begin():
query = model_query(model, session=session)
query = add_identity_filter(query, id_)
try:
query.one()
except exc.NoResultFound:
raise exception.ResourceNotFound(name=model.__name__, id=id_)
query.soft_delete()
def _destroy(self, model, id_):
session = get_session()
with session.begin():
query = model_query(model, session=session)
query = add_identity_filter(query, id_)
try:
query.one()
except exc.NoResultFound:
raise exception.ResourceNotFound(name=model.__name__, id=id_)
query.delete()
def _add_goals_filters(self, query, filters):
if filters is None:
filters = {}
plain_fields = ['uuid', 'name', 'display_name']
return self._add_filters(
query=query, model=models.Goal, filters=filters,
plain_fields=plain_fields)
def _add_strategies_filters(self, query, filters):
plain_fields = ['uuid', 'name', 'display_name', 'goal_id']
join_fieldmap = {
'goal_uuid': ("uuid", models.Goal),
'goal_name': ("name", models.Goal)
}
return self._add_filters(
query=query, model=models.Strategy, filters=filters,
plain_fields=plain_fields, join_fieldmap=join_fieldmap)
def _add_audit_templates_filters(self, query, filters):
if filters is None:
filters = []
filters = {}
if 'name' in filters:
query = query.filter_by(name=filters['name'])
if 'host_aggregate' in filters:
query = query.filter_by(host_aggregate=filters['host_aggregate'])
if 'goal' in filters:
query = query.filter_by(goal=filters['goal'])
plain_fields = ['uuid', 'name', 'host_aggregate',
'goal_id', 'strategy_id']
join_fieldmap = {
'goal_uuid': ("uuid", models.Goal),
'goal_name': ("name", models.Goal),
'strategy_uuid': ("uuid", models.Strategy),
'strategy_name': ("name", models.Strategy),
}
return query
return self._add_filters(
query=query, model=models.AuditTemplate, filters=filters,
plain_fields=plain_fields, join_fieldmap=join_fieldmap)
def _add_audits_filters(self, query, filters):
if filters is None:
filters = []
if 'uuid' in filters:
query = query.filter_by(uuid=filters['uuid'])
if 'type' in filters:
query = query.filter_by(type=filters['type'])
if 'state' in filters:
@@ -144,12 +333,20 @@ class Connection(api.BaseConnection):
query = query.filter(
models.AuditTemplate.name ==
filters['audit_template_name'])
query = self.__add_soft_delete_mixin_filters(
query, filters, models.Audit)
query = self.__add_timestamp_mixin_filters(
query, filters, models.Audit)
return query
def _add_action_plans_filters(self, query, filters):
if filters is None:
filters = []
if 'uuid' in filters:
query = query.filter_by(uuid=filters['uuid'])
if 'state' in filters:
query = query.filter_by(state=filters['state'])
if 'audit_id' in filters:
@@ -158,12 +355,20 @@ class Connection(api.BaseConnection):
query = query.join(models.Audit,
models.ActionPlan.audit_id == models.Audit.id)
query = query.filter(models.Audit.uuid == filters['audit_uuid'])
query = self.__add_soft_delete_mixin_filters(
query, filters, models.ActionPlan)
query = self.__add_timestamp_mixin_filters(
query, filters, models.ActionPlan)
return query
def _add_actions_filters(self, query, filters):
if filters is None:
filters = []
if 'uuid' in filters:
query = query.filter_by(uuid=filters['uuid'])
if 'action_plan_id' in filters:
query = query.filter_by(action_plan_id=filters['action_plan_id'])
if 'action_plan_uuid' in filters:
@@ -181,11 +386,146 @@ class Connection(api.BaseConnection):
if 'state' in filters:
query = query.filter_by(state=filters['state'])
if 'alarm' in filters:
query = query.filter_by(alarm=filters['alarm'])
query = self.__add_soft_delete_mixin_filters(
query, filters, models.Action)
query = self.__add_timestamp_mixin_filters(
query, filters, models.Action)
return query
# ### GOALS ### #
def get_goal_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None):
query = model_query(models.Goal)
query = self._add_goals_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Goal, limit, marker,
sort_key, sort_dir, query)
def create_goal(self, values):
# ensure defaults are present for new goals
if not values.get('uuid'):
values['uuid'] = utils.generate_uuid()
goal = models.Goal()
goal.update(values)
try:
goal.save()
except db_exc.DBDuplicateEntry:
raise exception.GoalAlreadyExists(uuid=values['uuid'])
return goal
def _get_goal(self, context, fieldname, value):
try:
return self._get(context, model=models.Goal,
fieldname=fieldname, value=value)
except exception.ResourceNotFound:
raise exception.GoalNotFound(goal=value)
def get_goal_by_id(self, context, goal_id):
return self._get_goal(context, fieldname="id", value=goal_id)
def get_goal_by_uuid(self, context, goal_uuid):
return self._get_goal(context, fieldname="uuid", value=goal_uuid)
def get_goal_by_name(self, context, goal_name):
return self._get_goal(context, fieldname="name", value=goal_name)
def destroy_goal(self, goal_id):
try:
return self._destroy(models.Goal, goal_id)
except exception.ResourceNotFound:
raise exception.GoalNotFound(goal=goal_id)
def update_goal(self, goal_id, values):
if 'uuid' in values:
raise exception.Invalid(
message=_("Cannot overwrite UUID for an existing Goal."))
try:
return self._update(models.Goal, goal_id, values)
except exception.ResourceNotFound:
raise exception.GoalNotFound(goal=goal_id)
def soft_delete_goal(self, goal_id):
try:
self._soft_delete(models.Goal, goal_id)
except exception.ResourceNotFound:
raise exception.GoalNotFound(goal=goal_id)
# ### STRATEGIES ### #
def get_strategy_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None):
query = model_query(models.Strategy)
query = self._add_strategies_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Strategy, limit, marker,
sort_key, sort_dir, query)
def create_strategy(self, values):
# ensure defaults are present for new strategies
if not values.get('uuid'):
values['uuid'] = utils.generate_uuid()
strategy = models.Strategy()
strategy.update(values)
try:
strategy.save()
except db_exc.DBDuplicateEntry:
raise exception.StrategyAlreadyExists(uuid=values['uuid'])
return strategy
def _get_strategy(self, context, fieldname, value):
try:
return self._get(context, model=models.Strategy,
fieldname=fieldname, value=value)
except exception.ResourceNotFound:
raise exception.StrategyNotFound(strategy=value)
def get_strategy_by_id(self, context, strategy_id):
return self._get_strategy(context, fieldname="id", value=strategy_id)
def get_strategy_by_uuid(self, context, strategy_uuid):
return self._get_strategy(
context, fieldname="uuid", value=strategy_uuid)
def get_strategy_by_name(self, context, strategy_name):
return self._get_strategy(
context, fieldname="name", value=strategy_name)
def destroy_strategy(self, strategy_id):
try:
return self._destroy(models.Strategy, strategy_id)
except exception.ResourceNotFound:
raise exception.StrategyNotFound(strategy=strategy_id)
def update_strategy(self, strategy_id, values):
if 'uuid' in values:
raise exception.Invalid(
message=_("Cannot overwrite UUID for an existing Strategy."))
try:
return self._update(models.Strategy, strategy_id, values)
except exception.ResourceNotFound:
raise exception.StrategyNotFound(strategy=strategy_id)
def soft_delete_strategy(self, strategy_id):
try:
self._soft_delete(models.Strategy, strategy_id)
except exception.ResourceNotFound:
raise exception.StrategyNotFound(strategy=strategy_id)
# ### AUDIT TEMPLATES ### #
def get_audit_template_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None):
@@ -193,7 +533,6 @@ class Connection(api.BaseConnection):
query = self._add_audit_templates_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.AuditTemplate, limit, marker,
sort_key, sort_dir, query)
@@ -202,117 +541,78 @@ class Connection(api.BaseConnection):
if not values.get('uuid'):
values['uuid'] = utils.generate_uuid()
query = model_query(models.AuditTemplate)
query = query.filter_by(name=values.get('name'),
deleted_at=None)
if len(query.all()) > 0:
raise exception.AuditTemplateAlreadyExists(
audit_template=values['name'])
audit_template = models.AuditTemplate()
audit_template.update(values)
try:
audit_template.save()
except db_exc.DBDuplicateEntry:
raise exception.AuditTemplateAlreadyExists(uuid=values['uuid'],
name=values['name'])
raise exception.AuditTemplateAlreadyExists(
audit_template=values['name'])
return audit_template
def get_audit_template_by_id(self, context, audit_template_id):
query = model_query(models.AuditTemplate)
query = query.filter_by(id=audit_template_id)
def _get_audit_template(self, context, fieldname, value):
try:
audit_template = query.one()
if not context.show_deleted:
if audit_template.deleted_at is not None:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_id)
return audit_template
except exc.NoResultFound:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_id)
return self._get(context, model=models.AuditTemplate,
fieldname=fieldname, value=value)
except exception.ResourceNotFound:
raise exception.AuditTemplateNotFound(audit_template=value)
def get_audit_template_by_id(self, context, audit_template_id):
return self._get_audit_template(
context, fieldname="id", value=audit_template_id)
def get_audit_template_by_uuid(self, context, audit_template_uuid):
query = model_query(models.AuditTemplate)
query = query.filter_by(uuid=audit_template_uuid)
try:
audit_template = query.one()
if not context.show_deleted:
if audit_template.deleted_at is not None:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_uuid)
return audit_template
except exc.NoResultFound:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_uuid)
return self._get_audit_template(
context, fieldname="uuid", value=audit_template_uuid)
def get_audit_template_by_name(self, context, audit_template_name):
query = model_query(models.AuditTemplate)
query = query.filter_by(name=audit_template_name)
try:
audit_template = query.one()
if not context.show_deleted:
if audit_template.deleted_at is not None:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_name)
return audit_template
except exc.MultipleResultsFound:
raise exception.Conflict(
_('Multiple audit templates exist with the same name.'
' Please use the audit template uuid instead'))
except exc.NoResultFound:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_name)
return self._get_audit_template(
context, fieldname="name", value=audit_template_name)
def destroy_audit_template(self, audit_template_id):
session = get_session()
with session.begin():
query = model_query(models.AuditTemplate, session=session)
query = add_identity_filter(query, audit_template_id)
try:
query.one()
except exc.NoResultFound:
raise exception.AuditTemplateNotFound(node=audit_template_id)
query.delete()
try:
return self._destroy(models.AuditTemplate, audit_template_id)
except exception.ResourceNotFound:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_id)
def update_audit_template(self, audit_template_id, values):
if 'uuid' in values:
raise exception.Invalid(
message=_("Cannot overwrite UUID for an existing "
"Audit Template."))
return self._do_update_audit_template(audit_template_id, values)
def _do_update_audit_template(self, audit_template_id, values):
session = get_session()
with session.begin():
query = model_query(models.AuditTemplate, session=session)
query = add_identity_filter(query, audit_template_id)
try:
ref = query.with_lockmode('update').one()
except exc.NoResultFound:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_id)
ref.update(values)
return ref
try:
return self._update(
models.AuditTemplate, audit_template_id, values)
except exception.ResourceNotFound:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_id)
def soft_delete_audit_template(self, audit_template_id):
session = get_session()
with session.begin():
query = model_query(models.AuditTemplate, session=session)
query = add_identity_filter(query, audit_template_id)
try:
self._soft_delete(models.AuditTemplate, audit_template_id)
except exception.ResourceNotFound:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_id)
try:
query.one()
except exc.NoResultFound:
raise exception.AuditTemplateNotFound(node=audit_template_id)
query.soft_delete()
# ### AUDITS ### #
def get_audit_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None):
query = model_query(models.Audit)
query = self._add_audits_filters(query, filters)
if not context.show_deleted:
query = query.filter(~(models.Audit.state == 'DELETED'))
query = query.filter(
~(models.Audit.state == audit_objects.State.DELETED))
return _paginate_query(models.Audit, limit, marker,
sort_key, sort_dir, query)
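This hunk replaces the hard-coded `'DELETED'` string with the shared `audit_objects.State.DELETED` constant, so the DB layer and the versioned-object layer cannot drift apart. A minimal stand-alone sketch of the pattern (hypothetical names, plain dicts in place of SQLAlchemy models):

```python
class State(object):
    """Simplified stand-in for the audit lifecycle state constants."""
    PENDING = 'PENDING'
    SUCCEEDED = 'SUCCEEDED'
    DELETED = 'DELETED'


def filter_deleted(audits, show_deleted=False):
    """Emulates the ~(models.Audit.state == State.DELETED) query filter."""
    if show_deleted:
        return list(audits)
    return [a for a in audits if a['state'] != State.DELETED]


audits = [{'uuid': 'a1', 'state': State.PENDING},
          {'uuid': 'a2', 'state': State.DELETED}]
visible = filter_deleted(audits)
```

Referencing the constant in both the filter and the objects module means a renamed state breaks loudly at import time rather than silently returning soft-deleted rows.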
@@ -340,7 +640,7 @@ class Connection(api.BaseConnection):
try:
audit = query.one()
if not context.show_deleted:
if audit.state == 'DELETED':
if audit.state == audit_objects.State.DELETED:
raise exception.AuditNotFound(audit=audit_id)
return audit
except exc.NoResultFound:
@@ -353,7 +653,7 @@ class Connection(api.BaseConnection):
try:
audit = query.one()
if not context.show_deleted:
if audit.state == 'DELETED':
if audit.state == audit_objects.State.DELETED:
raise exception.AuditNotFound(audit=audit_uuid)
return audit
except exc.NoResultFound:
@@ -412,16 +712,19 @@ class Connection(api.BaseConnection):
try:
query.one()
except exc.NoResultFound:
raise exception.AuditNotFound(node=audit_id)
raise exception.AuditNotFound(audit=audit_id)
query.soft_delete()
# ### ACTIONS ### #
def get_action_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None):
query = model_query(models.Action)
query = self._add_actions_filters(query, filters)
if not context.show_deleted:
query = query.filter(~(models.Action.state == 'DELETED'))
query = query.filter(
~(models.Action.state == action_objects.State.DELETED))
return _paginate_query(models.Action, limit, marker,
sort_key, sort_dir, query)
@@ -444,7 +747,7 @@ class Connection(api.BaseConnection):
try:
action = query.one()
if not context.show_deleted:
if action.state == 'DELETED':
if action.state == action_objects.State.DELETED:
raise exception.ActionNotFound(
action=action_id)
return action
@@ -457,7 +760,7 @@ class Connection(api.BaseConnection):
try:
action = query.one()
if not context.show_deleted:
if action.state == 'DELETED':
if action.state == action_objects.State.DELETED:
raise exception.ActionNotFound(
action=action_uuid)
return action
@@ -504,17 +807,20 @@ class Connection(api.BaseConnection):
try:
query.one()
except exc.NoResultFound:
raise exception.ActionNotFound(node=action_id)
raise exception.ActionNotFound(action=action_id)
query.soft_delete()
# ### ACTION PLANS ### #
def get_action_plan_list(
self, context, columns=None, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None):
query = model_query(models.ActionPlan)
query = self._add_action_plans_filters(query, filters)
if not context.show_deleted:
query = query.filter(~(models.ActionPlan.state == 'DELETED'))
query = query.filter(
~(models.ActionPlan.state == ap_objects.State.DELETED))
return _paginate_query(models.ActionPlan, limit, marker,
sort_key, sort_dir, query)
@@ -539,7 +845,7 @@ class Connection(api.BaseConnection):
try:
action_plan = query.one()
if not context.show_deleted:
if action_plan.state == 'DELETED':
if action_plan.state == ap_objects.State.DELETED:
raise exception.ActionPlanNotFound(
action_plan=action_plan_id)
return action_plan
@@ -553,7 +859,7 @@ class Connection(api.BaseConnection):
try:
action_plan = query.one()
if not context.show_deleted:
if action_plan.state == 'DELETED':
if action_plan.state == ap_objects.State.DELETED:
raise exception.ActionPlanNotFound(
action_plan=action_plan__uuid)
return action_plan
@@ -614,6 +920,6 @@ class Connection(api.BaseConnection):
try:
query.one()
except exc.NoResultFound:
raise exception.ActionPlanNotFound(node=action_plan_id)
raise exception.ActionPlanNotFound(action_plan=action_plan_id)
query.soft_delete()


@@ -110,13 +110,41 @@ class WatcherBase(models.SoftDeleteMixin,
Base = declarative_base(cls=WatcherBase)
class Strategy(Base):
"""Represents a strategy."""
__tablename__ = 'strategies'
__table_args__ = (
schema.UniqueConstraint('uuid', name='uniq_strategies0uuid'),
table_args()
)
id = Column(Integer, primary_key=True)
uuid = Column(String(36))
name = Column(String(63), nullable=False)
display_name = Column(String(63), nullable=False)
goal_id = Column(Integer, ForeignKey('goals.id'), nullable=False)
class Goal(Base):
"""Represents a goal."""
__tablename__ = 'goals'
__table_args__ = (
schema.UniqueConstraint('uuid', name='uniq_goals0uuid'),
table_args(),
)
id = Column(Integer, primary_key=True)
uuid = Column(String(36))
name = Column(String(63), nullable=False)
display_name = Column(String(63), nullable=False)
class AuditTemplate(Base):
"""Represents an audit template."""
__tablename__ = 'audit_templates'
__table_args__ = (
schema.UniqueConstraint('uuid', name='uniq_audit_templates0uuid'),
schema.UniqueConstraint('name', name='uniq_audit_templates0name'),
table_args()
)
id = Column(Integer, primary_key=True)
@@ -124,7 +152,8 @@ class AuditTemplate(Base):
name = Column(String(63), nullable=True)
description = Column(String(255), nullable=True)
host_aggregate = Column(Integer, nullable=True)
goal = Column(String(63), nullable=True)
goal_id = Column(Integer, ForeignKey('goals.id'), nullable=False)
strategy_id = Column(Integer, ForeignKey('strategies.id'), nullable=True)
extra = Column(JSONEncodedDict)
version = Column(String(15), nullable=True)
@@ -162,8 +191,6 @@ class Action(Base):
action_type = Column(String(255), nullable=False)
input_parameters = Column(JSONEncodedDict, nullable=True)
state = Column(String(20), nullable=True)
# todo(jed) remove parameter alarm
alarm = Column(String(36))
next = Column(String(36), nullable=True)
@@ -178,9 +205,6 @@ class ActionPlan(Base):
id = Column(Integer, primary_key=True)
uuid = Column(String(36))
first_action_id = Column(Integer)
# first_action_id = Column(Integer, ForeignKeyConstraint(
# ['first_action_id'], ['actions.id'], name='fk_first_action_id'),
# nullable=True)
audit_id = Column(Integer, ForeignKey('audits.id'),
nullable=True)
state = Column(String(20), nullable=True)
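The models hunk replaces the free-form `goal` string column on `AuditTemplate` with `goal_id`/`strategy_id` foreign keys into the new `goals` and `strategies` tables. A stdlib `sqlite3` sketch of the resulting relational shape (columns simplified, sample data hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE goals (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    name TEXT NOT NULL
);
CREATE TABLE strategies (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    name TEXT NOT NULL,
    goal_id INTEGER NOT NULL REFERENCES goals(id)
);
CREATE TABLE audit_templates (
    id INTEGER PRIMARY KEY,
    name TEXT,
    goal_id INTEGER NOT NULL REFERENCES goals(id),
    strategy_id INTEGER REFERENCES strategies(id)  -- nullable: strategy optional
);
""")
conn.execute("INSERT INTO goals (id, uuid, name) VALUES (1, 'g-uuid', 'DUMMY')")
conn.execute("INSERT INTO strategies (id, uuid, name, goal_id) "
             "VALUES (1, 's-uuid', 'dummy', 1)")
conn.execute("INSERT INTO audit_templates (id, name, goal_id, strategy_id) "
             "VALUES (1, 'at1', 1, 1)")

# Resolving an audit template's goal name now goes through the FK join
# instead of reading a free-form string column.
row = conn.execute("""
    SELECT at.name, g.name FROM audit_templates at
    JOIN goals g ON g.id = at.goal_id
""").fetchone()
```

The `strategy_id` column stays nullable, matching the diff: an audit template may name only a goal and let the selector pick a strategy.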


@@ -38,13 +38,10 @@ See :doc:`../architecture` for more details on this component.
"""
from oslo_config import cfg
from oslo_log import log
from watcher.common.messaging import messaging_core
from watcher.decision_engine.messaging import audit_endpoint
LOG = log.getLogger(__name__)
CONF = cfg.CONF
WATCHER_DECISION_ENGINE_OPTS = [
@@ -78,18 +75,15 @@ CONF.register_group(decision_engine_opt_group)
CONF.register_opts(WATCHER_DECISION_ENGINE_OPTS, decision_engine_opt_group)
class DecisionEngineManager(messaging_core.MessagingCore):
def __init__(self):
super(DecisionEngineManager, self).__init__(
CONF.watcher_decision_engine.publisher_id,
CONF.watcher_decision_engine.conductor_topic,
CONF.watcher_decision_engine.status_topic,
api_version=self.API_VERSION)
endpoint = audit_endpoint.AuditEndpoint(
self,
max_workers=CONF.watcher_decision_engine.max_workers)
self.conductor_topic_handler.add_endpoint(endpoint)
class DecisionEngineManager(object):
def join(self):
self.conductor_topic_handler.join()
self.status_topic_handler.join()
API_VERSION = '1.0'
conductor_endpoints = [audit_endpoint.AuditEndpoint]
status_endpoints = []
def __init__(self):
self.publisher_id = CONF.watcher_decision_engine.publisher_id
self.conductor_topic = CONF.watcher_decision_engine.conductor_topic
self.status_topic = CONF.watcher_decision_engine.status_topic
self.api_version = self.API_VERSION


@@ -18,17 +18,20 @@
#
from concurrent import futures
from oslo_config import cfg
from oslo_log import log
from watcher.decision_engine.audit import default
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class AuditEndpoint(object):
def __init__(self, messaging, max_workers):
def __init__(self, messaging):
self._messaging = messaging
self._executor = futures.ThreadPoolExecutor(max_workers=max_workers)
self._executor = futures.ThreadPoolExecutor(
max_workers=CONF.watcher_decision_engine.max_workers)
@property
def executor(self):


@@ -13,16 +13,12 @@
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception
from watcher.decision_engine.model import hypervisor
from watcher.decision_engine.model import mapping
from watcher.decision_engine.model import vm
LOG = log.getLogger(__name__)
class ModelRoot(object):
def __init__(self):


@@ -21,6 +21,7 @@ class ResourceType(Enum):
cpu_cores = 'num_cores'
memory = 'memory'
disk = 'disk'
disk_capacity = 'disk_capacity'
class Resource(object):


@@ -37,18 +37,34 @@ congestion which may decrease the :ref:`SLA <sla_definition>` for
It is also important to schedule :ref:`Actions <action_definition>` in order to
avoid security issues such as denial of service on core OpenStack services.
:ref:`Some default implementations are provided <watcher_planners>`, but it is
possible to :ref:`develop new implementations <implement_planner_plugin>`
which are dynamically loaded by Watcher at launch time.
See :doc:`../architecture` for more details on this component.
"""
import abc
import six
from watcher.common.loader import loadable
@six.add_metaclass(abc.ABCMeta)
class BasePlanner(object):
class BasePlanner(loadable.Loadable):
@classmethod
def get_config_opts(cls):
"""Defines the configuration options to be associated to this loadable
:return: A list of configuration options relative to this Loadable
:rtype: list of :class:`oslo_config.cfg.Opt` instances
"""
return []
@abc.abstractmethod
def schedule(self, context, audit_uuid, solution):
"""The planner receives a solution to schedule
"""The planner receives a solution to schedule
:param solution: A solution provided by a strategy for scheduling
:type solution: :py:class:`~.BaseSolution` subclass instance
@@ -56,7 +72,7 @@ class BasePlanner(object):
:type audit_uuid: str
:return: Action plan with an ordered sequence of actions such that all
security, dependency, and performance requirements are met.
:rtype: :py:class:`watcher.objects.action_plan.ActionPlan` instance
:rtype: :py:class:`watcher.objects.ActionPlan` instance
"""
# example: directed acyclic graph
raise NotImplementedError()
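With `BasePlanner` now extending `loadable.Loadable`, every planner plugin can declare its own configuration options via `get_config_opts()`. A pure-Python sketch of the contract (hypothetical `WeightPlanner`, simplified tuples in place of real `oslo_config.cfg.Opt` instances):

```python
import abc


class Loadable(object):
    """Simplified stand-in for watcher.common.loader.loadable.Loadable."""

    def __init__(self, config):
        self.config = config

    @classmethod
    def get_config_opts(cls):
        return []


class BasePlanner(Loadable, metaclass=abc.ABCMeta):

    @abc.abstractmethod
    def schedule(self, context, audit_uuid, solution):
        raise NotImplementedError()


class WeightPlanner(BasePlanner):
    """Hypothetical plugin declaring one option of its own."""

    @classmethod
    def get_config_opts(cls):
        # Real code would return e.g. [cfg.IntOpt('weight', default=1, ...)].
        return [('weight', 1)]

    def schedule(self, context, audit_uuid, solution):
        # A real planner orders the solution's actions into an action plan.
        return list(solution)


planner = WeightPlanner(config={'weight': 3})
```

The loader can thus register each plugin's options under its own config group before instantiating it with the resolved `config` mapping.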


@@ -53,7 +53,6 @@ class DefaultPlanner(base.BasePlanner):
'action_type': action_type,
'input_parameters': input_parameters,
'state': objects.action.State.PENDING,
'alarm': None,
'next': None,
}
return action


@@ -17,14 +17,10 @@
from __future__ import unicode_literals
from oslo_log import log
from watcher.common.loader.default import DefaultLoader
LOG = log.getLogger(__name__)
from watcher.common.loader import default
class DefaultPlannerLoader(DefaultLoader):
class DefaultPlannerLoader(default.DefaultLoader):
def __init__(self):
super(DefaultPlannerLoader, self).__init__(
namespace='watcher_planners')


@@ -18,35 +18,25 @@
#
from oslo_config import cfg
from oslo_log import log
from watcher.common import exception
from watcher.common.messaging import messaging_core
from watcher.common.messaging import notification_handler
from watcher.common import service
from watcher.common import utils
from watcher.decision_engine.manager import decision_engine_opt_group
from watcher.decision_engine.manager import WATCHER_DECISION_ENGINE_OPTS
from watcher.decision_engine import manager
LOG = log.getLogger(__name__)
CONF = cfg.CONF
CONF.register_group(decision_engine_opt_group)
CONF.register_opts(WATCHER_DECISION_ENGINE_OPTS, decision_engine_opt_group)
CONF.register_group(manager.decision_engine_opt_group)
CONF.register_opts(manager.WATCHER_DECISION_ENGINE_OPTS,
manager.decision_engine_opt_group)
class DecisionEngineAPI(messaging_core.MessagingCore):
class DecisionEngineAPI(service.Service):
def __init__(self):
super(DecisionEngineAPI, self).__init__(
CONF.watcher_decision_engine.publisher_id,
CONF.watcher_decision_engine.conductor_topic,
CONF.watcher_decision_engine.status_topic,
api_version=self.API_VERSION,
)
self.handler = notification_handler.NotificationHandler(
self.publisher_id)
self.status_topic_handler.add_endpoint(self.handler)
super(DecisionEngineAPI, self).__init__(DecisionEngineAPIManager)
def trigger_audit(self, context, audit_uuid=None):
if not utils.is_uuid_like(audit_uuid):
@@ -54,3 +44,17 @@ class DecisionEngineAPI(messaging_core.MessagingCore):
return self.conductor_client.call(
context.to_dict(), 'trigger_audit', audit_uuid=audit_uuid)
class DecisionEngineAPIManager(object):
API_VERSION = '1.0'
conductor_endpoints = []
status_endpoints = [notification_handler.NotificationHandler]
def __init__(self):
self.publisher_id = CONF.watcher_decision_engine.publisher_id
self.conductor_topic = CONF.watcher_decision_engine.conductor_topic
self.status_topic = CONF.watcher_decision_engine.status_topic
self.api_version = self.API_VERSION
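Both the manager and the API are refactored from `MessagingCore` subclasses into plain classes that only declare their endpoints and topics; a generic `service.Service` does the wiring. A minimal sketch of that inversion (hypothetical `Service` and stub endpoint, not the real watcher.common.service implementation):

```python
class FakeEndpoint(object):
    """Stub for an RPC endpoint such as AuditEndpoint."""

    def __init__(self, service):
        self.service = service


class Service(object):
    """Simplified stand-in for watcher.common.service.Service."""

    def __init__(self, manager_cls):
        self.manager = manager_cls()
        # The service, not the manager, instantiates and wires endpoints
        # from the declarative class attributes.
        self.conductor_endpoints = [
            ep(self) for ep in self.manager.conductor_endpoints]
        self.status_endpoints = [
            ep(self) for ep in self.manager.status_endpoints]


class DemoManager(object):
    """Mirrors the declarative shape of DecisionEngineAPIManager."""
    API_VERSION = '1.0'
    conductor_endpoints = [FakeEndpoint]
    status_endpoints = []

    def __init__(self):
        self.conductor_topic = 'watcher.decision.control'
        self.api_version = self.API_VERSION


svc = Service(DemoManager)
```

Keeping managers free of messaging plumbing lets the same `Service` wrapper host the applier, the decision engine, and their APIs.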


@@ -16,14 +16,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from oslo_log import log
from watcher.applier.actions import base as baction
from watcher.common import exception
from watcher.decision_engine.solution import base
LOG = log.getLogger(__name__)
class DefaultSolution(base.BaseSolution):
def __init__(self):


@@ -26,29 +26,24 @@ LOG = log.getLogger(__name__)
class DefaultStrategyContext(base.BaseStrategyContext):
def __init__(self):
super(DefaultStrategyContext, self).__init__()
LOG.debug("Initializing Strategy Context")
self._strategy_selector = default.DefaultStrategySelector()
self._collector_manager = manager.CollectorManager()
@property
def collector(self):
return self._collector_manager
@property
def strategy_selector(self):
return self._strategy_selector
def execute_strategy(self, audit_uuid, request_context):
audit = objects.Audit.get_by_uuid(request_context, audit_uuid)
# Retrieve the Audit Template
audit_template = objects.\
AuditTemplate.get_by_id(request_context, audit.audit_template_id)
audit_template = objects.AuditTemplate.get_by_id(
request_context, audit.audit_template_id)
osc = clients.OpenStackClients()
# todo(jed) retrieve in audit_template parameters (threshold,...)
# todo(jed) create ActionPlan
collector_manager = self.collector.get_cluster_model_collector(osc=osc)
@@ -56,8 +51,13 @@ class DefaultStrategyContext(base.BaseStrategyContext):
# todo(jed) remove call to get_latest_cluster_data_model
cluster_data_model = collector_manager.get_latest_cluster_data_model()
selected_strategy = self.strategy_selector.define_from_goal(
audit_template.goal, osc=osc)
strategy_selector = default.DefaultStrategySelector(
goal_name=objects.Goal.get_by_id(
request_context, audit_template.goal_id).name,
strategy_name=None,
osc=osc)
selected_strategy = strategy_selector.select()
# todo(jed) add parameters and remove cluster_data_model
return selected_strategy.execute(cluster_data_model)


@@ -19,12 +19,9 @@
from __future__ import unicode_literals
from oslo_log import log
from watcher.common.loader import default
LOG = log.getLogger(__name__)
class DefaultStrategyLoader(default.DefaultLoader):
def __init__(self):


@@ -22,6 +22,7 @@ import six
@six.add_metaclass(abc.ABCMeta)
class BaseSelector(object):
@abc.abstractmethod
def define_from_goal(self, goal_name):
def select(self):
raise NotImplementedError()


@@ -25,38 +25,51 @@ from watcher.decision_engine.strategy.selection import base
LOG = log.getLogger(__name__)
CONF = cfg.CONF
default_goals = {'DUMMY': 'dummy'}
WATCHER_GOALS_OPTS = [
cfg.DictOpt(
'goals',
default=default_goals,
required=True,
help='Goals used for the optimization. '
'Maps each goal to an associated strategy (for example: '
'BASIC_CONSOLIDATION:basic, MY_GOAL:my_strategy_1)'),
]
goals_opt_group = cfg.OptGroup(name='watcher_goals',
title='Goals available for the optimization')
CONF.register_group(goals_opt_group)
CONF.register_opts(WATCHER_GOALS_OPTS, goals_opt_group)
class DefaultStrategySelector(base.BaseSelector):
def __init__(self):
def __init__(self, goal_name, strategy_name=None, osc=None):
"""Default strategy selector
:param goal_name: Name of the goal
:param strategy_name: Name of the strategy
:param osc: an OpenStackClients instance
"""
super(DefaultStrategySelector, self).__init__()
self.goal_name = goal_name
self.strategy_name = strategy_name
self.osc = osc
self.strategy_loader = default.DefaultStrategyLoader()
def define_from_goal(self, goal_name, osc=None):
""":param osc: an OpenStackClients instance"""
def select(self):
"""Selects a strategy
:raises: :py:class:`~.LoadingError` if it failed to load a strategy
:returns: A :py:class:`~.BaseStrategy` instance
"""
strategy_to_load = None
try:
strategy_to_load = CONF.watcher_goals.goals[goal_name]
return self.strategy_loader.load(strategy_to_load, osc=osc)
except KeyError as exc:
if self.strategy_name:
strategy_to_load = self.strategy_name
else:
available_strategies = self.strategy_loader.list_available()
available_strategies_for_goal = list(
key for key, strat in available_strategies.items()
if strat.get_goal_name() == self.goal_name)
if not available_strategies_for_goal:
raise exception.NoAvailableStrategyForGoal(
goal=self.goal_name)
# TODO(v-francoise): We should do some more work here to select
# a strategy out of a given goal instead of just choosing the
# 1st one
strategy_to_load = available_strategies_for_goal[0]
return self.strategy_loader.load(strategy_to_load, osc=self.osc)
except exception.NoAvailableStrategyForGoal:
raise
except Exception as exc:
LOG.exception(exc)
raise exception.WatcherException(
_("Incorrect mapping: could not find "
"associated strategy for '%s'") % goal_name
)
raise exception.LoadingError(
_("Could not load any strategy for goal %(goal)s"),
goal=self.goal_name)
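The new `select()` drops the `[watcher_goals]` mapping: it loads the explicitly requested strategy if one was named, otherwise it falls back to the first available strategy whose `get_goal_name()` matches the goal. A pure-Python sketch of that decision (hypothetical registry mapping in place of the stevedore-backed loader; sorted here for determinism):

```python
class NoAvailableStrategyForGoal(Exception):
    pass


def select_strategy(available, goal_name, strategy_name=None):
    """available: mapping of strategy name -> goal name it declares."""
    if strategy_name:
        return strategy_name
    for_goal = [name for name, goal in sorted(available.items())
                if goal == goal_name]
    if not for_goal:
        raise NoAvailableStrategyForGoal(goal_name)
    # TODO: smarter selection than "first match" (mirrors the inline TODO).
    return for_goal[0]


available = {'basic': 'SERVER_CONSOLIDATION',
             'dummy': 'DUMMY',
             'vm_workload_consolidation': 'SERVER_CONSOLIDATION'}
```

Usage: `select_strategy(available, 'DUMMY')` resolves to `'dummy'`, while an unknown goal raises `NoAvailableStrategyForGoal` just as the diff's `exception.NoAvailableStrategyForGoal` does.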


@@ -18,10 +18,16 @@
from watcher.decision_engine.strategy.strategies import basic_consolidation
from watcher.decision_engine.strategy.strategies import dummy_strategy
from watcher.decision_engine.strategy.strategies import outlet_temp_control
from watcher.decision_engine.strategy.strategies import \
vm_workload_consolidation
from watcher.decision_engine.strategy.strategies import workload_stabilization
BasicConsolidation = basic_consolidation.BasicConsolidation
OutletTempControl = outlet_temp_control.OutletTempControl
DummyStrategy = dummy_strategy.DummyStrategy
VMWorkloadConsolidation = vm_workload_consolidation.VMWorkloadConsolidation
WorkloadStabilization = workload_stabilization.WorkloadStabilization
__all__ = (BasicConsolidation, OutletTempControl, DummyStrategy)
__all__ = ("BasicConsolidation", "OutletTempControl",
"DummyStrategy", "VMWorkloadConsolidation",
"WorkloadStabilization")


@@ -30,32 +30,35 @@ to find an optimal :ref:`Solution <solution_definition>`.
When a new :ref:`Goal <goal_definition>` is added to the Watcher configuration,
at least one default associated :ref:`Strategy <strategy_definition>` should be
provided as well.
:ref:`Some default implementations are provided <watcher_strategies>`, but it
is possible to :ref:`develop new implementations <implement_strategy_plugin>`
which are dynamically loaded by Watcher at launch time.
"""
import abc
from oslo_log import log
import six
from watcher._i18n import _
from watcher.common import clients
from watcher.common.loader import loadable
from watcher.decision_engine.solution import default
from watcher.decision_engine.strategy.common import level
LOG = log.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class BaseStrategy(object):
class BaseStrategy(loadable.Loadable):
"""A base class for all the strategies
A Strategy is an algorithm implementation which is able to find a
Solution for a given Goal.
"""
def __init__(self, name=None, description=None, osc=None):
def __init__(self, config, osc=None):
""":param osc: an OpenStackClients instance"""
self._name = name
self.description = description
super(BaseStrategy, self).__init__(config)
self._name = self.get_name()
self._display_name = self.get_display_name()
# default strategy level
self._strategy_level = level.StrategyLevel.conservative
self._cluster_state_collector = None
@@ -63,6 +66,55 @@ class BaseStrategy(object):
self._solution = default.DefaultSolution()
self._osc = osc
@classmethod
@abc.abstractmethod
def get_name(cls):
"""The name of the strategy"""
raise NotImplementedError()
@classmethod
@abc.abstractmethod
def get_display_name(cls):
"""The goal display name for the strategy"""
raise NotImplementedError()
@classmethod
@abc.abstractmethod
def get_translatable_display_name(cls):
"""The translatable msgid of the strategy"""
# Note(v-francoise): Defined here to be used as the translation key for
# other services
raise NotImplementedError()
@classmethod
@abc.abstractmethod
def get_goal_name(cls):
"""The goal name for the strategy"""
raise NotImplementedError()
@classmethod
@abc.abstractmethod
def get_goal_display_name(cls):
"""The translated display name related to the goal of the strategy"""
raise NotImplementedError()
@classmethod
@abc.abstractmethod
def get_translatable_goal_display_name(cls):
"""The translatable msgid related to the goal of the strategy"""
# Note(v-francoise): Defined here to be used as the translation key for
# other services
raise NotImplementedError()
@classmethod
def get_config_opts(cls):
"""Defines the configuration options to be associated to this loadable
:return: A list of configuration options relative to this Loadable
:rtype: list of :class:`oslo_config.cfg.Opt` instances
"""
return []
@abc.abstractmethod
def execute(self, original_model):
"""Execute a strategy
@@ -88,12 +140,12 @@ class BaseStrategy(object):
self._solution = s
@property
def name(self):
def id(self):
return self._name
@name.setter
def name(self, n):
self._name = n
@property
def display_name(self):
return self._display_name
@property
def strategy_level(self):
@@ -110,3 +162,90 @@ class BaseStrategy(object):
@state_collector.setter
def state_collector(self, s):
self._cluster_state_collector = s
@six.add_metaclass(abc.ABCMeta)
class DummyBaseStrategy(BaseStrategy):
@classmethod
def get_goal_name(cls):
return "DUMMY"
@classmethod
def get_goal_display_name(cls):
return _("Dummy goal")
@classmethod
def get_translatable_goal_display_name(cls):
return "Dummy goal"
@six.add_metaclass(abc.ABCMeta)
class UnclassifiedStrategy(BaseStrategy):
"""This base class is used to ease the development of new strategies
The goal defined within this strategy can be used to simplify the
documentation explaining how to implement a new strategy plugin by
    omitting the need for the strategy developer to define a goal straight
away.
"""
@classmethod
def get_goal_name(cls):
return "UNCLASSIFIED"
@classmethod
def get_goal_display_name(cls):
return _("Unclassified")
@classmethod
def get_translatable_goal_display_name(cls):
return "Unclassified"
@six.add_metaclass(abc.ABCMeta)
class ServerConsolidationBaseStrategy(BaseStrategy):
@classmethod
def get_goal_name(cls):
return "SERVER_CONSOLIDATION"
@classmethod
def get_goal_display_name(cls):
return _("Server consolidation")
@classmethod
def get_translatable_goal_display_name(cls):
return "Server consolidation"
@six.add_metaclass(abc.ABCMeta)
class ThermalOptimizationBaseStrategy(BaseStrategy):
@classmethod
def get_goal_name(cls):
return "THERMAL_OPTIMIZATION"
@classmethod
def get_goal_display_name(cls):
return _("Thermal optimization")
@classmethod
def get_translatable_goal_display_name(cls):
return "Thermal optimization"
@six.add_metaclass(abc.ABCMeta)
class WorkloadStabilizationBaseStrategy(BaseStrategy):
@classmethod
def get_goal_name(cls):
return "WORKLOAD_BALANCING"
@classmethod
def get_goal_display_name(cls):
return _("Workload balancing")
@classmethod
def get_translatable_goal_display_name(cls):
return "Workload balancing"


@@ -29,7 +29,7 @@ order to both minimize energy consumption and comply to the various SLAs.
from oslo_log import log
from watcher._i18n import _LE, _LI, _LW
from watcher._i18n import _, _LE, _LI, _LW
from watcher.common import exception
from watcher.decision_engine.model import hypervisor_state as hyper_state
from watcher.decision_engine.model import resource
@@ -41,7 +41,7 @@ from watcher.metrics_engine.cluster_history import ceilometer as \
LOG = log.getLogger(__name__)
class BasicConsolidation(base.BaseStrategy):
class BasicConsolidation(base.ServerConsolidationBaseStrategy):
"""Basic offline consolidation using live migration
*Description*
@@ -65,26 +65,20 @@ class BasicConsolidation(base.BaseStrategy):
<None>
"""
DEFAULT_NAME = "basic"
DEFAULT_DESCRIPTION = "Basic offline consolidation"
HOST_CPU_USAGE_METRIC_NAME = 'compute.node.cpu.percent'
INSTANCE_CPU_USAGE_METRIC_NAME = 'cpu_util'
MIGRATION = "migrate"
CHANGE_NOVA_SERVICE_STATE = "change_nova_service_state"
def __init__(self, name=DEFAULT_NAME, description=DEFAULT_DESCRIPTION,
osc=None):
def __init__(self, config=None, osc=None):
"""Basic offline Consolidation using live migration
:param name: The name of the strategy (Default: "basic")
:param description: The description of the strategy
(Default: "Basic offline consolidation")
:param osc: An :py:class:`~watcher.common.clients.OpenStackClients`
instance
:param config: A mapping containing the configuration of this strategy
:type config: dict
:param osc: :py:class:`~.OpenStackClients` instance
"""
super(BasicConsolidation, self).__init__(name, description, osc)
super(BasicConsolidation, self).__init__(config, osc)
# set default value for the number of released nodes
self.number_of_released_nodes = 0
@@ -114,6 +108,18 @@ class BasicConsolidation(base.BaseStrategy):
# TODO(jed) bound migration attempts (80 %)
self.bound_migration = 0.80
@classmethod
def get_name(cls):
return "basic"
@classmethod
def get_display_name(cls):
return _("Basic offline consolidation")
@classmethod
def get_translatable_display_name(cls):
return "Basic offline consolidation"
@property
def ceilometer(self):
if self._ceilometer is None:
@@ -277,25 +283,25 @@ class BasicConsolidation(base.BaseStrategy):
:return:
"""
resource_id = "%s_%s" % (hypervisor.uuid, hypervisor.hostname)
vm_avg_cpu_util = self.ceilometer. \
host_avg_cpu_util = self.ceilometer. \
statistic_aggregation(resource_id=resource_id,
meter_name=self.HOST_CPU_USAGE_METRIC_NAME,
period="7200",
aggregate='avg'
)
if vm_avg_cpu_util is None:
if host_avg_cpu_util is None:
LOG.error(
_LE("No values returned by %(resource_id)s "
"for %(metric_name)s"),
resource_id=resource_id,
metric_name=self.HOST_CPU_USAGE_METRIC_NAME,
)
vm_avg_cpu_util = 100
host_avg_cpu_util = 100
cpu_capacity = model.get_resource_from_id(
resource.ResourceType.cpu_cores).get_capacity(hypervisor)
total_cores_used = cpu_capacity * (vm_avg_cpu_util / 100)
total_cores_used = cpu_capacity * (host_avg_cpu_util / 100)
return self.calculate_weight(model, hypervisor, total_cores_used,
0,
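This hunk renames `vm_avg_cpu_util` to `host_avg_cpu_util`, since the `compute.node.cpu.percent` meter is host-level, and keeps the worst-case fallback of 100% when Ceilometer returns no datapoints. The core arithmetic, as a stand-alone sketch:

```python
def total_cores_used(cpu_capacity, host_avg_cpu_util):
    """Estimate cores in use on a hypervisor.

    cpu_capacity: total number of cores on the hypervisor.
    host_avg_cpu_util: average CPU utilisation in percent, or None
    when the metric backend returned no values.
    """
    if host_avg_cpu_util is None:
        # No datapoints: assume the worst case, a fully loaded host,
        # so the strategy never consolidates onto an unknown host.
        host_avg_cpu_util = 100
    return cpu_capacity * (host_avg_cpu_util / 100.0)
```

Biasing the missing-data case to 100% trades consolidation opportunities for safety, which fits an offline consolidation strategy.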


@@ -18,12 +18,13 @@
#
from oslo_log import log
from watcher._i18n import _
from watcher.decision_engine.strategy.strategies import base
LOG = log.getLogger(__name__)
class DummyStrategy(base.BaseStrategy):
class DummyStrategy(base.DummyBaseStrategy):
"""Dummy strategy used for integration testing via Tempest
*Description*
@@ -44,15 +45,17 @@ class DummyStrategy(base.BaseStrategy):
<None>
"""
DEFAULT_NAME = "dummy"
DEFAULT_DESCRIPTION = "Dummy Strategy"
NOP = "nop"
SLEEP = "sleep"
def __init__(self, name=DEFAULT_NAME, description=DEFAULT_DESCRIPTION,
osc=None):
super(DummyStrategy, self).__init__(name, description, osc)
def __init__(self, config=None, osc=None):
"""Dummy Strategy implemented for demo and testing purposes
:param config: A mapping containing the configuration of this strategy
:type config: dict
:param osc: :py:class:`~.OpenStackClients` instance
"""
super(DummyStrategy, self).__init__(config, osc)
def execute(self, original_model):
LOG.debug("Executing Dummy strategy")
@@ -67,3 +70,15 @@ class DummyStrategy(base.BaseStrategy):
self.solution.add_action(action_type=self.SLEEP,
input_parameters={'duration': 5.0})
return self.solution
@classmethod
def get_name(cls):
return "dummy"
@classmethod
def get_display_name(cls):
return _("Dummy strategy")
@classmethod
def get_translatable_display_name(cls):
return "Dummy strategy"


@@ -30,7 +30,7 @@ telemetries to measure thermal/workload status of server.
from oslo_log import log
from watcher._i18n import _LE
from watcher._i18n import _, _LE
from watcher.common import exception as wexc
from watcher.decision_engine.model import resource
from watcher.decision_engine.model import vm_state
@@ -41,7 +41,7 @@ from watcher.metrics_engine.cluster_history import ceilometer as ceil
LOG = log.getLogger(__name__)
class OutletTempControl(base.BaseStrategy):
class OutletTempControl(base.ThermalOptimizationBaseStrategy):
"""[PoC] Outlet temperature control using live migration
*Description*
@@ -71,8 +71,6 @@ class OutletTempControl(base.BaseStrategy):
https://github.com/openstack/watcher-specs/blob/master/specs/mitaka/approved/outlet-temperature-based-strategy.rst
""" # noqa
DEFAULT_NAME = "outlet_temp_control"
DEFAULT_DESCRIPTION = "outlet temperature based migration strategy"
# The meter to report outlet temperature in ceilometer
METER_NAME = "hardware.ipmi.node.outlet_temperature"
# Unit: degree C
@@ -80,15 +78,15 @@ class OutletTempControl(base.BaseStrategy):
MIGRATION = "migrate"
def __init__(self, name=DEFAULT_NAME, description=DEFAULT_DESCRIPTION,
osc=None):
def __init__(self, config=None, osc=None):
"""Outlet temperature control using live migration
:param name: the name of the strategy
:param description: a description of the strategy
:param osc: an OpenStackClients object
:param config: A mapping containing the configuration of this strategy
:type config: dict
:param osc: an OpenStackClients object, defaults to None
:type osc: :py:class:`~.OpenStackClients` instance, optional
"""
super(OutletTempControl, self).__init__(name, description, osc)
super(OutletTempControl, self).__init__(config, osc)
# the migration plan will be triggered when the outlet temperature
# reaches threshold
# TODO(zhenzanz): Threshold should be configurable for each audit
@@ -96,6 +94,18 @@ class OutletTempControl(base.BaseStrategy):
self._meter = self.METER_NAME
self._ceilometer = None
@classmethod
def get_name(cls):
return "outlet_temperature"
@classmethod
def get_display_name(cls):
return _("Outlet temperature based strategy")
@classmethod
def get_translatable_display_name(cls):
return "Outlet temperature based strategy"
@property
def ceilometer(self):
if self._ceilometer is None:
@@ -202,8 +212,8 @@ class OutletTempControl(base.BaseStrategy):
cluster_data_model, host, cpu_capacity, memory_capacity,
disk_capacity)
cores_available = cpu_capacity.get_capacity(host) - cores_used
-        disk_available = disk_capacity.get_capacity(host) - mem_used
-        mem_available = memory_capacity.get_capacity(host) - disk_used
+        disk_available = disk_capacity.get_capacity(host) - disk_used
+        mem_available = memory_capacity.get_capacity(host) - mem_used
if cores_available >= required_cores \
and disk_available >= required_disk \
and mem_available >= required_memory:
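The hunk above fixes a swap bug: disk availability was computed by subtracting memory usage and vice versa. The corrected like-for-like subtraction can be sketched with plain dicts (all names hypothetical, not Watcher APIs):

```python
def can_host_fit(capacity, used, required):
    """Check whether a host can accommodate a VM.

    capacity/used/required are plain dicts keyed by 'cpu', 'disk', 'mem'.
    Each available figure subtracts the *matching* used figure; the bug
    fixed above subtracted mem_used from disk capacity and disk_used
    from memory capacity.
    """
    available = {k: capacity[k] - used[k] for k in capacity}
    return all(available[k] >= required[k] for k in required)


host_capacity = {'cpu': 16, 'disk': 200, 'mem': 64}
host_used = {'cpu': 10, 'disk': 150, 'mem': 60}
vm_required = {'cpu': 2, 'disk': 40, 'mem': 2}
print(can_host_fit(host_capacity, host_used, vm_required))  # True
```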

View File

@@ -0,0 +1,548 @@
# -*- encoding: utf-8 -*-
#
# Authors: Vojtech CIMA <cima@zhaw.ch>
# Bruno GRAZIOLI <gaea@zhaw.ch>
# Sean MURPHY <murp@zhaw.ch>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from copy import deepcopy
from oslo_log import log
import six
from watcher._i18n import _, _LE, _LI
from watcher.common import exception
from watcher.decision_engine.model import hypervisor_state as hyper_state
from watcher.decision_engine.model import resource
from watcher.decision_engine.model import vm_state
from watcher.decision_engine.strategy.strategies import base
from watcher.metrics_engine.cluster_history import ceilometer \
as ceilometer_cluster_history
LOG = log.getLogger(__name__)
class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
"""VM Workload Consolidation Strategy.
*Description*
A load consolidation strategy based on heuristic first-fit
algorithm which focuses on measured CPU utilization and tries to
minimize hosts which have too much or too little load respecting
resource capacity constraints.
This strategy produces a solution resulting in more efficient
utilization of cluster resources using the following four phases:
* Offload phase - handling over-utilized resources
* Consolidation phase - handling under-utilized resources
* Solution optimization - reducing number of migrations
* Deactivation of unused hypervisors
A capacity coefficient (cc) can be used to adjust optimization
thresholds. Different resources may require different coefficient
values, and setting different coefficient values in the two
phases may lead to a more efficient consolidation in the end.
If the cc equals 1, the full resource capacity may be used; cc
values lower than 1 lead to resource under-utilization and
values higher than 1 lead to resource overbooking.
e.g. If the targeted utilization is 80 percent of hypervisor capacity,
the coefficient in the consolidation phase will be 0.8, but
may be any lower value in the offloading phase. The lower it gets,
the more unloaded (distributed) the cluster will appear to the
following consolidation phase.
As this strategy leverages VM live migration to move the load
from one hypervisor to another, this feature needs to be set up
correctly on all hypervisors within the cluster.
This strategy assumes it is possible to live migrate any VM from
an active hypervisor to any other active hypervisor.
*Requirements*
* You must have at least 2 physical compute nodes to run this strategy.
*Limitations*
<None>
*Spec URL*
https://github.com/openstack/watcher-specs/blob/master/specs/mitaka/implemented/zhaw-load-consolidation.rst
""" # noqa
def __init__(self, config=None, osc=None):
super(VMWorkloadConsolidation, self).__init__(config, osc)
self._ceilometer = None
self.number_of_migrations = 0
self.number_of_released_hypervisors = 0
self.ceilometer_vm_data_cache = dict()
@classmethod
def get_name(cls):
return "vm_workload_consolidation"
@classmethod
def get_display_name(cls):
return _("VM Workload Consolidation Strategy")
@classmethod
def get_translatable_display_name(cls):
return "VM Workload Consolidation Strategy"
@property
def ceilometer(self):
if self._ceilometer is None:
self._ceilometer = (ceilometer_cluster_history.
CeilometerClusterHistory(osc=self.osc))
return self._ceilometer
@ceilometer.setter
def ceilometer(self, ceilometer):
self._ceilometer = ceilometer
def get_state_str(self, state):
"""Get resource state in string format.
:param state: resource state of unknown type
"""
if isinstance(state, six.string_types):
return state
elif (type(state) == hyper_state.HypervisorState or
type(state) == vm_state.VMState):
return state.value
else:
LOG.error(_LE('Unexpected resource state type, '
'state=%(state)s, state_type=%(st)s.'),
state=state,
st=type(state))
raise exception.WatcherException
def add_action_activate_hypervisor(self, hypervisor):
"""Add an action for hypervisor activation into the solution.
:param hypervisor: hypervisor object
:return: None
"""
params = {'state': hyper_state.HypervisorState.ONLINE.value}
self.solution.add_action(
action_type='change_nova_service_state',
resource_id=hypervisor.uuid,
input_parameters=params)
self.number_of_released_hypervisors -= 1
def add_action_deactivate_hypervisor(self, hypervisor):
"""Add an action for hypervisor deactivation into the solution.
:param hypervisor: hypervisor object
:return: None
"""
params = {'state': hyper_state.HypervisorState.OFFLINE.value}
self.solution.add_action(
action_type='change_nova_service_state',
resource_id=hypervisor.uuid,
input_parameters=params)
self.number_of_released_hypervisors += 1
def add_migration(self, vm_uuid, src_hypervisor,
dst_hypervisor, model):
"""Add an action for VM migration into the solution.
:param vm_uuid: vm uuid
:param src_hypervisor: hypervisor object
:param dst_hypervisor: hypervisor object
:param model: model_root object
:return: None
"""
vm = model.get_vm_from_id(vm_uuid)
vm_state_str = self.get_state_str(vm.state)
if vm_state_str != vm_state.VMState.ACTIVE.value:
'''
Watcher currently only supports live VM migration and block live
VM migration, both of which require the migrated VM to be active.
When supported, cold migration may be used as a fallback
mechanism to move non-active VMs.
'''
LOG.error(_LE('Cannot live migrate: vm_uuid=%(vm_uuid)s, '
'state=%(vm_state)s.'),
vm_uuid=vm_uuid,
vm_state=vm_state_str)
raise exception.WatcherException
migration_type = 'live'
dst_hyper_state_str = self.get_state_str(dst_hypervisor.state)
if dst_hyper_state_str == hyper_state.HypervisorState.OFFLINE.value:
self.add_action_activate_hypervisor(dst_hypervisor)
model.get_mapping().unmap(src_hypervisor, vm)
model.get_mapping().map(dst_hypervisor, vm)
params = {'migration_type': migration_type,
'src_hypervisor': src_hypervisor.uuid,
'dst_hypervisor': dst_hypervisor.uuid}
self.solution.add_action(action_type='migrate',
resource_id=vm.uuid,
input_parameters=params)
self.number_of_migrations += 1
def deactivate_unused_hypervisors(self, model):
"""Generate actions for deactivation of unused hypervisors.
:param model: model_root object
:return: None
"""
for hypervisor in model.get_all_hypervisors().values():
if len(model.get_mapping().get_node_vms(hypervisor)) == 0:
self.add_action_deactivate_hypervisor(hypervisor)
def get_prediction_model(self, model):
"""Return a deepcopy of a model representing current cluster state.
:param model: model_root object
:return: model_root object
"""
return deepcopy(model)
def get_vm_utilization(self, vm_uuid, model, period=3600, aggr='avg'):
"""Collect cpu, ram and disk utilization statistics of a VM.
:param vm_uuid: vm object
:param model: model_root object
:param period: seconds
:param aggr: string
:return: dict(cpu(number of vcpus used), ram(MB used), disk(B used))
"""
if vm_uuid in self.ceilometer_vm_data_cache.keys():
return self.ceilometer_vm_data_cache.get(vm_uuid)
cpu_util_metric = 'cpu_util'
ram_util_metric = 'memory.usage'
ram_alloc_metric = 'memory'
disk_alloc_metric = 'disk.root.size'
vm_cpu_util = self.ceilometer.statistic_aggregation(
resource_id=vm_uuid, meter_name=cpu_util_metric,
period=period, aggregate=aggr)
vm_cpu_cores = model.get_resource_from_id(
resource.ResourceType.cpu_cores).get_capacity(
model.get_vm_from_id(vm_uuid))
if vm_cpu_util:
total_cpu_utilization = vm_cpu_cores * (vm_cpu_util / 100.0)
else:
total_cpu_utilization = vm_cpu_cores
vm_ram_util = self.ceilometer.statistic_aggregation(
resource_id=vm_uuid, meter_name=ram_util_metric,
period=period, aggregate=aggr)
if not vm_ram_util:
vm_ram_util = self.ceilometer.statistic_aggregation(
resource_id=vm_uuid, meter_name=ram_alloc_metric,
period=period, aggregate=aggr)
vm_disk_util = self.ceilometer.statistic_aggregation(
resource_id=vm_uuid, meter_name=disk_alloc_metric,
period=period, aggregate=aggr)
if not vm_ram_util or not vm_disk_util:
LOG.error(
_LE('No values returned by %(resource_id)s '
'for memory.usage or disk.root.size'),
resource_id=vm_uuid
)
raise exception.NoDataFound
self.ceilometer_vm_data_cache[vm_uuid] = dict(
cpu=total_cpu_utilization, ram=vm_ram_util, disk=vm_disk_util)
return self.ceilometer_vm_data_cache.get(vm_uuid)
def get_hypervisor_utilization(self, hypervisor, model, period=3600,
aggr='avg'):
"""Collect cpu, ram and disk utilization statistics of a hypervisor.
:param hypervisor: hypervisor object
:param model: model_root object
:param period: seconds
:param aggr: string
:return: dict(cpu(number of cores used), ram(MB used), disk(B used))
"""
hypervisor_vms = model.get_mapping().get_node_vms_from_id(
hypervisor.uuid)
hypervisor_ram_util = 0
hypervisor_disk_util = 0
hypervisor_cpu_util = 0
for vm_uuid in hypervisor_vms:
vm_util = self.get_vm_utilization(vm_uuid, model, period, aggr)
hypervisor_cpu_util += vm_util['cpu']
hypervisor_ram_util += vm_util['ram']
hypervisor_disk_util += vm_util['disk']
return dict(cpu=hypervisor_cpu_util, ram=hypervisor_ram_util,
disk=hypervisor_disk_util)
def get_hypervisor_capacity(self, hypervisor, model):
"""Collect cpu, ram and disk capacity of a hypervisor.
:param hypervisor: hypervisor object
:param model: model_root object
:return: dict(cpu(cores), ram(MB), disk(B))
"""
hypervisor_cpu_capacity = model.get_resource_from_id(
resource.ResourceType.cpu_cores).get_capacity(hypervisor)
hypervisor_disk_capacity = model.get_resource_from_id(
resource.ResourceType.disk_capacity).get_capacity(hypervisor)
hypervisor_ram_capacity = model.get_resource_from_id(
resource.ResourceType.memory).get_capacity(hypervisor)
return dict(cpu=hypervisor_cpu_capacity, ram=hypervisor_ram_capacity,
disk=hypervisor_disk_capacity)
def get_relative_hypervisor_utilization(self, hypervisor, model):
"""Return relative hypervisor utilization (rhu).
:param hypervisor: hypervisor object
:param model: model_root object
:return: {'cpu': <0,1>, 'ram': <0,1>, 'disk': <0,1>}
"""
rhu = {}
util = self.get_hypervisor_utilization(hypervisor, model)
cap = self.get_hypervisor_capacity(hypervisor, model)
for k in util.keys():
rhu[k] = float(util[k]) / float(cap[k])
return rhu
def get_relative_cluster_utilization(self, model):
"""Calculate relative cluster utilization (rcu).
RCU is an average of relative utilizations (rhu) of active hypervisors.
:param model: model_root object
:return: {'cpu': <0,1>, 'ram': <0,1>, 'disk': <0,1>}
"""
hypervisors = model.get_all_hypervisors().values()
rcu = {}
counters = {}
for hypervisor in hypervisors:
hyper_state_str = self.get_state_str(hypervisor.state)
if hyper_state_str == hyper_state.HypervisorState.ONLINE.value:
rhu = self.get_relative_hypervisor_utilization(
hypervisor, model)
for k in rhu.keys():
if k not in rcu:
rcu[k] = 0
if k not in counters:
counters[k] = 0
rcu[k] += rhu[k]
counters[k] += 1
for k in rcu.keys():
rcu[k] /= counters[k]
return rcu
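The relative-utilization computation above (per-host rhu, then the cluster-wide rcu as their mean) can be sketched with plain dicts, independent of the Watcher model objects (all names hypothetical):

```python
def relative_utilization(util, cap):
    # rhu: per-resource utilization as a fraction of capacity
    return {k: float(util[k]) / float(cap[k]) for k in util}


def relative_cluster_utilization(hosts):
    # rcu: mean of the per-host rhu values; hosts is a list of
    # (utilization, capacity) dict pairs for the active hypervisors
    rcu = {}
    for util, cap in hosts:
        for k, v in relative_utilization(util, cap).items():
            rcu[k] = rcu.get(k, 0.0) + v
    return {k: v / len(hosts) for k, v in rcu.items()}


hosts = [({'cpu': 8, 'ram': 32}, {'cpu': 16, 'ram': 64}),
         ({'cpu': 4, 'ram': 48}, {'cpu': 16, 'ram': 64})]
print(relative_cluster_utilization(hosts))  # {'cpu': 0.375, 'ram': 0.625}
```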
def is_overloaded(self, hypervisor, model, cc):
"""Indicate whether a hypervisor is overloaded.
This considers provided resource capacity coefficients (cc).
:param hypervisor: hypervisor object
:param model: model_root object
:param cc: dictionary containing resource capacity coefficients
:return: [True, False]
"""
hypervisor_capacity = self.get_hypervisor_capacity(hypervisor, model)
hypervisor_utilization = self.get_hypervisor_utilization(
hypervisor, model)
metrics = ['cpu']
for m in metrics:
if hypervisor_utilization[m] > hypervisor_capacity[m] * cc[m]:
return True
return False
def vm_fits(self, vm_uuid, hypervisor, model, cc):
"""Indicate whether is a hypervisor able to accomodate a VM.
This considers provided resource capacity coefficients (cc).
:param vm_uuid: string
:param hypervisor: hypervisor object
:param model: model_root object
:param cc: dictionary containing resource capacity coefficients
:return: [True, False]
"""
hypervisor_capacity = self.get_hypervisor_capacity(hypervisor, model)
hypervisor_utilization = self.get_hypervisor_utilization(
hypervisor, model)
vm_utilization = self.get_vm_utilization(vm_uuid, model)
metrics = ['cpu', 'ram', 'disk']
for m in metrics:
if (vm_utilization[m] + hypervisor_utilization[m] >
hypervisor_capacity[m] * cc[m]):
return False
return True
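The fit check above, including the capacity coefficient (cc) scaling described in the class docstring, reduces to a per-metric comparison. A minimal sketch with plain dicts (names hypothetical):

```python
def vm_fits(vm_util, host_util, host_cap, cc):
    # A VM fits when, for every metric, adding its utilization keeps the
    # host at or under capacity scaled by the capacity coefficient (cc):
    # cc < 1 reserves headroom, cc > 1 allows overbooking.
    return all(vm_util[m] + host_util[m] <= host_cap[m] * cc[m]
               for m in vm_util)


cap = {'cpu': 16.0, 'ram': 64.0, 'disk': 500.0}
util = {'cpu': 10.0, 'ram': 40.0, 'disk': 300.0}
vm = {'cpu': 2.0, 'ram': 8.0, 'disk': 50.0}
# cc of 0.7 on cpu caps the host at 11.2 cores, so 12 used cores fail:
print(vm_fits(vm, util, cap, {'cpu': 0.7, 'ram': 1.0, 'disk': 1.0}))  # False
print(vm_fits(vm, util, cap, {'cpu': 1.0, 'ram': 1.0, 'disk': 1.0}))  # True
```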
def optimize_solution(self, model):
"""Optimize solution.
This is done by eliminating unnecessary or circular set of migrations
which can be replaced by a more efficient solution.
e.g.:
* A->B, B->C => replace migrations A->B, B->C with
a single migration A->C as both solution result in
VM running on hypervisor C which can be achieved with
one migration instead of two.
* A->B, B->A => remove A->B and B->A as they do not result
in a new VM placement.
:param model: model_root object
"""
migrate_actions = (
a for a in self.solution.actions if a[
'action_type'] == 'migrate')
vm_to_be_migrated = (a['input_parameters']['resource_id']
for a in migrate_actions)
vm_uuids = list(set(vm_to_be_migrated))
for vm_uuid in vm_uuids:
actions = list(
a for a in self.solution.actions if a[
'input_parameters'][
'resource_id'] == vm_uuid)
if len(actions) > 1:
src = actions[0]['input_parameters']['src_hypervisor']
dst = actions[-1]['input_parameters']['dst_hypervisor']
for a in actions:
self.solution.actions.remove(a)
self.number_of_migrations -= 1
if src != dst:
self.add_migration(vm_uuid, src, dst, model)
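The chain-collapsing logic in optimize_solution (A->B, B->C becomes A->C; A->B, B->A cancels out) can be sketched with plain tuples instead of solution actions (names hypothetical):

```python
def optimize_migrations(migrations):
    """Collapse each VM's migration chain to at most one move.

    migrations: list of (vm, src, dst) in planned order. Only the first
    source and the last destination matter; if they coincide, the whole
    chain is dropped.
    """
    first_src, last_dst, order = {}, {}, []
    for vm, src, dst in migrations:
        if vm not in first_src:
            first_src[vm] = src
            order.append(vm)
        last_dst[vm] = dst
    return [(vm, first_src[vm], last_dst[vm])
            for vm in order if first_src[vm] != last_dst[vm]]


plan = [('vm1', 'A', 'B'), ('vm1', 'B', 'C'),   # chain -> single A->C move
        ('vm2', 'A', 'B'), ('vm2', 'B', 'A')]   # round trip -> dropped
print(optimize_migrations(plan))  # [('vm1', 'A', 'C')]
```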
def offload_phase(self, model, cc):
"""Perform offloading phase.
This considers provided resource capacity coefficients.
The offload phase performs first-fit based bin packing to offload
overloaded hypervisors, moving the least CPU-utilized VMs first, as
live-migrating these generally causes less trouble. This phase
results in a cluster with no overloaded hypervisors.
* This phase is able to activate turned-off hypervisors (if needed
and any are available) in case the resource capacity provided by
the active hypervisors cannot accommodate all the load.
As the offload phase is later followed by the consolidation phase,
hypervisor activation in this phase doesn't necessarily result
in more activated hypervisors in the final solution.
:param model: model_root object
:param cc: dictionary containing resource capacity coefficients
"""
sorted_hypervisors = sorted(
model.get_all_hypervisors().values(),
key=lambda x: self.get_hypervisor_utilization(x, model)['cpu'])
for hypervisor in reversed(sorted_hypervisors):
if self.is_overloaded(hypervisor, model, cc):
for vm in sorted(model.get_mapping().get_node_vms(hypervisor),
key=lambda x: self.get_vm_utilization(
x, model)['cpu']):
for dst_hypervisor in reversed(sorted_hypervisors):
if self.vm_fits(vm, dst_hypervisor, model, cc):
self.add_migration(vm, hypervisor,
dst_hypervisor, model)
break
if not self.is_overloaded(hypervisor, model, cc):
break
def consolidation_phase(self, model, cc):
"""Perform consolidation phase.
This considers provided resource capacity coefficients.
The consolidation phase performs first-fit based bin packing.
First, hypervisors with the lowest cpu utilization are consolidated
by moving their load to hypervisors with the highest cpu utilization
which can accommodate the load. In this phase the most cpu-utilized
VMs are prioritized, as their load is more difficult to accommodate
in the system than that of less cpu-utilized VMs, which can later be
used to fill smaller CPU capacity gaps.
:param model: model_root object
:param cc: dictionary containing resource capacity coefficients
"""
sorted_hypervisors = sorted(
model.get_all_hypervisors().values(),
key=lambda x: self.get_hypervisor_utilization(x, model)['cpu'])
asc = 0
for hypervisor in sorted_hypervisors:
vms = sorted(model.get_mapping().get_node_vms(hypervisor),
key=lambda x: self.get_vm_utilization(x,
model)['cpu'])
for vm in reversed(vms):
dsc = len(sorted_hypervisors) - 1
for dst_hypervisor in reversed(sorted_hypervisors):
if asc >= dsc:
break
if self.vm_fits(vm, dst_hypervisor, model, cc):
self.add_migration(vm, hypervisor,
dst_hypervisor, model)
break
dsc -= 1
asc += 1
def execute(self, original_model):
"""Execute strategy.
This strategy produces a solution resulting in more
efficient utilization of cluster resources using the following
four phases:
* Offload phase - handling over-utilized resources
* Consolidation phase - handling under-utilized resources
* Solution optimization - reducing number of migrations
* Deactivation of unused hypervisors
:param original_model: root_model object
"""
LOG.info(_LI('Executing Smart Strategy'))
model = self.get_prediction_model(original_model)
rcu = self.get_relative_cluster_utilization(model)
self.ceilometer_vm_data_cache = dict()
cc = {'cpu': 1.0, 'ram': 1.0, 'disk': 1.0}
# Offloading phase
self.offload_phase(model, cc)
# Consolidation phase
self.consolidation_phase(model, cc)
# Optimize solution
self.optimize_solution(model)
# Deactivate unused hypervisors
self.deactivate_unused_hypervisors(model)
rcu_after = self.get_relative_cluster_utilization(model)
info = {
'number_of_migrations': self.number_of_migrations,
'number_of_released_hypervisors':
self.number_of_released_hypervisors,
'relative_cluster_utilization_before': str(rcu),
'relative_cluster_utilization_after': str(rcu_after)
}
LOG.debug(info)
self.solution.model = model
self.solution.efficacy = rcu_after['cpu']
return self.solution

View File

@@ -0,0 +1,324 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Junjie-Huang <junjie.huang@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from oslo_log import log
from watcher._i18n import _LE, _LI, _LW
from watcher.common import exception as wexc
from watcher.decision_engine.model import resource
from watcher.decision_engine.model import vm_state
from watcher.decision_engine.strategy.strategies import base
from watcher.metrics_engine.cluster_history import ceilometer as ceil
LOG = log.getLogger(__name__)
class WorkloadBalance(base.BaseStrategy):
"""[PoC]Workload balance using live migration
*Description*
It is a migration strategy based on the VM workload of physical
servers. It generates solutions to move a workload whenever a server's
CPU utilization % is higher than the specified threshold.
The VM to be moved should bring the host close to the average
workload of all hypervisors.
*Requirements*
* Hardware: compute node should use the same physical CPUs
* Software: Ceilometer component ceilometer-agent-compute running
in each compute node, and Ceilometer API can report such telemetry
"cpu_util" successfully.
* You must have at least 2 physical compute nodes to run this strategy
*Limitations*
- This is a proof of concept that is not meant to be used in production
- We cannot forecast how many servers should be migrated. This is the
reason why we only plan a single virtual machine migration at a time.
So it's better to use this algorithm with `CONTINUOUS` audits.
- It assumes that live migrations are possible
"""
# The meter to report CPU utilization % of VM in ceilometer
METER_NAME = "cpu_util"
# Unit: %, value range is [0 , 100]
# TODO(Junjie): make it configurable
THRESHOLD = 25.0
# choose 300 seconds as the default duration of meter aggregation
# TODO(Junjie): make it configurable
PERIOD = 300
MIGRATION = "migrate"
def __init__(self, osc=None):
"""Using live migration
:param osc: an OpenStackClients object
"""
super(WorkloadBalance, self).__init__(osc)
# the migration plan will be triggered when the CPU utilization %
# reaches threshold
# TODO(Junjie): Threshold should be configurable for each audit
self.threshold = self.THRESHOLD
self._meter = self.METER_NAME
self._ceilometer = None
self._period = self.PERIOD
@property
def ceilometer(self):
if self._ceilometer is None:
self._ceilometer = ceil.CeilometerClusterHistory(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@classmethod
def get_name(cls):
return "workload_balance"
@classmethod
def get_display_name(cls):
return _("workload balance migration strategy")
@classmethod
def get_translatable_display_name(cls):
return "workload balance migration strategy"
@classmethod
def get_goal_name(cls):
return "WORKLOAD_OPTIMIZATION"
@classmethod
def get_goal_display_name(cls):
return _("Workload optimization")
@classmethod
def get_translatable_goal_display_name(cls):
return "Workload optimization"
def calculate_used_resource(self, model, hypervisor, cap_cores, cap_mem,
cap_disk):
'''calculate the used vcpus, memory and disk based on VM flavors'''
vms = model.get_mapping().get_node_vms(hypervisor)
vcpus_used = 0
memory_mb_used = 0
disk_gb_used = 0
for vm_id in vms:
vm = model.get_vm_from_id(vm_id)
vcpus_used += cap_cores.get_capacity(vm)
memory_mb_used += cap_mem.get_capacity(vm)
disk_gb_used += cap_disk.get_capacity(vm)
return vcpus_used, memory_mb_used, disk_gb_used
def choose_vm_to_migrate(self, model, hosts, avg_workload, workload_cache):
"""pick up an active vm instance to migrate from provided hosts
:param model: it's the origin_model passed from 'execute' function
:param hosts: the array of dict which contains hypervisor object
:param avg_workload: the average workload value of all hypervisors
:param workload_cache: the map contains vm to workload mapping
"""
for hvmap in hosts:
source_hypervisor = hvmap['hv']
source_vms = model.get_mapping().get_node_vms(source_hypervisor)
if source_vms:
delta_workload = hvmap['workload'] - avg_workload
min_delta = 1000000
instance_id = None
for vm_id in source_vms:
try:
# select the first active VM to migrate
vm = model.get_vm_from_id(vm_id)
if vm.state != vm_state.VMState.ACTIVE.value:
LOG.debug("VM not active, skipped: %s",
vm.uuid)
continue
current_delta = delta_workload - workload_cache[vm_id]
if 0 <= current_delta < min_delta:
min_delta = current_delta
instance_id = vm_id
except wexc.InstanceNotFound:
LOG.error(_LE("VM not found Error: %s"), vm_id)
if instance_id:
return source_hypervisor, model.get_vm_from_id(instance_id)
else:
LOG.info(_LI("VM not found from hypervisor: %s"),
source_hypervisor.uuid)
def filter_destination_hosts(self, model, hosts, vm_to_migrate,
avg_workload, workload_cache):
'''Only return hosts with sufficient available resources'''
cap_cores = model.get_resource_from_id(resource.ResourceType.cpu_cores)
cap_disk = model.get_resource_from_id(resource.ResourceType.disk)
cap_mem = model.get_resource_from_id(resource.ResourceType.memory)
required_cores = cap_cores.get_capacity(vm_to_migrate)
required_disk = cap_disk.get_capacity(vm_to_migrate)
required_mem = cap_mem.get_capacity(vm_to_migrate)
# filter hypervisors without enough resource
destination_hosts = []
src_vm_workload = workload_cache[vm_to_migrate.uuid]
for hvmap in hosts:
host = hvmap['hv']
workload = hvmap['workload']
# calculate the available resources
cores_used, mem_used, disk_used = self.calculate_used_resource(
model, host, cap_cores, cap_mem, cap_disk)
cores_available = cap_cores.get_capacity(host) - cores_used
disk_available = cap_disk.get_capacity(host) - disk_used
mem_available = cap_mem.get_capacity(host) - mem_used
if (cores_available >= required_cores and
disk_available >= required_disk and
mem_available >= required_mem and
(src_vm_workload + workload) < self.threshold / 100 *
cap_cores.get_capacity(host)):
destination_hosts.append(hvmap)
return destination_hosts
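The filter above combines two conditions: the host has enough spare vCPUs/disk/memory for the VM's flavor, and the post-migration workload stays below the threshold fraction of its core capacity. A minimal sketch with plain dicts (field names hypothetical):

```python
def filter_destinations(hosts, required, src_vm_workload, threshold):
    """Keep hosts with enough spare cores/disk/mem whose workload after
    receiving the VM stays under threshold percent of core capacity."""
    result = []
    for h in hosts:
        available = {k: h['capacity'][k] - h['used'][k]
                     for k in ('cores', 'disk', 'mem')}
        fits = all(available[k] >= required[k] for k in required)
        under_threshold = (src_vm_workload + h['workload']
                           < threshold / 100.0 * h['capacity']['cores'])
        if fits and under_threshold:
            result.append(h)
    return result


hosts = [{'capacity': {'cores': 16, 'disk': 200, 'mem': 64},
          'used': {'cores': 4, 'disk': 50, 'mem': 16}, 'workload': 2.0},
         {'capacity': {'cores': 8, 'disk': 100, 'mem': 32},
          'used': {'cores': 7, 'disk': 90, 'mem': 30}, 'workload': 1.8}]
required = {'cores': 2, 'disk': 20, 'mem': 4}
# only the first host has both the spare resources and the headroom
print(len(filter_destinations(hosts, required, 0.5, 25.0)))  # 1
```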
def group_hosts_by_cpu_util(self, model):
"""Calculate the workloads of each hypervisor
try to find out the hypervisors which have reached threshold
and the hypervisors which are under threshold.
and also calculate the average workload value of all hypervisors.
and also generate the VM workload map.
"""
hypervisors = model.get_all_hypervisors()
cluster_size = len(hypervisors)
if not hypervisors:
raise wexc.ClusterEmpty()
# get cpu cores capacity of hypervisors and vms
cap_cores = model.get_resource_from_id(resource.ResourceType.cpu_cores)
overload_hosts = []
nonoverload_hosts = []
# total workload of cluster
# it's the total core numbers being utilized in a cluster.
cluster_workload = 0.0
# use workload_cache to store the workload of VMs for reuse purpose
workload_cache = {}
for hypervisor_id in hypervisors:
hypervisor = model.get_hypervisor_from_id(hypervisor_id)
vms = model.get_mapping().get_node_vms(hypervisor)
hypervisor_workload = 0.0
for vm_id in vms:
vm = model.get_vm_from_id(vm_id)
try:
cpu_util = self.ceilometer.statistic_aggregation(
resource_id=vm_id,
meter_name=self._meter,
period=self._period,
aggregate='avg')
except Exception as e:
LOG.error(_LE("Can not get cpu_util: %s"), e.message)
continue
if cpu_util is None:
LOG.debug("%s: cpu_util is None", vm_id)
continue
vm_cores = cap_cores.get_capacity(vm)
workload_cache[vm_id] = cpu_util * vm_cores / 100
hypervisor_workload += workload_cache[vm_id]
LOG.debug("%s: cpu_util %f", vm_id, cpu_util)
hypervisor_cores = cap_cores.get_capacity(hypervisor)
hy_cpu_util = hypervisor_workload / hypervisor_cores * 100
cluster_workload += hypervisor_workload
hvmap = {'hv': hypervisor, "cpu_util": hy_cpu_util, 'workload':
hypervisor_workload}
if hy_cpu_util >= self.threshold:
# mark the hypervisor to release resources
overload_hosts.append(hvmap)
else:
nonoverload_hosts.append(hvmap)
avg_workload = cluster_workload / cluster_size
return overload_hosts, nonoverload_hosts, avg_workload, workload_cache
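The grouping above converts each VM's cpu_util percentage into busy cores (cpu_util * vcpus / 100), sums them per host, and splits hosts by the threshold. A minimal sketch without the Ceilometer calls (all names hypothetical):

```python
def group_hosts(hosts, threshold):
    """hosts: {name: ([(cpu_util_percent, vcpus), ...], host_cores)}.

    Per-VM workload = cpu_util * vcpus / 100 (cores actually busy).
    Hosts at or over threshold percent of their core capacity need
    offloading; returns (overloaded, ok, average_workload).
    """
    overloaded, ok, total = [], [], 0.0
    for name, (vms, host_cores) in hosts.items():
        workload = sum(util * cores / 100.0 for util, cores in vms)
        total += workload
        if workload / host_cores * 100.0 >= threshold:
            overloaded.append(name)
        else:
            ok.append(name)
    return overloaded, ok, total / len(hosts)


hosts = {'hv1': ([(80.0, 4), (50.0, 2)], 8),  # 4.2 busy cores -> 52.5%
         'hv2': ([(10.0, 2)], 8)}             # 0.2 busy cores -> 2.5%
print(group_hosts(hosts, 25.0))  # (['hv1'], ['hv2'], 2.2)
```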
def execute(self, origin_model):
LOG.info(_LI("Initializing Workload Balance Strategy"))
if origin_model is None:
raise wexc.ClusterStateNotDefined()
current_model = origin_model
src_hypervisors, target_hypervisors, avg_workload, workload_cache = (
self.group_hosts_by_cpu_util(current_model))
if not src_hypervisors:
LOG.debug("No hosts require optimization")
return self.solution
if not target_hypervisors:
LOG.warning(_LW("No hosts current have CPU utilization under %s "
"percent, therefore there are no possible target "
"hosts for any migration"),
self.threshold)
return self.solution
# choose the server with largest cpu_util
src_hypervisors = sorted(src_hypervisors,
reverse=True,
key=lambda x: (x[self.METER_NAME]))
vm_to_migrate = self.choose_vm_to_migrate(current_model,
src_hypervisors,
avg_workload,
workload_cache)
if not vm_to_migrate:
return self.solution
source_hypervisor, vm_src = vm_to_migrate
# find the hosts that have enough resource for the VM to be migrated
destination_hosts = self.filter_destination_hosts(current_model,
target_hypervisors,
vm_src,
avg_workload,
workload_cache)
# sort the filtered result by workload
# pick up the lowest one as dest server
if not destination_hosts:
LOG.warning(_LW("No proper target host could be found; it might "
"be because there is not enough CPU, memory or disk"))
return self.solution
destination_hosts = sorted(destination_hosts,
key=lambda x: (x["cpu_util"]))
# always use the host with the lowest CPU utilization
mig_dst_hypervisor = destination_hosts[0]['hv']
# generate solution to migrate the vm to the dest server,
if current_model.get_mapping().migrate_vm(vm_src,
source_hypervisor,
mig_dst_hypervisor):
parameters = {'migration_type': 'live',
'src_hypervisor': source_hypervisor.uuid,
'dst_hypervisor': mig_dst_hypervisor.uuid}
self.solution.add_action(action_type=self.MIGRATION,
resource_id=vm_src.uuid,
input_parameters=parameters)
self.solution.model = current_model
return self.solution

View File

@@ -0,0 +1,414 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Servionica LLC
#
# Authors: Alexander Chadin <a.chadin@servionica.ru>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from copy import deepcopy
import itertools
import math
import random
import oslo_cache
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _LI, _
from watcher.common import exception
from watcher.decision_engine.model import resource
from watcher.decision_engine.model import vm_state
from watcher.decision_engine.strategy.strategies import base
from watcher.metrics_engine.cluster_history import ceilometer as \
ceilometer_cluster_history
LOG = log.getLogger(__name__)
metrics = ['cpu_util', 'memory.resident']
thresholds_dict = {'cpu_util': 0.2, 'memory.resident': 0.2}
weights_dict = {'cpu_util_weight': 1.0, 'memory.resident_weight': 1.0}
vm_host_measures = {'cpu_util': 'hardware.cpu.util',
'memory.resident': 'hardware.memory.used'}
ws_opts = [
cfg.ListOpt('metrics',
default=metrics,
required=True,
help='Metrics used as rates of cluster loads.'),
cfg.DictOpt('thresholds',
default=thresholds_dict,
help='Dict of thresholds for each metric.'),
cfg.DictOpt('weights',
default=weights_dict,
help='These weights are used to calculate the '
'common standard deviation. The name of a weight '
'contains the meter name and a _weight suffix.'),
cfg.StrOpt('host_choice',
default='retry',
required=True,
help="Method of host's choice."),
cfg.IntOpt('retry_count',
default=1,
required=True,
help='Count of random returned hosts.'),
]
CONF = cfg.CONF
CONF.register_opts(ws_opts, 'watcher_strategies.workload_stabilization')
def _set_memoize(conf):
oslo_cache.configure(conf)
region = oslo_cache.create_region()
configured_region = oslo_cache.configure_cache_region(conf, region)
return oslo_cache.core.get_memoization_decorator(conf,
configured_region,
'cache')
class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
"""Workload Stabilization control using live migration
*Description*
This is a workload stabilization strategy based on the standard deviation
algorithm. The goal is to determine if there is an overload in a cluster
and respond to it by migrating VMs to stabilize the cluster.
*Requirements*
* Software: Ceilometer component ceilometer-compute running
in each compute host, and Ceilometer API can report such telemetries
``memory.resident`` and ``cpu_util`` successfully.
* You must have at least 2 physical compute nodes to run this strategy.
*Limitations*
- It assumes that live migrations are possible
- Load on the system is sufficiently stable.
*Spec URL*
https://review.openstack.org/#/c/286153/
"""
MIGRATION = "migrate"
MEMOIZE = _set_memoize(CONF)
def __init__(self, osc=None):
super(WorkloadStabilization, self).__init__(osc)
self._ceilometer = None
self._nova = None
self.weights = CONF['watcher_strategies.workload_stabilization']\
.weights
self.metrics = CONF['watcher_strategies.workload_stabilization']\
.metrics
self.thresholds = CONF['watcher_strategies.workload_stabilization']\
.thresholds
self.host_choice = CONF['watcher_strategies.workload_stabilization']\
.host_choice
@classmethod
def get_name(cls):
return "WORKLOAD_BALANCING"
@classmethod
def get_display_name(cls):
return _("Workload balancing")
@classmethod
def get_translatable_display_name(cls):
return "Workload balancing"
@property
def ceilometer(self):
if self._ceilometer is None:
self._ceilometer = (ceilometer_cluster_history.
CeilometerClusterHistory(osc=self.osc))
return self._ceilometer
@property
def nova(self):
if self._nova is None:
self._nova = self.osc.nova()
return self._nova
@nova.setter
def nova(self, n):
self._nova = n
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
def transform_vm_cpu(self, vm_load, host_vcpus):
"""This method transforms vm cpu utilization to overall host cpu utilization.
:param vm_load: dict that contains vm uuid and utilization info.
:param host_vcpus: int
:return: float value
"""
return vm_load['cpu_util'] * (vm_load['vcpus'] / float(host_vcpus))
@MEMOIZE
def get_vm_load(self, vm_uuid, current_model):
"""Gather VM load through Ceilometer statistics.
:param vm_uuid: vm for which statistic is gathered.
:param current_model: the cluster model
:return: dict
"""
LOG.debug('get_vm_load started')
vm_vcpus = current_model.get_resource_from_id(
resource.ResourceType.cpu_cores).get_capacity(
current_model.get_vm_from_id(vm_uuid))
vm_load = {'uuid': vm_uuid, 'vcpus': vm_vcpus}
for meter in self.metrics:
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=vm_uuid,
meter_name=meter,
period="120",
aggregate='min'
)
if avg_meter is None:
raise exception.NoMetricValuesForVM(resource_id=vm_uuid,
metric_name=meter)
vm_load[meter] = avg_meter
return vm_load
def normalize_hosts_load(self, hosts, current_model):
normalized_hosts = deepcopy(hosts)
for host in normalized_hosts:
if 'cpu_util' in normalized_hosts[host]:
normalized_hosts[host]['cpu_util'] /= float(100)
if 'memory.resident' in normalized_hosts[host]:
h_memory = current_model.get_resource_from_id(
resource.ResourceType.memory).get_capacity(
current_model.get_hypervisor_from_id(host))
normalized_hosts[host]['memory.resident'] /= float(h_memory)
return normalized_hosts
def get_hosts_load(self, current_model):
"""Get the load of every host by gathering the load of its VMs"""
hosts_load = {}
for hypervisor_id in current_model.get_all_hypervisors():
hosts_load[hypervisor_id] = {}
host_vcpus = current_model.get_resource_from_id(
resource.ResourceType.cpu_cores).get_capacity(
current_model.get_hypervisor_from_id(hypervisor_id))
hosts_load[hypervisor_id]['vcpus'] = host_vcpus
for metric in self.metrics:
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=hypervisor_id,
meter_name=vm_host_measures[metric],
period="60",
aggregate='avg'
)
if avg_meter is None:
raise exception.NoSuchMetricForHost(
metric=vm_host_measures[metric],
host=hypervisor_id)
hosts_load[hypervisor_id][metric] = avg_meter
return hosts_load
def get_sd(self, hosts, meter_name):
"""Get standard deviation among hosts by specified meter"""
mean = 0
variation = 0
for host_id in hosts:
mean += hosts[host_id][meter_name]
mean /= len(hosts)
for host_id in hosts:
variation += (hosts[host_id][meter_name] - mean) ** 2
variation /= len(hosts)
sd = math.sqrt(variation)
return sd
def calculate_weighted_sd(self, sd_case):
"""Calculate common standard deviation among meters on host"""
weighted_sd = 0
for metric, value in zip(self.metrics, sd_case):
try:
weighted_sd += value * float(self.weights[metric + '_weight'])
except KeyError as exc:
LOG.exception(exc)
raise exception.WatcherException(
_("Incorrect mapping: could not find associated weight"
" for %s in weight dict.") % metric)
return weighted_sd
def calculate_migration_case(self, hosts, vm_id, src_hp_id, dst_hp_id,
current_model):
"""Calculate migration case
Return the list of standard deviation values that would result from
migrating the VM from the source host to the destination host.
:param hosts: hosts with their workload
:param vm_id: the virtual machine
:param src_hp_id: the source hypervisor id
:param dst_hp_id: the destination hypervisor id
:param current_model: the cluster model
:return: list of standard deviation values
"""
migration_case = []
new_hosts = deepcopy(hosts)
vm_load = self.get_vm_load(vm_id, current_model)
d_host_vcpus = new_hosts[dst_hp_id]['vcpus']
s_host_vcpus = new_hosts[src_hp_id]['vcpus']
for metric in self.metrics:
if metric == 'cpu_util':
new_hosts[src_hp_id][metric] -= self.transform_vm_cpu(
vm_load,
s_host_vcpus)
new_hosts[dst_hp_id][metric] += self.transform_vm_cpu(
vm_load,
d_host_vcpus)
else:
new_hosts[src_hp_id][metric] -= vm_load[metric]
new_hosts[dst_hp_id][metric] += vm_load[metric]
normalized_hosts = self.normalize_hosts_load(new_hosts, current_model)
for metric in self.metrics:
migration_case.append(self.get_sd(normalized_hosts, metric))
migration_case.append(new_hosts)
return migration_case
def simulate_migrations(self, current_model, hosts):
"""Build a sorted list of vm:dst_host pairs"""
def yield_hypervisors(hypervisors):
ct = CONF['watcher_strategies.workload_stabilization'].retry_count
if self.host_choice == 'cycle':
for i in itertools.cycle(hypervisors):
yield [i]
if self.host_choice == 'retry':
while True:
yield random.sample(hypervisors, ct)
if self.host_choice == 'fullsearch':
while True:
yield hypervisors
vm_host_map = []
for source_hp_id in current_model.get_all_hypervisors():
hypervisors = list(current_model.get_all_hypervisors())
hypervisors.remove(source_hp_id)
hypervisor_list = yield_hypervisors(hypervisors)
vms_id = current_model.get_mapping(). \
get_node_vms_from_id(source_hp_id)
for vm_id in vms_id:
min_sd_case = {'value': len(self.metrics)}
vm = current_model.get_vm_from_id(vm_id)
if vm.state not in [vm_state.VMState.ACTIVE.value,
vm_state.VMState.PAUSED.value]:
continue
for dst_hp_id in next(hypervisor_list):
sd_case = self.calculate_migration_case(hosts, vm_id,
source_hp_id,
dst_hp_id,
current_model)
weighted_sd = self.calculate_weighted_sd(sd_case[:-1])
if weighted_sd < min_sd_case['value']:
min_sd_case = {'host': dst_hp_id, 'value': weighted_sd,
's_host': source_hp_id, 'vm': vm_id}
vm_host_map.append(min_sd_case)
break
return sorted(vm_host_map, key=lambda x: x['value'])
def check_threshold(self, current_model):
"""Check whether the cluster needs balancing"""
hosts_load = self.get_hosts_load(current_model)
normalized_load = self.normalize_hosts_load(hosts_load, current_model)
for metric in self.metrics:
metric_sd = self.get_sd(normalized_load, metric)
if metric_sd > float(self.thresholds[metric]):
return self.simulate_migrations(current_model, hosts_load)
def add_migration(self,
resource_id,
migration_type,
src_hypervisor,
dst_hypervisor):
parameters = {'migration_type': migration_type,
'src_hypervisor': src_hypervisor,
'dst_hypervisor': dst_hypervisor}
self.solution.add_action(action_type=self.MIGRATION,
resource_id=resource_id,
input_parameters=parameters)
def create_migration_vm(self, current_model, mig_vm, mig_src_hypervisor,
mig_dst_hypervisor):
"""Create a VM migration action"""
if current_model.get_mapping().migrate_vm(
mig_vm, mig_src_hypervisor, mig_dst_hypervisor):
self.add_migration(mig_vm.uuid, 'live',
mig_src_hypervisor.uuid,
mig_dst_hypervisor.uuid)
def migrate(self, current_model, vm_uuid, src_host, dst_host):
mig_vm = current_model.get_vm_from_id(vm_uuid)
mig_src_hypervisor = current_model.get_hypervisor_from_id(src_host)
mig_dst_hypervisor = current_model.get_hypervisor_from_id(dst_host)
self.create_migration_vm(current_model, mig_vm, mig_src_hypervisor,
mig_dst_hypervisor)
def fill_solution(self, current_model):
self.solution.model = current_model
self.solution.efficacy = 100
return self.solution
def execute(self, origin_model):
LOG.info(_LI("Initializing Workload Stabilization"))
current_model = origin_model
if origin_model is None:
raise exception.ClusterStateNotDefined()
migration = self.check_threshold(current_model)
if migration:
hosts_load = self.get_hosts_load(current_model)
min_sd = 1
balanced = False
for vm_host in migration:
dst_hp_disk = current_model.get_resource_from_id(
resource.ResourceType.disk).get_capacity(
current_model.get_hypervisor_from_id(vm_host['host']))
vm_disk = current_model.get_resource_from_id(
resource.ResourceType.disk).get_capacity(
current_model.get_vm_from_id(vm_host['vm']))
if vm_disk > dst_hp_disk:
continue
vm_load = self.calculate_migration_case(hosts_load,
vm_host['vm'],
vm_host['s_host'],
vm_host['host'],
current_model)
weighted_sd = self.calculate_weighted_sd(vm_load[:-1])
if weighted_sd < min_sd:
min_sd = weighted_sd
hosts_load = vm_load[-1]
self.migrate(current_model, vm_host['vm'],
vm_host['s_host'], vm_host['host'])
for metric, value in zip(self.metrics, vm_load[:-1]):
if value < float(self.thresholds[metric]):
balanced = True
break
if balanced:
break
return self.fill_solution(current_model)
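The core math above (`get_sd` combined with `calculate_weighted_sd`) can be sketched in isolation: per-metric standard deviations across hosts are combined with per-metric weights into one score. The host loads, metric names, and weights below are illustrative values, not taken from a real deployment:

```python
# Standalone sketch of the weighted standard deviation used by the
# workload stabilization strategy (illustrative data only).
import math


def get_sd(hosts, meter_name):
    # Population standard deviation of one meter across all hosts
    values = [hosts[h][meter_name] for h in hosts]
    mean = sum(values) / len(values)
    variation = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(variation)


def weighted_sd(hosts, metrics, weights):
    # Combine per-metric standard deviations using <meter>_weight keys,
    # mirroring how calculate_weighted_sd looks up self.weights
    return sum(get_sd(hosts, m) * weights[m + '_weight'] for m in metrics)


hosts = {
    'node-1': {'cpu_util': 0.8, 'memory.resident': 0.6},
    'node-2': {'cpu_util': 0.2, 'memory.resident': 0.4},
}
metrics = ['cpu_util', 'memory.resident']
weights = {'cpu_util_weight': 0.9, 'memory.resident_weight': 0.1}

print(round(weighted_sd(hosts, metrics, weights), 3))  # → 0.28
```

A candidate migration is then scored by recomputing this value on the post-migration loads; the strategy keeps the move that minimizes it.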

@@ -0,0 +1,301 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
from oslo_log import log
from watcher._i18n import _LE, _LI
from watcher.common import context
from watcher.decision_engine.strategy.loading import default
from watcher import objects
LOG = log.getLogger(__name__)
GoalMapping = collections.namedtuple('GoalMapping', ['name', 'display_name'])
StrategyMapping = collections.namedtuple(
'StrategyMapping', ['name', 'goal_name', 'display_name'])
class Syncer(object):
"""Syncs all available goals and strategies with the Watcher DB"""
def __init__(self):
self.ctx = context.make_context()
self.discovered_map = None
self._available_goals = None
self._available_goals_map = None
self._available_strategies = None
self._available_strategies_map = None
# This goal mapping maps stale goal IDs to the synced goal
self.goal_mapping = dict()
# This strategy mapping maps stale strategy IDs to the synced strategy
self.strategy_mapping = dict()
self.stale_audit_templates_map = {}
@property
def available_goals(self):
if self._available_goals is None:
self._available_goals = objects.Goal.list(self.ctx)
return self._available_goals
@property
def available_strategies(self):
if self._available_strategies is None:
self._available_strategies = objects.Strategy.list(self.ctx)
return self._available_strategies
@property
def available_goals_map(self):
if self._available_goals_map is None:
self._available_goals_map = {
GoalMapping(
name=g.name, display_name=g.display_name): g
for g in self.available_goals
}
return self._available_goals_map
@property
def available_strategies_map(self):
if self._available_strategies_map is None:
goals_map = {g.id: g.name for g in self.available_goals}
self._available_strategies_map = {
StrategyMapping(
name=s.name, goal_name=goals_map[s.goal_id],
display_name=s.display_name): s
for s in self.available_strategies
}
return self._available_strategies_map
def sync(self):
self.discovered_map = self._discover()
goals_map = self.discovered_map["goals"]
strategies_map = self.discovered_map["strategies"]
for goal_name, goal_map in goals_map.items():
if goal_map in self.available_goals_map:
LOG.info(_LI("Goal %s already exists"), goal_name)
continue
self.goal_mapping.update(self._sync_goal(goal_map))
for strategy_name, strategy_map in strategies_map.items():
if (strategy_map in self.available_strategies_map and
strategy_map.goal_name not in
[g.name for g in self.goal_mapping.values()]):
LOG.info(_LI("Strategy %s already exists"), strategy_name)
continue
self.strategy_mapping.update(self._sync_strategy(strategy_map))
self._sync_audit_templates()
def _sync_goal(self, goal_map):
goal_name = goal_map.name
goal_display_name = goal_map.display_name
goal_mapping = dict()
# Goals that are matching by name with the given discovered goal name
matching_goals = [g for g in self.available_goals
if g.name == goal_name]
stale_goals = self._soft_delete_stale_goals(goal_map, matching_goals)
if stale_goals or not matching_goals:
goal = objects.Goal(self.ctx)
goal.name = goal_name
goal.display_name = goal_display_name
goal.create()
LOG.info(_LI("Goal %s created"), goal_name)
# Updating the internal states
self.available_goals_map[goal] = goal_map
# Map the old goal IDs to the new (equivalent) goal
for matching_goal in matching_goals:
goal_mapping[matching_goal.id] = goal
return goal_mapping
def _sync_strategy(self, strategy_map):
strategy_name = strategy_map.name
strategy_display_name = strategy_map.display_name
goal_name = strategy_map.goal_name
strategy_mapping = dict()
# Strategies that are matching by name with the given
# discovered strategy name
matching_strategies = [s for s in self.available_strategies
if s.name == strategy_name]
stale_strategies = self._soft_delete_stale_strategies(
strategy_map, matching_strategies)
if stale_strategies or not matching_strategies:
strategy = objects.Strategy(self.ctx)
strategy.name = strategy_name
strategy.display_name = strategy_display_name
strategy.goal_id = objects.Goal.get_by_name(self.ctx, goal_name).id
strategy.create()
LOG.info(_LI("Strategy %s created"), strategy_name)
# Updating the internal states
self.available_strategies_map[strategy] = strategy_map
# Map the old strategy IDs to the new (equivalent) strategy
for matching_strategy in matching_strategies:
strategy_mapping[matching_strategy.id] = strategy
return strategy_mapping
def _sync_audit_templates(self):
# First we find audit templates that are stale because their associated
# goal or strategy has been modified and we update them in-memory
self._find_stale_audit_templates_due_to_goal()
self._find_stale_audit_templates_due_to_strategy()
# Then we handle the case where an audit template became
# stale because its related goal does not exist anymore.
self._soft_delete_removed_goals()
# Then we handle the case where an audit template became
# stale because its related strategy does not exist anymore.
self._soft_delete_removed_strategies()
# Finally, we save into the DB the updated stale audit templates
for stale_audit_template in self.stale_audit_templates_map.values():
stale_audit_template.save()
LOG.info(_LI("Audit Template '%s' synced"),
stale_audit_template.name)
def _find_stale_audit_templates_due_to_goal(self):
for goal_id, synced_goal in self.goal_mapping.items():
filters = {"goal_id": goal_id}
stale_audit_templates = objects.AuditTemplate.list(
self.ctx, filters=filters)
# Update the goal ID for the stale audit templates (w/o saving)
for audit_template in stale_audit_templates:
if audit_template.id not in self.stale_audit_templates_map:
audit_template.goal_id = synced_goal.id
self.stale_audit_templates_map[audit_template.id] = (
audit_template)
else:
self.stale_audit_templates_map[
audit_template.id].goal_id = synced_goal.id
def _find_stale_audit_templates_due_to_strategy(self):
for strategy_id, synced_strategy in self.strategy_mapping.items():
filters = {"strategy_id": strategy_id}
stale_audit_templates = objects.AuditTemplate.list(
self.ctx, filters=filters)
# Update strategy IDs for all stale audit templates (w/o saving)
for audit_template in stale_audit_templates:
if audit_template.id not in self.stale_audit_templates_map:
audit_template.strategy_id = synced_strategy.id
self.stale_audit_templates_map[audit_template.id] = (
audit_template)
else:
self.stale_audit_templates_map[
audit_template.id].strategy_id = synced_strategy.id
def _soft_delete_removed_goals(self):
removed_goals = [
g for g in self.available_goals
if g.name not in self.discovered_map['goals']]
for removed_goal in removed_goals:
removed_goal.soft_delete()
filters = {"goal_id": removed_goal.id}
invalid_ats = objects.AuditTemplate.list(self.ctx, filters=filters)
for at in invalid_ats:
LOG.warning(
_LE("Audit Template '%(audit_template)s' references a "
"goal that does not exist"),
{'audit_template': at.uuid})
def _soft_delete_removed_strategies(self):
removed_strategies = [
s for s in self.available_strategies
if s.name not in self.discovered_map['strategies']]
for removed_strategy in removed_strategies:
removed_strategy.soft_delete()
filters = {"strategy_id": removed_strategy.id}
invalid_ats = objects.AuditTemplate.list(self.ctx, filters=filters)
for at in invalid_ats:
LOG.info(
_LI("Audit Template '%(audit_template)s' references a "
"strategy that does not exist"),
{'audit_template': at.uuid})
# In this case we can reset the strategy ID to None
# so the audit template can still achieve the same goal
# but with a different strategy
if at.id not in self.stale_audit_templates_map:
at.strategy_id = None
self.stale_audit_templates_map[at.id] = at
else:
self.stale_audit_templates_map[at.id].strategy_id = None
def _discover(self):
strategies_map = {}
goals_map = {}
discovered_map = {"goals": goals_map, "strategies": strategies_map}
strategy_loader = default.DefaultStrategyLoader()
implemented_strategies = strategy_loader.list_available()
for _, strategy_cls in implemented_strategies.items():
goals_map[strategy_cls.get_goal_name()] = GoalMapping(
name=strategy_cls.get_goal_name(),
display_name=strategy_cls.get_translatable_goal_display_name())
strategies_map[strategy_cls.get_name()] = StrategyMapping(
name=strategy_cls.get_name(),
goal_name=strategy_cls.get_goal_name(),
display_name=strategy_cls.get_translatable_display_name())
return discovered_map
def _soft_delete_stale_goals(self, goal_map, matching_goals):
goal_name = goal_map.name
goal_display_name = goal_map.display_name
stale_goals = []
for matching_goal in matching_goals:
if (matching_goal.display_name == goal_display_name and
matching_goal.strategy_id not in self.strategy_mapping):
LOG.info(_LI("Goal %s unchanged"), goal_name)
else:
LOG.info(_LI("Goal %s modified"), goal_name)
matching_goal.soft_delete()
stale_goals.append(matching_goal)
return stale_goals
def _soft_delete_stale_strategies(self, strategy_map, matching_strategies):
strategy_name = strategy_map.name
strategy_display_name = strategy_map.display_name
stale_strategies = []
for matching_strategy in matching_strategies:
if (matching_strategy.display_name == strategy_display_name and
matching_strategy.goal_id not in self.goal_mapping):
LOG.info(_LI("Strategy %s unchanged"), strategy_name)
else:
LOG.info(_LI("Strategy %s modified"), strategy_name)
matching_strategy.soft_delete()
stale_strategies.append(matching_strategy)
return stale_strategies
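The Syncer's stale detection hinges on using namedtuples as map keys: namedtuple equality is field-by-field, so a goal whose display name changed hashes to a different key and no longer matches `available_goals_map`. A minimal sketch with hypothetical goal data:

```python
# Sketch (hypothetical data) of namedtuple-keyed lookup as used by the
# Syncer: a changed display_name yields a different key, flagging the
# goal as needing re-sync.
import collections

GoalMapping = collections.namedtuple('GoalMapping', ['name', 'display_name'])

# Pretend these entries came from the Watcher DB
available = {GoalMapping('DUMMY', 'Dummy goal'): 'db-row-1'}

discovered_same = GoalMapping('DUMMY', 'Dummy goal')
discovered_changed = GoalMapping('DUMMY', 'Dummy goal (renamed)')

print(discovered_same in available)      # → True: already synced, skip
print(discovered_changed in available)   # → False: stale, soft-delete + recreate
```

This is why `sync()` can decide "Goal %s already exists" with a plain `in` test instead of comparing fields one by one.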

@@ -7,52 +7,105 @@
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: python-watcher 0.23.3.dev2\n"
"Project-Id-Version: python-watcher 0.26.1.dev33\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2016-02-09 09:07+0100\n"
"POT-Creation-Date: 2016-05-11 15:31+0200\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.1.1\n"
"Generated-By: Babel 2.3.4\n"
#: watcher/api/controllers/v1/action_plan.py:102
#: watcher/api/app.py:31
msgid "The port for the watcher API server"
msgstr ""
#: watcher/api/app.py:34
msgid "The listen IP for the watcher API server"
msgstr ""
#: watcher/api/app.py:37
msgid ""
"The maximum number of items returned in a single response from a "
"collection resource"
msgstr ""
#: watcher/api/app.py:41
msgid ""
"Number of workers for Watcher API service. The default is equal to the "
"number of CPUs available if that can be determined, else a default worker"
" count of 1 is returned."
msgstr ""
#: watcher/api/app.py:48
msgid ""
"Enable the integrated stand-alone API to service requests via HTTPS "
"instead of HTTP. If there is a front-end service performing HTTPS "
"offloading from the service, this option should be False; note, you will "
"want to change public API endpoint to represent SSL termination URL with "
"'public_endpoint' option."
msgstr ""
#: watcher/api/controllers/v1/action.py:364
msgid "Cannot create an action directly"
msgstr ""
#: watcher/api/controllers/v1/action.py:388
msgid "Cannot modify an action directly"
msgstr ""
#: watcher/api/controllers/v1/action.py:424
msgid "Cannot delete an action directly"
msgstr ""
#: watcher/api/controllers/v1/action_plan.py:87
#, python-format
msgid "Invalid state: %(state)s"
msgstr ""
#: watcher/api/controllers/v1/action_plan.py:422
#: watcher/api/controllers/v1/action_plan.py:407
#, python-format
msgid "State transition not allowed: (%(initial_state)s -> %(new_state)s)"
msgstr ""
#: watcher/api/controllers/v1/audit.py:359
#: watcher/api/controllers/v1/audit.py:362
msgid "The audit template UUID or name specified is invalid"
msgstr ""
#: watcher/api/controllers/v1/types.py:148
#: watcher/api/controllers/v1/audit_template.py:138
#, python-format
msgid ""
"'%(strategy)s' strategy does relate to the '%(goal)s' goal. Possible "
"choices: %(choices)s"
msgstr ""
#: watcher/api/controllers/v1/audit_template.py:160
msgid "Cannot remove 'goal_uuid' attribute from an audit template"
msgstr ""
#: watcher/api/controllers/v1/types.py:123
#, python-format
msgid "%s is not JSON serializable"
msgstr ""
#: watcher/api/controllers/v1/types.py:184
#: watcher/api/controllers/v1/types.py:159
#, python-format
msgid "Wrong type. Expected '%(type)s', got '%(value)s'"
msgstr ""
#: watcher/api/controllers/v1/types.py:223
#: watcher/api/controllers/v1/types.py:198
#, python-format
msgid "'%s' is an internal attribute and can not be updated"
msgstr ""
#: watcher/api/controllers/v1/types.py:227
#: watcher/api/controllers/v1/types.py:202
#, python-format
msgid "'%s' is a mandatory attribute and can not be removed"
msgstr ""
#: watcher/api/controllers/v1/types.py:232
#: watcher/api/controllers/v1/types.py:207
msgid "'add' and 'replace' operations needs value"
msgstr ""
@@ -65,7 +118,12 @@ msgstr ""
msgid "Invalid sort direction: %s. Acceptable values are 'asc' or 'desc'"
msgstr ""
#: watcher/api/controllers/v1/utils.py:57
#: watcher/api/controllers/v1/utils.py:58
#, python-format
msgid "Invalid filter: %s"
msgstr ""
#: watcher/api/controllers/v1/utils.py:65
#, python-format
msgid "Adding a new attribute (%s) to the root of the resource is not allowed"
msgstr ""
@@ -84,49 +142,60 @@ msgstr ""
msgid "Error parsing HTTP response: %s"
msgstr ""
#: watcher/applier/actions/change_nova_service_state.py:69
#: watcher/applier/actions/change_nova_service_state.py:90
msgid "The target state is not defined"
msgstr ""
#: watcher/applier/actions/migration.py:43
#: watcher/applier/actions/migration.py:71
msgid "The parameter resource_id is invalid."
msgstr ""
#: watcher/applier/actions/migration.py:86
#: watcher/applier/actions/migration.py:124
#, python-format
msgid ""
"Unexpected error occured. Migration failed forinstance %s. Leaving "
"instance on previous host."
msgstr ""
#: watcher/applier/actions/migration.py:140
#, python-format
msgid "Migration of type %(migration_type)s is not supported."
msgstr ""
#: watcher/applier/workflow_engine/default.py:128
#: watcher/applier/workflow_engine/default.py:129
#, python-format
msgid "The WorkFlow Engine has failed to execute the action %s"
msgstr ""
#: watcher/applier/workflow_engine/default.py:146
#: watcher/applier/workflow_engine/default.py:147
#, python-format
msgid "Revert action %s"
msgstr ""
#: watcher/applier/workflow_engine/default.py:152
#: watcher/applier/workflow_engine/default.py:153
msgid "Oops! We need disaster recover plan"
msgstr ""
#: watcher/cmd/api.py:46 watcher/cmd/applier.py:39
#: watcher/cmd/decisionengine.py:40
#, python-format
msgid "Starting server in PID %s"
msgstr ""
#: watcher/cmd/api.py:51
#: watcher/cmd/api.py:46
#, python-format
msgid "serving on 0.0.0.0:%(port)s, view at http://127.0.0.1:%(port)s"
msgstr ""
#: watcher/cmd/api.py:55
#: watcher/cmd/api.py:50
#, python-format
msgid "serving on http://%(host)s:%(port)s"
msgstr ""
#: watcher/cmd/applier.py:41
#, python-format
msgid "Starting Watcher Applier service in PID %s"
msgstr ""
#: watcher/cmd/decisionengine.py:42
#, python-format
msgid "Starting Watcher Decision Engine service in PID %s"
msgstr ""
#: watcher/common/clients.py:29
msgid "Version of Nova API to use in novaclient."
msgstr ""
@@ -188,175 +257,209 @@ msgstr ""
#: watcher/common/exception.py:150
#, python-format
msgid "Expected an uuid or int but received %(identity)s"
msgid "Expected a uuid or int but received %(identity)s"
msgstr ""
#: watcher/common/exception.py:154
#, python-format
msgid "Goal %(goal)s is not defined in Watcher configuration file"
msgid "Goal %(goal)s is invalid"
msgstr ""
#: watcher/common/exception.py:158
#, python-format
msgid "Expected a uuid but received %(uuid)s"
msgid "Strategy %(strategy)s is invalid"
msgstr ""
#: watcher/common/exception.py:162
#, python-format
msgid "Expected a logical name but received %(name)s"
msgid "Expected a uuid but received %(uuid)s"
msgstr ""
#: watcher/common/exception.py:166
#, python-format
msgid "Expected a logical name or uuid but received %(name)s"
msgid "Expected a logical name but received %(name)s"
msgstr ""
#: watcher/common/exception.py:170
#, python-format
msgid "AuditTemplate %(audit_template)s could not be found"
msgid "Expected a logical name or uuid but received %(name)s"
msgstr ""
#: watcher/common/exception.py:174
#, python-format
msgid "An audit_template with UUID %(uuid)s or name %(name)s already exists"
msgid "Goal %(goal)s could not be found"
msgstr ""
#: watcher/common/exception.py:179
#: watcher/common/exception.py:178
#, python-format
msgid "A goal with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:182
#, python-format
msgid "Strategy %(strategy)s could not be found"
msgstr ""
#: watcher/common/exception.py:186
#, python-format
msgid "A strategy with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:190
#, python-format
msgid "AuditTemplate %(audit_template)s could not be found"
msgstr ""
#: watcher/common/exception.py:194
#, python-format
msgid "An audit_template with UUID or name %(audit_template)s already exists"
msgstr ""
#: watcher/common/exception.py:199
#, python-format
msgid "AuditTemplate %(audit_template)s is referenced by one or multiple audit"
msgstr ""
#: watcher/common/exception.py:184
#: watcher/common/exception.py:204
#, python-format
msgid "Audit type %(audit_type)s could not be found"
msgstr ""
#: watcher/common/exception.py:208
#, python-format
msgid "Audit %(audit)s could not be found"
msgstr ""
#: watcher/common/exception.py:188
#: watcher/common/exception.py:212
#, python-format
msgid "An audit with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:192
#: watcher/common/exception.py:216
#, python-format
msgid "Audit %(audit)s is referenced by one or multiple action plans"
msgstr ""
#: watcher/common/exception.py:197
#: watcher/common/exception.py:221
#, python-format
msgid "ActionPlan %(action_plan)s could not be found"
msgstr ""
#: watcher/common/exception.py:201
#: watcher/common/exception.py:225
#, python-format
msgid "An action plan with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:205
#: watcher/common/exception.py:229
#, python-format
msgid "Action Plan %(action_plan)s is referenced by one or multiple actions"
msgstr ""
#: watcher/common/exception.py:210
#: watcher/common/exception.py:234
#, python-format
msgid "Action %(action)s could not be found"
msgstr ""
#: watcher/common/exception.py:214
#: watcher/common/exception.py:238
#, python-format
msgid "An action with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:218
#: watcher/common/exception.py:242
#, python-format
msgid "Action plan %(action_plan)s is referenced by one or multiple goals"
msgstr ""
#: watcher/common/exception.py:223
#: watcher/common/exception.py:247
msgid "Filtering actions on both audit and action-plan is prohibited"
msgstr ""
#: watcher/common/exception.py:232
#: watcher/common/exception.py:256
#, python-format
msgid "Couldn't apply patch '%(patch)s'. Reason: %(reason)s"
msgstr ""
#: watcher/common/exception.py:239
#: watcher/common/exception.py:262
#, python-format
msgid "Workflow execution error: %(error)s"
msgstr ""
#: watcher/common/exception.py:266
msgid "Illegal argument"
msgstr ""
#: watcher/common/exception.py:243
#: watcher/common/exception.py:270
msgid "No such metric"
msgstr ""
#: watcher/common/exception.py:247
#: watcher/common/exception.py:274
msgid "No rows were returned"
msgstr ""
#: watcher/common/exception.py:251
#: watcher/common/exception.py:278
#, python-format
msgid "%(client)s connection failed. Reason: %(reason)s"
msgstr ""
#: watcher/common/exception.py:255
#: watcher/common/exception.py:282
msgid "'Keystone API endpoint is missing''"
msgstr ""
#: watcher/common/exception.py:259
#: watcher/common/exception.py:286
msgid "The list of hypervisor(s) in the cluster is empty"
msgstr ""
#: watcher/common/exception.py:263
#: watcher/common/exception.py:290
msgid "The metrics resource collector is not defined"
msgstr ""
#: watcher/common/exception.py:267
msgid "the cluster state is not defined"
#: watcher/common/exception.py:294
msgid "The cluster state is not defined"
msgstr ""
#: watcher/common/exception.py:273
#: watcher/common/exception.py:298
#, python-format
msgid "No strategy could be found to achieve the '%(goal)s' goal."
msgstr ""
#: watcher/common/exception.py:304
#, python-format
msgid "The instance '%(name)s' is not found"
msgstr ""
#: watcher/common/exception.py:277
#: watcher/common/exception.py:308
msgid "The hypervisor is not found"
msgstr ""
#: watcher/common/exception.py:281
#: watcher/common/exception.py:312
#, python-format
msgid "Error loading plugin '%(name)s'"
msgstr ""
#: watcher/common/exception.py:285
#: watcher/common/exception.py:316
#, python-format
msgid "The identifier '%(name)s' is a reserved word"
msgstr ""
#: watcher/common/service.py:83
#: watcher/common/exception.py:320
#, python-format
msgid "Created RPC server for service %(service)s on host %(host)s."
msgid "The %(name)s resource %(id)s is not soft deleted"
msgstr ""
#: watcher/common/service.py:92
#, python-format
msgid "Service error occurred when stopping the RPC server. Error: %s"
#: watcher/common/exception.py:324
msgid "Limit should be positive"
msgstr ""
#: watcher/common/service.py:97
#, python-format
msgid "Service error occurred when cleaning up the RPC manager. Error: %s"
#: watcher/common/service.py:40
msgid "Seconds between running periodic tasks."
msgstr ""
#: watcher/common/service.py:101
#, python-format
msgid "Stopped RPC server for service %(service)s on host %(host)s."
msgstr ""
#: watcher/common/service.py:106
#, python-format
#: watcher/common/service.py:43
msgid ""
"Got signal SIGUSR1. Not deregistering on next shutdown of service "
"%(service)s on host %(host)s."
"Name of this node. This can be an opaque identifier. It is not "
"necessarily a hostname, FQDN, or IP address. However, the node name must "
"be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN,"
" or IP address."
msgstr ""
#: watcher/common/utils.py:53
@@ -374,25 +477,112 @@ msgstr ""
msgid "Messaging configuration error"
msgstr ""
#: watcher/db/sqlalchemy/api.py:256
msgid ""
"Multiple audit templates exist with the same name. Please use the audit "
"template uuid instead"
#: watcher/db/purge.py:50
msgid "Goals"
msgstr ""
#: watcher/db/sqlalchemy/api.py:278
#: watcher/db/purge.py:51
msgid "Strategies"
msgstr ""
#: watcher/db/purge.py:52
msgid "Audit Templates"
msgstr ""
#: watcher/db/purge.py:53
msgid "Audits"
msgstr ""
#: watcher/db/purge.py:54
msgid "Action Plans"
msgstr ""
#: watcher/db/purge.py:55
msgid "Actions"
msgstr ""
#: watcher/db/purge.py:102
msgid "Total"
msgstr ""
#: watcher/db/purge.py:160
msgid "Audit Template"
msgstr ""
#: watcher/db/purge.py:227
#, python-format
msgid ""
"Orphans found:\n"
"%s"
msgstr ""
#: watcher/db/purge.py:306
#, python-format
msgid "There are %(count)d objects set for deletion. Continue? [y/N]"
msgstr ""
#: watcher/db/purge.py:313
#, python-format
msgid ""
"The number of objects (%(num)s) to delete from the database exceeds the "
"maximum number of objects (%(max_number)s) specified."
msgstr ""
#: watcher/db/purge.py:318
msgid "Do you want to delete objects up to the specified maximum number? [y/N]"
msgstr ""
#: watcher/db/purge.py:408
msgid "Deleting..."
msgstr ""
#: watcher/db/purge.py:414
msgid "Starting purge command"
msgstr ""
#: watcher/db/purge.py:424
msgid " (orphans excluded)"
msgstr ""
#: watcher/db/purge.py:425
msgid " (may include orphans)"
msgstr ""
#: watcher/db/purge.py:428 watcher/db/purge.py:429
#, python-format
msgid "Purge results summary%s:"
msgstr ""
#: watcher/db/purge.py:432
#, python-format
msgid "Here below is a table containing the objects that can be purged%s:"
msgstr ""
#: watcher/db/purge.py:437
msgid "Purge process completed"
msgstr ""
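Several of the purge messages above are flagged `#, python-format`: they carry printf-style named placeholders, and any translated msgstr must keep the same `%(name)s`/`%(count)d` keys or runtime formatting fails. A minimal sketch of how such a message is rendered (the count value here is made up for illustration):

```python
# A python-format msgid from the purge command above; named placeholders
# are substituted from a mapping at render time.
msgid = "There are %(count)d objects set for deletion. Continue? [y/N]"

# Any translation of this entry must reuse the same "count" key.
rendered = msgid % {"count": 3}
print(rendered)  # → There are 3 objects set for deletion. Continue? [y/N]
```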
#: watcher/db/sqlalchemy/api.py:443
msgid "Cannot overwrite UUID for an existing Goal."
msgstr ""
#: watcher/db/sqlalchemy/api.py:509
msgid "Cannot overwrite UUID for an existing Strategy."
msgstr ""
#: watcher/db/sqlalchemy/api.py:586
msgid "Cannot overwrite UUID for an existing Audit Template."
msgstr ""
#: watcher/db/sqlalchemy/api.py:683
msgid "Cannot overwrite UUID for an existing Audit."
msgstr ""
#: watcher/db/sqlalchemy/api.py:778
msgid "Cannot overwrite UUID for an existing Action."
msgstr ""
#: watcher/db/sqlalchemy/api.py:891
msgid "Cannot overwrite UUID for an existing Action Plan."
msgstr ""
#: watcher/decision_engine/sync.py:94
#, python-format
msgid "Goal %s already exists"
msgstr ""
#: watcher/decision_engine/sync.py:103
#, python-format
msgid "Strategy %s already exists"
msgstr ""
#: watcher/decision_engine/sync.py:125
#, python-format
msgid "Goal %s created"
msgstr ""
#: watcher/decision_engine/sync.py:154
#, python-format
msgid "Strategy %s created"
msgstr ""
#: watcher/decision_engine/sync.py:180
#, python-format
msgid "Audit Template '%s' synced"
msgstr ""
#: watcher/decision_engine/sync.py:225
#, python-format
msgid "Audit Template '%(audit_template)s' references a goal that does not exist"
msgstr ""
#: watcher/decision_engine/sync.py:240
#, python-format
msgid ""
"Audit Template '%(audit_template)s' references a strategy that does not "
"exist"
msgstr ""
#: watcher/decision_engine/sync.py:279
#, python-format
msgid "Goal %s unchanged"
msgstr ""
#: watcher/decision_engine/sync.py:281
#, python-format
msgid "Goal %s modified"
msgstr ""
#: watcher/decision_engine/sync.py:295
#, python-format
msgid "Strategy %s unchanged"
msgstr ""
#: watcher/decision_engine/sync.py:297
#, python-format
msgid "Strategy %s modified"
msgstr ""
#: watcher/decision_engine/model/model_root.py:38
msgid "'obj' argument type is not valid"
msgstr ""
#: watcher/decision_engine/planner/default.py:78
msgid "The action plan is empty"
msgstr ""
#: watcher/decision_engine/strategy/selection/default.py:74
#, python-format
msgid "Could not load any strategy for goal %(goal)s"
msgstr ""
#: watcher/decision_engine/strategy/strategies/base.py:165
msgid "Dummy goal"
msgstr ""
#: watcher/decision_engine/strategy/strategies/base.py:188
msgid "Unclassified"
msgstr ""
#: watcher/decision_engine/strategy/strategies/base.py:204
msgid "Server consolidation"
msgstr ""
#: watcher/decision_engine/strategy/strategies/base.py:220
msgid "Thermal optimization"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:119
msgid "Basic offline consolidation"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:343
#, python-format
msgid "No values returned by %(resource_id)s for %(metric_name)s"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:456
msgid "Initializing Sercon Consolidation"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:500
msgid "The workloads of the compute nodes of the cluster is zero"
msgstr ""
#: watcher/decision_engine/strategy/strategies/dummy_strategy.py:74
msgid "Dummy strategy"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:102
msgid "Outlet temperature based strategy"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:156
#, python-format
msgid "%s: no outlet temp data"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:181
#, python-format
msgid "VM not active, skipped: %s"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:239
msgid "No hosts under outlet temp threshold found"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:262
msgid "No proper target host could be found"
msgstr ""
#: watcher/decision_engine/strategy/strategies/vm_workload_consolidation.py:100
msgid "VM Workload Consolidation Strategy"
msgstr ""
#: watcher/decision_engine/strategy/strategies/vm_workload_consolidation.py:128
#, python-format
msgid "Unexpexted resource state type, state=%(state)s, state_type=%(st)s."
msgstr ""
#: watcher/decision_engine/strategy/strategies/vm_workload_consolidation.py:180
#, python-format
msgid "Cannot live migrate: vm_uuid=%(vm_uuid)s, state=%(vm_state)s."
msgstr ""
#: watcher/decision_engine/strategy/strategies/vm_workload_consolidation.py:264
#, python-format
msgid "No values returned by %(resource_id)s for memory.usage or disk.root.size"
msgstr ""
#: watcher/decision_engine/strategy/strategies/vm_workload_consolidation.py:515
msgid "Executing Smart Strategy"
msgstr ""
#: watcher/objects/base.py:70
#, python-format
msgid "Error setting %(attr)s"
msgstr ""

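Every msgstr in this catalog is still empty, so at lookup time gettext falls back to returning the English msgid unchanged. A minimal sketch of that fallback behaviour using only the standard library (no compiled `.mo` catalog is assumed):

```python
import gettext

# With no .mo catalog installed, NullTranslations returns the msgid
# as-is -- the same behaviour as an untranslated entry above.
t = gettext.NullTranslations()
print(t.gettext("Limit should be positive"))  # → Limit should be positive
```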

@@ -19,16 +19,14 @@
 from oslo_config import cfg
 from oslo_log import log

 from watcher.common import ceilometer_helper
-from watcher.metrics_engine.cluster_history import api
+from watcher.metrics_engine.cluster_history import base

 CONF = cfg.CONF
 LOG = log.getLogger(__name__)


-class CeilometerClusterHistory(api.BaseClusterHistory):
+class CeilometerClusterHistory(base.BaseClusterHistory):

     def __init__(self, osc=None):
         """:param osc: an OpenStackClients instance"""
         super(CeilometerClusterHistory, self).__init__()

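The hunk above only renames the module that provides `BaseClusterHistory` (`api` to `base`); the subclass body is unchanged. A hypothetical sketch of the pattern, with a stubbed-in base class since the real `watcher.metrics_engine.cluster_history.base` interface is not shown in this diff:

```python
# Hypothetical stand-in for watcher.metrics_engine.cluster_history.base;
# the real BaseClusterHistory interface is not shown here.
class BaseClusterHistory(object):
    def __init__(self):
        self.osc = None


class CeilometerClusterHistory(BaseClusterHistory):
    def __init__(self, osc=None):
        """:param osc: an OpenStackClients instance"""
        super(CeilometerClusterHistory, self).__init__()
        self.osc = osc


# The subclass still satisfies the base-class contract after the rename.
history = CeilometerClusterHistory()
print(isinstance(history, BaseClusterHistory))  # → True
```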