Compare commits


67 Commits

Author SHA1 Message Date
Vincent Françoise
de7b0129a1 Doc on how to set up a thirdparty project
This documentation is a pre-requisite to all plugin documentation
as it guides you through the creation of a project from scratch
instead of simply focusing on the implementation of the plugin
itself.

Change-Id: Id2e09b3667390ee6c4be42454c41f9d266fdfac2
Related-Bug: #1534639
Related-Bug: #1533739
Related-Bug: #1533740
2016-03-04 12:07:41 +01:00
Jenkins
323fd01a85 Merge "Updated Strategy plugin doc" 2016-03-04 08:27:36 +00:00
Jenkins
c0133e6585 Merge "Remove tests omission from coverage target in tox.ini" 2016-03-04 08:25:30 +00:00
Gábor Antal
63fffeacd8 Remove tests omission from coverage target in tox.ini
In coverage target in tox.ini, there is the following argument:
  --omit="watcher/tests/*"

However, in .coveragerc, the tests are also omitted,
according to line 4 in .coveragerc:
  omit = watcher/tests/*

So the watcher/tests/* directory is omitted twice. As can be
seen in other modules (e.g. swift, nova), omitting it only in
.coveragerc should be enough.

Change-Id: I72951196a346fb73a90c998138fc91dd171432cd
Closes-Bug: #1552617
2016-03-04 07:36:26 +00:00
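The intended end state can be sketched as follows (file contents assumed from the commit message, not shown in this view): the omission lives only in .coveragerc, and the tox coverage target no longer passes --omit:

  # .coveragerc (the only place where tests are omitted)
  [run]
  omit = watcher/tests/*

  # tox.ini coverage target (sketch; exact command assumed)
  [testenv:cover]
  commands = python setup.py testr --coverage --testr-args='{posargs}'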
Jenkins
bc791f0e75 Merge "add Goal into RESTful Web API (v1) documentation" 2016-03-03 16:45:20 +00:00
David TARDIVEL
6ed417e6a7 add Goal into RESTful Web API (v1) documentation
Change-Id: I94ba5debf6f0ff9562dc95cbc5c646f33881d9e2
Closes-Bug: #1546630
2016-03-03 16:02:56 +00:00
Vincent Françoise
4afefa3dfb Updated Strategy plugin doc
As we modified the way a strategy gets implemented in
blueprint watcher-add-actions-via-conf, this patchset updates the
documentation regarding the implementation of a strategy plugin.

Change-Id: I517455bc34623feff704956ce30ed545a0e1014b
Closes-Bug: #1533740
2016-03-03 16:18:33 +01:00
Jenkins
9fadfbe40a Merge "Doc on how to implement a custom Watcher action" 2016-03-03 15:12:21 +00:00
Jenkins
f278874a93 Merge "Add Watcher dashboard to the list of projects" 2016-03-03 14:34:20 +00:00
Jenkins
1c5b247300 Merge "Doc on how to implement a custom Watcher planner" 2016-03-03 14:19:58 +00:00
Jenkins
547bf5f87e Merge "Improve DevStack documentation for beginners" 2016-03-03 14:19:21 +00:00
Vincent Françoise
96c0ac0ca8 Doc on how to implement a custom Watcher planner
This documentation describes step-by-step the process for implementing
a new planner in Watcher.

Change-Id: I8addba53de69be93730924a58107687020c19c74
Closes-Bug: #1533739
2016-03-03 14:09:41 +00:00
Antoine Cabot
a80fd2a51e Add Watcher dashboard to the list of projects
We provide a list of Watcher related projects
in the generated doc. This commit adds links to
the Watcher dashboard project in various locations.

Change-Id: I1993cd5a11d3fcc8ca2c40b1779700359adab4ea
2016-03-03 13:47:24 +00:00
Vincent Françoise
58ea85c852 Doc on how to implement a custom Watcher action
This documentation describes step-by-step the process for implementing
a new action in Watcher.

Change-Id: I978b81cdf9ac6dcf43eb3ecbb79ab64ae4fd6f72
Closes-Bug: #1534639
2016-03-02 14:46:37 +01:00
Jenkins
78f122f241 Merge "RST directive to discover and generate drivers doc" 2016-03-01 23:09:18 +00:00
Jenkins
43eb997edb Merge "Updated Watcher doc to mention Tempest tests" 2016-03-01 22:58:24 +00:00
Taylor Peoples
b5bccba169 Improve DevStack documentation for beginners
The DevStack documentation should provide basic steps for setting up
Watcher with DevStack assuming the reader has no previous knowledge of
DevStack.

Change-Id: I830b1d9accb0e65bba73944697cba9c53ac3263e
Closes-Bug: #1538291
2016-03-01 17:25:24 +01:00
Jenkins
e058437ae0 Merge "Replace "Triggered" state by "Pending" state" 2016-03-01 15:48:33 +00:00
Jenkins
1acacaa812 Merge "Added support for live migration on non-shared storage" 2016-03-01 09:31:50 +00:00
Daniel Pawlik
5bb1b6cbf0 Added support for live migration on non-shared storage
Watcher applier should be able to live migrate instances on any storage
type. To do this watcher will catch error 400 returned from nova if we
try to live migrate instance which is not on shared storage and live
migrate instance using block_migrate.

Added unit tests, changed action in watcher applier.

Closes-bug: #1549307

Change-Id: I97e583c9b4a0bb9daa1d39e6d652d6474a5aaeb1
2016-03-01 08:35:14 +00:00
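The fallback described in the commit above can be sketched as follows; the client interface and exception type are simplified stand-ins for the real novaclient API, not Watcher's actual code:

```python
class BadRequest(Exception):
    """Stand-in for the HTTP 400 error the compute API raises."""


def live_migrate_with_fallback(client, instance_id, dest_host):
    """Try a shared-storage live migration, falling back to block migration.

    Nova returns HTTP 400 when the instance's disks are not on shared
    storage; in that case we retry with block_migration=True so the disks
    are copied over the network.
    """
    try:
        client.live_migrate(instance_id, dest_host, block_migration=False)
        return "live"
    except BadRequest:
        client.live_migrate(instance_id, dest_host, block_migration=True)
        return "block"
```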
Vincent Françoise
de058d7ed1 Updated Watcher doc to mention Tempest tests
The Watcher Tempest tests are only mentioned inside a README.rst.
They are now part of the main documentation.

Change-Id: Ieca85dc7f7307b45e4b99af4a4600a8c2d2b59d7
Closes-Bug: #1536993
2016-02-29 17:42:21 +01:00
Vincent Françoise
98a65efb16 RST directive to discover and generate drivers doc
This patchset introduces a new custom directive called 'drivers-doc'
which loads all available drivers under a given namespace and imports
their respective docstrings into the .rst document.

This patchset also contains some modifications/additions to the
docstrings of these drivers to make the final document complete.

Change-Id: Ib3df59fa45cea9d11d20fb73a5f0f1d564135bca
Closes-Bug: #1536218
Closes-Bug: #1536735
2016-02-29 16:34:44 +01:00
Tin Lam
338539ec53 Rename 'TRIGGERED' state as 'PENDING'
"TRIGGERED" is not the correct word to use to describe the action
state; we should use "PENDING" instead.

Change-Id: If24979cdb916523861324f7bcc024e2f1fc28b05
Closes-Bug: #1548377
2016-02-26 23:07:59 -06:00
Jenkins
02f0f8e70a Merge "Add missing requirements" 2016-02-26 17:06:55 +00:00
Jenkins
7f1bd20a09 Merge "Add start directory for oslo_debug_helper" 2016-02-26 17:04:03 +00:00
Jenkins
d3d2a5ef8c Merge "Updated from global requirements" 2016-02-26 17:03:57 +00:00
Jenkins
3a6ae820c0 Merge "Fixed typo in get_audit_template_by_name method" 2016-02-26 17:03:51 +00:00
Jenkins
5a8860419e Merge "Cleanup in tests/__init__.py" 2016-02-26 16:59:08 +00:00
Steve Wilkerson
4aa1c7558b Fixed typo in get_audit_template_by_name method
Removed an extra underscore in the
get_audit_template_by__name method name in
watcher/db/api.py

Change-Id: I2687858ff4510c626c4dd2e2e9a5701405b5da55
Closes-Bug: #1548765
2016-02-26 08:25:42 -06:00
OpenStack Proposal Bot
a8dab52376 Updated from global requirements
Change-Id: Id68a055e6514ea30e28ef6adcfb22bb2ff3eeafb
2016-02-26 01:54:52 +00:00
Gábor Antal
5615d0523d Cleanup in tests/__init__.py
watcher/tests/__init__.py has a totally unused,
misleading class, so I removed it.

Change-Id: Ib878252453489eb3e1b1ff06f4d6b5e2b0726be5
Closes-Bug: #1549920
2016-02-25 19:10:54 +01:00
Jenkins
10823ce133 Merge "Cleanup in test_objects.py" 2016-02-25 16:06:02 +00:00
Jenkins
18c098c4c1 Merge "Update nova service state" 2016-02-25 08:30:40 +00:00
Jenkins
db649d86b6 Merge "Useless return statement in validate_sort_dir" 2016-02-24 16:49:01 +00:00
Tin Lam
6e380b685b Update nova service state
The primitive ChangeNovaServiceState allows us to change the state of
the nova-compute service by calling the Nova API. The state of a
nova-compute service can be ENABLED or DISABLED; however, in the current
implementation we use OFFLINE and ONLINE.

Update the code to use ENABLED or DISABLED.

Change-Id: If3d9726bc5ae980b66c7fd4c5b7986f89d8bc690
Closes-Bug: #1523891
2016-02-23 00:46:30 -06:00
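The change can be sketched as mapping the requested state onto Nova's enable/disable service calls; the client interface below is a simplified, hypothetical stand-in, not Watcher's actual implementation:

```python
def change_nova_service_state(client, host, state):
    """Enable or disable the nova-compute service on a host.

    The state is expressed with Nova's own vocabulary (ENABLED/DISABLED)
    rather than the previous ONLINE/OFFLINE wording.
    """
    if state == "ENABLED":
        client.services.enable(host, "nova-compute")
    elif state == "DISABLED":
        client.services.disable(host, "nova-compute")
    else:
        raise ValueError("state must be ENABLED or DISABLED")
```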
Antoine Cabot
8dfff0e8e6 Replace "Triggered" state by "Pending" state
This commit only applies to documentation; the state name
must be updated in the code accordingly.

Change-Id: I027fd55d968c12992d800de3657543be417c71b0
Related-Bug: #1548377
2016-02-22 17:33:41 +01:00
Chaozhe.Chen
fbc7da755a Add start directory for oslo_debug_helper
When I used `tox -e debug <test-name>` to test a case, the case did not
run properly and there was an error like this:

ImportError: Start directory is not importable: './python-watcher/tests'

If we use '-t watcher/tests' to point out the test path, the case will
run properly under debug.
So we'd better add this start directory for oslo_debug_helper.

Change-Id: I04d9937f72a95f8f045129af08df0cd0d0870d39
2016-02-23 00:01:18 +08:00
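The resulting tox target can be sketched as follows (layout assumed from the commit message; the diff itself is not shown here):

  [testenv:debug]
  commands = oslo_debug_helper -t watcher/tests {posargs}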
Chaozhe.Chen
b947c30910 Add missing requirements
WebOb is needed by our code [1].

[1]https://github.com/openstack/watcher/blob/master/watcher/api/
middleware/parsable_error.py#L70

Change-Id: I6dd3940057368ff70656dd40c75ec59284f378bf
2016-02-22 21:14:25 +08:00
OpenStack Proposal Bot
9af96114af Updated from global requirements
Change-Id: I28d515eef452dd715a28e3fb4c177fc93f307c79
2016-02-20 22:02:12 +00:00
Vincent Françoise
1ddf69a68f Re-enable related Tempest test
Following the previous 2 patchsets
https://review.openstack.org/#/c/279517/ and
https://review.openstack.org/#/c/279555/, this patchset re-enables
the related Tempest test which filters goals, while removing the one
that was filtering by host aggregate.

Change-Id: I384e62320de34761e29d5cbac37ddc8ae253a70c
Closes-Bug: #1510189
2016-02-19 14:32:42 +00:00
Jenkins
379ac791a8 Merge "Pass parameter to the query in get_last_sample_values" 2016-02-19 10:24:36 +00:00
Jenkins
81ea37de41 Merge "Remove unused function and argument" 2016-02-19 01:01:10 +00:00
Gábor Antal
1c963fdc96 Useless return statement in validate_sort_dir
In watcher/api/controllers/v1/utils.py, the
validate_sort_dir method has a return statement;
however, the return value is exactly the parameter's value.

This is misleading, so I removed it.

Change-Id: I18c5c7853a5afedac88431347712a4348c9fd5dd
Closes-Bug: #1546917
2016-02-18 17:56:16 +01:00
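The before/after can be sketched as follows (simplified, hypothetical code rather than the exact Watcher implementation):

```python
# Before: the return value merely echoes the validated input, which
# suggests the caller needs it when it does not.
def validate_sort_dir_old(sort_dir):
    if sort_dir not in ("asc", "desc"):
        raise ValueError("sort_dir must be 'asc' or 'desc'")
    return sort_dir


# After: validation either passes silently or raises.
def validate_sort_dir(sort_dir):
    if sort_dir not in ("asc", "desc"):
        raise ValueError("sort_dir must be 'asc' or 'desc'")
```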
Gábor Antal
f32995228b Pass parameter to the query in get_last_sample_values
In watcher/common/ceilometer_help.py:129,
there is an unused parameter called limit:
  def get_last_sample_values(self, resource_id, meter_name, limit=1):

In the next line, there is a method call which can take this
currently unused 'limit' parameter. Passing this 'limit' parameter
to the query was probably intended originally; however, it wasn't
added to the query call.

Closes-Bug: #1541415
Change-Id: I025070c6004243d6b8a6ea7a1d83081480c4148b
2016-02-18 13:55:02 +01:00
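The fix can be sketched as follows; the query helper and client call are simplified, hypothetical stand-ins for the real ceilometer helper:

```python
def get_last_sample_values(client, resource_id, meter_name, limit=1):
    """Return the most recent samples for a meter on one resource.

    The 'limit' parameter is now forwarded to the query instead of
    being silently ignored.
    """
    query = [{"field": "resource_id", "op": "eq", "value": resource_id}]
    return client.samples.list(meter_name=meter_name, q=query, limit=limit)
```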
Jenkins
0ec3d68994 Merge "Added goal filter in Watcher API" 2016-02-18 10:26:38 +00:00
Jenkins
3503e11506 Merge "Improve variable names in strategy implementations" 2016-02-18 10:22:50 +00:00
Jenkins
c7f0ef37d0 Merge "Added unit tests on actions" 2016-02-18 08:42:46 +00:00
Béla Vancsics
d93b1ffe9f Remove unused function and argument
Removed an unused function and a function argument from the code.

Change-Id: Ib7afa5d868c3c7769f53e45c270850e4c3370f86
2016-02-18 09:17:43 +01:00
Vincent Françoise
4e71a0c655 Added goal filter in Watcher API
Although it was proposed via python-watcherclient, the feature was
not implemented in the Watcher API.
As the notion of host aggregate is currently unused in Watcher, the
decision was made to only implement the filtering of goals within
the Watcher API whilst removing the host_aggregate filter from the
Watcher client.
Thus, this patchset adds this missing functionality by adding the
'goal' parameter to the API.

Change-Id: I54d248f7e470249c6412650ddf50a3e3631d2a09
Related-Bug: #1510189
2016-02-17 18:22:51 +01:00
Steve Wilkerson
37dd713ed5 Improve variable names in strategy implementations
Renamed many of the variables and method parameters
in the strategy implementations to make the names
more meaningful.  Also changed the abstract method
signature in base.py to reflect these changes.

Closes-Bug: #1541615

Change-Id: Ibeba6c6ef6d5b70482930f387b05d5d650812355
2016-02-16 09:15:14 -06:00
Vincent Françoise
55aeb783e3 Added unit tests on actions
As we had low test coverage on actions, I added some more tests
with this patchset. This actually revealed a small bug (typo) in
"change_nova_service_state" which has been fixed here.

Note that the Tempest tests also cover these actions via the
basic_consolidation strategy.

Change-Id: I2d7116a6fdefee82ca254512a9cf50fc61e3c80e
Closes-Bug: #1523513
2016-02-16 11:42:37 +01:00
Jenkins
fe3f6e73be Merge "Remove KEYSTONE_CATALOG_BACKEND from DevStack plugin" 2016-02-16 08:24:54 +00:00
Jean-Emile DARTOIS
5baff7dc3e Clean imports in code
In some parts of the code we import objects.
The OpenStack style guidelines recommend
importing only modules.
We need to fix that.

Change-Id: I4bfee2b94d101940d615f78f9bebb83310ed90ba
Partial-Bug: #1543101
2016-02-15 18:04:24 +01:00
Jean-Emile DARTOIS
e3198d25a5 Add Voluptuous to validate the action parameters
We want a simple way to validate the input parameters of an
Action through a schema.

APIImpact
DocImpact
Partially implements: blueprint watcher-add-actions-via-conf

Change-Id: I139775f467fe7778c7354b0cfacf796fc27ffcb2
2016-02-12 17:47:52 +01:00
Jenkins
33ee575936 Merge "Better cleanup for Tempest tests" 2016-02-12 16:04:26 +00:00
Darren Shaw
259f2562e6 Remove KEYSTONE_CATALOG_BACKEND from DevStack plugin
Via Sean Dague on the mailing list:
In Newton, the KEYSTONE_CATALOG_BACKEND variable will be removed.
This commit removes the if check and the variable from Watcher.

Change-Id: I73c5dd25feff9ba9824c267a8817a49e4ad3a06a
Closes-Bug: #1544433
2016-02-12 09:51:51 -06:00
Jenkins
1629247413 Merge "Update the default version of Neutron API" 2016-02-12 15:41:34 +00:00
Jenkins
58d84aca6d Merge "Ceilometer client instantiation fixup" 2016-02-12 10:00:33 +00:00
Gábor Antal
236879490d Cleanup in test_objects.py
In watcher/tests/objects/test_objects.py, there is a class
called "_TestObject(object)" which is probably an older test class.

This test class runs never (as the name starts with "_"
and inherits from object), and has some really old test,
like test_orphaned_object method, which is testing an exception
that doesn't exist at the current codebase.

Change-Id: I7559a004e8c136a206fc1cf7ac330c7d4157f94f
Closes-Bug: #1544685
2016-02-11 19:29:16 +01:00
Vincent Françoise
79850cc89c Better cleanup for Tempest tests
When running our Tempest tests, we are now cleaning up all the
objects we have created within the WatcherDB via a soft_delete.

Partially Implements: blueprint deletion-of-actions-plan

Change-Id: Ibdcfd2be37094377d09ad77d5c20298ee2baa4d0
2016-02-11 18:36:16 +01:00
Vincent Françoise
b958214db8 Ceilometer client instantiation fixup
A problem was found during manual integration tests, which were failing
because we couldn't instantiate the ceilometer client when trying to
execute an action plan using the 'basic_consolidation' strategy.

This patchset fixes the problem with an update of the related tests.
Change-Id: I2b1f1dcc16fd8dfbf508c4d5661c1fce194254e4
Closes-Bug: #1544652
2016-02-11 18:29:58 +01:00
Jenkins
a0b5f5aa1d Merge "Delete linked actions when deleting an action plan" 2016-02-11 09:39:33 +00:00
David TARDIVEL
a4a009a2c6 Update the default version of Neutron API
Default value of Neutron API should be 2.0 in neutronclient.
Closes-Bug: #1544134

Change-Id: I241c067c33c992da53f30e974ffca3edb466811d
2016-02-10 17:04:34 +01:00
Taylor Peoples
0ba8a35ade Sync with openstack/requirements master branch
In order to accept the requirements contract defined in
openstack/requirements, we need to sync with the master branch of that
project's global-requirements.txt and test-requirements.txt files.

Most of the changes are just version bumps. The
only major change is:

1) The twine dependency is removed, and therefore the pypi tox
environment was also removed.

Change-Id: Idbe9e73ddc5a34ac49aa6f6eff0779d46a75f583
Closes-Bug: #1533282
2016-02-09 16:31:07 +00:00
Vincent Françoise
8bcc1b2097 Delete linked actions when deleting an action plan
When a user deletes an action plan, we now delete all related actions.

Partially Implements: blueprint deletion-of-actions-plan

Change-Id: I5d519c38458741be78591cbec04dbd410a6dc14b
2016-02-08 17:04:37 +01:00
Jenkins
b440f5c69a Merge "Add IRC information into contributing page" 2016-02-05 17:19:48 +00:00
David TARDIVEL
858bbbf126 Add IRC information into contributing page
Add Watcher IRC channels: #openstack-watcher and
 #openstack-meeting-4

Change-Id: I4c128544a32548065fddde54b28f645fc63ebfd5
Closes-Bug: #1536200
2016-02-05 15:53:21 +00:00
98 changed files with 3028 additions and 1833 deletions


@@ -96,15 +96,13 @@ function configure_watcher {
function create_watcher_accounts {
create_service_user "watcher" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local watcher_service=$(get_or_create_service "watcher" \
"infra-optim" "Watcher Infrastructure Optimization Service")
get_or_create_endpoint $watcher_service \
"$REGION_NAME" \
"$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT" \
"$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT" \
"$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT"
fi
local watcher_service=$(get_or_create_service "watcher" \
"infra-optim" "Watcher Infrastructure Optimization Service")
get_or_create_endpoint $watcher_service \
"$REGION_NAME" \
"$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT" \
"$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT" \
"$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT"
}
# create_watcher_conf() - Create a new watcher.conf file


@@ -127,7 +127,19 @@ Watcher CLI
The watcher command-line interface (CLI) can be used to interact with the
Watcher system in order to control it or to know its current status.
Please, read `the detailed documentation about Watcher CLI <https://factory.b-com.com/www/watcher/doc/python-watcherclient/>`_
Please, read `the detailed documentation about Watcher CLI
<https://factory.b-com.com/www/watcher/doc/python-watcherclient/>`_.
.. _archi_watcher_dashboard_definition:
Watcher Dashboard
-----------------
The Watcher Dashboard can be used to interact with the Watcher system through
Horizon in order to control it or to know its current status.
Please, read `the detailed documentation about Watcher Dashboard
<https://factory.b-com.com/www/watcher/doc/watcher-dashboard/>`_.
.. _archi_watcher_database_definition:


@@ -34,6 +34,8 @@ The Watcher service includes the following components:
- ``watcher-applier``: applies the action plan.
- `python-watcherclient`_: A command-line interface (CLI) for interacting with
the Watcher service.
- `watcher-dashboard`_: A Horizon plugin for interacting with the Watcher
service.
Additionally, the Bare Metal service has certain external dependencies, which
are very similar to other OpenStack services:
@@ -52,6 +54,7 @@ additional functionality:
.. _`ceilometer`: https://github.com/openstack/ceilometer
.. _`nova`: https://github.com/openstack/nova
.. _`python-watcherclient`: https://github.com/openstack/python-watcherclient
.. _`watcher-dashboard`: https://github.com/openstack/watcher-dashboard
.. _`watcher metering`: https://github.com/b-com/watcher-metering
.. _`RabbitMQ`: https://www.rabbitmq.com/


@@ -38,7 +38,11 @@ If you need help on a specific command, you can use:
$ watcher help COMMAND
If you want to deploy Watcher in Horizon, please refer to the `Watcher Horizon
plugin installation guide`_.
.. _`installation guide`: https://factory.b-com.com/www/watcher/doc/python-watcherclient
.. _`Watcher Horizon plugin installation guide`: https://factory.b-com.com/www/watcher/doc/watcher-dashboard/deploy/installation.html
Seeing what the Watcher CLI can do?
------------------------------------


@@ -60,3 +60,12 @@ Code Hosting
Code Review
https://review.openstack.org/#/q/status:open+project:openstack/watcher,n,z
IRC Channel
``#openstack-watcher`` (changelog_)
Weekly Meetings
on Wednesdays at 14:00 UTC in the ``#openstack-meeting-4`` IRC
channel (`meetings logs`_)
.. _changelog: http://eavesdrop.openstack.org/irclogs/%23openstack-watcher/
.. _meetings logs: http://eavesdrop.openstack.org/meetings/watcher/


@@ -9,35 +9,97 @@ Set up a development environment via DevStack
=============================================
Watcher is currently able to optimize compute resources - specifically Nova
compute hosts - via operations such as live migrations. In order for you to
fully be able to exercise what Watcher can do, it is necessary to have a
multinode environment to use. If you have no experience with DevStack, you
should check out the `DevStack documentation`_ and be comfortable with the
basics of DevStack before attempting to get a multinode DevStack setup with
the Watcher plugin.
multinode environment to use.
You can set up the Watcher services quickly and easily using a Watcher
DevStack plugin. See `PluginModelDocs`_ for information on DevStack's plugin
model.
.. _DevStack documentation: http://docs.openstack.org/developer/devstack/
.. _PluginModelDocs: http://docs.openstack.org/developer/devstack/plugins.html
It is recommended that you build off of the provided example local.conf files
(`local.conf.controller`_, `local.conf.compute`_). You'll likely want to
configure something to obtain metrics, such as Ceilometer. Ceilometer is used
in the example local.conf files.
To configure the Watcher services with DevStack, add the following to the
DevStack plugin. See `PluginModelDocs`_ for information on DevStack's plugin
model. To enable the Watcher plugin with DevStack, add the following to the
`[[local|localrc]]` section of your controller's `local.conf` to enable the
Watcher plugin::
enable_plugin watcher git://git.openstack.org/openstack/watcher
Then run devstack normally::
For more detailed instructions, see `Detailed DevStack Instructions`_. Check
out the `DevStack documentation`_ for more information regarding DevStack.
cd /opt/stack/devstack
./stack.sh
.. _PluginModelDocs: http://docs.openstack.org/developer/devstack/plugins.html
.. _DevStack documentation: http://docs.openstack.org/developer/devstack/
Detailed DevStack Instructions
==============================
#. Obtain N (where N >= 1) servers (virtual machines preferred for DevStack).
One of these servers will be the controller node while the others will be
compute nodes. N is preferably >= 3 so that you have at least 2 compute
nodes, but in order to stand up the Watcher services only 1 server is
needed (i.e., no computes are needed if you want to just experiment with
the Watcher services). These servers can be VMs running on your local
machine via VirtualBox if you prefer. DevStack currently recommends that
you use Ubuntu 14.04 LTS. The servers should also have connections to the
same network such that they are all able to communicate with one another.
#. For each server, clone the DevStack repository and create the stack user::
sudo apt-get update
sudo apt-get install git
git clone https://git.openstack.org/openstack-dev/devstack
sudo ./devstack/tools/create-stack-user.sh
Now you have a stack user that is used to run the DevStack processes. You
may want to give your stack user a password to allow SSH via a password::
sudo passwd stack
#. Switch to the stack user and clone the DevStack repo again::
sudo su stack
cd ~
git clone https://git.openstack.org/openstack-dev/devstack
#. For each compute node, copy the provided `local.conf.compute`_ example file
to the compute node's system at ~/devstack/local.conf. Make sure the
HOST_IP and SERVICE_HOST values are changed appropriately - i.e., HOST_IP
is set to the IP address of the compute node and SERVICE_HOST is set to the
IP address of the controller node.
If you need specific metrics collected (or want to use something other
than Ceilometer), be sure to configure it. For example, in the
`local.conf.compute`_ example file, the appropriate ceilometer plugins and
services are enabled and disabled. If you were using something other than
Ceilometer, then you would likely want to configure it likewise. The
example file also sets the compute monitors nova configuration option to
use the CPU virt driver. If you needed other metrics, it may be necessary
to configure similar configuration options for the projects providing those
metrics.
#. For the controller node, copy the provided `local.conf.controller`_ example
file to the controller node's system at ~/devstack/local.conf. Make sure
the HOST_IP value is changed appropriately - i.e., HOST_IP is set to the IP
address of the controller node.
Note: if you want to use another Watcher git repository (such as a local
one), then change the enable plugin line::
enable_plugin watcher <your_local_git_repo> [optional_branch]
If you do this, then the Watcher DevStack plugin will try to pull the
python-watcherclient repo from <your_local_git_repo>/../, so either make
sure that is also available or specify WATCHERCLIENT_REPO in the local.conf
file.
Note: if you want to use a specific branch, specify WATCHER_BRANCH in the
local.conf file. By default it will use the master branch.
#. Start stacking from the controller node::
./devstack/stack.sh
#. Start stacking on each of the compute nodes using the same command.
#. Configure the environment for live migration via NFS. See the
`Multi-Node DevStack Environment`_ section for more details.
.. _local.conf.controller: https://github.com/openstack/watcher/tree/master/devstack/local.conf.controller
.. _local.conf.compute: https://github.com/openstack/watcher/tree/master/devstack/local.conf.compute
@@ -104,7 +166,6 @@ Restart the libvirt service::
sudo service libvirt-bin restart
Setting up SSH keys between compute nodes to enable live migration
------------------------------------------------------------------
@@ -113,8 +174,8 @@ each compute node:
1. The SOURCE root user's public RSA key (likely in /root/.ssh/id_rsa.pub)
needs to be in the DESTINATION stack user's authorized_keys file
(~stack/.ssh/authorized_keys). This can be accomplished by manually
copying the contents from the file on the SOURCE to the DESTINATION. If
you have a password configured for the stack user, then you can use the
following command to accomplish the same thing::
@@ -122,7 +183,7 @@ each compute node:
2. The DESTINATION host's public ECDSA key (/etc/ssh/ssh_host_ecdsa_key.pub)
needs to be in the SOURCE root user's known_hosts file
(/root/.ssh/known_hosts). This can be accomplished by running the
following on the SOURCE machine (hostname must be used)::
ssh-keyscan -H DEST_HOSTNAME | sudo tee -a /root/.ssh/known_hosts
@@ -131,3 +192,13 @@ In essence, this means that every compute node's root user's public RSA key
must exist in every other compute node's stack user's authorized_keys file and
every compute node's public ECDSA key needs to be in every other compute
node's root user's known_hosts file.
Environment final checkup
-------------------------
If you are willing to make sure everything is in order in your DevStack
environment, you can run the Watcher Tempest tests which will validate its API
but also that you can perform the typical Watcher workflows. To do so, have a
look at the :ref:`Tempest tests <tempest_tests>` section which will explain to
you how to run them.


@@ -4,6 +4,8 @@
https://creativecommons.org/licenses/by/3.0/
.. _watcher_developement_environment:
=========================================
Set up a development environment manually
=========================================
@@ -143,34 +145,13 @@ You should then be able to `import watcher` using Python without issue:
If you can import watcher without a traceback, you should be ready to develop.
Run Watcher unit tests
======================
Run Watcher tests
=================
All unit tests should be run using tox. To run the unit tests under py27 and
also run the pep8 tests:
Watcher provides both :ref:`unit tests <unit_tests>` and
:ref:`functional/tempest tests <tempest_tests>`. Please refer to :doc:`testing`
to understand how to run them.
.. code-block:: bash
$ workon watcher
(watcher) $ pip install tox
(watcher) $ cd watcher
(watcher) $ tox -epep8 -epy27
You may pass options to the test programs using positional arguments. To run a
specific unit test, this passes the -r option and desired test (regex string)
to os-testr:
.. code-block:: bash
$ workon watcher
(watcher) $ tox -epy27 -- tests.api
When you're done, deactivate the virtualenv:
.. code-block:: bash
$ deactivate
Build the Watcher documentation
===============================
@@ -269,6 +250,11 @@ interface.
.. _`python-watcherclient`: https://github.com/openstack/python-watcherclient
There is also a Horizon plugin for Watcher, `watcher-dashboard`_, which
allows you to interact with Watcher through a web-based interface.
.. _`watcher-dashboard`: https://github.com/openstack/watcher-dashboard
Exercising the Watcher Services locally
=======================================


@@ -0,0 +1,159 @@
..
Except where otherwise noted, this document is licensed under Creative
Commons Attribution 3.0 License. You can view the license at:
https://creativecommons.org/licenses/by/3.0/
==================
Build a new action
==================
Watcher Applier has an external :ref:`action <action_definition>` plugin
interface which gives anyone the ability to integrate an external
:ref:`action <action_definition>` in order to extend the initial set of actions
Watcher provides.
This section gives some guidelines on how to implement and integrate custom
actions with Watcher.
Creating a new plugin
=====================
First of all, you have to extend the base :py:class:`BaseAction` class, which
defines a set of abstract methods and/or properties that you will have to
implement:
- The :py:attr:`~.BaseAction.schema` is an abstract property that you have to
implement. This is the first function to be called by the
:ref:`applier <watcher_applier_definition>` before any further processing
and its role is to validate the input parameters that were provided to it.
- The :py:meth:`~.BaseAction.precondition` is called before the execution of
an action. This method is a hook that can be used to perform some
initializations or to make some more advanced validation on its input
parameters. If you wish to block the execution based on this factor, you
simply have to ``raise`` an exception.
- The :py:meth:`~.BaseAction.postcondition` is called after the execution of
an action. As this function is called regardless of whether an action
succeeded or not, this can prove itself useful to perform cleanup
operations.
- The :py:meth:`~.BaseAction.execute` is the main component of an action.
This is where you should implement the logic of your action.
- The :py:meth:`~.BaseAction.revert` allows you to roll back the targeted
resource to its original state following a faulty execution. Indeed, this
method is called by the workflow engine whenever an action raises an
exception.
Here is an example showing how you can write a plugin called ``DummyAction``:

.. code-block:: python

    # Filepath = <PROJECT_DIR>/thirdparty/dummy.py
    # Import path = thirdparty.dummy
    from voluptuous import Schema

    from watcher.applier.actions import base


    class DummyAction(base.BaseAction):

        @property
        def schema(self):
            return Schema({})

        def execute(self):
            # Does nothing
            pass  # Only returning False is considered as a failure

        def revert(self):
            # Does nothing
            pass

        def precondition(self):
            # No pre-checks are done here
            pass

        def postcondition(self):
            # Nothing done here
            pass
This implementation is the most basic one, so if you want more advanced
examples, have a look at the implementations of the actions already provided
by Watcher.

To get a better understanding of how to implement a more advanced action,
have a look at the :py:class:`~watcher.applier.actions.migration.Migrate`
class.
Abstract Plugin Class
=====================
Here below is the abstract ``BaseAction`` class that every single action
should implement:

.. autoclass:: watcher.applier.actions.base.BaseAction
    :members:
    :noindex:

    .. py:attribute:: schema

        Defines a Schema that the input parameters shall comply to

        :returns: A schema declaring the input parameters this action should be
                  provided along with their respective constraints
                  (e.g. type, value range, ...)
        :rtype: :py:class:`voluptuous.Schema` instance
Register a new entry point
==========================
In order for the Watcher Applier to load your new action, it must be
registered as a named entry point under the ``watcher_actions`` entry point
namespace of your ``setup.py`` file. If you are using pbr_, this entry point
should be placed in your ``setup.cfg`` file.
The name you give to your entry point has to be unique.
Here below is how you would proceed to register ``DummyAction`` using pbr_:
.. code-block:: ini

    [entry_points]
    watcher_actions =
        dummy = thirdparty.dummy:DummyAction
.. _pbr: http://docs.openstack.org/developer/pbr/
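Once your package is installed into the same virtualenv as Watcher (e.g. with ``pip install -e .``), you can sanity-check the registration by listing the entry points yourself; a minimal sketch using ``pkg_resources``:

```python
import pkg_resources

# List every action plugin registered under the 'watcher_actions'
# namespace; after installing the package above, 'dummy' should
# appear in this list.
names = [ep.name for ep in pkg_resources.iter_entry_points('watcher_actions')]
print(names)
```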
Using action plugins
====================
The Watcher Applier service will automatically discover any installed plugins
when it is restarted. If a Python package containing a custom plugin is
installed within the same environment as Watcher, Watcher will automatically
make that plugin available for use.
At this point, you can use your new action plugin in your :ref:`strategy plugin
<implement_strategy_plugin>` if you reference it via the use of the
:py:meth:`~.Solution.add_action` method:
.. code-block:: python

    # [...]
    self.solution.add_action(
        action_type="dummy",  # Name of the entry point we registered earlier
        applies_to="",
        input_parameters={})
By doing so, your action will be saved within the Watcher Database, ready to be
processed by the planner for creating an action plan which can then be executed
by the Watcher Applier via its workflow engine.
Scheduling of an action plugin
==============================
Watcher provides a basic built-in :ref:`planner <watcher_planner_definition>`
which is only able to process the Watcher built-in actions. Therefore, you will
either have to use an existing third-party planner or :ref:`implement another
planner <implement_planner_plugin>` that will be able to take into account your
new action plugin.


@@ -0,0 +1,90 @@
..
Except where otherwise noted, this document is licensed under Creative
Commons Attribution 3.0 License. You can view the license at:
https://creativecommons.org/licenses/by/3.0/
.. _plugin-base_setup:
=======================================
Create a third-party plugin for Watcher
=======================================
Watcher provides a plugin architecture which allows anyone to extend the
existing functionalities by implementing third-party plugins. This process can
be cumbersome, so this documentation is here to help you get going as quickly
as possible.
Pre-requisites
==============
We assume that you have set up a working Watcher development environment. If
this is not already the case, you can check out our documentation which explains
how to set up a :ref:`development environment
<watcher_developement_environment>`.
.. _development environment:
Third party project scaffolding
===============================
First off, we need to create the project structure. To do so, we can use
`cookiecutter`_ and the `OpenStack cookiecutter`_ project scaffolder to
generate the skeleton of our project::

    $ virtualenv thirdparty
    $ source thirdparty/bin/activate
    $ pip install cookiecutter
    $ cookiecutter https://github.com/openstack-dev/cookiecutter
The last command will ask you for several pieces of information. If you set
``module_name`` and ``repo_name`` to ``thirdparty``, you should end up with a
structure that looks like this::

    $ cd thirdparty
    $ tree .
    .
    ├── babel.cfg
    ├── CONTRIBUTING.rst
    ├── doc
    │   └── source
    │       ├── conf.py
    │       ├── contributing.rst
    │       ├── index.rst
    │       ├── installation.rst
    │       ├── readme.rst
    │       └── usage.rst
    ├── HACKING.rst
    ├── LICENSE
    ├── MANIFEST.in
    ├── README.rst
    ├── requirements.txt
    ├── setup.cfg
    ├── setup.py
    ├── test-requirements.txt
    ├── thirdparty
    │   ├── __init__.py
    │   └── tests
    │       ├── base.py
    │       ├── __init__.py
    │       └── test_thirdparty.py
    └── tox.ini
.. _cookiecutter: https://github.com/audreyr/cookiecutter
.. _OpenStack cookiecutter: https://github.com/openstack-dev/cookiecutter
Implementing a plugin for Watcher
=================================
Now that the project skeleton has been created, you can start the
implementation of your plugin. As of now, you can implement the following
plugins for Watcher:
- A :ref:`strategy plugin <implement_strategy_plugin>`
- A :ref:`planner plugin <implement_planner_plugin>`
- An :ref:`action plugin <implement_action_plugin>`
- A :ref:`workflow engine plugin <implement_workflow_engine_plugin>`
If you want to learn more on how to implement them, you can refer to their
dedicated documentation.


@@ -0,0 +1,127 @@
..
Except where otherwise noted, this document is licensed under Creative
Commons Attribution 3.0 License. You can view the license at:
https://creativecommons.org/licenses/by/3.0/
.. _implement_planner_plugin:
===================
Build a new planner
===================
Watcher :ref:`Decision Engine <watcher_decision_engine_definition>` has an
external :ref:`planner <planner_definition>` plugin interface which gives
anyone the ability to integrate an external :ref:`planner <planner_definition>`
in order to extend the initial set of planners Watcher provides.
This section gives some guidelines on how to implement and integrate custom
planners with Watcher.
Creating a new plugin
=====================
First of all you have to extend the base :py:class:`~.BasePlanner` class which
defines an abstract method that you will have to implement. The
:py:meth:`~.BasePlanner.schedule` is the method being called by the Decision
Engine to schedule a given solution (:py:class:`~.BaseSolution`) into an
:ref:`action plan <action_plan_definition>` by ordering/sequencing an unordered
set of actions contained in the proposed solution (for more details, see
:ref:`definition of a solution <solution_definition>`).
Here is an example showing how you can write a planner plugin called
``DummyPlanner``:
.. code-block:: python

    # Filepath = third-party/third_party/dummy.py
    # Import path = third_party.dummy
    import uuid

    from watcher.decision_engine.planner import base
    from watcher import objects


    class DummyPlanner(base.BasePlanner):

        def _create_action_plan(self, context, audit_id):
            action_plan_dict = {
                'uuid': uuid.uuid4(),
                'audit_id': audit_id,
                'first_action_id': None,
                'state': objects.action_plan.State.RECOMMENDED
            }
            new_action_plan = objects.ActionPlan(context, **action_plan_dict)
            new_action_plan.create(context)
            new_action_plan.save()
            return new_action_plan

        def schedule(self, context, audit_id, solution):
            # Empty action plan
            action_plan = self._create_action_plan(context, audit_id)
            # TODO: You need to create the workflow of actions here
            # and attach it to the action plan
            return action_plan
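To fill in that TODO, a planner typically orders the solution's actions before linking them together into a workflow. The weights and action type names below are made up for illustration, and only the sequencing logic is shown, without the Watcher object plumbing:

```python
# Hypothetical per-type weights: lower values are scheduled first.
ACTION_PRIORITY = {'nop': 0, 'migrate': 1, 'dummy': 2}


def sequence(actions):
    """Sort actions into execution order by their type's weight."""
    return sorted(actions,
                  key=lambda a: ACTION_PRIORITY.get(a['action_type'], 99))


plan = sequence([
    {'action_type': 'dummy'},
    {'action_type': 'migrate'},
    {'action_type': 'nop'},
])
# 'plan' now starts with the 'nop' action and ends with 'dummy'
```

A real planner would then persist each action and chain it to the next one before attaching the result to the action plan.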
This implementation is the most basic one. So if you want to have more advanced
examples, have a look at the implementation of planners already provided by
Watcher like :py:class:`~.DefaultPlanner`. A list with all available planner
plugins can be found :ref:`here <watcher_planners>`.
Abstract Plugin Class
=====================
Here below is the abstract ``BasePlanner`` class that every single planner
should implement:
.. autoclass:: watcher.decision_engine.planner.base.BasePlanner
   :members:
   :noindex:
Register a new entry point
==========================
In order for the Watcher Decision Engine to load your new planner, the
latter must be registered as a new entry point under the
``watcher_planners`` entry point namespace of your ``setup.py`` file. If you
are using pbr_, this entry point should be placed in your ``setup.cfg`` file.
The name you give to your entry point has to be unique.
Here below is how you would proceed to register ``DummyPlanner`` using pbr_:
.. code-block:: ini

    [entry_points]
    watcher_planners =
        dummy = third_party.dummy:DummyPlanner
.. _pbr: http://docs.openstack.org/developer/pbr/
Using planner plugins
=====================
The :ref:`Watcher Decision Engine <watcher_decision_engine_definition>` service
will automatically discover any installed plugins when it is started. This
means that if Watcher is already running when you install your plugin, you will
have to restart the related Watcher services. If a Python package containing a
custom plugin is installed within the same environment as Watcher, Watcher will
automatically make that plugin available for use.
At this point, Watcher will use your new planner if you referenced it in the
``planner`` option under the ``[watcher_planner]`` section of your
``watcher.conf`` configuration file when you started it. For example, if you
want to use the ``dummy`` planner you just installed, you would have to
select it as follows:
.. code-block:: ini

    [watcher_planner]
    planner = dummy
As you may have noticed, only a single planner implementation can be activated
at a time, so make sure it is generic enough to support all your strategies
and actions.


@@ -4,17 +4,19 @@
https://creativecommons.org/licenses/by/3.0/
.. _implement_strategy_plugin:
=================================
Build a new optimization strategy
=================================
Watcher Decision Engine has an external :ref:`strategy <strategy_definition>`
plugin interface which gives anyone the ability to integrate an external
strategy in order to make use of placement algorithms.
This section gives some guidelines on how to implement and integrate custom
strategies with Watcher.
Pre-requisites
==============
@@ -29,15 +31,14 @@ Creating a new plugin
First of all you have to:
- Extend :py:class:`~.BaseStrategy`
- Implement its :py:meth:`~.BaseStrategy.execute` method
Here is an example showing how you can write a plugin called ``DummyStrategy``:
.. code-block:: python

    # Filepath = third-party/third_party/dummy.py
    # Import path = third_party.dummy
    import uuid


    class DummyStrategy(BaseStrategy):
@@ -48,14 +49,25 @@ Here is an example showing how you can write a plugin called ``DummyStrategy``:
            super(DummyStrategy, self).__init__(name, description)

        def execute(self, model):
            migration_type = 'live'
            src_hypervisor = 'compute-host-1'
            dst_hypervisor = 'compute-host-2'
            instance_id = uuid.uuid4()
            parameters = {'migration_type': migration_type,
                          'src_hypervisor': src_hypervisor,
                          'dst_hypervisor': dst_hypervisor}
            self.solution.add_action(action_type="migration",
                                     resource_id=instance_id,
                                     input_parameters=parameters)

            # Do some more stuff here ...

            return self.solution
As you can see in the above example, the :py:meth:`~.BaseStrategy.execute`
method returns a :py:class:`~.BaseSolution` instance as required. This solution
is what wraps the abstract set of actions the strategy recommends to you. This
solution is then processed by a :ref:`planner <planner_definition>` to produce
an action plan which shall contain the sequenced flow of actions to be
executed by the :ref:`Watcher Applier <watcher_applier_definition>`.
Please note that your strategy class will be instantiated without any
parameter. Therefore, you should make sure not to make any of them required in
@@ -65,13 +77,10 @@ your ``__init__`` method.
Abstract Plugin Class
=====================
Here below is the abstract :py:class:`~.BaseStrategy` class that every single
strategy should implement:
.. autoclass:: watcher.decision_engine.strategy.strategies.base.BaseStrategy
   :members:
   :noindex:
@@ -92,11 +101,11 @@ Here below is how you would proceed to register ``DummyStrategy`` using pbr_:

    [entry_points]
    watcher_strategies =
        dummy = thirdparty.dummy:DummyStrategy
To get a better understanding on how to implement a more advanced strategy,
have a look at the :py:class:`~.BasicConsolidation` class.
.. _pbr: http://docs.openstack.org/developer/pbr/
@@ -104,12 +113,12 @@ Using strategy plugins
======================
The Watcher Decision Engine service will automatically discover any installed
plugins when it is restarted. If a Python package containing a custom plugin is
installed within the same environment as Watcher, Watcher will automatically
make that plugin available for use.
At this point, Watcher will use your new strategy if you reference it in the
``goals`` under the ``[watcher_goals]`` section of your ``watcher.conf``
configuration file. For example, if you want to use a ``dummy`` strategy you
just installed, you would have to associate it to a goal like this:
@@ -143,13 +152,13 @@ pluggable backend.
Finally, if your strategy requires new metrics not covered by Ceilometer, you
can add them through a Ceilometer `plugin`_.
.. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/metrics_engine/cluster_history/ceilometer.py#L31
.. _`Ceilometer developer guide`: http://docs.openstack.org/developer/ceilometer/architecture.html#storing-the-data
.. _`here`: http://docs.openstack.org/developer/ceilometer/install/dbreco.html#choosing-a-database-backend
.. _`plugin`: http://docs.openstack.org/developer/ceilometer/plugins.html
.. _`Ceilosca`: https://github.com/openstack/monasca-ceilometer/blob/master/ceilosca/ceilometer/storage/impl_monasca.py
Read usage metrics using the Python binding
-------------------------------------------
@@ -157,39 +166,43 @@ You can find the information about the Ceilometer Python binding on the
OpenStack `ceilometer client python API documentation
<http://docs.openstack.org/developer/python-ceilometerclient/api.html>`_
To facilitate the process, Watcher provides the ``osc`` attribute to every
strategy, which includes clients to major OpenStack services, including
Ceilometer. So to access it within your strategy, you can do the following:
.. code-block:: py

    # Within your strategy "execute()"
    cclient = self.osc.ceilometer
    # TODO: Do something here
Using that you can now query the values for that specific metric:
.. code-block:: py

    query = None  # e.g. [{'field': 'foo', 'op': 'le', 'value': 34},]
    value_cpu = cclient.samples.list(
        meter_name='cpu_util',
        limit=10, q=query)
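The ``q`` argument expects a list of ``{'field', 'op', 'value'}`` dicts. A small helper to build such a query for one resource over a time window might look like this (the helper itself is illustrative, not part of Watcher):

```python
def build_query(resource_id, start=None, end=None):
    """Build a Ceilometer sample query filtering on resource and time."""
    query = [{'field': 'resource_id', 'op': 'eq', 'value': resource_id}]
    if start:
        query.append({'field': 'timestamp', 'op': 'ge', 'value': start})
    if end:
        query.append({'field': 'timestamp', 'op': 'le', 'value': end})
    return query


# e.g. all samples for one (hypothetical) instance since March 1st
q = build_query('fake-uuid', start='2016-03-01T00:00:00')
```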
Read usage metrics using the Watcher Cluster History Helper
-----------------------------------------------------------
Here below is the abstract ``BaseClusterHistory`` class of the Helper.
.. autoclass:: watcher.metrics_engine.cluster_history.api.BaseClusterHistory
   :members:
   :noindex:
The following code snippet shows how to create a Cluster History class:
.. code-block:: py

    from watcher.metrics_engine.cluster_history import ceilometer as ceil

    query_history = ceil.CeilometerClusterHistory()
Using that you can now query the values for that specific metric:
@@ -200,4 +213,3 @@ Using that you can now query the values for that specific metric:
period="7200",
aggregate='avg'
)


@@ -0,0 +1,38 @@
..
Except where otherwise noted, this document is licensed under Creative
Commons Attribution 3.0 License. You can view the license at:
https://creativecommons.org/licenses/by/3.0/
=================
Available Plugins
=================
.. _watcher_strategies:
Strategies
==========
.. drivers-doc:: watcher_strategies
.. _watcher_actions:
Actions
=======
.. drivers-doc:: watcher_actions
.. _watcher_workflow_engines:
Workflow Engines
================
.. drivers-doc:: watcher_workflow_engines
.. _watcher_planners:
Planners
========
.. drivers-doc:: watcher_planners


@@ -0,0 +1,50 @@
..
Except where otherwise noted, this document is licensed under Creative
Commons Attribution 3.0 License. You can view the license at:
https://creativecommons.org/licenses/by/3.0/
=======
Testing
=======
.. _unit_tests:
Unit tests
==========
All unit tests should be run using `tox`_. To run the same unit tests that are
executed on `Gerrit`_, which include ``py34``, ``py27`` and ``pep8``, you
can issue the following command::

    $ workon watcher
    (watcher) $ pip install tox
    (watcher) $ cd watcher
    (watcher) $ tox
If you want to run only one of the aforementioned environments, you can then
issue one of the following::

    $ workon watcher
    (watcher) $ tox -e py34
    (watcher) $ tox -e py27
    (watcher) $ tox -e pep8
.. _tox: https://tox.readthedocs.org/
.. _Gerrit: http://review.openstack.org/
You may pass options to the test programs using positional arguments. To run a
specific unit test, you can pass extra options to `os-testr`_ after the ``--``
separator. Using the ``-r`` option followed by a regex string, you can run the
desired test::

    $ workon watcher
    (watcher) $ tox -e py27 -- -r watcher.tests.api
.. _os-testr: http://docs.openstack.org/developer/os-testr/
When you're done, deactivate the virtualenv::

    $ deactivate
.. include:: ../../../watcher_tempest_plugin/README.rst


@@ -1,15 +1,15 @@
@startuml
[*] --> RECOMMENDED: The Watcher Planner\ncreates the Action Plan
RECOMMENDED --> PENDING: Administrator launches\nthe Action Plan
PENDING --> ONGOING: The Watcher Applier receives the request\nto launch the Action Plan
ONGOING --> FAILED: Something failed while executing\nthe Action Plan in the Watcher Applier
ONGOING --> SUCCEEDED: The Watcher Applier executed\nthe Action Plan successfully
FAILED --> DELETED : Administrator removes\nAction Plan
SUCCEEDED --> DELETED : Administrator removes\nAction Plan
ONGOING --> CANCELLED : Administrator cancels\nAction Plan
RECOMMENDED --> CANCELLED : Administrator cancels\nAction Plan
PENDING --> CANCELLED : Administrator cancels\nAction Plan
CANCELLED --> DELETED
DELETED --> [*]
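The transitions drawn above can equivalently be expressed as a lookup table; a sketch in Python, with state names taken from the diagram:

```python
# Allowed action plan state transitions, mirroring the diagram above.
ALLOWED = {
    'RECOMMENDED': {'PENDING', 'CANCELLED'},
    'PENDING': {'ONGOING', 'CANCELLED'},
    'ONGOING': {'SUCCEEDED', 'FAILED', 'CANCELLED'},
    'SUCCEEDED': {'DELETED'},
    'FAILED': {'DELETED'},
    'CANCELLED': {'DELETED'},
}


def can_transition(src, dst):
    """Return True if the action plan may move from src to dst."""
    return dst in ALLOWED.get(src, set())
```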



@@ -22,6 +22,7 @@ Watcher project consists of several source code repositories:
* `watcher`_ - is the main repository. It contains code for Watcher API server,
Watcher Decision Engine and Watcher Applier.
* `python-watcherclient`_ - Client library and CLI client for Watcher.
* `watcher-dashboard`_ - Watcher Horizon plugin.
The documentation provided here is continually kept up-to-date based
on the latest code, and may not represent the state of the project at any
@@ -29,6 +30,7 @@ specific prior release.
.. _watcher: https://git.openstack.org/cgit/openstack/watcher/
.. _python-watcherclient: https://git.openstack.org/cgit/openstack/python-watcherclient/
.. _watcher-dashboard: https://git.openstack.org/cgit/openstack/watcher-dashboard/
Developer Guide
===============
@@ -53,6 +55,7 @@ Getting Started
dev/environment
dev/devstack
deploy/configuration
dev/testing
API References
@@ -69,7 +72,11 @@ Plugins
.. toctree::
:maxdepth: 1
dev/plugin/base-setup
dev/plugin/strategy-plugin
dev/plugin/action-plugin
dev/plugin/planner-plugin
dev/plugins
Admin Guide


@@ -8,6 +8,19 @@
RESTful Web API (v1)
====================
Goals
=====
.. rest-controller:: watcher.api.controllers.v1.goal:GoalsController
   :webprefix: /v1/goal

.. autotype:: watcher.api.controllers.v1.goal.GoalCollection
   :members:

.. autotype:: watcher.api.controllers.v1.goal.Goal
   :members:
Audit Templates
===============

File diff suppressed because it is too large


@@ -2,30 +2,32 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.1 # BSD
keystoneauth1>=2.1.0 # Apache-2.0
keystonemiddleware!=4.1.0,>=4.0.0 # Apache-2.0
oslo.config>=3.7.0 # Apache-2.0
oslo.db>=4.1.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.log>=1.14.0 # Apache-2.0
oslo.messaging>=4.0.0 # Apache-2.0
oslo.policy>=0.5.0 # Apache-2.0
oslo.service>=1.0.0 # Apache-2.0
oslo.utils>=3.5.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr>=1.6 # Apache-2.0
pecan>=1.0.0 # BSD
voluptuous>=0.8.6 # BSD License
python-ceilometerclient>=2.2.1 # Apache-2.0
python-cinderclient>=1.3.1 # Apache-2.0
python-glanceclient>=1.2.0 # Apache-2.0
python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
python-neutronclient>=2.6.0 # Apache-2.0
python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
python-openstackclient>=2.1.0 # Apache-2.0
six>=1.9.0 # MIT
SQLAlchemy<1.1.0,>=1.0.10 # MIT
stevedore>=1.5.0 # Apache-2.0
taskflow>=1.26.0 # Apache-2.0
WebOb>=1.2.3 # MIT
WSME>=0.8 # MIT


@@ -63,7 +63,8 @@ watcher_planners =
default = watcher.decision_engine.planner.default:DefaultPlanner
[pbr]
warnerrors = true
autodoc_index_modules = true
autodoc_exclude_modules =
watcher.db.sqlalchemy.alembic.env
watcher.db.sqlalchemy.alembic.versions.*

setup.py Executable file → Normal file

@@ -1,4 +1,3 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -26,5 +25,5 @@ except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)


@@ -2,23 +2,19 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
coverage>=3.6 # Apache-2.0
discover # BSD
doc8 # Apache-2.0
hacking<0.11,>=0.10.2
mock>=1.2 # BSD
oslotest>=1.10.0 # Apache-2.0
os-testr>=0.4.1 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
# Doc requirements
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
sphinxcontrib-pecanwsme>=0.8 # Apache-2.0


@@ -26,7 +26,7 @@ setenv = PYTHONHASHSEED=0
commands = {posargs}
[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'
[testenv:docs]
setenv = PYTHONHASHSEED=0
@@ -35,7 +35,7 @@ commands =
python setup.py build_sphinx
[testenv:debug]
commands = oslo_debug_helper -t watcher/tests {posargs}
[testenv:config]
sitepackages = False
@@ -53,11 +53,6 @@ ignore=
builtins= _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build,*sqlalchemy/alembic/versions/*,demo/
[testenv:wheel]
commands = python setup.py bdist_wheel


@@ -129,13 +129,10 @@ class Action(base.APIBase):
alarm = types.uuid
"""An alarm UUID related to this action"""
action_type = wtypes.text
"""Action type"""
input_parameters = types.jsontype
"""One or more key/value pairs """
next_uuid = wsme.wsproperty(types.uuid, _get_next_uuid,
@@ -257,7 +254,7 @@ class ActionsController(rest.RestController):
resource_url=None,
action_plan_uuid=None, audit_uuid=None):
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
marker_obj = None
if marker:


@@ -283,7 +283,7 @@ class ActionPlansController(rest.RestController):
resource_url=None, audit_uuid=None):
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
marker_obj = None
if marker:
@@ -406,12 +406,12 @@ class ActionPlansController(rest.RestController):
# transitions that are allowed via PATCH
allowed_patch_transitions = [
(ap_objects.State.RECOMMENDED,
ap_objects.State.PENDING),
(ap_objects.State.RECOMMENDED,
ap_objects.State.CANCELLED),
(ap_objects.State.ONGOING,
ap_objects.State.CANCELLED),
(ap_objects.State.PENDING,
ap_objects.State.CANCELLED),
]
@@ -427,7 +427,7 @@ class ActionPlansController(rest.RestController):
initial_state=action_plan_to_update.state,
new_state=action_plan.state))
if action_plan.state == ap_objects.State.PENDING:
launch_action_plan = True
# Update only the fields that have changed
@@ -443,7 +443,7 @@ class ActionPlansController(rest.RestController):
action_plan_to_update[field] = patch_val
if (field == 'state'
and patch_val == objects.action_plan.State.PENDING):
launch_action_plan = True
action_plan_to_update.save()


@@ -65,7 +65,7 @@ from watcher.api.controllers.v1 import types
from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import exception
from watcher.common import utils
from watcher.decision_engine import rpcapi
from watcher import objects
@@ -263,7 +263,7 @@ class AuditsController(rest.RestController):
sort_key, sort_dir, expand=False,
resource_url=None, audit_template=None):
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
marker_obj = None
if marker:
@@ -369,7 +369,7 @@ class AuditsController(rest.RestController):
# trigger decision-engine to run the audit
dc_client = rpcapi.DecisionEngineAPI()
dc_client.trigger_audit(context, new_audit.uuid)
return Audit.convert_with_links(new_audit)


@@ -197,12 +197,13 @@ class AuditTemplatesController(rest.RestController):
'detail': ['GET'],
}
def _get_audit_templates_collection(self, filters, marker, limit,
sort_key, sort_dir, expand=False,
resource_url=None):
api_utils.validate_search_filters(
filters, objects.audit_template.AuditTemplate.fields.keys())
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
marker_obj = None
if marker:
@@ -212,6 +213,7 @@ class AuditTemplatesController(rest.RestController):
audit_templates = objects.AuditTemplate.list(
pecan.request.context,
filters,
limit,
marker_obj, sort_key=sort_key,
sort_dir=sort_dir)
@@ -223,26 +225,30 @@ class AuditTemplatesController(rest.RestController):
sort_key=sort_key,
sort_dir=sort_dir)
@wsme_pecan.wsexpose(AuditTemplateCollection, wtypes.text,
types.uuid, int, wtypes.text, wtypes.text)
def get_all(self, goal=None, marker=None, limit=None,
sort_key='id', sort_dir='asc'):
"""Retrieve a list of audit templates.
:param goal: goal name to filter by (case sensitive)
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
"""
filters = api_utils.as_filters_dict(goal=goal)
return self._get_audit_templates_collection(
filters, marker, limit, sort_key, sort_dir)
@wsme_pecan.wsexpose(AuditTemplateCollection, wtypes.text, types.uuid, int,
wtypes.text, wtypes.text)
def detail(self, goal=None, marker=None, limit=None,
sort_key='id', sort_dir='asc'):
"""Retrieve a list of audit templates with detail.
:param goal: goal name to filter by (case sensitive)
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
@@ -253,9 +259,11 @@ class AuditTemplatesController(rest.RestController):
if parent != "audit_templates":
raise exception.HTTPNotFound
filters = api_utils.as_filters_dict(goal=goal)
expand = True
resource_url = '/'.join(['audit_templates', 'detail'])
return self._get_audit_templates_collection(filters, marker, limit,
sort_key, sort_dir, expand,
resource_url)
@@ -263,7 +271,7 @@ class AuditTemplatesController(rest.RestController):
def get_one(self, audit_template):
"""Retrieve information about the given audit template.
:param audit_template: UUID or name of an audit template.
"""
if self.from_audit_templates:
raise exception.OperationNotPermitted


@@ -159,7 +159,7 @@ class GoalsController(rest.RestController):
resource_url=None, goal_name=None):
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
goals = []


@@ -47,7 +47,15 @@ def validate_sort_dir(sort_dir):
raise wsme.exc.ClientSideError(_("Invalid sort direction: %s. "
"Acceptable values are "
"'asc' or 'desc'") % sort_dir)
return sort_dir
def validate_search_filters(filters, allowed_fields):
# Very lightweight validation for now
# TODO: improve this (e.g. https://www.parse.com/docs/rest/guide/#queries)
for filter_name in filters.keys():
if filter_name not in allowed_fields:
raise wsme.exc.ClientSideError(
_("Invalid filter: %s") % filter_name)
def apply_jsonpatch(doc, patch):
@@ -58,3 +66,12 @@ def apply_jsonpatch(doc, patch):
' the resource is not allowed')
raise wsme.exc.ClientSideError(msg % p['path'])
return jsonpatch.apply_patch(doc, jsonpatch.JsonPatch(patch))
def as_filters_dict(**filters):
filters_dict = {}
for filter_name, filter_value in filters.items():
if filter_value:
filters_dict[filter_name] = filter_value
return filters_dict
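Taken together, these two helpers turn optional query parameters into a validated filters dict; a standalone sketch of the same flow (raising ``ValueError`` instead of the wsme client error, with made-up field names):

```python
ALLOWED_FIELDS = ('goal', 'host_aggregate')  # illustrative field names


def as_filters_dict(**filters):
    # Keep only the parameters the caller actually set
    return {name: value for name, value in filters.items() if value}


def validate_search_filters(filters, allowed_fields):
    # Reject any filter name that is not a known field
    for name in filters:
        if name not in allowed_fields:
            raise ValueError("Invalid filter: %s" % name)


filters = as_filters_dict(goal='DUMMY', marker=None)
validate_search_filters(filters, ALLOWED_FIELDS)  # only 'goal' remains
```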


@@ -27,10 +27,15 @@ from watcher.common import clients
@six.add_metaclass(abc.ABCMeta)
class BaseAction(object):
# NOTE(jed) by convention we decided
# that the attribute "resource_id" is the unique id of
# the resource to which the Action applies; this allows us to use it
# in the Watcher dashboard, and it will be nested in input_parameters
RESOURCE_ID = 'resource_id'
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
self._input_parameters = {}
self._applies_to = ""
self._osc = osc
@property
@@ -48,25 +53,64 @@ class BaseAction(object):
self._input_parameters = p
@property
def applies_to(self):
return self._applies_to
@applies_to.setter
def applies_to(self, a):
self._applies_to = a
def resource_id(self):
return self.input_parameters[self.RESOURCE_ID]
@abc.abstractmethod
def execute(self):
"""Executes the main logic of the action
This method can be used to perform an action on a given set of input
parameters to accomplish some type of operation. This operation may
return a boolean value as a result of its execution. If False, this
will be considered as an error and will then trigger the reverting of
the actions.
:returns: A flag indicating whether or not the action succeeded
:rtype: bool
"""
raise NotImplementedError()
@abc.abstractmethod
def revert(self):
"""Revert this action
This method should rollback the resource to its initial state in the
event of a faulty execution. This happens when the action raised an
exception during its :py:meth:`~.BaseAction.execute`.
"""
raise NotImplementedError()
@abc.abstractmethod
def precondition(self):
"""Hook: called before the execution of an action
This method can be used to perform some initializations or to make
some more advanced validation on its input parameters. So if you wish
to block its execution based on this factor, `raise` the related
exception.
"""
raise NotImplementedError()
@abc.abstractmethod
def postcondition(self):
"""Hook: called after the execution of an action
This function is called regardless of whether an action succeeded or
not. So you can use it to perform cleanup operations.
"""
raise NotImplementedError()
@abc.abstractproperty
def schema(self):
"""Defines a Schema that the input parameters shall comply to
:returns: A schema declaring the input parameters this action should be
provided along with their respective constraints
:rtype: :py:class:`voluptuous.Schema` instance
"""
raise NotImplementedError()
def validate_parameters(self):
self.schema(self.input_parameters)
return True
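
For readers implementing their own action against the contract above, here is a minimal, dependency-free sketch: a plain callable stands in for the `voluptuous.Schema` property, the `precondition`/`postcondition` hooks are omitted for brevity, and all names are illustrative, not part of the patch:

```python
import abc

# Minimal stand-in for the BaseAction contract shown in the diff above.
class SketchAction(abc.ABC):
    RESOURCE_ID = 'resource_id'

    def __init__(self):
        self.input_parameters = {}

    @property
    def resource_id(self):
        return self.input_parameters[self.RESOURCE_ID]

    @abc.abstractmethod
    def execute(self):
        raise NotImplementedError()

    @abc.abstractmethod
    def revert(self):
        raise NotImplementedError()

    def validate_parameters(self):
        # The real code calls a voluptuous.Schema; here `schema` is a
        # plain callable that raises on invalid input.
        self.schema(self.input_parameters)
        return True

class DisableHost(SketchAction):  # hypothetical example action
    def schema(self, params):
        if not isinstance(params.get(self.RESOURCE_ID), str):
            raise ValueError("resource_id must be a string")

    def execute(self):
        return True   # pretend the service was disabled

    def revert(self):
        return True   # pretend it was re-enabled

action = DisableHost()
action.input_parameters = {'resource_id': 'compute-1'}
action.validate_parameters()  # a non-string resource_id would raise
```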

View File

@@ -16,7 +16,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import six
import voluptuous
from watcher._i18n import _
from watcher.applier.actions import base
@@ -26,32 +27,64 @@ from watcher.decision_engine.model import hypervisor_state as hstate
class ChangeNovaServiceState(base.BaseAction):
"""Disables or enables the nova-compute service, deployed on a host
By using this action, you will be able to update the state of a
nova-compute service. A disabled nova-compute service cannot be selected
by the nova scheduler for the future deployment of servers.
The action schema is::
schema = Schema({
'resource_id': str,
'state': str,
})
The `resource_id` references a nova-compute service name (list of available
nova-compute services is returned by this command: ``nova service-list
--binary nova-compute``).
The `state` value should either be `ONLINE` or `OFFLINE`.
"""
STATE = 'state'
@property
def schema(self):
return voluptuous.Schema({
voluptuous.Required(self.RESOURCE_ID):
voluptuous.All(
voluptuous.Any(*six.string_types),
voluptuous.Length(min=1)),
voluptuous.Required(self.STATE):
voluptuous.Any(*[state.value
for state in list(hstate.HypervisorState)]),
})
@property
def host(self):
return self.applies_to
return self.resource_id
@property
def state(self):
return self.input_parameters.get('state')
return self.input_parameters.get(self.STATE)
def execute(self):
target_state = None
if self.state == hstate.HypervisorState.OFFLINE.value:
if self.state == hstate.HypervisorState.DISABLED.value:
target_state = False
elif self.status == hstate.HypervisorState.ONLINE.value:
elif self.state == hstate.HypervisorState.ENABLED.value:
target_state = True
return self.nova_manage_service(target_state)
return self._nova_manage_service(target_state)
def revert(self):
target_state = None
if self.state == hstate.HypervisorState.OFFLINE.value:
if self.state == hstate.HypervisorState.DISABLED.value:
target_state = True
elif self.state == hstate.HypervisorState.ONLINE.value:
elif self.state == hstate.HypervisorState.ENABLED.value:
target_state = False
return self.nova_manage_service(target_state)
return self._nova_manage_service(target_state)
def nova_manage_service(self, state):
def _nova_manage_service(self, state):
if state is None:
raise exception.IllegalArgumentException(
message=_("The target state is not defined"))
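
The `execute`/`revert` logic above maps a hypervisor state to a boolean service target. A dependency-free sketch of that mapping (the lowercase string values are assumptions, since the `HypervisorState` enum values are not shown in this diff):

```python
# Sketch of the state mapping in ChangeNovaServiceState above:
# 'disabled' -> target False on execute, and the opposite on revert.

def target_state(state, revert=False):
    mapping = {'disabled': False, 'enabled': True}  # assumed enum values
    if state not in mapping:
        return None  # the real code then raises IllegalArgumentException
    result = mapping[state]
    return (not result) if revert else result
```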

View File

@@ -33,5 +33,10 @@ class ActionFactory(object):
loaded_action = self.action_loader.load(name=object_action.action_type,
osc=osc)
loaded_action.input_parameters = object_action.input_parameters
loaded_action.applies_to = object_action.applies_to
LOG.debug("Checking the input parameters")
# NOTE(jed) if we change the schema of an action and we try to reload
# an older version of the Action, the validation can fail.
# We need to add the versioning of an Action or a migration tool.
# We can also create an new Action which extends the previous one.
loaded_action.validate_parameters()
return loaded_action

View File

@@ -18,30 +18,114 @@
#
from oslo_log import log
import six
import voluptuous
from watcher._i18n import _, _LC
from watcher.applier.actions import base
from watcher.common import exception
from watcher.common import nova_helper
from watcher.common import utils
LOG = log.getLogger(__name__)
class Migrate(base.BaseAction):
"""Live-Migrates a server to a destination nova-compute host
This action will allow you to migrate a server to another compute
destination host. As of now, only live migration can be performed using
this action.
.. If either host uses shared storage, you can use ``live``
.. as ``migration_type``. If both source and destination hosts provide
.. local disks, you can set the block_migration parameter to True (not
.. yet supported).
The action schema is::
schema = Schema({
'resource_id': str, # should be a UUID
'migration_type': str, # choices -> "live" only
'dst_hypervisor': str,
'src_hypervisor': str,
})
The `resource_id` is the UUID of the server to migrate. Only live migration
is supported.
The `src_hypervisor` and `dst_hypervisor` parameters are respectively the
source and the destination compute hostname (list of available compute
hosts is returned by this command: ``nova service-list --binary
nova-compute``).
"""
# input parameters constants
MIGRATION_TYPE = 'migration_type'
LIVE_MIGRATION = 'live'
DST_HYPERVISOR = 'dst_hypervisor'
SRC_HYPERVISOR = 'src_hypervisor'
def check_resource_id(self, value):
if (value is not None and
len(value) > 0 and not
utils.is_uuid_like(value)):
raise voluptuous.Invalid(_("The parameter"
" resource_id is invalid."))
@property
def schema(self):
return voluptuous.Schema({
voluptuous.Required(self.RESOURCE_ID): self.check_resource_id,
voluptuous.Required(self.MIGRATION_TYPE,
default=self.LIVE_MIGRATION):
voluptuous.Any(*[self.LIVE_MIGRATION]),
voluptuous.Required(self.DST_HYPERVISOR):
voluptuous.All(voluptuous.Any(*six.string_types),
voluptuous.Length(min=1)),
voluptuous.Required(self.SRC_HYPERVISOR):
voluptuous.All(voluptuous.Any(*six.string_types),
voluptuous.Length(min=1)),
})
@property
def instance_uuid(self):
return self.applies_to
return self.resource_id
@property
def migration_type(self):
return self.input_parameters.get('migration_type')
return self.input_parameters.get(self.MIGRATION_TYPE)
@property
def dst_hypervisor(self):
return self.input_parameters.get('dst_hypervisor')
return self.input_parameters.get(self.DST_HYPERVISOR)
@property
def src_hypervisor(self):
return self.input_parameters.get('src_hypervisor')
return self.input_parameters.get(self.SRC_HYPERVISOR)
def _live_migrate_instance(self, nova, destination):
result = None
try:
result = nova.live_migrate_instance(instance_id=self.instance_uuid,
dest_hostname=destination)
except nova_helper.nvexceptions.ClientException as e:
if e.code == 400:
LOG.debug("Live migration of instance %s failed. "
"Trying to live migrate using block migration."
% self.instance_uuid)
result = nova.live_migrate_instance(
instance_id=self.instance_uuid,
dest_hostname=destination,
block_migration=True)
else:
LOG.debug("Nova client exception occurred while live migrating "
"instance %s. Exception: %s" %
(self.instance_uuid, e))
except Exception:
LOG.critical(_LC("Unexpected error occurred. Migration failed for "
"instance %s. Leaving instance on previous "
"host."), self.instance_uuid)
return result
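
The retry pattern in `_live_migrate_instance` above (fall back to block migration when the first attempt fails with an HTTP 400) can be shown in isolation; the client below is a hypothetical fake, not the real novaclient:

```python
# Illustrative sketch of the live-migration fallback in the diff above.

class ClientException(Exception):  # stand-in for novaclient's exception
    def __init__(self, code):
        self.code = code

class FakeNova:
    def __init__(self):
        self.calls = []
    def live_migrate_instance(self, instance_id, dest_hostname,
                              block_migration=False):
        self.calls.append(block_migration)
        if not block_migration:
            # Simulate the 400 a cloud without shared storage may return.
            raise ClientException(code=400)
        return "migrated"

def migrate_with_fallback(nova, instance_id, destination):
    try:
        return nova.live_migrate_instance(instance_id, destination)
    except ClientException as e:
        if e.code == 400:
            # Retry with block migration, as in the patch above.
            return nova.live_migrate_instance(instance_id, destination,
                                              block_migration=True)
        raise

nova = FakeNova()
result = migrate_with_fallback(nova, "uuid-1", "compute-2")
# first attempt raised 400; the block-migration retry returned "migrated"
```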
def migrate(self, destination):
nova = nova_helper.NovaHelper(osc=self.osc)
@@ -50,8 +134,7 @@ class Migrate(base.BaseAction):
instance = nova.find_instance(self.instance_uuid)
if instance:
if self.migration_type == 'live':
return nova.live_migrate_instance(
instance_id=self.instance_uuid, dest_hostname=destination)
return self._live_migrate_instance(nova, destination)
else:
raise exception.Invalid(
message=(_('Migration of type %(migration_type)s is not '

View File

@@ -18,6 +18,8 @@
#
from oslo_log import log
import six
import voluptuous
from watcher.applier.actions import base
@@ -26,10 +28,29 @@ LOG = log.getLogger(__name__)
class Nop(base.BaseAction):
"""logs a message
The action schema is::
schema = Schema({
'message': str,
})
The `message` is the actual message that will be logged.
"""
MESSAGE = 'message'
@property
def schema(self):
return voluptuous.Schema({
voluptuous.Required(self.MESSAGE): voluptuous.Any(
voluptuous.Any(*six.string_types), None)
})
@property
def message(self):
return self.input_parameters.get('message')
return self.input_parameters.get(self.MESSAGE)
def execute(self):
LOG.debug("executing action NOP message:%s ", self.message)

View File

@@ -18,19 +18,39 @@
#
import time
from oslo_log import log
import voluptuous
from watcher.applier.actions import base
LOG = log.getLogger(__name__)
class Sleep(base.BaseAction):
"""Makes the executor of the action plan wait for a given duration
The action schema is::
schema = Schema({
'duration': float,
})
The `duration` is expressed in seconds.
"""
DURATION = 'duration'
@property
def schema(self):
return voluptuous.Schema({
voluptuous.Required(self.DURATION, default=1):
voluptuous.All(float, voluptuous.Range(min=0))
})
@property
def duration(self):
return int(self.input_parameters.get('duration'))
return int(self.input_parameters.get(self.DURATION))
def execute(self):
LOG.debug("Starting action Sleep duration:%s ", self.duration)

View File

@@ -63,10 +63,3 @@ class ApplierAPI(messaging_core.MessagingCore):
return self.client.call(
context.to_dict(), 'launch_action_plan',
action_plan_uuid=action_plan_uuid)
def event_receive(self, event):
try:
pass
except Exception as e:
LOG.exception(e)
raise

View File

@@ -28,6 +28,12 @@ LOG = log.getLogger(__name__)
class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
"""Taskflow as a workflow engine for Watcher
Full documentation on taskflow at
http://docs.openstack.org/developer/taskflow/
"""
def decider(self, history):
# FIXME(jed) not possible with the current Watcher Planner
#

View File

@@ -25,7 +25,7 @@ from oslo_config import cfg
from oslo_log import log as logging
from watcher import _i18n
from watcher.applier.manager import ApplierManager
from watcher.applier import manager
from watcher.common import service
LOG = logging.getLogger(__name__)
@@ -40,6 +40,6 @@ def main():
LOG.debug("Configuration:")
cfg.CONF.log_opt_values(LOG, std_logging.DEBUG)
server = ApplierManager()
server = manager.ApplierManager()
server.connect()
server.join()

View File

@@ -26,7 +26,7 @@ from oslo_log import log as logging
from watcher import _i18n
from watcher.common import service
from watcher.decision_engine.manager import DecisionEngineManager
from watcher.decision_engine import manager
LOG = logging.getLogger(__name__)
@@ -41,6 +41,6 @@ def main():
LOG.debug("Configuration:")
cfg.CONF.log_opt_values(LOG, std_logging.DEBUG)
server = DecisionEngineManager()
server = manager.DecisionEngineManager()
server.connect()
server.join()

View File

@@ -128,7 +128,8 @@ class CeilometerHelper(object):
def get_last_sample_values(self, resource_id, meter_name, limit=1):
samples = self.query_sample(meter_name=meter_name,
query=self.build_query(resource_id))
query=self.build_query(resource_id),
limit=limit)
values = []
for index, sample in enumerate(samples):
values.append(

View File

@@ -46,7 +46,7 @@ CEILOMETER_CLIENT_OPTS = [
NEUTRON_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
default='2.0',
help=_('Version of Neutron API to use in neutronclient.'))]
cfg.CONF.register_opts(NOVA_CLIENT_OPTS, group='nova_client')
@@ -141,8 +141,8 @@ class OpenStackClients(object):
ceilometerclient_version = self._get_client_option('ceilometer',
'api_version')
self._ceilometer = ceclient.Client(ceilometerclient_version,
session=self.session)
self._ceilometer = ceclient.get_client(ceilometerclient_version,
session=self.session)
return self._ceilometer
@exception.wrap_keystone_exception

View File

@@ -279,3 +279,7 @@ class HypervisorNotFound(WatcherException):
class LoadingError(WatcherException):
msg_fmt = _("Error loading plugin '%(name)s'")
class ReservedWord(WatcherException):
msg_fmt = _("The identifier '%(name)s' is a reserved word")

View File

@@ -16,7 +16,7 @@
from oslo_log import log
from watcher.decision_engine.messaging.events import Events
from watcher.decision_engine.messaging import events as messaging_events
LOG = log.getLogger(__name__)
@@ -43,8 +43,8 @@ class EventDispatcher(object):
"""
Dispatch an instance of Event class
"""
if Events.ALL in self._events.keys():
listeners = self._events[Events.ALL]
if messaging_events.Events.ALL in self._events.keys():
listeners = self._events[messaging_events.Events.ALL]
for listener in listeners:
listener(event)

View File

@@ -265,58 +265,6 @@ class NovaHelper(object):
return True
def built_in_non_live_migrate_instance(self, instance_id, hypervisor_id):
"""This method does a live migration of a given instance
This method uses the Nova built-in non-live migrate()
action to migrate a given instance.
It returns True if the migration was successful, False otherwise.
:param instance_id: the unique id of the instance to migrate.
"""
LOG.debug(
"Trying a Nova built-in non-live "
"migrate of instance %s ..." % instance_id)
# Looking for the instance to migrate
instance = self.find_instance(instance_id)
if not instance:
LOG.debug("Instance not found: %s" % instance_id)
return False
else:
host_name = getattr(instance, 'OS-EXT-SRV-ATTR:host')
LOG.debug(
"Instance %s found on host '%s'." % (instance_id, host_name))
instance.migrate()
# Poll at 5 second intervals, until the status is as expected
if self.wait_for_instance_status(instance,
('VERIFY_RESIZE', 'ERROR'),
5, 10):
instance = self.nova.servers.get(instance.id)
if instance.status == 'VERIFY_RESIZE':
host_name = getattr(instance, 'OS-EXT-SRV-ATTR:host')
LOG.debug(
"Instance %s has been successfully "
"migrated to host '%s'." % (
instance_id, host_name))
# We need to confirm that the resize() operation
# has succeeded in order to
# get back instance state to 'ACTIVE'
instance.confirm_resize()
return True
elif instance.status == 'ERROR':
LOG.debug("Instance %s migration failed" % instance_id)
return False
def live_migrate_instance(self, instance_id, dest_hostname,
block_migration=False, retry=120):
"""This method does a live migration of a given instance

View File

@@ -102,7 +102,7 @@ class RPCService(service.Service):
'%(host)s.'),
{'service': self.topic, 'host': self.host})
def _handle_signal(self, signo, frame):
def _handle_signal(self):
LOG.info(_LI('Got signal SIGUSR1. Not deregistering on next shutdown '
'of service %(service)s on host %(host)s.'),
{'service': self.topic, 'host': self.host})

View File

@@ -98,7 +98,7 @@ class BaseConnection(object):
:raises: AuditTemplateNotFound
"""
def get_audit_template_by__name(self, context, audit_template_name):
def get_audit_template_by_name(self, context, audit_template_name):
"""Return an audit template.
:param context: The security context

View File

@@ -32,7 +32,7 @@ def _alembic_config():
return config
def version(config=None, engine=None):
def version(engine=None):
"""Current database version.
:returns: Database version

View File

@@ -160,7 +160,6 @@ class Action(Base):
nullable=False)
# only for the first version
action_type = Column(String(255), nullable=False)
applies_to = Column(String(255), nullable=True)
input_parameters = Column(JSONEncodedDict, nullable=True)
state = Column(String(20), nullable=True)
# todo(jed) remove parameter alarm

View File

@@ -16,11 +16,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from concurrent.futures import ThreadPoolExecutor
from concurrent import futures
from oslo_log import log
from watcher.decision_engine.audit.default import DefaultAuditHandler
from watcher.decision_engine.audit import default
LOG = log.getLogger(__name__)
@@ -28,7 +28,7 @@ LOG = log.getLogger(__name__)
class AuditEndpoint(object):
def __init__(self, messaging, max_workers):
self._messaging = messaging
self._executor = ThreadPoolExecutor(max_workers=max_workers)
self._executor = futures.ThreadPoolExecutor(max_workers=max_workers)
@property
def executor(self):
@@ -39,7 +39,7 @@ class AuditEndpoint(object):
return self._messaging
def do_trigger_audit(self, context, audit_uuid):
audit = DefaultAuditHandler(self.messaging)
audit = default.DefaultAuditHandler(self.messaging)
audit.execute(audit_uuid, context)
def trigger_audit(self, context, audit_uuid):

View File

@@ -50,11 +50,13 @@ class BasePlanner(object):
def schedule(self, context, audit_uuid, solution):
"""The planner receives a solution to schedule
:param solution: the solution given by the strategy to
:param solution: A solution provided by a strategy for scheduling
:type solution: :py:class:`~.BaseSolution` subclass instance
:param audit_uuid: the audit uuid
:return: ActionPlan ordered sequence of change requests
such that all security, dependency, and performance
requirements are met.
:type audit_uuid: str
:return: Action plan with an ordered sequence of actions such that all
security, dependency, and performance requirements are met.
:rtype: :py:class:`watcher.objects.action_plan.ActionPlan` instance
"""
# example: directed acyclic graph
raise NotImplementedError()

View File

@@ -28,6 +28,13 @@ LOG = log.getLogger(__name__)
class DefaultPlanner(base.BasePlanner):
"""Default planner implementation
This implementation comes with basic rules and a fixed set of weighted
action types. An action with a lower weight is scheduled before the
others.
"""
priorities = {
'nop': 0,
'sleep': 1,
@@ -38,14 +45,12 @@ class DefaultPlanner(base.BasePlanner):
def create_action(self,
action_plan_id,
action_type,
applies_to,
input_parameters=None):
uuid = utils.generate_uuid()
action = {
'uuid': uuid,
'action_plan_id': int(action_plan_id),
'action_type': action_type,
'applies_to': applies_to,
'input_parameters': input_parameters,
'state': objects.action.State.PENDING,
'alarm': None,
@@ -63,8 +68,6 @@ class DefaultPlanner(base.BasePlanner):
json_action = self.create_action(action_plan_id=action_plan.id,
action_type=action.get(
'action_type'),
applies_to=action.get(
'applies_to'),
input_parameters=action.get(
'input_parameters'))
to_schedule.append((self.priorities[action.get('action_type')],
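
The full `priorities` table is truncated by the hunk above. To illustrate the scheduling rule (lower weight runs first), here is a minimal sketch with assumed weights for the action types seen elsewhere in this comparison:

```python
# Sketch of DefaultPlanner's weighting: sort actions by their fixed
# per-type weight; the table below is partly assumed (the hunk is cut).

priorities = {'nop': 0, 'sleep': 1,
              'change_nova_service_state': 2, 'migrate': 3}

actions = [{'action_type': 'migrate'},
           {'action_type': 'nop'},
           {'action_type': 'sleep'}]

to_schedule = sorted(actions, key=lambda a: priorities[a['action_type']])
# -> nop, sleep, migrate (lowest weight first)
```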

View File

@@ -21,8 +21,8 @@ from oslo_config import cfg
from oslo_log import log
from watcher.common import exception
from watcher.common.messaging.messaging_core import MessagingCore
from watcher.common.messaging.notification_handler import NotificationHandler
from watcher.common.messaging import messaging_core
from watcher.common.messaging import notification_handler
from watcher.common import utils
from watcher.decision_engine.manager import decision_engine_opt_group
from watcher.decision_engine.manager import WATCHER_DECISION_ENGINE_OPTS
@@ -35,7 +35,7 @@ CONF.register_group(decision_engine_opt_group)
CONF.register_opts(WATCHER_DECISION_ENGINE_OPTS, decision_engine_opt_group)
class DecisionEngineAPI(MessagingCore):
class DecisionEngineAPI(messaging_core.MessagingCore):
def __init__(self):
super(DecisionEngineAPI, self).__init__(
@@ -44,7 +44,8 @@ class DecisionEngineAPI(MessagingCore):
CONF.watcher_decision_engine.status_topic,
api_version=self.API_VERSION,
)
self.handler = NotificationHandler(self.publisher_id)
self.handler = notification_handler.NotificationHandler(
self.publisher_id)
self.status_topic_handler.add_endpoint(self.handler)
def trigger_audit(self, context, audit_uuid=None):

View File

@@ -87,13 +87,13 @@ class BaseSolution(object):
@abc.abstractmethod
def add_action(self,
action_type,
applies_to,
resource_id,
input_parameters=None):
"""Add a new Action in the Action Plan
:param action_type: the unique id of an action type defined in
entry point 'watcher_actions'
:param applies_to: the unique id of the resource to which the
:param resource_id: the unique id of the resource to which the
`Action` applies.
:param input_parameters: An array of input parameters provided as
key-value pairs of strings. Each key-pair contains names and

View File

@@ -18,12 +18,14 @@
#
from oslo_log import log
from watcher.decision_engine.solution.base import BaseSolution
from watcher.applier.actions import base as baction
from watcher.common import exception
from watcher.decision_engine.solution import base
LOG = log.getLogger(__name__)
class DefaultSolution(BaseSolution):
class DefaultSolution(base.BaseSolution):
def __init__(self):
"""Stores a set of actions generated by a strategy
@@ -34,12 +36,17 @@ class DefaultSolution(BaseSolution):
self._actions = []
def add_action(self, action_type,
applies_to,
input_parameters=None):
# todo(jed) add https://pypi.python.org/pypi/schema
input_parameters=None,
resource_id=None):
if input_parameters is not None:
if baction.BaseAction.RESOURCE_ID in input_parameters.keys():
raise exception.ReservedWord(name=baction.BaseAction.
RESOURCE_ID)
if resource_id is not None:
input_parameters[baction.BaseAction.RESOURCE_ID] = resource_id
action = {
'action_type': action_type,
'applies_to': applies_to,
'input_parameters': input_parameters
}
self._actions.append(action)
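
The new `add_action` logic can be sketched standalone: `resource_id` becomes a reserved key that gets nested into `input_parameters`, and passing it directly is rejected. `ValueError` is used here where the real code raises `exception.ReservedWord`, to keep the sketch dependency-free:

```python
# Standalone sketch of DefaultSolution.add_action from the diff above.

RESOURCE_ID = 'resource_id'

def build_action(action_type, input_parameters=None, resource_id=None):
    input_parameters = dict(input_parameters or {})
    if RESOURCE_ID in input_parameters:
        # The real code raises exception.ReservedWord here.
        raise ValueError("'%s' is a reserved word" % RESOURCE_ID)
    if resource_id is not None:
        input_parameters[RESOURCE_ID] = resource_id
    return {'action_type': action_type,
            'input_parameters': input_parameters}

action = build_action('migrate', {'migration_type': 'live'}, 'uuid-1')
# -> resource_id is nested inside input_parameters
```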

View File

@@ -17,10 +17,10 @@
# limitations under the License.
#
from enum import Enum
import enum
class StrategyLevel(Enum):
class StrategyLevel(enum.Enum):
conservative = "conservative"
balanced = "balanced"
growth = "growth"

View File

@@ -16,22 +16,21 @@
from oslo_log import log
from watcher.common import clients
from watcher.decision_engine.strategy.context.base import BaseStrategyContext
from watcher.decision_engine.strategy.selection.default import \
DefaultStrategySelector
from watcher.metrics_engine.cluster_model_collector.manager import \
CollectorManager
from watcher.decision_engine.strategy.context import base
from watcher.decision_engine.strategy.selection import default
from watcher.metrics_engine.cluster_model_collector import manager
from watcher import objects
LOG = log.getLogger(__name__)
class DefaultStrategyContext(BaseStrategyContext):
class DefaultStrategyContext(base.BaseStrategyContext):
def __init__(self):
super(DefaultStrategyContext, self).__init__()
LOG.debug("Initializing Strategy Context")
self._strategy_selector = DefaultStrategySelector()
self._collector_manager = CollectorManager()
self._strategy_selector = default.DefaultStrategySelector()
self._collector_manager = manager.CollectorManager()
@property
def collector(self):

View File

@@ -21,12 +21,12 @@ from __future__ import unicode_literals
from oslo_log import log
from watcher.common.loader.default import DefaultLoader
from watcher.common.loader import default
LOG = log.getLogger(__name__)
class DefaultStrategyLoader(DefaultLoader):
class DefaultStrategyLoader(default.DefaultLoader):
def __init__(self):
super(DefaultStrategyLoader, self).__init__(
namespace='watcher_strategies')

View File

@@ -38,8 +38,8 @@ from oslo_log import log
import six
from watcher.common import clients
from watcher.decision_engine.solution.default import DefaultSolution
from watcher.decision_engine.strategy.common.level import StrategyLevel
from watcher.decision_engine.solution import default
from watcher.decision_engine.strategy.common import level
LOG = log.getLogger(__name__)
@@ -57,17 +57,17 @@ class BaseStrategy(object):
self._name = name
self.description = description
# default strategy level
self._strategy_level = StrategyLevel.conservative
self._strategy_level = level.StrategyLevel.conservative
self._cluster_state_collector = None
# the solution given by the strategy
self._solution = DefaultSolution()
self._solution = default.DefaultSolution()
self._osc = osc
@abc.abstractmethod
def execute(self, model):
def execute(self, original_model):
"""Execute a strategy
:param model: The name of the strategy to execute (loaded dynamically)
:param original_model: The model the strategy is executed on
:type model: str
:return: A computed solution (via a placement algorithm)
:rtype: :class:`watcher.decision_engine.solution.base.BaseSolution`

View File

@@ -16,22 +16,55 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
*Good server consolidation strategy*
Consolidation of VMs is essential to achieve energy optimization in cloud
environments such as OpenStack. As VMs are spun up and/or moved over time,
it becomes necessary to migrate VMs among servers to lower the costs. However,
migration of VMs introduces runtime overheads and consumes extra energy, thus
a good server consolidation strategy should carefully plan for migration in
order to both minimize energy consumption and comply to the various SLAs.
"""
from oslo_log import log
from watcher._i18n import _LE, _LI, _LW
from watcher.common import exception
from watcher.decision_engine.model.hypervisor_state import HypervisorState
from watcher.decision_engine.model.resource import ResourceType
from watcher.decision_engine.model.vm_state import VMState
from watcher.decision_engine.strategy.strategies.base import BaseStrategy
from watcher.metrics_engine.cluster_history.ceilometer import \
CeilometerClusterHistory
from watcher.decision_engine.model import hypervisor_state as hyper_state
from watcher.decision_engine.model import resource
from watcher.decision_engine.model import vm_state
from watcher.decision_engine.strategy.strategies import base
from watcher.metrics_engine.cluster_history import ceilometer as \
ceilometer_cluster_history
LOG = log.getLogger(__name__)
class BasicConsolidation(BaseStrategy):
class BasicConsolidation(base.BaseStrategy):
"""Basic offline consolidation using live migration
*Description*
This is a server consolidation algorithm which not only minimizes the overall
number of used servers, but also minimizes the number of migrations.
*Requirements*
* You must have at least 2 physical compute nodes to run this strategy.
*Limitations*
- It has been developed only for tests.
- It assumes that the virtual machine and the compute node are on the same
private network.
- It assumes that live migrations are possible.
*Spec URL*
<None>
"""
DEFAULT_NAME = "basic"
DEFAULT_DESCRIPTION = "Basic offline consolidation"
@@ -45,31 +78,11 @@ class BasicConsolidation(BaseStrategy):
osc=None):
"""Basic offline Consolidation using live migration
The basic consolidation algorithm has several limitations.
It has been developed only for tests.
e.g.: The BasicConsolidation assumes that the virtual machine and
the compute node are on the same private network.
Good Strategy :
The workloads of the VMs are changing over time
and often tend to migrate from one physical machine to another.
Hence, the traditional and offline heuristics such as bin packing
are not applicable for VM placement in cloud computing.
So, the decision Engine optimizer provides placement strategy considering
not only the performance effects but also the workload characteristics of
VMs and others metrics like the power consumption and
the tenants constraints (SLAs).
The watcher optimizer uses an online VM placement technique
based on machine learning and meta-heuristics that must handle :
- multi-objectives
- Contradictory objectives
- Adapt to changes dynamically
- Fast convergence
:param name: the name of the strategy
:param description: a description of the strategy
:param osc: an OpenStackClients object
:param name: The name of the strategy (Default: "basic")
:param description: The description of the strategy
(Default: "Basic offline consolidation")
:param osc: An :py:class:`~watcher.common.clients.OpenStackClients`
instance
"""
super(BasicConsolidation, self).__init__(name, description, osc)
@@ -104,12 +117,13 @@ class BasicConsolidation(BaseStrategy):
@property
def ceilometer(self):
if self._ceilometer is None:
self._ceilometer = CeilometerClusterHistory(osc=self.osc)
self._ceilometer = (ceilometer_cluster_history.
CeilometerClusterHistory(osc=self.osc))
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
def ceilometer(self, ceilometer):
self._ceilometer = ceilometer
def compute_attempts(self, size_cluster):
"""Upper bound of the number of migration
@@ -118,13 +132,13 @@ class BasicConsolidation(BaseStrategy):
"""
self.migration_attempts = size_cluster * self.bound_migration
def check_migration(self, model,
def check_migration(self, cluster_data_model,
src_hypervisor,
dest_hypervisor,
vm_to_mig):
"""check if the migration is possible
:param model: the current state of the cluster
:param cluster_data_model: the current state of the cluster
:param src_hypervisor: the current node of the virtual machine
:param dest_hypervisor: the destination of the virtual machine
:param vm_to_mig: the virtual machine
@@ -141,28 +155,32 @@ class BasicConsolidation(BaseStrategy):
total_cores = 0
total_disk = 0
total_mem = 0
cap_cores = model.get_resource_from_id(ResourceType.cpu_cores)
cap_disk = model.get_resource_from_id(ResourceType.disk)
cap_mem = model.get_resource_from_id(ResourceType.memory)
cpu_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.cpu_cores)
disk_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.disk)
memory_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.memory)
for vm_id in model.get_mapping().get_node_vms(dest_hypervisor):
vm = model.get_vm_from_id(vm_id)
total_cores += cap_cores.get_capacity(vm)
total_disk += cap_disk.get_capacity(vm)
total_mem += cap_mem.get_capacity(vm)
for vm_id in cluster_data_model. \
get_mapping().get_node_vms(dest_hypervisor):
vm = cluster_data_model.get_vm_from_id(vm_id)
total_cores += cpu_capacity.get_capacity(vm)
total_disk += disk_capacity.get_capacity(vm)
total_mem += memory_capacity.get_capacity(vm)
# capacity requested by hypervisor
total_cores += cap_cores.get_capacity(vm_to_mig)
total_disk += cap_disk.get_capacity(vm_to_mig)
total_mem += cap_mem.get_capacity(vm_to_mig)
total_cores += cpu_capacity.get_capacity(vm_to_mig)
total_disk += disk_capacity.get_capacity(vm_to_mig)
total_mem += memory_capacity.get_capacity(vm_to_mig)
return self.check_threshold(model,
return self.check_threshold(cluster_data_model,
dest_hypervisor,
total_cores,
total_disk,
total_mem)
def check_threshold(self, model,
def check_threshold(self, cluster_data_model,
dest_hypervisor,
total_cores,
total_disk,
@@ -172,23 +190,23 @@ class BasicConsolidation(BaseStrategy):
Check that the ratio of the aggregated CPU capacity of the VMs on one
node to the CPU capacity of this node does not exceed the threshold
value.
:param dest_hypervisor:
:param cluster_data_model: the current state of the cluster
:param dest_hypervisor: the destination of the virtual machine
:param total_cores:
:param total_disk:
:param total_mem:
:return: True if the threshold is not exceeded
"""
cap_cores = model.get_resource_from_id(ResourceType.cpu_cores)
cap_disk = model.get_resource_from_id(ResourceType.disk)
cap_mem = model.get_resource_from_id(ResourceType.memory)
# available
cores_available = cap_cores.get_capacity(dest_hypervisor)
disk_available = cap_disk.get_capacity(dest_hypervisor)
mem_available = cap_mem.get_capacity(dest_hypervisor)
cpu_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.cpu_cores).get_capacity(dest_hypervisor)
disk_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.disk).get_capacity(dest_hypervisor)
memory_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.memory).get_capacity(dest_hypervisor)
if cores_available >= total_cores * self.threshold_cores \
and disk_available >= total_disk * self.threshold_disk \
and mem_available >= total_mem * self.threshold_mem:
if (cpu_capacity >= total_cores * self.threshold_cores and
disk_capacity >= total_disk * self.threshold_disk and
memory_capacity >= total_mem * self.threshold_mem):
return True
else:
return False
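The threshold logic above can be sketched as a standalone snippet: the demand already placed on the destination hypervisor (plus the VM being migrated) must fit under each capacity scaled by its threshold. The dictionaries and numbers below are invented for illustration and do not come from the Watcher model.

```python
# Hedged sketch of check_threshold: every requested total must satisfy
# capacity >= total * threshold for cores, disk and memory.

def check_threshold(capacity, totals, thresholds):
    """Return True when the destination can absorb the aggregated demand."""
    return all(
        capacity[res] >= totals[res] * thresholds[res]
        for res in ("cores", "disk", "mem")
    )

capacity = {"cores": 16, "disk": 200, "mem": 64}    # destination hypervisor
totals = {"cores": 10, "disk": 120, "mem": 40}      # demand incl. vm_to_mig
thresholds = {"cores": 1.0, "disk": 1.0, "mem": 1.0}

print(check_threshold(capacity, totals, thresholds))  # True: everything fits
```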
@@ -214,25 +232,26 @@ class BasicConsolidation(BaseStrategy):
def get_number_of_migrations(self):
return self.number_of_migrations
def calculate_weight(self, model, element, total_cores_used,
total_disk_used, total_memory_used):
def calculate_weight(self, cluster_data_model, element,
total_cores_used, total_disk_used,
total_memory_used):
"""Calculate weight of every resource
:param model:
:param cluster_data_model:
:param element:
:param total_cores_used:
:param total_disk_used:
:param total_memory_used:
:return:
"""
cpu_capacity = model.get_resource_from_id(
ResourceType.cpu_cores).get_capacity(element)
cpu_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.cpu_cores).get_capacity(element)
disk_capacity = model.get_resource_from_id(
ResourceType.disk).get_capacity(element)
disk_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.disk).get_capacity(element)
memory_capacity = model.get_resource_from_id(
ResourceType.memory).get_capacity(element)
memory_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.memory).get_capacity(element)
score_cores = (1 - (float(cpu_capacity) - float(total_cores_used)) /
float(cpu_capacity))
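The score expression above simplifies algebraically: `1 - (capacity - used) / capacity` is just `used / capacity`, i.e. the utilization ratio in [0, 1]. A minimal standalone check, with made-up numbers:

```python
# Sketch of the per-resource score used by calculate_weight.

def score(capacity, used):
    # Identical to the strategy's formula; reduces to used / capacity.
    return 1 - (float(capacity) - float(used)) / float(capacity)

print(score(16, 4))  # 0.25 — a quarter of the 16 cores are in use
```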
@@ -258,25 +277,25 @@ class BasicConsolidation(BaseStrategy):
:return:
"""
resource_id = "%s_%s" % (hypervisor.uuid, hypervisor.hostname)
cpu_avg_vm = self.ceilometer. \
vm_avg_cpu_util = self.ceilometer. \
statistic_aggregation(resource_id=resource_id,
meter_name=self.HOST_CPU_USAGE_METRIC_NAME,
period="7200",
aggregate='avg'
)
if cpu_avg_vm is None:
if vm_avg_cpu_util is None:
LOG.error(
_LE("No values returned by %(resource_id)s "
"for %(metric_name)s"),
resource_id=resource_id,
metric_name=self.HOST_CPU_USAGE_METRIC_NAME,
)
cpu_avg_vm = 100
vm_avg_cpu_util = 100
cpu_capacity = model.get_resource_from_id(
ResourceType.cpu_cores).get_capacity(hypervisor)
resource.ResourceType.cpu_cores).get_capacity(hypervisor)
total_cores_used = cpu_capacity * (cpu_avg_vm / 100)
total_cores_used = cpu_capacity * (vm_avg_cpu_util / 100)
return self.calculate_weight(model, hypervisor, total_cores_used,
0,
@@ -294,14 +313,14 @@ class BasicConsolidation(BaseStrategy):
else:
return 0
def calculate_score_vm(self, vm, model):
def calculate_score_vm(self, vm, cluster_data_model):
"""Calculate Score of virtual machine
:param vm: the virtual machine
:param model: the model
:param cluster_data_model: the cluster model
:return: score
"""
if model is None:
if cluster_data_model is None:
raise exception.ClusterStateNotDefined()
vm_cpu_utilization = self.ceilometer. \
@@ -320,23 +339,22 @@ class BasicConsolidation(BaseStrategy):
)
vm_cpu_utilization = 100
cpu_capacity = model.get_resource_from_id(
ResourceType.cpu_cores).get_capacity(vm)
cpu_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.cpu_cores).get_capacity(vm)
total_cores_used = cpu_capacity * (vm_cpu_utilization / 100.0)
return self.calculate_weight(model, vm, total_cores_used,
0,
0)
return self.calculate_weight(cluster_data_model, vm,
total_cores_used, 0, 0)
def add_change_service_state(self, applies_to, state):
def add_change_service_state(self, resource_id, state):
parameters = {'state': state}
self.solution.add_action(action_type=self.CHANGE_NOVA_SERVICE_STATE,
applies_to=applies_to,
resource_id=resource_id,
input_parameters=parameters)
def add_migration(self,
applies_to,
resource_id,
migration_type,
src_hypervisor,
dst_hypervisor):
@@ -344,17 +362,19 @@ class BasicConsolidation(BaseStrategy):
'src_hypervisor': src_hypervisor,
'dst_hypervisor': dst_hypervisor}
self.solution.add_action(action_type=self.MIGRATION,
applies_to=applies_to,
resource_id=resource_id,
input_parameters=parameters)
def score_of_nodes(self, current_model, score):
def score_of_nodes(self, cluster_data_model, score):
"""Calculate score of nodes based on load by VMs"""
for hypervisor_id in current_model.get_all_hypervisors():
hypervisor = current_model.get_hypervisor_from_id(hypervisor_id)
count = current_model.get_mapping(). \
for hypervisor_id in cluster_data_model.get_all_hypervisors():
hypervisor = cluster_data_model. \
get_hypervisor_from_id(hypervisor_id)
count = cluster_data_model.get_mapping(). \
get_node_vms_from_id(hypervisor_id)
if len(count) > 0:
result = self.calculate_score_node(hypervisor, current_model)
result = self.calculate_score_node(hypervisor,
cluster_data_model)
else:
''' the hypervisor has no VMs '''
result = 0
@@ -362,16 +382,16 @@ class BasicConsolidation(BaseStrategy):
score.append((hypervisor_id, result))
return score
def node_and_vm_score(self, s, score, current_model):
def node_and_vm_score(self, sorted_score, score, current_model):
"""Get List of VMs from Node"""
node_to_release = s[len(score) - 1][0]
node_to_release = sorted_score[len(score) - 1][0]
vms_to_mig = current_model.get_mapping().get_node_vms_from_id(
node_to_release)
vm_score = []
for vm_id in vms_to_mig:
vm = current_model.get_vm_from_id(vm_id)
if vm.state == VMState.ACTIVE.value:
if vm.state == vm_state.VMState.ACTIVE.value:
vm_score.append(
(vm_id, self.calculate_score_vm(vm, current_model)))
@@ -390,51 +410,53 @@ class BasicConsolidation(BaseStrategy):
mig_src_hypervisor)) == 0:
self.add_change_service_state(mig_src_hypervisor.
uuid,
HypervisorState.
OFFLINE.value)
hyper_state.HypervisorState.
DISABLED.value)
self.number_of_released_nodes += 1
def calculate_m(self, v, current_model, node_to_release, s):
m = 0
for vm in v:
for j in range(0, len(s)):
def calculate_num_migrations(self, sorted_vms, current_model,
node_to_release, sorted_score):
number_migrations = 0
for vm in sorted_vms:
for j in range(0, len(sorted_score)):
mig_vm = current_model.get_vm_from_id(vm[0])
mig_src_hypervisor = current_model.get_hypervisor_from_id(
node_to_release)
mig_dst_hypervisor = current_model.get_hypervisor_from_id(
s[j][0])
sorted_score[j][0])
result = self.check_migration(current_model,
mig_src_hypervisor,
mig_dst_hypervisor, mig_vm)
if result is True:
if result:
self.create_migration_vm(
current_model, mig_vm,
mig_src_hypervisor, mig_dst_hypervisor)
m += 1
number_migrations += 1
break
return m
return number_migrations
def unsuccessful_migration_actualization(self, m, unsuccessful_migration):
if m > 0:
self.number_of_migrations += m
def unsuccessful_migration_actualization(self, number_migrations,
unsuccessful_migration):
if number_migrations > 0:
self.number_of_migrations += number_migrations
return 0
else:
return unsuccessful_migration + 1
def execute(self, orign_model):
def execute(self, original_model):
LOG.info(_LI("Initializing Sercon Consolidation"))
if orign_model is None:
if original_model is None:
raise exception.ClusterStateNotDefined()
# todo(jed) clone model
current_model = orign_model
current_model = original_model
self.efficacy = 100
unsuccessful_migration = 0
first = True
first_migration = True
size_cluster = len(current_model.get_all_hypervisors())
if size_cluster == 0:
raise exception.ClusterEmpty()
@@ -446,24 +468,24 @@ class BasicConsolidation(BaseStrategy):
count = current_model.get_mapping(). \
get_node_vms_from_id(hypervisor_id)
if len(count) == 0:
if hypervisor.state == HypervisorState.ONLINE:
if hypervisor.state == hyper_state.HypervisorState.ENABLED:
self.add_change_service_state(hypervisor_id,
HypervisorState.
OFFLINE.value)
hyper_state.HypervisorState.
DISABLED.value)
while self.get_allowed_migration_attempts() >= unsuccessful_migration:
if first is not True:
if not first_migration:
self.efficacy = self.calculate_migration_efficacy()
if self.efficacy < float(self.target_efficacy):
break
first = False
first_migration = False
score = []
score = self.score_of_nodes(current_model, score)
''' sort compute nodes by Score decreasing '''
s = sorted(score, reverse=True, key=lambda x: (x[1]))
LOG.debug("Hypervisor(s) BFD {0}".format(s))
sorted_score = sorted(score, reverse=True, key=lambda x: (x[1]))
LOG.debug("Hypervisor(s) BFD {0}".format(sorted_score))
''' get Node to be released '''
if len(score) == 0:
@@ -473,17 +495,18 @@ class BasicConsolidation(BaseStrategy):
break
node_to_release, vm_score = self.node_and_vm_score(
s, score, current_model)
sorted_score, score, current_model)
''' sort VMs by Score '''
v = sorted(vm_score, reverse=True, key=lambda x: (x[1]))
sorted_vms = sorted(vm_score, reverse=True, key=lambda x: (x[1]))
# BFD: Best Fit Decrease
LOG.debug("VM(s) BFD {0}".format(v))
LOG.debug("VM(s) BFD {0}".format(sorted_vms))
m = self.calculate_m(v, current_model, node_to_release, s)
migrations = self.calculate_num_migrations(
sorted_vms, current_model, node_to_release, sorted_score)
unsuccessful_migration = self.unsuccessful_migration_actualization(
m, unsuccessful_migration)
migrations, unsuccessful_migration)
infos = {
"number_of_migrations": self.number_of_migrations,
"number_of_nodes_released": self.number_of_released_nodes,


@@ -18,12 +18,32 @@
#
from oslo_log import log
from watcher.decision_engine.strategy.strategies.base import BaseStrategy
from watcher.decision_engine.strategy.strategies import base
LOG = log.getLogger(__name__)
class DummyStrategy(BaseStrategy):
class DummyStrategy(base.BaseStrategy):
"""Dummy strategy used for integration testing via Tempest
*Description*
This strategy does not provide any useful optimization. Indeed, its only
purpose is to be used by Tempest tests.
*Requirements*
<None>
*Limitations*
Do not use in production.
*Spec URL*
<None>
"""
DEFAULT_NAME = "dummy"
DEFAULT_DESCRIPTION = "Dummy Strategy"
@@ -34,18 +54,16 @@ class DummyStrategy(BaseStrategy):
osc=None):
super(DummyStrategy, self).__init__(name, description, osc)
def execute(self, model):
def execute(self, original_model):
LOG.debug("Executing Dummy strategy")
parameters = {'message': 'hello World'}
self.solution.add_action(action_type=self.NOP,
applies_to="",
input_parameters=parameters)
parameters = {'message': 'Welcome'}
self.solution.add_action(action_type=self.NOP,
applies_to="",
input_parameters=parameters)
self.solution.add_action(action_type=self.SLEEP,
applies_to="",
input_parameters={'duration': '5'})
input_parameters={'duration': 5.0})
return self.solution


@@ -16,20 +16,60 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
*Good Thermal Strategy*:
Toward software-defined infrastructure, power and thermal
intelligence is being adopted to optimize workloads, which can help
improve efficiency, reduce power consumption, improve datacenter PUE,
and lower operating costs in the data center.
Outlet (exhaust air) temperature is one of the important thermal
telemetries used to measure the thermal/workload status of a server.
"""
from oslo_log import log
from watcher._i18n import _LE
from watcher.common import exception as wexc
from watcher.decision_engine.model.resource import ResourceType
from watcher.decision_engine.model.vm_state import VMState
from watcher.decision_engine.strategy.strategies.base import BaseStrategy
from watcher.metrics_engine.cluster_history.ceilometer import \
CeilometerClusterHistory
from watcher.decision_engine.model import resource
from watcher.decision_engine.model import vm_state
from watcher.decision_engine.strategy.strategies import base
from watcher.metrics_engine.cluster_history import ceilometer as ceil
LOG = log.getLogger(__name__)
class OutletTempControl(BaseStrategy):
class OutletTempControl(base.BaseStrategy):
"""[PoC] Outlet temperature control using live migration
*Description*
It is a migration strategy based on the outlet temperature of compute
hosts. It generates solutions to move a workload whenever a server's
outlet temperature is higher than the specified threshold.
*Requirements*
* Hardware: All compute hosts should support IPMI and PTAS technology
* Software: Ceilometer component ceilometer-agent-ipmi running
in each compute host, and Ceilometer API can report such telemetry
``hardware.ipmi.node.outlet_temperature`` successfully.
* You must have at least 2 physical compute hosts to run this strategy.
*Limitations*
- This is a proof of concept that is not meant to be used in production
- We cannot forecast how many servers should be migrated. This is the
reason why we only plan a single virtual machine migration at a time.
So it's better to use this algorithm with `CONTINUOUS` audits.
- It assumes that live migrations are possible
*Spec URL*
https://github.com/openstack/watcher-specs/blob/master/specs/mitaka/approved/outlet-temperature-based-strategy.rst
""" # noqa
DEFAULT_NAME = "outlet_temp_control"
DEFAULT_DESCRIPTION = "outlet temperature based migration strategy"
@@ -42,29 +82,7 @@ class OutletTempControl(BaseStrategy):
def __init__(self, name=DEFAULT_NAME, description=DEFAULT_DESCRIPTION,
osc=None):
"""[PoC]Outlet temperature control using live migration
It is a migration strategy based on the Outlet Temperature of physical
servers. It generates solutions to move a workload whenever a servers
outlet temperature is higher than the specified threshold. As of now,
we cannot forecast how many instances should be migrated. This is the
reason why we simply plan a single virtual machine migration.
So it's better to use this algorithm with CONTINUOUS audits.
Requirements:
* Hardware: computer node should support IPMI and PTAS technology
* Software: Ceilometer component ceilometer-agent-ipmi running
in each compute node, and Ceilometer API can report such telemetry
"hardware.ipmi.node.outlet_temperature" successfully.
* You must have at least 2 physical compute nodes to run this strategy.
Good Strategy:
Towards to software defined infrastructure, the power and thermal
intelligences is being adopted to optimize workload, which can help
improve efficiency, reduce power, as well as to improve datacenter PUE
and lower down operation cost in data center.
Outlet(Exhaust Air) Temperature is one of the important thermal
telemetries to measure thermal/workload status of server.
"""Outlet temperature control using live migration
:param name: the name of the strategy
:param description: a description of the strategy
@@ -81,32 +99,33 @@ class OutletTempControl(BaseStrategy):
@property
def ceilometer(self):
if self._ceilometer is None:
self._ceilometer = CeilometerClusterHistory(osc=self.osc)
self._ceilometer = ceil.CeilometerClusterHistory(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
def calc_used_res(self, model, hypervisor, cap_cores, cap_mem, cap_disk):
def calc_used_res(self, cluster_data_model, hypervisor, cpu_capacity,
memory_capacity, disk_capacity):
'''calculate the used vcpus, memory and disk based on VM flavors'''
vms = model.get_mapping().get_node_vms(hypervisor)
vms = cluster_data_model.get_mapping().get_node_vms(hypervisor)
vcpus_used = 0
memory_mb_used = 0
disk_gb_used = 0
if len(vms) > 0:
for vm_id in vms:
vm = model.get_vm_from_id(vm_id)
vcpus_used += cap_cores.get_capacity(vm)
memory_mb_used += cap_mem.get_capacity(vm)
disk_gb_used += cap_disk.get_capacity(vm)
vm = cluster_data_model.get_vm_from_id(vm_id)
vcpus_used += cpu_capacity.get_capacity(vm)
memory_mb_used += memory_capacity.get_capacity(vm)
disk_gb_used += disk_capacity.get_capacity(vm)
return vcpus_used, memory_mb_used, disk_gb_used
def group_hosts_by_outlet_temp(self, model):
'''Group hosts based on outlet temp meters'''
def group_hosts_by_outlet_temp(self, cluster_data_model):
"""Group hosts based on outlet temp meters"""
hypervisors = model.get_all_hypervisors()
hypervisors = cluster_data_model.get_all_hypervisors()
size_cluster = len(hypervisors)
if size_cluster == 0:
raise wexc.ClusterEmpty()
@@ -114,7 +133,8 @@ class OutletTempControl(BaseStrategy):
hosts_need_release = []
hosts_target = []
for hypervisor_id in hypervisors:
hypervisor = model.get_hypervisor_from_id(hypervisor_id)
hypervisor = cluster_data_model.get_hypervisor_from_id(
hypervisor_id)
resource_id = hypervisor.uuid
outlet_temp = self.ceilometer.statistic_aggregation(
@@ -136,18 +156,19 @@ class OutletTempControl(BaseStrategy):
hosts_target.append(hvmap)
return hosts_need_release, hosts_target
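The grouping step above partitions hosts into two buckets around the temperature threshold. A standalone sketch, with made-up telemetry values instead of Ceilometer aggregates:

```python
# Hedged sketch of group_hosts_by_outlet_temp: hosts hotter than the
# threshold become release candidates, the rest are migration targets.

def group_hosts(outlet_temps, threshold):
    hosts_need_release, hosts_target = [], []
    for host, temp in outlet_temps.items():
        entry = {"hv": host, "outlet_temp": temp}
        if temp > threshold:
            hosts_need_release.append(entry)
        else:
            hosts_target.append(entry)
    return hosts_need_release, hosts_target

temps = {"node-1": 36.5, "node-2": 28.1, "node-3": 33.0}
release, target = group_hosts(temps, threshold=30.0)
print([h["hv"] for h in release])  # ['node-1', 'node-3']
print([h["hv"] for h in target])   # ['node-2']
```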
def choose_vm_to_migrate(self, model, hosts):
'''pick up an active vm instance to migrate from provided hosts'''
def choose_vm_to_migrate(self, cluster_data_model, hosts):
"""pick up an active vm instance to migrate from provided hosts"""
for hvmap in hosts:
mig_src_hypervisor = hvmap['hv']
vms_of_src = model.get_mapping().get_node_vms(mig_src_hypervisor)
vms_of_src = cluster_data_model.get_mapping().get_node_vms(
mig_src_hypervisor)
if len(vms_of_src) > 0:
for vm_id in vms_of_src:
try:
# select the first active VM to migrate
vm = model.get_vm_from_id(vm_id)
if vm.state != VMState.ACTIVE.value:
vm = cluster_data_model.get_vm_from_id(vm_id)
if vm.state != vm_state.VMState.ACTIVE.value:
LOG.info(_LE("VM not active, skipped: %s"),
vm.uuid)
continue
@@ -158,44 +179,45 @@ class OutletTempControl(BaseStrategy):
return None
def filter_dest_servers(self, model, hosts, vm_to_migrate):
'''Only return hosts with sufficient available resources'''
def filter_dest_servers(self, cluster_data_model, hosts, vm_to_migrate):
"""Only return hosts with sufficient available resources"""
cap_cores = model.get_resource_from_id(ResourceType.cpu_cores)
cap_disk = model.get_resource_from_id(ResourceType.disk)
cap_mem = model.get_resource_from_id(ResourceType.memory)
cpu_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.cpu_cores)
disk_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.disk)
memory_capacity = cluster_data_model.get_resource_from_id(
resource.ResourceType.memory)
required_cores = cap_cores.get_capacity(vm_to_migrate)
required_disk = cap_disk.get_capacity(vm_to_migrate)
required_mem = cap_mem.get_capacity(vm_to_migrate)
required_cores = cpu_capacity.get_capacity(vm_to_migrate)
required_disk = disk_capacity.get_capacity(vm_to_migrate)
required_memory = memory_capacity.get_capacity(vm_to_migrate)
# filter hypervisors without enough resource
dest_servers = []
for hvmap in hosts:
host = hvmap['hv']
# available
cores_used, mem_used, disk_used = self.calc_used_res(model,
host,
cap_cores,
cap_mem,
cap_disk)
cores_available = cap_cores.get_capacity(host) - cores_used
disk_available = cap_disk.get_capacity(host) - mem_used
mem_available = cap_mem.get_capacity(host) - disk_used
cores_used, mem_used, disk_used = self.calc_used_res(
cluster_data_model, host, cpu_capacity, memory_capacity,
disk_capacity)
cores_available = cpu_capacity.get_capacity(host) - cores_used
disk_available = disk_capacity.get_capacity(host) - disk_used
mem_available = memory_capacity.get_capacity(host) - mem_used
if cores_available >= required_cores \
and disk_available >= required_disk \
and mem_available >= required_mem:
and mem_available >= required_memory:
dest_servers.append(hvmap)
return dest_servers
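The destination filter above keeps only hosts whose free cores, disk and memory all cover what the migrating VM requires. A simplified standalone sketch with invented capacities:

```python
# Hedged sketch of filter_dest_servers over a plain-dict host model.

def filter_dest_servers(hosts, required):
    dest = []
    for h in hosts:
        free = {k: h["capacity"][k] - h["used"][k] for k in required}
        if all(free[k] >= required[k] for k in required):
            dest.append(h["name"])
    return dest

hosts = [
    {"name": "hv-a", "capacity": {"cores": 16, "disk": 200, "mem": 64},
     "used": {"cores": 15, "disk": 50, "mem": 60}},   # too few free cores
    {"name": "hv-b", "capacity": {"cores": 16, "disk": 200, "mem": 64},
     "used": {"cores": 4, "disk": 50, "mem": 16}},
]
required = {"cores": 2, "disk": 20, "mem": 8}
print(filter_dest_servers(hosts, required))  # ['hv-b']
```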
def execute(self, orign_model):
def execute(self, original_model):
LOG.debug("Initializing Outlet temperature strategy")
if orign_model is None:
if original_model is None:
raise wexc.ClusterStateNotDefined()
current_model = orign_model
current_model = original_model
hosts_need_release, hosts_target = self.group_hosts_by_outlet_temp(
current_model)
@@ -239,10 +261,10 @@ class OutletTempControl(BaseStrategy):
mig_src_hypervisor,
mig_dst_hypervisor):
parameters = {'migration_type': 'live',
'src_hypervisor': mig_src_hypervisor,
'dst_hypervisor': mig_dst_hypervisor}
'src_hypervisor': mig_src_hypervisor.uuid,
'dst_hypervisor': mig_dst_hypervisor.uuid}
self.solution.add_action(action_type=self.MIGRATION,
applies_to=vm_src,
resource_id=vm_src.uuid,
input_parameters=parameters)
self.solution.model = current_model


@@ -17,21 +17,56 @@
from __future__ import unicode_literals
import importlib
import inspect
from docutils import nodes
from docutils.parsers import rst
from docutils import statemachine as sm
from docutils import statemachine
from stevedore import extension
from watcher.version import version_info
import textwrap
class BaseWatcherDirective(rst.Directive):
def __init__(self, name, arguments, options, content, lineno,
content_offset, block_text, state, state_machine):
super(BaseWatcherDirective, self).__init__(
name, arguments, options, content, lineno,
content_offset, block_text, state, state_machine)
self.result = statemachine.ViewList()
def run(self):
raise NotImplementedError('Must override run() in subclass.')
def add_line(self, line, *lineno):
"""Append one line of generated reST to the output."""
self.result.append(line, rst.directives.unchanged, *lineno)
def add_textblock(self, textblock):
for line in textblock.splitlines():
self.add_line(line)
def add_object_docstring(self, obj):
obj_raw_docstring = obj.__doc__ or ""
# Maybe it's within the __init__
if not obj_raw_docstring and hasattr(obj, "__init__"):
if obj.__init__.__doc__:
obj_raw_docstring = obj.__init__.__doc__
if not obj_raw_docstring:
# Raise a warning to make the tests fail with doc8
raise self.error("No docstring available for this plugin!")
obj_docstring = inspect.cleandoc(obj_raw_docstring)
self.add_textblock(obj_docstring)
class WatcherTerm(rst.Directive):
class WatcherTerm(BaseWatcherDirective):
"""Directive to import an RST formatted docstring into the Watcher glossary
How to use it
-------------
**How to use it**
# inside your .py file
class DocumentedObject(object):
@@ -47,17 +82,7 @@ class WatcherTerm(rst.Directive):
# You need to put an import path as an argument for this directive to work
required_arguments = 1
def add_textblock(self, textblock, *lineno):
for line in textblock.splitlines():
self.add_line(line)
def add_line(self, line, *lineno):
"""Append one line of generated reST to the output."""
self.result.append(line, rst.directives.unchanged, *lineno)
def run(self):
self.result = sm.ViewList()
cls_path = self.arguments[0]
try:
@@ -65,20 +90,82 @@ class WatcherTerm(rst.Directive):
except Exception as exc:
raise self.error(exc)
self.add_class_docstring(cls)
self.add_object_docstring(cls)
node = nodes.paragraph()
node.document = self.state.document
self.state.nested_parse(self.result, 0, node)
return node.children
def add_class_docstring(self, cls):
# Added 4 spaces to align the first line with the rest of the text
# to be able to dedent it correctly
cls_docstring = textwrap.dedent("%s%s" % (" " * 4, cls.__doc__))
self.add_textblock(cls_docstring)
class DriversDoc(BaseWatcherDirective):
"""Directive to import an RST formatted docstring into the Watcher doc
This directive imports the RST formatted docstring of every driver declared
within an entry point namespace provided as argument
**How to use it**
# inside your .py file
class DocumentedClassReferencedInEntrypoint(object):
'''My *.rst* docstring'''
def foo(self):
'''Foo docstring'''
# Inside your .rst file
.. drivers-doc:: entrypoint_namespace
:append_methods_doc: foo
This directive will then import the docstring and interpret it.
Note that no section/sub-section can be imported via this directive,
as this is a Sphinx restriction.
"""
# You need to put an import path as an argument for this directive to work
required_arguments = 1
optional_arguments = 0
final_argument_whitespace = True
has_content = False
option_spec = dict(
# CSV formatted list of method names whose return values will be zipped
# together in the given order
append_methods_doc=lambda opts: [
opt.strip() for opt in opts.split(",") if opt.strip()],
# By default, we always start by adding the driver object docstring
exclude_driver_docstring=rst.directives.flag,
)
def run(self):
ext_manager = extension.ExtensionManager(namespace=self.arguments[0])
extensions = ext_manager.extensions
# Aggregates drivers based on their module name (i.e import path)
classes = [(ext.name, ext.plugin) for ext in extensions]
for name, cls in classes:
self.add_line(".. rubric:: %s" % name)
self.add_line("")
if "exclude_driver_docstring" not in self.options:
self.add_object_docstring(cls)
self.add_line("")
for method_name in self.options.get("append_methods_doc", []):
if hasattr(cls, method_name):
method = getattr(cls, method_name)
method_result = method()
self.add_textblock(inspect.cleandoc(method_result))
self.add_line("")
node = nodes.paragraph()
node.document = self.state.document
self.state.nested_parse(self.result, 0, node)
return node.children
def setup(app):
app.add_directive('drivers-doc', DriversDoc)
app.add_directive('watcher-term', WatcherTerm)
return {'version': version_info.version_string()}
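The docstring-extraction step performed by `add_object_docstring` above (fall back to `__init__`, dedent with `inspect.cleandoc`, emit line by line) can be isolated from docutils. The class and helper below are hypothetical illustrations, not Watcher code:

```python
# Hedged sketch of the docstring extraction done by add_object_docstring.
import inspect

class ExamplePlugin(object):
    """*Example* plugin docstring.

    Spans several lines."""

def object_docstring_lines(obj):
    raw = obj.__doc__ or ""
    # Maybe the docstring lives on __init__ instead.
    if not raw and hasattr(obj, "__init__"):
        raw = obj.__init__.__doc__ or ""
    if not raw:
        # The directive raises self.error here; a plain exception suffices.
        raise ValueError("No docstring available for this plugin!")
    return inspect.cleandoc(raw).splitlines()

print(object_docstring_lines(ExamplePlugin))
# ['*Example* plugin docstring.', '', 'Spans several lines.']
```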


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: python-watcher 0.21.1.dev32\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2016-01-26 11:26+0100\n"
"POT-Creation-Date: 2016-02-09 09:07+0100\n"
"PO-Revision-Date: 2015-12-11 15:42+0100\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: fr\n"
@@ -24,7 +24,7 @@ msgstr ""
msgid "Invalid state: %(state)s"
msgstr "État invalide : %(state)s"
#: watcher/api/controllers/v1/action_plan.py:420
#: watcher/api/controllers/v1/action_plan.py:422
#, python-format
msgid "State transition not allowed: (%(initial_state)s -> %(new_state)s)"
msgstr "Transition d'état non autorisée : (%(initial_state)s -> %(new_state)s)"
@@ -85,21 +85,30 @@ msgstr ""
msgid "Error parsing HTTP response: %s"
msgstr ""
#: watcher/applier/actions/change_nova_service_state.py:58
#: watcher/applier/actions/change_nova_service_state.py:69
msgid "The target state is not defined"
msgstr ""
#: watcher/applier/workflow_engine/default.py:126
#: watcher/applier/actions/migration.py:43
msgid "The parameter resource_id is invalid."
msgstr "Le paramètre resource_id est invalide"
#: watcher/applier/actions/migration.py:86
#, python-format
msgid "Migration of type %(migration_type)s is not supported."
msgstr ""
#: watcher/applier/workflow_engine/default.py:128
#, python-format
msgid "The WorkFlow Engine has failed to execute the action %s"
msgstr "Le moteur de workflow a echoué lors de l'éxécution de l'action %s"
#: watcher/applier/workflow_engine/default.py:144
#: watcher/applier/workflow_engine/default.py:146
#, python-format
msgid "Revert action %s"
msgstr "Annulation de l'action %s"
#: watcher/applier/workflow_engine/default.py:150
#: watcher/applier/workflow_engine/default.py:152
msgid "Oops! We need disaster recover plan"
msgstr "Oops! Nous avons besoin d'un plan de reprise d'activité"
@@ -119,184 +128,210 @@ msgstr "Sert sur 0.0.0.0:%(port)s, accessible à http://127.0.0.1:%(port)s"
msgid "serving on http://%(host)s:%(port)s"
msgstr "Sert sur http://%(host)s:%(port)s"
#: watcher/common/exception.py:51
#: watcher/common/clients.py:29
msgid "Version of Nova API to use in novaclient."
msgstr ""
#: watcher/common/clients.py:34
msgid "Version of Glance API to use in glanceclient."
msgstr ""
#: watcher/common/clients.py:39
msgid "Version of Cinder API to use in cinderclient."
msgstr ""
#: watcher/common/clients.py:44
msgid "Version of Ceilometer API to use in ceilometerclient."
msgstr ""
#: watcher/common/clients.py:50
msgid "Version of Neutron API to use in neutronclient."
msgstr ""
#: watcher/common/exception.py:59
#, python-format
msgid "Unexpected keystone client error occurred: %s"
msgstr ""
#: watcher/common/exception.py:72
msgid "An unknown exception occurred"
msgstr ""
#: watcher/common/exception.py:71
#: watcher/common/exception.py:92
msgid "Exception in string format operation"
msgstr ""
#: watcher/common/exception.py:101
#: watcher/common/exception.py:122
msgid "Not authorized"
msgstr ""
#: watcher/common/exception.py:106
#: watcher/common/exception.py:127
msgid "Operation not permitted"
msgstr ""
#: watcher/common/exception.py:110
#: watcher/common/exception.py:131
msgid "Unacceptable parameters"
msgstr ""
#: watcher/common/exception.py:115
#: watcher/common/exception.py:136
#, python-format
msgid "The %(name)s %(id)s could not be found"
msgstr ""
#: watcher/common/exception.py:119
#: watcher/common/exception.py:140
#, fuzzy
msgid "Conflict"
msgstr "Conflit"
#: watcher/common/exception.py:124
#: watcher/common/exception.py:145
#, python-format
msgid "The %(name)s resource %(id)s could not be found"
msgstr "La ressource %(name)s / %(id)s est introuvable"
#: watcher/common/exception.py:129
#: watcher/common/exception.py:150
#, python-format
msgid "Expected an uuid or int but received %(identity)s"
msgstr ""
#: watcher/common/exception.py:133
#: watcher/common/exception.py:154
#, python-format
msgid "Goal %(goal)s is not defined in Watcher configuration file"
msgstr ""
#: watcher/common/exception.py:139
#, python-format
msgid "%(err)s"
msgstr "%(err)s"
#: watcher/common/exception.py:143
#: watcher/common/exception.py:158
#, python-format
msgid "Expected a uuid but received %(uuid)s"
msgstr ""
#: watcher/common/exception.py:147
#: watcher/common/exception.py:162
#, python-format
msgid "Expected a logical name but received %(name)s"
msgstr ""
#: watcher/common/exception.py:151
#: watcher/common/exception.py:166
#, python-format
msgid "Expected a logical name or uuid but received %(name)s"
msgstr ""
#: watcher/common/exception.py:155
#: watcher/common/exception.py:170
#, python-format
msgid "AuditTemplate %(audit_template)s could not be found"
msgstr ""
#: watcher/common/exception.py:159
#: watcher/common/exception.py:174
#, python-format
msgid "An audit_template with UUID %(uuid)s or name %(name)s already exists"
msgstr ""
#: watcher/common/exception.py:164
#: watcher/common/exception.py:179
#, python-format
msgid "AuditTemplate %(audit_template)s is referenced by one or multiple audit"
msgstr ""
#: watcher/common/exception.py:169
#: watcher/common/exception.py:184
#, python-format
msgid "Audit %(audit)s could not be found"
msgstr ""
#: watcher/common/exception.py:173
#: watcher/common/exception.py:188
#, python-format
msgid "An audit with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:177
#: watcher/common/exception.py:192
#, python-format
msgid "Audit %(audit)s is referenced by one or multiple action plans"
msgstr ""
#: watcher/common/exception.py:182
#: watcher/common/exception.py:197
#, python-format
msgid "ActionPlan %(action_plan)s could not be found"
msgstr ""
#: watcher/common/exception.py:186
#: watcher/common/exception.py:201
#, python-format
msgid "An action plan with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:190
#: watcher/common/exception.py:205
#, python-format
msgid "Action Plan %(action_plan)s is referenced by one or multiple actions"
msgstr ""
#: watcher/common/exception.py:195
#: watcher/common/exception.py:210
#, python-format
msgid "Action %(action)s could not be found"
msgstr ""
#: watcher/common/exception.py:199
#: watcher/common/exception.py:214
#, python-format
msgid "An action with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:203
#: watcher/common/exception.py:218
#, python-format
msgid "Action plan %(action_plan)s is referenced by one or multiple goals"
msgstr ""
#: watcher/common/exception.py:208
#: watcher/common/exception.py:223
msgid "Filtering actions on both audit and action-plan is prohibited"
msgstr ""
#: watcher/common/exception.py:217
#: watcher/common/exception.py:232
#, python-format
msgid "Couldn't apply patch '%(patch)s'. Reason: %(reason)s"
msgstr ""
#: watcher/common/exception.py:224
#: watcher/common/exception.py:239
msgid "Illegal argument"
msgstr ""
#: watcher/common/exception.py:228
#: watcher/common/exception.py:243
msgid "No such metric"
msgstr ""
#: watcher/common/exception.py:232
#: watcher/common/exception.py:247
msgid "No rows were returned"
msgstr ""
#: watcher/common/exception.py:236
#: watcher/common/exception.py:251
#, python-format
msgid "%(client)s connection failed. Reason: %(reason)s"
msgstr ""
#: watcher/common/exception.py:255
msgid "'Keystone API endpoint is missing''"
msgstr ""
#: watcher/common/exception.py:240
#: watcher/common/exception.py:259
msgid "The list of hypervisor(s) in the cluster is empty"
msgstr ""
#: watcher/common/exception.py:244
#: watcher/common/exception.py:263
msgid "The metrics resource collector is not defined"
msgstr ""
#: watcher/common/exception.py:248
#: watcher/common/exception.py:267
msgid "the cluster state is not defined"
msgstr ""
#: watcher/common/exception.py:254
#: watcher/common/exception.py:273
#, python-format
msgid "The instance '%(name)s' is not found"
msgstr "L'instance '%(name)s' n'a pas été trouvée"
#: watcher/common/exception.py:258
#: watcher/common/exception.py:277
msgid "The hypervisor is not found"
msgstr ""
#: watcher/common/exception.py:262
#: watcher/common/exception.py:281
#, fuzzy, python-format
msgid "Error loading plugin '%(name)s'"
msgstr "Erreur lors du chargement du module '%(name)s'"
#: watcher/common/keystone.py:59
msgid "No Keystone service catalog loaded"
#: watcher/common/exception.py:285
#, fuzzy, python-format
msgid "The identifier '%(name)s' is a reserved word"
msgstr ""
#: watcher/common/service.py:83
@@ -347,18 +382,22 @@ msgid ""
"template uuid instead"
msgstr ""
#: watcher/db/sqlalchemy/api.py:277
msgid "Cannot overwrite UUID for an existing AuditTemplate."
#: watcher/db/sqlalchemy/api.py:278
msgid "Cannot overwrite UUID for an existing Audit Template."
msgstr ""
#: watcher/db/sqlalchemy/api.py:386 watcher/db/sqlalchemy/api.py:586
#: watcher/db/sqlalchemy/api.py:388
msgid "Cannot overwrite UUID for an existing Audit."
msgstr ""
#: watcher/db/sqlalchemy/api.py:477
#: watcher/db/sqlalchemy/api.py:480
msgid "Cannot overwrite UUID for an existing Action."
msgstr ""
#: watcher/db/sqlalchemy/api.py:590
msgid "Cannot overwrite UUID for an existing Action Plan."
msgstr ""
#: watcher/db/sqlalchemy/migration.py:73
msgid ""
"Watcher database schema is already under version control; use upgrade() "
@@ -370,44 +409,44 @@ msgstr ""
msgid "'obj' argument type is not valid"
msgstr ""
#: watcher/decision_engine/planner/default.py:76
#: watcher/decision_engine/planner/default.py:72
msgid "The action plan is empty"
msgstr ""
#: watcher/decision_engine/strategy/selection/default.py:59
#: watcher/decision_engine/strategy/selection/default.py:60
#, python-format
msgid "Incorrect mapping: could not find associated strategy for '%s'"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:267
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:314
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:269
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:316
#, python-format
msgid "No values returned by %(resource_id)s for %(metric_name)s"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:424
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:426
msgid "Initializing Sercon Consolidation"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:468
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:470
msgid "The workloads of the compute nodes of the cluster is zero"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:125
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:127
#, python-format
msgid "%s: no outlet temp data"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:149
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:151
#, python-format
msgid "VM not active, skipped: %s"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:206
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:208
msgid "No hosts under outlet temp threshold found"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:229
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:231
msgid "No proper target host could be found"
msgstr ""
@@ -582,9 +621,20 @@ msgstr ""
#~ msgid "Description cannot be empty"
#~ msgstr ""
#~ msgid ""
#~ "Failed to remove trailing character. "
#~ "Returning original object. Supplied object "
#~ "is not a string: %s,"
#~ msgid "The hypervisor state is invalid."
#~ msgstr "L'état de l'hyperviseur est invalide"
#~ msgid "%(err)s"
#~ msgstr "%(err)s"
#~ msgid "No Keystone service catalog loaded"
#~ msgstr ""
#~ msgid "Cannot overwrite UUID for an existing AuditTemplate."
#~ msgstr ""
#~ msgid ""
#~ "This identifier is reserved word and "
#~ "cannot be used as variables '%(name)s'"
#~ msgstr ""

View File

@@ -7,9 +7,9 @@
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: python-watcher 0.23.2.dev1\n"
"Project-Id-Version: python-watcher 0.23.3.dev2\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2016-01-26 11:26+0100\n"
"POT-Creation-Date: 2016-02-09 09:07+0100\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -23,7 +23,7 @@ msgstr ""
msgid "Invalid state: %(state)s"
msgstr ""
#: watcher/api/controllers/v1/action_plan.py:420
#: watcher/api/controllers/v1/action_plan.py:422
#, python-format
msgid "State transition not allowed: (%(initial_state)s -> %(new_state)s)"
msgstr ""
@@ -84,25 +84,30 @@ msgstr ""
msgid "Error parsing HTTP response: %s"
msgstr ""
#: watcher/applier/actions/change_nova_service_state.py:58
#: watcher/applier/actions/change_nova_service_state.py:69
msgid "The target state is not defined"
msgstr ""
#: watcher/applier/actions/migration.py:60
#: watcher/applier/actions/migration.py:43
msgid "The parameter resource_id is invalid."
msgstr ""
#: watcher/applier/actions/migration.py:86
#, python-format
msgid "Migration of type %(migration_type)s is not supported."
msgstr ""
#: watcher/applier/workflow_engine/default.py:126
#: watcher/applier/workflow_engine/default.py:128
#, python-format
msgid "The WorkFlow Engine has failed to execute the action %s"
msgstr ""
#: watcher/applier/workflow_engine/default.py:144
#: watcher/applier/workflow_engine/default.py:146
#, python-format
msgid "Revert action %s"
msgstr ""
#: watcher/applier/workflow_engine/default.py:150
#: watcher/applier/workflow_engine/default.py:152
msgid "Oops! We need disaster recover plan"
msgstr ""
@@ -122,178 +127,209 @@ msgstr ""
msgid "serving on http://%(host)s:%(port)s"
msgstr ""
#: watcher/common/exception.py:51
#: watcher/common/clients.py:29
msgid "Version of Nova API to use in novaclient."
msgstr ""
#: watcher/common/clients.py:34
msgid "Version of Glance API to use in glanceclient."
msgstr ""
#: watcher/common/clients.py:39
msgid "Version of Cinder API to use in cinderclient."
msgstr ""
#: watcher/common/clients.py:44
msgid "Version of Ceilometer API to use in ceilometerclient."
msgstr ""
#: watcher/common/clients.py:50
msgid "Version of Neutron API to use in neutronclient."
msgstr ""
#: watcher/common/exception.py:59
#, python-format
msgid "Unexpected keystone client error occurred: %s"
msgstr ""
#: watcher/common/exception.py:72
msgid "An unknown exception occurred"
msgstr ""
#: watcher/common/exception.py:71
#: watcher/common/exception.py:92
msgid "Exception in string format operation"
msgstr ""
#: watcher/common/exception.py:101
#: watcher/common/exception.py:122
msgid "Not authorized"
msgstr ""
#: watcher/common/exception.py:106
#: watcher/common/exception.py:127
msgid "Operation not permitted"
msgstr ""
#: watcher/common/exception.py:110
#: watcher/common/exception.py:131
msgid "Unacceptable parameters"
msgstr ""
#: watcher/common/exception.py:115
#: watcher/common/exception.py:136
#, python-format
msgid "The %(name)s %(id)s could not be found"
msgstr ""
#: watcher/common/exception.py:119
#: watcher/common/exception.py:140
msgid "Conflict"
msgstr ""
#: watcher/common/exception.py:124
#: watcher/common/exception.py:145
#, python-format
msgid "The %(name)s resource %(id)s could not be found"
msgstr ""
#: watcher/common/exception.py:129
#: watcher/common/exception.py:150
#, python-format
msgid "Expected an uuid or int but received %(identity)s"
msgstr ""
#: watcher/common/exception.py:133
#: watcher/common/exception.py:154
#, python-format
msgid "Goal %(goal)s is not defined in Watcher configuration file"
msgstr ""
#: watcher/common/exception.py:143
#: watcher/common/exception.py:158
#, python-format
msgid "Expected a uuid but received %(uuid)s"
msgstr ""
#: watcher/common/exception.py:147
#: watcher/common/exception.py:162
#, python-format
msgid "Expected a logical name but received %(name)s"
msgstr ""
#: watcher/common/exception.py:151
#: watcher/common/exception.py:166
#, python-format
msgid "Expected a logical name or uuid but received %(name)s"
msgstr ""
#: watcher/common/exception.py:155
#: watcher/common/exception.py:170
#, python-format
msgid "AuditTemplate %(audit_template)s could not be found"
msgstr ""
#: watcher/common/exception.py:159
#: watcher/common/exception.py:174
#, python-format
msgid "An audit_template with UUID %(uuid)s or name %(name)s already exists"
msgstr ""
#: watcher/common/exception.py:164
#: watcher/common/exception.py:179
#, python-format
msgid "AuditTemplate %(audit_template)s is referenced by one or multiple audit"
msgstr ""
#: watcher/common/exception.py:169
#: watcher/common/exception.py:184
#, python-format
msgid "Audit %(audit)s could not be found"
msgstr ""
#: watcher/common/exception.py:173
#: watcher/common/exception.py:188
#, python-format
msgid "An audit with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:177
#: watcher/common/exception.py:192
#, python-format
msgid "Audit %(audit)s is referenced by one or multiple action plans"
msgstr ""
#: watcher/common/exception.py:182
#: watcher/common/exception.py:197
#, python-format
msgid "ActionPlan %(action_plan)s could not be found"
msgstr ""
#: watcher/common/exception.py:186
#: watcher/common/exception.py:201
#, python-format
msgid "An action plan with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:190
#: watcher/common/exception.py:205
#, python-format
msgid "Action Plan %(action_plan)s is referenced by one or multiple actions"
msgstr ""
#: watcher/common/exception.py:195
#: watcher/common/exception.py:210
#, python-format
msgid "Action %(action)s could not be found"
msgstr ""
#: watcher/common/exception.py:199
#: watcher/common/exception.py:214
#, python-format
msgid "An action with UUID %(uuid)s already exists"
msgstr ""
#: watcher/common/exception.py:203
#: watcher/common/exception.py:218
#, python-format
msgid "Action plan %(action_plan)s is referenced by one or multiple goals"
msgstr ""
#: watcher/common/exception.py:208
#: watcher/common/exception.py:223
msgid "Filtering actions on both audit and action-plan is prohibited"
msgstr ""
#: watcher/common/exception.py:217
#: watcher/common/exception.py:232
#, python-format
msgid "Couldn't apply patch '%(patch)s'. Reason: %(reason)s"
msgstr ""
#: watcher/common/exception.py:224
#: watcher/common/exception.py:239
msgid "Illegal argument"
msgstr ""
#: watcher/common/exception.py:228
#: watcher/common/exception.py:243
msgid "No such metric"
msgstr ""
#: watcher/common/exception.py:232
#: watcher/common/exception.py:247
msgid "No rows were returned"
msgstr ""
#: watcher/common/exception.py:236
#: watcher/common/exception.py:251
#, python-format
msgid "%(client)s connection failed. Reason: %(reason)s"
msgstr ""
#: watcher/common/exception.py:255
msgid "'Keystone API endpoint is missing''"
msgstr ""
#: watcher/common/exception.py:240
#: watcher/common/exception.py:259
msgid "The list of hypervisor(s) in the cluster is empty"
msgstr ""
#: watcher/common/exception.py:244
#: watcher/common/exception.py:263
msgid "The metrics resource collector is not defined"
msgstr ""
#: watcher/common/exception.py:248
#: watcher/common/exception.py:267
msgid "the cluster state is not defined"
msgstr ""
#: watcher/common/exception.py:254
#: watcher/common/exception.py:273
#, python-format
msgid "The instance '%(name)s' is not found"
msgstr ""
#: watcher/common/exception.py:258
#: watcher/common/exception.py:277
msgid "The hypervisor is not found"
msgstr ""
#: watcher/common/exception.py:262
#: watcher/common/exception.py:281
#, python-format
msgid "Error loading plugin '%(name)s'"
msgstr ""
#: watcher/common/keystone.py:59
msgid "No Keystone service catalog loaded"
#: watcher/common/exception.py:285
#, python-format
msgid "The identifier '%(name)s' is a reserved word"
msgstr ""
#: watcher/common/service.py:83
@@ -344,18 +380,22 @@ msgid ""
"template uuid instead"
msgstr ""
#: watcher/db/sqlalchemy/api.py:277
msgid "Cannot overwrite UUID for an existing AuditTemplate."
#: watcher/db/sqlalchemy/api.py:278
msgid "Cannot overwrite UUID for an existing Audit Template."
msgstr ""
#: watcher/db/sqlalchemy/api.py:386 watcher/db/sqlalchemy/api.py:586
#: watcher/db/sqlalchemy/api.py:388
msgid "Cannot overwrite UUID for an existing Audit."
msgstr ""
#: watcher/db/sqlalchemy/api.py:477
#: watcher/db/sqlalchemy/api.py:480
msgid "Cannot overwrite UUID for an existing Action."
msgstr ""
#: watcher/db/sqlalchemy/api.py:590
msgid "Cannot overwrite UUID for an existing Action Plan."
msgstr ""
#: watcher/db/sqlalchemy/migration.py:73
msgid ""
"Watcher database schema is already under version control; use upgrade() "
@@ -367,44 +407,44 @@ msgstr ""
msgid "'obj' argument type is not valid"
msgstr ""
#: watcher/decision_engine/planner/default.py:76
#: watcher/decision_engine/planner/default.py:72
msgid "The action plan is empty"
msgstr ""
#: watcher/decision_engine/strategy/selection/default.py:59
#: watcher/decision_engine/strategy/selection/default.py:60
#, python-format
msgid "Incorrect mapping: could not find associated strategy for '%s'"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:267
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:314
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:269
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:316
#, python-format
msgid "No values returned by %(resource_id)s for %(metric_name)s"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:424
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:426
msgid "Initializing Sercon Consolidation"
msgstr ""
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:468
#: watcher/decision_engine/strategy/strategies/basic_consolidation.py:470
msgid "The workloads of the compute nodes of the cluster is zero"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:125
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:127
#, python-format
msgid "%s: no outlet temp data"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:149
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:151
#, python-format
msgid "VM not active, skipped: %s"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:206
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:208
msgid "No hosts under outlet temp threshold found"
msgstr ""
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:229
#: watcher/decision_engine/strategy/strategies/outlet_temp_control.py:231
msgid "No proper target host could be found"
msgstr ""

View File

@@ -22,13 +22,13 @@ from oslo_config import cfg
from oslo_log import log
from watcher.common import ceilometer_helper
from watcher.metrics_engine.cluster_history.api import BaseClusterHistory
from watcher.metrics_engine.cluster_history import api
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class CeilometerClusterHistory(BaseClusterHistory):
class CeilometerClusterHistory(api.BaseClusterHistory):
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
super(CeilometerClusterHistory, self).__init__()

View File

@@ -21,8 +21,8 @@ from oslo_config import cfg
from oslo_log import log
from watcher.common import nova_helper
from watcher.metrics_engine.cluster_model_collector.nova import \
NovaClusterModelCollector
from watcher.metrics_engine.cluster_model_collector import nova as cnova
LOG = log.getLogger(__name__)
CONF = cfg.CONF
@@ -32,4 +32,4 @@ class CollectorManager(object):
def get_cluster_model_collector(self, osc=None):
""":param osc: an OpenStackClients instance"""
nova = nova_helper.NovaHelper(osc=osc)
return NovaClusterModelCollector(nova)
return cnova.NovaClusterModelCollector(nova)
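The two hunks above follow one pattern: direct class imports (`from m import NovaClusterModelCollector`) become module imports qualified at the call site (`cnova.NovaClusterModelCollector`), in line with the OpenStack hacking guideline that favors importing modules rather than names. A minimal sketch of why this matters for testability, using stdlib modules instead of watcher's own:

```python
# Module-style imports keep a single, easily patched attribute path
# per name. Sketch with stdlib modules rather than watcher's own:

# Direct-name import: 'Counter' is rebound into this namespace, so a
# test double has to patch it here, not in 'collections'.
from collections import Counter

# Module import: every use goes through the module attribute, so one
# patch of 'collections.Counter' covers all call sites.
import collections

direct = Counter("aab")
qualified = collections.Counter("aab")
assert direct == qualified
print(direct["a"])  # -> 2: both spellings name the same class
```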

View File

@@ -17,32 +17,29 @@
# limitations under the License.
#
from oslo_config import cfg
from oslo_log import log
from watcher.decision_engine.model.hypervisor import Hypervisor
from watcher.decision_engine.model.model_root import ModelRoot
from watcher.decision_engine.model.resource import Resource
from watcher.decision_engine.model.resource import ResourceType
from watcher.decision_engine.model.vm import VM
from watcher.metrics_engine.cluster_model_collector.api import \
BaseClusterModelCollector
from watcher.decision_engine.model import hypervisor as obj_hypervisor
from watcher.decision_engine.model import model_root
from watcher.decision_engine.model import resource
from watcher.decision_engine.model import vm as obj_vm
from watcher.metrics_engine.cluster_model_collector import api
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class NovaClusterModelCollector(BaseClusterModelCollector):
class NovaClusterModelCollector(api.BaseClusterModelCollector):
def __init__(self, wrapper):
super(NovaClusterModelCollector, self).__init__()
self.wrapper = wrapper
def get_latest_cluster_data_model(self):
LOG.debug("Getting latest cluster data model")
cluster = ModelRoot()
mem = Resource(ResourceType.memory)
num_cores = Resource(ResourceType.cpu_cores)
disk = Resource(ResourceType.disk)
cluster = model_root.ModelRoot()
mem = resource.Resource(resource.ResourceType.memory)
num_cores = resource.Resource(resource.ResourceType.cpu_cores)
disk = resource.Resource(resource.ResourceType.disk)
cluster.create_resource(mem)
cluster.create_resource(num_cores)
cluster.create_resource(disk)
@@ -52,7 +49,7 @@ class NovaClusterModelCollector(BaseClusterModelCollector):
for h in hypervisors:
service = self.wrapper.nova.services.find(id=h.service['id'])
# create hypervisor in cluster_model_collector
hypervisor = Hypervisor()
hypervisor = obj_hypervisor.Hypervisor()
hypervisor.uuid = service.host
hypervisor.hostname = h.hypervisor_hostname
# set capacity
@@ -65,7 +62,7 @@ class NovaClusterModelCollector(BaseClusterModelCollector):
vms = self.wrapper.get_vms_by_hypervisor(str(service.host))
for v in vms:
# create VM in cluster_model_collector
vm = VM()
vm = obj_vm.VM()
vm.uuid = v.id
# nova/nova/compute/vm_states.py
vm.state = getattr(v, 'OS-EXT-STS:vm_state')
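The refactored collector builds its data model in two phases: register the resource kinds on the model root, then record per-hypervisor capacities. A toy, self-contained sketch of that create/set flow (the class and method names here only mirror the hunk above, not watcher's real API surface):

```python
class Resource:
    def __init__(self, name):
        self.name = name
        self._capacity = {}  # element uuid -> capacity value

    def set_capacity(self, element_uuid, value):
        self._capacity[element_uuid] = value

    def get_capacity(self, element_uuid):
        return self._capacity[element_uuid]


class ModelRoot:
    def __init__(self):
        self.resources = {}

    def create_resource(self, res):
        self.resources[res.name] = res


# Mirror the collector's flow: register resource kinds up front,
# then record each hypervisor's capacity per resource kind.
cluster = ModelRoot()
for kind in ('memory', 'cpu_cores', 'disk'):
    cluster.create_resource(Resource(kind))

cluster.resources['memory'].set_capacity('hyp-1', 64)
print(cluster.resources['memory'].get_capacity('hyp-1'))  # -> 64
```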

View File

@@ -42,7 +42,6 @@ class Action(base.WatcherObject):
'uuid': obj_utils.str_or_none,
'action_plan_id': obj_utils.int_or_none,
'action_type': obj_utils.str_or_none,
'applies_to': obj_utils.str_or_none,
'input_parameters': obj_utils.dict_or_none,
'state': obj_utils.str_or_none,
# todo(jed) remove parameter alarm

View File

@@ -71,13 +71,14 @@ state may be one of the following:
from watcher.common import exception
from watcher.common import utils
from watcher.db import api as dbapi
from watcher.objects import action as action_objects
from watcher.objects import base
from watcher.objects import utils as obj_utils
class State(object):
RECOMMENDED = 'RECOMMENDED'
TRIGGERED = 'TRIGGERED'
PENDING = 'PENDING'
ONGOING = 'ONGOING'
FAILED = 'FAILED'
SUCCEEDED = 'SUCCEEDED'
@@ -251,6 +252,14 @@ class ActionPlan(base.WatcherObject):
A context should be set when instantiating the
object, e.g.: Audit(context)
"""
related_actions = action_objects.Action.list(
context=self._context,
filters={"action_plan_uuid": self.uuid})
# Cascade soft_delete of related actions
for related_action in related_actions:
related_action.soft_delete()
self.dbapi.soft_delete_action_plan(self.uuid)
self.state = "DELETED"
self.state = State.DELETED
self.save()
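The new `soft_delete` body above does two things: it cascades the soft delete to every action referencing the plan before deleting the plan itself, and it replaces the bare `"DELETED"` string with the `State.DELETED` constant. A simplified, standalone sketch of the same cascade pattern (plain Python objects standing in for watcher's DB-backed object layer):

```python
class State:
    PENDING = 'PENDING'
    DELETED = 'DELETED'


class Action:
    def __init__(self, uuid, action_plan_uuid):
        self.uuid = uuid
        self.action_plan_uuid = action_plan_uuid
        self.state = State.PENDING

    def soft_delete(self):
        self.state = State.DELETED


class ActionPlan:
    def __init__(self, uuid, actions):
        self.uuid = uuid
        self._actions = actions  # stand-in for an Action.list() query
        self.state = 'RECOMMENDED'

    def soft_delete(self):
        # Cascade: soft-delete every action that references this plan
        # before marking the plan itself as deleted.
        for action in (a for a in self._actions
                       if a.action_plan_uuid == self.uuid):
            action.soft_delete()
        self.state = State.DELETED


acts = [Action('a1', 'p1'), Action('a2', 'p1'), Action('a3', 'p2')]
plan = ActionPlan('p1', acts)
plan.soft_delete()
print([a.state for a in acts])  # -> ['DELETED', 'DELETED', 'PENDING']
```

Cascading in the object layer (rather than relying on DB-level cascades) is what lets the API tests further down assert that a deleted plan's actions also disappear from `GET /actions`.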

View File

@@ -164,7 +164,7 @@ class AuditTemplate(base.WatcherObject):
return audit_template
@classmethod
def list(cls, context, limit=None, marker=None,
def list(cls, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None):
"""Return a list of :class:`AuditTemplate` objects.
@@ -174,6 +174,7 @@ class AuditTemplate(base.WatcherObject):
argument, even though we don't use it.
A context should be set when instantiating the
object, e.g.: AuditTemplate(context)
:param filters: dict mapping the filter key to a value.
:param limit: maximum number of resources to return in a single result.
:param marker: pagination marker for large data sets.
:param sort_key: column to sort results by.
@@ -183,6 +184,7 @@ class AuditTemplate(base.WatcherObject):
db_audit_templates = cls.dbapi.get_audit_template_list(
context,
filters=filters,
limit=limit,
marker=marker,
sort_key=sort_key,

View File

@@ -1,43 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import unittest
from oslo_config import cfg
import pecan
from pecan import testing
cfg.CONF.import_opt('enable_authentication', 'watcher.api.acl')
__all__ = ['FunctionalTest']
class FunctionalTest(unittest.TestCase):
"""Functional tests
Used for functional tests where you need to test your
literal application and its integration with the framework.
"""
def setUp(self):
cfg.CONF.set_override("enable_authentication", False,
enforce_type=True)
self.app = testing.load_test_app(os.path.join(
os.path.dirname(__file__),
'config.py'
))
def tearDown(self):
pecan.set_config({}, overwrite=True)

View File

@@ -285,33 +285,41 @@ class TestListAction(api_base.FunctionalTest):
id=3,
uuid=utils.generate_uuid(),
audit_id=1)
action_list = []
ap1_action_list = []
ap2_action_list = []
for id_ in range(0, 2):
action = obj_utils.create_test_action(
self.context, id=id_,
action_plan_id=2,
action_plan_id=action_plan1.id,
uuid=utils.generate_uuid())
action_list.append(action.uuid)
ap1_action_list.append(action)
for id_ in range(2, 4):
action = obj_utils.create_test_action(
self.context, id=id_,
action_plan_id=3,
action_plan_id=action_plan2.id,
uuid=utils.generate_uuid())
action_list.append(action.uuid)
ap2_action_list.append(action)
self.delete('/action_plans/%s' % action_plan1.uuid)
response = self.get_json('/actions')
self.assertEqual(len(action_list), len(response['actions']))
for id_ in range(0, 2):
action = response['actions'][id_]
self.assertEqual(None, action['action_plan_uuid'])
# We deleted the actions from the 1st action plan so we've got 2 left
self.assertEqual(len(ap2_action_list), len(response['actions']))
for id_ in range(2, 4):
action = response['actions'][id_]
self.assertEqual(action_plan2.uuid, action['action_plan_uuid'])
# We deleted them so that's normal
# (their action plan was soft-deleted above, cascading to them)
self.assertEqual(
[act for act in response['actions']
if act['action_plan_uuid'] == action_plan1.uuid],
[])
# Here are the 2 actions left
self.assertEqual(
set([act['uuid'] for act in response['actions']
if act['action_plan_uuid'] == action_plan2.uuid]),
set([act.as_dict()['uuid'] for act in ap2_action_list]))
def test_many_with_next_uuid(self):
action_list = []

View File

@@ -307,7 +307,7 @@ class TestDelete(api_base.FunctionalTest):
action_plan = objects.ActionPlan.get_by_uuid(self.context, audit_uuid)
action_plan.destroy()
def test_delete_action_plan(self):
def test_delete_action_plan_without_action(self):
self.delete('/action_plans/%s' % self.action_plan.uuid)
response = self.get_json('/action_plans/%s' % self.action_plan.uuid,
expect_errors=True)
@@ -315,6 +315,30 @@ class TestDelete(api_base.FunctionalTest):
self.assertEqual('application/json', response.content_type)
self.assertTrue(response.json['error_message'])
def test_delete_action_plan_with_action(self):
action = obj_utils.create_test_action(
self.context, id=self.action_plan.first_action_id)
self.delete('/action_plans/%s' % self.action_plan.uuid)
ap_response = self.get_json('/action_plans/%s' % self.action_plan.uuid,
expect_errors=True)
acts_response = self.get_json(
'/actions/?action_plan_uuid=%s' % self.action_plan.uuid)
act_response = self.get_json(
'/actions/%s' % action.uuid,
expect_errors=True)
# The action plan does not exist anymore
self.assertEqual(404, ap_response.status_int)
self.assertEqual('application/json', ap_response.content_type)
self.assertTrue(ap_response.json['error_message'])
# Nor does the action
self.assertEqual(len(acts_response['actions']), 0)
self.assertEqual(404, act_response.status_int)
self.assertEqual('application/json', act_response.content_type)
self.assertTrue(act_response.json['error_message'])
def test_delete_action_plan_not_found(self):
uuid = utils.generate_uuid()
response = self.delete('/action_plans/%s' % uuid, expect_errors=True)
@@ -362,7 +386,7 @@ class TestPatch(api_base.FunctionalTest):
response = self.patch_json(
'/action_plans/%s' % utils.generate_uuid(),
[{'path': '/state',
'value': objects.action_plan.State.TRIGGERED,
'value': objects.action_plan.State.PENDING,
'op': 'replace'}],
expect_errors=True)
self.assertEqual(404, response.status_int)
@@ -412,8 +436,8 @@ class TestPatch(api_base.FunctionalTest):
self.assertTrue(response.json['error_message'])
@mock.patch.object(aapi.ApplierAPI, 'launch_action_plan')
def test_replace_state_triggered_ok(self, applier_mock):
new_state = objects.action_plan.State.TRIGGERED
def test_replace_state_pending_ok(self, applier_mock):
new_state = objects.action_plan.State.PENDING
response = self.get_json(
'/action_plans/%s' % self.action_plan.uuid)
self.assertNotEqual(new_state, response['state'])
@@ -429,12 +453,12 @@ class TestPatch(api_base.FunctionalTest):
ALLOWED_TRANSITIONS = [
{"original_state": objects.action_plan.State.RECOMMENDED,
"new_state": objects.action_plan.State.TRIGGERED},
"new_state": objects.action_plan.State.PENDING},
{"original_state": objects.action_plan.State.RECOMMENDED,
"new_state": objects.action_plan.State.CANCELLED},
{"original_state": objects.action_plan.State.ONGOING,
"new_state": objects.action_plan.State.CANCELLED},
{"original_state": objects.action_plan.State.TRIGGERED,
{"original_state": objects.action_plan.State.PENDING,
"new_state": objects.action_plan.State.CANCELLED},
]
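The renamed transition table (TRIGGERED becomes PENDING throughout) is a whitelist state machine: a transition is legal only if its `(original_state, new_state)` pair is listed, and everything else is denied. A standalone sketch of that check:

```python
RECOMMENDED, PENDING, ONGOING, CANCELLED = (
    'RECOMMENDED', 'PENDING', 'ONGOING', 'CANCELLED')

# Mirrors the ALLOWED_TRANSITIONS table in the test above: any pair
# not listed here is rejected with a state-transition error.
ALLOWED_TRANSITIONS = {
    (RECOMMENDED, PENDING),
    (RECOMMENDED, CANCELLED),
    (ONGOING, CANCELLED),
    (PENDING, CANCELLED),
}


def check_transition(initial_state, new_state):
    """Return True only for whitelisted state transitions."""
    return (initial_state, new_state) in ALLOWED_TRANSITIONS


print(check_transition(RECOMMENDED, PENDING))   # -> True
print(check_transition(PENDING, RECOMMENDED))   # -> False
```

The denied-transition tests below are generated from the cross product of all states minus this whitelist, which is why a single rename in the table drives so many test-name changes.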
@@ -455,7 +479,7 @@ class TestPatchStateTransitionDenied(api_base.FunctionalTest):
for original_state, new_state
in list(itertools.product(STATES, STATES))
# from DELETED to ...
# NOTE: Any state transition from DELETED (To RECOMMENDED, TRIGGERED,
# NOTE: Any state transition from DELETED (To RECOMMENDED, PENDING,
# ONGOING, CANCELLED, SUCCEEDED and FAILED) will cause a 404 Not Found
# because we cannot retrieve them with a GET (soft_deleted state).
# This is the reason why they are not listed here but they have a
@@ -469,7 +493,7 @@ class TestPatchStateTransitionDenied(api_base.FunctionalTest):
@mock.patch.object(
db_api.BaseConnection, 'update_action_plan',
mock.Mock(side_effect=lambda ap: ap.save() or ap))
def test_replace_state_triggered_denied(self):
def test_replace_state_pending_denied(self):
action_plan = obj_utils.create_action_plan_without_audit(
self.context, state=self.original_state)
@@ -493,7 +517,7 @@ class TestPatchStateDeletedNotFound(api_base.FunctionalTest):
@mock.patch.object(
db_api.BaseConnection, 'update_action_plan',
mock.Mock(side_effect=lambda ap: ap.save() or ap))
def test_replace_state_triggered_not_found(self):
def test_replace_state_pending_not_found(self):
action_plan = obj_utils.create_action_plan_without_audit(
self.context, state=objects.action_plan.State.DELETED)
@@ -521,7 +545,7 @@ class TestPatchStateTransitionOk(api_base.FunctionalTest):
db_api.BaseConnection, 'update_action_plan',
mock.Mock(side_effect=lambda ap: ap.save() or ap))
@mock.patch.object(aapi.ApplierAPI, 'launch_action_plan', mock.Mock())
def test_replace_state_triggered_ok(self):
def test_replace_state_pending_ok(self):
action_plan = obj_utils.create_action_plan_without_audit(
self.context, state=self.original_state)

View File

@@ -207,6 +207,25 @@ class TestListAuditTemplate(api_base.FunctionalTest):
next_marker = response['audit_templates'][-1]['uuid']
self.assertIn(next_marker, response['next'])
def test_filter_by_goal(self):
cfg.CONF.set_override('goals', {"DUMMY": "DUMMY", "BASIC": "BASIC"},
group='watcher_goals', enforce_type=True)
for id_ in range(2):
obj_utils.create_test_audit_template(
self.context, id=id_, uuid=utils.generate_uuid(),
name='My Audit Template {0}'.format(id_),
goal="DUMMY")
for id_ in range(2, 5):
obj_utils.create_test_audit_template(
self.context, id=id_, uuid=utils.generate_uuid(),
name='My Audit Template {0}'.format(id_),
goal="BASIC")
response = self.get_json('/audit_templates?goal=BASIC')
self.assertEqual(3, len(response['audit_templates']))
class TestPatch(api_base.FunctionalTest):

View File

@@ -15,11 +15,11 @@
import wsme
from oslo_config import cfg
from watcher.api.controllers.v1 import utils
from watcher.tests import base
from oslo_config import cfg
CONF = cfg.CONF
@@ -40,10 +40,31 @@ class TestApiUtils(base.TestCase):
self.assertRaises(wsme.exc.ClientSideError, utils.validate_limit, 0)
def test_validate_sort_dir(self):
sort_dir = utils.validate_sort_dir('asc')
self.assertEqual('asc', sort_dir)
# if sort_dir is valid, nothing should happen
try:
utils.validate_sort_dir('asc')
except Exception as exc:
self.fail(exc)
# invalid sort_dir parameter
self.assertRaises(wsme.exc.ClientSideError,
utils.validate_sort_dir,
'fake-sort')
def test_validate_search_filters(self):
allowed_fields = ["allowed", "authorized"]
test_filters = {"allowed": 1, "authorized": 2}
try:
utils.validate_search_filters(test_filters, allowed_fields)
except Exception as exc:
self.fail(exc)
def test_validate_search_filters_with_invalid_key(self):
allowed_fields = ["allowed", "authorized"]
test_filters = {"allowed": 1, "unauthorized": 2}
self.assertRaises(
wsme.exc.ClientSideError, utils.validate_search_filters,
test_filters, allowed_fields)
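The new tests exercise a whitelist-style filter validator: known keys pass silently, any unknown key raises a client-side error before a query is built. A minimal standalone sketch of that contract (using a plain `ValueError` in place of `wsme.exc.ClientSideError`):

```python
def validate_search_filters(filters, allowed_fields):
    """Reject any filter key that is not in the whitelist."""
    for key in filters:
        if key not in allowed_fields:
            raise ValueError(
                "Invalid filter: %s is not an allowed field" % key)


# Known keys: validation passes without raising.
validate_search_filters({"allowed": 1, "authorized": 2},
                        ["allowed", "authorized"])

# Unknown key: the validator raises before any query is built.
try:
    validate_search_filters({"allowed": 1, "unauthorized": 2},
                            ["allowed", "authorized"])
except ValueError as exc:
    print("rejected:", exc)
```

Validating filter keys against an explicit whitelist keeps arbitrary query parameters from reaching the DB layer, which is the behavior the two new test cases pin down.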

View File

@@ -18,38 +18,38 @@
import mock
from watcher.applier.action_plan.default import DefaultActionPlanHandler
from watcher.applier.messaging.event_types import EventTypes
from watcher.applier.action_plan import default
from watcher.applier.messaging import event_types as ev
from watcher.objects import action_plan as ap_objects
from watcher.objects import ActionPlan
from watcher.tests.db.base import DbTestCase
from watcher.tests.db import base
from watcher.tests.objects import utils as obj_utils
class TestDefaultActionPlanHandler(DbTestCase):
class TestDefaultActionPlanHandler(base.DbTestCase):
def setUp(self):
super(TestDefaultActionPlanHandler, self).setUp()
self.action_plan = obj_utils.create_test_action_plan(
self.context)
def test_launch_action_plan(self):
command = DefaultActionPlanHandler(self.context, mock.MagicMock(),
self.action_plan.uuid)
command = default.DefaultActionPlanHandler(self.context,
mock.MagicMock(),
self.action_plan.uuid)
command.execute()
action_plan = ActionPlan.get_by_uuid(self.context,
self.action_plan.uuid)
action_plan = ap_objects.ActionPlan.get_by_uuid(self.context,
self.action_plan.uuid)
self.assertEqual(ap_objects.State.SUCCEEDED, action_plan.state)
def test_trigger_audit_send_notification(self):
messaging = mock.MagicMock()
command = DefaultActionPlanHandler(self.context, messaging,
self.action_plan.uuid)
command = default.DefaultActionPlanHandler(self.context, messaging,
self.action_plan.uuid)
command.execute()
call_on_going = mock.call(EventTypes.LAUNCH_ACTION_PLAN.name, {
call_on_going = mock.call(ev.EventTypes.LAUNCH_ACTION_PLAN.name, {
'action_plan_state': ap_objects.State.ONGOING,
'action_plan__uuid': self.action_plan.uuid})
call_succeeded = mock.call(EventTypes.LAUNCH_ACTION_PLAN.name, {
call_succeeded = mock.call(ev.EventTypes.LAUNCH_ACTION_PLAN.name, {
'action_plan_state': ap_objects.State.SUCCEEDED,
'action_plan__uuid': self.action_plan.uuid})
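The notification assertions above work because `mock.call(...)` objects compare equal field by field, so expected event payloads can be matched against the recorded call list. A self-contained illustration of the pattern (generic names, not Watcher's messaging API):

```python
from unittest import mock

notifier = mock.Mock()
# Simulate the two notifications an action plan emits during execution.
notifier.publish("LAUNCH_ACTION_PLAN", {"action_plan_state": "ONGOING"})
notifier.publish("LAUNCH_ACTION_PLAN", {"action_plan_state": "SUCCEEDED"})

call_on_going = mock.call("LAUNCH_ACTION_PLAN",
                          {"action_plan_state": "ONGOING"})
call_succeeded = mock.call("LAUNCH_ACTION_PLAN",
                           {"action_plan_state": "SUCCEEDED"})
# call_args_list preserves order, so ordering is asserted as well.
assert notifier.publish.call_args_list == [call_on_going, call_succeeded]
```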


@@ -0,0 +1,146 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import unicode_literals
import mock
import voluptuous
from watcher.applier.actions import base as baction
from watcher.applier.actions import change_nova_service_state
from watcher.common import clients
from watcher.common import nova_helper
from watcher.decision_engine.model import hypervisor_state as hstate
from watcher.tests import base
class TestChangeNovaServiceState(base.TestCase):
def setUp(self):
super(TestChangeNovaServiceState, self).setUp()
self.m_osc_cls = mock.Mock()
self.m_helper_cls = mock.Mock()
self.m_helper = mock.Mock(spec=nova_helper.NovaHelper)
self.m_helper_cls.return_value = self.m_helper
self.m_osc = mock.Mock(spec=clients.OpenStackClients)
self.m_osc_cls.return_value = self.m_osc
m_openstack_clients = mock.patch.object(
clients, "OpenStackClients", self.m_osc_cls)
m_nova_helper = mock.patch.object(
nova_helper, "NovaHelper", self.m_helper_cls)
m_openstack_clients.start()
m_nova_helper.start()
self.addCleanup(m_openstack_clients.stop)
self.addCleanup(m_nova_helper.stop)
self.input_parameters = {
baction.BaseAction.RESOURCE_ID: "compute-1",
"state": hstate.HypervisorState.ENABLED.value,
}
self.action = change_nova_service_state.ChangeNovaServiceState()
self.action.input_parameters = self.input_parameters
def test_parameters_down(self):
self.action.input_parameters = {
baction.BaseAction.RESOURCE_ID: "compute-1",
self.action.STATE: hstate.HypervisorState.DISABLED.value}
self.assertEqual(True, self.action.validate_parameters())
def test_parameters_up(self):
self.action.input_parameters = {
baction.BaseAction.RESOURCE_ID: "compute-1",
self.action.STATE: hstate.HypervisorState.ENABLED.value}
self.assertEqual(True, self.action.validate_parameters())
def test_parameters_exception_wrong_state(self):
self.action.input_parameters = {
baction.BaseAction.RESOURCE_ID: "compute-1",
self.action.STATE: 'error'}
exc = self.assertRaises(
voluptuous.Invalid, self.action.validate_parameters)
self.assertEqual(
[(['state'], voluptuous.ScalarInvalid)],
[([str(p) for p in e.path], type(e)) for e in exc.errors])
def test_parameters_resource_id_empty(self):
self.action.input_parameters = {
self.action.STATE: hstate.HypervisorState.ENABLED.value,
}
exc = self.assertRaises(
voluptuous.Invalid, self.action.validate_parameters)
self.assertEqual(
[(['resource_id'], voluptuous.RequiredFieldInvalid)],
[([str(p) for p in e.path], type(e)) for e in exc.errors])
def test_parameters_applies_add_extra(self):
self.action.input_parameters = {"extra": "failed"}
exc = self.assertRaises(
voluptuous.Invalid, self.action.validate_parameters)
self.assertEqual(
sorted([(['resource_id'], voluptuous.RequiredFieldInvalid),
(['state'], voluptuous.RequiredFieldInvalid),
(['extra'], voluptuous.Invalid)],
key=lambda x: str(x[0])),
sorted([([str(p) for p in e.path], type(e)) for e in exc.errors],
key=lambda x: str(x[0])))
def test_change_service_state_precondition(self):
try:
self.action.precondition()
except Exception as exc:
self.fail(exc)
def test_change_service_state_postcondition(self):
try:
self.action.postcondition()
except Exception as exc:
self.fail(exc)
def test_execute_change_service_state_with_enable_target(self):
self.action.execute()
self.m_helper_cls.assert_called_once_with(osc=self.m_osc)
self.m_helper.enable_service_nova_compute.assert_called_once_with(
"compute-1")
def test_execute_change_service_state_with_disable_target(self):
self.action.input_parameters["state"] = (
hstate.HypervisorState.DISABLED.value)
self.action.execute()
self.m_helper_cls.assert_called_once_with(osc=self.m_osc)
self.m_helper.disable_service_nova_compute.assert_called_once_with(
"compute-1")
def test_revert_change_service_state_with_enable_target(self):
self.action.revert()
self.m_helper_cls.assert_called_once_with(osc=self.m_osc)
self.m_helper.disable_service_nova_compute.assert_called_once_with(
"compute-1")
def test_revert_change_service_state_with_disable_target(self):
self.action.input_parameters["state"] = (
hstate.HypervisorState.DISABLED.value)
self.action.revert()
self.m_helper_cls.assert_called_once_with(osc=self.m_osc)
self.m_helper.enable_service_nova_compute.assert_called_once_with(
"compute-1")


@@ -0,0 +1,206 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import unicode_literals
import mock
import voluptuous
from watcher.applier.actions import base as baction
from watcher.applier.actions import migration
from watcher.common import clients
from watcher.common import exception
from watcher.common import nova_helper
from watcher.tests import base
class TestMigration(base.TestCase):
INSTANCE_UUID = "45a37aeb-95ab-4ddb-a305-7d9f62c2f5ba"
def setUp(self):
super(TestMigration, self).setUp()
self.m_osc_cls = mock.Mock()
self.m_helper_cls = mock.Mock()
self.m_helper = mock.Mock(spec=nova_helper.NovaHelper)
self.m_helper_cls.return_value = self.m_helper
self.m_osc = mock.Mock(spec=clients.OpenStackClients)
self.m_osc_cls.return_value = self.m_osc
m_openstack_clients = mock.patch.object(
clients, "OpenStackClients", self.m_osc_cls)
m_nova_helper = mock.patch.object(
nova_helper, "NovaHelper", self.m_helper_cls)
m_openstack_clients.start()
m_nova_helper.start()
self.addCleanup(m_openstack_clients.stop)
self.addCleanup(m_nova_helper.stop)
self.input_parameters = {
"migration_type": "live",
"src_hypervisor": "hypervisor1-hostname",
"dst_hypervisor": "hypervisor2-hostname",
baction.BaseAction.RESOURCE_ID: self.INSTANCE_UUID,
}
self.action = migration.Migrate()
self.action.input_parameters = self.input_parameters
def test_parameters(self):
params = {baction.BaseAction.RESOURCE_ID:
self.INSTANCE_UUID,
self.action.MIGRATION_TYPE: 'live',
self.action.DST_HYPERVISOR: 'compute-2',
self.action.SRC_HYPERVISOR: 'compute-3'}
self.action.input_parameters = params
self.assertEqual(True, self.action.validate_parameters())
def test_parameters_exception_empty_fields(self):
parameters = {baction.BaseAction.RESOURCE_ID: None,
'migration_type': None,
'src_hypervisor': None,
'dst_hypervisor': None}
self.action.input_parameters = parameters
exc = self.assertRaises(
voluptuous.MultipleInvalid, self.action.validate_parameters)
self.assertEqual(
sorted([(['migration_type'], voluptuous.ScalarInvalid),
(['src_hypervisor'], voluptuous.TypeInvalid),
(['dst_hypervisor'], voluptuous.TypeInvalid)]),
sorted([(e.path, type(e)) for e in exc.errors]))
def test_parameters_exception_migration_type(self):
parameters = {baction.BaseAction.RESOURCE_ID:
self.INSTANCE_UUID,
'migration_type': 'cold',
'src_hypervisor': 'compute-2',
'dst_hypervisor': 'compute-3'}
self.action.input_parameters = parameters
exc = self.assertRaises(
voluptuous.Invalid, self.action.validate_parameters)
self.assertEqual(
[(['migration_type'], voluptuous.ScalarInvalid)],
[(e.path, type(e)) for e in exc.errors])
def test_parameters_exception_src_hypervisor(self):
parameters = {baction.BaseAction.RESOURCE_ID:
self.INSTANCE_UUID,
'migration_type': 'live',
'src_hypervisor': None,
'dst_hypervisor': 'compute-3'}
self.action.input_parameters = parameters
exc = self.assertRaises(
voluptuous.MultipleInvalid, self.action.validate_parameters)
self.assertEqual(
[(['src_hypervisor'], voluptuous.TypeInvalid)],
[(e.path, type(e)) for e in exc.errors])
def test_parameters_exception_dst_hypervisor(self):
parameters = {baction.BaseAction.RESOURCE_ID:
self.INSTANCE_UUID,
'migration_type': 'live',
'src_hypervisor': 'compute-1',
'dst_hypervisor': None}
self.action.input_parameters = parameters
exc = self.assertRaises(
voluptuous.MultipleInvalid, self.action.validate_parameters)
self.assertEqual(
[(['dst_hypervisor'], voluptuous.TypeInvalid)],
[(e.path, type(e)) for e in exc.errors])
def test_parameters_exception_resource_id(self):
parameters = {baction.BaseAction.RESOURCE_ID: "EFEF",
'migration_type': 'live',
'src_hypervisor': 'compute-2',
'dst_hypervisor': 'compute-3'}
self.action.input_parameters = parameters
exc = self.assertRaises(
voluptuous.MultipleInvalid, self.action.validate_parameters)
self.assertEqual(
[(['resource_id'], voluptuous.Invalid)],
[(e.path, type(e)) for e in exc.errors])
def test_migration_precondition(self):
try:
self.action.precondition()
except Exception as exc:
self.fail(exc)
def test_migration_postcondition(self):
try:
self.action.postcondition()
except Exception as exc:
self.fail(exc)
def test_execute_live_migration_invalid_instance(self):
self.m_helper.find_instance.return_value = None
exc = self.assertRaises(
exception.InstanceNotFound, self.action.execute)
self.m_helper.find_instance.assert_called_once_with(self.INSTANCE_UUID)
self.assertEqual(exc.kwargs["name"], self.INSTANCE_UUID)
def test_execute_live_migration(self):
self.m_helper.find_instance.return_value = self.INSTANCE_UUID
try:
self.action.execute()
except Exception as exc:
self.fail(exc)
self.m_helper.live_migrate_instance.assert_called_once_with(
instance_id=self.INSTANCE_UUID,
dest_hostname="hypervisor2-hostname")
def test_revert_live_migration(self):
self.m_helper.find_instance.return_value = self.INSTANCE_UUID
self.action.revert()
self.m_helper_cls.assert_called_once_with(osc=self.m_osc)
self.m_helper.live_migrate_instance.assert_called_once_with(
instance_id=self.INSTANCE_UUID,
dest_hostname="hypervisor1-hostname"
)
def test_live_migrate_non_shared_storage_instance(self):
self.m_helper.find_instance.return_value = self.INSTANCE_UUID
self.m_helper.live_migrate_instance.side_effect = [
nova_helper.nvexceptions.ClientException(400, "BadRequest"), True]
try:
self.action.execute()
except Exception as exc:
self.fail(exc)
self.m_helper.live_migrate_instance.assert_has_calls([
mock.call(instance_id=self.INSTANCE_UUID,
dest_hostname="hypervisor2-hostname"),
mock.call(instance_id=self.INSTANCE_UUID,
dest_hostname="hypervisor2-hostname",
block_migration=True)
])
expected = [mock.call(instance_id=self.INSTANCE_UUID,
dest_hostname="hypervisor2-hostname"),
mock.call(instance_id=self.INSTANCE_UUID,
dest_hostname="hypervisor2-hostname",
block_migration=True)
]
self.assertEqual(expected,
self.m_helper.live_migrate_instance.call_args_list)
self.assertEqual(2, self.m_helper.live_migrate_instance.call_count)
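`test_live_migrate_non_shared_storage_instance` relies on `side_effect` accepting a list: the first call raises, the second returns normally, exercising a retry-with-block-migration fallback. A generic sketch of both sides of that contract (names are illustrative, not the actual NovaHelper API):

```python
from unittest import mock


class FakeBadRequest(Exception):
    """Stand-in for novaclient's ClientException(400)."""


def migrate_with_fallback(helper, instance_id, dest):
    # Try a shared-storage live migration first; if the cloud rejects it,
    # retry with block_migration=True to copy disks over the network.
    try:
        return helper.live_migrate_instance(instance_id=instance_id,
                                            dest_hostname=dest)
    except FakeBadRequest:
        return helper.live_migrate_instance(instance_id=instance_id,
                                            dest_hostname=dest,
                                            block_migration=True)


helper = mock.Mock()
# side_effect as a list: the first call raises, the second succeeds.
helper.live_migrate_instance.side_effect = [FakeBadRequest(), True]
```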


@@ -0,0 +1,42 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import voluptuous
from watcher.applier.actions import sleep
from watcher.tests import base
class TestSleep(base.TestCase):
def setUp(self):
super(TestSleep, self).setUp()
self.s = sleep.Sleep()
def test_parameters_duration(self):
self.s.input_parameters = {self.s.DURATION: 1.0}
self.assertEqual(True, self.s.validate_parameters())
def test_parameters_duration_empty(self):
self.s.input_parameters = {self.s.DURATION: None}
self.assertRaises(voluptuous.Invalid, self.s.validate_parameters)
def test_parameters_wrong_parameter(self):
self.s.input_parameters = {self.s.DURATION: "ef"}
self.assertRaises(voluptuous.Invalid, self.s.validate_parameters)
def test_parameters_add_field(self):
self.s.input_parameters = {self.s.DURATION: 1.0, "not_required": "nop"}
self.assertRaises(voluptuous.Invalid, self.s.validate_parameters)


@@ -32,6 +32,12 @@ from watcher.tests.db import base
@six.add_metaclass(abc.ABCMeta)
class FakeAction(abase.BaseAction):
def schema(self):
pass
def postcondition(self):
pass
def precondition(self):
pass
@@ -62,12 +68,11 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
result = self.engine.execute(actions)
self.assertEqual(result, True)
def create_action(self, action_type, applies_to, parameters, next):
def create_action(self, action_type, parameters, next):
action = {
'uuid': utils.generate_uuid(),
'action_plan_id': 0,
'action_type': action_type,
'applies_to': applies_to,
'input_parameters': parameters,
'state': objects.action.State.PENDING,
'alarm': None,
@@ -92,15 +97,15 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
self.assertEqual(result, True)
def test_execute_with_one_action(self):
actions = [self.create_action("nop", "", {'message': 'test'}, None)]
actions = [self.create_action("nop", {'message': 'test'}, None)]
result = self.engine.execute(actions)
self.assertEqual(result, True)
self.check_actions_state(actions, objects.action.State.SUCCEEDED)
def test_execute_with_two_actions(self):
actions = []
next = self.create_action("sleep", "", {'duration': '0'}, None)
first = self.create_action("nop", "", {'message': 'test'}, next.id)
next = self.create_action("sleep", {'duration': 0.0}, None)
first = self.create_action("nop", {'message': 'test'}, next.id)
actions.append(first)
actions.append(next)
@@ -111,9 +116,9 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
def test_execute_with_three_actions(self):
actions = []
next2 = self.create_action("nop", "vm1", {'message': 'next'}, None)
next = self.create_action("sleep", "vm1", {'duration': '0'}, next2.id)
first = self.create_action("nop", "vm1", {'message': 'hello'}, next.id)
next2 = self.create_action("nop", {'message': 'next'}, None)
next = self.create_action("sleep", {'duration': 0.0}, next2.id)
first = self.create_action("nop", {'message': 'hello'}, next.id)
self.check_action_state(first, objects.action.State.PENDING)
self.check_action_state(next, objects.action.State.PENDING)
self.check_action_state(next2, objects.action.State.PENDING)
@@ -128,12 +133,9 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
def test_execute_with_exception(self):
actions = []
next2 = self.create_action("no_exist",
"vm1", {'message': 'next'}, None)
next = self.create_action("sleep", "vm1",
{'duration': '0'}, next2.id)
first = self.create_action("nop", "vm1",
{'message': 'hello'}, next.id)
next2 = self.create_action("no_exist", {'message': 'next'}, None)
next = self.create_action("sleep", {'duration': 0.0}, next2.id)
first = self.create_action("nop", {'message': 'hello'}, next.id)
self.check_action_state(first, objects.action.State.PENDING)
self.check_action_state(next, objects.action.State.PENDING)
@@ -158,7 +160,7 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
plugin=FakeAction,
obj=None),
namespace=FakeAction.namespace())
actions = [self.create_action("dontcare", "vm1", {}, None)]
actions = [self.create_action("dontcare", {}, None)]
result = self.engine.execute(actions)
self.assertEqual(result, False)
self.check_action_state(actions[0], objects.action.State.FAILED)
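The workflow-engine tests build their action graph as a singly linked list: each action dict points at its successor through `next`, and execution walks the chain in order. A minimal walker over that structure (simplified dicts, not the real engine):

```python
def execute_chain(actions):
    # Index actions by id, find the head (the one action never referenced
    # as another action's 'next'), then follow the links to the end.
    by_id = {a["id"]: a for a in actions}
    successors = {a["next"] for a in actions if a["next"] is not None}
    head = next(a for a in actions if a["id"] not in successors)
    order, current = [], head
    while current is not None:
        order.append(current["action_type"])
        current = by_id.get(current["next"])
    return order
```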


@@ -187,6 +187,7 @@ class TestClients(base.BaseTestCase):
osc.ceilometer()
mock_call.assert_called_once_with(
cfg.CONF.ceilometer_client.api_version,
None,
session=mock_session)
@mock.patch.object(clients.OpenStackClients, 'session')
@@ -194,6 +195,7 @@ class TestClients(base.BaseTestCase):
def test_clients_ceilometer_diff_vers(self, mock_get_alarm_client,
mock_session):
'''ceilometerclient currently only has one version (v2)'''
mock_get_alarm_client.return_value = [mock.Mock(), mock.Mock()]
cfg.CONF.set_override('api_version', '2',
group='ceilometer_client')
osc = clients.OpenStackClients()
@@ -206,6 +208,7 @@ class TestClients(base.BaseTestCase):
@mock.patch.object(ceclient_v2.Client, '_get_alarm_client')
def test_clients_ceilometer_cached(self, mock_get_alarm_client,
mock_session):
mock_get_alarm_client.return_value = [mock.Mock(), mock.Mock()]
osc = clients.OpenStackClients()
osc._ceilometer = None
ceilometer = osc.ceilometer()
@@ -224,7 +227,7 @@ class TestClients(base.BaseTestCase):
@mock.patch.object(clients.OpenStackClients, 'session')
def test_clients_neutron_diff_vers(self, mock_session):
'''neutronclient currently only has one version (v2)'''
cfg.CONF.set_override('api_version', '2',
cfg.CONF.set_override('api_version', '2.0',
group='neutron_client')
osc = clients.OpenStackClients()
osc._neutron = None


@@ -84,10 +84,12 @@ def get_test_action(**kwargs):
'uuid': kwargs.get('uuid', '10a47dd1-4874-4298-91cf-eff046dbdb8d'),
'action_plan_id': kwargs.get('action_plan_id', 1),
'action_type': kwargs.get('action_type', 'nop'),
'applies_to': kwargs.get('applies_to',
'10a47dd1-4874-4298-91cf-eff046dbdb8d'),
'input_parameters': kwargs.get('input_parameters', {'key1': 'val1',
'key2': 'val2'}),
'input_parameters':
kwargs.get('input_parameters',
{'key1': 'val1',
'key2': 'val2',
'resource_id':
'10a47dd1-4874-4298-91cf-eff046dbdb8d'}),
'state': kwargs.get('state', 'PENDING'),
'alarm': kwargs.get('alarm', None),
'next': kwargs.get('next', 2),


@@ -18,55 +18,60 @@
#
import uuid
from watcher.decision_engine.model.hypervisor import Hypervisor
from watcher.decision_engine.model.vm_state import VMState
from watcher.decision_engine.model import hypervisor as modelhyp
from watcher.decision_engine.model import vm_state
from watcher.tests import base
from watcher.tests.decision_engine.strategy.strategies.faker_cluster_state import \
FakerModelCollector
from watcher.tests.decision_engine.strategy.strategies import \
faker_cluster_state
class TestMapping(base.BaseTestCase):
VM1_UUID = "73b09e16-35b7-4922-804e-e8f5d9b740fc"
VM2_UUID = "a4cab39b-9828-413a-bf88-f76921bf1517"
def test_get_node_from_vm(self):
fake_cluster = FakerModelCollector()
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
vms = model.get_all_vms()
keys = list(vms.keys())
vm = vms[keys[0]]
if vm.uuid != 'VM_0':
if vm.uuid != self.VM1_UUID:
vm = vms[keys[1]]
node = model.mapping.get_node_from_vm(vm)
self.assertEqual(node.uuid, 'Node_0')
def test_get_node_from_vm_id(self):
fake_cluster = FakerModelCollector()
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
hyps = model.mapping.get_node_vms_from_id("BLABLABLA")
self.assertEqual(hyps.__len__(), 0)
def test_get_all_vms(self):
fake_cluster = FakerModelCollector()
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
vms = model.get_all_vms()
self.assertEqual(vms.__len__(), 2)
self.assertEqual(vms['VM_0'].state, VMState.ACTIVE.value)
self.assertEqual(vms['VM_0'].uuid, 'VM_0')
self.assertEqual(vms['VM_1'].state, VMState.ACTIVE.value)
self.assertEqual(vms['VM_1'].uuid, 'VM_1')
self.assertEqual(vms[self.VM1_UUID].state,
vm_state.VMState.ACTIVE.value)
self.assertEqual(vms[self.VM1_UUID].uuid, self.VM1_UUID)
self.assertEqual(vms[self.VM2_UUID].state,
vm_state.VMState.ACTIVE.value)
self.assertEqual(vms[self.VM2_UUID].uuid, self.VM2_UUID)
def test_get_mapping(self):
fake_cluster = FakerModelCollector()
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
mapping_vm = model.mapping.get_mapping_vm()
self.assertEqual(mapping_vm.__len__(), 2)
self.assertEqual(mapping_vm['VM_0'], 'Node_0')
self.assertEqual(mapping_vm['VM_1'], 'Node_1')
self.assertEqual(mapping_vm[self.VM1_UUID], 'Node_0')
self.assertEqual(mapping_vm[self.VM2_UUID], 'Node_1')
def test_migrate_vm(self):
fake_cluster = FakerModelCollector()
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
vms = model.get_all_vms()
keys = list(vms.keys())
@@ -81,13 +86,13 @@ class TestMapping(base.BaseTestCase):
self.assertEqual(model.mapping.migrate_vm(vm1, hyp0, hyp1), True)
def test_unmap_from_id_log_warning(self):
fake_cluster = FakerModelCollector()
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
vms = model.get_all_vms()
keys = list(vms.keys())
vm0 = vms[keys[0]]
id = "{0}".format(uuid.uuid4())
hypervisor = Hypervisor()
hypervisor = modelhyp.Hypervisor()
hypervisor.uuid = id
model.mapping.unmap_from_id(hypervisor.uuid, vm0.uuid)
@@ -95,7 +100,7 @@ class TestMapping(base.BaseTestCase):
# hypervisor.uuid)), 1)
def test_unmap_from_id(self):
fake_cluster = FakerModelCollector()
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
vms = model.get_all_vms()
keys = list(vms.keys())


@@ -68,7 +68,7 @@ class TestActionScheduling(base.DbTestCase):
"dst_uuid_hypervisor": "server2",
}
solution.add_action(action_type="migrate",
applies_to="b199db0c-1408-4d52-b5a5-5ca14de0ff36",
resource_id="b199db0c-1408-4d52-b5a5-5ca14de0ff36",
input_parameters=parameters)
with mock.patch.object(
@@ -94,11 +94,11 @@ class TestActionScheduling(base.DbTestCase):
"dst_uuid_hypervisor": "server2",
}
solution.add_action(action_type="migrate",
applies_to="b199db0c-1408-4d52-b5a5-5ca14de0ff36",
resource_id="b199db0c-1408-4d52-b5a5-5ca14de0ff36",
input_parameters=parameters)
solution.add_action(action_type="nop",
applies_to="",
resource_id="",
input_parameters={})
with mock.patch.object(


@@ -13,27 +13,29 @@
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from watcher.decision_engine.solution.default import DefaultSolution
from watcher.decision_engine.solution import default
from watcher.tests import base
class TestDefaultSolution(base.BaseTestCase):
def test_default_solution(self):
solution = DefaultSolution()
solution = default.DefaultSolution()
parameters = {
"src_uuid_hypervisor": "server1",
"dst_uuid_hypervisor": "server2",
}
solution.add_action(action_type="nop",
applies_to="b199db0c-1408-4d52-b5a5-5ca14de0ff36",
resource_id="b199db0c-1408-4d52-b5a5-5ca14de0ff36",
input_parameters=parameters)
self.assertEqual(len(solution.actions), 1)
expected_action_type = "nop"
expected_applies_to = "b199db0c-1408-4d52-b5a5-5ca14de0ff36"
expected_parameters = parameters
expected_parameters = {
"src_uuid_hypervisor": "server1",
"dst_uuid_hypervisor": "server2",
"resource_id": "b199db0c-1408-4d52-b5a5-5ca14de0ff36"
}
self.assertEqual(solution.actions[0].get('action_type'),
expected_action_type)
self.assertEqual(solution.actions[0].get('applies_to'),
expected_applies_to)
self.assertEqual(solution.actions[0].get('input_parameters'),
expected_parameters)
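The updated `DefaultSolution` test reflects the API change running through this patch series: the separate `applies_to` field gives way to a `resource_id` key folded into `input_parameters`. The merge can be sketched as (a hypothetical simplification of `add_action`, not the real implementation):

```python
def add_action(actions, action_type, resource_id, input_parameters):
    # Copy the caller's parameters so the merge never mutates their dict,
    # then record the target as a regular input parameter.
    params = dict(input_parameters)
    params["resource_id"] = resource_id
    actions.append({"action_type": action_type,
                    "input_parameters": params})
```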


@@ -16,81 +16,24 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import random
from watcher.decision_engine.model.hypervisor import Hypervisor
from watcher.decision_engine.model.model_root import ModelRoot
from watcher.decision_engine.model.resource import Resource
from watcher.decision_engine.model.resource import ResourceType
from watcher.decision_engine.model.vm import VM
from watcher.metrics_engine.cluster_model_collector.api import \
BaseClusterModelCollector
from watcher.decision_engine.model import hypervisor
from watcher.decision_engine.model import model_root as modelroot
from watcher.decision_engine.model import resource
from watcher.decision_engine.model import vm as modelvm
from watcher.metrics_engine.cluster_model_collector import api
class FakerModelCollector(BaseClusterModelCollector):
class FakerModelCollector(api.BaseClusterModelCollector):
def __init__(self):
pass
def get_latest_cluster_data_model(self):
return self.generate_scenario_1()
def generate_random(self, count_nodes, number_of_vm_per_node):
vms = []
current_state_cluster = ModelRoot()
# number of nodes
count_node = count_nodes
# number max of vm per hypervisor
node_count_vm = number_of_vm_per_node
# total number of virtual machine
count_vm = (count_node * node_count_vm)
# define resources (CPU, MEM, disk, ...)
mem = Resource(ResourceType.memory)
# 2199.954 Mhz
num_cores = Resource(ResourceType.cpu_cores)
disk = Resource(ResourceType.disk)
current_state_cluster.create_resource(mem)
current_state_cluster.create_resource(num_cores)
current_state_cluster.create_resource(disk)
for i in range(0, count_node):
node_uuid = "Node_{0}".format(i)
hypervisor = Hypervisor()
hypervisor.uuid = node_uuid
hypervisor.hostname = "host_{0}".format(i)
mem.set_capacity(hypervisor, 132)
disk.set_capacity(hypervisor, 250)
num_cores.set_capacity(hypervisor, 40)
current_state_cluster.add_hypervisor(hypervisor)
for i in range(0, count_vm):
vm_uuid = "VM_{0}".format(i)
vm = VM()
vm.uuid = vm_uuid
mem.set_capacity(vm, 8)
disk.set_capacity(vm, 10)
num_cores.set_capacity(vm, 10)
vms.append(vm)
current_state_cluster.add_vm(vm)
j = 0
for node_id in current_state_cluster.get_all_hypervisors():
for i in range(0, random.randint(0, node_count_vm)):
# todo(jed) check if enough capacity
current_state_cluster.get_mapping().map(
current_state_cluster.get_hypervisor_from_id(node_id),
vms[j])
j += 1
return current_state_cluster
def generate_scenario_1(self):
vms = []
current_state_cluster = ModelRoot()
current_state_cluster = modelroot.ModelRoot()
# number of nodes
count_node = 5
# number max of vm per node
@@ -99,10 +42,10 @@ class FakerModelCollector(BaseClusterModelCollector):
count_vm = (count_node * node_count_vm)
# define resources (CPU, MEM, disk, ...)
mem = Resource(ResourceType.memory)
mem = resource.Resource(resource.ResourceType.memory)
# 2199.954 Mhz
num_cores = Resource(ResourceType.cpu_cores)
disk = Resource(ResourceType.disk)
num_cores = resource.Resource(resource.ResourceType.cpu_cores)
disk = resource.Resource(resource.ResourceType.disk)
current_state_cluster.create_resource(mem)
current_state_cluster.create_resource(num_cores)
@@ -110,7 +53,7 @@ class FakerModelCollector(BaseClusterModelCollector):
for i in range(0, count_node):
node_uuid = "Node_{0}".format(i)
node = Hypervisor()
node = hypervisor.Hypervisor()
node.uuid = node_uuid
node.hostname = "hostname_{0}".format(i)
@@ -121,7 +64,7 @@ class FakerModelCollector(BaseClusterModelCollector):
for i in range(0, count_vm):
vm_uuid = "VM_{0}".format(i)
vm = VM()
vm = modelvm.VM()
vm.uuid = vm_uuid
mem.set_capacity(vm, 2)
disk.set_capacity(vm, 20)
@@ -168,130 +111,68 @@ class FakerModelCollector(BaseClusterModelCollector):
model.get_hypervisor_from_id(h_id),
model.get_vm_from_id(vm_id))
def generate_scenario_2(self):
vms = []
current_state_cluster = ModelRoot()
# number of nodes
count_node = 10
# number max of vm per node
node_count_vm = 7
# total number of virtual machine
count_vm = (count_node * node_count_vm)
# define resources (CPU, MEM, disk, ...)
mem = Resource(ResourceType.memory)
# 2199.954 Mhz
num_cores = Resource(ResourceType.cpu_cores)
disk = Resource(ResourceType.disk)
current_state_cluster.create_resource(mem)
current_state_cluster.create_resource(num_cores)
current_state_cluster.create_resource(disk)
for i in range(0, count_node):
node_uuid = "Node_{0}".format(i)
node = Hypervisor()
node.uuid = node_uuid
node.hostname = "hostname_{0}".format(i)
mem.set_capacity(node, 132)
disk.set_capacity(node, 250)
num_cores.set_capacity(node, 40)
current_state_cluster.add_hypervisor(node)
for i in range(0, count_vm):
vm_uuid = "VM_{0}".format(i)
vm = VM()
vm.uuid = vm_uuid
mem.set_capacity(vm, 10)
disk.set_capacity(vm, 25)
num_cores.set_capacity(vm, 16)
vms.append(vm)
current_state_cluster.add_vm(vm)
indice = 0
for j in range(0, 2):
node_uuid = "Node_{0}".format(j)
for i in range(indice, 3):
vm_uuid = "VM_{0}".format(i)
self.map(current_state_cluster, node_uuid, vm_uuid)
for j in range(2, 5):
node_uuid = "Node_{0}".format(j)
for i in range(indice, 4):
vm_uuid = "VM_{0}".format(i)
self.map(current_state_cluster, node_uuid, vm_uuid)
for j in range(5, 10):
node_uuid = "Node_{0}".format(j)
for i in range(indice, 4):
vm_uuid = "VM_{0}".format(i)
self.map(current_state_cluster, node_uuid, vm_uuid)
return current_state_cluster
def generate_scenario_3_with_2_hypervisors(self):
vms = []
current_state_cluster = ModelRoot()
root = modelroot.ModelRoot()
# number of nodes
count_node = 2
# number max of vm per node
node_count_vm = 1
# total number of virtual machine
count_vm = (count_node * node_count_vm)
# define resources (CPU, MEM, disk, ...)
mem = Resource(ResourceType.memory)
mem = resource.Resource(resource.ResourceType.memory)
# 2199.954 Mhz
num_cores = Resource(ResourceType.cpu_cores)
disk = Resource(ResourceType.disk)
num_cores = resource.Resource(resource.ResourceType.cpu_cores)
disk = resource.Resource(resource.ResourceType.disk)
current_state_cluster.create_resource(mem)
current_state_cluster.create_resource(num_cores)
current_state_cluster.create_resource(disk)
root.create_resource(mem)
root.create_resource(num_cores)
root.create_resource(disk)
for i in range(0, count_node):
node_uuid = "Node_{0}".format(i)
node = Hypervisor()
node = hypervisor.Hypervisor()
node.uuid = node_uuid
node.hostname = "hostname_{0}".format(i)
mem.set_capacity(node, 132)
disk.set_capacity(node, 250)
num_cores.set_capacity(node, 40)
current_state_cluster.add_hypervisor(node)
root.add_hypervisor(node)
for i in range(0, count_vm):
vm_uuid = "VM_{0}".format(i)
vm = VM()
vm.uuid = vm_uuid
mem.set_capacity(vm, 2)
disk.set_capacity(vm, 20)
num_cores.set_capacity(vm, 10)
vms.append(vm)
current_state_cluster.add_vm(vm)
vm1 = modelvm.VM()
vm1.uuid = "73b09e16-35b7-4922-804e-e8f5d9b740fc"
mem.set_capacity(vm1, 2)
disk.set_capacity(vm1, 20)
num_cores.set_capacity(vm1, 10)
vms.append(vm1)
root.add_vm(vm1)
current_state_cluster.get_mapping().map(
current_state_cluster.get_hypervisor_from_id("Node_0"),
current_state_cluster.get_vm_from_id("VM_0"))
vm2 = modelvm.VM()
vm2.uuid = "a4cab39b-9828-413a-bf88-f76921bf1517"
mem.set_capacity(vm2, 2)
disk.set_capacity(vm2, 20)
num_cores.set_capacity(vm2, 10)
vms.append(vm2)
root.add_vm(vm2)
current_state_cluster.get_mapping().map(
current_state_cluster.get_hypervisor_from_id("Node_1"),
current_state_cluster.get_vm_from_id("VM_1"))
root.get_mapping().map(root.get_hypervisor_from_id("Node_0"),
root.get_vm_from_id(str(vm1.uuid)))
return current_state_cluster
root.get_mapping().map(root.get_hypervisor_from_id("Node_1"),
root.get_vm_from_id(str(vm2.uuid)))
return root
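The generator above builds a two-node model, records per-element capacities on shared `Resource` objects, and then maps each VM onto a hypervisor. A framework-free sketch of the capacity-map idea (the class and attribute names below are illustrative, not Watcher's real model API):

```python
# One Resource object stores a capacity per model element, keyed by uuid.
class Resource:
    def __init__(self, name):
        self.name = name
        self._capacity = {}

    def set_capacity(self, element, value):
        self._capacity[element.uuid] = value

    def get_capacity(self, element):
        return self._capacity[element.uuid]

class Element:
    def __init__(self, uuid):
        self.uuid = uuid

mem = Resource('memory')
node = Element('Node_0')
vm = Element('VM_0')
mem.set_capacity(node, 132)   # hypervisor offers 132 units of memory
mem.set_capacity(vm, 2)       # VM consumes 2 units
print(mem.get_capacity(node) - mem.get_capacity(vm))  # 130
```

This is why a single `mem` object can be reused for both hypervisors and VMs in the scenario generators: capacity is looked up per element, not stored on the element itself.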
def generate_scenario_4_with_1_hypervisor_no_vm(self):
current_state_cluster = ModelRoot()
current_state_cluster = modelroot.ModelRoot()
# number of nodes
count_node = 1
# define resources (CPU, memory, disk, ...)
mem = Resource(ResourceType.memory)
mem = resource.Resource(resource.ResourceType.memory)
# 2199.954 Mhz
num_cores = Resource(ResourceType.cpu_cores)
disk = Resource(ResourceType.disk)
num_cores = resource.Resource(resource.ResourceType.cpu_cores)
disk = resource.Resource(resource.ResourceType.disk)
current_state_cluster.create_resource(mem)
current_state_cluster.create_resource(num_cores)
@@ -299,7 +180,7 @@ class FakerModelCollector(BaseClusterModelCollector):
for i in range(0, count_node):
node_uuid = "Node_{0}".format(i)
node = Hypervisor()
node = hypervisor.Hypervisor()
node.uuid = node_uuid
node.hostname = "hostname_{0}".format(i)
@@ -312,17 +193,17 @@ class FakerModelCollector(BaseClusterModelCollector):
def generate_scenario_5_with_vm_disk_0(self):
vms = []
current_state_cluster = ModelRoot()
current_state_cluster = modelroot.ModelRoot()
# number of nodes
count_node = 1
# number of vms
count_vm = 1
# define resources (CPU, memory, disk, ...)
mem = Resource(ResourceType.memory)
mem = resource.Resource(resource.ResourceType.memory)
# 2199.954 Mhz
num_cores = Resource(ResourceType.cpu_cores)
disk = Resource(ResourceType.disk)
num_cores = resource.Resource(resource.ResourceType.cpu_cores)
disk = resource.Resource(resource.ResourceType.disk)
current_state_cluster.create_resource(mem)
current_state_cluster.create_resource(num_cores)
@@ -330,7 +211,7 @@ class FakerModelCollector(BaseClusterModelCollector):
for i in range(0, count_node):
node_uuid = "Node_{0}".format(i)
node = Hypervisor()
node = hypervisor.Hypervisor()
node.uuid = node_uuid
node.hostname = "hostname_{0}".format(i)
@@ -341,7 +222,7 @@ class FakerModelCollector(BaseClusterModelCollector):
for i in range(0, count_vm):
vm_uuid = "VM_{0}".format(i)
vm = VM()
vm = modelvm.VM()
vm.uuid = vm_uuid
mem.set_capacity(vm, 2)
disk.set_capacity(vm, 0)


@@ -16,26 +16,26 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from collections import Counter
import collections
import mock
from watcher.applier.actions.loading import default
from watcher.common import exception
from watcher.decision_engine.model.model_root import ModelRoot
from watcher.decision_engine.strategy.strategies.basic_consolidation import \
BasicConsolidation
from watcher.decision_engine.model import model_root
from watcher.decision_engine.strategy import strategies
from watcher.tests import base
from watcher.tests.decision_engine.strategy.strategies.faker_cluster_state \
import FakerModelCollector
from watcher.tests.decision_engine.strategy.strategies.faker_metrics_collector\
import FakerMetricsCollector
from watcher.tests.decision_engine.strategy.strategies \
import faker_cluster_state
from watcher.tests.decision_engine.strategy.strategies \
import faker_metrics_collector
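The import changes above consistently replace `from module import Name` with module imports, per the OpenStack hacking guidance; names are then qualified at the call site (`strategies.BasicConsolidation()` instead of a bare `BasicConsolidation()`). A generic stdlib illustration of the same rule:

```python
# Import the module and qualify names at the call site, which keeps the
# origin of each name visible and makes mock.patch targets unambiguous.
import collections  # rather than: from collections import Counter

counter = collections.Counter('abracadabra')
print(counter['a'])  # 5
```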
class TestBasicConsolidation(base.BaseTestCase):
# fake metrics
fake_metrics = FakerMetricsCollector()
fake_metrics = faker_metrics_collector.FakerMetricsCollector()
# fake cluster
fake_cluster = FakerModelCollector()
fake_cluster = faker_cluster_state.FakerModelCollector()
def test_cluster_size(self):
size_cluster = len(
@@ -45,7 +45,7 @@ class TestBasicConsolidation(base.BaseTestCase):
def test_basic_consolidation_score_hypervisor(self):
cluster = self.fake_cluster.generate_scenario_1()
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
sercon.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
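Throughout these tests the Ceilometer helper is stubbed by assigning a `MagicMock` whose `statistic_aggregation` attribute delegates to the fake metrics collector. A minimal self-contained sketch of that pattern (the `fake_statistics` function and its return value are illustrative, not Watcher's real fake):

```python
from unittest import mock

def fake_statistics(resource_id, meter_name, period):
    # Illustrative stand-in for FakerMetricsCollector.mock_get_statistics;
    # the real fake returns canned values per meter and resource.
    return 5.0

ceilometer = mock.MagicMock(statistic_aggregation=fake_statistics)

# Strategy code calls the helper exactly as it would the real client:
value = ceilometer.statistic_aggregation('VM_0', 'cpu_util', 3600)
print(value)  # 5.0
```

Because only the one attribute is pinned, any other method the strategy happens to call on the mock still succeeds silently, which keeps the tests focused on the metric path.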
@@ -67,7 +67,7 @@ class TestBasicConsolidation(base.BaseTestCase):
def test_basic_consolidation_score_vm(self):
cluster = self.fake_cluster.generate_scenario_1()
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
sercon.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
vm_0 = cluster.get_vm_from_id("VM_0")
@@ -90,7 +90,7 @@ class TestBasicConsolidation(base.BaseTestCase):
def test_basic_consolidation_score_vm_disk(self):
cluster = self.fake_cluster.generate_scenario_5_with_vm_disk_0()
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
sercon.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
vm_0 = cluster.get_vm_from_id("VM_0")
@@ -99,7 +99,7 @@ class TestBasicConsolidation(base.BaseTestCase):
def test_basic_consolidation_weight(self):
cluster = self.fake_cluster.generate_scenario_1()
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
sercon.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
vm_0 = cluster.get_vm_from_id("VM_0")
@@ -114,24 +114,24 @@ class TestBasicConsolidation(base.BaseTestCase):
vm_0_weight_assert)
def test_calculate_migration_efficacy(self):
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
sercon.calculate_migration_efficacy()
def test_exception_model(self):
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
self.assertRaises(exception.ClusterStateNotDefined, sercon.execute,
None)
def test_exception_cluster_empty(self):
sercon = BasicConsolidation()
model = ModelRoot()
sercon = strategies.BasicConsolidation()
model = model_root.ModelRoot()
self.assertRaises(exception.ClusterEmpty, sercon.execute,
model)
def test_calculate_score_vm_raise_cluster_state_not_found(self):
metrics = FakerMetricsCollector()
metrics = faker_metrics_collector.FakerMetricsCollector()
metrics.empty_one_metric("CPU_COMPUTE")
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
sercon.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
@@ -139,8 +139,8 @@ class TestBasicConsolidation(base.BaseTestCase):
sercon.calculate_score_vm, "VM_1", None)
def test_check_migration(self):
sercon = BasicConsolidation()
fake_cluster = FakerModelCollector()
sercon = strategies.BasicConsolidation()
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
all_vms = model.get_all_vms()
@@ -151,8 +151,8 @@ class TestBasicConsolidation(base.BaseTestCase):
sercon.check_migration(model, hyp0, hyp0, vm0)
def test_threshold(self):
sercon = BasicConsolidation()
fake_cluster = FakerModelCollector()
sercon = strategies.BasicConsolidation()
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
all_hyps = model.get_all_hypervisors()
@@ -165,19 +165,19 @@ class TestBasicConsolidation(base.BaseTestCase):
self.assertEqual(sercon.get_threshold_cores(), threshold_cores + 1)
def test_number_of(self):
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
sercon.get_number_of_released_nodes()
sercon.get_number_of_migrations()
def test_basic_consolidation_migration(self):
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
sercon.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
solution = sercon.execute(
self.fake_cluster.generate_scenario_3_with_2_hypervisors())
actions_counter = Counter(
actions_counter = collections.Counter(
[action.get('action_type') for action in solution.actions])
expected_num_migrations = 1
@@ -189,26 +189,31 @@ class TestBasicConsolidation(base.BaseTestCase):
self.assertEqual(num_migrations, expected_num_migrations)
self.assertEqual(num_hypervisor_state_change, expected_power_state)
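The assertions above tally action types with `collections.Counter` over the solution's action list. A standalone sketch of the counting step, with a hypothetical action list mimicking the shape used by the tests:

```python
import collections

# Hypothetical solution actions; real ones come from solution.actions
actions = [
    {'action_type': 'migrate', 'input_parameters': {}},
    {'action_type': 'change_nova_service_state', 'input_parameters': {}},
    {'action_type': 'migrate', 'input_parameters': {}},
]

actions_counter = collections.Counter(
    action.get('action_type') for action in actions)
num_migrations = actions_counter.get('migrate', 0)
num_state_changes = actions_counter.get('change_nova_service_state', 0)
print(num_migrations, num_state_changes)  # 2 1
```

Using `.get(key, 0)` rather than indexing keeps the assertion meaningful even when a strategy emits no action of a given type.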
def test_execute_cluster_empty(self):
current_state_cluster = FakerModelCollector()
sercon = BasicConsolidation("sercon", "Basic offline consolidation")
sercon.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
model = current_state_cluster.generate_random(0, 0)
self.assertRaises(exception.ClusterEmpty, sercon.execute, model)
# calculate_weight
def test_execute_no_workload(self):
sercon = BasicConsolidation()
sercon = strategies.BasicConsolidation()
sercon.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
current_state_cluster = FakerModelCollector()
current_state_cluster = faker_cluster_state.FakerModelCollector()
model = current_state_cluster. \
generate_scenario_4_with_1_hypervisor_no_vm()
with mock.patch.object(BasicConsolidation, 'calculate_weight') \
with mock.patch.object(strategies.BasicConsolidation,
'calculate_weight') \
as mock_score_call:
mock_score_call.return_value = 0
solution = sercon.execute(model)
self.assertEqual(solution.efficacy, 100)
def test_check_parameters(self):
sercon = strategies.BasicConsolidation()
sercon.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
solution = sercon.execute(
self.fake_cluster.generate_scenario_3_with_2_hypervisors())
loader = default.DefaultActionLoader()
for action in solution.actions:
loaded_action = loader.load(action['action_type'])
loaded_action.input_parameters = action['input_parameters']
loaded_action.validate_parameters()
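The new `test_check_parameters` tests all follow the same load-then-validate loop: resolve each action type to a handler, hand it the action's parameters, and let the handler reject malformed input. A generic sketch of that pattern (the `MigrateAction` class, the `registry` dict, and the `resource_id` parameter are illustrative, not Watcher's real `DefaultActionLoader` API):

```python
# Each action type maps to a handler that can validate its own parameters.
class MigrateAction:
    def __init__(self):
        self.input_parameters = None

    def validate_parameters(self):
        if 'resource_id' not in (self.input_parameters or {}):
            raise ValueError("migrate requires a resource_id parameter")

registry = {'migrate': MigrateAction}

actions = [{'action_type': 'migrate',
            'input_parameters': {'resource_id': 'VM_0'}}]

for action in actions:
    loaded_action = registry[action['action_type']]()
    loaded_action.input_parameters = action['input_parameters']
    loaded_action.validate_parameters()  # raises ValueError on bad input
```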


@@ -13,16 +13,29 @@
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from watcher.decision_engine.strategy.strategies.dummy_strategy import \
DummyStrategy
from watcher.applier.actions.loading import default
from watcher.decision_engine.strategy import strategies
from watcher.tests import base
from watcher.tests.decision_engine.strategy.strategies.faker_cluster_state\
import FakerModelCollector
from watcher.tests.decision_engine.strategy.strategies import \
faker_cluster_state
class TestDummyStrategy(base.TestCase):
def test_dummy_strategy(self):
tactique = DummyStrategy("basic", "Basic offline consolidation")
fake_cluster = FakerModelCollector()
dummy = strategies.DummyStrategy("dummy", "Dummy strategy")
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
tactique.execute(model)
solution = dummy.execute(model)
self.assertEqual(3, len(solution.actions))
def test_check_parameters(self):
dummy = strategies.DummyStrategy("dummy", "Dummy strategy")
fake_cluster = faker_cluster_state.FakerModelCollector()
model = fake_cluster.generate_scenario_3_with_2_hypervisors()
solution = dummy.execute(model)
loader = default.DefaultActionLoader()
for action in solution.actions:
loaded_action = loader.load(action['action_type'])
loaded_action.input_parameters = action['input_parameters']
loaded_action.validate_parameters()


@@ -16,35 +16,35 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from collections import Counter
import collections
import mock
from watcher.applier.actions.loading import default
from watcher.common import exception
from watcher.decision_engine.model.model_root import ModelRoot
from watcher.decision_engine.model.resource import ResourceType
from watcher.decision_engine.strategy.strategies.outlet_temp_control import \
OutletTempControl
from watcher.decision_engine.model import model_root
from watcher.decision_engine.model import resource
from watcher.decision_engine.strategy import strategies
from watcher.tests import base
from watcher.tests.decision_engine.strategy.strategies.faker_cluster_state \
import FakerModelCollector
from watcher.tests.decision_engine.strategy.strategies.faker_metrics_collector\
import FakerMetricsCollector
from watcher.tests.decision_engine.strategy.strategies \
import faker_cluster_state
from watcher.tests.decision_engine.strategy.strategies \
import faker_metrics_collector
class TestOutletTempControl(base.BaseTestCase):
# fake metrics
fake_metrics = FakerMetricsCollector()
fake_metrics = faker_metrics_collector.FakerMetricsCollector()
# fake cluster
fake_cluster = FakerModelCollector()
fake_cluster = faker_cluster_state.FakerModelCollector()
def test_calc_used_res(self):
model = self.fake_cluster.generate_scenario_3_with_2_hypervisors()
strategy = OutletTempControl()
strategy = strategies.OutletTempControl()
hypervisor = model.get_hypervisor_from_id('Node_0')
cap_cores = model.get_resource_from_id(ResourceType.cpu_cores)
cap_mem = model.get_resource_from_id(ResourceType.memory)
cap_disk = model.get_resource_from_id(ResourceType.disk)
cap_cores = model.get_resource_from_id(resource.ResourceType.cpu_cores)
cap_mem = model.get_resource_from_id(resource.ResourceType.memory)
cap_disk = model.get_resource_from_id(resource.ResourceType.disk)
cores_used, mem_used, disk_used = strategy.calc_used_res(model,
hypervisor,
cap_cores,
@@ -55,7 +55,7 @@ class TestOutletTempControl(base.BaseTestCase):
def test_group_hosts_by_outlet_temp(self):
model = self.fake_cluster.generate_scenario_3_with_2_hypervisors()
strategy = OutletTempControl()
strategy = strategies.OutletTempControl()
strategy.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
h1, h2 = strategy.group_hosts_by_outlet_temp(model)
@@ -64,17 +64,18 @@ class TestOutletTempControl(base.BaseTestCase):
def test_choose_vm_to_migrate(self):
model = self.fake_cluster.generate_scenario_3_with_2_hypervisors()
strategy = OutletTempControl()
strategy = strategies.OutletTempControl()
strategy.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
h1, h2 = strategy.group_hosts_by_outlet_temp(model)
vm_to_mig = strategy.choose_vm_to_migrate(model, h1)
self.assertEqual(vm_to_mig[0].uuid, 'Node_1')
self.assertEqual(vm_to_mig[1].uuid, 'VM_1')
self.assertEqual(vm_to_mig[1].uuid,
"a4cab39b-9828-413a-bf88-f76921bf1517")
def test_filter_dest_servers(self):
model = self.fake_cluster.generate_scenario_3_with_2_hypervisors()
strategy = OutletTempControl()
strategy = strategies.OutletTempControl()
strategy.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
h1, h2 = strategy.group_hosts_by_outlet_temp(model)
@@ -84,29 +85,28 @@ class TestOutletTempControl(base.BaseTestCase):
self.assertEqual(dest_hosts[0]['hv'].uuid, 'Node_0')
def test_exception_model(self):
strategy = OutletTempControl()
strategy = strategies.OutletTempControl()
self.assertRaises(exception.ClusterStateNotDefined, strategy.execute,
None)
def test_exception_cluster_empty(self):
strategy = OutletTempControl()
model = ModelRoot()
strategy = strategies.OutletTempControl()
model = model_root.ModelRoot()
self.assertRaises(exception.ClusterEmpty, strategy.execute, model)
def test_execute_cluster_empty(self):
current_state_cluster = FakerModelCollector()
strategy = OutletTempControl()
strategy = strategies.OutletTempControl()
strategy.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
model = current_state_cluster.generate_random(0, 0)
model = model_root.ModelRoot()
self.assertRaises(exception.ClusterEmpty, strategy.execute, model)
def test_execute_no_workload(self):
strategy = OutletTempControl()
strategy = strategies.OutletTempControl()
strategy.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
current_state_cluster = FakerModelCollector()
current_state_cluster = faker_cluster_state.FakerModelCollector()
model = current_state_cluster. \
generate_scenario_4_with_1_hypervisor_no_vm()
@@ -114,13 +114,25 @@ class TestOutletTempControl(base.BaseTestCase):
self.assertEqual(solution.actions, [])
def test_execute(self):
strategy = OutletTempControl()
strategy = strategies.OutletTempControl()
strategy.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
model = self.fake_cluster.generate_scenario_3_with_2_hypervisors()
solution = strategy.execute(model)
actions_counter = Counter(
actions_counter = collections.Counter(
[action.get('action_type') for action in solution.actions])
num_migrations = actions_counter.get("migrate", 0)
self.assertEqual(num_migrations, 1)
def test_check_parameters(self):
outlet = strategies.OutletTempControl()
outlet.ceilometer = mock.MagicMock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
model = self.fake_cluster.generate_scenario_3_with_2_hypervisors()
solution = outlet.execute(model)
loader = default.DefaultActionLoader()
for action in solution.actions:
loaded_action = loader.load(action['action_type'])
loaded_action.input_parameters = action['input_parameters']
loaded_action.validate_parameters()


@@ -20,8 +20,6 @@ import netaddr
from oslo_utils import timeutils
import six
from watcher.common import context as watcher_context
from watcher.common import exception
from watcher.objects import base
from watcher.objects import utils
from watcher.tests import base as test_base
@@ -202,279 +200,6 @@ class TestUtils(test_base.TestCase):
base.obj_to_primitive(mylist))
class _TestObject(object):
def test_hydration_type_error(self):
primitive = {'watcher_object.name': 'MyObj',
'watcher_object.namespace': 'watcher',
'watcher_object.version': '1.5',
'watcher_object.data': {'foo': 'a'}}
self.assertRaises(ValueError, MyObj.obj_from_primitive, primitive)
def test_hydration(self):
primitive = {'watcher_object.name': 'MyObj',
'watcher_object.namespace': 'watcher',
'watcher_object.version': '1.5',
'watcher_object.data': {'foo': 1}}
obj = MyObj.obj_from_primitive(primitive)
self.assertEqual(1, obj.foo)
def test_hydration_bad_ns(self):
primitive = {'watcher_object.name': 'MyObj',
'watcher_object.namespace': 'foo',
'watcher_object.version': '1.5',
'watcher_object.data': {'foo': 1}}
self.assertRaises(exception.UnsupportedObjectError,
MyObj.obj_from_primitive, primitive)
def test_dehydration(self):
expected = {'watcher_object.name': 'MyObj',
'watcher_object.namespace': 'watcher',
'watcher_object.version': '1.5',
'watcher_object.data': {'foo': 1}}
obj = MyObj(self.context)
obj.foo = 1
obj.obj_reset_changes()
self.assertEqual(expected, obj.obj_to_primitive())
def test_get_updates(self):
obj = MyObj(self.context)
self.assertEqual({}, obj.obj_get_changes())
obj.foo = 123
self.assertEqual({'foo': 123}, obj.obj_get_changes())
obj.bar = 'test'
self.assertEqual({'foo': 123, 'bar': 'test'}, obj.obj_get_changes())
obj.obj_reset_changes()
self.assertEqual({}, obj.obj_get_changes())
def test_object_property(self):
obj = MyObj(self.context, foo=1)
self.assertEqual(1, obj.foo)
def test_object_property_type_error(self):
obj = MyObj(self.context)
def fail():
obj.foo = 'a'
self.assertRaises(ValueError, fail)
def test_object_dict_syntax(self):
obj = MyObj(self.context)
obj.foo = 123
obj.bar = 'bar'
self.assertEqual(123, obj['foo'])
self.assertEqual([('bar', 'bar'), ('foo', 123)],
sorted(obj.items(), key=lambda x: x[0]))
self.assertEqual([('bar', 'bar'), ('foo', 123)],
sorted(list(obj.items()), key=lambda x: x[0]))
def test_load(self):
obj = MyObj(self.context)
self.assertEqual('loaded!', obj.bar)
def test_load_in_base(self):
class Foo(base.WatcherObject):
fields = {'foobar': int}
obj = Foo(self.context)
# NOTE(danms): Can't use assertRaisesRegexp() because of py26
raised = False
try:
obj.foobar
except NotImplementedError as ex:
raised = True
self.assertTrue(raised)
self.assertTrue('foobar' in str(ex))
def test_loaded_in_primitive(self):
obj = MyObj(self.context)
obj.foo = 1
obj.obj_reset_changes()
self.assertEqual('loaded!', obj.bar)
expected = {'watcher_object.name': 'MyObj',
'watcher_object.namespace': 'watcher',
'watcher_object.version': '1.0',
'watcher_object.changes': ['bar'],
'watcher_object.data': {'foo': 1,
'bar': 'loaded!'}}
self.assertEqual(expected, obj.obj_to_primitive())
def test_changes_in_primitive(self):
obj = MyObj(self.context)
obj.foo = 123
self.assertEqual(set(['foo']), obj.obj_what_changed())
primitive = obj.obj_to_primitive()
self.assertTrue('watcher_object.changes' in primitive)
obj2 = MyObj.obj_from_primitive(primitive)
self.assertEqual(set(['foo']), obj2.obj_what_changed())
obj2.obj_reset_changes()
self.assertEqual(set(), obj2.obj_what_changed())
def test_unknown_objtype(self):
self.assertRaises(exception.UnsupportedObjectError,
base.WatcherObject.obj_class_from_name, 'foo', '1.0')
def test_with_alternate_context(self):
context1 = watcher_context.RequestContext('foo', 'foo')
context2 = watcher_context.RequestContext('bar',
project_id='alternate')
obj = MyObj.query(context1)
obj.update_test(context2)
self.assertEqual('alternate-context', obj.bar)
self.assertRemotes()
def test_orphaned_object(self):
obj = MyObj.query(self.context)
obj._context = None
self.assertRaises(exception.OrphanedObjectError,
obj.update_test)
self.assertRemotes()
def test_changed_1(self):
obj = MyObj.query(self.context)
obj.foo = 123
self.assertEqual(set(['foo']), obj.obj_what_changed())
obj.update_test(self.context)
self.assertEqual(set(['foo', 'bar']), obj.obj_what_changed())
self.assertEqual(123, obj.foo)
self.assertRemotes()
def test_changed_2(self):
obj = MyObj.query(self.context)
obj.foo = 123
self.assertEqual(set(['foo']), obj.obj_what_changed())
obj.save()
self.assertEqual(set([]), obj.obj_what_changed())
self.assertEqual(123, obj.foo)
self.assertRemotes()
def test_changed_3(self):
obj = MyObj.query(self.context)
obj.foo = 123
self.assertEqual(set(['foo']), obj.obj_what_changed())
obj.refresh()
self.assertEqual(set([]), obj.obj_what_changed())
self.assertEqual(321, obj.foo)
self.assertEqual('refreshed', obj.bar)
self.assertRemotes()
def test_changed_4(self):
obj = MyObj.query(self.context)
obj.bar = 'something'
self.assertEqual(set(['bar']), obj.obj_what_changed())
obj.modify_save_modify(self.context)
self.assertEqual(set(['foo']), obj.obj_what_changed())
self.assertEqual(42, obj.foo)
self.assertEqual('meow', obj.bar)
self.assertRemotes()
def test_static_result(self):
obj = MyObj.query(self.context)
self.assertEqual('bar', obj.bar)
result = obj.marco()
self.assertEqual('polo', result)
self.assertRemotes()
def test_updates(self):
obj = MyObj.query(self.context)
self.assertEqual(1, obj.foo)
obj.update_test()
self.assertEqual('updated', obj.bar)
self.assertRemotes()
def test_base_attributes(self):
dt = datetime.datetime(1955, 11, 5)
obj = MyObj(self.context)
obj.created_at = dt
obj.updated_at = dt
expected = {'watcher_object.name': 'MyObj',
'watcher_object.namespace': 'watcher',
'watcher_object.version': '1.0',
'watcher_object.changes':
['created_at', 'updated_at'],
'watcher_object.data':
{'created_at': timeutils.isotime(dt),
'updated_at': timeutils.isotime(dt),
}
}
actual = obj.obj_to_primitive()
# watcher_object.changes is built from a set and order is undefined
self.assertEqual(sorted(expected['watcher_object.changes']),
sorted(actual['watcher_object.changes']))
del expected['watcher_object.changes'], \
actual['watcher_object.changes']
self.assertEqual(expected, actual)
def test_contains(self):
obj = MyObj(self.context)
self.assertFalse('foo' in obj)
obj.foo = 1
self.assertTrue('foo' in obj)
self.assertFalse('does_not_exist' in obj)
def test_obj_attr_is_set(self):
obj = MyObj(self.context, foo=1)
self.assertTrue(obj.obj_attr_is_set('foo'))
self.assertFalse(obj.obj_attr_is_set('bar'))
self.assertRaises(AttributeError, obj.obj_attr_is_set, 'bang')
def test_get(self):
obj = MyObj(self.context, foo=1)
# Foo has value, should not get the default
self.assertEqual(obj.get('foo', 2), 1)
# Foo has value, should return the value without error
self.assertEqual(obj.get('foo'), 1)
# Bar is not loaded, so we should get the default
self.assertEqual(obj.get('bar', 'not-loaded'), 'not-loaded')
# Bar without a default should lazy-load
self.assertEqual(obj.get('bar'), 'loaded!')
# Bar now has a default, but loaded value should be returned
self.assertEqual(obj.get('bar', 'not-loaded'), 'loaded!')
# Invalid attribute should raise AttributeError
self.assertRaises(AttributeError, obj.get, 'nothing')
# ...even with a default
self.assertRaises(AttributeError, obj.get, 'nothing', 3)
def test_object_inheritance(self):
base_fields = base.WatcherObject.fields.keys()
myobj_fields = ['foo', 'bar', 'missing'] + base_fields
myobj3_fields = ['new_field']
self.assertTrue(issubclass(DummySubclassedObject, MyObj))
self.assertEqual(len(myobj_fields), len(MyObj.fields))
self.assertEqual(set(myobj_fields), set(MyObj.fields.keys()))
self.assertEqual(len(myobj_fields) + len(myobj3_fields),
len(DummySubclassedObject.fields))
self.assertEqual(set(myobj_fields) | set(myobj3_fields),
set(DummySubclassedObject.fields.keys()))
def test_get_changes(self):
obj = MyObj(self.context)
self.assertEqual({}, obj.obj_get_changes())
obj.foo = 123
self.assertEqual({'foo': 123}, obj.obj_get_changes())
obj.bar = 'test'
self.assertEqual({'foo': 123, 'bar': 'test'}, obj.obj_get_changes())
obj.obj_reset_changes()
self.assertEqual({}, obj.obj_get_changes())
def test_obj_fields(self):
class TestObj(base.WatcherObject):
fields = {'foo': int}
obj_extra_fields = ['bar']
@property
def bar(self):
return 'this is bar'
obj = TestObj(self.context)
self.assertEqual(set(['created_at', 'updated_at', 'foo', 'bar']),
set(obj.obj_fields))
def test_obj_constructor(self):
obj = MyObj(self.context, foo=123, bar='abc')
self.assertEqual(123, obj.foo)
self.assertEqual('abc', obj.bar)
self.assertEqual(set(['foo', 'bar']), obj.obj_what_changed())
class TestObjectListBase(test_base.TestCase):
def test_list_like_operations(self):


@@ -4,14 +4,11 @@
https://creativecommons.org/licenses/by/3.0/
.. _tempest_integration:
.. _tempest_tests:
=============
Tempest tests
=============
This directory contains Tempest tests covering the Watcher project.
The following procedure gets you started with Tempest testing but you can also
refer to the `Tempest documentation`_ for more details.
@@ -19,7 +16,7 @@ refer to the `Tempest documentation`_ for more details.
Tempest installation
====================
--------------------
To install Tempest you can issue the following commands::
@@ -30,11 +27,11 @@ To install Tempest you can issue the following commands::
The folder you are now in will be referred to as ``<TEMPEST_DIR>`` from now on.
Please note that although it is fully working outside a virtual environment, it
is recommended to install within a venv.
is recommended to install within a `venv`.
Watcher Tempest testing setup
=============================
-----------------------------
You can now install Watcher alongside it in development mode by issuing the
following command::
@@ -43,7 +40,7 @@ following command::
Then setup a local working environment (here ``watcher-cloud``) for running
Tempest for Watcher which shall contain the configuration for your OpenStack
intergration platform.
integration platform.
In a virtual environment, you can do so by issuing the following command::
@@ -106,7 +103,7 @@ few more configuration have to be set in your ``tempest.conf`` file in order to
enable the execution of multi-node scenarios::
[compute]
# To indicate Tempest test that yout have provided enough compute nodes
# To indicate Tempest test that you have provided enough compute nodes
min_compute_nodes = 2
# Image UUID you can get using the "glance image-list" command
@@ -123,7 +120,7 @@ For more information, please refer to:
Watcher Tempest tests execution
===============================
-------------------------------
To list all Watcher Tempest cases, you can issue the following commands::


@@ -174,11 +174,6 @@ class InfraOptimClientJSON(base.BaseInfraOptimClient):
"""Lists details of all existing action plan"""
return self._list_request('/action_plans/detail', **kwargs)
@base.handle_errors
def list_action_plan_by_audit(self, audit_uuid):
"""Lists all action plans associated with an audit"""
return self._list_request('/action_plans', audit_uuid=audit_uuid)
@base.handle_errors
def show_action_plan(self, action_plan_uuid):
"""Gets a specific action plan
@@ -190,7 +185,7 @@ class InfraOptimClientJSON(base.BaseInfraOptimClient):
@base.handle_errors
def delete_action_plan(self, action_plan_uuid):
"""Deletes an action_plan having the specified UUID
"""Deletes an action plan having the specified UUID
:param action_plan_uuid: The unique identifier of the action_plan
:return: A tuple with the server response and the response body
@@ -198,6 +193,18 @@ class InfraOptimClientJSON(base.BaseInfraOptimClient):
return self._delete_request('/action_plans', action_plan_uuid)
@base.handle_errors
def delete_action_plans_by_audit(self, audit_uuid):
"""Deletes an action plan having the specified UUID
:param audit_uuid: The unique identifier of the related Audit
"""
_, action_plans = self.list_action_plans(audit_uuid=audit_uuid)
for action_plan in action_plans:
self.delete_action_plan(action_plan['uuid'])
@base.handle_errors
def update_action_plan(self, action_plan_uuid, patch):
"""Update the specified action plan


@@ -18,31 +18,17 @@ import functools
from tempest import test
from tempest_lib.common.utils import data_utils
from tempest_lib import exceptions as lib_exc
from watcher_tempest_plugin import infra_optim_clients as clients
def creates(resource):
"""Decorator that adds resources to the appropriate cleanup list."""
def decorator(f):
@functools.wraps(f)
def wrapper(cls, *args, **kwargs):
resp, body = f(cls, *args, **kwargs)
if 'uuid' in body:
cls.created_objects[resource].add(body['uuid'])
return resp, body
return wrapper
return decorator
class BaseInfraOptimTest(test.BaseTestCase):
"""Base class for Infrastructure Optimization API tests."""
RESOURCE_TYPES = ['audit_template', 'audit']
# States where the object is waiting for some event to perform a transition
IDLE_STATES = ('RECOMMENDED', 'FAILED', 'SUCCEEDED', 'CANCELLED')
# States where the object can only be DELETED (end of its life-cycle)
FINISHED_STATES = ('FAILED', 'SUCCEEDED', 'CANCELLED')
@classmethod
def setup_credentials(cls):
@@ -58,19 +44,52 @@ class BaseInfraOptimTest(test.BaseTestCase):
def resource_setup(cls):
super(BaseInfraOptimTest, cls).resource_setup()
cls.created_objects = {}
for resource in cls.RESOURCE_TYPES:
cls.created_objects[resource] = set()
# Set of all created audit templates UUIDs
cls.created_audit_templates = set()
# Set of all created audit UUIDs
cls.created_audits = set()
# Set of all created audit UUIDs. We use it to build the list of
# action plans to delete (including potential orphan one(s))
cls.created_action_plans_audit_uuids = set()
@classmethod
def resource_cleanup(cls):
"""Ensure that all created objects get destroyed."""
try:
for resource in cls.RESOURCE_TYPES:
obj_uuids = cls.created_objects[resource]
delete_method = getattr(cls.client, 'delete_%s' % resource)
for obj_uuid in obj_uuids:
delete_method(obj_uuid, ignore_errors=lib_exc.NotFound)
action_plans_to_be_deleted = set()
# Phase 1: Make sure all objects are in an idle state
for audit_uuid in cls.created_audits:
test.call_until_true(
func=functools.partial(
cls.is_audit_idle, audit_uuid),
duration=30,
sleep_for=.5
)
for audit_uuid in cls.created_action_plans_audit_uuids:
_, action_plans = cls.client.list_action_plans(
audit_uuid=audit_uuid)
action_plans_to_be_deleted.update(
ap['uuid'] for ap in action_plans['action_plans'])
for action_plan in action_plans['action_plans']:
test.call_until_true(
func=functools.partial(
cls.is_action_plan_idle, action_plan['uuid']),
duration=30,
sleep_for=.5
)
# Phase 2: Delete them all
for action_plan_uuid in action_plans_to_be_deleted:
cls.delete_action_plan(action_plan_uuid)
for audit_uuid in cls.created_audits.copy():
cls.delete_audit(audit_uuid)
for audit_template_uuid in cls.created_audit_templates.copy():
cls.delete_audit_template(audit_template_uuid)
finally:
super(BaseInfraOptimTest, cls).resource_cleanup()
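Phase 1 of the cleanup above repeatedly polls each audit and action plan with `test.call_until_true` until it reaches an idle state, bounding the wait at 30 seconds. A minimal re-implementation of that polling helper, with a simulated audit that becomes idle after a few polls (the helper body here is a sketch, not tempest's exact code):

```python
import functools
import time

def call_until_true(func, duration, sleep_for):
    # Poll func until it returns True or `duration` seconds elapse.
    deadline = time.time() + duration
    while time.time() < deadline:
        if func():
            return True
        time.sleep(sleep_for)
    return False

# Simulate an object that reaches an idle state on the third poll
state = {'polls': 0}

def is_idle():
    state['polls'] += 1
    return state['polls'] >= 3

reached_idle = call_until_true(functools.partial(is_idle),
                               duration=5, sleep_for=0.01)
print(reached_idle)  # True
```

The `functools.partial` wrapper matters in the real cleanup because `call_until_true` expects a zero-argument callable, while `is_audit_idle` and `is_action_plan_idle` each take a UUID.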
@@ -95,7 +114,6 @@ class BaseInfraOptimTest(test.BaseTestCase):
# ### AUDIT TEMPLATES ### #
@classmethod
@creates('audit_template')
def create_audit_template(cls, name=None, description=None, goal=None,
host_aggregate=None, extra=None):
"""Wrapper utility for creating a test audit template
@@ -111,12 +129,14 @@ class BaseInfraOptimTest(test.BaseTestCase):
Default: {}
:return: A tuple with The HTTP response and its body
"""
description = description or data_utils.rand_name(
'test-audit_template')
resp, body = cls.client.create_audit_template(
name=name, description=description, goal=goal,
host_aggregate=host_aggregate, extra=extra)
cls.created_audit_templates.add(body['uuid'])
return resp, body
@classmethod
@@ -126,18 +146,16 @@ class BaseInfraOptimTest(test.BaseTestCase):
:param uuid: The unique identifier of the audit template
:return: Server response
"""
resp, _ = cls.client.delete_audit_template(uuid)
resp, body = cls.client.delete_audit_template(uuid)
if uuid in cls.created_objects['audit_template']:
cls.created_objects['audit_template'].remove(uuid)
if uuid in cls.created_audit_templates:
cls.created_audit_templates.remove(uuid)
return resp
# ### AUDITS ### #
@classmethod
@creates('audit')
def create_audit(cls, audit_template_uuid, type='ONESHOT',
state='PENDING', deadline=None):
"""Wrapper utility for creating a test audit
@@ -151,6 +169,10 @@ class BaseInfraOptimTest(test.BaseTestCase):
resp, body = cls.client.create_audit(
audit_template_uuid=audit_template_uuid, type=type,
state=state, deadline=deadline)
cls.created_audits.add(body['uuid'])
cls.created_action_plans_audit_uuids.add(body['uuid'])
return resp, body
@classmethod
@@ -160,10 +182,10 @@ class BaseInfraOptimTest(test.BaseTestCase):
:param audit_uuid: The unique identifier of the audit.
:return: the HTTP response
"""
resp, body = cls.client.delete_audit(audit_uuid)
resp, _ = cls.client.delete_audit(audit_uuid)
if audit_uuid in cls.created_objects['audit']:
cls.created_objects['audit'].remove(audit_uuid)
if audit_uuid in cls.created_audits:
cls.created_audits.remove(audit_uuid)
return resp
@@ -172,8 +194,39 @@ class BaseInfraOptimTest(test.BaseTestCase):
_, audit = cls.client.show_audit(audit_uuid)
return audit.get('state') == 'SUCCEEDED'
@classmethod
def has_audit_finished(cls, audit_uuid):
_, audit = cls.client.show_audit(audit_uuid)
return audit.get('state') in cls.FINISHED_STATES
@classmethod
def is_audit_idle(cls, audit_uuid):
_, audit = cls.client.show_audit(audit_uuid)
return audit.get('state') in cls.IDLE_STATES
# ### ACTION PLANS ### #
@classmethod
def create_action_plan(cls, audit_template_uuid, **audit_kwargs):
"""Wrapper utility for creating a test action plan
:param audit_template_uuid: Audit template UUID to use
:param audit_kwargs: Dict of audit properties to set
:return: The action plan as dict
"""
_, audit = cls.create_audit(audit_template_uuid, **audit_kwargs)
audit_uuid = audit['uuid']
assert test.call_until_true(
func=functools.partial(cls.has_audit_succeeded, audit_uuid),
duration=30,
sleep_for=.5
)
_, action_plans = cls.client.list_action_plans(audit_uuid=audit_uuid)
return action_plans['action_plans'][0]
@classmethod
def delete_action_plan(cls, action_plan_uuid):
"""Deletes an action plan having the specified UUID
@@ -181,14 +234,20 @@ class BaseInfraOptimTest(test.BaseTestCase):
:param action_plan_uuid: The unique identifier of the action plan.
:return: the HTTP response
"""
resp, body = cls.client.delete_action_plan(action_plan_uuid)
resp, _ = cls.client.delete_action_plan(action_plan_uuid)
if action_plan_uuid in cls.created_objects['action_plan']:
cls.created_objects['action_plan'].remove(action_plan_uuid)
if action_plan_uuid in cls.created_action_plans_audit_uuids:
cls.created_action_plans_audit_uuids.remove(action_plan_uuid)
return resp
@classmethod
def has_action_plan_finished(cls, action_plan_uuid):
_, action_plan = cls.client.show_action_plan(action_plan_uuid)
return action_plan.get('state') in ('FAILED', 'SUCCEEDED', 'CANCELLED')
return action_plan.get('state') in cls.FINISHED_STATES
@classmethod
def is_action_plan_idle(cls, action_plan_uuid):
"""This guard makes sure your action plan is not running"""
_, action_plan = cls.client.show_action_plan(action_plan_uuid)
return action_plan.get('state') in cls.IDLE_STATES
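`has_action_plan_finished` and `is_action_plan_idle` differ only in the state set they check. The `FINISHED_STATES` and `IDLE_STATES` constants are defined elsewhere on the base class; the values below are an assumption for illustration, where "idle" additionally covers a plan that exists but was never triggered:

```python
# Assumed values -- the real constants live on the base test class.
FINISHED_STATES = ('SUCCEEDED', 'FAILED', 'CANCELLED')
# An idle plan is either finished or merely recommended,
# i.e. anything that is not PENDING/ONGOING.
IDLE_STATES = ('RECOMMENDED',) + FINISHED_STATES

def is_finished(state):
    return state in FINISHED_STATES

def is_idle(state):
    return state in IDLE_STATES

# A recommended plan is idle (safe to delete) but not finished.
assert is_idle('RECOMMENDED') and not is_finished('RECOMMENDED')
# An ongoing plan is neither: cleanup must wait for it.
assert not is_idle('ONGOING') and not is_finished('ONGOING')
```

This is why `resource_cleanup` polls `is_action_plan_idle` rather than `has_action_plan_finished`: a plan that was recommended but never run should also be deletable immediately.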


@@ -38,8 +38,8 @@ class TestShowListAction(base.BaseInfraOptimTest):
duration=30,
sleep_for=.5
)
_, action_plans = cls.client.list_action_plan_by_audit(
cls.audit['uuid'])
_, action_plans = cls.client.list_action_plans(
audit_uuid=cls.audit['uuid'])
cls.action_plan = action_plans['action_plans'][0]
@test.attr(type='smoke')


@@ -37,7 +37,8 @@ class TestCreateDeleteExecuteActionPlan(base.BaseInfraOptimTest):
duration=30,
sleep_for=.5
))
_, action_plans = self.client.list_action_plan_by_audit(audit['uuid'])
_, action_plans = self.client.list_action_plans(
audit_uuid=audit['uuid'])
action_plan = action_plans['action_plans'][0]
_, action_plan = self.client.show_action_plan(action_plan['uuid'])
@@ -55,7 +56,8 @@ class TestCreateDeleteExecuteActionPlan(base.BaseInfraOptimTest):
duration=30,
sleep_for=.5
))
_, action_plans = self.client.list_action_plan_by_audit(audit['uuid'])
_, action_plans = self.client.list_action_plans(
audit_uuid=audit['uuid'])
action_plan = action_plans['action_plans'][0]
_, action_plan = self.client.show_action_plan(action_plan['uuid'])
@@ -75,15 +77,16 @@ class TestCreateDeleteExecuteActionPlan(base.BaseInfraOptimTest):
duration=30,
sleep_for=.5
))
_, action_plans = self.client.list_action_plan_by_audit(audit['uuid'])
_, action_plans = self.client.list_action_plans(
audit_uuid=audit['uuid'])
action_plan = action_plans['action_plans'][0]
_, action_plan = self.client.show_action_plan(action_plan['uuid'])
# Execute the action by changing its state to TRIGGERED
# Execute the action by changing its state to PENDING
_, updated_ap = self.client.update_action_plan(
action_plan['uuid'],
patch=[{'path': '/state', 'op': 'replace', 'value': 'TRIGGERED'}]
patch=[{'path': '/state', 'op': 'replace', 'value': 'PENDING'}]
)
self.assertTrue(test.call_until_true(
@@ -94,7 +97,7 @@ class TestCreateDeleteExecuteActionPlan(base.BaseInfraOptimTest):
))
_, finished_ap = self.client.show_action_plan(action_plan['uuid'])
self.assertIn(updated_ap['state'], ('TRIGGERED', 'ONGOING'))
self.assertIn(updated_ap['state'], ('PENDING', 'ONGOING'))
self.assertEqual(finished_ap['state'], 'SUCCEEDED')
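The patchset changes how an action plan is launched: the JSON-Patch `replace` on `/state` now sets `PENDING` instead of `TRIGGERED`. A minimal sketch of what such a patch does to the resource representation (the `apply_json_patch` helper is a toy applier, not the Watcher API, and the action plan dict is illustrative):

```python
def apply_json_patch(doc, patch):
    # Toy JSON-Patch applier supporting only the "replace" op on
    # top-level fields -- enough to mirror the test's state change.
    result = dict(doc)
    for op in patch:
        assert op['op'] == 'replace'
        result[op['path'].lstrip('/')] = op['value']
    return result

# A freshly recommended action plan (shape is illustrative).
action_plan = {'uuid': 'fake-ap-uuid', 'state': 'RECOMMENDED'}

# The same patch document the test sends to trigger execution.
patch = [{'path': '/state', 'op': 'replace', 'value': 'PENDING'}]

updated_ap = apply_json_patch(action_plan, patch)
assert updated_ap['state'] == 'PENDING'
```

After the update, the test accepts either `PENDING` or `ONGOING` in the response, since the applier may have already picked the plan up by the time the PATCH returns.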
@@ -112,8 +115,8 @@ class TestShowListActionPlan(base.BaseInfraOptimTest):
duration=30,
sleep_for=.5
)
_, action_plans = cls.client.list_action_plan_by_audit(
cls.audit['uuid'])
_, action_plans = cls.client.list_action_plans(
audit_uuid=cls.audit['uuid'])
cls.action_plan = action_plans['action_plans'][0]
@test.attr(type='smoke')
@@ -155,7 +158,7 @@ class TestShowListActionPlan(base.BaseInfraOptimTest):
def test_list_with_limit(self):
# We create 3 extra audits to exceed the limit we set
for _ in range(3):
self.create_audit(self.audit_template['uuid'])
self.create_action_plan(self.audit_template['uuid'])
_, body = self.client.list_action_plans(limit=3)


@@ -19,7 +19,6 @@ from __future__ import unicode_literals
import uuid
from tempest import test
from tempest_lib import decorators
from tempest_lib import exceptions as lib_exc
from watcher_tempest_plugin.tests.api.admin import base
@@ -85,23 +84,14 @@ class TestAuditTemplate(base.BaseInfraOptimTest):
self.assert_expected(self.audit_template, audit_template)
@decorators.skip_because(bug="1510189")
@test.attr(type='smoke')
def test_filter_audit_template_by_goal(self):
_, audit_template = self.client.list_audit_templates(
_, audit_templates = self.client.list_audit_templates(
goal=self.audit_template['goal'])
self.assert_expected(self.audit_template,
audit_template['audit_templates'][0])
@decorators.skip_because(bug="1510189")
@test.attr(type='smoke')
def test_filter_audit_template_by_host_aggregate(self):
_, audit_template = self.client.list_audit_templates(
host_aggregate=self.audit_template['host_aggregate'])
self.assert_expected(self.audit_template,
audit_template['audit_templates'][0])
audit_template_uuids = [
at["uuid"] for at in audit_templates['audit_templates']]
self.assertIn(self.audit_template['uuid'], audit_template_uuids)
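The rewritten filter test asserts membership by UUID instead of assuming the template under test comes back first in the list, which would break as soon as another template shares the same goal. With fabricated data (the UUIDs and goal name are made up, and the response shape mirrors `list_audit_templates`):

```python
# Illustrative list response: two templates share the filtered goal.
audit_templates = {'audit_templates': [
    {'uuid': 'aaaa-1111', 'goal': 'DUMMY'},
    {'uuid': 'bbbb-2222', 'goal': 'DUMMY'},
]}

expected_uuid = 'bbbb-2222'

# Fragile: indexing [0] assumes ordering, and fails here because
# another template with the same goal sorts first.
first = audit_templates['audit_templates'][0]
assert first['uuid'] != expected_uuid

# Robust: assert membership, independent of ordering.
audit_template_uuids = [
    at['uuid'] for at in audit_templates['audit_templates']]
assert expected_uuid in audit_template_uuids
```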
@test.attr(type='smoke')
def test_show_audit_template_with_links(self):


@@ -117,15 +117,16 @@ class TestExecuteBasicStrategy(base.BaseInfraOptimScenarioTest):
duration=300,
sleep_for=2
))
_, action_plans = self.client.list_action_plan_by_audit(audit['uuid'])
_, action_plans = self.client.list_action_plans(
audit_uuid=audit['uuid'])
action_plan = action_plans['action_plans'][0]
_, action_plan = self.client.show_action_plan(action_plan['uuid'])
# Execute the action by changing its state to TRIGGERED
# Execute the action by changing its state to PENDING
_, updated_ap = self.client.update_action_plan(
action_plan['uuid'],
patch=[{'path': '/state', 'op': 'replace', 'value': 'TRIGGERED'}]
patch=[{'path': '/state', 'op': 'replace', 'value': 'PENDING'}]
)
self.assertTrue(test.call_until_true(
@@ -138,7 +139,7 @@ class TestExecuteBasicStrategy(base.BaseInfraOptimScenarioTest):
_, action_list = self.client.list_actions(
action_plan_uuid=finished_ap["uuid"])
self.assertIn(updated_ap['state'], ('TRIGGERED', 'ONGOING'))
self.assertIn(updated_ap['state'], ('PENDING', 'ONGOING'))
self.assertEqual(finished_ap['state'], 'SUCCEEDED')
for action in action_list['actions']:


@@ -45,15 +45,16 @@ class TestExecuteDummyStrategy(base.BaseInfraOptimScenarioTest):
duration=30,
sleep_for=.5
))
_, action_plans = self.client.list_action_plan_by_audit(audit['uuid'])
_, action_plans = self.client.list_action_plans(
audit_uuid=audit['uuid'])
action_plan = action_plans['action_plans'][0]
_, action_plan = self.client.show_action_plan(action_plan['uuid'])
# Execute the action by changing its state to TRIGGERED
# Execute the action by changing its state to PENDING
_, updated_ap = self.client.update_action_plan(
action_plan['uuid'],
patch=[{'path': '/state', 'op': 'replace', 'value': 'TRIGGERED'}]
patch=[{'path': '/state', 'op': 'replace', 'value': 'PENDING'}]
)
self.assertTrue(test.call_until_true(
@@ -69,7 +70,7 @@ class TestExecuteDummyStrategy(base.BaseInfraOptimScenarioTest):
action_counter = collections.Counter(
act['action_type'] for act in action_list['actions'])
self.assertIn(updated_ap['state'], ('TRIGGERED', 'ONGOING'))
self.assertIn(updated_ap['state'], ('PENDING', 'ONGOING'))
self.assertEqual(finished_ap['state'], 'SUCCEEDED')
# A dummy strategy generates 2 "nop" actions and 1 "sleep" action