Compare commits


68 Commits

Author SHA1 Message Date
Prudhvi Rao Shedimbi
695ddf8ae7 Implemented clients and auth config module
Implemented clients and auth config module

Implements: blueprint centralise-config-opts

Change-Id: I28ea8376aa34114331cbae61596839ebae6cf7eb
2016-12-14 20:13:23 +00:00
Jenkins
8fd5057cd0 Merge "Implemented watcher decision engine config module" 2016-12-14 18:48:14 +00:00
Jenkins
3db81564f4 Merge "Documentation for Uniform Airflow Migration Strategy Fixed issues" 2016-12-14 17:51:11 +00:00
Jenkins
10066ed8fd Merge "Documentation for Workload Balance Migration Strategy Fixed comments and added the doc primitive call" 2016-12-14 17:51:04 +00:00
Jenkins
e4c5f4f050 Merge "update strategy table when parameters_spec changes" 2016-12-14 17:48:27 +00:00
Prudhvi Rao Shedimbi
53c896dd24 Implemented watcher decision engine config module
Implemented watcher decision engine config module

Partially Implements: blueprint centralise-config-opts

Change-Id: Ie4e9dd7d902fa85044d1859974cbd75d54c8b6cc
2016-12-14 17:40:57 +00:00
Susanne Balle
40a46c6663 Documentation for Uniform Airflow Migration Strategy
Fixed issues

Closes-Bug: #1623486
Change-Id: If303283949ef39a26c91bbff7b4664e81687d169
2016-12-14 11:01:27 -05:00
Prudhvi Rao Shedimbi
ed21e452e0 Implemented applier config module
Implemented applier config module

Partially Implements: blueprint centralise-config-opts

Change-Id: I237596b06dc3bee318414346cfa58ae4cb81079b
2016-12-14 15:50:11 +00:00
Prudhvi Rao Shedimbi
80e77a5b81 Implemented planner config module
Implemented planner config module

Partially Implements: blueprint centralise-config-opts

Change-Id: I4d710c8552ef211c6a9c38dd8f5515f68a6d36c4
2016-12-14 14:52:37 +00:00
Prudhvi Rao Shedimbi
74112dd7cf Implemented db config module
Implemented db config module

Partially Implements: blueprint centralise-config-opts. Also moved
basedir_def, bindir_def and state_path_def to watcher.conf.paths

Change-Id: I73d201f6a23bbdb1c6189434b11314a66620e85c
2016-12-14 15:13:31 +01:00
Prudhvi Rao Shedimbi
9e4bf718da Implemented exception config module
Implemented exception config module

Partially Implements: blueprint centralise-config-opts

Change-Id: Ic1b94e28a960a7306f15afbf69382edc15b5999e
2016-12-14 11:05:01 +00:00
Prudhvi Rao Shedimbi
5c79074e9c Implemented paths config module
Implemented paths config module

Partially Implements: blueprint centralise-config-opts

Change-Id: I2b779fb1ce552567feac678cb5bd78aad0d53d52
2016-12-14 10:56:58 +00:00
Jenkins
ac6848dad3 Merge "Implemented utils config module" 2016-12-14 10:31:07 +00:00
Jenkins
648715eb5c Merge "Implemented api config module" 2016-12-14 10:20:48 +00:00
Jenkins
3b5ef5d625 Merge "Specific exception for stale cluster state was added." 2016-12-13 15:35:41 +00:00
Jenkins
7fd486bd65 Merge "Unnecessary exception" 2016-12-13 12:00:23 +00:00
Jenkins
3cf4b315d3 Merge "improve statistic_aggregation" 2016-12-13 11:02:21 +00:00
Susanne Balle
25d84ba662 Documentation for Workload Balance Migration Strategy
Fixed comments and added the doc primitive call

Closes-Bug: #1623486

Change-Id: I704536530c576de702434008aa30a7fbbaddff25
2016-12-12 14:17:08 -05:00
Anton Khaldin
7908af3150 Specific exception for stale cluster state was added.
Specific exception should be thrown when cluster state
is stale. Current usage is to raise this exception if
compute_model.state is True.
Bug was described by Jean-Emile DARTOIS.

Change-Id: Iaddb4cc8007c51bb14759c9da829751e834499d0
Closes-Bug: #1621855
2016-12-12 18:04:51 +00:00
Prudhvi Rao Shedimbi
04fdea2aa0 Implemented utils config module
Implemented utils config module

Partially Implements: blueprint centralise-config-opts

Change-Id: Ic09ecba60022b69ec4031608716e34209d3fe578
2016-12-12 16:36:04 +00:00
Jenkins
cee9cfb62c Merge "Updated from global requirements" 2016-12-12 13:31:35 +00:00
Jenkins
e1912fe03e Merge "Modify the variable assignment errors" 2016-12-12 11:57:57 +00:00
licanwei
d859f3ac1f Fix CI failures
reference to:
http://osdir.com/ml/openstack-dev/2016-12/msg00420.html
/nova/tox.ini

Change-Id: Ie566f4b7c1c3cd3c1654281c0cad028c3886d9f7
2016-12-12 18:01:25 +08:00
licanwei
7a72371df8 improve statistic_aggregation
improve statistic_aggregation

Change-Id: Ic1fb19780fa4a39c5eb74e5ed30db0e4c06d0e09
2016-12-12 17:10:38 +08:00
licanwei
82bb097e9f Unnecessary exception
If the instance state is not ACTIVE,
there is no need to throw an exception.

Change-Id: I88ba3ae9c92b75ed57fc9647e33c4a10801b2c18
Closes-Bug: #1648309
2016-12-08 15:06:26 +08:00
licanwei
a9ef9f3a94 update strategy table when parameters_spec changes
At present, the sync function performs no check on the
parameters_spec field in the strategy table. If the parameters_spec
content has changed, such as by adding a 'periods' parameter, the
strategy table is not updated and the program runs abnormally.

exception msg:
2016-12-05 11:11:39.138 TRACE watcher.decision_engine.audit.base
raise AttributeError(name)
2016-12-05 11:11:39.138 TRACE watcher.decision_engine.audit.base
AttributeError: periods

Change-Id: I84709c246acbdf44ccac257b07a74084962bb628
Closes-Bug: #1647521
2016-12-08 13:34:03 +08:00
Prudhvi Rao Shedimbi
8e7ba3c44a Implemented api config module
Implemented api config module

Partially Implements: blueprint centralise-config-opts

Change-Id: I055618e546bb1bfa2c1764bcff1a1f94e5adea96
2016-12-07 21:46:28 +00:00
OpenStack Proposal Bot
9a2ca8c4b7 Updated from global requirements
Change-Id: Ib5668cd281665477968b5a9acadf65765b5a06a1
2016-12-07 13:41:44 +00:00
Jenkins
c08666b2fa Merge "[Doc] Fix example code of goal plugin" 2016-12-07 12:26:50 +00:00
OpenStack Proposal Bot
6638f921a3 Updated from global requirements
Change-Id: Idceb803efc01b3b2346cc15391517e4b527d43ac
2016-12-07 09:09:52 +00:00
licanwei
1f2a854d6a Repairing unit test failures
If fieldname is 'deleted', field.type.python_type raise
NotImplementedError.

Change-Id: I47246ce9a3b0c8d2a3ea44e825d9604f5b14ed38
Closes-Bug: #1647574
2016-12-06 17:15:19 +08:00
Hidekazu Nakamura
d0e46d81fc [Doc] Fix example code of goal plugin
An example code of goal plugin does not work.
This patch fixes it.

Change-Id: I75c2ffa74a003fad9e2d512927e4cb47554783c2
2016-12-02 12:47:34 +09:00
Jenkins
4e240b945b Merge "Fix one ref that does not work." 2016-11-30 11:43:28 +00:00
Jenkins
e09188d862 Merge "Updated from global requirements" 2016-11-30 11:42:39 +00:00
Jenkins
a4b1df2fce Merge "Fix rally gate test" 2016-11-30 11:32:46 +00:00
Jenkins
22933e4d79 Merge "Show team and repo badges on README" 2016-11-30 10:39:46 +00:00
Jenkins
40a5c98382 Merge "Use uuidutils instead of uuid.uuid4()." 2016-11-30 08:43:49 +00:00
Jenkins
5b21b9a17e Merge "Fix 'ImportError' when docbuild." 2016-11-30 08:42:47 +00:00
Jenkins
02d1850be7 Merge "Add periods input parameter" 2016-11-30 08:34:19 +00:00
Jenkins
5f1f10e3d3 Merge "Documentation for Outlet Temperature Based Strategy Fixed outstanding comments" 2016-11-30 08:20:43 +00:00
zhuzeyu
e4732e1375 Use uuidutils instead of uuid.uuid4().
Change-Id: I3f734b6c4d252f8eb73a49b447fd89e5e444002f
Closes-Bug: #1082248
2016-11-29 14:15:54 +08:00
licanwei
715f6fa1cd Modify the variable assignment errors
The values of 'released_compute_nodes_count' and
'instance_migrations_count' were swapped.

Change-Id: I0662bdfce575de529eb8c12363be7fa196b1a88c
2016-11-29 10:20:23 +08:00
Flavio Percoco
6daa09f489 Show team and repo badges on README
This patch adds the team's and repository's badges to the README file.
The motivation behind this is to communicate the project status and
features at first glance.

For more information about this effort, please read this email thread:

http://lists.openstack.org/pipermail/openstack-dev/2016-October/105562.html

To see an example of how this would look, check:

https://gist.github.com/fb14a4269de717e9410ba91722027512

Change-Id: If7f9b36d45c431ecb6d0eb76d907d63573de4238
2016-11-25 17:21:54 +01:00
ericxiett
7feced419c Fix 'ImportError' when docbuild.
There is no module 'messaging.utils' in 'watcher.common';
the module is 'synchronization' instead.

Change-Id: If2d585a4f416614cbe91e4ef61fc7473508d38af
Closes-Bug: #1643862
2016-11-24 01:34:51 +08:00
ericxiett
3319748367 Fix one ref that does not work.
One space was missing between 'the' and ':ref' in
'the:ref:Watcher Database <watcher_database_definition>'.

Change-Id: I3e46121dc7c30f73df4ca455e2c629929cdbd2ec
Closes-Bug: #1644388
2016-11-24 01:15:56 +08:00
OpenStack Proposal Bot
a84f52dfe3 Updated from global requirements
Change-Id: Id8935a3139541edb1dae894358f20c3cfc0ddd21
2016-11-23 11:05:56 +00:00
Jenkins
3f8e4451f5 Merge "Fix some typos in action.py & action_plan.py & audit.py" 2016-11-23 09:41:59 +00:00
Jenkins
d082c9ac41 Merge "Fix the wrong ref for 'Compute node'" 2016-11-23 09:41:09 +00:00
Jenkins
9080180309 Merge "Fix inconsistent descriptions in docstring in action_plan.py" 2016-11-23 08:59:25 +00:00
Jenkins
55893043df Merge "Solve some spelling mistakes." 2016-11-23 08:58:58 +00:00
Jenkins
19074f615a Merge "Remove redundant lines." 2016-11-23 08:58:49 +00:00
Alexander Chadin
295c8d914c Add periods input parameter
This patch set adds a new 'periods' strategy input
parameter that allows specifying the time length of
statistic aggregation.

Change-Id: Id6c7900e7b909b0b325281c4038e07dc695847a1
2016-11-23 11:55:14 +03:00
zte-hanrong
99735fa39a Solve some spelling mistakes.
Change-Id: Id7e8c4efbfc4203e63583b68c87be75f4a195b66
2016-11-23 11:40:51 +08:00
zte-hanrong
b80229f3d0 Remove redundant lines.
Change-Id: Iac10aea306e59eb91b192ec6e89f42851d9548a5
2016-11-23 11:31:26 +08:00
Susanne Balle
5e9ba463ee Documentation for Outlet Temperature Based Strategy
Fixed outstanding comments

Closes-Bug: #1623486

Change-Id: I2d327f472749c0e5a8b184eb426abebd757cc4f7
2016-11-21 11:15:01 -05:00
Jenkins
120c116655 Merge "[Doc] Fix default value in workload_stabilization" 2016-11-21 13:45:41 +00:00
Jenkins
c9cfd3bfbd Merge "Replaces uuid.uuid4 with uuidutils.generate_uuid()" 2016-11-21 13:36:21 +00:00
Jenkins
31f2b4172e Merge "Change hardware.cpu_util in workload_stabilization" 2016-11-21 13:32:23 +00:00
Hidekazu Nakamura
5151b666fd Change hardware.cpu_util in workload_stabilization
In this change set, hardware.cpu_util is changed to
compute.node.cpu.percent in workload_stabilization.
By doing so, one can run this strategy on a simple devstack
without having to setup the SNMP plugin.

Change-Id: I8df8921337ea3f4e751c0c822d823e64e3ca7e1c
2016-11-21 09:58:38 +09:00
YumengBao
578138e432 Fix inconsistent descriptions in docstring in action_plan.py
Change-Id: I4e74ac0ce7bdd17f1809ac1fbb8bf8110bfa290e
2016-11-19 17:58:46 +08:00
Jenkins
7e2fd7ed9a Merge "Removed nullable flag from audit_id in ActionPlan" 2016-11-18 14:40:53 +00:00
Vincent Françoise
74cb93fca8 Removed nullable flag from audit_id in ActionPlan
Partially Implements: blueprint watcher-versioned-objects

Change-Id: I0bf572e0756ef5d9bb73711a28225526dd044995
2016-11-18 14:16:32 +01:00
qinchunhua
876f3adb22 Replaces uuid.uuid4 with uuidutils.generate_uuid()
Change-Id: I38740842402841ae446603faacbbe969854f2396
Closes-Bug: #1082248
2016-11-18 00:55:39 +00:00
Vincent Françoise
06682fe7c3 Fixed update of WatcherObject fields on update
In this changeset, I fixed the issue whereby object auto fields are not
being updated within the WatcherObject after an update.

Change-Id: I7e65341b386a5c0c58c2109348e39e463cf2f668
Closes-Bug: #1641955
2016-11-17 17:41:13 +01:00
zhangyanxian
eaaa2b1b69 Fix some typos in action.py & action_plan.py & audit.py
Change-Id: I64909a6319a709dd8cb6a0e6b28bca714f5b4f6e
TrivialFix: "occured" should be "occurred"
2016-11-17 11:20:58 +00:00
Hidekazu Nakamura
88187a8ba9 [Doc] Fix default value in workload_stabilization
In this change set, the default value 'hardware.cpu_util' of
instance_metrics was changed to 'compute.node.cpu.percent'.

Change-Id: I02f87e5fea663e2e04c61cc36b7d55ff250bf8cc
2016-11-17 10:39:10 +09:00
ericxiett
0b6979b71c Fix the wrong ref for 'Compute node'
The ref of 'Compute node' in glossary.rst is wrong;
this modifies it.

Change-Id: I3be61c95df2d538c5e49f169c428a605816d66e0
Closes-Bug: #1641405
2016-11-17 09:13:42 +08:00
Alexander Chadin
8eb99ef76e Fix rally gate test
This patch set removes extra field from rally tasks
since Watcher team has removed extra field from
watcher and python-watcherclient projects.

Change-Id: Ib1640cbe8668f56f3a3a54e9f73bb1e3e6747d79
2016-11-16 11:26:27 +03:00
80 changed files with 1739 additions and 698 deletions

View File

@@ -1,3 +1,12 @@
========================
Team and repository tags
========================
.. image:: http://governance.openstack.org/badges/watcher.svg
:target: http://governance.openstack.org/reference/tags/index.html
.. Change things from this point on
..
Except where otherwise noted, this document is licensed under Creative
Commons Attribution 3.0 License. You can view the license at:

View File

@@ -316,7 +316,7 @@ This method finds an appropriate scheduling of
:ref:`Actions <action_definition>` taking into account some scheduling rules
(such as priorities between actions).
It generates a new :ref:`Action Plan <action_plan_definition>` with status
**RECOMMENDED** and saves it into the:ref:`Watcher Database
**RECOMMENDED** and saves it into the :ref:`Watcher Database
<watcher_database_definition>`. The saved action plan is now a scheduled flow
of actions to which a global efficacy is associated alongside a number of
:ref:`Efficacy Indicators <efficacy_indicator_definition>` as specified by the

View File

@@ -60,8 +60,8 @@ Here is an example showing how you can define a new ``NewGoal`` goal plugin:
# import path: thirdparty.new
from watcher._i18n import _
from watcher.decision_engine.goal import base
from watcher.decision_engine.goal.efficacy import specs
from watcher.decision_engine.strategy.strategies import base
class NewGoal(base.Goal):
@@ -79,11 +79,11 @@ Here is an example showing how you can define a new ``NewGoal`` goal plugin:
@classmethod
def get_efficacy_specification(cls):
return specs.UnclassifiedStrategySpecification()
return specs.Unclassified()
As you may have noticed, the :py:meth:`~.Goal.get_efficacy_specification`
method returns an :py:meth:`~.UnclassifiedStrategySpecification` instance which
method returns an :py:meth:`~.Unclassified` instance which
is provided by Watcher. This efficacy specification is useful during the
development process of your goal as it corresponds to an empty specification.
If you want to learn more about what efficacy specifications are used for or to

View File

@@ -39,7 +39,7 @@ Here is an example showing how you can write a planner plugin called
# Filepath = third-party/third_party/dummy.py
# Import path = third_party.dummy
import uuid
from oslo_utils import uuidutils
from watcher.decision_engine.planner import base
@@ -47,7 +47,7 @@ Here is an example showing how you can write a planner plugin called
def _create_action_plan(self, context, audit_id):
action_plan_dict = {
'uuid': uuid.uuid4(),
'uuid': uuidutils.generate_uuid(),
'audit_id': audit_id,
'first_action_id': None,
'state': objects.action_plan.State.RECOMMENDED
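The hunk above swaps `uuid.uuid4()` for `uuidutils.generate_uuid()` from oslo.utils. A stdlib-only sketch of what that helper returns (the mimic below is an assumption for illustration; the real implementation lives in `oslo_utils.uuidutils`):

```python
import uuid

def generate_uuid():
    # oslo_utils.uuidutils.generate_uuid() effectively returns the
    # string form of a random (version 4) UUID, ready for storage in
    # a DB column -- unlike uuid.uuid4(), which returns a UUID object.
    return str(uuid.uuid4())

u = generate_uuid()
```

Returning a plain string avoids the scattered `str(uuid.uuid4())` conversions that motivated this series of replacements.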

View File

@@ -132,7 +132,7 @@ Compute node
============
Please, read `the official OpenStack definition of a Compute Node
<http://docs.openstack.org/openstack-ops/content/compute_nodes.html>`_.
<http://docs.openstack.org/ops-guide/arch-compute-nodes.html>`_.
.. _customer_definition:

View File

@@ -0,0 +1,101 @@
=================================
Outlet Temperature Based Strategy
=================================
Synopsis
--------
**display name**: ``outlet_temperature``
**goal**: ``thermal_optimization``
Outlet (Exhaust Air) temperature is a new thermal telemetry which can be
used to measure the host's thermal/workload status. This strategy makes
decisions to migrate workloads to the hosts with good thermal condition
(lowest outlet temperature) when the outlet temperature of source hosts
reach a configurable threshold.
Requirements
------------
This strategy has a dependency on the host having Intel's Power
Node Manager 3.0 or later enabled.
Metrics
*******
The *outlet_temperature* strategy requires the following metrics:
========================================= ============ ======= =======
metric service name plugins comment
========================================= ============ ======= =======
``hardware.ipmi.node.outlet_temperature`` ceilometer_ IPMI
========================================= ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#ipmi-based-meters
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migration``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.default.DefaultPlanner
Configuration
-------------
Strategy parameter is:
============== ====== ============= ====================================
parameter type default Value description
============== ====== ============= ====================================
``threshold`` Number 35.0 Temperature threshold for migration
============== ====== ============= ====================================
Efficacy Indicator
------------------
None
Algorithm
---------
For more information on the Outlet Temperature Based Strategy please refer to:
https://specs.openstack.org/openstack/watcher-specs/specs/mitaka/implemented/outlet-temperature-based-strategy.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 thermal_optimization --strategy outlet_temperature
$ openstack optimize audit create -a at1 -p threshold=31.0
External Links
--------------
- `Intel Power Node Manager 3.0 <http://www.intel.com/content/www/us/en/power-management/intelligent-power-node-manager-3-0-specification.html>`_

View File

@@ -0,0 +1,107 @@
==================================
Uniform Airflow Migration Strategy
==================================
Synopsis
--------
**display name**: ``uniform_airflow``
**goal**: ``airflow_optimization``
.. watcher-term:: watcher.decision_engine.strategy.strategies.uniform_airflow
Requirements
------------
This strategy has a dependency on the server having Intel's Power
Node Manager 3.0 or later enabled.
Metrics
*******
The *uniform_airflow* strategy requires the following metrics:
================================== ============ ======= =======
metric service name plugins comment
================================== ============ ======= =======
``hardware.ipmi.node.airflow`` ceilometer_ IPMI
``hardware.ipmi.node.temperature`` ceilometer_ IPMI
``hardware.ipmi.node.power`` ceilometer_ IPMI
================================== ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#ipmi-based-meters
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migration``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.default.DefaultPlanner
Configuration
-------------
Strategy parameters are:
====================== ====== ============= ===========================
parameter type default Value description
====================== ====== ============= ===========================
``threshold_airflow`` Number 400.0 Airflow threshold for
migration Unit is 0.1CFM
``threshold_inlet_t`` Number 28.0 Inlet temperature threshold
for migration decision
``threshold_power`` Number 350.0 System power threshold for
migration decision
``period`` Number 300 Aggregate time period of
ceilometer
====================== ====== ============= ===========================
Efficacy Indicator
------------------
None
Algorithm
---------
For more information on the Uniform Airflow Migration Strategy please refer to:
https://specs.openstack.org/openstack/watcher-specs/specs/newton/implemented/uniform-airflow-migration-strategy.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 airflow_optimization --strategy uniform_airflow
$ openstack optimize audit create -a at1 -p threshold_airflow=410 \
-p threshold_inlet_t=29.0 -p threshold_power=355.0 -p period=310
External Links
--------------
- `Intel Power Node Manager 3.0 <http://www.intel.com/content/www/us/en/power-management/intelligent-power-node-manager-3-0-specification.html>`_

View File

@@ -92,12 +92,22 @@ parameter type default Value description
host from list.
``retry_count`` number 1 Count of random returned
hosts.
``periods`` object |periods| These periods are used to get
statistic aggregation for
instance and host metrics.
The period is simply a
repeating interval of time
into which the samples are
grouped for aggregation.
Watcher uses only the last
period of all received ones.
==================== ====== ===================== =============================
.. |metrics| replace:: ["cpu_util", "memory.resident"]
.. |thresholds| replace:: {"cpu_util": 0.2, "memory.resident": 0.2}
.. |weights| replace:: {"cpu_util_weight": 1.0, "memory.resident_weight": 1.0}
.. |instance_metrics| replace:: {"cpu_util": "hardware.cpu.util", "memory.resident": "hardware.memory.used"}
.. |instance_metrics| replace:: {"cpu_util": "compute.node.cpu.percent", "memory.resident": "hardware.memory.used"}
.. |periods| replace:: {"instance": 720, "node": 600}
Efficacy Indicator
------------------

View File

@@ -0,0 +1,98 @@
===================================
Workload Balance Migration Strategy
===================================
Synopsis
--------
**display name**: ``workload_balance``
**goal**: ``workload_balancing``
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_balance
Requirements
------------
None.
Metrics
*******
The *workload_balance* strategy requires the following metrics:
======================= ============ ======= =======
metric service name plugins comment
======================= ============ ======= =======
``cpu_util`` ceilometer_ none
======================= ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migration``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.default.DefaultPlanner
Configuration
-------------
Strategy parameters are:
============== ====== ============= ====================================
parameter type default Value description
============== ====== ============= ====================================
``threshold`` Number 25.0 Workload threshold for migration
``period`` Number 300 Aggregate time period of ceilometer
============== ====== ============= ====================================
Efficacy Indicator
------------------
None
Algorithm
---------
For more information on the Workload Balance Migration Strategy please refer
to: https://specs.openstack.org/openstack/watcher-specs/specs/mitaka/implemented/workload-balance-migration-strategy.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 workload_balancing --strategy workload_balance
$ openstack optimize audit create -a at1 -p threshold=26.0 \
-p period=310
External Links
--------------
None.

View File

@@ -7,7 +7,7 @@ To launch this task with configured Rally you just need to run:
::
rally task start watcher/rally-jobs/watcher.yaml
rally task start watcher/rally-jobs/watcher-watcher.yaml
Structure
---------

View File

@@ -17,7 +17,6 @@
name: "dummy"
strategy:
name: "dummy"
extra: {}
sla:
failure_rate:
max: 0
@@ -29,7 +28,6 @@
name: "dummy"
strategy:
name: "dummy"
extra: {}
runner:
type: "constant"
times: 10
@@ -56,12 +54,10 @@
name: "workload_balancing"
strategy:
name: "workload_stabilization"
extra: {}
- goal:
name: "dummy"
strategy:
name: "dummy"
extra: {}
sla:
failure_rate:
max: 0

View File

@@ -1,7 +1,7 @@
---
features:
- Added a generic scoring engine module, which
will standarize interactions with scoring engines
will standardize interactions with scoring engines
through the common API. It is possible to use the
scoring engine by different Strategies, which
improve the code and data model re-use.

View File

@@ -2,4 +2,4 @@
features:
- Added a way to return the list of available goals depending
on which strategies have been deployed on the node
where the decison engine is running.
where the decision engine is running.

View File

@@ -15,13 +15,13 @@ oslo.context>=2.9.0 # Apache-2.0
oslo.db!=4.13.1,!=4.13.2,>=4.11.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.log>=3.11.0 # Apache-2.0
oslo.messaging>=5.2.0 # Apache-2.0
oslo.policy>=1.15.0 # Apache-2.0
oslo.messaging>=5.14.0 # Apache-2.0
oslo.policy>=1.17.0 # Apache-2.0
oslo.reports>=0.6.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0
oslo.utils>=3.18.0 # Apache-2.0
oslo.versionedobjects>=1.13.0 # Apache-2.0
oslo.versionedobjects>=1.17.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr>=1.8 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
@@ -30,7 +30,7 @@ voluptuous>=0.8.9 # BSD License
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient!=1.7.0,!=1.7.1,>=1.6.0 # Apache-2.0
python-glanceclient>=2.5.0 # Apache-2.0
python-keystoneclient>=3.6.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-neutronclient>=5.1.0 # Apache-2.0
python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
python-openstackclient>=3.3.0 # Apache-2.0

View File

@@ -7,8 +7,8 @@ skipsdist = True
usedevelop = True
whitelist_externals = find
install_command =
constraints: pip install -U --force-reinstall -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
pip install -U {opts} {packages}
pip install -U --force-reinstall -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt

View File

@@ -1,6 +1,7 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2012 New Dream Network, LLC (DreamHost)
# Copyright (c) 2016 Intel Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@@ -17,19 +18,10 @@
"""Access Control Lists (ACL's) control access the API server."""
from oslo_config import cfg
from watcher.api.middleware import auth_token
from watcher import conf
AUTH_OPTS = [
cfg.BoolOpt('enable_authentication',
default=True,
help='This option enables or disables user authentication '
'via keystone. Default value is True.'),
]
CONF = cfg.CONF
CONF.register_opts(AUTH_OPTS)
CONF = conf.CONF
def install(app, conf, public_routes):
@@ -42,7 +34,7 @@ def install(app, conf, public_routes):
:return: The same WSGI application with ACL installed.
"""
if not cfg.CONF.get('enable_authentication'):
if not CONF.get('enable_authentication'):
return app
return auth_token.AuthTokenMiddleware(app,
conf=dict(conf),

View File

@@ -2,6 +2,7 @@
# Copyright © 2012 New Dream Network, LLC (DreamHost)
# All Rights Reserved.
# Copyright (c) 2016 Intel Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@@ -16,49 +17,14 @@
# under the License.
from oslo_config import cfg
import pecan
from watcher._i18n import _
from watcher.api import acl
from watcher.api import config as api_config
from watcher.api import middleware
from watcher import conf
# Register options for the service
API_SERVICE_OPTS = [
cfg.PortOpt('port',
default=9322,
help=_('The port for the watcher API server')),
cfg.StrOpt('host',
default='127.0.0.1',
help=_('The listen IP for the watcher API server')),
cfg.IntOpt('max_limit',
default=1000,
help=_('The maximum number of items returned in a single '
'response from a collection resource')),
cfg.IntOpt('workers',
min=1,
help=_('Number of workers for Watcher API service. '
'The default is equal to the number of CPUs available '
'if that can be determined, else a default worker '
'count of 1 is returned.')),
cfg.BoolOpt('enable_ssl_api',
default=False,
help=_("Enable the integrated stand-alone API to service "
"requests via HTTPS instead of HTTP. If there is a "
"front-end service performing HTTPS offloading from "
"the service, this option should be False; note, you "
"will want to change public API endpoint to represent "
"SSL termination URL with 'public_endpoint' option.")),
]
CONF = cfg.CONF
opt_group = cfg.OptGroup(name='api',
title='Options for the watcher-api service')
CONF.register_group(opt_group)
CONF.register_opts(API_SERVICE_OPTS, opt_group)
CONF = conf.CONF
def get_pecan_config():

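The hunks above are part of the centralise-config-opts blueprint: each module stops registering its own oslo.config options and instead imports one shared CONF object from the watcher.conf package. A toy, dependency-free sketch of the pattern (the class and option names below are illustrative stand-ins, not the real oslo.config API):

```python
class Conf:
    """Minimal stand-in for oslo_config's cfg.CONF: one shared
    registry that every module imports instead of building its own."""

    def __init__(self):
        self._opts = {}

    def register_opts(self, opts, group=None):
        # opts is a list of (name, default) pairs in this sketch
        for name, default in opts:
            self._opts[(group, name)] = default

    def get(self, name, group=None):
        return self._opts[(group, name)]

# watcher/conf/__init__.py exposes a single instance...
CONF = Conf()

# ...watcher/conf/api.py registers the API options exactly once...
CONF.register_opts([("port", 9322), ("host", "127.0.0.1")], group="api")

# ...and consumers such as api/app.py only read from it.
port = CONF.get("port", group="api")
```

Centralising registration removes the duplicated `cfg.OptGroup`/`register_opts` boilerplate that each of the deleted blocks above carried.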
View File

@@ -41,7 +41,7 @@ be one of the following:
processed by the :ref:`Watcher Applier <watcher_applier_definition>`
- **SUCCEEDED** : the :ref:`Action <action_definition>` has been executed
successfully
- **FAILED** : an error occured while trying to execute the
- **FAILED** : an error occurred while trying to execute the
:ref:`Action <action_definition>`
- **DELETED** : the :ref:`Action <action_definition>` is still stored in the
:ref:`Watcher database <watcher_database_definition>` but is not returned

View File

@@ -496,7 +496,6 @@ class ActionPlansController(rest.RestController):
:param action_plan_uuid: UUID of a action plan.
:param patch: a json PATCH document to apply to this action plan.
"""
launch_action_plan = True
if self.from_actionsPlans:
raise exception.OperationNotPermitted

View File

@@ -525,7 +525,6 @@ class AuditsController(rest.RestController):
pecan.response.location = link.build_url('audits', new_audit.uuid)
# trigger decision-engine to run the audit
if new_audit.audit_type == objects.audit.AuditType.ONESHOT.value:
dc_client = rpcapi.DecisionEngineAPI()
dc_client.trigger_audit(context, new_audit.uuid)

View File

@@ -1,5 +1,6 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
# Copyright (c) 2016 Intel Corp
#
# Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
#
@@ -17,41 +18,12 @@
# limitations under the License.
#
from oslo_config import cfg
from watcher.applier.messaging import trigger
from watcher.common import service_manager
CONF = cfg.CONF
from watcher import conf
# Register options
APPLIER_MANAGER_OPTS = [
cfg.IntOpt('workers',
default='1',
min=1,
required=True,
help='Number of workers for applier, default value is 1.'),
cfg.StrOpt('conductor_topic',
default='watcher.applier.control',
help='The topic name used for'
'control events, this topic '
'used for rpc call '),
cfg.StrOpt('publisher_id',
default='watcher.applier.api',
help='The identifier used by watcher '
'module on the message broker'),
cfg.StrOpt('workflow_engine',
default='taskflow',
required=True,
help='Select the engine to use to execute the workflow')
]
opt_group = cfg.OptGroup(name='watcher_applier',
title='Options for the Applier messaging'
'core')
CONF.register_group(opt_group)
CONF.register_opts(APPLIER_MANAGER_OPTS, opt_group)
CONF = conf.CONF
class ApplierManager(service_manager.ServiceManager):

View File

@@ -1,5 +1,6 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
# Copyright (c) 2016 Intel Corp
#
# Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
#
@@ -16,18 +17,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from watcher.applier import manager
from watcher.common import exception
from watcher.common import service
from watcher.common import service_manager
from watcher.common import utils
from watcher import conf
CONF = cfg.CONF
CONF.register_group(manager.opt_group)
CONF.register_opts(manager.APPLIER_MANAGER_OPTS, manager.opt_group)
CONF = conf.CONF
class ApplierAPI(service.Service):


@@ -134,19 +134,16 @@ class CeilometerHelper(object):
aggregate='avg'):
"""Representing a statistic aggregate by operators
:param resource_id: id
:param meter_name: meter names of which we want the statistics
:param period: `period`: In seconds. If no period is given, only one
aggregate statistic is returned. If given, a faceted
result will be returned, divided into given periods.
Periods with no data are ignored.
:param aggregate:
:return:
:param resource_id: id of resource to list statistics for.
:param meter_name: Name of meter to list statistics for.
:param period: Period in seconds over which to group samples.
:param aggregate: Available aggregates are: count, cardinality,
min, max, sum, stddev, avg. Defaults to avg.
:return: Return the latest statistical data, None if no data.
"""
end_time = datetime.datetime.utcnow()
start_time = (datetime.datetime.utcnow() -
datetime.timedelta(seconds=int(period)))
start_time = end_time - datetime.timedelta(seconds=int(period))
query = self.build_query(
resource_id=resource_id, start_time=start_time, end_time=end_time)
statistic = self.query_retry(f=self.ceilometer.statistics.list,

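The change above derives `start_time` from the same timestamp as `end_time`, so the query window is exactly `period` seconds long. A stdlib-only sketch of that fix (`window` is an illustrative helper, not part of Watcher):

```python
import datetime


def window(period_seconds):
    # compute both bounds from a single timestamp; calling utcnow()
    # twice, as the old code did, could make the window slightly
    # longer than the requested period
    end_time = datetime.datetime.utcnow()
    start_time = end_time - datetime.timedelta(seconds=int(period_seconds))
    return start_time, end_time


start, end = window(600)
```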

@@ -19,47 +19,14 @@ from neutronclient.neutron import client as netclient
from novaclient import client as nvclient
from oslo_config import cfg
from watcher._i18n import _
from watcher.common import exception
from watcher import conf
NOVA_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help=_('Version of Nova API to use in novaclient.'))]
GLANCE_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help=_('Version of Glance API to use in glanceclient.'))]
CINDER_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help=_('Version of Cinder API to use in cinderclient.'))]
CEILOMETER_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help=_('Version of Ceilometer API to use in '
'ceilometerclient.'))]
NEUTRON_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2.0',
help=_('Version of Neutron API to use in neutronclient.'))]
cfg.CONF.register_opts(NOVA_CLIENT_OPTS, group='nova_client')
cfg.CONF.register_opts(GLANCE_CLIENT_OPTS, group='glance_client')
cfg.CONF.register_opts(CINDER_CLIENT_OPTS, group='cinder_client')
cfg.CONF.register_opts(CEILOMETER_CLIENT_OPTS, group='ceilometer_client')
cfg.CONF.register_opts(NEUTRON_CLIENT_OPTS, group='neutron_client')
CONF = conf.CONF
_CLIENTS_AUTH_GROUP = 'watcher_clients_auth'
ka_loading.register_auth_conf_options(cfg.CONF, _CLIENTS_AUTH_GROUP)
ka_loading.register_session_conf_options(cfg.CONF, _CLIENTS_AUTH_GROUP)
class OpenStackClients(object):
"""Convenience class to create and cache client instances."""


@@ -26,22 +26,16 @@ import functools
import sys
from keystoneclient import exceptions as keystone_exceptions
from oslo_config import cfg
from oslo_log import log as logging
import six
from watcher._i18n import _, _LE
from watcher import conf
LOG = logging.getLogger(__name__)
EXC_LOG_OPTS = [
cfg.BoolOpt('fatal_exception_format_errors',
default=False,
help='Make exception message format errors fatal.'),
]
CONF = cfg.CONF
CONF.register_opts(EXC_LOG_OPTS)
CONF = conf.CONF
def wrap_keystone_exception(func):
@@ -340,6 +334,10 @@ class MetricCollectorNotDefined(WatcherException):
msg_fmt = _("The metrics resource collector is not defined")
class ClusterStateStale(WatcherException):
msg_fmt = _("The cluster state is stale")
class ClusterDataModelCollectionError(WatcherException):
msg_fmt = _("The cluster data model '%(cdm)s' could not be built")


@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from watcher.common.messaging.utils import synchronization
from watcher.common import synchronization
class Observable(synchronization.Synchronization):


@@ -17,38 +17,9 @@
import os
from oslo_config import cfg
from watcher import conf
PATH_OPTS = [
cfg.StrOpt('pybasedir',
default=os.path.abspath(os.path.join(os.path.dirname(__file__),
'../')),
help='Directory where the watcher python module is installed.'),
cfg.StrOpt('bindir',
default='$pybasedir/bin',
help='Directory where watcher binaries are installed.'),
cfg.StrOpt('state_path',
default='$pybasedir',
help="Top-level directory for maintaining watcher's state."),
]
CONF = cfg.CONF
CONF.register_opts(PATH_OPTS)
def basedir_def(*args):
"""Return an uninterpolated path relative to $pybasedir."""
return os.path.join('$pybasedir', *args)
def bindir_def(*args):
"""Return an uninterpolated path relative to $bindir."""
return os.path.join('$bindir', *args)
def state_path_def(*args):
"""Return an uninterpolated path relative to $state_path."""
return os.path.join('$state_path', *args)
CONF = conf.CONF
def basedir_rel(*args):


@@ -19,7 +19,6 @@
import re
from jsonschema import validators
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import strutils
from oslo_utils import timeutils
@@ -29,18 +28,9 @@ import six
from watcher._i18n import _LW
from watcher.common import exception
from watcher import conf
UTILS_OPTS = [
cfg.StrOpt('rootwrap_config',
default="/etc/watcher/rootwrap.conf",
help='Path to the rootwrap configuration file to use for '
'running commands as root.'),
cfg.StrOpt('tempdir',
help='Explicitly specify the temporary working directory.'),
]
CONF = cfg.CONF
CONF.register_opts(UTILS_OPTS)
CONF = conf.CONF
LOG = logging.getLogger(__name__)


@@ -1,5 +1,6 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
# Copyright (c) 2016 Intel Corp
#
# Authors: Vincent FRANCOISE <vincent.francoise@b-com.com>
#
@@ -18,8 +19,36 @@
from oslo_config import cfg
from watcher.conf import api
from watcher.conf import applier
from watcher.conf import ceilometer_client
from watcher.conf import cinder_client
from watcher.conf import clients_auth
from watcher.conf import db
from watcher.conf import decision_engine
from watcher.conf import exception
from watcher.conf import glance_client
from watcher.conf import neutron_client
from watcher.conf import nova_client
from watcher.conf import paths
from watcher.conf import planner
from watcher.conf import service
from watcher.conf import utils
CONF = cfg.CONF
service.register_opts(CONF)
api.register_opts(CONF)
utils.register_opts(CONF)
paths.register_opts(CONF)
exception.register_opts(CONF)
db.register_opts(CONF)
planner.register_opts(CONF)
applier.register_opts(CONF)
decision_engine.register_opts(CONF)
nova_client.register_opts(CONF)
glance_client.register_opts(CONF)
cinder_client.register_opts(CONF)
ceilometer_client.register_opts(CONF)
neutron_client.register_opts(CONF)
clients_auth.register_opts(CONF)

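Each `watcher.conf` module exposes a `register_opts(conf)` hook, and the package calls every hook once at import time. A toy sketch of that pattern without oslo.config (the `Conf` class and the two `register_*` functions are illustrative stand-ins, not the real API):

```python
class Conf(object):
    """Toy stand-in for oslo_config's ConfigOpts, tracking registrations."""

    def __init__(self):
        self.groups = {}

    def register_group(self, group):
        self.groups.setdefault(group, [])

    def register_opts(self, opts, group='DEFAULT'):
        self.groups.setdefault(group, []).extend(opts)


# each conf module exposes register_opts(conf); the package calls them all
def register_api_opts(conf):
    conf.register_group('api')
    conf.register_opts(['port', 'host', 'max_limit'], group='api')


def register_db_opts(conf):
    conf.register_group('database')
    conf.register_opts(['mysql_engine'], group='database')


CONF = Conf()
for register in (register_api_opts, register_db_opts):
    register(CONF)
```

Importing `watcher.conf` therefore yields one fully populated `CONF` object instead of scattered `cfg.CONF.register_opts` calls throughout the tree.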

@@ -1,6 +1,7 @@
# -*- encoding: utf-8 -*-
# Copyright 2014
# The Cloudscaling Group, Inc.
# Copyright (c) 2016 Intel Corp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -17,38 +18,41 @@
from keystoneauth1 import loading as ka_loading
from watcher.api import acl as api_acl
from watcher.api import app as api_app
from watcher.applier import manager as applier_manager
from watcher.common import clients
from watcher.common import exception
from watcher.common import paths
from watcher.db.sqlalchemy import models
from watcher.decision_engine.audit import continuous
from watcher.decision_engine import manager as decision_engine_manager
from watcher.decision_engine.planner import manager as planner_manager
from watcher.conf import api as conf_api
from watcher.conf import applier as conf_applier
from watcher.conf import ceilometer_client as conf_ceilometer_client
from watcher.conf import cinder_client as conf_cinder_client
from watcher.conf import db
from watcher.conf import decision_engine as conf_de
from watcher.conf import exception
from watcher.conf import glance_client as conf_glance_client
from watcher.conf import neutron_client as conf_neutron_client
from watcher.conf import nova_client as conf_nova_client
from watcher.conf import paths
from watcher.conf import planner as conf_planner
from watcher.conf import utils
def list_opts():
"""Legacy aggregation of all the watcher config options"""
return [
('DEFAULT',
(api_app.API_SERVICE_OPTS +
api_acl.AUTH_OPTS +
(conf_api.AUTH_OPTS +
exception.EXC_LOG_OPTS +
paths.PATH_OPTS)),
('api', api_app.API_SERVICE_OPTS),
('database', models.SQL_OPTS),
paths.PATH_OPTS +
utils.UTILS_OPTS)),
('api', conf_api.API_SERVICE_OPTS),
('database', db.SQL_OPTS),
('watcher_planner', conf_planner.WATCHER_PLANNER_OPTS),
('watcher_applier', conf_applier.APPLIER_MANAGER_OPTS),
('watcher_decision_engine',
(decision_engine_manager.WATCHER_DECISION_ENGINE_OPTS +
continuous.WATCHER_CONTINUOUS_OPTS)),
('watcher_applier', applier_manager.APPLIER_MANAGER_OPTS),
('watcher_planner', planner_manager.WATCHER_PLANNER_OPTS),
('nova_client', clients.NOVA_CLIENT_OPTS),
('glance_client', clients.GLANCE_CLIENT_OPTS),
('cinder_client', clients.CINDER_CLIENT_OPTS),
('ceilometer_client', clients.CEILOMETER_CLIENT_OPTS),
('neutron_client', clients.NEUTRON_CLIENT_OPTS),
(conf_de.WATCHER_DECISION_ENGINE_OPTS +
conf_de.WATCHER_CONTINUOUS_OPTS)),
('nova_client', conf_nova_client.NOVA_CLIENT_OPTS),
('glance_client', conf_glance_client.GLANCE_CLIENT_OPTS),
('cinder_client', conf_cinder_client.CINDER_CLIENT_OPTS),
('ceilometer_client', conf_ceilometer_client.CEILOMETER_CLIENT_OPTS),
('neutron_client', conf_neutron_client.NEUTRON_CLIENT_OPTS),
('watcher_clients_auth',
(ka_loading.get_auth_common_conf_options() +
ka_loading.get_auth_plugin_conf_options('password') +

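`list_opts()` in opts.py aggregates every module's `(group, options)` pairs for the sample-config generator. A stand-alone sketch of that aggregation, with hypothetical `api_list_opts` and `db_list_opts` standing in for the real conf modules:

```python
# hypothetical per-module list_opts(), mirroring the watcher.conf modules
def api_list_opts():
    return [('api', ['port', 'host']), ('DEFAULT', ['enable_authentication'])]


def db_list_opts():
    return [('database', ['mysql_engine'])]


def list_opts():
    """Aggregate options from every module, merging duplicate groups."""
    merged = {}
    for module_opts in (api_list_opts(), db_list_opts()):
        for group, opts in module_opts:
            merged.setdefault(group, []).extend(opts)
    return sorted(merged.items())
```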
watcher/conf/api.py (new file)

@@ -0,0 +1,67 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
api = cfg.OptGroup(name='api',
title='Options for the Watcher API service')
AUTH_OPTS = [
cfg.BoolOpt('enable_authentication',
default=True,
help='This option enables or disables user authentication '
'via keystone. Default value is True.'),
]
API_SERVICE_OPTS = [
cfg.PortOpt('port',
default=9322,
help='The port for the watcher API server'),
cfg.StrOpt('host',
default='127.0.0.1',
help='The listen IP address for the watcher API server'),
cfg.IntOpt('max_limit',
default=1000,
help='The maximum number of items returned in a single '
'response from a collection resource'),
cfg.IntOpt('workers',
min=1,
help='Number of workers for Watcher API service. '
'The default is equal to the number of CPUs available '
'if that can be determined, else a default worker '
'count of 1 is returned.'),
cfg.BoolOpt('enable_ssl_api',
default=False,
help="Enable the integrated stand-alone API to service "
"requests via HTTPS instead of HTTP. If there is a "
"front-end service performing HTTPS offloading from "
"the service, this option should be False; note, you "
"will want to change public API endpoint to represent "
"SSL termination URL with 'public_endpoint' option."),
]
def register_opts(conf):
conf.register_group(api)
conf.register_opts(API_SERVICE_OPTS, group=api)
conf.register_opts(AUTH_OPTS)
def list_opts():
return [('api', API_SERVICE_OPTS), ('DEFAULT', AUTH_OPTS)]

watcher/conf/applier.py (new file)

@@ -0,0 +1,53 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
watcher_applier = cfg.OptGroup(name='watcher_applier',
title='Options for the Applier messaging '
'core')
APPLIER_MANAGER_OPTS = [
cfg.IntOpt('workers',
default=1,
min=1,
required=True,
help='Number of workers for applier, default value is 1.'),
cfg.StrOpt('conductor_topic',
default='watcher.applier.control',
help='The topic name used for '
'control events. This topic is '
'used for RPC calls.'),
cfg.StrOpt('publisher_id',
default='watcher.applier.api',
help='The identifier used by watcher '
'module on the message broker'),
cfg.StrOpt('workflow_engine',
default='taskflow',
required=True,
help='Select the engine to use to execute the workflow')
]
def register_opts(conf):
conf.register_group(watcher_applier)
conf.register_opts(APPLIER_MANAGER_OPTS, group=watcher_applier)
def list_opts():
return [('watcher_applier', APPLIER_MANAGER_OPTS)]


@@ -0,0 +1,37 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
ceilometer_client = cfg.OptGroup(name='ceilometer_client',
title='Configuration Options for Ceilometer')
CEILOMETER_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help='Version of Ceilometer API to use in '
'ceilometerclient.')]
def register_opts(conf):
conf.register_group(ceilometer_client)
conf.register_opts(CEILOMETER_CLIENT_OPTS, group=ceilometer_client)
def list_opts():
return [('ceilometer_client', CEILOMETER_CLIENT_OPTS)]


@@ -0,0 +1,36 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
cinder_client = cfg.OptGroup(name='cinder_client',
title='Configuration Options for Cinder')
CINDER_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help='Version of Cinder API to use in cinderclient.')]
def register_opts(conf):
conf.register_group(cinder_client)
conf.register_opts(CINDER_CLIENT_OPTS, group=cinder_client)
def list_opts():
return [('cinder_client', CINDER_CLIENT_OPTS)]


@@ -0,0 +1,31 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keystoneauth1 import loading as ka_loading
WATCHER_CLIENTS_AUTH = 'watcher_clients_auth'
def register_opts(conf):
ka_loading.register_session_conf_options(conf, WATCHER_CLIENTS_AUTH)
ka_loading.register_auth_conf_options(conf, WATCHER_CLIENTS_AUTH)
def list_opts():
return [('watcher_clients_auth', ka_loading.get_session_conf_options() +
ka_loading.get_auth_common_conf_options())]

watcher/conf/db.py (new file)

@@ -0,0 +1,44 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from oslo_db import options as oslo_db_options
from watcher.conf import paths
_DEFAULT_SQL_CONNECTION = 'sqlite:///{0}'.format(
paths.state_path_def('watcher.sqlite'))
database = cfg.OptGroup(name='database',
title='Configuration Options for database')
SQL_OPTS = [
cfg.StrOpt('mysql_engine',
default='InnoDB',
help='MySQL engine to use.')
]
def register_opts(conf):
oslo_db_options.set_defaults(conf, connection=_DEFAULT_SQL_CONNECTION)
conf.register_group(database)
conf.register_opts(SQL_OPTS, group=database)
def list_opts():
return [('database', SQL_OPTS)]


@@ -0,0 +1,64 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
watcher_decision_engine = cfg.OptGroup(name='watcher_decision_engine',
title='Defines the parameters of '
'the module decision engine')
WATCHER_DECISION_ENGINE_OPTS = [
cfg.StrOpt('conductor_topic',
default='watcher.decision.control',
help='The topic name used for '
'control events. This topic is '
'used for RPC calls.'),
cfg.ListOpt('notification_topics',
default=['versioned_notifications', 'watcher_notifications'],
help='The topic names from which notification events '
'will be listened to'),
cfg.StrOpt('publisher_id',
default='watcher.decision.api',
help='The identifier used by the Watcher '
'module on the message broker'),
cfg.IntOpt('max_workers',
default=2,
required=True,
help='The maximum number of threads that can be used to '
'execute strategies'),
]
WATCHER_CONTINUOUS_OPTS = [
cfg.IntOpt('continuous_audit_interval',
default=10,
help='Interval (in seconds) for checking newly created '
'continuous audits.')
]
def register_opts(conf):
conf.register_group(watcher_decision_engine)
conf.register_opts(WATCHER_DECISION_ENGINE_OPTS,
group=watcher_decision_engine)
conf.register_opts(WATCHER_CONTINUOUS_OPTS, group=watcher_decision_engine)
def list_opts():
return [('watcher_decision_engine', WATCHER_DECISION_ENGINE_OPTS),
('watcher_decision_engine', WATCHER_CONTINUOUS_OPTS)]

watcher/conf/exception.py (new file)

@@ -0,0 +1,33 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
EXC_LOG_OPTS = [
cfg.BoolOpt('fatal_exception_format_errors',
default=False,
help='Make exception message format errors fatal.'),
]
def register_opts(conf):
conf.register_opts(EXC_LOG_OPTS)
def list_opts():
return [('DEFAULT', EXC_LOG_OPTS)]


@@ -0,0 +1,36 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
glance_client = cfg.OptGroup(name='glance_client',
title='Configuration Options for Glance')
GLANCE_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help='Version of Glance API to use in glanceclient.')]
def register_opts(conf):
conf.register_group(glance_client)
conf.register_opts(GLANCE_CLIENT_OPTS, group=glance_client)
def list_opts():
return [('glance_client', GLANCE_CLIENT_OPTS)]


@@ -0,0 +1,36 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
neutron_client = cfg.OptGroup(name='neutron_client',
title='Configuration Options for Neutron')
NEUTRON_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2.0',
help='Version of Neutron API to use in neutronclient.')]
def register_opts(conf):
conf.register_group(neutron_client)
conf.register_opts(NEUTRON_CLIENT_OPTS, group=neutron_client)
def list_opts():
return [('neutron_client', NEUTRON_CLIENT_OPTS)]


@@ -0,0 +1,36 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
nova_client = cfg.OptGroup(name='nova_client',
title='Configuration Options for Nova')
NOVA_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
help='Version of Nova API to use in novaclient.')]
def register_opts(conf):
conf.register_group(nova_client)
conf.register_opts(NOVA_CLIENT_OPTS, group=nova_client)
def list_opts():
return [('nova_client', NOVA_CLIENT_OPTS)]

watcher/conf/paths.py (new file)

@@ -0,0 +1,57 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
import os
PATH_OPTS = [
cfg.StrOpt('pybasedir',
default=os.path.abspath(os.path.join(os.path.dirname(__file__),
'../')),
help='Directory where the watcher python module is installed.'),
cfg.StrOpt('bindir',
default='$pybasedir/bin',
help='Directory where watcher binaries are installed.'),
cfg.StrOpt('state_path',
default='$pybasedir',
help="Top-level directory for maintaining watcher's state."),
]
def basedir_def(*args):
"""Return an uninterpolated path relative to $pybasedir."""
return os.path.join('$pybasedir', *args)
def bindir_def(*args):
"""Return an uninterpolated path relative to $bindir."""
return os.path.join('$bindir', *args)
def state_path_def(*args):
"""Return an uninterpolated path relative to $state_path."""
return os.path.join('$state_path', *args)
def register_opts(conf):
conf.register_opts(PATH_OPTS)
def list_opts():
return [('DEFAULT', PATH_OPTS)]

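The `*_def` helpers deliberately return uninterpolated paths; oslo.config substitutes `$pybasedir`, `$bindir` and `$state_path` when the option is read. A sketch using `string.Template` as a stand-in for that interpolation:

```python
import os
from string import Template


def state_path_def(*args):
    """Return an uninterpolated path relative to $state_path."""
    return os.path.join('$state_path', *args)


raw = state_path_def('watcher.sqlite')
# oslo.config performs $variable substitution at lookup time;
# string.Template stands in for that step here
resolved = Template(raw).substitute(state_path='/var/lib/watcher')
```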
watcher/conf/planner.py (new file)

@@ -0,0 +1,41 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
watcher_planner = cfg.OptGroup(name='watcher_planner',
title='Defines the parameters of '
'the planner')
default_planner = 'default'
WATCHER_PLANNER_OPTS = [
cfg.StrOpt('planner',
default=default_planner,
required=True,
help='The selected planner used to schedule the actions')
]
def register_opts(conf):
conf.register_group(watcher_planner)
conf.register_opts(WATCHER_PLANNER_OPTS, group=watcher_planner)
def list_opts():
return [('watcher_planner', WATCHER_PLANNER_OPTS)]

watcher/conf/utils.py (new file)

@@ -0,0 +1,36 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
UTILS_OPTS = [
cfg.StrOpt('rootwrap_config',
default="/etc/watcher/rootwrap.conf",
help='Path to the rootwrap configuration file to use for '
'running commands as root.'),
cfg.StrOpt('tempdir',
help='Explicitly specify the temporary working directory.'),
]
def register_opts(conf):
conf.register_opts(UTILS_OPTS)
def list_opts():
return [('DEFAULT', UTILS_OPTS)]


@@ -132,7 +132,8 @@ class Connection(api.BaseConnection):
def __add_simple_filter(self, query, model, fieldname, value, operator_):
field = getattr(model, fieldname)
if field.type.python_type is datetime.datetime and value:
if (fieldname != 'deleted' and value and
field.type.python_type is datetime.datetime):
if not isinstance(value, datetime.datetime):
value = timeutils.parse_isotime(value)
@@ -291,11 +292,13 @@ class Connection(api.BaseConnection):
query = model_query(model, session=session)
query = add_identity_filter(query, id_)
try:
query.one()
row = query.one()
except exc.NoResultFound:
raise exception.ResourceNotFound(name=model.__name__, id=id_)
query.soft_delete()
row.soft_delete(session)
return row
@staticmethod
def _destroy(model, id_):
@@ -484,7 +487,7 @@ class Connection(api.BaseConnection):
def soft_delete_goal(self, goal_id):
try:
self._soft_delete(models.Goal, goal_id)
return self._soft_delete(models.Goal, goal_id)
except exception.ResourceNotFound:
raise exception.GoalNotFound(goal=goal_id)
@@ -550,7 +553,7 @@ class Connection(api.BaseConnection):
def soft_delete_strategy(self, strategy_id):
try:
self._soft_delete(models.Strategy, strategy_id)
return self._soft_delete(models.Strategy, strategy_id)
except exception.ResourceNotFound:
raise exception.StrategyNotFound(strategy=strategy_id)
@@ -632,7 +635,7 @@ class Connection(api.BaseConnection):
def soft_delete_audit_template(self, audit_template_id):
try:
self._soft_delete(models.AuditTemplate, audit_template_id)
return self._soft_delete(models.AuditTemplate, audit_template_id)
except exception.ResourceNotFound:
raise exception.AuditTemplateNotFound(
audit_template=audit_template_id)
@@ -717,7 +720,7 @@ class Connection(api.BaseConnection):
def soft_delete_audit(self, audit_id):
try:
self._soft_delete(models.Audit, audit_id)
return self._soft_delete(models.Audit, audit_id)
except exception.ResourceNotFound:
raise exception.AuditNotFound(audit=audit_id)
@@ -793,17 +796,10 @@ class Connection(api.BaseConnection):
return ref
def soft_delete_action(self, action_id):
session = get_session()
with session.begin():
query = model_query(models.Action, session=session)
query = add_identity_filter(query, action_id)
try:
query.one()
except exc.NoResultFound:
raise exception.ActionNotFound(action=action_id)
query.soft_delete()
try:
return self._soft_delete(models.Action, action_id)
except exception.ResourceNotFound:
raise exception.ActionNotFound(action=action_id)
# ### ACTION PLANS ### #
@@ -895,17 +891,10 @@ class Connection(api.BaseConnection):
return ref
def soft_delete_action_plan(self, action_plan_id):
session = get_session()
with session.begin():
query = model_query(models.ActionPlan, session=session)
query = add_identity_filter(query, action_plan_id)
try:
query.one()
except exc.NoResultFound:
raise exception.ActionPlanNotFound(action_plan=action_plan_id)
query.soft_delete()
try:
return self._soft_delete(models.ActionPlan, action_plan_id)
except exception.ResourceNotFound:
raise exception.ActionPlanNotFound(action_plan=action_plan_id)
# ### EFFICACY INDICATORS ### #
@@ -973,7 +962,8 @@ class Connection(api.BaseConnection):
def soft_delete_efficacy_indicator(self, efficacy_indicator_id):
try:
self._soft_delete(models.EfficacyIndicator, efficacy_indicator_id)
return self._soft_delete(
models.EfficacyIndicator, efficacy_indicator_id)
except exception.ResourceNotFound:
raise exception.EfficacyIndicatorNotFound(
efficacy_indicator=efficacy_indicator_id)
@@ -1066,7 +1056,8 @@ class Connection(api.BaseConnection):
def soft_delete_scoring_engine(self, scoring_engine_id):
try:
return self._soft_delete(models.ScoringEngine, scoring_engine_id)
return self._soft_delete(
models.ScoringEngine, scoring_engine_id)
except exception.ResourceNotFound:
raise exception.ScoringEngineNotFound(
scoring_engine=scoring_engine_id)
@@ -1131,6 +1122,6 @@ class Connection(api.BaseConnection):
def soft_delete_service(self, service_id):
try:
self._soft_delete(models.Service, service_id)
return self._soft_delete(models.Service, service_id)
except exception.ResourceNotFound:
raise exception.ServiceNotFound(service=service_id)
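The hunks above converge every `soft_delete_*` method on the shared `_soft_delete` helper and, crucially, start returning the soft-deleted row so callers can refresh their versioned objects from it. A minimal self-contained sketch of that pattern (the in-memory `_rows` store and the helper bodies are illustrative stand-ins, not Watcher's actual session code):

```python
import datetime


class ResourceNotFound(Exception):
    pass


class StrategyNotFound(Exception):
    def __init__(self, strategy):
        super().__init__("Strategy %s not found" % strategy)


class Strategy(object):
    def __init__(self, id_):
        self.id = id_
        self.deleted_at = None


class Connection(object):
    """Sketch of the shared soft-delete pattern from the hunks above."""

    def __init__(self):
        self._rows = {}  # stands in for the database session

    def _soft_delete(self, model, resource_id):
        # Find the row, mark it deleted, and return the updated row so
        # callers can refresh their objects from it.
        row = self._rows.get((model, resource_id))
        if row is None:
            raise ResourceNotFound()
        row.deleted_at = datetime.datetime.utcnow()
        return row

    def soft_delete_strategy(self, strategy_id):
        # Translate the generic DB error into the resource-specific one,
        # exactly as each soft_delete_* wrapper does above.
        try:
            return self._soft_delete(Strategy, strategy_id)
        except ResourceNotFound:
            raise StrategyNotFound(strategy=strategy_id)
```

The `return` added in each hunk is what lets the object layer (further down in this change) rebuild itself from the deleted row instead of issuing a second save.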


@@ -16,8 +16,6 @@
SQLAlchemy models for watcher service
"""
from oslo_config import cfg
from oslo_db import options as db_options
from oslo_db.sqlalchemy import models
from oslo_serialization import jsonutils
import six.moves.urllib.parse as urlparse
@@ -33,25 +31,15 @@ from sqlalchemy import Text
from sqlalchemy.types import TypeDecorator, TEXT
from sqlalchemy import UniqueConstraint
from watcher.common import paths
from watcher import conf
SQL_OPTS = [
cfg.StrOpt('mysql_engine',
default='InnoDB',
help='MySQL engine to use.')
]
_DEFAULT_SQL_CONNECTION = 'sqlite:///{0}'.format(
paths.state_path_def('watcher.sqlite'))
cfg.CONF.register_opts(SQL_OPTS, 'database')
db_options.set_defaults(cfg.CONF, _DEFAULT_SQL_CONNECTION, 'watcher.sqlite')
CONF = conf.CONF
def table_args():
engine_name = urlparse.urlparse(cfg.CONF.database.connection).scheme
engine_name = urlparse.urlparse(CONF.database.connection).scheme
if engine_name == 'mysql':
return {'mysql_engine': cfg.CONF.database.mysql_engine,
return {'mysql_engine': CONF.database.mysql_engine,
'mysql_charset': "utf8"}
return None
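`table_args()` above decides whether to emit MySQL-specific table options by parsing the scheme of the configured database URL. A standalone sketch of that dispatch (the `'InnoDB'` default mirrors the removed `mysql_engine` option; note that a driver-qualified scheme such as `mysql+pymysql` would not match this exact comparison):

```python
from urllib.parse import urlparse  # six.moves.urllib.parse in the code above


def table_args(connection, mysql_engine='InnoDB'):
    """Return MySQL table options only for a mysql:// connection URL."""
    if urlparse(connection).scheme == 'mysql':
        return {'mysql_engine': mysql_engine, 'mysql_charset': 'utf8'}
    return None
```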


@@ -1,5 +1,6 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Servionica LTD
# Copyright (c) 2016 Intel Corp
#
# Authors: Alexander Chadin <a.chadin@servionica.ru>
#
@@ -21,22 +22,13 @@ import datetime
from apscheduler.schedulers import background
from oslo_config import cfg
from watcher.common import context
from watcher.decision_engine.audit import base
from watcher import objects
CONF = cfg.CONF
from watcher import conf
WATCHER_CONTINUOUS_OPTS = [
cfg.IntOpt('continuous_audit_interval',
default=10,
help='Interval (in seconds) for checking newly created '
'continuous audits.')
]
CONF.register_opts(WATCHER_CONTINUOUS_OPTS, 'watcher_decision_engine')
CONF = conf.CONF
class ContinuousAuditHandler(base.AuditHandler):


@@ -1,5 +1,6 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
# Copyright (c) 2016 Intel Corp
#
# Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
#
@@ -36,40 +37,13 @@ of :ref:`Actions <action_definition>` which are scheduled in time by the
See :doc:`../architecture` for more details on this component.
"""
from oslo_config import cfg
from watcher.common import service_manager
from watcher.decision_engine.messaging import audit_endpoint
from watcher.decision_engine.model.collector import manager
from watcher import conf
CONF = cfg.CONF
WATCHER_DECISION_ENGINE_OPTS = [
cfg.StrOpt('conductor_topic',
default='watcher.decision.control',
help='The topic name used for '
'control events, this topic '
'used for RPC calls'),
cfg.ListOpt('notification_topics',
default=['versioned_notifications', 'watcher_notifications'],
help='The topic names from which notification events '
'will be listened to'),
cfg.StrOpt('publisher_id',
default='watcher.decision.api',
help='The identifier used by the Watcher '
'module on the message broker'),
cfg.IntOpt('max_workers',
default=2,
required=True,
help='The maximum number of threads that can be used to '
'execute strategies'),
]
decision_engine_opt_group = cfg.OptGroup(name='watcher_decision_engine',
title='Defines the parameters of '
'the module decision engine')
CONF.register_group(decision_engine_opt_group)
CONF.register_opts(WATCHER_DECISION_ENGINE_OPTS, decision_engine_opt_group)
CONF = conf.CONF
class DecisionEngineManager(service_manager.ServiceManager):


@@ -15,28 +15,14 @@
# limitations under the License.
#
from oslo_config import cfg
from oslo_log import log
from watcher.decision_engine.loading import default as loader
from watcher import conf
LOG = log.getLogger(__name__)
CONF = cfg.CONF
default_planner = 'default'
WATCHER_PLANNER_OPTS = {
cfg.StrOpt('planner',
default=default_planner,
required=True,
help='The selected planner used to schedule the actions')
}
planner_opt_group = cfg.OptGroup(name='watcher_planner',
title='Defines the parameters of '
'the planner')
CONF.register_group(planner_opt_group)
CONF.register_opts(WATCHER_PLANNER_OPTS, planner_opt_group)
CONF = conf.CONF
class PlannerManager(object):


@@ -1,5 +1,6 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
# Copyright (c) 2016 Intel Corp
#
# Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
#
@@ -17,20 +18,14 @@
# limitations under the License.
#
from oslo_config import cfg
from watcher.common import exception
from watcher.common import service
from watcher.common import service_manager
from watcher.common import utils
from watcher.decision_engine import manager
from watcher import conf
CONF = cfg.CONF
CONF.register_group(manager.decision_engine_opt_group)
CONF.register_opts(manager.WATCHER_DECISION_ENGINE_OPTS,
manager.decision_engine_opt_group)
CONF = conf.CONF
class DecisionEngineAPI(service.Service):
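The config hunks above all make the same move for the centralise-config-opts blueprint: delete the per-module `cfg.OptGroup`/`register_opts` boilerplate and import a central `watcher.conf` package instead. A toy sketch of that layout with a minimal stand-in for oslo.config's global `CONF` (the option names and defaults come from the removed blocks; the registry class itself is purely illustrative):

```python
class Conf(object):
    """Illustrative stand-in for oslo.config's global CONF object."""

    def __init__(self):
        self._groups = {}

    def register_opts(self, group, **defaults):
        self._groups.setdefault(group, {}).update(defaults)

    def __getitem__(self, group):
        return self._groups[group]


# Each subsystem contributes one registration hook instead of
# registering options as a side effect of its own import ...
def register_decision_engine_opts(conf):
    conf.register_opts('watcher_decision_engine',
                       conductor_topic='watcher.decision.control',
                       publisher_id='watcher.decision.api',
                       max_workers=2,
                       continuous_audit_interval=10)


def register_planner_opts(conf):
    conf.register_opts('watcher_planner', planner='default')


# ... and the central conf package runs every hook once, so a single
# `from watcher import conf` leaves CONF fully populated.
CONF = Conf()
for register in (register_decision_engine_opts, register_planner_opts):
    register(CONF)
```

This is why each hunk can shrink to `from watcher import conf` plus `CONF = conf.CONF`: registration happens exactly once, in one place.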


@@ -185,6 +185,9 @@ class BaseStrategy(loadable.Loadable):
if not self._compute_model:
raise exception.ClusterStateNotDefined()
if self._compute_model.stale:
raise exception.ClusterStateStale()
return self._compute_model
@classmethod


@@ -30,7 +30,7 @@ This algorithm not only minimizes the overall number of used servers, but also
minimizes the number of migrations.
It has been developed only for tests. You must have at least 2 physical compute
nodes to run it, so you can easilly run it on DevStack. It assumes that live
nodes to run it, so you can easily run it on DevStack. It assumes that live
migration is possible on your OpenStack cluster.
"""
@@ -384,12 +384,16 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
def pre_execute(self):
LOG.info(_LI("Initializing Server Consolidation"))
if not self.compute_model:
raise exception.ClusterStateNotDefined()
if len(self.compute_model.get_all_compute_nodes()) == 0:
raise exception.ClusterEmpty()
if self.compute_model.stale:
raise exception.ClusterStateStale()
LOG.debug(self.compute_model.to_string())
def do_execute(self):


@@ -230,6 +230,9 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
if not self.compute_model:
raise wexc.ClusterStateNotDefined()
if self.compute_model.stale:
raise wexc.ClusterStateStale()
LOG.debug(self.compute_model.to_string())
def do_execute(self):


@@ -16,6 +16,33 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
[PoC]Uniform Airflow using live migration
*Description*
It is a migration strategy based on the airflow of physical
servers. It generates solutions to move VMs whenever a server's
airflow is higher than the specified threshold.
*Requirements*
* Hardware: compute node with NodeManager 3.0 support
* Software: Ceilometer component ceilometer-agent-compute running
in each compute node, and Ceilometer API can report such telemetry
"airflow, system power, inlet temperature" successfully.
* You must have at least 2 physical compute nodes to run this strategy
*Limitations*
- This is a proof of concept that is not meant to be used in production.
- We cannot forecast how many servers should be migrated. This is the
reason why we only plan a single virtual machine migration at a time.
So it's better to use this algorithm with `CONTINUOUS` audits.
- It assumes that live migrations are possible.
"""
from oslo_log import log
from watcher._i18n import _, _LE, _LI, _LW
@@ -293,6 +320,9 @@ class UniformAirflow(base.BaseStrategy):
if not self.compute_model:
raise wexc.ClusterStateNotDefined()
if self.compute_model.stale:
raise wexc.ClusterStateStale()
LOG.debug(self.compute_model.to_string())
def do_execute(self):


@@ -129,7 +129,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
self.number_of_released_nodes -= 1
def add_action_disable_node(self, node):
"""Add an action for node disablity into the solution.
"""Add an action for node disability into the solution.
:param node: node object
:return: None
@@ -164,7 +164,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
'state=%(instance_state)s.'),
instance_uuid=instance_uuid,
instance_state=instance_state_str)
raise exception.WatcherException
return
migration_type = 'live'
@@ -487,6 +487,9 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
if not self.compute_model:
raise exception.ClusterStateNotDefined()
if self.compute_model.stale:
raise exception.ClusterStateStale()
LOG.debug(self.compute_model.to_string())
def do_execute(self):
@@ -534,10 +537,9 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
LOG.debug(info)
def post_execute(self):
# self.solution.efficacy = rcu_after['cpu']
self.solution.set_efficacy_indicators(
released_compute_nodes_count=self.number_of_migrations,
instance_migrations_count=self.number_of_released_nodes,
released_compute_nodes_count=self.number_of_released_nodes,
instance_migrations_count=self.number_of_migrations,
)
LOG.debug(self.compute_model.to_string())


@@ -16,6 +16,37 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
*[PoC]Workload balance using live migration*
*Description*
This strategy migrates a VM based on the VM workload of the hosts.
It decides to migrate a workload whenever a host's CPU
utilization % is higher than the specified threshold. The VM to
be moved should bring the host close to the average workload of
all host nodes.
*Requirements*
* Hardware: compute node should use the same physical CPUs
* Software: Ceilometer component ceilometer-agent-compute
running in each compute node, and Ceilometer API can
report such telemetry "cpu_util" successfully.
* You must have at least 2 physical compute nodes to run
this strategy.
*Limitations*
- This is a proof of concept that is not meant to be used in
production.
- We cannot forecast how many servers should be migrated.
This is the reason why we only plan a single virtual
machine migration at a time. So it's better to use this
algorithm with `CONTINUOUS` audits.
"""
from oslo_log import log
from watcher._i18n import _, _LE, _LI, _LW
@@ -283,6 +314,9 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
if not self.compute_model:
raise wexc.ClusterStateNotDefined()
if self.compute_model.stale:
raise wexc.ClusterStateStale()
LOG.debug(self.compute_model.to_string())
def do_execute(self):


@@ -31,6 +31,7 @@ import copy
import itertools
import math
import random
import re
import oslo_cache
from oslo_config import cfg
@@ -79,6 +80,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
self.host_choice = None
self.instance_metrics = None
self.retry_count = None
self.periods = None
@classmethod
def get_name(cls):
@@ -119,7 +121,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
"description": "Mapping to get hardware statistics using"
" instance metrics",
"type": "object",
"default": {"cpu_util": "hardware.cpu.util",
"default": {"cpu_util": "compute.node.cpu.percent",
"memory.resident": "hardware.memory.used"}
},
"host_choice": {
@@ -137,6 +139,17 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
"description": "Count of random returned hosts",
"type": "number",
"default": 1
},
"periods": {
"description": "These periods are used to get statistic "
"aggregation for instance and host "
"metrics. The period is simply a repeating"
" interval of time into which the samples"
" are grouped for aggregation. Watcher "
"uses only the last period of all received"
" ones.",
"type": "object",
"default": {"instance": 720, "node": 600}
}
}
}
@@ -189,7 +202,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=instance_uuid,
meter_name=meter,
period="120",
period=self.periods['instance'],
aggregate='min'
)
if avg_meter is None:
@@ -224,7 +237,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
def get_hosts_load(self):
"""Get load of every available host by gathering instances load"""
hosts_load = {}
for node_id in self.get_available_nodes():
for node_id, node in self.get_available_nodes().items():
hosts_load[node_id] = {}
host_vcpus = self.compute_model.get_resource_by_uuid(
element.ResourceType.cpu_cores).get_capacity(
@@ -232,19 +245,27 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
hosts_load[node_id]['vcpus'] = host_vcpus
for metric in self.metrics:
resource_id = ''
meter_name = self.instance_metrics[metric]
if re.match('^compute.node', meter_name) is not None:
resource_id = "%s_%s" % (node.uuid, node.hostname)
else:
resource_id = node_id
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=node_id,
resource_id=resource_id,
meter_name=self.instance_metrics[metric],
period="60",
period=self.periods['node'],
aggregate='avg'
)
if avg_meter is None:
raise exception.NoSuchMetricForHost(
metric=self.instance_metrics[metric],
metric=meter_name,
host=node_id)
if self.instance_metrics[metric] == 'hardware.memory.used':
if meter_name == 'hardware.memory.used':
avg_meter /= oslo_utils.units.Ki
if self.instance_metrics[metric] == 'hardware.cpu.util':
if meter_name == 'compute.node.cpu.percent':
avg_meter /= 100
hosts_load[node_id][metric] = avg_meter
return hosts_load
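The `get_hosts_load()` hunk above adds per-meter handling: `compute.node.*` meters are queried with a `"<uuid>_<hostname>"` resource id and arrive as percentages, `hardware.memory.used` arrives in KiB, and everything else keeps the node id. A standalone sketch of that dispatch (the unit constants mirror `oslo_utils.units.Ki` and the `/100` normalisation in the diff; the function name is hypothetical):

```python
import re


def host_meter_sample(meter_name, node_uuid, node_hostname, raw_value):
    """Pick the Ceilometer resource id and normalise the sample unit."""
    if re.match('^compute.node', meter_name):
        # compute.node.* statistics are published per "<uuid>_<hostname>"
        resource_id = "%s_%s" % (node_uuid, node_hostname)
    else:
        resource_id = node_uuid
    value = float(raw_value)
    if meter_name == 'hardware.memory.used':
        value /= 1024  # KiB -> MiB (oslo_utils.units.Ki in the diff)
    elif meter_name == 'compute.node.cpu.percent':
        value /= 100  # percent -> ratio, like the old hardware.cpu.util
    return resource_id, value
```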
@@ -399,12 +420,16 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
if not self.compute_model:
raise exception.ClusterStateNotDefined()
if self.compute_model.stale:
raise exception.ClusterStateStale()
self.weights = self.input_parameters.weights
self.metrics = self.input_parameters.metrics
self.thresholds = self.input_parameters.thresholds
self.host_choice = self.input_parameters.host_choice
self.instance_metrics = self.input_parameters.instance_metrics
self.retry_count = self.input_parameters.retry_count
self.periods = self.input_parameters.periods
def do_execute(self):
migration = self.check_threshold()


@@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import ast
import collections
from oslo_log import log
@@ -536,11 +537,14 @@ class Syncer(object):
def _soft_delete_stale_strategies(self, strategy_map, matching_strategies):
strategy_name = strategy_map.name
strategy_display_name = strategy_map.display_name
parameters_spec = strategy_map.parameters_spec
stale_strategies = []
for matching_strategy in matching_strategies:
if (matching_strategy.display_name == strategy_display_name and
matching_strategy.goal_id not in self.goal_mapping):
matching_strategy.goal_id not in self.goal_mapping and
matching_strategy.parameters_spec ==
ast.literal_eval(parameters_spec)):
LOG.info(_LI("Strategy %s unchanged"), strategy_name)
else:
LOG.info(_LI("Strategy %s modified"), strategy_name)
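The new `parameters_spec` check above compares a dict-valued database column against a spec held in its textual form, hence the `ast.literal_eval` round-trip before comparing. A small sketch of why a plain string comparison would not do (the example spec is hypothetical):

```python
import ast


def spec_unchanged(stored_spec, discovered_spec_text):
    """Compare a dict-valued spec with its serialized text form."""
    # literal_eval safely parses the repr-style text back into a dict,
    # so the comparison is structural rather than character-by-character.
    return stored_spec == ast.literal_eval(discovered_spec_text)


stored = {'threshold': {'type': 'number', 'default': 35.0}}
text = "{'threshold': {'type': 'number', 'default': 35.0}}"
```

Without the parse step, key ordering or whitespace differences in the stored text would mark an unchanged strategy as modified.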


@@ -146,8 +146,9 @@ class Action(base.WatcherPersistentObject, base.WatcherObject,
of self.what_changed().
"""
updates = self.obj_get_changes()
self.dbapi.update_action(self.uuid, updates)
db_obj = self.dbapi.update_action(self.uuid, updates)
obj = self._from_db_object(self, db_obj, eager=False)
self.obj_refresh(obj)
self.obj_reset_changes()
@base.remotable
@@ -165,6 +166,9 @@ class Action(base.WatcherPersistentObject, base.WatcherObject,
@base.remotable
def soft_delete(self):
"""Soft Delete the Audit from the DB"""
self.dbapi.soft_delete_action(self.uuid)
self.state = State.DELETED
self.save()
db_obj = self.dbapi.soft_delete_action(self.uuid)
obj = self._from_db_object(
self.__class__(self._context), db_obj, eager=False)
self.obj_refresh(obj)


@@ -59,7 +59,7 @@ state may be one of the following:
- **SUCCEEDED** : the :ref:`Action Plan <action_plan_definition>` has been
executed successfully (i.e. all :ref:`Actions <action_definition>` that it
contains have been executed successfully)
- **FAILED** : an error occured while executing the
- **FAILED** : an error occurred while executing the
:ref:`Action Plan <action_plan_definition>`
- **DELETED** : the :ref:`Action Plan <action_plan_definition>` is still
stored in the :ref:`Watcher database <watcher_database_definition>` but is
@@ -93,14 +93,15 @@ class ActionPlan(base.WatcherPersistentObject, base.WatcherObject,
# Version 1.0: Initial version
# Version 1.1: Added 'audit' and 'strategy' object field
VERSION = '1.1'
# Version 1.2: audit_id is not nullable anymore
VERSION = '1.2'
dbapi = db_api.get_instance()
fields = {
'id': wfields.IntegerField(),
'uuid': wfields.UUIDField(),
'audit_id': wfields.IntegerField(nullable=True),
'audit_id': wfields.IntegerField(),
'strategy_id': wfields.IntegerField(),
'first_action_id': wfields.IntegerField(nullable=True),
'state': wfields.StringField(nullable=True),
@@ -132,11 +133,11 @@ class ActionPlan(base.WatcherPersistentObject, base.WatcherObject,
@base.remotable_classmethod
def get_by_id(cls, context, action_plan_id, eager=False):
"""Find a action_plan based on its integer id and return a Action object.
"""Find a action_plan based on its integer id and return a ActionPlan object.
:param action_plan_id: the id of a action_plan.
:param eager: Load object fields if True (Default: False)
:returns: a :class:`Action` object.
:returns: a :class:`ActionPlan` object.
"""
db_action_plan = cls.dbapi.get_action_plan_by_id(
context, action_plan_id, eager=eager)
@@ -146,12 +147,12 @@ class ActionPlan(base.WatcherPersistentObject, base.WatcherObject,
@base.remotable_classmethod
def get_by_uuid(cls, context, uuid, eager=False):
"""Find a action_plan based on uuid and return a :class:`Action` object.
"""Find a action_plan based on uuid and return a :class:`ActionPlan` object.
:param uuid: the uuid of a action_plan.
:param context: Security context
:param eager: Load object fields if True (Default: False)
:returns: a :class:`Action` object.
:returns: a :class:`ActionPlan` object.
"""
db_action_plan = cls.dbapi.get_action_plan_by_uuid(
context, uuid, eager=eager)
@@ -162,7 +163,7 @@ class ActionPlan(base.WatcherPersistentObject, base.WatcherObject,
@base.remotable_classmethod
def list(cls, context, limit=None, marker=None, filters=None,
sort_key=None, sort_dir=None, eager=False):
"""Return a list of Action objects.
"""Return a list of ActionPlan objects.
:param context: Security context.
:param limit: maximum number of resources to return in a single result.
@@ -218,8 +219,9 @@ class ActionPlan(base.WatcherPersistentObject, base.WatcherObject,
of self.what_changed().
"""
updates = self.obj_get_changes()
self.dbapi.update_action_plan(self.uuid, updates)
db_obj = self.dbapi.update_action_plan(self.uuid, updates)
obj = self._from_db_object(self, db_obj, eager=False)
self.obj_refresh(obj)
self.obj_reset_changes()
@base.remotable
@@ -253,6 +255,9 @@ class ActionPlan(base.WatcherPersistentObject, base.WatcherObject,
for related_efficacy_indicator in related_efficacy_indicators:
related_efficacy_indicator.soft_delete()
self.dbapi.soft_delete_action_plan(self.uuid)
self.state = State.DELETED
self.save()
db_obj = self.dbapi.soft_delete_action_plan(self.uuid)
obj = self._from_db_object(
self.__class__(self._context), db_obj, eager=False)
self.obj_refresh(obj)


@@ -38,7 +38,7 @@ be one of the following:
- **SUCCEEDED** : the :ref:`Audit <audit_definition>` has been executed
successfully (note that it may not necessarily produce a
:ref:`Solution <solution_definition>`).
- **FAILED** : an error occured while executing the
- **FAILED** : an error occurred while executing the
:ref:`Audit <audit_definition>`
- **DELETED** : the :ref:`Audit <audit_definition>` is still stored in the
:ref:`Watcher database <watcher_database_definition>` but is not returned
@@ -108,7 +108,7 @@ class Audit(base.WatcherPersistentObject, base.WatcherObject,
_old_state = None
# NOTE(v-francoise): The way oslo.versionedobjects works is by using a
# __new__ that will automagically create the attributes referenced in
# __new__ that will automatically create the attributes referenced in
# fields. These attributes are properties that raise an exception if no
# value has been assigned, which means that they store the actual field
# value in an "_obj_%(field)s" attribute. So because we want to proxify a
@@ -255,7 +255,10 @@ class Audit(base.WatcherPersistentObject, base.WatcherObject,
of self.what_changed().
"""
updates = self.obj_get_changes()
self.dbapi.update_audit(self.uuid, updates)
db_obj = self.dbapi.update_audit(self.uuid, updates)
obj = self._from_db_object(
self.__class__(self._context), db_obj, eager=False)
self.obj_refresh(obj)
def _notify():
notifications.audit.send_update(
@@ -280,9 +283,12 @@ class Audit(base.WatcherPersistentObject, base.WatcherObject,
@base.remotable
def soft_delete(self):
"""Soft Delete the Audit from the DB."""
self.dbapi.soft_delete_audit(self.uuid)
self.state = State.DELETED
self.save()
db_obj = self.dbapi.soft_delete_audit(self.uuid)
obj = self._from_db_object(
self.__class__(self._context), db_obj, eager=False)
self.obj_refresh(obj)
def _notify():
notifications.audit.send_delete(self._context, self)


@@ -215,8 +215,9 @@ class AuditTemplate(base.WatcherPersistentObject, base.WatcherObject,
of self.what_changed().
"""
updates = self.obj_get_changes()
self.dbapi.update_audit_template(self.uuid, updates)
db_obj = self.dbapi.update_audit_template(self.uuid, updates)
obj = self._from_db_object(self, db_obj, eager=False)
self.obj_refresh(obj)
self.obj_reset_changes()
@base.remotable
@@ -234,4 +235,7 @@ class AuditTemplate(base.WatcherPersistentObject, base.WatcherObject,
@base.remotable
def soft_delete(self):
"""Soft Delete the :class:`AuditTemplate` from the DB"""
self.dbapi.soft_delete_audit_template(self.uuid)
db_obj = self.dbapi.soft_delete_audit_template(self.uuid)
obj = self._from_db_object(
self.__class__(self._context), db_obj, eager=False)
self.obj_refresh(obj)


@@ -123,7 +123,9 @@ class WatcherPersistentObject(object):
the loaded object column by column in comparison with the current
object.
"""
for field in self.fields:
fields = (field for field in self.fields
if field not in self.object_fields)
for field in fields:
if (self.obj_attr_is_set(field) and
self[field] != loaded_object[field]):
self[field] = loaded_object[field]
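The `obj_refresh` change above filters out relationship fields so that refreshing from a non-eager DB row cannot clobber already-loaded related objects. A toy sketch of that filtering (the class, its field names, and the dict-backed storage are simplifications of `WatcherPersistentObject`):

```python
class PersistentObject(object):
    """Toy sketch of the obj_refresh() filtering above."""

    fields = ('id', 'state', 'action_plan')
    object_fields = ('action_plan',)  # relationship fields to skip

    def __init__(self, **values):
        self._values = dict(values)

    def __getitem__(self, field):
        return self._values[field]

    def __setitem__(self, field, value):
        self._values[field] = value

    def obj_attr_is_set(self, field):
        return field in self._values

    def obj_refresh(self, loaded_object):
        # Only plain columns are compared and copied; relationship
        # fields are left alone so an un-eager load can't erase them.
        fields = (field for field in self.fields
                  if field not in self.object_fields)
        for field in fields:
            if (self.obj_attr_is_set(field) and
                    self[field] != loaded_object[field]):
                self[field] = loaded_object[field]
```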


@@ -151,8 +151,9 @@ class Goal(base.WatcherPersistentObject, base.WatcherObject,
of self.what_changed().
"""
updates = self.obj_get_changes()
self.dbapi.update_goal(self.id, updates)
db_obj = self.dbapi.update_goal(self.uuid, updates)
obj = self._from_db_object(self, db_obj, eager=False)
self.obj_refresh(obj)
self.obj_reset_changes()
@base.remotable
@@ -169,4 +170,7 @@ class Goal(base.WatcherPersistentObject, base.WatcherObject,
@base.remotable
def soft_delete(self):
"""Soft Delete the :class:`Goal` from the DB"""
self.dbapi.soft_delete_goal(self.uuid)
db_obj = self.dbapi.soft_delete_goal(self.uuid)
obj = self._from_db_object(
self.__class__(self._context), db_obj, eager=False)
self.obj_refresh(obj)


@@ -175,8 +175,9 @@ class ScoringEngine(base.WatcherPersistentObject, base.WatcherObject,
of self.what_changed().
"""
updates = self.obj_get_changes()
self.dbapi.update_scoring_engine(self.id, updates)
db_obj = self.dbapi.update_scoring_engine(self.uuid, updates)
obj = self._from_db_object(self, db_obj, eager=False)
self.obj_refresh(obj)
self.obj_reset_changes()
def refresh(self):
@@ -191,4 +192,7 @@ class ScoringEngine(base.WatcherPersistentObject, base.WatcherObject,
def soft_delete(self):
"""Soft Delete the :class:`ScoringEngine` from the DB"""
self.dbapi.soft_delete_scoring_engine(self.id)
db_obj = self.dbapi.soft_delete_scoring_engine(self.id)
obj = self._from_db_object(
self.__class__(self._context), db_obj, eager=False)
self.obj_refresh(obj)


@@ -119,8 +119,9 @@ class Service(base.WatcherPersistentObject, base.WatcherObject,
of self.what_changed().
"""
updates = self.obj_get_changes()
self.dbapi.update_service(self.id, updates)
db_obj = self.dbapi.update_service(self.id, updates)
obj = self._from_db_object(self, db_obj, eager=False)
self.obj_refresh(obj)
self.obj_reset_changes()
def refresh(self):
@@ -138,4 +139,7 @@ class Service(base.WatcherPersistentObject, base.WatcherObject,
def soft_delete(self):
"""Soft Delete the :class:`Service` from the DB."""
self.dbapi.soft_delete_service(self.id)
db_obj = self.dbapi.soft_delete_service(self.id)
obj = self._from_db_object(
self.__class__(self._context), db_obj, eager=False)
self.obj_refresh(obj)


@@ -1,5 +1,6 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
# Copyright (c) 2016 Intel Corp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.


@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
from oslo_utils import uuidutils
import freezegun
import mock
@@ -73,7 +73,7 @@ class TestPurgeCommand(base.DbTestCase):
seed += 1
def generate_unique_name(self, prefix):
return "%s%s" % (prefix, uuid.uuid4())
return "%s%s" % (prefix, uuidutils.generate_uuid())
def _data_setup(self):
# All the 1's are soft_deleted and are expired


@@ -16,7 +16,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import uuid
from oslo_utils import uuidutils
from watcher.decision_engine.model import element
from watcher.tests import base
@@ -95,7 +95,7 @@ class TestMapping(base.TestCase):
instances = model.get_all_instances()
keys = list(instances.keys())
instance0 = instances[keys[0]]
uuid_ = "{0}".format(uuid.uuid4())
uuid_ = uuidutils.generate_uuid()
node = element.ComputeNode(id=1)
node.uuid = uuid_


@@ -40,10 +40,10 @@ class TestWorkloadStabilization(base.TestCase):
self.hosts_load_assert = {
'Node_0': {'cpu_util': 0.07, 'memory.resident': 7.0, 'vcpus': 40},
'Node_1': {'cpu_util': 0.05, 'memory.resident': 5, 'vcpus': 40},
'Node_2': {'cpu_util': 0.1, 'memory.resident': 29, 'vcpus': 40},
'Node_3': {'cpu_util': 0.04, 'memory.resident': 8, 'vcpus': 40},
'Node_4': {'cpu_util': 0.02, 'memory.resident': 4, 'vcpus': 40}}
'Node_1': {'cpu_util': 0.07, 'memory.resident': 5, 'vcpus': 40},
'Node_2': {'cpu_util': 0.8, 'memory.resident': 29, 'vcpus': 40},
'Node_3': {'cpu_util': 0.05, 'memory.resident': 8, 'vcpus': 40},
'Node_4': {'cpu_util': 0.05, 'memory.resident': 4, 'vcpus': 40}}
p_model = mock.patch.object(
strategies.WorkloadStabilization, "compute_model",
@@ -76,19 +76,21 @@ class TestWorkloadStabilization(base.TestCase):
'weights': {"cpu_util_weight": 1.0,
"memory.resident_weight": 1.0},
'instance_metrics':
{"cpu_util": "hardware.cpu.util",
{"cpu_util": "compute.node.cpu.percent",
"memory.resident": "hardware.memory.used"},
'host_choice': 'retry',
'retry_count': 1})
'retry_count': 1,
'periods': {"instance": 720, "node": 600}})
self.strategy.metrics = ["cpu_util", "memory.resident"]
self.strategy.thresholds = {"cpu_util": 0.2, "memory.resident": 0.2}
self.strategy.weights = {"cpu_util_weight": 1.0,
"memory.resident_weight": 1.0}
self.strategy.instance_metrics = {"cpu_util": "hardware.cpu.util",
"memory.resident":
"hardware.memory.used"}
self.strategy.instance_metrics = {
"cpu_util": "compute.node.cpu.percent",
"memory.resident": "hardware.memory.used"}
self.strategy.host_choice = 'retry'
self.strategy.retry_count = 1
self.strategy.periods = {"instance": 720, "node": 600}
def test_get_instance_load(self):
self.m_model.return_value = self.fake_cluster.generate_scenario_1()
@@ -98,6 +100,23 @@ class TestWorkloadStabilization(base.TestCase):
self.assertEqual(
instance_0_dict, self.strategy.get_instance_load("INSTANCE_0"))
def test_periods(self):
self.m_model.return_value = self.fake_cluster.generate_scenario_1()
p_ceilometer = mock.patch.object(
strategies.WorkloadStabilization, "ceilometer")
m_ceilometer = p_ceilometer.start()
self.addCleanup(p_ceilometer.stop)
m_ceilometer.return_value = mock.Mock(
statistic_aggregation=self.fake_metrics.mock_get_statistics)
self.strategy.get_instance_load("INSTANCE_0")
m_ceilometer.statistic_aggregation.assert_called_with(
aggregate='min', meter_name='memory.resident',
period=720, resource_id='INSTANCE_0')
self.strategy.get_hosts_load()
m_ceilometer.statistic_aggregation.assert_called_with(
aggregate='avg', meter_name='hardware.memory.used',
period=600, resource_id=mock.ANY)
def test_normalize_hosts_load(self):
self.m_model.return_value = self.fake_cluster.generate_scenario_1()
fake_hosts = {'Node_0': {'cpu_util': 0.07, 'memory.resident': 7},
@@ -123,7 +142,7 @@ class TestWorkloadStabilization(base.TestCase):
self.hosts_load_assert)
def test_get_sd(self):
test_cpu_sd = 0.027
test_cpu_sd = 0.296
test_ram_sd = 9.3
self.assertEqual(
round(self.strategy.get_sd(
@@ -144,7 +163,7 @@ class TestWorkloadStabilization(base.TestCase):
self.hosts_load_assert, "INSTANCE_5", "Node_2", "Node_1")[-1][
"Node_1"]
result['cpu_util'] = round(result['cpu_util'], 3)
self.assertEqual(result, {'cpu_util': 0.075, 'memory.resident': 21,
self.assertEqual(result, {'cpu_util': 0.095, 'memory.resident': 21.0,
'vcpus': 40})
def test_simulate_migrations(self):


@@ -13,6 +13,9 @@
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import iso8601
import mock
from watcher.common import exception
@@ -100,17 +103,24 @@ class TestActionObject(base.DbTestCase):
@mock.patch.object(db_api.Connection, 'get_action_by_uuid')
def test_save(self, mock_get_action, mock_update_action):
mock_get_action.return_value = self.fake_action
fake_saved_action = self.fake_action.copy()
fake_saved_action['updated_at'] = datetime.datetime.utcnow()
mock_update_action.return_value = fake_saved_action
uuid = self.fake_action['uuid']
action = objects.Action.get_by_uuid(
self.context, uuid, eager=self.eager)
action.state = objects.action.State.SUCCEEDED
action.save()
expected_update_at = fake_saved_action['updated_at'].replace(
tzinfo=iso8601.iso8601.Utc())
mock_get_action.assert_called_once_with(
self.context, uuid, eager=self.eager)
mock_update_action.assert_called_once_with(
uuid, {'state': objects.action.State.SUCCEEDED})
self.assertEqual(self.context, action._context)
self.assertEqual(expected_update_at, action.updated_at)
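The new assertion above works because the DB fake hands back a naive UTC datetime while the versioned-object layer exposes a timezone-aware one, so the expectation is normalised with `iso8601.iso8601.Utc()` first. The same normalisation using the stdlib equivalent (chosen here only to keep the sketch dependency-free):

```python
import datetime

# A naive UTC timestamp, as the fake DB row carries it.
naive = datetime.datetime(2016, 12, 14, 20, 13, 23)

# What the versioned object exposes: the same instant, tz-aware.
# (iso8601.iso8601.Utc() in the test above; timezone.utc here.)
aware = naive.replace(tzinfo=datetime.timezone.utc)
```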
@mock.patch.object(db_api.Connection, 'get_action_by_uuid')
def test_refresh(self, mock_get_action):
@@ -136,15 +146,18 @@ class TestCreateDeleteActionObject(base.DbTestCase):
self.fake_strategy = utils.create_test_strategy(name="DUMMY")
self.fake_audit = utils.create_test_audit()
self.fake_action_plan = utils.create_test_action_plan()
self.fake_action = utils.get_test_action()
self.fake_action = utils.get_test_action(
created_at=datetime.datetime.utcnow())
@mock.patch.object(db_api.Connection, 'create_action')
def test_create(self, mock_create_action):
mock_create_action.return_value = self.fake_action
action = objects.Action(self.context, **self.fake_action)
action.create()
mock_create_action.assert_called_once_with(self.fake_action)
expected_action = self.fake_action.copy()
expected_action['created_at'] = expected_action['created_at'].replace(
tzinfo=iso8601.iso8601.Utc())
mock_create_action.assert_called_once_with(expected_action)
self.assertEqual(self.context, action._context)
@mock.patch.object(db_api.Connection, 'update_action')
@@ -153,6 +166,18 @@ class TestCreateDeleteActionObject(base.DbTestCase):
def test_soft_delete(self, mock_get_action,
mock_soft_delete_action, mock_update_action):
mock_get_action.return_value = self.fake_action
fake_deleted_action = self.fake_action.copy()
fake_deleted_action['deleted_at'] = datetime.datetime.utcnow()
mock_soft_delete_action.return_value = fake_deleted_action
mock_update_action.return_value = fake_deleted_action
expected_action = fake_deleted_action.copy()
expected_action['created_at'] = expected_action['created_at'].replace(
tzinfo=iso8601.iso8601.Utc())
expected_action['deleted_at'] = expected_action['deleted_at'].replace(
tzinfo=iso8601.iso8601.Utc())
del expected_action['action_plan']
uuid = self.fake_action['uuid']
action = objects.Action.get_by_uuid(self.context, uuid)
action.soft_delete()
@@ -162,6 +187,7 @@ class TestCreateDeleteActionObject(base.DbTestCase):
mock_update_action.assert_called_once_with(
uuid, {'state': objects.action.State.DELETED})
self.assertEqual(self.context, action._context)
self.assertEqual(expected_action, action.as_dict())
@mock.patch.object(db_api.Connection, 'destroy_action')
@mock.patch.object(db_api.Connection, 'get_action_by_uuid')
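The recurring `.replace(tzinfo=iso8601.iso8601.Utc())` calls in the diff above exist because a naive `datetime` (no `tzinfo`) never compares equal to a timezone-aware one, so the expected dicts must attach UTC before being asserted against the object's loaded values. A minimal, dependency-free sketch of that pitfall (using the stdlib `datetime.timezone.utc` in place of `iso8601.iso8601.Utc()`, which behaves equivalently for UTC):

```python
import datetime

# Naive datetime, as returned by datetime.utcnow() in the tests above.
naive = datetime.datetime(2016, 12, 14, 20, 13, 23)

# Aware counterpart: same wall-clock value, but tagged as UTC.
aware = naive.replace(tzinfo=datetime.timezone.utc)

# Equality between naive and aware datetimes is always False,
# which is exactly why the expected values get tzinfo attached.
assert naive != aware

# Once both sides carry the same tzinfo, equality holds.
assert aware == naive.replace(tzinfo=datetime.timezone.utc)
```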


@@ -13,6 +13,9 @@
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import iso8601
import mock
from watcher.common import exception
@@ -109,6 +112,9 @@ class TestActionPlanObject(base.DbTestCase):
@mock.patch.object(db_api.Connection, 'get_action_plan_by_uuid')
def test_save(self, mock_get_action_plan, mock_update_action_plan):
mock_get_action_plan.return_value = self.fake_action_plan
fake_saved_action_plan = self.fake_action_plan.copy()
fake_saved_action_plan['deleted_at'] = datetime.datetime.utcnow()
mock_update_action_plan.return_value = fake_saved_action_plan
uuid = self.fake_action_plan['uuid']
action_plan = objects.ActionPlan.get_by_uuid(
self.context, uuid, eager=self.eager)
@@ -146,7 +152,8 @@ class TestCreateDeleteActionPlanObject(base.DbTestCase):
super(TestCreateDeleteActionPlanObject, self).setUp()
self.fake_strategy = utils.create_test_strategy(name="DUMMY")
self.fake_audit = utils.create_test_audit()
self.fake_action_plan = utils.get_test_action_plan()
self.fake_action_plan = utils.get_test_action_plan(
created_at=datetime.datetime.utcnow())
@mock.patch.object(db_api.Connection, 'create_action_plan')
def test_create(self, mock_create_action_plan):
@@ -154,8 +161,10 @@ class TestCreateDeleteActionPlanObject(base.DbTestCase):
action_plan = objects.ActionPlan(
self.context, **self.fake_action_plan)
action_plan.create()
mock_create_action_plan.assert_called_once_with(
self.fake_action_plan)
expected_action_plan = self.fake_action_plan.copy()
expected_action_plan['created_at'] = expected_action_plan[
'created_at'].replace(tzinfo=iso8601.iso8601.Utc())
mock_create_action_plan.assert_called_once_with(expected_action_plan)
self.assertEqual(self.context, action_plan._context)
@mock.patch.multiple(
@@ -178,7 +187,20 @@ class TestCreateDeleteActionPlanObject(base.DbTestCase):
m_get_efficacy_indicator_list = get_efficacy_indicator_list
m_soft_delete_efficacy_indicator = soft_delete_efficacy_indicator
m_update_action_plan = update_action_plan
m_get_action_plan.return_value = self.fake_action_plan
fake_deleted_action_plan = self.fake_action_plan.copy()
fake_deleted_action_plan['deleted_at'] = datetime.datetime.utcnow()
m_update_action_plan.return_value = fake_deleted_action_plan
m_soft_delete_action_plan.return_value = fake_deleted_action_plan
expected_action_plan = fake_deleted_action_plan.copy()
expected_action_plan['created_at'] = expected_action_plan[
'created_at'].replace(tzinfo=iso8601.iso8601.Utc())
expected_action_plan['deleted_at'] = expected_action_plan[
'deleted_at'].replace(tzinfo=iso8601.iso8601.Utc())
del expected_action_plan['audit']
del expected_action_plan['strategy']
m_get_efficacy_indicator_list.return_value = [efficacy_indicator]
action_plan = objects.ActionPlan.get_by_uuid(self.context, uuid)
action_plan.soft_delete()
@@ -193,7 +215,9 @@ class TestCreateDeleteActionPlanObject(base.DbTestCase):
efficacy_indicator['uuid'])
m_update_action_plan.assert_called_once_with(
uuid, {'state': objects.action_plan.State.DELETED})
self.assertEqual(self.context, action_plan._context)
self.assertEqual(expected_action_plan, action_plan.as_dict())
@mock.patch.multiple(
db_api.Connection,


@@ -13,6 +13,9 @@
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import iso8601
import mock
from watcher.common import exception
@@ -37,14 +40,18 @@ class TestAuditObject(base.DbTestCase):
('non_eager', dict(
eager=False,
fake_audit=utils.get_test_audit(
created_at=datetime.datetime.utcnow(),
goal_id=goal_id))),
('eager_with_non_eager_load', dict(
eager=True,
fake_audit=utils.get_test_audit(
created_at=datetime.datetime.utcnow(),
goal_id=goal_id))),
('eager_with_eager_load', dict(
eager=True,
fake_audit=utils.get_test_audit(goal_id=goal_id, goal=goal_data))),
fake_audit=utils.get_test_audit(
created_at=datetime.datetime.utcnow(),
goal_id=goal_id, goal=goal_data))),
]
def setUp(self):
@@ -116,6 +123,17 @@ class TestAuditObject(base.DbTestCase):
@mock.patch.object(db_api.Connection, 'get_audit_by_uuid')
def test_save(self, mock_get_audit, mock_update_audit):
mock_get_audit.return_value = self.fake_audit
fake_saved_audit = self.fake_audit.copy()
fake_saved_audit['state'] = objects.audit.State.SUCCEEDED
fake_saved_audit['updated_at'] = datetime.datetime.utcnow()
mock_update_audit.return_value = fake_saved_audit
expected_audit = fake_saved_audit.copy()
expected_audit['created_at'] = expected_audit['created_at'].replace(
tzinfo=iso8601.iso8601.Utc())
expected_audit['updated_at'] = expected_audit['updated_at'].replace(
tzinfo=iso8601.iso8601.Utc())
uuid = self.fake_audit['uuid']
audit = objects.Audit.get_by_uuid(self.context, uuid, eager=self.eager)
audit.state = objects.audit.State.SUCCEEDED
@@ -129,6 +147,11 @@ class TestAuditObject(base.DbTestCase):
self.eager_load_audit_assert(audit, self.fake_goal)
self.m_send_update.assert_called_once_with(
self.context, audit, old_state=self.fake_audit['state'])
self.assertEqual(
{k: v for k, v in expected_audit.items()
if k not in audit.object_fields},
{k: v for k, v in audit.as_dict().items()
if k not in audit.object_fields})
@mock.patch.object(db_api.Connection, 'get_audit_by_uuid')
def test_refresh(self, mock_get_audit):
@@ -160,14 +183,18 @@ class TestCreateDeleteAuditObject(base.DbTestCase):
self.goal_id = 1
self.goal = utils.create_test_goal(id=self.goal_id, name="DUMMY")
self.fake_audit = utils.get_test_audit(goal_id=self.goal_id)
self.fake_audit = utils.get_test_audit(
goal_id=self.goal_id, created_at=datetime.datetime.utcnow())
@mock.patch.object(db_api.Connection, 'create_audit')
def test_create(self, mock_create_audit):
mock_create_audit.return_value = self.fake_audit
audit = objects.Audit(self.context, **self.fake_audit)
audit.create()
mock_create_audit.assert_called_once_with(self.fake_audit)
expected_audit = self.fake_audit.copy()
expected_audit['created_at'] = expected_audit['created_at'].replace(
tzinfo=iso8601.iso8601.Utc())
mock_create_audit.assert_called_once_with(expected_audit)
self.assertEqual(self.context, audit._context)
@mock.patch.object(db_api.Connection, 'update_audit')
@@ -176,13 +203,27 @@ class TestCreateDeleteAuditObject(base.DbTestCase):
def test_soft_delete(self, mock_get_audit,
mock_soft_delete_audit, mock_update_audit):
mock_get_audit.return_value = self.fake_audit
fake_deleted_audit = self.fake_audit.copy()
fake_deleted_audit['deleted_at'] = datetime.datetime.utcnow()
mock_soft_delete_audit.return_value = fake_deleted_audit
mock_update_audit.return_value = fake_deleted_audit
expected_audit = fake_deleted_audit.copy()
expected_audit['created_at'] = expected_audit['created_at'].replace(
tzinfo=iso8601.iso8601.Utc())
expected_audit['deleted_at'] = expected_audit['deleted_at'].replace(
tzinfo=iso8601.iso8601.Utc())
del expected_audit['goal']
del expected_audit['strategy']
uuid = self.fake_audit['uuid']
audit = objects.Audit.get_by_uuid(self.context, uuid, eager=True)
audit = objects.Audit.get_by_uuid(self.context, uuid, eager=False)
audit.soft_delete()
mock_get_audit.assert_called_once_with(self.context, uuid, eager=True)
mock_get_audit.assert_called_once_with(self.context, uuid, eager=False)
mock_soft_delete_audit.assert_called_once_with(uuid)
mock_update_audit.assert_called_once_with(uuid, {'state': 'DELETED'})
self.assertEqual(self.context, audit._context)
self.assertEqual(expected_audit, audit.as_dict())
@mock.patch.object(db_api.Connection, 'destroy_audit')
@mock.patch.object(db_api.Connection, 'get_audit_by_uuid')
@@ -216,14 +257,17 @@ class TestAuditObjectSendNotifications(base.DbTestCase):
self.m_notifier = self.m_get_notifier.return_value
self.addCleanup(p_get_notifier.stop)
@mock.patch.object(db_api.Connection, 'update_audit', mock.Mock())
@mock.patch.object(db_api.Connection, 'update_audit')
@mock.patch.object(db_api.Connection, 'get_audit_by_uuid')
def test_send_update_notification(self, m_get_audit):
def test_send_update_notification(self, m_get_audit, m_update_audit):
fake_audit = utils.get_test_audit(
goal=self.fake_goal.as_dict(),
strategy_id=self.fake_strategy.id,
strategy=self.fake_strategy.as_dict())
m_get_audit.return_value = fake_audit
fake_saved_audit = self.fake_audit.copy()
fake_saved_audit['state'] = objects.audit.State.SUCCEEDED
m_update_audit.return_value = fake_saved_audit
uuid = fake_audit['uuid']
audit = objects.Audit.get_by_uuid(self.context, uuid, eager=True)
@@ -249,17 +293,25 @@ class TestAuditObjectSendNotifications(base.DbTestCase):
self.assertEqual('audit.create',
self.m_notifier.info.call_args[1]['event_type'])
@mock.patch.object(db_api.Connection, 'soft_delete_audit', mock.Mock())
@mock.patch.object(db_api.Connection, 'update_audit', mock.Mock())
@mock.patch.object(db_api.Connection, 'update_audit')
@mock.patch.object(db_api.Connection, 'soft_delete_audit')
@mock.patch.object(db_api.Connection, 'get_audit_by_uuid')
def test_send_delete_notification(self, m_get_audit):
def test_send_delete_notification(
self, m_get_audit, m_soft_delete_audit, m_update_audit):
fake_audit = utils.get_test_audit(
goal=self.fake_goal.as_dict(),
strategy_id=self.fake_strategy.id,
strategy=self.fake_strategy.as_dict())
m_get_audit.return_value = fake_audit
uuid = fake_audit['uuid']
fake_deleted_audit = self.fake_audit.copy()
fake_deleted_audit['deleted_at'] = datetime.datetime.utcnow()
expected_audit = fake_deleted_audit.copy()
expected_audit['deleted_at'] = expected_audit['deleted_at'].replace(
tzinfo=iso8601.iso8601.Utc())
m_soft_delete_audit.return_value = fake_deleted_audit
m_update_audit.return_value = fake_deleted_audit
uuid = fake_audit['uuid']
audit = objects.Audit.get_by_uuid(self.context, uuid, eager=True)
audit.soft_delete()


@@ -13,6 +13,9 @@
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import iso8601
import mock
from watcher.common import exception
@@ -34,14 +37,17 @@ class TestAuditTemplateObject(base.DbTestCase):
('non_eager', dict(
eager=False,
fake_audit_template=utils.get_test_audit_template(
created_at=datetime.datetime.utcnow(),
goal_id=goal_id))),
('eager_with_non_eager_load', dict(
eager=True,
fake_audit_template=utils.get_test_audit_template(
created_at=datetime.datetime.utcnow(),
goal_id=goal_id))),
('eager_with_eager_load', dict(
eager=True,
fake_audit_template=utils.get_test_audit_template(
created_at=datetime.datetime.utcnow(),
goal_id=goal_id, goal=goal_data))),
]
@@ -120,6 +126,9 @@ class TestAuditTemplateObject(base.DbTestCase):
@mock.patch.object(db_api.Connection, 'get_audit_template_by_uuid')
def test_save(self, mock_get_audit_template, mock_update_audit_template):
mock_get_audit_template.return_value = self.fake_audit_template
fake_saved_audit_template = self.fake_audit_template.copy()
fake_saved_audit_template['updated_at'] = datetime.datetime.utcnow()
mock_update_audit_template.return_value = fake_saved_audit_template
uuid = self.fake_audit_template['uuid']
audit_template = objects.AuditTemplate.get_by_uuid(
self.context, uuid, eager=self.eager)
@@ -155,7 +164,8 @@ class TestCreateDeleteAuditTemplateObject(base.DbTestCase):
def setUp(self):
super(TestCreateDeleteAuditTemplateObject, self).setUp()
self.fake_audit_template = utils.get_test_audit_template()
self.fake_audit_template = utils.get_test_audit_template(
created_at=datetime.datetime.utcnow())
@mock.patch.object(db_api.Connection, 'create_audit_template')
def test_create(self, mock_create_audit_template):
@@ -165,22 +175,38 @@ class TestCreateDeleteAuditTemplateObject(base.DbTestCase):
audit_template = objects.AuditTemplate(
self.context, **self.fake_audit_template)
audit_template.create()
expected_audit_template = self.fake_audit_template.copy()
expected_audit_template['created_at'] = expected_audit_template[
'created_at'].replace(tzinfo=iso8601.iso8601.Utc())
mock_create_audit_template.assert_called_once_with(
self.fake_audit_template)
expected_audit_template)
self.assertEqual(self.context, audit_template._context)
@mock.patch.object(db_api.Connection, 'soft_delete_audit_template')
@mock.patch.object(db_api.Connection, 'get_audit_template_by_uuid')
def test_soft_delete(self, mock_get_audit_template,
mock_soft_delete_audit_template):
mock_get_audit_template.return_value = self.fake_audit_template
def test_soft_delete(self, m_get_audit_template,
m_soft_delete_audit_template):
m_get_audit_template.return_value = self.fake_audit_template
fake_deleted_audit_template = self.fake_audit_template.copy()
fake_deleted_audit_template['deleted_at'] = datetime.datetime.utcnow()
m_soft_delete_audit_template.return_value = fake_deleted_audit_template
expected_audit_template = fake_deleted_audit_template.copy()
expected_audit_template['created_at'] = expected_audit_template[
'created_at'].replace(tzinfo=iso8601.iso8601.Utc())
expected_audit_template['deleted_at'] = expected_audit_template[
'deleted_at'].replace(tzinfo=iso8601.iso8601.Utc())
del expected_audit_template['goal']
del expected_audit_template['strategy']
uuid = self.fake_audit_template['uuid']
audit_template = objects.AuditTemplate.get_by_uuid(self.context, uuid)
audit_template.soft_delete()
mock_get_audit_template.assert_called_once_with(
m_get_audit_template.assert_called_once_with(
self.context, uuid, eager=False)
mock_soft_delete_audit_template.assert_called_once_with(uuid)
m_soft_delete_audit_template.assert_called_once_with(uuid)
self.assertEqual(self.context, audit_template._context)
self.assertEqual(expected_audit_template, audit_template.as_dict())
@mock.patch.object(db_api.Connection, 'destroy_audit_template')
@mock.patch.object(db_api.Connection, 'get_audit_template_by_uuid')


@@ -13,8 +13,12 @@
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import iso8601
import mock
from watcher.db.sqlalchemy import api as db_api
from watcher import objects
from watcher.tests.db import base
from watcher.tests.db import utils
@@ -24,115 +28,116 @@ class TestGoalObject(base.DbTestCase):
def setUp(self):
super(TestGoalObject, self).setUp()
self.fake_goal = utils.get_test_goal()
self.fake_goal = utils.get_test_goal(
created_at=datetime.datetime.utcnow())
def test_get_by_id(self):
@mock.patch.object(db_api.Connection, 'get_goal_by_id')
def test_get_by_id(self, mock_get_goal):
goal_id = self.fake_goal['id']
with mock.patch.object(self.dbapi, 'get_goal_by_id',
autospec=True) as mock_get_goal:
mock_get_goal.return_value = self.fake_goal
goal = objects.Goal.get(self.context, goal_id)
mock_get_goal.assert_called_once_with(self.context, goal_id)
self.assertEqual(self.context, goal._context)
mock_get_goal.return_value = self.fake_goal
goal = objects.Goal.get(self.context, goal_id)
mock_get_goal.assert_called_once_with(self.context, goal_id)
self.assertEqual(self.context, goal._context)
def test_get_by_uuid(self):
@mock.patch.object(db_api.Connection, 'get_goal_by_uuid')
def test_get_by_uuid(self, mock_get_goal):
uuid = self.fake_goal['uuid']
with mock.patch.object(self.dbapi, 'get_goal_by_uuid',
autospec=True) as mock_get_goal:
mock_get_goal.return_value = self.fake_goal
goal = objects.Goal.get(self.context, uuid)
mock_get_goal.assert_called_once_with(self.context, uuid)
self.assertEqual(self.context, goal._context)
mock_get_goal.return_value = self.fake_goal
goal = objects.Goal.get(self.context, uuid)
mock_get_goal.assert_called_once_with(self.context, uuid)
self.assertEqual(self.context, goal._context)
def test_get_by_name(self):
@mock.patch.object(db_api.Connection, 'get_goal_by_name')
def test_get_by_name(self, mock_get_goal):
name = self.fake_goal['name']
with mock.patch.object(self.dbapi, 'get_goal_by_name',
autospec=True) as mock_get_goal:
mock_get_goal.return_value = self.fake_goal
goal = objects.Goal.get_by_name(
self.context,
name)
mock_get_goal.assert_called_once_with(self.context, name)
self.assertEqual(self.context, goal._context)
mock_get_goal.return_value = self.fake_goal
goal = objects.Goal.get_by_name(self.context, name)
mock_get_goal.assert_called_once_with(self.context, name)
self.assertEqual(self.context, goal._context)
def test_list(self):
with mock.patch.object(self.dbapi, 'get_goal_list',
autospec=True) as mock_get_list:
mock_get_list.return_value = [self.fake_goal]
goals = objects.Goal.list(self.context)
self.assertEqual(1, mock_get_list.call_count)
self.assertEqual(1, len(goals))
self.assertIsInstance(goals[0], objects.Goal)
self.assertEqual(self.context, goals[0]._context)
@mock.patch.object(db_api.Connection, 'get_goal_list')
def test_list(self, mock_get_list):
mock_get_list.return_value = [self.fake_goal]
goals = objects.Goal.list(self.context)
self.assertEqual(1, mock_get_list.call_count)
self.assertEqual(1, len(goals))
self.assertIsInstance(goals[0], objects.Goal)
self.assertEqual(self.context, goals[0]._context)
def test_create(self):
with mock.patch.object(self.dbapi, 'create_goal',
autospec=True) as mock_create_goal:
mock_create_goal.return_value = self.fake_goal
goal = objects.Goal(self.context, **self.fake_goal)
goal.create()
mock_create_goal.assert_called_once_with(self.fake_goal)
self.assertEqual(self.context, goal._context)
@mock.patch.object(db_api.Connection, 'create_goal')
def test_create(self, mock_create_goal):
mock_create_goal.return_value = self.fake_goal
goal = objects.Goal(self.context, **self.fake_goal)
goal.create()
expected_goal = self.fake_goal.copy()
expected_goal['created_at'] = expected_goal['created_at'].replace(
tzinfo=iso8601.iso8601.Utc())
mock_create_goal.assert_called_once_with(expected_goal)
self.assertEqual(self.context, goal._context)
def test_destroy(self):
@mock.patch.object(db_api.Connection, 'destroy_goal')
@mock.patch.object(db_api.Connection, 'get_goal_by_id')
def test_destroy(self, mock_get_goal, mock_destroy_goal):
goal_id = self.fake_goal['id']
with mock.patch.object(self.dbapi, 'get_goal_by_id',
autospec=True) as mock_get_goal:
mock_get_goal.return_value = self.fake_goal
with mock.patch.object(self.dbapi, 'destroy_goal',
autospec=True) \
as mock_destroy_goal:
goal = objects.Goal.get_by_id(self.context, goal_id)
goal.destroy()
mock_get_goal.assert_called_once_with(
self.context, goal_id)
mock_destroy_goal.assert_called_once_with(goal_id)
self.assertEqual(self.context, goal._context)
mock_get_goal.return_value = self.fake_goal
goal = objects.Goal.get_by_id(self.context, goal_id)
goal.destroy()
mock_get_goal.assert_called_once_with(
self.context, goal_id)
mock_destroy_goal.assert_called_once_with(goal_id)
self.assertEqual(self.context, goal._context)
def test_save(self):
goal_id = self.fake_goal['id']
with mock.patch.object(self.dbapi, 'get_goal_by_id',
autospec=True) as mock_get_goal:
mock_get_goal.return_value = self.fake_goal
with mock.patch.object(self.dbapi, 'update_goal',
autospec=True) as mock_update_goal:
goal = objects.Goal.get_by_id(self.context, goal_id)
goal.display_name = 'DUMMY'
goal.save()
@mock.patch.object(db_api.Connection, 'update_goal')
@mock.patch.object(db_api.Connection, 'get_goal_by_uuid')
def test_save(self, mock_get_goal, mock_update_goal):
mock_get_goal.return_value = self.fake_goal
goal_uuid = self.fake_goal['uuid']
fake_saved_goal = self.fake_goal.copy()
fake_saved_goal['updated_at'] = datetime.datetime.utcnow()
mock_update_goal.return_value = fake_saved_goal
mock_get_goal.assert_called_once_with(self.context, goal_id)
mock_update_goal.assert_called_once_with(
goal_id, {'display_name': 'DUMMY'})
self.assertEqual(self.context, goal._context)
goal = objects.Goal.get_by_uuid(self.context, goal_uuid)
goal.display_name = 'DUMMY'
goal.save()
def test_refresh(self):
uuid = self.fake_goal['uuid']
mock_get_goal.assert_called_once_with(self.context, goal_uuid)
mock_update_goal.assert_called_once_with(
goal_uuid, {'display_name': 'DUMMY'})
self.assertEqual(self.context, goal._context)
@mock.patch.object(db_api.Connection, 'get_goal_by_uuid')
def test_refresh(self, mock_get_goal):
fake_goal2 = utils.get_test_goal(name="BALANCE_LOAD")
returns = [self.fake_goal, fake_goal2]
mock_get_goal.side_effect = returns
uuid = self.fake_goal['uuid']
expected = [mock.call(self.context, uuid),
mock.call(self.context, uuid)]
with mock.patch.object(self.dbapi, 'get_goal_by_uuid',
side_effect=returns,
autospec=True) as mock_get_goal:
goal = objects.Goal.get(self.context, uuid)
self.assertEqual("TEST", goal.name)
goal.refresh()
self.assertEqual("BALANCE_LOAD", goal.name)
self.assertEqual(expected, mock_get_goal.call_args_list)
self.assertEqual(self.context, goal._context)
goal = objects.Goal.get(self.context, uuid)
self.assertEqual("TEST", goal.name)
goal.refresh()
self.assertEqual("BALANCE_LOAD", goal.name)
self.assertEqual(expected, mock_get_goal.call_args_list)
self.assertEqual(self.context, goal._context)
@mock.patch.object(db_api.Connection, 'soft_delete_goal')
@mock.patch.object(db_api.Connection, 'get_goal_by_uuid')
def test_soft_delete(self, mock_get_goal, mock_soft_delete_goal):
mock_get_goal.return_value = self.fake_goal
fake_deleted_goal = self.fake_goal.copy()
fake_deleted_goal['deleted_at'] = datetime.datetime.utcnow()
mock_soft_delete_goal.return_value = fake_deleted_goal
expected_goal = fake_deleted_goal.copy()
expected_goal['created_at'] = expected_goal['created_at'].replace(
tzinfo=iso8601.iso8601.Utc())
expected_goal['deleted_at'] = expected_goal['deleted_at'].replace(
tzinfo=iso8601.iso8601.Utc())
def test_soft_delete(self):
uuid = self.fake_goal['uuid']
with mock.patch.object(self.dbapi, 'get_goal_by_uuid',
autospec=True) as mock_get_goal:
mock_get_goal.return_value = self.fake_goal
with mock.patch.object(self.dbapi, 'soft_delete_goal',
autospec=True) \
as mock_soft_delete_goal:
goal = objects.Goal.get_by_uuid(
self.context, uuid)
goal.soft_delete()
mock_get_goal.assert_called_once_with(
self.context, uuid)
mock_soft_delete_goal.assert_called_once_with(uuid)
self.assertEqual(self.context, goal._context)
goal = objects.Goal.get_by_uuid(self.context, uuid)
goal.soft_delete()
mock_get_goal.assert_called_once_with(self.context, uuid)
mock_soft_delete_goal.assert_called_once_with(uuid)
self.assertEqual(self.context, goal._context)
self.assertEqual(expected_goal, goal.as_dict())
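The goal-object diff above also flattens nested `with mock.patch.object(self.dbapi, ...)` context managers into `@mock.patch.object(db_api.Connection, ...)` decorators. A standalone sketch of the decorator form (the `Db` class here is a hypothetical stand-in for the real `db_api.Connection`): decorators stack bottom-up and each patched target arrives as a positional mock argument, removing one level of indentation per patch.

```python
import unittest
from unittest import mock


class Db(object):
    """Hypothetical stand-in for db_api.Connection."""
    def get_goal_by_id(self, context, goal_id):
        raise RuntimeError("would hit the real database")


class TestDecoratorStyle(unittest.TestCase):
    # The decorator replaces Db.get_goal_by_id for the duration of the
    # test and injects the mock as the first extra argument.
    @mock.patch.object(Db, 'get_goal_by_id')
    def test_get(self, m_get_goal):
        m_get_goal.return_value = {'id': 1, 'name': 'TEST'}
        goal = Db().get_goal_by_id(None, 1)
        self.assertEqual('TEST', goal['name'])
        m_get_goal.assert_called_once_with(None, 1)
```

The behavior is identical to the `with` form; the decorator style simply keeps test bodies flat, which matters once three or four patches are stacked as in the `soft_delete` tests above.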


@@ -413,7 +413,7 @@ expected_object_fingerprints = {
'Strategy': '1.1-73f164491bdd4c034f48083a51bdeb7b',
'AuditTemplate': '1.1-b291973ffc5efa2c61b24fe34fdccc0b',
'Audit': '1.1-dc246337c8d511646cb537144fcb0f3a',
'ActionPlan': '1.1-299bd9c76f2402a0b2167f8e4d744a05',
'ActionPlan': '1.2-42709eadf6b2bd228ea87817e8c3e31e',
'Action': '1.1-52c77e4db4ce0aa9480c9760faec61a1',
'EfficacyIndicator': '1.0-655b71234a82bc7478aff964639c4bb0',
'ScoringEngine': '1.0-4abbe833544000728e17bd9e83f97576',
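The fingerprint bump above (`ActionPlan` moving from `1.1-…` to `1.2-…`) comes from a guard table of `<VERSION>-<hash of field definitions>` strings: any change to an object's version or schema produces a new fingerprint, forcing the developer to update the table deliberately. The hashing below is purely illustrative and is not oslo.versionedobjects' exact scheme:

```python
import hashlib


def fingerprint(version, fields):
    # Illustrative only: hash a deterministic rendering of the field
    # definitions and pair it with the declared object version.
    digest = hashlib.md5(
        repr(sorted(fields.items())).encode('utf-8')).hexdigest()
    return '%s-%s' % (version, digest)


# Same schema, same version -> stable fingerprint.
v1 = fingerprint('1.1', {'uuid': 'UUIDField', 'state': 'StringField'})

# Bumped version and/or changed fields -> new fingerprint, so the
# expected_object_fingerprints entry must be edited by hand.
v2 = fingerprint('1.2', {'uuid': 'UUIDField', 'state': 'StringField',
                         'global_efficacy': 'FlexibleDictField'})
assert v1 != v2
```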


@@ -13,7 +13,12 @@
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import iso8601
import mock
from watcher.db.sqlalchemy import api as db_api
from watcher import objects
from watcher.tests.db import base
from watcher.tests.db import utils
@@ -23,126 +28,125 @@ class TestScoringEngineObject(base.DbTestCase):
def setUp(self):
super(TestScoringEngineObject, self).setUp()
self.fake_scoring_engine = utils.get_test_scoring_engine()
self.fake_scoring_engine = utils.get_test_scoring_engine(
created_at=datetime.datetime.utcnow())
def test_get_by_id(self):
@mock.patch.object(db_api.Connection, 'get_scoring_engine_by_id')
def test_get_by_id(self, mock_get_scoring_engine):
scoring_engine_id = self.fake_scoring_engine['id']
with mock.patch.object(self.dbapi, 'get_scoring_engine_by_id',
autospec=True) as mock_get_scoring_engine:
mock_get_scoring_engine.return_value = self.fake_scoring_engine
scoring_engine = objects.ScoringEngine.get_by_id(
self.context, scoring_engine_id)
mock_get_scoring_engine.assert_called_once_with(self.context,
scoring_engine_id)
self.assertEqual(self.context, scoring_engine._context)
mock_get_scoring_engine.return_value = self.fake_scoring_engine
scoring_engine = objects.ScoringEngine.get_by_id(
self.context, scoring_engine_id)
mock_get_scoring_engine.assert_called_once_with(
self.context, scoring_engine_id)
self.assertEqual(self.context, scoring_engine._context)
def test_get_by_uuid(self):
@mock.patch.object(db_api.Connection, 'get_scoring_engine_by_uuid')
def test_get_by_uuid(self, mock_get_scoring_engine):
se_uuid = self.fake_scoring_engine['uuid']
with mock.patch.object(self.dbapi, 'get_scoring_engine_by_uuid',
autospec=True) as mock_get_scoring_engine:
mock_get_scoring_engine.return_value = self.fake_scoring_engine
scoring_engine = objects.ScoringEngine.get_by_uuid(
self.context, se_uuid)
mock_get_scoring_engine.assert_called_once_with(self.context,
se_uuid)
self.assertEqual(self.context, scoring_engine._context)
mock_get_scoring_engine.return_value = self.fake_scoring_engine
scoring_engine = objects.ScoringEngine.get_by_uuid(
self.context, se_uuid)
mock_get_scoring_engine.assert_called_once_with(
self.context, se_uuid)
self.assertEqual(self.context, scoring_engine._context)
def test_get_by_name(self):
@mock.patch.object(db_api.Connection, 'get_scoring_engine_by_uuid')
def test_get_by_name(self, mock_get_scoring_engine):
scoring_engine_uuid = self.fake_scoring_engine['uuid']
with mock.patch.object(self.dbapi, 'get_scoring_engine_by_uuid',
autospec=True) as mock_get_scoring_engine:
mock_get_scoring_engine.return_value = self.fake_scoring_engine
scoring_engine = objects.ScoringEngine.get(
self.context, scoring_engine_uuid)
mock_get_scoring_engine.assert_called_once_with(
self.context, scoring_engine_uuid)
self.assertEqual(self.context, scoring_engine._context)
mock_get_scoring_engine.return_value = self.fake_scoring_engine
scoring_engine = objects.ScoringEngine.get(
self.context, scoring_engine_uuid)
mock_get_scoring_engine.assert_called_once_with(
self.context, scoring_engine_uuid)
self.assertEqual(self.context, scoring_engine._context)
def test_list(self):
with mock.patch.object(self.dbapi, 'get_scoring_engine_list',
autospec=True) as mock_get_list:
mock_get_list.return_value = [self.fake_scoring_engine]
scoring_engines = objects.ScoringEngine.list(self.context)
self.assertEqual(1, mock_get_list.call_count, 1)
self.assertEqual(1, len(scoring_engines))
self.assertIsInstance(scoring_engines[0], objects.ScoringEngine)
self.assertEqual(self.context, scoring_engines[0]._context)
@mock.patch.object(db_api.Connection, 'get_scoring_engine_list')
def test_list(self, mock_get_list):
mock_get_list.return_value = [self.fake_scoring_engine]
scoring_engines = objects.ScoringEngine.list(self.context)
self.assertEqual(1, mock_get_list.call_count)
self.assertEqual(1, len(scoring_engines))
self.assertIsInstance(scoring_engines[0], objects.ScoringEngine)
self.assertEqual(self.context, scoring_engines[0]._context)
def test_create(self):
with mock.patch.object(self.dbapi, 'create_scoring_engine',
autospec=True) as mock_create_scoring_engine:
mock_create_scoring_engine.return_value = self.fake_scoring_engine
scoring_engine = objects.ScoringEngine(
self.context, **self.fake_scoring_engine)
@mock.patch.object(db_api.Connection, 'create_scoring_engine')
def test_create(self, mock_create_scoring_engine):
mock_create_scoring_engine.return_value = self.fake_scoring_engine
scoring_engine = objects.ScoringEngine(
self.context, **self.fake_scoring_engine)
scoring_engine.create()
expected_scoring_engine = self.fake_scoring_engine.copy()
expected_scoring_engine['created_at'] = expected_scoring_engine[
'created_at'].replace(tzinfo=iso8601.iso8601.Utc())
mock_create_scoring_engine.assert_called_once_with(
expected_scoring_engine)
self.assertEqual(self.context, scoring_engine._context)
scoring_engine.create()
mock_create_scoring_engine.assert_called_once_with(
self.fake_scoring_engine)
self.assertEqual(self.context, scoring_engine._context)
def test_destroy(self):
@mock.patch.object(db_api.Connection, 'destroy_scoring_engine')
@mock.patch.object(db_api.Connection, 'get_scoring_engine_by_id')
def test_destroy(self, mock_get_scoring_engine,
mock_destroy_scoring_engine):
mock_get_scoring_engine.return_value = self.fake_scoring_engine
_id = self.fake_scoring_engine['id']
with mock.patch.object(self.dbapi, 'get_scoring_engine_by_id',
autospec=True) as mock_get_scoring_engine:
mock_get_scoring_engine.return_value = self.fake_scoring_engine
with mock.patch.object(
self.dbapi, 'destroy_scoring_engine',
autospec=True) as mock_destroy_scoring_engine:
scoring_engine = objects.ScoringEngine.get_by_id(
self.context, _id)
scoring_engine.destroy()
mock_get_scoring_engine.assert_called_once_with(
self.context, _id)
mock_destroy_scoring_engine.assert_called_once_with(_id)
-                self.assertEqual(self.context, scoring_engine._context)
+        scoring_engine = objects.ScoringEngine.get_by_id(self.context, _id)
+        scoring_engine.destroy()
+        mock_get_scoring_engine.assert_called_once_with(self.context, _id)
+        mock_destroy_scoring_engine.assert_called_once_with(_id)
+        self.assertEqual(self.context, scoring_engine._context)

-    def test_save(self):
-        _id = self.fake_scoring_engine['id']
-        with mock.patch.object(self.dbapi, 'get_scoring_engine_by_id',
-                               autospec=True) as mock_get_scoring_engine:
-            mock_get_scoring_engine.return_value = self.fake_scoring_engine
-            with mock.patch.object(
-                    self.dbapi, 'update_scoring_engine',
-                    autospec=True) as mock_update_scoring_engine:
-                scoring_engine = objects.ScoringEngine.get_by_id(
-                    self.context, _id)
-                scoring_engine.description = 'UPDATED DESCRIPTION'
-                scoring_engine.save()
-                mock_get_scoring_engine.assert_called_once_with(
-                    self.context, _id)
-                mock_update_scoring_engine.assert_called_once_with(
-                    _id, {'description': 'UPDATED DESCRIPTION'})
-                self.assertEqual(self.context, scoring_engine._context)
+    @mock.patch.object(db_api.Connection, 'update_scoring_engine')
+    @mock.patch.object(db_api.Connection, 'get_scoring_engine_by_uuid')
+    def test_save(self, mock_get_scoring_engine, mock_update_scoring_engine):
+        mock_get_scoring_engine.return_value = self.fake_scoring_engine
+        fake_saved_scoring_engine = self.fake_scoring_engine.copy()
+        fake_saved_scoring_engine['updated_at'] = datetime.datetime.utcnow()
+        mock_update_scoring_engine.return_value = fake_saved_scoring_engine
+        uuid = self.fake_scoring_engine['uuid']
+        scoring_engine = objects.ScoringEngine.get_by_uuid(self.context, uuid)
+        scoring_engine.description = 'UPDATED DESCRIPTION'
+        scoring_engine.save()
+        mock_get_scoring_engine.assert_called_once_with(self.context, uuid)
+        mock_update_scoring_engine.assert_called_once_with(
+            uuid, {'description': 'UPDATED DESCRIPTION'})
+        self.assertEqual(self.context, scoring_engine._context)

-    def test_refresh(self):
-        _id = self.fake_scoring_engine['id']
-        returns = [
-            dict(self.fake_scoring_engine, description="first description"),
-            dict(self.fake_scoring_engine, description="second description")]
-        expected = [mock.call(self.context, _id),
-                    mock.call(self.context, _id)]
-        with mock.patch.object(self.dbapi, 'get_scoring_engine_by_id',
-                               side_effect=returns,
-                               autospec=True) as mock_get_scoring_engine:
-            scoring_engine = objects.ScoringEngine.get_by_id(self.context, _id)
-            self.assertEqual("first description", scoring_engine.description)
-            scoring_engine.refresh()
-            self.assertEqual("second description", scoring_engine.description)
-            self.assertEqual(expected, mock_get_scoring_engine.call_args_list)
-            self.assertEqual(self.context, scoring_engine._context)
+    @mock.patch.object(db_api.Connection, 'get_scoring_engine_by_id')
+    def test_refresh(self, mock_get_scoring_engine):
+        returns = [
+            dict(self.fake_scoring_engine, description="first description"),
+            dict(self.fake_scoring_engine, description="second description")]
+        mock_get_scoring_engine.side_effect = returns
+        _id = self.fake_scoring_engine['id']
+        expected = [mock.call(self.context, _id),
+                    mock.call(self.context, _id)]
+        scoring_engine = objects.ScoringEngine.get_by_id(self.context, _id)
+        self.assertEqual("first description", scoring_engine.description)
+        scoring_engine.refresh()
+        self.assertEqual("second description", scoring_engine.description)
+        self.assertEqual(expected, mock_get_scoring_engine.call_args_list)
+        self.assertEqual(self.context, scoring_engine._context)

-    def test_soft_delete(self):
-        _id = self.fake_scoring_engine['id']
-        with mock.patch.object(self.dbapi, 'get_scoring_engine_by_id',
-                               autospec=True) as mock_get_scoring_engine:
-            mock_get_scoring_engine.return_value = self.fake_scoring_engine
-            with mock.patch.object(self.dbapi, 'soft_delete_scoring_engine',
-                                   autospec=True) as mock_soft_delete:
-                scoring_engine = objects.ScoringEngine.get_by_id(
-                    self.context, _id)
-                scoring_engine.soft_delete()
-                mock_get_scoring_engine.assert_called_once_with(
-                    self.context, _id)
-                mock_soft_delete.assert_called_once_with(_id)
-                self.assertEqual(self.context, scoring_engine._context)
+    @mock.patch.object(db_api.Connection, 'soft_delete_scoring_engine')
+    @mock.patch.object(db_api.Connection, 'get_scoring_engine_by_id')
+    def test_soft_delete(self, mock_get_scoring_engine, mock_soft_delete):
+        mock_get_scoring_engine.return_value = self.fake_scoring_engine
+        fake_deleted_scoring_engine = self.fake_scoring_engine.copy()
+        fake_deleted_scoring_engine['deleted_at'] = datetime.datetime.utcnow()
+        mock_soft_delete.return_value = fake_deleted_scoring_engine
+        expected_scoring_engine = fake_deleted_scoring_engine.copy()
+        expected_scoring_engine['created_at'] = expected_scoring_engine[
+            'created_at'].replace(tzinfo=iso8601.iso8601.Utc())
+        expected_scoring_engine['deleted_at'] = expected_scoring_engine[
+            'deleted_at'].replace(tzinfo=iso8601.iso8601.Utc())
+        _id = self.fake_scoring_engine['id']
+        scoring_engine = objects.ScoringEngine.get_by_id(self.context, _id)
+        scoring_engine.soft_delete()
+        mock_get_scoring_engine.assert_called_once_with(self.context, _id)
+        mock_soft_delete.assert_called_once_with(_id)
+        self.assertEqual(self.context, scoring_engine._context)
+        self.assertEqual(expected_scoring_engine, scoring_engine.as_dict())
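The refactor in the hunks above replaces nested `with mock.patch.object(...)` context managers with stacked `@mock.patch.object` decorators on the test method. A minimal, self-contained sketch of the same pattern, using toy `Connection`/`Repo` classes that are stand-ins rather than Watcher code: note that decorators are applied bottom-up, so the decorator closest to the function supplies the first mock argument, and patching a method on the class records calls without `self`.

```python
# Sketch of the context-manager -> decorator mock refactor shown above.
# `Connection` and `Repo` are hypothetical stand-ins, not Watcher classes.
from unittest import mock


class Connection(object):
    def get_by_id(self, ctx, _id):
        raise RuntimeError("would hit the database")

    def update(self, _id, values):
        raise RuntimeError("would hit the database")


class Repo(object):
    def __init__(self, conn):
        self.conn = conn

    def rename(self, ctx, _id, name):
        record = self.conn.get_by_id(ctx, _id)
        self.conn.update(_id, {'name': name})
        return dict(record, name=name)


# Old style: nested context managers.
def test_rename_with_context_managers():
    with mock.patch.object(Connection, 'get_by_id') as mock_get:
        with mock.patch.object(Connection, 'update') as mock_update:
            mock_get.return_value = {'id': 1, 'name': 'old'}
            result = Repo(Connection()).rename('ctx', 1, 'new')
            mock_get.assert_called_once_with('ctx', 1)
            mock_update.assert_called_once_with(1, {'name': 'new'})
            assert result['name'] == 'new'


# New style: stacked decorators; the innermost decorator (get_by_id)
# is injected as the first mock parameter.
@mock.patch.object(Connection, 'update')
@mock.patch.object(Connection, 'get_by_id')
def test_rename_with_decorators(mock_get, mock_update):
    mock_get.return_value = {'id': 1, 'name': 'old'}
    result = Repo(Connection()).rename('ctx', 1, 'new')
    mock_get.assert_called_once_with('ctx', 1)
    mock_update.assert_called_once_with(1, {'name': 'new'})
    assert result['name'] == 'new'


test_rename_with_context_managers()
test_rename_with_decorators()
```

Either style restores the original methods when the test exits; the decorator form simply saves an indentation level per patched method, which is what the diffs above exploit.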


@@ -13,8 +13,12 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+import datetime
+
+import iso8601
 import mock

+from watcher.db.sqlalchemy import api as db_api
 from watcher import objects
 from watcher.tests.db import base
 from watcher.tests.db import utils
@@ -24,81 +28,89 @@ class TestServiceObject(base.DbTestCase):
     def setUp(self):
         super(TestServiceObject, self).setUp()
-        self.fake_service = utils.get_test_service()
+        self.fake_service = utils.get_test_service(
+            created_at=datetime.datetime.utcnow())

-    def test_get_by_id(self):
+    @mock.patch.object(db_api.Connection, 'get_service_by_id')
+    def test_get_by_id(self, mock_get_service):
         service_id = self.fake_service['id']
-        with mock.patch.object(self.dbapi, 'get_service_by_id',
-                               autospec=True) as mock_get_service:
-            mock_get_service.return_value = self.fake_service
-            service = objects.Service.get(self.context, service_id)
-            mock_get_service.assert_called_once_with(self.context,
-                                                     service_id)
-            self.assertEqual(self.context, service._context)
+        mock_get_service.return_value = self.fake_service
+        service = objects.Service.get(self.context, service_id)
+        mock_get_service.assert_called_once_with(self.context, service_id)
+        self.assertEqual(self.context, service._context)

-    def test_list(self):
-        with mock.patch.object(self.dbapi, 'get_service_list',
-                               autospec=True) as mock_get_list:
-            mock_get_list.return_value = [self.fake_service]
-            services = objects.Service.list(self.context)
-            self.assertEqual(1, mock_get_list.call_count, 1)
-            self.assertEqual(1, len(services))
-            self.assertIsInstance(services[0], objects.Service)
-            self.assertEqual(self.context, services[0]._context)
+    @mock.patch.object(db_api.Connection, 'get_service_list')
+    def test_list(self, mock_get_list):
+        mock_get_list.return_value = [self.fake_service]
+        services = objects.Service.list(self.context)
+        self.assertEqual(1, mock_get_list.call_count, 1)
+        self.assertEqual(1, len(services))
+        self.assertIsInstance(services[0], objects.Service)
+        self.assertEqual(self.context, services[0]._context)

-    def test_create(self):
-        with mock.patch.object(self.dbapi, 'create_service',
-                               autospec=True) as mock_create_service:
-            mock_create_service.return_value = self.fake_service
-            service = objects.Service(self.context, **self.fake_service)
-            fake_service = utils.get_test_service()
-            service.create()
-            mock_create_service.assert_called_once_with(fake_service)
-            self.assertEqual(self.context, service._context)
+    @mock.patch.object(db_api.Connection, 'create_service')
+    def test_create(self, mock_create_service):
+        mock_create_service.return_value = self.fake_service
+        service = objects.Service(self.context, **self.fake_service)
+        expected_service = self.fake_service.copy()
+        expected_service['created_at'] = expected_service[
+            'created_at'].replace(tzinfo=iso8601.iso8601.Utc())
+        service.create()
+        mock_create_service.assert_called_once_with(expected_service)
+        self.assertEqual(self.context, service._context)

-    def test_save(self):
-        _id = self.fake_service['id']
-        with mock.patch.object(self.dbapi, 'get_service_by_id',
-                               autospec=True) as mock_get_service:
-            mock_get_service.return_value = self.fake_service
-            with mock.patch.object(self.dbapi, 'update_service',
-                                   autospec=True) as mock_update_service:
-                service = objects.Service.get(self.context, _id)
-                service.name = 'UPDATED NAME'
-                service.save()
-                mock_get_service.assert_called_once_with(self.context, _id)
-                mock_update_service.assert_called_once_with(
-                    _id, {'name': 'UPDATED NAME'})
-                self.assertEqual(self.context, service._context)
+    @mock.patch.object(db_api.Connection, 'update_service')
+    @mock.patch.object(db_api.Connection, 'get_service_by_id')
+    def test_save(self, mock_get_service, mock_update_service):
+        mock_get_service.return_value = self.fake_service
+        fake_saved_service = self.fake_service.copy()
+        fake_saved_service['updated_at'] = datetime.datetime.utcnow()
+        mock_update_service.return_value = fake_saved_service
+        _id = self.fake_service['id']
+        service = objects.Service.get(self.context, _id)
+        service.name = 'UPDATED NAME'
+        service.save()
+        mock_get_service.assert_called_once_with(self.context, _id)
+        mock_update_service.assert_called_once_with(
+            _id, {'name': 'UPDATED NAME'})
+        self.assertEqual(self.context, service._context)

-    def test_refresh(self):
-        _id = self.fake_service['id']
-        returns = [dict(self.fake_service, name="first name"),
-                   dict(self.fake_service, name="second name")]
-        expected = [mock.call(self.context, _id),
-                    mock.call(self.context, _id)]
-        with mock.patch.object(self.dbapi, 'get_service_by_id',
-                               side_effect=returns,
-                               autospec=True) as mock_get_service:
-            service = objects.Service.get(self.context, _id)
-            self.assertEqual("first name", service.name)
-            service.refresh()
-            self.assertEqual("second name", service.name)
-            self.assertEqual(expected, mock_get_service.call_args_list)
-            self.assertEqual(self.context, service._context)
+    @mock.patch.object(db_api.Connection, 'get_service_by_id')
+    def test_refresh(self, mock_get_service):
+        returns = [dict(self.fake_service, name="first name"),
+                   dict(self.fake_service, name="second name")]
+        mock_get_service.side_effect = returns
+        _id = self.fake_service['id']
+        expected = [mock.call(self.context, _id),
+                    mock.call(self.context, _id)]
+        service = objects.Service.get(self.context, _id)
+        self.assertEqual("first name", service.name)
+        service.refresh()
+        self.assertEqual("second name", service.name)
+        self.assertEqual(expected, mock_get_service.call_args_list)
+        self.assertEqual(self.context, service._context)

-    def test_soft_delete(self):
-        _id = self.fake_service['id']
-        with mock.patch.object(self.dbapi, 'get_service_by_id',
-                               autospec=True) as mock_get_service:
-            mock_get_service.return_value = self.fake_service
-            with mock.patch.object(self.dbapi, 'soft_delete_service',
-                                   autospec=True) as mock_soft_delete:
-                service = objects.Service.get(self.context, _id)
-                service.soft_delete()
-                mock_get_service.assert_called_once_with(self.context, _id)
-                mock_soft_delete.assert_called_once_with(_id)
-                self.assertEqual(self.context, service._context)
+    @mock.patch.object(db_api.Connection, 'soft_delete_service')
+    @mock.patch.object(db_api.Connection, 'get_service_by_id')
+    def test_soft_delete(self, mock_get_service, mock_soft_delete):
+        mock_get_service.return_value = self.fake_service
+        fake_deleted_service = self.fake_service.copy()
+        fake_deleted_service['deleted_at'] = datetime.datetime.utcnow()
+        mock_soft_delete.return_value = fake_deleted_service
+        expected_service = fake_deleted_service.copy()
+        expected_service['created_at'] = expected_service[
+            'created_at'].replace(tzinfo=iso8601.iso8601.Utc())
+        expected_service['deleted_at'] = expected_service[
+            'deleted_at'].replace(tzinfo=iso8601.iso8601.Utc())
+        _id = self.fake_service['id']
+        service = objects.Service.get(self.context, _id)
+        service.soft_delete()
+        mock_get_service.assert_called_once_with(self.context, _id)
+        mock_soft_delete.assert_called_once_with(_id)
+        self.assertEqual(self.context, service._context)
+        self.assertEqual(expected_service, service.as_dict())
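The new `test_create` and `test_soft_delete` above build their expected dicts by copying the fake DB row and attaching a UTC tzinfo to `created_at`/`deleted_at`, because the object layer hands back timezone-aware datetimes while the fake row stores naive ones. A stdlib-only sketch of that normalization, using `datetime.timezone.utc` as a stand-in for `iso8601.iso8601.Utc()` (an assumption for illustration; the values compare equal either way):

```python
# Sketch of the expected-dict datetime normalization used in the tests above.
# datetime.timezone.utc stands in for iso8601.iso8601.Utc().
import datetime

fake_row = {
    'id': 1,
    'name': 'watcher-service',
    'created_at': datetime.datetime.utcnow(),  # naive, as stored in the row
    'deleted_at': None,
}

# Simulate a soft delete: the DB layer stamps deleted_at.
deleted_row = dict(fake_row, deleted_at=datetime.datetime.utcnow())

# Build the expected dict the way the tests do: copy the row, then make
# the datetime fields timezone-aware in UTC without shifting their value.
expected = deleted_row.copy()
for field in ('created_at', 'deleted_at'):
    expected[field] = expected[field].replace(tzinfo=datetime.timezone.utc)

# The wall-clock value is unchanged; only tzinfo differs.
assert expected['created_at'].tzinfo is datetime.timezone.utc
assert expected['created_at'].replace(tzinfo=None) == fake_row['created_at']
```

Without this step, `assertEqual(expected_service, service.as_dict())` would fail on a naive-vs-aware mismatch even when the timestamps are identical.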


@@ -15,8 +15,7 @@
 # limitations under the License.

 from oslo_serialization import jsonutils
-import uuid

+from watcher.common import utils
 from watcher_tempest_plugin.services.infra_optim import base
@@ -69,7 +68,7 @@ class InfraOptimClientJSON(base.BaseInfraOptimClient):
         parameters = {k: v for k, v in kwargs.items() if v is not None}

         # This name is unique to avoid the DB unique constraint on names
-        unique_name = 'Tempest Audit Template %s' % uuid.uuid4()
+        unique_name = 'Tempest Audit Template %s' % utils.generate_uuid()
         audit_template = {
             'name': parameters.get('name', unique_name),


@@ -16,7 +16,7 @@
 from __future__ import unicode_literals

-import uuid
+from oslo_utils import uuidutils
 from tempest.lib import exceptions
 from tempest import test
@@ -33,7 +33,7 @@ class TestCreateDeleteAuditTemplate(base.BaseInfraOptimTest):
         _, goal = self.client.show_goal(goal_name)

         params = {
-            'name': 'my at name %s' % uuid.uuid4(),
+            'name': 'my at name %s' % uuidutils.generate_uuid(),
             'description': 'my at description',
             'goal': goal['uuid']}
         expected_data = {
@@ -56,7 +56,7 @@ class TestCreateDeleteAuditTemplate(base.BaseInfraOptimTest):
         _, goal = self.client.show_goal(goal_name)

         # Use a unicode string for testing:
         params = {
-            'name': 'my at name %s' % uuid.uuid4(),
+            'name': 'my at name %s' % uuidutils.generate_uuid(),
             'description': 'my àt déscrïptïôn',
             'goal': goal['uuid']}
@@ -158,13 +158,13 @@ class TestAuditTemplate(base.BaseInfraOptimTest):
         _, new_goal = self.client.show_goal("server_consolidation")
         _, new_strategy = self.client.show_strategy("basic")

-        params = {'name': 'my at name %s' % uuid.uuid4(),
+        params = {'name': 'my at name %s' % uuidutils.generate_uuid(),
                   'description': 'my at description',
                   'goal': self.goal['uuid']}
         _, body = self.create_audit_template(**params)

-        new_name = 'my at new name %s' % uuid.uuid4()
+        new_name = 'my at new name %s' % uuidutils.generate_uuid()
         new_description = 'my new at description'

         patch = [{'path': '/name',
@@ -191,7 +191,7 @@ class TestAuditTemplate(base.BaseInfraOptimTest):
     @test.attr(type='smoke')
     def test_update_audit_template_remove(self):
         description = 'my at description'
-        name = 'my at name %s' % uuid.uuid4()
+        name = 'my at name %s' % uuidutils.generate_uuid()
         params = {'name': name,
                   'description': description,
                   'goal': self.goal['uuid']}
@@ -213,7 +213,7 @@ class TestAuditTemplate(base.BaseInfraOptimTest):
     @test.attr(type='smoke')
     def test_update_audit_template_add(self):
-        params = {'name': 'my at name %s' % uuid.uuid4(),
+        params = {'name': 'my at name %s' % uuidutils.generate_uuid(),
                   'goal': self.goal['uuid']}
         _, body = self.create_audit_template(**params)
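The hunks above swap stdlib `uuid.uuid4()` for a `generate_uuid()` helper. The practical difference is the return type: `uuid.uuid4()` yields a `uuid.UUID` object, while the helper yields a plain string, which serializes consistently and compares cleanly against API responses. A stdlib-only sketch of the helper (assumed to match the dashed-string default; `generate_uuid` here is a local stand-in, not the oslo_utils import):

```python
# Stand-in for the generate_uuid() helper used in the diffs above,
# built only on the stdlib uuid module.
import uuid


def generate_uuid():
    """Return a random UUID rendered as a 36-character dashed string."""
    return str(uuid.uuid4())


# Usage mirroring the tempest tests: unique names avoid the DB
# unique constraint on audit template names.
unique_name = 'Tempest Audit Template %s' % generate_uuid()
assert isinstance(generate_uuid(), str)
assert len(generate_uuid()) == 36  # 32 hex digits + 4 dashes
```

Note that `'%s' % uuid.uuid4()` already stringifies via `__str__`, so in the `%`-formatting call sites the change is mainly about consistency of the helper's return type across the codebase rather than different output.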