Compare commits

...

42 Commits

Author SHA1 Message Date
Zuul
3591d9fa0a Merge "Replace port 35357 with 5000 for test_clients.py" 2018-06-15 05:10:32 +00:00
Alexander Chadin
44fc7d5799 Restore requirements versions
Change-Id: I7704778324d7597d5df2de6b77f6b914d948d6fa
2018-06-13 15:08:11 +03:00
Zuul
a330576eae Merge "Update storage CDM collector" 2018-06-06 12:16:43 +00:00
Zuul
70d05214c7 Merge "add doc for host_maintenance" 2018-06-06 08:23:13 +00:00
suzhengwei
ca9644f4d8 add doc for host_maintenance
Change-Id: If9a112d33d7586d828024dbace1863ecc04408d9
2018-06-05 17:34:01 +08:00
inspurericzhang
44061cf333 Update pypi url to new url
The PyPI URL is updated to "https://pypi.org/".

Change-Id: I0bc9d7fc6111cb32db212d6ef3dab144fdd31c17
2018-05-25 17:11:56 +08:00
Zuul
18bf1f4e8d Merge "add strategy host_maintenance" 2018-05-23 09:18:24 +00:00
Zuul
f2df0da0b2 Merge "Trivial: update url to new url" 2018-05-23 07:46:28 +00:00
Hidekazu Nakamura
3c83077724 Update storage CDM collector
Storage CDM cannot be built in some environments, such as
one using VMwareVcVmdkDriver, since some attributes of
the storage CDM's pool element can be 'unknown'.

This patch updates the storage CDM collector to raise a
Watcher-specific exception if any attribute of the storage
CDM's pool element is 'unknown'.

Change-Id: If75a909025c8d764e4de6e20f058b84e23123c1a
Closes-Bug: #1751206
2018-05-23 10:51:26 +09:00
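The guard this commit describes can be sketched in a few lines. This is a minimal illustration, not the collector's actual code: the function and attribute names are hypothetical, though `InvalidPoolAttributeValue` matches the exception added in this change set.

```python
class InvalidPoolAttributeValue(Exception):
    """Raised when a storage pool attribute is not a usable integer."""


def validate_pool(pool_name, attributes):
    # Attributes such as total_volumes or free_capacity_gb may be
    # reported as the string 'unknown' by some Cinder drivers
    # (e.g. VMwareVcVmdkDriver); reject those instead of building
    # a broken storage data model.
    for name, value in attributes.items():
        if not isinstance(value, int):
            raise InvalidPoolAttributeValue(
                "The %s pool %s is not an integer" % (pool_name, name))
    return attributes
```

With a numeric attribute the pool passes through; with `'unknown'` the collector fails fast instead of producing a model that breaks later.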
caoyuan
d8872a743b Replace port 35357 with 5000 for test_clients.py
Now that the v2.0 API has been removed, we don't have a reason to
include deployment instructions for two separate applications on
different ports.

Related-bug: #1754104

Change-Id: I98fae626d39cb62ad51c86435c1a2c60be5c1fb9
2018-05-15 12:43:48 +00:00
Hidekazu Nakamura
7556d19638 Add Cinder Cluster Data Model Collector test case
This patch adds Cinder Data Model Collector test case.

Change-Id: Ifaf7cd4a962da287f740a12e4c382a1ca92750d6
2018-05-15 20:30:31 +09:00
suzhengwei
58276ec79e add strategy host_maintenance
Maintain one compute node without interrupting the user's
applications.
The strategy will first migrate all instances from the maintenance
node to one backup node. If no backup node is given, it will migrate
all instances, relying on nova-scheduler.

Change-Id: I29ecb65745d5e6ecab41508e9a91b29b39a3f0a8
Implements:blueprint cluster-maintaining
2018-05-14 11:33:59 +00:00
XiaojueGuan
36ad9e12da Trivial: update url to new url
Change-Id: Ia238564c5c41aaf015d9d2f5839703a035c76fce
2018-05-13 21:39:50 +08:00
Hidekazu Nakamura
cdb1975530 Fix to reuse RabbitMQ connection
Currently the number of RabbitMQ connections gradually increases with a
CONTINUOUS audit using the auto-trigger option.
This patch fixes Watcher to reuse the RabbitMQ connection.

Change-Id: I818fc1ce982f67bac08c815821f1ad67f8f3c893
2018-05-10 14:21:23 +09:00
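The fix boils down to caching one transport instead of building a new connection on every trigger. A minimal sketch of that pattern, with hypothetical class names (the real code caches an oslo.messaging transport):

```python
class NotificationCore:
    """Cache a single messaging transport at class level so repeated
    continuous-audit triggers reuse one connection instead of leaking
    a new RabbitMQ connection per call."""

    _transport = None  # shared, created lazily on first use

    @classmethod
    def get_transport(cls):
        if cls._transport is None:
            # object() stands in for oslo.messaging's get_transport()
            cls._transport = object()
        return cls._transport
```

Every caller now receives the same transport object, so the broker sees one long-lived connection rather than an ever-growing set.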
Zuul
6efffd6d89 Merge "Updated tests on bug, when get list returns deleted items" 2018-05-09 08:40:18 +00:00
Zuul
95ec79626b Merge "Grouped _add_*_filters methods together" 2018-05-09 08:20:37 +00:00
Zuul
00aa77651b Merge "Replace of private _create methods in tests" 2018-05-09 08:20:36 +00:00
Zuul
7d62175b23 Merge "Added _get_model_list base method for all get_*_list methods" 2018-05-09 08:20:36 +00:00
Zuul
5107cfa30f Merge "Refactor watcher API for Action Plan Start" 2018-05-09 06:16:38 +00:00
deepak_mourya
ff57eb73f9 Refactor watcher API for Action Plan Start
Currently the REST API to start an action plan in Watcher
is the same as the one used to update an action plan:

PATCH /v1/action_plans

https://docs.openstack.org/watcher/latest/api/v1.html

We need to make it easier to understand, like:

POST /v1/action_plans/{action_plan_uuid}/start

The action should be "start" in the above case.
Change-Id: I5353e4aa58d1675d8afb94bea35d9b953514129a
Closes-Bug: #1756274
2018-05-08 07:28:45 +00:00
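The new endpoint's state check (visible in the `start()` method in the diff below) can be summarized as: only a RECOMMENDED plan may be started, and starting moves it to PENDING. A minimal sketch under those assumptions, with dict-based plans standing in for Watcher objects:

```python
class StartError(Exception):
    """Raised when an action plan cannot be started from its current state."""


RECOMMENDED = "RECOMMENDED"
PENDING = "PENDING"


def start_action_plan(action_plan):
    # POST /v1/action_plans/{uuid}/start only makes sense for a
    # RECOMMENDED plan; any other state is rejected up front.
    if action_plan["state"] != RECOMMENDED:
        raise StartError(
            "Couldn't start when state is '%s'." % action_plan["state"])
    action_plan["state"] = PENDING  # the applier picks it up from here
    return action_plan
```

Compared with the old `PATCH /v1/action_plans` route, the intent of the request is now explicit in the URL rather than buried in a state patch.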
Zuul
4c035a7cbd Merge "Update auth_url in install docs" 2018-05-08 05:57:39 +00:00
Zuul
b5d9eb6acb Merge "Exclude Project By Audit Scope" 2018-05-08 05:01:57 +00:00
Hidekazu Nakamura
904b72cf5e Update auth_url in install docs
Beginning with the Queens release, the keystone install guide
recommends running all interfaces on the same port. This patch
updates the install guide to reflect that change.

Change-Id: Ice155d0b80d2f2ed6c1a9a9738be2184b6e9e76c
Closes-bug: #1754104
2018-05-07 11:42:10 +09:00
Egor Panfilov
d23e7f0f8c Updated tests on bug, when get list returns deleted items
In I4d2f44fa149aee564c62a69822c6ad79de5bba8a we introduced the new
_get_model_list method, which provides a unified way of retrieving models
from the db. This commit adds tests that check for bug 1761956, where
selecting with the filter() method could return deleted entities.

Change-Id: I12df4af70bcc25654a0fb276ea7145d772d891e2
Related-Bug: 1761956
2018-05-05 14:30:00 +03:00
Zuul
55cbb15fbc Merge "Moved do_execute method to AuditHandler class" 2018-05-04 06:08:17 +00:00
wu.chunyang
3a5b42302c Fix failed openstack endpoint creation
Change-Id: Ic05950c47bf5ad26e91051ac5e1d766db0f5ccae
2018-04-27 22:44:13 +08:00
Zuul
4fdb22cba2 Merge "Update the default value for nova api_version" 2018-04-27 06:10:54 +00:00
Zuul
431f17d999 Merge "add unittest for execute_audit in audit/continuous.py" 2018-04-25 08:24:25 +00:00
caoyuan
b586612d25 Update the default value for nova api_version
refer to https://github.com/openstack/watcher/blob/master/watcher/conf/nova_client.py#L26

Change-Id: If7c12d49c68e1bfc30327d465b9d5bafe82882e0
2018-04-24 23:15:37 +08:00
Egor Panfilov
ad1593bb36 Moved do_execute method to AuditHandler class
Both Continuous and Oneshot audits performed the same action in
do_execute, so it's a good idea to move it to the base
class.

TrivialFix

Change-Id: Ic0353f010509ce45f94126e4db0e629417128ded
2018-04-23 20:38:06 +03:00
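The refactor moves duplicated logic into a shared parent. A minimal sketch of the shape, with hypothetical return values (the real handlers run the strategy and persist a solution):

```python
class AuditHandler:
    """Base class: do_execute was identical in both audit handlers,
    so it now lives here once instead of twice."""

    def do_execute(self, audit):
        # Shared behavior formerly duplicated in both subclasses.
        return {"audit": audit, "state": "SUCCEEDED"}


class OneshotAuditHandler(AuditHandler):
    pass  # inherits do_execute unchanged


class ContinuousAuditHandler(AuditHandler):
    pass  # inherits do_execute unchanged
```

Both subclasses now get the behavior from one place, so a future fix to `do_execute` cannot drift between the oneshot and continuous paths.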
Zuul
bbd0ae5b16 Merge "Fix typo in StorageCapacityBalance" 2018-04-23 07:59:51 +00:00
Zuul
5a30f814bf Merge "add strategy doc:storage capacity balance" 2018-04-23 05:46:08 +00:00
Egor Panfilov
7f6a300ea0 Fix typo in StorageCapacityBalance
TrivialFix

Change-Id: If1fb33276fc08945aa45e6baecaeebca3ba070fe
2018-04-22 18:00:53 +03:00
Egor Panfilov
93a8ba804f Grouped _add_*_filters methods together
TrivialFix

Change-Id: I148dc19140aede8cc905b0bdc2753b82d8484363
2018-04-22 00:52:27 +03:00
Egor Panfilov
415bab4bc9 Replace of private _create methods in tests
Methods that are already implemented in the utils module are removed from
the test classes.

TrivialFix

Change-Id: I38d806e23c162805b7d362b68bf3fe18da123ee3
2018-04-21 22:32:25 +03:00
aditi
fc388d8292 Exclude Project By Audit Scope
This patch adds project_id to the compute CDM. It also adds logic for
excluding a project_id in the audit scope.

Change-Id: Ife228e3d1855b65abee637516470e463ba8a2815
Implements: blueprint audit-scope-exclude-project
2018-04-20 08:47:07 +00:00
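The exclusion logic amounts to collecting the project UUIDs named in the scope's `exclude` rules and dropping any instance whose `project_id` matches. A minimal sketch; the scope structure shown here is an assumption modeled on Watcher's audit-scope syntax, and the dict-based instances are hypothetical stand-ins for model objects:

```python
def exclude_projects(instances, audit_scope):
    # audit_scope example (assumed shape):
    #   [{"exclude": [{"projects": [{"uuid": "p2"}]}]}]
    excluded = set()
    for rule in audit_scope:
        for item in rule.get("exclude", []):
            for project in item.get("projects", []):
                excluded.add(project["uuid"])
    # Keep only instances whose owning project is not excluded.
    return [i for i in instances if i["project_id"] not in excluded]
```

An audit built from this filtered model simply never sees the excluded project's instances, so no actions are planned against them.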
Zuul
5b70c28047 Merge "amend delete action policy" 2018-04-20 03:08:52 +00:00
licanwei
b290ad7368 add strategy doc:storage capacity balance
Change-Id: Ifa37156e641b840ae560e1f7c8a0dd4bca7662ba
2018-04-19 19:55:37 -07:00
Alexander Chadin
8c8e58e7d9 Update requirements
Change-Id: Iee6ca0a49f8b1d67dd0d88f9a2cf9863b2c6c7bf
2018-04-19 11:10:39 +03:00
licanwei
171654c0ea add unittest for execute_audit in audit/continuous.py
Change-Id: I20b9cb9b4b175a1befdbe23f7c187bec6a195dac
2018-04-17 04:19:12 -07:00
suzhengwei
0157fa7dad amend delete action policy
Change-Id: I545b969a3f0a3451b880840108484ca7ef3fabf9
2018-04-17 16:18:14 +08:00
Egor Panfilov
aa74817686 Added _get_model_list base method for all get_*_list methods
When we call audittemplate list without filters, it returns all Audit
Templates that are not deleted, as expected. If we add any filter to the
query and context.show_deleted is None (we request only current ATs),
query.filter_by adds the filter to the joined table (for example goals,
resulting in a query like JOIN goals ... WHERE ... goals.deleted_at IS
NULL), not to the model's table (AuditTemplate in our case).

We change the call from filter_by to filter, explicitly pointing to the
model we want to filter.

Also, we moved the query-generating code to a new method,
_get_model_list(). As a result, we applied the same fix to all of the
other models.

Change-Id: I4d2f44fa149aee564c62a69822c6ad79de5bba8a
Closes-bug: 1761956
2018-04-10 14:10:44 +03:00
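The consolidation replaces many near-identical `get_*_list` methods with one helper that applies model-specific filters and then drops soft-deleted rows of that model explicitly. A minimal sketch using plain dicts in place of SQLAlchemy queries; the list-comprehension "filters" are hypothetical stand-ins for the real query methods:

```python
def _get_model_list(rows, add_filters_func, show_deleted=False, filters=None):
    """Shared list helper: apply the model-specific filters, then
    explicitly exclude soft-deleted rows of *this* model (the old
    filter_by call filtered the joined table instead)."""
    rows = add_filters_func(rows, filters or {})
    if not show_deleted:
        rows = [r for r in rows if r.get("deleted_at") is None]
    return rows


def _add_goals_filters(rows, filters):
    # Stand-in for the per-model _add_*_filters query builder.
    return [r for r in rows
            if all(r.get(k) == v for k, v in filters.items())]


def get_goal_list(rows, **kwargs):
    # Each get_*_list now just delegates with its own filter function.
    return _get_model_list(rows, _add_goals_filters, **kwargs)
```

Because the deleted-row check lives in one place and targets the listed model directly, fixing bug 1761956 for one model fixed it for all of them.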
62 changed files with 1680 additions and 553 deletions

View File

@@ -19,7 +19,7 @@ The source install instructions specifically avoid using platform specific
packages, instead using the source for the code and the Python Package Index
(PyPi_).
.. _PyPi: https://pypi.python.org/pypi
.. _PyPi: https://pypi.org/
It's expected that your system already has python2.7_, latest version of pip_,
and git_ available.

View File

@@ -129,10 +129,14 @@ Configure the Identity service for the Watcher service
.. code-block:: bash
$ openstack endpoint create --region YOUR_REGION watcher \
--publicurl http://WATCHER_API_PUBLIC_IP:9322 \
--internalurl http://WATCHER_API_INTERNAL_IP:9322 \
--adminurl http://WATCHER_API_ADMIN_IP:9322
$ openstack endpoint create --region YOUR_REGION
watcher public http://WATCHER_API_PUBLIC_IP:9322
$ openstack endpoint create --region YOUR_REGION
watcher internal http://WATCHER_API_INTERNAL_IP:9322
$ openstack endpoint create --region YOUR_REGION
watcher admin http://WATCHER_API_ADMIN_IP:9322
.. _watcher-db_configuration:
@@ -260,7 +264,7 @@ so that the watcher service is configured for your needs.
# Authentication URL (unknown value)
#auth_url = <None>
auth_url = http://IDENTITY_IP:35357
auth_url = http://IDENTITY_IP:5000
# Username (unknown value)
# Deprecated group/name - [DEFAULT]/username
@@ -306,7 +310,7 @@ so that the watcher service is configured for your needs.
# Authentication URL (unknown value)
#auth_url = <None>
auth_url = http://IDENTITY_IP:35357
auth_url = http://IDENTITY_IP:5000
# Username (unknown value)
# Deprecated group/name - [DEFAULT]/username
@@ -336,7 +340,7 @@ so that the watcher service is configured for your needs.
[nova_client]
# Version of Nova API to use in novaclient. (string value)
#api_version = 2.53
#api_version = 2.56
api_version = 2.1
#. Create the Watcher Service database tables::

View File

@@ -37,7 +37,7 @@ different version of the above, please document your configuration here!
.. _Python: https://www.python.org/
.. _git: https://git-scm.com/
.. _setuptools: https://pypi.python.org/pypi/setuptools
.. _setuptools: https://pypi.org/project/setuptools
.. _virtualenvwrapper: https://virtualenvwrapper.readthedocs.io/en/latest/install.html
Getting the latest code
@@ -69,8 +69,8 @@ itself.
These dependencies can be installed from PyPi_ using the Python tool pip_.
.. _PyPi: https://pypi.python.org/
.. _pip: https://pypi.python.org/pypi/pip
.. _PyPi: https://pypi.org/
.. _pip: https://pypi.org/project/pip
However, your system *may* need additional dependencies that `pip` (and by
extension, PyPi) cannot satisfy. These dependencies should be installed
@@ -126,7 +126,7 @@ You can re-activate this virtualenv for your current shell using:
For more information on virtual environments, see virtualenv_ and
virtualenvwrapper_.
.. _virtualenv: https://pypi.python.org/pypi/virtualenv/
.. _virtualenv: https://pypi.org/project/virtualenv/

View File

@@ -79,7 +79,7 @@ requirements.txt file::
.. _cookiecutter: https://github.com/audreyr/cookiecutter
.. _OpenStack cookiecutter: https://github.com/openstack-dev/cookiecutter
.. _python-watcher: https://pypi.python.org/pypi/python-watcher
.. _python-watcher: https://pypi.org/project/python-watcher
Implementing a plugin for Watcher
=================================

View File

@@ -27,7 +27,7 @@
[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:35357
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
@@ -47,7 +47,7 @@
[watcher_clients_auth]
...
auth_type = password
auth_url = http://controller:35357
auth_url = http://controller:5000
username = watcher
password = WATCHER_PASS
project_domain_name = default

View File

@@ -0,0 +1,92 @@
===========================
Host Maintenance Strategy
===========================
Synopsis
--------
**display name**: ``Host Maintenance Strategy``
**goal**: ``cluster_maintaining``
.. watcher-term:: watcher.decision_engine.strategy.strategies.host_maintenance.HostMaintenance
Requirements
------------
None.
Metrics
*******
None
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migration``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
==================== ====== ============= ====================================
parameter            type   default Value description
==================== ====== ============= ====================================
``maintenance_node`` String               The name of the compute node which
                                          needs maintenance. Required.
``backup_node``      String               The name of the compute node which
                                          will back up the maintenance node.
                                          Optional.
==================== ====== ============= ====================================
Efficacy Indicator
------------------
None
Algorithm
---------
For more information on the Host Maintenance Strategy please refer
to: https://specs.openstack.org/openstack/watcher-specs/specs/queens/approved/cluster-maintenance-strategy.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audit create \
-g cluster_maintaining -s host_maintenance \
-p maintenance_node=compute01 \
-p backup_node=compute02 \
--auto-trigger
External Links
--------------
None.

View File

@@ -0,0 +1,87 @@
========================
Storage capacity balance
========================
Synopsis
--------
**display name**: ``Storage Capacity Balance Strategy``
**goal**: ``workload_balancing``
.. watcher-term:: watcher.decision_engine.strategy.strategies.storage_capacity_balance.StorageCapacityBalance
Requirements
------------
Metrics
*******
None
Cluster data model
******************
Storage cluster data model is required:
.. watcher-term:: watcher.decision_engine.model.collector.cinder.CinderClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 25 35
:header-rows: 1
* - action
- description
* - ``volume_migrate``
- .. watcher-term:: watcher.applier.actions.volume_migration.VolumeMigrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameter is:
==================== ====== ============= =====================================
parameter            type   default Value description
==================== ====== ============= =====================================
``volume_threshold`` Number 80.0          Volume threshold for capacity balance
==================== ====== ============= =====================================
Efficacy Indicator
------------------
None
Algorithm
---------
For more information on the storage capacity balance strategy please refer to:
http://specs.openstack.org/openstack/watcher-specs/specs/queens/implemented/storage-capacity-balance.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 workload_balancing --strategy storage_capacity_balance
$ openstack optimize audit create -a at1 \
-p volume_threshold=85.0
External Links
--------------
None

View File

@@ -25,6 +25,7 @@ doc8==0.8.0
docutils==0.14
dogpile.cache==0.6.5
dulwich==0.19.0
enum34==1.1.6
enum-compat==0.0.2
eventlet==0.20.0
extras==1.0.0
@@ -66,6 +67,7 @@ netifaces==0.10.6
networkx==1.11
openstackdocstheme==1.20.0
openstacksdk==0.12.0
os-api-ref===1.4.0
os-client-config==1.29.0
os-service-types==1.2.0
os-testr==1.0.0

View File

@@ -0,0 +1,6 @@
---
features:
- |
Added the ability to exclude instances from the audit scope based on
project_id. Instances belonging to a particular OpenStack project can now
be excluded from an audit by defining the scope in audit templates.

View File

@@ -0,0 +1,9 @@
---
features:
- |
Added a strategy for maintaining one compute node
without interrupting the user's applications.
If given a backup node, the strategy will first
migrate all instances from the maintenance node to
the backup node. If the backup node is not provided,
it will migrate all instances, relying on nova-scheduler.

View File

@@ -2,48 +2,48 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
apscheduler>=3.0.5 # MIT License
enum34>=1.0.4;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch!=1.20,>=1.16 # BSD
apscheduler>=3.5.1 # MIT License
enum34>=1.1.6;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.21 # BSD
keystoneauth1>=3.4.0 # Apache-2.0
jsonschema<3.0.0,>=2.6.0 # MIT
keystonemiddleware>=4.17.0 # Apache-2.0
lxml!=3.7.0,>=3.4.1 # BSD
croniter>=0.3.4 # MIT License
keystonemiddleware>=4.21.0 # Apache-2.0
lxml>=4.1.1 # BSD
croniter>=0.3.20 # MIT License
oslo.concurrency>=3.26.0 # Apache-2.0
oslo.cache>=1.26.0 # Apache-2.0
oslo.cache>=1.29.0 # Apache-2.0
oslo.config>=5.2.0 # Apache-2.0
oslo.context>=2.19.2 # Apache-2.0
oslo.db>=4.27.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.policy>=1.30.0 # Apache-2.0
oslo.reports>=1.18.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service!=1.28.1,>=1.24.0 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
oslo.versionedobjects>=1.31.2 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
PrettyTable<0.8,>=0.7.1 # BSD
voluptuous>=0.8.9 # BSD License
gnocchiclient>=3.3.1 # Apache-2.0
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient>=3.3.0 # Apache-2.0
python-glanceclient>=2.8.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.7.0 # Apache-2.0
oslo.context>=2.20.0 # Apache-2.0
oslo.db>=4.35.0 # Apache-2.0
oslo.i18n>=3.20.0 # Apache-2.0
oslo.log>=3.37.0 # Apache-2.0
oslo.messaging>=5.36.0 # Apache-2.0
oslo.policy>=1.34.0 # Apache-2.0
oslo.reports>=1.27.0 # Apache-2.0
oslo.serialization>=2.25.0 # Apache-2.0
oslo.service>=1.30.0 # Apache-2.0
oslo.utils>=3.36.0 # Apache-2.0
oslo.versionedobjects>=1.32.0 # Apache-2.0
PasteDeploy>=1.5.2 # MIT
pbr>=3.1.1 # Apache-2.0
pecan>=1.2.1 # BSD
PrettyTable<0.8,>=0.7.2 # BSD
voluptuous>=0.11.1 # BSD License
gnocchiclient>=7.0.1 # Apache-2.0
python-ceilometerclient>=2.9.0 # Apache-2.0
python-cinderclient>=3.5.0 # Apache-2.0
python-glanceclient>=2.9.1 # Apache-2.0
python-keystoneclient>=3.15.0 # Apache-2.0
python-monascaclient>=1.10.0 # Apache-2.0
python-neutronclient>=6.7.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
python-openstackclient>=3.12.0 # Apache-2.0
python-novaclient>=10.1.0 # Apache-2.0
python-openstackclient>=3.14.0 # Apache-2.0
python-ironicclient>=2.3.0 # Apache-2.0
six>=1.10.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0
taskflow>=2.16.0 # Apache-2.0
WebOb>=1.7.1 # MIT
WSME>=0.8.0 # MIT
networkx<2.0,>=1.10 # BSD
six>=1.11.0 # MIT
SQLAlchemy>=1.2.5 # MIT
stevedore>=1.28.0 # Apache-2.0
taskflow>=3.1.0 # Apache-2.0
WebOb>=1.7.4 # MIT
WSME>=0.9.2 # MIT
networkx>=1.11 # BSD

View File

@@ -58,6 +58,7 @@ watcher_goals =
noisy_neighbor = watcher.decision_engine.goal.goals:NoisyNeighborOptimization
saving_energy = watcher.decision_engine.goal.goals:SavingEnergy
hardware_maintenance = watcher.decision_engine.goal.goals:HardwareMaintenance
cluster_maintaining = watcher.decision_engine.goal.goals:ClusterMaintaining
watcher_scoring_engines =
dummy_scorer = watcher.decision_engine.scoring.dummy_scorer:DummyScorer
@@ -80,6 +81,7 @@ watcher_strategies =
noisy_neighbor = watcher.decision_engine.strategy.strategies.noisy_neighbor:NoisyNeighbor
storage_capacity_balance = watcher.decision_engine.strategy.strategies.storage_capacity_balance:StorageCapacityBalance
zone_migration = watcher.decision_engine.strategy.strategies.zone_migration:ZoneMigration
host_maintenance = watcher.decision_engine.strategy.strategies.host_maintenance:HostMaintenance
watcher_actions =
migrate = watcher.applier.actions.migration:Migrate

View File

@@ -2,25 +2,27 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
coverage!=4.4,>=4.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
freezegun>=0.3.6 # Apache-2.0
coverage!=4.4 # Apache-2.0
doc8 # Apache-2.0
freezegun # Apache-2.0
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
mock>=2.0.0 # BSD
oslotest>=3.2.0 # Apache-2.0
os-testr>=1.0.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=2.2.0 # MIT
mock # BSD
oslotest # Apache-2.0
os-testr # Apache-2.0
testrepository # Apache-2.0/BSD
testscenarios # Apache-2.0/BSD
testtools # MIT
# Doc requirements
openstackdocstheme>=1.18.1 # Apache-2.0
sphinx!=1.6.6,!=1.6.7,>=1.6.2 # BSD
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
openstackdocstheme # Apache-2.0
sphinx!=1.6.6,!=1.6.7 # BSD
sphinxcontrib-pecanwsme # Apache-2.0
# api-ref
os-api-ref # Apache-2.0
# releasenotes
reno>=2.5.0 # Apache-2.0
reno # Apache-2.0
# bandit
bandit>=1.1.0 # Apache-2.0

View File

@@ -320,13 +320,15 @@ class ActionPlansController(rest.RestController):
def __init__(self):
super(ActionPlansController, self).__init__()
self.applier_client = rpcapi.ApplierAPI()
from_actionsPlans = False
"""A flag to indicate if the requests to this controller are coming
from the top-level resource ActionPlan."""
_custom_actions = {
'detail': ['GET'],
'start': ['POST'],
'detail': ['GET']
}
def _get_action_plans_collection(self, marker, limit,
@@ -535,7 +537,7 @@ class ActionPlansController(rest.RestController):
if action_plan_to_update[field] != patch_val:
action_plan_to_update[field] = patch_val
if (field == 'state'and
if (field == 'state' and
patch_val == objects.action_plan.State.PENDING):
launch_action_plan = True
@@ -552,11 +554,39 @@ class ActionPlansController(rest.RestController):
a.save()
if launch_action_plan:
applier_client = rpcapi.ApplierAPI()
applier_client.launch_action_plan(pecan.request.context,
action_plan.uuid)
self.applier_client.launch_action_plan(pecan.request.context,
action_plan.uuid)
action_plan_to_update = objects.ActionPlan.get_by_uuid(
pecan.request.context,
action_plan_uuid)
return ActionPlan.convert_with_links(action_plan_to_update)
@wsme_pecan.wsexpose(ActionPlan, types.uuid)
def start(self, action_plan_uuid, **kwargs):
"""Start an action_plan
:param action_plan_uuid: UUID of an action_plan.
"""
action_plan_to_start = api_utils.get_resource(
'ActionPlan', action_plan_uuid, eager=True)
context = pecan.request.context
policy.enforce(context, 'action_plan:start', action_plan_to_start,
action='action_plan:start')
if action_plan_to_start['state'] != \
objects.action_plan.State.RECOMMENDED:
raise exception.StartError(
state=action_plan_to_start.state)
action_plan_to_start['state'] = objects.action_plan.State.PENDING
action_plan_to_start.save()
self.applier_client.launch_action_plan(pecan.request.context,
action_plan_uuid)
action_plan_to_start = objects.ActionPlan.get_by_uuid(
pecan.request.context, action_plan_uuid)
return ActionPlan.convert_with_links(action_plan_to_start)

View File

@@ -403,6 +403,7 @@ class AuditsController(rest.RestController):
"""REST controller for Audits."""
def __init__(self):
super(AuditsController, self).__init__()
self.dc_client = rpcapi.DecisionEngineAPI()
from_audits = False
"""A flag to indicate if the requests to this controller are coming
@@ -575,8 +576,7 @@ class AuditsController(rest.RestController):
# trigger decision-engine to run the audit
if new_audit.audit_type == objects.audit.AuditType.ONESHOT.value:
dc_client = rpcapi.DecisionEngineAPI()
dc_client.trigger_audit(context, new_audit.uuid)
self.dc_client.trigger_audit(context, new_audit.uuid)
return Audit.convert_with_links(new_audit)
@@ -639,8 +639,8 @@ class AuditsController(rest.RestController):
context = pecan.request.context
audit_to_delete = api_utils.get_resource(
'Audit', audit, eager=True)
policy.enforce(context, 'audit:update', audit_to_delete,
action='audit:update')
policy.enforce(context, 'audit:delete', audit_to_delete,
action='audit:delete')
initial_state = audit_to_delete.state
new_state = objects.audit.State.DELETED

View File

@@ -688,8 +688,8 @@ class AuditTemplatesController(rest.RestController):
context = pecan.request.context
audit_template_to_delete = api_utils.get_resource('AuditTemplate',
audit_template)
policy.enforce(context, 'audit_template:update',
policy.enforce(context, 'audit_template:delete',
audit_template_to_delete,
action='audit_template:update')
action='audit_template:delete')
audit_template_to_delete.soft_delete()

View File

@@ -336,6 +336,10 @@ class DeleteError(Invalid):
msg_fmt = _("Couldn't delete when state is '%(state)s'.")
class StartError(Invalid):
msg_fmt = _("Couldn't start when state is '%(state)s'.")
# decision engine
class WorkflowExecutionException(WatcherException):
@@ -512,3 +516,7 @@ class NegativeLimitError(WatcherException):
class NotificationPayloadError(WatcherException):
_msg_fmt = _("Payload not populated when trying to send notification "
"\"%(class_name)s\"")
class InvalidPoolAttributeValue(Invalid):
msg_fmt = _("The %(name)s pool %(attribute)s is not an integer")

View File

@@ -71,6 +71,17 @@ rules = [
'method': 'PATCH'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'start',
check_str=base.RULE_ADMIN_API,
description='Start an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_uuid}/action',
'method': 'POST'
}
]
)
]

View File

@@ -314,6 +314,21 @@ class Connection(api.BaseConnection):
query.delete()
def _get_model_list(self, model, add_filters_func, context, filters=None,
limit=None, marker=None, sort_key=None, sort_dir=None,
eager=False):
query = model_query(model)
if eager:
query = self._set_eager_options(model, query)
query = add_filters_func(query, filters)
if not context.show_deleted:
query = query.filter(model.deleted_at.is_(None))
return _paginate_query(model, limit, marker,
sort_key, sort_dir, query)
# NOTE(erakli): _add_..._filters methods should be refactored to have same
# content. join_fieldmap should be filled with JoinMap instead of dict
def _add_goals_filters(self, query, filters):
if filters is None:
filters = {}
@@ -426,18 +441,42 @@ class Connection(api.BaseConnection):
query=query, model=models.EfficacyIndicator, filters=filters,
plain_fields=plain_fields, join_fieldmap=join_fieldmap)
def _add_scoring_engine_filters(self, query, filters):
if filters is None:
filters = {}
plain_fields = ['id', 'description']
return self._add_filters(
query=query, model=models.ScoringEngine, filters=filters,
plain_fields=plain_fields)
def _add_action_descriptions_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'action_type']
return self._add_filters(
query=query, model=models.ActionDescription, filters=filters,
plain_fields=plain_fields)
def _add_services_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'name', 'host']
return self._add_filters(
query=query, model=models.Service, filters=filters,
plain_fields=plain_fields)
# ### GOALS ### #
def get_goal_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.Goal)
if eager:
query = self._set_eager_options(models.Goal, query)
query = self._add_goals_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Goal, limit, marker,
sort_key, sort_dir, query)
def get_goal_list(self, *args, **kwargs):
return self._get_model_list(models.Goal,
self._add_goals_filters,
*args, **kwargs)
def create_goal(self, values):
# ensure defaults are present for new goals
@@ -493,17 +532,10 @@ class Connection(api.BaseConnection):
# ### STRATEGIES ### #
def get_strategy_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None,
eager=True):
query = model_query(models.Strategy)
if eager:
query = self._set_eager_options(models.Strategy, query)
query = self._add_strategies_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Strategy, limit, marker,
sort_key, sort_dir, query)
def get_strategy_list(self, *args, **kwargs):
return self._get_model_list(models.Strategy,
self._add_strategies_filters,
*args, **kwargs)
def create_strategy(self, values):
# ensure defaults are present for new strategies
@@ -559,18 +591,10 @@ class Connection(api.BaseConnection):
# ### AUDIT TEMPLATES ### #
def get_audit_template_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None,
eager=False):
query = model_query(models.AuditTemplate)
if eager:
query = self._set_eager_options(models.AuditTemplate, query)
query = self._add_audit_templates_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.AuditTemplate, limit, marker,
sort_key, sort_dir, query)
def get_audit_template_list(self, *args, **kwargs):
return self._get_model_list(models.AuditTemplate,
self._add_audit_templates_filters,
*args, **kwargs)
def create_audit_template(self, values):
# ensure defaults are present for new audit_templates
@@ -642,17 +666,10 @@ class Connection(api.BaseConnection):
# ### AUDITS ### #
def get_audit_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.Audit)
if eager:
query = self._set_eager_options(models.Audit, query)
query = self._add_audits_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Audit, limit, marker,
sort_key, sort_dir, query)
def get_audit_list(self, *args, **kwargs):
return self._get_model_list(models.Audit,
self._add_audits_filters,
*args, **kwargs)
def create_audit(self, values):
# ensure defaults are present for new audits
@@ -740,16 +757,10 @@ class Connection(api.BaseConnection):
# ### ACTIONS ### #
def get_action_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.Action)
if eager:
query = self._set_eager_options(models.Action, query)
query = self._add_actions_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Action, limit, marker,
sort_key, sort_dir, query)
def get_action_list(self, *args, **kwargs):
return self._get_model_list(models.Action,
self._add_actions_filters,
*args, **kwargs)
def create_action(self, values):
# ensure defaults are present for new actions
@@ -819,18 +830,10 @@ class Connection(api.BaseConnection):
# ### ACTION PLANS ### #
def get_action_plan_list(
self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.ActionPlan)
if eager:
query = self._set_eager_options(models.ActionPlan, query)
query = self._add_action_plans_filters(query, filters)
if not context.show_deleted:
query = query.filter(models.ActionPlan.deleted_at.is_(None))
return _paginate_query(models.ActionPlan, limit, marker,
sort_key, sort_dir, query)
def get_action_plan_list(self, *args, **kwargs):
return self._get_model_list(models.ActionPlan,
self._add_action_plans_filters,
*args, **kwargs)
def create_action_plan(self, values):
# ensure defaults are present for new audits
@@ -912,18 +915,10 @@ class Connection(api.BaseConnection):
# ### EFFICACY INDICATORS ### #
def get_efficacy_indicator_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None,
eager=False):
query = model_query(models.EfficacyIndicator)
if eager:
query = self._set_eager_options(models.EfficacyIndicator, query)
query = self._add_efficacy_indicators_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.EfficacyIndicator, limit, marker,
sort_key, sort_dir, query)
def get_efficacy_indicator_list(self, *args, **kwargs):
return self._get_model_list(models.EfficacyIndicator,
self._add_efficacy_indicators_filters,
*args, **kwargs)
def create_efficacy_indicator(self, values):
# ensure defaults are present for new efficacy indicators
@@ -992,28 +987,10 @@ class Connection(api.BaseConnection):
# ### SCORING ENGINES ### #
def _add_scoring_engine_filters(self, query, filters):
if filters is None:
filters = {}
plain_fields = ['id', 'description']
return self._add_filters(
query=query, model=models.ScoringEngine, filters=filters,
plain_fields=plain_fields)
def get_scoring_engine_list(
self, context, columns=None, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None, eager=False):
query = model_query(models.ScoringEngine)
if eager:
query = self._set_eager_options(models.ScoringEngine, query)
query = self._add_scoring_engine_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.ScoringEngine, limit, marker,
sort_key, sort_dir, query)
def get_scoring_engine_list(self, *args, **kwargs):
return self._get_model_list(models.ScoringEngine,
self._add_scoring_engine_filters,
*args, **kwargs)
def create_scoring_engine(self, values):
# ensure defaults are present for new scoring engines
@@ -1078,26 +1055,10 @@ class Connection(api.BaseConnection):
# ### SERVICES ### #
def _add_services_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'name', 'host']
return self._add_filters(
query=query, model=models.Service, filters=filters,
plain_fields=plain_fields)
def get_service_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.Service)
if eager:
query = self._set_eager_options(models.Service, query)
query = self._add_services_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Service, limit, marker,
sort_key, sort_dir, query)
def get_service_list(self, *args, **kwargs):
return self._get_model_list(models.Service,
self._add_services_filters,
*args, **kwargs)
def create_service(self, values):
try:
@@ -1142,27 +1103,10 @@ class Connection(api.BaseConnection):
# ### ACTION_DESCRIPTIONS ### #
def _add_action_descriptions_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'action_type']
return self._add_filters(
query=query, model=models.ActionDescription, filters=filters,
plain_fields=plain_fields)
def get_action_description_list(self, context, filters=None, limit=None,
marker=None, sort_key=None,
sort_dir=None, eager=False):
query = model_query(models.ActionDescription)
if eager:
query = self._set_eager_options(models.ActionDescription, query)
query = self._add_action_descriptions_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.ActionDescription, limit, marker,
sort_key, sort_dir, query)
def get_action_description_list(self, *args, **kwargs):
return self._get_model_list(models.ActionDescription,
self._add_action_descriptions_filters,
*args, **kwargs)
def create_action_description(self, values):
try:

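The refactor above collapses every per-model `get_*_list` body into one generic `_get_model_list` helper. A minimal, self-contained sketch of the pattern — in-memory dicts stand in for `model_query`/`_paginate_query`, which this diff does not show, so signatures here are illustrative, not Watcher's real ones:

```python
class Connection:
    def __init__(self, rows):
        self._rows = rows  # stand-in for the backing table

    def _add_actions_filters(self, rows, filters):
        # stand-in for the real SQLAlchemy filter helper
        filters = filters or {}
        return [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]

    def _get_model_list(self, add_filters, filters=None, limit=None,
                        show_deleted=False):
        # the shared body: filter, hide soft-deleted rows, paginate
        rows = add_filters(self._rows, filters)
        if not show_deleted:
            rows = [r for r in rows if r.get("deleted_at") is None]
        return rows if limit is None else rows[:limit]

    def get_action_list(self, *args, **kwargs):
        # each per-model method is now a thin delegation
        return self._get_model_list(self._add_actions_filters,
                                    *args, **kwargs)


conn = Connection([
    {"id": 1, "state": "PENDING", "deleted_at": None},
    {"id": 2, "state": "ONGOING", "deleted_at": None},
    {"id": 3, "state": "PENDING", "deleted_at": "2018-06-01"},
])
print([r["id"] for r in conn.get_action_list(filters={"state": "PENDING"})])
# → [1]  (row 3 matches but is soft-deleted)
```

The `show_deleted` flag mirrors `context.show_deleted` in the real code, which the new test below (`temp_context.show_deleted = True`) exercises.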
View File

@@ -63,6 +63,7 @@ class AuditHandler(BaseAuditHandler):
self._strategy_context = default_context.DefaultStrategyContext()
self._planner_manager = planner_manager.PlannerManager()
self._planner = None
self.applier_client = rpcapi.ApplierAPI()
@property
def planner(self):
@@ -74,6 +75,13 @@ class AuditHandler(BaseAuditHandler):
def strategy_context(self):
return self._strategy_context
def do_execute(self, audit, request_context):
# execute the strategy
solution = self.strategy_context.execute_strategy(
audit, request_context)
return solution
def do_schedule(self, request_context, audit, solution):
try:
notifications.audit.send_action_notification(
@@ -118,9 +126,8 @@ class AuditHandler(BaseAuditHandler):
def post_execute(self, audit, solution, request_context):
action_plan = self.do_schedule(request_context, audit, solution)
if audit.auto_trigger:
applier_client = rpcapi.ApplierAPI()
applier_client.launch_action_plan(request_context,
action_plan.uuid)
self.applier_client.launch_action_plan(request_context,
action_plan.uuid)
def execute(self, audit, request_context):
try:

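Two small refactors run through the handler diffs above: the `ApplierAPI` client is created once in `__init__` instead of on every `post_execute` call, and the common `do_execute` body moves into the base handler so `ContinuousAuditHandler` and `OneShotAuditHandler` can delegate via `super()`. A toy sketch of both moves — class bodies here are stand-ins, not Watcher's real signatures:

```python
class ApplierAPI:
    # stand-in for watcher's rpcapi.ApplierAPI
    def launch_action_plan(self, context, uuid):
        return ("launched", uuid)


class BaseAuditHandler:
    def __init__(self):
        # built once here instead of per post_execute call
        self.applier_client = ApplierAPI()

    def do_execute(self, audit, request_context):
        # the strategy-execution body shared by all handlers
        return {"audit": audit}


class ContinuousAuditHandler(BaseAuditHandler):
    def do_execute(self, audit, request_context):
        # reuse the base body, then add continuous-specific work
        solution = super().do_execute(audit, request_context)
        solution["continuous"] = True
        return solution


handler = ContinuousAuditHandler()
print(handler.do_execute("audit-1", None))
# → {'audit': 'audit-1', 'continuous': True}
```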
View File

@@ -71,9 +71,8 @@ class ContinuousAuditHandler(base.AuditHandler):
return False
def do_execute(self, audit, request_context):
# execute the strategy
solution = self.strategy_context.execute_strategy(
audit, request_context)
solution = super(ContinuousAuditHandler, self)\
.do_execute(audit, request_context)
if audit.audit_type == objects.audit.AuditType.CONTINUOUS.value:
a_plan_filters = {'audit_uuid': audit.uuid,

View File

@@ -20,13 +20,6 @@ from watcher import objects
class OneShotAuditHandler(base.AuditHandler):
def do_execute(self, audit, request_context):
# execute the strategy
solution = self.strategy_context.execute_strategy(
audit, request_context)
return solution
def post_execute(self, audit, solution, request_context):
super(OneShotAuditHandler, self).post_execute(audit, solution,
request_context)

View File

@@ -241,3 +241,28 @@ class HardwareMaintenance(base.Goal):
def get_efficacy_specification(cls):
"""The efficacy spec for the current goal"""
return specs.HardwareMaintenance()
class ClusterMaintaining(base.Goal):
"""ClusterMaintenance
This goal is used to maintain compute nodes
without having the user's application being interrupted.
"""
@classmethod
def get_name(cls):
return "cluster_maintaining"
@classmethod
def get_display_name(cls):
return _("Cluster Maintaining")
@classmethod
def get_translatable_display_name(cls):
return "Cluster Maintaining"
@classmethod
def get_efficacy_specification(cls):
"""The efficacy spec for the current goal"""
return specs.Unclassified()

View File

@@ -222,8 +222,21 @@ class ModelBuilder(object):
:param pool: A storage pool
:type pool: :py:class:`~cinderclient.v2.capabilities.Capabilities`
:raises: exception.InvalidPoolAttributeValue
"""
# build up the storage pool.
attrs = ["total_volumes", "total_capacity_gb",
"free_capacity_gb", "provisioned_capacity_gb",
"allocated_capacity_gb"]
for attr in attrs:
try:
int(getattr(pool, attr))
except ValueError:
raise exception.InvalidPoolAttributeValue(
name=pool.name, attribute=attr)
node_attributes = {
"name": pool.name,
"total_volumes": pool.total_volumes,

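The validation added above makes the storage CDM builder fail fast with a Watcher-specific exception when a driver (e.g. VMwareVcVmdkDriver) reports `'unknown'` for a pool capacity attribute, instead of crashing later. A standalone sketch of that check — the exception and pool classes are illustrative stand-ins for `watcher.common.exception.InvalidPoolAttributeValue` and a cinderclient pool:

```python
class InvalidPoolAttributeValue(Exception):
    # stand-in for watcher.common.exception.InvalidPoolAttributeValue
    def __init__(self, name, attribute):
        super().__init__("pool %s: attribute %s is not numeric"
                         % (name, attribute))


class FakePool:
    # capacity attributes as some Cinder drivers may report them
    name = "pool-1"
    total_volumes = 12
    total_capacity_gb = 500
    free_capacity_gb = "unknown"


def validate_pool(pool):
    attrs = ["total_volumes", "total_capacity_gb", "free_capacity_gb"]
    for attr in attrs:
        try:
            int(getattr(pool, attr))
        except ValueError:
            # fail fast with a Watcher-specific exception
            raise InvalidPoolAttributeValue(pool.name, attr)


try:
    validate_pool(FakePool())
except InvalidPoolAttributeValue as exc:
    print(exc)  # → pool pool-1: attribute free_capacity_gb is not numeric
```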
View File

@@ -104,6 +104,18 @@ class NovaClusterDataModelCollector(base.BaseClusterDataModelCollector):
"items": {
"type": "object"
}
},
"projects": {
"type": "array",
"items": {
"type": "object",
"properties": {
"uuid": {
"type": "string"
}
},
"additionalProperties": False
}
}
},
"additionalProperties": False
@@ -348,7 +360,8 @@ class ModelBuilder(object):
"disk_capacity": flavor["disk"],
"vcpus": flavor["vcpus"],
"state": getattr(instance, "OS-EXT-STS:vm_state"),
"metadata": instance.metadata}
"metadata": instance.metadata,
"project_id": instance.tenant_id}
# node_attributes = dict()
# node_attributes["layer"] = "virtual"

View File

@@ -52,6 +52,7 @@ class Instance(compute_resource.ComputeResource):
"disk_capacity": wfields.NonNegativeIntegerField(),
"vcpus": wfields.NonNegativeIntegerField(),
"metadata": wfields.JsonField(),
"project_id": wfields.UUIDField(),
}
def accept(self, visitor):

View File

@@ -76,6 +76,7 @@ class NovaNotification(base.NotificationEndpoint):
'disk': disk_gb,
'disk_capacity': disk_gb,
'metadata': instance_metadata,
'tenant_id': instance_data['tenant_id']
})
try:

View File

@@ -87,6 +87,7 @@ class ComputeScope(base.BaseScope):
instances_to_exclude = kwargs.get('instances')
nodes_to_exclude = kwargs.get('nodes')
instance_metadata = kwargs.get('instance_metadata')
projects_to_exclude = kwargs.get('projects')
for resource in resources:
if 'instances' in resource:
@@ -105,6 +106,9 @@ class ComputeScope(base.BaseScope):
elif 'instance_metadata' in resource:
instance_metadata.extend(
[metadata for metadata in resource['instance_metadata']])
elif 'projects' in resource:
projects_to_exclude.extend(
[project['uuid'] for project in resource['projects']])
def remove_nodes_from_model(self, nodes_to_remove, cluster_model):
for node_uuid in nodes_to_remove:
@@ -144,6 +148,13 @@ class ComputeScope(base.BaseScope):
if str(value).lower() == str(metadata.get(key)).lower():
instances_to_remove.add(uuid)
def exclude_instances_with_given_project(
self, projects_to_exclude, cluster_model, instances_to_exclude):
all_instances = cluster_model.get_all_instances()
for uuid, instance in all_instances.items():
if instance.project_id in projects_to_exclude:
instances_to_exclude.add(uuid)
def get_scoped_model(self, cluster_model):
"""Leave only nodes and instances proposed in the audit scope"""
if not cluster_model:
@@ -154,6 +165,7 @@ class ComputeScope(base.BaseScope):
nodes_to_remove = set()
instances_to_exclude = []
instance_metadata = []
projects_to_exclude = []
compute_scope = []
model_hosts = list(cluster_model.get_all_compute_nodes().keys())
@@ -177,7 +189,8 @@ class ComputeScope(base.BaseScope):
self.exclude_resources(
rule['exclude'], instances=instances_to_exclude,
nodes=nodes_to_exclude,
instance_metadata=instance_metadata)
instance_metadata=instance_metadata,
projects=projects_to_exclude)
instances_to_exclude = set(instances_to_exclude)
if allowed_nodes:
@@ -190,6 +203,10 @@ class ComputeScope(base.BaseScope):
self.exclude_instances_with_given_metadata(
instance_metadata, cluster_model, instances_to_exclude)
if projects_to_exclude:
self.exclude_instances_with_given_project(
projects_to_exclude, cluster_model, instances_to_exclude)
self.update_exclude_instance_in_model(instances_to_exclude,
cluster_model)

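The scope handler above gains a `projects` exclusion rule: any instance whose `project_id` matches an excluded project UUID is marked for removal from the cluster model. A standalone sketch of the matching loop — the `Instance` class and IDs here are made up:

```python
class Instance:
    # minimal stand-in for Watcher's Instance model element
    def __init__(self, uuid, project_id):
        self.uuid = uuid
        self.project_id = project_id


def exclude_instances_with_given_project(projects_to_exclude,
                                         all_instances,
                                         instances_to_exclude):
    # same loop as the method added above, minus the cluster model
    for uuid, instance in all_instances.items():
        if instance.project_id in projects_to_exclude:
            instances_to_exclude.add(uuid)


instances = {
    "i-1": Instance("i-1", "proj-a"),
    "i-2": Instance("i-2", "proj-b"),
    "i-3": Instance("i-3", "proj-a"),
}
excluded = set()
exclude_instances_with_given_project(["proj-a"], instances, excluded)
print(sorted(excluded))  # → ['i-1', 'i-3']
```

Per the schema added to the Nova collector earlier in this change, an audit scope would express this as an `exclude` rule of the form `{"projects": [{"uuid": "<project-uuid>"}]}`.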
View File

@@ -18,6 +18,7 @@ from watcher.decision_engine.strategy.strategies import actuation
from watcher.decision_engine.strategy.strategies import basic_consolidation
from watcher.decision_engine.strategy.strategies import dummy_strategy
from watcher.decision_engine.strategy.strategies import dummy_with_scorer
from watcher.decision_engine.strategy.strategies import host_maintenance
from watcher.decision_engine.strategy.strategies import noisy_neighbor
from watcher.decision_engine.strategy.strategies import outlet_temp_control
from watcher.decision_engine.strategy.strategies import saving_energy
@@ -44,9 +45,10 @@ WorkloadStabilization = workload_stabilization.WorkloadStabilization
UniformAirflow = uniform_airflow.UniformAirflow
NoisyNeighbor = noisy_neighbor.NoisyNeighbor
ZoneMigration = zone_migration.ZoneMigration
HostMaintenance = host_maintenance.HostMaintenance
__all__ = ("Actuator", "BasicConsolidation", "OutletTempControl",
"DummyStrategy", "DummyWithScorer", "VMWorkloadConsolidation",
"WorkloadBalance", "WorkloadStabilization", "UniformAirflow",
"NoisyNeighbor", "SavingEnergy", "StorageCapacityBalance",
"ZoneMigration")
"ZoneMigration", "HostMaintenance")

watcher/decision_engine/strategy/strategies/base.py Normal file → Executable file
View File

@@ -471,3 +471,13 @@ class ZoneMigrationBaseStrategy(BaseStrategy):
@classmethod
def get_goal_name(cls):
return "hardware_maintenance"
@six.add_metaclass(abc.ABCMeta)
class HostMaintenanceBaseStrategy(BaseStrategy):
REASON_FOR_MAINTAINING = 'watcher_maintaining'
@classmethod
def get_goal_name(cls):
return "cluster_maintaining"

View File

@@ -0,0 +1,331 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 chinac.com
#
# Authors: suzhengwei<suzhengwei@chinac.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from oslo_log import log
import six
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
LOG = log.getLogger(__name__)
class HostMaintenance(base.HostMaintenanceBaseStrategy):
"""[PoC]Host Maintenance
*Description*
It is a migration strategy for one compute node maintenance,
without having the user's application been interruptted.
If given one backup node, the strategy will firstly
migrate all instances from the maintenance node to
the backup node. If the backup node is not provided,
it will migrate all instances, relying on nova-scheduler.
*Requirements*
* You must have at least 2 physical compute nodes to run this strategy.
*Limitations*
- This is a proof of concept that is not meant to be used in production
- It migrates all instances from one host to other hosts. It is better to
execute such a strategy when the load is not heavy, and to use this
algorithm with a `ONESHOT` audit.
- It assumes that cold and live migrations are possible
"""
INSTANCE_MIGRATION = "migrate"
CHANGE_NOVA_SERVICE_STATE = "change_nova_service_state"
REASON_FOR_DISABLE = 'watcher_disabled'
def __init__(self, config, osc=None):
super(HostMaintenance, self).__init__(config, osc)
@classmethod
def get_name(cls):
return "host_maintenance"
@classmethod
def get_display_name(cls):
return _("Host Maintenance Strategy")
@classmethod
def get_translatable_display_name(cls):
return "Host Maintenance Strategy"
@classmethod
def get_schema(cls):
return {
"properties": {
"maintenance_node": {
"description": "The name of the compute node which "
"need maintenance",
"type": "string",
},
"backup_node": {
"description": "The name of the compute node which "
"will backup the maintenance node.",
"type": "string",
},
},
"required": ["maintenance_node"],
}
def get_disabled_compute_nodes_with_reason(self, reason=None):
return {uuid: cn for uuid, cn in
self.compute_model.get_all_compute_nodes().items()
if cn.state == element.ServiceState.ONLINE.value and
cn.status == element.ServiceState.DISABLED.value and
cn.disabled_reason == reason}
def get_disabled_compute_nodes(self):
return self.get_disabled_compute_nodes_with_reason(
self.REASON_FOR_DISABLE)
def get_instance_state_str(self, instance):
"""Get instance state in string format"""
if isinstance(instance.state, six.string_types):
return instance.state
elif isinstance(instance.state, element.InstanceState):
return instance.state.value
else:
LOG.error('Unexpected instance state type, '
'state=%(state)s, state_type=%(st)s.',
dict(state=instance.state,
st=type(instance.state)))
raise wexc.WatcherException
def get_node_status_str(self, node):
"""Get node status in string format"""
if isinstance(node.status, six.string_types):
return node.status
elif isinstance(node.status, element.ServiceState):
return node.status.value
else:
LOG.error('Unexpected node status type, '
'status=%(status)s, status_type=%(st)s.',
dict(status=node.status,
st=type(node.status)))
raise wexc.WatcherException
def get_node_capacity(self, node):
"""Collect cpu, ram and disk capacity of a node.
:param node: node object
:return: dict(cpu(cores), ram(MB), disk(B))
"""
return dict(cpu=node.vcpus,
ram=node.memory,
disk=node.disk_capacity)
def get_node_used(self, node):
"""Collect cpu, ram and disk used of a node.
:param node: node object
:return: dict(cpu(cores), ram(MB), disk(B))
"""
vcpus_used = 0
memory_used = 0
disk_used = 0
for instance in self.compute_model.get_node_instances(node):
vcpus_used += instance.vcpus
memory_used += instance.memory
disk_used += instance.disk
return dict(cpu=vcpus_used,
ram=memory_used,
disk=disk_used)
def get_node_free(self, node):
"""Collect cpu, ram and disk free of a node.
:param node: node object
:return: dict(cpu(cores), ram(MB), disk(B))
"""
node_capacity = self.get_node_capacity(node)
node_used = self.get_node_used(node)
return dict(cpu=node_capacity['cpu']-node_used['cpu'],
ram=node_capacity['ram']-node_used['ram'],
disk=node_capacity['disk']-node_used['disk'],
)
def host_fits(self, source_node, destination_node):
"""check host fits
return True if VMs could intensively migrate
from source_node to destination_node.
"""
source_node_used = self.get_node_used(source_node)
destination_node_free = self.get_node_free(destination_node)
metrics = ['cpu', 'ram']
for m in metrics:
if source_node_used[m] > destination_node_free[m]:
return False
return True
def add_action_enable_compute_node(self, node):
"""Add an action for node enabler into the solution."""
params = {'state': element.ServiceState.ENABLED.value}
self.solution.add_action(
action_type=self.CHANGE_NOVA_SERVICE_STATE,
resource_id=node.uuid,
input_parameters=params)
def add_action_maintain_compute_node(self, node):
"""Add an action for node maintenance into the solution."""
params = {'state': element.ServiceState.DISABLED.value,
'disabled_reason': self.REASON_FOR_MAINTAINING}
self.solution.add_action(
action_type=self.CHANGE_NOVA_SERVICE_STATE,
resource_id=node.uuid,
input_parameters=params)
def enable_compute_node_if_disabled(self, node):
node_status_str = self.get_node_status_str(node)
if node_status_str != element.ServiceState.ENABLED.value:
self.add_action_enable_compute_node(node)
def instance_migration(self, instance, src_node, des_node=None):
"""Add an action for instance migration into the solution.
:param instance: instance object
:param src_node: node object
:param des_node: node object. if None, the instance will be
migrated relying on nova-scheduler
:return: None
"""
instance_state_str = self.get_instance_state_str(instance)
if instance_state_str == element.InstanceState.ACTIVE.value:
migration_type = 'live'
else:
migration_type = 'cold'
params = {'migration_type': migration_type,
'source_node': src_node.uuid}
if des_node:
params['destination_node'] = des_node.uuid
self.solution.add_action(action_type=self.INSTANCE_MIGRATION,
resource_id=instance.uuid,
input_parameters=params)
def host_migration(self, source_node, destination_node):
"""host migration
Migrate all instances from source_node to destination_node.
Active instances use "live-migrate",
and other instances use "cold-migrate"
"""
instances = self.compute_model.get_node_instances(source_node)
for instance in instances:
self.instance_migration(instance, source_node, destination_node)
def safe_maintain(self, maintenance_node, backup_node=None):
"""safe maintain one compute node
Migrate all instances of the maintenance_node intensively to the
backup host. If users didn't give the backup host, it will select
one unused node to backup the maintaining node.
It calculate the resource both of the backup node and maintaining
node to evaluate the migrations from maintaining node to backup node.
If all instances of the maintaining node can migrated to
the backup node, it will set the maintaining node in
'watcher_maintaining' status., and add the migrations to solution.
"""
# If the user gives a backup node with the required capacity, then
# migrate all instances from the maintaining node to the backup node.
if backup_node:
if self.host_fits(maintenance_node, backup_node):
self.enable_compute_node_if_disabled(backup_node)
self.add_action_maintain_compute_node(maintenance_node)
self.host_migration(maintenance_node, backup_node)
return True
# If the user didn't give a backup host, select one unused node
# with the required capacity, then migrate all instances
# from the maintaining node to it.
nodes = sorted(
self.get_disabled_compute_nodes().values(),
key=lambda x: self.get_node_capacity(x)['cpu'])
if maintenance_node in nodes:
nodes.remove(maintenance_node)
for node in nodes:
if self.host_fits(maintenance_node, node):
self.enable_compute_node_if_disabled(node)
self.add_action_maintain_compute_node(maintenance_node)
self.host_migration(maintenance_node, node)
return True
return False
def try_maintain(self, maintenance_node):
"""try to maintain one compute node
It firstly set the maintenance_node in 'watcher_maintaining' status.
Then try to migrate all instances of the maintenance node, rely
on nova-scheduler.
"""
self.add_action_maintain_compute_node(maintenance_node)
instances = self.compute_model.get_node_instances(maintenance_node)
for instance in instances:
self.instance_migration(instance, maintenance_node)
def pre_execute(self):
LOG.debug(self.compute_model.to_string())
if not self.compute_model:
raise wexc.ClusterStateNotDefined()
if self.compute_model.stale:
raise wexc.ClusterStateStale()
def do_execute(self):
LOG.info(_('Executing Host Maintenance Migration Strategy'))
maintenance_node = self.input_parameters.get('maintenance_node')
backup_node = self.input_parameters.get('backup_node')
# if no VMs in the maintenance_node, just maintain the compute node
src_node = self.compute_model.get_node_by_uuid(maintenance_node)
if len(self.compute_model.get_node_instances(src_node)) == 0:
if (src_node.disabled_reason !=
self.REASON_FOR_MAINTAINING):
self.add_action_maintain_compute_node(src_node)
return
if backup_node:
des_node = self.compute_model.get_node_by_uuid(backup_node)
else:
des_node = None
if not self.safe_maintain(src_node, des_node):
self.try_maintain(src_node)
def post_execute(self):
"""Post-execution phase
This can be used to compute the global efficacy
"""
LOG.debug(self.solution.actions)
LOG.debug(self.compute_model.to_string())

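The `safe_maintain` path above hinges on the `host_fits` capacity check: a bulk migration to a backup node is only planned when the backup's free cpu and ram cover everything the maintenance node is using. A reduced, dict-based sketch of that arithmetic — the numbers are invented:

```python
def node_free(capacity, used):
    # free = capacity - used, per metric (as in get_node_free above)
    return {k: capacity[k] - used[k] for k in capacity}


def host_fits(source_used, destination_free, metrics=("cpu", "ram")):
    # the strategy only compares cpu and ram, mirroring host_fits above
    return all(source_used[m] <= destination_free[m] for m in metrics)


backup_capacity = {"cpu": 16, "ram": 32768}
backup_used = {"cpu": 6, "ram": 8192}
maintenance_used = {"cpu": 8, "ram": 16384}

print(host_fits(maintenance_used, node_free(backup_capacity, backup_used)))
# → True  (8 <= 10 cores and 16384 <= 24576 MB)
```

When no candidate node fits, the strategy falls back to `try_maintain`, which migrates instances one by one and leaves placement to nova-scheduler.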
View File

@@ -110,7 +110,7 @@ class StorageCapacityBalance(base.WorkloadStabilizationBaseStrategy):
return filtered_pools
def get_volumes(self, cinder):
"""Get all volumes with staus in available or in-use and no snapshot.
"""Get all volumes with status in available or in-use and no snapshot.
:param cinder: cinder client
:return: all volumes

View File

@@ -137,6 +137,12 @@ class FunctionalTest(base.DbTestCase):
headers=headers, extra_environ=extra_environ,
status=status, method="put")
def post(self, *args, **kwargs):
headers = kwargs.pop('headers', {})
headers.setdefault('Accept', 'application/json')
kwargs['headers'] = headers
return self.app.post(*args, **kwargs)
def post_json(self, path, params, expect_errors=False, headers=None,
extra_environ=None, status=None):
"""Sends simulated HTTP POST request to Pecan test app.

View File

@@ -363,6 +363,53 @@ class TestDelete(api_base.FunctionalTest):
self.assertTrue(response.json['error_message'])
class TestStart(api_base.FunctionalTest):
def setUp(self):
super(TestStart, self).setUp()
obj_utils.create_test_goal(self.context)
obj_utils.create_test_strategy(self.context)
obj_utils.create_test_audit(self.context)
self.action_plan = obj_utils.create_test_action_plan(
self.context, state=objects.action_plan.State.RECOMMENDED)
p = mock.patch.object(db_api.BaseConnection, 'update_action_plan')
self.mock_action_plan_update = p.start()
self.mock_action_plan_update.side_effect = \
self._simulate_rpc_action_plan_update
self.addCleanup(p.stop)
def _simulate_rpc_action_plan_update(self, action_plan):
action_plan.save()
return action_plan
@mock.patch('watcher.common.policy.enforce')
def test_start_action_plan_not_found(self, mock_policy):
mock_policy.return_value = True
uuid = utils.generate_uuid()
response = self.post('/v1/action_plans/%s/%s' %
(uuid, 'start'), expect_errors=True)
self.assertEqual(404, response.status_int)
self.assertEqual('application/json', response.content_type)
self.assertTrue(response.json['error_message'])
@mock.patch('watcher.common.policy.enforce')
def test_start_action_plan(self, mock_policy):
mock_policy.return_value = True
action = obj_utils.create_test_action(
self.context, id=1)
self.action_plan.state = objects.action_plan.State.SUCCEEDED
response = self.post('/v1/action_plans/%s/%s/'
% (self.action_plan.uuid, 'start'),
expect_errors=True)
self.assertEqual(200, response.status_int)
act_response = self.get_json(
'/actions/%s' % action.uuid,
expect_errors=True)
self.assertEqual(200, act_response.status_int)
self.assertEqual('PENDING', act_response.json['state'])
self.assertEqual('application/json', act_response.content_type)
class TestPatch(api_base.FunctionalTest):
def setUp(self):
@@ -568,7 +615,6 @@ class TestPatchStateTransitionOk(api_base.FunctionalTest):
'/action_plans/%s' % action_plan.uuid,
[{'path': '/state', 'value': self.new_state, 'op': 'replace'}])
updated_ap = self.get_json('/action_plans/%s' % action_plan.uuid)
self.assertNotEqual(self.new_state, initial_ap['state'])
self.assertEqual(self.new_state, updated_ap['state'])
self.assertEqual('application/json', response.content_type)
@@ -642,4 +688,5 @@ class TestActionPlanPolicyEnforcementWithAdminContext(TestListActionPlan,
"action_plan:detail": "rule:default",
"action_plan:get": "rule:default",
"action_plan:get_all": "rule:default",
"action_plan:update": "rule:default"})
"action_plan:update": "rule:default",
"action_plan:start": "rule:default"})

View File

@@ -45,7 +45,7 @@ class TestClients(base.TestCase):
# ka_loading.load_auth_from_conf_options(CONF, _AUTH_CONF_GROUP)
# ka_loading.load_session_from_conf_options(CONF, _AUTH_CONF_GROUP)
# CONF.set_override(
# 'auth-url', 'http://server.ip:35357', group=_AUTH_CONF_GROUP)
# 'auth-url', 'http://server.ip:5000', group=_AUTH_CONF_GROUP)
# If we don't clean up the _AUTH_CONF_GROUP conf options, then other
# tests that run after this one will fail, complaining about required
@@ -68,7 +68,7 @@ class TestClients(base.TestCase):
expected = {'username': 'foousername',
'password': 'foopassword',
'auth_url': 'http://server.ip:35357',
'auth_url': 'http://server.ip:5000',
'cafile': None,
'certfile': None,
'keyfile': None,
@@ -99,7 +99,7 @@ class TestClients(base.TestCase):
expected = {'username': 'foousername',
'password': 'foopassword',
'auth_url': 'http://server.ip:35357',
'auth_url': 'http://server.ip:5000',
'user_domain_id': 'foouserdomainid',
'project_domain_id': 'fooprojdomainid'}
@@ -360,7 +360,7 @@ class TestClients(base.TestCase):
mock_call.assert_called_once_with(
CONF.monasca_client.api_version,
'test_endpoint',
auth_url='http://server.ip:35357', cert_file=None, insecure=False,
auth_url='http://server.ip:5000', cert_file=None, insecure=False,
key_file=None, keystone_timeout=None, os_cacert=None,
password='foopassword', service_type='monitoring',
token='test_token', username='foousername')

View File

@@ -233,16 +233,6 @@ class TestDbActionFilters(base.DbTestCase):
class DbActionTestCase(base.DbTestCase):
def _create_test_action(self, **kwargs):
action = utils.get_test_action(**kwargs)
self.dbapi.create_action(action)
return action
def _create_test_action_plan(self, **kwargs):
action_plan = utils.get_test_action_plan(**kwargs)
self.dbapi.create_action_plan(action_plan)
return action_plan
def test_get_action_list(self):
uuids = []
for _ in range(1, 4):
@@ -274,33 +264,44 @@ class DbActionTestCase(base.DbTestCase):
def test_get_action_list_with_filters(self):
audit = utils.create_test_audit(uuid=w_utils.generate_uuid())
action_plan = self._create_test_action_plan(
action_plan = utils.create_test_action_plan(
id=1,
uuid=w_utils.generate_uuid(),
audit_id=audit.id,
parents=None,
state=objects.action_plan.State.RECOMMENDED)
action1 = self._create_test_action(
action1 = utils.create_test_action(
id=1,
action_plan_id=1,
action_plan_id=action_plan['id'],
description='description action 1',
uuid=w_utils.generate_uuid(),
parents=None,
state=objects.action_plan.State.PENDING)
action2 = self._create_test_action(
action2 = utils.create_test_action(
id=2,
action_plan_id=2,
description='description action 2',
uuid=w_utils.generate_uuid(),
parents=[action1['uuid']],
state=objects.action_plan.State.PENDING)
action3 = self._create_test_action(
action3 = utils.create_test_action(
id=3,
action_plan_id=1,
action_plan_id=action_plan['id'],
description='description action 3',
uuid=w_utils.generate_uuid(),
parents=[action2['uuid']],
state=objects.action_plan.State.ONGOING)
action4 = utils.create_test_action(
id=4,
action_plan_id=action_plan['id'],
description='description action 4',
uuid=w_utils.generate_uuid(),
parents=None,
state=objects.action_plan.State.ONGOING)
self.dbapi.soft_delete_action(action4['uuid'])
res = self.dbapi.get_action_list(
self.context,
filters={'state': objects.action_plan.State.ONGOING})
@@ -322,6 +323,15 @@ class DbActionTestCase(base.DbTestCase):
sorted([action1['id'], action3['id']]),
sorted([r.id for r in res]))
temp_context = self.context
temp_context.show_deleted = True
res = self.dbapi.get_action_list(
temp_context,
filters={'action_plan_uuid': action_plan['uuid']})
self.assertEqual(
sorted([action1['id'], action3['id'], action4['id']]),
sorted([r.id for r in res]))
res = self.dbapi.get_action_list(
self.context,
filters={'audit_uuid': audit.uuid})
@@ -329,7 +339,7 @@ class DbActionTestCase(base.DbTestCase):
self.assertEqual(action_plan['id'], action.action_plan_id)
def test_get_action_list_with_filter_by_uuid(self):
action = self._create_test_action()
action = utils.create_test_action()
res = self.dbapi.get_action_list(
self.context, filters={'uuid': action["uuid"]})
@@ -337,12 +347,12 @@ class DbActionTestCase(base.DbTestCase):
self.assertEqual(action['uuid'], res[0].uuid)
def test_get_action_by_id(self):
action = self._create_test_action()
action = utils.create_test_action()
action = self.dbapi.get_action_by_id(self.context, action['id'])
self.assertEqual(action['uuid'], action.uuid)
def test_get_action_by_uuid(self):
action = self._create_test_action()
action = utils.create_test_action()
action = self.dbapi.get_action_by_uuid(self.context, action['uuid'])
self.assertEqual(action['id'], action.id)
@@ -351,7 +361,7 @@ class DbActionTestCase(base.DbTestCase):
self.dbapi.get_action_by_id, self.context, 1234)
def test_update_action(self):
action = self._create_test_action()
action = utils.create_test_action()
res = self.dbapi.update_action(
action['id'], {'state': objects.action_plan.State.CANCELLED})
self.assertEqual(objects.action_plan.State.CANCELLED, res.state)
@@ -361,13 +371,13 @@ class DbActionTestCase(base.DbTestCase):
self.dbapi.update_action, 1234, {'state': ''})
def test_update_action_uuid(self):
action = self._create_test_action()
action = utils.create_test_action()
self.assertRaises(exception.Invalid,
self.dbapi.update_action, action['id'],
{'uuid': 'hello'})
def test_destroy_action(self):
action = self._create_test_action()
action = utils.create_test_action()
self.dbapi.destroy_action(action['id'])
self.assertRaises(exception.ActionNotFound,
self.dbapi.get_action_by_id,
@@ -375,7 +385,7 @@ class DbActionTestCase(base.DbTestCase):
def test_destroy_action_by_uuid(self):
uuid = w_utils.generate_uuid()
self._create_test_action(uuid=uuid)
utils.create_test_action(uuid=uuid)
self.assertIsNotNone(self.dbapi.get_action_by_uuid(self.context,
uuid))
self.dbapi.destroy_action(uuid)
@@ -388,7 +398,7 @@ class DbActionTestCase(base.DbTestCase):
def test_create_action_already_exists(self):
uuid = w_utils.generate_uuid()
self._create_test_action(id=1, uuid=uuid)
utils.create_test_action(id=1, uuid=uuid)
self.assertRaises(exception.ActionAlreadyExists,
self._create_test_action,
utils.create_test_action,
id=2, uuid=uuid)

View File

@@ -232,11 +232,6 @@ class TestDbActionDescriptionFilters(base.DbTestCase):
class DbActionDescriptionTestCase(base.DbTestCase):
def _create_test_action_desc(self, **kwargs):
action_desc = utils.get_test_action_desc(**kwargs)
self.dbapi.create_action_description(action_desc)
return action_desc
def test_get_action_desc_list(self):
ids = []
for i in range(1, 4):
@@ -250,12 +245,12 @@ class DbActionDescriptionTestCase(base.DbTestCase):
self.assertEqual(sorted(ids), sorted(action_desc_ids))
def test_get_action_desc_list_with_filters(self):
action_desc1 = self._create_test_action_desc(
action_desc1 = utils.create_test_action_desc(
id=1,
action_type="action_1",
description="description_1",
)
action_desc2 = self._create_test_action_desc(
action_desc2 = utils.create_test_action_desc(
id=2,
action_type="action_2",
description="description_2",
@@ -275,7 +270,7 @@ class DbActionDescriptionTestCase(base.DbTestCase):
self.assertEqual([action_desc2['id']], [r.id for r in res])
def test_get_action_desc_by_type(self):
created_action_desc = self._create_test_action_desc()
created_action_desc = utils.create_test_action_desc()
action_desc = self.dbapi.get_action_description_by_type(
self.context, created_action_desc['action_type'])
self.assertEqual(action_desc.action_type,
@@ -287,7 +282,7 @@ class DbActionDescriptionTestCase(base.DbTestCase):
self.context, 404)
def test_update_action_desc(self):
action_desc = self._create_test_action_desc()
action_desc = utils.create_test_action_desc()
res = self.dbapi.update_action_description(
action_desc['id'], {'description': 'description_test'})
self.assertEqual('description_test', res.description)


@@ -230,16 +230,6 @@ class TestDbActionPlanFilters(base.DbTestCase):
class DbActionPlanTestCase(base.DbTestCase):
def _create_test_audit(self, **kwargs):
audit = utils.get_test_audit(**kwargs)
self.dbapi.create_audit(audit)
return audit
def _create_test_action_plan(self, **kwargs):
action_plan = utils.get_test_action_plan(**kwargs)
self.dbapi.create_action_plan(action_plan)
return action_plan
def test_get_action_plan_list(self):
uuids = []
for _ in range(1, 4):
@@ -274,21 +264,30 @@ class DbActionPlanTestCase(base.DbTestCase):
self.assertEqual(audit.as_dict(), eager_action_plan.audit.as_dict())
def test_get_action_plan_list_with_filters(self):
audit = self._create_test_audit(
audit = utils.create_test_audit(
id=2,
audit_type='ONESHOT',
uuid=w_utils.generate_uuid(),
state=ap_objects.State.ONGOING)
action_plan1 = self._create_test_action_plan(
action_plan1 = utils.create_test_action_plan(
id=1,
uuid=w_utils.generate_uuid(),
audit_id=audit['id'],
state=ap_objects.State.RECOMMENDED)
action_plan2 = self._create_test_action_plan(
action_plan2 = utils.create_test_action_plan(
id=2,
uuid=w_utils.generate_uuid(),
audit_id=audit['id'],
state=ap_objects.State.ONGOING)
action_plan3 = utils.create_test_action_plan(
id=3,
uuid=w_utils.generate_uuid(),
audit_id=audit['id'],
state=ap_objects.State.RECOMMENDED)
# check on bug 1761956
self.dbapi.soft_delete_action_plan(action_plan3['uuid'])
res = self.dbapi.get_action_plan_list(
self.context,
@@ -303,7 +302,9 @@ class DbActionPlanTestCase(base.DbTestCase):
res = self.dbapi.get_action_plan_list(
self.context,
filters={'audit_uuid': audit['uuid']})
self.assertEqual(
sorted([action_plan1['id'], action_plan2['id']]),
sorted([r.id for r in res]))
for r in res:
self.assertEqual(audit['id'], r.audit_id)
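The new `action_plan3` together with the `soft_delete_action_plan` call (the bug 1761956 check) asserts that soft-deleted plans no longer appear in filtered listings. A minimal self-contained sketch of that behaviour, with a hypothetical in-memory table in place of the real DB API:

```python
# Hypothetical sketch of the soft-delete regression check (bug 1761956):
# a soft-deleted row must be excluded from filtered list results unless
# the caller explicitly asks for deleted rows.

rows = []  # stand-in for the action_plans table


def create_action_plan(uuid, audit_id):
    row = {'uuid': uuid, 'audit_id': audit_id, 'deleted': False}
    rows.append(row)
    return row


def soft_delete_action_plan(uuid):
    # Mark the row deleted instead of removing it, as soft delete does.
    for row in rows:
        if row['uuid'] == uuid:
            row['deleted'] = True


def get_action_plan_list(audit_id, show_deleted=False):
    return [r for r in rows
            if r['audit_id'] == audit_id
            and (show_deleted or not r['deleted'])]
```

The test above follows the same shape: create three plans against one audit, soft-delete the third, and assert that only the first two come back from the filtered query.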
@@ -316,7 +317,7 @@ class DbActionPlanTestCase(base.DbTestCase):
self.assertNotEqual([action_plan1['id']], [r.id for r in res])
def test_get_action_plan_list_with_filter_by_uuid(self):
action_plan = self._create_test_action_plan()
action_plan = utils.create_test_action_plan()
res = self.dbapi.get_action_plan_list(
self.context, filters={'uuid': action_plan["uuid"]})
@@ -324,13 +325,13 @@ class DbActionPlanTestCase(base.DbTestCase):
self.assertEqual(action_plan['uuid'], res[0].uuid)
def test_get_action_plan_by_id(self):
action_plan = self._create_test_action_plan()
action_plan = utils.create_test_action_plan()
action_plan = self.dbapi.get_action_plan_by_id(
self.context, action_plan['id'])
self.assertEqual(action_plan['uuid'], action_plan.uuid)
def test_get_action_plan_by_uuid(self):
action_plan = self._create_test_action_plan()
action_plan = utils.create_test_action_plan()
action_plan = self.dbapi.get_action_plan_by_uuid(
self.context, action_plan['uuid'])
self.assertEqual(action_plan['id'], action_plan.id)
@@ -340,7 +341,7 @@ class DbActionPlanTestCase(base.DbTestCase):
self.dbapi.get_action_plan_by_id, self.context, 1234)
def test_update_action_plan(self):
action_plan = self._create_test_action_plan()
action_plan = utils.create_test_action_plan()
res = self.dbapi.update_action_plan(
action_plan['id'], {'name': 'updated-model'})
self.assertEqual('updated-model', res.name)
@@ -350,13 +351,13 @@ class DbActionPlanTestCase(base.DbTestCase):
self.dbapi.update_action_plan, 1234, {'name': ''})
def test_update_action_plan_uuid(self):
action_plan = self._create_test_action_plan()
action_plan = utils.create_test_action_plan()
self.assertRaises(exception.Invalid,
self.dbapi.update_action_plan, action_plan['id'],
{'uuid': 'hello'})
def test_destroy_action_plan(self):
action_plan = self._create_test_action_plan()
action_plan = utils.create_test_action_plan()
self.dbapi.destroy_action_plan(action_plan['id'])
self.assertRaises(exception.ActionPlanNotFound,
self.dbapi.get_action_plan_by_id,
@@ -364,7 +365,7 @@ class DbActionPlanTestCase(base.DbTestCase):
def test_destroy_action_plan_by_uuid(self):
uuid = w_utils.generate_uuid()
self._create_test_action_plan(uuid=uuid)
utils.create_test_action_plan(uuid=uuid)
self.assertIsNotNone(self.dbapi.get_action_plan_by_uuid(
self.context, uuid))
self.dbapi.destroy_action_plan(uuid)
@@ -377,7 +378,7 @@ class DbActionPlanTestCase(base.DbTestCase):
self.dbapi.destroy_action_plan, 1234)
def test_destroy_action_plan_that_referenced_by_actions(self):
action_plan = self._create_test_action_plan()
action_plan = utils.create_test_action_plan()
action = utils.create_test_action(action_plan_id=action_plan['id'])
self.assertEqual(action_plan['id'], action.action_plan_id)
self.assertRaises(exception.ActionPlanReferenced,
@@ -385,7 +386,7 @@ class DbActionPlanTestCase(base.DbTestCase):
def test_create_action_plan_already_exists(self):
uuid = w_utils.generate_uuid()
self._create_test_action_plan(id=1, uuid=uuid)
utils.create_test_action_plan(id=1, uuid=uuid)
self.assertRaises(exception.ActionPlanAlreadyExists,
self._create_test_action_plan,
utils.create_test_action_plan,
id=2, uuid=uuid)


@@ -265,11 +265,6 @@ class TestDbAuditFilters(base.DbTestCase):
class DbAuditTestCase(base.DbTestCase):
def _create_test_audit(self, **kwargs):
audit = utils.get_test_audit(**kwargs)
self.dbapi.create_audit(audit)
return audit
def test_get_audit_list(self):
uuids = []
for id_ in range(1, 4):
@@ -304,25 +299,40 @@ class DbAuditTestCase(base.DbTestCase):
self.assertEqual(strategy.as_dict(), eager_audit.strategy.as_dict())
def test_get_audit_list_with_filters(self):
audit1 = self._create_test_audit(
goal = utils.create_test_goal(name='DUMMY')
audit1 = utils.create_test_audit(
id=1,
audit_type=objects.audit.AuditType.ONESHOT.value,
uuid=w_utils.generate_uuid(),
name='My Audit {0}'.format(1),
state=objects.audit.State.ONGOING)
audit2 = self._create_test_audit(
state=objects.audit.State.ONGOING,
goal_id=goal['id'])
audit2 = utils.create_test_audit(
id=2,
audit_type='CONTINUOUS',
audit_type=objects.audit.AuditType.CONTINUOUS.value,
uuid=w_utils.generate_uuid(),
state=objects.audit.State.PENDING)
name='My Audit {0}'.format(2),
state=objects.audit.State.PENDING,
goal_id=goal['id'])
audit3 = utils.create_test_audit(
id=3,
audit_type=objects.audit.AuditType.CONTINUOUS.value,
uuid=w_utils.generate_uuid(),
name='My Audit {0}'.format(3),
state=objects.audit.State.ONGOING,
goal_id=goal['id'])
self.dbapi.soft_delete_audit(audit3['uuid'])
res = self.dbapi.get_audit_list(
self.context,
filters={'audit_type': objects.audit.AuditType.ONESHOT.value})
self.assertEqual([audit1['id']], [r.id for r in res])
res = self.dbapi.get_audit_list(self.context,
filters={'audit_type': 'bad-type'})
res = self.dbapi.get_audit_list(
self.context,
filters={'audit_type': 'bad-type'})
self.assertEqual([], [r.id for r in res])
res = self.dbapi.get_audit_list(
@@ -335,8 +345,22 @@ class DbAuditTestCase(base.DbTestCase):
filters={'state': objects.audit.State.PENDING})
self.assertEqual([audit2['id']], [r.id for r in res])
res = self.dbapi.get_audit_list(
self.context,
filters={'goal_name': 'DUMMY'})
self.assertEqual(sorted([audit1['id'], audit2['id']]),
sorted([r.id for r in res]))
temp_context = self.context
temp_context.show_deleted = True
res = self.dbapi.get_audit_list(
temp_context,
filters={'goal_name': 'DUMMY'})
self.assertEqual(sorted([audit1['id'], audit2['id'], audit3['id']]),
sorted([r.id for r in res]))
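The added assertions above flip `show_deleted` on the request context to make the soft-deleted audit visible again. A hedged sketch of how such a context flag can gate list queries (class names and storage are illustrative, not Watcher's actual implementation):

```python
# Hypothetical sketch: a request context whose show_deleted flag controls
# whether soft-deleted rows are returned by list queries.

class Context:
    def __init__(self, show_deleted=False):
        self.show_deleted = show_deleted


AUDITS = [
    {'id': 1, 'goal_name': 'DUMMY', 'deleted': False},
    {'id': 2, 'goal_name': 'DUMMY', 'deleted': False},
    {'id': 3, 'goal_name': 'DUMMY', 'deleted': True},  # soft-deleted
]


def get_audit_list(context, filters):
    return [a for a in AUDITS
            if all(a.get(k) == v for k, v in filters.items())
            and (context.show_deleted or not a['deleted'])]
```

Mutating `show_deleted` on a copy of the test context, as the diff does with `temp_context`, widens the same query from two results to three without changing the filter itself.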
def test_get_audit_list_with_filter_by_uuid(self):
audit = self._create_test_audit()
audit = utils.create_test_audit()
res = self.dbapi.get_audit_list(
self.context, filters={'uuid': audit["uuid"]})
@@ -344,12 +368,12 @@ class DbAuditTestCase(base.DbTestCase):
self.assertEqual(audit['uuid'], res[0].uuid)
def test_get_audit_by_id(self):
audit = self._create_test_audit()
audit = utils.create_test_audit()
audit = self.dbapi.get_audit_by_id(self.context, audit['id'])
self.assertEqual(audit['uuid'], audit.uuid)
def test_get_audit_by_uuid(self):
audit = self._create_test_audit()
audit = utils.create_test_audit()
audit = self.dbapi.get_audit_by_uuid(self.context, audit['uuid'])
self.assertEqual(audit['id'], audit.id)
@@ -358,7 +382,7 @@ class DbAuditTestCase(base.DbTestCase):
self.dbapi.get_audit_by_id, self.context, 1234)
def test_update_audit(self):
audit = self._create_test_audit()
audit = utils.create_test_audit()
res = self.dbapi.update_audit(audit['id'], {'name': 'updated-model'})
self.assertEqual('updated-model', res.name)
@@ -367,20 +391,20 @@ class DbAuditTestCase(base.DbTestCase):
self.dbapi.update_audit, 1234, {'name': ''})
def test_update_audit_uuid(self):
audit = self._create_test_audit()
audit = utils.create_test_audit()
self.assertRaises(exception.Invalid,
self.dbapi.update_audit, audit['id'],
{'uuid': 'hello'})
def test_destroy_audit(self):
audit = self._create_test_audit()
audit = utils.create_test_audit()
self.dbapi.destroy_audit(audit['id'])
self.assertRaises(exception.AuditNotFound,
self.dbapi.get_audit_by_id,
self.context, audit['id'])
def test_destroy_audit_by_uuid(self):
audit = self._create_test_audit()
audit = utils.create_test_audit()
self.assertIsNotNone(self.dbapi.get_audit_by_uuid(self.context,
audit['uuid']))
self.dbapi.destroy_audit(audit['uuid'])
@@ -393,7 +417,7 @@ class DbAuditTestCase(base.DbTestCase):
self.dbapi.destroy_audit, 1234)
def test_destroy_audit_that_referenced_by_action_plans(self):
audit = self._create_test_audit()
audit = utils.create_test_audit()
action_plan = utils.create_test_action_plan(audit_id=audit['id'])
self.assertEqual(audit['id'], action_plan.audit_id)
self.assertRaises(exception.AuditReferenced,
@@ -401,9 +425,9 @@ class DbAuditTestCase(base.DbTestCase):
def test_create_audit_already_exists(self):
uuid = w_utils.generate_uuid()
self._create_test_audit(id=1, uuid=uuid)
utils.create_test_audit(id=1, uuid=uuid)
self.assertRaises(exception.AuditAlreadyExists,
self._create_test_audit,
utils.create_test_audit,
id=2, uuid=uuid)
def test_create_same_name_audit(self):


@@ -225,16 +225,6 @@ class TestDbAuditTemplateFilters(base.DbTestCase):
class DbAuditTemplateTestCase(base.DbTestCase):
def _create_test_goal(self, **kwargs):
goal = utils.get_test_goal(**kwargs)
self.dbapi.create_goal(goal)
return goal
def _create_test_audit_template(self, **kwargs):
audit_template = utils.get_test_audit_template(**kwargs)
self.dbapi.create_audit_template(audit_template)
return audit_template
def test_get_audit_template_list(self):
uuids = []
for i in range(1, 4):
@@ -273,33 +263,55 @@ class DbAuditTemplateTestCase(base.DbTestCase):
strategy.as_dict(), eager_audit_template.strategy.as_dict())
def test_get_audit_template_list_with_filters(self):
goal = self._create_test_goal(name='DUMMY')
audit_template1 = self._create_test_audit_template(
goal = utils.create_test_goal(name='DUMMY')
audit_template1 = utils.create_test_audit_template(
id=1,
uuid=w_utils.generate_uuid(),
name='My Audit Template 1',
description='Description of my audit template 1',
goal_id=goal['id'])
audit_template2 = self._create_test_audit_template(
audit_template2 = utils.create_test_audit_template(
id=2,
uuid=w_utils.generate_uuid(),
name='My Audit Template 2',
description='Description of my audit template 2',
goal_id=goal['id'])
audit_template3 = utils.create_test_audit_template(
id=3,
uuid=w_utils.generate_uuid(),
name='My Audit Template 3',
description='Description of my audit template 3',
goal_id=goal['id'])
self.dbapi.soft_delete_audit_template(audit_template3['uuid'])
res = self.dbapi.get_audit_template_list(
self.context, filters={'name': 'My Audit Template 1'})
self.context,
filters={'name': 'My Audit Template 1'})
self.assertEqual([audit_template1['id']], [r.id for r in res])
res = self.dbapi.get_audit_template_list(
self.context, filters={'name': 'Does not exist'})
self.context,
filters={'name': 'Does not exist'})
self.assertEqual([], [r.id for r in res])
res = self.dbapi.get_audit_template_list(
self.context,
filters={'goal': 'DUMMY'})
self.assertEqual([audit_template1['id'], audit_template2['id']],
[r.id for r in res])
filters={'goal_name': 'DUMMY'})
self.assertEqual(
sorted([audit_template1['id'], audit_template2['id']]),
sorted([r.id for r in res]))
temp_context = self.context
temp_context.show_deleted = True
res = self.dbapi.get_audit_template_list(
temp_context,
filters={'goal_name': 'DUMMY'})
self.assertEqual(
sorted([audit_template1['id'], audit_template2['id'],
audit_template3['id']]),
sorted([r.id for r in res]))
res = self.dbapi.get_audit_template_list(
self.context,
@@ -307,7 +319,7 @@ class DbAuditTemplateTestCase(base.DbTestCase):
self.assertEqual([audit_template2['id']], [r.id for r in res])
def test_get_audit_template_list_with_filter_by_uuid(self):
audit_template = self._create_test_audit_template()
audit_template = utils.create_test_audit_template()
res = self.dbapi.get_audit_template_list(
self.context, filters={'uuid': audit_template["uuid"]})
@@ -315,13 +327,13 @@ class DbAuditTemplateTestCase(base.DbTestCase):
self.assertEqual(audit_template['uuid'], res[0].uuid)
def test_get_audit_template_by_id(self):
audit_template = self._create_test_audit_template()
audit_template = utils.create_test_audit_template()
audit_template = self.dbapi.get_audit_template_by_id(
self.context, audit_template['id'])
self.assertEqual(audit_template['uuid'], audit_template.uuid)
def test_get_audit_template_by_uuid(self):
audit_template = self._create_test_audit_template()
audit_template = utils.create_test_audit_template()
audit_template = self.dbapi.get_audit_template_by_uuid(
self.context, audit_template['uuid'])
self.assertEqual(audit_template['id'], audit_template.id)
@@ -332,7 +344,7 @@ class DbAuditTemplateTestCase(base.DbTestCase):
self.context, 1234)
def test_update_audit_template(self):
audit_template = self._create_test_audit_template()
audit_template = utils.create_test_audit_template()
res = self.dbapi.update_audit_template(audit_template['id'],
{'name': 'updated-model'})
self.assertEqual('updated-model', res.name)
@@ -342,14 +354,14 @@ class DbAuditTemplateTestCase(base.DbTestCase):
self.dbapi.update_audit_template, 1234, {'name': ''})
def test_update_audit_template_uuid(self):
audit_template = self._create_test_audit_template()
audit_template = utils.create_test_audit_template()
self.assertRaises(exception.Invalid,
self.dbapi.update_audit_template,
audit_template['id'],
{'uuid': 'hello'})
def test_destroy_audit_template(self):
audit_template = self._create_test_audit_template()
audit_template = utils.create_test_audit_template()
self.dbapi.destroy_audit_template(audit_template['id'])
self.assertRaises(exception.AuditTemplateNotFound,
self.dbapi.get_audit_template_by_id,
@@ -357,7 +369,7 @@ class DbAuditTemplateTestCase(base.DbTestCase):
def test_destroy_audit_template_by_uuid(self):
uuid = w_utils.generate_uuid()
self._create_test_audit_template(uuid=uuid)
utils.create_test_audit_template(uuid=uuid)
self.assertIsNotNone(self.dbapi.get_audit_template_by_uuid(
self.context, uuid))
self.dbapi.destroy_audit_template(uuid)
@@ -371,9 +383,9 @@ class DbAuditTemplateTestCase(base.DbTestCase):
def test_create_audit_template_already_exists(self):
uuid = w_utils.generate_uuid()
self._create_test_audit_template(id=1, uuid=uuid)
utils.create_test_audit_template(id=1, uuid=uuid)
self.assertRaises(exception.AuditTemplateAlreadyExists,
self._create_test_audit_template,
utils.create_test_audit_template,
id=2, uuid=uuid)
def test_audit_template_create_same_name(self):


@@ -242,20 +242,9 @@ class TestDbEfficacyIndicatorFilters(base.DbTestCase):
class DbEfficacyIndicatorTestCase(base.DbTestCase):
def _create_test_efficacy_indicator(self, **kwargs):
efficacy_indicator_dict = utils.get_test_efficacy_indicator(**kwargs)
efficacy_indicator = self.dbapi.create_efficacy_indicator(
efficacy_indicator_dict)
return efficacy_indicator
def _create_test_action_plan(self, **kwargs):
action_plan_dict = utils.get_test_action_plan(**kwargs)
action_plan = self.dbapi.create_action_plan(action_plan_dict)
return action_plan
def test_get_efficacy_indicator_list(self):
uuids = []
action_plan = self._create_test_action_plan()
action_plan = utils.create_test_action_plan()
for id_ in range(1, 4):
efficacy_indicator = utils.create_test_efficacy_indicator(
action_plan_id=action_plan.id, id=id_, uuid=None,
@@ -290,39 +279,52 @@ class DbEfficacyIndicatorTestCase(base.DbTestCase):
def test_get_efficacy_indicator_list_with_filters(self):
audit = utils.create_test_audit(uuid=w_utils.generate_uuid())
action_plan = self._create_test_action_plan(
action_plan = utils.create_test_action_plan(
id=1,
uuid=w_utils.generate_uuid(),
audit_id=audit.id,
first_efficacy_indicator_id=None,
state=objects.action_plan.State.RECOMMENDED)
efficacy_indicator1 = self._create_test_efficacy_indicator(
efficacy_indicator1 = utils.create_test_efficacy_indicator(
id=1,
name='indicator_1',
uuid=w_utils.generate_uuid(),
action_plan_id=1,
action_plan_id=action_plan['id'],
description='Description efficacy indicator 1',
unit='%')
efficacy_indicator2 = self._create_test_efficacy_indicator(
efficacy_indicator2 = utils.create_test_efficacy_indicator(
id=2,
name='indicator_2',
uuid=w_utils.generate_uuid(),
action_plan_id=2,
description='Description efficacy indicator 2',
unit='%')
efficacy_indicator3 = self._create_test_efficacy_indicator(
efficacy_indicator3 = utils.create_test_efficacy_indicator(
id=3,
name='indicator_3',
uuid=w_utils.generate_uuid(),
action_plan_id=1,
action_plan_id=action_plan['id'],
description='Description efficacy indicator 3',
unit='%')
efficacy_indicator4 = utils.create_test_efficacy_indicator(
id=4,
name='indicator_4',
uuid=w_utils.generate_uuid(),
action_plan_id=action_plan['id'],
description='Description efficacy indicator 4',
unit='%')
self.dbapi.soft_delete_efficacy_indicator(efficacy_indicator4['uuid'])
res = self.dbapi.get_efficacy_indicator_list(
self.context, filters={'name': 'indicator_3'})
self.context,
filters={'name': 'indicator_3'})
self.assertEqual([efficacy_indicator3['id']], [r.id for r in res])
res = self.dbapi.get_efficacy_indicator_list(
self.context, filters={'unit': 'kWh'})
self.context,
filters={'unit': 'kWh'})
self.assertEqual([], [r.id for r in res])
res = self.dbapi.get_efficacy_indicator_list(
@@ -338,7 +340,7 @@ class DbEfficacyIndicatorTestCase(base.DbTestCase):
sorted([r.id for r in res]))
def test_get_efficacy_indicator_list_with_filter_by_uuid(self):
efficacy_indicator = self._create_test_efficacy_indicator()
efficacy_indicator = utils.create_test_efficacy_indicator()
res = self.dbapi.get_efficacy_indicator_list(
self.context, filters={'uuid': efficacy_indicator.uuid})
@@ -346,13 +348,13 @@ class DbEfficacyIndicatorTestCase(base.DbTestCase):
self.assertEqual(efficacy_indicator.uuid, res[0].uuid)
def test_get_efficacy_indicator_by_id(self):
efficacy_indicator = self._create_test_efficacy_indicator()
efficacy_indicator = utils.create_test_efficacy_indicator()
efficacy_indicator = self.dbapi.get_efficacy_indicator_by_id(
self.context, efficacy_indicator.id)
self.assertEqual(efficacy_indicator.uuid, efficacy_indicator.uuid)
def test_get_efficacy_indicator_by_uuid(self):
efficacy_indicator = self._create_test_efficacy_indicator()
efficacy_indicator = utils.create_test_efficacy_indicator()
efficacy_indicator = self.dbapi.get_efficacy_indicator_by_uuid(
self.context, efficacy_indicator.uuid)
self.assertEqual(efficacy_indicator['id'], efficacy_indicator.id)
@@ -363,7 +365,7 @@ class DbEfficacyIndicatorTestCase(base.DbTestCase):
self.dbapi.get_efficacy_indicator_by_id, self.context, 1234)
def test_update_efficacy_indicator(self):
efficacy_indicator = self._create_test_efficacy_indicator()
efficacy_indicator = utils.create_test_efficacy_indicator()
res = self.dbapi.update_efficacy_indicator(
efficacy_indicator.id,
{'state': objects.action_plan.State.CANCELLED})
@@ -375,14 +377,14 @@ class DbEfficacyIndicatorTestCase(base.DbTestCase):
self.dbapi.update_efficacy_indicator, 1234, {'state': ''})
def test_update_efficacy_indicator_uuid(self):
efficacy_indicator = self._create_test_efficacy_indicator()
efficacy_indicator = utils.create_test_efficacy_indicator()
self.assertRaises(
exception.Invalid,
self.dbapi.update_efficacy_indicator, efficacy_indicator.id,
{'uuid': 'hello'})
def test_destroy_efficacy_indicator(self):
efficacy_indicator = self._create_test_efficacy_indicator()
efficacy_indicator = utils.create_test_efficacy_indicator()
self.dbapi.destroy_efficacy_indicator(efficacy_indicator['id'])
self.assertRaises(exception.EfficacyIndicatorNotFound,
self.dbapi.get_efficacy_indicator_by_id,
@@ -390,7 +392,7 @@ class DbEfficacyIndicatorTestCase(base.DbTestCase):
def test_destroy_efficacy_indicator_by_uuid(self):
uuid = w_utils.generate_uuid()
self._create_test_efficacy_indicator(uuid=uuid)
utils.create_test_efficacy_indicator(uuid=uuid)
self.assertIsNotNone(self.dbapi.get_efficacy_indicator_by_uuid(
self.context, uuid))
self.dbapi.destroy_efficacy_indicator(uuid)
@@ -404,7 +406,7 @@ class DbEfficacyIndicatorTestCase(base.DbTestCase):
def test_create_efficacy_indicator_already_exists(self):
uuid = w_utils.generate_uuid()
self._create_test_efficacy_indicator(id=1, uuid=uuid)
utils.create_test_efficacy_indicator(id=1, uuid=uuid)
self.assertRaises(exception.EfficacyIndicatorAlreadyExists,
self._create_test_efficacy_indicator,
utils.create_test_efficacy_indicator,
id=2, uuid=uuid)


@@ -223,11 +223,6 @@ class TestDbGoalFilters(base.DbTestCase):
class DbGoalTestCase(base.DbTestCase):
def _create_test_goal(self, **kwargs):
goal = utils.get_test_goal(**kwargs)
self.dbapi.create_goal(goal)
return goal
def test_get_goal_list(self):
uuids = []
for i in range(1, 4):
@@ -242,25 +237,33 @@ class DbGoalTestCase(base.DbTestCase):
self.assertEqual(sorted(uuids), sorted(goal_uuids))
def test_get_goal_list_with_filters(self):
goal1 = self._create_test_goal(
goal1 = utils.create_test_goal(
id=1,
uuid=w_utils.generate_uuid(),
name="GOAL_1",
display_name='Goal 1',
)
goal2 = self._create_test_goal(
goal2 = utils.create_test_goal(
id=2,
uuid=w_utils.generate_uuid(),
name="GOAL_2",
display_name='Goal 2',
)
goal3 = utils.create_test_goal(
id=3,
uuid=w_utils.generate_uuid(),
name="GOAL_3",
display_name='Goal 3',
)
res = self.dbapi.get_goal_list(self.context,
filters={'display_name': 'Goal 1'})
self.dbapi.soft_delete_goal(goal3['uuid'])
res = self.dbapi.get_goal_list(
self.context, filters={'display_name': 'Goal 1'})
self.assertEqual([goal1['uuid']], [r.uuid for r in res])
res = self.dbapi.get_goal_list(self.context,
filters={'display_name': 'Goal 3'})
res = self.dbapi.get_goal_list(
self.context, filters={'display_name': 'Goal 3'})
self.assertEqual([], [r.uuid for r in res])
res = self.dbapi.get_goal_list(
@@ -268,16 +271,19 @@ class DbGoalTestCase(base.DbTestCase):
self.assertEqual([goal1['uuid']], [r.uuid for r in res])
res = self.dbapi.get_goal_list(
self.context,
filters={'display_name': 'Goal 2'})
self.context, filters={'display_name': 'Goal 2'})
self.assertEqual([goal2['uuid']], [r.uuid for r in res])
res = self.dbapi.get_goal_list(
self.context, filters={'uuid': goal3['uuid']})
self.assertEqual([], [r.uuid for r in res])
def test_get_goal_by_uuid(self):
efficacy_spec = [{"unit": "%", "name": "dummy",
"schema": "Range(min=0, max=100, min_included=True, "
"max_included=True, msg=None)",
"description": "Dummy indicator"}]
created_goal = self._create_test_goal(
created_goal = utils.create_test_goal(
efficacy_specification=efficacy_spec)
goal = self.dbapi.get_goal_by_uuid(self.context, created_goal['uuid'])
self.assertEqual(goal.uuid, created_goal['uuid'])
@@ -289,13 +295,13 @@ class DbGoalTestCase(base.DbTestCase):
self.context, random_uuid)
def test_update_goal(self):
goal = self._create_test_goal()
goal = utils.create_test_goal()
res = self.dbapi.update_goal(goal['uuid'],
{'display_name': 'updated-model'})
self.assertEqual('updated-model', res.display_name)
def test_update_goal_id(self):
goal = self._create_test_goal()
goal = utils.create_test_goal()
self.assertRaises(exception.Invalid,
self.dbapi.update_goal, goal['uuid'],
{'uuid': 'NEW_GOAL'})
@@ -308,7 +314,7 @@ class DbGoalTestCase(base.DbTestCase):
{'display_name': ''})
def test_destroy_goal(self):
goal = self._create_test_goal()
goal = utils.create_test_goal()
self.dbapi.destroy_goal(goal['uuid'])
self.assertRaises(exception.GoalNotFound,
self.dbapi.get_goal_by_uuid,
@@ -321,7 +327,7 @@ class DbGoalTestCase(base.DbTestCase):
def test_create_goal_already_exists(self):
goal_uuid = w_utils.generate_uuid()
self._create_test_goal(uuid=goal_uuid)
utils.create_test_goal(uuid=goal_uuid)
self.assertRaises(exception.GoalAlreadyExists,
self._create_test_goal,
utils.create_test_goal,
uuid=goal_uuid)


@@ -228,11 +228,6 @@ class TestDbScoringEngineFilters(base.DbTestCase):
class DbScoringEngineTestCase(base.DbTestCase):
def _create_test_scoring_engine(self, **kwargs):
scoring_engine = utils.get_test_scoring_engine(**kwargs)
self.dbapi.create_scoring_engine(scoring_engine)
return scoring_engine
def test_get_scoring_engine_list(self):
names = []
for i in range(1, 4):
@@ -248,20 +243,29 @@ class DbScoringEngineTestCase(base.DbTestCase):
self.assertEqual(sorted(names), sorted(scoring_engines_names))
def test_get_scoring_engine_list_with_filters(self):
scoring_engine1 = self._create_test_scoring_engine(
scoring_engine1 = utils.create_test_scoring_engine(
id=1,
uuid=w_utils.generate_uuid(),
name="SE_ID_1",
description='ScoringEngine 1',
metainfo="a1=b1",
)
scoring_engine2 = self._create_test_scoring_engine(
scoring_engine2 = utils.create_test_scoring_engine(
id=2,
uuid=w_utils.generate_uuid(),
name="SE_ID_2",
description='ScoringEngine 2',
metainfo="a2=b2",
)
scoring_engine3 = utils.create_test_scoring_engine(
id=3,
uuid=w_utils.generate_uuid(),
name="SE_ID_3",
description='ScoringEngine 3',
metainfo="a3=b3",
)
self.dbapi.soft_delete_scoring_engine(scoring_engine3['uuid'])
res = self.dbapi.get_scoring_engine_list(
self.context, filters={'description': 'ScoringEngine 1'})
@@ -272,24 +276,23 @@ class DbScoringEngineTestCase(base.DbTestCase):
self.assertEqual([], [r.name for r in res])
res = self.dbapi.get_scoring_engine_list(
self.context,
filters={'description': 'ScoringEngine 2'})
self.context, filters={'description': 'ScoringEngine 2'})
self.assertEqual([scoring_engine2['name']], [r.name for r in res])
def test_get_scoring_engine_by_id(self):
created_scoring_engine = self._create_test_scoring_engine()
created_scoring_engine = utils.create_test_scoring_engine()
scoring_engine = self.dbapi.get_scoring_engine_by_id(
self.context, created_scoring_engine['id'])
self.assertEqual(scoring_engine.id, created_scoring_engine['id'])
def test_get_scoring_engine_by_uuid(self):
created_scoring_engine = self._create_test_scoring_engine()
created_scoring_engine = utils.create_test_scoring_engine()
scoring_engine = self.dbapi.get_scoring_engine_by_uuid(
self.context, created_scoring_engine['uuid'])
self.assertEqual(scoring_engine.uuid, created_scoring_engine['uuid'])
def test_get_scoring_engine_by_name(self):
created_scoring_engine = self._create_test_scoring_engine()
created_scoring_engine = utils.create_test_scoring_engine()
scoring_engine = self.dbapi.get_scoring_engine_by_name(
self.context, created_scoring_engine['name'])
self.assertEqual(scoring_engine.name, created_scoring_engine['name'])
@@ -300,13 +303,13 @@ class DbScoringEngineTestCase(base.DbTestCase):
self.context, 404)
def test_update_scoring_engine(self):
scoring_engine = self._create_test_scoring_engine()
scoring_engine = utils.create_test_scoring_engine()
res = self.dbapi.update_scoring_engine(
scoring_engine['id'], {'description': 'updated-model'})
self.assertEqual('updated-model', res.description)
def test_update_scoring_engine_id(self):
scoring_engine = self._create_test_scoring_engine()
scoring_engine = utils.create_test_scoring_engine()
self.assertRaises(exception.Invalid,
self.dbapi.update_scoring_engine,
scoring_engine['id'],
@@ -319,7 +322,7 @@ class DbScoringEngineTestCase(base.DbTestCase):
{'description': ''})
def test_destroy_scoring_engine(self):
scoring_engine = self._create_test_scoring_engine()
scoring_engine = utils.create_test_scoring_engine()
self.dbapi.destroy_scoring_engine(scoring_engine['id'])
self.assertRaises(exception.ScoringEngineNotFound,
self.dbapi.get_scoring_engine_by_id,
@@ -331,7 +334,7 @@ class DbScoringEngineTestCase(base.DbTestCase):
def test_create_scoring_engine_already_exists(self):
scoring_engine_id = "SE_ID"
self._create_test_scoring_engine(name=scoring_engine_id)
utils.create_test_scoring_engine(name=scoring_engine_id)
self.assertRaises(exception.ScoringEngineAlreadyExists,
self._create_test_scoring_engine,
utils.create_test_scoring_engine,
name=scoring_engine_id)


@@ -229,11 +229,6 @@ class TestDbServiceFilters(base.DbTestCase):
class DbServiceTestCase(base.DbTestCase):
def _create_test_service(self, **kwargs):
service = utils.get_test_service(**kwargs)
self.dbapi.create_service(service)
return service
def test_get_service_list(self):
ids = []
for i in range(1, 4):
@@ -247,16 +242,23 @@ class DbServiceTestCase(base.DbTestCase):
self.assertEqual(sorted(ids), sorted(service_ids))
def test_get_service_list_with_filters(self):
service1 = self._create_test_service(
service1 = utils.create_test_service(
id=1,
name="SERVICE_ID_1",
host="controller_1",
)
service2 = self._create_test_service(
service2 = utils.create_test_service(
id=2,
name="SERVICE_ID_2",
host="controller_2",
)
service3 = utils.create_test_service(
id=3,
name="SERVICE_ID_3",
host="controller_3",
)
self.dbapi.soft_delete_service(service3['id'])
res = self.dbapi.get_service_list(
self.context, filters={'host': 'controller_1'})
@@ -267,12 +269,11 @@ class DbServiceTestCase(base.DbTestCase):
self.assertEqual([], [r.id for r in res])
res = self.dbapi.get_service_list(
self.context,
filters={'host': 'controller_2'})
self.context, filters={'host': 'controller_2'})
self.assertEqual([service2['id']], [r.id for r in res])
def test_get_service_by_name(self):
created_service = self._create_test_service()
created_service = utils.create_test_service()
service = self.dbapi.get_service_by_name(
self.context, created_service['name'])
self.assertEqual(service.name, created_service['name'])
@@ -283,7 +284,7 @@ class DbServiceTestCase(base.DbTestCase):
self.context, 404)
def test_update_service(self):
service = self._create_test_service()
service = utils.create_test_service()
res = self.dbapi.update_service(
service['id'], {'host': 'controller_test'})
self.assertEqual('controller_test', res.host)
@@ -296,7 +297,7 @@ class DbServiceTestCase(base.DbTestCase):
def test_create_service_already_exists(self):
service_id = "STRATEGY_ID"
self._create_test_service(name=service_id)
utils.create_test_service(name=service_id)
self.assertRaises(exception.ServiceAlreadyExists,
self._create_test_service,
utils.create_test_service,
name=service_id)


@@ -239,11 +239,6 @@ class TestDbStrategyFilters(base.DbTestCase):
class DbStrategyTestCase(base.DbTestCase):
def _create_test_strategy(self, **kwargs):
strategy = utils.get_test_strategy(**kwargs)
self.dbapi.create_strategy(strategy)
return strategy
def test_get_strategy_list(self):
uuids = []
for i in range(1, 4):
@@ -278,18 +273,29 @@ class DbStrategyTestCase(base.DbTestCase):
self.assertEqual(goal.as_dict(), eager_strategy.goal.as_dict())
def test_get_strategy_list_with_filters(self):
strategy1 = self._create_test_strategy(
# NOTE(erakli): we don't create the goal in the database but link to
# goal_id = 1, yet dbapi.create_strategy() raises no error.
# Is this the right behaviour?
strategy1 = utils.create_test_strategy(
id=1,
uuid=w_utils.generate_uuid(),
name="STRATEGY_ID_1",
display_name='Strategy 1',
)
strategy2 = self._create_test_strategy(
strategy2 = utils.create_test_strategy(
id=2,
uuid=w_utils.generate_uuid(),
name="STRATEGY_ID_2",
display_name='Strategy 2',
)
strategy3 = utils.create_test_strategy(
id=3,
uuid=w_utils.generate_uuid(),
name="STRATEGY_ID_3",
display_name='Strategy 3',
)
self.dbapi.soft_delete_strategy(strategy3['uuid'])
res = self.dbapi.get_strategy_list(
self.context, filters={'display_name': 'Strategy 1'})
@@ -300,24 +306,22 @@ class DbStrategyTestCase(base.DbTestCase):
self.assertEqual([], [r.uuid for r in res])
res = self.dbapi.get_strategy_list(
self.context,
filters={'goal_id': 1})
self.context, filters={'goal_id': 1})
self.assertEqual([strategy1['uuid'], strategy2['uuid']],
[r.uuid for r in res])
res = self.dbapi.get_strategy_list(
self.context,
filters={'display_name': 'Strategy 2'})
self.context, filters={'display_name': 'Strategy 2'})
self.assertEqual([strategy2['uuid']], [r.uuid for r in res])
def test_get_strategy_by_uuid(self):
created_strategy = self._create_test_strategy()
created_strategy = utils.create_test_strategy()
strategy = self.dbapi.get_strategy_by_uuid(
self.context, created_strategy['uuid'])
self.assertEqual(strategy.uuid, created_strategy['uuid'])
def test_get_strategy_by_name(self):
created_strategy = self._create_test_strategy()
created_strategy = utils.create_test_strategy()
strategy = self.dbapi.get_strategy_by_name(
self.context, created_strategy['name'])
self.assertEqual(strategy.name, created_strategy['name'])
@@ -328,13 +332,13 @@ class DbStrategyTestCase(base.DbTestCase):
self.context, 404)
def test_update_strategy(self):
strategy = self._create_test_strategy()
strategy = utils.create_test_strategy()
res = self.dbapi.update_strategy(
strategy['uuid'], {'display_name': 'updated-model'})
self.assertEqual('updated-model', res.display_name)
def test_update_goal_id(self):
strategy = self._create_test_strategy()
strategy = utils.create_test_strategy()
self.assertRaises(exception.Invalid,
self.dbapi.update_strategy, strategy['uuid'],
{'uuid': 'new_strategy_id'})
@@ -346,7 +350,7 @@ class DbStrategyTestCase(base.DbTestCase):
{'display_name': ''})
def test_destroy_strategy(self):
strategy = self._create_test_strategy()
strategy = utils.create_test_strategy()
self.dbapi.destroy_strategy(strategy['uuid'])
self.assertRaises(exception.StrategyNotFound,
self.dbapi.get_strategy_by_id,
@@ -358,7 +362,7 @@ class DbStrategyTestCase(base.DbTestCase):
def test_create_strategy_already_exists(self):
strategy_id = "STRATEGY_ID"
self._create_test_strategy(name=strategy_id)
utils.create_test_strategy(name=strategy_id)
self.assertRaises(exception.StrategyAlreadyExists,
self._create_test_strategy,
utils.create_test_strategy,
name=strategy_id)


@@ -224,6 +224,11 @@ class TestContinuousAuditHandler(base.DbTestCase):
def setUp(self):
super(TestContinuousAuditHandler, self).setUp()
p_audit_notifications = mock.patch.object(
notifications, 'audit', autospec=True)
self.m_audit_notifications = p_audit_notifications.start()
self.addCleanup(p_audit_notifications.stop)
self.goal = obj_utils.create_test_goal(
self.context, id=1, name=dummy_strategy.DummyStrategy.get_name())
audit_template = obj_utils.create_test_audit_template(
@@ -417,3 +422,27 @@ class TestContinuousAuditHandler(base.DbTestCase):
audit_handler.launch_audits_periodically()
m_remove_job.assert_called()
@mock.patch.object(continuous.ContinuousAuditHandler, 'planner')
def test_execute_audit(self, m_planner):
audit_handler = continuous.ContinuousAuditHandler()
audit = self.audits[0]
audit_handler.execute_audit(audit, self.context)
expected_calls = [
mock.call(self.context, audit,
action=objects.fields.NotificationAction.STRATEGY,
phase=objects.fields.NotificationPhase.START),
mock.call(self.context, audit,
action=objects.fields.NotificationAction.STRATEGY,
phase=objects.fields.NotificationPhase.END),
mock.call(self.context, audit,
action=objects.fields.NotificationAction.PLANNER,
phase=objects.fields.NotificationPhase.START),
mock.call(self.context, audit,
action=objects.fields.NotificationAction.PLANNER,
phase=objects.fields.NotificationPhase.END)]
self.assertEqual(
expected_calls,
self.m_audit_notifications.send_action_notification.call_args_list)
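The assertion above compares the full, ordered `call_args_list` of the patched notifier against a list of `mock.call` objects, verifying both the arguments and the START-before-END ordering. A self-contained sketch of the same pattern; the notifier name and arguments are illustrative, not watcher's real notification API:

```python
from unittest import mock

# A Mock records every invocation made on it, in order.
notifier = mock.Mock(name='audit_notifications')

# Code under test would emit START/END notifications around a phase.
notifier.send_action_notification('ctx', 'audit', action='strategy',
                                  phase='start')
notifier.send_action_notification('ctx', 'audit', action='strategy',
                                  phase='end')

# Comparing call_args_list against mock.call objects checks arguments
# and ordering in one assertion.
expected_calls = [
    mock.call('ctx', 'audit', action='strategy', phase='start'),
    mock.call('ctx', 'audit', action='strategy', phase='end'),
]
assert expected_calls == notifier.send_action_notification.call_args_list
```

Unlike `assert_any_call`, this fails if an extra or out-of-order notification sneaks in.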


@@ -0,0 +1,152 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from watcher.common import cinder_helper
from watcher.common import exception
from watcher.decision_engine.model.collector import cinder
from watcher.tests import base
from watcher.tests import conf_fixture
class TestCinderClusterDataModelCollector(base.TestCase):
def setUp(self):
super(TestCinderClusterDataModelCollector, self).setUp()
self.useFixture(conf_fixture.ConfReloadFixture())
@mock.patch('keystoneclient.v3.client.Client', mock.Mock())
@mock.patch.object(cinder_helper, 'CinderHelper')
def test_cinder_cdmc_execute(self, m_cinder_helper_cls):
m_cinder_helper = mock.Mock(name="cinder_helper")
m_cinder_helper_cls.return_value = m_cinder_helper
fake_storage_node = mock.Mock(
host='host@backend',
zone='zone',
status='enabled',
state='up',
volume_type=['fake_type']
)
fake_storage_pool = mock.Mock(
total_volumes=1,
total_capacity_gb=30,
free_capacity_gb=20,
provisioned_capacity_gb=10,
allocated_capacity_gb=10,
virtual_free=20
)
setattr(fake_storage_pool, 'name', 'host@backend#pool')
fake_volume = mock.Mock(
id=1,
size=1,
status='in-use',
attachments=[{"server_id": "server_id",
"attachment_id": "attachment_id"}],
multiattach='false',
snapshot_id='',
metadata='{"key": "value"}',
bootable='false'
)
setattr(fake_volume, 'name', 'name')
setattr(fake_volume, 'os-vol-tenant-attr:tenant_id', 'project_id')
setattr(fake_volume, 'os-vol-host-attr:host', 'host@backend#pool')
# storage node list
m_cinder_helper.get_storage_node_list.return_value = [
fake_storage_node]
m_cinder_helper.get_volume_type_by_backendname.return_value = [
'fake_type']
# storage pool list
m_cinder_helper.get_storage_pool_list.return_value = [
fake_storage_pool]
# volume list
m_cinder_helper.get_volume_list.return_value = [fake_volume]
m_config = mock.Mock()
m_osc = mock.Mock()
cinder_cdmc = cinder.CinderClusterDataModelCollector(
config=m_config, osc=m_osc)
model = cinder_cdmc.execute()
storage_nodes = model.get_all_storage_nodes()
storage_node = list(storage_nodes.values())[0]
storage_pools = model.get_node_pools(storage_node)
storage_pool = storage_pools[0]
volumes = model.get_pool_volumes(storage_pool)
volume = volumes[0]
self.assertEqual(1, len(storage_nodes))
self.assertEqual(1, len(storage_pools))
self.assertEqual(1, len(volumes))
self.assertEqual(storage_node.host, 'host@backend')
self.assertEqual(storage_pool.name, 'host@backend#pool')
self.assertEqual(volume.uuid, '1')
@mock.patch('keystoneclient.v3.client.Client', mock.Mock())
@mock.patch.object(cinder_helper, 'CinderHelper')
def test_cinder_cdmc_total_capacity_gb_not_integer(
self, m_cinder_helper_cls):
m_cinder_helper = mock.Mock(name="cinder_helper")
m_cinder_helper_cls.return_value = m_cinder_helper
fake_storage_node = mock.Mock(
host='host@backend',
zone='zone',
status='enabled',
state='up',
volume_type=['fake_type']
)
fake_storage_pool = mock.Mock(
total_volumes=1,
total_capacity_gb="unknown",
free_capacity_gb=20,
provisioned_capacity_gb=10,
allocated_capacity_gb=10,
virtual_free=20
)
setattr(fake_storage_pool, 'name', 'host@backend#pool')
# storage node list
m_cinder_helper.get_storage_node_list.return_value = [
fake_storage_node]
m_cinder_helper.get_volume_type_by_backendname.return_value = [
'fake_type']
# storage pool list
m_cinder_helper.get_storage_pool_list.return_value = [
fake_storage_pool]
# volume list
m_cinder_helper.get_volume_list.return_value = []
m_config = mock.Mock()
m_osc = mock.Mock()
cinder_cdmc = cinder.CinderClusterDataModelCollector(
config=m_config, osc=m_osc)
self.assertRaises(exception.InvalidPoolAttributeValue,
cinder_cdmc.execute)
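The second test checks that a pool reporting `total_capacity_gb="unknown"` (as some backends such as VMwareVcVmdkDriver do) makes the collector raise rather than build a broken model. A minimal sketch of that validation, assuming the attribute names of the fake pool above; `InvalidPoolAttributeValue` here is a hypothetical stand-in for the watcher exception of the same name:

```python
class InvalidPoolAttributeValue(Exception):
    """Raised when a storage pool reports a non-numeric capacity attribute."""


CAPACITY_ATTRS = ('total_capacity_gb', 'free_capacity_gb',
                  'provisioned_capacity_gb', 'allocated_capacity_gb')


def validate_pool(pool):
    """Return the pool, raising if any capacity attribute is unusable."""
    for attr in CAPACITY_ATTRS:
        value = getattr(pool, attr, None)
        if not isinstance(value, (int, float)):
            # e.g. a backend may report the string 'unknown' here
            raise InvalidPoolAttributeValue(
                '%s.%s = %r' % (getattr(pool, 'name', '?'), attr, value))
    return pool
```

With the numeric fake pool from the first test this returns the pool unchanged; with `total_capacity_gb="unknown"` it raises, matching the `assertRaises` above.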


@@ -60,6 +60,7 @@ class TestNovaClusterDataModelCollector(base.TestCase):
human_id='fake_instance',
flavor={'ram': 333, 'disk': 222, 'vcpus': 4, 'id': 1},
metadata={'hi': 'hello'},
tenant_id='ff560f7e-dbc8-771f-960c-164482fce21b',
)
setattr(fake_instance, 'OS-EXT-STS:vm_state', 'VM_STATE')
setattr(fake_instance, 'OS-EXT-SRV-ATTR:host', 'test_hostname')


@@ -1,47 +1,47 @@
<ModelRoot>
<ComputeNode human_id="" uuid="Node_0" status="enabled" state="up" id="0" hostname="hostname_0" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_0" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_0" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_0"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_1"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_1" status="enabled" state="up" id="1" hostname="hostname_1" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_2" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_2" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_2"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_2" status="enabled" state="up" id="2" hostname="hostname_2" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_5" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_3"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_4"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_5" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_5"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_3" status="enabled" state="up" id="3" hostname="hostname_3" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_6" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_6" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_6"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_4" status="enabled" state="up" id="4" hostname="hostname_4" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_7" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_7" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_7"/>
</ComputeNode>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_10" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_11" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_12" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_13" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_14" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_15" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_16" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_17" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_18" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_19" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_20" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_21" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_22" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_23" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_24" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_25" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_26" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_27" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_28" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_29" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_30" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_31" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_32" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_33" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_34" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_8" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_9" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_10" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_10"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_11" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_11"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_12" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_12"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_13" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_13"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_14" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_14"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_15" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_15"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_16" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_16"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_17" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_17"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_18" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_18"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_19" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_19"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_20" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_20"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_21" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_21"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_22" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_22"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_23" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_23"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_24" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_24"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_25" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_25"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_26" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_26"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_27" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_27"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_28" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_28"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_29" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_29"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_30" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_30"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_31" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_31"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_32" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_32"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_33" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_33"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_34" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_34"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_8" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_8"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_9" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_9"/>
</ModelRoot>


@@ -1,50 +1,50 @@
<ModelRoot>
<ComputeNode human_id="" uuid="Node_0" status="enabled" state="up" id="0" hostname="hostname_0" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_0" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_0" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_0"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_1"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_1" status="enabled" state="up" id="1" hostname="hostname_1" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_2" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_2" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_2"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_2" status="enabled" state="up" id="2" hostname="hostname_2" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_5" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_3"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_4"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_5" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_5"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_3" status="enabled" state="up" id="3" hostname="hostname_3" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_6" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_6" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_6"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_4" status="enabled" state="up" id="4" hostname="hostname_4" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_7" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_7" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_7"/>
</ComputeNode>
<ComputeNode human_id="" uuid="LOST_NODE" status="enabled" state="up" id="1" hostname="hostname_7" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="LOST_INSTANCE" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="LOST_INSTANCE" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_8"/>
</ComputeNode>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_10" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_11" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_12" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_13" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_14" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_15" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_16" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_17" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_18" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_19" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_20" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_21" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_22" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_23" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_24" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_25" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_26" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_27" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_28" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_29" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_30" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_31" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_32" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_33" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_34" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_8" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_9" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_10" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_10"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_11" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_11"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_12" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_12"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_13" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_13"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_14" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_14"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_15" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_15"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_16" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_16"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_17" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_17"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_18" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_18"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_19" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_19"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_20" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_20"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_21" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_21"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_22" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_22"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_23" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_23"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_24" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_24"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_25" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_25"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_26" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_26"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_27" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_27"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_28" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_28"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_29" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_29"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_30" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_30"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_31" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_31"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_32" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_32"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_33" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_33"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_34" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_34"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_8" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_8"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_9" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_9"/>
</ModelRoot>

@@ -1,8 +1,8 @@
<ModelRoot>
<ComputeNode hostname="hostname_0" uuid="Node_0" id="0" state="up" human_id="" status="enabled" vcpus="40" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_0" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_0" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_0"/>
</ComputeNode>
<ComputeNode hostname="hostname_1" uuid="Node_1" id="1" state="up" human_id="" status="enabled" vcpus="40" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_1"/>
</ComputeNode>
</ModelRoot>

@@ -1,11 +1,11 @@
<ModelRoot>
<ComputeNode hostname="hostname_0" uuid="Node_0" id="0" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_0" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_2" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_5" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_0" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_0"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_1"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_2" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_2"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_3"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_4"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_5" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_5"/>
</ComputeNode>
<ComputeNode hostname="hostname_1" uuid="Node_1" id="1" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64"/>
<ComputeNode hostname="hostname_2" uuid="Node_2" id="2" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64"/>

@@ -1,8 +1,8 @@
<ModelRoot>
<ComputeNode human_id="" uuid="Node_0" status="enabled" state="up" id="0" hostname="hostname_0" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="73b09e16-35b7-4922-804e-e8f5d9b740fc" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="73b09e16-35b7-4922-804e-e8f5d9b740fc" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_0"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_1" status="enabled" state="up" id="1" hostname="hostname_1" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="a4cab39b-9828-413a-bf88-f76921bf1517" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="a4cab39b-9828-413a-bf88-f76921bf1517" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_1"/>
</ComputeNode>
</ModelRoot>

@@ -1,9 +1,9 @@
<ModelRoot>
<ComputeNode hostname="hostname_0" uuid="Node_0" id="0" state="up" human_id="" status="enabled" vcpus="10" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_6" vcpus="1" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_7" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_8" vcpus="4" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_9" vcpus="8" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_6" vcpus="1" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_6"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_7" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_7"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_8" vcpus="4" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_8"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_9" vcpus="8" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_9"/>
</ComputeNode>
<ComputeNode hostname="hostname_1" uuid="Node_1" id="1" state="up" human_id="" status="enabled" vcpus="10" disk="250" disk_capacity="250" memory="64"/>
</ModelRoot>

@@ -1,5 +1,5 @@
<ModelRoot>
<ComputeNode human_id="" uuid="Node_0" status="enabled" state="up" id="0" hostname="hostname_0" vcpus="4" disk="4" disk_capacity="4" memory="4">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_0" vcpus="4" disk="0" disk_capacity="0" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_0" vcpus="4" disk="0" disk_capacity="0" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_0"/>
</ComputeNode>
</ModelRoot>

@@ -1,10 +1,10 @@
<ModelRoot>
<ComputeNode human_id="" uuid="Node_0" status="enabled" state="up" id="0" hostname="hostname_0" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="73b09e16-35b7-4922-804e-e8f5d9b740fc" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="73b09e16-35b7-4922-804e-e8f5d9b740fc" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_0"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_1"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_1" status="enabled" state="up" id="1" hostname="hostname_1" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_3"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_4"/>
</ComputeNode>
</ModelRoot>

@@ -1,10 +1,10 @@
<ModelRoot>
<ComputeNode human_id="" uuid="Node_0" status="enabled" state="up" id="0" hostname="hostname_0" vcpus="50" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="73b09e16-35b7-4922-804e-e8f5d9b740fc" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}, "watcher-priority": "8"}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="cae81432-1631-4d4e-b29c-6f3acdcde906" vcpus="15" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}, "watcher-priority": "4"}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="73b09e16-35b7-4922-804e-e8f5d9b740fc" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}, "watcher-priority": "8"}' project_id="project_0"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="cae81432-1631-4d4e-b29c-6f3acdcde906" vcpus="15" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}, "watcher-priority": "4"}' project_id="project_1"/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_1" status="enabled" state="up" id="1" hostname="hostname_1" vcpus="50" disk="250" disk_capacity="250" memory="132">
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}, "watcher-priority": "1"}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}, "watcher-priority": "9"}'/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}, "watcher-priority": "1"}' project_id="project_3"/>
<Instance watcher_exclude="False" state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}, "watcher-priority": "9"}' project_id="project_4"/>
</ComputeNode>
</ModelRoot>

@@ -1,16 +1,16 @@
<ModelRoot>
<ComputeNode hostname="hostname_0" uuid="Node_0" id="0" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_0" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_1" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_2" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_0" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_0"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_1" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_1"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_2" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_2"/>
</ComputeNode>
<ComputeNode hostname="hostname_1" uuid="Node_1" id="1" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_3" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_3" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_3"/>
</ComputeNode>
<ComputeNode hostname="hostname_2" uuid="Node_2" id="2" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_4" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_4" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_4"/>
</ComputeNode>
<ComputeNode hostname="hostname_3" uuid="Node_3" id="3" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_5" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_5" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_5"/>
</ComputeNode>
</ModelRoot>

@@ -1,16 +1,16 @@
<ModelRoot>
<ComputeNode hostname="hostname_0" uuid="Node_0" id="0" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_0" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_1" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_2" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_0" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_0"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_1" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_1"/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_2" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_2"/>
</ComputeNode>
<ComputeNode hostname="hostname_1" uuid="Node_1" id="1" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_3" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_3" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_3"/>
</ComputeNode>
<ComputeNode hostname="hostname_2" uuid="Node_2" id="2" state="up" human_id="" status="enabled" vcpus="16" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_4" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_4" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_4"/>
</ComputeNode>
<ComputeNode hostname="hostname_3" uuid="Node_3" id="3" state="up" human_id="" status="disabled" disabled_reason='watcher_disabled' vcpus="16" disk="250" disk_capacity="250" memory="64">
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_5" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance watcher_exclude="False" human_id="" state="active" uuid="INSTANCE_5" vcpus="2" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}' project_id="project_5"/>
</ComputeNode>
</ModelRoot>

@@ -83,6 +83,7 @@ class FakerModelCollector(base.BaseClusterDataModelCollector):
         for i in range(0, instance_count):
             instance_uuid = "INSTANCE_{0}".format(i)
+            project_id = "project_{0}".format(i)
             instance_attributes = {
                 "uuid": instance_uuid,
                 "memory": 2,
@@ -90,7 +91,8 @@ class FakerModelCollector(base.BaseClusterDataModelCollector):
                 "disk_capacity": 20,
                 "vcpus": 10,
                 "metadata":
-                    '{"optimize": true,"top": "floor","nested": {"x": "y"}}'
+                    '{"optimize": true,"top": "floor","nested": {"x": "y"}}',
+                "project_id": project_id
             }
             instance = element.Instance(**instance_attributes)

@@ -198,14 +198,18 @@ class TestComputeScope(base.TestCase):
             {'compute_nodes': [{'name': 'Node_2'},
                                {'name': 'Node_3'}]},
             {'instance_metadata': [{'optimize': True},
-                                   {'optimize1': False}]}]
+                                   {'optimize1': False}]},
+            {'projects': [{'uuid': 'PROJECT_1'},
+                          {'uuid': 'PROJECT_2'}]}]
         instances_to_exclude = []
         nodes_to_exclude = []
         instance_metadata = []
+        projects_to_exclude = []
         compute.ComputeScope([], mock.Mock(),
                              osc=mock.Mock()).exclude_resources(
             resources_to_exclude, instances=instances_to_exclude,
-            nodes=nodes_to_exclude, instance_metadata=instance_metadata)
+            nodes=nodes_to_exclude, instance_metadata=instance_metadata,
+            projects=projects_to_exclude)
         self.assertEqual(['Node_0', 'Node_1', 'Node_2', 'Node_3'],
                          sorted(nodes_to_exclude))
@@ -213,6 +217,8 @@ class TestComputeScope(base.TestCase):
                          sorted(instances_to_exclude))
         self.assertEqual([{'optimize': True}, {'optimize1': False}],
                          instance_metadata)
+        self.assertEqual(['PROJECT_1', 'PROJECT_2'],
+                         sorted(projects_to_exclude))

     def test_exclude_instances_with_given_metadata(self):
         cluster = self.fake_cluster.generate_scenario_1()
@@ -233,6 +239,17 @@ class TestComputeScope(base.TestCase):
             instance_metadata, cluster, instances_to_remove)
         self.assertEqual(set(), instances_to_remove)

+    def test_exclude_instances_with_given_project(self):
+        cluster = self.fake_cluster.generate_scenario_1()
+        instances_to_exclude = set()
+        projects_to_exclude = ['project_1', 'project_2']
+        compute.ComputeScope(
+            [], mock.Mock(),
+            osc=mock.Mock()).exclude_instances_with_given_project(
+            projects_to_exclude, cluster, instances_to_exclude)
+        self.assertEqual(['INSTANCE_1', 'INSTANCE_2'],
+                         sorted(instances_to_exclude))
+
     def test_remove_nodes_from_model(self):
         model = self.fake_cluster.generate_scenario_1()
         compute.ComputeScope([], mock.Mock(),
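
The new test_exclude_instances_with_given_project pins down the expected behaviour: instances owned by any project in the exclusion scope are collected by UUID. A minimal self-contained sketch of that filter (plain dicts stand in for Watcher's Instance model elements; the function name mirrors the one under test but this is not the actual ComputeScope implementation):

```python
# Hypothetical sketch of project-based exclusion; `instances` is a list of
# plain dicts standing in for Watcher's compute-model Instance elements.
def exclude_instances_with_given_project(projects, instances, to_exclude):
    """Collect UUIDs of instances owned by any project in scope."""
    for instance in instances:
        if instance["project_id"] in projects:
            to_exclude.add(instance["uuid"])

instances = [
    {"uuid": "INSTANCE_0", "project_id": "project_0"},
    {"uuid": "INSTANCE_1", "project_id": "project_1"},
    {"uuid": "INSTANCE_2", "project_id": "project_2"},
]
excluded = set()
exclude_instances_with_given_project(["project_1", "project_2"],
                                     instances, excluded)
# sorted(excluded) == ['INSTANCE_1', 'INSTANCE_2']
```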

@@ -0,0 +1,206 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 chinac.com
#
# Authors: suzhengwei<suzhengwei@chinac.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import mock
from watcher.common import exception
from watcher.decision_engine.model import model_root
from watcher.decision_engine.strategy import strategies
from watcher.tests import base
from watcher.tests.decision_engine.model import faker_cluster_state


class TestHostMaintenance(base.TestCase):

    def setUp(self):
        super(TestHostMaintenance, self).setUp()

        # fake cluster
        self.fake_cluster = faker_cluster_state.FakerModelCollector()

        p_model = mock.patch.object(
            strategies.HostMaintenance, "compute_model",
            new_callable=mock.PropertyMock)
        self.m_model = p_model.start()
        self.addCleanup(p_model.stop)

        p_audit_scope = mock.patch.object(
            strategies.HostMaintenance, "audit_scope",
            new_callable=mock.PropertyMock
        )
        self.m_audit_scope = p_audit_scope.start()
        self.addCleanup(p_audit_scope.stop)

        self.m_audit_scope.return_value = mock.Mock()
        self.m_model.return_value = model_root.ModelRoot()
        self.strategy = strategies.HostMaintenance(config=mock.Mock())
    def test_exception_stale_cdm(self):
        self.fake_cluster.set_cluster_data_model_as_stale()
        self.m_model.return_value = self.fake_cluster.cluster_data_model

        self.assertRaises(
            exception.ClusterStateNotDefined,
            self.strategy.execute)

    def test_get_node_capacity(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid("Node_0")
        node_capacity = dict(cpu=40, ram=132, disk=250)
        self.assertEqual(node_capacity,
                         self.strategy.get_node_capacity(node_0))

    def test_get_node_used(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid("Node_0")
        node_used = dict(cpu=20, ram=4, disk=40)
        self.assertEqual(node_used,
                         self.strategy.get_node_used(node_0))

    def test_get_node_free(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid("Node_0")
        node_free = dict(cpu=20, ram=128, disk=210)
        self.assertEqual(node_free,
                         self.strategy.get_node_free(node_0))

    def test_host_fits(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid("Node_0")
        node_1 = model.get_node_by_uuid("Node_1")
        self.assertTrue(self.strategy.host_fits(node_0, node_1))
    def test_add_action_enable_compute_node(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        self.strategy.add_action_enable_compute_node(node_0)

        expected = [{'action_type': 'change_nova_service_state',
                     'input_parameters': {
                         'state': 'enabled',
                         'resource_id': 'Node_0'}}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_add_action_maintain_compute_node(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        self.strategy.add_action_maintain_compute_node(node_0)

        expected = [{'action_type': 'change_nova_service_state',
                     'input_parameters': {
                         'state': 'disabled',
                         'disabled_reason': 'watcher_maintaining',
                         'resource_id': 'Node_0'}}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_migration(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        self.strategy.instance_migration(instance_0, node_0, node_1)
        self.assertEqual(1, len(self.strategy.solution.actions))

        expected = [{'action_type': 'migrate',
                     'input_parameters': {'destination_node': node_1.uuid,
                                          'source_node': node_0.uuid,
                                          'migration_type': 'live',
                                          'resource_id': instance_0.uuid}}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_migration_without_dest_node(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        self.strategy.instance_migration(instance_0, node_0)
        self.assertEqual(1, len(self.strategy.solution.actions))

        expected = [{'action_type': 'migrate',
                     'input_parameters': {'source_node': node_0.uuid,
                                          'migration_type': 'live',
                                          'resource_id': instance_0.uuid}}]
        self.assertEqual(expected, self.strategy.solution.actions)
    def test_host_migration(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")
        self.strategy.host_migration(node_0, node_1)
        self.assertEqual(2, len(self.strategy.solution.actions))

        expected = [{'action_type': 'migrate',
                     'input_parameters': {'destination_node': node_1.uuid,
                                          'source_node': node_0.uuid,
                                          'migration_type': 'live',
                                          'resource_id': instance_0.uuid}},
                    {'action_type': 'migrate',
                     'input_parameters': {'destination_node': node_1.uuid,
                                          'source_node': node_0.uuid,
                                          'migration_type': 'live',
                                          'resource_id': instance_1.uuid}}]
        self.assertIn(expected[0], self.strategy.solution.actions)
        self.assertIn(expected[1], self.strategy.solution.actions)

    def test_safe_maintain(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        self.assertFalse(self.strategy.safe_maintain(node_0))
        self.assertFalse(self.strategy.safe_maintain(node_1))

    def test_try_maintain(self):
        model = self.fake_cluster.generate_scenario_1()
        self.m_model.return_value = model
        node_1 = model.get_node_by_uuid('Node_1')
        self.strategy.try_maintain(node_1)
        self.assertEqual(2, len(self.strategy.solution.actions))
    def test_strategy(self):
        model = self.fake_cluster. \
            generate_scenario_9_with_3_active_plus_1_disabled_nodes()
        self.m_model.return_value = model
        node_2 = model.get_node_by_uuid('Node_2')
        node_3 = model.get_node_by_uuid('Node_3')
        instance_4 = model.get_instance_by_uuid("INSTANCE_4")

        if not self.strategy.safe_maintain(node_2, node_3):
            self.strategy.try_maintain(node_2)

        expected = [{'action_type': 'change_nova_service_state',
                     'input_parameters': {
                         'resource_id': 'Node_3',
                         'state': 'enabled'}},
                    {'action_type': 'change_nova_service_state',
                     'input_parameters': {
                         'resource_id': 'Node_2',
                         'state': 'disabled',
                         'disabled_reason': 'watcher_maintaining'}},
                    {'action_type': 'migrate',
                     'input_parameters': {
                         'destination_node': node_3.uuid,
                         'source_node': node_2.uuid,
                         'migration_type': 'live',
                         'resource_id': instance_4.uuid}}]
        self.assertEqual(expected, self.strategy.solution.actions)