Compare commits

...

21 Commits
1.7.0 ... 1.4.2

Author SHA1 Message Date
Ian Wienand
30d6f07ceb Replace openstack.org git:// URLs with https://
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.

This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.

This update should result in no functional change.

For more information see the thread at

 http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html

Change-Id: I5ffcaf509ec6901f7e221a4312cc6b0577090440
2019-03-24 20:36:25 +00:00
licanwei
343a65952a Check job before removing it
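The fix (visible in the continuous-audit-handler diff further down) boils down to filtering the job list first and indexing only when a match exists. A minimal sketch, with a hypothetical `Job` stand-in for the real scheduler job:

```python
class Job:
    """Hypothetical stand-in for a scheduler job."""
    def __init__(self, name):
        self.name = name
        self.removed = False

    def remove(self):
        self.removed = True


def remove_audit_job(scheduled_jobs):
    # Indexing [0] unconditionally raises IndexError when nothing matches;
    # build the filtered list once and check it before removing.
    jobs = [j for j in scheduled_jobs if j.name == "execute_audit"]
    if jobs:
        jobs[0].remove()
        return True
    return False
```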
Change-Id: Ibbd4da25fac6016a0d76c8f810ac567f6fd075f1
Closes-Bug: #1782731
(cherry picked from commit 4022714f5d)
2018-10-23 11:57:16 +00:00
LiXiangyu
9af6886b0e Fix TypeError in function chunkify
This patch fixes a TypeError raised by range() in the chunkify
function: range() expects an integer step argument but was given a str.
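The repaired helper, mirroring the planner diff further down (the fix is the `n = int(n)` cast before range() is called):

```python
def chunkify(lst, n):
    """Yield successive n-sized chunks from lst."""
    # n can arrive as a str (e.g. "2"); range() raises
    # "TypeError: 'str' object cannot be interpreted as an integer"
    # for a non-integer step, so cast it first.
    n = int(n)
    if n < 1:
        # Just to make sure the number is valid
        n = 1
    for i in range(0, len(lst), n):
        yield lst[i:i + n]
```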

Change-Id: I2acde859e014baa4c4c59caa6f4ea938c7c4c3bf
(cherry picked from commit c717be12a6)
2018-10-23 06:55:49 +00:00
Nguyen Hai
b0ef77f5d1 import zuul job settings from project-config
This is a mechanically generated patch to complete step 1 of moving
the zuul job settings out of project-config and into each project
repository.

Because there will be a separate patch on each branch, the branch
specifiers for branch-specific jobs have been removed.

Because this patch is generated by a script, there may be some
cosmetic changes to the layout of the YAML file(s) as the contents are
normalized.

See the python3-first goal document for details:
https://governance.openstack.org/tc/goals/stein/python3-first.html

Change-Id: I9ccef45c11c17c3bdda143a53b325be327b9459d
Story: #2002586
Task: #24344
2018-08-19 00:58:52 +09:00
Alexander Chadin
f5157f2894 workload_stabilization trivial fix
This fix compares the metric name by value,
not by object identity.
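In CPython, `is` tests object identity, so two equal strings built differently can compare unequal; the metric check needs `==`. A minimal illustration:

```python
a = "cpu_util"
b = "".join(["cpu", "_util"])  # same value, constructed as a new object

print(a == b)   # True: value comparison, what the metric check needs
print(a is b)   # False here: identity comparison, the original bug
```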

Change-Id: I57c50ff97efa43efe4fd81875e481b25e9a18cc6
2018-02-20 14:42:04 +00:00
OpenStack Proposal Bot
13331935df Updated from global requirements
Change-Id: I941f6e5a005124a98d2860695fb9a30a77bc595c
2018-02-14 18:48:22 +00:00
Alexander Chadin
8d61c1a2b4 Fix workload_stabilization unavailable nodes and instances
This patch set excludes nodes and instances from auditing
if appropriate metrics aren't available.

Change-Id: I87c6c249e3962f45d082f92d7e6e0be04e101799
Closes-Bug: #1736982
(cherry picked from commit 701b258dc7)
2018-01-24 11:57:14 +00:00
Hidekazu Nakamura
6b4b5c2fe5 Fix gnocchiclient creation
Gnocchiclient uses keystoneauth1.adapter, so adapter_options
needs to be given.
This patch fixes gnocchiclient creation.
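The change (see the clients diff further down) wraps the interface setting in an `adapter_options` dict instead of passing `interface=` directly. A sketch with a stand-in client class, since the real gnocchiclient is not imported here:

```python
class FakeGnocchiClient:
    """Stand-in for gnocchiclient's Client, which forwards endpoint
    settings to keystoneauth1.adapter through adapter_options."""
    def __init__(self, api_version, adapter_options=None, session=None):
        self.interface = (adapter_options or {}).get("interface")


# Before: Client(version, interface=endpoint_type, session=session)
# After:  the interface travels inside adapter_options
client = FakeGnocchiClient("1", adapter_options={"interface": "publicURL"})
```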

Change-Id: I6b5d8ee775929f4b3fd30be3321b378d19085547
Closes-Bug: #1714871
(cherry picked from commit a2fa13c8ff)
2017-11-13 08:20:55 +00:00
OpenStack Proposal Bot
62623a7f77 Updated from global requirements
Change-Id: Iede1409c379d90238b6f2ab6a9aa750b3081df94
2017-09-21 01:08:52 +00:00
licanwei
9d2f8d11ec Fix KeyError exception
During the strategy sync process,
if goal_id can't be found in the goals table,
a KeyError exception is raised.
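The corresponding fix in the syncer (diff further down) filters strategies against the known goal ids instead of indexing the goal map directly; schematically:

```python
goals = {1: "dummy", 2: "server_consolidation"}             # id -> goal
strategies = [{"name": "basic", "goal_id": 2},
              {"name": "orphan", "goal_id": 99}]            # goal 99 is gone

# goals[99] would raise KeyError; filter against the known ids first and
# drop (soft-delete) the stale strategies instead of crashing.
goal_ids = set(goals)
stale = [s for s in strategies if s["goal_id"] not in goal_ids]
for s in stale:
    strategies.remove(s)
```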

Change-Id: I62800ac5c69f4f5c7820908f2e777094a51a5541
Closes-Bug: #1711086
2017-08-24 12:34:13 +00:00
Jenkins
f1d064c759 Merge "workload balance base on cpu or ram util" into stable/pike 2017-08-24 10:09:30 +00:00
Jenkins
6cb02c18a7 Merge "Remove pbr warnerrors" into stable/pike 2017-08-24 10:09:19 +00:00
Jenkins
37fc37e138 Merge "Update the documentation for doc migration" into stable/pike 2017-08-24 10:09:13 +00:00
Jenkins
b68685741e Merge "Adjust the action state judgment logic" into stable/pike 2017-08-24 08:37:55 +00:00
zhengwei6082
6721977f74 Update the documentation for doc migration
Change-Id: I22dc18e6f2f7471f5c804d4d19c631f81a6e196b
(cherry picked from commit d5bcd37478)
2017-08-23 10:04:55 +00:00
Alexander Chadin
c303ad4cdc Remove pbr warnerrors
This change removes the now unused "warnerrors" setting,
which is replaced by "warning-is-error" in Sphinx
releases >= 1.5 [1].

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113085.html

Change-Id: I32f078169668be08737e47cd15edbdfba42904dc
(cherry picked from commit f76a628d1f)
2017-08-23 10:03:58 +00:00
licanwei
51c9db2936 Adjust the action state judgment logic
The action state is set to SUCCEEDED only when True is returned;
some actions (such as migrate) return None when an exception is raised.
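The decision is small enough to sketch: only an explicit True means success, while anything else (False, or the None a failed migrate returns) counts as failure:

```python
def action_state(result):
    # Mirrors the workflow-engine change: treat only True as success, so
    # a None returned after an internal exception no longer counts as one.
    return "SUCCEEDED" if result is True else "FAILED"
```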

Change-Id: I52e7a1ffb68f54594f2b00d9843e8e0a4c985667
(cherry picked from commit 965af1b6fd)
2017-08-23 10:03:32 +00:00
suzhengwei
1e003d4153 workload balance base on cpu or ram util
The input parameter "metrics" selects whether the decision to migrate
a VM is based on CPU or memory utilization.
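The per-instance workload then depends on which meter was chosen: `cpu_util` is a percentage that must be scaled by vCPU count, while `memory.resident` is already an absolute amount (meter names as in the strategy diff further down; the helper itself is illustrative):

```python
CPU_METER = "cpu_util"         # percent of a vCPU, range [0, 100]
MEM_METER = "memory.resident"  # MB


def instance_workload(meter, value, vcpus):
    # cpu_util must be normalized by the instance's vCPU count;
    # memory.resident is used as-is.
    if meter == CPU_METER:
        return value * vcpus / 100
    return value
```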

Change-Id: I35cce3495c8dacad64ea6c6ee71082a85e9e0a83
(cherry picked from commit 5c86a54d20)
2017-08-23 10:03:15 +00:00
Hidekazu Nakamura
bab89fd769 Fix gnocchi repository URL in local.conf.controller
This patch set updates the gnocchi repository URL in local.conf.controller
because gnocchi moved out of the openstack namespace into its own repository.

Change-Id: I53c6efcb40b26f83bc1867564b9067ae5f50938d
(cherry picked from commit 5cc4716a95)
2017-08-23 10:02:57 +00:00
OpenStack Release Bot
e1e17ab0b9 Update UPPER_CONSTRAINTS_FILE for stable/pike
Change-Id: I453ae1575d2dad4b724d96cfc6ebf5c5b2a2a3be
2017-08-11 01:09:58 +00:00
OpenStack Release Bot
3d542472f6 Update .gitreview for stable/pike
Change-Id: I685c0bc8773d2b5f9e747a53d870a92dd6baea36
2017-08-11 01:09:57 +00:00
30 changed files with 368 additions and 76 deletions


@@ -2,3 +2,4 @@
host=review.openstack.org
port=29418
project=openstack/watcher.git
+defaultbranch=stable/pike

.zuul.yaml Normal file

@@ -0,0 +1,9 @@
+- project:
+    templates:
+      - openstack-python-jobs
+      - openstack-python35-jobs
+      - publish-openstack-sphinx-docs
+      - check-requirements
+      - release-notes-jobs
+    gate:
+      queue: watcher


@@ -35,7 +35,7 @@ VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
NOVA_INSTANCES_PATH=/opt/stack/data/instances
# Enable the Ceilometer plugin for the compute agent
-enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
+enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
disable_service ceilometer-acentral,ceilometer-collector,ceilometer-api
LOGFILE=$DEST/logs/stack.sh.log


@@ -32,13 +32,13 @@ ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3,neutron
enable_service n-cauth
# Enable the Watcher Dashboard plugin
-enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard
+enable_plugin watcher-dashboard https://git.openstack.org/openstack/watcher-dashboard
# Enable the Watcher plugin
-enable_plugin watcher git://git.openstack.org/openstack/watcher
+enable_plugin watcher https://git.openstack.org/openstack/watcher
# Enable the Ceilometer plugin
-enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
+enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
# This is the controller node, so disable the ceilometer compute agent
disable_service ceilometer-acompute
@@ -46,7 +46,7 @@ disable_service ceilometer-acompute
enable_service ceilometer-api
# Enable the Gnocchi plugin
-enable_plugin gnocchi https://git.openstack.org/openstack/gnocchi
+enable_plugin gnocchi https://github.com/gnocchixyz/gnocchi
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2


@@ -165,7 +165,7 @@ You can easily generate and update a sample configuration file
named :ref:`watcher.conf.sample <watcher_sample_configuration_files>` by using
these following commands::
-    $ git clone git://git.openstack.org/openstack/watcher
+    $ git clone https://git.openstack.org/openstack/watcher
$ cd watcher/
$ tox -e genconfig
$ vi etc/watcher/watcher.conf.sample


@@ -19,7 +19,7 @@ model. To enable the Watcher plugin with DevStack, add the following to the
`[[local|localrc]]` section of your controller's `local.conf` to enable the
Watcher plugin::
-    enable_plugin watcher git://git.openstack.org/openstack/watcher
+    enable_plugin watcher https://git.openstack.org/openstack/watcher
For more detailed instructions, see `Detailed DevStack Instructions`_. Check
out the `DevStack documentation`_ for more information regarding DevStack.


@@ -178,7 +178,7 @@ Here below is how you would proceed to register ``DummyAction`` using pbr_:
watcher_actions =
dummy = thirdparty.dummy:DummyAction
-.. _pbr: http://docs.openstack.org/developer/pbr/
+.. _pbr: https://docs.openstack.org/pbr/latest
Using action plugins


@@ -25,6 +25,7 @@ The *workload_balance* strategy requires the following metrics:
metric service name plugins comment
======================= ============ ======= =======
``cpu_util`` ceilometer_ none
+``memory.resident`` ceilometer_ none
======================= ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
@@ -66,6 +67,9 @@ Strategy parameters are:
============== ====== ============= ====================================
parameter type default Value description
============== ====== ============= ====================================
+``metrics`` String 'cpu_util' Workload balance base on cpu or ram
+ utilization. choice: ['cpu_util',
+ 'memory.resident']
``threshold`` Number 25.0 Workload threshold for migration
``period`` Number 300 Aggregate time period of ceilometer
============== ====== ============= ====================================
@@ -90,7 +94,7 @@ How to use it ?
at1 workload_balancing --strategy workload_balance
$ openstack optimize audit create -a at1 -p threshold=26.0 \
- -p period=310
+ -p period=310 -p metrics=cpu_util
External Links
--------------


@@ -0,0 +1,7 @@
+---
+features:
+  - Existing workload_balance strategy based on
+    the VM workloads of CPU. This feature improves
+    the strategy. By the input parameter "metrics",
+    it makes decision to migrate a VM base on CPU
+    or memory utilization.


@@ -37,7 +37,7 @@ python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.7.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=9.0.0 # Apache-2.0
-python-openstackclient!=3.10.0,>=3.3.0 # Apache-2.0
+python-openstackclient>=3.11.0 # Apache-2.0
python-ironicclient>=1.14.0 # Apache-2.0
six>=1.9.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
@@ -45,5 +45,5 @@ stevedore>=1.20.0 # Apache-2.0
taskflow>=2.7.0 # Apache-2.0
WebOb>=1.7.1 # MIT
WSME>=0.8 # MIT
-networkx>=1.10 # BSD
+networkx<2.0,>=1.10 # BSD


@@ -98,7 +98,6 @@ watcher_cluster_data_model_collectors =
[pbr]
-warnerrors = true
autodoc_index_modules = true
autodoc_exclude_modules =
watcher.db.sqlalchemy.alembic.env


@@ -7,7 +7,7 @@ skipsdist = True
usedevelop = True
whitelist_externals = find
rm
-install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
+install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/pike} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt


@@ -120,14 +120,15 @@ class TaskFlowActionContainer(base.BaseTaskFlowActionContainer):
     def do_execute(self, *args, **kwargs):
         LOG.debug("Running action: %s", self.name)
-        # NOTE: For result is False, set action state fail
+        # NOTE:Some actions(such as migrate) will return None when exception
+        # Only when True is returned, the action state is set to SUCCEEDED
         result = self.action.execute()
-        if result is False:
-            return self.engine.notify(self._db_action,
-                                      objects.action.State.FAILED)
-        else:
+        if result is True:
             return self.engine.notify(self._db_action,
                                       objects.action.State.SUCCEEDED)
+        else:
+            return self.engine.notify(self._db_action,
+                                      objects.action.State.FAILED)

     def do_post_execute(self):
         LOG.debug("Post-condition action: %s", self.name)


@@ -110,8 +110,12 @@ class OpenStackClients(object):
                                                           'api_version')
         gnocchiclient_interface = self._get_client_option('gnocchi',
                                                           'endpoint_type')
+        adapter_options = {
+            "interface": gnocchiclient_interface
+        }
         self._gnocchi = gnclient.Client(gnocchiclient_version,
-                                        interface=gnocchiclient_interface,
+                                        adapter_options=adapter_options,
                                         session=self.session)
         return self._gnocchi


@@ -62,11 +62,12 @@ class ContinuousAuditHandler(base.AuditHandler):
         if objects.audit.AuditStateTransitionManager().is_inactive(audit):
             # if audit isn't in active states, audit's job must be removed to
             # prevent using of inactive audit in future.
-            if self.scheduler.get_jobs():
-                [job for job in self.scheduler.get_jobs()
-                 if job.name == 'execute_audit' and
-                 job.args[0].uuid == audit.uuid][0].remove()
-            return True
+            jobs = [job for job in self.scheduler.get_jobs()
+                    if job.name == 'execute_audit' and
+                    job.args[0].uuid == audit.uuid]
+            if jobs:
+                jobs[0].remove()
+            return True
         return False


@@ -87,6 +87,7 @@ class WeightPlanner(base.BasePlanner):
     @staticmethod
     def chunkify(lst, n):
         """Yield successive n-sized chunks from lst."""
+        n = int(n)
         if n < 1:
             # Just to make sure the number is valid
             n = 1


@@ -22,7 +22,7 @@
*Description*
This strategy migrates a VM based on the VM workload of the hosts.
-It makes decision to migrate a workload whenever a host's CPU
+It makes decision to migrate a workload whenever a host's CPU or RAM
utilization % is higher than the specified threshold. The VM to
be moved should make the host close to average workload of all
hosts nodes.
@@ -32,7 +32,7 @@ hosts nodes.
* Hardware: compute node should use the same physical CPUs
* Software: Ceilometer component ceilometer-agent-compute
running in each compute node, and Ceilometer API can
-report such telemetry "cpu_util" successfully.
+report such telemetry "cpu_util" and "memory.resident" successfully.
* You must have at least 2 physical compute nodes to run
this strategy.
@@ -69,16 +69,16 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
It is a migration strategy based on the VM workload of physical
servers. It generates solutions to move a workload whenever a server's
-CPU utilization % is higher than the specified threshold.
+CPU or RAM utilization % is higher than the specified threshold.
The VM to be moved should make the host close to average workload
of all compute nodes.
*Requirements*
-* Hardware: compute node should use the same physical CPUs
+* Hardware: compute node should use the same physical CPUs/RAMs
* Software: Ceilometer component ceilometer-agent-compute running
in each compute node, and Ceilometer API can report such telemetry
-"cpu_util" successfully.
+"cpu_util" and "memory.resident" successfully.
* You must have at least 2 physical compute nodes to run this strategy
*Limitations*
@@ -91,8 +91,12 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
"""
# The meter to report CPU utilization % of VM in ceilometer
-METER_NAME = "cpu_util"
+# Unit: %, value range is [0 , 100]
+CPU_METER_NAME = "cpu_util"
+# The meter to report memory resident of VM in ceilometer
+# Unit: MB
+MEM_METER_NAME = "memory.resident"
MIGRATION = "migrate"
@@ -104,9 +108,9 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
:param osc: :py:class:`~.OpenStackClients` instance
"""
super(WorkloadBalance, self).__init__(config, osc)
-# the migration plan will be triggered when the CPU utilization %
-# reaches threshold
-self._meter = self.METER_NAME
+# the migration plan will be triggered when the CPU or RAM
+# utilization % reaches threshold
+self._meter = None
self._ceilometer = None
self._gnocchi = None
@@ -151,6 +155,13 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
# Mandatory default setting for each element
return {
"properties": {
+"metrics": {
+"description": "Workload balance based on metrics: "
+"cpu or ram utilization",
+"type": "string",
+"choice": ["cpu_util", "memory.resident"],
+"default": "cpu_util"
+},
"threshold": {
"description": "workload threshold for migration",
"type": "number",
@@ -251,18 +262,21 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
cores_available = host.vcpus - cores_used
disk_available = host.disk - disk_used
mem_available = host.memory - mem_used
if (
cores_available >= required_cores and
disk_available >= required_disk and
if (cores_available >= required_cores and
mem_available >= required_mem and
disk_available >= required_disk):
if (self._meter == self.CPU_METER_NAME and
((src_instance_workload + workload) <
self.threshold / 100 * host.vcpus)
):
destination_hosts.append(instance_data)
self.threshold / 100 * host.vcpus)):
destination_hosts.append(instance_data)
if (self._meter == self.MEM_METER_NAME and
((src_instance_workload + workload) <
self.threshold / 100 * host.memory)):
destination_hosts.append(instance_data)
return destination_hosts
-def group_hosts_by_cpu_util(self):
+def group_hosts_by_cpu_or_ram_util(self):
"""Calculate the workloads of each node
try to find out the nodes which have reached threshold
@@ -286,10 +300,10 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
instances = self.compute_model.get_node_instances(node)
node_workload = 0.0
for instance in instances:
-cpu_util = None
+instance_util = None
try:
if self.config.datasource == "ceilometer":
-cpu_util = self.ceilometer.statistic_aggregation(
+instance_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=self._meter,
period=self._period,
@@ -298,7 +312,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self._period))
-cpu_util = self.gnocchi.statistic_aggregation(
+instance_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=self._meter,
granularity=self.granularity,
@@ -308,23 +322,32 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
)
except Exception as exc:
LOG.exception(exc)
-LOG.error("Can not get cpu_util from %s",
+LOG.error("Can not get %s from %s", self._meter,
self.config.datasource)
continue
-if cpu_util is None:
-LOG.debug("Instance (%s): cpu_util is None", instance.uuid)
+if instance_util is None:
+LOG.debug("Instance (%s): %s is None",
+instance.uuid, self._meter)
continue
-workload_cache[instance.uuid] = cpu_util * instance.vcpus / 100
+if self._meter == self.CPU_METER_NAME:
+workload_cache[instance.uuid] = (instance_util *
+instance.vcpus / 100)
+else:
+workload_cache[instance.uuid] = instance_util
node_workload += workload_cache[instance.uuid]
-LOG.debug("VM (%s): cpu_util %f", instance.uuid, cpu_util)
-node_cpu_util = node_workload / node.vcpus * 100
+LOG.debug("VM (%s): %s %f", instance.uuid, self._meter,
+instance_util)
cluster_workload += node_workload
+if self._meter == self.CPU_METER_NAME:
+node_util = node_workload / node.vcpus * 100
+else:
+node_util = node_workload / node.memory * 100
instance_data = {
-'node': node, "cpu_util": node_cpu_util,
+'node': node, self._meter: node_util,
'workload': node_workload}
-if node_cpu_util >= self.threshold:
+if node_util >= self.threshold:
# mark the node to release resources
overload_hosts.append(instance_data)
else:
@@ -356,8 +379,9 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
"""
self.threshold = self.input_parameters.threshold
self._period = self.input_parameters.period
+self._meter = self.input_parameters.metrics
source_nodes, target_nodes, avg_workload, workload_cache = (
-self.group_hosts_by_cpu_util())
+self.group_hosts_by_cpu_or_ram_util())
if not source_nodes:
LOG.debug("No hosts require optimization")
@@ -373,7 +397,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
# choose the server with largest cpu_util
source_nodes = sorted(source_nodes,
reverse=True,
-key=lambda x: (x[self.METER_NAME]))
+key=lambda x: (x[self._meter]))
instance_to_migrate = self.choose_instance_to_migrate(
source_nodes, avg_workload, workload_cache)
@@ -391,7 +415,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
"be because of there's no enough CPU/Memory/DISK")
return self.solution
destination_hosts = sorted(destination_hosts,
-key=lambda x: (x["cpu_util"]))
+key=lambda x: (x[self._meter]))
# always use the host with lowerest CPU utilization
mig_destination_node = destination_hosts[0]['node']
# generate solution to migrate the instance to the dest server,


@@ -252,7 +252,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
"No values returned by %(resource_id)s "
"for %(metric_name)s" % dict(
resource_id=instance.uuid, metric_name=meter))
-avg_meter = 0
+return
if meter == 'cpu_util':
avg_meter /= float(100)
instance_load[meter] = avg_meter
@@ -308,12 +308,10 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
)
if avg_meter is None:
-if meter_name == 'hardware.memory.used':
-avg_meter = node.memory
-if meter_name == 'compute.node.cpu.percent':
-avg_meter = 1
+LOG.warning('No values returned by node %s for %s',
+node_id, meter_name)
+del hosts_load[node_id]
+break
else:
if meter_name == 'hardware.memory.used':
avg_meter /= oslo_utils.units.Ki
@@ -362,10 +360,12 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
migration_case = []
new_hosts = copy.deepcopy(hosts)
instance_load = self.get_instance_load(instance)
+if not instance_load:
+return
s_host_vcpus = new_hosts[src_node.uuid]['vcpus']
d_host_vcpus = new_hosts[dst_node.uuid]['vcpus']
for metric in self.metrics:
-if metric is 'cpu_util':
+if metric == 'cpu_util':
new_hosts[src_node.uuid][metric] -= (
self.transform_instance_cpu(instance_load, s_host_vcpus))
new_hosts[dst_node.uuid][metric] += (
@@ -408,6 +408,8 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
dst_node = self.compute_model.get_node_by_uuid(dst_host)
sd_case = self.calculate_migration_case(
hosts, instance, src_node, dst_node)
+if sd_case is None:
+break
weighted_sd = self.calculate_weighted_sd(sd_case[:-1])
@@ -416,6 +418,8 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
'host': dst_node.uuid, 'value': weighted_sd,
's_host': src_node.uuid, 'instance': instance.uuid}
instance_host_map.append(min_sd_case)
+if sd_case is None:
+continue
return sorted(instance_host_map, key=lambda x: x['value'])
def check_threshold(self):
@@ -424,7 +428,12 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
normalized_load = self.normalize_hosts_load(hosts_load)
for metric in self.metrics:
metric_sd = self.get_sd(normalized_load, metric)
+LOG.info("Standard deviation for %s is %s."
+% (metric, metric_sd))
if metric_sd > float(self.thresholds[metric]):
+LOG.info("Standard deviation of %s exceeds"
+" appropriate threshold %s."
+% (metric, metric_sd))
return self.simulate_migrations(hosts_load)
def add_migration(self,


@@ -78,6 +78,14 @@ class Syncer(object):
"""Strategies loaded from DB"""
if self._available_strategies is None:
self._available_strategies = objects.Strategy.list(self.ctx)
+goal_ids = [g.id for g in self.available_goals]
+stale_strategies = [s for s in self._available_strategies
+if s.goal_id not in goal_ids]
+for s in stale_strategies:
+LOG.info("Can't find Goal id %d of strategy %s",
+s.goal_id, s.name)
+s.soft_delete()
+self._available_strategies.remove(s)
return self._available_strategies
@property


@@ -20,6 +20,8 @@ import eventlet
import mock
from watcher.applier.workflow_engine import default as tflow
from watcher.common import clients
+from watcher.common import nova_helper
from watcher import objects
from watcher.tests.db import base
from watcher.tests.objects import utils as obj_utils
@@ -55,6 +57,32 @@ class TestTaskFlowActionContainer(base.DbTestCase):
self.assertTrue(action.state, objects.action.State.SUCCEEDED)
@mock.patch.object(clients.OpenStackClients, 'nova', mock.Mock())
def test_execute_with_failed(self):
nova_util = nova_helper.NovaHelper()
instance = "31b9dd5c-b1fd-4f61-9b68-a47096326dac"
nova_util.nova.servers.get.return_value = instance
action_plan = obj_utils.create_test_action_plan(
self.context, audit_id=self.audit.id,
strategy_id=self.strategy.id,
state=objects.action.State.ONGOING)
action = obj_utils.create_test_action(
self.context, action_plan_id=action_plan.id,
state=objects.action.State.ONGOING,
action_type='migrate',
input_parameters={"resource_id":
instance,
"migration_type": "live",
"destination_node": "host2",
"source_node": "host1"})
action_container = tflow.TaskFlowActionContainer(
db_action=action,
engine=self.engine)
action_container.execute()
self.assertTrue(action.state, objects.action.State.FAILED)
@mock.patch('eventlet.spawn')
def test_execute_with_cancel_action_plan(self, mock_eventlet_spawn):
action_plan = obj_utils.create_test_action_plan(


@@ -190,7 +190,8 @@ class TestClients(base.TestCase):
osc.gnocchi()
mock_call.assert_called_once_with(
CONF.gnocchi_client.api_version,
-interface=CONF.gnocchi_client.endpoint_type,
+adapter_options={
+"interface": CONF.gnocchi_client.endpoint_type},
session=mock_session)
@mock.patch.object(clients.OpenStackClients, 'session')


@@ -379,8 +379,27 @@ class TestContinuousAuditHandler(base.DbTestCase):
self.audits[0].next_run_time = (datetime.datetime.now() -
datetime.timedelta(seconds=1800))
m_is_inactive.return_value = True
-m_get_jobs.return_value = None
+m_get_jobs.return_value = []
audit_handler.execute_audit(self.audits[0], self.context)
m_execute.assert_called_once_with(self.audits[0], self.context)
self.assertIsNotNone(self.audits[0].next_run_time)
+@mock.patch.object(scheduling.BackgroundSchedulerService, 'get_jobs')
+def test_is_audit_inactive(self, mock_jobs):
+audit_handler = continuous.ContinuousAuditHandler()
+mock_jobs.return_value = mock.MagicMock()
+audit_handler._scheduler = mock.MagicMock()
+ap_jobs = [job.Job(mock.MagicMock(), name='execute_audit',
+func=audit_handler.execute_audit,
+args=(self.audits[0], mock.MagicMock()),
+kwargs={}),
+]
+audit_handler.update_audit_state(self.audits[1],
+objects.audit.State.CANCELLED)
+mock_jobs.return_value = ap_jobs
+is_inactive = audit_handler._is_audit_inactive(self.audits[1])
+self.assertTrue(is_inactive)
+is_inactive = audit_handler._is_audit_inactive(self.audits[0])
+self.assertFalse(is_inactive)


@@ -54,6 +54,8 @@ class FakeCeilometerMetrics(object):
result = 0.0
if meter_name == "cpu_util":
result = self.get_average_usage_instance_cpu_wb(resource_id)
+elif meter_name == "memory.resident":
+result = self.get_average_usage_instance_memory_wb(resource_id)
return result
def mock_get_statistics_nn(self, resource_id, meter_name, period,
@@ -174,6 +176,8 @@ class FakeCeilometerMetrics(object):
# node 3
mock['Node_6_hostname_6'] = 8
+# This node doesn't send metrics
+mock['LOST_NODE_hostname_7'] = None
mock['Node_19_hostname_19'] = 10
# node 4
mock['INSTANCE_7_hostname_7'] = 4
@@ -188,7 +192,10 @@ class FakeCeilometerMetrics(object):
# mock[uuid] = random.randint(1, 4)
mock[uuid] = 8
-return float(mock[str(uuid)])
+if mock[str(uuid)] is not None:
+return float(mock[str(uuid)])
+else:
+return mock[str(uuid)]
@staticmethod
def get_average_usage_instance_cpu_wb(uuid):
@@ -211,6 +218,20 @@ class FakeCeilometerMetrics(object):
mock['INSTANCE_4'] = 10
return float(mock[str(uuid)])
+@staticmethod
+def get_average_usage_instance_memory_wb(uuid):
+mock = {}
+# node 0
+mock['INSTANCE_1'] = 30
+# node 1
+mock['INSTANCE_3'] = 12
+mock['INSTANCE_4'] = 12
+if uuid not in mock.keys():
+# mock[uuid] = random.randint(1, 4)
+mock[uuid] = 12
+return mock[str(uuid)]
@staticmethod
def get_average_usage_instance_cpu(uuid):
"""The last VM CPU usage values to average
@@ -239,6 +260,8 @@ class FakeCeilometerMetrics(object):
# node 4
mock['INSTANCE_7'] = 4
+mock['LOST_INSTANCE'] = None
if uuid not in mock.keys():
# mock[uuid] = random.randint(1, 4)
mock[uuid] = 8


@@ -0,0 +1,50 @@
<ModelRoot>
<ComputeNode human_id="" uuid="Node_0" status="enabled" state="up" id="0" hostname="hostname_0" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance state="active" human_id="" uuid="INSTANCE_0" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_1" status="enabled" state="up" id="1" hostname="hostname_1" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance state="active" human_id="" uuid="INSTANCE_2" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_2" status="enabled" state="up" id="2" hostname="hostname_2" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_5" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_3" status="enabled" state="up" id="3" hostname="hostname_3" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance state="active" human_id="" uuid="INSTANCE_6" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
</ComputeNode>
<ComputeNode human_id="" uuid="Node_4" status="enabled" state="up" id="4" hostname="hostname_4" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance state="active" human_id="" uuid="INSTANCE_7" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
</ComputeNode>
<ComputeNode human_id="" uuid="LOST_NODE" status="enabled" state="up" id="1" hostname="hostname_7" vcpus="40" disk="250" disk_capacity="250" memory="132">
<Instance state="active" human_id="" uuid="LOST_INSTANCE" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
</ComputeNode>
<Instance state="active" human_id="" uuid="INSTANCE_10" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_11" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_12" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_13" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_14" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_15" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_16" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_17" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_18" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_19" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_20" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_21" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_22" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_23" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_24" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_25" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_26" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_27" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_28" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_29" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_30" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_31" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_32" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_33" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_34" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_8" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
<Instance state="active" human_id="" uuid="INSTANCE_9" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
</ModelRoot>
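The fixture above stores per-instance hints as a JSON string in each `Instance` element's `metadata` attribute. As a minimal, self-contained sketch of how such a fragment can be decoded (this is only an illustration of the fixture layout, not Watcher's actual model loader, and the snippet trims the attribute set for readability):

```python
import json
import xml.etree.ElementTree as ET

# Abbreviated stand-in for the fixture above.
FIXTURE = """
<ModelRoot>
  <ComputeNode uuid="LOST_NODE" hostname="hostname_7">
    <Instance uuid="LOST_INSTANCE" vcpus="10" memory="2"
              metadata='{"optimize": true, "top": "floor", "nested": {"x": "y"}}'/>
  </ComputeNode>
</ModelRoot>
"""

def instance_metadata(xml_text):
    """Map each Instance uuid to its decoded metadata dict."""
    root = ET.fromstring(xml_text)
    return {i.get("uuid"): json.loads(i.get("metadata"))
            for i in root.iter("Instance")}

meta = instance_metadata(FIXTURE)
print(meta["LOST_INSTANCE"]["nested"]["x"])  # prints: y
```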


@@ -1,10 +1,10 @@
 <ModelRoot>
 <ComputeNode human_id="" uuid="Node_0" status="enabled" state="up" id="0" hostname="hostname_0" vcpus="40" disk="250" disk_capacity="250" memory="132">
-<Instance state="active" human_id="" uuid="73b09e16-35b7-4922-804e-e8f5d9b740fc" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
-<Instance state="active" human_id="" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
+<Instance state="active" human_id="" uuid="73b09e16-35b7-4922-804e-e8f5d9b740fc" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
+<Instance state="active" human_id="" uuid="INSTANCE_1" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
 </ComputeNode>
 <ComputeNode human_id="" uuid="Node_1" status="enabled" state="up" id="1" hostname="hostname_1" vcpus="40" disk="250" disk_capacity="250" memory="132">
-<Instance state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
-<Instance state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="2" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
+<Instance state="active" human_id="" uuid="INSTANCE_3" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
+<Instance state="active" human_id="" uuid="INSTANCE_4" vcpus="10" disk="20" disk_capacity="20" memory="32" metadata='{"optimize": true,"top": "floor", "nested": {"x": "y"}}'/>
 </ComputeNode>
 </ModelRoot>


@@ -114,6 +114,9 @@ class FakerModelCollector(base.BaseClusterDataModelCollector):
     def generate_scenario_1(self):
         return self.load_model('scenario_1.xml')
 
+    def generate_scenario_1_with_1_node_unavailable(self):
+        return self.load_model('scenario_1_with_1_node_unavailable.xml')
+
     def generate_scenario_3_with_2_nodes(self):
         return self.load_model('scenario_3_with_2_nodes.xml')


@@ -50,6 +50,8 @@ class FakeGnocchiMetrics(object):
         result = 0.0
         if metric == "cpu_util":
             result = self.get_average_usage_instance_cpu_wb(resource_id)
+        elif metric == "memory.resident":
+            result = self.get_average_usage_instance_memory_wb(resource_id)
         return result
 
     @staticmethod
@@ -130,6 +132,8 @@ class FakeGnocchiMetrics(object):
         # node 3
         mock['Node_6_hostname_6'] = 8
+        # This node doesn't send metrics
+        mock['LOST_NODE_hostname_7'] = None
         mock['Node_19_hostname_19'] = 10
         # node 4
         mock['INSTANCE_7_hostname_7'] = 4
@@ -143,7 +147,10 @@ class FakeGnocchiMetrics(object):
         if uuid not in mock.keys():
             mock[uuid] = 8
-        return float(mock[str(uuid)])
+        if mock[str(uuid)] is not None:
+            return float(mock[str(uuid)])
+        else:
+            return mock[str(uuid)]
 
     @staticmethod
     def get_average_usage_instance_cpu(uuid):
@@ -170,6 +177,8 @@ class FakeGnocchiMetrics(object):
         # node 4
         mock['INSTANCE_7'] = 4
+        mock['LOST_INSTANCE'] = None
+
         if uuid not in mock.keys():
             mock[uuid] = 8
@@ -242,3 +251,17 @@ class FakeGnocchiMetrics(object):
         mock['INSTANCE_3'] = 20
         mock['INSTANCE_4'] = 10
         return float(mock[str(uuid)])
+
+    @staticmethod
+    def get_average_usage_instance_memory_wb(uuid):
+        mock = {}
+        # node 0
+        mock['INSTANCE_1'] = 30
+        # node 1
+        mock['INSTANCE_3'] = 12
+        mock['INSTANCE_4'] = 12
+        if uuid not in mock.keys():
+            # mock[uuid] = random.randint(1, 4)
+            mock[uuid] = 12
+        return mock[str(uuid)]
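The hunks above make the fake datasource return `None` for hosts and instances that stopped reporting, instead of failing on `float(None)`. A standalone sketch of that lookup pattern (the function name and default are illustrative, not Watcher's):

```python
def get_average_usage(uuid, default=8.0):
    """Fake metric lookup: known hosts return floats, a lost host
    returns None, anything else falls back to a default."""
    mock = {
        'Node_6_hostname_6': 8,
        # This node doesn't send metrics
        'LOST_NODE_hostname_7': None,
        'Node_19_hostname_19': 10,
    }
    value = mock.get(uuid, default)
    # Preserve None so callers can detect the missing metric;
    # float(None) would raise a TypeError.
    return float(value) if value is not None else None
```

`get_average_usage('LOST_NODE_hostname_7')` then comes back as `None`, which the strategy tests can check with `assertIsNone`.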


@@ -74,10 +74,12 @@ class TestWorkloadBalance(base.TestCase):
         self.strategy = strategies.WorkloadBalance(
             config=mock.Mock(datasource=self.datasource))
         self.strategy.input_parameters = utils.Struct()
-        self.strategy.input_parameters.update({'threshold': 25.0,
+        self.strategy.input_parameters.update({'metrics': 'cpu_util',
+                                               'threshold': 25.0,
                                                'period': 300})
         self.strategy.threshold = 25.0
         self.strategy._period = 300
+        self.strategy._meter = "cpu_util"
 
     def test_calc_used_resource(self):
         model = self.fake_cluster.generate_scenario_6_with_2_nodes()
@@ -86,21 +88,31 @@ class TestWorkloadBalance(base.TestCase):
         cores_used, mem_used, disk_used = (
             self.strategy.calculate_used_resource(node))
-        self.assertEqual((cores_used, mem_used, disk_used), (20, 4, 40))
+        self.assertEqual((cores_used, mem_used, disk_used), (20, 64, 40))
 
     def test_group_hosts_by_cpu_util(self):
         model = self.fake_cluster.generate_scenario_6_with_2_nodes()
         self.m_model.return_value = model
         self.strategy.threshold = 30
-        n1, n2, avg, w_map = self.strategy.group_hosts_by_cpu_util()
+        n1, n2, avg, w_map = self.strategy.group_hosts_by_cpu_or_ram_util()
         self.assertEqual(n1[0]['node'].uuid, 'Node_0')
         self.assertEqual(n2[0]['node'].uuid, 'Node_1')
         self.assertEqual(avg, 8.0)
 
+    def test_group_hosts_by_ram_util(self):
+        model = self.fake_cluster.generate_scenario_6_with_2_nodes()
+        self.m_model.return_value = model
+        self.strategy._meter = "memory.resident"
+        self.strategy.threshold = 30
+        n1, n2, avg, w_map = self.strategy.group_hosts_by_cpu_or_ram_util()
+        self.assertEqual(n1[0]['node'].uuid, 'Node_0')
+        self.assertEqual(n2[0]['node'].uuid, 'Node_1')
+        self.assertEqual(avg, 33.0)
+
     def test_choose_instance_to_migrate(self):
         model = self.fake_cluster.generate_scenario_6_with_2_nodes()
         self.m_model.return_value = model
-        n1, n2, avg, w_map = self.strategy.group_hosts_by_cpu_util()
+        n1, n2, avg, w_map = self.strategy.group_hosts_by_cpu_or_ram_util()
         instance_to_mig = self.strategy.choose_instance_to_migrate(
             n1, avg, w_map)
         self.assertEqual(instance_to_mig[0].uuid, 'Node_0')
@@ -110,7 +122,7 @@ class TestWorkloadBalance(base.TestCase):
     def test_choose_instance_notfound(self):
         model = self.fake_cluster.generate_scenario_6_with_2_nodes()
         self.m_model.return_value = model
-        n1, n2, avg, w_map = self.strategy.group_hosts_by_cpu_util()
+        n1, n2, avg, w_map = self.strategy.group_hosts_by_cpu_or_ram_util()
         instances = model.get_all_instances()
         [model.remove_instance(inst) for inst in instances.values()]
         instance_to_mig = self.strategy.choose_instance_to_migrate(
@@ -122,7 +134,7 @@ class TestWorkloadBalance(base.TestCase):
         self.m_model.return_value = model
         self.strategy.datasource = mock.MagicMock(
             statistic_aggregation=self.fake_metrics.mock_get_statistics_wb)
-        n1, n2, avg, w_map = self.strategy.group_hosts_by_cpu_util()
+        n1, n2, avg, w_map = self.strategy.group_hosts_by_cpu_or_ram_util()
        instance_to_mig = self.strategy.choose_instance_to_migrate(
             n1, avg, w_map)
         dest_hosts = self.strategy.filter_destination_hosts(
@@ -202,7 +214,7 @@ class TestWorkloadBalance(base.TestCase):
         m_gnocchi.statistic_aggregation = mock.Mock(
             side_effect=self.fake_metrics.mock_get_statistics_wb)
         instance0 = model.get_instance_by_uuid("INSTANCE_0")
-        self.strategy.group_hosts_by_cpu_util()
+        self.strategy.group_hosts_by_cpu_or_ram_util()
         if self.strategy.config.datasource == "ceilometer":
             m_ceilometer.statistic_aggregation.assert_any_call(
                 aggregate='avg', meter_name='cpu_util',
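The renamed helper `group_hosts_by_cpu_or_ram_util` splits hosts into over- and under-threshold groups for whichever meter is configured. A rough sketch of that grouping, assuming per-host loads have already been averaged (illustrative only, not Watcher's implementation; the real method also returns a workload map):

```python
def group_hosts_by_util(host_loads, threshold):
    """Split hosts into over- and under-threshold groups.

    host_loads maps a host name to its average load for the
    configured meter (e.g. cpu_util or memory.resident).
    """
    overloaded, underloaded = [], []
    avg = sum(host_loads.values()) / len(host_loads)
    for node, load in sorted(host_loads.items()):
        entry = {'node': node, 'load': load}
        (overloaded if load >= threshold else underloaded).append(entry)
    return overloaded, underloaded, avg
```

With loads of 54 and 12 and a threshold of 30 this reproduces the shape asserted in `test_group_hosts_by_ram_util`: an average of 33.0 and one host per group.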


@@ -172,6 +172,12 @@ class TestWorkloadStabilization(base.TestCase):
             granularity=300, start_time=start_time, stop_time=stop_time,
             aggregation='mean')
 
+    def test_get_instance_load_with_no_metrics(self):
+        model = self.fake_cluster.generate_scenario_1_with_1_node_unavailable()
+        self.m_model.return_value = model
+        lost_instance = model.get_instance_by_uuid("LOST_INSTANCE")
+        self.assertIsNone(self.strategy.get_instance_load(lost_instance))
+
     def test_normalize_hosts_load(self):
         self.m_model.return_value = self.fake_cluster.generate_scenario_1()
         fake_hosts = {'Node_0': {'cpu_util': 0.07, 'memory.resident': 7},
@@ -196,6 +202,12 @@ class TestWorkloadStabilization(base.TestCase):
         self.assertEqual(self.strategy.get_hosts_load(),
                          self.hosts_load_assert)
 
+    def test_get_hosts_load_with_node_missing(self):
+        self.m_model.return_value = \
+            self.fake_cluster.generate_scenario_1_with_1_node_unavailable()
+        self.assertEqual(self.hosts_load_assert,
+                         self.strategy.get_hosts_load())
+
     def test_get_sd(self):
         test_cpu_sd = 0.296
         test_ram_sd = 9.3


@@ -659,3 +659,56 @@ class TestSyncer(base.DbTestCase):
             all(ap.state == objects.action_plan.State.CANCELLED
                 for ap in modified_action_plans.values()))
         self.assertEqual(set([action_plan1.id]), set(unmodified_action_plans))
+
+    def test_sync_strategies_with_removed_goal(self):
+        # ### Setup ### #
+        goal1 = objects.Goal(
+            self.ctx, id=1, uuid=utils.generate_uuid(),
+            name="dummy_1", display_name="Dummy 1",
+            efficacy_specification=self.goal1_spec.serialize_indicators_specs()
+        )
+        goal2 = objects.Goal(
+            self.ctx, id=2, uuid=utils.generate_uuid(),
+            name="dummy_2", display_name="Dummy 2",
+            efficacy_specification=self.goal2_spec.serialize_indicators_specs()
+        )
+        goal1.create()
+        goal2.create()
+        strategy1 = objects.Strategy(
+            self.ctx, id=1, name="strategy_1", uuid=utils.generate_uuid(),
+            display_name="Strategy 1", goal_id=goal1.id)
+        strategy2 = objects.Strategy(
+            self.ctx, id=2, name="strategy_2", uuid=utils.generate_uuid(),
+            display_name="Strategy 2", goal_id=goal2.id)
+        strategy1.create()
+        strategy2.create()
+
+        # goal2 is removed for some reason
+        goal2.soft_delete()
+
+        before_goals = objects.Goal.list(self.ctx)
+        before_strategies = objects.Strategy.list(self.ctx)
+
+        # ### Action under test ### #
+        try:
+            self.syncer.sync()
+        except Exception as exc:
+            self.fail(exc)
+
+        # ### Assertions ### #
+        after_goals = objects.Goal.list(self.ctx)
+        after_strategies = objects.Strategy.list(self.ctx)
+        self.assertEqual(1, len(before_goals))
+        self.assertEqual(2, len(before_strategies))
+        self.assertEqual(2, len(after_goals))
+        self.assertEqual(4, len(after_strategies))
+        self.assertEqual(
+            {"dummy_1", "dummy_2"},
+            set([g.name for g in after_goals]))
+        self.assertEqual(
+            {"strategy_1", "strategy_2", "strategy_3", "strategy_4"},
+            set([s.name for s in after_strategies]))
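The test above relies on soft-delete semantics: a soft-deleted goal disappears from `list()` but the row is retained, and the syncer re-creates anything it discovers that is missing from the active set, so both goals are visible again afterwards. A toy model of that behavior (this is a sketch of the semantics only, not Watcher's objects layer or syncer):

```python
class Record:
    """Minimal stand-in for a soft-deletable DB object."""
    def __init__(self, name):
        self.name = name
        self.deleted = False

    def soft_delete(self):
        # The row is hidden from listings but kept in storage.
        self.deleted = True

def list_active(records):
    return [r for r in records if not r.deleted]

def sync(records, discovered_names):
    """Re-create any discovered record missing from the active set."""
    active = {r.name for r in list_active(records)}
    for name in discovered_names:
        if name not in active:
            records.append(Record(name))
```

Soft-deleting `dummy_2` and re-running `sync` brings the active count back to two, mirroring the `before_goals`/`after_goals` assertions above.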