Compare commits


26 Commits
1.7.0 ... 1.8.0

Author SHA1 Message Date
Zuul
40a653215f Merge "Zuul: Remove project name" 2018-02-07 07:24:53 +00:00
Zuul
1492f5d8dc Merge "Replace Chinese double quotes with English double quotes" 2018-02-07 07:22:41 +00:00
Zuul
76263f149a Merge "Fix issues with aggregate and granularity attributes" 2018-02-06 06:05:50 +00:00
James E. Blair
028006d15d Zuul: Remove project name
Zuul no longer requires the project-name for in-repo configuration.
Omitting it makes forking or renaming projects easier.

Change-Id: Ib3be82015be1d6853c44cf53faacb238237ad701
2018-02-05 14:18:38 -08:00
Alexander Chadin
d27ba8cc2a Fix issues with aggregate and granularity attributes
This patch set fixes issues that have appeared after merging
watcher-multi-datasource and strategy-requirements patches.
It is the final commit in the watcher-multi-datasource blueprint.

Partially-Implements: blueprint watcher-multi-datasource
Change-Id: I25b4cb0e1b85379ff0c4da9d0c1474380d75ce3a
2018-02-05 11:08:48 +00:00
chengebj5238
33750ce7a9 Replace Chinese double quotes with English double quotes
Change-Id: I566ce10064c3dc51b875fc973c0ad9b58449001c
2018-02-05 17:59:08 +08:00
Zuul
cb8d1a98d6 Merge "Fix get_compute_node_by_hostname in nova_helper" 2018-02-05 06:47:10 +00:00
Hidekazu Nakamura
f32252d510 Fix get_compute_node_by_hostname in nova_helper
If the hostname is different from the UUID in the Compute CDM, the
get_compute_node_by_hostname method returns an empty result.
This patch set fixes the method to return a compute node even if the
hostname is different from the UUID.

Change-Id: I6cbc0be1a79cc238f480caed9adb8dc31256754a
Closes-Bug: #1746162
2018-02-02 14:26:20 +09:00
Zuul
4849f8dde9 Merge "Add zone migration strategy document" 2018-02-02 04:51:26 +00:00
Hidekazu Nakamura
0cafdcdee9 Add zone migration strategy document
This patch set adds zone migration strategy document.

Change-Id: Ifd9d85d635977900929efd376f0d7990a6fec627
2018-02-02 09:35:58 +09:00
OpenStack Proposal Bot
3a70225164 Updated from global requirements
Change-Id: Ifb8d8d6cb1248eaf8715c84539d74fa04dd753dd
2018-02-01 07:36:19 +00:00
Zuul
892c766ac4 Merge "Fixed AttributeError in storage_model" 2018-01-31 13:58:53 +00:00
Zuul
63a3fd84ae Merge "Remove redundant import alias" 2018-01-31 12:45:21 +00:00
Zuul
287ace1dcc Merge "Update zone_migration comment" 2018-01-31 06:14:15 +00:00
Zuul
4b302e415e Merge "Zuul: Remove project name" 2018-01-30 12:22:41 +00:00
licanwei
f24744c910 Fixed AttributeError in storage_model
self.audit.scope should be self.audit_scope

Closes-Bug: #1746191

Change-Id: I0cce165a2bc1afd4c9e09c51e4d3250ee70d3705
2018-01-30 00:32:19 -08:00
Zuul
d9a85eda2c Merge "Imported Translations from Zanata" 2018-01-29 14:12:36 +00:00
Zuul
82c8633e42 Merge "[Doc] Add actuator strategy doc" 2018-01-29 14:12:35 +00:00
Hidekazu Nakamura
d3f23795f5 Update zone_migration comment
This patch updates the zone_migration comment for the documentation and
removes an unnecessary TODO.

Change-Id: Ib1eadad6496fe202e406108f432349c82696ea88
2018-01-29 17:48:48 +09:00
Hoang Trung Hieu
e7f4456a80 Zuul: Remove project name
Zuul no longer requires the project-name for in-repo configuration[1].
Omitting it makes forking or renaming projects easier.

[1] https://docs.openstack.org/infra/manual/drivers.html#consistent-naming-for-jobs-with-zuul-v3

Change-Id: Iddf89707289a22ea322c14d1b11f58840871304d
2018-01-29 07:24:44 +00:00
OpenStack Proposal Bot
a36a309e2e Updated from global requirements
Change-Id: I29ebfe2e3398dab6f2e22f3d97c16b72843f1e34
2018-01-29 00:42:54 +00:00
Hidekazu Nakamura
8e3affd9ac [Doc] Add actuator strategy doc
This patch adds actuator strategy document.

Change-Id: I5f0415754c83e4f152155988625ada2208d6c35a
2018-01-28 20:00:05 +09:00
OpenStack Proposal Bot
71e979cae0 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Ie34aafe6d9b54bb97469844d21de38d7c6249031
2018-01-28 07:16:20 +00:00
Luong Anh Tuan
6edfd34a53 Remove redundant import alias
This patch removes redundant import aliases and adds a pep8 hacking
check for redundant import aliases.

Co-Authored-By: Dao Cong Tien <tiendc@vn.fujitsu.com>

Change-Id: I3207cb9f0eb4b4a029b7e822b9c59cf48d1e0f9d
Closes-Bug: #1745527
2018-01-26 09:11:43 +07:00
Alexander Chadin
0c8c32e69e Fix strategy state
Change-Id: I003bb3b41aac69cc40a847f52a50c7bc4cc8d020
2018-01-25 15:41:34 +03:00
Alexander Chadin
9138b7bacb Add datasources to strategies
This patch set adds datasources instead of datasource.

Change-Id: I94f17ae3a0b6a8990293dc9e33be1a2bd3432a14
2018-01-24 20:51:38 +03:00
36 changed files with 596 additions and 568 deletions


@@ -1,5 +1,4 @@
- project:
name: openstack/watcher
check:
jobs:
- watcher-tempest-multinode


@@ -267,7 +267,7 @@ the same goal and same workload of the :ref:`Cluster <cluster_definition>`.
Project
=======
:ref:`Projects <project_definition>` represent the base unit of ownership
:ref:`Projects <project_definition>` represent the base unit of "ownership"
in OpenStack, in that all :ref:`resources <managed_resource_definition>` in
OpenStack should be owned by a specific :ref:`project <project_definition>`.
In OpenStack Identity, a :ref:`project <project_definition>` must be owned by a


@@ -0,0 +1,86 @@
=============
Actuator
=============
Synopsis
--------
**display name**: ``Actuator``
**goal**: ``unclassified``
.. watcher-term:: watcher.decision_engine.strategy.strategies.actuation
Requirements
------------
Metrics
*******
None
Cluster data model
******************
None
Actions
*******
Default Watcher's actions.
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
==================== ====== ===================== =============================
parameter type default Value description
==================== ====== ===================== =============================
``actions`` array None Actions to be executed.
==================== ====== ===================== =============================
The elements of actions array are:
==================== ====== ===================== =============================
parameter type default Value description
==================== ====== ===================== =============================
``action_type`` string None Action name defined in
setup.cfg(mandatory)
``resource_id`` string None Resource_id of the action.
``input_parameters`` object None Input_parameters of the
action(mandatory).
==================== ====== ===================== =============================
Efficacy Indicator
------------------
None
Algorithm
---------
This strategy creates an action plan with a predefined set of actions.
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 unclassified --strategy actuator
$ openstack optimize audit create -a at1 \
-p actions='[{"action_type": "migrate", "resource_id": "56a40802-6fde-4b59-957c-c84baec7eaed", "input_parameters": {"migration_type": "live", "source_node": "s01"}}]'
External Links
--------------
None
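The ``actions`` parameter above is an array of objects whose mandatory keys are ``action_type`` and ``input_parameters``. A minimal standalone sketch of validating such a payload before creating the audit (the validator itself is hypothetical, not part of Watcher):

```python
# Check that each action in the "actions" array carries the keys the
# parameter table marks as mandatory. Field names follow the table;
# the helper is illustrative only.

def validate_actions(actions):
    """Raise ValueError if any action lacks a mandatory key."""
    if not isinstance(actions, list):
        raise ValueError("actions must be an array")
    for i, action in enumerate(actions):
        for key in ("action_type", "input_parameters"):
            if key not in action:
                raise ValueError("action %d is missing mandatory %r" % (i, key))
    return True

payload = [{
    "action_type": "migrate",
    "resource_id": "56a40802-6fde-4b59-957c-c84baec7eaed",
    "input_parameters": {"migration_type": "live", "source_node": "s01"},
}]
assert validate_actions(payload)
```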


@@ -0,0 +1,154 @@
==============
Zone migration
==============
Synopsis
--------
**display name**: ``Zone migration``
**goal**: ``hardware_maintenance``
.. watcher-term:: watcher.decision_engine.strategy.strategies.zone_migration
Requirements
------------
Metrics
*******
None
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Storage cluster data model is also required:
.. watcher-term:: watcher.decision_engine.model.collector.cinder.CinderClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migrate``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
* - ``volume_migrate``
- .. watcher-term:: watcher.applier.actions.volume_migration.VolumeMigrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
======================== ======== ============= ==============================
parameter type default Value description
======================== ======== ============= ==============================
``compute_nodes`` array None Compute nodes to migrate.
``storage_pools`` array None Storage pools to migrate.
``parallel_total`` integer 6 The number of actions to be
run in parallel in total.
``parallel_per_node`` integer 2 The number of actions to be
run in parallel per compute
node.
``parallel_per_pool`` integer 2 The number of actions to be
run in parallel per storage
pool.
``priority`` object None List prioritizes instances
and volumes.
``with_attached_volume`` boolean False False: Instances will migrate
after all volumes migrate.
True: An instance will migrate
after the attached volumes
migrate.
======================== ======== ============= ==============================
The elements of compute_nodes array are:
============= ======= =============== =============================
parameter type default Value description
============= ======= =============== =============================
``src_node`` string None Compute node from which
instances migrate(mandatory).
``dst_node`` string None Compute node to which
instances migrate.
============= ======= =============== =============================
The elements of storage_pools array are:
============= ======= =============== ==============================
parameter type default Value description
============= ======= =============== ==============================
``src_pool`` string None Storage pool from which
volumes migrate(mandatory).
``dst_pool`` string None Storage pool to which
volumes migrate.
``src_type`` string None Source volume type(mandatory).
``dst_type`` string None Destination volume type
(mandatory).
============= ======= =============== ==============================
The elements of priority object are:
================ ======= =============== ======================
parameter type default Value description
================ ======= =============== ======================
``project`` array None Project names.
``compute_node`` array None Compute node names.
``storage_pool`` array None Storage pool names.
``compute`` enum None Instance attributes.
|compute|
``storage`` enum None Volume attributes.
|storage|
================ ======= =============== ======================
.. |compute| replace:: ["vcpu_num", "mem_size", "disk_size", "created_at"]
.. |storage| replace:: ["size", "created_at"]
Efficacy Indicator
------------------
.. watcher-func::
:format: literal_block
watcher.decision_engine.goal.efficacy.specs.HardwareMaintenance.get_global_efficacy_indicator
Algorithm
---------
For more information on the zone migration strategy please refer
to: http://specs.openstack.org/openstack/watcher-specs/specs/queens/implemented/zone-migration-strategy.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 hardware_maintenance --strategy zone_migration
$ openstack optimize audit create -a at1 \
-p compute_nodes='[{"src_node": "s01", "dst_node": "d01"}]'
External Links
--------------
None
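The ``priority`` object lets operators order instances by attributes such as ``vcpu_num``, ``mem_size``, ``disk_size``, or ``created_at``. A hedged sketch of what such an ordering could look like; the ``Instance`` type and sample data here are illustrative stand-ins, not Watcher's actual model:

```python
from collections import namedtuple

Instance = namedtuple("Instance", ["name", "vcpu_num", "mem_size"])

def order_by_priority(instances, attrs):
    # Sort descending, treating earlier attributes as more significant.
    return sorted(instances,
                  key=lambda inst: tuple(getattr(inst, a) for a in attrs),
                  reverse=True)

instances = [Instance("small", 1, 512),
             Instance("big", 8, 16384),
             Instance("mid", 4, 8192)]
ordered = order_by_priority(instances, ["vcpu_num", "mem_size"])
print([i.name for i in ordered])  # -> ['big', 'mid', 'small']
```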


@@ -4,11 +4,11 @@ msgid ""
msgstr ""
"Project-Id-Version: watcher\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2018-01-19 11:46+0000\n"
"POT-Creation-Date: 2018-01-26 00:18+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-01-19 07:16+0000\n"
"PO-Revision-Date: 2018-01-27 12:50+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
@@ -42,8 +42,8 @@ msgstr "1.5.0"
msgid "1.6.0"
msgstr "1.6.0"
msgid "1.6.0-32"
msgstr "1.6.0-32"
msgid "1.7.0"
msgstr "1.7.0"
msgid "Add a service supervisor to watch Watcher deamons."
msgstr "Add a service supervisor to watch Watcher daemons."
@@ -136,6 +136,17 @@ msgstr ""
"Added a way to add a new action without having to amend the source code of "
"the default planner."
msgid ""
"Added a way to check state of strategy before audit's execution. "
"Administrator can use \"watcher strategy state <strategy_name>\" command to "
"get information about metrics' availability, datasource's availability and "
"CDM's availability."
msgstr ""
"Added a way to check state of strategy before audit's execution. "
"Administrator can use \"watcher strategy state <strategy_name>\" command to "
"get information about metrics' availability, datasource's availability and "
"CDM's availability."
msgid ""
"Added a way to compare the efficacy of different strategies for a give "
"optimization goal."
@@ -186,6 +197,18 @@ msgstr ""
msgid "Added policies to handle user rights to access Watcher API."
msgstr "Added policies to handle user rights to access Watcher API."
msgid "Added storage capacity balance strategy."
msgstr "Added storage capacity balance strategy."
msgid ""
"Added strategy \"Zone migration\" and it's goal \"Hardware maintenance\". "
"The strategy migrates many instances and volumes efficiently with minimum "
"downtime automatically."
msgstr ""
"Added strategy \"Zone migration\" and it's goal \"Hardware maintenance\". "
"The strategy migrates many instances and volumes efficiently with minimum "
"downtime automatically."
msgid ""
"Added strategy to identify and migrate a Noisy Neighbor - a low priority VM "
"that negatively affects peformance of a high priority VM by over utilizing "
@@ -212,6 +235,13 @@ msgstr "Added using of JSONSchema instead of voluptuous to validate Actions."
msgid "Added volume migrate action"
msgstr "Added volume migrate action"
msgid ""
"Adds audit scoper for storage data model, now watcher users can specify "
"audit scope for storage CDM in the same manner as compute scope."
msgstr ""
"Adds audit scoper for storage data model, now watcher users can specify "
"audit scope for storage CDM in the same manner as compute scope."
msgid "Adds baremetal data model in Watcher"
msgstr "Adds baremetal data model in Watcher"


@@ -23,7 +23,7 @@ oslo.reports>=1.18.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service!=1.28.1,>=1.24.0 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
oslo.versionedobjects>=1.28.0 # Apache-2.0
oslo.versionedobjects>=1.31.2 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
@@ -38,7 +38,7 @@ python-monascaclient>=1.7.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
python-openstackclient>=3.12.0 # Apache-2.0
python-ironicclient>=1.14.0 # Apache-2.0
python-ironicclient>=2.2.0 # Apache-2.0
six>=1.10.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0


@@ -52,14 +52,21 @@ class NovaHelper(object):
return self.nova.hypervisors.get(utils.Struct(id=node_id))
def get_compute_node_by_hostname(self, node_hostname):
"""Get compute node by ID (*not* UUID)"""
# We need to pass an object with an 'id' attribute to make it work
"""Get compute node by hostname"""
try:
compute_nodes = self.nova.hypervisors.search(node_hostname)
if len(compute_nodes) != 1:
hypervisors = [hv for hv in self.get_compute_node_list()
if hv.service['host'] == node_hostname]
if len(hypervisors) != 1:
# TODO(hidekazu)
# this may occur if VMware vCenter driver is used
raise exception.ComputeNodeNotFound(name=node_hostname)
else:
compute_nodes = self.nova.hypervisors.search(
hypervisors[0].hypervisor_hostname)
if len(compute_nodes) != 1:
raise exception.ComputeNodeNotFound(name=node_hostname)
return self.get_compute_node_by_id(compute_nodes[0].id)
return self.get_compute_node_by_id(compute_nodes[0].id)
except Exception as exc:
LOG.exception(exc)
raise exception.ComputeNodeNotFound(name=node_hostname)
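The fix above first matches the node by its nova-compute service host, then resolves the (possibly different) hypervisor hostname. A standalone sketch of that lookup logic, with plain dicts standing in for novaclient objects:

```python
def find_compute_node(hypervisors, node_hostname):
    """Return the hypervisor whose nova-compute service runs on node_hostname.

    Works even when hypervisor_hostname differs from the service host,
    which is the case the patch fixes (e.g. the VMware vCenter driver).
    """
    matches = [hv for hv in hypervisors
               if hv["service"]["host"] == node_hostname]
    if len(matches) != 1:
        raise LookupError("compute node not found: %s" % node_hostname)
    return matches[0]

hvs = [{"hypervisor_hostname": "domain-c8.vcenter",
        "service": {"host": "compute1"}},
       {"hypervisor_hostname": "compute2",
        "service": {"host": "compute2"}}]
node = find_compute_node(hvs, "compute1")
print(node["hypervisor_hostname"])  # -> domain-c8.vcenter
```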


@@ -57,6 +57,12 @@ class DataSourceBase(object):
),
)
@abc.abstractmethod
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
pass
@abc.abstractmethod
def list_metrics(self):
pass
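The hunk above gives every datasource helper one common ``statistic_aggregation()`` signature. A minimal sketch of the pattern, with an illustrative concrete subclass (the fake backend and its return values are assumptions, not Watcher code):

```python
import abc

class DataSourceBase(abc.ABC):
    """Common interface all datasource helpers now implement."""

    @abc.abstractmethod
    def statistic_aggregation(self, resource_id=None, meter_name=None,
                              period=300, granularity=300, dimensions=None,
                              aggregation='avg', group_by='*'):
        """Return an aggregated statistic for one meter."""

class FakeDataSource(DataSourceBase):
    def statistic_aggregation(self, resource_id=None, meter_name=None,
                              period=300, granularity=300, dimensions=None,
                              aggregation='avg', group_by='*'):
        # A real helper queries its backend; normalize 'mean' -> 'avg'
        # the way the Ceilometer helper does, then return a value.
        if aggregation == 'mean':
            aggregation = 'avg'
        return {'avg': 42.0, 'max': 99.0}[aggregation]

print(FakeDataSource().statistic_aggregation(meter_name='cpu_util',
                                             aggregation='mean'))  # -> 42.0
```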


@@ -145,24 +145,28 @@ class CeilometerHelper(base.DataSourceBase):
else:
return meters
def statistic_aggregation(self,
resource_id,
meter_name,
period,
aggregate='avg'):
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
"""Representing a statistic aggregate by operators
:param resource_id: id of resource to list statistics for.
:param meter_name: Name of meter to list statistics for.
:param period: Period in seconds over which to group samples.
:param aggregate: Available aggregates are: count, cardinality,
min, max, sum, stddev, avg. Defaults to avg.
:param granularity: frequency of marking metric point, in seconds.
This param isn't used in Ceilometer datasource.
:param dimensions: dimensions (dict). This param isn't used in
Ceilometer datasource.
:param aggregation: Available aggregates are: count, cardinality,
min, max, sum, stddev, avg. Defaults to avg.
:param group_by: list of columns to group the metrics to be returned.
This param isn't used in Ceilometer datasource.
:return: Return the latest statistical data, None if no data.
"""
end_time = datetime.datetime.utcnow()
if aggregate == 'mean':
aggregate = 'avg'
if aggregation == 'mean':
aggregation = 'avg'
start_time = end_time - datetime.timedelta(seconds=int(period))
query = self.build_query(
resource_id=resource_id, start_time=start_time, end_time=end_time)
@@ -171,11 +175,11 @@ class CeilometerHelper(base.DataSourceBase):
q=query,
period=period,
aggregates=[
{'func': aggregate}])
{'func': aggregation}])
item_value = None
if statistic:
item_value = statistic[-1]._info.get('aggregate').get(aggregate)
item_value = statistic[-1]._info.get('aggregate').get(aggregation)
return item_value
def get_last_sample_values(self, resource_id, meter_name, limit=1):
@@ -204,64 +208,64 @@ class CeilometerHelper(base.DataSourceBase):
granularity=None):
meter_name = self.METRIC_MAP.get('host_cpu_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_instance_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_cpu_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_host_memory_usage(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_memory_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_instance_memory_usage(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_ram_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_instance_l3_cache_usage(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_l3_cache_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_instance_ram_allocated(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_ram_allocated')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_instance_root_disk_allocated(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_root_disk_size')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_host_outlet_temperature(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_outlet_temp')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_host_inlet_temperature(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_inlet_temp')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_host_airflow(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_airflow')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)
def get_host_power(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_power')
return self.statistic_aggregation(resource_id, meter_name, period,
aggregate=aggregate)
granularity, aggregate=aggregate)


@@ -58,32 +58,35 @@ class GnocchiHelper(base.DataSourceBase):
return 'not available'
return 'available'
def _statistic_aggregation(self,
resource_id,
metric,
granularity,
start_time=None,
stop_time=None,
aggregation='mean'):
def list_metrics(self):
"""List the user's meters."""
try:
response = self.query_retry(f=self.gnocchi.metric.list)
except Exception:
return set()
else:
return set([metric['name'] for metric in response])
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
"""Representing a statistic aggregate by operators
:param metric: metric name of which we want the statistics
:param resource_id: id of resource to list statistics for
:param start_time: Start datetime from which metrics will be used
:param stop_time: End datetime from which metrics will be used
:param granularity: frequency of marking metric point, in seconds
:param resource_id: id of resource to list statistics for.
:param meter_name: meter name of which we want the statistics.
:param period: Period in seconds over which to group samples.
:param granularity: frequency of marking metric point, in seconds.
:param dimensions: dimensions (dict). This param isn't used in
Gnocchi datasource.
:param aggregation: Should be chosen in accordance with policy
aggregations
aggregations.
:param group_by: list of columns to group the metrics to be returned.
This param isn't used in Gnocchi datasource.
:return: value of aggregated metric
"""
if start_time is not None and not isinstance(start_time, datetime):
raise exception.InvalidParameter(parameter='start_time',
parameter_type=datetime)
if stop_time is not None and not isinstance(stop_time, datetime):
raise exception.InvalidParameter(parameter='stop_time',
parameter_type=datetime)
stop_time = datetime.utcnow()
start_time = stop_time - timedelta(seconds=(int(period)))
if not common_utils.is_uuid_like(resource_id):
kwargs = dict(query={"=": {"original_resource_id": resource_id}},
@@ -97,7 +100,7 @@ class GnocchiHelper(base.DataSourceBase):
resource_id = resources[0]['id']
raw_kwargs = dict(
metric=metric,
metric=meter_name,
start=start_time,
stop=stop_time,
resource_id=resource_id,
@@ -115,27 +118,6 @@ class GnocchiHelper(base.DataSourceBase):
# measure has structure [time, granularity, value]
return statistics[-1][2]
def list_metrics(self):
"""List the user's meters."""
try:
response = self.query_retry(f=self.gnocchi.metric.list)
except Exception:
return set()
else:
return set([metric['name'] for metric in response])
def statistic_aggregation(self, resource_id, metric, period, granularity,
aggregation='mean'):
stop_time = datetime.utcnow()
start_time = stop_time - timedelta(seconds=(int(period)))
return self._statistic_aggregation(
resource_id=resource_id,
metric=metric,
granularity=granularity,
start_time=start_time,
stop_time=stop_time,
aggregation=aggregation)
def get_host_cpu_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_cpu_usage')


@@ -21,6 +21,7 @@ import datetime
from monascaclient import exc
from watcher.common import clients
from watcher.common import exception
from watcher.datasource import base
@@ -97,41 +98,42 @@ class MonascaHelper(base.DataSourceBase):
return statistics
def statistic_aggregation(self,
meter_name,
dimensions,
start_time=None,
end_time=None,
period=None,
aggregate='avg',
group_by='*'):
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
"""Representing a statistic aggregate by operators
:param meter_name: meter names of which we want the statistics
:param dimensions: dimensions (dict)
:param start_time: Start datetime from which metrics will be used
:param end_time: End datetime from which metrics will be used
:param resource_id: id of resource to list statistics for.
This param isn't used in Monasca datasource.
:param meter_name: meter names of which we want the statistics.
:param period: Sampling `period`: In seconds. If no period is given,
only one aggregate statistic is returned. If given, a
faceted result will be returned, divided into given
periods. Periods with no data are ignored.
:param aggregate: Should be either 'avg', 'count', 'min' or 'max'
:param granularity: frequency of marking metric point, in seconds.
This param isn't used in the Monasca datasource.
:param dimensions: dimensions (dict).
:param aggregation: Should be either 'avg', 'count', 'min' or 'max'.
:param group_by: list of columns to group the metrics to be returned.
:return: A list of dict with each dict being a distinct result row
"""
start_timestamp, end_timestamp, period = self._format_time_params(
start_time, end_time, period
)
if aggregate == 'mean':
aggregate = 'avg'
if dimensions is None:
raise exception.UnsupportedDataSource(datasource='Monasca')
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(seconds=(int(period)))
if aggregation == 'mean':
aggregation = 'avg'
raw_kwargs = dict(
name=meter_name,
start_time=start_timestamp,
end_time=end_timestamp,
start_time=start_time.isoformat(),
end_time=stop_time.isoformat(),
dimensions=dimensions,
period=period,
statistics=aggregate,
statistics=aggregation,
group_by=group_by,
)
@@ -140,45 +142,36 @@ class MonascaHelper(base.DataSourceBase):
statistics = self.query_retry(
f=self.monasca.metrics.list_statistics, **kwargs)
return statistics
cpu_usage = None
for stat in statistics:
avg_col_idx = stat['columns'].index(aggregation)
values = [r[avg_col_idx] for r in stat['statistics']]
value = float(sum(values)) / len(values)
cpu_usage = value
return cpu_usage
def get_host_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
metric_name = self.METRIC_MAP.get('host_cpu_usage')
node_uuid = resource_id.split('_')[0]
statistics = self.statistic_aggregation(
return self.statistic_aggregation(
meter_name=metric_name,
dimensions=dict(hostname=node_uuid),
period=period,
aggregate=aggregate
aggregation=aggregate
)
cpu_usage = None
for stat in statistics:
avg_col_idx = stat['columns'].index('avg')
values = [r[avg_col_idx] for r in stat['statistics']]
value = float(sum(values)) / len(values)
cpu_usage = value
return cpu_usage
def get_instance_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
metric_name = self.METRIC_MAP.get('instance_cpu_usage')
statistics = self.statistic_aggregation(
return self.statistic_aggregation(
meter_name=metric_name,
dimensions=dict(resource_id=resource_id),
period=period,
aggregate=aggregate
aggregation=aggregate
)
cpu_usage = None
for stat in statistics:
avg_col_idx = stat['columns'].index('avg')
values = [r[avg_col_idx] for r in stat['statistics']]
value = float(sum(values)) / len(values)
cpu_usage = value
return cpu_usage
def get_host_memory_usage(self, resource_id, period, aggregate,
granularity=None):


@@ -66,7 +66,7 @@ class StrategyEndpoint(object):
ds_metrics = datasource.list_metrics()
if ds_metrics is None:
raise exception.DataSourceNotAvailable(
datasource=strategy.config.datasource)
datasource=datasource.NAME)
else:
for metric in strategy.DATASOURCE_METRICS:
original_metric_name = datasource.METRIC_MAP.get(metric)
@@ -81,7 +81,7 @@ class StrategyEndpoint(object):
if not datasource:
state = "Datasource is not presented for this strategy"
else:
state = "%s: %s" % (strategy.config.datasource,
state = "%s: %s" % (datasource.NAME,
datasource.check_availability())
return {'type': 'Datasource',
'state': state,
@@ -104,7 +104,7 @@ class StrategyEndpoint(object):
try:
is_datasources = getattr(strategy.config, 'datasources', None)
if is_datasources:
datasource = is_datasources[0]
datasource = getattr(strategy, 'datasource_backend')
else:
datasource = getattr(strategy, strategy.config.datasource)
except (AttributeError, IndexError):
@@ -272,7 +272,7 @@ class BaseStrategy(loadable.Loadable):
collector = self.collector_manager.get_cluster_model_collector(
'storage', osc=self.osc)
audit_scope_handler = collector.get_audit_scope_handler(
audit_scope=self.audit.scope)
audit_scope=self.audit_scope)
self._storage_model = audit_scope_handler.get_scoped_model(
collector.get_latest_cluster_data_model())


@@ -149,8 +149,10 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
def get_config_opts(cls):
return [
cfg.ListOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
"datasources",
help="Datasources to use in order to query the needed metrics."
" If one of the strategy's metrics isn't available in the"
" first datasource, the next datasource will be chosen.",
item_type=cfg.types.String(choices=['gnocchi', 'ceilometer',
'monasca']),
default=['gnocchi', 'ceilometer', 'monasca']),


@@ -86,8 +86,10 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
def get_config_opts(cls):
return [
cfg.ListOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
"datasources",
help="Datasources to use in order to query the needed metrics."
" If one of the strategy's metrics isn't available in the"
" first datasource, the next datasource will be chosen.",
item_type=cfg.types.String(choices=['gnocchi', 'ceilometer',
'monasca']),
default=['gnocchi', 'ceilometer', 'monasca'])
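The new ``datasources`` option is an ordered list: if the first datasource cannot serve a strategy's metrics, the next one is tried. A sketch of that selection rule; the metric maps below are illustrative, not Watcher's actual ``METRIC_MAP`` contents:

```python
def pick_datasource(datasources, metric_maps, needed_metrics):
    """Return the first datasource whose metric map covers every needed metric."""
    for name in datasources:
        metric_map = metric_maps.get(name, {})
        if all(m in metric_map for m in needed_metrics):
            return name
    raise RuntimeError("no datasource provides %s" % list(needed_metrics))

maps = {'gnocchi': {'host_cpu_usage': 'cpu_util'},
        'ceilometer': {'host_cpu_usage': 'compute.node.cpu.percent',
                       'host_power': 'hardware.ipmi.node.power'}}
print(pick_datasource(['gnocchi', 'ceilometer', 'monasca'],
                      maps, ['host_cpu_usage', 'host_power']))  # -> ceilometer
```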


@@ -28,15 +28,11 @@ Outlet (Exhaust Air) Temperature is one of the important thermal
telemetries to measure thermal/workload status of server.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -95,8 +91,6 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
:type osc: :py:class:`~.OpenStackClients` instance, optional
"""
super(OutletTempControl, self).__init__(config, osc)
self._ceilometer = None
self._gnocchi = None
@classmethod
def get_name(cls):
@@ -139,26 +133,6 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
},
}
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, g):
self._gnocchi = g
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@@ -208,25 +182,13 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
resource_id = node.uuid
outlet_temp = None
if self.config.datasource == "ceilometer":
outlet_temp = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=metric_name,
period=self.period,
aggregate='avg'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
outlet_temp = self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=metric_name,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
outlet_temp = self.datasource_backend.statistic_aggregation(
resource_id=resource_id,
meter_name=metric_name,
period=self.period,
granularity=self.granularity,
)
# some hosts may not have outlet temp meters, remove from target
if outlet_temp is None:
LOG.warning("%s: no outlet temp data", resource_id)
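The per-datasource branches can collapse into one call because a backend helper is able to derive Gnocchi-style start/stop timestamps from the common `period` argument itself. A stand-in sketch of that idea (the `FakeBackend` class and its return value are hypothetical, not Watcher's `GnocchiHelper`):

```python
import datetime


class FakeBackend(object):
    """Stand-in for a datasource backend with the unified interface."""

    def statistic_aggregation(self, resource_id, meter_name, period,
                              granularity):
        # Derive the query window from the common `period` argument,
        # as the removed Gnocchi branch used to do at each call site.
        stop_time = datetime.datetime.utcnow()
        start_time = stop_time - datetime.timedelta(seconds=int(period))
        # A real helper would fetch measures between start_time and
        # stop_time at the given granularity; return a dummy value here.
        return 21.5


backend = FakeBackend()
outlet_temp = backend.statistic_aggregation(
    resource_id='Node_0',
    meter_name='hardware.ipmi.node.outlet_temperature',
    period=300, granularity=300)
```
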


@@ -42,15 +42,11 @@ airflow is higher than the specified threshold.
- It assumes that live migrations are possible.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -125,30 +121,8 @@ class UniformAirflow(base.BaseStrategy):
self.config.datasource]['host_inlet_temp']
self.meter_name_power = self.METRIC_NAMES[
self.config.datasource]['host_power']
self._ceilometer = None
self._gnocchi = None
self._period = self.PERIOD
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, g):
self._gnocchi = g
@classmethod
def get_name(cls):
return "uniform_airflow"
@@ -247,35 +221,16 @@ class UniformAirflow(base.BaseStrategy):
source_instances = self.compute_model.get_node_instances(
source_node)
if source_instances:
if self.config.datasource == "ceilometer":
inlet_t = self.ceilometer.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_inlet_t,
period=self._period,
aggregate='avg')
power = self.ceilometer.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_power,
period=self._period,
aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self._period))
inlet_t = self.gnocchi.statistic_aggregation(
resource_id=source_node.uuid,
metric=self.meter_name_inlet_t,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean')
power = self.gnocchi.statistic_aggregation(
resource_id=source_node.uuid,
metric=self.meter_name_power,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean')
inlet_t = self.datasource_backend.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_inlet_t,
period=self._period,
granularity=self.granularity)
power = self.datasource_backend.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_power,
period=self._period,
granularity=self.granularity)
if (power < self.threshold_power and
inlet_t < self.threshold_inlet_t):
# hardware issue, migrate all instances from this node
@@ -353,23 +308,11 @@ class UniformAirflow(base.BaseStrategy):
node = self.compute_model.get_node_by_uuid(
node_id)
resource_id = node.uuid
if self.config.datasource == "ceilometer":
airflow = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=self.meter_name_airflow,
period=self._period,
aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self._period))
airflow = self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=self.meter_name_airflow,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean')
airflow = self.datasource_backend.statistic_aggregation(
resource_id=resource_id,
meter_name=self.meter_name_airflow,
period=self._period,
granularity=self.granularity)
# some hosts may not have airflow meter, remove from target
if airflow is None:
LOG.warning("%s: no airflow data", resource_id)


@@ -52,7 +52,6 @@ correctly on all compute nodes within the cluster.
This strategy assumes it is possible to live migrate any VM from
an active compute node to any other active compute node.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
@@ -60,8 +59,6 @@ import six
from watcher._i18n import _
from watcher.common import exception
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -118,26 +115,6 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
def period(self):
return self.input_parameters.get('period', 3600)
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, ceilometer):
self._ceilometer = ceilometer
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@@ -315,57 +292,28 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
disk_alloc_metric = self.METRIC_NAMES[
self.config.datasource]['disk_alloc_metric']
if self.config.datasource == "ceilometer":
instance_cpu_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=cpu_util_metric,
period=self.period, aggregate='avg')
instance_ram_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=ram_util_metric,
period=self.period, aggregate='avg')
if not instance_ram_util:
instance_ram_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=ram_alloc_metric,
period=self.period, aggregate='avg')
instance_disk_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=disk_alloc_metric,
period=self.period, aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
instance_cpu_util = self.gnocchi.statistic_aggregation(
instance_cpu_util = self.datasource_backend.statistic_aggregation(
resource_id=instance.uuid,
meter_name=cpu_util_metric,
period=self.period,
granularity=self.granularity)
instance_ram_util = self.datasource_backend.statistic_aggregation(
resource_id=instance.uuid,
meter_name=ram_util_metric,
period=self.period,
granularity=self.granularity)
if not instance_ram_util:
instance_ram_util = self.datasource_backend.statistic_aggregation(
resource_id=instance.uuid,
metric=cpu_util_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
instance_ram_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=ram_util_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
if not instance_ram_util:
instance_ram_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=ram_alloc_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
instance_disk_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=disk_alloc_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
meter_name=ram_alloc_metric,
period=self.period,
granularity=self.granularity)
instance_disk_util = self.datasource_backend.statistic_aggregation(
resource_id=instance.uuid,
meter_name=disk_alloc_metric,
period=self.period,
granularity=self.granularity)
if instance_cpu_util:
total_cpu_utilization = (
instance.vcpus * (instance_cpu_util / 100.0))


@@ -290,8 +290,9 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
util = None
try:
util = self.datasource_backend.statistic_aggregation(
instance.uuid, self._meter, self._period, 'mean',
granularity=self.granularity)
instance.uuid, self._meter, self._period,
self._granularity, aggregation='mean',
dimensions=dict(resource_id=instance.uuid))
except Exception as exc:
LOG.exception(exc)
LOG.error("Can not get %s from %s", self._meter,
@@ -352,6 +353,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
self.threshold = self.input_parameters.threshold
self._period = self.input_parameters.period
self._meter = self.input_parameters.metrics
self._granularity = self.input_parameters.granularity
source_nodes, target_nodes, avg_workload, workload_cache = (
self.group_hosts_by_cpu_or_ram_util())


@@ -198,8 +198,8 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
instance_load = {'uuid': instance.uuid, 'vcpus': instance.vcpus}
for meter in self.metrics:
avg_meter = self.datasource_backend.statistic_aggregation(
instance.uuid, meter, self.periods['instance'], 'mean',
granularity=self.granularity)
instance.uuid, meter, self.periods['instance'],
self.granularity, aggregation='mean')
if avg_meter is None:
LOG.warning(
"No values returned by %(resource_id)s "
@@ -242,8 +242,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
resource_id = node_id
avg_meter = self.datasource_backend.statistic_aggregation(
resource_id, self.instance_metrics[metric],
self.periods['node'], 'mean', granularity=self.granularity)
self.periods['node'], self.granularity, aggregation='mean')
if avg_meter is None:
LOG.warning('No values returned by node %s for %s',
node_id, meter_name)
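Both workload strategies now pass `granularity` positionally and `aggregation` as a keyword. A sketch of the unified signature these calls assume, taken from the fake-metrics mocks later in this diff (the function body is a stand-in, not a real backend):

```python
# Assumed unified signature after this change; parameter names match
# the mock_get_statistics fakes updated elsewhere in this changeset.
def statistic_aggregation(resource_id=None, meter_name=None, period=None,
                          granularity=None, dimensions=None,
                          aggregation='avg', group_by='*'):
    # A real backend would query `meter_name` over `period` seconds at
    # `granularity`, filtered by `dimensions`, and apply `aggregation`.
    return {'resource_id': resource_id, 'aggregation': aggregation}


res = statistic_aggregation('vm-1', 'cpu_util', 300, 300,
                            aggregation='mean')
```
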


@@ -10,6 +10,14 @@
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
*Zone migration using instance and volume migration*
This is the zone migration strategy, which migrates many instances and
volumes efficiently with minimum downtime for hardware maintenance.
"""
from dateutil.parser import parse
import six
@@ -40,6 +48,7 @@ IN_USE = "in-use"
class ZoneMigration(base.ZoneMigrationBaseStrategy):
"""Zone migration using instance and volume migration"""
def __init__(self, config, osc=None):
@@ -371,15 +380,6 @@ class ZoneMigration(base.ZoneMigrationBaseStrategy):
:param pool: pool name
:returns: host name
"""
# TODO(hidekazu) use this
# mapping = zonemgr.get_host_pool_mapping()
# for host, pools in six.iteritems(mapping):
# for _pool in pools:
# if pool == _pool:
# return host
# LOG.warning(self.msg_not_exist_corresponding_host % pool)
# return pool
return pool.split('@')[0]
def get_dst_node(self, src_node):
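The interim `pool.split('@')[0]` implementation leans on Cinder's `host@backend#pool` naming convention: everything before the `@` is the host name. A quick illustration (the sample pool name is hypothetical):

```python
# Cinder pool names follow "host@backend#pool"; the part before '@'
# identifies the host, which is what the strategy needs here.
def get_host_by_pool(pool):
    return pool.split('@')[0]
```
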


@@ -60,6 +60,7 @@ log_warn = re.compile(
r"(.)*LOG\.(warn)\(\s*('|\"|_)")
unittest_imports_dot = re.compile(r"\bimport[\s]+unittest\b")
unittest_imports_from = re.compile(r"\bfrom[\s]+unittest\b")
re_redundant_import_alias = re.compile(r".*import (.+) as \1$")
@flake8ext
@@ -271,6 +272,18 @@ def check_builtins_gettext(logical_line, tokens, filename, lines, noqa):
yield (0, msg)
@flake8ext
def no_redundant_import_alias(logical_line):
"""Checking no redundant import alias.
https://bugs.launchpad.net/watcher/+bug/1745527
N342
"""
if re.match(re_redundant_import_alias, logical_line):
yield(0, "N342: No redundant import alias.")
def factory(register):
register(use_jsonutils)
register(check_assert_called_once_with)
@@ -286,3 +299,4 @@ def factory(register):
register(check_log_warn_deprecated)
register(check_oslo_i18n_wrapper)
register(check_builtins_gettext)
register(no_redundant_import_alias)
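The N342 pattern flags imports whose alias merely repeats the imported name, as in the `watcher.objects.base` cleanup elsewhere in this changeset. A quick check of how the backreference behaves:

```python
import re

# Same pattern as the new hacking check: the alias (group 1) must
# repeat verbatim after "as" for the line to be flagged.
re_redundant_import_alias = re.compile(r".*import (.+) as \1$")

flagged = bool(re.match(re_redundant_import_alias,
                        "from watcher.objects import base as base"))
clean = bool(re.match(re_redundant_import_alias,
                      "from watcher.objects import base as obj_base"))
```
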


@@ -4,11 +4,11 @@ msgid ""
msgstr ""
"Project-Id-Version: watcher VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2018-01-19 11:46+0000\n"
"POT-Creation-Date: 2018-01-26 00:18+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-01-19 08:01+0000\n"
"PO-Revision-Date: 2018-01-27 12:51+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
@@ -280,6 +280,10 @@ msgstr "Couldn't apply patch '%(patch)s'. Reason: %(reason)s"
msgid "Couldn't delete when state is '%(state)s'."
msgstr "Couldn't delete when state is '%(state)s'."
#, python-format
msgid "Datasource %(datasource)s is not available."
msgstr "Datasource %(datasource)s is not available."
#, python-format
msgid "Datasource %(datasource)s is not supported by strategy %(strategy)s"
msgstr "Datasource %(datasource)s is not supported by strategy %(strategy)s"
@@ -369,6 +373,9 @@ msgstr "Goal %(goal)s is invalid"
msgid "Goals"
msgstr "Goals"
msgid "Hardware Maintenance"
msgstr "Hardware Maintenance"
#, python-format
msgid "Here below is a table containing the objects that can be purged%s:"
msgstr "Here below is a table containing the objects that can be purged%s:"
@@ -506,6 +513,30 @@ msgstr "Provided cron is invalid: %(message)s"
msgid "Purge results summary%s:"
msgstr "Purge results summary%s:"
msgid ""
"Ratio of actual attached volumes migrated to planned attached volumes "
"migrate."
msgstr ""
"Ratio of actual attached volumes migrated to planned attached volumes "
"migrate."
msgid ""
"Ratio of actual cold migrated instances to planned cold migrate instances."
msgstr ""
"Ratio of actual cold migrated instances to planned cold migrate instances."
msgid ""
"Ratio of actual detached volumes migrated to planned detached volumes "
"migrate."
msgstr ""
"Ratio of actual detached volumes migrated to planned detached volumes "
"migrate."
msgid ""
"Ratio of actual live migrated instances to planned live migrate instances."
msgstr ""
"Ratio of actual live migrated instances to planned live migrate instances."
msgid ""
"Ratio of released compute nodes divided by the total number of enabled "
"compute nodes."
@@ -561,6 +592,9 @@ msgstr ""
msgid "State transition not allowed: (%(initial_state)s -> %(new_state)s)"
msgstr "State transition not allowed: (%(initial_state)s -> %(new_state)s)"
msgid "Storage Capacity Balance Strategy"
msgstr "Storage Capacity Balance Strategy"
msgid "Strategies"
msgstr "Strategies"
@@ -644,15 +678,42 @@ msgstr "The Ironic node %(uuid)s could not be found"
msgid "The list of compute node(s) in the cluster is empty"
msgstr "The list of compute node(s) in the cluster is empty"
msgid "The list of storage node(s) in the cluster is empty"
msgstr "The list of storage node(s) in the cluster is empty"
msgid "The metrics resource collector is not defined"
msgstr "The metrics resource collector is not defined"
msgid "The number of VM migrations to be performed."
msgstr "The number of VM migrations to be performed."
msgid "The number of attached volumes actually migrated."
msgstr "The number of attached volumes actually migrated."
msgid "The number of attached volumes planned to migrate."
msgstr "The number of attached volumes planned to migrate."
msgid "The number of compute nodes to be released."
msgstr "The number of compute nodes to be released."
msgid "The number of detached volumes actually migrated."
msgstr "The number of detached volumes actually migrated."
msgid "The number of detached volumes planned to migrate."
msgstr "The number of detached volumes planned to migrate."
msgid "The number of instances actually cold migrated."
msgstr "The number of instances actually cold migrated."
msgid "The number of instances actually live migrated."
msgstr "The number of instances actually live migrated."
msgid "The number of instances planned to cold migrate."
msgstr "The number of instances planned to cold migrate."
msgid "The number of instances planned to live migrate."
msgstr "The number of instances planned to live migrate."
#, python-format
msgid ""
"The number of objects (%(num)s) to delete from the database exceeds the "
@@ -766,6 +827,9 @@ msgstr ""
"You shouldn't use any other IDs of %(resource)s if you use wildcard "
"character."
msgid "Zone migration"
msgstr "Zone migration"
msgid "destination type is required when migration type is swap"
msgstr "destination type is required when migration type is swap"


@@ -16,7 +16,7 @@ import sys
import six
from watcher.notifications import base as notificationbase
from watcher.objects import base as base
from watcher.objects import base
from watcher.objects import fields as wfields


@@ -55,7 +55,8 @@ class TestCeilometerHelper(base.BaseTestCase):
val = cm.statistic_aggregation(
resource_id="INSTANCE_ID",
meter_name="cpu_util",
period="7300"
period="7300",
granularity=None
)
self.assertEqual(expected_result, val)
@@ -100,7 +101,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper = ceilometer_helper.CeilometerHelper()
helper.get_host_cpu_usage('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['host_cpu_usage'], 600,
'compute1', helper.METRIC_MAP['host_cpu_usage'], 600, None,
aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
@@ -109,7 +110,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper = ceilometer_helper.CeilometerHelper()
helper.get_instance_cpu_usage('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['instance_cpu_usage'], 600,
'compute1', helper.METRIC_MAP['instance_cpu_usage'], 600, None,
aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
@@ -118,7 +119,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper = ceilometer_helper.CeilometerHelper()
helper.get_host_memory_usage('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['host_memory_usage'], 600,
'compute1', helper.METRIC_MAP['host_memory_usage'], 600, None,
aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
@@ -128,7 +129,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper = ceilometer_helper.CeilometerHelper()
helper.get_instance_memory_usage('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['instance_ram_usage'], 600,
'compute1', helper.METRIC_MAP['instance_ram_usage'], 600, None,
aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
@@ -139,7 +140,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper.get_instance_l3_cache_usage('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['instance_l3_cache_usage'], 600,
aggregate='mean')
None, aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
'statistic_aggregation')
@@ -148,7 +149,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper = ceilometer_helper.CeilometerHelper()
helper.get_instance_ram_allocated('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['instance_ram_allocated'], 600,
'compute1', helper.METRIC_MAP['instance_ram_allocated'], 600, None,
aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
@@ -159,7 +160,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper.get_instance_root_disk_allocated('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['instance_root_disk_size'], 600,
aggregate='mean')
None, aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
'statistic_aggregation')
@@ -168,7 +169,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper = ceilometer_helper.CeilometerHelper()
helper.get_host_outlet_temperature('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['host_outlet_temp'], 600,
'compute1', helper.METRIC_MAP['host_outlet_temp'], 600, None,
aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
@@ -178,7 +179,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper = ceilometer_helper.CeilometerHelper()
helper.get_host_inlet_temperature('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['host_inlet_temp'], 600,
'compute1', helper.METRIC_MAP['host_inlet_temp'], 600, None,
aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
@@ -187,7 +188,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper = ceilometer_helper.CeilometerHelper()
helper.get_host_airflow('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['host_airflow'], 600,
'compute1', helper.METRIC_MAP['host_airflow'], 600, None,
aggregate='mean')
@mock.patch.object(ceilometer_helper.CeilometerHelper,
@@ -196,7 +197,7 @@ class TestCeilometerHelper(base.BaseTestCase):
helper = ceilometer_helper.CeilometerHelper()
helper.get_host_power('compute1', 600, 'mean')
mock_aggregation.assert_called_once_with(
'compute1', helper.METRIC_MAP['host_power'], 600,
'compute1', helper.METRIC_MAP['host_power'], 600, None,
aggregate='mean')
def test_check_availability(self, mock_ceilometer):


@@ -16,10 +16,8 @@
import mock
from oslo_config import cfg
from oslo_utils import timeutils
from watcher.common import clients
from watcher.common import exception
from watcher.datasource import gnocchi as gnocchi_helper
from watcher.tests import base
@@ -39,34 +37,17 @@ class TestGnocchiHelper(base.BaseTestCase):
mock_gnocchi.return_value = gnocchi
helper = gnocchi_helper.GnocchiHelper()
result = helper._statistic_aggregation(
result = helper.statistic_aggregation(
resource_id='16a86790-327a-45f9-bc82-45839f062fdc',
metric='cpu_util',
meter_name='cpu_util',
period=300,
granularity=360,
start_time=timeutils.parse_isotime("2017-02-02T09:00:00.000000"),
stop_time=timeutils.parse_isotime("2017-02-02T10:00:00.000000"),
aggregation='mean'
dimensions=None,
aggregation='mean',
group_by='*'
)
self.assertEqual(expected_result, result)
def test_gnocchi_wrong_datetime(self, mock_gnocchi):
gnocchi = mock.MagicMock()
expected_measures = [["2017-02-02T09:00:00.000000", 360, 5.5]]
gnocchi.metric.get_measures.return_value = expected_measures
mock_gnocchi.return_value = gnocchi
helper = gnocchi_helper.GnocchiHelper()
self.assertRaises(
exception.InvalidParameter, helper._statistic_aggregation,
resource_id='16a86790-327a-45f9-bc82-45839f062fdc',
metric='cpu_util',
granularity=360,
start_time="2017-02-02T09:00:00.000000",
stop_time=timeutils.parse_isotime("2017-02-02T10:00:00.000000"),
aggregation='mean')
@mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
def test_get_host_cpu_usage(self, mock_aggregation, mock_gnocchi):
helper = gnocchi_helper.GnocchiHelper()


@@ -16,7 +16,6 @@
import mock
from oslo_config import cfg
from oslo_utils import timeutils
from watcher.common import clients
from watcher.datasource import monasca as monasca_helper
@@ -30,7 +29,7 @@ class TestMonascaHelper(base.BaseTestCase):
def test_monasca_statistic_aggregation(self, mock_monasca):
monasca = mock.MagicMock()
expected_result = [{
expected_stat = [{
'columns': ['timestamp', 'avg'],
'dimensions': {
'hostname': 'rdev-indeedsrv001',
@@ -39,23 +38,23 @@ class TestMonascaHelper(base.BaseTestCase):
'name': 'cpu.percent',
'statistics': [
['2016-07-29T12:45:00Z', 0.0],
['2016-07-29T12:50:00Z', 0.9100000000000001],
['2016-07-29T12:55:00Z', 0.9111111111111112]]}]
['2016-07-29T12:50:00Z', 0.9],
['2016-07-29T12:55:00Z', 0.9]]}]
monasca.metrics.list_statistics.return_value = expected_result
monasca.metrics.list_statistics.return_value = expected_stat
mock_monasca.return_value = monasca
helper = monasca_helper.MonascaHelper()
result = helper.statistic_aggregation(
resource_id=None,
meter_name='cpu.percent',
dimensions={'hostname': 'NODE_UUID'},
start_time=timeutils.parse_isotime("2016-06-06T10:33:22.063176"),
end_time=None,
period=7200,
aggregate='avg',
granularity=300,
dimensions={'hostname': 'NODE_UUID'},
aggregation='avg',
group_by='*',
)
self.assertEqual(expected_result, result)
self.assertEqual(0.6, result)
def test_check_availability(self, mock_monasca):
monasca = mock.MagicMock()
@@ -117,34 +116,14 @@ class TestMonascaHelper(base.BaseTestCase):
@mock.patch.object(monasca_helper.MonascaHelper, 'statistic_aggregation')
def test_get_host_cpu_usage(self, mock_aggregation, mock_monasca):
node = "compute1_compute1"
mock_aggregation.return_value = [{
'columns': ['timestamp', 'avg'],
'dimensions': {
'hostname': 'rdev-indeedsrv001',
'service': 'monasca'},
'id': '0',
'name': 'cpu.percent',
'statistics': [
['2016-07-29T12:45:00Z', 0.0],
['2016-07-29T12:50:00Z', 0.9],
['2016-07-29T12:55:00Z', 0.9]]}]
mock_aggregation.return_value = 0.6
helper = monasca_helper.MonascaHelper()
cpu_usage = helper.get_host_cpu_usage(node, 600, 'mean')
self.assertEqual(0.6, cpu_usage)
@mock.patch.object(monasca_helper.MonascaHelper, 'statistic_aggregation')
def test_get_instance_cpu_usage(self, mock_aggregation, mock_monasca):
mock_aggregation.return_value = [{
'columns': ['timestamp', 'avg'],
'dimensions': {
'name': 'vm1',
'service': 'monasca'},
'id': '0',
'name': 'cpu.percent',
'statistics': [
['2016-07-29T12:45:00Z', 0.0],
['2016-07-29T12:50:00Z', 0.9],
['2016-07-29T12:55:00Z', 0.9]]}]
mock_aggregation.return_value = 0.6
helper = monasca_helper.MonascaHelper()
cpu_usage = helper.get_instance_cpu_usage('vm1', 600, 'mean')
self.assertEqual(0.6, cpu_usage)
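These test updates reflect that `statistic_aggregation` now returns an averaged scalar instead of the raw Monasca statistics payload. The expected `0.6` is simply the mean of the `avg` column in the sample payload:

```python
# Reproducing the expected 0.6: average the 'avg' column of the
# Monasca statistics rows used in the test fixture above.
statistics = [
    ['2016-07-29T12:45:00Z', 0.0],
    ['2016-07-29T12:50:00Z', 0.9],
    ['2016-07-29T12:55:00Z', 0.9],
]
avg_column = [row[1] for row in statistics]
result = sum(avg_column) / len(avg_column)
```
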


@@ -26,14 +26,9 @@ class FakeCeilometerMetrics(object):
def empty_one_metric(self, emptytype):
self.emptytype = emptytype
# TODO(alexchadin): This method is added as temporary solution until
# all strategies use datasource_backend property.
def temp_mock_get_statistics(self, resource_id, meter_name, period,
aggregate, granularity=300):
return self.mock_get_statistics(resource_id, meter_name, period)
def mock_get_statistics(self, resource_id, meter_name, period,
aggregate='avg'):
def mock_get_statistics(self, resource_id=None, meter_name=None,
period=None, granularity=None, dimensions=None,
aggregation='avg', group_by='*'):
result = 0
if meter_name == "hardware.cpu.util":
result = self.get_usage_node_cpu(resource_id)
@@ -56,7 +51,8 @@ class FakeCeilometerMetrics(object):
return result
def mock_get_statistics_wb(self, resource_id, meter_name, period,
aggregate, granularity=300):
granularity, dimensions=None,
aggregation='avg', group_by='*'):
result = 0.0
if meter_name == "cpu_util":
result = self.get_average_usage_instance_cpu_wb(resource_id)


@@ -84,8 +84,9 @@ class FakeCeilometerMetrics(object):
def __init__(self, model):
self.model = model
def mock_get_statistics(self, resource_id, meter_name, period=3600,
aggregate='avg'):
def mock_get_statistics(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
if meter_name == "compute.node.cpu.percent":
return self.get_node_cpu_util(resource_id)
elif meter_name == "cpu_util":
@@ -166,15 +167,16 @@ class FakeGnocchiMetrics(object):
def __init__(self, model):
self.model = model
def mock_get_statistics(self, resource_id, metric, granularity,
start_time, stop_time, aggregation='mean'):
if metric == "compute.node.cpu.percent":
def mock_get_statistics(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
if meter_name == "compute.node.cpu.percent":
return self.get_node_cpu_util(resource_id)
elif metric == "cpu_util":
elif meter_name == "cpu_util":
return self.get_instance_cpu_util(resource_id)
elif metric == "memory.resident":
elif meter_name == "memory.resident":
return self.get_instance_ram_util(resource_id)
elif metric == "disk.root.size":
elif meter_name == "disk.root.size":
return self.get_instance_disk_root_size(resource_id)
def get_node_cpu_util(self, r_id):


@@ -21,17 +21,10 @@ class FakeGnocchiMetrics(object):
def empty_one_metric(self, emptytype):
self.emptytype = emptytype
# TODO(alexchadin): This method is added as temporary solution until
# all strategies use datasource_backend property.
def temp_mock_get_statistics(self, resource_id, metric, period, aggregate,
granularity=300):
return self.mock_get_statistics(resource_id, metric, granularity,
0, 0, aggregation='mean')
def mock_get_statistics(self, resource_id, metric, granularity,
start_time, stop_time, aggregation='mean'):
def mock_get_statistics(self, resource_id=None, meter_name=None,
period=None, granularity=None, dimensions=None,
aggregation='avg', group_by='*'):
result = 0
meter_name = metric
if meter_name == "hardware.cpu.util":
result = self.get_usage_node_cpu(resource_id)
elif meter_name == "compute.node.cpu.percent":
@@ -87,12 +80,13 @@ class FakeGnocchiMetrics(object):
mock[uuid] = 25 * oslo_utils.units.Ki
return mock[str(uuid)]
def mock_get_statistics_wb(self, resource_id, metric, period, aggregate,
granularity=300):
def mock_get_statistics_wb(self, resource_id, meter_name, period,
granularity, dimensions=None,
aggregation='avg', group_by='*'):
result = 0.0
if metric == "cpu_util":
if meter_name == "cpu_util":
result = self.get_average_usage_instance_cpu_wb(resource_id)
elif metric == "memory.resident":
elif meter_name == "memory.resident":
result = self.get_average_usage_instance_memory_wb(resource_id)
return result


@@ -26,15 +26,9 @@ class FakeMonascaMetrics(object):
def empty_one_metric(self, emptytype):
self.emptytype = emptytype
# This method is added as temporary solution until all strategies use
# datasource_backend property
def temp_mock_get_statistics(self, metric, dimensions, period,
aggregate='avg', granularity=300):
return self.mock_get_statistics(metric, dimensions,
period, aggregate='avg')
def mock_get_statistics(self, meter_name, dimensions, period,
aggregate='avg'):
def mock_get_statistics(self, resource_id=None, meter_name=None,
period=None, granularity=None, dimensions=None,
aggregation='avg', group_by='*'):
resource_id = dimensions.get(
"resource_id") or dimensions.get("hostname")
result = 0.0


@@ -17,7 +17,6 @@
# limitations under the License.
#
import collections
import datetime
import mock
from watcher.applier.loading import default
@@ -57,7 +56,7 @@ class TestOutletTempControl(base.TestCase):
self.addCleanup(p_model.stop)
p_datasource = mock.patch.object(
strategies.OutletTempControl, self.datasource,
strategies.OutletTempControl, 'datasource_backend',
new_callable=mock.PropertyMock)
self.m_datasource = p_datasource.start()
self.addCleanup(p_datasource.stop)
@@ -164,44 +163,3 @@ class TestOutletTempControl(base.TestCase):
             loaded_action = loader.load(action['action_type'])
             loaded_action.input_parameters = action['input_parameters']
             loaded_action.validate_parameters()
-
-    def test_periods(self):
-        model = self.fake_cluster.generate_scenario_3_with_2_nodes()
-        self.m_model.return_value = model
-        p_ceilometer = mock.patch.object(
-            strategies.OutletTempControl, "ceilometer")
-        m_ceilometer = p_ceilometer.start()
-        self.addCleanup(p_ceilometer.stop)
-        p_gnocchi = mock.patch.object(strategies.OutletTempControl, "gnocchi")
-        m_gnocchi = p_gnocchi.start()
-        self.addCleanup(p_gnocchi.stop)
-        datetime_patcher = mock.patch.object(
-            datetime, 'datetime',
-            mock.Mock(wraps=datetime.datetime)
-        )
-        mocked_datetime = datetime_patcher.start()
-        mocked_datetime.utcnow.return_value = datetime.datetime(
-            2017, 3, 19, 18, 53, 11, 657417)
-        self.addCleanup(datetime_patcher.stop)
-        m_ceilometer.statistic_aggregation = mock.Mock(
-            side_effect=self.fake_metrics.mock_get_statistics)
-        m_gnocchi.statistic_aggregation = mock.Mock(
-            side_effect=self.fake_metrics.mock_get_statistics)
-        node = model.get_node_by_uuid('Node_0')
-        self.strategy.input_parameters.update({'threshold': 35.0})
-        self.strategy.threshold = 35.0
-        self.strategy.group_hosts_by_outlet_temp()
-        if self.strategy.config.datasource == "ceilometer":
-            m_ceilometer.statistic_aggregation.assert_any_call(
-                aggregate='avg',
-                meter_name='hardware.ipmi.node.outlet_temperature',
-                period=30, resource_id=node.uuid)
-        elif self.strategy.config.datasource == "gnocchi":
-            stop_time = datetime.datetime.utcnow()
-            start_time = stop_time - datetime.timedelta(
-                seconds=int('30'))
-            m_gnocchi.statistic_aggregation.assert_called_with(
-                resource_id=mock.ANY,
-                metric='hardware.ipmi.node.outlet_temperature',
-                granularity=300, start_time=start_time, stop_time=stop_time,
-                aggregation='mean')
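Note: the deleted test_periods cases above all pin the clock by patching datetime.datetime with a Mock that wraps the real class. A minimal standalone sketch of that pattern (the window helper and names are illustrative, not Watcher code):

```python
import datetime
from unittest import mock

FROZEN = datetime.datetime(2017, 3, 19, 18, 53, 11, 657417)

def window(period_seconds):
    # Same shape as the Gnocchi branch of the deleted tests:
    # stop is "now", start is one period earlier.
    stop_time = datetime.datetime.utcnow()
    start_time = stop_time - datetime.timedelta(seconds=period_seconds)
    return start_time, stop_time

with mock.patch.object(datetime, 'datetime',
                       mock.Mock(wraps=datetime.datetime)) as m_dt:
    # wraps= delegates everything not explicitly overridden, so real
    # datetime construction keeps working while utcnow() is pinned.
    m_dt.utcnow.return_value = FROZEN
    start, stop = window(30)

assert stop == FROZEN
assert (stop - start).total_seconds() == 30.0
```

Without wraps=, any code inside the patched scope that calls datetime.datetime(...) directly would get a Mock instead of a real datetime and the arithmetic above would fail.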

View File

@@ -42,7 +42,7 @@ class TestStrategyEndpoint(base.BaseTestCase):
     def test_get_datasource_status(self):
         strategy = mock.MagicMock()
         datasource = mock.MagicMock()
-        strategy.config.datasource = "gnocchi"
+        datasource.NAME = 'gnocchi'
         datasource.check_availability.return_value = "available"
         se = strategy_base.StrategyEndpoint(mock.MagicMock())
         result = se._get_datasource_status(strategy, datasource)
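Note: the test above wires two MagicMock collaborators and inspects what the endpoint reports for one datasource. A rough sketch of the same wiring against a hypothetical stand-in (the function body here is assumed for illustration, not the real StrategyEndpoint logic):

```python
from unittest import mock

def get_datasource_status(strategy, datasource):
    # Hypothetical stand-in for StrategyEndpoint._get_datasource_status:
    # only the datasource the strategy is configured for gets probed.
    if strategy.config.datasource == datasource.NAME:
        return datasource.check_availability()
    return "not configured"

strategy = mock.MagicMock()
datasource = mock.MagicMock()
# MagicMock lets the test pin just the attributes the code under
# test reads; everything else stays an auto-created child mock.
strategy.config.datasource = "gnocchi"
datasource.NAME = 'gnocchi'
datasource.check_availability.return_value = "available"

assert get_datasource_status(strategy, datasource) == "available"
```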

View File

@@ -17,7 +17,6 @@
 # limitations under the License.
 #
 import collections
-import datetime
 import mock
 
 from watcher.applier.loading import default
@@ -56,7 +55,7 @@ class TestUniformAirflow(base.TestCase):
         self.addCleanup(p_model.stop)
 
         p_datasource = mock.patch.object(
-            strategies.UniformAirflow, self.datasource,
+            strategies.UniformAirflow, 'datasource_backend',
             new_callable=mock.PropertyMock)
         self.m_datasource = p_datasource.start()
         self.addCleanup(p_datasource.stop)
@@ -211,39 +210,3 @@ class TestUniformAirflow(base.TestCase):
             loaded_action = loader.load(action['action_type'])
             loaded_action.input_parameters = action['input_parameters']
             loaded_action.validate_parameters()
-
-    def test_periods(self):
-        model = self.fake_cluster.generate_scenario_7_with_2_nodes()
-        self.m_model.return_value = model
-        p_ceilometer = mock.patch.object(
-            strategies.UniformAirflow, "ceilometer")
-        m_ceilometer = p_ceilometer.start()
-        self.addCleanup(p_ceilometer.stop)
-        p_gnocchi = mock.patch.object(strategies.UniformAirflow, "gnocchi")
-        m_gnocchi = p_gnocchi.start()
-        self.addCleanup(p_gnocchi.stop)
-        datetime_patcher = mock.patch.object(
-            datetime, 'datetime',
-            mock.Mock(wraps=datetime.datetime)
-        )
-        mocked_datetime = datetime_patcher.start()
-        mocked_datetime.utcnow.return_value = datetime.datetime(
-            2017, 3, 19, 18, 53, 11, 657417)
-        self.addCleanup(datetime_patcher.stop)
-        m_ceilometer.statistic_aggregation = mock.Mock(
-            side_effect=self.fake_metrics.mock_get_statistics)
-        m_gnocchi.statistic_aggregation = mock.Mock(
-            side_effect=self.fake_metrics.mock_get_statistics)
-        self.strategy.group_hosts_by_airflow()
-        if self.strategy.config.datasource == "ceilometer":
-            m_ceilometer.statistic_aggregation.assert_any_call(
-                aggregate='avg', meter_name='hardware.ipmi.node.airflow',
-                period=300, resource_id=mock.ANY)
-        elif self.strategy.config.datasource == "gnocchi":
-            stop_time = datetime.datetime.utcnow()
-            start_time = stop_time - datetime.timedelta(
-                seconds=int('300'))
-            m_gnocchi.statistic_aggregation.assert_called_with(
-                resource_id=mock.ANY, metric='hardware.ipmi.node.airflow',
-                granularity=300, start_time=start_time, stop_time=stop_time,
-                aggregation='mean')
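Note: both branches of the removed tests feed the datasource mock through side_effect, so canned metric values flow back while the mock still records every call for later assertions. A small self-contained sketch (FakeMetrics and the canned value are illustrative):

```python
from unittest import mock

class FakeMetrics:
    # Illustrative stand-in for the tests' fake_metrics helper:
    # hand back a canned value per metric name.
    def mock_get_statistics(self, resource_id=None, metric=None, **kwargs):
        canned = {'hardware.ipmi.node.airflow': 400.0}
        return canned.get(metric, 0.0)

datasource = mock.Mock()
datasource.statistic_aggregation = mock.Mock(
    side_effect=FakeMetrics().mock_get_statistics)

value = datasource.statistic_aggregation(
    resource_id='Node_0', metric='hardware.ipmi.node.airflow',
    granularity=300, aggregation='mean')

# side_effect supplies the return value, yet the mock still records
# the call, so assert_called_with keeps working.
datasource.statistic_aggregation.assert_called_with(
    resource_id='Node_0', metric='hardware.ipmi.node.airflow',
    granularity=300, aggregation='mean')
assert value == 400.0
```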

View File

@@ -18,7 +18,6 @@
 # limitations under the License.
 #
-import datetime
 import mock
 
 from watcher.common import exception
@@ -55,7 +54,7 @@ class TestVMWorkloadConsolidation(base.TestCase):
         self.addCleanup(p_model.stop)
 
         p_datasource = mock.patch.object(
-            strategies.VMWorkloadConsolidation, self.datasource,
+            strategies.VMWorkloadConsolidation, 'datasource_backend',
             new_callable=mock.PropertyMock)
         self.m_datasource = p_datasource.start()
         self.addCleanup(p_datasource.stop)
@@ -333,41 +332,3 @@ class TestVMWorkloadConsolidation(base.TestCase):
         del expected[3]
         del expected[1]
         self.assertEqual(expected, self.strategy.solution.actions)
-
-    def test_periods(self):
-        model = self.fake_cluster.generate_scenario_1()
-        self.m_model.return_value = model
-        p_ceilometer = mock.patch.object(
-            strategies.VMWorkloadConsolidation, "ceilometer")
-        m_ceilometer = p_ceilometer.start()
-        self.addCleanup(p_ceilometer.stop)
-        p_gnocchi = mock.patch.object(
-            strategies.VMWorkloadConsolidation, "gnocchi")
-        m_gnocchi = p_gnocchi.start()
-        self.addCleanup(p_gnocchi.stop)
-        datetime_patcher = mock.patch.object(
-            datetime, 'datetime',
-            mock.Mock(wraps=datetime.datetime)
-        )
-        mocked_datetime = datetime_patcher.start()
-        mocked_datetime.utcnow.return_value = datetime.datetime(
-            2017, 3, 19, 18, 53, 11, 657417)
-        self.addCleanup(datetime_patcher.stop)
-        m_ceilometer.return_value = mock.Mock(
-            statistic_aggregation=self.fake_metrics.mock_get_statistics)
-        m_gnocchi.return_value = mock.Mock(
-            statistic_aggregation=self.fake_metrics.mock_get_statistics)
-        instance0 = model.get_instance_by_uuid("INSTANCE_0")
-        self.strategy.get_instance_utilization(instance0)
-        if self.strategy.config.datasource == "ceilometer":
-            m_ceilometer.statistic_aggregation.assert_any_call(
-                aggregate='avg', meter_name='disk.root.size',
-                period=3600, resource_id=instance0.uuid)
-        elif self.strategy.config.datasource == "gnocchi":
-            stop_time = datetime.datetime.utcnow()
-            start_time = stop_time - datetime.timedelta(
-                seconds=int('3600'))
-            m_gnocchi.statistic_aggregation.assert_called_with(
-                resource_id=instance0.uuid, metric='disk.root.size',
-                granularity=300, start_time=start_time, stop_time=stop_time,
-                aggregation='mean')
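Note: each setup hunk in these files swaps the strategy's datasource attribute for a mock patched with new_callable=mock.PropertyMock. A minimal sketch of why a PropertyMock (rather than a plain Mock) is needed for property attributes (the Strategy class here is illustrative, not Watcher's):

```python
from unittest import mock

class Strategy:
    # Illustrative stand-in: the real strategies expose a datasource
    # property that would reach out to an actual backend.
    @property
    def datasource_backend(self):
        raise RuntimeError("would hit a real datasource")

with mock.patch.object(Strategy, 'datasource_backend',
                       new_callable=mock.PropertyMock) as m_backend:
    # PropertyMock is itself a descriptor on the class, so plain
    # attribute access triggers the mock call and yields return_value.
    m_backend.return_value = mock.Mock(NAME='gnocchi')
    s = Strategy()
    assert s.datasource_backend.NAME == 'gnocchi'
    # The property read is recorded like any other mock call.
    m_backend.assert_called_once_with()
```

When the with-block exits, the original property is restored, which is what the addCleanup(p_datasource.stop) calls above achieve in the test fixtures.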

View File

@@ -75,10 +75,12 @@ class TestWorkloadBalance(base.TestCase):
         self.strategy.input_parameters = utils.Struct()
         self.strategy.input_parameters.update({'metrics': 'cpu_util',
                                                'threshold': 25.0,
-                                               'period': 300})
+                                               'period': 300,
+                                               'granularity': 300})
         self.strategy.threshold = 25.0
         self.strategy._period = 300
         self.strategy._meter = "cpu_util"
+        self.strategy._granularity = 300
 
     def test_calc_used_resource(self):
         model = self.fake_cluster.generate_scenario_6_with_2_nodes()
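Note: utils.Struct above lets the test populate input parameters via dict update() and then read them back as attributes. A plausible minimal sketch of such a helper (the real watcher utils.Struct may differ in detail):

```python
class Struct(dict):
    # Plausible minimal sketch of watcher's utils.Struct helper:
    # a dict whose entries double as attributes.
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__

params = Struct()
params.update({'metrics': 'cpu_util', 'threshold': 25.0,
               'period': 300, 'granularity': 300})

# Both access styles reach the same underlying mapping.
assert params.granularity == 300
assert params['threshold'] == 25.0
```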

View File

@@ -83,7 +83,7 @@ class TestWorkloadStabilization(base.TestCase):
         self.m_model.return_value = model_root.ModelRoot()
         self.m_audit_scope.return_value = mock.Mock()
         self.m_datasource.return_value = mock.Mock(
-            statistic_aggregation=self.fake_metrics.temp_mock_get_statistics)
+            statistic_aggregation=self.fake_metrics.mock_get_statistics)
         self.strategy = strategies.WorkloadStabilization(
             config=mock.Mock(datasource=self.datasource))