Compare commits

..

22 Commits

Author SHA1 Message Date
OpenDev Sysadmins
16113e255b OpenDev Migration Patch
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:

http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
2019-04-19 19:40:46 +00:00
Ian Wienand
30d6f07ceb Replace openstack.org git:// URLs with https://
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.

This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.

This update should result in no functional change.

For more information see the thread at

 http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html

Change-Id: I5ffcaf509ec6901f7e221a4312cc6b0577090440
2019-03-24 20:36:25 +00:00
licanwei
343a65952a Check job before removing it
Change-Id: Ibbd4da25fac6016a0d76c8f810ac567f6fd075f1
Closes-Bug: #1782731
(cherry picked from commit 4022714f5d)
2018-10-23 11:57:16 +00:00
LiXiangyu
9af6886b0e Fix TypeError in function chunkify
This patch fixes a TypeError raised by range() in the chunkify
function: range() expects an integer step argument but received a str.

Change-Id: I2acde859e014baa4c4c59caa6f4ea938c7c4c3bf
(cherry picked from commit c717be12a6)
2018-10-23 06:55:49 +00:00
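The failure mode described above can be illustrated with a short, hypothetical sketch (the real chunkify lives in Watcher's utility code; the function below only mirrors the bug): range() rejects a string step argument, so a chunk size parsed from text must be coerced to int before use.

```python
def chunkify(seq, chunk_size):
    """Yield successive chunks of seq.

    chunk_size may arrive as a string (e.g. parsed from config); the fix
    is to coerce it before handing it to range(), which requires ints.
    """
    chunk_size = int(chunk_size)  # the fix: coerce before range()
    for i in range(0, len(seq), chunk_size):
        yield seq[i:i + chunk_size]

print(list(chunkify([1, 2, 3, 4, 5], "2")))  # [[1, 2], [3, 4], [5]]
```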
Nguyen Hai
b0ef77f5d1 import zuul job settings from project-config
This is a mechanically generated patch to complete step 1 of moving
the zuul job settings out of project-config and into each project
repository.

Because there will be a separate patch on each branch, the branch
specifiers for branch-specific jobs have been removed.

Because this patch is generated by a script, there may be some
cosmetic changes to the layout of the YAML file(s) as the contents are
normalized.

See the python3-first goal document for details:
https://governance.openstack.org/tc/goals/stein/python3-first.html

Change-Id: I9ccef45c11c17c3bdda143a53b325be327b9459d
Story: #2002586
Task: #24344
2018-08-19 00:58:52 +09:00
Alexander Chadin
f5157f2894 workload_stabilization trivial fix
This fix compares the metric name by value,
not by object identity.

Change-Id: I57c50ff97efa43efe4fd81875e481b25e9a18cc6
2018-02-20 14:42:04 +00:00
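A minimal illustration of this kind of fix (the class and field names are hypothetical, not Watcher's actual ones): two objects wrapping the same metric name are distinct instances, so only comparing the wrapped name by value makes the check succeed.

```python
class Metric:
    """Stand-in for an object that wraps a metric name."""
    def __init__(self, name):
        self.name = name

wanted = Metric("instance_cpu_usage")
seen = Metric("instance_cpu_usage")

# Comparing the objects themselves fails: they are distinct instances.
assert wanted is not seen
# Comparing by value succeeds, which is what the fix does.
assert wanted.name == seen.name
```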
OpenStack Proposal Bot
13331935df Updated from global requirements
Change-Id: I941f6e5a005124a98d2860695fb9a30a77bc595c
2018-02-14 18:48:22 +00:00
Alexander Chadin
8d61c1a2b4 Fix workload_stabilization unavailable nodes and instances
This patch set excludes nodes and instances from auditing
if appropriate metrics aren't available.

Change-Id: I87c6c249e3962f45d082f92d7e6e0be04e101799
Closes-Bug: #1736982
(cherry picked from commit 701b258dc7)
2018-01-24 11:57:14 +00:00
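The exclusion logic can be sketched as a simple filter (the names, metric key, and data layout here are illustrative; Watcher's real data model differs): resources for which the datasource reports no value are dropped from the audit instead of breaking the strategy.

```python
def exclude_unavailable(resources, measurements, required=("cpu_util",)):
    """Keep only resources that report every required metric."""
    return [r for r in resources
            if all(measurements.get((r, m)) is not None for m in required)]

measurements = {("node-1", "cpu_util"): 42.0}  # node-2 reports nothing
print(exclude_unavailable(["node-1", "node-2"], measurements))  # ['node-1']
```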
Hidekazu Nakamura
6b4b5c2fe5 Fix gnocchiclient creation
Gnocchiclient uses keystoneauth1.adapter, so adapter_options
must be provided.
This patch fixes gnocchiclient creation.

Change-Id: I6b5d8ee775929f4b3fd30be3321b378d19085547
Closes-Bug: #1714871
(cherry picked from commit a2fa13c8ff)
2017-11-13 08:20:55 +00:00
OpenStack Proposal Bot
62623a7f77 Updated from global requirements
Change-Id: Iede1409c379d90238b6f2ab6a9aa750b3081df94
2017-09-21 01:08:52 +00:00
licanwei
9d2f8d11ec Fix KeyError exception
During the strategy sync process,
if the goal_id can't be found in the goals table,
a KeyError exception is raised.

Change-Id: I62800ac5c69f4f5c7820908f2e777094a51a5541
Closes-Bug: #1711086
2017-08-24 12:34:13 +00:00
Jenkins
f1d064c759 Merge "workload balance based on cpu or ram util" into stable/pike 2017-08-24 10:09:30 +00:00
Jenkins
6cb02c18a7 Merge "Remove pbr warnerrors" into stable/pike 2017-08-24 10:09:19 +00:00
Jenkins
37fc37e138 Merge "Update the documentation for doc migration" into stable/pike 2017-08-24 10:09:13 +00:00
Jenkins
b68685741e Merge "Adjust the action state judgment logic" into stable/pike 2017-08-24 08:37:55 +00:00
zhengwei6082
6721977f74 Update the documentation for doc migration
Change-Id: I22dc18e6f2f7471f5c804d4d19c631f81a6e196b
(cherry picked from commit d5bcd37478)
2017-08-23 10:04:55 +00:00
Alexander Chadin
c303ad4cdc Remove pbr warnerrors
This change removes the now unused "warnerrors" setting,
which is replaced by "warning-is-error" in sphinx
releases >= 1.5 [1].

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113085.html

Change-Id: I32f078169668be08737e47cd15edbdfba42904dc
(cherry picked from commit f76a628d1f)
2017-08-23 10:03:58 +00:00
licanwei
51c9db2936 Adjust the action state judgment logic
The action state is set to SUCCEEDED only when True is returned;
some actions (such as migrate) return None if an exception was raised.

Change-Id: I52e7a1ffb68f54594f2b00d9843e8e0a4c985667
(cherry picked from commit 965af1b6fd)
2017-08-23 10:03:32 +00:00
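A hedged sketch of the adjusted judgment (the state names mirror Watcher's, but the helper itself is hypothetical): because an action's execution may return None when an exception was swallowed, only a literal True should count as success.

```python
SUCCEEDED, FAILED = "SUCCEEDED", "FAILED"

def judge_state(result):
    # Strict identity check: None (e.g. after a swallowed exception) and
    # any other non-True value mark the action as failed.
    return SUCCEEDED if result is True else FAILED

assert judge_state(True) == SUCCEEDED
assert judge_state(None) == FAILED      # e.g. migrate after an exception
assert judge_state("done") == FAILED    # truthy, but not True
```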
suzhengwei
1e003d4153 workload balance based on cpu or ram util
Based on the input parameter "metrics", it decides whether to migrate a VM
according to CPU or memory utilization.

Change-Id: I35cce3495c8dacad64ea6c6ee71082a85e9e0a83
(cherry picked from commit 5c86a54d20)
2017-08-23 10:03:15 +00:00
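The parameter-driven choice can be sketched as follows (the threshold, metric names, and helper are illustrative only, not the strategy's real interface):

```python
def needs_migration(host_utilization, metrics="cpu_util", threshold=80.0):
    """Decide on migration using the metric selected by `metrics`."""
    return host_utilization.get(metrics, 0.0) > threshold

host = {"cpu_util": 91.5, "memory.resident": 40.0}
assert needs_migration(host, metrics="cpu_util")             # CPU-driven
assert not needs_migration(host, metrics="memory.resident")  # RAM-driven
```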
Hidekazu Nakamura
bab89fd769 Fix gnocchi repository URL in local.conf.controller
This patch set updates the gnocchi repository URL in local.conf.controller
because gnocchi moved from the openstack namespace to its own repository.

Change-Id: I53c6efcb40b26f83bc1867564b9067ae5f50938d
(cherry picked from commit 5cc4716a95)
2017-08-23 10:02:57 +00:00
OpenStack Release Bot
e1e17ab0b9 Update UPPER_CONSTRAINTS_FILE for stable/pike
Change-Id: I453ae1575d2dad4b724d96cfc6ebf5c5b2a2a3be
2017-08-11 01:09:58 +00:00
OpenStack Release Bot
3d542472f6 Update .gitreview for stable/pike
Change-Id: I685c0bc8773d2b5f9e747a53d870a92dd6baea36
2017-08-11 01:09:57 +00:00
157 changed files with 3871 additions and 1354 deletions


@@ -1,4 +1,5 @@
 [gerrit]
-host=review.openstack.org
+host=review.opendev.org
 port=29418
 project=openstack/watcher.git
+defaultbranch=stable/pike

.zuul.yaml (new file)

@@ -0,0 +1,9 @@
- project:
    templates:
      - openstack-python-jobs
      - openstack-python35-jobs
      - publish-openstack-sphinx-docs
      - check-requirements
      - release-notes-jobs
    gate:
      queue: watcher


@@ -35,7 +35,7 @@ VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
NOVA_INSTANCES_PATH=/opt/stack/data/instances
# Enable the Ceilometer plugin for the compute agent
-enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
+enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
disable_service ceilometer-acentral,ceilometer-collector,ceilometer-api
LOGFILE=$DEST/logs/stack.sh.log


@@ -24,18 +24,24 @@ MULTI_HOST=1
# This is the controller node, so disable nova-compute
disable_service n-cpu
# Disable nova-network and use neutron instead
disable_service n-net
ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3,neutron
# Enable remote console access
enable_service n-cauth
# Enable the Watcher Dashboard plugin
-enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard
+enable_plugin watcher-dashboard https://git.openstack.org/openstack/watcher-dashboard
 # Enable the Watcher plugin
-enable_plugin watcher git://git.openstack.org/openstack/watcher
+enable_plugin watcher https://git.openstack.org/openstack/watcher
 # Enable the Ceilometer plugin
-enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
+enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
# This is the controller node, so disable the ceilometer compute agent
disable_service ceilometer-acompute
# Enable the ceilometer api explicitly(bug:1667678)
enable_service ceilometer-api


@@ -7,7 +7,7 @@ _XTRACE_WATCHER_PLUGIN=$(set +o | grep xtrace)
set -o xtrace
echo_summary "watcher's plugin.sh was called..."
-. $DEST/watcher/devstack/lib/watcher
+source $DEST/watcher/devstack/lib/watcher
# Show all of defined environment variables
(set -o posix; set)


@@ -22,7 +22,7 @@ from docutils import nodes
from docutils.parsers import rst
from docutils import statemachine
-from watcher.version import version_string
+from watcher.version import version_info
class BaseWatcherDirective(rst.Directive):
@@ -169,4 +169,4 @@ class WatcherFunc(BaseWatcherDirective):
def setup(app):
app.add_directive('watcher-term', WatcherTerm)
app.add_directive('watcher-func', WatcherFunc)
-return {'version': version_string}
+return {'version': version_info.version_string()}


@@ -1,41 +0,0 @@
{
"priority": "INFO",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionCancelPayload",
"watcher_object.data": {
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"input_parameters": {
"param2": 2,
"param1": 1
},
"fault": null,
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLED",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "TerseActionPlanPayload",
"watcher_object.data": {
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"global_efficacy": {},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLING",
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null
}
},
"parents": [],
"action_type": "nop",
"deleted_at": null
}
},
"event_type": "action.cancel.end",
"publisher_id": "infra-optim:node0",
"timestamp": "2017-01-01 00:00:00.000000",
"message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}


@@ -1,51 +0,0 @@
{
"priority": "ERROR",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionCancelPayload",
"watcher_object.data": {
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"input_parameters": {
"param2": 2,
"param1": 1
},
"fault": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ExceptionPayload",
"watcher_object.data": {
"module_name": "watcher.tests.notifications.test_action_notification",
"exception": "WatcherException",
"exception_message": "TEST",
"function_name": "test_send_action_cancel_with_error"
}
},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "FAILED",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "TerseActionPlanPayload",
"watcher_object.data": {
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"global_efficacy": {},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLING",
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null
}
},
"parents": [],
"action_type": "nop",
"deleted_at": null
}
},
"event_type": "action.cancel.error",
"publisher_id": "infra-optim:node0",
"timestamp": "2017-01-01 00:00:00.000000",
"message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}


@@ -1,41 +0,0 @@
{
"priority": "INFO",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "ActionCancelPayload",
"watcher_object.data": {
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"input_parameters": {
"param2": 2,
"param1": 1
},
"fault": null,
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLING",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
"watcher_object.name": "TerseActionPlanPayload",
"watcher_object.data": {
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"global_efficacy": {},
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLING",
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null
}
},
"parents": [],
"action_type": "nop",
"deleted_at": null
}
},
"event_type": "action.cancel.start",
"publisher_id": "infra-optim:node0",
"timestamp": "2017-01-01 00:00:00.000000",
"message_id": "530b409c-9b6b-459b-8f08-f93dbfeb4d41"
}


@@ -1,55 +0,0 @@
{
"event_type": "action_plan.cancel.end",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.name": "ActionPlanCancelPayload",
"watcher_object.version": "1.0",
"watcher_object.data": {
"created_at": "2016-10-18T09:52:05Z",
"deleted_at": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"audit": {
"watcher_object.namespace": "watcher",
"watcher_object.name": "TerseAuditPayload",
"watcher_object.version": "1.0",
"watcher_object.data": {
"created_at": "2016-10-18T09:52:05Z",
"deleted_at": null,
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"goal_uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"strategy_uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"scope": [],
"audit_type": "ONESHOT",
"state": "SUCCEEDED",
"parameters": {},
"interval": null,
"updated_at": null
}
},
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"fault": null,
"state": "CANCELLED",
"global_efficacy": {},
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"strategy": {
"watcher_object.namespace": "watcher",
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.data": {
"created_at": "2016-10-18T09:52:05Z",
"deleted_at": null,
"name": "TEST",
"uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"parameters_spec": {},
"display_name": "test strategy",
"updated_at": null
}
},
"updated_at": null
}
},
"priority": "INFO",
"message_id": "3984dc2b-8aef-462b-a220-8ae04237a56e",
"timestamp": "2016-10-18 09:52:05.219414",
"publisher_id": "infra-optim:node0"
}


@@ -1,65 +0,0 @@
{
"event_type": "action_plan.cancel.error",
"publisher_id": "infra-optim:node0",
"priority": "ERROR",
"message_id": "9a45c5ae-0e21-4300-8fa0-5555d52a66d9",
"payload": {
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher",
"watcher_object.name": "ActionPlanCancelPayload",
"watcher_object.data": {
"fault": {
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher",
"watcher_object.name": "ExceptionPayload",
"watcher_object.data": {
"exception_message": "TEST",
"module_name": "watcher.tests.notifications.test_action_plan_notification",
"function_name": "test_send_action_plan_cancel_with_error",
"exception": "WatcherException"
}
},
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"created_at": "2016-10-18T09:52:05Z",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"strategy": {
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher",
"watcher_object.name": "StrategyPayload",
"watcher_object.data": {
"uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"created_at": "2016-10-18T09:52:05Z",
"name": "TEST",
"updated_at": null,
"display_name": "test strategy",
"parameters_spec": {},
"deleted_at": null
}
},
"updated_at": null,
"deleted_at": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"audit": {
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher",
"watcher_object.name": "TerseAuditPayload",
"watcher_object.data": {
"parameters": {},
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"goal_uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"strategy_uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"created_at": "2016-10-18T09:52:05Z",
"scope": [],
"updated_at": null,
"audit_type": "ONESHOT",
"interval": null,
"deleted_at": null,
"state": "SUCCEEDED"
}
},
"global_efficacy": {},
"state": "CANCELLING"
}
},
"timestamp": "2016-10-18 09:52:05.219414"
}


@@ -1,55 +0,0 @@
{
"event_type": "action_plan.cancel.start",
"payload": {
"watcher_object.namespace": "watcher",
"watcher_object.name": "ActionPlanCancelPayload",
"watcher_object.version": "1.0",
"watcher_object.data": {
"created_at": "2016-10-18T09:52:05Z",
"deleted_at": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"audit": {
"watcher_object.namespace": "watcher",
"watcher_object.name": "TerseAuditPayload",
"watcher_object.version": "1.0",
"watcher_object.data": {
"created_at": "2016-10-18T09:52:05Z",
"deleted_at": null,
"uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"goal_uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"strategy_uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"scope": [],
"audit_type": "ONESHOT",
"state": "SUCCEEDED",
"parameters": {},
"interval": null,
"updated_at": null
}
},
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"fault": null,
"state": "CANCELLING",
"global_efficacy": {},
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"strategy": {
"watcher_object.namespace": "watcher",
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.data": {
"created_at": "2016-10-18T09:52:05Z",
"deleted_at": null,
"name": "TEST",
"uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"parameters_spec": {},
"display_name": "test strategy",
"updated_at": null
}
},
"updated_at": null
}
},
"priority": "INFO",
"message_id": "3984dc2b-8aef-462b-a220-8ae04237a56e",
"timestamp": "2016-10-18 09:52:05.219414",
"publisher_id": "infra-optim:node0"
}


@@ -127,8 +127,8 @@ Here is single Dockerfile snippet you can use to run your Docker container:
RUN apt-get update
RUN apt-get dist-upgrade -y
-RUN apt-get install vim net-tools
-RUN apt-get install experimental watcher-api
+RUN apt-get install -y vim net-tools
+RUN apt-get install -yt experimental watcher-api
CMD ["/usr/bin/watcher-api"]


@@ -119,7 +119,7 @@ The watcher command-line interface (CLI) can be used to interact with the
Watcher system in order to control it or to know its current status.
Please, read `the detailed documentation about Watcher CLI
-<https://docs.openstack.org/python-watcherclient/latest/cli/>`_.
+<https://factory.b-com.com/www/watcher/doc/python-watcherclient/>`_.
.. _archi_watcher_dashboard_definition:
@@ -130,7 +130,7 @@ The Watcher Dashboard can be used to interact with the Watcher system through
Horizon in order to control it or to know its current status.
Please, read `the detailed documentation about Watcher Dashboard
-<https://docs.openstack.org/watcher-dashboard/latest>`_.
+<http://docs.openstack.org/developer/watcher-dashboard/>`_.
.. _archi_watcher_database_definition:
@@ -170,7 +170,7 @@ Unless specified, it then selects the most appropriate :ref:`strategy
goal.
The :ref:`Strategy <strategy_definition>` is then dynamically loaded (via
-`stevedore <https://docs.openstack.org/stevedore/latest>`_). The
+`stevedore <http://docs.openstack.org/developer/stevedore/>`_). The
:ref:`Watcher Decision Engine <watcher_decision_engine_definition>` executes
the strategy.


@@ -72,7 +72,7 @@ copyright = u'OpenStack Foundation'
# The full version, including alpha/beta/rc tags.
release = watcher_version.version_info.release_string()
# The short X.Y version.
-version = watcher_version.version_string
+version = watcher_version.version_info.version_string()
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['watcher.']


@@ -15,7 +15,7 @@ Service overview
================
The Watcher system is a collection of services that provides support to
-optimize your IaaS platform. The Watcher service may, depending upon
+optimize your IAAS platform. The Watcher service may, depending upon
configuration, interact with several other OpenStack services. This includes:
- the OpenStack Identity service (`keystone`_) for request authentication and
@@ -27,7 +27,7 @@ configuration, interact with several other OpenStack services. This includes:
The Watcher service includes the following components:
-- ``watcher-decision-engine``: runs audit on part of your IaaS and return an
+- ``watcher-decision-engine``: runs audit on part of your IAAS and return an
action plan in order to optimize resource placement.
- ``watcher-api``: A RESTful API that processes application requests by sending
them to the watcher-decision-engine over RPC.
@@ -165,7 +165,7 @@ You can easily generate and update a sample configuration file
named :ref:`watcher.conf.sample <watcher_sample_configuration_files>` by using
these following commands::
-$ git clone git://git.openstack.org/openstack/watcher
+$ git clone https://git.openstack.org/openstack/watcher
$ cd watcher/
$ tox -e genconfig
$ vi etc/watcher/watcher.conf.sample
@@ -349,7 +349,7 @@ so that the watcher service is configured for your needs.
[nova_client]
# Version of Nova API to use in novaclient. (string value)
-#api_version = 2.53
+#api_version = 2
api_version = 2.1
#. Create the Watcher Service database tables::
@@ -366,14 +366,15 @@ Configure Nova compute
Please check your hypervisor configuration to correctly handle
`instance migration`_.
-.. _`instance migration`: https://docs.openstack.org/nova/latest/admin/migration.html
+.. _`instance migration`: http://docs.openstack.org/admin-guide/compute-live-migration-usage.html
Configure Measurements
======================
You can configure and install Ceilometer by following the documentation below :
-#. https://docs.openstack.org/ceilometer/latest
+#. http://docs.openstack.org/developer/ceilometer
+#. http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-nova.html
The built-in strategy 'basic_consolidation' provided by watcher requires
"**compute.node.cpu.percent**" and "**cpu_util**" measurements to be collected
@@ -385,13 +386,13 @@ the OpenStack site.
You can use 'ceilometer meter-list' to list the available meters.
For more information:
-https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html
+http://docs.openstack.org/developer/ceilometer/measurements.html
Ceilometer is designed to collect measurements from OpenStack services and from
other external components. If you would like to add new meters to the currently
existing ones, you need to follow the documentation below:
-#. https://docs.openstack.org/ceilometer/latest/contributor/new_meters.html#meters
+#. http://docs.openstack.org/developer/ceilometer/new_meters.html
The Ceilometer collector uses a pluggable storage system, meaning that you can
pick any database system you prefer.


@@ -19,13 +19,13 @@ model. To enable the Watcher plugin with DevStack, add the following to the
`[[local|localrc]]` section of your controller's `local.conf` to enable the
Watcher plugin::
-enable_plugin watcher git://git.openstack.org/openstack/watcher
+enable_plugin watcher https://git.openstack.org/openstack/watcher
For more detailed instructions, see `Detailed DevStack Instructions`_. Check
out the `DevStack documentation`_ for more information regarding DevStack.
-.. _PluginModelDocs: https://docs.openstack.org/devstack/latest/plugins.html
-.. _DevStack documentation: https://docs.openstack.org/devstack/latest
+.. _PluginModelDocs: http://docs.openstack.org/developer/devstack/plugins.html
+.. _DevStack documentation: http://docs.openstack.org/developer/devstack/
Detailed DevStack Instructions
==============================


@@ -4,7 +4,7 @@
https://creativecommons.org/licenses/by/3.0/
-.. _watcher_development_environment:
+.. _watcher_developement_environment:
=========================================
Set up a development environment manually


@@ -22,7 +22,7 @@ Pre-requisites
We assume that you have set up a working Watcher development environment. So if
this not already the case, you can check out our documentation which explains
how to set up a :ref:`development environment
-<watcher_development_environment>`.
+<watcher_developement_environment>`.
.. _development environment:
@@ -34,7 +34,7 @@ First off, we need to create the project structure. To do so, we can use
generate the skeleton of our project::
$ virtualenv thirdparty
-$ . thirdparty/bin/activate
+$ source thirdparty/bin/activate
$ pip install cookiecutter
$ cookiecutter https://github.com/openstack-dev/cookiecutter


@@ -127,7 +127,7 @@ To get a better understanding on how to implement a more advanced goal, have
a look at the
:py:class:`watcher.decision_engine.goal.goals.ServerConsolidation` class.
-.. _pbr: https://docs.openstack.org/pbr/latest
+.. _pbr: http://docs.openstack.org/developer/pbr/
.. _implement_efficacy_specification:


@@ -145,7 +145,7 @@ Here below is how you would proceed to register ``DummyPlanner`` using pbr_:
watcher_planners =
dummy = third_party.dummy:DummyPlanner
-.. _pbr: https://docs.openstack.org/pbr/latest
+.. _pbr: http://docs.openstack.org/developer/pbr/
Using planner plugins


@@ -190,7 +190,7 @@ the :py:class:`~.DummyScoringContainer` and the way it is configured in
watcher_scoring_engine_containers =
new_scoring_container = thirdparty.new:NewContainer
-.. _pbr: https://docs.openstack.org/pbr/latest/
+.. _pbr: http://docs.openstack.org/developer/pbr/
Using scoring engine plugins


@@ -219,7 +219,7 @@ Here below is how you would proceed to register ``NewStrategy`` using pbr_:
To get a better understanding on how to implement a more advanced strategy,
have a look at the :py:class:`~.BasicConsolidation` class.
-.. _pbr: https://docs.openstack.org/pbr/latest
+.. _pbr: http://docs.openstack.org/developer/pbr/
Using strategy plugins
======================
@@ -264,11 +264,11 @@ requires new metrics not covered by Ceilometer, you can add them through a
.. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/decision_engine/cluster/history/ceilometer.py
-.. _`Ceilometer developer guide`: https://docs.openstack.org/ceilometer/latest/contributor/architecture.html#storing-accessing-the-data
-.. _`Ceilometer`: https://docs.openstack.org/ceilometer/latest
+.. _`Ceilometer developer guide`: http://docs.openstack.org/developer/ceilometer/architecture.html#storing-the-data
+.. _`Ceilometer`: http://docs.openstack.org/developer/ceilometer/
.. _`Monasca`: https://github.com/openstack/monasca-api/blob/master/docs/monasca-api-spec.md
-.. _`here`: https://docs.openstack.org/ceilometer/latest/contributor/install/dbreco.html#choosing-a-database-backend
-.. _`Ceilometer plugin`: https://docs.openstack.org/ceilometer/latest/contributor/plugins.html
+.. _`here`: http://docs.openstack.org/developer/ceilometer/install/dbreco.html#choosing-a-database-backend
+.. _`Ceilometer plugin`: http://docs.openstack.org/developer/ceilometer/plugins.html
.. _`Ceilosca`: https://github.com/openstack/monasca-ceilometer/blob/master/ceilosca/ceilometer/storage/impl_monasca.py
Read usage metrics using the Watcher Datasource Helper


@@ -41,18 +41,10 @@ you can run the desired test::
$ workon watcher
(watcher) $ tox -e py27 -- -r watcher.tests.api
-.. _os-testr: https://docs.openstack.org/os-testr/latest
+.. _os-testr: http://docs.openstack.org/developer/os-testr/
When you're done, deactivate the virtualenv::
$ deactivate
-.. _tempest_tests:
-Tempest tests
-=============
-Tempest tests for Watcher has been migrated to the external repo
-`watcher-tempest-plugin`_.
-.. _watcher-tempest-plugin: https://github.com/openstack/watcher-tempest-plugin
+.. include:: ../../../watcher_tempest_plugin/README.rst


@@ -83,7 +83,7 @@ Audit Template
Availability Zone
=================
-Please, read `the official OpenStack definition of an Availability Zone <https://docs.openstack.org/nova/latest/user/aggregates.html#availability-zones-azs>`_.
+Please, read `the official OpenStack definition of an Availability Zone <http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs>`_.
.. _cluster_definition:
@@ -115,8 +115,15 @@ Cluster Data Model (CDM)
Controller Node
===============
-Please, read `the official OpenStack definition of a Controller Node
-<https://docs.openstack.org/nova/latest/install/overview.html#controller>`_.
+A controller node is a machine that typically runs the following core OpenStack
+services:
+- Keystone: for identity and service management
+- Cinder scheduler: for volumes management
+- Glance controller: for image management
+- Neutron controller: for network management
+- Nova controller: for global compute resources management with services
+  such as nova-scheduler, nova-conductor and nova-network.
In many configurations, Watcher will reside on a controller node even if it
can potentially be hosted on a dedicated machine.
@@ -127,7 +134,7 @@ Compute node
============
Please, read `the official OpenStack definition of a Compute Node
-<https://docs.openstack.org/nova/latest/install/overview.html#compute>`_.
+<http://docs.openstack.org/ops-guide/arch-compute-nodes.html>`_.
.. _customer_definition:
@@ -160,7 +167,7 @@ Host Aggregate
==============
Please, read `the official OpenStack definition of a Host Aggregate
-<https://docs.openstack.org/nova/latest/user/aggregates.html>`_.
+<http://docs.openstack.org/developer/nova/aggregates.html>`_.
.. _instance_definition:
@@ -199,18 +206,18 @@ the Watcher system can act on.
Here are some examples of
:ref:`Managed resource types <managed_resource_definition>`:
-- `Nova Host Aggregates <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::HostAggregate>`_
-- `Nova Servers <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Server>`_
-- `Cinder Volumes <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Cinder::Volume>`_
-- `Neutron Routers <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Neutron::Router>`_
-- `Neutron Networks <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Neutron::Net>`_
-- `Neutron load-balancers <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Neutron::LoadBalancer>`_
-- `Sahara Hadoop Cluster <https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Sahara::Cluster>`_
+- `Nova Host Aggregates <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::HostAggregate>`_
+- `Nova Servers <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server>`_
+- `Cinder Volumes <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Cinder::Volume>`_
+- `Neutron Routers <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Router>`_
+- `Neutron Networks <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Net>`_
+- `Neutron load-balancers <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer>`_
+- `Sahara Hadoop Cluster <http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Sahara::Cluster>`_
- ...
-It can be any of `the official list of available resource types defined in
+It can be any of the `the official list of available resource types defined in
 OpenStack for HEAT
-<https://docs.openstack.org/heat/latest/template_guide/openstack.html>`_.
+<http://docs.openstack.org/developer/heat/template_guide/openstack.html>`_.
.. _efficacy_indicator_definition:


@@ -39,12 +39,12 @@
Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.
* Watcher interacts with other OpenStack projects via project clients, in order to instantiate these
-clients, Watcher requests new session from Identity service. In the `[watcher_clients_auth]` section,
+clients, Watcher requests new session from Identity service. In the `[watcher_client_auth]` section,
configure the identity service access to interact with other OpenStack project clients.
.. code-block:: ini
-[watcher_clients_auth]
+[watcher_client_auth]
...
auth_type = password
auth_url = http://controller:35357
@@ -56,16 +56,6 @@
Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.
-* In the `[api]` section, configure host option.
-.. code-block:: ini
-[api]
-...
-host = controller
-Replace controller with the IP address of the management network interface on your controller node, typically 10.0.0.11 for the first node in the example architecture.
* In the `[oslo_messaging_notifications]` section, configure the messaging driver.
.. code-block:: ini
@@ -78,4 +68,4 @@
.. code-block:: ini
su -s /bin/sh -c "watcher-db-manage --config-file /etc/watcher/watcher.conf upgrade"
su -s /bin/sh -c "watcher-db-manage --config-file /etc/watcher/watcher.conf create_schema"


@@ -36,4 +36,4 @@ https://docs.openstack.org/watcher/latest/glossary.html
This chapter assumes a working setup of OpenStack following the
`OpenStack Installation Tutorial
<https://docs.openstack.org/pike/install/>`_.
<https://docs.openstack.org/project-install-guide/ocata/>`_.


@@ -0,0 +1,35 @@
.. _install-obs:
Install and configure for openSUSE and SUSE Linux Enterprise
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Infrastructure
Optimization service for openSUSE Leap 42.1 and
SUSE Linux Enterprise Server 12 SP1.
.. include:: common_prerequisites.rst
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# zypper --quiet --non-interactive install
.. include:: common_configure.rst
Finalize installation
---------------------
Start the Infrastructure Optimization services and configure them to start when
the system boots:
.. code-block:: console
# systemctl enable openstack-watcher-api.service
# systemctl start openstack-watcher-api.service


@@ -15,5 +15,6 @@ Note that installation and configuration vary by distribution.
.. toctree::
:maxdepth: 2
install-obs.rst
install-rdo.rst
install-ubuntu.rst


@@ -6,4 +6,4 @@ Next steps
Your OpenStack environment now includes the watcher service.
To add additional services, see
https://docs.openstack.org/pike/install/.
https://docs.openstack.org/project-install-guide/ocata/.


@@ -5,7 +5,7 @@ Basic Offline Server Consolidation
Synopsis
--------
**display name**: ``Basic offline consolidation``
**display name**: ``basic``
**goal**: ``server_consolidation``
@@ -26,7 +26,7 @@ metric service name plugins comment
``cpu_util`` ceilometer_ none
============================ ============ ======= =======
.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
Cluster data model
******************


@@ -5,7 +5,7 @@ Outlet Temperature Based Strategy
Synopsis
--------
**display name**: ``Outlet temperature based strategy``
**display name**: ``outlet_temperature``
**goal**: ``thermal_optimization``
@@ -33,7 +33,7 @@ metric service name plugins comment
``hardware.ipmi.node.outlet_temperature`` ceilometer_ IPMI
========================================= ============ ======= =======
.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#ipmi-based-meters
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#ipmi-based-meters
Cluster data model
******************


@@ -1,100 +0,0 @@
======================
Saving Energy Strategy
======================
Synopsis
--------
**display name**: ``Saving Energy Strategy``
**goal**: ``saving_energy``
.. watcher-term:: watcher.decision_engine.strategy.strategies.saving_energy
Requirements
------------
This feature will use Ironic to do the power on/off actions, therefore
this feature requires that the ironic component is configured.
And the compute node should be managed by Ironic.
Ironic installation: https://docs.openstack.org/ironic/latest/install/index.html
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``change_node_power_state``
- .. watcher-term:: watcher.applier.actions.change_node_power_state.ChangeNodePowerState
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameter is:
====================== ====== ======= ======================================
parameter type default description
Value
====================== ====== ======= ======================================
``free_used_percent`` Number 10.0 a rational number, which describes the
the quotient of
min_free_hosts_num/nodes_with_VMs_num
``min_free_hosts_num`` Int 1 an int number describes minimum free
compute nodes
====================== ====== ======= ======================================
Efficacy Indicator
------------------
Energy saving strategy efficacy indicator is unclassified.
https://github.com/openstack/watcher/blob/master/watcher/decision_engine/goal/goals.py#L215-L218
Algorithm
---------
For more information on the Energy Saving Strategy please refer to:http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html
How to use it ?
---------------
step1: Add compute nodes info into ironic node management
.. code-block:: shell
$ ironic node-create -d pxe_ipmitool -i ipmi_address=10.43.200.184 \
ipmi_username=root -i ipmi_password=nomoresecret -e compute_node_id=3
step 2: Create audit to do optimization
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 saving_energy --strategy saving_energy
$ openstack optimize audit create -a at1
External Links
--------------
*Spec URL*
http://specs.openstack.org/openstack/watcher-specs/specs/pike/implemented/energy-saving-strategy.html


@@ -33,7 +33,7 @@ power ceilometer_ kwapi_ one point every 60s
======================= ============ ======= =======
.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
.. _monasca: https://github.com/openstack/monasca-agent/blob/master/docs/Libvirt.md
.. _kwapi: https://kwapi.readthedocs.io/en/latest/index.html


@@ -5,7 +5,7 @@ Uniform Airflow Migration Strategy
Synopsis
--------
**display name**: ``Uniform airflow migration strategy``
**display name**: ``uniform_airflow``
**goal**: ``airflow_optimization``


@@ -5,7 +5,7 @@ VM Workload Consolidation Strategy
Synopsis
--------
**display name**: ``VM Workload Consolidation Strategy``
**display name**: ``vm_workload_consolidation``
**goal**: ``vm_consolidation``
@@ -36,7 +36,7 @@ metric service name plugins comment
``cpu_util`` ceilometer_ none
============================ ============ ======= =======
.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
Cluster data model
******************


@@ -5,7 +5,7 @@ Watcher Overload standard deviation algorithm
Synopsis
--------
**display name**: ``Workload stabilization``
**display name**: ``workload_stabilization``
**goal**: ``workload_balancing``
@@ -28,7 +28,7 @@ metric service name plugins comment
``memory.resident`` ceilometer_ none
============================ ============ ======= =======
.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
.. _SNMP: http://docs.openstack.org/admin-guide/telemetry-measurements.html
Cluster data model
@@ -100,7 +100,7 @@ parameter type default Value description
into which the samples are
grouped for aggregation.
Watcher uses only the last
period of all received ones.
period of all recieved ones.
==================== ====== ===================== =============================
.. |metrics| replace:: ["cpu_util", "memory.resident"]


@@ -5,7 +5,7 @@ Workload Balance Migration Strategy
Synopsis
--------
**display name**: ``Workload Balance Migration Strategy``
**display name**: ``workload_balance``
**goal**: ``workload_balancing``
@@ -28,7 +28,7 @@ metric service name plugins comment
``memory.resident`` ceilometer_ none
======================= ============ ======= =======
.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
Cluster data model


@@ -39,10 +39,10 @@ named ``watcher``, or by using the `OpenStack CLI`_ ``openstack``.
If you want to deploy Watcher in Horizon, please refer to the `Watcher Horizon
plugin installation guide`_.
.. _`installation guide`: https://docs.openstack.org/python-watcherclient/latest
.. _`Watcher Horizon plugin installation guide`: https://docs.openstack.org/watcher-dashboard/latest/install/installation.html
.. _`OpenStack CLI`: https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html
.. _`Watcher CLI`: https://docs.openstack.org/python-watcherclient/latest/cli/index.html
.. _`installation guide`: http://docs.openstack.org/developer/python-watcherclient
.. _`Watcher Horizon plugin installation guide`: http://docs.openstack.org/developer/watcher-dashboard/deploy/installation.html
.. _`OpenStack CLI`: http://docs.openstack.org/developer/python-openstackclient/man/openstack.html
.. _`Watcher CLI`: http://docs.openstack.org/developer/python-watcherclient/index.html
Seeing what the Watcher CLI can do ?
------------------------------------


@@ -27,7 +27,7 @@ Structure
Useful links
------------
* How to install: https://docs.openstack.org/rally/latest/install_and_upgrade/install.html
* How to install: http://docs.openstack.org/developer/rally/install.html
* How to set Rally up and launch your first scenario: https://rally.readthedocs.io/en/latest/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html


@@ -22,8 +22,7 @@
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
import sys, os
from watcher import version as watcher_version
# If extensions (or modules to document with autodoc) are in another directory,
@@ -64,7 +63,7 @@ copyright = u'2016, Watcher developers'
# The short X.Y version.
version = watcher_version.version_info.release_string()
# The full version, including alpha/beta/rc tags.
release = watcher_version.version_string
release = watcher_version.version_info.version_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
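The two `release =` lines in this hunk differ by a missing call: referencing a method without parentheses binds the method object itself rather than its return value. A minimal illustration (the `Version` class here is a hypothetical stand-in for watcher's pbr version object, not its real API):

```python
class Version:
    # Stand-in for watcher's version_info object; only the method
    # name mirrors the hunk, the class itself is hypothetical.
    def version_string(self):
        return "1.2.3"

v = Version()

bound = v.version_string       # missing (): stores the bound method itself
called = v.version_string()    # with (): stores the string "1.2.3"

assert callable(bound) and bound() == "1.2.3"
assert called == "1.2.3"
```

Sphinx would render the bound-method repr, not a version number, which is why the parenthesized form is the meaningful one.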


@@ -21,7 +21,6 @@ Contents:
:maxdepth: 1
unreleased
pike
ocata
newton


@@ -1,6 +0,0 @@
===================================
Pike Series Release Notes
===================================
.. release-notes::
:branch: stable/pike


@@ -2,48 +2,48 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
apscheduler>=3.0.5 # MIT License
enum34>=1.0.4;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.16 # BSD
keystoneauth1>=3.2.0 # Apache-2.0
jsonschema<3.0.0,>=2.6.0 # MIT
keystonemiddleware>=4.17.0 # Apache-2.0
lxml!=3.7.0,>=3.4.1 # BSD
apscheduler # MIT License
enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.1 # BSD
keystoneauth1>=3.1.0 # Apache-2.0
jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT
keystonemiddleware>=4.12.0 # Apache-2.0
lxml!=3.7.0,>=2.3 # BSD
croniter>=0.3.4 # MIT License
oslo.concurrency>=3.20.0 # Apache-2.0
oslo.cache>=1.26.0 # Apache-2.0
oslo.config>=4.6.0 # Apache-2.0
oslo.context!=2.19.1,>=2.14.0 # Apache-2.0
oslo.db>=4.27.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.30.0 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.cache>=1.5.0 # Apache-2.0
oslo.config!=4.3.0,!=4.4.0,>=4.0.0 # Apache-2.0
oslo.context>=2.14.0 # Apache-2.0
oslo.db>=4.24.0 # Apache-2.0
oslo.i18n!=3.15.2,>=2.1.0 # Apache-2.0
oslo.log>=3.22.0 # Apache-2.0
oslo.messaging!=5.25.0,>=5.24.2 # Apache-2.0
oslo.policy>=1.23.0 # Apache-2.0
oslo.reports>=1.18.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service>=1.24.0 # Apache-2.0
oslo.utils>=3.28.0 # Apache-2.0
oslo.versionedobjects>=1.28.0 # Apache-2.0
oslo.reports>=0.6.0 # Apache-2.0
oslo.serialization!=2.19.1,>=1.10.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0
oslo.utils>=3.20.0 # Apache-2.0
oslo.versionedobjects>=1.17.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
PrettyTable<0.8,>=0.7.1 # BSD
voluptuous>=0.8.9 # BSD License
gnocchiclient>=3.3.1 # Apache-2.0
gnocchiclient>=2.7.0 # Apache-2.0
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient>=3.2.0 # Apache-2.0
python-cinderclient>=3.1.0 # Apache-2.0
python-glanceclient>=2.8.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.7.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
python-openstackclient>=3.12.0 # Apache-2.0
python-novaclient>=9.0.0 # Apache-2.0
python-openstackclient>=3.11.0 # Apache-2.0
python-ironicclient>=1.14.0 # Apache-2.0
six>=1.9.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0
taskflow>=2.7.0 # Apache-2.0
WebOb>=1.7.1 # MIT
WSME>=0.8.0 # MIT
WSME>=0.8 # MIT
networkx<2.0,>=1.10 # BSD


@@ -21,6 +21,7 @@ classifier =
[files]
packages =
watcher
watcher_tempest_plugin
data_files =
etc/ = etc/*
@@ -39,6 +40,9 @@ console_scripts =
watcher-applier = watcher.cmd.applier:main
watcher-sync = watcher.cmd.sync:main
tempest.test_plugins =
watcher_tests = watcher_tempest_plugin.plugin:WatcherTempestPlugin
watcher.database.migration_backend =
sqlalchemy = watcher.db.sqlalchemy.migration
@@ -99,6 +103,7 @@ autodoc_exclude_modules =
watcher.db.sqlalchemy.alembic.env
watcher.db.sqlalchemy.alembic.versions.*
watcher.tests.*
watcher_tempest_plugin.*
watcher.doc


@@ -3,24 +3,25 @@
# process, which may cause wedges in the gate later.
coverage!=4.4,>=4.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
doc8 # Apache-2.0
freezegun>=0.3.6 # Apache-2.0
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
mock>=2.0.0 # BSD
mock>=2.0 # BSD
oslotest>=1.10.0 # Apache-2.0
os-testr>=1.0.0 # Apache-2.0
os-testr>=0.8.0 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
# Doc requirements
openstackdocstheme>=1.17.0 # Apache-2.0
openstackdocstheme>=1.16.0 # Apache-2.0
sphinx>=1.6.2 # BSD
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
sphinxcontrib-pecanwsme>=0.8 # Apache-2.0
# releasenotes
reno>=2.5.0 # Apache-2.0
reno!=2.3.1,>=1.8.0 # Apache-2.0
# bandit
bandit>=1.1.0 # Apache-2.0


@@ -7,7 +7,7 @@ skipsdist = True
usedevelop = True
whitelist_externals = find
rm
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/pike} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt


@@ -448,7 +448,7 @@ class AuditTemplatesController(rest.RestController):
sort_key, sort_dir, expand=False,
resource_url=None):
api_utils.validate_search_filters(
filters, list(objects.audit_template.AuditTemplate.fields) +
filters, list(objects.audit_template.AuditTemplate.fields.keys()) +
["goal_uuid", "goal_name", "strategy_uuid", "strategy_name"])
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)


@@ -170,7 +170,7 @@ class GoalsController(rest.RestController):
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Goal.fields
sort_db_key = (sort_key if sort_key in objects.Goal.fields.keys()
else None)
marker_obj = None


@@ -104,7 +104,7 @@ class Service(base.APIBase):
def __init__(self, **kwargs):
super(Service, self).__init__()
fields = list(objects.Service.fields) + ['status']
fields = list(objects.Service.fields.keys()) + ['status']
self.fields = []
for field in fields:
self.fields.append(field)
@@ -194,7 +194,7 @@ class ServicesController(rest.RestController):
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Service.fields
sort_db_key = (sort_key if sort_key in objects.Service.fields.keys()
else None)
marker_obj = None


@@ -210,12 +210,12 @@ class StrategiesController(rest.RestController):
def _get_strategies_collection(self, filters, marker, limit, sort_key,
sort_dir, expand=False, resource_url=None):
api_utils.validate_search_filters(
filters, list(objects.strategy.Strategy.fields) +
filters, list(objects.strategy.Strategy.fields.keys()) +
["goal_uuid", "goal_name"])
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Strategy.fields
sort_db_key = (sort_key if sort_key in objects.Strategy.fields.keys()
else None)
marker_obj = None


@@ -57,7 +57,7 @@ def validate_sort_dir(sort_dir):
def validate_search_filters(filters, allowed_fields):
# Very lightweight validation for now
# todo: improve this (e.g. https://www.parse.com/docs/rest/guide/#queries)
for filter_name in filters:
for filter_name in filters.keys():
if filter_name not in allowed_fields:
raise wsme.exc.ClientSideError(
_("Invalid filter: %s") % filter_name)


@@ -42,7 +42,7 @@ class APISchedulingService(scheduling.BackgroundSchedulerService):
services = objects.service.Service.list(context)
for service in services:
result = self.get_service_status(context, service.id)
if service.id not in self.services_status:
if service.id not in self.services_status.keys():
self.services_status[service.id] = result
continue
if self.services_status[service.id] != result:


@@ -54,7 +54,6 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
applier.execute(self.action_plan_uuid)
action_plan.state = objects.action_plan.State.SUCCEEDED
action_plan.save()
notifications.action_plan.send_action_notification(
self.ctx, action_plan,
action=fields.NotificationAction.EXECUTION,
@@ -64,32 +63,17 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
LOG.exception(e)
action_plan.state = objects.action_plan.State.CANCELLED
self._update_action_from_pending_to_cancelled()
action_plan.save()
notifications.action_plan.send_cancel_notification(
self.ctx, action_plan,
action=fields.NotificationAction.CANCEL,
phase=fields.NotificationPhase.END)
except Exception as e:
LOG.exception(e)
action_plan = objects.ActionPlan.get_by_uuid(
self.ctx, self.action_plan_uuid, eager=True)
if action_plan.state == objects.action_plan.State.CANCELLING:
action_plan.state = objects.action_plan.State.FAILED
action_plan.save()
notifications.action_plan.send_cancel_notification(
self.ctx, action_plan,
action=fields.NotificationAction.CANCEL,
priority=fields.NotificationPriority.ERROR,
phase=fields.NotificationPhase.ERROR)
else:
action_plan.state = objects.action_plan.State.FAILED
action_plan.save()
notifications.action_plan.send_action_notification(
self.ctx, action_plan,
action=fields.NotificationAction.EXECUTION,
priority=fields.NotificationPriority.ERROR,
phase=fields.NotificationPhase.ERROR)
action_plan.state = objects.action_plan.State.FAILED
notifications.action_plan.send_action_notification(
self.ctx, action_plan,
action=fields.NotificationAction.EXECUTION,
priority=fields.NotificationPriority.ERROR,
phase=fields.NotificationPhase.ERROR)
finally:
action_plan.save()
def _update_action_from_pending_to_cancelled(self):
filters = {'action_plan_uuid': self.action_plan_uuid,


@@ -18,7 +18,6 @@
#
import enum
import time
from watcher._i18n import _
from watcher.applier.actions import base
@@ -88,39 +87,25 @@ class ChangeNodePowerState(base.BaseAction):
target_state = NodeState.POWERON.value
return self._node_manage_power(target_state)
def _node_manage_power(self, state, retry=60):
def _node_manage_power(self, state):
if state is None:
raise exception.IllegalArgumentException(
message=_("The target state is not defined"))
result = False
ironic_client = self.osc.ironic()
nova_client = self.osc.nova()
current_state = ironic_client.node.get(self.node_uuid).power_state
# power state: 'power on' or 'power off', if current node state
# is the same as state, just return True
if state in current_state:
return True
if state == NodeState.POWEROFF.value:
node_info = ironic_client.node.get(self.node_uuid).to_dict()
compute_node_id = node_info['extra']['compute_node_id']
compute_node = nova_client.hypervisors.get(compute_node_id)
compute_node = compute_node.to_dict()
if (compute_node['running_vms'] == 0):
ironic_client.node.set_power_state(
result = ironic_client.node.set_power_state(
self.node_uuid, state)
else:
ironic_client.node.set_power_state(self.node_uuid, state)
ironic_node = ironic_client.node.get(self.node_uuid)
while ironic_node.power_state == current_state and retry:
time.sleep(10)
retry -= 1
ironic_node = ironic_client.node.get(self.node_uuid)
if retry > 0:
return True
else:
return False
result = ironic_client.node.set_power_state(self.node_uuid, state)
return result
def pre_condition(self):
pass
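The retry loop in this hunk polls the node until its power state changes, sleeping between checks and giving up after a fixed number of attempts. A standalone sketch of that pattern under stated assumptions (`get_state` stands in for `ironic_client.node.get(node_uuid).power_state`; the retry count and interval mirror the hunk's 60 x 10s but are parameters here):

```python
import time

def wait_for_power_state(get_state, initial_state, retry=60, interval=10):
    """Poll until get_state() differs from initial_state.

    Returns True if the state changed within `retry` polls, else False.
    Sketch of the loop in _node_manage_power; get_state is a stand-in
    for the ironic client call, not a real ironicclient API.
    """
    while get_state() == initial_state and retry:
        time.sleep(interval)
        retry -= 1
    return retry > 0

# Example with a fake client whose reported state flips on the third poll:
states = iter(["power on", "power on", "power off"])
changed = wait_for_power_state(lambda: next(states), "power on", interval=0)
assert changed is True
```

Returning a boolean rather than the raw client response lets the caller treat a timed-out transition as a failed action, which is what the removed retry logic provided.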


@@ -124,8 +124,7 @@ class Migrate(base.BaseAction):
LOG.debug("Nova client exception occurred while live "
"migrating instance %s.Exception: %s" %
(self.instance_uuid, e))
except Exception as e:
LOG.exception(e)
except Exception:
LOG.critical("Unexpected error occurred. Migration failed for "
"instance %s. Leaving instance on previous "
"host.", self.instance_uuid)


@@ -57,7 +57,6 @@ class BaseWorkFlowEngine(loadable.Loadable):
self._applier_manager = applier_manager
self._action_factory = factory.ActionFactory()
self._osc = None
self._is_notified = False
@classmethod
def get_config_opts(cls):
@@ -93,17 +92,6 @@ class BaseWorkFlowEngine(loadable.Loadable):
db_action.save()
return db_action
def notify_cancel_start(self, action_plan_uuid):
action_plan = objects.ActionPlan.get_by_uuid(self.context,
action_plan_uuid,
eager=True)
if not self._is_notified:
self._is_notified = True
notifications.action_plan.send_cancel_notification(
self._context, action_plan,
action=fields.NotificationAction.CANCEL,
phase=fields.NotificationPhase.START)
@abc.abstractmethod
def execute(self, actions):
raise NotImplementedError()
@@ -169,7 +157,6 @@ class BaseTaskFlowActionContainer(flow_task.Task):
fields.NotificationPhase.START)
except exception.ActionPlanCancelled as e:
LOG.exception(e)
self.engine.notify_cancel_start(action_plan.uuid)
raise
except Exception as e:
LOG.exception(e)
@@ -231,7 +218,6 @@ class BaseTaskFlowActionContainer(flow_task.Task):
# taskflow will call revert for the action,
# we will redirect it to abort.
except eventlet.greenlet.GreenletExit:
self.engine.notify_cancel_start(action_plan_object.uuid)
raise exception.ActionPlanCancelled(uuid=action_plan_object.uuid)
except Exception as e:
@@ -255,7 +241,7 @@ class BaseTaskFlowActionContainer(flow_task.Task):
action_plan = objects.ActionPlan.get_by_id(
self.engine.context, self._db_action.action_plan_id, eager=True)
# NOTE: check if revert cause by cancel action plan or
# some other exception occurred during action plan execution
# some other exception occured during action plan execution
# if due to some other exception keep the flow intact.
if action_plan.state not in CANCEL_STATE:
self.do_revert()
@@ -263,42 +249,15 @@ class BaseTaskFlowActionContainer(flow_task.Task):
action_object = objects.Action.get_by_uuid(
self.engine.context, self._db_action.uuid, eager=True)
try:
if action_object.state == objects.action.State.ONGOING:
action_object.state = objects.action.State.CANCELLING
action_object.save()
notifications.action.send_cancel_notification(
self.engine.context, action_object,
fields.NotificationAction.CANCEL,
fields.NotificationPhase.START)
action_object = self.abort()
notifications.action.send_cancel_notification(
self.engine.context, action_object,
fields.NotificationAction.CANCEL,
fields.NotificationPhase.END)
if action_object.state == objects.action.State.PENDING:
notifications.action.send_cancel_notification(
self.engine.context, action_object,
fields.NotificationAction.CANCEL,
fields.NotificationPhase.START)
action_object.state = objects.action.State.CANCELLED
action_object.save()
notifications.action.send_cancel_notification(
self.engine.context, action_object,
fields.NotificationAction.CANCEL,
fields.NotificationPhase.END)
except Exception as e:
LOG.exception(e)
action_object.state = objects.action.State.FAILED
if action_object.state == objects.action.State.ONGOING:
action_object.state = objects.action.State.CANCELLING
action_object.save()
notifications.action.send_cancel_notification(
self.engine.context, action_object,
fields.NotificationAction.CANCEL,
fields.NotificationPhase.ERROR,
priority=fields.NotificationPriority.ERROR)
self.abort()
elif action_object.state == objects.action.State.PENDING:
action_object.state = objects.action.State.CANCELLED
action_object.save()
else:
pass
def abort(self, *args, **kwargs):
return self.do_abort(*args, **kwargs)
self.do_abort(*args, **kwargs)


@@ -34,7 +34,7 @@ class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
"""Taskflow as a workflow engine for Watcher
Full documentation on taskflow at
https://docs.openstack.org/taskflow/latest
http://docs.openstack.org/developer/taskflow/
"""
def decider(self, history):
@@ -45,7 +45,7 @@ class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
# (or whether the execution of v should be ignored,
# and therefore not executed). It is expected to take as single
# keyword argument history which will be the execution results of
# all u decidable links that have v as a target. It is expected
# all u decideable links that have v as a target. It is expected
# to return a single boolean
# (True to allow v execution or False to not).
return True
@@ -127,10 +127,8 @@ class TaskFlowActionContainer(base.BaseTaskFlowActionContainer):
return self.engine.notify(self._db_action,
objects.action.State.SUCCEEDED)
else:
self.engine.notify(self._db_action,
objects.action.State.FAILED)
raise exception.ActionExecutionFailure(
action_id=self._db_action.uuid)
return self.engine.notify(self._db_action,
objects.action.State.FAILED)
def do_post_execute(self):
LOG.debug("Post-condition action: %s", self.name)


@@ -15,6 +15,8 @@ from oslo_log import log as logging
from oslo_utils import timeutils
import six
from watcher.common import utils
LOG = logging.getLogger(__name__)
@@ -100,7 +102,7 @@ class RequestContext(context.RequestContext):
'domain_name': getattr(self, 'domain_name', None),
'auth_token_info': getattr(self, 'auth_token_info', None),
'is_admin': getattr(self, 'is_admin', None),
'timestamp': self.timestamp.isoformat() if hasattr(
'timestamp': utils.strtime(self.timestamp) if hasattr(
self, 'timestamp') else None,
'request_id': getattr(self, 'request_id', None),
})


@@ -435,10 +435,6 @@ class ActionDescriptionNotFound(ResourceNotFound):
msg_fmt = _("The action description %(action_id)s cannot be found.")
class ActionExecutionFailure(WatcherException):
msg_fmt = _("The action %(action_id)s execution failed.")
# Model
class ComputeResourceNotFound(WatcherException):


@@ -70,6 +70,9 @@ class NovaHelper(object):
def get_service(self, service_id):
return self.nova.services.find(id=service_id)
def get_flavor(self, flavor_id):
return self.nova.flavors.get(flavor_id)
def get_aggregate_list(self):
return self.nova.aggregates.list()
@@ -451,7 +454,8 @@ class NovaHelper(object):
"Instance %s found on host '%s'." % (instance_id, host_name))
instance.live_migrate(host=dest_hostname,
block_migration=block_migration)
block_migration=block_migration,
disk_over_commit=True)
instance = self.nova.servers.get(instance_id)
@@ -524,10 +528,10 @@ class NovaHelper(object):
instance_host = getattr(instance, 'OS-EXT-SRV-ATTR:host')
instance_status = getattr(instance, 'status')
# Abort live migration successful, action is cancelled
# Abort live migration successfull, action is cancelled
if instance_host == source and instance_status == 'ACTIVE':
return True
# Nova Unable to abort live migration, action is succeeded
# Nova Unable to abort live migration, action is succeded
elif instance_host == destination and instance_status == 'ACTIVE':
return False


@@ -49,7 +49,7 @@ def init(policy_file=None, rules=None,
"""
global _ENFORCER
if not _ENFORCER:
# https://docs.openstack.org/oslo.policy/latest/admin/index.html
# http://docs.openstack.org/developer/oslo.policy/usage.html
_ENFORCER = policy.Enforcer(CONF,
policy_file=policy_file,
rules=rules,


@@ -26,6 +26,7 @@ from croniter import croniter
from jsonschema import validators
from oslo_log import log as logging
from oslo_utils import strutils
from oslo_utils import timeutils
from oslo_utils import uuidutils
import six
@@ -64,6 +65,7 @@ class Struct(dict):
generate_uuid = uuidutils.generate_uuid
is_uuid_like = uuidutils.is_uuid_like
is_int_like = strutils.is_int_like
strtime = timeutils.strtime
def is_cron_like(value):


@@ -37,11 +37,13 @@ from watcher.conf import nova_client
from watcher.conf import paths
from watcher.conf import planner
from watcher.conf import service
from watcher.conf import utils
CONF = cfg.CONF
service.register_opts(CONF)
api.register_opts(CONF)
utils.register_opts(CONF)
paths.register_opts(CONF)
exception.register_opts(CONF)
db.register_opts(CONF)


@@ -30,6 +30,7 @@ from watcher.conf import neutron_client as conf_neutron_client
from watcher.conf import nova_client as conf_nova_client
from watcher.conf import paths
from watcher.conf import planner as conf_planner
from watcher.conf import utils
def list_opts():
@@ -38,7 +39,8 @@ def list_opts():
('DEFAULT',
(conf_api.AUTH_OPTS +
exception.EXC_LOG_OPTS +
paths.PATH_OPTS)),
paths.PATH_OPTS +
utils.UTILS_OPTS)),
('api', conf_api.API_SERVICE_OPTS),
('database', db.SQL_OPTS),
('watcher_planner', conf_planner.WATCHER_PLANNER_OPTS),

View File

@@ -26,10 +26,10 @@ GNOCCHI_CLIENT_OPTS = [
default='1',
help='Version of Gnocchi API to use in gnocchiclient.'),
cfg.StrOpt('endpoint_type',
default='public',
default='internalURL',
help='Type of endpoint to use in gnocchi client.'
'Supported values: internal, public, admin'
'The default is public.'),
'Supported values: internalURL, publicURL, adminURL'
'The default is internalURL.'),
cfg.IntOpt('query_max_retries',
default=10,
help='How many times Watcher is trying to query again'),


@@ -23,7 +23,7 @@ nova_client = cfg.OptGroup(name='nova_client',
NOVA_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2.53',
default='2',
help='Version of Nova API to use in novaclient.'),
cfg.StrOpt('endpoint_type',
default='publicURL',

watcher/conf/utils.py Normal file

@@ -0,0 +1,36 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Intel Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
UTILS_OPTS = [
cfg.StrOpt('rootwrap_config',
default="/etc/watcher/rootwrap.conf",
help='Path to the rootwrap configuration file to use for '
'running commands as root.'),
cfg.StrOpt('tempdir',
help='Explicitly specify the temporary working directory.'),
]
def register_opts(conf):
conf.register_opts(UTILS_OPTS)
def list_opts():
return [('DEFAULT', UTILS_OPTS)]


@@ -6,29 +6,25 @@ Create Date: 2017-07-13 20:33:01.473711
"""
from alembic import op
import oslo_db
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'd09a5945e4a0'
down_revision = 'd098df6021e2'
from alembic import op
import oslo_db
import sqlalchemy as sa
def upgrade():
op.create_table(
'action_descriptions',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('deleted_at', sa.DateTime(), nullable=True),
sa.Column('deleted', oslo_db.sqlalchemy.types.SoftDeleteInteger(),
nullable=True),
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('action_type', sa.String(length=255), nullable=False),
sa.Column('description', sa.String(length=255), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('action_type',
name='uniq_action_description0action_type')
op.create_table('action_descriptions',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('deleted_at', sa.DateTime(), nullable=True),
sa.Column('deleted', oslo_db.sqlalchemy.types.SoftDeleteInteger(), nullable=True),
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('action_type', sa.String(length=255), nullable=False),
sa.Column('description', sa.String(length=255), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('action_type', name='uniq_action_description0action_type')
)


@@ -69,7 +69,7 @@ def create_schema(config=None, engine=None):
# schema, it will only add the new tables, but leave
# existing as is. So we should avoid of this situation.
if version(engine=engine) is not None:
raise db_exc.DBMigrationError(
raise db_exc.DbMigrationError(
_("Watcher database schema is already under version control; "
"use upgrade() instead"))


@@ -96,10 +96,9 @@ class AuditHandler(BaseAuditHandler):
raise
def update_audit_state(self, audit, state):
if audit.state != state:
LOG.debug("Update audit state: %s", state)
audit.state = state
audit.save()
LOG.debug("Update audit state: %s", state)
audit.state = state
audit.save()
def check_ongoing_action_plans(self, request_context):
a_plan_filters = {'state': objects.action_plan.State.ONGOING}


@@ -62,11 +62,12 @@ class ContinuousAuditHandler(base.AuditHandler):
if objects.audit.AuditStateTransitionManager().is_inactive(audit):
# if audit isn't in active states, audit's job must be removed to
# prevent using of inactive audit in future.
if self.scheduler.get_jobs():
[job for job in self.scheduler.get_jobs()
if job.name == 'execute_audit' and
job.args[0].uuid == audit.uuid][0].remove()
return True
jobs = [job for job in self.scheduler.get_jobs()
if job.name == 'execute_audit' and
job.args[0].uuid == audit.uuid]
if jobs:
jobs[0].remove()
return True
return False
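The fix above replaces an unguarded `[0]` index with a filtered list plus an emptiness check. A minimal sketch of that pattern, using a hypothetical `FakeJob` stand-in for the scheduler's job objects (not Watcher's actual classes):

```python
# Hypothetical stand-in for a scheduler job; only the attributes the audit
# handler inspects (name, a uuid) and remove() are modeled here.
class FakeJob:
    def __init__(self, name, uuid):
        self.name = name
        self.uuid = uuid
        self.removed = False

    def remove(self):
        self.removed = True


def remove_audit_job(jobs, audit_uuid):
    """Remove the execute_audit job for audit_uuid, if one is scheduled.

    Indexing [0] on an empty list raises IndexError, which is exactly the
    bug the patch guards against: filter first, remove only on a match.
    """
    matches = [job for job in jobs
               if job.name == 'execute_audit' and job.uuid == audit_uuid]
    if matches:
        matches[0].remove()
        return True
    return False


jobs = [FakeJob('execute_audit', 'abc'), FakeJob('other', 'abc')]
removed = remove_audit_job(jobs, 'abc')  # guarded removal succeeds
missing = remove_audit_job([], 'abc')    # an empty job list no longer raises
```

With the guard in place, an audit whose job was never scheduled simply returns False instead of crashing the handler.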


@@ -72,6 +72,38 @@ class IndicatorSpecification(object):
return str(self.to_dict())
class AverageCpuLoad(IndicatorSpecification):
def __init__(self):
super(AverageCpuLoad, self).__init__(
name="avg_cpu_percent",
description=_("Average CPU load as a percentage of the CPU time."),
unit="%",
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0, max=100), required=True)
class MigrationEfficacy(IndicatorSpecification):
def __init__(self):
super(MigrationEfficacy, self).__init__(
name="migration_efficacy",
description=_("Represents the percentage of released nodes out of "
"the total number of migrations."),
unit="%",
required=True
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0, max=100), required=True)
class ComputeNodesCount(IndicatorSpecification):
def __init__(self):
super(ComputeNodesCount, self).__init__(


@@ -27,7 +27,7 @@ It is represented as a set of :ref:`Managed resources
<managed_resource_definition>` (which may be a simple tree or a flat list of
key-value pairs) which enables Watcher :ref:`Strategies <strategy_definition>`
to know the current relationships between the different :ref:`resources
<managed_resource_definition>` of the :ref:`Cluster <cluster_definition>`
<managed_resource_definition>`) of the :ref:`Cluster <cluster_definition>`
during an :ref:`Audit <audit_definition>` and enables the :ref:`Strategy
<strategy_definition>` to request information such as:


@@ -117,7 +117,7 @@ class ModelBuilder(object):
# cpu_id, cpu_node = self.build_cpu_compute_node(base_id, node)
# self.add_node(cpu_id, cpu_node)
# # Connect the base compute node to the dependent nodes.
# # Connect the base compute node to the dependant nodes.
# self.add_edges_from([(base_id, disk_id), (base_id, mem_id),
# (base_id, cpu_id), (base_id, net_id)],
# label="contains")
@@ -227,14 +227,14 @@ class ModelBuilder(object):
:param instance: Nova VM object.
:return: A instance node for the graph.
"""
flavor = instance.flavor
flavor = self.nova_helper.get_flavor(instance.flavor["id"])
instance_attributes = {
"uuid": instance.id,
"human_id": instance.human_id,
"memory": flavor["ram"],
"disk": flavor["disk"],
"disk_capacity": flavor["disk"],
"vcpus": flavor["vcpus"],
"memory": flavor.ram,
"disk": flavor.disk,
"disk_capacity": flavor.disk,
"vcpus": flavor.vcpus,
"state": getattr(instance, "OS-EXT-STS:vm_state"),
"metadata": instance.metadata}


@@ -33,7 +33,7 @@ class ServiceState(enum.Enum):
class ComputeNode(compute_resource.ComputeResource):
fields = {
"id": wfields.StringField(),
"id": wfields.NonNegativeIntegerField(),
"hostname": wfields.StringField(),
"status": wfields.StringField(default=ServiceState.ENABLED.value),
"state": wfields.StringField(default=ServiceState.ONLINE.value),


@@ -249,9 +249,7 @@ class InstanceCreated(VersionedNotificationEndpoint):
event_type='instance.update',
# To be "fully" created, an instance transitions
# from the 'building' state to the 'active' one.
# See https://docs.openstack.org/nova/latest/reference/
# vm-states.html
# See http://docs.openstack.org/developer/nova/vmstates.html
payload={
'nova_object.data': {
'state': element.InstanceState.ACTIVE.value,


@@ -36,10 +36,12 @@ class DefaultScope(base.BaseScope):
"host_aggregates": {
"type": "array",
"items": {
"anyOf": [
{"$ref": "#/host_aggregates/id"},
{"$ref": "#/host_aggregates/name"},
]
"type": "object",
"properties": {
"anyOf": [
{"type": ["string", "number"]}
]
},
}
},
"availability_zones": {
@@ -67,8 +69,7 @@ class DefaultScope(base.BaseScope):
"uuid": {
"type": "string"
}
},
"additionalProperties": False
}
}
},
"compute_nodes": {
@@ -79,17 +80,18 @@ class DefaultScope(base.BaseScope):
"name": {
"type": "string"
}
},
"additionalProperties": False
}
}
},
"host_aggregates": {
"type": "array",
"items": {
"anyOf": [
{"$ref": "#/host_aggregates/id"},
{"$ref": "#/host_aggregates/name"},
]
"type": "object",
"properties": {
"anyOf": [
{"type": ["string", "number"]}
]
},
}
},
"instance_metadata": {
@@ -104,29 +106,7 @@ class DefaultScope(base.BaseScope):
}
},
"additionalProperties": False
},
"host_aggregates": {
"id": {
"properties": {
"id": {
"oneOf": [
{"type": "integer"},
{"enum": ["*"]}
]
}
},
"additionalProperties": False
},
"name": {
"properties": {
"name": {
"type": "string"
}
},
"additionalProperties": False
}
},
"additionalProperties": False
}
}
def __init__(self, scope, config, osc=None):


@@ -170,7 +170,7 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
self._ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
@@ -180,7 +180,7 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
@property
def monasca(self):
if self._monasca is None:
self.monasca = mon.MonascaHelper(osc=self.osc)
self._monasca = mon.MonascaHelper(osc=self.osc)
return self._monasca
@monasca.setter
@@ -190,21 +190,13 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
def get_available_compute_nodes(self):
default_node_scope = [element.ServiceState.ENABLED.value,
element.ServiceState.DISABLED.value]
return {uuid: cn for uuid, cn in
self.compute_model.get_all_compute_nodes().items()
if cn.state == element.ServiceState.ONLINE.value and
cn.status in default_node_scope}
def check_migration(self, source_node, destination_node,
instance_to_migrate):
"""Check if the migration is possible
@@ -436,7 +428,7 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
def compute_score_of_nodes(self):
"""Calculate score of nodes based on load by VMs"""
score = []
for node in self.get_available_compute_nodes().values():
for node in self.compute_model.get_all_compute_nodes().values():
if node.status == element.ServiceState.ENABLED.value:
self.number_of_enabled_nodes += 1
@@ -510,7 +502,7 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
if not self.compute_model:
raise exception.ClusterStateNotDefined()
if len(self.get_available_compute_nodes()) == 0:
if len(self.compute_model.get_all_compute_nodes()) == 0:
raise exception.ClusterEmpty()
if self.compute_model.stale:
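Several hunks above change assignments like `self.ceilometer = ...` to `self._ceilometer = ...` inside property getters. A minimal illustration (not Watcher's actual classes) of why the backing attribute matters: when a property defines no setter, assigning through the property name from inside its own getter raises AttributeError, while assigning to the backing attribute memoizes correctly.

```python
class Strategy:
    """Correct lazy property: memoize via the backing attribute."""
    def __init__(self):
        self._helper = None

    @property
    def helper(self):
        if self._helper is None:
            self._helper = object()  # stands in for e.g. a metrics helper
        return self._helper


class BrokenStrategy:
    """Buggy variant: assigns through the setter-less property itself."""
    def __init__(self):
        self._helper = None

    @property
    def helper(self):
        if self._helper is None:
            self.helper = object()  # AttributeError: property has no setter
        return self._helper


s = Strategy()
first, second = s.helper, s.helper  # same memoized object both times

try:
    BrokenStrategy().helper
    broken_raised = False
except AttributeError:
    broken_raised = True
```

In the Watcher classes that do define a matching `@x.setter`, the old spelling happened to work by routing through the setter; the patch makes every getter use the backing attribute consistently.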


@@ -50,7 +50,7 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
self._ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter


@@ -140,7 +140,7 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
self._ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
@@ -150,7 +150,7 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
@@ -171,13 +171,6 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
choices=["ceilometer", "gnocchi"])
]
def get_available_compute_nodes(self):
default_node_scope = [element.ServiceState.ENABLED.value]
return {uuid: cn for uuid, cn in
self.compute_model.get_all_compute_nodes().items()
if cn.state == element.ServiceState.ONLINE.value and
cn.status in default_node_scope}
def calc_used_resource(self, node):
"""Calculate the used vcpus, memory and disk based on VM flavors"""
instances = self.compute_model.get_node_instances(node)
@@ -193,7 +186,7 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
def group_hosts_by_outlet_temp(self):
"""Group hosts based on outlet temp meters"""
nodes = self.get_available_compute_nodes()
nodes = self.compute_model.get_all_compute_nodes()
size_cluster = len(nodes)
if size_cluster == 0:
raise wexc.ClusterEmpty()


@@ -29,42 +29,6 @@ LOG = log.getLogger(__name__)
class SavingEnergy(base.SavingEnergyBaseStrategy):
"""Saving Energy Strategy
Saving Energy Strategy together with VM Workload Consolidation Strategy
can perform the Dynamic Power Management (DPM) functionality, which tries
to save power by dynamically consolidating workloads even further during
periods of low resource utilization. Virtual machines are migrated onto
fewer hosts and the unneeded hosts are powered off.
After consolidation, Saving Energy Strategy produces a solution of powering
off/on according to the following detailed policy:
In this policy, a preset number(min_free_hosts_num) is given by user, and
this min_free_hosts_num describes minimum free compute nodes that users
expect to have, where "free compute nodes" refers to those nodes unused
but still powered on.
If the actual number of unused nodes(in power-on state) is larger than
the given number, randomly select the redundant nodes and power off them;
If the actual number of unused nodes(in poweron state) is smaller than
the given number and there are spare unused nodes(in poweroff state),
randomly select some nodes(unused,poweroff) and power on them.
In this policy, in order to calculate the min_free_hosts_num,
users must provide two parameters:
* One parameter("min_free_hosts_num") is a constant int number.
This number should be int type and larger than zero.
* The other parameter("free_used_percent") is a percentage number, which
describes the quotient of min_free_hosts_num/nodes_with_VMs_num,
where nodes_with_VMs_num is the number of nodes with VMs running on it.
This parameter is used to calculate a dynamic min_free_hosts_num.
The nodes with VMs refer to those nodes with VMs running on it.
Then choose the larger one as the final min_free_hosts_num.
"""
def __init__(self, config, osc=None):


@@ -130,7 +130,7 @@ class UniformAirflow(base.BaseStrategy):
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
self._ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
@@ -140,7 +140,7 @@ class UniformAirflow(base.BaseStrategy):
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
@@ -214,13 +214,6 @@ class UniformAirflow(base.BaseStrategy):
choices=["ceilometer", "gnocchi"])
]
def get_available_compute_nodes(self):
default_node_scope = [element.ServiceState.ENABLED.value]
return {uuid: cn for uuid, cn in
self.compute_model.get_all_compute_nodes().items()
if cn.state == element.ServiceState.ONLINE.value and
cn.status in default_node_scope}
def calculate_used_resource(self, node):
"""Compute the used vcpus, memory and disk based on instance flavors"""
instances = self.compute_model.get_node_instances(node)
@@ -341,7 +334,7 @@ class UniformAirflow(base.BaseStrategy):
def group_hosts_by_airflow(self):
"""Group hosts based on airflow meters"""
nodes = self.get_available_compute_nodes()
nodes = self.compute_model.get_all_compute_nodes()
if not nodes:
raise wexc.ClusterEmpty()
overload_hosts = []


@@ -118,7 +118,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
self._ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
@@ -128,7 +128,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
@@ -169,14 +169,6 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
choices=["ceilometer", "gnocchi"])
]
def get_available_compute_nodes(self):
default_node_scope = [element.ServiceState.ENABLED.value,
element.ServiceState.DISABLED.value]
return {uuid: cn for uuid, cn in
self.compute_model.get_all_compute_nodes().items()
if cn.state == element.ServiceState.ONLINE.value and
cn.status in default_node_scope}
def get_instance_state_str(self, instance):
"""Get instance state in string format.
@@ -281,7 +273,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
:return: None
"""
for node in self.get_available_compute_nodes().values():
for node in self.compute_model.get_all_compute_nodes().values():
if (len(self.compute_model.get_node_instances(node)) == 0 and
node.status !=
element.ServiceState.DISABLED.value):
@@ -430,7 +422,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
RCU is an average of relative utilizations (rhu) of active nodes.
:return: {'cpu': <0,1>, 'ram': <0,1>, 'disk': <0,1>}
"""
nodes = self.get_available_compute_nodes().values()
nodes = self.compute_model.get_all_compute_nodes().values()
rcu = {}
counters = {}
for node in nodes:
@@ -542,7 +534,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
:param cc: dictionary containing resource capacity coefficients
"""
sorted_nodes = sorted(
self.get_available_compute_nodes().values(),
self.compute_model.get_all_compute_nodes().values(),
key=lambda x: self.get_node_utilization(x)['cpu'])
for node in reversed(sorted_nodes):
if self.is_overloaded(node, cc):
@@ -575,7 +567,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
:param cc: dictionary containing resource capacity coefficients
"""
sorted_nodes = sorted(
self.get_available_compute_nodes().values(),
self.compute_model.get_all_compute_nodes().values(),
key=lambda x: self.get_node_utilization(x)['cpu'])
asc = 0
for node in sorted_nodes:
@@ -638,7 +630,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
rcu_after = self.get_relative_cluster_utilization()
info = {
"compute_nodes_count": len(
self.get_available_compute_nodes()),
self.compute_model.get_all_compute_nodes()),
'number_of_migrations': self.number_of_migrations,
'number_of_released_nodes':
self.number_of_released_nodes,
@@ -651,7 +643,7 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
def post_execute(self):
self.solution.set_efficacy_indicators(
compute_nodes_count=len(
self.get_available_compute_nodes()),
self.compute_model.get_all_compute_nodes()),
released_compute_nodes_count=self.number_of_released_nodes,
instance_migrations_count=self.number_of_migrations,
)


@@ -117,7 +117,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
self._ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
@@ -127,7 +127,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
@@ -191,13 +191,6 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
choices=["ceilometer", "gnocchi"])
]
def get_available_compute_nodes(self):
default_node_scope = [element.ServiceState.ENABLED.value]
return {uuid: cn for uuid, cn in
self.compute_model.get_all_compute_nodes().items()
if cn.state == element.ServiceState.ONLINE.value and
cn.status in default_node_scope}
def calculate_used_resource(self, node):
"""Calculate the used vcpus, memory and disk based on VM flavors"""
instances = self.compute_model.get_node_instances(node)
@@ -292,7 +285,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
and also generate the instance workload map.
"""
nodes = self.get_available_compute_nodes()
nodes = self.compute_model.get_all_compute_nodes()
cluster_size = len(nodes)
if not nodes:
raise wexc.ClusterEmpty()


@@ -179,13 +179,13 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
self._ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@property
def nova(self):
if self._nova is None:
self.nova = self.osc.nova()
self._nova = self.osc.nova()
return self._nova
@nova.setter
@@ -199,7 +199,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
@@ -252,7 +252,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
"No values returned by %(resource_id)s "
"for %(metric_name)s" % dict(
resource_id=instance.uuid, metric_name=meter))
avg_meter = 0
return
if meter == 'cpu_util':
avg_meter /= float(100)
instance_load[meter] = avg_meter
@@ -308,12 +308,10 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
)
if avg_meter is None:
if meter_name == 'hardware.memory.used':
avg_meter = node.memory
if meter_name == 'compute.node.cpu.percent':
avg_meter = 1
LOG.warning('No values returned by node %s for %s',
node_id, meter_name)
del hosts_load[node_id]
break
else:
if meter_name == 'hardware.memory.used':
avg_meter /= oslo_utils.units.Ki
@@ -362,10 +360,12 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
migration_case = []
new_hosts = copy.deepcopy(hosts)
instance_load = self.get_instance_load(instance)
if not instance_load:
return
s_host_vcpus = new_hosts[src_node.uuid]['vcpus']
d_host_vcpus = new_hosts[dst_node.uuid]['vcpus']
for metric in self.metrics:
if metric is 'cpu_util':
if metric == 'cpu_util':
new_hosts[src_node.uuid][metric] -= (
self.transform_instance_cpu(instance_load, s_host_vcpus))
new_hosts[dst_node.uuid][metric] += (
@@ -408,6 +408,8 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
dst_node = self.compute_model.get_node_by_uuid(dst_host)
sd_case = self.calculate_migration_case(
hosts, instance, src_node, dst_node)
if sd_case is None:
break
weighted_sd = self.calculate_weighted_sd(sd_case[:-1])
@@ -416,6 +418,8 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
'host': dst_node.uuid, 'value': weighted_sd,
's_host': src_node.uuid, 'instance': instance.uuid}
instance_host_map.append(min_sd_case)
if sd_case is None:
continue
return sorted(instance_host_map, key=lambda x: x['value'])
def check_threshold(self):
@@ -424,7 +428,12 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
normalized_load = self.normalize_hosts_load(hosts_load)
for metric in self.metrics:
metric_sd = self.get_sd(normalized_load, metric)
LOG.info("Standard deviation for %s is %s."
% (metric, metric_sd))
if metric_sd > float(self.thresholds[metric]):
LOG.info("Standard deviation of %s exceeds"
" appropriate threshold %s."
% (metric, metric_sd))
return self.simulate_migrations(hosts_load)
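An earlier hunk in this file replaces `metric is 'cpu_util'` with `metric == 'cpu_util'`. `is` tests object identity, not value, and CPython only interns some string literals, so two equal strings need not be the same object:

```python
literal = 'cpu_util'
built = '_'.join(['cpu', 'util'])  # equal contents, freshly allocated object

values_equal = built == literal    # value comparison: reliable
same_object = built is literal     # identity: False in CPython for join()
```

Comparing strings with `is` therefore works only by accident of interning; `==` is the correct comparison.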
def add_migration(self,


@@ -120,21 +120,6 @@ class ActionExecutionPayload(ActionPayload):
**kwargs)
@base.WatcherObjectRegistry.register_notification
class ActionCancelPayload(ActionPayload):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'fault': wfields.ObjectField('ExceptionPayload', nullable=True),
}
def __init__(self, action, action_plan, **kwargs):
super(ActionCancelPayload, self).__init__(
action=action,
action_plan=action_plan,
**kwargs)
@base.WatcherObjectRegistry.register_notification
class ActionDeletePayload(ActionPayload):
# Version 1.0: Initial version
@@ -193,19 +178,6 @@ class ActionDeleteNotification(notificationbase.NotificationBase):
}
@notificationbase.notification_sample('action-cancel-error.json')
@notificationbase.notification_sample('action-cancel-end.json')
@notificationbase.notification_sample('action-cancel-start.json')
@base.WatcherObjectRegistry.register_notification
class ActionCancelNotification(notificationbase.NotificationBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'payload': wfields.ObjectField('ActionCancelPayload')
}
def _get_action_plan_payload(action):
action_plan = None
strategy_uuid = None
@@ -328,33 +300,3 @@ def send_execution_notification(context, action, notification_action, phase,
payload=versioned_payload)
notification.emit(context)
def send_cancel_notification(context, action, notification_action, phase,
priority=wfields.NotificationPriority.INFO,
service='infra-optim', host=None):
"""Emit an action cancel notification."""
action_plan_payload = _get_action_plan_payload(action)
fault = None
if phase == wfields.NotificationPhase.ERROR:
fault = exception_notifications.ExceptionPayload.from_exception()
versioned_payload = ActionCancelPayload(
action=action,
action_plan=action_plan_payload,
fault=fault,
)
notification = ActionCancelNotification(
priority=priority,
event_type=notificationbase.EventType(
object='action',
action=notification_action,
phase=phase),
publisher=notificationbase.NotificationPublisher(
host=host or CONF.host,
binary=service),
payload=versioned_payload)
notification.emit(context)


@@ -167,22 +167,6 @@ class ActionPlanDeletePayload(ActionPlanPayload):
strategy=strategy)
@base.WatcherObjectRegistry.register_notification
class ActionPlanCancelPayload(ActionPlanPayload):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'fault': wfields.ObjectField('ExceptionPayload', nullable=True),
}
def __init__(self, action_plan, audit, strategy, **kwargs):
super(ActionPlanCancelPayload, self).__init__(
action_plan=action_plan,
audit=audit,
strategy=strategy,
**kwargs)
@notificationbase.notification_sample('action_plan-execution-error.json')
@notificationbase.notification_sample('action_plan-execution-end.json')
@notificationbase.notification_sample('action_plan-execution-start.json')
@@ -229,19 +213,6 @@ class ActionPlanDeleteNotification(notificationbase.NotificationBase):
}
@notificationbase.notification_sample('action_plan-cancel-error.json')
@notificationbase.notification_sample('action_plan-cancel-end.json')
@notificationbase.notification_sample('action_plan-cancel-start.json')
@base.WatcherObjectRegistry.register_notification
class ActionPlanCancelNotification(notificationbase.NotificationBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'payload': wfields.ObjectField('ActionPlanCancelPayload')
}
def _get_common_payload(action_plan):
audit = None
strategy = None
@@ -367,34 +338,3 @@ def send_action_notification(context, action_plan, action, phase=None,
payload=versioned_payload)
notification.emit(context)
def send_cancel_notification(context, action_plan, action, phase=None,
priority=wfields.NotificationPriority.INFO,
service='infra-optim', host=None):
"""Emit an action_plan cancel notification."""
audit_payload, strategy_payload = _get_common_payload(action_plan)
fault = None
if phase == wfields.NotificationPhase.ERROR:
fault = exception_notifications.ExceptionPayload.from_exception()
versioned_payload = ActionPlanCancelPayload(
action_plan=action_plan,
audit=audit_payload,
strategy=strategy_payload,
fault=fault,
)
notification = ActionPlanCancelNotification(
priority=priority,
event_type=notificationbase.EventType(
object='action_plan',
action=action,
phase=phase),
publisher=notificationbase.NotificationPublisher(
host=host or CONF.host,
binary=service),
payload=versioned_payload)
notification.emit(context)


@@ -153,10 +153,7 @@ class NotificationAction(BaseWatcherEnum):
PLANNER = 'planner'
EXECUTION = 'execution'
CANCEL = 'cancel'
ALL = (CREATE, UPDATE, EXCEPTION, DELETE, STRATEGY, PLANNER, EXECUTION,
CANCEL)
ALL = (CREATE, UPDATE, EXCEPTION, DELETE, STRATEGY, PLANNER, EXECUTION)
class NotificationPriorityField(BaseEnumField):


@@ -41,7 +41,7 @@ def datetime_or_none(value, tzinfo_aware=False):
# NOTE(danms): Legacy objects from sqlalchemy are stored in UTC,
# but are returned without a timezone attached.
# As a transitional aid, assume a tz-naive object is in UTC.
value = value.replace(tzinfo=iso8601.UTC)
value = value.replace(tzinfo=iso8601.iso8601.Utc())
elif not tzinfo_aware:
value = value.replace(tzinfo=None)
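The hunk above swaps `iso8601.UTC` for the older `iso8601.iso8601.Utc()` spelling, matching the iso8601 release on this branch. The transitional logic itself, assuming a tz-naive value is UTC, can be sketched with the stdlib (`timezone.utc` standing in for the iso8601 object):

```python
from datetime import datetime, timezone


def datetime_or_none(value, tzinfo_aware=False):
    """Stdlib mirror of the field coercion shown in the diff above."""
    if value is None:
        return None
    if value.tzinfo is None:
        # Transitional aid: assume a tz-naive object is in UTC.
        value = value.replace(tzinfo=timezone.utc)
    elif not tzinfo_aware:
        value = value.replace(tzinfo=None)
    return value


as_aware = datetime_or_none(datetime(2018, 1, 1, 12, 0), tzinfo_aware=True)
as_naive = datetime_or_none(
    datetime(2018, 1, 1, 12, 0, tzinfo=timezone.utc), tzinfo_aware=False)
```

Naive values coming out of SQLAlchemy get UTC attached; aware values are stripped back to naive when the caller does not want tz-aware objects.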


@@ -511,7 +511,7 @@ class TestPost(FunctionalTestWithSetup):
response.json['created_at']).replace(tzinfo=None)
self.assertEqual(test_time, return_created_at)
def test_create_audit_template_validation_with_aggregates(self):
def test_create_audit_template_vlidation_with_aggregates(self):
scope = [{'host_aggregates': [{'id': '*'}]},
{'availability_zones': [{'name': 'AZ1'},
{'name': 'AZ2'}]},
@@ -532,14 +532,6 @@ class TestPost(FunctionalTestWithSetup):
"be included and excluded together"):
self.post_json('/audit_templates', audit_template_dict)
scope = [{'host_aggregates': [{'id1': '*'}]}]
audit_template_dict = post_get_test_audit_template(
goal=self.fake_goal1.uuid,
strategy=self.fake_strategy1.uuid, scope=scope)
response = self.post_json('/audit_templates',
audit_template_dict, expect_errors=True)
self.assertEqual(500, response.status_int)
def test_create_audit_template_does_autogenerate_id(self):
audit_template_dict = post_get_test_audit_template(
goal=self.fake_goal1.uuid, strategy=None)


@@ -829,10 +829,8 @@ class TestDelete(api_base.FunctionalTest):
self.context.show_deleted = True
audit = objects.Audit.get_by_uuid(self.context, self.audit.uuid)
return_deleted_at = \
audit['deleted_at'].strftime('%Y-%m-%dT%H:%M:%S.%f')
self.assertEqual(test_time.strftime('%Y-%m-%dT%H:%M:%S.%f'),
return_deleted_at)
return_deleted_at = timeutils.strtime(audit['deleted_at'])
self.assertEqual(timeutils.strtime(test_time), return_deleted_at)
self.assertEqual(objects.audit.State.DELETED, audit['state'])
def test_delete_audit_not_found(self):


@@ -16,4 +16,5 @@ from watcher.tests.api import base as api_base
class TestV1Routing(api_base.FunctionalTest):
pass
def setUp(self):
super(TestV1Routing, self).setUp()


@@ -90,14 +90,7 @@ class TestChangeNodePowerState(base.TestCase):
def test_execute_node_service_state_with_poweron_target(
self, mock_ironic, mock_nova):
mock_irclient = mock_ironic.return_value
self.action.input_parameters["state"] = (
change_node_power_state.NodeState.POWERON.value)
mock_irclient.node.get.side_effect = [
mock.MagicMock(power_state='power off'),
mock.MagicMock(power_state='power on')]
result = self.action.execute()
self.assertTrue(result)
self.action.execute()
mock_irclient.node.set_power_state.assert_called_once_with(
COMPUTE_NODE, change_node_power_state.NodeState.POWERON.value)
@@ -111,12 +104,7 @@ class TestChangeNodePowerState(base.TestCase):
mock_nvclient.hypervisors.get.return_value = mock_get
self.action.input_parameters["state"] = (
change_node_power_state.NodeState.POWEROFF.value)
mock_irclient.node.get.side_effect = [
mock.MagicMock(power_state='power on'),
mock.MagicMock(power_state='power on'),
mock.MagicMock(power_state='power off')]
result = self.action.execute()
self.assertTrue(result)
self.action.execute()
mock_irclient.node.set_power_state.assert_called_once_with(
COMPUTE_NODE, change_node_power_state.NodeState.POWEROFF.value)
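The removed test lines above relied on `Mock.side_effect` with a list: each call to the mock returns the next element, which lets a test simulate a node whose reported power state changes across successive polls:

```python
from unittest import mock

# A mock node.get whose successive calls report a power-state transition,
# as the removed test setup did ('fake-node' is an illustrative argument).
node_get = mock.MagicMock()
node_get.side_effect = [
    mock.MagicMock(power_state='power off'),
    mock.MagicMock(power_state='power on'),
]

first = node_get('fake-node')   # first poll: still powered off
second = node_get('fake-node')  # second poll: power-on observed
```

Once the list is exhausted, a further call raises StopIteration, so the sequence length must match the number of polls the code under test performs.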
@@ -130,10 +118,6 @@ class TestChangeNodePowerState(base.TestCase):
mock_nvclient.hypervisors.get.return_value = mock_get
self.action.input_parameters["state"] = (
change_node_power_state.NodeState.POWERON.value)
mock_irclient.node.get.side_effect = [
mock.MagicMock(power_state='power on'),
mock.MagicMock(power_state='power on'),
mock.MagicMock(power_state='power off')]
self.action.revert()
mock_irclient.node.set_power_state.assert_called_once_with(
@@ -144,9 +128,6 @@ class TestChangeNodePowerState(base.TestCase):
mock_irclient = mock_ironic.return_value
self.action.input_parameters["state"] = (
change_node_power_state.NodeState.POWEROFF.value)
mock_irclient.node.get.side_effect = [
mock.MagicMock(power_state='power off'),
mock.MagicMock(power_state='power on')]
self.action.revert()
mock_irclient.node.set_power_state.assert_called_once_with(


@@ -31,6 +31,9 @@ class TestTriggerActionPlan(base.TestCase):
self.applier = mock.MagicMock()
self.endpoint = trigger.TriggerActionPlan(self.applier)
def setUp(self):
super(TestTriggerActionPlan, self).setUp()
def test_launch_action_plan(self):
action_plan_uuid = utils.generate_uuid()
expected_uuid = self.endpoint.launch_action_plan(self.context,


@@ -30,6 +30,9 @@ class TestApplierAPI(base.TestCase):
         api = rpcapi.ApplierAPI()

+    def setUp(self):
+        super(TestApplierAPI, self).setUp()
+
     def test_get_api_version(self):
         with mock.patch.object(om.RPCClient, 'call') as mock_call:
             expected_context = self.context


@@ -51,7 +51,7 @@ class FakeAction(abase.BaseAction):
         pass

     def execute(self):
-        return False
+        raise ExpectedException()

     def get_description(self):
         return "fake action, just for test"
@@ -311,8 +311,7 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
         exc = self.assertRaises(exception.WorkflowExecutionException,
                                 self.engine.execute, actions)
-        self.assertIsInstance(exc.kwargs['error'],
-                              exception.ActionExecutionFailure)
+        self.assertIsInstance(exc.kwargs['error'], ExpectedException)
         self.check_action_state(actions[0], objects.action.State.FAILED)

     @mock.patch.object(objects.ActionPlan, "get_by_uuid")
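The test above inspects `exc.kwargs['error']` on the exception object returned by `assertRaises`; testtools-based test cases return the caught exception directly, while stdlib `unittest` exposes it through the context manager. A self-contained sketch of the same wrap-and-inspect pattern (`WorkflowError` and `run_engine` are hypothetical stand-ins, not Watcher's classes):

```python
import unittest


class WorkflowError(Exception):
    """Hypothetical stand-in for an exception that records its cause in kwargs."""
    def __init__(self, **kwargs):
        super().__init__(str(kwargs))
        self.kwargs = kwargs


def run_engine():
    # Simulates a workflow engine wrapping the failing action's error.
    raise WorkflowError(error=ValueError('action failed'))


class TestWrappedError(unittest.TestCase):
    def test_wrapped_error_type(self):
        # Stdlib unittest: the caught exception is available on the
        # context manager after the with-block exits.
        with self.assertRaises(WorkflowError) as ctx:
            run_engine()
        self.assertIsInstance(ctx.exception.kwargs['error'], ValueError)
```

Asserting on the wrapped error's type, rather than only on the outer exception, is what lets the diff tighten the check from a generic `ActionExecutionFailure` to the specific `ExpectedException`.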


@@ -21,7 +21,6 @@ import mock

 from watcher.applier.workflow_engine import default as tflow
 from watcher.common import clients
 from watcher.common import exception
-from watcher.common import nova_helper
 from watcher import objects
 from watcher.tests.db import base
@@ -80,9 +79,7 @@ class TestTaskFlowActionContainer(base.DbTestCase):
         action_container = tflow.TaskFlowActionContainer(
             db_action=action,
             engine=self.engine)
-        self.assertRaises(exception.ActionExecutionFailure,
-                          action_container.execute, action_id=action.uuid)
+        action_container.execute()

         self.assertTrue(action.state, objects.action.State.FAILED)


@@ -25,6 +25,9 @@ from watcher.tests import base

 @mock.patch.object(clients.OpenStackClients, 'cinder')
 class TestCinderHelper(base.TestCase):

+    def setUp(self):
+        super(TestCinderHelper, self).setUp()
+
     @staticmethod
     def fake_storage_node(**kwargs):
         node = mock.MagicMock()


@@ -28,18 +28,12 @@ from watcher.tests import base

 CONF = cfg.CONF

-class DummyEndpoint(object):
-
-    def __init__(self, messaging):
-        self._messaging = messaging
-

 class DummyManager(object):

     API_VERSION = '1.0'

-    conductor_endpoints = [DummyEndpoint]
-    notification_endpoints = [DummyEndpoint]
+    conductor_endpoints = [mock.Mock()]
+    notification_endpoints = [mock.Mock()]

     def __init__(self):
         self.publisher_id = "pub_id"
@@ -51,6 +45,9 @@ class DummyManager(object):

 class TestServiceHeartbeat(base.TestCase):

+    def setUp(self):
+        super(TestServiceHeartbeat, self).setUp()
+
     @mock.patch.object(objects.Service, 'list')
     @mock.patch.object(objects.Service, 'create')
     def test_send_beat_with_creating_service(self, mock_create,
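Stacked `@mock.patch.object` decorators, as in the heartbeat test above, apply bottom-up: the innermost patch produces the first mock argument, which is why the signature lists `mock_create` before `mock_list`. A runnable sketch with a hypothetical `Service` class standing in for `watcher.objects.Service`:

```python
from unittest import mock


class Service:
    """Hypothetical stand-in for watcher.objects.Service."""
    @staticmethod
    def list():
        raise RuntimeError('real call; should be patched in tests')

    @staticmethod
    def create():
        raise RuntimeError('real call; should be patched in tests')


@mock.patch.object(Service, 'list')
@mock.patch.object(Service, 'create')
def exercise(mock_create, mock_list):
    # The bottom decorator ('create') is innermost, so its mock
    # arrives as the first parameter.
    Service.create()
    mock_create.assert_called_once_with()
    mock_list.assert_not_called()
    return True


ok = exercise()
print(ok)  # True
```

Both patches are automatically undone when the decorated function returns, so the real `Service` methods are restored between tests.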

Some files were not shown because too many files have changed in this diff.