Compare commits


61 Commits
1.1.0 ... 1.2.0

Author SHA1 Message Date
aditi
d7a44739a6 Cancel Action Plan
This patch adds a feature to cancel an action plan in watcher.
A general flow from watcher-api to watcher-applier is implemented.

An action plan cancel can cancel any [ongoing, pending, recommended]
action plan; it also updates the action states to "cancelled".
Ongoing actions in the action plan need to be aborted.
Separate patches will be added to support the abort operation
in each action.

The notification part is addressed by a separate blueprint.
https://blueprints.launchpad.net/watcher/+spec/notifications-actionplan-cancel

Change-Id: I895a5eaca5239d5657702c8d1875b9ece21682dc
Partially-Implements: blueprint cancel-action-plan
2017-06-07 05:36:18 +00:00
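The cancellation flow described above can be sketched in a few lines. This is a hypothetical illustration, not Watcher's actual implementation; the class, state names, and field layout are all assumptions based only on the commit message.

```python
# Hypothetical sketch of the action plan cancellation flow; class and
# state names are illustrative, not Watcher's real code.

CANCELLABLE_STATES = {"ONGOING", "PENDING", "RECOMMENDED"}


class ActionPlan:
    def __init__(self, state, actions):
        self.state = state
        self.actions = actions  # each action is a dict with a "state" key

    def cancel(self):
        if self.state not in CANCELLABLE_STATES:
            raise ValueError("cannot cancel plan in state %s" % self.state)
        for action in self.actions:
            if action["state"] == "ONGOING":
                # Ongoing actions must be aborted; per-action abort support
                # is added in separate patches, per the commit message.
                action["state"] = "ABORTING"
            else:
                action["state"] = "CANCELLED"
        self.state = "CANCELLED"
```

The key detail the commit message calls out is the split between actions that merely need a state update and ongoing actions that require a real abort.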
OpenStack Proposal Bot
58d86de064 Updated from global requirements
Change-Id: I65ae1ac5170cf675d8c396ef3b7166bafc19190e
2017-06-06 06:14:58 +00:00
Yumeng Bao
8d84da307b Add rm to whitelist_externals in tox.ini
Fix the following WARNING:

WARNING:test command found but not installed in testenv
  cmd: /bin/rm
  env: /home/jenkins/workspace/gate-watcher-python27-ubuntu-xenial/.tox/py27
Maybe you forgot to specify a dependency? See also the whitelist_externals envconfig setting.

Change-Id: Ie091bd64b6a87c30535ada34daf9d594aa3fdd41
2017-06-05 19:10:28 +08:00
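The fix the warning asks for is a one-line addition to ``tox.ini``. A minimal sketch (the surrounding section contents are illustrative):

```ini
[testenv]
# Allow the test environment to invoke the external /bin/rm binary,
# which tox otherwise flags as "not installed in testenv".
whitelist_externals = rm
```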
Jenkins
01e865edbf Merge "Replace default cinder endpoint type" 2017-06-05 08:37:45 +00:00
Jenkins
b4b3856f14 Merge "Add action description" 2017-06-02 08:31:21 +00:00
Jenkins
67d065e02a Merge "Remove usage of parameter enforce_type" 2017-06-02 01:26:10 +00:00
Jenkins
891a351a04 Merge "Watcher official install-guide" 2017-06-01 04:38:16 +00:00
Feng Shengqin
f47fd9ac5e Remove usage of parameter enforce_type
Oslo.config deprecated the parameter enforce_type and changed
its default value to True. Remove its usage to avoid the
DeprecationWarning: "Using the 'enforce_type' argument is
deprecated in version '4.0' and will be removed in version
'5.0': The argument enforce_type has changed its default
value to True and then will be removed completely."

Change-Id: I59621664773ee5ad264e6da9b15231f30dbb9c40
Closes-Bug: #1694616
2017-06-01 10:13:20 +08:00
Hidekazu Nakamura
7b766680b0 Replace default cinder endpoint type
The default cinder endpoint type is publicURL in cinderclient.
This patch changes the default cinder endpoint type from
internalURL to publicURL.

Change-Id: Ie6951086e4656bd83195dab151dbaaf948113a7c
Related-Bug: #1686298
2017-06-01 01:20:00 +00:00
licanwei
75a025d2d2 Add action description
Add a get_description method to the BaseAction class.
This information will be sent to the API side via notifications.

Partially Implements: blueprint dynamic-action-description

Change-Id: I9ce1b18ad8c5eb7db62ec926d1859d0f508074b0
2017-05-31 18:03:19 +08:00
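The commit above does not show the code; a minimal sketch of what such a description hook might look like follows. Everything except the name get_description is an assumption for illustration.

```python
import abc


class BaseAction(abc.ABC):
    """Illustrative base class; Watcher's real BaseAction has more hooks."""

    @classmethod
    @abc.abstractmethod
    def get_description(cls):
        """Return a human-readable description sent via notifications."""


class Migrate(BaseAction):
    # A hypothetical concrete action providing its own description.
    @classmethod
    def get_description(cls):
        return "Moves a server to a destination nova-compute host"
```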
Jenkins
590bd43a1d Merge "Trivial fix typos" 2017-05-31 08:54:56 +00:00
Jenkins
d2e42a835b Merge "Deleted audit record still get by 'audit list'cmd" 2017-05-31 08:54:15 +00:00
Jenkins
a34e55e47a Merge "Replace oslo_utils.timeutils.isotime" 2017-05-31 08:49:41 +00:00
aditi
a62acbf2ab Watcher official install-guide
Currently the watcher project does not have an official
install guide at [1].

This patch adds a watcher install guide for RDO and Debian.

The install guide is written following the document [2].

[1] https://docs.openstack.org/project-install-guide/ocata/
[2] https://docs.openstack.org/contributor-guide/project-install-guide.html

Change-Id: Idfae7286003f81222dadf91ddcaf95a42c7eb07f
2017-05-31 08:40:27 +00:00
Vu Cong Tuan
35074edaf7 Trivial fix typos
Change-Id: I4c7d3a0d815a616d1ba2c0d26135db5f2aea0c2f
2017-05-30 15:55:33 +07:00
Luong Anh Tuan
dd4aac4092 Replace oslo_utils.timeutils.isotime
Function 'oslo_utils.timeutils.isotime()' is deprecated in version '1.6'
and will be removed in a future version. We use
datetime.datetime.isoformat() instead. For more information:
http://docs.openstack.org/developer/oslo.utils/api/timeutils.html#oslo_utils.timeutils.isotime

Change-Id: I17384c369fdc7f86b37fd62370d800ed2463adbe
Closes-Bug: #1514331
2017-05-29 23:34:03 +07:00
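The replacement is a stdlib one-liner. One behavioral difference worth noting: isotime() appended a 'Z' suffix for UTC times, while isoformat() on a naive datetime does not, so callers that need an explicit UTC marker must attach timezone info themselves.

```python
import datetime

# oslo_utils.timeutils.isotime() is deprecated; the stdlib replacement:
now = datetime.datetime(2017, 5, 29, 23, 34, 3)
iso = now.isoformat()
# A naive datetime with no microseconds formats as '2017-05-29T23:34:03',
# with no trailing 'Z'.
```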
Jenkins
bd8151e581 Merge "Reduced the code complexity" 2017-05-29 09:15:26 +00:00
Jenkins
8585e49359 Merge "Updated from global requirements" 2017-05-27 12:23:06 +00:00
Jenkins
5d3af47b7d Merge "Versioned Notifications for service object" 2017-05-27 12:20:45 +00:00
OpenStack Proposal Bot
1001525664 Updated from global requirements
Change-Id: I1baa46eba0e7dbae816dbbeb0914f4e1efc8fa0a
2017-05-26 17:31:36 +00:00
licanwei
a33f40ec21 Deleted audit record still get by 'audit list'cmd
An audit record may be deleted without its field 'state' being
set to DELETED, so get_audit_list's filter on the 'state' field
can return wrong results.
The filter rule should use the field 'deleted_at' instead of 'state'.
get_action_list and get_action_plan_list use the same filter rule.

Change-Id: I08b2a005ca5fb7c2741ac5ed97c6e6b4279758ed
Closes-Bug: #1693666
2017-05-26 14:53:32 +08:00
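The bug and fix above can be reduced to a small sketch (record shapes and function names are illustrative, not Watcher's actual query code): a soft-deleted record keeps whatever 'state' it had, so filtering on state misses it, while filtering on 'deleted_at' does not.

```python
# Two audits: the second was soft-deleted but its 'state' was never
# updated to DELETED.
audits = [
    {"uuid": "a1", "state": "SUCCEEDED", "deleted_at": None},
    {"uuid": "a2", "state": "SUCCEEDED", "deleted_at": "2017-05-26"},
]


def list_audits_buggy(records):
    # Buggy rule: assumes deletion sets 'state' to DELETED.
    return [r for r in records if r["state"] != "DELETED"]


def list_audits_fixed(records):
    # Fixed rule: a non-null 'deleted_at' marks the record as deleted.
    return [r for r in records if r["deleted_at"] is None]
```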
Vladimir Ostroverkhov
d2a8454043 Versioned Notifications for service object
Implements: blueprint service-versioned-notifications-api

Change-Id: I9d601edb265ee230104f6c63a5f044869aeb3a02
2017-05-25 12:52:01 +03:00
Jenkins
9bb1e653d8 Merge "Change cinder api_version to '3' in default" 2017-05-25 06:51:58 +00:00
Akihito INOH
bb536ee40d Change cinder api_version to '3' in default
Block Storage API v2 is deprecated now. We should use v3
as the default api_version instead.

This patch changes the default cinder api_version from '2' to '3' in
conf/cinder_client.py.

Change-Id: I53ffa74cdac7ac31c74937bf18da8ed2fec92223
Closes-Bug: #1691104
2017-05-25 09:26:33 +09:00
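Operators who want to pin the version explicitly can do so in ``watcher.conf``. A sketch, assuming the option group matches ``conf/cinder_client.py`` (the section name here is an assumption):

```ini
[cinder_client]
# '3' is the new default after this change, so this line is only
# needed to pin the Block Storage API version explicitly.
api_version = 3
```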
OpenStack Proposal Bot
a0bf1b7d70 Updated from global requirements
Change-Id: Id71f841aa19c8cc07205a59450ea376d4a7ffce4
2017-05-24 03:47:55 +00:00
Jenkins
40f6eea637 Merge "Remove the deprecated tempest.test.attr" 2017-05-22 09:38:12 +00:00
suzhengwei
6c5a3910a7 Fix doc error for WeightPlanner
Change-Id: I6c51a919baa32e9b46bfadfcf8e6afac27890b17
2017-05-22 16:08:53 +08:00
Ngo Quoc Cuong
a4fac69d85 Remove the deprecated tempest.test.attr
[1] moves the attr decorator from test.py to tempest/lib, so all
references to tempest.test have to be moved to tempest.lib.decorators.

[1] https://review.openstack.org/#/c/456236/

Change-Id: If977e559d9f3b982baf2974efef3c5b375f263b9
2017-05-22 10:11:18 +07:00
Luong Anh Tuan
21994297cf Replace assertRaisesRegexp with assertRaisesRegex
This replaces the unittest.TestCase method assertRaisesRegexp(),
deprecated since Python 3.2, with assertRaisesRegex().

Change-Id: I38c3055288034aba51c11bb1bccd3655f760cecc
Closes-Bug: #1436957
2017-05-19 18:06:00 +07:00
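The renamed method is a drop-in replacement; only the spelling changes. A self-contained example of the supported form:

```python
import unittest


class ExampleTest(unittest.TestCase):
    def test_raises_with_message(self):
        # assertRaisesRegexp() still works but emits a DeprecationWarning
        # on Python 3; assertRaisesRegex() is the supported spelling.
        with self.assertRaisesRegex(ValueError, "invalid literal"):
            int("not a number")


result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest))
```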
OpenStack Proposal Bot
8a818c9071 Updated from global requirements
Change-Id: Id3b644be5dccdf4abf60c6a49400f61582f0d244
2017-05-17 04:00:56 +00:00
Jenkins
041fcb4ca0 Merge "Added tempest test for workload_stabilization" 2017-05-17 01:32:13 +00:00
Jenkins
a8d765bb28 Merge "Fix a typo" 2017-05-10 18:22:08 +00:00
zhangjianfeng
2b152bf17c [bugfix] Use accurate division
Currently '/' performs integer division, which leads the method
filter_destination_hosts to return 0 hosts, because
self.threshold / 100 * host.vcpus evaluates to 0.
We need to use accurate (true) division to fix this.

Closes-Bug: #1689269

Change-Id: I5663951ce750d6c4580a507ccfc0268baea0685f
2017-05-08 18:27:24 +08:00
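The bug pattern is easy to reproduce: on Python 2, '/' between ints is floor division, so a percentage threshold collapses to zero before it is multiplied. The variable names below mirror the commit message; the surrounding logic is illustrative.

```python
threshold = 25  # percent
vcpus = 8

# Python 2 behaviour, reproduced here with '//': the fraction is
# truncated to 0 before the multiplication, so every host is filtered out.
buggy = threshold // 100 * vcpus

# Accurate (true) division keeps the fraction, giving the intended
# per-host CPU budget.
fixed = threshold / 100 * vcpus
```

On Python 2 the usual fix is `from __future__ import division` or an explicit float conversion; Python 3's `/` is already true division.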
chenaidong1
08e585d405 Fix a typo
'specfication' should be 'specification'

Change-Id: I845e6199c4e2152295fb02ac44a1b090a3d7561d
2017-05-08 15:43:57 +08:00
OpenStack Proposal Bot
38e4255ec0 Updated from global requirements
Change-Id: I5ab36d982f5ef09548303f1233116a7ad74e00f2
2017-05-04 13:33:41 +00:00
Jenkins
a167044cde Merge "[bugfix]retry is reached but action still success" 2017-05-04 09:17:36 +00:00
Jenkins
1366f79b63 Merge "Add Watcher JobStore for background jobs" 2017-05-03 14:50:35 +00:00
Jenkins
f55b9b127e Merge "Add 'rm -f .testrepository/times.dbm' command in testenv" 2017-05-03 12:01:49 +00:00
Jenkins
1b7b467151 Merge "replace nova endpoint" 2017-05-03 12:01:32 +00:00
Alexander Chadin
f40fcdc573 Add Watcher JobStore for background jobs
This patch set adds a WatcherJobStore class that links
jobs and services.

Partially-Implements: blueprint background-jobs-ha
Change-Id: I575887ca6dae60b3b7709e6d2e2b256e09a3d824
2017-05-03 12:07:01 +03:00
Jenkins
2d98d5e743 Merge "Fix devstack plugin" 2017-05-02 18:36:27 +00:00
OpenStack Proposal Bot
877230569a Updated from global requirements
Change-Id: I685b9a86764c24fb638c6a684a7b82818d8a6aac
2017-04-27 11:50:22 +00:00
Jenkins
f842c5f601 Merge "Add host_aggregates in exclude rule of audit scope" 2017-04-27 11:01:22 +00:00
Pradeep Kumar Singh
0a899a2dc2 Add host_aggregates in exclude rule of audit scope
Currently, if a user wants to skip some host_aggregates from an audit,
it is not possible. This patch adds host_aggregates to the exclude
rule of the audit scope. This patch also implements audit-tag-vm-metadata
using scopes.

TODOs:
1. Add tests
2. Remove old implementation of audit-tag-vm-metadata

Change-Id: Ie86378cb02145a660bbf446eedb29dc311fa29d7
Implements: BP audit-tag-vm-metadata
2017-04-27 08:37:16 +00:00
licanwei
426232e288 replace nova endpoint
The default endpoint type is publicURL in novaclient.
The previous setting also caused the failure of
gate-watcher-dsvm-multinode-ubuntu-xenial-nv.

Change-Id: I485dd62fb7199ffeca29a9b573a624bf144484d1
Closes-Bug: #1686298
Closes-Bug: #1686281
2017-04-27 02:08:04 +00:00
Jenkins
778d4c6fe4 Merge "Set access_policy for messaging's dispatcher" 2017-04-25 14:11:17 +00:00
Jenkins
f852467d6a Merge "Add ironicclient" 2017-04-25 13:43:14 +00:00
M V P Nitesh
dcf64ed1f4 Add 'rm -f .testrepository/times.dbm' command in testenv
Running py2* tests after py3* tests results in an error. Add the
'rm -f .testrepository/times.dbm' command in testenv to
resolve this.

Change-Id: Ia43f8d10f157d988c4d2c89f16cac0ea729cabe6
2017-04-25 12:52:13 +05:30
Jenkins
03f75202c8 Merge "[Doc] fix local.conf.compute" 2017-04-25 07:01:57 +00:00
Jenkins
0173a713c1 Merge "use instance data replace exception.NoDataFound" 2017-04-25 00:14:07 +00:00
Hidekazu Nakamura
216f3bab29 [Doc] fix local.conf.compute
The compute node needs the placement-client service as of Ocata.

Change-Id: Ibd02a126bb4808625cede8fe04255ac014268adb
2017-04-24 20:58:37 +09:00
zhangjianfeng
077b806bf6 [bugfix]retry is reached but action still success
Currently, when an action such as live-migration reaches its retry
limit, the result is False but the action is still marked as
successful. Adding a check in the do_execute method fixes this problem.
Closes-Bug: #1685757

Change-Id: I8390566ec8dcfa3a71b931d5be1b305802ac0b2a
2017-04-24 18:17:22 +08:00
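The failure mode above can be sketched generically. This is not Watcher's code; the retry loop, function names, and exception are all illustrative of the pattern the commit describes.

```python
def execute_with_retries(action, max_retries=3):
    # Sketch of an execute loop that previously reported success even
    # when the retry limit was exhausted with result still False.
    result = False
    for _ in range(max_retries):
        result = action()
        if result:
            break
    # The fix described in the commit: check the final result and
    # propagate failure instead of silently succeeding.
    if not result:
        raise RuntimeError("action failed after %d retries" % max_retries)
    return result
```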
licanwei
d892153b58 use instance data replace exception.NoDataFound
If instance_ram_util and instance_disk_util cannot get data
from the datasource, we can use instance data instead,
just like total_cpu_utilization.

Change-Id: I4170b96946b07435411ada5ff4a14c978c0435b4
2017-04-24 15:35:04 +08:00
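The fallback described above can be sketched as follows. The metric name comes from the commit message; the data shapes, the choice of memory_mb as the fallback value, and the function signature are assumptions for illustration.

```python
def get_instance_ram_util(datasource, instance):
    # Try the datasource first; it maps instance uuid -> measurement.
    value = datasource.get(instance["uuid"])
    if value is None:
        # No measurement available: fall back to the instance's own
        # declared data (as total_cpu_utilization already does) rather
        # than raising a NoDataFound-style exception and failing the
        # whole strategy.
        return instance["memory_mb"]
    return value
```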
shubhendu
2efe211f36 Set access_policy for messaging's dispatcher
oslo.messaging allows the dispatcher to restrict endpoint
methods with DefaultRPCAccessPolicy; set it to fix a FutureWarning.

Closes-Bug: #1663543

Change-Id: I0288320193b0153ee223664696abca21cbdb0349
2017-04-24 06:32:33 +00:00
Hidekazu Nakamura
f55ea7824e Fix devstack plugin
Running stack.sh after unstack.sh results in an error.
This patch fixes it.

Change-Id: I35d74896611e56d916a9846b2f854bd060a606e4
2017-04-21 16:49:04 +09:00
Hidekazu Nakamura
e5eb4f51be [Doc] messaging -> messagingv2
Nova sends notifications using the 2.0 messaging format.

Change-Id: Ib0202ba5291f1666bdbf9b6830521b1a2aa20a80
2017-04-18 11:54:16 +09:00
licanwei
f637a368d7 Add ironicclient
This patch set adds ironicclient.

Change-Id: I122a26465d339ee6e36c0f234d3fd6c57cee2afa
Partially-Implements: blueprint build-baremetal-data-model-in-watcher
2017-04-17 06:48:05 +00:00
Jenkins
2e8fb5a821 Merge "exception when running 'watcher actionplan start XXX'" 2017-04-13 11:59:45 +00:00
licanwei
527423a5fa exception when running 'watcher actionplan start XXX'
In DefaultApplier.execute(), Action.list should set eager=True;
otherwise an exception is triggered when the notification is sent.

This also causes the failure of
gate-watcher-dsvm-multinode-ubuntu-xenial-nv.

Change-Id: I27db9691727671abb582d4f22283ebda5bd51b07
Closes-Bug: #1676308
2017-04-07 16:19:21 +08:00
Vincent Françoise
334558f17c Added tempest test for workload_stabilization
In this changeset, I added a tempest test that executes the
workload_stabilization strategy.

Change-Id: I61bad268fc5895ddb22312baeb21da5ae3c71de9
2017-03-30 07:20:13 +00:00
Béla Vancsics
fd55d28d42 Reduced the code complexity
I extracted some of the functionality into helper functions
to reduce the length and complexity of build_query (in
watcher/datasource/ceilometer.py).
This also makes the code more readable, without changing
its behavior.

Change-Id: I9e5c524754cf0f9d718a216465ba1b7536add80e
2017-03-16 07:45:44 +01:00
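The refactor pattern (one long query builder split into focused helpers) can be sketched generically. The helper names and filter shapes below are illustrative; they are not taken from watcher/datasource/ceilometer.py.

```python
def _build_time_filters(start, end):
    # One helper per concern: time-range constraints.
    filters = []
    if start:
        filters.append({"field": "timestamp", "op": "ge", "value": start})
    if end:
        filters.append({"field": "timestamp", "op": "le", "value": end})
    return filters


def _build_resource_filter(resource_id):
    # Resource targeting lives in its own helper.
    if not resource_id:
        return []
    return [{"field": "resource_id", "op": "eq", "value": resource_id}]


def build_query(resource_id=None, start=None, end=None):
    # build_query now just composes the helpers, staying short and
    # readable while producing the same filter list as before.
    return _build_resource_filter(resource_id) + _build_time_filters(start, end)
```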
112 changed files with 2646 additions and 350 deletions


@@ -54,7 +54,7 @@ if is_ssl_enabled_service "watcher" || is_service_enabled tls-proxy; then
WATCHER_SERVICE_PROTOCOL="https"
fi
-WATCHER_USE_MOD_WSGI=$(trueorfalse TRUE WATCHER_USE_MOD_WSGI)
+WATCHER_USE_MOD_WSGI=$(trueorfalse True WATCHER_USE_MOD_WSGI)
if is_suse; then
WATCHER_WSGI_DIR=${WATCHER_WSGI_DIR:-/srv/www/htdocs/watcher}


@@ -25,7 +25,7 @@ GLANCE_HOSTPORT=${SERVICE_HOST}:9292
DATABASE_TYPE=mysql
# Enable services (including neutron)
-ENABLED_SERVICES=n-cpu,n-api-meta,c-vol,q-agt
+ENABLED_SERVICES=n-cpu,n-api-meta,c-vol,q-agt,placement-client
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"


@@ -0,0 +1,26 @@
{
"payload": {
"watcher_object.name": "ServiceUpdatePayload",
"watcher_object.namespace": "watcher",
"watcher_object.data": {
"status_update": {
"watcher_object.name": "ServiceStatusUpdatePayload",
"watcher_object.namespace": "watcher",
"watcher_object.data": {
"old_state": "ACTIVE",
"state": "FAILED"
},
"watcher_object.version": "1.0"
},
"last_seen_up": "2016-09-22T08:32:06Z",
"name": "watcher-service",
"sevice_host": "controller"
},
"watcher_object.version": "1.0"
},
"event_type": "service.update",
"priority": "INFO",
"message_id": "3984dc2b-8aef-462b-a220-8ae04237a56e",
"timestamp": "2016-10-18 09:52:05.219414",
"publisher_id": "infra-optim:node0"
}


@@ -424,7 +424,7 @@ to Watcher receives Nova notifications in ``watcher_notifications`` as well.
into which Nova services will publish events ::
[oslo_messaging_notifications]
-driver = messaging
+driver = messagingv2
topics = notifications,watcher_notifications
* Restart the Nova services.


@@ -0,0 +1,71 @@
2. Edit the ``/etc/watcher/watcher.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
.. code-block:: ini
[database]
...
connection = mysql+pymysql://watcher:WATCHER_DBPASS@controller/watcher?charset=utf8
* In the `[DEFAULT]` section, configure the transport URL for the RabbitMQ message broker.
.. code-block:: ini
[DEFAULT]
...
control_exchange = watcher
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack user in RabbitMQ.
* In the `[keystone_authtoken]` section, configure Identity service access.
.. code-block:: ini
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = watcher
password = WATCHER_PASS
Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.
* Watcher interacts with other OpenStack projects via project clients. In order to instantiate these
clients, Watcher requests a new session from the Identity service. In the `[watcher_client_auth]` section,
configure the Identity service access used to interact with other OpenStack project clients.
.. code-block:: ini
[watcher_client_auth]
...
auth_type = password
auth_url = http://controller:35357
username = watcher
password = WATCHER_PASS
project_domain_name = default
user_domain_name = default
project_name = service
Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.
* In the `[oslo_messaging_notifications]` section, configure the messaging driver.
.. code-block:: ini
[oslo_messaging_notifications]
...
driver = messagingv2
3. Populate the watcher database:
.. code-block:: console
su -s /bin/sh -c "watcher-db-manage" watcher


@@ -0,0 +1,139 @@
Prerequisites
-------------
Before you install and configure the Infrastructure Optimization service,
you must create a database, service credentials, and API endpoints.
1. To create the database, complete these steps:
* Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
$ mysql -u root -p
* Create the ``watcher`` database:
.. code-block:: none
CREATE DATABASE watcher CHARACTER SET utf8;
* Grant proper access to the ``watcher`` database:
.. code-block:: none
GRANT ALL PRIVILEGES ON watcher.* TO 'watcher'@'localhost' \
IDENTIFIED BY 'WATCHER_DBPASS';
GRANT ALL PRIVILEGES ON watcher.* TO 'watcher'@'%' \
IDENTIFIED BY 'WATCHER_DBPASS';
Replace ``WATCHER_DBPASS`` with a suitable password.
* Exit the database access client.
.. code-block:: none
exit;
2. Source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ . admin-openrc
3. To create the service credentials, complete these steps:
* Create the ``watcher`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt watcher
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | b18ee38e06034b748141beda8fc8bfad |
| name | watcher |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
* Add the ``admin`` role to the ``watcher`` user:
.. code-block:: console
$ openstack role add --project service --user watcher admin
.. note::
This command produces no output.
* Create the watcher service entities:
.. code-block:: console
$ openstack service create --name watcher --description "Infrastructure Optimization" infra-optim
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Infrastructure Optimization |
| enabled | True |
| id | d854f6fff0a64f77bda8003c8dedfada |
| name | watcher |
| type | infra-optim |
+-------------+----------------------------------+
4. Create the Infrastructure Optimization service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
infra-optim public http://controller:9322
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Infrastructure Optimization |
| enabled | True |
| id | d854f6fff0a64f77bda8003c8dedfada |
| name | watcher |
| type | infra-optim |
+-------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
infra-optim internal http://controller:9322
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 225aef8465ef4df48a341aaaf2b0a390 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d854f6fff0a64f77bda8003c8dedfada |
| service_name | watcher |
| service_type | infra-optim |
| url | http://controller:9322 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
infra-optim admin http://controller:9322
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 375eb5057fb546edbdf3ee4866179672 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d854f6fff0a64f77bda8003c8dedfada |
| service_name | watcher |
| service_type | infra-optim |
| url | http://controller:9322 |
+--------------+----------------------------------+


@@ -0,0 +1,301 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
# import sys
import openstackdocstheme
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
# TODO(ajaeger): enable PDF building, for example add 'rst2pdf.pdfbuilder'
# extensions =
# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Installation Guide for Infrastructure Optimization Service'
bug_tag = u'install-guide'
copyright = u'2016, OpenStack contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# A few variables have to be set for the log-a-bug feature.
# giturl: The location of conf.py on Git. Must be set manually.
# gitsha: The SHA checksum of the bug description. Automatically extracted
# from git log.
# bug_tag: Tag for categorizing the bug. Must be set manually.
# These variables are passed to the logabug code via html_context.
giturl = u'http://git.openstack.org/cgit/openstack/watcher/tree/install-guide/source' # noqa
git_cmd = "/usr/bin/git log | head -n1 | cut -f2 -d' '"
gitsha = os.popen(git_cmd).read().strip('\n')
html_context = {"gitsha": gitsha, "bug_tag": bug_tag,
"giturl": giturl,
"bug_project": "watcher"}
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["common_prerequisites.rst", "common_configure.rst"]
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [openstackdocstheme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = []
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# So that we can enable "log-a-bug" links from each output HTML page, this
# variable must be set to a format that includes year, month, day, hours and
# minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
html_show_sourcelink = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'install-guide'
# If true, publish source files
html_copy_source = False
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'InstallGuide.tex', u'Install Guide',
u'OpenStack contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'installguide', u'Install Guide',
[u'OpenStack contributors'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'InstallGuide', u'Install Guide',
u'OpenStack contributors', 'InstallGuide',
'This guide shows OpenStack end users how to install '
'an OpenStack cloud.', 'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
# -- Options for PDF output --------------------------------------------------
pdf_documents = [
('index', u'InstallGuide', u'Install Guide',
u'OpenStack contributors')
]


@@ -0,0 +1,27 @@
============================================
Infrastructure Optimization service overview
============================================
The Infrastructure Optimization service provides a flexible and scalable
optimization service for multi-tenant OpenStack-based clouds.
The Infrastructure Optimization service consists of the following components:
``watcher`` command-line client
A CLI to communicate with ``watcher-api`` to optimize the cloud.
``watcher-api`` service
An OpenStack-native REST API that accepts and responds to end-user calls
by processing them and forwarding to appropriate underlying watcher
services via AMQP.
``watcher-decision-engine`` service
It runs audits and returns an action plan to achieve the optimization
goal specified by the end user in the audit.
``watcher-applier`` service
It executes the action plan built by watcher-decision-engine. It
interacts with other OpenStack components such as nova to execute the
given action plan.
``watcher-dashboard``
Watcher UI implemented as a plugin for the OpenStack Dashboard.


@@ -0,0 +1,39 @@
===================================
Infrastructure Optimization service
===================================
.. toctree::
:maxdepth: 2
get_started.rst
install.rst
verify.rst
next-steps.rst
The Infrastructure Optimization service (watcher) provides
a flexible and scalable resource optimization service for
multi-tenant OpenStack-based clouds.
Watcher provides a complete optimization loop, including
a metrics receiver, a complex event processor and profiler,
an optimization processor, and an action plan applier. This
provides a robust framework to realize a wide range of cloud
optimization goals, including the reduction of data center
operating costs, increased system performance via intelligent
virtual machine migration, and increased energy efficiency.
Watcher also supports a pluggable architecture by which custom
optimization algorithms, data metrics, and data profilers can be
developed and inserted into the Watcher framework.
See the documentation for Watcher optimization strategies at
https://docs.openstack.org/developer/watcher/strategies
and the Watcher glossary at
https://docs.openstack.org/developer/watcher/glossary.html.
This chapter assumes a working setup of OpenStack following the
`OpenStack Installation Tutorial
<https://docs.openstack.org/project-install-guide/ocata/>`_.

View File

@@ -0,0 +1,34 @@
.. _install-obs:
Install and configure for openSUSE and SUSE Linux Enterprise
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Infrastructure Optimization service
for openSUSE Leap 42.1 and SUSE Linux Enterprise Server 12 SP1.
.. include:: common_prerequisites.rst
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# zypper --quiet --non-interactive install
.. include:: common_configure.rst
Finalize installation
---------------------
Start the Infrastructure Optimization services and configure them to start when
the system boots:
.. code-block:: console
# systemctl enable openstack-watcher-api.service
# systemctl start openstack-watcher-api.service

View File

@@ -0,0 +1,38 @@
.. _install-rdo:
Install and configure for Red Hat Enterprise Linux and CentOS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Infrastructure Optimization service
for Red Hat Enterprise Linux 7 and CentOS 7.
.. include:: common_prerequisites.rst
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# sudo yum install openstack-watcher-api openstack-watcher-applier \
openstack-watcher-decision-engine
.. include:: common_configure.rst
Finalize installation
---------------------
Start the Infrastructure Optimization services and configure them to start when
the system boots:
.. code-block:: console
# systemctl enable openstack-watcher-api.service \
openstack-watcher-decision-engine.service \
openstack-watcher-applier.service
# systemctl start openstack-watcher-api.service \
openstack-watcher-decision-engine.service \
openstack-watcher-applier.service

View File

@@ -0,0 +1,34 @@
.. _install-ubuntu:
Install and configure for Ubuntu
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Infrastructure Optimization
service for Ubuntu 14.04 (LTS).
.. include:: common_prerequisites.rst
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# apt install watcher-api watcher-decision-engine \
watcher-applier
# apt install python-watcherclient
.. include:: common_configure.rst
Finalize installation
---------------------
Restart the Infrastructure Optimization services:
.. code-block:: console
# service watcher-api restart
# service watcher-decision-engine restart
# service watcher-applier restart

View File

@@ -0,0 +1,20 @@
.. _install:
Install and configure
~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the
Infrastructure Optimization service, code-named watcher, on the controller node.
This section assumes that you already have a working OpenStack
environment with at least the following components installed:
Identity service, Compute service, and Telemetry data collection service.
Note that installation and configuration vary by distribution.
.. toctree::
:maxdepth: 2
install-obs.rst
install-rdo.rst
install-ubuntu.rst

View File

@@ -0,0 +1,9 @@
.. _next-steps:
Next steps
~~~~~~~~~~
Your OpenStack environment now includes the watcher service.
To add additional services, see
https://docs.openstack.org/project-install-guide/ocata/.

View File

@@ -0,0 +1,119 @@
.. _verify:
Verify operation
~~~~~~~~~~~~~~~~
Verify operation of the Infrastructure Optimization service.
.. note::
Perform these commands on the controller node.
1. Source the ``admin`` project credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ . admin-openrc
2. List service components to verify successful launch and registration
of each process:
.. code-block:: console
$ openstack optimize service list
+----+-------------------------+------------+--------+
| ID | Name | Host | Status |
+----+-------------------------+------------+--------+
| 1 | watcher-decision-engine | controller | ACTIVE |
| 2 | watcher-applier | controller | ACTIVE |
+----+-------------------------+------------+--------+
3. List goals and strategies:
.. code-block:: console
$ openstack optimize goal list
+--------------------------------------+----------------------+----------------------+
| UUID | Name | Display name |
+--------------------------------------+----------------------+----------------------+
| a8cd6d1a-008b-4ff0-8dbc-b30493fcc5b9 | dummy | Dummy goal |
| 03953f2f-02d0-42b5-9a12-7ba500a54395 | workload_balancing | Workload Balancing |
| de0f8714-984b-4d6b-add1-9cad8120fbce | server_consolidation | Server Consolidation |
| f056bc80-c6d1-40dc-b002-938ccade9385 | thermal_optimization | Thermal Optimization |
| e7062856-892e-4f0f-b84d-b828464b3fd0 | airflow_optimization | Airflow Optimization |
| 1f038da9-b36c-449f-9f04-c225bf3eb478 | unclassified | Unclassified |
+--------------------------------------+----------------------+----------------------+
$ openstack optimize strategy list
+--------------------------------------+---------------------------+---------------------------------------------+----------------------+
| UUID | Name | Display name | Goal |
+--------------------------------------+---------------------------+---------------------------------------------+----------------------+
| 98ae84c8-7c9b-4cbd-8d9c-4bd7c6b106eb | dummy | Dummy strategy | dummy |
| 02a170b6-c72e-479d-95c0-8a4fdd4cc1ef | dummy_with_scorer | Dummy Strategy using sample Scoring Engines | dummy |
| 8bf591b8-57e5-4a9e-8c7d-c37bda735a45 | outlet_temperature | Outlet temperature based strategy | thermal_optimization |
| 8a0810fb-9d9a-47b9-ab25-e442878abc54 | vm_workload_consolidation | VM Workload Consolidation Strategy | server_consolidation |
| 1718859c-3eb5-45cb-9220-9cb79fe42fa5 | basic | Basic offline consolidation | server_consolidation |
| b5e7f5f1-4824-42c7-bb52-cf50724f67bf | workload_stabilization | Workload stabilization | workload_balancing |
| f853d71e-9286-4df3-9d3e-8eaf0f598e07 | workload_balance | Workload Balance Migration Strategy | workload_balancing |
| 58bdfa89-95b5-4630-adf6-fd3af5ff1f75 | uniform_airflow | Uniform airflow migration strategy | airflow_optimization |
| 66fde55d-a612-4be9-8cb0-ea63472b420b | dummy_with_resize | Dummy strategy with resize | dummy |
+--------------------------------------+---------------------------+---------------------------------------------+----------------------+
4. Run an action plan by creating an audit with the dummy goal:
.. code-block:: console
$ openstack optimize audit create --goal dummy
+--------------+--------------------------------------+
| Field | Value |
+--------------+--------------------------------------+
| UUID | e94d4826-ad4e-44df-ad93-dff489fde457 |
| Created At | 2017-05-23T11:46:58.763394+00:00 |
| Updated At | None |
| Deleted At | None |
| State | PENDING |
| Audit Type | ONESHOT |
| Parameters | {} |
| Interval | None |
| Goal | dummy |
| Strategy | auto |
| Audit Scope | [] |
| Auto Trigger | False |
+--------------+--------------------------------------+
$ openstack optimize audit list
+--------------------------------------+------------+-----------+-------+----------+--------------+
| UUID | Audit Type | State | Goal | Strategy | Auto Trigger |
+--------------------------------------+------------+-----------+-------+----------+--------------+
| e94d4826-ad4e-44df-ad93-dff489fde457 | ONESHOT | SUCCEEDED | dummy | auto | False |
+--------------------------------------+------------+-----------+-------+----------+--------------+
$ openstack optimize actionplan list
+--------------------------------------+--------------------------------------+-------------+------------+-----------------+
| UUID | Audit | State | Updated At | Global efficacy |
+--------------------------------------+--------------------------------------+-------------+------------+-----------------+
| ba9ce6b3-969c-4b8e-bb61-ae24e8630f81 | e94d4826-ad4e-44df-ad93-dff489fde457 | RECOMMENDED | None | None |
+--------------------------------------+--------------------------------------+-------------+------------+-----------------+
$ openstack optimize actionplan start ba9ce6b3-969c-4b8e-bb61-ae24e8630f81
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| UUID | ba9ce6b3-969c-4b8e-bb61-ae24e8630f81 |
| Created At | 2017-05-23T11:46:58+00:00 |
| Updated At | 2017-05-23T11:53:12+00:00 |
| Deleted At | None |
| Audit | e94d4826-ad4e-44df-ad93-dff489fde457 |
| Strategy | dummy |
| State | ONGOING |
| Efficacy indicators | [] |
| Global efficacy | {} |
+---------------------+--------------------------------------+
$ openstack optimize actionplan list
+--------------------------------------+--------------------------------------+-----------+---------------------------+-----------------+
| UUID | Audit | State | Updated At | Global efficacy |
+--------------------------------------+--------------------------------------+-----------+---------------------------+-----------------+
| ba9ce6b3-969c-4b8e-bb61-ae24e8630f81 | e94d4826-ad4e-44df-ad93-dff489fde457 | SUCCEEDED | 2017-05-23T11:53:16+00:00 | None |
+--------------------------------------+--------------------------------------+-----------+---------------------------+-----------------+

View File

@@ -0,0 +1,4 @@
---
features:
- |
Adds feature to cancel an action-plan.

View File

@@ -5,17 +5,17 @@
apscheduler # MIT License
enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.1 # BSD
keystoneauth1>=2.18.0 # Apache-2.0
keystoneauth1>=2.20.0 # Apache-2.0
keystonemiddleware>=4.12.0 # Apache-2.0
lxml!=3.7.0,>=2.3 # BSD
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.cache>=1.5.0 # Apache-2.0
oslo.config>=3.22.0 # Apache-2.0
oslo.context>=2.12.0 # Apache-2.0
oslo.db>=4.19.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.config>=4.0.0 # Apache-2.0
oslo.context>=2.14.0 # Apache-2.0
oslo.db>=4.21.1 # Apache-2.0
oslo.i18n!=3.15.2,>=2.1.0 # Apache-2.0
oslo.log>=3.22.0 # Apache-2.0
oslo.messaging>=5.19.0 # Apache-2.0
oslo.messaging!=5.25.0,>=5.24.2 # Apache-2.0
oslo.policy>=1.17.0 # Apache-2.0
oslo.reports>=0.6.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
@@ -29,13 +29,14 @@ PrettyTable<0.8,>=0.7.1 # BSD
voluptuous>=0.8.9 # BSD License
gnocchiclient>=2.7.0 # Apache-2.0
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient>=2.0.1 # Apache-2.0
python-glanceclient>=2.5.0 # Apache-2.0
python-cinderclient>=2.1.0 # Apache-2.0
python-glanceclient>=2.7.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.1.0 # Apache-2.0
python-neutronclient>=5.1.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=7.1.0 # Apache-2.0
python-openstackclient>=3.3.0 # Apache-2.0
python-openstackclient!=3.10.0,>=3.3.0 # Apache-2.0
python-ironicclient>=1.11.0 # Apache-2.0
six>=1.9.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0

View File

@@ -2,7 +2,7 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
coverage>=4.0 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
doc8 # Apache-2.0
freezegun>=0.3.6 # Apache-2.0
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
@@ -15,12 +15,14 @@ testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
# Doc requirements
openstackdocstheme>=1.5.0 # Apache-2.0
oslosphinx>=4.7.0 # Apache-2.0
sphinx>=1.5.1 # BSD
sphinx!=1.6.1,>=1.5.1 # BSD
sphinxcontrib-pecanwsme>=0.8 # Apache-2.0
# releasenotes
reno>=1.8.0 # Apache-2.0
reno!=2.3.1,>=1.8.0 # Apache-2.0
# bandit
bandit>=1.1.0 # Apache-2.0
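The `!=` exclusions in these requirement bumps (e.g. `coverage!=4.4,>=4.0`, `reno!=2.3.1,>=1.8.0`) blacklist a single broken release while keeping the lower bound. A toy evaluator, not pip's real specifier logic (it handles only numeric dotted versions and the two operators used here), shows how every comma-separated clause must hold:

```python
# Toy version-specifier check (NOT pip's resolver): every comma-separated
# clause must hold for the version to be acceptable.
def satisfies(version, spec):
    def parse(v):
        # numeric dotted versions only, e.g. "4.4" -> (4, 4)
        return tuple(int(p) for p in v.split("."))
    for clause in spec.split(","):
        if clause.startswith("!="):
            if parse(version) == parse(clause[2:]):
                return False  # exactly the blacklisted release
        elif clause.startswith(">="):
            if parse(version) < parse(clause[2:]):
                return False  # below the lower bound
    return True
```

So `coverage` 4.5 satisfies `!=4.4,>=4.0`, while the excluded 4.4 release does not.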

View File

@@ -6,11 +6,13 @@ skipsdist = True
[testenv]
usedevelop = True
whitelist_externals = find
rm
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
commands =
rm -f .testrepository/times.dbm
find . -type f -name "*.py[c|o]" -delete
ostestr --concurrency=6 {posargs}
@@ -67,3 +69,6 @@ commands = sphinx-build -a -W -E -d releasenotes/build/doctrees -b html releasen
[testenv:bandit]
deps = -r{toxinidir}/test-requirements.txt
commands = bandit -r watcher -x tests -n5 -ll -s B320
[testenv:install-guide]
commands = sphinx-build -a -E -W -d install-guide/build/doctrees -b html install-guide/source install-guide/build/html

View File

@@ -488,6 +488,7 @@ class ActionPlansController(rest.RestController):
raise exception.PatchError(patch=patch, reason=e)
launch_action_plan = False
cancel_action_plan = False
# transitions that are allowed via PATCH
allowed_patch_transitions = [
@@ -496,7 +497,7 @@ class ActionPlansController(rest.RestController):
(ap_objects.State.RECOMMENDED,
ap_objects.State.CANCELLED),
(ap_objects.State.ONGOING,
ap_objects.State.CANCELLED),
ap_objects.State.CANCELLING),
(ap_objects.State.PENDING,
ap_objects.State.CANCELLED),
]
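The transition table above reduces to a simple membership check; the state names below stand in for watcher's `ap_objects.State` constants, and only the pairs visible in this hunk are listed (the real table may contain more):

```python
# Hedged sketch of the PATCH transition guard in ActionPlansController.
# State names mirror watcher's ap_objects.State constants.
RECOMMENDED, PENDING, ONGOING = "RECOMMENDED", "PENDING", "ONGOING"
CANCELLED, CANCELLING = "CANCELLED", "CANCELLING"

ALLOWED_PATCH_TRANSITIONS = [
    (RECOMMENDED, CANCELLED),
    (ONGOING, CANCELLING),  # ongoing plans move to CANCELLING first
    (PENDING, CANCELLED),
]

def is_allowed(current, requested):
    # a PATCH may only request one of the whitelisted transitions
    return (current, requested) in ALLOWED_PATCH_TRANSITIONS
```

Note the asymmetry this patch introduces: pending and recommended plans go straight to `CANCELLED`, but an ongoing plan only moves to `CANCELLING`, because its running actions must first be aborted by the applier.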
@@ -515,6 +516,8 @@ class ActionPlansController(rest.RestController):
if action_plan.state == ap_objects.State.PENDING:
launch_action_plan = True
if action_plan.state == ap_objects.State.CANCELLED:
cancel_action_plan = True
# Update only the fields that have changed
for field in objects.ActionPlan.fields:
@@ -534,6 +537,16 @@ class ActionPlansController(rest.RestController):
action_plan_to_update.save()
# NOTE: if the action plan is cancelled from the pending or
# recommended state, update the action states here only
if cancel_action_plan:
filters = {'action_plan_uuid': action_plan.uuid}
actions = objects.Action.list(pecan.request.context,
filters=filters, eager=True)
for a in actions:
a.state = objects.action.State.CANCELLED
a.save()
if launch_action_plan:
applier_client = rpcapi.ApplierAPI()
applier_client.launch_action_plan(pecan.request.context,

View File

@@ -109,6 +109,21 @@ class AuditTemplatePostType(wtypes.Base):
common_utils.Draft4Validator(
default.DefaultScope.DEFAULT_SCHEMA).validate(audit_template.scope)
include_host_aggregates = False
exclude_host_aggregates = False
for rule in audit_template.scope:
if 'host_aggregates' in rule:
include_host_aggregates = True
elif 'exclude' in rule:
for resource in rule['exclude']:
if 'host_aggregates' in resource:
exclude_host_aggregates = True
if include_host_aggregates and exclude_host_aggregates:
raise exception.Invalid(
message=_(
"host_aggregates can't be "
"included and excluded together"))
if audit_template.strategy:
available_strategies = objects.Strategy.list(
AuditTemplatePostType._ctx)
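The host_aggregates validation above walks the scope rules once, tracking whether host aggregates appear both as an include rule and inside an exclude rule. A standalone sketch of the same check (the scope shape follows the diff; the dict contents are illustrative):

```python
# Sketch of the audit-template scope validation added above: a scope may
# not both include and exclude host_aggregates.
def validate_scope(scope):
    include = any("host_aggregates" in rule for rule in scope)
    exclude = any(
        "host_aggregates" in resource
        for rule in scope if "exclude" in rule
        for resource in rule["exclude"]
    )
    if include and exclude:
        raise ValueError(
            "host_aggregates can't be included and excluded together")

# either alone is fine; combining both raises ValueError
validate_scope([{"host_aggregates": [{"id": 5}]}])
validate_scope([{"exclude": [{"host_aggregates": [{"id": 5}]}]}])
```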

View File

@@ -34,6 +34,7 @@ from watcher.api.controllers import base
from watcher.api.controllers import link
from watcher.api.controllers.v1 import collection
from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import context
from watcher.common import exception
from watcher.common import policy
from watcher import objects
@@ -51,6 +52,7 @@ class Service(base.APIBase):
"""
_status = None
_context = context.RequestContext(is_admin=True)
def _get_status(self):
return self._status

View File

@@ -181,7 +181,7 @@ class JsonPatchType(wtypes.Base):
@staticmethod
def mandatory_attrs():
"""Retruns a list of mandatory attributes.
"""Returns a list of mandatory attributes.
Mandatory attributes can't be removed from the document. This
method should be overwritten by derived class.

View File

@@ -55,7 +55,7 @@ def validate_sort_dir(sort_dir):
def validate_search_filters(filters, allowed_fields):
# Very leightweight validation for now
# Very lightweight validation for now
# todo: improve this (e.g. https://www.parse.com/docs/rest/guide/#queries)
for filter_name in filters.keys():
if filter_name not in allowed_fields:

99
watcher/api/scheduling.py Normal file
View File

@@ -0,0 +1,99 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 Servionica
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
from oslo_config import cfg
from oslo_log import log
from oslo_utils import timeutils
import six
from watcher._i18n import _LW
from watcher.common import context as watcher_context
from watcher.common import scheduling
from watcher import notifications
from watcher import objects
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class APISchedulingService(scheduling.BackgroundSchedulerService):
def __init__(self, gconfig=None, **options):
self.services_status = {}
gconfig = gconfig or {}
super(APISchedulingService, self).__init__(gconfig, **options)
def get_services_status(self, context):
services = objects.service.Service.list(context)
for service in services:
result = self.get_service_status(context, service.id)
if service.id not in self.services_status.keys():
self.services_status[service.id] = result
continue
if self.services_status[service.id] != result:
self.services_status[service.id] = result
notifications.service.send_service_update(context, service,
state=result)
def get_service_status(self, context, service_id):
service = objects.Service.get(context, service_id)
last_heartbeat = (service.last_seen_up or service.updated_at
or service.created_at)
if isinstance(last_heartbeat, six.string_types):
# NOTE(russellb) If this service came in over rpc via
# conductor, then the timestamp will be a string and needs to be
# converted back to a datetime.
last_heartbeat = timeutils.parse_strtime(last_heartbeat)
else:
# Objects have proper UTC timezones, but the timeutils comparison
# below does not (and will fail)
last_heartbeat = last_heartbeat.replace(tzinfo=None)
elapsed = timeutils.delta_seconds(last_heartbeat, timeutils.utcnow())
is_up = abs(elapsed) <= CONF.service_down_time
if not is_up:
LOG.warning(_LW('Seems service %(name)s on host %(host)s is down. '
'Last heartbeat was %(lhb)s. '
'Elapsed time is %(el)s'),
{'name': service.name,
'host': service.host,
'lhb': str(last_heartbeat), 'el': str(elapsed)})
return objects.service.ServiceStatus.FAILED
return objects.service.ServiceStatus.ACTIVE
def start(self):
"""Start service."""
context = watcher_context.make_context(is_admin=True)
self.add_job(self.get_services_status, name='service_status',
trigger='interval', jobstore='default', args=[context],
next_run_time=datetime.datetime.now(), seconds=60)
super(APISchedulingService, self).start()
def stop(self):
"""Stop service."""
self.shutdown()
def wait(self):
"""Wait for service to complete."""
def reset(self):
"""Reset service.
Called in case service running in daemon mode receives SIGHUP.
"""

View File

@@ -20,6 +20,7 @@ from oslo_log import log
from watcher.applier.action_plan import base
from watcher.applier import default
from watcher.common import exception
from watcher import notifications
from watcher import objects
from watcher.objects import fields
@@ -39,6 +40,9 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
try:
action_plan = objects.ActionPlan.get_by_uuid(
self.ctx, self.action_plan_uuid, eager=True)
if action_plan.state == objects.action_plan.State.CANCELLED:
self._update_action_from_pending_to_cancelled()
return
action_plan.state = objects.action_plan.State.ONGOING
action_plan.save()
notifications.action_plan.send_action_notification(
@@ -54,6 +58,12 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
self.ctx, action_plan,
action=fields.NotificationAction.EXECUTION,
phase=fields.NotificationPhase.END)
except exception.ActionPlanCancelled as e:
LOG.exception(e)
action_plan.state = objects.action_plan.State.CANCELLED
self._update_action_from_pending_to_cancelled()
except Exception as e:
LOG.exception(e)
action_plan.state = objects.action_plan.State.FAILED
@@ -64,3 +74,12 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
phase=fields.NotificationPhase.ERROR)
finally:
action_plan.save()
def _update_action_from_pending_to_cancelled(self):
filters = {'action_plan_uuid': self.action_plan_uuid,
'state': objects.action.State.PENDING}
actions = objects.Action.list(self.ctx, filters=filters, eager=True)
if actions:
for a in actions:
a.state = objects.action.State.CANCELLED
a.save()
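`_update_action_from_pending_to_cancelled` above flips every still-PENDING action of a cancelled plan to CANCELLED. A minimal stand-in (the `Action` class here only models the state field, not watcher's `objects.Action`):

```python
# Sketch of _update_action_from_pending_to_cancelled: when a plan is
# cancelled before execution, each still-PENDING action is cancelled;
# actions in other states are left untouched.
class Action:
    def __init__(self, state):
        self.state = state

def cancel_pending(actions):
    for a in actions:
        if a.state == "PENDING":
            a.state = "CANCELLED"
    return actions

plan_actions = cancel_pending(
    [Action("SUCCEEDED"), Action("PENDING"), Action("PENDING")])
```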

View File

@@ -32,6 +32,9 @@ class BaseAction(loadable.Loadable):
# watcher dashboard and will be nested in input_parameters
RESOURCE_ID = 'resource_id'
# Add action class name to the list, if implementing abort.
ABORT_TRUE = ['Sleep', 'Nop']
def __init__(self, config, osc=None):
"""Constructor
@@ -111,7 +114,7 @@ class BaseAction(loadable.Loadable):
def post_condition(self):
"""Hook: called after the execution of an action
This function is called regardless of whether an action succeded or
This function is called regardless of whether an action succeeded or
not. So you can use it to perform cleanup operations.
"""
raise NotImplementedError()
@@ -129,3 +132,11 @@ class BaseAction(loadable.Loadable):
def validate_parameters(self):
self.schema(self.input_parameters)
return True
@abc.abstractmethod
def get_description(self):
"""Description of the action"""
raise NotImplementedError()
def check_abort(self):
return bool(self.__class__.__name__ in self.ABORT_TRUE)
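The opt-in abort mechanism is just a class-name whitelist: only actions listed in `ABORT_TRUE` may be killed mid-execution. A trivial sketch with stand-in action classes:

```python
# Sketch of check_abort(): an action supports abort only when its class
# name is listed in ABORT_TRUE. Classes are stand-ins for watcher's
# action plugins.
class BaseAction:
    ABORT_TRUE = ['Sleep', 'Nop']

    def check_abort(self):
        return self.__class__.__name__ in self.ABORT_TRUE

class Sleep(BaseAction):
    pass

class Migrate(BaseAction):
    pass
```

This matches the diff: `Sleep` and `Nop` gain real `abort()` implementations in this patch, while `Migrate.abort()` is still a TODO, so migration actions are never killed.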

View File

@@ -101,3 +101,9 @@ class ChangeNovaServiceState(base.BaseAction):
def post_condition(self):
pass
def get_description(self):
"""Description of the action"""
return ("Disables or enables the nova-compute service."
"A disabled nova-compute service can not be selected "
"by the nova for future deployment of new server.")

View File

@@ -164,6 +164,10 @@ class Migrate(base.BaseAction):
def revert(self):
return self.migrate(destination=self.source_node)
def abort(self):
# TODO(adisky): implement abort for migration
LOG.warning("Abort for migration not implemented")
def pre_condition(self):
# TODO(jed): check if the instance exists / check if the instance is on
# the source_node
@@ -172,3 +176,7 @@ class Migrate(base.BaseAction):
def post_condition(self):
# TODO(jed): check extra parameters (network response, etc.)
pass
def get_description(self):
"""Description of the action"""
return "Moving a VM instance from source_node to destination_node"

View File

@@ -23,7 +23,6 @@ import voluptuous
from watcher.applier.actions import base
LOG = log.getLogger(__name__)
@@ -65,3 +64,10 @@ class Nop(base.BaseAction):
def post_condition(self):
pass
def get_description(self):
"""Description of the action"""
return "Logging a NOP message"
def abort(self):
LOG.debug("Abort action NOP")

View File

@@ -104,3 +104,7 @@ class Resize(base.BaseAction):
def post_condition(self):
# TODO(jed): check extra parameters (network response, etc.)
pass
def get_description(self):
"""Description of the action"""
return "Resize a server with specified flavor."

View File

@@ -66,3 +66,10 @@ class Sleep(base.BaseAction):
def post_condition(self):
pass
def get_description(self):
"""Description of the action"""
return "Wait for a given interval in seconds."
def abort(self):
LOG.debug("Abort action sleep")

3
watcher/applier/default.py Normal file → Executable file
View File

@@ -58,5 +58,6 @@ class DefaultApplier(base.BaseApplier):
LOG.debug("Executing action plan %s ", action_plan_uuid)
filters = {'action_plan_uuid': action_plan_uuid}
actions = objects.Action.list(self.context, filters=filters)
actions = objects.Action.list(self.context, filters=filters,
eager=True)
return self.engine.execute(actions)

View File

@@ -17,13 +17,17 @@
#
import abc
import six
import time
import eventlet
from oslo_log import log
import six
from taskflow import task as flow_task
from watcher.applier.actions import factory
from watcher.common import clients
from watcher.common import exception
from watcher.common.loader import loadable
from watcher import notifications
from watcher import objects
@@ -32,6 +36,9 @@ from watcher.objects import fields
LOG = log.getLogger(__name__)
CANCEL_STATE = [objects.action_plan.State.CANCELLING,
objects.action_plan.State.CANCELLED]
@six.add_metaclass(abc.ABCMeta)
class BaseWorkFlowEngine(loadable.Loadable):
@@ -81,6 +88,10 @@ class BaseWorkFlowEngine(loadable.Loadable):
def notify(self, action, state):
db_action = objects.Action.get_by_uuid(self.context, action.uuid,
eager=True)
if (db_action.state in [objects.action.State.CANCELLING,
objects.action.State.CANCELLED] and
state == objects.action.State.SUCCEEDED):
return
db_action.state = state
db_action.save()
@@ -122,16 +133,34 @@ class BaseTaskFlowActionContainer(flow_task.Task):
def do_post_execute(self):
raise NotImplementedError()
@abc.abstractmethod
def do_revert(self):
raise NotImplementedError()
@abc.abstractmethod
def do_abort(self, *args, **kwargs):
raise NotImplementedError()
# NOTE(alexchadin): taskflow does 3 method calls (pre_execute, execute,
# post_execute) independently. We want to support notifications in base
# class, so child's methods should be named with `do_` prefix and wrapped.
def pre_execute(self):
try:
# NOTE(adisky): check the state of the action plan before starting the
# next action; if the action plan is cancelled, raise the exception
# so that taskflow does not schedule further actions.
action_plan = objects.ActionPlan.get_by_id(
self.engine.context, self._db_action.action_plan_id)
if action_plan.state in CANCEL_STATE:
raise exception.ActionPlanCancelled(uuid=action_plan.uuid)
self.do_pre_execute()
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.START)
except exception.ActionPlanCancelled as e:
LOG.exception(e)
raise
except Exception as e:
LOG.exception(e)
self.engine.notify(self._db_action, objects.action.State.FAILED)
@@ -142,22 +171,59 @@ class BaseTaskFlowActionContainer(flow_task.Task):
priority=fields.NotificationPriority.ERROR)
def execute(self, *args, **kwargs):
def _do_execute_action(*args, **kwargs):
try:
self.do_execute(*args, **kwargs)
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.END)
except Exception as e:
LOG.exception(e)
LOG.error('The workflow engine has failed'
'to execute the action: %s', self.name)
self.engine.notify(self._db_action,
objects.action.State.FAILED)
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.ERROR,
priority=fields.NotificationPriority.ERROR)
raise
# NOTE: spawn a new thread for action execution, so that if the
# action plan is cancelled the workflow engine will not wait for the
# action execution to finish
et = eventlet.spawn(_do_execute_action, *args, **kwargs)
# NOTE: check the state of the action plan periodically, so that if
# the action is finished or the action plan is cancelled we can exit
# from here.
while True:
action_object = objects.Action.get_by_uuid(
self.engine.context, self._db_action.uuid, eager=True)
action_plan_object = objects.ActionPlan.get_by_id(
self.engine.context, action_object.action_plan_id)
if (action_object.state in [objects.action.State.SUCCEEDED,
objects.action.State.FAILED] or
action_plan_object.state in CANCEL_STATE):
break
time.sleep(2)
try:
self.do_execute(*args, **kwargs)
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.END)
# NOTE: kill the action execution thread if the action plan is
# cancelled; in all other cases wait for the result from the action
# execution thread.
# Not all actions support abort operations, so kill only those
# actions which support abort operations
abort = self.action.check_abort()
if (action_plan_object.state in CANCEL_STATE and abort):
et.kill()
et.wait()
# NOTE: catch the greenlet exit exception due to thread kill,
# taskflow will call revert for the action,
# we will redirect it to abort.
except eventlet.greenlet.GreenletExit:
raise exception.ActionPlanCancelled(uuid=action_plan_object.uuid)
except Exception as e:
LOG.exception(e)
LOG.error('The workflow engine has failed '
'to execute the action: %s', self.name)
self.engine.notify(self._db_action, objects.action.State.FAILED)
notifications.action.send_execution_notification(
self.engine.context, self._db_action,
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.ERROR,
priority=fields.NotificationPriority.ERROR)
raise
def post_execute(self):
@@ -171,3 +237,24 @@ class BaseTaskFlowActionContainer(flow_task.Task):
fields.NotificationAction.EXECUTION,
fields.NotificationPhase.ERROR,
priority=fields.NotificationPriority.ERROR)
def revert(self, *args, **kwargs):
action_plan = objects.ActionPlan.get_by_id(
self.engine.context, self._db_action.action_plan_id, eager=True)
# NOTE: check whether the revert was caused by cancelling the action
# plan or by some other exception that occurred during action plan
# execution; if it was some other exception, keep the flow intact.
if action_plan.state not in CANCEL_STATE:
self.do_revert()
action_object = objects.Action.get_by_uuid(
self.engine.context, self._db_action.uuid, eager=True)
if action_object.state == objects.action.State.ONGOING:
action_object.state = objects.action.State.CANCELLING
action_object.save()
self.abort()
if action_object.state == objects.action.State.PENDING:
action_object.state = objects.action.State.CANCELLED
action_object.save()
def abort(self, *args, **kwargs):
self.do_abort(*args, **kwargs)
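The cancel-aware `revert()` above can be summarized as: a revert triggered by anything other than cancellation runs the normal `do_revert()` path; a cancellation-triggered revert aborts ONGOING actions (moving them to CANCELLING first) and marks PENDING ones CANCELLED. The diff does not show full indentation, so the exact branching in this sketch is an assumption:

```python
# Hedged reconstruction of the cancel-aware revert() logic; Container is
# a stand-in for BaseTaskFlowActionContainer with only state bookkeeping.
CANCEL_STATE = ("CANCELLING", "CANCELLED")

class Container:
    def __init__(self, plan_state, action_state):
        self.plan_state = plan_state
        self.action_state = action_state
        self.calls = []

    def do_revert(self):
        self.calls.append("revert")

    def do_abort(self):
        self.calls.append("abort")

    def revert(self):
        if self.plan_state not in CANCEL_STATE:
            # ordinary failure: keep taskflow's revert flow intact
            self.do_revert()
            return
        if self.action_state == "ONGOING":
            self.action_state = "CANCELLING"
            self.do_abort()
        elif self.action_state == "PENDING":
            self.action_state = "CANCELLED"
```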

View File

@@ -19,6 +19,7 @@ from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log
from taskflow import engines
from taskflow import exceptions as tf_exception
from taskflow.patterns import graph_flow as gf
from taskflow import task as flow_task
@@ -90,6 +91,15 @@ class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
return flow
except exception.ActionPlanCancelled as e:
raise
except tf_exception.WrappedFailure as e:
if e.check("watcher.common.exception.ActionPlanCancelled"):
raise exception.ActionPlanCancelled
else:
raise exception.WorkflowExecutionException(error=e)
except Exception as e:
raise exception.WorkflowExecutionException(error=e)
@@ -108,14 +118,20 @@ class TaskFlowActionContainer(base.BaseTaskFlowActionContainer):
def do_execute(self, *args, **kwargs):
LOG.debug("Running action: %s", self.name)
self.action.execute()
self.engine.notify(self._db_action, objects.action.State.SUCCEEDED)
# NOTE: If the result is False, set the action state to FAILED
result = self.action.execute()
if result is False:
self.engine.notify(self._db_action,
objects.action.State.FAILED)
else:
self.engine.notify(self._db_action,
objects.action.State.SUCCEEDED)
def do_post_execute(self):
LOG.debug("Post-condition action: %s", self.name)
self.action.post_condition()
def revert(self, *args, **kwargs):
def do_revert(self, *args, **kwargs):
LOG.warning("Revert action: %s", self.name)
try:
# TODO(jed): do we need to update the states in case of failure?
@@ -124,6 +140,15 @@ class TaskFlowActionContainer(base.BaseTaskFlowActionContainer):
LOG.exception(e)
LOG.critical("Oops! We need a disaster recover plan.")
def do_abort(self, *args, **kwargs):
LOG.warning("Aborting action: %s", self.name)
try:
self.action.abort()
self.engine.notify(self._db_action, objects.action.State.CANCELLED)
except Exception as e:
self.engine.notify(self._db_action, objects.action.State.FAILED)
LOG.exception(e)
class TaskFlowNop(flow_task.Task):
"""This class is used in case of the workflow have only one Action.

View File

@@ -22,6 +22,7 @@ import sys
from oslo_config import cfg
from oslo_log import log as logging
from watcher.api import scheduling
from watcher.common import service
from watcher import conf
@@ -45,5 +46,8 @@ def main():
LOG.info('serving on %(protocol)s://%(host)s:%(port)s' %
dict(protocol=protocol, host=host, port=port))
api_schedule = scheduling.APISchedulingService()
api_schedule.start()
launcher = service.launch(CONF, server, workers=server.workers)
launcher.wait()

View File

@@ -44,10 +44,10 @@ def main():
syncer.sync()
de_service = watcher_service.Service(manager.DecisionEngineManager)
bg_schedulder_service = scheduling.DecisionEngineSchedulingService()
bg_scheduler_service = scheduling.DecisionEngineSchedulingService()
# Only 1 process
launcher = watcher_service.launch(CONF, de_service)
launcher.launch_service(bg_schedulder_service)
launcher.launch_service(bg_scheduler_service)
launcher.wait()

14
watcher/common/clients.py Normal file → Executable file
View File

@@ -14,6 +14,7 @@ from ceilometerclient import client as ceclient
from cinderclient import client as ciclient
from glanceclient import client as glclient
from gnocchiclient import client as gnclient
from ironicclient import client as irclient
from keystoneauth1 import loading as ka_loading
from keystoneclient import client as keyclient
from monascaclient import client as monclient
@@ -45,6 +46,7 @@ class OpenStackClients(object):
self._ceilometer = None
self._monasca = None
self._neutron = None
self._ironic = None
def _get_keystone_session(self):
auth = ka_loading.load_auth_from_conf_options(CONF,
@@ -188,3 +190,15 @@ class OpenStackClients(object):
session=self.session)
self._neutron.format = 'json'
return self._neutron
@exception.wrap_keystone_exception
def ironic(self):
if self._ironic:
return self._ironic
ironicclient_version = self._get_client_option('ironic', 'api_version')
endpoint_type = self._get_client_option('ironic', 'endpoint_type')
self._ironic = irclient.get_client(ironicclient_version,
ironic_url=endpoint_type,
session=self.session)
return self._ironic
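The new `ironic()` accessor follows the same lazy-initialization pattern as the other clients: build the client once on first use, then return the cached instance. A dependency-free sketch of the pattern, with a hypothetical `factory` standing in for `irclient.get_client`:

```python
class LazyClients:
    """Sketch of the lazy, memoized client accessors in OpenStackClients."""

    def __init__(self, factory):
        self._factory = factory
        self._ironic = None

    def ironic(self):
        if self._ironic is None:
            # built only on first access; reused afterwards
            self._ironic = self._factory()
        return self._ironic
```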

View File

@@ -274,6 +274,10 @@ class ActionPlanReferenced(Invalid):
"multiple actions")
class ActionPlanCancelled(WatcherException):
msg_fmt = _("Action Plan with UUID %(uuid)s is cancelled by user")
class ActionPlanIsOngoing(Conflict):
msg_fmt = _("Action Plan %(action_plan)s is currently running.")

View File

@@ -17,6 +17,8 @@ from oslo_config import cfg
from oslo_log import log
import oslo_messaging as messaging
from oslo_messaging.rpc import dispatcher
from watcher.common import context as watcher_context
from watcher.common import exception
@@ -128,12 +130,14 @@ def get_client(target, version_cap=None, serializer=None):
def get_server(target, endpoints, serializer=None):
assert TRANSPORT is not None
access_policy = dispatcher.DefaultRPCAccessPolicy
serializer = RequestContextSerializer(serializer)
return messaging.get_rpc_server(TRANSPORT,
target,
endpoints,
executor='eventlet',
serializer=serializer)
serializer=serializer,
access_policy=access_policy)
def get_notifier(publisher_id):

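Both RPC servers now pass `access_policy=DefaultRPCAccessPolicy` explicitly. Under this oslo.messaging policy, endpoint methods whose names start with an underscore are not callable over RPC. A rough stdlib sketch of the rule — the real class lives in `oslo_messaging.rpc.dispatcher` and takes the endpoint as well:

```python
class DefaultRPCAccessPolicy:
    """Sketch: expose every endpoint method except _private ones."""

    @staticmethod
    def is_allowed(method_name):
        # methods prefixed with '_' are treated as private to the endpoint
        return not method_name.startswith('_')
```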
View File

@@ -28,6 +28,8 @@ from oslo_reports import opts as gmr_opts
from oslo_service import service
from oslo_service import wsgi
from oslo_messaging.rpc import dispatcher
from watcher._i18n import _
from watcher.api import app
from watcher.common import config
@@ -110,16 +112,19 @@ class WSGIService(service.ServiceBase):
class ServiceHeartbeat(scheduling.BackgroundSchedulerService):
service_name = None
def __init__(self, gconfig=None, service_name=None, **kwargs):
gconfig = gconfig or {}
super(ServiceHeartbeat, self).__init__(gconfig, **kwargs)
self.service_name = service_name
ServiceHeartbeat.service_name = service_name
self.context = context.make_context()
self.send_beat()
def send_beat(self):
host = CONF.host
watcher_list = objects.Service.list(
self.context, filters={'name': self.service_name,
self.context, filters={'name': ServiceHeartbeat.service_name,
'host': host})
if watcher_list:
watcher_service = watcher_list[0]
@@ -127,7 +132,7 @@ class ServiceHeartbeat(scheduling.BackgroundSchedulerService):
watcher_service.save()
else:
watcher_service = objects.Service(self.context)
watcher_service.name = self.service_name
watcher_service.name = ServiceHeartbeat.service_name
watcher_service.host = host
watcher_service.create()
@@ -135,6 +140,10 @@ class ServiceHeartbeat(scheduling.BackgroundSchedulerService):
self.add_job(self.send_beat, 'interval', seconds=60,
next_run_time=datetime.datetime.now())
@classmethod
def get_service_name(cls):
return CONF.host, cls.service_name
def start(self):
"""Start service."""
self.add_heartbeat_job()
@@ -168,6 +177,13 @@ class Service(service.ServiceBase):
self.conductor_topic = self.manager.conductor_topic
self.notification_topics = self.manager.notification_topics
self.heartbeat = None
self.service_name = self.manager.service_name
if self.service_name:
self.heartbeat = ServiceHeartbeat(
service_name=self.manager.service_name)
self.conductor_endpoints = [
ep(self) for ep in self.manager.conductor_endpoints
]
@@ -183,8 +199,6 @@ class Service(service.ServiceBase):
self.conductor_topic_handler = None
self.notification_handler = None
self.heartbeat = None
if self.conductor_topic and self.conductor_endpoints:
self.conductor_topic_handler = self.build_topic_handler(
self.conductor_topic, self.conductor_endpoints)
@@ -192,10 +206,6 @@ class Service(service.ServiceBase):
self.notification_handler = self.build_notification_handler(
self.notification_topics, self.notification_endpoints
)
self.service_name = self.manager.service_name
if self.service_name:
self.heartbeat = ServiceHeartbeat(
service_name=self.manager.service_name)
@property
def transport(self):
@@ -225,6 +235,7 @@ class Service(service.ServiceBase):
self.conductor_client = c
def build_topic_handler(self, topic_name, endpoints=()):
access_policy = dispatcher.DefaultRPCAccessPolicy
serializer = rpc.RequestContextSerializer(rpc.JsonPayloadSerializer())
target = om.Target(
topic=topic_name,
@@ -234,7 +245,8 @@ class Service(service.ServiceBase):
)
return om.get_rpc_server(
self.transport, target, endpoints,
executor='eventlet', serializer=serializer)
executor='eventlet', serializer=serializer,
access_policy=access_policy)
def build_notification_handler(self, topic_names, endpoints=()):
serializer = rpc.RequestContextSerializer(rpc.JsonPayloadSerializer())
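The heartbeat change stores `service_name` on the class rather than the instance, so the new `get_service_name()` classmethod (consumed later by the job store) can read it without a reference to the running heartbeat instance. A minimal sketch of why the class attribute matters — the host value is hard-coded here for illustration:

```python
class ServiceHeartbeat:
    service_name = None

    def __init__(self, service_name=None):
        # written to the class, not self, so classmethods can see it later
        ServiceHeartbeat.service_name = service_name

    @classmethod
    def get_service_name(cls):
        return 'myhost', cls.service_name  # host hard-coded for the sketch
```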

2
watcher/conf/__init__.py Normal file → Executable file
View File

@@ -29,6 +29,7 @@ from watcher.conf import decision_engine
from watcher.conf import exception
from watcher.conf import glance_client
from watcher.conf import gnocchi_client
from watcher.conf import ironic_client
from watcher.conf import monasca_client
from watcher.conf import neutron_client
from watcher.conf import nova_client
@@ -56,3 +57,4 @@ cinder_client.register_opts(CONF)
ceilometer_client.register_opts(CONF)
neutron_client.register_opts(CONF)
clients_auth.register_opts(CONF)
ironic_client.register_opts(CONF)

View File

@@ -23,13 +23,13 @@ cinder_client = cfg.OptGroup(name='cinder_client',
CINDER_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='2',
default='3',
help='Version of Cinder API to use in cinderclient.'),
cfg.StrOpt('endpoint_type',
default='internalURL',
default='publicURL',
help='Type of endpoint to use in cinderclient.'
'Supported values: internalURL, publicURL, adminURL'
'The default is internalURL.')]
'The default is publicURL.')]
def register_opts(conf):

41
watcher/conf/ironic_client.py Executable file
View File

@@ -0,0 +1,41 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corp
#
# Authors: Prudhvi Rao Shedimbi <prudhvi.rao.shedimbi@intel.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
ironic_client = cfg.OptGroup(name='ironic_client',
title='Configuration Options for Ironic')
IRONIC_CLIENT_OPTS = [
cfg.StrOpt('api_version',
default='1',
help='Version of Ironic API to use in ironicclient.'),
cfg.StrOpt('endpoint_type',
default='internalURL',
help='Type of endpoint to use in ironicclient.'
'Supported values: internalURL, publicURL, adminURL'
'The default is internalURL.')]
def register_opts(conf):
conf.register_group(ironic_client)
conf.register_opts(IRONIC_CLIENT_OPTS, group=ironic_client)
def list_opts():
return [('ironic_client', IRONIC_CLIENT_OPTS)]

4
watcher/conf/nova_client.py Normal file → Executable file
View File

@@ -26,10 +26,10 @@ NOVA_CLIENT_OPTS = [
default='2',
help='Version of Nova API to use in novaclient.'),
cfg.StrOpt('endpoint_type',
default='internalURL',
default='publicURL',
help='Type of endpoint to use in novaclient.'
'Supported values: internalURL, publicURL, adminURL'
'The default is internalURL.')]
'The default is publicURL.')]
def register_opts(conf):

View File

@@ -32,6 +32,43 @@ class CeilometerHelper(object):
self.osc = osc if osc else clients.OpenStackClients()
self.ceilometer = self.osc.ceilometer()
@staticmethod
def format_query(user_id, tenant_id, resource_id,
user_ids, tenant_ids, resource_ids):
query = []
def query_append(query, _id, _ids, field):
if _id:
_ids = [_id]
for x_id in _ids:
query.append({"field": field, "op": "eq", "value": x_id})
query_append(query, user_id, (user_ids or []), "user_id")
query_append(query, tenant_id, (tenant_ids or []), "project_id")
query_append(query, resource_id, (resource_ids or []), "resource_id")
return query
def _timestamps(self, start_time, end_time):
def _format_timestamp(_time):
if _time:
if isinstance(_time, datetime.datetime):
return _time.isoformat()
return _time
return None
start_timestamp = _format_timestamp(start_time)
end_timestamp = _format_timestamp(end_time)
if ((start_timestamp is not None) and (end_timestamp is not None) and
(timeutils.parse_isotime(start_timestamp) >
timeutils.parse_isotime(end_timestamp))):
raise exception.Invalid(
_("Invalid query: %(start_time)s > %(end_time)s") % dict(
start_time=start_timestamp, end_time=end_timestamp))
return start_timestamp, end_timestamp
def build_query(self, user_id=None, tenant_id=None, resource_id=None,
user_ids=None, tenant_ids=None, resource_ids=None,
start_time=None, end_time=None):
@@ -49,45 +86,11 @@ class CeilometerHelper(object):
:param end_time: datetime until which measurements should be collected
"""
user_ids = user_ids or []
tenant_ids = tenant_ids or []
resource_ids = resource_ids or []
query = self.format_query(user_id, tenant_id, resource_id,
user_ids, tenant_ids, resource_ids)
query = []
if user_id:
user_ids = [user_id]
for u_id in user_ids:
query.append({"field": "user_id", "op": "eq", "value": u_id})
if tenant_id:
tenant_ids = [tenant_id]
for t_id in tenant_ids:
query.append({"field": "project_id", "op": "eq", "value": t_id})
if resource_id:
resource_ids = [resource_id]
for r_id in resource_ids:
query.append({"field": "resource_id", "op": "eq", "value": r_id})
start_timestamp = None
end_timestamp = None
if start_time:
start_timestamp = start_time
if isinstance(start_time, datetime.datetime):
start_timestamp = start_time.isoformat()
if end_time:
end_timestamp = end_time
if isinstance(end_time, datetime.datetime):
end_timestamp = end_time.isoformat()
if (start_timestamp and end_timestamp and
timeutils.parse_isotime(start_timestamp) >
timeutils.parse_isotime(end_timestamp)):
raise exception.Invalid(
_("Invalid query: %(start_time)s > %(end_time)s") % dict(
start_time=start_timestamp, end_time=end_timestamp))
start_timestamp, end_timestamp = self._timestamps(start_time,
end_time)
if start_timestamp:
query.append({"field": "timestamp", "op": "ge",

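The refactor extracts the repeated id handling into `format_query`, which turns single ids and id lists into ceilometer filter triples. A self-contained version of that helper, matching the diff above:

```python
def format_query(user_id, tenant_id, resource_id,
                 user_ids, tenant_ids, resource_ids):
    """Build a ceilometer query: one eq-filter dict per id."""
    query = []

    def query_append(_id, _ids, field):
        if _id:
            _ids = [_id]  # a single id overrides the list
        for x_id in _ids:
            query.append({"field": field, "op": "eq", "value": x_id})

    query_append(user_id, user_ids or [], "user_id")
    query_append(tenant_id, tenant_ids or [], "project_id")
    query_append(resource_id, resource_ids or [], "resource_id")
    return query
```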
View File

@@ -59,7 +59,7 @@ class GnocchiHelper(object):
:param start_time: Start datetime from which metrics will be used
:param stop_time: End datetime from which metrics will be used
:param granularity: frequency of marking metric point, in seconds
:param aggregation: Should be chosen in accrodance with policy
:param aggregation: Should be chosen in accordance with policy
aggregations
:return: value of aggregated metric
"""

View File

@@ -688,7 +688,7 @@ class BaseConnection(object):
def update_efficacy_indicator(self, efficacy_indicator_id, values):
"""Update properties of an efficacy indicator.
:param efficacy_indicator_uuid: The UUID of an efficacy indicator
:param efficacy_indicator_id: The ID of an efficacy indicator
:returns: An efficacy indicator
:raises: :py:class:`~.EfficacyIndicatorNotFound`
:raises: :py:class:`~.Invalid`

View File

@@ -0,0 +1,33 @@
"""Add apscheduler_jobs table to store background jobs
Revision ID: 0f6042416884
Revises: 001
Create Date: 2017-03-24 11:21:29.036532
"""
from alembic import op
import sqlalchemy as sa
from watcher.db.sqlalchemy import models
# revision identifiers, used by Alembic.
revision = '0f6042416884'
down_revision = '001'
def upgrade():
op.create_table(
'apscheduler_jobs',
sa.Column('id', sa.Unicode(191, _warn_on_bytestring=False),
nullable=False),
sa.Column('next_run_time', sa.Float(25), index=True),
sa.Column('job_state', sa.LargeBinary, nullable=False),
sa.Column('service_id', sa.Integer(), nullable=False),
sa.Column('tag', models.JSONEncodedDict(), nullable=True),
sa.PrimaryKeyConstraint('id'),
sa.ForeignKeyConstraint(['service_id'], ['services.id'])
)
def downgrade():
op.drop_table('apscheduler_jobs')

View File

@@ -649,8 +649,7 @@ class Connection(api.BaseConnection):
query = self._set_eager_options(models.Audit, query)
query = self._add_audits_filters(query, filters)
if not context.show_deleted:
query = query.filter(
~(models.Audit.state == objects.audit.State.DELETED))
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Audit, limit, marker,
sort_key, sort_dir, query)
@@ -736,8 +735,7 @@ class Connection(api.BaseConnection):
query = self._set_eager_options(models.Action, query)
query = self._add_actions_filters(query, filters)
if not context.show_deleted:
query = query.filter(
~(models.Action.state == objects.action.State.DELETED))
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Action, limit, marker,
sort_key, sort_dir, query)
@@ -817,9 +815,7 @@ class Connection(api.BaseConnection):
query = self._set_eager_options(models.ActionPlan, query)
query = self._add_action_plans_filters(query, filters)
if not context.show_deleted:
query = query.filter(
~(models.ActionPlan.state ==
objects.action_plan.State.DELETED))
query = query.filter_by(deleted_at=None)
return _paginate_query(models.ActionPlan, limit, marker,
sort_key, sort_dir, query)

View File

@@ -0,0 +1,112 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 Servionica LTD
#
# Authors: Alexander Chadin <a.chadin@servionica.ru>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_serialization import jsonutils
from apscheduler.jobstores.base import ConflictingIdError
from apscheduler.jobstores import sqlalchemy
from apscheduler.util import datetime_to_utc_timestamp
from apscheduler.util import maybe_ref
from watcher.common import context
from watcher.common import service
from watcher import objects
try:
import cPickle as pickle
except ImportError: # pragma: nocover
import pickle
from sqlalchemy import Table, MetaData, select, and_
from sqlalchemy.exc import IntegrityError
class WatcherJobStore(sqlalchemy.SQLAlchemyJobStore):
"""Stores jobs in a database table using SQLAlchemy.
The table will be created if it doesn't exist in the database.
Plugin alias: ``sqlalchemy``
:param str url: connection string
:param engine: an SQLAlchemy Engine to use instead of creating a new
one based on ``url``
:param str tablename: name of the table to store jobs in
:param metadata: a :class:`~sqlalchemy.MetaData` instance to use instead of
creating a new one
:param int pickle_protocol: pickle protocol level to use
(for serialization), defaults to the highest available
:param dict tag: tag description
"""
def __init__(self, url=None, engine=None, tablename='apscheduler_jobs',
metadata=None, pickle_protocol=pickle.HIGHEST_PROTOCOL,
tag=None):
super(WatcherJobStore, self).__init__(url, engine, tablename,
metadata, pickle_protocol)
metadata = maybe_ref(metadata) or MetaData()
self.jobs_t = Table(tablename, metadata, autoload=True,
autoload_with=engine)
service_ident = service.ServiceHeartbeat.get_service_name()
self.tag = tag or {'host': service_ident[0], 'name': service_ident[1]}
self.service_id = objects.Service.list(context=context.make_context(),
filters=self.tag)[0].id
def start(self, scheduler, alias):
# This calls the 'start' method of SQLAlchemyJobStore's parent class
super(self.__class__.__bases__[0], self).start(scheduler, alias)
def add_job(self, job):
insert = self.jobs_t.insert().values(**{
'id': job.id,
'next_run_time': datetime_to_utc_timestamp(job.next_run_time),
'job_state': pickle.dumps(job.__getstate__(),
self.pickle_protocol),
'service_id': self.service_id,
'tag': jsonutils.dumps(self.tag)
})
try:
self.engine.execute(insert)
except IntegrityError:
raise ConflictingIdError(job.id)
def get_all_jobs(self):
jobs = self._get_jobs(self.jobs_t.c.tag == jsonutils.dumps(self.tag))
self._fix_paused_jobs_sorting(jobs)
return jobs
def _get_jobs(self, *conditions):
jobs = []
conditions += (self.jobs_t.c.service_id == self.service_id,)
selectable = select(
[self.jobs_t.c.id, self.jobs_t.c.job_state, self.jobs_t.c.tag]
).order_by(self.jobs_t.c.next_run_time).where(and_(*conditions))
failed_job_ids = set()
for row in self.engine.execute(selectable):
try:
jobs.append(self._reconstitute_job(row.job_state))
except Exception:
self._logger.exception(
'Unable to restore job "%s" -- removing it', row.id)
failed_job_ids.add(row.id)
# Remove all the jobs we failed to restore
if failed_job_ids:
delete = self.jobs_t.delete().where(
self.jobs_t.c.id.in_(failed_job_ids))
self.engine.execute(delete)
return jobs

View File

@@ -24,6 +24,7 @@ from oslo_log import log
from watcher.applier import rpcapi
from watcher.common import exception
from watcher.common import service
from watcher.decision_engine.planner import manager as planner_manager
from watcher.decision_engine.strategy.context import default as default_context
from watcher import notifications
@@ -34,6 +35,7 @@ LOG = log.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
@six.add_metaclass(service.Singleton)
class BaseAuditHandler(object):
@abc.abstractmethod
@@ -55,8 +57,9 @@ class BaseAuditHandler(object):
@six.add_metaclass(abc.ABCMeta)
class AuditHandler(BaseAuditHandler):
def __init__(self, messaging):
self._messaging = messaging
def __init__(self):
super(AuditHandler, self).__init__()
self._strategy_context = default_context.DefaultStrategyContext()
self._planner_manager = planner_manager.PlannerManager()
self._planner = None
@@ -67,10 +70,6 @@ class AuditHandler(BaseAuditHandler):
self._planner = self._planner_manager.load()
return self._planner
@property
def messaging(self):
return self._messaging
@property
def strategy_context(self):
return self._strategy_context
@@ -96,14 +95,12 @@ class AuditHandler(BaseAuditHandler):
phase=fields.NotificationPhase.ERROR)
raise
@staticmethod
def update_audit_state(audit, state):
def update_audit_state(self, audit, state):
LOG.debug("Update audit state: %s", state)
audit.state = state
audit.save()
@staticmethod
def check_ongoing_action_plans(request_context):
def check_ongoing_action_plans(self, request_context):
a_plan_filters = {'state': objects.action_plan.State.ONGOING}
ongoing_action_plans = objects.ActionPlan.list(
request_context, filters=a_plan_filters)

View File

@@ -20,30 +20,37 @@
import datetime
from apscheduler.schedulers import background
from apscheduler.jobstores import memory
from watcher.common import context
from watcher.common import scheduling
from watcher import conf
from watcher.db.sqlalchemy import api as sq_api
from watcher.db.sqlalchemy import job_store
from watcher.decision_engine.audit import base
from watcher import objects
from watcher import conf
CONF = conf.CONF
class ContinuousAuditHandler(base.AuditHandler):
def __init__(self, messaging):
super(ContinuousAuditHandler, self).__init__(messaging)
def __init__(self):
super(ContinuousAuditHandler, self).__init__()
self._scheduler = None
self.jobs = []
self._start()
self.context_show_deleted = context.RequestContext(is_admin=True,
show_deleted=True)
@property
def scheduler(self):
if self._scheduler is None:
self._scheduler = background.BackgroundScheduler()
self._scheduler = scheduling.BackgroundSchedulerService(
jobstores={
'default': job_store.WatcherJobStore(
engine=sq_api.get_engine()),
'memory': memory.MemoryJobStore()
}
)
return self._scheduler
def _is_audit_inactive(self, audit):
@@ -52,11 +59,9 @@ class ContinuousAuditHandler(base.AuditHandler):
if objects.audit.AuditStateTransitionManager().is_inactive(audit):
# if the audit isn't in an active state, its job must be removed to
# prevent an inactive audit from being used in the future.
job_to_delete = [job for job in self.jobs
if list(job.keys())[0] == audit.uuid][0]
self.jobs.remove(job_to_delete)
job_to_delete[audit.uuid].remove()
[job for job in self.scheduler.get_jobs()
if job.name == 'execute_audit' and
job.args[0].uuid == audit.uuid][0].remove()
return True
return False
@@ -76,7 +81,9 @@ class ContinuousAuditHandler(base.AuditHandler):
plan.save()
return solution
def execute_audit(self, audit, request_context):
@classmethod
def execute_audit(cls, audit, request_context):
self = cls()
if not self._is_audit_inactive(audit):
self.execute(audit, request_context)
@@ -90,22 +97,23 @@ class ContinuousAuditHandler(base.AuditHandler):
}
audits = objects.Audit.list(
audit_context, filters=audit_filters, eager=True)
scheduler_job_args = [job.args for job in self.scheduler.get_jobs()
if job.name == 'execute_audit']
scheduler_job_args = [
job.args for job in self.scheduler.get_jobs()
if job.name == 'execute_audit']
for audit in audits:
if audit.uuid not in [arg[0].uuid for arg in scheduler_job_args]:
job = self.scheduler.add_job(
self.scheduler.add_job(
self.execute_audit, 'interval',
args=[audit, audit_context],
seconds=audit.interval,
name='execute_audit',
next_run_time=datetime.datetime.now())
self.jobs.append({audit.uuid: job})
def _start(self):
def start(self):
self.scheduler.add_job(
self.launch_audits_periodically,
'interval',
seconds=CONF.watcher_decision_engine.continuous_audit_interval,
next_run_time=datetime.datetime.now())
next_run_time=datetime.datetime.now(),
jobstore='memory')
self.scheduler.start()

View File

@@ -19,6 +19,7 @@ from watcher import objects
class OneShotAuditHandler(base.AuditHandler):
def do_execute(self, audit, request_context):
# execute the strategy
solution = self.strategy_context.execute_strategy(

View File

@@ -15,7 +15,7 @@
# limitations under the License.
"""
An efficacy specfication is a contract that is associated to each :ref:`Goal
An efficacy specification is a contract that is associated to each :ref:`Goal
<goal_definition>` that defines the various :ref:`efficacy indicators
<efficacy_indicator_definition>` a strategy achieving the associated goal
should provide within its :ref:`solution <solution_definition>`. Indeed, each

View File

@@ -21,8 +21,9 @@ from concurrent import futures
from oslo_config import cfg
from oslo_log import log
from watcher.decision_engine.audit import continuous as continuous_handler
from watcher.decision_engine.audit import oneshot as oneshot_handler
from watcher.decision_engine.audit import continuous as c_handler
from watcher.decision_engine.audit import oneshot as o_handler
from watcher import objects
CONF = cfg.CONF
@@ -35,19 +36,13 @@ class AuditEndpoint(object):
self._messaging = messaging
self._executor = futures.ThreadPoolExecutor(
max_workers=CONF.watcher_decision_engine.max_workers)
self._oneshot_handler = oneshot_handler.OneShotAuditHandler(
self.messaging)
self._continuous_handler = continuous_handler.ContinuousAuditHandler(
self.messaging)
self._oneshot_handler = o_handler.OneShotAuditHandler()
self._continuous_handler = c_handler.ContinuousAuditHandler().start()
@property
def executor(self):
return self._executor
@property
def messaging(self):
return self._messaging
def do_trigger_audit(self, context, audit_uuid):
audit = objects.Audit.get_by_uuid(context, audit_uuid, eager=True)
self._oneshot_handler.execute(audit, context)

4
watcher/decision_engine/planner/weight.py Normal file → Executable file
View File

@@ -33,7 +33,7 @@ class WeightPlanner(base.BasePlanner):
"""Weight planner implementation
This implementation builds actions with parents in accordance with weights.
Set of actions having a lower weight will be scheduled before
Set of actions having a higher weight will be scheduled before
the other ones. There are two config options to configure:
action_weights and parallelization.
@@ -104,7 +104,7 @@ class WeightPlanner(base.BasePlanner):
# START --> migrate-1 --> migrate-3
# \ \--> resize-1 --> FINISH
# \--> migrate-2 -------------/
# In the above case migrate-1 will the only memeber of the leaf
# In the above case migrate-1 will be the only member of the leaf
# group that migrate-3 will use as parent group, whereas
# resize-1 will have both migrate-2 and migrate-3 in its
# parent/leaf group
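The corrected comment reflects how the weight planner actually orders work: higher-weight action sets are scheduled before lower-weight ones. A toy sketch of that ordering — the action names and weights here are made up, not watcher's defaults:

```python
def schedule_order(actions, weights):
    """Order actions so higher-weight ones run first; ties keep a stable order."""
    return sorted(actions, key=lambda name: weights.get(name, 0), reverse=True)
```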

View File

@@ -29,9 +29,10 @@ class BaseScope(object):
requires Cluster Data Model which can be segregated to achieve audit scope.
"""
def __init__(self, scope):
def __init__(self, scope, config):
self.ctx = context.make_context()
self.scope = scope
self.config = config
@abc.abstractmethod
def get_scoped_model(self, cluster_model):

View File

@@ -82,6 +82,23 @@ class DefaultScope(base.BaseScope):
}
}
}
},
"host_aggregates": {
"type": "array",
"items": {
"type": "object",
"properties": {
"anyOf": [
{"type": ["string", "number"]}
]
},
}
},
"instance_metadata": {
"type": "array",
"items": {
"type": "object"
}
}
},
"additionalProperties": False
@@ -92,8 +109,8 @@ class DefaultScope(base.BaseScope):
}
}
def __init__(self, scope, osc=None):
super(DefaultScope, self).__init__(scope)
def __init__(self, scope, config, osc=None):
super(DefaultScope, self).__init__(scope, config)
self._osc = osc
self.wrapper = nova_helper.NovaHelper(osc=self._osc)
@@ -110,7 +127,7 @@ class DefaultScope(base.BaseScope):
resource="host aggregates")
return False
def _collect_aggregates(self, host_aggregates, allowed_nodes):
def _collect_aggregates(self, host_aggregates, compute_nodes):
aggregate_list = self.wrapper.get_aggregate_list()
aggregate_ids = [aggregate['id'] for aggregate
in host_aggregates if 'id' in aggregate]
@@ -125,7 +142,7 @@ class DefaultScope(base.BaseScope):
if (detailed_aggregate.id in aggregate_ids or
detailed_aggregate.name in aggregate_names or
include_all_nodes):
allowed_nodes.extend(detailed_aggregate.hosts)
compute_nodes.extend(detailed_aggregate.hosts)
def _collect_zones(self, availability_zones, allowed_nodes):
zone_list = self.wrapper.get_availability_zone_list()
@@ -145,6 +162,8 @@ class DefaultScope(base.BaseScope):
def exclude_resources(self, resources, **kwargs):
instances_to_exclude = kwargs.get('instances')
nodes_to_exclude = kwargs.get('nodes')
instance_metadata = kwargs.get('instance_metadata')
for resource in resources:
if 'instances' in resource:
instances_to_exclude.extend(
@@ -154,6 +173,14 @@ class DefaultScope(base.BaseScope):
nodes_to_exclude.extend(
[host['name'] for host
in resource['compute_nodes']])
elif 'host_aggregates' in resource:
prohibited_nodes = []
self._collect_aggregates(resource['host_aggregates'],
prohibited_nodes)
nodes_to_exclude.extend(prohibited_nodes)
elif 'instance_metadata' in resource:
instance_metadata.extend(
[metadata for metadata in resource['instance_metadata']])
def remove_nodes_from_model(self, nodes_to_remove, cluster_model):
for node_uuid in nodes_to_remove:
@@ -179,6 +206,19 @@ class DefaultScope(base.BaseScope):
cluster_model.get_instance_by_uuid(instance_uuid),
node_name)
def exclude_instances_with_given_metadata(
self, instance_metadata, cluster_model, instances_to_remove):
metadata_dict = {
key: val for d in instance_metadata for key, val in d.items()}
instances = cluster_model.get_all_instances()
for uuid, instance in instances.items():
metadata = instance.metadata
common_metadata = set(metadata_dict) & set(metadata)
if common_metadata and len(common_metadata) == len(metadata_dict):
for key, value in metadata_dict.items():
if str(value).lower() == str(metadata.get(key)).lower():
instances_to_remove.add(uuid)
def get_scoped_model(self, cluster_model):
"""Leave only nodes and instances proposed in the audit scope"""
if not cluster_model:
@@ -188,6 +228,7 @@ class DefaultScope(base.BaseScope):
nodes_to_exclude = []
nodes_to_remove = set()
instances_to_exclude = []
instance_metadata = []
model_hosts = list(cluster_model.get_all_compute_nodes().keys())
if not self.scope:
@@ -203,7 +244,8 @@ class DefaultScope(base.BaseScope):
elif 'exclude' in rule:
self.exclude_resources(
rule['exclude'], instances=instances_to_exclude,
nodes=nodes_to_exclude)
nodes=nodes_to_exclude,
instance_metadata=instance_metadata)
instances_to_remove = set(instances_to_exclude)
if allowed_nodes:
@@ -211,6 +253,11 @@ class DefaultScope(base.BaseScope):
nodes_to_remove.update(nodes_to_exclude)
self.remove_nodes_from_model(nodes_to_remove, cluster_model)
if instance_metadata and self.config.check_optimize_metadata:
self.exclude_instances_with_given_metadata(
instance_metadata, cluster_model, instances_to_remove)
self.remove_instances_from_model(instances_to_remove, cluster_model)
return cluster_model
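The metadata exclusion above removes an instance only when every key in the scope's `instance_metadata` entries is present on the instance, comparing values case-insensitively on their string form. A simplified sketch of that check on plain dicts — it requires all values to match, a slightly stricter reading of the loop in the diff:

```python
def matches_exclusion(instance_metadata, wanted):
    """True when every wanted key exists and its value matches, ignoring case."""
    if not set(wanted) <= set(instance_metadata):
        return False
    return all(str(v).lower() == str(instance_metadata.get(k)).lower()
               for k, v in wanted.items())
```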

View File

@@ -235,7 +235,7 @@ class BaseStrategy(loadable.Loadable):
def audit_scope_handler(self):
if not self._audit_scope_handler:
self._audit_scope_handler = default_scope.DefaultScope(
self.audit_scope)
self.audit_scope, self.config)
return self._audit_scope_handler
@property
@@ -297,7 +297,7 @@ class UnclassifiedStrategy(BaseStrategy):
The goal defined within this strategy can be used to simplify the
documentation explaining how to implement a new strategy plugin by
ommitting the need for the strategy developer to define a goal straight
omitting the need for the strategy developer to define a goal straight
away.
"""

View File

@@ -339,11 +339,15 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
else:
total_cpu_utilization = instance.vcpus
if not instance_ram_util or not instance_disk_util:
LOG.error(
'No values returned by %s for memory.usage '
'or disk.root.size', instance.uuid)
raise exception.NoDataFound
if not instance_ram_util:
instance_ram_util = instance.memory
LOG.warning('No values returned by %s for memory.usage, '
'use instance flavor ram value', instance.uuid)
if not instance_disk_util:
instance_disk_util = instance.disk
LOG.warning('No values returned by %s for disk.root.size, '
'use instance flavor disk value', instance.uuid)
self.datasource_instance_data_cache[instance.uuid] = dict(
cpu=total_cpu_utilization, ram=instance_ram_util,

View File

@@ -46,6 +46,7 @@ hosts nodes.
algorithm with `CONTINUOUS` audits.
"""
from __future__ import division
import datetime
from oslo_config import cfg
@@ -103,7 +104,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
:param osc: :py:class:`~.OpenStackClients` instance
"""
super(WorkloadBalance, self).__init__(config, osc)
# the migration plan will be triggered when the CPU utlization %
# the migration plan will be triggered when the CPU utilization %
# reaches threshold
self._meter = self.METER_NAME
self._ceilometer = None


@@ -152,7 +152,7 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
"metrics. The period is simply a repeating"
" interval of time into which the samples"
" are grouped for aggregation. Watcher "
"uses only the last period of all recieved"
"uses only the last period of all received"
" ones.",
"type": "object",
"default": {"instance": 720, "node": 600}


@@ -25,4 +25,5 @@ from watcher.notifications import action_plan # noqa
from watcher.notifications import audit # noqa
from watcher.notifications import exception # noqa
from watcher.notifications import goal # noqa
from watcher.notifications import service # noqa
from watcher.notifications import strategy # noqa


@@ -0,0 +1,113 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 Servionica
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from watcher.notifications import base as notificationbase
from watcher.objects import base
from watcher.objects import fields as wfields
from watcher.objects import service as o_service
CONF = cfg.CONF
@base.WatcherObjectRegistry.register_notification
class ServicePayload(notificationbase.NotificationPayloadBase):
SCHEMA = {
'sevice_host': ('failed_service', 'host'),
'name': ('failed_service', 'name'),
'last_seen_up': ('failed_service', 'last_seen_up'),
}
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'sevice_host': wfields.StringField(),
'name': wfields.StringField(),
'last_seen_up': wfields.DateTimeField(nullable=True),
}
def __init__(self, failed_service, status_update, **kwargs):
super(ServicePayload, self).__init__(
failed_service=failed_service,
status_update=status_update, **kwargs)
self.populate_schema(failed_service=failed_service)
@base.WatcherObjectRegistry.register_notification
class ServiceStatusUpdatePayload(notificationbase.NotificationPayloadBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'old_state': wfields.StringField(nullable=True),
'state': wfields.StringField(nullable=True),
}
@base.WatcherObjectRegistry.register_notification
class ServiceUpdatePayload(ServicePayload):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'status_update': wfields.ObjectField('ServiceStatusUpdatePayload'),
}
def __init__(self, failed_service, status_update):
super(ServiceUpdatePayload, self).__init__(
failed_service=failed_service,
status_update=status_update)
@notificationbase.notification_sample('service-update.json')
@base.WatcherObjectRegistry.register_notification
class ServiceUpdateNotification(notificationbase.NotificationBase):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
'payload': wfields.ObjectField('ServiceUpdatePayload')
}
def send_service_update(context, failed_service, state,
service='infra-optim',
host=None):
"""Emit an service failed notification."""
if state == o_service.ServiceStatus.FAILED:
priority = wfields.NotificationPriority.WARNING
status_update = ServiceStatusUpdatePayload(
old_state=o_service.ServiceStatus.ACTIVE,
state=o_service.ServiceStatus.FAILED)
else:
priority = wfields.NotificationPriority.INFO
status_update = ServiceStatusUpdatePayload(
old_state=o_service.ServiceStatus.FAILED,
state=o_service.ServiceStatus.ACTIVE)
versioned_payload = ServiceUpdatePayload(
failed_service=failed_service,
status_update=status_update
)
notification = ServiceUpdateNotification(
priority=priority,
event_type=notificationbase.EventType(
object='service',
action=wfields.NotificationAction.UPDATE),
publisher=notificationbase.NotificationPublisher(
host=host or CONF.host,
binary=service),
payload=versioned_payload)
notification.emit(context)
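For readers unfamiliar with oslo.versionedobjects, the `SCHEMA` mapping above declares how payload fields are copied off the wrapped `failed_service` object. A toy stdlib-only stand-in for the populate-from-schema idea (this is not the oslo API, and it keeps the module's `sevice_host` spelling as-is):

```python
class Payload:
    # Maps payload field -> (source object name, source attribute),
    # mirroring the shape of ServicePayload.SCHEMA above.
    SCHEMA = {
        'sevice_host': ('failed_service', 'host'),
        'name': ('failed_service', 'name'),
    }

    def populate_schema(self, **sources):
        # Copy each mapped attribute from its source object.
        for field, (src_name, attr) in self.SCHEMA.items():
            setattr(self, field, getattr(sources[src_name], attr))

class FakeService:
    host = 'controller-0'
    name = 'watcher-applier'

p = Payload()
p.populate_schema(failed_service=FakeService())
assert p.sevice_host == 'controller-0'
```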


@@ -30,6 +30,7 @@ class State(object):
SUCCEEDED = 'SUCCEEDED'
DELETED = 'DELETED'
CANCELLED = 'CANCELLED'
CANCELLING = 'CANCELLING'
@base.WatcherObjectRegistry.register


@@ -94,6 +94,7 @@ class State(object):
DELETED = 'DELETED'
CANCELLED = 'CANCELLED'
SUPERSEDED = 'SUPERSEDED'
CANCELLING = 'CANCELLING'
@base.WatcherObjectRegistry.register


@@ -128,7 +128,7 @@ def dt_serializer(name):
"""Return a datetime serializer for a named attribute."""
def serializer(self, name=name):
if getattr(self, name) is not None:
return timeutils.isotime(getattr(self, name))
return datetime.datetime.isoformat(getattr(self, name))
else:
return None
return serializer
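This hunk drops the long-deprecated `oslo_utils.timeutils.isotime()` in favour of the standard library. Calling `isoformat` through the class with the instance as first argument is equivalent to the usual bound call, which is why the replacement works:

```python
import datetime

dt = datetime.datetime(2017, 6, 7, 5, 36, 18)

# The unbound-style call, as used in the serializer above, equals
# the ordinary bound-method call on the instance.
assert datetime.datetime.isoformat(dt) == dt.isoformat()
assert dt.isoformat() == '2017-06-07T05:36:18'
```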


@@ -32,6 +32,7 @@ from six.moves.urllib import parse as urlparse
from watcher.api import hooks
from watcher.common import context as watcher_context
from watcher.notifications import service as n_service
from watcher.tests.db import base
PATH_PREFIX = '/v1'
@@ -50,11 +51,15 @@ class FunctionalTest(base.DbTestCase):
def setUp(self):
super(FunctionalTest, self).setUp()
cfg.CONF.set_override("auth_version", "v2.0",
group='keystone_authtoken',
enforce_type=True)
group='keystone_authtoken')
cfg.CONF.set_override("admin_user", "admin",
group='keystone_authtoken',
enforce_type=True)
group='keystone_authtoken')
p_services = mock.patch.object(n_service, "send_service_update",
new_callable=mock.PropertyMock)
self.m_services = p_services.start()
self.addCleanup(p_services.stop)
self.app = self._make_app()
def reset_pecan():


@@ -120,7 +120,7 @@ class TestNoExceptionTracebackHook(base.FunctionalTest):
p = mock.patch.object(root.Root, 'convert')
self.root_convert_mock = p.start()
self.addCleanup(p.stop)
cfg.CONF.set_override('debug', False, enforce_type=True)
cfg.CONF.set_override('debug', False)
def test_hook_exception_success(self):
self.root_convert_mock.side_effect = Exception(self.MSG_WITH_TRACE)
@@ -164,7 +164,7 @@ class TestNoExceptionTracebackHook(base.FunctionalTest):
self._test_hook_without_traceback()
def test_hook_without_traceback_debug(self):
cfg.CONF.set_override('debug', True, enforce_type=True)
cfg.CONF.set_override('debug', True)
self._test_hook_without_traceback()
def _test_hook_on_serverfault(self):
@@ -177,12 +177,12 @@ class TestNoExceptionTracebackHook(base.FunctionalTest):
return actual_msg
def test_hook_on_serverfault(self):
cfg.CONF.set_override('debug', False, enforce_type=True)
cfg.CONF.set_override('debug', False)
msg = self._test_hook_on_serverfault()
self.assertEqual(self.MSG_WITHOUT_TRACE, msg)
def test_hook_on_serverfault_debug(self):
cfg.CONF.set_override('debug', True, enforce_type=True)
cfg.CONF.set_override('debug', True)
msg = self._test_hook_on_serverfault()
self.assertEqual(self.MSG_WITH_TRACE, msg)
@@ -202,7 +202,7 @@ class TestNoExceptionTracebackHook(base.FunctionalTest):
self.assertEqual(self.MSG_WITHOUT_TRACE, msg)
def test_hook_on_clientfault_debug_tracebacks(self):
cfg.CONF.set_override('debug', True, enforce_type=True)
cfg.CONF.set_override('debug', True)
msg = self._test_hook_on_clientfault()
self.assertEqual(self.MSG_WITH_TRACE, msg)


@@ -0,0 +1,114 @@
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from apscheduler.schedulers import background
import datetime
import freezegun
import mock
from watcher.api import scheduling
from watcher.notifications import service
from watcher import objects
from watcher.tests import base
from watcher.tests.db import base as db_base
from watcher.tests.db import utils
class TestSchedulingService(base.TestCase):
@mock.patch.object(background.BackgroundScheduler, 'start')
def test_start_scheduling_service(self, m_start):
scheduler = scheduling.APISchedulingService()
scheduler.start()
m_start.assert_called_once_with(scheduler)
jobs = scheduler.get_jobs()
self.assertEqual(1, len(jobs))
class TestSchedulingServiceFunctions(db_base.DbTestCase):
def setUp(self):
super(TestSchedulingServiceFunctions, self).setUp()
fake_service = utils.get_test_service(
created_at=datetime.datetime.utcnow())
self.fake_service = objects.Service(**fake_service)
@mock.patch.object(scheduling.APISchedulingService, 'get_service_status')
@mock.patch.object(objects.Service, 'list')
@mock.patch.object(service, 'send_service_update')
def test_get_services_status_without_services_in_list(
self, mock_service_update, mock_get_list, mock_service_status):
scheduler = scheduling.APISchedulingService()
mock_get_list.return_value = [self.fake_service]
mock_service_status.return_value = 'ACTIVE'
scheduler.get_services_status(mock.ANY)
mock_service_status.assert_called_once_with(mock.ANY,
self.fake_service.id)
mock_service_update.assert_not_called()
@mock.patch.object(scheduling.APISchedulingService, 'get_service_status')
@mock.patch.object(objects.Service, 'list')
@mock.patch.object(service, 'send_service_update')
def test_get_services_status_with_services_in_list_same_status(
self, mock_service_update, mock_get_list, mock_service_status):
scheduler = scheduling.APISchedulingService()
mock_get_list.return_value = [self.fake_service]
scheduler.services_status = {1: 'ACTIVE'}
mock_service_status.return_value = 'ACTIVE'
scheduler.get_services_status(mock.ANY)
mock_service_status.assert_called_once_with(mock.ANY,
self.fake_service.id)
mock_service_update.assert_not_called()
@mock.patch.object(scheduling.APISchedulingService, 'get_service_status')
@mock.patch.object(objects.Service, 'list')
@mock.patch.object(service, 'send_service_update')
def test_get_services_status_with_services_in_list_diff_status(
self, mock_service_update, mock_get_list, mock_service_status):
scheduler = scheduling.APISchedulingService()
mock_get_list.return_value = [self.fake_service]
scheduler.services_status = {1: 'FAILED'}
mock_service_status.return_value = 'ACTIVE'
scheduler.get_services_status(mock.ANY)
mock_service_status.assert_called_once_with(mock.ANY,
self.fake_service.id)
mock_service_update.assert_called_once_with(mock.ANY,
self.fake_service,
state='ACTIVE')
@mock.patch.object(objects.Service, 'get')
def test_get_service_status_failed_service(
self, mock_get):
scheduler = scheduling.APISchedulingService()
mock_get.return_value = self.fake_service
service_status = scheduler.get_service_status(mock.ANY,
self.fake_service.id)
mock_get.assert_called_once_with(mock.ANY,
self.fake_service.id)
self.assertEqual('FAILED', service_status)
@freezegun.freeze_time('2016-09-22T08:32:26.219414')
@mock.patch.object(objects.Service, 'get')
def test_get_service_status_failed_active(
self, mock_get):
scheduler = scheduling.APISchedulingService()
mock_get.return_value = self.fake_service
service_status = scheduler.get_service_status(mock.ANY,
self.fake_service.id)
mock_get.assert_called_once_with(mock.ANY,
self.fake_service.id)
self.assertEqual('ACTIVE', service_status)
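The two tests above pin the wall clock (via freezegun) to flip `get_service_status` between FAILED and ACTIVE. A toy version of the liveness decision being exercised, with an assumed staleness threshold (Watcher's real check lives in `APISchedulingService` and is not shown here):

```python
import datetime

STALE_AFTER = 60  # seconds; assumed threshold, for illustration only

def service_status(last_seen_up, now):
    # ACTIVE if the service reported recently, FAILED once its
    # last_seen_up timestamp is older than the threshold.
    elapsed = (now - last_seen_up).total_seconds()
    return 'ACTIVE' if elapsed <= STALE_AFTER else 'FAILED'

now = datetime.datetime(2016, 9, 22, 8, 32, 26)
assert service_status(now - datetime.timedelta(seconds=5), now) == 'ACTIVE'
assert service_status(now - datetime.timedelta(hours=1), now) == 'FAILED'
```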


@@ -41,8 +41,7 @@ class TestApiUtilsValidScenarios(base.TestCase):
]
def test_validate_limit(self):
cfg.CONF.set_override("max_limit", self.max_limit, group="api",
enforce_type=True)
cfg.CONF.set_override("max_limit", self.max_limit, group="api")
actual_limit = v1_utils.validate_limit(self.limit)
self.assertEqual(self.expected, actual_limit)
@@ -54,8 +53,7 @@ class TestApiUtilsInvalidScenarios(base.TestCase):
]
def test_validate_limit_invalid_cases(self):
cfg.CONF.set_override("max_limit", self.max_limit, group="api",
enforce_type=True)
cfg.CONF.set_override("max_limit", self.max_limit, group="api")
self.assertRaises(
wsme.exc.ClientSideError, v1_utils.validate_limit, self.limit
)


@@ -384,8 +384,7 @@ class TestListAction(api_base.FunctionalTest):
self.assertEqual(3, len(response['actions']))
def test_collection_links_default_limit(self):
cfg.CONF.set_override('max_limit', 3, 'api',
enforce_type=True)
cfg.CONF.set_override('max_limit', 3, 'api')
for id_ in range(5):
obj_utils.create_test_action(self.context, id=id_,
uuid=utils.generate_uuid())


@@ -273,8 +273,7 @@ class TestListActionPlan(api_base.FunctionalTest):
self.assertIn(next_marker, response['next'])
def test_collection_links_default_limit(self):
cfg.CONF.set_override('max_limit', 3, 'api',
enforce_type=True)
cfg.CONF.set_override('max_limit', 3, 'api')
for id_ in range(5):
obj_utils.create_test_action_plan(
self.context, id=id_, uuid=utils.generate_uuid())
@@ -457,7 +456,7 @@ ALLOWED_TRANSITIONS = [
{"original_state": objects.action_plan.State.RECOMMENDED,
"new_state": objects.action_plan.State.CANCELLED},
{"original_state": objects.action_plan.State.ONGOING,
"new_state": objects.action_plan.State.CANCELLED},
"new_state": objects.action_plan.State.CANCELLING},
{"original_state": objects.action_plan.State.PENDING,
"new_state": objects.action_plan.State.CANCELLED},
]


@@ -13,6 +13,7 @@
import datetime
import itertools
import mock
from webtest.app import AppError
from oslo_config import cfg
from oslo_serialization import jsonutils
@@ -36,6 +37,7 @@ def post_get_test_audit_template(**kw):
strategy = db_utils.get_test_strategy(goal_id=goal['id'])
kw['goal'] = kw.get('goal', goal['uuid'])
kw['strategy'] = kw.get('strategy', strategy['uuid'])
kw['scope'] = kw.get('scope', [])
audit_template = api_utils.audit_template_post_data(**kw)
return audit_template
@@ -229,8 +231,7 @@ class TestListAuditTemplate(FunctionalTestWithSetup):
self.assertIn(next_marker, response['next'])
def test_collection_links_default_limit(self):
cfg.CONF.set_override('max_limit', 3, 'api',
enforce_type=True)
cfg.CONF.set_override('max_limit', 3, 'api')
for id_ in range(5):
obj_utils.create_test_audit_template(
self.context, id=id_, uuid=utils.generate_uuid(),
@@ -510,6 +511,27 @@ class TestPost(FunctionalTestWithSetup):
response.json['created_at']).replace(tzinfo=None)
self.assertEqual(test_time, return_created_at)
def test_create_audit_template_vlidation_with_aggregates(self):
scope = [{'host_aggregates': [{'id': '*'}]},
{'availability_zones': [{'name': 'AZ1'},
{'name': 'AZ2'}]},
{'exclude': [
{'instances': [
{'uuid': 'INSTANCE_1'},
{'uuid': 'INSTANCE_2'}]},
{'compute_nodes': [
{'name': 'Node_1'},
{'name': 'Node_2'}]},
{'host_aggregates': [{'id': '*'}]}
]}
]
audit_template_dict = post_get_test_audit_template(
goal=self.fake_goal1.uuid,
strategy=self.fake_strategy1.uuid, scope=scope)
with self.assertRaisesRegex(AppError,
"be included and excluded together"):
self.post_json('/audit_templates', audit_template_dict)
def test_create_audit_template_does_autogenerate_id(self):
audit_template_dict = post_get_test_audit_template(
goal=self.fake_goal1.uuid, strategy=None)


@@ -234,8 +234,7 @@ class TestListAudit(api_base.FunctionalTest):
self.assertIn(next_marker, response['next'])
def test_collection_links_default_limit(self):
cfg.CONF.set_override('max_limit', 3, 'api',
enforce_type=True)
cfg.CONF.set_override('max_limit', 3, 'api')
for id_ in range(5):
obj_utils.create_test_audit(self.context, id=id_,
uuid=utils.generate_uuid())


@@ -116,7 +116,7 @@ class TestListGoal(api_base.FunctionalTest):
self.context, id=idx,
uuid=utils.generate_uuid(),
name='GOAL_{0}'.format(idx))
cfg.CONF.set_override('max_limit', 3, 'api', enforce_type=True)
cfg.CONF.set_override('max_limit', 3, 'api')
response = self.get_json('/goals')
self.assertEqual(3, len(response['goals']))


@@ -109,7 +109,7 @@ class TestListScoringEngine(api_base.FunctionalTest):
obj_utils.create_test_scoring_engine(
self.context, id=idx, uuid=utils.generate_uuid(),
name=str(idx), description='SE_{0}'.format(idx))
cfg.CONF.set_override('max_limit', 3, 'api', enforce_type=True)
cfg.CONF.set_override('max_limit', 3, 'api')
response = self.get_json('/scoring_engines')
self.assertEqual(3, len(response['scoring_engines']))


@@ -127,7 +127,7 @@ class TestListService(api_base.FunctionalTest):
self.context, id=idx,
host='CONTROLLER',
name='SERVICE_{0}'.format(idx))
cfg.CONF.set_override('max_limit', 3, 'api', enforce_type=True)
cfg.CONF.set_override('max_limit', 3, 'api')
response = self.get_json('/services')
self.assertEqual(3, len(response['services']))


@@ -128,7 +128,7 @@ class TestListStrategy(api_base.FunctionalTest):
self.context, id=idx,
uuid=utils.generate_uuid(),
name='STRATEGY_{0}'.format(idx))
cfg.CONF.set_override('max_limit', 3, 'api', enforce_type=True)
cfg.CONF.set_override('max_limit', 3, 'api')
response = self.get_json('/strategies')
self.assertEqual(3, len(response['strategies']))


@@ -19,6 +19,7 @@ import mock
from watcher.applier.action_plan import default
from watcher.applier import default as ap_applier
from watcher.common import exception
from watcher import notifications
from watcher import objects
from watcher.objects import action_plan as ap_objects
@@ -40,9 +41,16 @@ class TestDefaultActionPlanHandler(base.DbTestCase):
self.addCleanup(p_action_plan_notifications.stop)
obj_utils.create_test_goal(self.context)
obj_utils.create_test_strategy(self.context)
obj_utils.create_test_audit(self.context)
self.action_plan = obj_utils.create_test_action_plan(self.context)
self.strategy = obj_utils.create_test_strategy(self.context)
self.audit = obj_utils.create_test_audit(
self.context, strategy_id=self.strategy.id)
self.action_plan = obj_utils.create_test_action_plan(
self.context, audit_id=self.audit.id,
strategy_id=self.strategy.id)
self.action = obj_utils.create_test_action(
self.context, action_plan_id=self.action_plan.id,
action_type='nop',
input_parameters={'message': 'hello World'})
@mock.patch.object(objects.ActionPlan, "get_by_uuid")
def test_launch_action_plan(self, m_get_action_plan):
@@ -92,3 +100,27 @@ class TestDefaultActionPlanHandler(base.DbTestCase):
self.m_action_plan_notifications
.send_action_notification
.call_args_list)
@mock.patch.object(objects.ActionPlan, "get_by_uuid")
def test_cancel_action_plan(self, m_get_action_plan):
m_get_action_plan.return_value = self.action_plan
self.action_plan.state = ap_objects.State.CANCELLED
self.action_plan.save()
command = default.DefaultActionPlanHandler(
self.context, mock.MagicMock(), self.action_plan.uuid)
command.execute()
action = self.action.get_by_uuid(self.context, self.action.uuid)
self.assertEqual(ap_objects.State.CANCELLED, self.action_plan.state)
self.assertEqual(objects.action.State.CANCELLED, action.state)
@mock.patch.object(ap_applier.DefaultApplier, "execute")
@mock.patch.object(objects.ActionPlan, "get_by_uuid")
def test_cancel_action_plan_with_exception(self, m_get_action_plan,
m_execute):
m_get_action_plan.return_value = self.action_plan
m_execute.side_effect = exception.ActionPlanCancelled(
self.action_plan.uuid)
command = default.DefaultActionPlanHandler(
self.context, mock.MagicMock(), self.action_plan.uuid)
command.execute()
self.assertEqual(ap_objects.State.CANCELLED, self.action_plan.state)


@@ -27,6 +27,10 @@ from watcher.tests import base
class TestApplierManager(base.TestCase):
def setUp(self):
super(TestApplierManager, self).setUp()
p_heartbeat = mock.patch.object(
service.ServiceHeartbeat, "send_beat")
self.m_heartbeat = p_heartbeat.start()
self.addCleanup(p_heartbeat.stop)
self.applier = service.Service(applier_manager.ApplierManager)
@mock.patch.object(om.rpc.server.RPCServer, "stop")


@@ -29,6 +29,7 @@ from watcher.common import utils
from watcher import notifications
from watcher import objects
from watcher.tests.db import base
from watcher.tests.objects import utils as obj_utils
class ExpectedException(Exception):
@@ -52,6 +53,9 @@ class FakeAction(abase.BaseAction):
def execute(self):
raise ExpectedException()
def get_description(self):
return "fake action, just for test"
class TestDefaultWorkFlowEngine(base.DbTestCase):
def setUp(self):
@@ -72,7 +76,8 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
except Exception as exc:
self.fail(exc)
def create_action(self, action_type, parameters, parents=None, uuid=None):
def create_action(self, action_type, parameters, parents=None, uuid=None,
state=None):
action = {
'uuid': uuid or utils.generate_uuid(),
'action_plan_id': 0,
@@ -85,7 +90,6 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
new_action = objects.Action(self.context, **action)
with mock.patch.object(notifications.action, 'send_create'):
new_action.create()
return new_action
def check_action_state(self, action, expected_state):
@@ -107,10 +111,14 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
except Exception as exc:
self.fail(exc)
@mock.patch.object(objects.ActionPlan, "get_by_id")
@mock.patch.object(notifications.action, 'send_execution_notification')
@mock.patch.object(notifications.action, 'send_update')
def test_execute_with_one_action(self, mock_send_update,
mock_execution_notification):
mock_execution_notification,
m_get_actionplan):
m_get_actionplan.return_value = obj_utils.get_test_action_plan(
self.context, id=0)
actions = [self.create_action("nop", {'message': 'test'})]
try:
self.engine.execute(actions)
@@ -119,10 +127,14 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
except Exception as exc:
self.fail(exc)
@mock.patch.object(objects.ActionPlan, "get_by_id")
@mock.patch.object(notifications.action, 'send_execution_notification')
@mock.patch.object(notifications.action, 'send_update')
def test_execute_nop_sleep(self, mock_send_update,
mock_execution_notification):
mock_execution_notification,
m_get_actionplan):
m_get_actionplan.return_value = obj_utils.get_test_action_plan(
self.context, id=0)
actions = []
first_nop = self.create_action("nop", {'message': 'test'})
second_nop = self.create_action("nop", {'message': 'second test'})
@@ -137,10 +149,14 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
except Exception as exc:
self.fail(exc)
@mock.patch.object(objects.ActionPlan, "get_by_id")
@mock.patch.object(notifications.action, 'send_execution_notification')
@mock.patch.object(notifications.action, 'send_update')
def test_execute_with_parents(self, mock_send_update,
mock_execution_notification):
mock_execution_notification,
m_get_actionplan):
m_get_actionplan.return_value = obj_utils.get_test_action_plan(
self.context, id=0)
actions = []
first_nop = self.create_action(
"nop", {'message': 'test'},
@@ -205,9 +221,13 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
except Exception as exc:
self.fail(exc)
@mock.patch.object(objects.ActionPlan, "get_by_id")
@mock.patch.object(notifications.action, 'send_execution_notification')
@mock.patch.object(notifications.action, 'send_update')
def test_execute_with_two_actions(self, m_send_update, m_execution):
def test_execute_with_two_actions(self, m_send_update, m_execution,
m_get_actionplan):
m_get_actionplan.return_value = obj_utils.get_test_action_plan(
self.context, id=0)
actions = []
second = self.create_action("sleep", {'duration': 0.0})
first = self.create_action("nop", {'message': 'test'})
@@ -222,11 +242,14 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
except Exception as exc:
self.fail(exc)
@mock.patch.object(objects.ActionPlan, "get_by_id")
@mock.patch.object(notifications.action, 'send_execution_notification')
@mock.patch.object(notifications.action, 'send_update')
def test_execute_with_three_actions(self, m_send_update, m_execution):
def test_execute_with_three_actions(self, m_send_update, m_execution,
m_get_actionplan):
m_get_actionplan.return_value = obj_utils.get_test_action_plan(
self.context, id=0)
actions = []
third = self.create_action("nop", {'message': 'next'})
second = self.create_action("sleep", {'duration': 0.0})
first = self.create_action("nop", {'message': 'hello'})
@@ -246,9 +269,13 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
except Exception as exc:
self.fail(exc)
@mock.patch.object(objects.ActionPlan, "get_by_id")
@mock.patch.object(notifications.action, 'send_execution_notification')
@mock.patch.object(notifications.action, 'send_update')
def test_execute_with_exception(self, m_send_update, m_execution):
def test_execute_with_exception(self, m_send_update, m_execution,
m_get_actionplan):
m_get_actionplan.return_value = obj_utils.get_test_action_plan(
self.context, id=0)
actions = []
third = self.create_action("no_exist", {'message': 'next'})
@@ -270,11 +297,14 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
self.check_action_state(second, objects.action.State.SUCCEEDED)
self.check_action_state(third, objects.action.State.FAILED)
@mock.patch.object(objects.ActionPlan, "get_by_id")
@mock.patch.object(notifications.action, 'send_execution_notification')
@mock.patch.object(notifications.action, 'send_update')
@mock.patch.object(factory.ActionFactory, "make_action")
def test_execute_with_action_exception(self, m_make_action, m_send_update,
m_send_execution):
m_send_execution, m_get_actionplan):
m_get_actionplan.return_value = obj_utils.get_test_action_plan(
self.context, id=0)
actions = [self.create_action("fake_action", {})]
m_make_action.return_value = FakeAction(mock.Mock())
@@ -283,3 +313,43 @@ class TestDefaultWorkFlowEngine(base.DbTestCase):
self.assertIsInstance(exc.kwargs['error'], ExpectedException)
self.check_action_state(actions[0], objects.action.State.FAILED)
@mock.patch.object(objects.ActionPlan, "get_by_uuid")
def test_execute_with_action_plan_cancel(self, m_get_actionplan):
obj_utils.create_test_goal(self.context)
strategy = obj_utils.create_test_strategy(self.context)
audit = obj_utils.create_test_audit(
self.context, strategy_id=strategy.id)
action_plan = obj_utils.create_test_action_plan(
self.context, audit_id=audit.id,
strategy_id=strategy.id,
state=objects.action_plan.State.CANCELLING)
action1 = obj_utils.create_test_action(
self.context, action_plan_id=action_plan.id,
action_type='nop', state=objects.action.State.SUCCEEDED,
input_parameters={'message': 'hello World'})
action2 = obj_utils.create_test_action(
self.context, action_plan_id=action_plan.id,
action_type='nop', state=objects.action.State.ONGOING,
uuid='9eb51e14-936d-4d12-a500-6ba0f5e0bb1c',
input_parameters={'message': 'hello World'})
action3 = obj_utils.create_test_action(
self.context, action_plan_id=action_plan.id,
action_type='nop', state=objects.action.State.PENDING,
uuid='bc7eee5c-4fbe-4def-9744-b539be55aa19',
input_parameters={'message': 'hello World'})
m_get_actionplan.return_value = action_plan
actions = []
actions.append(action1)
actions.append(action2)
actions.append(action3)
self.assertRaises(exception.ActionPlanCancelled,
self.engine.execute, actions)
try:
self.check_action_state(action1, objects.action.State.SUCCEEDED)
self.check_action_state(action2, objects.action.State.CANCELLED)
self.check_action_state(action3, objects.action.State.CANCELLED)
except Exception as exc:
self.fail(exc)


@@ -0,0 +1,79 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
#
# Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import eventlet
import mock
from watcher.applier.workflow_engine import default as tflow
from watcher import objects
from watcher.tests.db import base
from watcher.tests.objects import utils as obj_utils
class TestTaskFlowActionContainer(base.DbTestCase):
def setUp(self):
super(TestTaskFlowActionContainer, self).setUp()
self.engine = tflow.DefaultWorkFlowEngine(
config=mock.Mock(),
context=self.context,
applier_manager=mock.MagicMock())
obj_utils.create_test_goal(self.context)
self.strategy = obj_utils.create_test_strategy(self.context)
self.audit = obj_utils.create_test_audit(
self.context, strategy_id=self.strategy.id)
def test_execute(self):
action_plan = obj_utils.create_test_action_plan(
self.context, audit_id=self.audit.id,
strategy_id=self.strategy.id,
state=objects.action.State.ONGOING)
action = obj_utils.create_test_action(
self.context, action_plan_id=action_plan.id,
state=objects.action.State.ONGOING,
action_type='nop',
input_parameters={'message': 'hello World'})
action_container = tflow.TaskFlowActionContainer(
db_action=action,
engine=self.engine)
action_container.execute()
self.assertEqual(objects.action.State.SUCCEEDED, action.state)
@mock.patch('eventlet.spawn')
def test_execute_with_cancel_action_plan(self, mock_eventlet_spawn):
action_plan = obj_utils.create_test_action_plan(
self.context, audit_id=self.audit.id,
strategy_id=self.strategy.id,
state=objects.action_plan.State.CANCELLING)
action = obj_utils.create_test_action(
self.context, action_plan_id=action_plan.id,
state=objects.action.State.ONGOING,
action_type='nop',
input_parameters={'message': 'hello World'})
action_container = tflow.TaskFlowActionContainer(
db_action=action,
engine=self.engine)
def empty_test():
pass
et = eventlet.spawn(empty_test)
mock_eventlet_spawn.return_value = et
action_container.execute()
et.kill.assert_called_with()


@@ -59,11 +59,9 @@ class TestCase(BaseTestCase):
self.messaging_conf.transport_driver = 'fake'
cfg.CONF.set_override("auth_type", "admin_token",
group='keystone_authtoken',
enforce_type=True)
group='keystone_authtoken')
cfg.CONF.set_override("auth_uri", "http://127.0.0.1/identity",
group='keystone_authtoken',
enforce_type=True)
group='keystone_authtoken')
app_config_path = os.path.join(os.path.dirname(__file__), 'config.py')
self.app = testing.load_test_app(app_config_path)
@@ -128,7 +126,7 @@ class TestCase(BaseTestCase):
"""Override config options for a test."""
group = kw.pop('group', None)
for k, v in kw.items():
CONF.set_override(k, v, group, enforce_type=True)
CONF.set_override(k, v, group)
def get_path(self, project_file=None):
"""Get the absolute path to a file. Used for testing the API.


@@ -22,6 +22,7 @@ import types
import mock
from oslo_config import cfg
from oslo_service import service
from watcher.common import service as watcher_service
from watcher.cmd import applier
from watcher.tests import base
@@ -39,6 +40,10 @@ class TestApplier(base.BaseTestCase):
_fake_parse_method = types.MethodType(_fake_parse, self.conf)
self.conf._parse_cli_opts = _fake_parse_method
p_heartbeat = mock.patch.object(
watcher_service.ServiceHeartbeat, "send_beat")
self.m_heartbeat = p_heartbeat.start()
self.addCleanup(p_heartbeat.stop)
def tearDown(self):
super(TestApplier, self).tearDown()
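Several hunks in this range add the same patch-in-setUp idiom for `ServiceHeartbeat.send_beat`. A self-contained illustration of that idiom with stdlib `unittest.mock` (the `Service` class here is a stand-in, not Watcher's):

```python
import unittest
from unittest import mock

class Service:
    def send_beat(self):
        raise RuntimeError('would touch the message bus')

class TestPatchPattern(unittest.TestCase):
    def setUp(self):
        # Patch in setUp and register the undo with addCleanup, so
        # the patch is removed even when a test fails -- the same
        # shape as the p_heartbeat patches above.
        p = mock.patch.object(Service, 'send_beat')
        self.m_beat = p.start()
        self.addCleanup(p.stop)

    def test_beat_is_mocked(self):
        Service().send_beat()  # no RuntimeError: the mock absorbs it
        self.m_beat.assert_called_once_with()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPatchPattern)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```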


@@ -24,6 +24,8 @@ from oslo_config import cfg
from oslo_service import service
from watcher.cmd import decisionengine
from watcher.common import service as watcher_service
from watcher.decision_engine.audit import continuous
from watcher.decision_engine import sync
from watcher.tests import base
@@ -42,6 +44,15 @@ class TestDecisionEngine(base.BaseTestCase):
_fake_parse_method = types.MethodType(_fake_parse, self.conf)
self.conf._parse_cli_opts = _fake_parse_method
p_heartbeat = mock.patch.object(
watcher_service.ServiceHeartbeat, "send_beat")
self.m_heartbeat = p_heartbeat.start()
self.addCleanup(p_heartbeat.stop)
p_continuoushandler = mock.patch.object(
continuous.ContinuousAuditHandler, "start")
self.m_continuoushandler = p_continuoushandler.start()
self.addCleanup(p_continuoushandler.stop)
def tearDown(self):
super(TestDecisionEngine, self).tearDown()
self.conf._parse_cli_opts = self._parse_cli_opts

watcher/tests/common/test_clients.py (45 changed lines) Normal file → Executable file

@@ -17,6 +17,8 @@ from cinderclient.v1 import client as ciclient_v1
from glanceclient import client as glclient
from gnocchiclient import client as gnclient
from gnocchiclient.v1 import client as gnclient_v1
from ironicclient import client as irclient
from ironicclient.v1 import client as irclient_v1
from keystoneauth1 import loading as ka_loading
import mock
from monascaclient import client as monclient
@@ -237,11 +239,12 @@ class TestClients(base.TestCase):
@mock.patch.object(clients.OpenStackClients, 'session')
def test_clients_cinder_diff_endpoint(self, mock_session):
CONF.set_override('endpoint_type', 'publicURL', group='cinder_client')
CONF.set_override('endpoint_type',
'internalURL', group='cinder_client')
osc = clients.OpenStackClients()
osc._cinder = None
osc.cinder()
self.assertEqual('publicURL', osc.cinder().client.interface)
self.assertEqual('internalURL', osc.cinder().client.interface)
@mock.patch.object(clients.OpenStackClients, 'session')
def test_clients_cinder_cached(self, mock_session):
@@ -387,3 +390,41 @@ class TestClients(base.TestCase):
monasca = osc.monasca()
monasca_cached = osc.monasca()
self.assertEqual(monasca, monasca_cached)
@mock.patch.object(irclient, 'Client')
@mock.patch.object(clients.OpenStackClients, 'session')
def test_clients_ironic(self, mock_session, mock_call):
osc = clients.OpenStackClients()
osc._ironic = None
osc.ironic()
mock_call.assert_called_once_with(
CONF.ironic_client.api_version,
CONF.ironic_client.endpoint_type,
max_retries=None,
os_ironic_api_version=None,
retry_interval=None,
session=mock_session)
@mock.patch.object(clients.OpenStackClients, 'session')
def test_clients_ironic_diff_vers(self, mock_session):
CONF.set_override('api_version', '1', group='ironic_client')
osc = clients.OpenStackClients()
osc._ironic = None
osc.ironic()
self.assertEqual(irclient_v1.Client, type(osc.ironic()))
@mock.patch.object(clients.OpenStackClients, 'session')
def test_clients_ironic_diff_endpoint(self, mock_session):
CONF.set_override('endpoint_type', 'publicURL', group='ironic_client')
osc = clients.OpenStackClients()
osc._ironic = None
osc.ironic()
self.assertEqual('publicURL', osc.ironic().http_client.endpoint)
@mock.patch.object(clients.OpenStackClients, 'session')
def test_clients_ironic_cached(self, mock_session):
osc = clients.OpenStackClients()
osc._ironic = None
ironic = osc.ironic()
ironic_cached = osc.ironic()
self.assertEqual(ironic, ironic_cached)
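The `test_clients_ironic_cached` case above checks that two calls to `osc.ironic()` return the same object. The behaviour under test is a simple memoized accessor; a toy sketch of it — this `OpenStackClients` is a stand-in for illustration, not the real `watcher.common.clients` implementation:

```python
from unittest import mock


class OpenStackClients(object):
    """Toy model of a client factory that caches the first client built."""

    def __init__(self):
        self._ironic = None

    def ironic(self):
        # Build the client once, then return the cached instance.
        if self._ironic is None:
            self._ironic = mock.Mock(name="ironic-client")
        return self._ironic


osc = OpenStackClients()
ironic = osc.ironic()
ironic_cached = osc.ironic()
assert ironic is ironic_cached  # same object both times
```

Setting `osc._ironic = None` in the tests above simply clears this cache so the factory path is exercised.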


@@ -53,10 +53,9 @@ class TestServiceHeartbeat(base.TestCase):
def test_send_beat_with_creating_service(self, mock_create,
mock_list):
CONF.set_default('host', 'fake-fqdn')
service_heartbeat = service.ServiceHeartbeat(
service_name='watcher-service')
mock_list.return_value = []
service_heartbeat.send_beat()
service.ServiceHeartbeat(service_name='watcher-service')
mock_list.assert_called_once_with(mock.ANY,
filters={'name': 'watcher-service',
'host': 'fake-fqdn'})
@@ -65,12 +64,11 @@ class TestServiceHeartbeat(base.TestCase):
@mock.patch.object(objects.Service, 'list')
@mock.patch.object(objects.Service, 'save')
def test_send_beat_without_creating_service(self, mock_save, mock_list):
service_heartbeat = service.ServiceHeartbeat(
service_name='watcher-service')
mock_list.return_value = [objects.Service(mock.Mock(),
name='watcher-service',
host='controller')]
service_heartbeat.send_beat()
service.ServiceHeartbeat(service_name='watcher-service')
self.assertEqual(1, mock_save.call_count)

watcher/tests/conf/test_list_opts.py (2 changed lines) Normal file → Executable file

@@ -31,7 +31,7 @@ class TestListOpts(base.TestCase):
'DEFAULT', 'api', 'database', 'watcher_decision_engine',
'watcher_applier', 'watcher_planner', 'nova_client',
'glance_client', 'gnocchi_client', 'cinder_client',
'ceilometer_client', 'monasca_client',
'ceilometer_client', 'monasca_client', 'ironic_client',
'neutron_client', 'watcher_clients_auth']
self.opt_sections = list(dict(opts.list_opts()).keys())
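The `test_list_opts` change just appends `'ironic_client'` to the expected section names. The underlying check is that the keys of `dict(opts.list_opts())` match a known list; a stand-alone sketch with a fabricated `list_opts` registry (the section names below are illustrative):

```python
def list_opts():
    # Hypothetical registry: each entry is (section_name, [options]).
    return [
        ('DEFAULT', []),
        ('cinder_client', []),
        ('ironic_client', []),
    ]


expected_sections = ['DEFAULT', 'cinder_client', 'ironic_client']
opt_sections = list(dict(list_opts()).keys())
assert sorted(expected_sections) == sorted(opt_sections)
```

A new client integration that registers options without updating the expected list makes this test fail, which is exactly the safety net it provides.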


@@ -70,11 +70,9 @@ class DbTestCase(base.TestCase):
return next(self._id_gen)
def setUp(self):
cfg.CONF.set_override("enable_authentication", False,
enforce_type=True)
cfg.CONF.set_override("enable_authentication", False)
# To use in-memory SQLite DB
cfg.CONF.set_override("connection", "sqlite://", group="database",
enforce_type=True)
cfg.CONF.set_override("connection", "sqlite://", group="database")
super(DbTestCase, self).setUp()


@@ -74,15 +74,15 @@ class TestDbEfficacyIndicatorFilters(base.DbTestCase):
with freezegun.freeze_time(self.FAKE_TODAY):
self.dbapi.update_efficacy_indicator(
self.efficacy_indicator1.uuid,
values={"description": "New decription 1"})
values={"description": "New description 1"})
with freezegun.freeze_time(self.FAKE_OLD_DATE):
self.dbapi.update_efficacy_indicator(
self.efficacy_indicator2.uuid,
values={"description": "New decription 2"})
values={"description": "New description 2"})
with freezegun.freeze_time(self.FAKE_OLDER_DATE):
self.dbapi.update_efficacy_indicator(
self.efficacy_indicator3.uuid,
values={"description": "New decription 3"})
values={"description": "New description 3"})
def test_get_efficacy_indicator_filter_deleted_true(self):
with freezegun.freeze_time(self.FAKE_TODAY):


@@ -14,11 +14,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from apscheduler.schedulers import background
import mock
from oslo_utils import uuidutils
from apscheduler import job
from watcher.applier import rpcapi
from watcher.common import scheduling
from watcher.db.sqlalchemy import api as sq_api
from watcher.decision_engine.audit import continuous
from watcher.decision_engine.audit import oneshot
from watcher.decision_engine.model.collector import manager
@@ -57,7 +60,7 @@ class TestOneShotAuditHandler(base.DbTestCase):
@mock.patch.object(manager.CollectorManager, "get_cluster_model_collector")
def test_trigger_audit_without_errors(self, m_collector):
m_collector.return_value = faker.FakerModelCollector()
audit_handler = oneshot.OneShotAuditHandler(mock.MagicMock())
audit_handler = oneshot.OneShotAuditHandler()
audit_handler.execute(self.audit, self.context)
expected_calls = [
@@ -83,7 +86,7 @@ class TestOneShotAuditHandler(base.DbTestCase):
def test_trigger_audit_with_error(self, m_collector, m_do_execute):
m_collector.return_value = faker.FakerModelCollector()
m_do_execute.side_effect = Exception
audit_handler = oneshot.OneShotAuditHandler(mock.MagicMock())
audit_handler = oneshot.OneShotAuditHandler()
audit_handler.execute(self.audit, self.context)
expected_calls = [
@@ -102,7 +105,7 @@ class TestOneShotAuditHandler(base.DbTestCase):
@mock.patch.object(manager.CollectorManager, "get_cluster_model_collector")
def test_trigger_audit_state_succeeded(self, m_collector):
m_collector.return_value = faker.FakerModelCollector()
audit_handler = oneshot.OneShotAuditHandler(mock.MagicMock())
audit_handler = oneshot.OneShotAuditHandler()
audit_handler.execute(self.audit, self.context)
audit = objects.audit.Audit.get_by_uuid(self.context, self.audit.uuid)
self.assertEqual(objects.audit.State.SUCCEEDED, audit.state)
@@ -127,9 +130,8 @@ class TestOneShotAuditHandler(base.DbTestCase):
@mock.patch.object(manager.CollectorManager, "get_cluster_model_collector")
def test_trigger_audit_send_notification(self, m_collector):
messaging = mock.MagicMock()
m_collector.return_value = faker.FakerModelCollector()
audit_handler = oneshot.OneShotAuditHandler(messaging)
audit_handler = oneshot.OneShotAuditHandler()
audit_handler.execute(self.audit, self.context)
expected_calls = [
@@ -194,7 +196,7 @@ class TestAutoTriggerActionPlan(base.DbTestCase):
def test_trigger_audit_with_actionplan_ongoing(self, mock_list,
mock_do_execute):
mock_list.return_value = [self.ongoing_action_plan]
audit_handler = oneshot.OneShotAuditHandler(mock.MagicMock())
audit_handler = oneshot.OneShotAuditHandler()
audit_handler.execute(self.audit, self.context)
self.assertFalse(mock_do_execute.called)
@@ -205,9 +207,9 @@ class TestAutoTriggerActionPlan(base.DbTestCase):
mock_list, mock_applier):
mock_get_by_id.return_value = self.audit
mock_list.return_value = []
auto_trigger_handler = oneshot.OneShotAuditHandler(mock.MagicMock())
with mock.patch.object(auto_trigger_handler, 'do_schedule',
new_callable=mock.PropertyMock) as m_schedule:
auto_trigger_handler = oneshot.OneShotAuditHandler()
with mock.patch.object(auto_trigger_handler,
'do_schedule') as m_schedule:
m_schedule().uuid = self.recommended_action_plan.uuid
auto_trigger_handler.post_execute(self.audit, mock.MagicMock(),
self.context)
@@ -234,30 +236,39 @@ class TestContinuousAuditHandler(base.DbTestCase):
goal=self.goal)
for id_ in range(2, 4)]
@mock.patch.object(manager.CollectorManager, "get_cluster_model_collector")
@mock.patch.object(background.BackgroundScheduler, 'add_job')
@mock.patch.object(background.BackgroundScheduler, 'get_jobs')
@mock.patch.object(objects.service.Service, 'list')
@mock.patch.object(sq_api, 'get_engine')
@mock.patch.object(scheduling.BackgroundSchedulerService, 'add_job')
@mock.patch.object(scheduling.BackgroundSchedulerService, 'get_jobs')
@mock.patch.object(objects.audit.Audit, 'list')
def test_launch_audits_periodically(self, mock_list, mock_jobs,
m_add_job, m_collector):
audit_handler = continuous.ContinuousAuditHandler(mock.MagicMock())
m_add_job, m_engine, m_service):
audit_handler = continuous.ContinuousAuditHandler()
mock_list.return_value = self.audits
mock_jobs.return_value = mock.MagicMock()
m_engine.return_value = mock.MagicMock()
m_add_job.return_value = audit_handler.execute_audit(
self.audits[0], self.context)
m_collector.return_value = faker.FakerModelCollector()
audit_handler.launch_audits_periodically()
m_service.assert_called()
m_engine.assert_called()
m_add_job.assert_called()
mock_jobs.assert_called()
@mock.patch.object(background.BackgroundScheduler, 'add_job')
@mock.patch.object(background.BackgroundScheduler, 'get_jobs')
@mock.patch.object(objects.service.Service, 'list')
@mock.patch.object(sq_api, 'get_engine')
@mock.patch.object(scheduling.BackgroundSchedulerService, 'add_job')
@mock.patch.object(scheduling.BackgroundSchedulerService, 'get_jobs')
@mock.patch.object(objects.audit.Audit, 'list')
def test_launch_multiply_audits_periodically(self, mock_list,
mock_jobs, m_add_job):
audit_handler = continuous.ContinuousAuditHandler(mock.MagicMock())
mock_jobs, m_add_job,
m_engine, m_service):
audit_handler = continuous.ContinuousAuditHandler()
mock_list.return_value = self.audits
mock_jobs.return_value = mock.MagicMock()
m_engine.return_value = mock.MagicMock()
m_service.return_value = mock.MagicMock()
calls = [mock.call(audit_handler.execute_audit, 'interval',
args=[mock.ANY, mock.ANY],
seconds=3600,
@@ -266,26 +277,39 @@ class TestContinuousAuditHandler(base.DbTestCase):
audit_handler.launch_audits_periodically()
m_add_job.assert_has_calls(calls)
@mock.patch.object(background.BackgroundScheduler, 'add_job')
@mock.patch.object(background.BackgroundScheduler, 'get_jobs')
@mock.patch.object(objects.service.Service, 'list')
@mock.patch.object(sq_api, 'get_engine')
@mock.patch.object(scheduling.BackgroundSchedulerService, 'add_job')
@mock.patch.object(scheduling.BackgroundSchedulerService, 'get_jobs')
@mock.patch.object(objects.audit.Audit, 'list')
def test_period_audit_not_called_when_deleted(self, mock_list,
mock_jobs, m_add_job):
audit_handler = continuous.ContinuousAuditHandler(mock.MagicMock())
mock_jobs, m_add_job,
m_engine, m_service):
audit_handler = continuous.ContinuousAuditHandler()
mock_list.return_value = self.audits
mock_jobs.return_value = mock.MagicMock()
m_service.return_value = mock.MagicMock()
m_engine.return_value = mock.MagicMock()
self.audits[1].state = objects.audit.State.CANCELLED
self.audits[0].state = objects.audit.State.SUSPENDED
for state in [objects.audit.State.CANCELLED,
objects.audit.State.SUSPENDED]:
self.audits[1].state = state
calls = [mock.call(audit_handler.execute_audit, 'interval',
args=[mock.ANY, mock.ANY],
seconds=3600,
name='execute_audit',
next_run_time=mock.ANY)]
audit_handler.launch_audits_periodically()
m_add_job.assert_has_calls(calls)
ap_jobs = [job.Job(mock.MagicMock(), name='execute_audit',
func=audit_handler.execute_audit,
args=(self.audits[0], mock.MagicMock()),
kwargs={}),
job.Job(mock.MagicMock(), name='execute_audit',
func=audit_handler.execute_audit,
args=(self.audits[1], mock.MagicMock()),
kwargs={})
]
mock_jobs.return_value = ap_jobs
audit_handler.launch_audits_periodically()
audit_handler.update_audit_state(self.audits[1], state)
is_inactive = audit_handler._is_audit_inactive(self.audits[1])
audit_handler.update_audit_state(self.audits[1],
objects.audit.State.CANCELLED)
audit_handler.update_audit_state(self.audits[0],
objects.audit.State.SUSPENDED)
is_inactive = audit_handler._is_audit_inactive(self.audits[1])
self.assertTrue(is_inactive)
is_inactive = audit_handler._is_audit_inactive(self.audits[0])
self.assertTrue(is_inactive)
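The scheduler tests above lean on `assert_has_calls` with `mock.call` and `mock.ANY` to check that `add_job` was invoked with the right shape of arguments without pinning down every value. The mechanism in isolation, with an invented `execute_audit` callback:

```python
from unittest import mock

scheduler = mock.Mock()


def execute_audit(audit, context):
    pass


# Code under test would register a periodic job like this:
scheduler.add_job(execute_audit, 'interval',
                  args=[object(), object()],
                  seconds=3600,
                  name='execute_audit',
                  next_run_time=None)

# mock.ANY compares equal to anything, so only the shape is asserted.
expected = [mock.call(execute_audit, 'interval',
                      args=[mock.ANY, mock.ANY],
                      seconds=3600,
                      name='execute_audit',
                      next_run_time=mock.ANY)]
scheduler.add_job.assert_has_calls(expected)
```

`assert_has_calls` checks for a matching subsequence of calls, so extra unrelated calls on the mock do not break the assertion.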


@@ -16,6 +16,7 @@
import mock
from watcher.decision_engine.audit import continuous as continuous_handler
from watcher.decision_engine.audit import oneshot as oneshot_handler
from watcher.decision_engine.messaging import audit_endpoint
from watcher.decision_engine.model.collector import manager
@@ -34,11 +35,12 @@ class TestAuditEndpoint(base.DbTestCase):
self.context,
audit_template_id=self.audit_template.id)
@mock.patch.object(continuous_handler.ContinuousAuditHandler, 'start')
@mock.patch.object(manager.CollectorManager, "get_cluster_model_collector")
def test_do_trigger_audit(self, mock_collector):
def test_do_trigger_audit(self, mock_collector, mock_handler):
mock_collector.return_value = faker_cluster_state.FakerModelCollector()
audit_handler = oneshot_handler.OneShotAuditHandler(mock.MagicMock())
audit_handler = oneshot_handler.OneShotAuditHandler
endpoint = audit_endpoint.AuditEndpoint(audit_handler)
with mock.patch.object(oneshot_handler.OneShotAuditHandler,
@@ -48,11 +50,12 @@ class TestAuditEndpoint(base.DbTestCase):
self.assertEqual(mock_call.call_count, 1)
@mock.patch.object(continuous_handler.ContinuousAuditHandler, 'start')
@mock.patch.object(manager.CollectorManager, "get_cluster_model_collector")
def test_trigger_audit(self, mock_collector):
def test_trigger_audit(self, mock_collector, mock_handler):
mock_collector.return_value = faker_cluster_state.FakerModelCollector()
audit_handler = oneshot_handler.OneShotAuditHandler(mock.MagicMock())
audit_handler = oneshot_handler.OneShotAuditHandler
endpoint = audit_endpoint.AuditEndpoint(audit_handler)
with mock.patch.object(endpoint.executor, 'submit') as mock_call:


@@ -72,8 +72,9 @@ class TestReceiveNotifications(NotificationTestCase):
m_from_dict.return_value = self.context
self.addCleanup(p_from_dict.stop)
@mock.patch.object(watcher_service.ServiceHeartbeat, 'send_beat')
@mock.patch.object(DummyNotification, 'info')
def test_receive_dummy_notification(self, m_info):
def test_receive_dummy_notification(self, m_info, m_heartbeat):
message = {
'publisher_id': 'nova-compute',
'event_type': 'compute.dummy',
@@ -90,8 +91,9 @@ class TestReceiveNotifications(NotificationTestCase):
{'data': {'nested': 'TEST'}},
{'message_id': None, 'timestamp': None})
@mock.patch.object(watcher_service.ServiceHeartbeat, 'send_beat')
@mock.patch.object(DummyNotification, 'info')
def test_skip_unwanted_notification(self, m_info):
def test_skip_unwanted_notification(self, m_info, m_heartbeat):
message = {
'publisher_id': 'nova-compute',
'event_type': 'compute.dummy',
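The notification tests above gain an extra `@mock.patch.object(..., 'send_beat')` decorator, and the new mock parameter is appended *last* (`m_info, m_heartbeat`): stacked patch decorators are applied bottom-up, so the decorator closest to the function supplies the first mock argument. A compact demonstration with invented classes:

```python
from unittest import mock


class Heartbeat(object):
    def send_beat(self):
        return "beat"


class Notification(object):
    def info(self):
        return "info"


@mock.patch.object(Heartbeat, 'send_beat')   # outermost -> last argument
@mock.patch.object(Notification, 'info')     # innermost -> first argument
def check(m_info, m_heartbeat):
    # Parameter order mirrors the bottom-up decorator application.
    assert m_info is Notification.info
    assert m_heartbeat is Heartbeat.send_beat


check()
```

This is why adding a patch decorator above the existing ones only requires appending a parameter at the end of the test signature.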


@@ -56,6 +56,10 @@ class TestReceiveNovaNotifications(NotificationTestCase):
m_from_dict = p_from_dict.start()
m_from_dict.return_value = self.context
self.addCleanup(p_from_dict.stop)
p_heartbeat = mock.patch.object(
watcher_service.ServiceHeartbeat, "send_beat")
self.m_heartbeat = p_heartbeat.start()
self.addCleanup(p_heartbeat.stop)
@mock.patch.object(novanotification.ServiceUpdated, 'info')
def test_nova_receive_service_update(self, m_info):


@@ -40,7 +40,7 @@ class TestDefaultScope(base.TestCase):
mock.Mock(zoneName='AZ{0}'.format(i),
hosts={'Node_{0}'.format(i): {}})
for i in range(2)]
model = default.DefaultScope(audit_scope,
model = default.DefaultScope(audit_scope, mock.Mock(),
osc=mock.Mock()).get_scoped_model(cluster)
expected_edges = [('INSTANCE_2', 'Node_1')]
self.assertEqual(sorted(expected_edges), sorted(model.edges()))
@@ -48,13 +48,13 @@ class TestDefaultScope(base.TestCase):
@mock.patch.object(nova_helper.NovaHelper, 'get_availability_zone_list')
def test_get_scoped_model_without_scope(self, mock_zone_list):
model = self.fake_cluster.generate_scenario_1()
default.DefaultScope([],
default.DefaultScope([], mock.Mock(),
osc=mock.Mock()).get_scoped_model(model)
assert not mock_zone_list.called
def test_remove_instance(self):
model = self.fake_cluster.generate_scenario_1()
default.DefaultScope([], osc=mock.Mock()).remove_instance(
default.DefaultScope([], mock.Mock(), osc=mock.Mock()).remove_instance(
model, model.get_instance_by_uuid('INSTANCE_2'), 'Node_1')
expected_edges = [
('INSTANCE_0', 'Node_0'),
@@ -75,7 +75,7 @@ class TestDefaultScope(base.TestCase):
mock_detailed_aggregate.side_effect = [
mock.Mock(id=i, hosts=['Node_{0}'.format(i)]) for i in range(2)]
default.DefaultScope([{'host_aggregates': [{'id': 1}, {'id': 2}]}],
osc=mock.Mock())._collect_aggregates(
mock.Mock(), osc=mock.Mock())._collect_aggregates(
[{'id': 1}, {'id': 2}], allowed_nodes)
self.assertEqual(['Node_1'], allowed_nodes)
@@ -88,7 +88,7 @@ class TestDefaultScope(base.TestCase):
mock_detailed_aggregate.side_effect = [
mock.Mock(id=i, hosts=['Node_{0}'.format(i)]) for i in range(2)]
default.DefaultScope([{'host_aggregates': [{'id': '*'}]}],
osc=mock.Mock())._collect_aggregates(
mock.Mock(), osc=mock.Mock())._collect_aggregates(
[{'id': '*'}], allowed_nodes)
self.assertEqual(['Node_0', 'Node_1'], allowed_nodes)
@@ -98,7 +98,7 @@ class TestDefaultScope(base.TestCase):
mock_aggregate.return_value = [mock.Mock(id=i) for i in range(2)]
scope_handler = default.DefaultScope(
[{'host_aggregates': [{'id': '*'}, {'id': 1}]}],
osc=mock.Mock())
mock.Mock(), osc=mock.Mock())
self.assertRaises(exception.WildcardCharacterIsUsed,
scope_handler._collect_aggregates,
[{'id': '*'}, {'id': 1}],
@@ -121,7 +121,7 @@ class TestDefaultScope(base.TestCase):
default.DefaultScope([{'host_aggregates': [{'name': 'HA_1'},
{'id': 0}]}],
osc=mock.Mock())._collect_aggregates(
mock.Mock(), osc=mock.Mock())._collect_aggregates(
[{'name': 'HA_1'}, {'id': 0}], allowed_nodes)
self.assertEqual(['Node_0', 'Node_1'], allowed_nodes)
@@ -134,7 +134,7 @@ class TestDefaultScope(base.TestCase):
'Node_{0}'.format(2 * i + 1): 2})
for i in range(2)]
default.DefaultScope([{'availability_zones': [{'name': "AZ1"}]}],
osc=mock.Mock())._collect_zones(
mock.Mock(), osc=mock.Mock())._collect_zones(
[{'name': "AZ1"}], allowed_nodes)
self.assertEqual(['Node_0', 'Node_1'], sorted(allowed_nodes))
@@ -147,7 +147,7 @@ class TestDefaultScope(base.TestCase):
'Node_{0}'.format(2 * i + 1): 2})
for i in range(2)]
default.DefaultScope([{'availability_zones': [{'name': "*"}]}],
osc=mock.Mock())._collect_zones(
mock.Mock(), osc=mock.Mock())._collect_zones(
[{'name': "*"}], allowed_nodes)
self.assertEqual(['Node_0', 'Node_1', 'Node_2', 'Node_3'],
sorted(allowed_nodes))
@@ -162,7 +162,7 @@ class TestDefaultScope(base.TestCase):
for i in range(2)]
scope_handler = default.DefaultScope(
[{'availability_zones': [{'name': "*"}, {'name': 'AZ1'}]}],
osc=mock.Mock())
mock.Mock(), osc=mock.Mock())
self.assertRaises(exception.WildcardCharacterIsUsed,
scope_handler._collect_zones,
[{'name': "*"}, {'name': 'AZ1'}],
@@ -173,23 +173,65 @@ class TestDefaultScope(base.TestCase):
validators.Draft4Validator(
default.DefaultScope.DEFAULT_SCHEMA).validate(test_scope)
def test_exclude_resources(self):
resources_to_exclude = [{'instances': [{'uuid': 'INSTANCE_1'},
@mock.patch.object(nova_helper.NovaHelper, 'get_aggregate_detail')
@mock.patch.object(nova_helper.NovaHelper, 'get_aggregate_list')
def test_exclude_resource(
self, mock_aggregate, mock_detailed_aggregate):
mock_aggregate.return_value = [mock.Mock(id=i,
name="HA_{0}".format(i))
for i in range(2)]
mock_collection = [mock.Mock(id=i, hosts=['Node_{0}'.format(i)])
for i in range(2)]
mock_collection[0].name = 'HA_0'
mock_collection[1].name = 'HA_1'
mock_detailed_aggregate.side_effect = mock_collection
resources_to_exclude = [{'host_aggregates': [{'name': 'HA_1'},
{'id': 0}]},
{'instances': [{'uuid': 'INSTANCE_1'},
{'uuid': 'INSTANCE_2'}]},
{'compute_nodes': [{'name': 'Node_1'},
{'name': 'Node_2'}]}]
{'compute_nodes': [{'name': 'Node_2'},
{'name': 'Node_3'}]},
{'instance_metadata': [{'optimize': True},
{'optimize1': False}]}]
instances_to_exclude = []
nodes_to_exclude = []
default.DefaultScope([], osc=mock.Mock()).exclude_resources(
instance_metadata = []
default.DefaultScope([], mock.Mock(),
osc=mock.Mock()).exclude_resources(
resources_to_exclude, instances=instances_to_exclude,
nodes=nodes_to_exclude)
self.assertEqual(['Node_1', 'Node_2'], sorted(nodes_to_exclude))
nodes=nodes_to_exclude, instance_metadata=instance_metadata)
self.assertEqual(['Node_0', 'Node_1', 'Node_2', 'Node_3'],
sorted(nodes_to_exclude))
self.assertEqual(['INSTANCE_1', 'INSTANCE_2'],
sorted(instances_to_exclude))
self.assertEqual([{'optimize': True}, {'optimize1': False}],
instance_metadata)
def test_exclude_instances_with_given_metadata(self):
cluster = self.fake_cluster.generate_scenario_1()
instance_metadata = [{'optimize': True}]
instances_to_remove = set()
default.DefaultScope(
[], mock.Mock(),
osc=mock.Mock()).exclude_instances_with_given_metadata(
instance_metadata, cluster, instances_to_remove)
self.assertEqual(sorted(['INSTANCE_' + str(i) for i in range(35)]),
sorted(instances_to_remove))
instance_metadata = [{'optimize': False}]
instances_to_remove = set()
default.DefaultScope(
[], mock.Mock(),
osc=mock.Mock()).exclude_instances_with_given_metadata(
instance_metadata, cluster, instances_to_remove)
self.assertEqual(set(), instances_to_remove)
def test_remove_nodes_from_model(self):
model = self.fake_cluster.generate_scenario_1()
default.DefaultScope([], osc=mock.Mock()).remove_nodes_from_model(
default.DefaultScope([], mock.Mock(),
osc=mock.Mock()).remove_nodes_from_model(
['Node_1', 'Node_2'], model)
expected_edges = [
('INSTANCE_0', 'Node_0'),
@@ -200,7 +242,8 @@ class TestDefaultScope(base.TestCase):
def test_remove_instances_from_model(self):
model = self.fake_cluster.generate_scenario_1()
default.DefaultScope([], osc=mock.Mock()).remove_instances_from_model(
default.DefaultScope([], mock.Mock(),
osc=mock.Mock()).remove_instances_from_model(
['INSTANCE_1', 'INSTANCE_2'], model)
expected_edges = [
('INSTANCE_0', 'Node_0'),


@@ -193,8 +193,8 @@ class TestBasicConsolidation(base.TestCase):
model = self.fake_cluster.generate_scenario_3_with_2_nodes()
self.m_model.return_value = copy.deepcopy(model)
self.assertEqual(
model.to_string(), self.strategy.compute_model.to_string())
self.assertTrue(model_root.ModelRoot.is_isomorphic(
model, self.strategy.compute_model))
self.assertIsNot(model, self.strategy.compute_model)
def test_basic_consolidation_migration(self):


@@ -288,6 +288,11 @@ expected_notification_fingerprints = {
'ActionUpdateNotification': '1.0-9b69de0724fda8310d05e18418178866',
'ActionUpdatePayload': '1.0-03306c7e7f4d49ac328c261eff6b30b8',
'TerseActionPlanPayload': '1.0-42bf7a5585cc111a9a4dbc008a04c67e',
'ServiceUpdateNotification': '1.0-9b69de0724fda8310d05e18418178866',
'ServicePayload': '1.0-9c5a9bc51e6606e0ec3cf95baf698f4f',
'ServiceStatusUpdatePayload': '1.0-1a1b606bf14a2c468800c2b010801ce5',
'ServiceUpdatePayload': '1.0-e0e9812a45958974693a723a2c820c3f'
}


@@ -0,0 +1,77 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 Servionica
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import freezegun
import mock
import oslo_messaging as om
from watcher.common import rpc
from watcher import notifications
from watcher.objects import service as w_service
from watcher.tests.db import base
from watcher.tests.objects import utils
@freezegun.freeze_time('2016-10-18T09:52:05.219414')
class TestActionPlanNotification(base.DbTestCase):
def setUp(self):
super(TestActionPlanNotification, self).setUp()
p_get_notifier = mock.patch.object(rpc, 'get_notifier')
m_get_notifier = p_get_notifier.start()
self.addCleanup(p_get_notifier.stop)
self.m_notifier = mock.Mock(spec=om.Notifier)
def fake_get_notifier(publisher_id):
self.m_notifier.publisher_id = publisher_id
return self.m_notifier
m_get_notifier.side_effect = fake_get_notifier
def test_service_failed(self):
service = utils.get_test_service(mock.Mock(),
created_at=datetime.datetime.utcnow())
state = w_service.ServiceStatus.FAILED
notifications.service.send_service_update(mock.MagicMock(),
service,
state,
host='node0')
notification = self.m_notifier.warning.call_args[1]
payload = notification['payload']
self.assertEqual("infra-optim:node0", self.m_notifier.publisher_id)
self.assertDictEqual({
'watcher_object.data': {
'last_seen_up': '2016-09-22T08:32:06Z',
'name': 'watcher-service',
'sevice_host': 'controller',
'status_update': {
'watcher_object.data': {
'old_state': 'ACTIVE',
'state': 'FAILED'
},
'watcher_object.name': 'ServiceStatusUpdatePayload',
'watcher_object.namespace': 'watcher',
'watcher_object.version': '1.0'
}
},
'watcher_object.name': 'ServiceUpdatePayload',
'watcher_object.namespace': 'watcher',
'watcher_object.version': '1.0'
},
payload
)
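The new service-notification test pulls the emitted payload back out of the mocked notifier via `warning.call_args` before comparing it with `assertDictEqual`. The retrieval mechanism on its own, with an invented event type and a trimmed payload:

```python
from unittest import mock

notifier = mock.Mock()

# Code under test would emit a warning notification with a payload kwarg:
notifier.warning(None, event_type='service.update',
                 payload={'watcher_object.name': 'ServiceUpdatePayload',
                          'watcher_object.version': '1.0'})

# call_args is a (positional_args, kwargs) pair; index 1 is the kwargs dict.
notification = notifier.warning.call_args[1]
payload = notification['payload']
assert payload['watcher_object.name'] == 'ServiceUpdatePayload'
```

Asserting on the captured kwargs dict keeps the test focused on the payload contents rather than on the notifier's full call signature.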

Some files were not shown because too many files have changed in this diff.