Compare commits


10 Commits

Author SHA1 Message Date
Vincent Françoise
1c38637dff Fixed issue on compute nodes iteration
In this changeset, I fixed the issue with the basic server
consolidation strategy to now loop over all compute nodes
as expected instead of stopping after the first one.

Change-Id: If594f0df41e39dfb0ef8f0fce41822018490c4ec
Closes-bug: #1548874
2016-10-24 20:11:19 +00:00
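The pattern behind this fix can be sketched generically; a hypothetical example (names invented, not Watcher's actual code) of accumulating over every compute node instead of stopping after the first:

```python
def select_hosts(compute_nodes, is_candidate):
    # Buggy version returned from inside the loop, so only the first
    # compute node was ever considered. Iterating the full list and
    # collecting matches restores the expected behaviour.
    selected = []
    for node in compute_nodes:
        if is_candidate(node):
            selected.append(node)
    return selected

print(select_hosts(["n1", "n2", "n3"], lambda n: n != "n2"))  # ['n1', 'n3']
```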
Vincent Françoise
1eb2b517ef Refactored Tests to load scenarios from file
In this changeset, I simplified the logic that is used to create
cluster data model scenarios.

Change-Id: Ia6e138d9897190d3207a70485dc62ccc34087686
2016-10-21 17:32:42 +02:00
licanwei
3e030618fa 'tox -e py27' failed
cfg.CONF.debug should be set to False by default;
if it is True, some unit tests cannot pass.

Change-Id: Ib098250af3aec48aa9d9152e20c80460f3bd641e
Closes-Bug: #1625560
2016-10-20 07:31:51 +00:00
Antoine Cabot
f823345424 Update Watcher description
This changeset updates the Watcher description,
as it is used in the ML to announce each
new release.

Change-Id: I6318107c3e3322a3ef734e90c9e3e0176967ceaf
(cherry picked from commit 8cf233ab03)
2016-09-29 09:02:51 +00:00
Vincent Françoise
19fdd1557e Doc updates
Updated inconsistent docs.

Change-Id: I4be05f662fee6ebdf721ac93dd97611b5a686273
2016-09-28 15:29:30 +00:00
Jenkins
641989b424 Merge "Add constraint target to tox.ini and remove 1 dep" into stable/newton
2016-09-26 09:41:34 +00:00
Vincent Françoise
8814c09087 Fixed GMR configuration issue
GMR was ignoring the config because the conf wasn't passed when
starting any of the Watcher services. This changeset fixes this issue.

Change-Id: If386c5f0459c4278a2a56c8c3185fcdafce673a0
2016-09-21 13:49:39 +00:00
David TARDIVEL
eb4f46b703 Add constraint target to tox.ini and remove 1 dep
This adds a pip install command to tox.ini that is only used when the
tox env is passed with the 'constraints' factor appended onto it.
As such this will not affect developer workflows or current unit tests.

The initial use of this will be in a non-voting job, to verify that the
constrained checks with tox are stable. DevStack is already running
constrained jobs, so problems are not expected.

To run tox with pip using constraints on a developer system, a
developer should run the desired tox environment with '-constraints'
appended. For example: $(tox -epy27-constraints)
Pip will pull the current version of the upper-constraints.txt file down
from git.openstack.org; however, this can be overridden to use a local
file or a different URL by setting the environment variable
"UPPER_CONSTRAINTS_FILE", as its value is passed directly to pip.

This is currently not enabled in the default tox run, but it is
possible to enable it as a default by adding it to 'envlist' in tox.ini.

This also removes requirements.txt from tox.ini deps, which is
redundant, per lifeless' email:
http://lists.openstack.org/pipermail/openstack-dev/2015-July/069663.html

Change-Id: I79c0ceb46fc980840a8baf5fa4a303bb450bfbec
2016-09-21 12:03:50 +02:00
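The wiring this commit describes can be sketched as a tox.ini fragment like the following; the env name and constraints URL here are assumptions for illustration, not the exact change:

```ini
# Hypothetical sketch of a constraints factor in tox.ini; env name and URL assumed.
[testenv:py27-constraints]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```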
OpenStack Proposal Bot
2f33dd10c0 Updated from global requirements
Change-Id: Id918c0fd8f2a567f57d3849683282aad5c1c68f8
2016-09-20 13:17:37 +00:00
Thierry Carrez
1a197ab801 Update .gitreview for stable/newton
Change-Id: I410924887299ae8d32247ff1f798ac059f8d5dbd
2016-09-16 15:02:51 +02:00
212 changed files with 3999 additions and 9246 deletions


@@ -2,3 +2,4 @@
 host=review.openstack.org
 port=29418
 project=openstack/watcher.git
+defaultbranch=stable/newton

MANIFEST.in Normal file

@@ -0,0 +1,6 @@
+include AUTHORS
+include ChangeLog
+exclude .gitignore
+exclude .gitreview
+global-exclude *.pyc


@@ -126,8 +126,6 @@ function create_watcher_conf {
     iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
     iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_host $RABBIT_HOST
-    iniset $WATCHER_CONF oslo_messaging_notifications driver "messaging"
     iniset $NOVA_CONF oslo_messaging_notifications topics "notifications,watcher_notifications"
     configure_auth_token_middleware $WATCHER_CONF watcher $WATCHER_AUTH_CACHE_DIR


@@ -28,7 +28,7 @@ ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3,neutron
 enable_service n-cauth
 # Enable the Watcher Dashboard plugin
-# enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard
+enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard
 # Enable the Watcher plugin
 enable_plugin watcher git://git.openstack.org/openstack/watcher


@@ -1,133 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
This provides a sphinx extension able to list the implemented versioned
notifications into the developer documentation.
It is used via a single directive in the .rst file
.. versioned_notifications::
"""
from sphinx.util.compat import Directive
from docutils import nodes
from watcher.notifications import base as notification
from watcher.objects import base
class VersionedNotificationDirective(Directive):
SAMPLE_ROOT = 'doc/notification_samples/'
TOGGLE_SCRIPT = """
<script>
jQuery(document).ready(function(){
jQuery('#%s-div').toggle('show');
jQuery('#%s-hideshow').on('click', function(event) {
jQuery('#%s-div').toggle('show');
});
});
</script>
"""
def run(self):
notifications = self._collect_notifications()
return self._build_markup(notifications)
def _collect_notifications(self):
base.WatcherObjectRegistry.register_notification_objects()
notifications = []
ovos = base.WatcherObjectRegistry.obj_classes()
for name, cls in ovos.items():
cls = cls[0]
if (issubclass(cls, notification.NotificationBase) and
cls != notification.NotificationBase):
payload_name = cls.fields['payload'].objname
payload_cls = ovos[payload_name][0]
for sample in cls.samples:
notifications.append((cls.__name__,
payload_cls.__name__,
sample))
return sorted(notifications)
def _build_markup(self, notifications):
content = []
cols = ['Event type', 'Notification class', 'Payload class', 'Sample']
table = nodes.table()
content.append(table)
group = nodes.tgroup(cols=len(cols))
table.append(group)
head = nodes.thead()
group.append(head)
for _ in cols:
group.append(nodes.colspec(colwidth=1))
body = nodes.tbody()
group.append(body)
# fill the table header
row = nodes.row()
body.append(row)
for col_name in cols:
col = nodes.entry()
row.append(col)
text = nodes.strong(text=col_name)
col.append(text)
# fill the table content, one notification per row
for name, payload, sample_file in notifications:
event_type = sample_file[0: -5].replace('-', '.')
row = nodes.row()
body.append(row)
col = nodes.entry()
row.append(col)
text = nodes.literal(text=event_type)
col.append(text)
col = nodes.entry()
row.append(col)
text = nodes.literal(text=name)
col.append(text)
col = nodes.entry()
row.append(col)
text = nodes.literal(text=payload)
col.append(text)
col = nodes.entry()
row.append(col)
with open(self.SAMPLE_ROOT + sample_file, 'r') as f:
sample_content = f.read()
event_type = sample_file[0: -5]
html_str = self.TOGGLE_SCRIPT % ((event_type, ) * 3)
html_str += ("<input type='button' id='%s-hideshow' "
"value='hide/show sample'>" % event_type)
html_str += ("<div id='%s-div'><pre>%s</pre></div>"
% (event_type, sample_content))
raw = nodes.raw('', html_str, format="html")
col.append(raw)
return content
def setup(app):
app.add_directive('versioned_notifications',
VersionedNotificationDirective)
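For reference, the filename-to-event-type mapping used in `_build_markup` above (strip the `.json` suffix, map dashes to dots) can be exercised standalone:

```python
def event_type_from_sample(sample_file):
    # Mirror the directive's logic: drop the ".json" suffix (5 chars)
    # and turn dashes into dots to recover the notification event type.
    return sample_file[0:-5].replace('-', '.')

print(event_type_from_sample("audit-planner-end.json"))  # audit.planner.end
print(event_type_from_sample("audit-strategy-error.json"))  # audit.strategy.error
```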


@@ -1,69 +0,0 @@
{
"priority": "INFO",
"payload": {
"watcher_object.data": {
"audit_type": "ONESHOT",
"parameters": {
"para2": "hello",
"para1": 3.2
},
"state": "PENDING",
"updated_at": null,
"deleted_at": null,
"goal": {
"watcher_object.data": {
"uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"name": "dummy",
"updated_at": null,
"deleted_at": null,
"efficacy_specification": [],
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy goal"
},
"watcher_object.name": "GoalPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"interval": null,
"scope": [],
"strategy": {
"watcher_object.data": {
"parameters_spec": {
"properties": {
"para2": {
"type": "string",
"default": "hello",
"description": "string parameter example"
},
"para1": {
"description": "number parameter example",
"maximum": 10.2,
"type": "number",
"default": 3.2,
"minimum": 1.0
}
}
},
"name": "dummy",
"uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"updated_at": null,
"deleted_at": null,
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy strategy"
},
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"created_at": "2016-11-04T16:29:20Z",
"uuid": "4a97b9dd-2023-43dc-b713-815bdd94d4d6"
},
"watcher_object.name": "AuditCreatePayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"publisher_id": "infra-optim:localhost",
"timestamp": "2016-11-04 16:31:36.264673 ",
"event_type": "audit.create",
"message_id": "cbcf9f2c-7c53-4b4d-91ec-db49cca024b6"
}


@@ -1,69 +0,0 @@
{
"priority": "INFO",
"payload": {
"watcher_object.data": {
"audit_type": "ONESHOT",
"parameters": {
"para2": "hello",
"para1": 3.2
},
"state": "DELETED",
"updated_at": null,
"deleted_at": null,
"goal": {
"watcher_object.data": {
"uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"name": "dummy",
"updated_at": null,
"deleted_at": null,
"efficacy_specification": [],
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy goal"
},
"watcher_object.name": "GoalPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"interval": null,
"scope": [],
"strategy": {
"watcher_object.data": {
"parameters_spec": {
"properties": {
"para2": {
"type": "string",
"default": "hello",
"description": "string parameter example"
},
"para1": {
"description": "number parameter example",
"maximum": 10.2,
"type": "number",
"default": 3.2,
"minimum": 1.0
}
}
},
"name": "dummy",
"uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"updated_at": null,
"deleted_at": null,
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy strategy"
},
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"created_at": "2016-11-04T16:29:20Z",
"uuid": "4a97b9dd-2023-43dc-b713-815bdd94d4d6"
},
"watcher_object.name": "AuditDeletePayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"publisher_id": "infra-optim:localhost",
"timestamp": "2016-11-04 16:31:36.264673 ",
"event_type": "audit.delete",
"message_id": "cbcf9f2c-7c53-4b4d-91ec-db49cca024b6"
}


@@ -1,70 +0,0 @@
{
"priority": "INFO",
"payload": {
"watcher_object.data": {
"audit_type": "ONESHOT",
"parameters": {
"para2": "hello",
"para1": 3.2
},
"state": "ONGOING",
"updated_at": null,
"deleted_at": null,
"fault": null,
"goal": {
"watcher_object.data": {
"uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"name": "dummy",
"updated_at": null,
"deleted_at": null,
"efficacy_specification": [],
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy goal"
},
"watcher_object.name": "GoalPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"interval": null,
"scope": [],
"strategy": {
"watcher_object.data": {
"parameters_spec": {
"properties": {
"para2": {
"type": "string",
"default": "hello",
"description": "string parameter example"
},
"para1": {
"description": "number parameter example",
"maximum": 10.2,
"type": "number",
"default": 3.2,
"minimum": 1.0
}
}
},
"name": "dummy",
"uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"updated_at": null,
"deleted_at": null,
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy strategy"
},
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"created_at": "2016-11-04T16:29:20Z",
"uuid": "4a97b9dd-2023-43dc-b713-815bdd94d4d6"
},
"watcher_object.name": "AuditActionPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"publisher_id": "infra-optim:localhost",
"timestamp": "2016-11-04 16:31:36.264673 ",
"event_type": "audit.planner.end",
"message_id": "cbcf9f2c-7c53-4b4d-91ec-db49cca024b6"
}


@@ -1,80 +0,0 @@
{
"priority": "ERROR",
"payload": {
"watcher_object.data": {
"audit_type": "ONESHOT",
"parameters": {
"para2": "hello",
"para1": 3.2
},
"state": "ONGOING",
"updated_at": null,
"deleted_at": null,
"fault": {
"watcher_object.data": {
"exception": "WatcherException",
"exception_message": "TEST",
"function_name": "test_send_audit_action_with_error",
"module_name": "watcher.tests.notifications.test_audit_notification"
},
"watcher_object.name": "ExceptionPayload",
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0"
},
"goal": {
"watcher_object.data": {
"uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"name": "dummy",
"updated_at": null,
"deleted_at": null,
"efficacy_specification": [],
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy goal"
},
"watcher_object.name": "GoalPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"interval": null,
"scope": [],
"strategy": {
"watcher_object.data": {
"parameters_spec": {
"properties": {
"para2": {
"type": "string",
"default": "hello",
"description": "string parameter example"
},
"para1": {
"description": "number parameter example",
"maximum": 10.2,
"type": "number",
"default": 3.2,
"minimum": 1.0
}
}
},
"name": "dummy",
"uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"updated_at": null,
"deleted_at": null,
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy strategy"
},
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"created_at": "2016-11-04T16:29:20Z",
"uuid": "4a97b9dd-2023-43dc-b713-815bdd94d4d6"
},
"watcher_object.name": "AuditActionPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"publisher_id": "infra-optim:localhost",
"timestamp": "2016-11-04 16:31:36.264673 ",
"event_type": "audit.planner.error",
"message_id": "cbcf9f2c-7c53-4b4d-91ec-db49cca024b6"
}


@@ -1,70 +0,0 @@
{
"priority": "INFO",
"payload": {
"watcher_object.data": {
"audit_type": "ONESHOT",
"parameters": {
"para2": "hello",
"para1": 3.2
},
"state": "ONGOING",
"updated_at": null,
"deleted_at": null,
"fault": null,
"goal": {
"watcher_object.data": {
"uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"name": "dummy",
"updated_at": null,
"deleted_at": null,
"efficacy_specification": [],
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy goal"
},
"watcher_object.name": "GoalPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"interval": null,
"scope": [],
"strategy": {
"watcher_object.data": {
"parameters_spec": {
"properties": {
"para2": {
"type": "string",
"default": "hello",
"description": "string parameter example"
},
"para1": {
"description": "number parameter example",
"maximum": 10.2,
"type": "number",
"default": 3.2,
"minimum": 1.0
}
}
},
"name": "dummy",
"uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"updated_at": null,
"deleted_at": null,
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy strategy"
},
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"created_at": "2016-11-04T16:29:20Z",
"uuid": "4a97b9dd-2023-43dc-b713-815bdd94d4d6"
},
"watcher_object.name": "AuditActionPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"publisher_id": "infra-optim:localhost",
"timestamp": "2016-11-04 16:31:36.264673 ",
"event_type": "audit.planner.start",
"message_id": "cbcf9f2c-7c53-4b4d-91ec-db49cca024b6"
}


@@ -1,70 +0,0 @@
{
"priority": "INFO",
"payload": {
"watcher_object.data": {
"audit_type": "ONESHOT",
"parameters": {
"para2": "hello",
"para1": 3.2
},
"state": "ONGOING",
"updated_at": null,
"deleted_at": null,
"fault": null,
"goal": {
"watcher_object.data": {
"uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"name": "dummy",
"updated_at": null,
"deleted_at": null,
"efficacy_specification": [],
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy goal"
},
"watcher_object.name": "GoalPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"interval": null,
"scope": [],
"strategy": {
"watcher_object.data": {
"parameters_spec": {
"properties": {
"para2": {
"type": "string",
"default": "hello",
"description": "string parameter example"
},
"para1": {
"description": "number parameter example",
"maximum": 10.2,
"type": "number",
"default": 3.2,
"minimum": 1.0
}
}
},
"name": "dummy",
"uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"updated_at": null,
"deleted_at": null,
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy strategy"
},
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"created_at": "2016-11-04T16:29:20Z",
"uuid": "4a97b9dd-2023-43dc-b713-815bdd94d4d6"
},
"watcher_object.name": "AuditActionPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"publisher_id": "infra-optim:localhost",
"timestamp": "2016-11-04 16:31:36.264673 ",
"event_type": "audit.strategy.end",
"message_id": "cbcf9f2c-7c53-4b4d-91ec-db49cca024b6"
}


@@ -1,80 +0,0 @@
{
"priority": "ERROR",
"payload": {
"watcher_object.data": {
"audit_type": "ONESHOT",
"parameters": {
"para2": "hello",
"para1": 3.2
},
"state": "ONGOING",
"updated_at": null,
"deleted_at": null,
"fault": {
"watcher_object.data": {
"exception": "WatcherException",
"exception_message": "TEST",
"function_name": "test_send_audit_action_with_error",
"module_name": "watcher.tests.notifications.test_audit_notification"
},
"watcher_object.name": "ExceptionPayload",
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0"
},
"goal": {
"watcher_object.data": {
"uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"name": "dummy",
"updated_at": null,
"deleted_at": null,
"efficacy_specification": [],
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy goal"
},
"watcher_object.name": "GoalPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"interval": null,
"scope": [],
"strategy": {
"watcher_object.data": {
"parameters_spec": {
"properties": {
"para2": {
"type": "string",
"default": "hello",
"description": "string parameter example"
},
"para1": {
"description": "number parameter example",
"maximum": 10.2,
"type": "number",
"default": 3.2,
"minimum": 1.0
}
}
},
"name": "dummy",
"uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"updated_at": null,
"deleted_at": null,
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy strategy"
},
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"created_at": "2016-11-04T16:29:20Z",
"uuid": "4a97b9dd-2023-43dc-b713-815bdd94d4d6"
},
"watcher_object.name": "AuditActionPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"publisher_id": "infra-optim:localhost",
"timestamp": "2016-11-04 16:31:36.264673 ",
"event_type": "audit.strategy.error",
"message_id": "cbcf9f2c-7c53-4b4d-91ec-db49cca024b6"
}


@@ -1,70 +0,0 @@
{
"priority": "INFO",
"payload": {
"watcher_object.data": {
"audit_type": "ONESHOT",
"parameters": {
"para2": "hello",
"para1": 3.2
},
"state": "ONGOING",
"updated_at": null,
"deleted_at": null,
"fault": null,
"goal": {
"watcher_object.data": {
"uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"name": "dummy",
"updated_at": null,
"deleted_at": null,
"efficacy_specification": [],
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy goal"
},
"watcher_object.name": "GoalPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"interval": null,
"scope": [],
"strategy": {
"watcher_object.data": {
"parameters_spec": {
"properties": {
"para2": {
"type": "string",
"default": "hello",
"description": "string parameter example"
},
"para1": {
"description": "number parameter example",
"maximum": 10.2,
"type": "number",
"default": 3.2,
"minimum": 1.0
}
}
},
"name": "dummy",
"uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"updated_at": null,
"deleted_at": null,
"created_at": "2016-11-04T16:25:35Z",
"display_name": "Dummy strategy"
},
"watcher_object.name": "StrategyPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"created_at": "2016-11-04T16:29:20Z",
"uuid": "4a97b9dd-2023-43dc-b713-815bdd94d4d6"
},
"watcher_object.name": "AuditActionPayload",
"watcher_object.version": "1.0",
"watcher_object.namespace": "watcher"
},
"publisher_id": "infra-optim:localhost",
"timestamp": "2016-11-04 16:31:36.264673 ",
"event_type": "audit.strategy.start",
"message_id": "cbcf9f2c-7c53-4b4d-91ec-db49cca024b6"
}


@@ -1,78 +0,0 @@
{
"publisher_id": "infra-optim:localhost",
"timestamp": "2016-11-04 16:51:38.722986 ",
"payload": {
"watcher_object.name": "AuditUpdatePayload",
"watcher_object.data": {
"strategy": {
"watcher_object.name": "StrategyPayload",
"watcher_object.data": {
"name": "dummy",
"parameters_spec": {
"properties": {
"para2": {
"default": "hello",
"type": "string",
"description": "string parameter example"
},
"para1": {
"maximum": 10.2,
"default": 3.2,
"minimum": 1.0,
"description": "number parameter example",
"type": "number"
}
}
},
"updated_at": null,
"display_name": "Dummy strategy",
"deleted_at": null,
"uuid": "75234dfe-87e3-4f11-a0e0-3c3305d86a39",
"created_at": "2016-11-04T16:25:35Z"
},
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0"
},
"scope": [],
"created_at": "2016-11-04T16:51:21Z",
"uuid": "f1e0d912-afd9-4bf2-91ef-c99cd08cc1ef",
"goal": {
"watcher_object.name": "GoalPayload",
"watcher_object.data": {
"efficacy_specification": [],
"updated_at": null,
"name": "dummy",
"display_name": "Dummy goal",
"deleted_at": null,
"uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",
"created_at": "2016-11-04T16:25:35Z"
},
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0"
},
"parameters": {
"para2": "hello",
"para1": 3.2
},
"deleted_at": null,
"state_update": {
"watcher_object.name": "AuditStateUpdatePayload",
"watcher_object.data": {
"state": "ONGOING",
"old_state": "PENDING"
},
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0"
},
"interval": null,
"updated_at": null,
"state": "ONGOING",
"audit_type": "ONESHOT"
},
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0"
},
"priority": "INFO",
"event_type": "audit.update",
"message_id": "697fdf55-7252-4b6c-a2c2-5b9e85f6342c"
}


@@ -1,16 +0,0 @@
{
"event_type": "infra-optim.exception",
"payload": {
"watcher_object.data": {
"exception": "NoAvailableStrategyForGoal",
"exception_message": "No strategy could be found to achieve the server_consolidation goal.",
"function_name": "_aggregate_create_in_db",
"module_name": "watcher.objects.aggregate"
},
"watcher_object.name": "ExceptionPayload",
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0"
},
"priority": "ERROR",
"publisher_id": "watcher-api:fake-mini"
}


@@ -298,7 +298,7 @@ The :ref:`Watcher Decision Engine <watcher_decision_engine_definition>` also
 builds the :ref:`Cluster Data Model <cluster_data_model_definition>`. This
 data model is needed by the :ref:`Strategy <strategy_definition>` to know the
 current state and topology of the audited
-:ref:`OpenStack cluster <cluster_definition>`.
+:ref:`Openstack cluster <cluster_definition>`.

 The :ref:`Watcher Decision Engine <watcher_decision_engine_definition>` calls
 the **execute()** method of the instantiated

@@ -450,7 +450,7 @@ state may be one of the following:
 stored in the :ref:`Watcher database <watcher_database_definition>` but is
 not returned any more through the Watcher APIs.
 - **CANCELLED** : the :ref:`Action Plan <action_plan_definition>` was in
-  **RECOMMENDED**, **PENDING** or **ONGOING** state and was cancelled by the
+  **PENDING** or **ONGOING** state and was cancelled by the
   :ref:`Administrator <administrator_definition>`


@@ -11,21 +11,7 @@
 # implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import os
-import sys
 from watcher import version as watcher_version
-from watcher import objects
-objects.register_all()
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-sys.path.insert(0, os.path.abspath('../../'))
-sys.path.insert(0, os.path.abspath('../'))
-sys.path.insert(0, os.path.abspath('./'))
 # -- General configuration ----------------------------------------------------

@@ -40,8 +26,7 @@ extensions = [
 'sphinxcontrib.pecanwsme.rest',
 'stevedore.sphinxext',
 'wsmeext.sphinxext',
-'ext.term',
-'ext.versioned_notifications',
+'watcher.doc',
 ]

 wsme_protocols = ['restjson']

@@ -82,8 +67,6 @@ exclude_patterns = [
 # them when scanning for input files.
 'man/footer.rst',
 'man/general-options.rst',
-'strategies/strategy-template.rst',
-'image_src/plantuml/README.rst',
 ]

 # If true, '()' will be appended to :func: etc. cross-reference text.


@@ -93,11 +93,11 @@ following command:
 .. code:: bash

-$ watcher strategy list --goal <your_goal_uuid_or_name>
+$ watcher strategy list --goal-uuid <your_goal_uuid>

 or::

-$ openstack optimize strategy list --goal <your_goal_uuid_or_name>
+$ openstack optimize strategy list --goal-uuid <your_goal_uuid>

 You can use the following command to check strategy details including which
 parameters of which format it supports:


@@ -1,13 +0,0 @@
..
  Except where otherwise noted, this document is licensed under Creative
  Commons Attribution 3.0 License. You can view the license at:
  https://creativecommons.org/licenses/by/3.0/

.. _watcher_notifications:

========================
Notifications in Watcher
========================

.. versioned_notifications::


@@ -22,7 +22,7 @@ cluster data model collectors within Watcher.
 Creating a new plugin
 =====================

-In order to create a new cluster data model collector, you have to:
+In order to create a new model, you have to:

 - Extend the :py:class:`~.base.BaseClusterDataModelCollector` class.
 - Implement its :py:meth:`~.BaseClusterDataModelCollector.execute` abstract

@@ -65,49 +65,6 @@ This implementation is the most basic one. So in order to get a better
 understanding on how to implement a more advanced cluster data model collector,
 have a look at the :py:class:`~.NovaClusterDataModelCollector` class.

-Define a custom model
-=====================
-
-As you may have noticed in the above example, we are reusing an existing model
-provided by Watcher. However, this model can be easily customized by
-implementing a new class that would implement the :py:class:`~.Model` abstract
-base class. Here below is simple example on how to proceed in implementing a
-custom Model:
-
-.. code-block:: python
-
-    # Filepath = <PROJECT_DIR>/thirdparty/dummy.py
-    # Import path = thirdparty.dummy
-
-    from watcher.decision_engine.model import base as modelbase
-    from watcher.decision_engine.model.collector import base
-
-    class MyModel(modelbase.Model):
-
-        def to_string(self):
-            return 'MyModel'
-
-    class DummyClusterDataModelCollector(base.BaseClusterDataModelCollector):
-
-        def execute(self):
-            model = MyModel()
-            # Do something here...
-            return model
-
-        @property
-        def notification_endpoints(self):
-            return []
-
-Here below is the abstract ``Model`` class that every single cluster data model
-should implement:
-
-.. autoclass:: watcher.decision_engine.model.base.Model
-    :members:
-    :special-members: __init__
-    :noindex:
-
 Define configuration parameters
 ===============================


@@ -43,7 +43,7 @@ In order to create a new strategy, you have to:
 Note: Do not use a variable to return the translated string so it can be
 automatically collected by the translation tool.
 - Implement its :py:meth:`~.BaseStrategy.get_translatable_display_name`
-  class method to return the translation key (actually the English display
+  class method to return the translation key (actually the english display
   name) of your new strategy. The value return should be the same as the
   string translated in :py:meth:`~.BaseStrategy.get_display_name`.
 - Implement its :py:meth:`~.BaseStrategy.execute` method to return the


@@ -96,8 +96,8 @@ The :ref:`Cluster <cluster_definition>` may be divided in one or several
 .. _cluster_data_model_definition:

-Cluster Data Model (CDM)
-========================
+Cluster Data Model
+==================

 .. watcher-term:: watcher.decision_engine.model.collector.base

@@ -164,8 +164,7 @@ Goal
 Host Aggregate
 ==============

-Please, read `the official OpenStack definition of a Host Aggregate
-<http://docs.openstack.org/developer/nova/aggregates.html>`_.
+Please, read `the official OpenStack definition of a Host Aggregate <http://docs.openstack.org/developer/nova/aggregates.html>`_.

 .. _instance_definition:


@@ -4,7 +4,7 @@ actor Administrator
== Create some Audit settings ==
-Administrator -> Watcher : create new Audit Template (i.e. Audit settings : goal, scope, ...)
+Administrator -> Watcher : create new Audit Template (i.e. Audit settings : goal, scope, deadline,...)
Watcher -> Watcher : save Audit Template in database
Administrator <-- Watcher : Audit Template UUID


@@ -41,7 +41,9 @@ table(audit_templates) {
uuid : String[36]
name : String[63], nullable
description : String[255], nullable
-scope : JSONEncodedList
+host_aggregate : Integer, nullable
+extra : JSONEncodedDict
+version : String[15], nullable
created_at : DateTime
updated_at : DateTime
@@ -57,9 +59,10 @@ table(audits) {
uuid : String[36]
audit_type : String[20]
state : String[20], nullable
+deadline : DateTime, nullable
interval : Integer, nullable
parameters : JSONEncodedDict, nullable
-scope : JSONEncodedList, nullable
+host_aggregate : Integer, nullable
created_at : DateTime
updated_at : DateTime
@@ -128,18 +131,6 @@ table(scoring_engines) {
deleted : Integer
}
-table(service) {
-primary_key(id: Integer)
-name: String[255]
-host: String[255]
-last_seen_up: DateTime
-created_at : DateTime
-updated_at : DateTime
-deleted_at : DateTime
-deleted : Integer
-}
"goals" <.. "strategies" : Foreign Key
"goals" <.. "audit_templates" : Foreign Key
"strategies" <.. "audit_templates" : Foreign Key
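The `scope` column removed in the hunks above is typed `JSONEncodedList`, i.e. a Python list serialized to JSON in the database. A minimal sketch of how such a column value round-trips (the structure of the list shown here is purely hypothetical, not Watcher's documented scope format):

```python
import json

# Hypothetical value for a JSONEncodedList column; the exact structure
# Watcher stores is not shown in this diff.
scope = [
    {"host_aggregates": [{"id": 1}]},
    {"availability_zones": [{"name": "AZ1"}]},
]

encoded = json.dumps(scope)    # what would be written to the column
decoded = json.loads(encoded)  # what the model layer would read back
assert decoded == scope
```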



@@ -56,7 +56,6 @@ Getting Started
dev/devstack
deploy/configuration
deploy/conf-files
-dev/notifications
dev/testing
dev/rally_link
@@ -97,7 +96,6 @@ Introduction
deploy/user-guide
deploy/policy
deploy/gmr
-strategies/strategies
Watcher Manual Pages
====================


@@ -1,96 +0,0 @@
==================================
Basic Offline Server Consolidation
==================================
Synopsis
--------
**display name**: ``basic``
**goal**: ``server_consolidation``
.. watcher-term:: watcher.decision_engine.strategy.strategies.basic_consolidation
Requirements
------------
Metrics
*******
The *basic* strategy requires the following metrics:
============================ ============ ======= =======
metric service name plugins comment
============================ ============ ======= =======
``compute.node.cpu.percent`` ceilometer_ none
``cpu_util`` ceilometer_ none
============================ ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migration``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
* - ``change_nova_service_state``
- .. watcher-term:: watcher.applier.actions.change_nova_service_state.ChangeNovaServiceState
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.default.DefaultPlanner
Configuration
-------------
Strategy parameter is:
====================== ====== ============= ===================================
parameter type default Value description
====================== ====== ============= ===================================
``migration_attempts`` Number 0 Maximum number of combinations to
be tried by the strategy while
searching for potential candidates.
To remove the limit, set it to 0
====================== ====== ============= ===================================
Efficacy Indicator
------------------
.. watcher-func::
:format: literal_block
watcher.decision_engine.goal.efficacy.specs.ServerConsolidation.get_global_efficacy_indicator
How to use it ?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 server_consolidation --strategy basic
$ openstack optimize audit create -a at1 -p migration_attempts=4
External Links
--------------
None.


@@ -1,8 +0,0 @@
Strategies
==========
.. toctree::
:glob:
:maxdepth: 1
./*


@@ -1,115 +0,0 @@
=============
Strategy name
=============
Synopsis
--------
**display name**:
**goal**:
Add here a complete description of your strategy
Requirements
------------
Metrics
*******
Write here the list of metrics required by your strategy algorithm (in the form
of a table). If these metrics requires specific Telemetry plugin or other
additional software, please explain here how to deploy them (and add link to
dedicated installation guide).
Example:
======================= ============ ======= =======
metric service name plugins comment
======================= ============ ======= =======
compute.node.* ceilometer_ none one point every 60s
vm.cpu.utilization_perc monasca_ none
power ceilometer_ kwapi_ one point every 60s
======================= ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
.. _monasca: https://github.com/openstack/monasca-agent/blob/master/docs/Libvirt.md
.. _kwapi: https://kwapi.readthedocs.io/en/latest/index.html
Cluster data model
******************
Default Watcher's cluster data model.
or
If your strategy implementation requires a new cluster data model, please
describe it in this section, with a link to model plugin's installation guide.
Actions
*******
Default Watcher's actions.
or
If your strategy implementation requires new actions, add the list of Action
plugins here (in the form of a table) with a link to the plugin's installation
procedure.
======== =================
action description
======== =================
action1_ This action1 ...
action2_ This action2 ...
======== =================
.. _action1 : https://github.com/myrepo/watcher/plugins/action1
.. _action2 : https://github.com/myrepo/watcher/plugins/action2
Planner
*******
Default Watcher's planner.
or
If your strategy requires also a new planner to schedule built actions in time,
please describe it in this section, with a link to planner plugin's
installation guide.
Configuration
-------------
If your strategy use configurable parameters, explain here how to tune them.
Efficacy Indicator
------------------
Add here the Efficacy indicator computed by your strategy.
Algorithm
---------
Add here either the description of your algorithm or
link to the existing description.
How to use it ?
---------------
.. code-block:: shell
$ Write the command line to create an audit with your strategy.
External Links
--------------
If you have written papers, blog articles .... about your strategy into Watcher,
or if your strategy is based from external publication(s), please add HTTP
links and references in this section.
- `link1 <http://www.link1.papers.com>`_
- `link2 <http://www.link2.papers.com>`_


@@ -1,100 +0,0 @@
==================================
VM Workload Consolidation Strategy
==================================
Synopsis
--------
**display name**: ``vm_workload_consolidation``
**goal**: ``vm_consolidation``
.. watcher-term:: watcher.decision_engine.strategy.strategies.vm_workload_consolidation
Requirements
------------
Metrics
*******
The *vm_workload_consolidation* strategy requires the following metrics:
============================ ============ ======= =======
metric service name plugins comment
============================ ============ ======= =======
``memory`` ceilometer_ none
``disk.root.size`` ceilometer_ none
============================ ============ ======= =======
The following metrics are not required but increase the accuracy of
the strategy if available:
============================ ============ ======= =======
metric service name plugins comment
============================ ============ ======= =======
``memory.usage`` ceilometer_ none
``cpu_util`` ceilometer_ none
============================ ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migration``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
* - ``change_nova_service_state``
- .. watcher-term:: watcher.applier.actions.change_nova_service_state.ChangeNovaServiceState
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.default.DefaultPlanner
Efficacy Indicator
------------------
.. watcher-func::
:format: literal_block
watcher.decision_engine.goal.efficacy.specs.ServerConsolidation.get_global_efficacy_indicator
Algorithm
---------
For more information on the VM Workload consolidation strategy please refer to: https://specs.openstack.org/openstack/watcher-specs/specs/mitaka/implemented/zhaw-load-consolidation.html
How to use it ?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 vm_consolidation --strategy vm_workload_consolidation
$ openstack optimize audit create -a at1
External Links
--------------
*Spec URL*
https://specs.openstack.org/openstack/watcher-specs/specs/mitaka/implemented/zhaw-load-consolidation.html


@@ -1,131 +0,0 @@
=============================================
Watcher Overload standard deviation algorithm
=============================================
Synopsis
--------
**display name**: ``workload_stabilization``
**goal**: ``workload_balancing``
.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_stabilization
Requirements
------------
Metrics
*******
The *workload_stabilization* strategy requires the following metrics:
============================ ============ ======= =======
metric service name plugins comment
============================ ============ ======= =======
``compute.node.cpu.percent`` ceilometer_ none
``hardware.memory.used`` ceilometer_ SNMP_
``cpu_util`` ceilometer_ none
``memory.resident`` ceilometer_ none
============================ ============ ======= =======
.. _ceilometer: http://docs.openstack.org/admin-guide/telemetry-measurements.html#openstack-compute
.. _SNMP: http://docs.openstack.org/admin-guide/telemetry-measurements.html
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migration``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.default.DefaultPlanner
Configuration
-------------
Strategy parameters are:
==================== ====== ===================== =============================
parameter type default Value description
==================== ====== ===================== =============================
``metrics`` array |metrics| Metrics used as rates of
cluster loads.
``thresholds`` object |thresholds| Dict where key is a metric
and value is a trigger value.
``weights`` object |weights| These weights used to
calculate common standard
deviation. Name of weight
contains meter name and
_weight suffix.
``instance_metrics`` object |instance_metrics| Mapping to get hardware
statistics using instance
metrics.
``host_choice`` string retry Method of host's choice.
There are cycle, retry and
fullsearch methods. Cycle
will iterate hosts in cycle.
Retry will get some hosts
random (count defined in
retry_count option).
Fullsearch will return each
host from list.
``retry_count`` number 1 Count of random returned
hosts.
==================== ====== ===================== =============================
.. |metrics| replace:: ["cpu_util", "memory.resident"]
.. |thresholds| replace:: {"cpu_util": 0.2, "memory.resident": 0.2}
.. |weights| replace:: {"cpu_util_weight": 1.0, "memory.resident_weight": 1.0}
.. |instance_metrics| replace:: {"cpu_util": "hardware.cpu.util", "memory.resident": "hardware.memory.used"}
Efficacy Indicator
------------------
.. watcher-func::
:format: literal_block
watcher.decision_engine.goal.efficacy.specs.ServerConsolidation.get_global_efficacy_indicator
Algorithm
---------
You can find description of overload algorithm and role of standard deviation
here: https://specs.openstack.org/openstack/watcher-specs/specs/newton/implemented/sd-strategy.html
How to use it ?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 workload_balancing --strategy workload_stabilization
$ openstack optimize audit create -a at1 \
-p thresholds='{"memory.resident": 0.05}' \
-p metrics='["memory.resident"]'
External Links
--------------
- `Watcher Overload standard deviation algorithm spec <https://specs.openstack.org/openstack/watcher-specs/specs/newton/implemented/sd-strategy.html>`_
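As an aside on the removed doc above: the "common standard deviation" that the ``weights`` parameter feeds into can be sketched roughly as follows. This is an illustrative reading of the parameter table (names ending in ``_weight``, one weight per metric), not Watcher's actual implementation.

```python
import math

# Rough sketch: combine per-metric standard deviations using the
# <metric>_weight values from the parameter table above. The function
# names and the exact combination rule are assumptions.
def std_dev(values):
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def common_std_dev(metric_series, weights):
    # metric_series: e.g. {"cpu_util": [0.2, 0.4], "memory.resident": [...]}
    # weights:       e.g. {"cpu_util_weight": 1.0, "memory.resident_weight": 1.0}
    return sum(
        weights[name + "_weight"] * std_dev(series)
        for name, series in metric_series.items()
    )
```

With the default weights of 1.0 this reduces to a plain sum of per-metric standard deviations; lowering one weight de-emphasizes that metric's variability.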


@@ -37,9 +37,5 @@
"strategy:detail": "rule:default",
"strategy:get": "rule:default",
-"strategy:get_all": "rule:default",
-"service:detail": "rule:default",
-"service:get": "rule:default",
-"service:get_all": "rule:default"
+"strategy:get_all": "rule:default"
}


@@ -242,6 +242,3 @@ texinfo_documents = [
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
-# -- Options for Internationalization output ------------------------------
-locale_dirs = ['locale/']


@@ -6,6 +6,5 @@ Contents:
.. toctree::
   :maxdepth: 1
-unreleased
-newton
+unreleased.rst


@@ -1,6 +0,0 @@
===================================
Newton Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/newton


@@ -5,38 +5,37 @@
apscheduler # MIT License
enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.1 # BSD
-keystoneauth1>=2.14.0 # Apache-2.0
+keystoneauth1>=2.10.0 # Apache-2.0
-keystonemiddleware!=4.5.0,>=4.2.0 # Apache-2.0
+keystonemiddleware!=4.1.0,!=4.5.0,>=4.0.0 # Apache-2.0
lxml>=2.3 # BSD
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.cache>=1.5.0 # Apache-2.0
-oslo.config!=3.18.0,>=3.14.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0
oslo.context>=2.9.0 # Apache-2.0
-oslo.db!=4.13.1,!=4.13.2,>=4.11.0 # Apache-2.0
+oslo.db!=4.13.1,!=4.13.2,>=4.10.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
-oslo.log>=3.11.0 # Apache-2.0
+oslo.log>=1.14.0 # Apache-2.0
oslo.messaging>=5.2.0 # Apache-2.0
-oslo.policy>=1.15.0 # Apache-2.0
+oslo.policy>=1.9.0 # Apache-2.0
oslo.reports>=0.6.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0
-oslo.utils>=3.18.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
-oslo.versionedobjects>=1.13.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
-pbr>=1.8 # Apache-2.0
+pbr>=1.6 # Apache-2.0
-pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
+pecan!=1.0.2,!=1.0.3,!=1.0.4,>=1.0.0 # BSD
-PrettyTable<0.8,>=0.7.1 # BSD
+PrettyTable<0.8,>=0.7 # BSD
voluptuous>=0.8.9 # BSD License
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient!=1.7.0,!=1.7.1,>=1.6.0 # Apache-2.0
-python-glanceclient>=2.5.0 # Apache-2.0
+python-glanceclient!=2.4.0,>=2.3.0 # Apache-2.0
-python-keystoneclient>=3.6.0 # Apache-2.0
+python-keystoneclient!=2.1.0,>=2.0.0 # Apache-2.0
python-neutronclient>=5.1.0 # Apache-2.0
python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
-python-openstackclient>=3.3.0 # Apache-2.0
+python-openstackclient>=2.1.0 # Apache-2.0
six>=1.9.0 # MIT
SQLAlchemy<1.1.0,>=1.0.10 # MIT
-stevedore>=1.17.1 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
taskflow>=1.26.0 # Apache-2.0
-WebOb>=1.6.0 # MIT
+WebOb>=1.2.3 # MIT
WSME>=0.8 # MIT


@@ -32,7 +32,7 @@ setup-hooks =
[entry_points]
oslo.config.opts =
-    watcher = watcher.conf.opts:list_opts
+    watcher = watcher.opts:list_opts
console_scripts =
    watcher-api = watcher.cmd.api:main


@@ -2,25 +2,25 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
-coverage>=4.0 # Apache-2.0
+coverage>=3.6 # Apache-2.0
doc8 # Apache-2.0
-freezegun>=0.3.6 # Apache-2.0
+freezegun # Apache-2.0
hacking<0.11,>=0.10.2
mock>=2.0 # BSD
oslotest>=1.10.0 # Apache-2.0
-os-testr>=0.8.0 # Apache-2.0
+os-testr>=0.7.0 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
# Doc requirements
-oslosphinx>=4.7.0 # Apache-2.0
+oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-sphinx!=1.3b1,<1.4,>=1.2.1 # BSD
+sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
sphinxcontrib-pecanwsme>=0.8 # Apache-2.0
# releasenotes
-reno>=1.8.0 # Apache-2.0
+reno>=1.8.0 # Apache2
# bandit
bandit>=1.1.0 # Apache-2.0


@@ -7,13 +7,14 @@ skipsdist = True
usedevelop = True
whitelist_externals = find
install_command =
-    constraints: pip install -U --force-reinstall -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
+    constraints: pip install -U --force-reinstall -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/newton} {opts} {packages}
    pip install -U {opts} {packages}
setenv =
    VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
commands =
-    find . -type f -name "*.py[c|o]" -delete
+    find . -type f -name "*.pyc" -delete
+    find . -type d -name "__pycache__" -delete
    ostestr --concurrency=6 {posargs}
[testenv:pep8]


@@ -35,7 +35,7 @@ CONF.register_opts(AUTH_OPTS)
def install(app, conf, public_routes):
    """Install ACL check on application.
-    :param app: A WSGI application.
+    :param app: A WSGI applicatin.
    :param conf: Settings. Dict'ified and passed to keystonemiddleware
    :param public_routes: The list of the routes which will be allowed to
        access without authentication.


@@ -61,7 +61,7 @@ class Root(base.APIBase):
root = Root()
root.name = "OpenStack Watcher API"
root.description = ("Watcher is an OpenStack project which aims to "
-                    "improve physical resources usage through "
+                    "to improve physical resources usage through "
                    "better VM placement.")
root.versions = [Version.convert('v1')]
root.default_version = Version.convert('v1')


@@ -35,7 +35,6 @@ from watcher.api.controllers.v1 import audit
from watcher.api.controllers.v1 import audit_template
from watcher.api.controllers.v1 import goal
from watcher.api.controllers.v1 import scoring_engine
-from watcher.api.controllers.v1 import service
from watcher.api.controllers.v1 import strategy
@@ -106,9 +105,6 @@ class V1(APIBase):
scoring_engines = [link.Link]
"""Links to the Scoring Engines resource"""
-services = [link.Link]
-"""Links to the services resource"""
links = [link.Link]
"""Links that point to a specific URL for this version and documentation"""
@@ -163,14 +159,6 @@ class V1(APIBase):
'scoring_engines', '',
bookmark=True)
]
-v1.services = [link.Link.make_link(
-    'self', pecan.request.host_url, 'services', ''),
-    link.Link.make_link('bookmark',
-                        pecan.request.host_url,
-                        'services', '',
-                        bookmark=True)
-]
return v1
@@ -183,7 +171,6 @@ class Controller(rest.RestController):
action_plans = action_plan.ActionPlansController()
goals = goal.GoalsController()
scoring_engines = scoring_engine.ScoringEngineController()
-services = service.ServicesController()
strategies = strategy.StrategiesController()
@wsme_pecan.wsexpose(V1)


@@ -379,7 +379,7 @@ class ActionsController(rest.RestController):
action_dict = action.as_dict()
context = pecan.request.context
new_action = objects.Action(context, **action_dict)
-new_action.create()
+new_action.create(context)
# Set the HTTP Location Header
pecan.response.location = link.build_url('actions', new_action.uuid)


@@ -60,6 +60,8 @@ class AuditPostType(wtypes.Base):
audit_type = wtypes.wsattr(wtypes.text, mandatory=True)
+deadline = wtypes.wsattr(datetime.datetime, mandatory=False)
state = wsme.wsattr(wtypes.text, readonly=True,
                    default=objects.audit.State.PENDING)
@@ -67,7 +69,8 @@ class AuditPostType(wtypes.Base):
default={})
interval = wsme.wsattr(int, mandatory=False)
-scope = wtypes.wsattr(types.jsontype, readonly=True)
+host_aggregate = wsme.wsattr(wtypes.IntegerType(minimum=1),
+                             mandatory=False)
def as_audit(self, context):
    audit_type_values = [val.value for val in objects.audit.AuditType]
@@ -79,7 +82,7 @@ class AuditPostType(wtypes.Base):
raise exception.AuditIntervalNotAllowed(audit_type=self.audit_type)
if (self.audit_type == objects.audit.AuditType.CONTINUOUS.value and
        self.interval in (wtypes.Unset, None)):
    raise exception.AuditIntervalNotSpecified(
        audit_type=self.audit_type)
@@ -97,7 +100,7 @@ class AuditPostType(wtypes.Base):
at2a = {
    'goal': 'goal_id',
    'strategy': 'strategy_id',
-    'scope': 'scope',
+    'host_aggregate': 'host_aggregate'
}
to_string_fields = set(['goal', 'strategy'])
for k in at2a:
@@ -111,11 +114,12 @@ class AuditPostType(wtypes.Base):
pass
return Audit(
    audit_type=self.audit_type,
+    deadline=self.deadline,
    parameters=self.parameters,
    goal_id=self.goal,
+    host_aggregate=self.host_aggregate,
    strategy_id=self.strategy,
-    interval=self.interval,
-    scope=self.scope,)
+    interval=self.interval)
class AuditPatchType(types.JsonPatchType):
@@ -226,6 +230,9 @@ class Audit(base.APIBase):
audit_type = wtypes.text
"""Type of this audit"""
+deadline = datetime.datetime
+"""deadline of the audit"""
state = wtypes.text
"""This audit state"""
@@ -254,8 +261,8 @@ class Audit(base.APIBase):
interval = wsme.wsattr(int, mandatory=False)
"""Launch audit periodically (in seconds)"""
-scope = wsme.wsattr(types.jsontype, mandatory=False)
-"""Audit Scope"""
+host_aggregate = wtypes.IntegerType(minimum=1)
+"""ID of the Nova host aggregate targeted by the audit template"""
def __init__(self, **kwargs):
    self.fields = []
@@ -285,10 +292,10 @@ class Audit(base.APIBase):
@staticmethod
def _convert_with_links(audit, url, expand=True):
    if not expand:
-        audit.unset_fields_except(['uuid', 'audit_type', 'state',
-                                   'goal_uuid', 'interval', 'scope',
-                                   'strategy_uuid', 'goal_name',
-                                   'strategy_name'])
+        audit.unset_fields_except(['uuid', 'audit_type', 'deadline',
+                                   'state', 'goal_uuid', 'interval',
+                                   'strategy_uuid', 'host_aggregate',
+                                   'goal_name', 'strategy_name'])
    audit.links = [link.Link.make_link('self', url,
                                       'audits', audit.uuid),
@@ -309,15 +316,15 @@ class Audit(base.APIBase):
sample = cls(uuid='27e3153e-d5bf-4b7e-b517-fb518e17f34c',
             audit_type='ONESHOT',
             state='PENDING',
+             deadline=None,
             created_at=datetime.datetime.utcnow(),
             deleted_at=None,
             updated_at=datetime.datetime.utcnow(),
-             interval=7200,
-             scope=[])
+             interval=7200)
sample.goal_id = '7ae81bb3-dec3-4289-8d6c-da80bd8001ae'
sample.strategy_id = '7ae81bb3-dec3-4289-8d6c-da80bd8001ff'
+sample.host_aggregate = 1
return cls._convert_with_links(sample, 'http://localhost:9322', expand)
@@ -374,7 +381,7 @@ class AuditsController(rest.RestController):
def _get_audits_collection(self, marker, limit,
                           sort_key, sort_dir, expand=False,
                           resource_url=None, goal=None,
-                           strategy=None):
+                           strategy=None, host_aggregate=None):
    limit = api_utils.validate_limit(limit)
    api_utils.validate_sort_dir(sort_dir)
    marker_obj = None
@@ -417,16 +424,19 @@ class AuditsController(rest.RestController):
@wsme_pecan.wsexpose(AuditCollection, types.uuid, int, wtypes.text,
                     wtypes.text, wtypes.text, wtypes.text, int)
-def get_all(self, marker=None, limit=None, sort_key='id', sort_dir='asc',
-            goal=None, strategy=None):
+def get_all(self, marker=None, limit=None,
+            sort_key='id', sort_dir='asc', goal=None,
+            strategy=None, host_aggregate=None):
    """Retrieve a list of audits.
    :param marker: pagination marker for large data sets.
    :param limit: maximum number of resources to return in a single result.
    :param sort_key: column to sort results by. Default: id.
    :param sort_dir: direction to sort. "asc" or "desc". Default: asc.
+    id.
    :param goal: goal UUID or name to filter by
    :param strategy: strategy UUID or name to filter by
+    :param host_aggregate: Optional host_aggregate
    """
    context = pecan.request.context
@@ -435,7 +445,8 @@ class AuditsController(rest.RestController):
return self._get_audits_collection(marker, limit, sort_key,
                                   sort_dir, goal=goal,
-                                   strategy=strategy)
+                                   strategy=strategy,
+                                   host_aggregate=host_aggregate)
@wsme_pecan.wsexpose(AuditCollection, wtypes.text, types.uuid, int,
                     wtypes.text, wtypes.text)
@@ -519,7 +530,7 @@ class AuditsController(rest.RestController):
audit_dict = audit.as_dict()
new_audit = objects.Audit(context, **audit_dict)
-new_audit.create()
+new_audit.create(context)
# Set the HTTP Location Header
pecan.response.location = link.build_url('audits', new_audit.uuid)
@@ -544,11 +555,14 @@ class AuditsController(rest.RestController):
raise exception.OperationNotPermitted
context = pecan.request.context
-audit_to_update = api_utils.get_resource(
-    'Audit', audit_uuid, eager=True)
+audit_to_update = api_utils.get_resource('Audit',
+                                         audit_uuid)
policy.enforce(context, 'audit:update', audit_to_update,
               action='audit:update')
+audit_to_update = objects.Audit.get_by_uuid(pecan.request.context,
+                                            audit_uuid)
try:
    audit_dict = audit_to_update.as_dict()
    audit = Audit(**api_utils.apply_jsonpatch(audit_dict, patch))
@@ -577,8 +591,7 @@ class AuditsController(rest.RestController):
:param audit_uuid: UUID of a audit. :param audit_uuid: UUID of a audit.
""" """
context = pecan.request.context context = pecan.request.context
audit_to_delete = api_utils.get_resource( audit_to_delete = api_utils.get_resource('Audit', audit_uuid)
'Audit', audit_uuid, eager=True)
policy.enforce(context, 'audit:update', audit_to_delete, policy.enforce(context, 'audit:update', audit_to_delete,
action='audit:update') action='audit:update')


@@ -41,6 +41,11 @@ settings related to the level of automation for the
A flag will indicate whether the :ref:`Action Plan <action_plan_definition>` A flag will indicate whether the :ref:`Action Plan <action_plan_definition>`
will be launched automatically or will need a manual confirmation from the will be launched automatically or will need a manual confirmation from the
:ref:`Administrator <administrator_definition>`. :ref:`Administrator <administrator_definition>`.
Last but not least, an :ref:`Audit Template <audit_template_definition>` may
contain a list of extra parameters related to the
:ref:`Strategy <strategy_definition>` configuration. These parameters can be
provided as a list of key-value pairs.
""" """
import datetime import datetime
@@ -61,7 +66,6 @@ from watcher.common import context as context_utils
from watcher.common import exception from watcher.common import exception
from watcher.common import policy from watcher.common import policy
from watcher.common import utils as common_utils from watcher.common import utils as common_utils
from watcher.decision_engine.scope import default
from watcher import objects from watcher import objects
@@ -74,24 +78,37 @@ class AuditTemplatePostType(wtypes.Base):
description = wtypes.wsattr(wtypes.text, mandatory=False) description = wtypes.wsattr(wtypes.text, mandatory=False)
"""Short description of this audit template""" """Short description of this audit template"""
deadline = wsme.wsattr(datetime.datetime, mandatory=False)
"""deadline of the audit template"""
host_aggregate = wsme.wsattr(wtypes.IntegerType(minimum=1),
mandatory=False)
"""ID of the Nova host aggregate targeted by the audit template"""
extra = wtypes.wsattr({wtypes.text: types.jsontype}, mandatory=False)
"""The metadata of the audit template"""
goal = wtypes.wsattr(wtypes.text, mandatory=True) goal = wtypes.wsattr(wtypes.text, mandatory=True)
"""Goal UUID or name of the audit template""" """Goal UUID or name of the audit template"""
strategy = wtypes.wsattr(wtypes.text, mandatory=False) strategy = wtypes.wsattr(wtypes.text, mandatory=False)
"""Strategy UUID or name of the audit template""" """Strategy UUID or name of the audit template"""
scope = wtypes.wsattr(types.jsontype, mandatory=False, default=[]) version = wtypes.text
"""Audit Scope""" """Internal version of the audit template"""
def as_audit_template(self): def as_audit_template(self):
return AuditTemplate( return AuditTemplate(
name=self.name, name=self.name,
description=self.description, description=self.description,
deadline=self.deadline,
host_aggregate=self.host_aggregate,
extra=self.extra,
goal_id=self.goal, # Dirty trick ... goal_id=self.goal, # Dirty trick ...
goal=self.goal, goal=self.goal,
strategy_id=self.strategy, # Dirty trick ... strategy_id=self.strategy, # Dirty trick ...
strategy_uuid=self.strategy, strategy_uuid=self.strategy,
scope=self.scope, version=self.version,
) )
@staticmethod @staticmethod
@@ -106,9 +123,6 @@ class AuditTemplatePostType(wtypes.Base):
else: else:
raise exception.InvalidGoal(goal=audit_template.goal) raise exception.InvalidGoal(goal=audit_template.goal)
common_utils.Draft4Validator(
default.DefaultScope.DEFAULT_SCHEMA).validate(audit_template.scope)
if audit_template.strategy: if audit_template.strategy:
available_strategies = objects.Strategy.list( available_strategies = objects.Strategy.list(
AuditTemplatePostType._ctx) AuditTemplatePostType._ctx)
@@ -291,9 +305,18 @@ class AuditTemplate(base.APIBase):
name = wtypes.text name = wtypes.text
"""Name of this audit template""" """Name of this audit template"""
description = wtypes.wsattr(wtypes.text, mandatory=False) description = wtypes.text
"""Short description of this audit template""" """Short description of this audit template"""
deadline = datetime.datetime
"""deadline of the audit template"""
host_aggregate = wtypes.IntegerType(minimum=1)
"""ID of the Nova host aggregate targeted by the audit template"""
extra = {wtypes.text: types.jsontype}
"""The metadata of the audit template"""
goal_uuid = wsme.wsproperty( goal_uuid = wsme.wsproperty(
wtypes.text, _get_goal_uuid, _set_goal_uuid, mandatory=True) wtypes.text, _get_goal_uuid, _set_goal_uuid, mandatory=True)
"""Goal UUID the audit template refers to""" """Goal UUID the audit template refers to"""
@@ -310,15 +333,15 @@ class AuditTemplate(base.APIBase):
wtypes.text, _get_strategy_name, _set_strategy_name, mandatory=False) wtypes.text, _get_strategy_name, _set_strategy_name, mandatory=False)
"""The name of the strategy this audit template refers to""" """The name of the strategy this audit template refers to"""
version = wtypes.text
"""Internal version of the audit template"""
audits = wsme.wsattr([link.Link], readonly=True) audits = wsme.wsattr([link.Link], readonly=True)
"""Links to the collection of audits contained in this audit template""" """Links to the collection of audits contained in this audit template"""
links = wsme.wsattr([link.Link], readonly=True) links = wsme.wsattr([link.Link], readonly=True)
"""A list containing a self link and associated audit template links""" """A list containing a self link and associated audit template links"""
scope = wsme.wsattr(types.jsontype, mandatory=False)
"""Audit Scope"""
def __init__(self, **kwargs): def __init__(self, **kwargs):
super(AuditTemplate, self).__init__() super(AuditTemplate, self).__init__()
self.fields = [] self.fields = []
@@ -351,8 +374,8 @@ class AuditTemplate(base.APIBase):
def _convert_with_links(audit_template, url, expand=True): def _convert_with_links(audit_template, url, expand=True):
if not expand: if not expand:
audit_template.unset_fields_except( audit_template.unset_fields_except(
['uuid', 'name', 'goal_uuid', 'goal_name', ['uuid', 'name', 'host_aggregate', 'goal_uuid', 'goal_name',
'scope', 'strategy_uuid', 'strategy_name']) 'strategy_uuid', 'strategy_name'])
# The numeric ID should not be exposed to # The numeric ID should not be exposed to
# the user, it's internal only. # the user, it's internal only.
@@ -379,12 +402,13 @@ class AuditTemplate(base.APIBase):
sample = cls(uuid='27e3153e-d5bf-4b7e-b517-fb518e17f34c', sample = cls(uuid='27e3153e-d5bf-4b7e-b517-fb518e17f34c',
name='My Audit Template', name='My Audit Template',
description='Description of my audit template', description='Description of my audit template',
host_aggregate=5,
goal_uuid='83e44733-b640-40e2-8d8a-7dd3be7134e6', goal_uuid='83e44733-b640-40e2-8d8a-7dd3be7134e6',
strategy_uuid='367d826e-b6a4-4b70-bc44-c3f6fe1c9986', strategy_uuid='367d826e-b6a4-4b70-bc44-c3f6fe1c9986',
extra={'automatic': True},
created_at=datetime.datetime.utcnow(), created_at=datetime.datetime.utcnow(),
deleted_at=None, deleted_at=None,
updated_at=datetime.datetime.utcnow(), updated_at=datetime.datetime.utcnow())
scope=[],)
return cls._convert_with_links(sample, 'http://localhost:9322', expand) return cls._convert_with_links(sample, 'http://localhost:9322', expand)
@@ -568,11 +592,11 @@ class AuditTemplatesController(rest.RestController):
audit_template_dict = audit_template.as_dict() audit_template_dict = audit_template.as_dict()
new_audit_template = objects.AuditTemplate(context, new_audit_template = objects.AuditTemplate(context,
**audit_template_dict) **audit_template_dict)
new_audit_template.create() new_audit_template.create(context)
# Set the HTTP Location Header # Set the HTTP Location Header
pecan.response.location = link.build_url( pecan.response.location = link.build_url('audit_templates',
'audit_templates', new_audit_template.uuid) new_audit_template.uuid)
return AuditTemplate.convert_with_links(new_audit_template) return AuditTemplate.convert_with_links(new_audit_template)
@wsme.validate(types.uuid, [AuditTemplatePatchType]) @wsme.validate(types.uuid, [AuditTemplatePatchType])


@@ -19,7 +19,7 @@ An efficacy indicator is a single value that gives an indication on how the
:ref:`solution <solution_definition>` produced by a given :ref:`strategy :ref:`solution <solution_definition>` produced by a given :ref:`strategy
<strategy_definition>` performed. These efficacy indicators are specific to a <strategy_definition>` performed. These efficacy indicators are specific to a
given :ref:`goal <goal_definition>` and are usually used to compute the given :ref:`goal <goal_definition>` and are usually used to compute the
:ref:`global efficacy <efficacy_definition>` of the resulting :ref:`action plan :ref:`gobal efficacy <efficacy_definition>` of the resulting :ref:`action plan
<action_plan_definition>`. <action_plan_definition>`.
In Watcher, these efficacy indicators are specified alongside the goal they In Watcher, these efficacy indicators are specified alongside the goal they


@@ -1,263 +0,0 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 Servionica
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Service mechanism provides ability to monitor Watcher services state.
"""
import datetime
import six
from oslo_config import cfg
from oslo_log import log
from oslo_utils import timeutils
import pecan
from pecan import rest
import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from watcher._i18n import _LW
from watcher.api.controllers import base
from watcher.api.controllers import link
from watcher.api.controllers.v1 import collection
from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import exception
from watcher.common import policy
from watcher import objects
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class Service(base.APIBase):
"""API representation of a service.
This class enforces type checking and value constraints, and converts
between the internal object model and the API representation of a service.
"""
_status = None
def _get_status(self):
return self._status
def _set_status(self, name):
service = objects.Service.get_by_name(pecan.request.context, name)
last_heartbeat = (service.last_seen_up or service.updated_at
or service.created_at)
if isinstance(last_heartbeat, six.string_types):
# NOTE(russellb) If this service came in over rpc via
# conductor, then the timestamp will be a string and needs to be
# converted back to a datetime.
last_heartbeat = timeutils.parse_strtime(last_heartbeat)
else:
# Objects have proper UTC timezones, but the timeutils comparison
# below does not (and will fail)
last_heartbeat = last_heartbeat.replace(tzinfo=None)
elapsed = timeutils.delta_seconds(last_heartbeat, timeutils.utcnow())
is_up = abs(elapsed) <= CONF.service_down_time
if not is_up:
LOG.warning(_LW('Seems service %(name)s on host %(host)s is down. '
'Last heartbeat was %(lhb)s.'
'Elapsed time is %(el)s'),
{'name': service.name,
'host': service.host,
'lhb': str(last_heartbeat), 'el': str(elapsed)})
self._status = objects.service.ServiceStatus.FAILED
else:
self._status = objects.service.ServiceStatus.ACTIVE
id = wsme.wsattr(int, readonly=True)
"""ID for this service."""
name = wtypes.text
"""Name of the service."""
host = wtypes.text
"""Host where service is placed on."""
last_seen_up = wsme.wsattr(datetime.datetime, readonly=True)
"""Time when Watcher service sent latest heartbeat."""
status = wsme.wsproperty(wtypes.text, _get_status, _set_status,
mandatory=True)
links = wsme.wsattr([link.Link], readonly=True)
"""A list containing a self link."""
def __init__(self, **kwargs):
super(Service, self).__init__()
fields = list(objects.Service.fields.keys()) + ['status']
self.fields = []
for field in fields:
self.fields.append(field)
setattr(self, field, kwargs.get(
field if field != 'status' else 'name', wtypes.Unset))
@staticmethod
def _convert_with_links(service, url, expand=True):
if not expand:
service.unset_fields_except(
['id', 'name', 'host', 'status'])
service.links = [
link.Link.make_link('self', url, 'services', str(service.id)),
link.Link.make_link('bookmark', url, 'services', str(service.id),
bookmark=True)]
return service
@classmethod
def convert_with_links(cls, service, expand=True):
service = Service(**service.as_dict())
return cls._convert_with_links(
service, pecan.request.host_url, expand)
@classmethod
def sample(cls, expand=True):
sample = cls(id=1,
name='watcher-applier',
host='Controller',
last_seen_up=datetime.datetime(2016, 1, 1))
return cls._convert_with_links(sample, 'http://localhost:9322', expand)
class ServiceCollection(collection.Collection):
"""API representation of a collection of services."""
services = [Service]
"""A list containing services objects"""
def __init__(self, **kwargs):
super(ServiceCollection, self).__init__()
self._type = 'services'
@staticmethod
def convert_with_links(services, limit, url=None, expand=False,
**kwargs):
service_collection = ServiceCollection()
service_collection.services = [
Service.convert_with_links(g, expand) for g in services]
if 'sort_key' in kwargs:
reverse = False
if kwargs['sort_key'] == 'service':
if 'sort_dir' in kwargs:
reverse = True if kwargs['sort_dir'] == 'desc' else False
service_collection.services = sorted(
service_collection.services,
key=lambda service: service.id,
reverse=reverse)
service_collection.next = service_collection.get_next(
limit, url=url, marker_field='id', **kwargs)
return service_collection
@classmethod
def sample(cls):
sample = cls()
sample.services = [Service.sample(expand=False)]
return sample
class ServicesController(rest.RestController):
"""REST controller for Services."""
def __init__(self):
super(ServicesController, self).__init__()
from_services = False
"""A flag to indicate if the requests to this controller are coming
from the top-level resource Services."""
_custom_actions = {
'detail': ['GET'],
}
def _get_services_collection(self, marker, limit, sort_key, sort_dir,
expand=False, resource_url=None):
limit = api_utils.validate_limit(limit)
api_utils.validate_sort_dir(sort_dir)
sort_db_key = (sort_key if sort_key in objects.Service.fields.keys()
else None)
marker_obj = None
if marker:
marker_obj = objects.Service.get(
pecan.request.context, marker)
services = objects.Service.list(
pecan.request.context, limit, marker_obj,
sort_key=sort_db_key, sort_dir=sort_dir)
return ServiceCollection.convert_with_links(
services, limit, url=resource_url, expand=expand,
sort_key=sort_key, sort_dir=sort_dir)
@wsme_pecan.wsexpose(ServiceCollection, int, int, wtypes.text, wtypes.text)
def get_all(self, marker=None, limit=None, sort_key='id', sort_dir='asc'):
"""Retrieve a list of services.
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
"""
context = pecan.request.context
policy.enforce(context, 'service:get_all',
action='service:get_all')
return self._get_services_collection(marker, limit, sort_key, sort_dir)
@wsme_pecan.wsexpose(ServiceCollection, int, int, wtypes.text, wtypes.text)
def detail(self, marker=None, limit=None, sort_key='id', sort_dir='asc'):
"""Retrieve a list of services with detail.
:param marker: pagination marker for large data sets.
:param limit: maximum number of resources to return in a single result.
:param sort_key: column to sort results by. Default: id.
:param sort_dir: direction to sort. "asc" or "desc". Default: asc.
"""
context = pecan.request.context
policy.enforce(context, 'service:detail',
action='service:detail')
# NOTE(lucasagomes): /detail should only work agaist collections
parent = pecan.request.path.split('/')[:-1][-1]
if parent != "services":
raise exception.HTTPNotFound
expand = True
resource_url = '/'.join(['services', 'detail'])
return self._get_services_collection(
marker, limit, sort_key, sort_dir, expand, resource_url)
@wsme_pecan.wsexpose(Service, wtypes.text)
def get_one(self, service):
"""Retrieve information about the given service.
:param service: ID or name of the service.
"""
if self.from_services:
raise exception.OperationNotPermitted
context = pecan.request.context
rpc_service = api_utils.get_resource('Service', service)
policy.enforce(context, 'service:get', rpc_service,
action='service:get')
return Service.convert_with_links(rpc_service)
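The services controller above determines a service's status by comparing its last heartbeat timestamp against a configurable `service_down_time` threshold. A minimal standalone sketch of that check (the 60-second default threshold is an assumption, not Watcher's configured value):

```python
import datetime


def service_status(last_heartbeat, now=None, service_down_time=60):
    """Return 'ACTIVE' if the last heartbeat is recent enough, else 'FAILED'.

    Mirrors the controller's check: a service is considered up when the
    elapsed time since its last heartbeat is within the threshold.
    """
    now = now or datetime.datetime.utcnow()
    elapsed = (now - last_heartbeat).total_seconds()
    return 'ACTIVE' if abs(elapsed) <= service_down_time else 'FAILED'


now = datetime.datetime(2016, 1, 1, 12, 0, 0)
fresh = now - datetime.timedelta(seconds=30)
stale = now - datetime.timedelta(seconds=300)
print(service_status(fresh, now=now))  # ACTIVE
print(service_status(stale, now=now))  # FAILED
```

The `abs(elapsed)` guards against clock skew that could make the heartbeat appear to be in the future, just as in the controller code.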


@@ -110,7 +110,7 @@ class Strategy(base.APIBase):
"""The name of the goal this audit refers to""" """The name of the goal this audit refers to"""
parameters_spec = {wtypes.text: types.jsontype} parameters_spec = {wtypes.text: types.jsontype}
"""Parameters spec dict""" """ Parameters spec dict"""
def __init__(self, **kwargs): def __init__(self, **kwargs):
super(Strategy, self).__init__() super(Strategy, self).__init__()


@@ -15,13 +15,11 @@
import jsonpatch import jsonpatch
from oslo_config import cfg from oslo_config import cfg
from oslo_utils import reflection
from oslo_utils import uuidutils from oslo_utils import uuidutils
import pecan import pecan
import wsme import wsme
from watcher._i18n import _ from watcher._i18n import _
from watcher.common import utils
from watcher import objects from watcher import objects
CONF = cfg.CONF CONF = cfg.CONF
@@ -82,27 +80,17 @@ def as_filters_dict(**filters):
return filters_dict return filters_dict
def get_resource(resource, resource_id, eager=False): def get_resource(resource, resource_ident):
"""Get the resource from the uuid, id or logical name. """Get the resource from the uuid or logical name.
:param resource: the resource type. :param resource: the resource type.
:param resource_id: the UUID, ID or logical name of the resource. :param resource_ident: the UUID or logical name of the resource.
:returns: The resource. :returns: The resource.
""" """
resource = getattr(objects, resource) resource = getattr(objects, resource)
_get = None if uuidutils.is_uuid_like(resource_ident):
if utils.is_int_like(resource_id): return resource.get_by_uuid(pecan.request.context, resource_ident)
resource_id = int(resource_id)
_get = resource.get
elif uuidutils.is_uuid_like(resource_id):
_get = resource.get_by_uuid
else:
_get = resource.get_by_name
method_signature = reflection.get_signature(_get) return resource.get_by_name(pecan.request.context, resource_ident)
if 'eager' in method_signature.parameters:
return _get(pecan.request.context, resource_id, eager=eager)
return _get(pecan.request.context, resource_id)
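The richer `get_resource` variant above dispatches on the identifier's shape: integer-like values go through `get`, UUID-like strings through `get_by_uuid`, and anything else falls back to `get_by_name`, forwarding `eager` only to getters that declare it. A simplified sketch of that dispatch against a stub resource class (the stub and its return values are illustrative, not Watcher's object API):

```python
import inspect
import uuid


def _is_int_like(value):
    try:
        int(value)
        return True
    except (TypeError, ValueError):
        return False


def _is_uuid_like(value):
    try:
        uuid.UUID(str(value))
        return True
    except (TypeError, ValueError):
        return False


def get_resource(resource_cls, resource_id, eager=False):
    """Fetch a resource by numeric ID, UUID, or logical name."""
    if _is_int_like(resource_id):
        resource_id, getter = int(resource_id), resource_cls.get
    elif _is_uuid_like(resource_id):
        getter = resource_cls.get_by_uuid
    else:
        getter = resource_cls.get_by_name
    # Only pass `eager` to getters that actually accept it.
    if 'eager' in inspect.signature(getter).parameters:
        return getter(resource_id, eager=eager)
    return getter(resource_id)


class StubAudit(object):
    # Illustrative stand-ins for the DB-backed getters.
    @staticmethod
    def get(res_id, eager=False):
        return ('by_id', res_id, eager)

    @staticmethod
    def get_by_uuid(res_uuid):
        return ('by_uuid', res_uuid)

    @staticmethod
    def get_by_name(name):
        return ('by_name', name)


print(get_resource(StubAudit, '42', eager=True))  # ('by_id', 42, True)
print(get_resource(StubAudit, 'my-audit'))        # ('by_name', 'my-audit')
```

Inspecting the getter's signature keeps the helper compatible with objects that predate the `eager` keyword, which is why the diff pulls in `reflection.get_signature` rather than passing `eager` unconditionally.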


@@ -114,6 +114,6 @@ class NoExceptionTracebackHook(hooks.PecanHook):
faultstring = faultstring.split(traceback_marker, 1)[0] faultstring = faultstring.split(traceback_marker, 1)[0]
# Remove trailing newlines and spaces if any. # Remove trailing newlines and spaces if any.
json_body['faultstring'] = faultstring.rstrip() json_body['faultstring'] = faultstring.rstrip()
# Replace the whole json. Cannot change original one because it's # Replace the whole json. Cannot change original one beacause it's
# generated on the fly. # generated on the fly.
state.response.json = json_body state.response.json = json_body


@@ -33,7 +33,7 @@ class AuthTokenMiddleware(auth_token.AuthProtocol):
for public routes in the API. for public routes in the API.
""" """
def __init__(self, app, conf, public_api_routes=()): def __init__(self, app, conf, public_api_routes=[]):
route_pattern_tpl = '%s(\.json|\.xml)?$' route_pattern_tpl = '%s(\.json|\.xml)?$'
try: try:


@@ -20,7 +20,9 @@ from oslo_log import log
from watcher.applier.action_plan import base from watcher.applier.action_plan import base
from watcher.applier import default from watcher.applier import default
from watcher import objects from watcher.applier.messaging import event_types
from watcher.common.messaging.events import event
from watcher.objects import action_plan as ap_objects
LOG = log.getLogger(__name__) LOG = log.getLogger(__name__)
@@ -32,20 +34,32 @@ class DefaultActionPlanHandler(base.BaseActionPlanHandler):
self.service = service self.service = service
self.action_plan_uuid = action_plan_uuid self.action_plan_uuid = action_plan_uuid
def update_action_plan(self, uuid, state): def notify(self, uuid, event_type, state):
action_plan = objects.ActionPlan.get_by_uuid(self.ctx, uuid) action_plan = ap_objects.ActionPlan.get_by_uuid(self.ctx, uuid)
action_plan.state = state action_plan.state = state
action_plan.save() action_plan.save()
ev = event.Event()
ev.type = event_type
ev.data = {}
payload = {'action_plan__uuid': uuid,
'action_plan_state': state}
self.service.publish_status_event(ev.type.name, payload)
def execute(self): def execute(self):
try: try:
self.update_action_plan(self.action_plan_uuid, # update state
objects.action_plan.State.ONGOING) self.notify(self.action_plan_uuid,
event_types.EventTypes.LAUNCH_ACTION_PLAN,
ap_objects.State.ONGOING)
applier = default.DefaultApplier(self.ctx, self.service) applier = default.DefaultApplier(self.ctx, self.service)
applier.execute(self.action_plan_uuid) applier.execute(self.action_plan_uuid)
state = objects.action_plan.State.SUCCEEDED state = ap_objects.State.SUCCEEDED
except Exception as e: except Exception as e:
LOG.exception(e) LOG.exception(e)
state = objects.action_plan.State.FAILED state = ap_objects.State.FAILED
finally: finally:
self.update_action_plan(self.action_plan_uuid, state) # update state
self.notify(self.action_plan_uuid,
event_types.EventTypes.LAUNCH_ACTION_PLAN,
state)
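The handler's `notify` method above folds the state update and the status notification into one step: persist the new action plan state, then publish an event whose payload carries the UUID and state. A schematic version of that flow with stand-in store and publisher objects (the class names here are illustrative, not Watcher's RPC API; the payload keys are taken from the diff):

```python
class InMemoryStore(object):
    """Stand-in for the ActionPlan DB object layer."""
    def __init__(self):
        self.states = {}


class RecordingPublisher(object):
    """Stand-in for the service's status-event publisher."""
    def __init__(self):
        self.events = []

    def publish_status_event(self, event_type, payload):
        self.events.append((event_type, payload))


def notify(store, publisher, uuid, event_type, state):
    # Persist the new state first, then broadcast it to listeners.
    store.states[uuid] = state
    payload = {'action_plan__uuid': uuid, 'action_plan_state': state}
    publisher.publish_status_event(event_type, payload)


store, pub = InMemoryStore(), RecordingPublisher()
notify(store, pub, 'ap-1', 'launch_action_plan', 'ONGOING')
notify(store, pub, 'ap-1', 'launch_action_plan', 'SUCCEEDED')
print(store.states['ap-1'])  # SUCCEEDED
print(len(pub.events))       # 2
```

Publishing after the save means listeners never observe a state that was not persisted, which matters in the `finally` block where the terminal state (SUCCEEDED or FAILED) is reported.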


@@ -20,7 +20,6 @@
from oslo_config import cfg from oslo_config import cfg
from watcher.applier.messaging import trigger from watcher.applier.messaging import trigger
from watcher.common import service_manager
CONF = cfg.CONF CONF = cfg.CONF
@@ -37,6 +36,13 @@ APPLIER_MANAGER_OPTS = [
help='The topic name used for' help='The topic name used for'
'control events, this topic ' 'control events, this topic '
'used for rpc call '), 'used for rpc call '),
cfg.StrOpt('status_topic',
default='watcher.applier.status',
help='The topic name used for '
'status events, this topic '
'is used so as to notify'
'the others components '
'of the system'),
cfg.StrOpt('publisher_id', cfg.StrOpt('publisher_id',
default='watcher.applier.api', default='watcher.applier.api',
help='The identifier used by watcher ' help='The identifier used by watcher '
@@ -54,32 +60,17 @@ CONF.register_group(opt_group)
CONF.register_opts(APPLIER_MANAGER_OPTS, opt_group) CONF.register_opts(APPLIER_MANAGER_OPTS, opt_group)
class ApplierManager(service_manager.ServiceManager): class ApplierManager(object):
@property API_VERSION = '1.0'
def service_name(self):
return 'watcher-applier'
@property conductor_endpoints = [trigger.TriggerActionPlan]
def api_version(self): status_endpoints = []
return '1.0' notification_endpoints = []
notification_topics = []
@property def __init__(self):
def publisher_id(self): self.publisher_id = CONF.watcher_applier.publisher_id
return CONF.watcher_applier.publisher_id self.conductor_topic = CONF.watcher_applier.conductor_topic
self.status_topic = CONF.watcher_applier.status_topic
@property self.api_version = self.API_VERSION
def conductor_topic(self):
return CONF.watcher_applier.conductor_topic
@property
def notification_topics(self):
return []
@property
def conductor_endpoints(self):
return [trigger.TriggerActionPlan]
@property
def notification_endpoints(self):
return []
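The manager refactor above moves from plain class attributes to read-only properties inherited from a shared `service_manager.ServiceManager` base, so every service declares its name, topics, and endpoints through one uniform interface. A rough sketch of what such a base class enforces (the abstract-property set is inferred from the overrides shown in the diff, not copied from Watcher's source):

```python
import abc


class ServiceManager(abc.ABC):
    """Uniform interface each Watcher service manager must implement."""

    @property
    @abc.abstractmethod
    def service_name(self):
        raise NotImplementedError()

    @property
    @abc.abstractmethod
    def api_version(self):
        raise NotImplementedError()

    @property
    @abc.abstractmethod
    def conductor_topic(self):
        raise NotImplementedError()

    @property
    @abc.abstractmethod
    def conductor_endpoints(self):
        raise NotImplementedError()


class ApplierManager(ServiceManager):
    @property
    def service_name(self):
        return 'watcher-applier'

    @property
    def api_version(self):
        return '1.0'

    @property
    def conductor_topic(self):
        # Hypothetical value; the real one comes from CONF.watcher_applier.
        return 'watcher.applier.control'

    @property
    def conductor_endpoints(self):
        return []


mgr = ApplierManager()
print(mgr.service_name)  # watcher-applier
```

The payoff is that a subclass missing any declaration fails at instantiation time rather than at the first RPC call, and config lookups are deferred until the property is read instead of happening at import time.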


@@ -1,7 +1,7 @@
# -*- encoding: utf-8 -*- # -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com # Copyright (c) 2015 b<>com
# #
# Authors: Vincent FRANCOISE <Vincent.FRANCOISE@b-com.com> # Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. # you may not use this file except in compliance with the License.
@@ -15,22 +15,11 @@
# implied. # implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
#
""" import enum
This component is in charge of executing the
:ref:`Action Plan <action_plan_definition>` built by the
:ref:`Watcher Decision Engine <watcher_decision_engine_definition>`.
See: :doc:`../architecture` for more details on this component.
"""
import abc
import six
@six.add_metaclass(abc.ABCMeta) class EventTypes(enum.Enum):
class Model(object): LAUNCH_ACTION_PLAN = "launch_action_plan"
LAUNCH_ACTION = "launch_action"
@abc.abstractmethod
def to_string(self):
raise NotImplementedError()


@@ -15,13 +15,12 @@
# implied. # implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
#
from oslo_config import cfg from oslo_config import cfg
from watcher.applier import manager from watcher.applier import manager
from watcher.common import exception from watcher.common import exception
from watcher.common import service from watcher.common import service
from watcher.common import service_manager
from watcher.common import utils from watcher.common import utils
@@ -40,35 +39,21 @@ class ApplierAPI(service.Service):
raise exception.InvalidUuidOrName(name=action_plan_uuid) raise exception.InvalidUuidOrName(name=action_plan_uuid)
return self.conductor_client.call( return self.conductor_client.call(
context, 'launch_action_plan', action_plan_uuid=action_plan_uuid) context.to_dict(), 'launch_action_plan',
action_plan_uuid=action_plan_uuid)
class ApplierAPIManager(service_manager.ServiceManager): class ApplierAPIManager(object):
@property API_VERSION = '1.0'
def service_name(self):
return None
@property conductor_endpoints = []
def api_version(self): status_endpoints = []
return '1.0' notification_endpoints = []
notification_topics = []
@property def __init__(self):
def publisher_id(self): self.publisher_id = CONF.watcher_applier.publisher_id
return CONF.watcher_applier.publisher_id self.conductor_topic = CONF.watcher_applier.conductor_topic
self.status_topic = CONF.watcher_applier.status_topic
@property self.api_version = self.API_VERSION
def conductor_topic(self):
return CONF.watcher_applier.conductor_topic
@property
def notification_topics(self):
return []
@property
def conductor_endpoints(self):
return []
@property
def notification_endpoints(self):
return []


@@ -21,8 +21,10 @@ import abc
import six import six
from watcher.applier.actions import factory from watcher.applier.actions import factory
from watcher.applier.messaging import event_types
from watcher.common import clients from watcher.common import clients
from watcher.common.loader import loadable from watcher.common.loader import loadable
from watcher.common.messaging.events import event
from watcher import objects from watcher import objects
@@ -75,7 +77,12 @@ class BaseWorkFlowEngine(loadable.Loadable):
db_action = objects.Action.get_by_uuid(self.context, action.uuid) db_action = objects.Action.get_by_uuid(self.context, action.uuid)
db_action.state = state db_action.state = state
db_action.save() db_action.save()
# NOTE(v-francoise): Implement notifications for action ev = event.Event()
ev.type = event_types.EventTypes.LAUNCH_ACTION
ev.data = {}
payload = {'action_uuid': action.uuid,
'action_state': state}
self.applier_manager.publish_status_event(ev.type.name, payload)
@abc.abstractmethod @abc.abstractmethod
def execute(self, actions): def execute(self, actions):


@@ -23,7 +23,7 @@ from taskflow import task
from watcher._i18n import _LE, _LW, _LC from watcher._i18n import _LE, _LW, _LC
from watcher.applier.workflow_engine import base from watcher.applier.workflow_engine import base
from watcher.common import exception from watcher.common import exception
from watcher import objects from watcher.objects import action as obj_action
LOG = log.getLogger(__name__) LOG = log.getLogger(__name__)
@@ -107,12 +107,14 @@ class TaskFlowActionContainer(task.Task):
def pre_execute(self): def pre_execute(self):
try: try:
self.engine.notify(self._db_action, objects.action.State.ONGOING) self.engine.notify(self._db_action,
obj_action.State.ONGOING)
LOG.debug("Pre-condition action: %s", self.name) LOG.debug("Pre-condition action: %s", self.name)
self.action.pre_condition() self.action.pre_condition()
except Exception as e: except Exception as e:
LOG.exception(e) LOG.exception(e)
self.engine.notify(self._db_action, objects.action.State.FAILED) self.engine.notify(self._db_action,
obj_action.State.FAILED)
raise raise
def execute(self, *args, **kwargs): def execute(self, *args, **kwargs):
@@ -120,13 +122,15 @@ class TaskFlowActionContainer(task.Task):
LOG.debug("Running action: %s", self.name) LOG.debug("Running action: %s", self.name)
self.action.execute() self.action.execute()
self.engine.notify(self._db_action, objects.action.State.SUCCEEDED) self.engine.notify(self._db_action,
obj_action.State.SUCCEEDED)
except Exception as e: except Exception as e:
LOG.exception(e) LOG.exception(e)
LOG.error(_LE('The workflow engine has failed ' LOG.error(_LE('The workflow engine has failed '
'to execute the action: %s'), self.name) 'to execute the action: %s'), self.name)
self.engine.notify(self._db_action, objects.action.State.FAILED) self.engine.notify(self._db_action,
obj_action.State.FAILED)
raise raise
def post_execute(self): def post_execute(self):
@@ -135,7 +139,8 @@ class TaskFlowActionContainer(task.Task):
self.action.post_condition() self.action.post_condition()
except Exception as e: except Exception as e:
LOG.exception(e) LOG.exception(e)
self.engine.notify(self._db_action, objects.action.State.FAILED) self.engine.notify(self._db_action,
obj_action.State.FAILED)
raise raise
def revert(self, *args, **kwargs): def revert(self, *args, **kwargs):


@@ -24,19 +24,19 @@ from oslo_log import log as logging
from watcher._i18n import _LI from watcher._i18n import _LI
from watcher.common import service from watcher.common import service
from watcher import conf
LOG = logging.getLogger(__name__) LOG = logging.getLogger(__name__)
CONF = conf.CONF CONF = cfg.CONF
def main(): def main():
service.prepare_service(sys.argv, CONF) service.prepare_service(sys.argv)
host, port = cfg.CONF.api.host, cfg.CONF.api.port host, port = cfg.CONF.api.host, cfg.CONF.api.port
protocol = "http" if not CONF.api.enable_ssl_api else "https" protocol = "http" if not CONF.api.enable_ssl_api else "https"
# Build and start the WSGI app # Build and start the WSGI app
server = service.WSGIService('watcher-api', CONF.api.enable_ssl_api) server = service.WSGIService(
'watcher-api', CONF.api.enable_ssl_api)
if host == '127.0.0.1': if host == '127.0.0.1':
LOG.info(_LI('serving on 127.0.0.1:%(port)s, ' LOG.info(_LI('serving on 127.0.0.1:%(port)s, '


@@ -20,19 +20,19 @@
 import os
 import sys

+from oslo_config import cfg
 from oslo_log import log as logging

 from watcher._i18n import _LI
 from watcher.applier import manager
 from watcher.common import service as watcher_service
-from watcher import conf

 LOG = logging.getLogger(__name__)

-CONF = conf.CONF
+CONF = cfg.CONF


 def main():
-    watcher_service.prepare_service(sys.argv, CONF)
+    watcher_service.prepare_service(sys.argv)

     LOG.info(_LI('Starting Watcher Applier service in PID %s'), os.getpid())

View File

@@ -24,11 +24,10 @@ import sys
 from oslo_config import cfg

 from watcher.common import service
-from watcher import conf
 from watcher.db import migration
 from watcher.db import purge

-CONF = conf.CONF
+CONF = cfg.CONF


 class DBCommand(object):
@@ -153,5 +152,5 @@ def main():
     if not set(sys.argv).intersection(valid_commands):
         sys.argv.append('upgrade')

-    service.prepare_service(sys.argv, CONF)
+    service.prepare_service(sys.argv)
     CONF.command.func()

View File

@@ -20,22 +20,22 @@
 import os
 import sys

+from oslo_config import cfg
 from oslo_log import log as logging

 from watcher._i18n import _LI
 from watcher.common import service as watcher_service
-from watcher import conf
 from watcher.decision_engine import gmr
 from watcher.decision_engine import manager
 from watcher.decision_engine import scheduling
 from watcher.decision_engine import sync

 LOG = logging.getLogger(__name__)

-CONF = conf.CONF
+CONF = cfg.CONF


 def main():
-    watcher_service.prepare_service(sys.argv, CONF)
+    watcher_service.prepare_service(sys.argv)
     gmr.register_gmr_plugins()

     LOG.info(_LI('Starting Watcher Decision Engine service in PID %s'),

View File

@@ -24,17 +24,15 @@ from oslo_log import log as logging
 from watcher._i18n import _LI
 from watcher.common import service as service
-from watcher import conf
 from watcher.decision_engine import sync

 LOG = logging.getLogger(__name__)

-CONF = conf.CONF


 def main():
     LOG.info(_LI('Watcher sync started.'))

-    service.prepare_service(sys.argv, CONF)
+    service.prepare_service(sys.argv)
     syncer = sync.Syncer()
     syncer.sync()

View File

@@ -144,17 +144,17 @@ class CeilometerHelper(object):
         :return:
         """
-        end_time = datetime.datetime.utcnow()
         start_time = (datetime.datetime.utcnow() -
                       datetime.timedelta(seconds=int(period)))
         query = self.build_query(
-            resource_id=resource_id, start_time=start_time, end_time=end_time)
+            resource_id=resource_id, start_time=start_time)
         statistic = self.query_retry(f=self.ceilometer.statistics.list,
                                      meter_name=meter_name,
                                      q=query,
                                      period=period,
                                      aggregates=[
-                                         {'func': aggregate}])
+                                         {'func': aggregate}],
+                                     groupby=['resource_id'])

         item_value = None
         if statistic:
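The hunk above switches between an explicit `end_time` bound and a `groupby` on `resource_id`, but on both sides the sampling window starts `period` seconds before "now". A minimal sketch of that window computation (the function name is illustrative, not Watcher's API):

```python
import datetime


def build_time_window(period):
    """Compute the sampling window the helper derives from a period.

    The window ends at the current UTC time and starts `period`
    seconds earlier, mirroring the start_time computation in the diff.
    """
    end_time = datetime.datetime.utcnow()
    start_time = end_time - datetime.timedelta(seconds=int(period))
    return start_time, end_time


start, end = build_time_window(300)
# The window spans exactly `period` seconds.
assert (end - start).total_seconds() == 300
```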

View File

@@ -34,14 +34,14 @@ from watcher._i18n import _, _LE

 LOG = logging.getLogger(__name__)

-EXC_LOG_OPTS = [
+exc_log_opts = [
     cfg.BoolOpt('fatal_exception_format_errors',
                 default=False,
                 help='Make exception message format errors fatal.'),
 ]

 CONF = cfg.CONF
-CONF.register_opts(EXC_LOG_OPTS)
+CONF.register_opts(exc_log_opts)


 def wrap_keystone_exception(func):
@@ -119,10 +119,6 @@ class WatcherException(Exception):
         return six.text_type(self)


-class UnsupportedError(WatcherException):
-    msg_fmt = _("Not supported")
-
-
 class NotAuthorized(WatcherException):
     msg_fmt = _("Not authorized")
     code = 403
@@ -172,14 +168,6 @@ class InvalidStrategy(Invalid):
     msg_fmt = _("Strategy %(strategy)s is invalid")


-class InvalidAudit(Invalid):
-    msg_fmt = _("Audit %(audit)s is invalid")
-
-
-class EagerlyLoadedAuditRequired(InvalidAudit):
-    msg_fmt = _("Audit %(audit)s was not eagerly loaded")
-
-
 class InvalidUUID(Invalid):
     msg_fmt = _("Expected a uuid but received %(uuid)s")
@@ -374,19 +362,6 @@ class NoSuchMetricForHost(WatcherException):
     msg_fmt = _("No %(metric)s metric for %(host)s found.")


-class ServiceAlreadyExists(Conflict):
-    msg_fmt = _("A service with name %(name)s is already working on %(host)s.")
-
-
-class ServiceNotFound(ResourceNotFound):
-    msg_fmt = _("The service %(service)s cannot be found.")
-
-
-class WildcardCharacterIsUsed(WatcherException):
-    msg_fmt = _("You shouldn't use any other IDs of %(resource)s if you use "
-                "wildcard character.")
-
-
 # Model

 class InstanceNotFound(WatcherException):
@@ -411,8 +386,3 @@ class NotSoftDeletedStateError(WatcherException):

 class NegativeLimitError(WatcherException):
     msg_fmt = _("Limit should be positive")
-
-
-class NotificationPayloadError(WatcherException):
-    _msg_fmt = _("Payload not populated when trying to send notification "
-                 "\"%(class_name)s\"")

View File

@@ -0,0 +1,54 @@
+# -*- encoding: utf-8 -*-
+# Copyright (c) 2015 b<>com
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+class Event(object):
+    """Generic event to use with EventDispatcher"""
+
+    def __init__(self, event_type=None, data=None, request_id=None):
+        """Default constructor
+
+        :param event_type: the type of the event
+        :param data: a dictionary which contains data
+        :param request_id: a string which represent the uuid of the request
+        """
+        self._type = event_type
+        self._data = data
+        self._request_id = request_id
+
+    @property
+    def type(self):
+        return self._type
+
+    @type.setter
+    def type(self, type):
+        self._type = type
+
+    @property
+    def data(self):
+        return self._data
+
+    @data.setter
+    def data(self, data):
+        self._data = data
+
+    @property
+    def request_id(self):
+        return self._request_id
+
+    @request_id.setter
+    def request_id(self, id):
+        self._request_id = id
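The `Event` container added above is a plain property bag. A quick usage sketch (the class is reproduced here in condensed form so the snippet is self-contained; the event type and payload values are hypothetical):

```python
class Event(object):
    """Condensed copy of the Event container from the diff above."""

    def __init__(self, event_type=None, data=None, request_id=None):
        self._type = event_type
        self._data = data
        self._request_id = request_id

    @property
    def type(self):
        return self._type

    @type.setter
    def type(self, event_type):
        self._type = event_type

    @property
    def data(self):
        return self._data


# Build an event and mutate its type through the property setter.
evt = Event(event_type='TRIGGER_AUDIT', data={'state': 'PENDING'},
            request_id='req-1')
evt.type = 'ACTION_PLAN'
assert evt.type == 'ACTION_PLAN'
assert evt.data == {'state': 'PENDING'}
```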

View File

@@ -0,0 +1,78 @@
+# -*- encoding: utf-8 -*-
+# Copyright (c) 2015 b<>com
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from oslo_log import log
+
+from watcher.decision_engine.messaging import events as messaging_events
+
+LOG = log.getLogger(__name__)
+
+
+class EventDispatcher(object):
+    """Generic event dispatcher which listen and dispatch events"""
+
+    def __init__(self):
+        self._events = dict()
+
+    def __del__(self):
+        self._events = None
+
+    def has_listener(self, event_type, listener):
+        """Return true if listener is register to event_type """
+        # Check for event type and for the listener
+        if event_type in self._events.keys():
+            return listener in self._events[event_type]
+        else:
+            return False
+
+    def dispatch_event(self, event):
+        LOG.debug("dispatch evt : %s" % str(event.type))
+        """
+        Dispatch an instance of Event class
+        """
+        if messaging_events.Events.ALL in self._events.keys():
+            listeners = self._events[messaging_events.Events.ALL]
+            for listener in listeners:
+                listener(event)
+
+        # Dispatch the event to all the associated listeners
+        if event.type in self._events.keys():
+            listeners = self._events[event.type]
+            for listener in listeners:
+                listener(event)
+
+    def add_event_listener(self, event_type, listener):
+        """Add an event listener for an event type"""
+        # Add listener to the event type
+        if not self.has_listener(event_type, listener):
+            listeners = self._events.get(event_type, [])
+            listeners.append(listener)
+            self._events[event_type] = listeners
+
+    def remove_event_listener(self, event_type, listener):
+        """Remove event listener. """
+        # Remove the listener from the event type
+        if self.has_listener(event_type, listener):
+            listeners = self._events[event_type]
+            if len(listeners) == 1:
+                # Only this listener remains so remove the key
+                del self._events[event_type]
+            else:
+                # Update listeners chain
+                listeners.remove(listener)
+                self._events[event_type] = listeners
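The dispatcher above is a plain dict of `event_type -> [callables]`. A self-contained sketch of the same registration/dispatch pattern (without the watcher imports or the `Events.ALL` wildcard; names are illustrative):

```python
class MiniDispatcher(object):
    """Condensed version of EventDispatcher's registration/dispatch logic."""

    def __init__(self):
        self._events = {}

    def add_event_listener(self, event_type, listener):
        # Duplicate registrations for the same (type, listener) are ignored,
        # as in has_listener() above.
        listeners = self._events.setdefault(event_type, [])
        if listener not in listeners:
            listeners.append(listener)

    def dispatch_event(self, event_type, payload):
        # Call every listener registered for this event type.
        for listener in self._events.get(event_type, []):
            listener(payload)


seen = []
d = MiniDispatcher()
d.add_event_listener('TRIGGER_AUDIT', seen.append)
d.add_event_listener('TRIGGER_AUDIT', seen.append)  # ignored duplicate
d.dispatch_event('TRIGGER_AUDIT', {'audit': 'a1'})
assert seen == [{'audit': 'a1'}]
```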

View File

@@ -0,0 +1,120 @@
+# -*- encoding: utf-8 -*-
+# Copyright (c) 2015 b<>com
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import socket
+import threading
+
+import eventlet
+from oslo_config import cfg
+from oslo_log import log
+import oslo_messaging as om
+
+from watcher.common import rpc
+from watcher._i18n import _LE, _LW
+
+# NOTE:
+# Ubuntu 14.04 forces librabbitmq when kombu is used
+# Unfortunately it forces a version that has a crash
+# bug. Calling eventlet.monkey_patch() tells kombu
+# to use libamqp instead.
+eventlet.monkey_patch()
+
+LOG = log.getLogger(__name__)
+CONF = cfg.CONF
+
+
+class MessagingHandler(threading.Thread):
+
+    def __init__(self, publisher_id, topic_name, endpoints, version,
+                 serializer=None):
+        super(MessagingHandler, self).__init__()
+        self.publisher_id = publisher_id
+        self.topic_name = topic_name
+        self.__endpoints = []
+        self.__serializer = serializer
+        self.__version = version
+
+        self.__server = None
+        self.__notifier = None
+        self.__transport = None
+
+        self.add_endpoints(endpoints)
+
+    def add_endpoints(self, endpoints):
+        self.__endpoints.extend(endpoints)
+
+    def remove_endpoint(self, endpoint):
+        if endpoint in self.__endpoints:
+            self.__endpoints.remove(endpoint)
+
+    @property
+    def endpoints(self):
+        return self.__endpoints
+
+    @property
+    def transport(self):
+        return self.__transport
+
+    def build_notifier(self):
+        serializer = rpc.RequestContextSerializer(rpc.JsonPayloadSerializer())
+        return om.Notifier(
+            self.__transport,
+            publisher_id=self.publisher_id,
+            topic=self.topic_name,
+            serializer=serializer
+        )
+
+    def build_server(self, target):
+        return om.get_rpc_server(self.__transport, target,
+                                 self.__endpoints,
+                                 executor='eventlet',
+                                 serializer=self.__serializer)
+
+    def _configure(self):
+        try:
+            self.__transport = om.get_transport(CONF)
+            self.__notifier = self.build_notifier()
+            if len(self.__endpoints):
+                target = om.Target(
+                    topic=self.topic_name,
+                    # For compatibility, we can override it with 'host' opt
+                    server=CONF.host or socket.getfqdn(),
+                    version=self.__version,
+                )
+                self.__server = self.build_server(target)
+            else:
+                LOG.warning(
+                    _LW("No endpoint defined; can only publish events"))
+        except Exception as e:
+            LOG.exception(e)
+            LOG.error(_LE("Messaging configuration error"))
+
+    def run(self):
+        LOG.debug("configure MessagingHandler for %s" % self.topic_name)
+        self._configure()
+        if len(self.__endpoints) > 0:
+            LOG.debug("Starting up server")
+            self.__server.start()
+
+    def stop(self):
+        LOG.debug('Stopped server')
+        self.__server.stop()
+
+    def publish_event(self, event_type, payload, request_id=None):
+        self.__notifier.info(
+            {'version_api': self.__version,
+             'request_id': request_id},
+            {'event_id': event_type}, payload
+        )

View File

@@ -0,0 +1,47 @@
+# -*- encoding: utf-8 -*-
+# Copyright (c) 2015 b<>com
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import eventlet
+import oslo_messaging as messaging
+
+from watcher.common.messaging.utils import observable
+
+eventlet.monkey_patch()
+
+
+class NotificationHandler(observable.Observable):
+
+    def __init__(self, publisher_id):
+        super(NotificationHandler, self).__init__()
+        self.publisher_id = publisher_id
+
+    def info(self, ctx, publisher_id, event_type, payload, metadata):
+        if publisher_id == self.publisher_id:
+            self.set_changed()
+            self.notify(ctx, publisher_id, event_type, metadata, payload)
+        return messaging.NotificationResult.HANDLED
+
+    def warn(self, ctx, publisher_id, event_type, payload, metadata):
+        if publisher_id == self.publisher_id:
+            self.set_changed()
+            self.notify(ctx, publisher_id, event_type, metadata, payload)
+        return messaging.NotificationResult.HANDLED
+
+    def error(self, ctx, publisher_id, event_type, payload, metadata):
+        if publisher_id == self.publisher_id:
+            self.set_changed()
+            self.notify(ctx, publisher_id, event_type, metadata, payload)
+        return messaging.NotificationResult.HANDLED
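Each level handler above (`info`/`warn`/`error`) relays a notification only when its `publisher_id` matches, while still reporting the message as handled. A condensed, dependency-free sketch of that filter (the `observable` base and oslo.messaging result constant are stubbed out; names are illustrative):

```python
class MiniNotificationFilter(object):
    """Mimics NotificationHandler's publisher_id check, without the
    observable base class or oslo.messaging."""

    def __init__(self, publisher_id):
        self.publisher_id = publisher_id
        self.relayed = []

    def info(self, publisher_id, event_type, payload):
        # Only relay notifications emitted by the expected publisher.
        if publisher_id == self.publisher_id:
            self.relayed.append((event_type, payload))
        # Stands in for messaging.NotificationResult.HANDLED: the message
        # is acknowledged either way, matching listeners or not.
        return 'handled'


h = MiniNotificationFilter('watcher.decision.engine')
assert h.info('watcher.decision.engine', 'strategy.done', {'ok': True}) == 'handled'
assert h.info('some.other.service', 'strategy.done', {'ok': False}) == 'handled'
assert h.relayed == [('strategy.done', {'ok': True})]
```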

View File

@@ -63,17 +63,16 @@ class NovaHelper(object):
             LOG.exception(exc)
             raise exception.ComputeNodeNotFound(name=node_hostname)

-    def get_aggregate_list(self):
-        return self.nova.aggregates.list()
-
-    def get_aggregate_detail(self, aggregate_id):
-        return self.nova.aggregates.get(aggregate_id)
-
-    def get_availability_zone_list(self):
-        return self.nova.availability_zones.list(detailed=True)
-
     def find_instance(self, instance_id):
-        return self.nova.servers.get(instance_id)
+        search_opts = {'all_tenants': True}
+        instances = self.nova.servers.list(detailed=True,
+                                           search_opts=search_opts)
+        instance = None
+        for _instance in instances:
+            if _instance.id == instance_id:
+                instance = _instance
+                break
+        return instance

     def wait_for_volume_status(self, volume, status, timeout=60,
                                poll_interval=1):
@@ -664,7 +663,7 @@ class NovaHelper(object):
                 cache[fid] = flavor
             attr_defaults = [('name', 'unknown-id-%s' % fid),
                              ('vcpus', 0), ('ram', 0), ('disk', 0),
-                             ('ephemeral', 0), ('extra_specs', {})]
+                             ('ephemeral', 0)]
             for attr, default in attr_defaults:
                 if not flavor:
                     instance.flavor[attr] = default

View File

@@ -72,12 +72,8 @@ def init(conf):
                         aliases=TRANSPORT_ALIASES)
     serializer = RequestContextSerializer(JsonPayloadSerializer())
-    if not conf.notification_level:
-        NOTIFIER = messaging.Notifier(
-            NOTIFICATION_TRANSPORT, serializer=serializer, driver='noop')
-    else:
-        NOTIFIER = messaging.Notifier(NOTIFICATION_TRANSPORT,
-                                      serializer=serializer)
+    NOTIFIER = messaging.Notifier(NOTIFICATION_TRANSPORT,
+                                  serializer=serializer)


 def initialized():
@@ -125,7 +121,7 @@ class RequestContextSerializer(messaging.Serializer):
         return self._base.deserialize_entity(context, entity)

     def serialize_context(self, context):
-        return context.to_dict()
+        return context

     def deserialize_context(self, context):
         return watcher_context.RequestContext.from_dict(context)
@@ -150,6 +146,8 @@ def get_server(target, endpoints, serializer=None):
                              serializer=serializer)


-def get_notifier(publisher_id):
+def get_notifier(service=None, host=None, publisher_id=None):
     assert NOTIFIER is not None
+    if not publisher_id:
+        publisher_id = "%s.%s" % (service, host or CONF.host)
     return NOTIFIER.prepare(publisher_id=publisher_id)

View File

@@ -14,10 +14,9 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-import datetime
+import logging
 import socket

-import eventlet
 from oslo_concurrency import processutils
 from oslo_config import cfg
 from oslo_log import _options
@@ -28,36 +27,30 @@ from oslo_reports import opts as gmr_opts
 from oslo_service import service
 from oslo_service import wsgi

-from watcher._i18n import _
+from watcher._i18n import _, _LI
 from watcher.api import app
 from watcher.common import config
-from watcher.common import context
+from watcher.common.messaging.events import event_dispatcher as dispatcher
+from watcher.common.messaging import messaging_handler
 from watcher.common import rpc
-from watcher.common import scheduling
-from watcher.conf import plugins as plugins_conf
-from watcher import objects
 from watcher.objects import base
-from watcher.objects import fields as wfields
+from watcher import opts
 from watcher import version

-# NOTE:
-# Ubuntu 14.04 forces librabbitmq when kombu is used
-# Unfortunately it forces a version that has a crash
-# bug. Calling eventlet.monkey_patch() tells kombu
-# to use libamqp instead.
-eventlet.monkey_patch()
-
-NOTIFICATION_OPTS = [
-    cfg.StrOpt('notification_level',
-               choices=[''] + list(wfields.NotificationPriority.ALL),
-               default=wfields.NotificationPriority.INFO,
-               help=_('Specifies the minimum level for which to send '
-                      'notifications. If not set, no notifications will '
-                      'be sent. The default is for this option to be at the '
-                      '`INFO` level.'))
+service_opts = [
+    cfg.IntOpt('periodic_interval',
+               default=60,
+               help=_('Seconds between running periodic tasks.')),
+    cfg.StrOpt('host',
+               default=socket.getfqdn(),
+               help=_('Name of this node. This can be an opaque identifier. '
+                      'It is not necessarily a hostname, FQDN, or IP address. '
+                      'However, the node name must be valid within '
+                      'an AMQP key, and if using ZeroMQ, a valid '
+                      'hostname, FQDN, or IP address.')),
 ]

-cfg.CONF.register_opts(NOTIFICATION_OPTS)
+cfg.CONF.register_opts(service_opts)

 CONF = cfg.CONF
 LOG = log.getLogger(__name__)
@@ -75,21 +68,21 @@ Singleton = service.Singleton

 class WSGIService(service.ServiceBase):
     """Provides ability to launch Watcher API from wsgi app."""

-    def __init__(self, service_name, use_ssl=False):
+    def __init__(self, name, use_ssl=False):
         """Initialize, but do not start the WSGI server.

-        :param service_name: The service name of the WSGI server.
+        :param name: The name of the WSGI server given to the loader.
         :param use_ssl: Wraps the socket in an SSL context if True.
         """
-        self.service_name = service_name
+        self.name = name
         self.app = app.VersionSelectorApplication()
         self.workers = (CONF.api.workers or
                         processutils.get_worker_count())
-        self.server = wsgi.Server(CONF, self.service_name, self.app,
+        self.server = wsgi.Server(CONF, name, self.app,
                                   host=CONF.api.host,
                                   port=CONF.api.port,
                                   use_ssl=use_ssl,
-                                  logger_name=self.service_name)
+                                  logger_name=name)

     def start(self):
         """Start serving this service using loaded configuration"""
@@ -108,53 +101,7 @@ class WSGIService(service.ServiceBase):
         self.server.reset()


-class ServiceHeartbeat(scheduling.BackgroundSchedulerService):
-
-    def __init__(self, gconfig=None, service_name=None, **kwargs):
-        gconfig = None or {}
-        super(ServiceHeartbeat, self).__init__(gconfig, **kwargs)
-        self.service_name = service_name
-        self.context = context.make_context()
-
-    def send_beat(self):
-        host = CONF.host
-        watcher_list = objects.Service.list(
-            self.context, filters={'name': self.service_name,
-                                   'host': host})
-        if watcher_list:
-            watcher_service = watcher_list[0]
-            watcher_service.last_seen_up = datetime.datetime.utcnow()
-            watcher_service.save()
-        else:
-            watcher_service = objects.Service(self.context)
-            watcher_service.name = self.service_name
-            watcher_service.host = host
-            watcher_service.create()
-
-    def add_heartbeat_job(self):
-        self.add_job(self.send_beat, 'interval', seconds=60,
-                     next_run_time=datetime.datetime.now())
-
-    def start(self):
-        """Start service."""
-        self.add_heartbeat_job()
-        super(ServiceHeartbeat, self).start()
-
-    def stop(self):
-        """Stop service."""
-        self.shutdown()
-
-    def wait(self):
-        """Wait for service to complete."""
-
-    def reset(self):
-        """Reset service.
-
-        Called in case service running in daemon mode receives SIGHUP.
-        """
-
-
-class Service(service.ServiceBase):
+class Service(service.ServiceBase, dispatcher.EventDispatcher):

     API_VERSION = '1.0'
@@ -163,14 +110,18 @@ class Service(service.ServiceBase):
         self.manager = manager_class()

         self.publisher_id = self.manager.publisher_id
-        self.api_version = self.manager.api_version
+        self.api_version = self.manager.API_VERSION

         self.conductor_topic = self.manager.conductor_topic
+        self.status_topic = self.manager.status_topic
         self.notification_topics = self.manager.notification_topics

         self.conductor_endpoints = [
             ep(self) for ep in self.manager.conductor_endpoints
         ]
+        self.status_endpoints = [
+            ep(self.publisher_id) for ep in self.manager.status_endpoints
+        ]
         self.notification_endpoints = self.manager.notification_endpoints

         self.serializer = rpc.RequestContextSerializer(
@@ -179,23 +130,22 @@ class Service(service.ServiceBase):
         self._transport = None
         self._notification_transport = None
         self._conductor_client = None
+        self._status_client = None

         self.conductor_topic_handler = None
+        self.status_topic_handler = None
         self.notification_handler = None
-        self.heartbeat = None

         if self.conductor_topic and self.conductor_endpoints:
             self.conductor_topic_handler = self.build_topic_handler(
                 self.conductor_topic, self.conductor_endpoints)
+        if self.status_topic and self.status_endpoints:
+            self.status_topic_handler = self.build_topic_handler(
+                self.status_topic, self.status_endpoints)
         if self.notification_topics and self.notification_endpoints:
             self.notification_handler = self.build_notification_handler(
                 self.notification_topics, self.notification_endpoints
             )
-
-        self.service_name = self.manager.service_name
-        if self.service_name:
-            self.heartbeat = ServiceHeartbeat(
-                service_name=self.manager.service_name)

     @property
     def transport(self):
@@ -224,17 +174,25 @@ class Service(service.ServiceBase):
     def conductor_client(self, c):
         self.conductor_client = c

+    @property
+    def status_client(self):
+        if self._status_client is None:
+            target = om.Target(
+                topic=self.status_topic,
+                version=self.API_VERSION,
+            )
+            self._status_client = om.RPCClient(
+                self.transport, target, serializer=self.serializer)
+        return self._status_client
+
+    @status_client.setter
+    def status_client(self, c):
+        self.status_client = c
+
     def build_topic_handler(self, topic_name, endpoints=()):
-        serializer = rpc.RequestContextSerializer(rpc.JsonPayloadSerializer())
-        target = om.Target(
-            topic=topic_name,
-            # For compatibility, we can override it with 'host' opt
-            server=CONF.host or socket.gethostname(),
-            version=self.api_version,
-        )
-        return om.get_rpc_server(
-            self.transport, target, endpoints,
-            executor='eventlet', serializer=serializer)
+        return messaging_handler.MessagingHandler(
+            self.publisher_id, topic_name, [self.manager] + list(endpoints),
+            self.api_version, self.serializer)

     def build_notification_handler(self, topic_names, endpoints=()):
         serializer = rpc.RequestContextSerializer(rpc.JsonPayloadSerializer())
@@ -249,20 +207,20 @@ class Service(service.ServiceBase):
                   CONF.transport_url, CONF.rpc_backend)
         if self.conductor_topic_handler:
             self.conductor_topic_handler.start()
+        if self.status_topic_handler:
+            self.status_topic_handler.start()
         if self.notification_handler:
             self.notification_handler.start()
-        if self.heartbeat:
-            self.heartbeat.start()

     def stop(self):
         LOG.debug("Disconnecting from '%s' (%s)",
                   CONF.transport_url, CONF.rpc_backend)
         if self.conductor_topic_handler:
             self.conductor_topic_handler.stop()
+        if self.status_topic_handler:
+            self.status_topic_handler.stop()
         if self.notification_handler:
             self.notification_handler.stop()
-        if self.heartbeat:
-            self.heartbeat.stop()

     def reset(self):
         """Reset a service in case it received a SIGHUP."""
@@ -270,11 +228,34 @@ class Service(service.ServiceBase):
     def wait(self):
         """Wait for service to complete."""

-    def check_api_version(self, ctx):
+    def publish_control(self, event, payload):
+        return self.conductor_topic_handler.publish_event(event, payload)
+
+    def publish_status_event(self, event, payload, request_id=None):
+        if self.status_topic_handler:
+            return self.status_topic_handler.publish_event(
+                event, payload, request_id)
+        else:
+            LOG.info(
+                _LI("No status notifier declared: notification '%s' not sent"),
+                event)
+
+    def get_version(self):
+        return self.api_version
+
+    def check_api_version(self, context):
         api_manager_version = self.conductor_client.call(
-            ctx, 'check_api_version', api_version=self.api_version)
+            context.to_dict(), 'check_api_version',
+            api_version=self.api_version)
         return api_manager_version

+    def response(self, evt, ctx, message):
+        payload = {
+            'request_id': ctx['request_id'],
+            'msg': message
+        }
+        self.publish_status_event(evt, payload)
+

 def launch(conf, service_, workers=1, restart_method='reload'):
     return service.launch(conf, service_, workers, restart_method)
@@ -288,9 +269,7 @@ def prepare_service(argv=(), conf=cfg.CONF):
     cfg.set_defaults(_options.log_opts,
                      default_log_levels=_DEFAULT_LOG_LEVELS)
     log.setup(conf, 'python-watcher')
-    conf.log_opt_values(LOG, log.DEBUG)
-    objects.register_all()
+    conf.log_opt_values(LOG, logging.DEBUG)

-    gmr.TextGuruMeditation.register_section(
-        _('Plugins'), plugins_conf.show_plugins)
+    gmr.TextGuruMeditation.register_section(_('Plugins'), opts.show_plugins)
     gmr.TextGuruMeditation.setup_autorun(version, conf=conf)

View File

@@ -1,50 +0,0 @@
-# -*- encoding: utf-8 -*-
-#
-# Copyright © 2016 Servionica
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import abc
-
-import six
-
-
-@six.add_metaclass(abc.ABCMeta)
-class ServiceManager(object):
-
-    @abc.abstractproperty
-    def service_name(self):
-        raise NotImplementedError()
-
-    @abc.abstractproperty
-    def api_version(self):
-        raise NotImplementedError()
-
-    @abc.abstractproperty
-    def publisher_id(self):
-        raise NotImplementedError()
-
-    @abc.abstractproperty
-    def conductor_topic(self):
-        raise NotImplementedError()
-
-    @abc.abstractproperty
-    def notification_topics(self):
-        raise NotImplementedError()
-
-    @abc.abstractproperty
-    def conductor_endpoints(self):
-        raise NotImplementedError()
-
-    @abc.abstractproperty
-    def notification_endpoints(self):
-        raise NotImplementedError()
View File

@@ -16,19 +16,17 @@

 """Utilities and helper functions."""

-import re
-
 from jsonschema import validators
 from oslo_config import cfg
 from oslo_log import log as logging
-from oslo_utils import strutils
-from oslo_utils import timeutils
-from oslo_utils import uuidutils
-import six
-
-from watcher._i18n import _LW
 from watcher.common import exception

+import re
+import six
+import uuid
+
+from watcher._i18n import _LW
+

 UTILS_OPTS = [
     cfg.StrOpt('rootwrap_config',
cfg.StrOpt('rootwrap_config', cfg.StrOpt('rootwrap_config',
@@ -68,12 +66,6 @@ class Struct(dict):
             raise AttributeError(name)


-generate_uuid = uuidutils.generate_uuid
-is_uuid_like = uuidutils.is_uuid_like
-is_int_like = strutils.is_int_like
-strtime = timeutils.strtime
-
-
 def safe_rstrip(value, chars=None):
     """Removes trailing characters from a string if that does not make it empty
@@ -91,6 +83,31 @@ def safe_rstrip(value, chars=None):
return value.rstrip(chars) or value return value.rstrip(chars) or value
def generate_uuid():
return str(uuid.uuid4())
def is_int_like(val):
"""Check if a value looks like an int."""
try:
return str(int(val)) == str(val)
except Exception:
return False
def is_uuid_like(val):
"""Returns validation of a value as a UUID.
For our purposes, a UUID is a canonical form string:
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
"""
try:
return str(uuid.UUID(val)) == val
except (TypeError, ValueError, AttributeError):
return False
def is_hostname_safe(hostname): def is_hostname_safe(hostname):
"""Determine if the supplied hostname is RFC compliant. """Determine if the supplied hostname is RFC compliant.
@@ -116,6 +133,10 @@ def get_cls_import_path(cls):
return module + '.' + cls.__name__ return module + '.' + cls.__name__
def strtime(at):
return at.strftime("%Y-%m-%dT%H:%M:%S.%f")
# Default value feedback extension as jsonschema doesn't support it # Default value feedback extension as jsonschema doesn't support it
def extend_with_default(validator_class): def extend_with_default(validator_class):
validate_properties = validator_class.VALIDATORS["properties"] validate_properties = validator_class.VALIDATORS["properties"]
@@ -152,5 +173,3 @@ def extend_with_strict_schema(validator_class):
StrictDefaultValidatingDraft4Validator = extend_with_default( StrictDefaultValidatingDraft4Validator = extend_with_default(
extend_with_strict_schema(validators.Draft4Validator)) extend_with_strict_schema(validators.Draft4Validator))
Draft4Validator = validators.Draft4Validator
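The new-side helpers in this hunk are self-contained stdlib code; this sketch copies them out of the diff so their behaviour can be checked directly:

```python
import uuid


def generate_uuid():
    # Canonical lowercase hyphenated form, e.g. 'aaaaaaaa-aaaa-...'
    return str(uuid.uuid4())


def is_int_like(val):
    """Check if a value looks like an int."""
    try:
        return str(int(val)) == str(val)
    except Exception:
        return False


def is_uuid_like(val):
    """Validate that val is a canonical-form UUID string."""
    try:
        return str(uuid.UUID(val)) == val
    except (TypeError, ValueError, AttributeError):
        return False


print(is_int_like('42'))               # True
print(is_int_like('4.2'))              # False
print(is_uuid_like(generate_uuid()))   # True
print(is_uuid_like('not-a-uuid'))      # False
```

Note that the round-trip check in `is_uuid_like` rejects non-canonical spellings (e.g. uppercase or brace-wrapped UUIDs), since `str(uuid.UUID(val))` always re-renders lowercase hyphenated form.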


@@ -1,56 +0,0 @@
# -*- encoding: utf-8 -*-
# Copyright 2014
# The Cloudscaling Group, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keystoneauth1 import loading as ka_loading
from watcher.api import acl as api_acl
from watcher.api import app as api_app
from watcher.applier import manager as applier_manager
from watcher.common import clients
from watcher.common import exception
from watcher.common import paths
from watcher.db.sqlalchemy import models
from watcher.decision_engine.audit import continuous
from watcher.decision_engine import manager as decision_engine_manager
from watcher.decision_engine.planner import manager as planner_manager
def list_opts():
    """Legacy aggregation of all the watcher config options"""
    return [
        ('DEFAULT',
         (api_app.API_SERVICE_OPTS +
          api_acl.AUTH_OPTS +
          exception.EXC_LOG_OPTS +
          paths.PATH_OPTS)),
        ('api', api_app.API_SERVICE_OPTS),
        ('database', models.SQL_OPTS),
        ('watcher_decision_engine',
         (decision_engine_manager.WATCHER_DECISION_ENGINE_OPTS +
          continuous.WATCHER_CONTINUOUS_OPTS)),
        ('watcher_applier', applier_manager.APPLIER_MANAGER_OPTS),
        ('watcher_planner', planner_manager.WATCHER_PLANNER_OPTS),
        ('nova_client', clients.NOVA_CLIENT_OPTS),
        ('glance_client', clients.GLANCE_CLIENT_OPTS),
        ('cinder_client', clients.CINDER_CLIENT_OPTS),
        ('ceilometer_client', clients.CEILOMETER_CLIENT_OPTS),
        ('neutron_client', clients.NEUTRON_CLIENT_OPTS),
        ('watcher_clients_auth',
         (ka_loading.get_auth_common_conf_options() +
          ka_loading.get_auth_plugin_conf_options('password') +
          ka_loading.get_session_conf_options()))
    ]


@@ -1,95 +0,0 @@
# Copyright 2016 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
This is the single point of entry to generate the sample configuration
file for Watcher. It collects all the necessary info from the other modules
in this package. It is assumed that:
* every other module in this package has a 'list_opts' function which
return a dict where
* the keys are strings which are the group names
* the value of each key is a list of config options for that group
* the watcher.conf package doesn't have further packages with config options
* this module is only used in the context of sample file generation
"""
import collections
import importlib
import os
import pkgutil
LIST_OPTS_FUNC_NAME = "list_opts"
def _tupleize(dct):
    """Take the dict of options and convert to the 2-tuple format."""
    return [(key, val) for key, val in dct.items()]


def list_opts():
    """Grouped list of all the Watcher-specific configuration options

    :return: A list of ``(group, [opt_1, opt_2])`` tuple pairs, where ``group``
             is either a group name as a string or an OptGroup object.
    """
    opts = collections.defaultdict(list)
    module_names = _list_module_names()
    imported_modules = _import_modules(module_names)
    _append_config_options(imported_modules, opts)
    return _tupleize(opts)


def _list_module_names():
    module_names = []
    package_path = os.path.dirname(os.path.abspath(__file__))
    for __, modname, ispkg in pkgutil.iter_modules(path=[package_path]):
        if modname == "opts" or ispkg:
            continue
        else:
            module_names.append(modname)
    return module_names


def _import_modules(module_names):
    imported_modules = []
    for modname in module_names:
        mod = importlib.import_module("watcher.conf." + modname)
        if not hasattr(mod, LIST_OPTS_FUNC_NAME):
            msg = "The module 'watcher.conf.%s' should have a '%s' " \
                  "function which returns the config options." % \
                  (modname, LIST_OPTS_FUNC_NAME)
            raise Exception(msg)
        else:
            imported_modules.append(mod)
    return imported_modules


def _process_old_opts(configs):
    """Convert old-style 2-tuple configs to dicts."""
    if isinstance(configs, tuple):
        configs = [configs]
    return {label: options for label, options in configs}


def _append_config_options(imported_modules, config_options):
    for mod in imported_modules:
        configs = mod.list_opts()
        # TODO(markus_z): Remove this compatibility shim once all list_opts()
        # functions have been updated to return dicts.
        if not isinstance(configs, dict):
            configs = _process_old_opts(configs)
        for key, val in configs.items():
            config_options[key].extend(val)
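The discovery logic above boils down to: collect each `watcher.conf.*` module's `list_opts()` dict, merge the option lists per group, then convert to 2-tuples. A minimal sketch with two hypothetical stand-in modules (the module names and option lists here are invented for illustration, not taken from the Watcher tree):

```python
import collections


def _tupleize(dct):
    """Convert the {group: [opts]} dict to the 2-tuple format."""
    return [(key, val) for key, val in dct.items()]


# Stand-ins for watcher.conf.* modules: each exposes a list_opts()
# returning {group_name: [options]}, as the package convention requires.
def _service_list_opts():
    return {'DEFAULT': ['periodic_interval', 'host']}


def _api_list_opts():
    return {'api': ['port'], 'DEFAULT': ['debug']}


def list_opts():
    opts = collections.defaultdict(list)
    # Same merge step as _append_config_options: extend per group key.
    for mod_list_opts in (_service_list_opts, _api_list_opts):
        for key, val in mod_list_opts().items():
            opts[key].extend(val)
    return _tupleize(opts)


print(sorted(list_opts()))
# [('DEFAULT', ['periodic_interval', 'host', 'debug']), ('api', ['port'])]
```

Options registered by several modules under the same group accumulate in one list, which is exactly what the sample-config generator needs.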


@@ -1,49 +0,0 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2016 b<>com
#
# Authors: Vincent FRANCOISE <vincent.francoise@b-com.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import socket
from oslo_config import cfg
from watcher._i18n import _
SERVICE_OPTS = [
    cfg.IntOpt('periodic_interval',
               default=60,
               help=_('Seconds between running periodic tasks.')),
    cfg.StrOpt('host',
               default=socket.gethostname(),
               help=_('Name of this node. This can be an opaque identifier. '
                      'It is not necessarily a hostname, FQDN, or IP address. '
                      'However, the node name must be valid within '
                      'an AMQP key, and if using ZeroMQ, a valid '
                      'hostname, FQDN, or IP address.')),
    cfg.IntOpt('service_down_time',
               default=90,
               help=_('Maximum time since last check-in for up service.'))
]


def register_opts(conf):
    conf.register_opts(SERVICE_OPTS)


def list_opts():
    return [
        ('DEFAULT', SERVICE_OPTS),
    ]


@@ -36,7 +36,7 @@ class BaseConnection(object):
    @abc.abstractmethod
    def get_goal_list(self, context, filters=None, limit=None,
                      marker=None, sort_key=None, sort_dir=None, eager=False):
                      marker=None, sort_key=None, sort_dir=None):
        """Get specific columns for matching goals.

        Return a list of the specified columns for all goals that
@@ -50,7 +50,6 @@ class BaseConnection(object):
        :param sort_key: Attribute by which results should be sorted.
        :param sort_dir: direction in which results should be sorted.
                         (asc, desc)
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A list of tuples of the specified columns.
        """
@@ -73,34 +72,31 @@ class BaseConnection(object):
        """

    @abc.abstractmethod
    def get_goal_by_id(self, context, goal_id, eager=False):
    def get_goal_by_id(self, context, goal_id):
        """Return a goal given its ID.

        :param context: The security context
        :param goal_id: The ID of a goal
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A goal
        :raises: :py:class:`~.GoalNotFound`
        """

    @abc.abstractmethod
    def get_goal_by_uuid(self, context, goal_uuid, eager=False):
    def get_goal_by_uuid(self, context, goal_uuid):
        """Return a goal given its UUID.

        :param context: The security context
        :param goal_uuid: The UUID of a goal
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A goal
        :raises: :py:class:`~.GoalNotFound`
        """

    @abc.abstractmethod
    def get_goal_by_name(self, context, goal_name, eager=False):
    def get_goal_by_name(self, context, goal_name):
        """Return a goal given its name.

        :param context: The security context
        :param goal_name: The name of a goal
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A goal
        :raises: :py:class:`~.GoalNotFound`
        """
@@ -133,17 +129,9 @@ class BaseConnection(object):
        :raises: :py:class:`~.Invalid`
        """

    def soft_delete_goal(self, goal_id):
        """Soft delete a goal.

        :param goal_id: The id or uuid of a goal.
        :raises: :py:class:`~.GoalNotFound`
        """

    @abc.abstractmethod
    def get_strategy_list(self, context, filters=None, limit=None,
                          marker=None, sort_key=None, sort_dir=None,
                          eager=True):
                          marker=None, sort_key=None, sort_dir=None):
        """Get specific columns for matching strategies.

        Return a list of the specified columns for all strategies that
@@ -158,7 +146,6 @@ class BaseConnection(object):
        :param sort_key: Attribute by which results should be sorted.
        :param sort_dir: Direction in which results should be sorted.
                         (asc, desc)
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A list of tuples of the specified columns.
        """
@@ -183,34 +170,31 @@ class BaseConnection(object):
        """

    @abc.abstractmethod
    def get_strategy_by_id(self, context, strategy_id, eager=False):
    def get_strategy_by_id(self, context, strategy_id):
        """Return a strategy given its ID.

        :param context: The security context
        :param strategy_id: The ID of a strategy
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A strategy
        :raises: :py:class:`~.StrategyNotFound`
        """

    @abc.abstractmethod
    def get_strategy_by_uuid(self, context, strategy_uuid, eager=False):
    def get_strategy_by_uuid(self, context, strategy_uuid):
        """Return a strategy given its UUID.

        :param context: The security context
        :param strategy_uuid: The UUID of a strategy
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A strategy
        :raises: :py:class:`~.StrategyNotFound`
        """

    @abc.abstractmethod
    def get_strategy_by_name(self, context, strategy_name, eager=False):
    def get_strategy_by_name(self, context, strategy_name):
        """Return a strategy given its name.

        :param context: The security context
        :param strategy_name: The name of a strategy
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A strategy
        :raises: :py:class:`~.StrategyNotFound`
        """
@@ -233,17 +217,10 @@ class BaseConnection(object):
        :raises: :py:class:`~.Invalid`
        """

    def soft_delete_strategy(self, strategy_id):
        """Soft delete a strategy.

        :param strategy_id: The id or uuid of a strategy.
        :raises: :py:class:`~.StrategyNotFound`
        """

    @abc.abstractmethod
    def get_audit_template_list(self, context, filters=None,
                                limit=None, marker=None, sort_key=None,
                                sort_dir=None, eager=False):
                                sort_dir=None):
        """Get specific columns for matching audit templates.

        Return a list of the specified columns for all audit templates that
@@ -257,7 +234,6 @@ class BaseConnection(object):
        :param sort_key: Attribute by which results should be sorted.
        :param sort_dir: direction in which results should be sorted.
                         (asc, desc)
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A list of tuples of the specified columns.
        """
@@ -274,50 +250,46 @@ class BaseConnection(object):
                'uuid': utils.generate_uuid(),
                'name': 'example',
                'description': 'free text description'
                'host_aggregate': 'nova aggregate name or id'
                'goal': 'DUMMY'
                'extra': {'automatic': True}
            }
        :returns: An audit template.
        :raises: :py:class:`~.AuditTemplateAlreadyExists`
        """

    @abc.abstractmethod
    def get_audit_template_by_id(self, context, audit_template_id,
                                 eager=False):
    def get_audit_template_by_id(self, context, audit_template_id):
        """Return an audit template.

        :param context: The security context
        :param audit_template_id: The id of an audit template.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An audit template.
        :raises: :py:class:`~.AuditTemplateNotFound`
        """

    @abc.abstractmethod
    def get_audit_template_by_uuid(self, context, audit_template_uuid,
                                   eager=False):
    def get_audit_template_by_uuid(self, context, audit_template_uuid):
        """Return an audit template.

        :param context: The security context
        :param audit_template_uuid: The uuid of an audit template.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An audit template.
        :raises: :py:class:`~.AuditTemplateNotFound`
        """

    def get_audit_template_by_name(self, context, audit_template_name,
                                   eager=False):
    def get_audit_template_by_name(self, context, audit_template_name):
        """Return an audit template.

        :param context: The security context
        :param audit_template_name: The name of an audit template.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An audit template.
        :raises: :py:class:`~.AuditTemplateNotFound`
        """

    @abc.abstractmethod
    def destroy_audit_template(self, audit_template_id):
        """Destroy an audit template.
        """Destroy an audit_template.

        :param audit_template_id: The id or uuid of an audit template.
        :raises: :py:class:`~.AuditTemplateNotFound`
@@ -335,7 +307,7 @@ class BaseConnection(object):
    @abc.abstractmethod
    def soft_delete_audit_template(self, audit_template_id):
        """Soft delete an audit template.
        """Soft delete an audit_template.

        :param audit_template_id: The id or uuid of an audit template.
        :raises: :py:class:`~.AuditTemplateNotFound`
@@ -343,7 +315,7 @@ class BaseConnection(object):
    @abc.abstractmethod
    def get_audit_list(self, context, filters=None, limit=None,
                       marker=None, sort_key=None, sort_dir=None, eager=False):
                       marker=None, sort_key=None, sort_dir=None):
        """Get specific columns for matching audits.

        Return a list of the specified columns for all audits that match the
@@ -357,7 +329,6 @@ class BaseConnection(object):
        :param sort_key: Attribute by which results should be sorted.
        :param sort_dir: direction in which results should be sorted.
                         (asc, desc)
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A list of tuples of the specified columns.
        """
@@ -374,29 +345,28 @@ class BaseConnection(object):
            {
                'uuid': utils.generate_uuid(),
                'type': 'ONESHOT',
                'deadline': None
            }
        :returns: An audit.
        :raises: :py:class:`~.AuditAlreadyExists`
        """

    @abc.abstractmethod
    def get_audit_by_id(self, context, audit_id, eager=False):
    def get_audit_by_id(self, context, audit_id):
        """Return an audit.

        :param context: The security context
        :param audit_id: The id of an audit.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An audit.
        :raises: :py:class:`~.AuditNotFound`
        """

    @abc.abstractmethod
    def get_audit_by_uuid(self, context, audit_uuid, eager=False):
    def get_audit_by_uuid(self, context, audit_uuid):
        """Return an audit.

        :param context: The security context
        :param audit_uuid: The uuid of an audit.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An audit.
        :raises: :py:class:`~.AuditNotFound`
        """
@@ -423,13 +393,13 @@ class BaseConnection(object):
        """Soft delete an audit and all associated action plans.

        :param audit_id: The id or uuid of an audit.
        :returns: An audit.
        :raises: :py:class:`~.AuditNotFound`
        """

    @abc.abstractmethod
    def get_action_list(self, context, filters=None, limit=None,
                        marker=None, sort_key=None, sort_dir=None,
                        eager=False):
                        marker=None, sort_key=None, sort_dir=None):
        """Get specific columns for matching actions.

        Return a list of the specified columns for all actions that match the
@@ -443,7 +413,6 @@ class BaseConnection(object):
        :param sort_key: Attribute by which results should be sorted.
        :param sort_dir: direction in which results should be sorted.
                         (asc, desc)
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A list of tuples of the specified columns.
        """
@@ -468,23 +437,21 @@ class BaseConnection(object):
        """

    @abc.abstractmethod
    def get_action_by_id(self, context, action_id, eager=False):
    def get_action_by_id(self, context, action_id):
        """Return a action.

        :param context: The security context
        :param action_id: The id of a action.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A action.
        :raises: :py:class:`~.ActionNotFound`
        """

    @abc.abstractmethod
    def get_action_by_uuid(self, context, action_uuid, eager=False):
    def get_action_by_uuid(self, context, action_uuid):
        """Return a action.

        :param context: The security context
        :param action_uuid: The uuid of a action.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A action.
        :raises: :py:class:`~.ActionNotFound`
        """
@@ -509,17 +476,10 @@ class BaseConnection(object):
        :raises: :py:class:`~.Invalid`
        """

    def soft_delete_action(self, action_id):
        """Soft delete an action.

        :param action_id: The id or uuid of an action.
        :raises: :py:class:`~.ActionNotFound`
        """

    @abc.abstractmethod
    def get_action_plan_list(
            self, context, filters=None, limit=None,
            marker=None, sort_key=None, sort_dir=None, eager=False):
            marker=None, sort_key=None, sort_dir=None):
        """Get specific columns for matching action plans.

        Return a list of the specified columns for all action plans that
@@ -533,7 +493,6 @@ class BaseConnection(object):
        :param sort_key: Attribute by which results should be sorted.
        :param sort_dir: direction in which results should be sorted.
                         (asc, desc)
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A list of tuples of the specified columns.
        """
@@ -548,23 +507,21 @@ class BaseConnection(object):
        """

    @abc.abstractmethod
    def get_action_plan_by_id(self, context, action_plan_id, eager=False):
    def get_action_plan_by_id(self, context, action_plan_id):
        """Return an action plan.

        :param context: The security context
        :param action_plan_id: The id of an action plan.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An action plan.
        :raises: :py:class:`~.ActionPlanNotFound`
        """

    @abc.abstractmethod
    def get_action_plan_by_uuid(self, context, action_plan__uuid, eager=False):
    def get_action_plan_by_uuid(self, context, action_plan__uuid):
        """Return a action plan.

        :param context: The security context
        :param action_plan__uuid: The uuid of an action plan.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An action plan.
        :raises: :py:class:`~.ActionPlanNotFound`
        """
@@ -589,17 +546,9 @@ class BaseConnection(object):
        :raises: :py:class:`~.Invalid`
        """

    def soft_delete_action_plan(self, action_plan_id):
        """Soft delete an action plan.

        :param action_plan_id: The id or uuid of an action plan.
        :raises: :py:class:`~.ActionPlanNotFound`
        """

    @abc.abstractmethod
    def get_efficacy_indicator_list(self, context, filters=None, limit=None,
                                    marker=None, sort_key=None, sort_dir=None,
                                    eager=False):
                                    marker=None, sort_key=None, sort_dir=None):
        """Get specific columns for matching efficacy indicators.

        Return a list of the specified columns for all efficacy indicators that
@@ -616,7 +565,6 @@ class BaseConnection(object):
        :param sort_key: Attribute by which results should be sorted.
        :param sort_dir: Direction in which results should be sorted.
                         (asc, desc)
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A list of tuples of the specified columns.
        """
@@ -641,37 +589,31 @@ class BaseConnection(object):
        """

    @abc.abstractmethod
    def get_efficacy_indicator_by_id(self, context, efficacy_indicator_id,
                                     eager=False):
    def get_efficacy_indicator_by_id(self, context, efficacy_indicator_id):
        """Return an efficacy indicator given its ID.

        :param context: The security context
        :param efficacy_indicator_id: The ID of an efficacy indicator
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An efficacy indicator
        :raises: :py:class:`~.EfficacyIndicatorNotFound`
        """

    @abc.abstractmethod
    def get_efficacy_indicator_by_uuid(self, context, efficacy_indicator_uuid,
                                       eager=False):
    def get_efficacy_indicator_by_uuid(self, context, efficacy_indicator_uuid):
        """Return an efficacy indicator given its UUID.

        :param context: The security context
        :param efficacy_indicator_uuid: The UUID of an efficacy indicator
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An efficacy indicator
        :raises: :py:class:`~.EfficacyIndicatorNotFound`
        """

    @abc.abstractmethod
    def get_efficacy_indicator_by_name(self, context, efficacy_indicator_name,
                                       eager=False):
    def get_efficacy_indicator_by_name(self, context, efficacy_indicator_name):
        """Return an efficacy indicator given its name.

        :param context: The security context
        :param efficacy_indicator_name: The name of an efficacy indicator
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: An efficacy indicator
        :raises: :py:class:`~.EfficacyIndicatorNotFound`
        """
@@ -685,7 +627,7 @@ class BaseConnection(object):
        """

    @abc.abstractmethod
    def update_efficacy_indicator(self, efficacy_indicator_id, values):
    def update_efficacy_indicator(self, efficacy_indicator_uuid, values):
        """Update properties of an efficacy indicator.

        :param efficacy_indicator_uuid: The UUID of an efficacy indicator
@@ -697,7 +639,7 @@ class BaseConnection(object):
    @abc.abstractmethod
    def get_scoring_engine_list(
            self, context, columns=None, filters=None, limit=None,
            marker=None, sort_key=None, sort_dir=None, eager=False):
            marker=None, sort_key=None, sort_dir=None):
        """Get specific columns for matching scoring engines.

        Return a list of the specified columns for all scoring engines that
@@ -713,7 +655,6 @@ class BaseConnection(object):
        :param sort_key: Attribute by which results should be sorted.
        :param sort_dir: direction in which results should be sorted.
                         (asc, desc)
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A list of tuples of the specified columns.
        """
@@ -728,37 +669,31 @@ class BaseConnection(object):
        """

    @abc.abstractmethod
    def get_scoring_engine_by_id(self, context, scoring_engine_id,
                                 eager=False):
    def get_scoring_engine_by_id(self, context, scoring_engine_id):
        """Return a scoring engine by its id.

        :param context: The security context
        :param scoring_engine_id: The id of a scoring engine.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A scoring engine.
        :raises: :py:class:`~.ScoringEngineNotFound`
        """

    @abc.abstractmethod
    def get_scoring_engine_by_uuid(self, context, scoring_engine_uuid,
                                   eager=False):
    def get_scoring_engine_by_uuid(self, context, scoring_engine_uuid):
        """Return a scoring engine by its uuid.

        :param context: The security context
        :param scoring_engine_uuid: The uuid of a scoring engine.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A scoring engine.
        :raises: :py:class:`~.ScoringEngineNotFound`
        """

    @abc.abstractmethod
    def get_scoring_engine_by_name(self, context, scoring_engine_name,
                                   eager=False):
    def get_scoring_engine_by_name(self, context, scoring_engine_name):
        """Return a scoring engine by its name.

        :param context: The security context
        :param scoring_engine_name: The name of a scoring engine.
        :param eager: If True, also loads One-to-X data (Default: False)
        :returns: A scoring engine.
        :raises: :py:class:`~.ScoringEngineNotFound`
        """
@@ -780,92 +715,3 @@ class BaseConnection(object):
        :raises: :py:class:`~.ScoringEngineNotFound`
        :raises: :py:class:`~.Invalid`
        """
@abc.abstractmethod
def get_service_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
"""Get specific columns for matching services.
Return a list of the specified columns for all services that
match the specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of services to return.
:param marker: The last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: Direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_service(self, values):
"""Create a new service.
:param values: A dict containing items used to identify
and track the service. For example:
::
{
'id': 1,
'name': 'watcher-api',
'status': 'ACTIVE',
'host': 'controller'
}
:returns: A service
:raises: :py:class:`~.ServiceAlreadyExists`
"""
@abc.abstractmethod
def get_service_by_id(self, context, service_id, eager=False):
"""Return a service given its ID.
:param context: The security context
:param service_id: The ID of a service
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A service
:raises: :py:class:`~.ServiceNotFound`
"""
@abc.abstractmethod
def get_service_by_name(self, context, service_name, eager=False):
"""Return a service given its name.
:param context: The security context
:param service_name: The name of a service
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A service
:raises: :py:class:`~.ServiceNotFound`
"""
@abc.abstractmethod
def destroy_service(self, service_id):
"""Destroy a service.
:param service_id: The ID of a service
:raises: :py:class:`~.ServiceNotFound`
"""
@abc.abstractmethod
def update_service(self, service_id, values):
"""Update properties of a service.
:param service_id: The ID of a service
:returns: A service
:raises: :py:class:`~.ServiceyNotFound`
:raises: :py:class:`~.Invalid`
"""
@abc.abstractmethod
def soft_delete_service(self, service_id):
"""Soft delete a service.
:param service_id: The id of a service.
:returns: A service.
:raises: :py:class:`~.ServiceNotFound`
"""


@@ -24,17 +24,17 @@ from oslo_config import cfg
 from oslo_db import exception as db_exc
 from oslo_db.sqlalchemy import session as db_session
 from oslo_db.sqlalchemy import utils as db_utils
-from oslo_utils import timeutils
-from sqlalchemy.inspection import inspect
 from sqlalchemy.orm import exc
-from sqlalchemy.orm import joinedload

 from watcher._i18n import _
 from watcher.common import exception
 from watcher.common import utils
 from watcher.db import api
 from watcher.db.sqlalchemy import models
-from watcher import objects
+from watcher.objects import action as action_objects
+from watcher.objects import action_plan as ap_objects
+from watcher.objects import audit as audit_objects
+from watcher.objects import utils as objutils

 CONF = cfg.CONF
@@ -68,6 +68,7 @@ def model_query(model, *args, **kwargs):
     :param session: if present, the session to use
     """
     session = kwargs.get('session') or get_session()
     query = session.query(model, *args)
     return query
@@ -132,9 +133,8 @@ class Connection(api.BaseConnection):
     def __add_simple_filter(self, query, model, fieldname, value, operator_):
         field = getattr(model, fieldname)

-        if field.type.python_type is datetime.datetime and value:
-            if not isinstance(value, datetime.datetime):
-                value = timeutils.parse_isotime(value)
+        if field.type.python_type is datetime.datetime:
+            value = objutils.datetime_or_str_or_none(value)

         return query.filter(self.valid_operators[operator_](field, value))
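Both sides of this hunk do the same normalization before comparing a filter value against a `DateTime` column: a string is parsed into a `datetime`, a `datetime` passes through. A dependency-free sketch of that coercion (the real code uses `oslo_utils.timeutils.parse_isotime` or `objutils.datetime_or_str_or_none`; `coerce_datetime` below is a stand-in using the stdlib parser, which accepts fewer formats):

```python
import datetime


def coerce_datetime(value):
    """Accept a datetime, an ISO 8601 string, or None.

    Mirrors the coercion the filter above applies before building a
    column comparison, so callers may pass either representation.
    """
    if value is None or isinstance(value, datetime.datetime):
        return value
    # Stdlib parser kept for a dependency-free sketch; parse_isotime
    # additionally handles timezone suffixes like a trailing 'Z'.
    return datetime.datetime.fromisoformat(value)


print(coerce_datetime("2016-10-24T20:11:19"))
```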
@@ -233,32 +233,8 @@ class Connection(api.BaseConnection):
         return query

-    @staticmethod
-    def _get_relationships(model):
-        return inspect(model).relationships
-
-    @staticmethod
-    def _set_eager_options(model, query):
-        relationships = inspect(model).relationships
-        for relationship in relationships:
-            if not relationship.uselist:
-                # We have a One-to-X relationship
-                query = query.options(joinedload(relationship.key))
-        return query
-
-    def _create(self, model, values):
-        obj = model()
-        cleaned_values = {k: v for k, v in values.items()
-                          if k not in self._get_relationships(model)}
-        obj.update(cleaned_values)
-        obj.save()
-        return obj
-
-    def _get(self, context, model, fieldname, value, eager):
+    def _get(self, context, model, fieldname, value):
         query = model_query(model)
-        if eager:
-            query = self._set_eager_options(model, query)
         query = query.filter(getattr(model, fieldname) == value)
         if not context.show_deleted:
             query = query.filter(model.deleted_at.is_(None))
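The removed `_set_eager_options` helper iterates a model's mapped relationships and attaches a `joinedload` option for every scalar (One-to-X) relationship, skipping collections. The shape of that iteration can be shown without a database by substituting stubs for `inspect(model).relationships` and for the query object (the `Relationship` and `Query` classes below are stand-ins, not SQLAlchemy's API):

```python
from dataclasses import dataclass, field


@dataclass
class Relationship:
    key: str
    uselist: bool  # True for One-to-Many collections, False for scalar refs


@dataclass
class Query:
    eager_keys: list = field(default_factory=list)

    def options(self, key):
        # Stand-in for query.options(joinedload(relationship.key));
        # returns a new query, as SQLAlchemy's generative API does.
        return Query(self.eager_keys + [key])


def set_eager_options(relationships, query):
    """Mirror of the removed helper: joined-load only scalar relationships."""
    for relationship in relationships:
        if not relationship.uselist:
            # We have a One-to-X relationship
            query = query.options(relationship.key)
    return query


rels = [Relationship('goal', uselist=False),
        Relationship('strategy', uselist=False),
        Relationship('audits', uselist=True)]
query = set_eager_options(rels, Query())
print(query.eager_keys)  # ['goal', 'strategy']
```

The `uselist` check matters because joined-loading a collection multiplies the result rows, while joined-loading a scalar reference only widens them.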
@@ -270,8 +246,7 @@ class Connection(api.BaseConnection):
         return obj

-    @staticmethod
-    def _update(model, id_, values):
+    def _update(self, model, id_, values):
         session = get_session()
         with session.begin():
             query = model_query(model, session=session)
@@ -284,8 +259,7 @@ class Connection(api.BaseConnection):
             ref.update(values)
         return ref

-    @staticmethod
-    def _soft_delete(model, id_):
+    def _soft_delete(self, model, id_):
         session = get_session()
         with session.begin():
             query = model_query(model, session=session)
@@ -297,8 +271,7 @@ class Connection(api.BaseConnection):
             query.soft_delete()

-    @staticmethod
-    def _destroy(model, id_):
+    def _destroy(self, model, id_):
         session = get_session()
         with session.begin():
             query = model_query(model, session=session)
@@ -336,7 +309,8 @@ class Connection(api.BaseConnection):
         if filters is None:
             filters = {}

-        plain_fields = ['uuid', 'name', 'goal_id', 'strategy_id']
+        plain_fields = ['uuid', 'name', 'host_aggregate',
+                        'goal_id', 'strategy_id']
         join_fieldmap = JoinMap(
             goal_uuid=NaturalJoinFilter(
                 join_fieldname="uuid", join_model=models.Goal),
@@ -425,11 +399,10 @@ class Connection(api.BaseConnection):
     # ### GOALS ### #

-    def get_goal_list(self, context, filters=None, limit=None, marker=None,
-                      sort_key=None, sort_dir=None, eager=False):
+    def get_goal_list(self, context, filters=None, limit=None,
+                      marker=None, sort_key=None, sort_dir=None):
         query = model_query(models.Goal)
-        if eager:
-            query = self._set_eager_options(models.Goal, query)
         query = self._add_goals_filters(query, filters)
         if not context.show_deleted:
             query = query.filter_by(deleted_at=None)
@@ -441,30 +414,30 @@ class Connection(api.BaseConnection):
         if not values.get('uuid'):
             values['uuid'] = utils.generate_uuid()

+        goal = models.Goal()
+        goal.update(values)
         try:
-            goal = self._create(models.Goal, values)
+            goal.save()
         except db_exc.DBDuplicateEntry:
             raise exception.GoalAlreadyExists(uuid=values['uuid'])
         return goal

-    def _get_goal(self, context, fieldname, value, eager):
+    def _get_goal(self, context, fieldname, value):
         try:
             return self._get(context, model=models.Goal,
-                             fieldname=fieldname, value=value, eager=eager)
+                             fieldname=fieldname, value=value)
         except exception.ResourceNotFound:
             raise exception.GoalNotFound(goal=value)

-    def get_goal_by_id(self, context, goal_id, eager=False):
-        return self._get_goal(
-            context, fieldname="id", value=goal_id, eager=eager)
+    def get_goal_by_id(self, context, goal_id):
+        return self._get_goal(context, fieldname="id", value=goal_id)

-    def get_goal_by_uuid(self, context, goal_uuid, eager=False):
-        return self._get_goal(
-            context, fieldname="uuid", value=goal_uuid, eager=eager)
+    def get_goal_by_uuid(self, context, goal_uuid):
+        return self._get_goal(context, fieldname="uuid", value=goal_uuid)

-    def get_goal_by_name(self, context, goal_name, eager=False):
-        return self._get_goal(
-            context, fieldname="name", value=goal_name, eager=eager)
+    def get_goal_by_name(self, context, goal_name):
+        return self._get_goal(context, fieldname="name", value=goal_name)

     def destroy_goal(self, goal_id):
         try:
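On both sides of this hunk, `_get_goal` funnels the three public getters into one helper that filters on an arbitrary column (`getattr(model, fieldname) == value` in `_get`) and converts a generic `ResourceNotFound` into the goal-specific exception. The dispatch pattern, minus SQLAlchemy, looks like this (the in-memory `GOALS` rows are stand-ins for the goals table):

```python
class GoalNotFound(Exception):
    """Stand-in for watcher.common.exception.GoalNotFound."""


# Stand-in rows; the real helper builds a filtered model_query instead.
GOALS = [
    {'id': 1, 'uuid': 'uuid-1', 'name': 'server_consolidation'},
    {'id': 2, 'uuid': 'uuid-2', 'name': 'dummy'},
]


def _get_goal(fieldname, value):
    """Single lookup shared by the id/uuid/name getters."""
    for goal in GOALS:
        if goal[fieldname] == value:
            return goal
    raise GoalNotFound(value)


def get_goal_by_id(goal_id):
    return _get_goal('id', goal_id)


def get_goal_by_uuid(goal_uuid):
    return _get_goal('uuid', goal_uuid)


def get_goal_by_name(goal_name):
    return _get_goal('name', goal_name)


print(get_goal_by_name('dummy')['id'])  # 2
```

Keeping the column name as data rather than writing three near-identical queries is what lets the error handling live in exactly one place.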
@@ -491,11 +464,9 @@ class Connection(api.BaseConnection):
     # ### STRATEGIES ### #

     def get_strategy_list(self, context, filters=None, limit=None,
-                          marker=None, sort_key=None, sort_dir=None,
-                          eager=True):
+                          marker=None, sort_key=None, sort_dir=None):
         query = model_query(models.Strategy)
-        if eager:
-            query = self._set_eager_options(models.Strategy, query)
         query = self._add_strategies_filters(query, filters)
         if not context.show_deleted:
             query = query.filter_by(deleted_at=None)
@@ -507,30 +478,32 @@ class Connection(api.BaseConnection):
         if not values.get('uuid'):
             values['uuid'] = utils.generate_uuid()

+        strategy = models.Strategy()
+        strategy.update(values)
         try:
-            strategy = self._create(models.Strategy, values)
+            strategy.save()
         except db_exc.DBDuplicateEntry:
             raise exception.StrategyAlreadyExists(uuid=values['uuid'])
         return strategy

-    def _get_strategy(self, context, fieldname, value, eager):
+    def _get_strategy(self, context, fieldname, value):
         try:
             return self._get(context, model=models.Strategy,
-                             fieldname=fieldname, value=value, eager=eager)
+                             fieldname=fieldname, value=value)
         except exception.ResourceNotFound:
             raise exception.StrategyNotFound(strategy=value)

-    def get_strategy_by_id(self, context, strategy_id, eager=False):
-        return self._get_strategy(
-            context, fieldname="id", value=strategy_id, eager=eager)
+    def get_strategy_by_id(self, context, strategy_id):
+        return self._get_strategy(context, fieldname="id", value=strategy_id)

-    def get_strategy_by_uuid(self, context, strategy_uuid, eager=False):
+    def get_strategy_by_uuid(self, context, strategy_uuid):
         return self._get_strategy(
-            context, fieldname="uuid", value=strategy_uuid, eager=eager)
+            context, fieldname="uuid", value=strategy_uuid)

-    def get_strategy_by_name(self, context, strategy_name, eager=False):
+    def get_strategy_by_name(self, context, strategy_name):
         return self._get_strategy(
-            context, fieldname="name", value=strategy_name, eager=eager)
+            context, fieldname="name", value=strategy_name)

     def destroy_strategy(self, strategy_id):
         try:
@@ -557,12 +530,9 @@ class Connection(api.BaseConnection):
     # ### AUDIT TEMPLATES ### #

     def get_audit_template_list(self, context, filters=None, limit=None,
-                                marker=None, sort_key=None, sort_dir=None,
-                                eager=False):
+                                marker=None, sort_key=None, sort_dir=None):
         query = model_query(models.AuditTemplate)
-        if eager:
-            query = self._set_eager_options(models.AuditTemplate, query)
         query = self._add_audit_templates_filters(query, filters)
         if not context.show_deleted:
             query = query.filter_by(deleted_at=None)
@@ -582,34 +552,34 @@ class Connection(api.BaseConnection):
             raise exception.AuditTemplateAlreadyExists(
                 audit_template=values['name'])

+        audit_template = models.AuditTemplate()
+        audit_template.update(values)
         try:
-            audit_template = self._create(models.AuditTemplate, values)
+            audit_template.save()
         except db_exc.DBDuplicateEntry:
             raise exception.AuditTemplateAlreadyExists(
                 audit_template=values['name'])
         return audit_template

-    def _get_audit_template(self, context, fieldname, value, eager):
+    def _get_audit_template(self, context, fieldname, value):
         try:
             return self._get(context, model=models.AuditTemplate,
-                             fieldname=fieldname, value=value, eager=eager)
+                             fieldname=fieldname, value=value)
         except exception.ResourceNotFound:
             raise exception.AuditTemplateNotFound(audit_template=value)

-    def get_audit_template_by_id(self, context, audit_template_id,
-                                 eager=False):
+    def get_audit_template_by_id(self, context, audit_template_id):
         return self._get_audit_template(
-            context, fieldname="id", value=audit_template_id, eager=eager)
+            context, fieldname="id", value=audit_template_id)

-    def get_audit_template_by_uuid(self, context, audit_template_uuid,
-                                   eager=False):
+    def get_audit_template_by_uuid(self, context, audit_template_uuid):
         return self._get_audit_template(
-            context, fieldname="uuid", value=audit_template_uuid, eager=eager)
+            context, fieldname="uuid", value=audit_template_uuid)

-    def get_audit_template_by_name(self, context, audit_template_name,
-                                   eager=False):
+    def get_audit_template_by_name(self, context, audit_template_name):
         return self._get_audit_template(
-            context, fieldname="name", value=audit_template_name, eager=eager)
+            context, fieldname="name", value=audit_template_name)

     def destroy_audit_template(self, audit_template_id):
         try:
@@ -640,14 +610,12 @@ class Connection(api.BaseConnection):
     # ### AUDITS ### #

     def get_audit_list(self, context, filters=None, limit=None, marker=None,
-                       sort_key=None, sort_dir=None, eager=False):
+                       sort_key=None, sort_dir=None):
         query = model_query(models.Audit)
-        if eager:
-            query = self._set_eager_options(models.Audit, query)
         query = self._add_audits_filters(query, filters)
         if not context.show_deleted:
             query = query.filter(
-                ~(models.Audit.state == objects.audit.State.DELETED))
+                ~(models.Audit.state == audit_objects.State.DELETED))

         return _paginate_query(models.Audit, limit, marker,
                                sort_key, sort_dir, query)
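Every list method here ends in `_paginate_query`, oslo.db-style keyset ("marker") pagination: rows are ordered on `sort_key`, the page starts after the row identified by `marker`, and `limit` caps the page size. A dependency-free sketch of the idea (not oslo.db's implementation; `paginate` and its list-based marker lookup are stand-ins):

```python
def paginate(rows, limit=None, marker=None, sort_key='id'):
    """Keyset pagination: up to `limit` rows after the row with id `marker`."""
    rows = sorted(rows, key=lambda r: r[sort_key])
    if marker is not None:
        ids = [r['id'] for r in rows]
        # Real implementations resolve the marker with a WHERE clause;
        # a linear scan keeps this sketch self-contained.
        rows = rows[ids.index(marker) + 1:]
    return rows[:limit]


audits = [{'id': n} for n in range(1, 6)]
page1 = paginate(audits, limit=2)            # ids 1, 2
page2 = paginate(audits, limit=2, marker=2)  # ids 3, 4
print([r['id'] for r in page1], [r['id'] for r in page2])
```

The marker scheme stays correct when rows are inserted between page fetches, which offset-based pagination does not.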
@@ -658,28 +626,41 @@ class Connection(api.BaseConnection):
             values['uuid'] = utils.generate_uuid()
         if values.get('state') is None:
-            values['state'] = objects.audit.State.PENDING
+            values['state'] = audit_objects.State.PENDING

+        audit = models.Audit()
+        audit.update(values)
         try:
-            audit = self._create(models.Audit, values)
+            audit.save()
         except db_exc.DBDuplicateEntry:
             raise exception.AuditAlreadyExists(uuid=values['uuid'])
         return audit

-    def _get_audit(self, context, fieldname, value, eager):
-        try:
-            return self._get(context, model=models.Audit,
-                             fieldname=fieldname, value=value, eager=eager)
-        except exception.ResourceNotFound:
-            raise exception.AuditNotFound(audit=value)
-
-    def get_audit_by_id(self, context, audit_id, eager=False):
-        return self._get_audit(
-            context, fieldname="id", value=audit_id, eager=eager)
-
-    def get_audit_by_uuid(self, context, audit_uuid, eager=False):
-        return self._get_audit(
-            context, fieldname="uuid", value=audit_uuid, eager=eager)
+    def get_audit_by_id(self, context, audit_id):
+        query = model_query(models.Audit)
+        query = query.filter_by(id=audit_id)
+        try:
+            audit = query.one()
+            if not context.show_deleted:
+                if audit.state == audit_objects.State.DELETED:
+                    raise exception.AuditNotFound(audit=audit_id)
+            return audit
+        except exc.NoResultFound:
+            raise exception.AuditNotFound(audit=audit_id)
+
+    def get_audit_by_uuid(self, context, audit_uuid):
+        query = model_query(models.Audit)
+        query = query.filter_by(uuid=audit_uuid)
+        try:
+            audit = query.one()
+            if not context.show_deleted:
+                if audit.state == audit_objects.State.DELETED:
+                    raise exception.AuditNotFound(audit=audit_uuid)
+            return audit
+        except exc.NoResultFound:
+            raise exception.AuditNotFound(audit=audit_uuid)

     def destroy_audit(self, audit_id):
         def is_audit_referenced(session, audit_id):
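Unlike goals and strategies, which are hidden via `deleted_at`, these audit getters treat a state of `DELETED` as the soft-delete marker: the row is still found, but the getter reports `AuditNotFound` unless the request context has `show_deleted` set. That visibility rule in isolation (stub context flag and state constant; names are stand-ins):

```python
class AuditNotFound(Exception):
    """Stand-in for watcher.common.exception.AuditNotFound."""


DELETED = 'DELETED'  # stand-in for audit_objects.State.DELETED


def visible_audit(audit, show_deleted):
    """Apply the soft-delete rule from get_audit_by_id/get_audit_by_uuid.

    A row in state DELETED exists in the table but is reported as
    not found to callers that did not ask to see deleted rows.
    """
    if not show_deleted and audit['state'] == DELETED:
        raise AuditNotFound(audit['uuid'])
    return audit


audit = {'uuid': 'uuid-1', 'state': DELETED}
try:
    visible_audit(audit, show_deleted=False)
except AuditNotFound:
    print('hidden from normal contexts')
print(visible_audit(audit, show_deleted=True)['state'])  # DELETED
```

Using a state value instead of `deleted_at` lets `destroy_audit` keep its referential-integrity checks against action plans while the audit is merely soft-deleted.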
@@ -724,14 +705,12 @@ class Connection(api.BaseConnection):
     # ### ACTIONS ### #

     def get_action_list(self, context, filters=None, limit=None, marker=None,
-                        sort_key=None, sort_dir=None, eager=False):
+                        sort_key=None, sort_dir=None):
         query = model_query(models.Action)
-        if eager:
-            query = self._set_eager_options(models.Action, query)
         query = self._add_actions_filters(query, filters)
         if not context.show_deleted:
             query = query.filter(
-                ~(models.Action.state == objects.action.State.DELETED))
+                ~(models.Action.state == action_objects.State.DELETED))
         return _paginate_query(models.Action, limit, marker,
                                sort_key, sort_dir, query)
@@ -740,26 +719,39 @@ class Connection(api.BaseConnection):
         if not values.get('uuid'):
             values['uuid'] = utils.generate_uuid()

+        action = models.Action()
+        action.update(values)
         try:
-            action = self._create(models.Action, values)
+            action.save()
         except db_exc.DBDuplicateEntry:
             raise exception.ActionAlreadyExists(uuid=values['uuid'])
         return action

-    def _get_action(self, context, fieldname, value, eager):
-        try:
-            return self._get(context, model=models.Action,
-                             fieldname=fieldname, value=value, eager=eager)
-        except exception.ResourceNotFound:
-            raise exception.ActionNotFound(action=value)
-
-    def get_action_by_id(self, context, action_id, eager=False):
-        return self._get_action(
-            context, fieldname="id", value=action_id, eager=eager)
-
-    def get_action_by_uuid(self, context, action_uuid, eager=False):
-        return self._get_action(
-            context, fieldname="uuid", value=action_uuid, eager=eager)
+    def get_action_by_id(self, context, action_id):
+        query = model_query(models.Action)
+        query = query.filter_by(id=action_id)
+        try:
+            action = query.one()
+            if not context.show_deleted:
+                if action.state == action_objects.State.DELETED:
+                    raise exception.ActionNotFound(
+                        action=action_id)
+            return action
+        except exc.NoResultFound:
+            raise exception.ActionNotFound(action=action_id)
+
+    def get_action_by_uuid(self, context, action_uuid):
+        query = model_query(models.Action)
+        query = query.filter_by(uuid=action_uuid)
+        try:
+            action = query.one()
+            if not context.show_deleted:
+                if action.state == action_objects.State.DELETED:
+                    raise exception.ActionNotFound(
+                        action=action_uuid)
+            return action
+        except exc.NoResultFound:
+            raise exception.ActionNotFound(action=action_uuid)

     def destroy_action(self, action_id):
         session = get_session()
@@ -774,12 +766,12 @@ class Connection(api.BaseConnection):
         # NOTE(dtantsur): this can lead to very strange errors
         if 'uuid' in values:
             raise exception.Invalid(
-                message=_("Cannot overwrite UUID for an existing Action."))
+                message=_("Cannot overwrite UUID for an existing "
+                          "Action."))

         return self._do_update_action(action_id, values)

-    @staticmethod
-    def _do_update_action(action_id, values):
+    def _do_update_action(self, action_id, values):
         session = get_session()
         with session.begin():
             query = model_query(models.Action, session=session)
@@ -808,16 +800,13 @@ class Connection(api.BaseConnection):
     # ### ACTION PLANS ### #

     def get_action_plan_list(
-            self, context, filters=None, limit=None, marker=None,
-            sort_key=None, sort_dir=None, eager=False):
+            self, context, filters=None, limit=None,
+            marker=None, sort_key=None, sort_dir=None):
         query = model_query(models.ActionPlan)
-        if eager:
-            query = self._set_eager_options(models.ActionPlan, query)
         query = self._add_action_plans_filters(query, filters)
         if not context.show_deleted:
             query = query.filter(
-                ~(models.ActionPlan.state ==
-                  objects.action_plan.State.DELETED))
+                ~(models.ActionPlan.state == ap_objects.State.DELETED))

         return _paginate_query(models.ActionPlan, limit, marker,
                                sort_key, sort_dir, query)
@@ -827,26 +816,41 @@ class Connection(api.BaseConnection):
         if not values.get('uuid'):
             values['uuid'] = utils.generate_uuid()

+        action_plan = models.ActionPlan()
+        action_plan.update(values)
         try:
-            action_plan = self._create(models.ActionPlan, values)
+            action_plan.save()
         except db_exc.DBDuplicateEntry:
             raise exception.ActionPlanAlreadyExists(uuid=values['uuid'])
         return action_plan

-    def _get_action_plan(self, context, fieldname, value, eager):
-        try:
-            return self._get(context, model=models.ActionPlan,
-                             fieldname=fieldname, value=value, eager=eager)
-        except exception.ResourceNotFound:
-            raise exception.ActionPlanNotFound(action_plan=value)
-
-    def get_action_plan_by_id(self, context, action_plan_id, eager=False):
-        return self._get_action_plan(
-            context, fieldname="id", value=action_plan_id, eager=eager)
-
-    def get_action_plan_by_uuid(self, context, action_plan_uuid, eager=False):
-        return self._get_action_plan(
-            context, fieldname="uuid", value=action_plan_uuid, eager=eager)
+    def get_action_plan_by_id(self, context, action_plan_id):
+        query = model_query(models.ActionPlan)
+        query = query.filter_by(id=action_plan_id)
+        try:
+            action_plan = query.one()
+            if not context.show_deleted:
+                if action_plan.state == ap_objects.State.DELETED:
+                    raise exception.ActionPlanNotFound(
+                        action_plan=action_plan_id)
+            return action_plan
+        except exc.NoResultFound:
+            raise exception.ActionPlanNotFound(action_plan=action_plan_id)
+
+    def get_action_plan_by_uuid(self, context, action_plan__uuid):
+        query = model_query(models.ActionPlan)
+        query = query.filter_by(uuid=action_plan__uuid)
+        try:
+            action_plan = query.one()
+            if not context.show_deleted:
+                if action_plan.state == ap_objects.State.DELETED:
+                    raise exception.ActionPlanNotFound(
+                        action_plan=action_plan__uuid)
+            return action_plan
+        except exc.NoResultFound:
+            raise exception.ActionPlanNotFound(action_plan=action_plan__uuid)

     def destroy_action_plan(self, action_plan_id):
         def is_action_plan_referenced(session, action_plan_id):
@@ -880,8 +884,7 @@ class Connection(api.BaseConnection):
         return self._do_update_action_plan(action_plan_id, values)

-    @staticmethod
-    def _do_update_action_plan(action_plan_id, values):
+    def _do_update_action_plan(self, action_plan_id, values):
         session = get_session()
         with session.begin():
             query = model_query(models.ActionPlan, session=session)
@@ -910,12 +913,9 @@ class Connection(api.BaseConnection):
     # ### EFFICACY INDICATORS ### #

     def get_efficacy_indicator_list(self, context, filters=None, limit=None,
-                                    marker=None, sort_key=None, sort_dir=None,
-                                    eager=False):
+                                    marker=None, sort_key=None, sort_dir=None):
         query = model_query(models.EfficacyIndicator)
-        if eager:
-            query = self._set_eager_options(models.EfficacyIndicator, query)
         query = self._add_efficacy_indicators_filters(query, filters)
         if not context.show_deleted:
             query = query.filter_by(deleted_at=None)
@@ -927,36 +927,33 @@ class Connection(api.BaseConnection):
         if not values.get('uuid'):
             values['uuid'] = utils.generate_uuid()

+        efficacy_indicator = models.EfficacyIndicator()
+        efficacy_indicator.update(values)
         try:
-            efficacy_indicator = self._create(models.EfficacyIndicator, values)
+            efficacy_indicator.save()
         except db_exc.DBDuplicateEntry:
             raise exception.EfficacyIndicatorAlreadyExists(uuid=values['uuid'])
         return efficacy_indicator

-    def _get_efficacy_indicator(self, context, fieldname, value, eager):
+    def _get_efficacy_indicator(self, context, fieldname, value):
         try:
             return self._get(context, model=models.EfficacyIndicator,
-                             fieldname=fieldname, value=value, eager=eager)
+                             fieldname=fieldname, value=value)
         except exception.ResourceNotFound:
             raise exception.EfficacyIndicatorNotFound(efficacy_indicator=value)

-    def get_efficacy_indicator_by_id(self, context, efficacy_indicator_id,
-                                     eager=False):
+    def get_efficacy_indicator_by_id(self, context, efficacy_indicator_id):
         return self._get_efficacy_indicator(
-            context, fieldname="id",
-            value=efficacy_indicator_id, eager=eager)
+            context, fieldname="id", value=efficacy_indicator_id)

-    def get_efficacy_indicator_by_uuid(self, context, efficacy_indicator_uuid,
-                                       eager=False):
+    def get_efficacy_indicator_by_uuid(self, context, efficacy_indicator_uuid):
         return self._get_efficacy_indicator(
-            context, fieldname="uuid",
-            value=efficacy_indicator_uuid, eager=eager)
+            context, fieldname="uuid", value=efficacy_indicator_uuid)

-    def get_efficacy_indicator_by_name(self, context, efficacy_indicator_name,
-                                       eager=False):
+    def get_efficacy_indicator_by_name(self, context, efficacy_indicator_name):
         return self._get_efficacy_indicator(
-            context, fieldname="name",
-            value=efficacy_indicator_name, eager=eager)
+            context, fieldname="name", value=efficacy_indicator_name)

     def update_efficacy_indicator(self, efficacy_indicator_id, values):
         if 'uuid' in values:
@@ -999,11 +996,9 @@ class Connection(api.BaseConnection):
             plain_fields=plain_fields)

     def get_scoring_engine_list(
             self, context, columns=None, filters=None, limit=None,
-            marker=None, sort_key=None, sort_dir=None, eager=False):
+            marker=None, sort_key=None, sort_dir=None):
         query = model_query(models.ScoringEngine)
-        if eager:
-            query = self._set_eager_options(models.ScoringEngine, query)
         query = self._add_scoring_engine_filters(query, filters)
         if not context.show_deleted:
             query = query.filter_by(deleted_at=None)
@@ -1016,33 +1011,33 @@ class Connection(api.BaseConnection):
         if not values.get('uuid'):
             values['uuid'] = utils.generate_uuid()

+        scoring_engine = models.ScoringEngine()
+        scoring_engine.update(values)
         try:
-            scoring_engine = self._create(models.ScoringEngine, values)
+            scoring_engine.save()
         except db_exc.DBDuplicateEntry:
             raise exception.ScoringEngineAlreadyExists(uuid=values['uuid'])
         return scoring_engine

-    def _get_scoring_engine(self, context, fieldname, value, eager):
+    def _get_scoring_engine(self, context, fieldname, value):
         try:
             return self._get(context, model=models.ScoringEngine,
-                             fieldname=fieldname, value=value, eager=eager)
+                             fieldname=fieldname, value=value)
         except exception.ResourceNotFound:
             raise exception.ScoringEngineNotFound(scoring_engine=value)

-    def get_scoring_engine_by_id(self, context, scoring_engine_id,
-                                 eager=False):
+    def get_scoring_engine_by_id(self, context, scoring_engine_id):
         return self._get_scoring_engine(
-            context, fieldname="id", value=scoring_engine_id, eager=eager)
+            context, fieldname="id", value=scoring_engine_id)

-    def get_scoring_engine_by_uuid(self, context, scoring_engine_uuid,
-                                   eager=False):
+    def get_scoring_engine_by_uuid(self, context, scoring_engine_uuid):
         return self._get_scoring_engine(
-            context, fieldname="uuid", value=scoring_engine_uuid, eager=eager)
+            context, fieldname="uuid", value=scoring_engine_uuid)

-    def get_scoring_engine_by_name(self, context, scoring_engine_name,
-                                   eager=False):
+    def get_scoring_engine_by_name(self, context, scoring_engine_name):
         return self._get_scoring_engine(
-            context, fieldname="name", value=scoring_engine_name, eager=eager)
+            context, fieldname="name", value=scoring_engine_name)

     def destroy_scoring_engine(self, scoring_engine_id):
         try:
@@ -1052,9 +1047,9 @@ class Connection(api.BaseConnection):
                 scoring_engine=scoring_engine_id)

     def update_scoring_engine(self, scoring_engine_id, values):
-        if 'uuid' in values:
+        if 'id' in values:
             raise exception.Invalid(
-                message=_("Cannot overwrite UUID for an existing "
+                message=_("Cannot overwrite ID for an existing "
                           "Scoring Engine."))

         try:
@@ -1070,67 +1065,3 @@ class Connection(api.BaseConnection):
         except exception.ResourceNotFound:
             raise exception.ScoringEngineNotFound(
                 scoring_engine=scoring_engine_id)
# ### SERVICES ### #
def _add_services_filters(self, query, filters):
if not filters:
filters = {}
plain_fields = ['id', 'name', 'host']
return self._add_filters(
query=query, model=models.Service, filters=filters,
plain_fields=plain_fields)
def get_service_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
query = model_query(models.Service)
if eager:
query = self._set_eager_options(models.Service, query)
query = self._add_services_filters(query, filters)
if not context.show_deleted:
query = query.filter_by(deleted_at=None)
return _paginate_query(models.Service, limit, marker,
sort_key, sort_dir, query)
def create_service(self, values):
try:
service = self._create(models.Service, values)
except db_exc.DBDuplicateEntry:
raise exception.ServiceAlreadyExists(name=values['name'],
host=values['host'])
return service
def _get_service(self, context, fieldname, value, eager):
try:
return self._get(context, model=models.Service,
fieldname=fieldname, value=value, eager=eager)
except exception.ResourceNotFound:
raise exception.ServiceNotFound(service=value)
def get_service_by_id(self, context, service_id, eager=False):
return self._get_service(
context, fieldname="id", value=service_id, eager=eager)
def get_service_by_name(self, context, service_name, eager=False):
return self._get_service(
context, fieldname="name", value=service_name, eager=eager)
def destroy_service(self, service_id):
try:
return self._destroy(models.Service, service_id)
except exception.ResourceNotFound:
raise exception.ServiceNotFound(service=service_id)
def update_service(self, service_id, values):
try:
return self._update(models.Service, service_id, values)
except exception.ResourceNotFound:
raise exception.ServiceNotFound(service=service_id)
def soft_delete_service(self, service_id):
try:
self._soft_delete(models.Service, service_id)
except exception.ResourceNotFound:
raise exception.ServiceNotFound(service=service_id)
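The removed SERVICES section above follows the same pattern as the other DB API resources: build a query, apply equality filters on a whitelist of plain fields, honour soft deletes, then paginate. A minimal standalone sketch of the plain-field filtering step, using illustrative names and plain dicts instead of Watcher's actual `_add_filters` helper and SQLAlchemy queries:

```python
# Hypothetical sketch of the plain-field filtering pattern used by
# _add_services_filters: every key in `filters` that names a whitelisted
# plain column becomes an equality predicate; unknown keys are ignored.
def apply_plain_filters(rows, filters, plain_fields):
    filters = filters or {}
    for field, value in filters.items():
        if field in plain_fields:
            # Keep only the rows whose column value matches exactly.
            rows = [r for r in rows if r.get(field) == value]
    return rows


services = [
    {'id': 1, 'name': 'watcher-api', 'host': 'node-1'},
    {'id': 2, 'name': 'watcher-decision-engine', 'host': 'node-2'},
]
filtered = apply_plain_filters(services, {'host': 'node-2'},
                               ['id', 'name', 'host'])
```

In the real DB API the predicates are added to a SQLAlchemy query object rather than applied to lists, and pagination happens afterwards via `_paginate_query`.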


@@ -27,15 +27,14 @@ from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy import Numeric
-from sqlalchemy import orm
+from sqlalchemy import schema
from sqlalchemy import String
from sqlalchemy import Text
from sqlalchemy.types import TypeDecorator, TEXT
-from sqlalchemy import UniqueConstraint

from watcher.common import paths

-SQL_OPTS = [
+sql_opts = [
    cfg.StrOpt('mysql_engine',
               default='InnoDB',
               help='MySQL engine to use.')
@@ -44,7 +43,7 @@ SQL_OPTS = [
_DEFAULT_SQL_CONNECTION = 'sqlite:///{0}'.format(
    paths.state_path_def('watcher.sqlite'))

-cfg.CONF.register_opts(SQL_OPTS, 'database')
+cfg.CONF.register_opts(sql_opts, 'database')
db_options.set_defaults(cfg.CONF, _DEFAULT_SQL_CONNECTION, 'watcher.sqlite')
@@ -58,7 +57,6 @@ def table_args():
class JsonEncodedType(TypeDecorator):
    """Abstract base type serialized as json-encoded string in db."""
    type = None
    impl = TEXT
@@ -83,13 +81,11 @@ class JsonEncodedType(TypeDecorator):
class JSONEncodedDict(JsonEncodedType):
    """Represents dict serialized as json-encoded string in db."""
    type = dict

class JSONEncodedList(JsonEncodedType):
    """Represents list serialized as json-encoded string in db."""
    type = list
@@ -115,39 +111,35 @@ class WatcherBase(models.SoftDeleteMixin,
Base = declarative_base(cls=WatcherBase)

-class Goal(Base):
-    """Represents a goal."""
-
-    __tablename__ = 'goals'
-    __table_args__ = (
-        UniqueConstraint('uuid', name='uniq_goals0uuid'),
-        UniqueConstraint('name', 'deleted', name='uniq_goals0name'),
-        table_args(),
-    )
-    id = Column(Integer, primary_key=True, autoincrement=True)
-    uuid = Column(String(36))
-    name = Column(String(63), nullable=False)
-    display_name = Column(String(63), nullable=False)
-    efficacy_specification = Column(JSONEncodedList, nullable=False)

class Strategy(Base):
    """Represents a strategy."""

    __tablename__ = 'strategies'
    __table_args__ = (
-        UniqueConstraint('uuid', name='uniq_strategies0uuid'),
-        UniqueConstraint('name', 'deleted', name='uniq_strategies0name'),
+        schema.UniqueConstraint('uuid', name='uniq_strategies0uuid'),
        table_args()
    )
-    id = Column(Integer, primary_key=True, autoincrement=True)
+    id = Column(Integer, primary_key=True)
    uuid = Column(String(36))
    name = Column(String(63), nullable=False)
    display_name = Column(String(63), nullable=False)
    goal_id = Column(Integer, ForeignKey('goals.id'), nullable=False)
    parameters_spec = Column(JSONEncodedDict, nullable=True)
-    goal = orm.relationship(Goal, foreign_keys=goal_id, lazy=None)

+class Goal(Base):
+    """Represents a goal."""
+
+    __tablename__ = 'goals'
+    __table_args__ = (
+        schema.UniqueConstraint('uuid', name='uniq_goals0uuid'),
+        table_args(),
+    )
+    id = Column(Integer, primary_key=True)
+    uuid = Column(String(36))
+    name = Column(String(63), nullable=False)
+    display_name = Column(String(63), nullable=False)
+    efficacy_specification = Column(JSONEncodedList, nullable=False)

class AuditTemplate(Base):
@@ -155,20 +147,18 @@ class AuditTemplate(Base):
    __tablename__ = 'audit_templates'
    __table_args__ = (
-        UniqueConstraint('uuid', name='uniq_audit_templates0uuid'),
-        UniqueConstraint('name', 'deleted', name='uniq_audit_templates0name'),
+        schema.UniqueConstraint('uuid', name='uniq_audit_templates0uuid'),
        table_args()
    )
    id = Column(Integer, primary_key=True)
    uuid = Column(String(36))
    name = Column(String(63), nullable=True)
    description = Column(String(255), nullable=True)
+    host_aggregate = Column(Integer, nullable=True)
    goal_id = Column(Integer, ForeignKey('goals.id'), nullable=False)
    strategy_id = Column(Integer, ForeignKey('strategies.id'), nullable=True)
-    scope = Column(JSONEncodedList)
-    version = Column(String(15), nullable=True)
-    goal = orm.relationship(Goal, foreign_keys=goal_id, lazy=None)
-    strategy = orm.relationship(Strategy, foreign_keys=strategy_id, lazy=None)
+    extra = Column(JSONEncodedDict)

class Audit(Base):
@@ -176,41 +166,19 @@ class Audit(Base):
    __tablename__ = 'audits'
    __table_args__ = (
-        UniqueConstraint('uuid', name='uniq_audits0uuid'),
+        schema.UniqueConstraint('uuid', name='uniq_audits0uuid'),
        table_args()
    )
-    id = Column(Integer, primary_key=True, autoincrement=True)
+    id = Column(Integer, primary_key=True)
    uuid = Column(String(36))
    audit_type = Column(String(20))
    state = Column(String(20), nullable=True)
+    deadline = Column(DateTime, nullable=True)
    parameters = Column(JSONEncodedDict, nullable=True)
    interval = Column(Integer, nullable=True)
+    host_aggregate = Column(Integer, nullable=True)
    goal_id = Column(Integer, ForeignKey('goals.id'), nullable=False)
    strategy_id = Column(Integer, ForeignKey('strategies.id'), nullable=True)
-    scope = Column(JSONEncodedList, nullable=True)
-    goal = orm.relationship(Goal, foreign_keys=goal_id, lazy=None)
-    strategy = orm.relationship(Strategy, foreign_keys=strategy_id, lazy=None)
-
-class ActionPlan(Base):
-    """Represents an action plan."""
-
-    __tablename__ = 'action_plans'
-    __table_args__ = (
-        UniqueConstraint('uuid', name='uniq_action_plans0uuid'),
-        table_args()
-    )
-    id = Column(Integer, primary_key=True, autoincrement=True)
-    uuid = Column(String(36))
-    first_action_id = Column(Integer)
-    audit_id = Column(Integer, ForeignKey('audits.id'), nullable=False)
-    strategy_id = Column(Integer, ForeignKey('strategies.id'), nullable=False)
-    state = Column(String(20), nullable=True)
-    global_efficacy = Column(JSONEncodedDict, nullable=True)
-    audit = orm.relationship(Audit, foreign_keys=audit_id, lazy=None)
-    strategy = orm.relationship(Strategy, foreign_keys=strategy_id, lazy=None)

class Action(Base):
@@ -218,10 +186,10 @@ class Action(Base):
    __tablename__ = 'actions'
    __table_args__ = (
-        UniqueConstraint('uuid', name='uniq_actions0uuid'),
+        schema.UniqueConstraint('uuid', name='uniq_actions0uuid'),
        table_args()
    )
-    id = Column(Integer, primary_key=True, autoincrement=True)
+    id = Column(Integer, primary_key=True)
    uuid = Column(String(36), nullable=False)
    action_plan_id = Column(Integer, ForeignKey('action_plans.id'),
                            nullable=False)
@@ -231,8 +199,22 @@ class Action(Base):
    state = Column(String(20), nullable=True)
    next = Column(String(36), nullable=True)
-    action_plan = orm.relationship(
-        ActionPlan, foreign_keys=action_plan_id, lazy=None)

+class ActionPlan(Base):
+    """Represents an action plan."""
+
+    __tablename__ = 'action_plans'
+    __table_args__ = (
+        schema.UniqueConstraint('uuid', name='uniq_action_plans0uuid'),
+        table_args()
+    )
+    id = Column(Integer, primary_key=True)
+    uuid = Column(String(36))
+    first_action_id = Column(Integer)
+    audit_id = Column(Integer, ForeignKey('audits.id'), nullable=False)
+    strategy_id = Column(Integer, ForeignKey('strategies.id'), nullable=False)
+    state = Column(String(20), nullable=True)
+    global_efficacy = Column(JSONEncodedDict, nullable=True)

class EfficacyIndicator(Base):
@@ -240,10 +222,10 @@ class EfficacyIndicator(Base):
    __tablename__ = 'efficacy_indicators'
    __table_args__ = (
-        UniqueConstraint('uuid', name='uniq_efficacy_indicators0uuid'),
+        schema.UniqueConstraint('uuid', name='uniq_efficacy_indicators0uuid'),
        table_args()
    )
-    id = Column(Integer, primary_key=True, autoincrement=True)
+    id = Column(Integer, primary_key=True)
    uuid = Column(String(36))
    name = Column(String(63))
    description = Column(String(255), nullable=True)
@@ -252,20 +234,16 @@ class EfficacyIndicator(Base):
    action_plan_id = Column(Integer, ForeignKey('action_plans.id'),
                            nullable=False)
-    action_plan = orm.relationship(
-        ActionPlan, foreign_keys=action_plan_id, lazy=None)

class ScoringEngine(Base):
    """Represents a scoring engine."""

    __tablename__ = 'scoring_engines'
    __table_args__ = (
-        UniqueConstraint('uuid', name='uniq_scoring_engines0uuid'),
-        UniqueConstraint('name', 'deleted', name='uniq_scoring_engines0name'),
+        schema.UniqueConstraint('uuid', name='uniq_scoring_engines0uuid'),
        table_args()
    )
-    id = Column(Integer, primary_key=True, autoincrement=True)
+    id = Column(Integer, primary_key=True)
    uuid = Column(String(36), nullable=False)
    name = Column(String(63), nullable=False)
    description = Column(String(255), nullable=True)
@@ -273,18 +251,3 @@ class ScoringEngine(Base):
    # The format might vary between different models (e.g. be JSON, XML or
    # even some custom format), the blob type should cover all scenarios.
    metainfo = Column(Text, nullable=True)
-
-class Service(Base):
-    """Represents a service entity"""
-
-    __tablename__ = 'services'
-    __table_args__ = (
-        UniqueConstraint('host', 'name', 'deleted',
-                         name="uniq_services0host0name0deleted"),
-        table_args()
-    )
-    id = Column(Integer, primary_key=True)
-    name = Column(String(255), nullable=False)
-    host = Column(String(255), nullable=False)
-    last_seen_up = Column(DateTime, nullable=True)


@@ -22,18 +22,17 @@ import six

from oslo_log import log

+from watcher.common.messaging.events import event as watcher_event
+from watcher.decision_engine.messaging import events as de_events
from watcher.decision_engine.planner import manager as planner_manager
from watcher.decision_engine.strategy.context import default as default_context
-from watcher import notifications
-from watcher import objects
-from watcher.objects import fields
+from watcher.objects import audit as audit_objects

LOG = log.getLogger(__name__)

@six.add_metaclass(abc.ABCMeta)
class BaseAuditHandler(object):

    @abc.abstractmethod
    def execute(self, audit_uuid, request_context):
        raise NotImplementedError()
@@ -73,40 +72,32 @@ class AuditHandler(BaseAuditHandler):
    def strategy_context(self):
        return self._strategy_context

-    def do_schedule(self, request_context, audit, solution):
-        try:
-            notifications.audit.send_action_notification(
-                request_context, audit,
-                action=fields.NotificationAction.PLANNER,
-                phase=fields.NotificationPhase.START)
-            self.planner.schedule(request_context, audit.id, solution)
-            notifications.audit.send_action_notification(
-                request_context, audit,
-                action=fields.NotificationAction.PLANNER,
-                phase=fields.NotificationPhase.END)
-        except Exception:
-            notifications.audit.send_action_notification(
-                request_context, audit,
-                action=fields.NotificationAction.PLANNER,
-                priority=fields.NotificationPriority.ERROR,
-                phase=fields.NotificationPhase.ERROR)
-            raise
+    def notify(self, audit_uuid, event_type, status):
+        event = watcher_event.Event()
+        event.type = event_type
+        event.data = {}
+        payload = {'audit_uuid': audit_uuid,
+                   'audit_status': status}
+        self.messaging.publish_status_event(event.type.name, payload)

-    @staticmethod
-    def update_audit_state(audit, state):
+    def update_audit_state(self, request_context, audit, state):
        LOG.debug("Update audit state: %s", state)
        audit.state = state
        audit.save()
+        self.notify(audit.uuid, de_events.Events.TRIGGER_AUDIT, state)

    def pre_execute(self, audit, request_context):
        LOG.debug("Trigger audit %s", audit.uuid)
        # change state of the audit to ONGOING
-        self.update_audit_state(audit, objects.audit.State.ONGOING)
+        self.update_audit_state(request_context, audit,
+                                audit_objects.State.ONGOING)

    def post_execute(self, audit, solution, request_context):
-        self.do_schedule(request_context, audit, solution)
+        self.planner.schedule(request_context, audit.id, solution)
        # change state of the audit to SUCCEEDED
-        self.update_audit_state(audit, objects.audit.State.SUCCEEDED)
+        self.update_audit_state(request_context, audit,
+                                audit_objects.State.SUCCEEDED)

    def execute(self, audit, request_context):
        try:
@@ -115,4 +106,5 @@ class AuditHandler(BaseAuditHandler):
            self.post_execute(audit, solution, request_context)
        except Exception as e:
            LOG.exception(e)
-            self.update_audit_state(audit, objects.audit.State.FAILED)
+            self.update_audit_state(request_context, audit,
+                                    audit_objects.State.FAILED)


@@ -25,7 +25,8 @@ from oslo_config import cfg

from watcher.common import context
from watcher.decision_engine.audit import base
-from watcher import objects
+from watcher.objects import action_plan as action_objects
+from watcher.objects import audit as audit_objects

CONF = cfg.CONF
@@ -55,11 +56,11 @@ class ContinuousAuditHandler(base.AuditHandler):
        return self._scheduler

    def _is_audit_inactive(self, audit):
-        audit = objects.Audit.get_by_uuid(
-            self.context_show_deleted, audit.uuid)
-        if audit.state in (objects.audit.State.CANCELLED,
-                           objects.audit.State.DELETED,
-                           objects.audit.State.FAILED):
+        audit = audit_objects.Audit.get_by_uuid(self.context_show_deleted,
+                                                audit.uuid)
+        if audit.state in (audit_objects.State.CANCELLED,
+                           audit_objects.State.DELETED,
+                           audit_objects.State.FAILED):
            # if audit isn't in active states, audit's job must be removed to
            # prevent using of inactive audit in future.
            job_to_delete = [job for job in self.jobs
@@ -76,13 +77,14 @@ class ContinuousAuditHandler(base.AuditHandler):
        solution = self.strategy_context.execute_strategy(
            audit, request_context)

-        if audit.audit_type == objects.audit.AuditType.CONTINUOUS.value:
+        if audit.audit_type == audit_objects.AuditType.CONTINUOUS.value:
            a_plan_filters = {'audit_uuid': audit.uuid,
-                              'state': objects.action_plan.State.RECOMMENDED}
-            action_plans = objects.ActionPlan.list(
-                request_context, filters=a_plan_filters)
+                              'state': action_objects.State.RECOMMENDED}
+            action_plans = action_objects.ActionPlan.list(
+                request_context,
+                filters=a_plan_filters)
            for plan in action_plans:
-                plan.state = objects.action_plan.State.CANCELLED
+                plan.state = action_objects.State.CANCELLED
                plan.save()
        return solution
@@ -91,18 +93,18 @@ class ContinuousAuditHandler(base.AuditHandler):
            self.execute(audit, request_context)

    def post_execute(self, audit, solution, request_context):
-        self.do_schedule(request_context, audit, solution)
+        self.planner.schedule(request_context, audit.id, solution)

    def launch_audits_periodically(self):
        audit_context = context.RequestContext(is_admin=True)
        audit_filters = {
-            'audit_type': objects.audit.AuditType.CONTINUOUS.value,
-            'state__in': (objects.audit.State.PENDING,
-                          objects.audit.State.ONGOING,
-                          objects.audit.State.SUCCEEDED)
+            'audit_type': audit_objects.AuditType.CONTINUOUS.value,
+            'state__in': (audit_objects.State.PENDING,
+                          audit_objects.State.ONGOING,
+                          audit_objects.State.SUCCEEDED)
        }
-        audits = objects.Audit.list(
-            audit_context, filters=audit_filters, eager=True)
+        audits = audit_objects.Audit.list(audit_context,
+                                          filters=audit_filters)
        scheduler_job_args = [job.args for job in self.scheduler.get_jobs()
                              if job.name == 'execute_audit']
        for audit in audits:


@@ -59,7 +59,7 @@ in any appropriate storage system (InfluxDB, OpenTSDB, MongoDB,...).
import abc

import six

-"""Work in progress Helper to query metrics"""
+""" Work in progress Helper to query metrics """

@six.add_metaclass(abc.ABCMeta)


@@ -37,9 +37,9 @@ class ServerConsolidation(base.EfficacySpecification):
            indicators.InstanceMigrationsCount(),
        ]

-    def get_global_efficacy_indicator(self, indicators_map=None):
+    def get_global_efficacy_indicator(self, indicators_map):
        value = 0
-        if indicators_map and indicators_map.instance_migrations_count > 0:
+        if indicators_map.instance_migrations_count > 0:
            value = (float(indicators_map.released_compute_nodes_count) /
                     float(indicators_map.instance_migrations_count)) * 100
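The `get_global_efficacy_indicator` arithmetic above reduces to a small pure function. This standalone sketch (hypothetical name and plain arguments instead of Watcher's indicators map object) shows the percentage computed and the guard against dividing by zero migrations:

```python
def global_efficacy(released_compute_nodes_count, instance_migrations_count):
    # Mirrors get_global_efficacy_indicator: compute nodes released per
    # instance migration, expressed as a percentage; 0 when there were
    # no migrations at all.
    value = 0.0
    if instance_migrations_count > 0:
        value = (float(released_compute_nodes_count) /
                 float(instance_migrations_count)) * 100
    return value
```

For example, releasing 2 compute nodes at the cost of 4 instance migrations yields a global efficacy of 50%.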


@@ -38,7 +38,6 @@ See :doc:`../architecture` for more details on this component.
from oslo_config import cfg

-from watcher.common import service_manager
from watcher.decision_engine.messaging import audit_endpoint
from watcher.decision_engine.model.collector import manager
@@ -51,6 +50,13 @@ WATCHER_DECISION_ENGINE_OPTS = [
               help='The topic name used for '
                    'control events, this topic '
                    'used for RPC calls'),
+    cfg.StrOpt('status_topic',
+               default='watcher.decision.status',
+               help='The topic name used for '
+                    'status events; this topic '
+                    'is used so as to notify'
+                    'the others components '
+                    'of the system'),
    cfg.ListOpt('notification_topics',
                default=['versioned_notifications', 'watcher_notifications'],
                help='The topic names from which notification events '
@@ -72,36 +78,23 @@ CONF.register_group(decision_engine_opt_group)
CONF.register_opts(WATCHER_DECISION_ENGINE_OPTS, decision_engine_opt_group)

-class DecisionEngineManager(service_manager.ServiceManager):
-
-    @property
-    def service_name(self):
-        return 'watcher-decision-engine'
-
-    @property
-    def api_version(self):
-        return '1.0'
-
-    @property
-    def publisher_id(self):
-        return CONF.watcher_decision_engine.publisher_id
-
-    @property
-    def conductor_topic(self):
-        return CONF.watcher_decision_engine.conductor_topic
-
-    @property
-    def notification_topics(self):
-        return CONF.watcher_decision_engine.notification_topics
-
-    @property
-    def conductor_endpoints(self):
-        return [audit_endpoint.AuditEndpoint]
-
-    @property
-    def notification_endpoints(self):
-        return self.collector_manager.get_notification_endpoints()
-
-    @property
-    def collector_manager(self):
-        return manager.CollectorManager()
+class DecisionEngineManager(object):
+
+    API_VERSION = '1.0'
+
+    def __init__(self):
+        self.api_version = self.API_VERSION
+        self.publisher_id = CONF.watcher_decision_engine.publisher_id
+        self.conductor_topic = CONF.watcher_decision_engine.conductor_topic
+        self.status_topic = CONF.watcher_decision_engine.status_topic
+        self.notification_topics = (
+            CONF.watcher_decision_engine.notification_topics)
+        self.conductor_endpoints = [audit_endpoint.AuditEndpoint]
+        self.status_endpoints = []
+        self.collector_manager = manager.CollectorManager()
+        self.notification_endpoints = (
+            self.collector_manager.get_notification_endpoints())


@@ -23,7 +23,7 @@ from oslo_log import log

from watcher.decision_engine.audit import continuous as continuous_handler
from watcher.decision_engine.audit import oneshot as oneshot_handler
-from watcher import objects
+from watcher.objects import audit as audit_objects

CONF = cfg.CONF
LOG = log.getLogger(__name__)
@@ -49,7 +49,7 @@ class AuditEndpoint(object):
        return self._messaging

    def do_trigger_audit(self, context, audit_uuid):
-        audit = objects.Audit.get_by_uuid(context, audit_uuid, eager=True)
+        audit = audit_objects.Audit.get_by_uuid(context, audit_uuid)
        self._oneshot_handler.execute(audit, context)

    def trigger_audit(self, context, audit_uuid):


@@ -1,7 +1,7 @@
# -*- encoding: utf-8 -*-
-# Copyright (c) 2016 b<>com
+# Copyright (c) 2015 b<>com
#
-# Authors: Vincent FRANCOISE <vincent.francoise@b-com.com>
+# Authors: Jean-Emile DARTOIS <jean-emile.dartois@b-com.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,11 +15,12 @@
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+#

-from oslo_config import cfg
+import enum

-from watcher.conf import service
-
-CONF = cfg.CONF
-
-service.register_opts(CONF)
+class Events(enum.Enum):
+    ALL = '*',
+    ACTION_PLAN = "action_plan"
+    TRIGGER_AUDIT = "trigger_audit"


@@ -19,8 +19,8 @@
#
"""
-A :ref:`Cluster Data Model <cluster_data_model_definition>` (or CDM) is a
-logical representation of the current state and topology of the :ref:`Cluster
+A :ref:`Cluster Data Model <cluster_data_model_definition>` is a logical
+representation of the current state and topology of the :ref:`Cluster
<cluster_definition>` :ref:`Managed resources <managed_resource_definition>`.

It is represented as a set of :ref:`Managed resources
@@ -31,8 +31,9 @@ to know the current relationships between the different :ref:`resources
during an :ref:`Audit <audit_definition>` and enables the :ref:`Strategy
<strategy_definition>` to request information such as:

-- What compute nodes are in a given :ref:`Audit Scope
-  <audit_scope_definition>`?
+- What compute nodes are in a given :ref:`Availability Zone
+  <availability_zone_definition>` or a given :ref:`Host Aggregate
+  <host_aggregates_definition>`?
- What :ref:`Instances <instance_definition>` are hosted on a given compute
  node?
- What is the current load of a compute node?
@@ -58,11 +59,8 @@ to know:
In the Watcher project, we aim at providing a some generic and basic
:ref:`Cluster Data Model <cluster_data_model_definition>` for each :ref:`Goal
<goal_definition>`, usable in the associated :ref:`Strategies
-<strategy_definition>` through a plugin-based mechanism which are called
-cluster data model collectors (or CDMCs). These CDMCs are responsible for
-loading and keeping up-to-date their associated CDM by listening to events and
-also periodically rebuilding themselves from the ground up. They are also
-directly accessible from the strategies classes. These CDMs are used to:
+<strategy_definition>` through a plugin-based mechanism that are directly
+accessible from the strategies classes in order to:

- simplify the development of a new :ref:`Strategy <strategy_definition>` for a
  given :ref:`Goal <goal_definition>` when there already are some existing


@@ -28,10 +28,16 @@ LOG = log.getLogger(__name__)
class NovaClusterDataModelCollector(base.BaseClusterDataModelCollector):
-    """Nova cluster data model collector
-
-    The Nova cluster data model collector creates an in-memory
-    representation of the resources exposed by the compute service.
-    """
+    """nova
+
+    *Description*
+
+    This Nova cluster data model collector creates an in-memory representation
+    of the resources exposed by the compute service.
+
+    *Spec URL*
+
+    <None>
+    """

    def __init__(self, config, osc=None):


@@ -14,7 +14,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from oslo_concurrency import lockutils
+import threading
+
 from oslo_log import log
 
 from watcher._i18n import _LW
@@ -27,6 +28,7 @@ class Mapping(object):
         self.model = model
         self.compute_node_mapping = {}
         self.instance_mapping = {}
+        self.lock = threading.Lock()
 
     def map(self, node, instance):
         """Select the node where the instance is launched
@@ -34,7 +36,9 @@ class Mapping(object):
         :param node: the node
         :param instance: the virtual machine or instance
         """
-        with lockutils.lock(__name__):
+        try:
+            self.lock.acquire()
             # init first
             if node.uuid not in self.compute_node_mapping.keys():
                 self.compute_node_mapping[node.uuid] = set()
@@ -45,6 +49,9 @@ class Mapping(object):
             # map instance => node
             self.instance_mapping[instance.uuid] = node.uuid
+        finally:
+            self.lock.release()
 
     def unmap(self, node, instance):
         """Remove the instance from the node
@@ -58,7 +65,8 @@ class Mapping(object):
         :rtype : object
         """
-        with lockutils.lock(__name__):
+        try:
+            self.lock.acquire()
             if str(node_uuid) in self.compute_node_mapping:
                 self.compute_node_mapping[str(node_uuid)].remove(
                     str(instance_uuid))
@@ -69,6 +77,8 @@ class Mapping(object):
                     _LW("Trying to delete the instance %(instance)s but it "
                         "was not found on node %(node)s") %
                     {'instance': instance_uuid, 'node': node_uuid})
+        finally:
+            self.lock.release()
 
     def get_mapping(self):
         return self.compute_node_mapping
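The `try: lock.acquire() ... finally: lock.release()` pattern in the hunk above is equivalent to using the lock as a context manager, since `threading.Lock` supports the `with` statement and releases the lock even if the body raises. A simplified, self-contained sketch of such a lock-guarded mapping (not the real Watcher class, which also tracks a model and logs warnings):

```python
import threading


class Mapping(object):
    """Simplified instance <-> node mapping guarded by a lock."""

    def __init__(self):
        self.compute_node_mapping = {}   # node uuid -> set of instance uuids
        self.instance_mapping = {}       # instance uuid -> node uuid
        self.lock = threading.Lock()

    def map(self, node_uuid, instance_uuid):
        # `with self.lock:` acquires the lock on entry and releases it on
        # exit, even on exceptions -- the same guarantee the explicit
        # try/finally acquire()/release() pattern provides.
        with self.lock:
            self.compute_node_mapping.setdefault(node_uuid, set())
            self.compute_node_mapping[node_uuid].add(instance_uuid)
            self.instance_mapping[instance_uuid] = node_uuid

    def unmap(self, node_uuid, instance_uuid):
        with self.lock:
            # discard() is a no-op when the instance is not mapped.
            self.compute_node_mapping.get(node_uuid, set()).discard(
                instance_uuid)
            self.instance_mapping.pop(instance_uuid, None)


m = Mapping()
m.map("node-1", "vm-a")
m.map("node-1", "vm-b")
m.unmap("node-1", "vm-a")
print(m.compute_node_mapping["node-1"])  # {'vm-b'}
```

The design trade-off visible in the diff: a per-instance `threading.Lock` only serializes access within one process, whereas `lockutils.lock(__name__)` keys the lock by module name, which is broader than necessary for protecting one object's dictionaries.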

View File

@@ -22,13 +22,11 @@ import six
 from watcher._i18n import _
 from watcher.common import exception
 from watcher.common import utils
-from watcher.decision_engine.model import base
 from watcher.decision_engine.model import element
 from watcher.decision_engine.model import mapping
 
 
-class ModelRoot(base.Model):
+class ModelRoot(object):
 
     def __init__(self, stale=False):
         self._nodes = utils.Struct()
         self._instances = utils.Struct()

View File

@@ -19,6 +19,8 @@
 import abc
 
 import six
 
+from watcher.common import rpc
+
 @six.add_metaclass(abc.ABCMeta)
 class NotificationEndpoint(object):
@@ -36,3 +38,10 @@ class NotificationEndpoint(object):
     @property
     def cluster_data_model(self):
         return self.collector.cluster_data_model
+
+    @property
+    def notifier(self):
+        if self._notifier is None:
+            self._notifier = rpc.get_notifier('decision-engine')
+        return self._notifier
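The `notifier` property added above is a lazy-initialization pattern: the notifier is only constructed on first access and then cached on the instance. A self-contained sketch of the same pattern, with a stub standing in for `rpc.get_notifier` (the stub's shape is an assumption, not Watcher's real RPC layer):

```python
class Notifier(object):
    """Stub standing in for an oslo.messaging-style notifier."""

    def __init__(self, publisher_id):
        self.publisher_id = publisher_id


def get_notifier(publisher_id):
    # Stand-in for watcher.common.rpc.get_notifier; illustrative only.
    return Notifier(publisher_id)


class NotificationEndpoint(object):

    def __init__(self):
        self._notifier = None

    @property
    def notifier(self):
        # Built on first access, then cached: endpoints that never emit
        # notifications never pay the construction cost.
        if self._notifier is None:
            self._notifier = get_notifier('decision-engine')
        return self._notifier


endpoint = NotificationEndpoint()
assert endpoint._notifier is None   # nothing built yet
n = endpoint.notifier               # first access builds the notifier
assert n is endpoint.notifier       # later accesses reuse the same object
print(n.publisher_id)               # decision-engine
```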

View File

@@ -128,7 +128,7 @@ class DefaultPlanner(base.BasePlanner):
         }
         new_action_plan = objects.ActionPlan(context, **action_plan_dict)
-        new_action_plan.create()
+        new_action_plan.create(context)
 
         return new_action_plan
@@ -145,7 +145,7 @@ class DefaultPlanner(base.BasePlanner):
         }
         new_efficacy_indicator = objects.EfficacyIndicator(
             context, **efficacy_indicator_dict)
-        new_efficacy_indicator.create()
+        new_efficacy_indicator.create(context)
         efficacy_indicators.append(new_efficacy_indicator)
 
         return efficacy_indicators
@@ -156,7 +156,7 @@ class DefaultPlanner(base.BasePlanner):
                 _action.get("action_type"))
             new_action = objects.Action(context, **_action)
-            new_action.create()
+            new_action.create(context)
             new_action.save()
             if parent_action:
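The hunks above switch between two conventions for database-backed objects: `create()` with the request context captured at construction time, versus `create(context)` with the context passed explicitly at each call. A minimal sketch of the difference; these classes are illustrative stand-ins, not Watcher's real objects layer.

```python
class Context(object):
    """Toy request context carrying only a user id."""

    def __init__(self, user):
        self.user = user


class ActionPlan(object):
    """Illustrative persisted object supporting both call styles."""

    def __init__(self, context, **fields):
        self._context = context   # captured at construction time
        self.fields = fields
        self.created_by = None

    def create(self, context=None):
        # An explicitly passed context wins; otherwise fall back to the
        # one captured in __init__. Passing it explicitly makes the
        # request each call runs under visible at the call site.
        ctx = context if context is not None else self._context
        self.created_by = ctx.user
        return self


ctx = Context(user="audit-1")
plan = ActionPlan(ctx, state="RECOMMENDED")
plan.create(ctx)                # explicit style, as in create(context)
print(plan.created_by)          # audit-1
```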

View File

@@ -20,8 +20,8 @@
 from oslo_config import cfg
 
 from watcher.common import exception
+from watcher.common.messaging import notification_handler
 from watcher.common import service
-from watcher.common import service_manager
 from watcher.common import utils
 from watcher.decision_engine import manager
@@ -43,35 +43,20 @@ class DecisionEngineAPI(service.Service):
             raise exception.InvalidUuidOrName(name=audit_uuid)
 
         return self.conductor_client.call(
-            context, 'trigger_audit', audit_uuid=audit_uuid)
+            context.to_dict(), 'trigger_audit', audit_uuid=audit_uuid)
 
 
-class DecisionEngineAPIManager(service_manager.ServiceManager):
+class DecisionEngineAPIManager(object):
 
-    @property
-    def service_name(self):
-        return None
+    API_VERSION = '1.0'
 
-    @property
-    def api_version(self):
-        return '1.0'
+    conductor_endpoints = []
+    status_endpoints = [notification_handler.NotificationHandler]
+    notification_endpoints = []
+    notification_topics = []
 
-    @property
-    def publisher_id(self):
-        return CONF.watcher_decision_engine.publisher_id
-
-    @property
-    def conductor_topic(self):
-        return CONF.watcher_decision_engine.conductor_topic
-
-    @property
-    def notification_topics(self):
-        return []
-
-    @property
-    def conductor_endpoints(self):
-        return []
-
-    @property
-    def notification_endpoints(self):
-        return []

View File

@@ -1,38 +0,0 @@
-# -*- encoding: utf-8 -*-
-# Copyright (c) 2016 Servionica
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-import abc
-
-import six
-
-from watcher.common import context
-
-
-@six.add_metaclass(abc.ABCMeta)
-class BaseScope(object):
-    """A base class for Scope mechanism
-
-    Child of this class is called when audit launches strategy. This strategy
-    requires Cluster Data Model which can be segregated to achieve audit scope.
-    """
-
-    def __init__(self, scope):
-        self.ctx = context.make_context()
-        self.scope = scope
-
-    @abc.abstractmethod
-    def get_scoped_model(self, cluster_model):
-        """Leave only nodes and instances proposed in the audit scope"""

Some files were not shown because too many files have changed in this diff.