Compare commits


108 Commits
1.6.0 ... 1.8.0

Author SHA1 Message Date
Zuul
40a653215f Merge "Zuul: Remove project name" 2018-02-07 07:24:53 +00:00
Zuul
1492f5d8dc Merge "Replace Chinese double quotes with English double quotes" 2018-02-07 07:22:41 +00:00
Zuul
76263f149a Merge "Fix issues with aggregate and granularity attributes" 2018-02-06 06:05:50 +00:00
James E. Blair
028006d15d Zuul: Remove project name
Zuul no longer requires the project-name for in-repo configuration.
Omitting it makes forking or renaming projects easier.

Change-Id: Ib3be82015be1d6853c44cf53faacb238237ad701
2018-02-05 14:18:38 -08:00
Alexander Chadin
d27ba8cc2a Fix issues with aggregate and granularity attributes
This patch set fixes issues that have appeared after merging
watcher-multi-datasource and strategy-requirements patches.
It is the final commit in the watcher-multi-datasource blueprint.

Partially-Implements: blueprint watcher-multi-datasource
Change-Id: I25b4cb0e1b85379ff0c4da9d0c1474380d75ce3a
2018-02-05 11:08:48 +00:00
chengebj5238
33750ce7a9 Replace Chinese double quotes with English double quotes
Change-Id: I566ce10064c3dc51b875fc973c0ad9b58449001c
2018-02-05 17:59:08 +08:00
Zuul
cb8d1a98d6 Merge "Fix get_compute_node_by_hostname in nova_helper" 2018-02-05 06:47:10 +00:00
Hidekazu Nakamura
f32252d510 Fix get_compute_node_by_hostname in nova_helper
If the hostname differs from the uuid in the compute CDM, the
get_compute_node_by_hostname method returns an empty result.
This patch set fixes it to return a compute node even when the
hostname differs from the uuid.

Change-Id: I6cbc0be1a79cc238f480caed9adb8dc31256754a
Closes-Bug: #1746162
2018-02-02 14:26:20 +09:00
Zuul
4849f8dde9 Merge "Add zone migration strategy document" 2018-02-02 04:51:26 +00:00
Hidekazu Nakamura
0cafdcdee9 Add zone migration strategy document
This patch set adds zone migration strategy document.

Change-Id: Ifd9d85d635977900929efd376f0d7990a6fec627
2018-02-02 09:35:58 +09:00
OpenStack Proposal Bot
3a70225164 Updated from global requirements
Change-Id: Ifb8d8d6cb1248eaf8715c84539d74fa04dd753dd
2018-02-01 07:36:19 +00:00
Zuul
892c766ac4 Merge "Fixed AttributeError in storage_model" 2018-01-31 13:58:53 +00:00
Zuul
63a3fd84ae Merge "Remove redundant import alias" 2018-01-31 12:45:21 +00:00
Zuul
287ace1dcc Merge "Update zone_migration comment" 2018-01-31 06:14:15 +00:00
Zuul
4b302e415e Merge "Zuul: Remove project name" 2018-01-30 12:22:41 +00:00
licanwei
f24744c910 Fixed AttributeError in storage_model
self.audit.scope should be self.audit_scope

Closes-Bug: #1746191

Change-Id: I0cce165a2bc1afd4c9e09c51e4d3250ee70d3705
2018-01-30 00:32:19 -08:00
Zuul
d9a85eda2c Merge "Imported Translations from Zanata" 2018-01-29 14:12:36 +00:00
Zuul
82c8633e42 Merge "[Doc] Add actuator strategy doc" 2018-01-29 14:12:35 +00:00
Hidekazu Nakamura
d3f23795f5 Update zone_migration comment
This patch updates the zone_migration comment for the documentation and
removes an unnecessary TODO.

Change-Id: Ib1eadad6496fe202e406108f432349c82696ea88
2018-01-29 17:48:48 +09:00
Hoang Trung Hieu
e7f4456a80 Zuul: Remove project name
Zuul no longer requires the project-name for in-repo configuration[1].
Omitting it makes forking or renaming projects easier.

[1] https://docs.openstack.org/infra/manual/drivers.html#consistent-naming-for-jobs-with-zuul-v3

Change-Id: Iddf89707289a22ea322c14d1b11f58840871304d
2018-01-29 07:24:44 +00:00
OpenStack Proposal Bot
a36a309e2e Updated from global requirements
Change-Id: I29ebfe2e3398dab6f2e22f3d97c16b72843f1e34
2018-01-29 00:42:54 +00:00
Hidekazu Nakamura
8e3affd9ac [Doc] Add actuator strategy doc
This patch adds actuator strategy document.

Change-Id: I5f0415754c83e4f152155988625ada2208d6c35a
2018-01-28 20:00:05 +09:00
OpenStack Proposal Bot
71e979cae0 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Ie34aafe6d9b54bb97469844d21de38d7c6249031
2018-01-28 07:16:20 +00:00
Luong Anh Tuan
6edfd34a53 Remove redundant import alias
This patch removes redundant import aliases and adds a pep8 hacking
check that rejects redundant import aliases.

Co-Authored-By: Dao Cong Tien <tiendc@vn.fujitsu.com>

Change-Id: I3207cb9f0eb4b4a029b7e822b9c59cf48d1e0f9d
Closes-Bug: #1745527
2018-01-26 09:11:43 +07:00
Alexander Chadin
0c8c32e69e Fix strategy state
Change-Id: I003bb3b41aac69cc40a847f52a50c7bc4cc8d020
2018-01-25 15:41:34 +03:00
Alexander Chadin
9138b7bacb Add datasources to strategies
This patch set adds datasources in place of the single datasource.

Change-Id: I94f17ae3a0b6a8990293dc9e33be1a2bd3432a14
2018-01-24 20:51:38 +03:00
Zuul
072822d920 Merge "Add baremetal strategy validation" 2018-01-24 14:59:14 +00:00
Zuul
f67ce8cca5 Merge "Add zone migration strategy" 2018-01-24 14:56:07 +00:00
Zuul
9e6f768263 Merge "Strategy requirements" 2018-01-24 14:53:47 +00:00
Zuul
ba9c89186b Merge "Update unreachable link" 2018-01-24 14:21:49 +00:00
Alexander Chadin
16e7d9c13b Add baremetal strategy validation
This patch set adds validation of baremetal model.

It also fixes PEP issues with storage capacity balance
strategy.

Change-Id: I53e37d91fa6c65f7c3d290747169007809100304
Depends-On: I177b443648301eb50da0da63271ecbfd9408bd4f
2018-01-24 14:35:52 +03:00
Zuul
c3536406bd Merge "Audit scoper for storage CDM" 2018-01-24 10:57:37 +00:00
Alexander Chadin
0c66fe2e65 Strategy requirements
This patch set adds a /state resource to the strategy API,
which allows retrieving strategy requirements.

Partially-Implements: blueprint check-strategy-requirements
Change-Id: I177b443648301eb50da0da63271ecbfd9408bd4f
2018-01-24 13:39:42 +03:00
Zuul
74933bf0ba Merge "Fix workload_stabilization unavailable nodes and instances" 2018-01-24 10:35:25 +00:00
Hidekazu Nakamura
1dae83da57 Add zone migration strategy
This patch adds hardware maintenance goal, efficacy and zone
migration strategy.

Change-Id: I5bfee421780233ffeea8c1539aba720ae554983d
Implements: blueprint zone-migration-strategy
2018-01-24 19:33:22 +09:00
Zuul
5ec8932182 Merge "Add storage capacity balance Strategy" 2018-01-24 10:22:25 +00:00
Alexander Chadin
701b258dc7 Fix workload_stabilization unavailable nodes and instances
This patch set excludes nodes and instances from auditing
if appropriate metrics aren't available.

Change-Id: I87c6c249e3962f45d082f92d7e6e0be04e101799
Closes-Bug: #1736982
2018-01-24 11:37:43 +03:00
gaofei
f7fcdf14d0 Update unreachable link
Change-Id: I74bbe5a8c4ca9df550f1279aa80a836d6a2f8a93
2018-01-24 14:40:43 +08:00
OpenStack Proposal Bot
47ba6c0808 Updated from global requirements
Change-Id: I4cbf5308061707e28c202f22e8a9bf8492742040
2018-01-24 01:42:12 +00:00
Zuul
5b5fbbedb4 Merge "Fix compute api ref link" 2018-01-23 15:16:19 +00:00
Zuul
a1c575bfc5 Merge "check audit name length" 2018-01-23 11:21:14 +00:00
deepak_mourya
27e887556d Fix compute api ref link
This fixes some compute API reference links.

Change-Id: Id5acc4d0f635f3d19b916721b6839a0eef544b2a
2018-01-23 09:23:55 +00:00
Alexander Chadin
891f6bc241 Adapt workload_balance strategy to multiple datasource backend
This patch set:
1. Removes nova, ceilometer and gnocchi properties.
2. Adds using of datasource_backend properties along with
   statistic_aggregation method.
3. Changes type of datasource config.

Change-Id: I09d2dce00378f0ee5381d7c85006752aea6975d2
Partially-Implements: blueprint watcher-multi-datasource
2018-01-23 11:51:02 +03:00
Alexander Chadin
5dd6817d47 Adapt noisy_neighbor strategy to multiple datasource backend
Partially-Implements: blueprint watcher-multi-datasource
Change-Id: Ibcd5d0776280bb68ed838f88ebfcde27fc1a3d35
2018-01-23 11:51:02 +03:00
Alexander Chadin
7cdcb4743e Adapt basic_consolidation strategy to multiple datasource backend
Change-Id: Ie30308fd08ed1fd103b70f58f1d17b3749a6fe04
2018-01-23 11:51:02 +03:00
licanwei
6d03c4c543 check audit name length
Audit names must be no more than 63 characters long.

Change-Id: I52adbd7e9f12dd4a8b6977756d788ee0e5d6391a
Closes-Bug: #1744231
2018-01-23 00:47:26 -08:00
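The 63-character limit above can be sketched as a small validation helper (illustrative only; the names here are hypothetical, not Watcher's actual API-layer check):

```python
MAX_AUDIT_NAME_LENGTH = 63  # limit enforced by the fix above


def validate_audit_name(name):
    """Reject audit names longer than 63 characters (hypothetical helper)."""
    if len(name) > MAX_AUDIT_NAME_LENGTH:
        raise ValueError(
            "Audit name exceeds %d characters: %r"
            % (MAX_AUDIT_NAME_LENGTH, name))
    return name
```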
aditi
bcc129cf94 Audit scoper for storage CDM
This patch adds an audit scoper for the Storage CDM.

Change-Id: I0c5b3b652027e1394fd7744d904397ce87ed35a1
Implements: blueprint audit-scoper-for-storage-data-model
2018-01-23 13:53:31 +05:30
Zuul
40cff311c6 Merge "Adapt workload_stabilization strategy to new datasource backend" 2018-01-23 01:08:32 +00:00
OpenStack Proposal Bot
1a48a7fc57 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I19a628bc7a0623e2f1ff8ab8794658bfe25801f5
2018-01-20 07:21:59 +00:00
Zuul
652aa54586 Merge "Update link address" 2018-01-19 11:40:25 +00:00
zhangdebo
42a3886ded Update link address
Link to new measurements is out of date and should be updated.
Change
https://docs.openstack.org/ceilometer/latest/contributor/new_meters.html#meters
to
https://docs.openstack.org/ceilometer/latest/contributor/measurements.html#new-measurements

Change-Id: Idc77e29a69a1f1eb9f8827fa74c9fde79e5619df
2018-01-19 07:59:15 +00:00
licanwei
3430493de1 Fix tempest devstack error
Devstack failed because mysql wasn't enabled.

Change-Id: Ifc1c00f2dddd0f3d67c6672d3b9d3d4bd78a4a90
Closes-Bug: #1744224
2018-01-18 23:33:08 -08:00
licanwei
f5bcf9d355 Add storage capacity balance Strategy
This patch adds Storage Capacity Balance Strategy to balance the
storage capacity through volume migration.

Change-Id: I52ea7ce00deb609a2f668db330f1fbc1c9932613
Implements: blueprint storage-workload-balance
2018-01-18 22:18:18 -08:00
Zuul
d809523bef Merge "Add baremetal data model" 2018-01-18 10:38:12 +00:00
Zuul
bfe3c28986 Merge "Fix compute scope test bug" 2018-01-18 09:37:24 +00:00
OpenStack Proposal Bot
3c8caa3d0a Updated from global requirements
Change-Id: I4814a236f5d015ee25b9de95dd1f3f97e375d382
2018-01-18 03:39:36 +00:00
Zuul
766d064dd0 Merge "Update pike install supermark to queens" 2018-01-17 12:34:35 +00:00
Alexander Chadin
ce196b68c4 Adapt workload_stabilization strategy to new datasource backend
This patch set:
1. Removes nova, ceilometer and gnocchi properties.
2. Adds using of datasource_backend properties along with
   statistic_aggregation method.
3. Changes type of datasource config.

Change-Id: I4a2f05772248fddd97a41e27be4094eb59ee0bdb
Partially-Implements: blueprint watcher-multi-datasource
2018-01-17 13:01:05 +03:00
OpenStack Proposal Bot
42130c42a1 Updated from global requirements
Change-Id: I4ef734eeaeee414c3e6340490f1146d537370127
2018-01-16 12:57:22 +00:00
caoyuan
1a8639d256 Update pike install supermark to queens
Change-Id: If981c77518d0605b4113f4bb4345d152545ffc52
2018-01-15 11:56:36 +00:00
zhang.lei
1702fe1a83 Add the title of API Guide
Currently, the title of the API Guide is missing [1]. We should add a
title just like other projects do [2].

[1] https://docs.openstack.org/watcher/latest/api
[2] https://developer.openstack.org/api-ref/application-catalog

Change-Id: I012d746e99a68fc5f259a189188d9cea00d5a4f7
2018-01-13 08:04:36 +00:00
aditi
354ebd35cc Fix compute scope test bug
We were excluding 'INSTANCE_6', which belongs to 'NODE_3' in
scenario_1.xml [1], from the scope. But NODE_3 had already been removed
from the model because it is not in scope.

So, this patch adds 'AZ3' to fake_scope.

[1] https://github.com/openstack/watcher/blob/master/watcher/tests/decision_engine/model/data/scenario_1.xml
Closes-Bug: #1737901

Change-Id: Ib1aaca7045908418ad0c23b718887cd89db98a83
2018-01-12 16:17:25 +05:30
Zuul
7297603f65 Merge "reset job interval when audit was updated" 2018-01-11 09:12:38 +00:00
Zuul
9626cb1356 Merge "check actionplan state when deleting actionplan" 2018-01-11 09:12:37 +00:00
Zuul
9e027940d7 Merge "use current weighted sd as min_sd when starting to simulate migrations" 2018-01-11 08:48:43 +00:00
Zuul
3754938d96 Merge "Set apscheduler logs to WARN level" 2018-01-11 05:39:10 +00:00
Zuul
8a7f930a64 Merge "update audit API description" 2018-01-11 05:32:50 +00:00
Zuul
f7e506155b Merge "Fix configuration doc link" 2018-01-10 17:02:26 +00:00
Yumeng_Bao
54da2a75fb Add baremetal data model
Change-Id: I57b7bb53b3bc84ad383ae485069274f5c5362c50
Implements: blueprint build-baremetal-data-model-in-watcher
2018-01-10 14:46:41 +08:00
Zuul
5cbb9aca7e Merge "bug fix remove volume migration type 'cold'" 2018-01-10 06:15:01 +00:00
Alexander Chadin
bd79882b16 Set apscheduler logs to WARN level
This patch set sets the apscheduler log level to WARN.

Closes-Bug: #1742153
Change-Id: Idbb4b3e16187afc5c3969096deaf3248fcef2164
2018-01-09 16:30:14 +03:00
licanwei
960c50ba45 Fix configuration doc link
Change-Id: I7b144194287514144948f8547bc45d6bc4551a52
2018-01-07 23:36:20 -08:00
licanwei
9411f85cd2 update audit API description
Change-Id: I1d3eb9364fb5597788a282d275c71f5a314a0923
2018-01-02 23:51:05 -08:00
licanwei
b4370f0461 update action API description
POST/PATCH/DELETE requests to the actions API aren't permitted.

Change-Id: I4126bcc6bf6fe2628748d1f151617a38be06efd8
2017-12-28 22:06:33 -08:00
Zuul
97799521f9 Merge "correct audit parameter typo" 2017-12-28 10:54:57 +00:00
suzhengwei
96fa7f33ac use current weighted sd as min_sd when starting to simulate migrations
If a fixed value (usually 1 or 2) is used as the min_sd when starting
to simulate migrations, the first simulate_migration case will always
be less than the min_sd and enter the solution, even though the
migration would increase the weighted sd. This is unreasonable and
makes instances migrate back and forth among hosts.

Change-Id: I7813c4c92c380c489c349444b85187c5611d9c92
Closes-Bug: #1739723
2017-12-27 15:00:57 +03:00
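The fix can be reduced to a minimal sketch (function and variable names are hypothetical, and plain per-host standard deviation stands in for the strategy's weighted sd): the sd of the current, unmigrated state is used as the starting min_sd, so a simulated migration is accepted only if it actually improves balance:

```python
import statistics


def weighted_sd(host_loads):
    """Stand-in for the strategy's weighted standard deviation."""
    return statistics.pstdev(host_loads)


def pick_migrations(current_loads, candidates):
    """candidates: list of (migration_name, per-host loads after it)."""
    # Start from the sd of the current state, not a fixed value like 1 or 2.
    min_sd = weighted_sd(current_loads)
    solution = []
    for name, loads_after in candidates:
        sd = weighted_sd(loads_after)
        if sd < min_sd:  # accept only migrations that lower the sd
            min_sd = sd
            solution.append(name)
    return solution
```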
Zuul
1c2d0aa1f2 Merge "Updated from global requirements" 2017-12-27 10:00:01 +00:00
licanwei
070aed7076 correct audit parameter typo
Change-Id: Id98294a093ac9a704791cdbf52046ce1377f1796
2017-12-25 23:52:43 -08:00
Zuul
2b402d3cbf Merge "Fix watcher audit list command" 2017-12-26 04:49:49 +00:00
Zuul
cca3e75ac1 Merge "Add Datasource Abstraction" 2017-12-26 03:02:36 +00:00
OpenStack Proposal Bot
6f27275f44 Updated from global requirements
Change-Id: I26c1f4be398496b88b69094ec1804b07f7c1d7f1
2017-12-23 10:18:41 +00:00
Alexander Chadin
95548af426 Fix watcher audit list command
This patch set adds a data migration version that fills unnamed audits
with a name of the form strategy.name + '-' + audit.created_at.

Closes-Bug: #1738758
Change-Id: I1d65b3110166e9f64ce5b80a34672d24d629807d
2017-12-22 08:43:28 +00:00
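The backfill rule above (strategy.name + '-' + audit.created_at) can be sketched as follows; the helper name is hypothetical, and the real code runs as a database data migration:

```python
def backfill_audit_name(strategy_name, created_at, existing_name=None):
    """Fill in a name for audits created before names were populated."""
    if existing_name:
        return existing_name  # already named; leave untouched
    return "%s-%s" % (strategy_name, created_at)
```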
licanwei
cdc847d352 check actionplan state when deleting actionplan
If an action plan is 'ONGOING' or 'PENDING', don't delete it.

Change-Id: I8bfa31a70bba0a7adb1bfd09fc22e6a66b9ebf3a
Closes-Bug: #1738360
2017-12-21 22:32:09 -08:00
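The state guard above is simple enough to sketch directly (the state strings come from the commit message; the helper name is hypothetical):

```python
# Action plans in these states must not be deleted.
PROTECTED_STATES = ("ONGOING", "PENDING")


def can_delete_action_plan(state):
    """Return True only when deleting the action plan is allowed."""
    return state not in PROTECTED_STATES
```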
Zuul
b69244f8ef Merge "TrivialFix: remove redundant import alias" 2017-12-21 15:43:42 +00:00
Dao Cong Tien
cbd6d88025 TrivialFix: remove redundant import alias
Change-Id: Idf53683def6588e626144ecc3b74033d57ab3f87
2017-12-21 20:09:07 +07:00
Zuul
028d7c939c Merge "check audit state when deleting audit" 2017-12-20 09:04:02 +00:00
licanwei
a8fa969379 check audit state when deleting audit
If an audit is 'ONGOING' or 'PENDING', don't delete it.

Change-Id: Iac714e7e78e7bb5b52f401e5b2ad0e1d8af8bb45
Closes-Bug: #1738358
2017-12-19 18:09:42 -08:00
licanwei
80ee4b29f5 reset job interval when audit was updated
When we update an existing audit's interval, the 'execute_audit' job
still runs at the old interval.
We need to update the interval of the 'execute_audit' job as well.

Change-Id: I402efaa6b2fd3a454717c3df9746c827927ffa91
Closes-Bug: #1738140
2017-12-19 17:57:37 -08:00
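Reduced to a stdlib sketch, the fix keeps the scheduled job's interval in sync with the audit (Watcher actually reschedules an apscheduler job; the toy registry and names below are illustrative only):

```python
class AuditJobTable(object):
    """Toy registry standing in for the 'execute_audit' job scheduler."""

    def __init__(self):
        self.jobs = {}  # audit_uuid -> interval in seconds

    def schedule(self, audit_uuid, interval):
        self.jobs[audit_uuid] = interval

    def update_audit(self, audit_uuid, new_interval):
        # Before the fix, the job kept its old interval after an audit
        # update; the fix pushes the new interval into the job as well.
        self.jobs[audit_uuid] = new_interval
```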
Zuul
e562c9173c Merge "Updated from global requirements" 2017-12-19 16:38:39 +00:00
OpenStack Proposal Bot
ec0c359037 Updated from global requirements
Change-Id: I96d4a5a7e2b05df3f06d7c08f64cd9bcf83ff99b
2017-12-19 01:52:42 +00:00
Andreas Jaeger
3b6bef180b Fix releasenotes build
Remove a stray import of watcher project that breaks releasenotes build.

Change-Id: I4d107449b88adb19a3f269b2f33221addef0d9d6
2017-12-18 15:39:25 +01:00
Zuul
640e4e1fea Merge "Update getting scoped storage CDM" 2017-12-18 14:31:39 +00:00
Zuul
eeb817cd6e Merge "listen to 'compute.instance.rebuild.end' event" 2017-12-18 13:12:26 +00:00
Hidekazu Nakamura
c6afa7c320 Update getting scoped storage CDM
Now that CDM scoping is implemented, retrieval of the scoped storage
model has to be updated.
This patch updates how the storage cluster data model is retrieved.

Change-Id: Iefc22b54995aa8d2f3a7b3698575f6eb800d4289
2017-12-16 15:20:58 +00:00
OpenStack Proposal Bot
9ccd17e40b Updated from global requirements
Change-Id: I0af2c9fd266f925af5e3e8731b37a00dab91d6a8
2017-12-15 22:24:15 +00:00
Zuul
2a7e0d652c Merge "'get_volume_type_by_backendname' returns a list" 2017-12-14 06:18:04 +00:00
Zuul
a94e35b60e Merge "Fix 'unable to exclude instance'" 2017-12-14 05:38:34 +00:00
Zuul
72e3d5c7f9 Merge "Add and identify excluded instances in compute CDM" 2017-12-13 13:34:33 +00:00
aditi
be56441e55 Fix 'unable to exclude instance'
Change-Id: I1599a86a2ba7d3af755fb1412a5e38516c736957
Closes-Bug: #1736129
2017-12-12 10:29:35 +00:00
Zuul
aa2b213a45 Merge "Register default policies in code" 2017-12-12 03:38:13 +00:00
Zuul
668513d771 Merge "Updated from global requirements" 2017-12-12 02:57:47 +00:00
Lance Bragstad
0242d33adb Register default policies in code
This commit registers all policies formally kept in policy.json as
defaults in code. This is an effort to make policy management easier
for operators. More information on this initiative can be found
below:

  https://governance.openstack.org/tc/goals/queens/policy-in-code.html

bp policy-and-docs-in-code

Change-Id: Ibab08f8e1c95b86e08737c67a39c293566dbabc7
2017-12-11 15:19:10 +03:00
suzhengwei
c38dc9828b listen to 'compute.instance.rebuild.end' event
In an integrated cloud environment there can be many solutions that
relocate compute resources heavily. Watcher should listen to all the
notifications that represent compute resource changes in order to keep
the compute CDM updated. Otherwise the compute CDM becomes stale and
Watcher cannot work steadily and harmoniously.

Change-Id: I793131dd8f24f1ac5f5a6a070bb4fe7980c8dfb2
Implements:blueprint listen-all-necessary-notifications
2017-12-08 16:18:35 +08:00
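The idea can be sketched as a dispatch table: each notification type maps to a CDM update, and this change adds the rebuild-end event to the map (event names are from the commit; the handler and model shapes are hypothetical):

```python
def _update_instance(model, payload):
    # Stand-in for refreshing one instance entry in the compute CDM.
    model[payload["instance_id"]] = payload


HANDLERS = {
    "compute.instance.update": _update_instance,
    "compute.instance.rebuild.end": _update_instance,  # added by this change
}


def handle_notification(model, event_type, payload):
    """Apply a notification to the CDM; unknown events are ignored."""
    handler = HANDLERS.get(event_type)
    if handler is None:
        return False  # a missing mapping is how the CDM goes stale
    handler(model, payload)
    return True
```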
OpenStack Proposal Bot
4ce1a9096b Updated from global requirements
Change-Id: I04a2a04de3b32570bb0afaf9eb736976e888a031
2017-12-07 13:53:09 +00:00
Yumeng_Bao
02163d64aa bug fix remove volume migration type 'cold'
Migration action 'cold' is not intuitive for developers and users,
so this patch replaces it with 'migrate' and 'retype'.

Change-Id: I58acac741499f47e79630a6031d44088681e038a
Closes-Bug: #1733247
2017-12-06 18:03:25 +08:00
suzhengwei
d91f0bff22 Add and identify excluded instances in compute CDM
Change-Id: If03893c5e9b6a37e1126ad91e4f3bfafe0f101d9
Implements:blueprint compute-cdm-include-all-instances
2017-12-06 17:43:42 +08:00
aditi
e401cb7c9d Add Datasource Abstraction
This patch set adds a datasource abstraction layer.

Change-Id: Id828e427b998aa34efa07e04e615c82c5730d3c9
Partially-Implements: blueprint watcher-multi-datasource
2017-12-05 17:33:04 +03:00
licanwei
fa31341bbb 'get_volume_type_by_backendname' returns a list
A storage pool can have many volume types, so
'get_volume_type_by_backendname' should return a list of types.

Closes-Bug: #1733257
Change-Id: I877d5886259e482089ed0f9944d97bb99f375824
2017-11-26 23:28:56 -08:00
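The change can be illustrated with a simplified filter over volume types (the dict shape below is a simplified stand-in for Cinder volume types, not the real helper's signature):

```python
def get_volume_type_by_backendname(volume_types, backend_name):
    """Return ALL volume types bound to a backend, not just the first."""
    return [
        vt["name"] for vt in volume_types
        if vt.get("extra_specs", {}).get("volume_backend_name") == backend_name
    ]
```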
125 changed files with 7536 additions and 1114 deletions

View File

@@ -1,5 +1,4 @@
- project:
name: openstack/watcher
check:
jobs:
- watcher-tempest-multinode

View File

@@ -42,7 +42,7 @@ WATCHER_AUTH_CACHE_DIR=${WATCHER_AUTH_CACHE_DIR:-/var/cache/watcher}
WATCHER_CONF_DIR=/etc/watcher
WATCHER_CONF=$WATCHER_CONF_DIR/watcher.conf
WATCHER_POLICY_JSON=$WATCHER_CONF_DIR/policy.json
WATCHER_POLICY_YAML=$WATCHER_CONF_DIR/policy.yaml.sample
WATCHER_DEVSTACK_DIR=$WATCHER_DIR/devstack
WATCHER_DEVSTACK_FILES_DIR=$WATCHER_DEVSTACK_DIR/files
@@ -106,7 +106,25 @@ function configure_watcher {
# Put config files in ``/etc/watcher`` for everyone to find
sudo install -d -o $STACK_USER $WATCHER_CONF_DIR
install_default_policy watcher
local project=watcher
local project_uc
project_uc=$(echo watcher|tr a-z A-Z)
local conf_dir="${project_uc}_CONF_DIR"
# eval conf dir to get the variable
conf_dir="${!conf_dir}"
local project_dir="${project_uc}_DIR"
# eval project dir to get the variable
project_dir="${!project_dir}"
local sample_conf_dir="${project_dir}/etc/${project}"
local sample_policy_dir="${project_dir}/etc/${project}/policy.d"
local sample_policy_generator="${project_dir}/etc/${project}/oslo-policy-generator/watcher-policy-generator.conf"
# first generate policy.yaml
oslopolicy-sample-generator --config-file $sample_policy_generator
# then optionally copy over policy.d
if [[ -d $sample_policy_dir ]]; then
cp -r $sample_policy_dir $conf_dir/policy.d
fi
# Rebuild the config file from scratch
create_watcher_conf
@@ -163,7 +181,7 @@ function create_watcher_conf {
iniset $WATCHER_CONF api host "$WATCHER_SERVICE_HOST"
iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT"
iniset $WATCHER_CONF oslo_policy policy_file $WATCHER_POLICY_JSON
iniset $WATCHER_CONF oslo_policy policy_file $WATCHER_POLICY_YAML
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_userid $RABBIT_USERID
iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
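The `${!conf_dir}` and `${!project_dir}` expansions in the hunk above use bash indirect expansion (bash-specific, not POSIX sh): the variable's value names another variable, and `${!var}` reads that other variable's value. A standalone sketch:

```shell
# Indirect expansion: resolve "<PROJECT>_CONF_DIR" to its value (bash only).
WATCHER_CONF_DIR=/etc/watcher
project_uc=$(echo watcher | tr a-z A-Z)   # "WATCHER"
conf_dir="${project_uc}_CONF_DIR"         # the NAME "WATCHER_CONF_DIR"
conf_dir="${!conf_dir}"                   # its VALUE "/etc/watcher"
echo "$conf_dir"
```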

View File

@@ -3,6 +3,9 @@
# Make sure rabbit is enabled
enable_service rabbit
# Make sure mysql is enabled
enable_service mysql
# Enable Watcher services
enable_service watcher-api
enable_service watcher-decision-engine

View File

@@ -1,3 +1,7 @@
==================================================
OpenStack Infrastructure Optimization Service APIs
==================================================
.. toctree::
:maxdepth: 1

View File

@@ -200,8 +200,8 @@ configuration file, in order:
Although some configuration options are mentioned here, it is recommended that
you review all the `available options
<https://git.openstack.org/cgit/openstack/watcher/tree/etc/watcher/watcher.conf.sample>`_
you review all the :ref:`available options
<watcher_sample_configuration_files>`
so that the watcher service is configured for your needs.
#. The Watcher Service stores information in a database. This guide uses the
@@ -391,7 +391,7 @@ Ceilometer is designed to collect measurements from OpenStack services and from
other external components. If you would like to add new meters to the currently
existing ones, you need to follow the documentation below:
#. https://docs.openstack.org/ceilometer/latest/contributor/new_meters.html#meters
#. https://docs.openstack.org/ceilometer/latest/contributor/measurements.html#new-measurements
The Ceilometer collector uses a pluggable storage system, meaning that you can
pick any database system you prefer.

View File

@@ -263,7 +263,7 @@ requires new metrics not covered by Ceilometer, you can add them through a
`Ceilometer plugin`_.
.. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/decision_engine/cluster/history/ceilometer.py
.. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/datasource/ceilometer.py
.. _`Ceilometer developer guide`: https://docs.openstack.org/ceilometer/latest/contributor/architecture.html#storing-accessing-the-data
.. _`Ceilometer`: https://docs.openstack.org/ceilometer/latest
.. _`Monasca`: https://github.com/openstack/monasca-api/blob/master/docs/monasca-api-spec.md

View File

@@ -267,7 +267,7 @@ the same goal and same workload of the :ref:`Cluster <cluster_definition>`.
Project
=======
:ref:`Projects <project_definition>` represent the base unit of ownership
:ref:`Projects <project_definition>` represent the base unit of "ownership"
in OpenStack, in that all :ref:`resources <managed_resource_definition>` in
OpenStack should be owned by a specific :ref:`project <project_definition>`.
In OpenStack Identity, a :ref:`project <project_definition>` must be owned by a

View File

@@ -36,4 +36,4 @@ https://docs.openstack.org/watcher/latest/glossary.html
This chapter assumes a working setup of OpenStack following the
`OpenStack Installation Tutorial
<https://docs.openstack.org/pike/install/>`_.
<https://docs.openstack.org/queens/install/>`_.

View File

@@ -6,4 +6,4 @@ Next steps
Your OpenStack environment now includes the watcher service.
To add additional services, see
https://docs.openstack.org/pike/install/.
https://docs.openstack.org/queens/install/.

View File

@@ -0,0 +1,86 @@
=============
Actuator
=============
Synopsis
--------
**display name**: ``Actuator``
**goal**: ``unclassified``
.. watcher-term:: watcher.decision_engine.strategy.strategies.actuation
Requirements
------------
Metrics
*******
None
Cluster data model
******************
None
Actions
*******
Default Watcher's actions.
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
==================== ====== ===================== =============================
parameter type default Value description
==================== ====== ===================== =============================
``actions`` array None Actions to be executed.
==================== ====== ===================== =============================
The elements of actions array are:
==================== ====== ===================== =============================
parameter type default Value description
==================== ====== ===================== =============================
``action_type`` string None Action name defined in
setup.cfg(mandatory)
``resource_id`` string None Resource_id of the action.
``input_parameters`` object None Input_parameters of the
action(mandatory).
==================== ====== ===================== =============================
Efficacy Indicator
------------------
None
Algorithm
---------
This strategy creates an action plan from a predefined set of actions.
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 unclassified --strategy actuator
$ openstack optimize audit create -a at1 \
-p actions='[{"action_type": "migrate", "resource_id": "56a40802-6fde-4b59-957c-c84baec7eaed", "input_parameters": {"migration_type": "live", "source_node": "s01"}}]'
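Per the parameter tables above, each element of the actions array must carry action_type and input_parameters. A stdlib sketch of that check (a hypothetical helper, not Watcher's real validator):

```python
import json


def validate_actions(actions_json):
    """Check the mandatory fields from the actuator parameter table."""
    actions = json.loads(actions_json)
    for action in actions:
        for field in ("action_type", "input_parameters"):
            if field not in action:
                raise ValueError("missing mandatory field: %s" % field)
    return actions
```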
External Links
--------------
None

View File

@@ -0,0 +1,154 @@
==============
Zone migration
==============
Synopsis
--------
**display name**: ``Zone migration``
**goal**: ``hardware_maintenance``
.. watcher-term:: watcher.decision_engine.strategy.strategies.zone_migration
Requirements
------------
Metrics
*******
None
Cluster data model
******************
Default Watcher's Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Storage cluster data model is also required:
.. watcher-term:: watcher.decision_engine.model.collector.cinder.CinderClusterDataModelCollector
Actions
*******
Default Watcher's actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migrate``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
* - ``volume_migrate``
- .. watcher-term:: watcher.applier.actions.volume_migration.VolumeMigrate
Planner
*******
Default Watcher's planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
======================== ======== ============= ==============================
parameter type default Value description
======================== ======== ============= ==============================
``compute_nodes`` array None Compute nodes to migrate.
``storage_pools`` array None Storage pools to migrate.
``parallel_total`` integer 6 The number of actions to be
run in parallel in total.
``parallel_per_node`` integer 2 The number of actions to be
run in parallel per compute
node.
``parallel_per_pool`` integer 2 The number of actions to be
run in parallel per storage
pool.
``priority`` object None List prioritizes instances
and volumes.
``with_attached_volume`` boolean False False: Instances will migrate
after all volumes migrate.
True: An instance will migrate
after the attached volumes
migrate.
======================== ======== ============= ==============================
The elements of compute_nodes array are:
============= ======= =============== =============================
parameter type default Value description
============= ======= =============== =============================
``src_node`` string None Compute node from which
instances migrate(mandatory).
``dst_node`` string None Compute node to which
instances migrate.
============= ======= =============== =============================
The elements of storage_pools array are:
============= ======= =============== ==============================
parameter type default Value description
============= ======= =============== ==============================
``src_pool`` string None Storage pool from which
volumes migrate(mandatory).
``dst_pool`` string None Storage pool to which
volumes migrate.
``src_type`` string None Source volume type(mandatory).
``dst_type`` string None Destination volume type
(mandatory).
============= ======= =============== ==============================
The elements of priority object are:
================ ======= =============== ======================
parameter type default Value description
================ ======= =============== ======================
``project`` array None Project names.
``compute_node`` array None Compute node names.
``storage_pool`` array None Storage pool names.
``compute`` enum None Instance attributes.
|compute|
``storage`` enum None Volume attributes.
|storage|
================ ======= =============== ======================
.. |compute| replace:: ["vcpu_num", "mem_size", "disk_size", "created_at"]
.. |storage| replace:: ["size", "created_at"]
Efficacy Indicator
------------------
.. watcher-func::
:format: literal_block
watcher.decision_engine.goal.efficacy.specs.HardwareMaintenance.get_global_efficacy_indicator
Algorithm
---------
For more information on the zone migration strategy please refer
to: http://specs.openstack.org/openstack/watcher-specs/specs/queens/implemented/zone-migration-strategy.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 hardware_maintenance --strategy zone_migration
$ openstack optimize audit create -a at1 \
-p compute_nodes='[{"src_node": "s01", "dst_node": "d01"}]'
External Links
--------------
None

View File

@@ -0,0 +1,3 @@
[DEFAULT]
output_file = /etc/watcher/policy.yaml.sample
namespace = watcher

View File

@@ -1,45 +0,0 @@
{
"admin_api": "role:admin or role:administrator",
"show_password": "!",
"default": "rule:admin_api",
"action:detail": "rule:default",
"action:get": "rule:default",
"action:get_all": "rule:default",
"action_plan:delete": "rule:default",
"action_plan:detail": "rule:default",
"action_plan:get": "rule:default",
"action_plan:get_all": "rule:default",
"action_plan:update": "rule:default",
"audit:create": "rule:default",
"audit:delete": "rule:default",
"audit:detail": "rule:default",
"audit:get": "rule:default",
"audit:get_all": "rule:default",
"audit:update": "rule:default",
"audit_template:create": "rule:default",
"audit_template:delete": "rule:default",
"audit_template:detail": "rule:default",
"audit_template:get": "rule:default",
"audit_template:get_all": "rule:default",
"audit_template:update": "rule:default",
"goal:detail": "rule:default",
"goal:get": "rule:default",
"goal:get_all": "rule:default",
"scoring_engine:detail": "rule:default",
"scoring_engine:get": "rule:default",
"scoring_engine:get_all": "rule:default",
"strategy:detail": "rule:default",
"strategy:get": "rule:default",
"strategy:get_all": "rule:default",
"service:detail": "rule:default",
"service:get": "rule:default",
"service:get_all": "rule:default"
}

View File

@@ -0,0 +1,5 @@
---
features:
- |
Adds an audit scoper for the storage data model; Watcher users can now
specify an audit scope for the storage CDM in the same manner as the
compute scope.

View File

@@ -0,0 +1,4 @@
---
features:
- |
Adds the baremetal data model to Watcher.

View File

@@ -0,0 +1,6 @@
---
features:
- Added a way to check the state of a strategy before an audit's execution.
Administrators can use the "watcher strategy state <strategy_name>" command
to get information about the availability of metrics, datasources and the
CDM.

View File

@@ -0,0 +1,4 @@
---
features:
- |
Added storage capacity balance strategy.

View File

@@ -0,0 +1,6 @@
---
features:
- |
Added strategy "Zone migration" and its goal "Hardware maintenance".
The strategy automatically migrates many instances and volumes
efficiently with minimum downtime.

View File

@@ -24,7 +24,6 @@
import os
import sys
from watcher import version as watcher_version
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the

View File

@@ -1,14 +1,14 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
# Andi Chandler <andi@gowling.com>, 2017. #zanata
# Andi Chandler <andi@gowling.com>, 2018. #zanata
msgid ""
msgstr ""
"Project-Id-Version: watcher 1.4.1.dev113\n"
"Project-Id-Version: watcher\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-10-23 04:03+0000\n"
"POT-Creation-Date: 2018-01-26 00:18+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2017-10-21 06:22+0000\n"
"PO-Revision-Date: 2018-01-27 12:50+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
@@ -18,9 +18,6 @@ msgstr ""
msgid "0.29.0"
msgstr "0.29.0"
msgid "0.33.0"
msgstr "0.33.0"
msgid "0.34.0"
msgstr "0.34.0"
@@ -39,6 +36,15 @@ msgstr "1.4.0"
msgid "1.4.1"
msgstr "1.4.1"
msgid "1.5.0"
msgstr "1.5.0"
msgid "1.6.0"
msgstr "1.6.0"
msgid "1.7.0"
msgstr "1.7.0"
msgid "Add a service supervisor to watch Watcher deamons."
msgstr "Add a service supervisor to watch Watcher daemons."
@@ -74,17 +80,6 @@ msgstr ""
msgid "Added SUSPENDED audit state"
msgstr "Added SUSPENDED audit state"
msgid ""
"Added a generic scoring engine module, which will standardize interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgstr ""
"Added a generic scoring engine module, which will standardize interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgid ""
"Added a generic scoring engine module, which will standarize interactions "
"with scoring engines through the common API. It is possible to use the "
@@ -141,6 +136,17 @@ msgstr ""
"Added a way to add a new action without having to amend the source code of "
"the default planner."
msgid ""
"Added a way to check state of strategy before audit's execution. "
"Administrator can use \"watcher strategy state <strategy_name>\" command to "
"get information about metrics' availability, datasource's availability and "
"CDM's availability."
msgstr ""
"Added a way to check state of strategy before audit's execution. "
"Administrator can use \"watcher strategy state <strategy_name>\" command to "
"get information about metrics' availability, datasource's availability and "
"CDM's availability."
msgid ""
"Added a way to compare the efficacy of different strategies for a give "
"optimization goal."
@@ -155,13 +161,6 @@ msgstr ""
"Added a way to create periodic audit to be able to continuously optimise the "
"cloud infrastructure."
msgid ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decision engine is running."
msgstr ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decision engine is running."
msgid ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decison engine is running."
@@ -198,10 +197,202 @@ msgstr ""
msgid "Added policies to handle user rights to access Watcher API."
msgstr "Added policies to handle user rights to access Watcher API."
#, fuzzy
msgid "Added storage capacity balance strategy."
msgstr "Added storage capacity balance strategy."
msgid ""
"Added strategy \"Zone migration\" and it's goal \"Hardware maintenance\". "
"The strategy migrates many instances and volumes efficiently with minimum "
"downtime automatically."
msgstr ""
"Added strategy \"Zone migration\" and it's goal \"Hardware maintenance\". "
"The strategy migrates many instances and volumes efficiently with minimum "
"downtime automatically."
msgid ""
"Added strategy to identify and migrate a Noisy Neighbor - a low priority VM "
"that negatively affects peformance of a high priority VM by over utilizing "
"Last Level Cache."
msgstr ""
"Added strategy to identify and migrate a Noisy Neighbour - a low priority VM "
"that negatively affects performance of a high priority VM by over utilising "
"Last Level Cache."
msgid ""
"Added the functionality to filter out instances which have metadata field "
"'optimize' set to False. For now, this is only available for the "
"basic_consolidation strategy (if \"check_optimize_metadata\" configuration "
"option is enabled)."
msgstr ""
"Added the functionality to filter out instances which have metadata field "
"'optimize' set to False. For now, this is only available for the "
"basic_consolidation strategy (if \"check_optimize_metadata\" configuration "
"option is enabled)."
msgid "Added using of JSONSchema instead of voluptuous to validate Actions."
msgstr "Added using of JSONSchema instead of voluptuous to validate Actions."
msgid "Added volume migrate action"
msgstr "Added volume migrate action"
msgid ""
"Adds audit scoper for storage data model, now watcher users can specify "
"audit scope for storage CDM in the same manner as compute scope."
msgstr ""
"Adds audit scoper for storage data model, now watcher users can specify "
"audit scope for storage CDM in the same manner as compute scope."
msgid "Adds baremetal data model in Watcher"
msgstr "Adds baremetal data model in Watcher"
msgid ""
"Allow decision engine to pass strategy parameters, like optimization "
"threshold, to selected strategy, also strategy to provide parameters info to "
"end user."
msgstr ""
"Allow decision engine to pass strategy parameters, like optimisation "
"threshold, to selected strategy, also strategy to provide parameters info to "
"end user."
msgid "Centralize all configuration options for Watcher."
msgstr "Centralise all configuration options for Watcher."
msgid "Contents:"
msgstr "Contents:"
#, fuzzy
msgid ""
"Copy all audit templates parameters into audit instead of having a reference "
"to the audit template."
msgstr ""
"Copy all audit templates parameters into audit instead of having a reference "
"to the audit template."
msgid "Current Series Release Notes"
msgstr "Current Series Release Notes"
msgid ""
"Each CDM collector can have its own CDM scoper now. This changed Scope JSON "
"schema definition for the audit template POST data. Please see audit "
"template create help message in python-watcherclient."
msgstr ""
"Each CDM collector can have its own CDM scoper now. This changed Scope JSON "
"schema definition for the audit template POST data. Please see audit "
"template create help message in python-watcherclient."
msgid ""
"Enhancement of vm_workload_consolidation strategy by using 'memory.resident' "
"metric in place of 'memory.usage', as memory.usage shows the memory usage "
"inside guest-os and memory.resident represents volume of RAM used by "
"instance on host machine."
msgstr ""
"Enhancement of vm_workload_consolidation strategy by using 'memory.resident' "
"metric in place of 'memory.usage', as memory.usage shows the memory usage "
"inside guest-os and memory.resident represents volume of RAM used by "
"instance on host machine."
msgid ""
"Existing workload_balance strategy based on the VM workloads of CPU. This "
"feature improves the strategy. By the input parameter \"metrics\", it makes "
"decision to migrate a VM base on CPU or memory utilization."
msgstr ""
"Existing workload_balance strategy based on the VM workloads of CPU. This "
"feature improves the strategy. By the input parameter \"metrics\", it makes "
"decision to migrate a VM base on CPU or memory utilisation."
msgid "New Features"
msgstr "New Features"
msgid "Newton Series Release Notes"
msgstr "Newton Series Release Notes"
msgid "Ocata Series Release Notes"
msgstr "Ocata Series Release Notes"
msgid "Pike Series Release Notes"
msgstr "Pike Series Release Notes"
msgid ""
"Provide a notification mechanism into Watcher that supports versioning. "
"Whenever a Watcher object is created, updated or deleted, a versioned "
"notification will, if it's relevant, be automatically sent to notify in "
"order to allow an event-driven style of architecture within Watcher. "
"Moreover, it will also give other services and/or 3rd party softwares (e.g. "
"monitoring solutions or rules engines) the ability to react to such events."
msgstr ""
"Provide a notification mechanism into Watcher that supports versioning. "
"Whenever a Watcher object is created, updated or deleted, a versioned "
"notification will, if it's relevant, be automatically sent to notify in "
"order to allow an event-driven style of architecture within Watcher. "
"Moreover, it will also give other services and/or 3rd party software (e.g. "
"monitoring solutions or rules engines) the ability to react to such events."
msgid ""
"Provides a generic way to define the scope of an audit. The set of audited "
"resources will be called \"Audit scope\" and will be defined in each audit "
"template (which contains the audit settings)."
msgstr ""
"Provides a generic way to define the scope of an audit. The set of audited "
"resources will be called \"Audit scope\" and will be defined in each audit "
"template (which contains the audit settings)."
msgid ""
"The graph model describes how VMs are associated to compute hosts. This "
"allows for seeing relationships upfront between the entities and hence can "
"be used to identify hot/cold spots in the data center and influence a "
"strategy decision."
msgstr ""
"The graph model describes how VMs are associated to compute hosts. This "
"allows for seeing relationships upfront between the entities and hence can "
"be used to identify hot/cold spots in the data centre and influence a "
"strategy decision."
msgid ""
"There is new ability to create Watcher continuous audits with cron interval. "
"It means you may use, for example, optional argument '--interval \"\\*/5 \\* "
"\\* \\* \\*\"' to launch audit every 5 minutes. These jobs are executed on a "
"best effort basis and therefore, we recommend you to use a minimal cron "
"interval of at least one minute."
msgstr ""
"There is new ability to create Watcher continuous audits with cron interval. "
"It means you may use, for example, optional argument '--interval \"\\*/5 \\* "
"\\* \\* \\*\"' to launch audit every 5 minutes. These jobs are executed on a "
"best effort basis and therefore, we recommend you to use a minimal cron "
"interval of at least one minute."
msgid ""
"Watcher can continuously optimize the OpenStack cloud for a specific "
"strategy or goal by triggering an audit periodically which generates an "
"action plan and run it automatically."
msgstr ""
"Watcher can continuously optimise the OpenStack cloud for a specific "
"strategy or goal by triggering an audit periodically which generates an "
"action plan and run it automatically."
msgid ""
"Watcher can now run specific actions in parallel improving the performances "
"dramatically when executing an action plan."
msgstr ""
"Watcher can now run specific actions in parallel improving the performance "
"dramatically when executing an action plan."
msgid "Watcher database can now be upgraded thanks to Alembic."
msgstr "Watcher database can now be upgraded thanks to Alembic."
msgid ""
"Watcher supports multiple metrics backend and relies on Ceilometer and "
"Monasca."
msgstr ""
"Watcher supports multiple metrics backend and relies on Ceilometer and "
"Monasca."
msgid "Welcome to watcher's Release Notes documentation!"
msgstr "Welcome to watcher's Release Notes documentation!"
msgid ""
"all Watcher objects have been refactored to support OVO (oslo."
"versionedobjects) which was a prerequisite step in order to implement "
"versioned notifications."
msgstr ""
"all Watcher objects have been refactored to support OVO (oslo."
"versionedobjects) which was a prerequisite step in order to implement "
"versioned notifications."

View File

@@ -1,33 +0,0 @@
# Gérald LONLAS <g.lonlas@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: watcher 1.0.1.dev51\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-03-21 11:57+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-10-22 06:44+0000\n"
"Last-Translator: Gérald LONLAS <g.lonlas@gmail.com>\n"
"Language-Team: French\n"
"Language: fr\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n > 1)\n"
msgid "0.29.0"
msgstr "0.29.0"
msgid "Contents:"
msgstr "Contenu :"
msgid "Current Series Release Notes"
msgstr "Note de la release actuelle"
msgid "New Features"
msgstr "Nouvelles fonctionnalités"
msgid "Newton Series Release Notes"
msgstr "Note de release pour Newton"
msgid "Welcome to watcher's Release Notes documentation!"
msgstr "Bienvenue dans la documentation de la note de Release de Watcher"

View File

@@ -10,20 +10,20 @@ jsonschema<3.0.0,>=2.6.0 # MIT
keystonemiddleware>=4.17.0 # Apache-2.0
lxml!=3.7.0,>=3.4.1 # BSD
croniter>=0.3.4 # MIT License
oslo.concurrency>=3.20.0 # Apache-2.0
oslo.concurrency>=3.25.0 # Apache-2.0
oslo.cache>=1.26.0 # Apache-2.0
oslo.config>=5.1.0 # Apache-2.0
oslo.context>=2.19.2 # Apache-2.0
oslo.db>=4.27.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.30.0 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.policy>=1.23.0 # Apache-2.0
oslo.policy>=1.30.0 # Apache-2.0
oslo.reports>=1.18.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service>=1.24.0 # Apache-2.0
oslo.utils>=3.31.0 # Apache-2.0
oslo.versionedobjects>=1.28.0 # Apache-2.0
oslo.service!=1.28.1,>=1.24.0 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
oslo.versionedobjects>=1.31.2 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
@@ -31,18 +31,18 @@ PrettyTable<0.8,>=0.7.1 # BSD
voluptuous>=0.8.9 # BSD License
gnocchiclient>=3.3.1 # Apache-2.0
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient>=3.2.0 # Apache-2.0
python-cinderclient>=3.3.0 # Apache-2.0
python-glanceclient>=2.8.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.7.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
python-openstackclient>=3.12.0 # Apache-2.0
python-ironicclient>=1.14.0 # Apache-2.0
python-ironicclient>=2.2.0 # Apache-2.0
six>=1.10.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0
taskflow>=2.7.0 # Apache-2.0
taskflow>=2.16.0 # Apache-2.0
WebOb>=1.7.1 # MIT
WSME>=0.8.0 # MIT
networkx<2.0,>=1.10 # BSD

View File

@@ -32,6 +32,12 @@ setup-hooks =
oslo.config.opts =
watcher = watcher.conf.opts:list_opts
oslo.policy.policies =
watcher = watcher.common.policies:list_rules
oslo.policy.enforcer =
watcher = watcher.common.policy:get_enforcer
console_scripts =
watcher-api = watcher.cmd.api:main
watcher-db-manage = watcher.cmd.dbmanage:main
@@ -51,6 +57,7 @@ watcher_goals =
airflow_optimization = watcher.decision_engine.goal.goals:AirflowOptimization
noisy_neighbor = watcher.decision_engine.goal.goals:NoisyNeighborOptimization
saving_energy = watcher.decision_engine.goal.goals:SavingEnergy
hardware_maintenance = watcher.decision_engine.goal.goals:HardwareMaintenance
watcher_scoring_engines =
dummy_scorer = watcher.decision_engine.scoring.dummy_scorer:DummyScorer
@@ -71,6 +78,8 @@ watcher_strategies =
workload_balance = watcher.decision_engine.strategy.strategies.workload_balance:WorkloadBalance
uniform_airflow = watcher.decision_engine.strategy.strategies.uniform_airflow:UniformAirflow
noisy_neighbor = watcher.decision_engine.strategy.strategies.noisy_neighbor:NoisyNeighbor
storage_capacity_balance = watcher.decision_engine.strategy.strategies.storage_capacity_balance:StorageCapacityBalance
zone_migration = watcher.decision_engine.strategy.strategies.zone_migration:ZoneMigration
watcher_actions =
migrate = watcher.applier.actions.migration:Migrate
@@ -91,6 +100,7 @@ watcher_planners =
watcher_cluster_data_model_collectors =
compute = watcher.decision_engine.model.collector.nova:NovaClusterDataModelCollector
storage = watcher.decision_engine.model.collector.cinder:CinderClusterDataModelCollector
baremetal = watcher.decision_engine.model.collector.ironic:BaremetalClusterDataModelCollector
[pbr]

View File

@@ -7,15 +7,15 @@ doc8>=0.6.0 # Apache-2.0
freezegun>=0.3.6 # Apache-2.0
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
mock>=2.0.0 # BSD
oslotest>=1.10.0 # Apache-2.0
oslotest>=3.2.0 # Apache-2.0
os-testr>=1.0.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=2.2.0 # MIT
# Doc requirements
openstackdocstheme>=1.17.0 # Apache-2.0
sphinx>=1.6.2 # BSD
openstackdocstheme>=1.18.1 # Apache-2.0
sphinx!=1.6.6,>=1.6.2 # BSD
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0

View File

@@ -46,6 +46,10 @@ sitepackages = False
commands =
oslo-config-generator --config-file etc/watcher/oslo-config-generator/watcher.conf
[testenv:genpolicy]
commands =
oslopolicy-sample-generator --config-file etc/watcher/oslo-policy-generator/watcher-policy-generator.conf
[flake8]
filename = *.py,app.wsgi
show-source=True

View File

@@ -341,7 +341,7 @@ class ActionsController(rest.RestController):
@wsme_pecan.wsexpose(Action, body=Action, status_code=201)
def post(self, action):
"""Create a new action.
"""Create a new action(forbidden).
:param action: a action within the request body.
"""
@@ -364,7 +364,7 @@ class ActionsController(rest.RestController):
@wsme.validate(types.uuid, [ActionPatchType])
@wsme_pecan.wsexpose(Action, types.uuid, body=[ActionPatchType])
def patch(self, action_uuid, patch):
"""Update an existing action.
"""Update an existing action(forbidden).
:param action_uuid: UUID of a action.
:param patch: a json PATCH document to apply to this action.
@@ -401,7 +401,7 @@ class ActionsController(rest.RestController):
@wsme_pecan.wsexpose(None, types.uuid, status_code=204)
def delete(self, action_uuid):
"""Delete a action.
"""Delete a action(forbidden).
:param action_uuid: UUID of a action.
"""

View File

@@ -460,6 +460,15 @@ class ActionPlansController(rest.RestController):
policy.enforce(context, 'action_plan:delete', action_plan,
action='action_plan:delete')
allowed_states = (ap_objects.State.SUCCEEDED,
ap_objects.State.RECOMMENDED,
ap_objects.State.FAILED,
ap_objects.State.SUPERSEDED,
ap_objects.State.CANCELLED)
if action_plan.state not in allowed_states:
raise exception.DeleteError(
state=action_plan.state)
action_plan.soft_delete()
@wsme.validate(types.uuid, [ActionPlanPatchType])
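The guard added above only allows deleting an action plan that has reached a settled state. The same check in isolation (state names taken from the diff; watcher's `DeleteError` replaced by a plain `ValueError` for illustration):

```python
# States from which an action plan may be soft-deleted (per the diff above).
ALLOWED_DELETE_STATES = frozenset(
    {"SUCCEEDED", "RECOMMENDED", "FAILED", "SUPERSEDED", "CANCELLED"})


def check_deletable(state):
    # Mirrors the DeleteError raised by the API controller.
    if state not in ALLOWED_DELETE_STATES:
        raise ValueError("Couldn't delete when state is '%s'." % state)
    return True


assert check_deletable("FAILED")
try:
    check_deletable("ONGOING")
except ValueError:
    pass
else:
    raise AssertionError("ONGOING must not be deletable")
```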

View File

@@ -37,6 +37,8 @@ import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from oslo_log import log
from watcher._i18n import _
from watcher.api.controllers import base
from watcher.api.controllers import link
@@ -49,6 +51,8 @@ from watcher.common import utils
from watcher.decision_engine import rpcapi
from watcher import objects
LOG = log.getLogger(__name__)
class AuditPostType(wtypes.Base):
@@ -129,6 +133,11 @@ class AuditPostType(wtypes.Base):
goal = objects.Goal.get(context, self.goal)
self.name = "%s-%s" % (goal.name,
datetime.datetime.utcnow().isoformat())
# No more than 63 characters
if len(self.name) > 63:
LOG.warning("Audit: %s length exceeds 63 characters",
self.name)
self.name = self.name[0:63]
return Audit(
name=self.name,
@@ -166,10 +175,10 @@ class AuditPatchType(types.JsonPatchType):
class Audit(base.APIBase):
"""API representation of a audit.
"""API representation of an audit.
This class enforces type checking and value constraints, and converts
between the internal object model and the API representation of a audit.
between the internal object model and the API representation of an audit.
"""
_goal_uuid = None
_goal_name = None
@@ -264,19 +273,19 @@ class Audit(base.APIBase):
goal_uuid = wsme.wsproperty(
wtypes.text, _get_goal_uuid, _set_goal_uuid, mandatory=True)
"""Goal UUID the audit template refers to"""
"""Goal UUID the audit refers to"""
goal_name = wsme.wsproperty(
wtypes.text, _get_goal_name, _set_goal_name, mandatory=False)
"""The name of the goal this audit template refers to"""
"""The name of the goal this audit refers to"""
strategy_uuid = wsme.wsproperty(
wtypes.text, _get_strategy_uuid, _set_strategy_uuid, mandatory=False)
"""Strategy UUID the audit template refers to"""
"""Strategy UUID the audit refers to"""
strategy_name = wsme.wsproperty(
wtypes.text, _get_strategy_name, _set_strategy_name, mandatory=False)
"""The name of the strategy this audit template refers to"""
"""The name of the strategy this audit refers to"""
parameters = {wtypes.text: types.jsontype}
"""The strategy parameters for this audit"""
@@ -511,7 +520,7 @@ class AuditsController(rest.RestController):
def get_one(self, audit):
"""Retrieve information about the given audit.
:param audit_uuid: UUID or name of an audit.
:param audit: UUID or name of an audit.
"""
if self.from_audits:
raise exception.OperationNotPermitted
@@ -526,7 +535,7 @@ class AuditsController(rest.RestController):
def post(self, audit_p):
"""Create a new audit.
:param audit_p: a audit within the request body.
:param audit_p: an audit within the request body.
"""
context = pecan.request.context
policy.enforce(context, 'audit:create',
@@ -556,7 +565,7 @@ class AuditsController(rest.RestController):
if no_schema and audit.parameters:
raise exception.Invalid(_('Specify parameters but no predefined '
'strategy for audit template, or no '
'strategy for audit, or no '
'parameter spec in predefined strategy'))
audit_dict = audit.as_dict()
@@ -579,7 +588,7 @@ class AuditsController(rest.RestController):
def patch(self, audit, patch):
"""Update an existing audit.
:param auditd: UUID or name of a audit.
:param audit: UUID or name of an audit.
:param patch: a json PATCH document to apply to this audit.
"""
if self.from_audits:
@@ -636,4 +645,11 @@ class AuditsController(rest.RestController):
policy.enforce(context, 'audit:update', audit_to_delete,
action='audit:update')
initial_state = audit_to_delete.state
new_state = objects.audit.State.DELETED
if not objects.audit.AuditStateTransitionManager(
).check_transition(initial_state, new_state):
raise exception.DeleteError(
state=initial_state)
audit_to_delete.soft_delete()
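The audit-name handling added in this file caps generated names at 63 characters. The same truncation in isolation (the goal name `dummy` is illustrative):

```python
import datetime

goal_name = "dummy"
# Default audit name: "<goal>-<UTC timestamp>", as in AuditPostType above.
name = "%s-%s" % (goal_name, datetime.datetime.utcnow().isoformat())

# No more than 63 characters, matching the API controller's warning path.
if len(name) > 63:
    name = name[0:63]
assert len(name) <= 63
```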

View File

@@ -41,6 +41,7 @@ from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import exception
from watcher.common import policy
from watcher.common import utils as common_utils
from watcher.decision_engine import rpcapi
from watcher import objects
@@ -205,6 +206,7 @@ class StrategiesController(rest.RestController):
_custom_actions = {
'detail': ['GET'],
'state': ['GET'],
}
def _get_strategies_collection(self, filters, marker, limit, sort_key,
@@ -288,6 +290,26 @@ class StrategiesController(rest.RestController):
return self._get_strategies_collection(
filters, marker, limit, sort_key, sort_dir, expand, resource_url)
@wsme_pecan.wsexpose(wtypes.text, wtypes.text)
def state(self, strategy):
"""Retrieve a inforamation about strategy requirements.
:param strategy: name of the strategy.
"""
context = pecan.request.context
policy.enforce(context, 'strategy:state', action='strategy:state')
parents = pecan.request.path.split('/')[:-1]
if parents[-2] != "strategies":
raise exception.HTTPNotFound
rpc_strategy = api_utils.get_resource('Strategy', strategy)
de_client = rpcapi.DecisionEngineAPI()
strategy_state = de_client.get_strategy_info(context,
rpc_strategy.name)
strategy_state.extend([{
'type': 'Name', 'state': rpc_strategy.name,
'mandatory': '', 'comment': ''}])
return strategy_state
@wsme_pecan.wsexpose(Strategy, wtypes.text)
def get_one(self, strategy):
"""Retrieve information about the given strategy.

View File

@@ -36,13 +36,16 @@ class VolumeMigrate(base.BaseAction):
By using this action, you will be able to migrate cinder volume.
Migration type 'swap' can only be used for migrating attached volume.
Migration type 'cold' can only be used for migrating detached volume.
Migration type 'migrate' can be used for migrating detached volume to
the pool of same volume type.
Migration type 'retype' can be used for changing volume type of
detached volume.
The action schema is::
schema = Schema({
'resource_id': str, # should be a UUID
'migration_type': str, # choices -> "swap", "cold"
'migration_type': str, # choices -> "swap", "migrate","retype"
'destination_node': str,
'destination_type': str,
})
@@ -60,7 +63,8 @@ class VolumeMigrate(base.BaseAction):
MIGRATION_TYPE = 'migration_type'
SWAP = 'swap'
COLD = 'cold'
RETYPE = 'retype'
MIGRATE = 'migrate'
DESTINATION_NODE = "destination_node"
DESTINATION_TYPE = "destination_type"
@@ -85,7 +89,7 @@ class VolumeMigrate(base.BaseAction):
},
'migration_type': {
'type': 'string',
"enum": ["swap", "cold"]
"enum": ["swap", "retype", "migrate"]
},
'destination_node': {
"anyof": [
@@ -127,20 +131,6 @@ class VolumeMigrate(base.BaseAction):
def destination_type(self):
return self.input_parameters.get(self.DESTINATION_TYPE)
def _cold_migrate(self, volume, dest_node, dest_type):
if not self.cinder_util.can_cold(volume, dest_node):
raise exception.Invalid(
message=(_("Invalid state for cold migration")))
if dest_node:
return self.cinder_util.migrate(volume, dest_node)
elif dest_type:
return self.cinder_util.retype(volume, dest_type)
else:
raise exception.Invalid(
message=(_("destination host or destination type is "
"required when migration type is cold")))
def _can_swap(self, volume):
"""Judge volume can be swapped"""
@@ -212,12 +202,14 @@ class VolumeMigrate(base.BaseAction):
try:
volume = self.cinder_util.get_volume(volume_id)
if self.migration_type == self.COLD:
return self._cold_migrate(volume, dest_node, dest_type)
elif self.migration_type == self.SWAP:
if self.migration_type == self.SWAP:
if dest_node:
LOG.warning("dest_node is ignored")
return self._swap_volume(volume, dest_type)
elif self.migration_type == self.RETYPE:
return self.cinder_util.retype(volume, dest_type)
elif self.migration_type == self.MIGRATE:
return self.cinder_util.migrate(volume, dest_node)
else:
raise exception.Invalid(
message=(_("Migration of type '%(migration_type)s' is not "
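With 'cold' removed, the schema above only accepts 'swap', 'retype' and 'migrate'. A minimal stand-in for that enum check (a plain `ValueError` replaces watcher's `Invalid` exception):

```python
# Valid values from the updated action schema.
MIGRATION_TYPES = ("swap", "retype", "migrate")


def validate_migration_type(migration_type):
    # Mirrors the schema enum and the "not supported" error path above.
    if migration_type not in MIGRATION_TYPES:
        raise ValueError(
            "Migration of type '%s' is not supported." % migration_type)
    return migration_type


assert validate_migration_type("retype") == "retype"
try:
    validate_migration_type("cold")  # removed by this change
except ValueError:
    pass
else:
    raise AssertionError("'cold' is no longer a valid migration type")
```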

View File

@@ -22,7 +22,7 @@ import sys
from oslo_log import log
from watcher.common import service as service
from watcher.common import service
from watcher import conf
from watcher.decision_engine import sync

View File

@@ -70,16 +70,18 @@ class CinderHelper(object):
def get_volume_type_list(self):
return self.cinder.volume_types.list()
def get_volume_snapshots_list(self):
return self.cinder.volume_snapshots.list(
search_opts={'all_tenants': True})
def get_volume_type_by_backendname(self, backendname):
"""Retrun a list of volume type"""
volume_type_list = self.get_volume_type_list()
volume_type = [volume_type for volume_type in volume_type_list
volume_type = [volume_type.name for volume_type in volume_type_list
if volume_type.extra_specs.get(
'volume_backend_name') == backendname]
if volume_type:
return volume_type[0].name
else:
return ""
return volume_type
def get_volume(self, volume):
@@ -111,23 +113,6 @@ class CinderHelper(object):
return True
return False
def can_cold(self, volume, host=None):
"""Judge volume can be migrated"""
can_cold = False
status = self.get_volume(volume).status
snapshot = self._has_snapshot(volume)
same_host = False
if host and getattr(volume, 'os-vol-host-attr:host') == host:
same_host = True
if (status == 'available' and
snapshot is False and
same_host is False):
can_cold = True
return can_cold
def get_deleting_volume(self, volume):
volume = self.get_volume(volume)
all_volume = self.get_volume_list()
@@ -204,7 +189,7 @@ class CinderHelper(object):
volume = self.get_volume(volume)
dest_backend = self.backendname_from_poolname(dest_node)
dest_type = self.get_volume_type_by_backendname(dest_backend)
if volume.volume_type != dest_type:
if volume.volume_type not in dest_type:
raise exception.Invalid(
message=(_("Volume type must be same for migrating")))
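`get_volume_type_by_backendname` now returns a list of type names rather than a single name, so the caller above switches from an equality test to a membership test. A sketch of both sides of that contract over stand-in data (backend and type names are made up):

```python
# Stand-in volume types: (name, volume_backend_name) pairs.
volume_types = [("lvm-1", "backend_a"), ("lvm-2", "backend_a"),
                ("ceph", "backend_b")]


def get_volume_type_by_backendname(backendname):
    # Returns every matching type name, as in the updated helper.
    return [name for name, backend in volume_types if backend == backendname]


dest_type = get_volume_type_by_backendname("backend_a")
assert dest_type == ["lvm-1", "lvm-2"]

# Caller side: membership replaces the old single-name equality check.
assert "lvm-1" in dest_type
assert "ceph" not in dest_type
```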

View File

@@ -332,6 +332,10 @@ class PatchError(Invalid):
msg_fmt = _("Couldn't apply patch '%(patch)s'. Reason: %(reason)s")
class DeleteError(Invalid):
msg_fmt = _("Couldn't delete when state is '%(state)s'.")
# decision engine
class WorkflowExecutionException(WatcherException):
@@ -362,6 +366,14 @@ class ClusterEmpty(WatcherException):
msg_fmt = _("The list of compute node(s) in the cluster is empty")
class ComputeClusterEmpty(WatcherException):
msg_fmt = _("The list of compute node(s) in the cluster is empty")
class StorageClusterEmpty(WatcherException):
msg_fmt = _("The list of storage node(s) in the cluster is empty")
class MetricCollectorNotDefined(WatcherException):
msg_fmt = _("The metrics resource collector is not defined")
@@ -405,6 +417,10 @@ class UnsupportedDataSource(UnsupportedError):
"by strategy %(strategy)s")
class DataSourceNotAvailable(WatcherException):
msg_fmt = _("Datasource %(datasource)s is not available.")
class NoSuchMetricForHost(WatcherException):
msg_fmt = _("No %(metric)s metric for %(host)s found.")
@@ -469,6 +485,14 @@ class VolumeNotFound(StorageResourceNotFound):
msg_fmt = _("The volume '%(name)s' could not be found")
class BaremetalResourceNotFound(WatcherException):
msg_fmt = _("The baremetal resource '%(name)s' could not be found")
class IronicNodeNotFound(BaremetalResourceNotFound):
msg_fmt = _("The ironic node %(uuid)s could not be found")
class LoadingError(WatcherException):
msg_fmt = _("Error loading plugin '%(name)s'")

View File

@@ -0,0 +1,49 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corporation
#
# Authors:Yumeng Bao <bao.yumeng@zte.com.cn>
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from oslo_log import log
from watcher.common import clients
from watcher.common import exception
from watcher.common import utils
LOG = log.getLogger(__name__)
class IronicHelper(object):
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
self.osc = osc if osc else clients.OpenStackClients()
self.ironic = self.osc.ironic()
def get_ironic_node_list(self):
return self.ironic.node.list()
def get_ironic_node_by_uuid(self, node_uuid):
"""Get ironic node by node UUID"""
try:
node = self.ironic.node.get(utils.Struct(uuid=node_uuid))
if not node:
raise exception.IronicNodeNotFound(uuid=node_uuid)
except Exception as exc:
LOG.exception(exc)
raise exception.IronicNodeNotFound(uuid=node_uuid)
# We need to pass an object with an 'uuid' attribute to make it work
return node

View File

@@ -52,14 +52,21 @@ class NovaHelper(object):
return self.nova.hypervisors.get(utils.Struct(id=node_id))
def get_compute_node_by_hostname(self, node_hostname):
"""Get compute node by ID (*not* UUID)"""
# We need to pass an object with an 'id' attribute to make it work
"""Get compute node by hostname"""
try:
compute_nodes = self.nova.hypervisors.search(node_hostname)
if len(compute_nodes) != 1:
hypervisors = [hv for hv in self.get_compute_node_list()
if hv.service['host'] == node_hostname]
if len(hypervisors) != 1:
# TODO(hidekazu)
# this may occur if VMware vCenter driver is used
raise exception.ComputeNodeNotFound(name=node_hostname)
else:
compute_nodes = self.nova.hypervisors.search(
hypervisors[0].hypervisor_hostname)
if len(compute_nodes) != 1:
raise exception.ComputeNodeNotFound(name=node_hostname)
return self.get_compute_node_by_id(compute_nodes[0].id)
return self.get_compute_node_by_id(compute_nodes[0].id)
except Exception as exc:
LOG.exception(exc)
raise exception.ComputeNodeNotFound(name=node_hostname)
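The fix above first filters the full hypervisor list by `service['host']` and only then searches by `hypervisor_hostname`, so the lookup still works when the hypervisor hostname differs from the service host (the Closes-Bug case where hostname != uuid). A simplified, self-contained sketch of that selection step, using fake records instead of the Nova client:

```python
# Each hypervisor has a service host and a possibly different
# hypervisor hostname, as in the bug this patch fixes.
hypervisors = [
    {"service_host": "node-1", "hypervisor_hostname": "uuid-aaa", "id": 1},
    {"service_host": "node-2", "hypervisor_hostname": "uuid-bbb", "id": 2},
]

def find_compute_node(node_hostname):
    # Step 1: match on the service host, not the hypervisor hostname.
    matches = [hv for hv in hypervisors
               if hv["service_host"] == node_hostname]
    if len(matches) != 1:
        # The real helper raises exception.ComputeNodeNotFound here.
        raise LookupError(node_hostname)
    # Step 2: the real helper then re-queries Nova using
    # matches[0]["hypervisor_hostname"]; here we return the record.
    return matches[0]

print(find_compute_node("node-2")["id"])  # 2
```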
@@ -67,6 +74,9 @@ class NovaHelper(object):
def get_instance_list(self):
return self.nova.servers.list(search_opts={'all_tenants': True})
def get_flavor_list(self):
return self.nova.flavors.list(**{'is_public': None})
def get_service(self, service_id):
return self.nova.services.find(id=service_id)
@@ -551,7 +561,7 @@ class NovaHelper(object):
return False
def set_host_offline(self, hostname):
# See API on http://developer.openstack.org/api-ref-compute-v2.1.html
# See API on https://developer.openstack.org/api-ref/compute/
# especially the PUT request
# regarding this resource : /v2.1/os-hosts/{host_name}
#

View File

@@ -0,0 +1,37 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
from watcher.common.policies import action
from watcher.common.policies import action_plan
from watcher.common.policies import audit
from watcher.common.policies import audit_template
from watcher.common.policies import base
from watcher.common.policies import goal
from watcher.common.policies import scoring_engine
from watcher.common.policies import service
from watcher.common.policies import strategy
def list_rules():
return itertools.chain(
base.list_rules(),
action.list_rules(),
action_plan.list_rules(),
audit.list_rules(),
audit_template.list_rules(),
goal.list_rules(),
scoring_engine.list_rules(),
service.list_rules(),
strategy.list_rules(),
)
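`list_rules` concatenates the per-resource rule lists lazily with `itertools.chain`, yielding from each list in order without building intermediate copies. A minimal illustration with plain lists standing in for the policy modules:

```python
import itertools

# Stand-ins for base.list_rules(), action.list_rules(), etc.
base_rules = ["admin_api", "show_password"]
action_rules = ["action:get", "action:get_all"]

def list_rules():
    # chain() yields items from each iterable in sequence.
    return itertools.chain(base_rules, action_rules)

print(list(list_rules()))
# ['admin_api', 'show_password', 'action:get', 'action:get_all']
```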

View File

@@ -0,0 +1,57 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
ACTION = 'action:%s'
rules = [
policy.DocumentedRuleDefault(
name=ACTION % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of actions with detail.',
operations=[
{
'path': '/v1/actions/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION % 'get',
check_str=base.RULE_ADMIN_API,
description='Retrieve information about a given action.',
operations=[
{
'path': '/v1/actions/{action_id}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of all actions.',
operations=[
{
'path': '/v1/actions',
'method': 'GET'
}
]
)
]
def list_rules():
return rules
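Each policy name above is built from a per-resource template, here `ACTION = 'action:%s'`, producing names like `action:get`; the same pattern repeats in the other policy files below. The naming scheme itself is plain string formatting:

```python
ACTION = 'action:%s'

# The verbs mirror the rules defined above.
names = [ACTION % verb for verb in ('detail', 'get', 'get_all')]
print(names)  # ['action:detail', 'action:get', 'action:get_all']
```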

View File

@@ -0,0 +1,79 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
ACTION_PLAN = 'action_plan:%s'
rules = [
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'delete',
check_str=base.RULE_ADMIN_API,
description='Delete an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_uuid}',
'method': 'DELETE'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of action plans with detail.',
operations=[
{
'path': '/v1/action_plans/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'get',
check_str=base.RULE_ADMIN_API,
description='Get an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_id}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all action plans.',
operations=[
{
'path': '/v1/action_plans',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'update',
check_str=base.RULE_ADMIN_API,
description='Update an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_uuid}',
'method': 'PATCH'
}
]
)
]
def list_rules():
return rules

View File

@@ -0,0 +1,90 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
AUDIT = 'audit:%s'
rules = [
policy.DocumentedRuleDefault(
name=AUDIT % 'create',
check_str=base.RULE_ADMIN_API,
description='Create a new audit.',
operations=[
{
'path': '/v1/audits',
'method': 'POST'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'delete',
check_str=base.RULE_ADMIN_API,
description='Delete an audit.',
operations=[
{
'path': '/v1/audits/{audit_uuid}',
'method': 'DELETE'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve audit list with details.',
operations=[
{
'path': '/v1/audits/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'get',
check_str=base.RULE_ADMIN_API,
description='Get an audit.',
operations=[
{
'path': '/v1/audits/{audit_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all audits.',
operations=[
{
'path': '/v1/audits',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'update',
check_str=base.RULE_ADMIN_API,
description='Update an audit.',
operations=[
{
'path': '/v1/audits/{audit_uuid}',
'method': 'PATCH'
}
]
)
]
def list_rules():
return rules

View File

@@ -0,0 +1,90 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
AUDIT_TEMPLATE = 'audit_template:%s'
rules = [
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'create',
check_str=base.RULE_ADMIN_API,
description='Create an audit template.',
operations=[
{
'path': '/v1/audit_templates',
'method': 'POST'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'delete',
check_str=base.RULE_ADMIN_API,
description='Delete an audit template.',
operations=[
{
'path': '/v1/audit_templates/{audit_template_uuid}',
'method': 'DELETE'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of audit templates with details.',
operations=[
{
'path': '/v1/audit_templates/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'get',
check_str=base.RULE_ADMIN_API,
description='Get an audit template.',
operations=[
{
'path': '/v1/audit_templates/{audit_template_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get a list of all audit templates.',
operations=[
{
'path': '/v1/audit_templates',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'update',
check_str=base.RULE_ADMIN_API,
description='Update an audit template.',
operations=[
{
'path': '/v1/audit_templates/{audit_template_uuid}',
'method': 'PATCH'
}
]
)
]
def list_rules():
return rules

View File

@@ -0,0 +1,32 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
RULE_ADMIN_API = 'rule:admin_api'
ROLE_ADMIN_OR_ADMINISTRATOR = 'role:admin or role:administrator'
ALWAYS_DENY = '!'
rules = [
policy.RuleDefault(
name='admin_api',
check_str=ROLE_ADMIN_OR_ADMINISTRATOR
),
policy.RuleDefault(
name='show_password',
check_str=ALWAYS_DENY
)
]
def list_rules():
return rules

View File

@@ -0,0 +1,57 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
GOAL = 'goal:%s'
rules = [
policy.DocumentedRuleDefault(
name=GOAL % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of goals with detail.',
operations=[
{
'path': '/v1/goals/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=GOAL % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a goal.',
operations=[
{
'path': '/v1/goals/{goal_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=GOAL % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all goals.',
operations=[
{
'path': '/v1/goals',
'method': 'GET'
}
]
)
]
def list_rules():
return rules

View File

@@ -0,0 +1,66 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
SCORING_ENGINE = 'scoring_engine:%s'
rules = [
# FIXME(lbragstad): Find someone from watcher to double check this
# information. This API isn't listed in watcher's API reference
# documentation.
policy.DocumentedRuleDefault(
name=SCORING_ENGINE % 'detail',
check_str=base.RULE_ADMIN_API,
description='List scoring engines with details.',
operations=[
{
'path': '/v1/scoring_engines/detail',
'method': 'GET'
}
]
),
# FIXME(lbragstad): Find someone from watcher to double check this
# information. This API isn't listed in watcher's API reference
# documentation.
policy.DocumentedRuleDefault(
name=SCORING_ENGINE % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a scoring engine.',
operations=[
{
'path': '/v1/scoring_engines/{scoring_engine_id}',
'method': 'GET'
}
]
),
# FIXME(lbragstad): Find someone from watcher to double check this
# information. This API isn't listed in watcher's API reference
# documentation.
policy.DocumentedRuleDefault(
name=SCORING_ENGINE % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all scoring engines.',
operations=[
{
'path': '/v1/scoring_engines',
'method': 'GET'
}
]
)
]
def list_rules():
return rules

View File

@@ -0,0 +1,57 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
SERVICE = 'service:%s'
rules = [
policy.DocumentedRuleDefault(
name=SERVICE % 'detail',
check_str=base.RULE_ADMIN_API,
description='List services with detail.',
operations=[
{
'path': '/v1/services/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=SERVICE % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a specific service.',
operations=[
{
'path': '/v1/services/{service_id}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=SERVICE % 'get_all',
check_str=base.RULE_ADMIN_API,
description='List all services.',
operations=[
{
'path': '/v1/services',
'method': 'GET'
}
]
),
]
def list_rules():
return rules

View File

@@ -0,0 +1,68 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
STRATEGY = 'strategy:%s'
rules = [
policy.DocumentedRuleDefault(
name=STRATEGY % 'detail',
check_str=base.RULE_ADMIN_API,
description='List strategies with detail.',
operations=[
{
'path': '/v1/strategies/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=STRATEGY % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a strategy.',
operations=[
{
'path': '/v1/strategies/{strategy_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=STRATEGY % 'get_all',
check_str=base.RULE_ADMIN_API,
description='List all strategies.',
operations=[
{
'path': '/v1/strategies',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=STRATEGY % 'state',
check_str=base.RULE_ADMIN_API,
description='Get state of strategy.',
operations=[
{
'path': '/v1/strategies/{strategy_uuid}/state',
'method': 'GET'
}
]
)
]
def list_rules():
return rules

View File

@@ -15,11 +15,13 @@
"""Policy Engine For Watcher."""
import sys
from oslo_config import cfg
from oslo_policy import policy
from watcher.common import exception
from watcher.common import policies
_ENFORCER = None
CONF = cfg.CONF
@@ -56,6 +58,7 @@ def init(policy_file=None, rules=None,
default_rule=default_rule,
use_conf=use_conf,
overwrite=overwrite)
_ENFORCER.register_defaults(policies.list_rules())
return _ENFORCER
@@ -92,3 +95,23 @@ def enforce(context, rule=None, target=None,
'user_id': context.user_id}
return enforcer.enforce(rule, target, credentials,
do_raise=do_raise, exc=exc, *args, **kwargs)
def get_enforcer():
# This method is for use by oslopolicy CLI scripts. Those scripts need the
# 'output-file' and 'namespace' options, but having those in sys.argv means
# loading the Watcher config options will fail as those are not expected
# to be present. So we pass in an arg list with those stripped out.
conf_args = []
# Start at 1 because cfg.CONF expects the equivalent of sys.argv[1:]
i = 1
while i < len(sys.argv):
if sys.argv[i].strip('-') in ['namespace', 'output-file']:
i += 2
continue
conf_args.append(sys.argv[i])
i += 1
cfg.CONF(conf_args, project='watcher')
init()
return _ENFORCER
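The loop in `get_enforcer` skips both the option and the value that follows it, advancing by two, so `--namespace` and `--output-file` never reach `cfg.CONF`. The same filtering as a standalone, illustrative helper (the function name is hypothetical):

```python
def strip_cli_options(argv, options=('namespace', 'output-file')):
    """Drop each listed option and its following value, mirroring
    the argument-stripping loop in get_enforcer()."""
    conf_args = []
    i = 0  # get_enforcer() starts at 1 to skip the program name
    while i < len(argv):
        if argv[i].strip('-') in options:
            i += 2  # skip the flag and its value
            continue
        conf_args.append(argv[i])
        i += 1
    return conf_args

print(strip_cli_options(
    ['--namespace', 'watcher', '--config-file', 'w.conf']))
# ['--config-file', 'w.conf']
```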

View File

@@ -69,7 +69,8 @@ _DEFAULT_LOG_LEVELS = ['amqp=WARN', 'amqplib=WARN', 'qpid.messaging=INFO',
'keystoneclient=INFO', 'stevedore=INFO',
'eventlet.wsgi.server=WARN', 'iso8601=WARN',
'paramiko=WARN', 'requests=WARN', 'neutronclient=WARN',
'glanceclient=WARN', 'watcher.openstack.common=WARN']
'glanceclient=WARN', 'watcher.openstack.common=WARN',
'apscheduler=WARN']
Singleton = service.Singleton

126
watcher/datasource/base.py Normal file
View File

@@ -0,0 +1,126 @@
# -*- encoding: utf-8 -*-
# Copyright 2017 NEC Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
class DataSourceBase(object):
METRIC_MAP = dict(
ceilometer=dict(host_cpu_usage='compute.node.cpu.percent',
instance_cpu_usage='cpu_util',
instance_l3_cache_usage='cpu_l3_cache',
host_outlet_temp=(
'hardware.ipmi.node.outlet_temperature'),
host_airflow='hardware.ipmi.node.airflow',
host_inlet_temp='hardware.ipmi.node.temperature',
host_power='hardware.ipmi.node.power',
instance_ram_usage='memory.resident',
instance_ram_allocated='memory',
instance_root_disk_size='disk.root.size',
host_memory_usage='hardware.memory.used', ),
gnocchi=dict(host_cpu_usage='compute.node.cpu.percent',
instance_cpu_usage='cpu_util',
instance_l3_cache_usage='cpu_l3_cache',
host_outlet_temp='hardware.ipmi.node.outlet_temperature',
host_airflow='hardware.ipmi.node.airflow',
host_inlet_temp='hardware.ipmi.node.temperature',
host_power='hardware.ipmi.node.power',
instance_ram_usage='memory.resident',
instance_ram_allocated='memory',
instance_root_disk_size='disk.root.size',
host_memory_usage='hardware.memory.used'
),
monasca=dict(host_cpu_usage='cpu.percent',
instance_cpu_usage='vm.cpu.utilization_perc',
instance_l3_cache_usage=None,
host_outlet_temp=None,
host_airflow=None,
host_inlet_temp=None,
host_power=None,
instance_ram_usage=None,
instance_ram_allocated=None,
instance_root_disk_size=None,
host_memory_usage=None
),
)
@abc.abstractmethod
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
pass
@abc.abstractmethod
def list_metrics(self):
pass
@abc.abstractmethod
def check_availability(self):
pass
@abc.abstractmethod
def get_host_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_memory_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_memory_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_l3_cache_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_ram_allocated(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_root_disk_allocated(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_outlet_temperature(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_inlet_temperature(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_airflow(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_power(self, resource_id, period, aggregate, granularity=None):
pass

View File

@@ -24,9 +24,14 @@ from oslo_utils import timeutils
from watcher._i18n import _
from watcher.common import clients
from watcher.common import exception
from watcher.datasource import base
class CeilometerHelper(object):
class CeilometerHelper(base.DataSourceBase):
NAME = 'ceilometer'
METRIC_MAP = base.DataSourceBase.METRIC_MAP['ceilometer']
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
self.osc = osc if osc else clients.OpenStackClients()
@@ -110,6 +115,13 @@ class CeilometerHelper(object):
except Exception:
raise
def check_availability(self):
try:
self.query_retry(self.ceilometer.resources.list)
except Exception:
return 'not available'
return 'available'
def query_sample(self, meter_name, query, limit=1):
return self.query_retry(f=self.ceilometer.samples.list,
meter_name=meter_name,
@@ -124,28 +136,37 @@ class CeilometerHelper(object):
period=period)
return statistics
def meter_list(self, query=None):
def list_metrics(self):
"""List the user's meters."""
meters = self.query_retry(f=self.ceilometer.meters.list,
query=query)
return meters
try:
meters = self.query_retry(f=self.ceilometer.meters.list)
except Exception:
return set()
else:
return meters
def statistic_aggregation(self,
resource_id,
meter_name,
period,
aggregate='avg'):
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
"""Representing a statistic aggregate by operators
:param resource_id: id of resource to list statistics for.
:param meter_name: Name of meter to list statistics for.
:param period: Period in seconds over which to group samples.
:param aggregate: Available aggregates are: count, cardinality,
min, max, sum, stddev, avg. Defaults to avg.
:param granularity: frequency of marking metric point, in seconds.
This param isn't used in Ceilometer datasource.
:param dimensions: dimensions (dict). This param isn't used in
Ceilometer datasource.
:param aggregation: Available aggregates are: count, cardinality,
min, max, sum, stddev, avg. Defaults to avg.
:param group_by: list of columns to group the metrics to be returned.
This param isn't used in Ceilometer datasource.
:return: Return the latest statistical data, None if no data.
"""
end_time = datetime.datetime.utcnow()
if aggregation == 'mean':
aggregation = 'avg'
start_time = end_time - datetime.timedelta(seconds=int(period))
query = self.build_query(
resource_id=resource_id, start_time=start_time, end_time=end_time)
@@ -154,11 +175,11 @@ class CeilometerHelper(object):
q=query,
period=period,
aggregates=[
{'func': aggregate}])
{'func': aggregation}])
item_value = None
if statistic:
item_value = statistic[-1]._info.get('aggregate').get(aggregate)
item_value = statistic[-1]._info.get('aggregate').get(aggregation)
return item_value
def get_last_sample_values(self, resource_id, meter_name, limit=1):
@@ -182,3 +203,69 @@ class CeilometerHelper(object):
return samples[-1]._info['counter_volume']
else:
return False
def get_host_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_cpu_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_instance_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_cpu_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_host_memory_usage(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_memory_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_instance_memory_usage(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_ram_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_instance_l3_cache_usage(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_l3_cache_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_instance_ram_allocated(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_ram_allocated')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_instance_root_disk_allocated(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('instance_root_disk_size')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_host_outlet_temperature(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_outlet_temp')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_host_inlet_temperature(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_inlet_temp')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_host_airflow(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_airflow')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)
def get_host_power(self, resource_id, period, aggregate,
granularity=None):
meter_name = self.METRIC_MAP.get('host_power')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregate=aggregate)

View File

@@ -17,6 +17,7 @@
# limitations under the License.
from datetime import datetime
from datetime import timedelta
import time
from oslo_config import cfg
@@ -25,12 +26,16 @@ from oslo_log import log
from watcher.common import clients
from watcher.common import exception
from watcher.common import utils as common_utils
from watcher.datasource import base
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class GnocchiHelper(object):
class GnocchiHelper(base.DataSourceBase):
NAME = 'gnocchi'
METRIC_MAP = base.DataSourceBase.METRIC_MAP['gnocchi']
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
@@ -44,34 +49,44 @@ class GnocchiHelper(object):
except Exception as e:
LOG.exception(e)
time.sleep(CONF.gnocchi_client.query_timeout)
raise
raise exception.DataSourceNotAvailable(datasource='gnocchi')
def statistic_aggregation(self,
resource_id,
metric,
granularity,
start_time=None,
stop_time=None,
aggregation='mean'):
def check_availability(self):
try:
self.query_retry(self.gnocchi.status.get)
except Exception:
return 'not available'
return 'available'
def list_metrics(self):
"""List the user's meters."""
try:
response = self.query_retry(f=self.gnocchi.metric.list)
except Exception:
return set()
else:
return set([metric['name'] for metric in response])
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
"""Representing a statistic aggregate by operators
:param metric: metric name of which we want the statistics
:param resource_id: id of resource to list statistics for
:param start_time: Start datetime from which metrics will be used
:param stop_time: End datetime from which metrics will be used
:param granularity: frequency of marking metric point, in seconds
:param resource_id: id of resource to list statistics for.
:param meter_name: meter name of which we want the statistics.
:param period: Period in seconds over which to group samples.
:param granularity: frequency of marking metric point, in seconds.
:param dimensions: dimensions (dict). This param isn't used in
Gnocchi datasource.
:param aggregation: Should be chosen in accordance with policy
aggregations
aggregations.
:param group_by: list of columns to group the metrics to be returned.
This param isn't used in Gnocchi datasource.
:return: value of aggregated metric
"""
if start_time is not None and not isinstance(start_time, datetime):
raise exception.InvalidParameter(parameter='start_time',
parameter_type=datetime)
if stop_time is not None and not isinstance(stop_time, datetime):
raise exception.InvalidParameter(parameter='stop_time',
parameter_type=datetime)
stop_time = datetime.utcnow()
start_time = stop_time - timedelta(seconds=(int(period)))
if not common_utils.is_uuid_like(resource_id):
kwargs = dict(query={"=": {"original_resource_id": resource_id}},
@@ -85,7 +100,7 @@ class GnocchiHelper(object):
resource_id = resources[0]['id']
raw_kwargs = dict(
metric=metric,
metric=meter_name,
start=start_time,
stop=stop_time,
resource_id=resource_id,
@@ -102,3 +117,69 @@ class GnocchiHelper(object):
# return value of latest measure
# measure has structure [time, granularity, value]
return statistics[-1][2]
def get_host_cpu_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_cpu_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_cpu_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_cpu_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_memory_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_memory_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_memory_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_ram_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_l3_cache_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_l3_cache_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_ram_allocated(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_ram_allocated')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_root_disk_allocated(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_root_disk_size')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_outlet_temperature(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_outlet_temp')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_inlet_temperature(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_inlet_temp')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_airflow(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_airflow')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_power(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_power')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)

View File

@@ -0,0 +1,78 @@
# -*- encoding: utf-8 -*-
# Copyright 2017 NEC Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher.common import exception
from watcher.datasource import base
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.datasource import monasca as mon
LOG = log.getLogger(__name__)
class DataSourceManager(object):
def __init__(self, config=None, osc=None):
self.osc = osc
self.config = config
self._ceilometer = None
self._monasca = None
self._gnocchi = None
self.metric_map = base.DataSourceBase.METRIC_MAP
self.datasources = self.config.datasources
@property
def ceilometer(self):
if self._ceilometer is None:
self._ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, ceilometer):
self._ceilometer = ceilometer
@property
def monasca(self):
if self._monasca is None:
self._monasca = mon.MonascaHelper(osc=self.osc)
return self._monasca
@monasca.setter
def monasca(self, monasca):
self._monasca = monasca
@property
def gnocchi(self):
if self._gnocchi is None:
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
def get_backend(self, metrics):
for datasource in self.datasources:
no_metric = False
for metric in metrics:
if (metric not in self.metric_map[datasource] or
self.metric_map[datasource].get(metric) is None):
no_metric = True
break
if not no_metric:
return getattr(self, datasource)
raise exception.NoSuchMetric()
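The selection loop in `get_backend` above picks the first configured datasource whose metric map covers every requested metric. A minimal, self-contained sketch of the same idea (the metric map, meter names and datasource order below are illustrative stand-ins, not Watcher's real map):

```python
# Hypothetical metric map: datasource -> {watcher metric -> backend meter}.
# A None entry means the datasource knows the metric but cannot serve it.
METRIC_MAP = {
    'gnocchi': {'host_cpu_usage': 'cpu_util', 'host_power': None},
    'monasca': {'host_cpu_usage': 'cpu.percent', 'host_power': 'power.watts'},
}

def get_backend(datasources, metrics):
    """Return the first datasource whose map covers every metric."""
    for datasource in datasources:
        mapping = METRIC_MAP[datasource]
        # A datasource qualifies only if every requested metric is both
        # present in its map and mapped to a real meter name.
        if all(mapping.get(metric) is not None for metric in metrics):
            return datasource
    raise LookupError('no datasource provides: %s' % ', '.join(metrics))

print(get_backend(['gnocchi', 'monasca'], ['host_cpu_usage']))
print(get_backend(['gnocchi', 'monasca'], ['host_cpu_usage', 'host_power']))
```

Because the first match wins, the operator-configured datasource order doubles as a priority list; `host_power` falls through to monasca here only because gnocchi maps it to None.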

View File

@@ -21,9 +21,14 @@ import datetime
from monascaclient import exc
from watcher.common import clients
from watcher.common import exception
from watcher.datasource import base
class MonascaHelper(base.DataSourceBase):
NAME = 'monasca'
METRIC_MAP = base.DataSourceBase.METRIC_MAP['monasca']
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
@@ -61,6 +66,18 @@ class MonascaHelper(object):
return start_timestamp, end_timestamp, period
def check_availability(self):
try:
self.query_retry(self.monasca.metrics.list)
except Exception:
return 'not available'
return 'available'
def list_metrics(self):
# TODO(alexchadin): this method should be implemented in accordance
# with the monasca API.
pass
def statistics_list(self, meter_name, dimensions, start_time=None,
end_time=None, period=None,):
"""List of statistics."""
@@ -81,38 +98,42 @@ class MonascaHelper(object):
return statistics
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
"""Representing a statistic aggregate by operators
:param resource_id: id of resource to list statistics for.
This param isn't used in Monasca datasource.
:param meter_name: meter names of which we want the statistics.
:param period: Sampling `period`: In seconds. If no period is given,
only one aggregate statistic is returned. If given, a
faceted result will be returned, divided into given
periods. Periods with no data are ignored.
:param granularity: frequency of marking metric point, in seconds.
This param isn't used in Ceilometer datasource.
:param dimensions: dimensions (dict).
:param aggregation: Should be either 'avg', 'count', 'min' or 'max'.
:param group_by: list of columns to group the metrics to be returned.
:return: A list of dict with each dict being a distinct result row
"""
if dimensions is None:
raise exception.UnsupportedDataSource(datasource='Monasca')
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(seconds=(int(period)))
if aggregation == 'mean':
aggregation = 'avg'
raw_kwargs = dict(
name=meter_name,
start_time=start_time.isoformat(),
end_time=stop_time.isoformat(),
dimensions=dimensions,
period=period,
statistics=aggregation,
group_by=group_by,
)
@@ -121,4 +142,69 @@ class MonascaHelper(object):
statistics = self.query_retry(
f=self.monasca.metrics.list_statistics, **kwargs)
cpu_usage = None
for stat in statistics:
avg_col_idx = stat['columns'].index(aggregation)
values = [r[avg_col_idx] for r in stat['statistics']]
value = float(sum(values)) / len(values)
cpu_usage = value
return cpu_usage
def get_host_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
metric_name = self.METRIC_MAP.get('host_cpu_usage')
node_uuid = resource_id.split('_')[0]
return self.statistic_aggregation(
meter_name=metric_name,
dimensions=dict(hostname=node_uuid),
period=period,
aggregation=aggregate
)
def get_instance_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
metric_name = self.METRIC_MAP.get('instance_cpu_usage')
return self.statistic_aggregation(
meter_name=metric_name,
dimensions=dict(resource_id=resource_id),
period=period,
aggregation=aggregate
)
def get_host_memory_usage(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_instance_memory_usage(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_instance_l3_cache_usage(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_instance_ram_allocated(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_instance_root_disk_allocated(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_host_outlet_temperature(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_host_inlet_temperature(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_host_airflow(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_host_power(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
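The averaging step in `statistic_aggregation` above locates the aggregation column by name in each statistics row set and averages its values. A stand-alone sketch of that reduction (the payload below only mimics the shape of monascaclient's `list_statistics` output; values are illustrative):

```python
# Each statistics entry carries a 'columns' header naming the row fields
# and a 'statistics' list of rows; the aggregation column is found by
# name and its values averaged into a single float.
statistics = [{
    'columns': ['timestamp', 'avg'],
    'statistics': [['2018-02-01T00:00:00Z', 10.0],
                   ['2018-02-01T00:05:00Z', 20.0]],
}]

aggregation = 'avg'
value = None
for stat in statistics:
    # Locate the aggregation column, then average the rows in it.
    col_idx = stat['columns'].index(aggregation)
    values = [row[col_idx] for row in stat['statistics']]
    value = float(sum(values)) / len(values)

print(value)  # 15.0
```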

View File

@@ -0,0 +1,34 @@
"""Set name for Audit as part of backward compatibility
Revision ID: a86240e89a29
Revises: 3cfc94cecf4e
Create Date: 2017-12-21 13:00:09.278587
"""
# revision identifiers, used by Alembic.
revision = 'a86240e89a29'
down_revision = '3cfc94cecf4e'
from alembic import op
from sqlalchemy.orm import sessionmaker
from watcher.db.sqlalchemy import models
def upgrade():
connection = op.get_bind()
session = sessionmaker()
s = session(bind=connection)
for audit in s.query(models.Audit).filter(models.Audit.name.is_(None)).all():
strategy_name = s.query(models.Strategy).filter_by(id=audit.strategy_id).one().name
audit.update({'name': strategy_name + '-' + str(audit.created_at)})
s.commit()
def downgrade():
connection = op.get_bind()
session = sessionmaker()
s = session(bind=connection)
for audit in s.query(models.Audit).filter(models.Audit.name.isnot(None)).all():
audit.update({'name': None})
s.commit()
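A side note on filtering for NULL names in migrations like this one: Python's `is None` applied to a SQLAlchemy column is an identity test on the column object itself (always False), so no `IS NULL` clause is ever emitted; the overloaded `== None` / `.is_(None)` forms build the SQL expression instead. A stand-in class sketches the difference without needing SQLAlchemy installed (the class is hypothetical, for demonstration only):

```python
# FakeColumn mimics how a SQLAlchemy column overloads comparison
# operators to build SQL expressions rather than booleans.
class FakeColumn:
    def __eq__(self, other):
        # SQLAlchemy's `== None` builds an IS NULL expression.
        if other is None:
            return 'name IS NULL'
        return NotImplemented

    def is_(self, other):
        # The explicit .is_() operator does the same, unambiguously.
        return 'name IS NULL' if other is None else 'name IS NOT NULL'

name = FakeColumn()
print(name is None)    # False: plain identity test, no SQL is generated
print(name == None)    # 'name IS NULL'  (flagged E711 in real code)
print(name.is_(None))  # 'name IS NULL'
```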

View File

@@ -129,14 +129,25 @@ class ContinuousAuditHandler(base.AuditHandler):
audits = objects.Audit.list(
audit_context, filters=audit_filters, eager=True)
scheduler_job_args = [
(job.args[0].uuid, job) for job
in self.scheduler.get_jobs()
if job.name == 'execute_audit']
scheduler_jobs = dict(scheduler_job_args)
# if audit isn't in active states, audit's job should be removed
for job in scheduler_jobs.values():
if self._is_audit_inactive(job.args[0]):
scheduler_jobs.pop(job.args[0].uuid)
for audit in audits:
existing_job = scheduler_jobs.get(audit.uuid, None)
# if audit is not presented in scheduled audits yet,
# just add a new audit job.
# if audit is already in the job queue, and interval has changed,
# we need to remove the old job and add a new one.
if (existing_job is None) or (
existing_job and
audit.interval != existing_job.args[0].interval):
if existing_job:
self.scheduler.remove_job(existing_job.id)
# if interval is provided with seconds
if utils.is_int_like(audit.interval):
# if audit has already been provided and we need

View File

@@ -23,7 +23,10 @@ Unclassified = goals.Unclassified
WorkloadBalancing = goals.WorkloadBalancing
NoisyNeighbor = goals.NoisyNeighborOptimization
SavingEnergy = goals.SavingEnergy
HardwareMaintenance = goals.HardwareMaintenance
__all__ = ("Dummy", "ServerConsolidation", "ThermalOptimization",
"Unclassified", "WorkloadBalancing",
"NoisyNeighborOptimization", "SavingEnergy")
"NoisyNeighborOptimization", "SavingEnergy",
"HardwareMaintenance")

View File

@@ -112,3 +112,118 @@ class InstanceMigrationsCount(IndicatorSpecification):
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class LiveInstanceMigrateCount(IndicatorSpecification):
def __init__(self):
super(LiveInstanceMigrateCount, self).__init__(
name="live_migrate_instance_count",
description=_("The number of instances actually live migrated."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class PlannedLiveInstanceMigrateCount(IndicatorSpecification):
def __init__(self):
super(PlannedLiveInstanceMigrateCount, self).__init__(
name="planned_live_migrate_instance_count",
description=_("The number of instances planned to live migrate."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class ColdInstanceMigrateCount(IndicatorSpecification):
def __init__(self):
super(ColdInstanceMigrateCount, self).__init__(
name="cold_migrate_instance_count",
description=_("The number of instances actually cold migrated."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class PlannedColdInstanceMigrateCount(IndicatorSpecification):
def __init__(self):
super(PlannedColdInstanceMigrateCount, self).__init__(
name="planned_cold_migrate_instance_count",
description=_("The number of instances planned to cold migrate."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class VolumeMigrateCount(IndicatorSpecification):
def __init__(self):
super(VolumeMigrateCount, self).__init__(
name="volume_migrate_count",
description=_("The number of detached volumes actually migrated."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class PlannedVolumeMigrateCount(IndicatorSpecification):
def __init__(self):
super(PlannedVolumeMigrateCount, self).__init__(
name="planned_volume_migrate_count",
description=_("The number of detached volumes planned"
" to migrate."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class VolumeUpdateCount(IndicatorSpecification):
def __init__(self):
super(VolumeUpdateCount, self).__init__(
name="volume_update_count",
description=_("The number of attached volumes actually"
" migrated."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class PlannedVolumeUpdateCount(IndicatorSpecification):
def __init__(self):
super(PlannedVolumeUpdateCount, self).__init__(
name="planned_volume_update_count",
description=_("The number of attached volumes planned to"
" migrate."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)

View File

@@ -53,3 +53,86 @@ class ServerConsolidation(base.EfficacySpecification):
))
return global_efficacy
class HardwareMaintenance(base.EfficacySpecification):
def get_indicators_specifications(self):
return [
indicators.LiveInstanceMigrateCount(),
indicators.PlannedLiveInstanceMigrateCount(),
indicators.ColdInstanceMigrateCount(),
indicators.PlannedColdInstanceMigrateCount(),
indicators.VolumeMigrateCount(),
indicators.PlannedVolumeMigrateCount(),
indicators.VolumeUpdateCount(),
indicators.PlannedVolumeUpdateCount()
]
def get_global_efficacy_indicator(self, indicators_map=None):
li_value = 0
if (indicators_map and
indicators_map.planned_live_migrate_instance_count > 0):
li_value = (
float(indicators_map.live_migrate_instance_count)
/ float(indicators_map.planned_live_migrate_instance_count)
* 100
)
li_indicator = efficacy.Indicator(
name="live_instance_migrate_ratio",
description=_("Ratio of actual live migrated instances "
"to planned live migrate instances."),
unit='%',
value=li_value)
ci_value = 0
if (indicators_map and
indicators_map.planned_cold_migrate_instance_count > 0):
ci_value = (
float(indicators_map.cold_migrate_instance_count)
/ float(indicators_map.planned_cold_migrate_instance_count)
* 100
)
ci_indicator = efficacy.Indicator(
name="cold_instance_migrate_ratio",
description=_("Ratio of actual cold migrated instances "
"to planned cold migrate instances."),
unit='%',
value=ci_value)
dv_value = 0
if (indicators_map and
indicators_map.planned_volume_migrate_count > 0):
dv_value = (float(indicators_map.volume_migrate_count) /
float(indicators_map.planned_volume_migrate_count)
* 100)
dv_indicator = efficacy.Indicator(
name="volume_migrate_ratio",
description=_("Ratio of actual detached volumes migrated to"
" planned detached volumes migrate."),
unit='%',
value=dv_value)
av_value = 0
if (indicators_map and
indicators_map.planned_volume_update_count > 0):
av_value = (float(indicators_map.volume_update_count) /
float(indicators_map.planned_volume_update_count)
* 100)
av_indicator = efficacy.Indicator(
name="volume_update_ratio",
description=_("Ratio of actual attached volumes migrated to"
" planned attached volumes migrate."),
unit='%',
value=av_value)
return [li_indicator,
ci_indicator,
dv_indicator,
av_indicator]
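Each of the four indicators above follows the same guard-then-divide pattern. A minimal sketch with hypothetical counts (note the guard is on the denominator, which is what prevents a ZeroDivisionError when nothing was planned):

```python
def migrate_ratio(actual, planned):
    """Percentage of planned migrations that were actually performed."""
    # Guarding on the denominator keeps the ratio at 0 when nothing was
    # planned, instead of raising ZeroDivisionError.
    if planned > 0:
        return float(actual) / float(planned) * 100
    return 0

print(migrate_ratio(3, 4))  # 75.0
print(migrate_ratio(0, 0))  # 0
```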

View File

@@ -216,3 +216,28 @@ class SavingEnergy(base.Goal):
def get_efficacy_specification(cls):
"""The efficacy spec for the current goal"""
return specs.Unclassified()
class HardwareMaintenance(base.Goal):
"""HardwareMaintenance
This goal is used to migrate instances and volumes off a set of compute
and storage nodes that are under maintenance
"""
@classmethod
def get_name(cls):
return "hardware_maintenance"
@classmethod
def get_display_name(cls):
return _("Hardware Maintenance")
@classmethod
def get_translatable_display_name(cls):
return "Hardware Maintenance"
@classmethod
def get_efficacy_specification(cls):
"""The efficacy spec for the current goal"""
return specs.HardwareMaintenance()

View File

@@ -40,6 +40,8 @@ See :doc:`../architecture` for more details on this component.
from watcher.common import service_manager
from watcher.decision_engine.messaging import audit_endpoint
from watcher.decision_engine.model.collector import manager
from watcher.decision_engine.strategy.strategies import base \
as strategy_endpoint
from watcher import conf
@@ -70,7 +72,8 @@ class DecisionEngineManager(service_manager.ServiceManager):
@property
def conductor_endpoints(self):
return [audit_endpoint.AuditEndpoint,
strategy_endpoint.StrategyEndpoint]
@property
def notification_endpoints(self):

View File

@@ -23,6 +23,7 @@ from watcher.decision_engine.model.collector import base
from watcher.decision_engine.model import element
from watcher.decision_engine.model import model_root
from watcher.decision_engine.model.notification import cinder
from watcher.decision_engine.scope import storage as storage_scope
LOG = log.getLogger(__name__)
@@ -33,6 +34,85 @@ class CinderClusterDataModelCollector(base.BaseClusterDataModelCollector):
The Cinder cluster data model collector creates an in-memory
representation of the resources exposed by the storage service.
"""
SCHEMA = {
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "array",
"items": {
"type": "object",
"properties": {
"availability_zones": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
}
},
"additionalProperties": False
}
},
"volume_types": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
}
},
"additionalProperties": False
}
},
"exclude": {
"type": "array",
"items": {
"type": "object",
"properties": {
"storage_pools": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
}
},
"additionalProperties": False
}
},
"volumes": {
"type": "array",
"items": {
"type": "object",
"properties": {
"uuid": {
"type": "string"
}
},
"additionalProperties": False
}
},
"projects": {
"type": "array",
"items": {
"type": "object",
"properties": {
"uuid": {
"type": "string"
}
},
"additionalProperties": False
}
},
"additionalProperties": False
}
}
}
},
"additionalProperties": False
}
}
def __init__(self, config, osc=None):
super(CinderClusterDataModelCollector, self).__init__(config, osc)
@@ -55,7 +135,9 @@ class CinderClusterDataModelCollector(base.BaseClusterDataModelCollector):
]
def get_audit_scope_handler(self, audit_scope):
self._audit_scope_handler = storage_scope.StorageScope(
audit_scope, self.config)
return self._audit_scope_handler
def execute(self):
"""Build the storage cluster data model"""

View File

@@ -0,0 +1,97 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corporation
#
# Authors:Yumeng Bao <bao.yumeng@zte.com.cn>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher.common import ironic_helper
from watcher.decision_engine.model.collector import base
from watcher.decision_engine.model import element
from watcher.decision_engine.model import model_root
LOG = log.getLogger(__name__)
class BaremetalClusterDataModelCollector(base.BaseClusterDataModelCollector):
"""Baremetal cluster data model collector
The Baremetal cluster data model collector creates an in-memory
representation of the resources exposed by the baremetal service.
"""
def __init__(self, config, osc=None):
super(BaremetalClusterDataModelCollector, self).__init__(config, osc)
@property
def notification_endpoints(self):
"""Associated notification endpoints
:return: Associated notification endpoints
:rtype: List of :py:class:`~.EventsNotificationEndpoint` instances
"""
return None
def get_audit_scope_handler(self, audit_scope):
return None
def execute(self):
"""Build the baremetal cluster data model"""
LOG.debug("Building latest Baremetal cluster data model")
builder = ModelBuilder(self.osc)
return builder.execute()
class ModelBuilder(object):
"""Build the graph-based model
This model builder adds the following data"
- Baremetal-related knowledge (Ironic)
"""
def __init__(self, osc):
self.osc = osc
self.model = model_root.BaremetalModelRoot()
self.ironic_helper = ironic_helper.IronicHelper(osc=self.osc)
def add_ironic_node(self, node):
# Build and add base node.
ironic_node = self.build_ironic_node(node)
self.model.add_node(ironic_node)
def build_ironic_node(self, node):
"""Build a Baremetal node from a Ironic node
:param node: A ironic node
:type node: :py:class:`~ironicclient.v1.node.Node`
"""
# build up the ironic node.
node_attributes = {
"uuid": node.uuid,
"power_state": node.power_state,
"maintenance": node.maintenance,
"maintenance_reason": node.maintenance_reason,
"extra": {"compute_node_id": node.extra.compute_node_id}
}
ironic_node = element.IronicNode(**node_attributes)
return ironic_node
def execute(self):
for node in self.ironic_helper.get_ironic_node_list():
self.add_ironic_node(node)
return self.model

View File

@@ -158,6 +158,7 @@ class NovaClusterDataModelCollector(base.BaseClusterDataModelCollector):
nova.LegacyInstanceDeletedEnd(self),
nova.LegacyLiveMigratedEnd(self),
nova.LegacyInstanceResizeConfirmEnd(self),
nova.LegacyInstanceRebuildEnd(self),
]
def get_audit_scope_handler(self, audit_scope):

View File

@@ -23,6 +23,7 @@ from watcher.decision_engine.model.element import volume
ServiceState = node.ServiceState
ComputeNode = node.ComputeNode
StorageNode = node.StorageNode
IronicNode = node.IronicNode
Pool = node.Pool
InstanceState = instance.InstanceState
@@ -37,4 +38,5 @@ __all__ = ['ServiceState',
'StorageNode',
'Pool',
'VolumeState',
'Volume',
'IronicNode']

View File

@@ -0,0 +1,33 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import six
from watcher.decision_engine.model.element import base
from watcher.objects import fields as wfields
@six.add_metaclass(abc.ABCMeta)
class BaremetalResource(base.Element):
VERSION = '1.0'
fields = {
"uuid": wfields.StringField(),
"human_id": wfields.StringField(default=""),
}

View File

@@ -42,6 +42,9 @@ class InstanceState(enum.Enum):
class Instance(compute_resource.ComputeResource):
fields = {
# If the resource is excluded by the scope,
# 'watcher_exclude' property will be set True.
"watcher_exclude": wfields.BooleanField(default=False),
"state": wfields.StringField(default=InstanceState.ACTIVE.value),
"memory": wfields.NonNegativeIntegerField(),

View File

@@ -16,6 +16,7 @@
import enum
from watcher.decision_engine.model.element import baremetal_resource
from watcher.decision_engine.model.element import compute_resource
from watcher.decision_engine.model.element import storage_resource
from watcher.objects import base
@@ -56,7 +57,7 @@ class StorageNode(storage_resource.StorageResource):
"zone": wfields.StringField(),
"status": wfields.StringField(default=ServiceState.ENABLED.value),
"state": wfields.StringField(default=ServiceState.ONLINE.value),
"volume_type": wfields.StringField()
"volume_type": wfields.ListOfStringsField()
}
def accept(self, visitor):
@@ -78,3 +79,17 @@ class Pool(storage_resource.StorageResource):
def accept(self, visitor):
raise NotImplementedError()
@base.WatcherObjectRegistry.register_if(False)
class IronicNode(baremetal_resource.BaremetalResource):
fields = {
"power_state": wfields.StringField(),
"maintenance": wfields.BooleanField(),
"maintenance_reason": wfields.StringField(),
"extra": wfields.DictField()
}
def accept(self, visitor):
raise NotImplementedError()

View File

@@ -508,7 +508,13 @@ class StorageModelRoot(nx.DiGraph, base.Model):
root = etree.fromstring(data)
for cn in root.findall('.//StorageNode'):
ndata = {}
for attr, val in cn.items():
ndata[attr] = val
volume_type = ndata.get('volume_type')
if volume_type:
ndata['volume_type'] = [volume_type]
node = element.StorageNode(**ndata)
model.add_node(node)
for p in root.findall('.//Pool'):
@@ -539,3 +545,85 @@ class StorageModelRoot(nx.DiGraph, base.Model):
def is_isomorphic(cls, G1, G2):
return nx.algorithms.isomorphism.isomorph.is_isomorphic(
G1, G2)
class BaremetalModelRoot(nx.DiGraph, base.Model):
"""Cluster graph for an Openstack cluster: Baremetal Cluster."""
def __init__(self, stale=False):
super(BaremetalModelRoot, self).__init__()
self.stale = stale
def __nonzero__(self):
return not self.stale
__bool__ = __nonzero__
@staticmethod
def assert_node(obj):
if not isinstance(obj, element.IronicNode):
raise exception.IllegalArgumentException(
message=_("'obj' argument type is not valid: %s") % type(obj))
@lockutils.synchronized("baremetal_model")
def add_node(self, node):
self.assert_node(node)
super(BaremetalModelRoot, self).add_node(node.uuid, node)
@lockutils.synchronized("baremetal_model")
def remove_node(self, node):
self.assert_node(node)
try:
super(BaremetalModelRoot, self).remove_node(node.uuid)
except nx.NetworkXError as exc:
LOG.exception(exc)
raise exception.IronicNodeNotFound(name=node.uuid)
@lockutils.synchronized("baremetal_model")
def get_all_ironic_nodes(self):
return {uuid: cn for uuid, cn in self.nodes(data=True)
if isinstance(cn, element.IronicNode)}
@lockutils.synchronized("baremetal_model")
def get_node_by_uuid(self, uuid):
try:
return self._get_by_uuid(uuid)
except exception.BaremetalResourceNotFound:
raise exception.IronicNodeNotFound(name=uuid)
def _get_by_uuid(self, uuid):
try:
return self.node[uuid]
except Exception as exc:
LOG.exception(exc)
raise exception.BaremetalResourceNotFound(name=uuid)
def to_string(self):
return self.to_xml()
def to_xml(self):
root = etree.Element("ModelRoot")
# Build Ironic node tree
for cn in sorted(self.get_all_ironic_nodes().values(),
key=lambda cn: cn.uuid):
ironic_node_el = cn.as_xml_element()
root.append(ironic_node_el)
return etree.tostring(root, pretty_print=True).decode('utf-8')
@classmethod
def from_xml(cls, data):
model = cls()
root = etree.fromstring(data)
for cn in root.findall('.//IronicNode'):
node = element.IronicNode(**cn.attrib)
model.add_node(node)
return model
@classmethod
def is_isomorphic(cls, G1, G2):
return nx.algorithms.isomorphism.isomorph.is_isomorphic(
G1, G2)
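The `to_xml`/`from_xml` pair above round-trips each Ironic node through its XML attributes. A stdlib-only sketch of the same idea (`xml.etree` in place of lxml; the attribute values are hypothetical, and the real model serializes fields through `as_xml_element` and rebuilds with `element.IronicNode(**cn.attrib)`):

```python
import xml.etree.ElementTree as etree

# Serialize node fields as XML attributes under a ModelRoot element,
# then rebuild plain dicts from them -- the same round-trip
# BaremetalModelRoot performs with lxml.
node = {'uuid': 'abc-123', 'power_state': 'power on', 'maintenance': 'True'}

root = etree.Element('ModelRoot')
etree.SubElement(root, 'IronicNode', attrib=node)
data = etree.tostring(root)

parsed = etree.fromstring(data)
rebuilt = [dict(cn.attrib) for cn in parsed.findall('.//IronicNode')]
print(rebuilt[0] == node)  # True
```

Note this flat-attribute scheme only survives string-valued fields, which is why `from_xml` implementations typically coerce types (as the StorageNode hunk above does for `volume_type`).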

View File

@@ -497,3 +497,30 @@ class LegacyInstanceResizeConfirmEnd(UnversionedNotificationEndpoint):
instance = self.get_or_create_instance(instance_uuid, node_uuid)
self.legacy_update_instance(instance, payload)
class LegacyInstanceRebuildEnd(UnversionedNotificationEndpoint):
@property
def filter_rule(self):
"""Nova compute.instance.rebuild.end filter"""
return filtering.NotificationFilter(
publisher_id=self.publisher_id_regex,
event_type='compute.instance.rebuild.end',
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
LOG.debug(payload)
instance_uuid = payload['instance_id']
node_uuid = payload.get('node')
instance = self.get_or_create_instance(instance_uuid, node_uuid)
self.legacy_update_instance(instance, payload)

View File

@@ -40,6 +40,10 @@ class DecisionEngineAPI(service.Service):
self.conductor_client.cast(
context, 'trigger_audit', audit_uuid=audit_uuid)
def get_strategy_info(self, context, strategy_name):
return self.conductor_client.call(
context, 'get_strategy_info', strategy_name=strategy_name)
class DecisionEngineAPIManager(service_manager.ServiceManager):

View File

@@ -0,0 +1,46 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2018 ZTE Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher.decision_engine.scope import base
LOG = log.getLogger(__name__)
class BaremetalScope(base.BaseScope):
"""Baremetal Audit Scope Handler"""
def __init__(self, scope, config, osc=None):
super(BaremetalScope, self).__init__(scope, config)
self._osc = osc
def get_scoped_model(self, cluster_model):
"""Leave only nodes and instances proposed in the audit scope"""
if not cluster_model:
return None
for scope in self.scope:
baremetal_scope = scope.get('baremetal')
if not baremetal_scope:
return cluster_model
# TODO(yumeng-bao): currently self.scope is always []
# Audit scoper for baremetal data model will be implemented:
# https://blueprints.launchpad.net/watcher/+spec/audit-scoper-for-baremetal-data-model
return cluster_model

View File

@@ -36,6 +36,12 @@ class ComputeScope(base.BaseScope):
node = cluster_model.get_node_by_uuid(node_name)
cluster_model.delete_instance(instance, node)
def update_exclude_instance(self, cluster_model, instance, node_name):
node = cluster_model.get_node_by_uuid(node_name)
cluster_model.unmap_instance(instance, node)
instance.update({"watcher_exclude": True})
cluster_model.map_instance(instance, node)
def _check_wildcard(self, aggregate_list):
if '*' in aggregate_list:
if len(aggregate_list) == 1:
@@ -108,8 +114,9 @@ class ComputeScope(base.BaseScope):
self.remove_instance(cluster_model, instance, node_uuid)
cluster_model.remove_node(node)
def update_exclude_instance_in_model(
self, instances_to_exclude, cluster_model):
for instance_uuid in instances_to_exclude:
try:
node_name = cluster_model.get_node_by_instance_uuid(
instance_uuid).uuid
@@ -119,7 +126,7 @@ class ComputeScope(base.BaseScope):
" instance was hosted on.",
instance_uuid)
continue
self.update_exclude_instance(
cluster_model,
cluster_model.get_instance_by_uuid(instance_uuid),
node_name)
@@ -147,12 +154,19 @@ class ComputeScope(base.BaseScope):
nodes_to_remove = set()
instances_to_exclude = []
instance_metadata = []
compute_scope = []
model_hosts = list(cluster_model.get_all_compute_nodes().keys())
if not self.scope:
return cluster_model
for scope in self.scope:
compute_scope = scope.get('compute')
if not compute_scope:
return cluster_model
for rule in compute_scope:
if 'host_aggregates' in rule:
self._collect_aggregates(rule['host_aggregates'],
allowed_nodes)
@@ -165,7 +179,7 @@ class ComputeScope(base.BaseScope):
nodes=nodes_to_exclude,
instance_metadata=instance_metadata)
instances_to_remove = set(instances_to_exclude)
instances_to_exclude = set(instances_to_exclude)
if allowed_nodes:
nodes_to_remove = set(model_hosts) - set(allowed_nodes)
nodes_to_remove.update(nodes_to_exclude)
@@ -174,8 +188,9 @@ class ComputeScope(base.BaseScope):
if instance_metadata and self.config.check_optimize_metadata:
self.exclude_instances_with_given_metadata(
instance_metadata, cluster_model, instances_to_remove)
instance_metadata, cluster_model, instances_to_exclude)
self.remove_instances_from_model(instances_to_remove, cluster_model)
self.update_exclude_instance_in_model(instances_to_exclude,
cluster_model)
return cluster_model
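The hunk above changes excluded instances from being removed outright to being unmapped, flagged with `watcher_exclude`, and re-mapped, so they stay visible in the model. A minimal sketch of that behavior, using a hypothetical stub model (not Watcher's real `ModelRoot` classes):

```python
# Hypothetical stub standing in for the compute cluster data model.
class StubModel:
    def __init__(self):
        self.instances = {}  # uuid -> instance dict
        self.mapping = {}    # uuid -> node name

    def map_instance(self, instance, node):
        self.instances[instance["uuid"]] = instance
        self.mapping[instance["uuid"]] = node

    def unmap_instance(self, instance, node):
        self.mapping.pop(instance["uuid"], None)


def update_exclude_instance(model, instance, node):
    # Mirrors ComputeScope.update_exclude_instance: unmap, flag, re-map.
    model.unmap_instance(instance, node)
    instance.update({"watcher_exclude": True})
    model.map_instance(instance, node)


model = StubModel()
vm = {"uuid": "vm-1"}
model.map_instance(vm, "node-1")
update_exclude_instance(model, vm, "node-1")
# The instance is still in the model, just flagged for exclusion.
print(model.instances["vm-1"]["watcher_exclude"])
```

Strategies can then skip flagged instances without losing sight of the load they place on their host, which removing them from the model would hide.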

View File

@@ -0,0 +1,165 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher.common import cinder_helper
from watcher.common import exception
from watcher.decision_engine.scope import base
LOG = log.getLogger(__name__)
class StorageScope(base.BaseScope):
"""Storage Audit Scope Handler"""
def __init__(self, scope, config, osc=None):
super(StorageScope, self).__init__(scope, config)
self._osc = osc
self.wrapper = cinder_helper.CinderHelper(osc=self._osc)
def _collect_vtype(self, volume_types, allowed_nodes):
service_list = self.wrapper.get_storage_node_list()
vt_names = [volume_type['name'] for volume_type in volume_types]
include_all_nodes = False
if '*' in vt_names:
if len(vt_names) == 1:
include_all_nodes = True
else:
raise exception.WildcardCharacterIsUsed(
resource="volume_types")
for service in service_list:
if include_all_nodes:
allowed_nodes.append(service.host)
continue
backend = service.host.split('@')[1]
v_types = self.wrapper.get_volume_type_by_backendname(
backend)
for volume_type in v_types:
if volume_type in vt_names:
# Note(adisky): It can generate duplicate values,
# but they will later be converted to a set
allowed_nodes.append(service.host)
def _collect_zones(self, availability_zones, allowed_nodes):
service_list = self.wrapper.get_storage_node_list()
zone_names = [zone['name'] for zone
in availability_zones]
include_all_nodes = False
if '*' in zone_names:
if len(zone_names) == 1:
include_all_nodes = True
else:
raise exception.WildcardCharacterIsUsed(
resource="availability zones")
for service in service_list:
if service.zone in zone_names or include_all_nodes:
allowed_nodes.append(service.host)
def exclude_resources(self, resources, **kwargs):
pools_to_exclude = kwargs.get('pools')
volumes_to_exclude = kwargs.get('volumes')
projects_to_exclude = kwargs.get('projects')
for resource in resources:
if 'storage_pools' in resource:
pools_to_exclude.extend(
[storage_pool['name'] for storage_pool
in resource['storage_pools']])
elif 'volumes' in resource:
volumes_to_exclude.extend(
[volume['uuid'] for volume in
resource['volumes']])
elif 'projects' in resource:
projects_to_exclude.extend(
[project['uuid'] for project in
resource['projects']])
def exclude_pools(self, pools_to_exclude, cluster_model):
for pool_name in pools_to_exclude:
pool = cluster_model.get_pool_by_pool_name(pool_name)
volumes = cluster_model.get_pool_volumes(pool)
for volume in volumes:
cluster_model.remove_volume(volume)
cluster_model.remove_pool(pool)
def exclude_volumes(self, volumes_to_exclude, cluster_model):
for volume_uuid in volumes_to_exclude:
volume = cluster_model.get_volume_by_uuid(volume_uuid)
cluster_model.remove_volume(volume)
def exclude_projects(self, projects_to_exclude, cluster_model):
all_volumes = cluster_model.get_all_volumes()
for volume_uuid in all_volumes:
volume = all_volumes.get(volume_uuid)
if volume.project_id in projects_to_exclude:
cluster_model.remove_volume(volume)
def remove_nodes_from_model(self, nodes_to_remove, cluster_model):
for hostname in nodes_to_remove:
node = cluster_model.get_node_by_name(hostname)
pools = cluster_model.get_node_pools(node)
for pool in pools:
volumes = cluster_model.get_pool_volumes(pool)
for volume in volumes:
cluster_model.remove_volume(volume)
cluster_model.remove_pool(pool)
cluster_model.remove_node(node)
def get_scoped_model(self, cluster_model):
"""Leave only nodes, pools and volumes proposed in the audit scope"""
if not cluster_model:
return None
allowed_nodes = []
nodes_to_remove = set()
volumes_to_exclude = []
projects_to_exclude = []
pools_to_exclude = []
model_hosts = list(cluster_model.get_all_storage_nodes().keys())
storage_scope = []
for scope in self.scope:
storage_scope = scope.get('storage')
if not storage_scope:
return cluster_model
for rule in storage_scope:
if 'volume_types' in rule:
self._collect_vtype(rule['volume_types'],
allowed_nodes)
elif 'availability_zones' in rule:
self._collect_zones(rule['availability_zones'],
allowed_nodes)
elif 'exclude' in rule:
self.exclude_resources(
rule['exclude'], pools=pools_to_exclude,
volumes=volumes_to_exclude,
projects=projects_to_exclude)
if allowed_nodes:
nodes_to_remove = set(model_hosts) - set(allowed_nodes)
self.remove_nodes_from_model(nodes_to_remove, cluster_model)
self.exclude_pools(pools_to_exclude, cluster_model)
self.exclude_volumes(volumes_to_exclude, cluster_model)
self.exclude_projects(projects_to_exclude, cluster_model)
return cluster_model
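From the parsing loop in `get_scoped_model` above, a storage audit scope is a list of dicts keyed by `storage`, each holding rules for `volume_types`, `availability_zones`, and `exclude`. A sketch of that structure with made-up names and UUIDs (the keys match the rules handled above; the values are illustrative):

```python
# Illustrative audit-scope document for StorageScope (values are hypothetical).
audit_scope = [
    {"storage": [
        {"availability_zones": [{"name": "nova"}]},
        {"volume_types": [{"name": "lvm"}]},
        {"exclude": [
            {"volumes": [{"uuid": "volume-uuid-1"}]},
            {"projects": [{"uuid": "project-uuid-1"}]},
        ]},
    ]},
]

# Mimic the loop in get_scoped_model to pull out each rule type.
storage_scope = audit_scope[0].get("storage")
rule_keys = [next(iter(rule)) for rule in storage_scope]
print(rule_keys)
```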

View File

@@ -21,11 +21,15 @@ from watcher.decision_engine.strategy.strategies import dummy_with_scorer
from watcher.decision_engine.strategy.strategies import noisy_neighbor
from watcher.decision_engine.strategy.strategies import outlet_temp_control
from watcher.decision_engine.strategy.strategies import saving_energy
from watcher.decision_engine.strategy.strategies import \
storage_capacity_balance
from watcher.decision_engine.strategy.strategies import uniform_airflow
from watcher.decision_engine.strategy.strategies import \
vm_workload_consolidation
from watcher.decision_engine.strategy.strategies import workload_balance
from watcher.decision_engine.strategy.strategies import workload_stabilization
from watcher.decision_engine.strategy.strategies import zone_migration
Actuator = actuation.Actuator
BasicConsolidation = basic_consolidation.BasicConsolidation
@@ -33,13 +37,16 @@ OutletTempControl = outlet_temp_control.OutletTempControl
DummyStrategy = dummy_strategy.DummyStrategy
DummyWithScorer = dummy_with_scorer.DummyWithScorer
SavingEnergy = saving_energy.SavingEnergy
StorageCapacityBalance = storage_capacity_balance.StorageCapacityBalance
VMWorkloadConsolidation = vm_workload_consolidation.VMWorkloadConsolidation
WorkloadBalance = workload_balance.WorkloadBalance
WorkloadStabilization = workload_stabilization.WorkloadStabilization
UniformAirflow = uniform_airflow.UniformAirflow
NoisyNeighbor = noisy_neighbor.NoisyNeighbor
ZoneMigration = zone_migration.ZoneMigration
__all__ = ("Actuator", "BasicConsolidation", "OutletTempControl",
"DummyStrategy", "DummyWithScorer", "VMWorkloadConsolidation",
"WorkloadBalance", "WorkloadStabilization", "UniformAirflow",
"NoisyNeighbor", "SavingEnergy")
"NoisyNeighbor", "SavingEnergy", "StorageCapacityBalance",
"ZoneMigration")

View File

@@ -46,12 +46,76 @@ from watcher.common import context
from watcher.common import exception
from watcher.common.loader import loadable
from watcher.common import utils
from watcher.datasource import manager as ds_manager
from watcher.decision_engine.loading import default as loading
from watcher.decision_engine.model.collector import manager
from watcher.decision_engine.solution import default
from watcher.decision_engine.strategy.common import level
class StrategyEndpoint(object):
def __init__(self, messaging):
self._messaging = messaging
def _collect_metrics(self, strategy, datasource):
metrics = []
if not datasource:
return {'type': 'Metrics', 'state': metrics,
'mandatory': False, 'comment': ''}
else:
ds_metrics = datasource.list_metrics()
if ds_metrics is None:
raise exception.DataSourceNotAvailable(
datasource=datasource.NAME)
else:
for metric in strategy.DATASOURCE_METRICS:
original_metric_name = datasource.METRIC_MAP.get(metric)
if original_metric_name in ds_metrics:
metrics.append({original_metric_name: 'available'})
else:
metrics.append({original_metric_name: 'not available'})
return {'type': 'Metrics', 'state': metrics,
'mandatory': False, 'comment': ''}
def _get_datasource_status(self, strategy, datasource):
if not datasource:
state = "Datasource is not present for this strategy"
else:
state = "%s: %s" % (datasource.NAME,
datasource.check_availability())
return {'type': 'Datasource',
'state': state,
'mandatory': True, 'comment': ''}
def _get_cdm(self, strategy):
models = []
for model in ['compute_model', 'storage_model', 'baremetal_model']:
try:
getattr(strategy, model)
except Exception:
models.append({model: 'not available'})
else:
models.append({model: 'available'})
return {'type': 'CDM', 'state': models,
'mandatory': True, 'comment': ''}
def get_strategy_info(self, context, strategy_name):
strategy = loading.DefaultStrategyLoader().load(strategy_name)
try:
is_datasources = getattr(strategy.config, 'datasources', None)
if is_datasources:
datasource = getattr(strategy, 'datasource_backend')
else:
datasource = getattr(strategy, strategy.config.datasource)
except (AttributeError, IndexError):
datasource = []
available_datasource = self._get_datasource_status(strategy,
datasource)
available_metrics = self._collect_metrics(strategy, datasource)
available_cdm = self._get_cdm(strategy)
return [available_datasource, available_metrics, available_cdm]
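The `_collect_metrics` helper above maps each strategy metric through the datasource's `METRIC_MAP` and checks membership in the datasource's advertised metric list. A self-contained sketch of that check, reusing the ceilometer metric names from `BasicConsolidation` (the `ds_metrics` list is an assumption for illustration):

```python
# Metric names taken from BasicConsolidation's METRIC_NAMES for ceilometer;
# ds_metrics is a hypothetical list of what the datasource reports.
METRIC_MAP = {"host_cpu_usage": "compute.node.cpu.percent",
              "instance_cpu_usage": "cpu_util"}
ds_metrics = ["compute.node.cpu.percent"]

state = []
for metric in ["host_cpu_usage", "instance_cpu_usage"]:
    name = METRIC_MAP.get(metric)
    state.append({name: "available" if name in ds_metrics else "not available"})
print(state)
```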
@six.add_metaclass(abc.ABCMeta)
class BaseStrategy(loadable.Loadable):
"""A base class for all the strategies
@@ -60,6 +124,8 @@ class BaseStrategy(loadable.Loadable):
Solution for a given Goal.
"""
DATASOURCE_METRICS = []
def __init__(self, config, osc=None):
"""Constructor: the signature should be identical within the subclasses
@@ -82,8 +148,10 @@ class BaseStrategy(loadable.Loadable):
self._collector_manager = None
self._compute_model = None
self._storage_model = None
self._baremetal_model = None
self._input_parameters = utils.Struct()
self._audit_scope = None
self._datasource_backend = None
@classmethod
@abc.abstractmethod
@@ -203,7 +271,9 @@ class BaseStrategy(loadable.Loadable):
if self._storage_model is None:
collector = self.collector_manager.get_cluster_model_collector(
'storage', osc=self.osc)
self._storage_model = self.audit_scope_handler.get_scoped_model(
audit_scope_handler = collector.get_audit_scope_handler(
audit_scope=self.audit_scope)
self._storage_model = audit_scope_handler.get_scoped_model(
collector.get_latest_cluster_data_model())
if not self._storage_model:
@@ -214,6 +284,29 @@ class BaseStrategy(loadable.Loadable):
return self._storage_model
@property
def baremetal_model(self):
"""Cluster data model
:returns: Cluster data model the strategy is executed on
:rtype model: :py:class:`~.ModelRoot` instance
"""
if self._baremetal_model is None:
collector = self.collector_manager.get_cluster_model_collector(
'baremetal', osc=self.osc)
audit_scope_handler = collector.get_audit_scope_handler(
audit_scope=self.audit_scope)
self._baremetal_model = audit_scope_handler.get_scoped_model(
collector.get_latest_cluster_data_model())
if not self._baremetal_model:
raise exception.ClusterStateNotDefined()
if self._baremetal_model.stale:
raise exception.ClusterStateStale()
return self._baremetal_model
@classmethod
def get_schema(cls):
"""Defines a Schema that the input parameters shall comply to
@@ -223,6 +316,15 @@ class BaseStrategy(loadable.Loadable):
"""
return {}
@property
def datasource_backend(self):
if not self._datasource_backend:
self._datasource_backend = ds_manager.DataSourceManager(
config=self.config,
osc=self.osc
).get_backend(self.DATASOURCE_METRICS)
return self._datasource_backend
@property
def input_parameters(self):
return self._input_parameters
@@ -361,3 +463,11 @@ class SavingEnergyBaseStrategy(BaseStrategy):
@classmethod
def get_goal_name(cls):
return "saving_energy"
@six.add_metaclass(abc.ABCMeta)
class ZoneMigrationBaseStrategy(BaseStrategy):
@classmethod
def get_goal_name(cls):
return "hardware_maintenance"

View File

@@ -35,16 +35,11 @@ migration is possible on your OpenStack cluster.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.datasource import monasca as mon
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -57,6 +52,8 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
HOST_CPU_USAGE_METRIC_NAME = 'compute.node.cpu.percent'
INSTANCE_CPU_USAGE_METRIC_NAME = 'cpu_util'
DATASOURCE_METRICS = ['host_cpu_usage', 'instance_cpu_usage']
METRIC_NAMES = dict(
ceilometer=dict(
host_cpu_usage='compute.node.cpu.percent',
@@ -91,10 +88,6 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
# set default value for the efficacy
self.efficacy = 100
self._ceilometer = None
self._monasca = None
self._gnocchi = None
# TODO(jed): improve threshold overbooking?
self.threshold_mem = 1
self.threshold_disk = 1
@@ -155,11 +148,14 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
@classmethod
def get_config_opts(cls):
return [
cfg.StrOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
default="gnocchi",
choices=["ceilometer", "monasca", "gnocchi"]),
cfg.ListOpt(
"datasources",
help="Datasources to use in order to query the needed metrics."
" If one of the strategy's metrics isn't available in the first"
" datasource, the next datasource will be chosen.",
item_type=cfg.types.String(choices=['gnocchi', 'ceilometer',
'monasca']),
default=['gnocchi', 'ceilometer', 'monasca']),
cfg.BoolOpt(
"check_optimize_metadata",
help="Check optimize metadata field in instance before "
@@ -167,36 +163,6 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
default=False),
]
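The new `datasources` ListOpt replaces the single `datasource` StrOpt with an ordered preference list. A hedged sketch of the fallback idea behind it, not the real `DataSourceManager` implementation (the availability table and function name are assumptions):

```python
# Hypothetical availability table: which metrics each datasource can serve.
AVAILABLE = {"gnocchi": {"host_cpu_usage"},
             "ceilometer": {"host_cpu_usage", "instance_cpu_usage"}}


def pick_backend(preference, needed):
    # Walk the configured preference order and pick the first datasource
    # that can serve all of the strategy's DATASOURCE_METRICS.
    for name in preference:
        if needed <= AVAILABLE.get(name, set()):
            return name
    raise RuntimeError("no configured datasource provides all metrics")


print(pick_backend(["gnocchi", "ceilometer", "monasca"],
                   {"host_cpu_usage", "instance_cpu_usage"}))
```

Here gnocchi is preferred but lacks `instance_cpu_usage`, so the next datasource in the list is chosen.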
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, ceilometer):
self._ceilometer = ceilometer
@property
def monasca(self):
if self._monasca is None:
self.monasca = mon.MonascaHelper(osc=self.osc)
return self._monasca
@monasca.setter
def monasca(self, monasca):
self._monasca = monasca
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
def get_available_compute_nodes(self):
default_node_scope = [element.ServiceState.ENABLED.value,
element.ServiceState.DISABLED.value]
@@ -290,87 +256,13 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
return (score_cores + score_disk + score_memory) / 3
def get_node_cpu_usage(self, node):
metric_name = self.METRIC_NAMES[
self.config.datasource]['host_cpu_usage']
if self.config.datasource == "ceilometer":
resource_id = "%s_%s" % (node.uuid, node.hostname)
return self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=metric_name,
period=self.period,
aggregate='avg',
)
elif self.config.datasource == "gnocchi":
resource_id = "%s_%s" % (node.uuid, node.hostname)
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
return self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=metric_name,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
elif self.config.datasource == "monasca":
statistics = self.monasca.statistic_aggregation(
meter_name=metric_name,
dimensions=dict(hostname=node.uuid),
period=self.period,
aggregate='avg'
)
cpu_usage = None
for stat in statistics:
avg_col_idx = stat['columns'].index('avg')
values = [r[avg_col_idx] for r in stat['statistics']]
value = float(sum(values)) / len(values)
cpu_usage = value
return cpu_usage
raise exception.UnsupportedDataSource(
strategy=self.name, datasource=self.config.datasource)
resource_id = "%s_%s" % (node.uuid, node.hostname)
return self.datasource_backend.get_host_cpu_usage(
resource_id, self.period, 'mean', granularity=300)
def get_instance_cpu_usage(self, instance):
metric_name = self.METRIC_NAMES[
self.config.datasource]['instance_cpu_usage']
if self.config.datasource == "ceilometer":
return self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=metric_name,
period=self.period,
aggregate='avg'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
return self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=metric_name,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean',
)
elif self.config.datasource == "monasca":
statistics = self.monasca.statistic_aggregation(
meter_name=metric_name,
dimensions=dict(resource_id=instance.uuid),
period=self.period,
aggregate='avg'
)
cpu_usage = None
for stat in statistics:
avg_col_idx = stat['columns'].index('avg')
values = [r[avg_col_idx] for r in stat['statistics']]
value = float(sum(values)) / len(values)
cpu_usage = value
return cpu_usage
raise exception.UnsupportedDataSource(
strategy=self.name, datasource=self.config.datasource)
return self.datasource_backend.get_instance_cpu_usage(
instance.uuid, self.period, 'mean', granularity=300)
def calculate_score_node(self, node):
"""Calculate the score that represent the utilization level

View File

@@ -16,19 +16,23 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.decision_engine.strategy.strategies import base
LOG = log.getLogger(__name__)
CONF = cfg.CONF
class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
MIGRATION = "migrate"
DATASOURCE_METRICS = ['instance_l3_cache_usage']
# The meter to report L3 cache in ceilometer
METER_NAME_L3 = "cpu_l3_cache"
DEFAULT_WATCHER_PRIORITY = 5
@@ -45,17 +49,6 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
super(NoisyNeighbor, self).__init__(config, osc)
self.meter_name = self.METER_NAME_L3
self._ceilometer = None
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@classmethod
def get_name(cls):
@@ -81,32 +74,41 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
"default": 35.0
},
"period": {
"description": "Aggregate time period of ceilometer",
"description": "Aggregate time period of "
"ceilometer and gnocchi",
"type": "number",
"default": 100.0
},
},
}
@classmethod
def get_config_opts(cls):
return [
cfg.ListOpt(
"datasources",
help="Datasources to use in order to query the needed metrics."
" If one of the strategy's metrics isn't available in the first"
" datasource, the next datasource will be chosen.",
item_type=cfg.types.String(choices=['gnocchi', 'ceilometer',
'monasca']),
default=['gnocchi', 'ceilometer', 'monasca'])
]
def get_current_and_previous_cache(self, instance):
try:
current_cache = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=self.meter_name, period=self.period,
aggregate='avg')
curr_cache = self.datasource_backend.get_instance_l3_cache_usage(
instance.uuid, self.period, 'mean', granularity=300)
previous_cache = 2 * (
self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=self.meter_name,
period=2*self.period, aggregate='avg')) - current_cache
self.datasource_backend.get_instance_l3_cache_usage(
instance.uuid, 2 * self.period,
'mean', granularity=300)) - curr_cache
except Exception as exc:
LOG.exception(exc)
return None
return None, None
return current_cache, previous_cache
return curr_cache, previous_cache
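The arithmetic behind `previous_cache` above: if the datasource returns the mean over the last `2 * period` seconds, and the mean over the most recent `period` is known, the mean of the earlier period can be recovered as `2 * mean(2P) - mean(P)`. With illustrative numbers:

```python
# mean over 2P = (previous + current) / 2, so previous = 2 * mean(2P) - current.
current = 40.0          # mean L3 cache usage over the most recent period P
avg_two_periods = 35.0  # mean over the last 2 * P
previous = 2 * avg_two_periods - current
print(previous)  # 30.0
```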
def find_priority_instance(self, instance):
@@ -114,7 +116,7 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
self.get_current_and_previous_cache(instance)
if None in (current_cache, previous_cache):
LOG.warning("Ceilometer unable to pick L3 Cache "
LOG.warning("Datasource unable to pick L3 Cache "
"values. Skipping the instance")
return None
@@ -130,7 +132,7 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
self.get_current_and_previous_cache(instance)
if None in (noisy_current_cache, noisy_previous_cache):
LOG.warning("Ceilometer unable to pick "
LOG.warning("Datasource unable to pick "
"L3 Cache. Skipping the instance")
return None

View File

@@ -28,15 +28,11 @@ Outlet (Exhaust Air) Temperature is one of the important thermal
telemetries to measure thermal/workload status of server.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -77,6 +73,8 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
# The meter to report outlet temperature in ceilometer
MIGRATION = "migrate"
DATASOURCE_METRICS = ['host_outlet_temp']
METRIC_NAMES = dict(
ceilometer=dict(
host_outlet_temp='hardware.ipmi.node.outlet_temperature'),
@@ -93,8 +91,6 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
:type osc: :py:class:`~.OpenStackClients` instance, optional
"""
super(OutletTempControl, self).__init__(config, osc)
self._ceilometer = None
self._gnocchi = None
@classmethod
def get_name(cls):
@@ -137,26 +133,6 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
},
}
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, g):
self._gnocchi = g
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@@ -206,25 +182,13 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
resource_id = node.uuid
outlet_temp = None
if self.config.datasource == "ceilometer":
outlet_temp = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=metric_name,
period=self.period,
aggregate='avg'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
outlet_temp = self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=metric_name,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
outlet_temp = self.datasource_backend.statistic_aggregation(
resource_id=resource_id,
meter_name=metric_name,
period=self.period,
granularity=self.granularity,
)
# some hosts may not have outlet temp meters, remove from target
if outlet_temp is None:
LOG.warning("%s: no outlet temp data", resource_id)

View File

@@ -0,0 +1,411 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
*Workload balance using cinder volume migration*
*Description*
This strategy migrates volumes based on the workload of the
cinder pools.
It makes the decision to migrate a volume whenever a pool's used
utilization % is higher than the specified threshold. The volume
to be moved should make the pool close to average workload of all
cinder pools.
*Requirements*
* You must have at least 2 cinder volume pools to run
this strategy.
"""
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import cinder_helper
from watcher.decision_engine.strategy.strategies import base
LOG = log.getLogger(__name__)
class StorageCapacityBalance(base.WorkloadStabilizationBaseStrategy):
"""Storage capacity balance using cinder volume migration
*Description*
This strategy migrates volumes based on the workload of the
cinder pools.
It makes the decision to migrate a volume whenever a pool's used
utilization % is higher than the specified threshold. The volume
to be moved should make the pool close to average workload of all
cinder pools.
*Requirements*
* You must have at least 2 cinder volume pools to run
this strategy.
"""
def __init__(self, config, osc=None):
"""VolumeMigrate using cinder volume migration
:param config: A mapping containing the configuration of this strategy
:type config: :py:class:`~.Struct` instance
:param osc: :py:class:`~.OpenStackClients` instance
"""
super(StorageCapacityBalance, self).__init__(config, osc)
self._cinder = None
self.volume_threshold = 80.0
self.pool_type_cache = dict()
self.source_pools = []
self.dest_pools = []
@property
def cinder(self):
if not self._cinder:
self._cinder = cinder_helper.CinderHelper(osc=self.osc)
return self._cinder
@classmethod
def get_name(cls):
return "storage_capacity_balance"
@classmethod
def get_display_name(cls):
return _("Storage Capacity Balance Strategy")
@classmethod
def get_translatable_display_name(cls):
return "Storage Capacity Balance Strategy"
@classmethod
def get_schema(cls):
# Mandatory default setting for each element
return {
"properties": {
"volume_threshold": {
"description": "volume threshold for capacity balance",
"type": "number",
"default": 80.0
},
},
}
@classmethod
def get_config_opts(cls):
return [
cfg.ListOpt(
"ex_pools",
help="exclude pools",
default=['local_vstorage']),
]
def get_pools(self, cinder):
"""Get all volume pools except those listed in ex_pools.
:param cinder: cinder client
:return: volume pools
"""
ex_pools = self.config.ex_pools
pools = cinder.get_storage_pool_list()
filtered_pools = [p for p in pools
if p.pool_name not in ex_pools]
return filtered_pools
def get_volumes(self, cinder):
"""Get all volumes whose status is available or in-use and which have no snapshots.
:param cinder: cinder client
:return: all volumes
"""
all_volumes = cinder.get_volume_list()
valid_status = ['in-use', 'available']
volume_snapshots = cinder.get_volume_snapshots_list()
snapshot_volume_ids = []
for snapshot in volume_snapshots:
snapshot_volume_ids.append(snapshot.volume_id)
nosnap_volumes = list(filter(lambda v: v.id not in snapshot_volume_ids,
all_volumes))
LOG.info("volumes in snap: %s", snapshot_volume_ids)
status_volumes = list(
filter(lambda v: v.status in valid_status, nosnap_volumes))
valid_volumes = [v for v in status_volumes
if getattr(v, 'migration_status') == 'success' or
getattr(v, 'migration_status') is None]
LOG.info("valid volumes: %s", valid_volumes)
return valid_volumes
def group_pools(self, pools, threshold):
"""group volume pools by threshold.
:param pools: all volume pools
:param threshold: volume threshold
:return: under and over threshold pools
"""
under_pools = list(
filter(lambda p: float(p.total_capacity_gb) -
float(p.free_capacity_gb) <
float(p.total_capacity_gb) * threshold, pools))
over_pools = list(
filter(lambda p: float(p.total_capacity_gb) -
float(p.free_capacity_gb) >=
float(p.total_capacity_gb) * threshold, pools))
return over_pools, under_pools
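The `group_pools` predicate above classifies a pool as "over" when its used capacity (`total - free`) reaches `total * threshold`. A sketch with made-up pool numbers and a minimal stand-in class:

```python
# Hypothetical stand-in for a cinder pool object with the two fields used above.
class Pool:
    def __init__(self, name, total, free):
        self.name = name
        self.total_capacity_gb = total
        self.free_capacity_gb = free


pools = [Pool("pool1", 100, 50),   # used 50 GB
         Pool("pool2", 100, 10)]   # used 90 GB
threshold = 0.8

over = [p.name for p in pools
        if float(p.total_capacity_gb) - float(p.free_capacity_gb)
        >= float(p.total_capacity_gb) * threshold]
print(over)  # ['pool2']
```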
def get_volume_type_by_name(self, cinder, backendname):
# return list of pool type
if backendname in self.pool_type_cache.keys():
return self.pool_type_cache.get(backendname)
volume_type_list = cinder.get_volume_type_list()
volume_type = list(filter(
lambda volume_type:
volume_type.extra_specs.get(
'volume_backend_name') == backendname, volume_type_list))
if volume_type:
self.pool_type_cache[backendname] = volume_type
return self.pool_type_cache.get(backendname)
else:
return []
def migrate_fit(self, volume, threshold):
target_pool_name = None
if volume.volume_type:
LOG.info("volume %s type %s", volume.id, volume.volume_type)
return target_pool_name
self.dest_pools.sort(
key=lambda p: float(p.free_capacity_gb) /
float(p.total_capacity_gb))
for pool in reversed(self.dest_pools):
total_cap = float(pool.total_capacity_gb)
allocated = float(pool.allocated_capacity_gb)
ratio = pool.max_over_subscription_ratio
if total_cap * ratio < allocated + float(volume.size):
LOG.info("pool %s allocated over", pool.name)
continue
free_cap = float(pool.free_capacity_gb) - float(volume.size)
if free_cap > (1 - threshold) * total_cap:
target_pool_name = pool.name
index = self.dest_pools.index(pool)
setattr(self.dest_pools[index], 'free_capacity_gb',
str(free_cap))
LOG.info("volume: get pool %s for vol %s", target_pool_name,
volume.name)
break
return target_pool_name
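`migrate_fit` accepts a destination pool only when two capacity checks pass: the over-subscription limit (`total * ratio >= allocated + size`) and the free-space floor after the move (`free - size > (1 - threshold) * total`). The checks with illustrative numbers:

```python
# Illustrative pool and volume figures (GB); not taken from a real deployment.
total, allocated, free = 100.0, 60.0, 45.0
ratio, threshold, vol_size = 1.0, 0.8, 10.0

fits_subscription = total * ratio >= allocated + vol_size      # 100 >= 70
fits_free_space = (free - vol_size) > (1 - threshold) * total  # 35 > 20
print(fits_subscription and fits_free_space)  # True
```

The second check keeps the destination itself below the balance threshold after receiving the volume, so the strategy does not simply shift the overload to another pool.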
def check_pool_type(self, volume, dest_pool):
target_type = None
# check type feature
if not volume.volume_type:
return target_type
volume_type_list = self.cinder.get_volume_type_list()
volume_type = list(filter(
lambda volume_type:
volume_type.name == volume.volume_type, volume_type_list))
if volume_type:
src_extra_specs = volume_type[0].extra_specs
src_extra_specs.pop('volume_backend_name', None)
backendname = getattr(dest_pool, 'volume_backend_name')
dst_pool_type = self.get_volume_type_by_name(self.cinder, backendname)
for src_key in src_extra_specs.keys():
dst_pool_type = [pt for pt in dst_pool_type
if pt.extra_specs.get(src_key) ==
src_extra_specs.get(src_key)]
if dst_pool_type:
if volume.volume_type:
if dst_pool_type[0].name != volume.volume_type:
target_type = dst_pool_type[0].name
else:
target_type = dst_pool_type[0].name
return target_type
def retype_fit(self, volume, threshold):
target_type = None
self.dest_pools.sort(
key=lambda p: float(p.free_capacity_gb) /
float(p.total_capacity_gb))
for pool in reversed(self.dest_pools):
backendname = getattr(pool, 'volume_backend_name')
pool_type = self.get_volume_type_by_name(self.cinder, backendname)
LOG.info("volume: pool %s, type %s", pool.name, pool_type)
if pool_type is None:
continue
total_cap = float(pool.total_capacity_gb)
allocated = float(pool.allocated_capacity_gb)
ratio = pool.max_over_subscription_ratio
if total_cap * ratio < allocated + float(volume.size):
LOG.info("pool %s allocated over", pool.name)
continue
free_cap = float(pool.free_capacity_gb) - float(volume.size)
if free_cap > (1 - threshold) * total_cap:
target_type = self.check_pool_type(volume, pool)
if target_type is None:
continue
index = self.dest_pools.index(pool)
setattr(self.dest_pools[index], 'free_capacity_gb',
str(free_cap))
LOG.info("volume: get type %s for vol %s", target_type,
volume.name)
break
return target_type
def get_actions(self, pool, volumes, threshold):
"""Get volume-to-pool key-value actions.
:return: retype and migrate dicts
"""
retype_dicts = dict()
migrate_dicts = dict()
total_cap = float(pool.total_capacity_gb)
used_cap = float(pool.total_capacity_gb) - float(pool.free_capacity_gb)
seek_flag = True
volumes_in_pool = list(
filter(lambda v: getattr(v, 'os-vol-host-attr:host') == pool.name,
volumes))
LOG.info("volumes in pool: %s", str(volumes_in_pool))
if not volumes_in_pool:
return retype_dicts, migrate_dicts
ava_volumes = list(filter(lambda v: v.status == 'available',
volumes_in_pool))
ava_volumes.sort(key=lambda v: float(v.size))
LOG.info("available volumes in pool: %s ", str(ava_volumes))
for vol in ava_volumes:
vol_flag = False
migrate_pool = self.migrate_fit(vol, threshold)
if migrate_pool:
migrate_dicts[vol.id] = migrate_pool
vol_flag = True
else:
target_type = self.retype_fit(vol, threshold)
if target_type:
retype_dicts[vol.id] = target_type
vol_flag = True
if vol_flag:
used_cap -= float(vol.size)
if used_cap < threshold * total_cap:
seek_flag = False
break
if seek_flag:
noboot_volumes = list(
filter(lambda v: v.bootable.lower() == 'false' and
v.status == 'in-use', volumes_in_pool))
noboot_volumes.sort(key=lambda v: float(v.size))
LOG.info("noboot volumes: %s ", str(noboot_volumes))
for vol in noboot_volumes:
vol_flag = False
migrate_pool = self.migrate_fit(vol, threshold)
if migrate_pool:
migrate_dicts[vol.id] = migrate_pool
vol_flag = True
else:
target_type = self.retype_fit(vol, threshold)
if target_type:
retype_dicts[vol.id] = target_type
vol_flag = True
if vol_flag:
used_cap -= float(vol.size)
if used_cap < threshold * total_cap:
seek_flag = False
break
if seek_flag:
boot_volumes = list(
filter(lambda v: v.bootable.lower() == 'true' and
v.status == 'in-use', volumes_in_pool)
)
boot_volumes.sort(key=lambda v: float(v.size))
LOG.info("boot volumes: %s ", str(boot_volumes))
for vol in boot_volumes:
vol_flag = False
migrate_pool = self.migrate_fit(vol, threshold)
if migrate_pool:
migrate_dicts[vol.id] = migrate_pool
vol_flag = True
else:
target_type = self.retype_fit(vol, threshold)
if target_type:
retype_dicts[vol.id] = target_type
vol_flag = True
if vol_flag:
used_cap -= float(vol.size)
if used_cap < threshold * total_cap:
seek_flag = False
break
return retype_dicts, migrate_dicts
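`get_actions` drains a source pool in three passes of increasing disruption: available volumes first, then in-use non-bootable volumes, then in-use bootable ones, each pass smallest-first, stopping as soon as projected usage falls below `threshold * total_cap`. The control flow can be sketched independently of Cinder:

```python
def plan_moves(used_cap, total_cap, threshold, passes):
    """passes: lists of volume sizes, ordered by increasing disruption."""
    moves = []
    for group in passes:
        for size in sorted(group):
            moves.append(size)
            used_cap -= size
            if used_cap < threshold * total_cap:
                return moves, used_cap  # target reached, stop seeking
    return moves, used_cap

# hypothetical pool at 90/100 GB, target 70%, one available volume of 5 GB
# and two in-use non-bootable volumes of 10 and 20 GB
moves, final = plan_moves(90.0, 100.0, 0.7, [[5.0], [10.0, 20.0]])
```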
def pre_execute(self):
"""Pre-execution phase
This can be used to fetch some pre-requisites or data.
"""
LOG.info("Initializing Storage Capacity Balance Strategy")
self.volume_threshold = self.input_parameters.volume_threshold
def do_execute(self, audit=None):
"""Strategy execution phase
This phase is where you should put the main logic of your strategy.
"""
all_pools = self.get_pools(self.cinder)
all_volumes = self.get_volumes(self.cinder)
threshold = float(self.volume_threshold) / 100
self.source_pools, self.dest_pools = self.group_pools(
all_pools, threshold)
LOG.info(" source pools: %s dest pools:%s",
self.source_pools, self.dest_pools)
if not self.source_pools:
LOG.info("No pools require optimization")
return
if not self.dest_pools:
LOG.info("Not enough pools for optimization")
return
for source_pool in self.source_pools:
retype_actions, migrate_actions = self.get_actions(
source_pool, all_volumes, threshold)
for vol_id, pool_type in retype_actions.items():
vol = [v for v in all_volumes if v.id == vol_id]
parameters = {'migration_type': 'retype',
'destination_type': pool_type,
'resource_name': vol[0].name}
self.solution.add_action(action_type='volume_migrate',
resource_id=vol_id,
input_parameters=parameters)
for vol_id, pool_name in migrate_actions.items():
vol = [v for v in all_volumes if v.id == vol_id]
parameters = {'migration_type': 'migrate',
'destination_node': pool_name,
'resource_name': vol[0].name}
self.solution.add_action(action_type='volume_migrate',
resource_id=vol_id,
input_parameters=parameters)
def post_execute(self):
"""Post-execution phase
"""
pass

View File

@@ -42,15 +42,11 @@ airflow is higher than the specified threshold.
- It assumes that live migrations are possible.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -86,6 +82,8 @@ class UniformAirflow(base.BaseStrategy):
# choose 300 seconds as the default duration of meter aggregation
PERIOD = 300
DATASOURCE_METRICS = ['host_airflow', 'host_inlet_temp', 'host_power']
METRIC_NAMES = dict(
ceilometer=dict(
# The meter to report Airflow of physical server in ceilometer
@@ -123,30 +121,8 @@ class UniformAirflow(base.BaseStrategy):
self.config.datasource]['host_inlet_temp']
self.meter_name_power = self.METRIC_NAMES[
self.config.datasource]['host_power']
self._ceilometer = None
self._gnocchi = None
self._period = self.PERIOD
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, g):
self._gnocchi = g
@classmethod
def get_name(cls):
return "uniform_airflow"
@@ -245,35 +221,16 @@ class UniformAirflow(base.BaseStrategy):
source_instances = self.compute_model.get_node_instances(
source_node)
if source_instances:
if self.config.datasource == "ceilometer":
inlet_t = self.ceilometer.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_inlet_t,
period=self._period,
aggregate='avg')
power = self.ceilometer.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_power,
period=self._period,
aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self._period))
inlet_t = self.gnocchi.statistic_aggregation(
resource_id=source_node.uuid,
metric=self.meter_name_inlet_t,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean')
power = self.gnocchi.statistic_aggregation(
resource_id=source_node.uuid,
metric=self.meter_name_power,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean')
inlet_t = self.datasource_backend.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_inlet_t,
period=self._period,
granularity=self.granularity)
power = self.datasource_backend.statistic_aggregation(
resource_id=source_node.uuid,
meter_name=self.meter_name_power,
period=self._period,
granularity=self.granularity)
if (power < self.threshold_power and
inlet_t < self.threshold_inlet_t):
# hardware issue, migrate all instances from this node
@@ -351,23 +308,11 @@ class UniformAirflow(base.BaseStrategy):
node = self.compute_model.get_node_by_uuid(
node_id)
resource_id = node.uuid
if self.config.datasource == "ceilometer":
airflow = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=self.meter_name_airflow,
period=self._period,
aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self._period))
airflow = self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=self.meter_name_airflow,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean')
airflow = self.datasource_backend.statistic_aggregation(
resource_id=resource_id,
meter_name=self.meter_name_airflow,
period=self._period,
granularity=self.granularity)
# some hosts may not have airflow meter, remove from target
if airflow is None:
LOG.warning("%s: no airflow data", resource_id)
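The hunk above replaces the per-backend `ceilometer`/`gnocchi` branches with a single `datasource_backend.statistic_aggregation` call that takes both `period` and `granularity`. A minimal sketch of such a facade — the class name and backend callables here are hypothetical, not the Watcher API:

```python
class DataSourceFacade:
    """Dispatch a metric query to the first backend that returns data."""

    def __init__(self, backends):
        # name -> callable(resource_id, meter_name, period, granularity)
        self.backends = backends

    def statistic_aggregation(self, resource_id, meter_name,
                              period, granularity):
        for name, query in self.backends.items():
            value = query(resource_id, meter_name, period, granularity)
            if value is not None:
                return value
        return None

facade = DataSourceFacade(
    {'gnocchi': lambda *a: None,       # gnocchi has no data here
     'ceilometer': lambda *a: 42.0})   # ceilometer answers
value = facade.statistic_aggregation('uuid-1', 'host_airflow', 300, 300)
```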

View File

@@ -52,7 +52,6 @@ correctly on all compute nodes within the cluster.
This strategy assumes it is possible to live migrate any VM from
an active compute node to any other active compute node.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
@@ -60,8 +59,6 @@ import six
from watcher._i18n import _
from watcher.common import exception
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -74,6 +71,9 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
HOST_CPU_USAGE_METRIC_NAME = 'compute.node.cpu.percent'
INSTANCE_CPU_USAGE_METRIC_NAME = 'cpu_util'
DATASOURCE_METRICS = ['instance_ram_allocated', 'instance_cpu_usage',
'instance_ram_usage', 'instance_root_disk_size']
METRIC_NAMES = dict(
ceilometer=dict(
cpu_util_metric='cpu_util',
@@ -115,26 +115,6 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
def period(self):
return self.input_parameters.get('period', 3600)
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, ceilometer):
self._ceilometer = ceilometer
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@@ -312,57 +292,28 @@ class VMWorkloadConsolidation(base.ServerConsolidationBaseStrategy):
disk_alloc_metric = self.METRIC_NAMES[
self.config.datasource]['disk_alloc_metric']
if self.config.datasource == "ceilometer":
instance_cpu_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=cpu_util_metric,
period=self.period, aggregate='avg')
instance_ram_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=ram_util_metric,
period=self.period, aggregate='avg')
if not instance_ram_util:
instance_ram_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=ram_alloc_metric,
period=self.period, aggregate='avg')
instance_disk_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid, meter_name=disk_alloc_metric,
period=self.period, aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
instance_cpu_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=cpu_util_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
instance_ram_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=ram_util_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
if not instance_ram_util:
instance_ram_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=ram_alloc_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
instance_disk_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=disk_alloc_metric,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
instance_cpu_util = self.datasource_backend.statistic_aggregation(
resource_id=instance.uuid,
meter_name=cpu_util_metric,
period=self.period,
granularity=self.granularity)
instance_ram_util = self.datasource_backend.statistic_aggregation(
resource_id=instance.uuid,
meter_name=ram_util_metric,
period=self.period,
granularity=self.granularity)
if not instance_ram_util:
instance_ram_util = self.datasource_backend.statistic_aggregation(
resource_id=instance.uuid,
meter_name=ram_alloc_metric,
period=self.period,
granularity=self.granularity)
instance_disk_util = self.datasource_backend.statistic_aggregation(
resource_id=instance.uuid,
meter_name=disk_alloc_metric,
period=self.period,
granularity=self.granularity)
if instance_cpu_util:
total_cpu_utilization = (
instance.vcpus * (instance_cpu_util / 100.0))

View File

@@ -47,15 +47,12 @@ hosts nodes.
"""
from __future__ import division
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -98,6 +95,8 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
# Unit: MB
MEM_METER_NAME = "memory.resident"
DATASOURCE_METRICS = ['instance_cpu_usage', 'instance_ram_usage']
MIGRATION = "migrate"
def __init__(self, config, osc=None):
@@ -111,28 +110,6 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
# the migration plan will be triggered when the CPU or RAM
# utilization % reaches threshold
self._meter = None
self._ceilometer = None
self._gnocchi = None
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
@classmethod
def get_name(cls):
@@ -184,11 +161,14 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
@classmethod
def get_config_opts(cls):
return [
cfg.StrOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
default="gnocchi",
choices=["ceilometer", "gnocchi"])
cfg.ListOpt(
"datasources",
help="Datasources to use in order to query the needed metrics."
" If one of the strategy metrics isn't available in the first"
" datasource, the next datasource will be chosen.",
item_type=cfg.types.String(choices=['gnocchi', 'ceilometer',
'monasca']),
default=['gnocchi', 'ceilometer', 'monasca'])
]
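The new `datasources` option replaces the single `datasource` choice with an ordered fallback list: the first backend that can serve every metric the strategy needs wins. A sketch of that selection, with hypothetical per-backend metric tables:

```python
def pick_datasource(ordered_names, metrics_by_source, required_metrics):
    """Return the first datasource that supports all required metrics."""
    for name in ordered_names:
        supported = metrics_by_source.get(name, set())
        if set(required_metrics) <= supported:
            return name
    return None

available = {
    'gnocchi': {'instance_cpu_usage'},
    'ceilometer': {'instance_cpu_usage', 'instance_ram_usage'},
}
# gnocchi is first but lacks instance_ram_usage, so ceilometer is chosen
chosen = pick_datasource(['gnocchi', 'ceilometer', 'monasca'],
                         available,
                         ['instance_cpu_usage', 'instance_ram_usage'])
```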
def get_available_compute_nodes(self):
@@ -307,43 +287,29 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
instances = self.compute_model.get_node_instances(node)
node_workload = 0.0
for instance in instances:
instance_util = None
util = None
try:
if self.config.datasource == "ceilometer":
instance_util = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=self._meter,
period=self._period,
aggregate='avg')
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self._period))
instance_util = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=self._meter,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
util = self.datasource_backend.statistic_aggregation(
instance.uuid, self._meter, self._period,
self._granularity, aggregation='mean',
dimensions=dict(resource_id=instance.uuid))
except Exception as exc:
LOG.exception(exc)
LOG.error("Can not get %s from %s", self._meter,
self.config.datasource)
continue
if instance_util is None:
if util is None:
LOG.debug("Instance (%s): %s is None",
instance.uuid, self._meter)
continue
if self._meter == self.CPU_METER_NAME:
workload_cache[instance.uuid] = (instance_util *
workload_cache[instance.uuid] = (util *
instance.vcpus / 100)
else:
workload_cache[instance.uuid] = instance_util
workload_cache[instance.uuid] = util
node_workload += workload_cache[instance.uuid]
LOG.debug("VM (%s): %s %f", instance.uuid, self._meter,
instance_util)
util)
cluster_workload += node_workload
if self._meter == self.CPU_METER_NAME:
@@ -387,6 +353,7 @@ class WorkloadBalance(base.WorkloadStabilizationBaseStrategy):
self.threshold = self.input_parameters.threshold
self._period = self.input_parameters.period
self._meter = self.input_parameters.metrics
self._granularity = self.input_parameters.granularity
source_nodes, target_nodes, avg_workload, workload_cache = (
self.group_hosts_by_cpu_or_ram_util())

View File

@@ -28,7 +28,6 @@ It assumes that live migrations are possible in your cluster.
"""
import copy
import datetime
import itertools
import math
import random
@@ -41,8 +40,6 @@ import oslo_utils
from watcher._i18n import _
from watcher.common import exception
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -65,6 +62,9 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
MIGRATION = "migrate"
MEMOIZE = _set_memoize(CONF)
DATASOURCE_METRICS = ['host_cpu_usage', 'instance_cpu_usage',
'instance_ram_usage', 'host_memory_usage']
def __init__(self, config, osc=None):
"""Workload Stabilization control using live migration
@@ -73,9 +73,6 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
:param osc: :py:class:`~.OpenStackClients` instance
"""
super(WorkloadStabilization, self).__init__(config, osc)
self._ceilometer = None
self._gnocchi = None
self._nova = None
self.weights = None
self.metrics = None
self.thresholds = None
@@ -169,43 +166,16 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
@classmethod
def get_config_opts(cls):
return [
cfg.StrOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
default="gnocchi",
choices=["ceilometer", "gnocchi"])
cfg.ListOpt(
"datasources",
help="Datasources to use in order to query the needed metrics."
" If one of the strategy metrics isn't available in the first"
" datasource, the next datasource will be chosen.",
item_type=cfg.types.String(choices=['gnocchi', 'ceilometer',
'monasca']),
default=['gnocchi', 'ceilometer', 'monasca'])
]
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@property
def nova(self):
if self._nova is None:
self.nova = self.osc.nova()
return self._nova
@nova.setter
def nova(self, n):
self._nova = n
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
def transform_instance_cpu(self, instance_load, host_vcpus):
"""Transform instance cpu utilization to overall host cpu utilization.
@@ -227,32 +197,15 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
LOG.debug('get_instance_load started')
instance_load = {'uuid': instance.uuid, 'vcpus': instance.vcpus}
for meter in self.metrics:
avg_meter = None
if self.config.datasource == "ceilometer":
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=instance.uuid,
meter_name=meter,
period=self.periods['instance'],
aggregate='min'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.periods['instance']))
avg_meter = self.gnocchi.statistic_aggregation(
resource_id=instance.uuid,
metric=meter,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
avg_meter = self.datasource_backend.statistic_aggregation(
instance.uuid, meter, self.periods['instance'],
self.granularity, aggregation='mean')
if avg_meter is None:
LOG.warning(
"No values returned by %(resource_id)s "
"for %(metric_name)s" % dict(
resource_id=instance.uuid, metric_name=meter))
avg_meter = 0
return
if meter == 'cpu_util':
avg_meter /= float(100)
instance_load[meter] = avg_meter
@@ -287,33 +240,14 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
resource_id = "%s_%s" % (node.uuid, node.hostname)
else:
resource_id = node_id
if self.config.datasource == "ceilometer":
avg_meter = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=self.instance_metrics[metric],
period=self.periods['node'],
aggregate='avg'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.periods['node']))
avg_meter = self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=self.instance_metrics[metric],
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
avg_meter = self.datasource_backend.statistic_aggregation(
resource_id, self.instance_metrics[metric],
self.periods['node'], self.granularity, aggregation='mean')
if avg_meter is None:
if meter_name == 'hardware.memory.used':
avg_meter = node.memory
if meter_name == 'compute.node.cpu.percent':
avg_meter = 1
LOG.warning('No values returned by node %s for %s',
node_id, meter_name)
del hosts_load[node_id]
break
else:
if meter_name == 'hardware.memory.used':
avg_meter /= oslo_utils.units.Ki
@@ -362,6 +296,8 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
migration_case = []
new_hosts = copy.deepcopy(hosts)
instance_load = self.get_instance_load(instance)
if not instance_load:
return
s_host_vcpus = new_hosts[src_node.uuid]['vcpus']
d_host_vcpus = new_hosts[dst_node.uuid]['vcpus']
for metric in self.metrics:
@@ -379,6 +315,16 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
migration_case.append(new_hosts)
return migration_case
def get_current_weighted_sd(self, hosts_load):
"""Calculate current weighted sd"""
current_sd = []
normalized_load = self.normalize_hosts_load(hosts_load)
for metric in self.metrics:
metric_sd = self.get_sd(normalized_load, metric)
current_sd.append(metric_sd)
current_sd.append(hosts_load)
return self.calculate_weighted_sd(current_sd[:-1])
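`get_current_weighted_sd` collapses one standard deviation per metric into a single score. Assuming `calculate_weighted_sd` (whose body is outside this hunk) is a weighted sum of those deviations, the computation reduces to:

```python
import statistics

def weighted_sd(per_metric_loads, weights):
    """per_metric_loads: {metric: [normalized load per host]}."""
    score = 0.0
    for metric, loads in per_metric_loads.items():
        # population standard deviation of the per-host loads
        score += weights[metric] * statistics.pstdev(loads)
    return score

loads = {'cpu_util': [0.2, 0.4], 'memory.resident': [0.5, 0.5]}
score = weighted_sd(loads, {'cpu_util': 0.7, 'memory.resident': 0.3})
```

A perfectly balanced metric (equal load on every host) contributes zero, so the score is driven by the most imbalanced, most heavily weighted metrics.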
def simulate_migrations(self, hosts):
"""Make sorted list of pairs instance:dst_host"""
def yield_nodes(nodes):
@@ -393,14 +339,15 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
yield nodes
instance_host_map = []
nodes = list(self.get_available_nodes())
nodes = sorted(list(self.get_available_nodes()))
current_weighted_sd = self.get_current_weighted_sd(hosts)
for src_host in nodes:
src_node = self.compute_model.get_node_by_uuid(src_host)
c_nodes = copy.copy(nodes)
c_nodes.remove(src_host)
node_list = yield_nodes(c_nodes)
for instance in self.compute_model.get_node_instances(src_node):
min_sd_case = {'value': len(self.metrics)}
min_sd_case = {'value': current_weighted_sd}
if instance.state not in [element.InstanceState.ACTIVE.value,
element.InstanceState.PAUSED.value]:
continue
@@ -408,6 +355,8 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
dst_node = self.compute_model.get_node_by_uuid(dst_host)
sd_case = self.calculate_migration_case(
hosts, instance, src_node, dst_node)
if sd_case is None:
break
weighted_sd = self.calculate_weighted_sd(sd_case[:-1])
@@ -416,6 +365,8 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
'host': dst_node.uuid, 'value': weighted_sd,
's_host': src_node.uuid, 'instance': instance.uuid}
instance_host_map.append(min_sd_case)
if sd_case is None:
continue
return sorted(instance_host_map, key=lambda x: x['value'])
def check_threshold(self):
@@ -424,7 +375,12 @@ class WorkloadStabilization(base.WorkloadStabilizationBaseStrategy):
normalized_load = self.normalize_hosts_load(hosts_load)
for metric in self.metrics:
metric_sd = self.get_sd(normalized_load, metric)
LOG.info("Standard deviation for %s is %s."
% (metric, metric_sd))
if metric_sd > float(self.thresholds[metric]):
LOG.info("Standard deviation of %s exceeds"
" appropriate threshold %s."
% (metric, metric_sd))
return self.simulate_migrations(hosts_load)
def add_migration(self,

View File

@@ -0,0 +1,975 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
*Zone migration using instance and volume migration*
This is zone migration strategy to migrate many instances and volumes
efficiently with minimum downtime for hardware maintenance.
"""
from dateutil.parser import parse
import six
from oslo_log import log
from cinderclient.v2.volumes import Volume
from novaclient.v2.servers import Server
from watcher._i18n import _
from watcher.common import cinder_helper
from watcher.common import exception as wexc
from watcher.common import nova_helper
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
LOG = log.getLogger(__name__)
INSTANCE = "instance"
VOLUME = "volume"
ACTIVE = "active"
PAUSED = 'paused'
STOPPED = "stopped"
status_ACTIVE = 'ACTIVE'
status_PAUSED = 'PAUSED'
status_SHUTOFF = 'SHUTOFF'
AVAILABLE = "available"
IN_USE = "in-use"
class ZoneMigration(base.ZoneMigrationBaseStrategy):
"""Zone migration using instance and volume migration"""
def __init__(self, config, osc=None):
super(ZoneMigration, self).__init__(config, osc)
self._nova = None
self._cinder = None
self.live_count = 0
self.planned_live_count = 0
self.cold_count = 0
self.planned_cold_count = 0
self.volume_count = 0
self.planned_volume_count = 0
self.volume_update_count = 0
self.planned_volume_update_count = 0
@classmethod
def get_name(cls):
return "zone_migration"
@classmethod
def get_display_name(cls):
return _("Zone migration")
@classmethod
def get_translatable_display_name(cls):
return "Zone migration"
@classmethod
def get_schema(cls):
return {
"properties": {
"compute_nodes": {
"type": "array",
"items": {
"type": "object",
"properties": {
"src_node": {
"description": "Compute node from which"
" instances migrate",
"type": "string"
},
"dst_node": {
"description": "Compute node to which"
" instances migrate",
"type": "string"
}
},
"required": ["src_node"],
"additionalProperties": False
}
},
"storage_pools": {
"type": "array",
"items": {
"type": "object",
"properties": {
"src_pool": {
"description": "Storage pool from which"
" volumes migrate",
"type": "string"
},
"dst_pool": {
"description": "Storage pool to which"
" volumes migrate",
"type": "string"
},
"src_type": {
"description": "Volume type from which"
" volumes migrate",
"type": "string"
},
"dst_type": {
"description": "Volume type to which"
" volumes migrate",
"type": "string"
}
},
"required": ["src_pool", "src_type", "dst_type"],
"additionalProperties": False
}
},
"parallel_total": {
"description": "The number of actions to be run in"
" parallel in total",
"type": "integer", "minimum": 0, "default": 6
},
"parallel_per_node": {
"description": "The number of actions to be run in"
" parallel per compute node",
"type": "integer", "minimum": 0, "default": 2
},
"parallel_per_pool": {
"description": "The number of actions to be run in"
" parallel per storage host",
"type": "integer", "minimum": 0, "default": 2
},
"priority": {
"description": "Priority lists for instances and volumes",
"type": "object",
"properties": {
"project": {
"type": "array", "items": {"type": "string"}
},
"compute_node": {
"type": "array", "items": {"type": "string"}
},
"storage_pool": {
"type": "array", "items": {"type": "string"}
},
"compute": {
"enum": ["vcpu_num", "mem_size", "disk_size",
"created_at"]
},
"storage": {
"enum": ["size", "created_at"]
}
},
"additionalProperties": False
},
"with_attached_volume": {
"description": "Whether an instance migrates just after"
" its attached volumes",
"type": "boolean", "default": False
},
},
"additionalProperties": False
}
@property
def migrate_compute_nodes(self):
"""Get compute nodes from input_parameters
:returns: compute nodes
e.g. [{"src_node": "w012", "dst_node": "w022"},
{"src_node": "w013", "dst_node": "w023"}]
"""
return self.input_parameters.get('compute_nodes')
@property
def migrate_storage_pools(self):
"""Get storage pools from input_parameters
:returns: storage pools
e.g. [
{"src_pool": "src1@back1#pool1",
"dst_pool": "dst1@back1#pool1",
"src_type": "src1_type",
"dst_type": "dst1_type"},
{"src_pool": "src1@back2#pool1",
"dst_pool": "dst1@back2#pool1",
"src_type": "src1_type",
"dst_type": "dst1_type"}
]
"""
return self.input_parameters.get('storage_pools')
@property
def parallel_total(self):
return self.input_parameters.get('parallel_total')
@property
def parallel_per_node(self):
return self.input_parameters.get('parallel_per_node')
@property
def parallel_per_pool(self):
return self.input_parameters.get('parallel_per_pool')
@property
def priority(self):
"""Get priority from input_parameters
:returns: priority map
e.g.
{
"project": ["pj1"],
"compute_node": ["compute1", "compute2"],
"compute": ["vcpu_num"],
"storage_pool": ["pool1", "pool2"],
"storage": ["size", "created_at"]
}
"""
return self.input_parameters.get('priority')
@property
def with_attached_volume(self):
return self.input_parameters.get('with_attached_volume')
@property
def nova(self):
if self._nova is None:
self._nova = nova_helper.NovaHelper(osc=self.osc)
return self._nova
@property
def cinder(self):
if self._cinder is None:
self._cinder = cinder_helper.CinderHelper(osc=self.osc)
return self._cinder
def get_available_compute_nodes(self):
default_node_scope = [element.ServiceState.ENABLED.value,
element.ServiceState.DISABLED.value]
return {uuid: cn for uuid, cn in
self.compute_model.get_all_compute_nodes().items()
if cn.state == element.ServiceState.ONLINE.value and
cn.status in default_node_scope}
def get_available_storage_nodes(self):
default_node_scope = [element.ServiceState.ENABLED.value,
element.ServiceState.DISABLED.value]
return {uuid: cn for uuid, cn in
self.storage_model.get_all_storage_nodes().items()
if cn.state == element.ServiceState.ONLINE.value and
cn.status in default_node_scope}
def pre_execute(self):
"""Pre-execution phase
This can be used to fetch some pre-requisites or data.
"""
LOG.info("Initializing Zone Migration Strategy")
if len(self.get_available_compute_nodes()) == 0:
raise wexc.ComputeClusterEmpty()
if len(self.get_available_storage_nodes()) == 0:
raise wexc.StorageClusterEmpty()
LOG.debug(self.compute_model.to_string())
LOG.debug(self.storage_model.to_string())
def do_execute(self):
"""Strategy execution phase
"""
filtered_targets = self.filtered_targets()
self.set_migration_count(filtered_targets)
total_limit = self.parallel_total
per_node_limit = self.parallel_per_node
per_pool_limit = self.parallel_per_pool
action_counter = ActionCounter(total_limit,
per_pool_limit, per_node_limit)
for k, targets in six.iteritems(filtered_targets):
if k == VOLUME:
self.volumes_migration(targets, action_counter)
elif k == INSTANCE:
if self.volume_count == 0 and self.volume_update_count == 0:
# if with_attached_volume is true, instances with
# attached volumes have already been migrated, so
# migrate only instances without attached volumes
if self.with_attached_volume:
targets = self.instances_no_attached(targets)
self.instances_migration(targets, action_counter)
else:
self.instances_migration(targets, action_counter)
LOG.debug("action total: %s, pools: %s, nodes %s " % (
action_counter.total_count,
action_counter.per_pool_count,
action_counter.per_node_count))
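`ActionCounter` (defined later in the file, outside this hunk) enforces several parallelism budgets at once: a total limit plus per-pool and per-node limits. A reduced sketch of that counting logic, assuming the counters behave as their names suggest:

```python
class LimitCounter:
    """Track a global action limit plus a per-key (pool/node) limit."""

    def __init__(self, total_limit, per_key_limit):
        self.total_limit = total_limit
        self.per_key_limit = per_key_limit
        self.total = 0
        self.per_key = {}

    def try_add(self, key):
        if self.total >= self.total_limit:
            return False                     # total budget exhausted
        if self.per_key.get(key, 0) >= self.per_key_limit:
            return False                     # this pool/node is saturated
        self.total += 1
        self.per_key[key] = self.per_key.get(key, 0) + 1
        return True

c = LimitCounter(total_limit=3, per_key_limit=2)
results = [c.try_add('pool1'), c.try_add('pool1'),
           c.try_add('pool1'), c.try_add('pool2')]
```

The third `pool1` attempt is rejected by the per-key limit even though the total budget still has room.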
def post_execute(self):
"""Post-execution phase
This can be used to compute the global efficacy
"""
self.solution.set_efficacy_indicators(
live_migrate_instance_count=self.live_count,
planned_live_migrate_instance_count=self.planned_live_count,
cold_migrate_instance_count=self.cold_count,
planned_cold_migrate_instance_count=self.planned_cold_count,
volume_migrate_count=self.volume_count,
planned_volume_migrate_count=self.planned_volume_count,
volume_update_count=self.volume_update_count,
planned_volume_update_count=self.planned_volume_update_count
)
def set_migration_count(self, targets):
"""Set migration count
:param targets: dict of instance object and volume object list
keys of dict are instance and volume
"""
for instance in targets.get('instance', []):
if self.is_live(instance):
self.live_count += 1
elif self.is_cold(instance):
self.cold_count += 1
for volume in targets.get('volume', []):
if self.is_available(volume):
self.volume_count += 1
elif self.is_in_use(volume):
self.volume_update_count += 1
def is_live(self, instance):
status = getattr(instance, 'status')
state = getattr(instance, 'OS-EXT-STS:vm_state')
return (status == status_ACTIVE and state == ACTIVE
) or (status == status_PAUSED and state == PAUSED)
def is_cold(self, instance):
status = getattr(instance, 'status')
state = getattr(instance, 'OS-EXT-STS:vm_state')
return status == status_SHUTOFF and state == STOPPED
def is_available(self, volume):
return getattr(volume, 'status') == AVAILABLE
def is_in_use(self, volume):
return getattr(volume, 'status') == IN_USE
def instances_no_attached(self, instances):
return [i for i in instances
if not getattr(i, "os-extended-volumes:volumes_attached")]
def get_host_by_pool(self, pool):
"""Get host name from pool name
Utility method to get host name from pool name
which is formatted as host@backend#pool.
:param pool: pool name
:returns: host name
"""
return pool.split('@')[0]
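Cinder pool names follow the `host@backend#pool` convention, so the host component is everything before the first `@`. A minimal standalone sketch of the split (the pool name here is illustrative):

```python
def get_host_by_pool(pool):
    # The host component is everything before the first '@' in a
    # host@backend#pool formatted name.
    return pool.split('@')[0]

print(get_host_by_pool("host-1@lvmdriver-1#pool-1"))  # host-1
```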
def get_dst_node(self, src_node):
"""Get destination node from self.migrate_compute_nodes
:param src_node: compute node name
:returns: destination node name
"""
for node in self.migrate_compute_nodes:
if node.get("src_node") == src_node:
return node.get("dst_node")
def get_dst_pool_and_type(self, src_pool, src_type):
"""Get destination pool and type from self.migrate_storage_pools
:param src_pool: storage pool name
:param src_type: storage volume type
:returns: tuple of destination pool name and volume type name
"""
for pool in self.migrate_storage_pools:
if pool.get("src_pool") == src_pool:
return (pool.get("dst_pool", None),
pool.get("dst_type"))
def volumes_migration(self, volumes, action_counter):
for volume in volumes:
if action_counter.is_total_max():
LOG.debug('total reached limit')
break
pool = getattr(volume, 'os-vol-host-attr:host')
if action_counter.is_pool_max(pool):
LOG.debug("%s has objects to be migrated, but it has"
" reached the limit of parallelization." % pool)
continue
src_type = volume.volume_type
dst_pool, dst_type = self.get_dst_pool_and_type(pool, src_type)
LOG.debug(src_type)
LOG.debug("%s %s" % (dst_pool, dst_type))
if self.is_available(volume):
if src_type == dst_type:
self._volume_migrate(volume.id, dst_pool)
else:
self._volume_retype(volume.id, dst_type)
elif self.is_in_use(volume):
self._volume_update(volume.id, dst_type)
# if with_attached_volume is True, migrate attaching instances
if self.with_attached_volume:
instances = [self.nova.find_instance(dic.get('server_id'))
for dic in volume.attachments]
self.instances_migration(instances, action_counter)
action_counter.add_pool(pool)
def instances_migration(self, instances, action_counter):
for instance in instances:
src_node = getattr(instance, 'OS-EXT-SRV-ATTR:host')
if action_counter.is_total_max():
LOG.debug('total reached limit')
break
if action_counter.is_node_max(src_node):
LOG.debug("%s has objects to be migrated, but it has"
" reached the limit of parallelization." % src_node)
continue
dst_node = self.get_dst_node(src_node)
if self.is_live(instance):
self._live_migration(instance.id, src_node, dst_node)
elif self.is_cold(instance):
self._cold_migration(instance.id, src_node, dst_node)
action_counter.add_node(src_node)
def _live_migration(self, resource_id, src_node, dst_node):
parameters = {"migration_type": "live",
"destination_node": dst_node,
"source_node": src_node}
self.solution.add_action(
action_type="migrate",
resource_id=resource_id,
input_parameters=parameters)
self.planned_live_count += 1
def _cold_migration(self, resource_id, src_node, dst_node):
parameters = {"migration_type": "cold",
"destination_node": dst_node,
"source_node": src_node}
self.solution.add_action(
action_type="migrate",
resource_id=resource_id,
input_parameters=parameters)
self.planned_cold_count += 1
def _volume_update(self, resource_id, dst_type):
parameters = {"migration_type": "swap",
"destination_type": dst_type}
self.solution.add_action(
action_type="volume_migrate",
resource_id=resource_id,
input_parameters=parameters)
self.planned_volume_update_count += 1
def _volume_migrate(self, resource_id, dst_pool):
parameters = {"migration_type": "migrate",
"destination_node": dst_pool}
self.solution.add_action(
action_type="volume_migrate",
resource_id=resource_id,
input_parameters=parameters)
self.planned_volume_count += 1
def _volume_retype(self, resource_id, dst_type):
parameters = {"migration_type": "retype",
"destination_type": dst_type}
self.solution.add_action(
action_type="volume_migrate",
resource_id=resource_id,
input_parameters=parameters)
self.planned_volume_count += 1
def get_src_node_list(self):
"""Get src nodes from migrate_compute_nodes
:returns: src node name list
"""
if not self.migrate_compute_nodes:
return None
return [v for dic in self.migrate_compute_nodes
for k, v in dic.items() if k == "src_node"]
def get_src_pool_list(self):
"""Get src pools from migrate_storage_pools
:returns: src pool name list
"""
return [v for dic in self.migrate_storage_pools
for k, v in dic.items() if k == "src_pool"]
def get_instances(self):
"""Get migrate target instances
:returns: instance list on src nodes and compute scope
"""
src_node_list = self.get_src_node_list()
if not src_node_list:
return None
return [i for i in self.nova.get_instance_list()
if getattr(i, 'OS-EXT-SRV-ATTR:host') in src_node_list
and self.compute_model.get_instance_by_uuid(i.id)]
def get_volumes(self):
"""Get migrate target volumes
:returns: volume list on src pools and storage scope
"""
src_pool_list = self.get_src_pool_list()
return [i for i in self.cinder.get_volume_list()
if getattr(i, 'os-vol-host-attr:host') in src_pool_list
and self.storage_model.get_volume_by_uuid(i.id)]
def filtered_targets(self):
"""Filter targets
prioritize instances and volumes based on priorities
from input parameters.
:returns: prioritized targets
"""
result = {}
if self.migrate_compute_nodes:
result["instance"] = self.get_instances()
if self.migrate_storage_pools:
result["volume"] = self.get_volumes()
if not self.priority:
return result
filter_actions = self.get_priority_filter_list()
LOG.debug(filter_actions)
# apply all filters set in input parameters
for action in list(reversed(filter_actions)):
LOG.debug(action)
result = action.apply_filter(result)
return result
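Applying the move-to-front filters in reverse priority order means the highest-priority filter runs last, so its matches end up frontmost. A self-contained sketch of that ordering effect (predicates and item names are illustrative, not from the strategy):

```python
def move_matches_front(items, pred):
    # Move every matching item to the front, preserving relative order.
    for item in list(reversed(items)):
        if pred(item):
            items.remove(item)
            items.insert(0, item)
    return items

items = ["a1", "b1", "a2", "b2"]
# priority order: first items starting with "a", then those with "b"
filters = [lambda i: i.startswith("a"), lambda i: i.startswith("b")]
for pred in reversed(filters):
    items = move_matches_front(items, pred)
print(items)  # ['a1', 'a2', 'b1', 'b2']
```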
def get_priority_filter_list(self):
"""Get priority filters
:returns: list of filter object with arguments in self.priority
"""
filter_list = []
priority_filter_map = self.get_priority_filter_map()
for k, v in six.iteritems(self.priority):
if k in priority_filter_map:
filter_list.append(priority_filter_map[k](v))
return filter_list
def get_priority_filter_map(self):
"""Get priority filter map
:returns: filter map
key is the key in priority input parameters.
value is filter class for prioritizing.
"""
return {
"project": ProjectSortFilter,
"compute_node": ComputeHostSortFilter,
"storage_pool": StorageHostSortFilter,
"compute": ComputeSpecSortFilter,
"storage": StorageSpecSortFilter,
}
class ActionCounter(object):
"""Manage the number of actions in parallel"""
def __init__(self, total_limit=6, per_pool_limit=2, per_node_limit=2):
"""Initialize dict of host and the number of action
:param total_limit: total number of actions
:param per_pool_limit: the number of migrate actions per storage pool
:param per_node_limit: the number of migrate actions per compute node
"""
self.total_limit = total_limit
self.per_pool_limit = per_pool_limit
self.per_node_limit = per_node_limit
self.per_pool_count = {}
self.per_node_count = {}
self.total_count = 0
def add_pool(self, pool):
"""Increment the number of actions on pool and total count
:param pool: storage pool
:returns: True if incremented, False otherwise
"""
if pool not in self.per_pool_count:
self.per_pool_count[pool] = 0
if not self.is_total_max() and not self.is_pool_max(pool):
self.per_pool_count[pool] += 1
self.total_count += 1
LOG.debug("total: %s, per_pool: %s" % (
self.total_count, self.per_pool_count))
return True
return False
def add_node(self, node):
"""Increment the number of actions on node and total count
:param node: compute node
:returns: True if action can be added, False otherwise
"""
if node not in self.per_node_count:
self.per_node_count[node] = 0
if not self.is_total_max() and not self.is_node_max(node):
self.per_node_count[node] += 1
self.total_count += 1
LOG.debug("total: %s, per_node: %s" % (
self.total_count, self.per_node_count))
return True
return False
def is_total_max(self):
"""Check if total count reached limit
:returns: True if total count reached limit, False otherwise
"""
return self.total_count >= self.total_limit
def is_pool_max(self, pool):
"""Check if per pool count reached limit
:returns: True if count reached limit, False otherwise
"""
if pool not in self.per_pool_count:
self.per_pool_count[pool] = 0
LOG.debug("the number of parallel per pool %s is %s " %
(pool, self.per_pool_count[pool]))
LOG.debug("per pool limit is %s" % self.per_pool_limit)
return self.per_pool_count[pool] >= self.per_pool_limit
def is_node_max(self, node):
"""Check if per node count reached limit
:returns: True if count reached limit, False otherwise
"""
if node not in self.per_node_count:
self.per_node_count[node] = 0
return self.per_node_count[node] >= self.per_node_limit
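The counter's behavior condenses to: an action is admitted only while both the global total and the per-resource count are under their limits. A simplified standalone sketch with a single per-key limit (names and limits are illustrative):

```python
class ParallelCounter:
    """Admit actions while global and per-key limits are not reached."""

    def __init__(self, total_limit=6, per_key_limit=2):
        self.total_limit = total_limit
        self.per_key_limit = per_key_limit
        self.per_key_count = {}
        self.total_count = 0

    def add(self, key):
        count = self.per_key_count.setdefault(key, 0)
        if self.total_count < self.total_limit and count < self.per_key_limit:
            self.per_key_count[key] += 1
            self.total_count += 1
            return True
        return False

counter = ParallelCounter(total_limit=3, per_key_limit=2)
print([counter.add("node-1") for _ in range(3)])  # [True, True, False]
print(counter.add("node-2"))  # True: per-key ok, total now at limit
print(counter.add("node-3"))  # False: total limit reached
```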
class BaseFilter(object):
"""Base class for Filter"""
apply_targets = ('ALL',)
def __init__(self, values=[], **kwargs):
"""initialization
:param values: priority value
"""
if not isinstance(values, list):
values = [values]
self.condition = values
def apply_filter(self, targets):
"""apply filter to targets
:param targets: dict of instance object and volume object list
keys of dict are instance and volume
"""
if not targets:
return {}
for cond in list(reversed(self.condition)):
for k, v in six.iteritems(targets):
if not self.is_allowed(k):
continue
LOG.debug("filter:%s with the key: %s" % (cond, k))
targets[k] = self.exec_filter(v, cond)
LOG.debug(targets)
return targets
def is_allowed(self, key):
return (key in self.apply_targets) or ('ALL' in self.apply_targets)
def exec_filter(self, items, sort_key):
"""This is implemented by subclasses"""
return items
class SortMovingToFrontFilter(BaseFilter):
"""Move items to the front if a condition is True"""
def exec_filter(self, items, sort_key):
return self.sort_moving_to_front(items,
sort_key,
self.compare_func)
def sort_moving_to_front(self, items, sort_key=None, compare_func=None):
if not compare_func or not sort_key:
return items
for item in list(reversed(items)):
if compare_func(item, sort_key):
items.remove(item)
items.insert(0, item)
return items
def compare_func(self, item, sort_key):
return True
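Iterating over a reversed copy is what keeps matching items in their original relative order once moved to the front: the last match is moved first, and each earlier match is then inserted ahead of it. A self-contained demonstration (item names and the predicate are illustrative):

```python
def sort_moving_to_front(items, sort_key, compare_func):
    # Walk a reversed copy so matches land at the front in their
    # original relative order.
    for item in list(reversed(items)):
        if compare_func(item, sort_key):
            items.remove(item)
            items.insert(0, item)
    return items

items = ["vm-a", "vm-b", "vm-c", "vm-d"]
result = sort_moving_to_front(items, None,
                              lambda i, k: i in ("vm-b", "vm-d"))
print(result)  # ['vm-b', 'vm-d', 'vm-a', 'vm-c']
```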
class ProjectSortFilter(SortMovingToFrontFilter):
"""ProjectSortFilter"""
apply_targets = ('instance', 'volume')
def __init__(self, values=[], **kwargs):
super(ProjectSortFilter, self).__init__(values, **kwargs)
def compare_func(self, item, sort_key):
"""Compare project id of item with sort_key
:param item: instance object or volume object
:param sort_key: project id
:returns: true: project id of item equals sort_key
false: otherwise
"""
project_id = self.get_project_id(item)
LOG.debug("project_id: %s, sort_key: %s" % (project_id, sort_key))
return project_id == sort_key
def get_project_id(self, item):
"""get project id of item
:param item: instance object or volume object
:returns: project id
"""
if isinstance(item, Volume):
return getattr(item, 'os-vol-tenant-attr:tenant_id')
elif isinstance(item, Server):
return item.tenant_id
class ComputeHostSortFilter(SortMovingToFrontFilter):
"""ComputeHostSortFilter"""
apply_targets = ('instance',)
def __init__(self, values=[], **kwargs):
super(ComputeHostSortFilter, self).__init__(values, **kwargs)
def compare_func(self, item, sort_key):
"""Compare compute name of item with sort_key
:param item: instance object
:param sort_key: compute host name
:returns: true: compute host name of the instance equals sort_key
false: otherwise
"""
host = self.get_host(item)
LOG.debug("host: %s, sort_key: %s" % (host, sort_key))
return host == sort_key
def get_host(self, item):
"""Get the hostname on which the item runs
:param item: instance object
:returns: hostname on which the item runs
"""
return getattr(item, 'OS-EXT-SRV-ATTR:host')
class StorageHostSortFilter(SortMovingToFrontFilter):
"""StorageHostSortFilter"""
apply_targets = ('volume',)
def compare_func(self, item, sort_key):
"""Compare pool name of item with sort_key
:param item: volume object
:param sort_key: storage pool name
:returns: true: pool name hosting the volume equals sort_key
false: otherwise
"""
host = self.get_host(item)
LOG.debug("host: %s, sort_key: %s" % (host, sort_key))
return host == sort_key
def get_host(self, item):
return getattr(item, 'os-vol-host-attr:host')
class ComputeSpecSortFilter(BaseFilter):
"""ComputeSpecSortFilter"""
apply_targets = ('instance',)
accept_keys = ['vcpu_num', 'mem_size', 'disk_size', 'created_at']
def __init__(self, values=[], **kwargs):
super(ComputeSpecSortFilter, self).__init__(values, **kwargs)
self._nova = None
@property
def nova(self):
if self._nova is None:
self._nova = nova_helper.NovaHelper()
return self._nova
def exec_filter(self, items, sort_key):
result = items
if sort_key not in self.accept_keys:
LOG.warning("Invalid key is specified: %s" % sort_key)
else:
result = self.get_sorted_items(items, sort_key)
return result
def get_sorted_items(self, items, sort_key):
"""Sort items by sort_key
:param items: instances
:param sort_key: sort_key
:returns: items sorted by sort_key
"""
result = items
flavors = self.nova.get_flavor_list()
if sort_key == 'mem_size':
result = sorted(items,
key=lambda x: float(self.get_mem_size(x, flavors)),
reverse=True)
elif sort_key == 'vcpu_num':
result = sorted(items,
key=lambda x: float(self.get_vcpu_num(x, flavors)),
reverse=True)
elif sort_key == 'disk_size':
result = sorted(items,
key=lambda x: float(
self.get_disk_size(x, flavors)),
reverse=True)
elif sort_key == 'created_at':
result = sorted(items,
key=lambda x: parse(getattr(x, sort_key)),
reverse=False)
return result
def get_mem_size(self, item, flavors):
"""Get memory size of item
:param item: instance
:param flavors: flavors
:returns: memory size of item
"""
LOG.debug("item: %s, flavors: %s" % (item, flavors))
for flavor in flavors:
LOG.debug("item.flavor: %s, flavor: %s" % (item.flavor, flavor))
if item.flavor.get('id') == flavor.id:
LOG.debug("flavor.ram: %s" % flavor.ram)
return flavor.ram
def get_vcpu_num(self, item, flavors):
"""Get vcpu number of item
:param item: instance
:param flavors: flavors
:returns: vcpu number of item
"""
LOG.debug("item: %s, flavors: %s" % (item, flavors))
for flavor in flavors:
LOG.debug("item.flavor: %s, flavor: %s" % (item.flavor, flavor))
if item.flavor.get('id') == flavor.id:
LOG.debug("flavor.vcpus: %s" % flavor.vcpus)
return flavor.vcpus
def get_disk_size(self, item, flavors):
"""Get disk size of item
:param item: instance
:param flavors: flavors
:returns: disk size of item
"""
LOG.debug("item: %s, flavors: %s" % (item, flavors))
for flavor in flavors:
LOG.debug("item.flavor: %s, flavor: %s" % (item.flavor, flavor))
if item.flavor.get('id') == flavor.id:
LOG.debug("flavor.disk: %s" % flavor.disk)
return flavor.disk
class StorageSpecSortFilter(BaseFilter):
"""StorageSpecSortFilter"""
apply_targets = ('volume',)
accept_keys = ['size', 'created_at']
def exec_filter(self, items, sort_key):
result = items
if sort_key not in self.accept_keys:
LOG.warning("Invalid key is specified: %s" % sort_key)
return result
if sort_key == 'created_at':
result = sorted(items,
key=lambda x: parse(getattr(x, sort_key)),
reverse=False)
else:
result = sorted(items,
key=lambda x: float(getattr(x, sort_key)),
reverse=True)
LOG.debug(result)
return result
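The `created_at` branch sorts oldest-first by parsed timestamp. A minimal sketch with plain dicts, using `datetime.fromisoformat` as a stand-in for the `dateutil` `parse` call used above (the sample volumes are illustrative):

```python
from datetime import datetime

volumes = [
    {"id": "v1", "created_at": "2018-02-05T10:00:00"},
    {"id": "v2", "created_at": "2018-01-20T09:30:00"},
]
# Oldest volume first, mirroring the reverse=False branch above.
ordered = sorted(volumes,
                 key=lambda v: datetime.fromisoformat(v["created_at"]))
print([v["id"] for v in ordered])  # ['v2', 'v1']
```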


@@ -60,6 +60,7 @@ log_warn = re.compile(
r"(.)*LOG\.(warn)\(\s*('|\"|_)")
unittest_imports_dot = re.compile(r"\bimport[\s]+unittest\b")
unittest_imports_from = re.compile(r"\bfrom[\s]+unittest\b")
re_redundant_import_alias = re.compile(r".*import (.+) as \1$")
@flake8ext
@@ -271,6 +272,18 @@ def check_builtins_gettext(logical_line, tokens, filename, lines, noqa):
yield (0, msg)
@flake8ext
def no_redundant_import_alias(logical_line):
"""Checking no redundant import alias.
https://bugs.launchpad.net/watcher/+bug/1745527
N342
"""
if re.match(re_redundant_import_alias, logical_line):
yield (0, "N342: No redundant import alias.")
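The N342 check hinges on the backreference `\1`: the alias must repeat the imported name exactly for the line to be flagged. The regex can be exercised directly (the sample import lines are illustrative):

```python
import re

# Same pattern as the hacking check above: flags "import X as X".
re_redundant_import_alias = re.compile(r".*import (.+) as \1$")

# Alias differs from the dotted path, so this is not redundant.
print(bool(re_redundant_import_alias.match("import os.path as path")))       # False
# Alias repeats the imported name, so N342 fires.
print(bool(re_redundant_import_alias.match("from os import path as path")))  # True
```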
def factory(register):
register(use_jsonutils)
register(check_assert_called_once_with)
@@ -286,3 +299,4 @@ def factory(register):
register(check_log_warn_deprecated)
register(check_oslo_i18n_wrapper)
register(check_builtins_gettext)
register(no_redundant_import_alias)


@@ -0,0 +1,837 @@
# Andi Chandler <andi@gowling.com>, 2017. #zanata
# Andi Chandler <andi@gowling.com>, 2018. #zanata
msgid ""
msgstr ""
"Project-Id-Version: watcher VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2018-01-26 00:18+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-01-27 12:51+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid " (may include orphans)"
msgstr " (may include orphans)"
msgid " (orphans excluded)"
msgstr " (orphans excluded)"
#, python-format
msgid "%(client)s connection failed. Reason: %(reason)s"
msgstr "%(client)s connection failed. Reason: %(reason)s"
#, python-format
msgid "%(field)s can't be updated."
msgstr "%(field)s can't be updated."
#, python-format
msgid "%(parameter)s has to be of type %(parameter_type)s"
msgstr "%(parameter)s has to be of type %(parameter_type)s"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s is not JSON serialisable"
#, python-format
msgid ""
"'%(strategy)s' strategy does relate to the '%(goal)s' goal. Possible "
"choices: %(choices)s"
msgstr ""
"'%(strategy)s' strategy does relate to the '%(goal)s' goal. Possible "
"choices: %(choices)s"
#, python-format
msgid "'%s' is a mandatory attribute and can not be removed"
msgstr "'%s' is a mandatory attribute and can not be removed"
#, python-format
msgid "'%s' is an internal attribute and can not be updated"
msgstr "'%s' is an internal attribute and can not be updated"
msgid "'add' and 'replace' operations needs value"
msgstr "'add' and 'replace' operations needs value"
msgid "'obj' argument type is not valid"
msgstr "'obj' argument type is not valid"
#, python-format
msgid "'obj' argument type is not valid: %s"
msgstr "'obj' argument type is not valid: %s"
#, python-format
msgid "A datetime.datetime is required here. Got %s"
msgstr "A datetime.datetime is required here. Got %s"
#, python-format
msgid "A goal with UUID %(uuid)s already exists"
msgstr "A goal with UUID %(uuid)s already exists"
#, python-format
msgid "A scoring engine with UUID %(uuid)s already exists"
msgstr "A scoring engine with UUID %(uuid)s already exists"
#, python-format
msgid "A service with name %(name)s is already working on %(host)s."
msgstr "A service with name %(name)s is already working on %(host)s."
#, python-format
msgid "A strategy with UUID %(uuid)s already exists"
msgstr "A strategy with UUID %(uuid)s already exists"
msgid "A valid goal_id or audit_template_id must be provided"
msgstr "A valid goal_id or audit_template_id must be provided"
#, python-format
msgid "Action %(action)s could not be found"
msgstr "Action %(action)s could not be found"
#, python-format
msgid "Action %(action)s was not eagerly loaded"
msgstr "Action %(action)s was not eagerly loaded"
#, python-format
msgid "Action Plan %(action_plan)s is currently running."
msgstr "Action Plan %(action_plan)s is currently running."
#, python-format
msgid "Action Plan %(action_plan)s is referenced by one or multiple actions"
msgstr "Action Plan %(action_plan)s is referenced by one or multiple actions"
#, python-format
msgid "Action Plan with UUID %(uuid)s is cancelled by user"
msgstr "Action Plan with UUID %(uuid)s is cancelled by user"
msgid "Action Plans"
msgstr "Action Plans"
#, python-format
msgid "Action plan %(action_plan)s is invalid"
msgstr "Action plan %(action_plan)s is invalid"
#, python-format
msgid "Action plan %(action_plan)s is referenced by one or multiple goals"
msgstr "Action plan %(action_plan)s is referenced by one or multiple goals"
#, python-format
msgid "Action plan %(action_plan)s was not eagerly loaded"
msgstr "Action plan %(action_plan)s was not eagerly loaded"
#, python-format
msgid "ActionPlan %(action_plan)s could not be found"
msgstr "ActionPlan %(action_plan)s could not be found"
msgid "Actions"
msgstr "Actions"
msgid "Actuator"
msgstr "Actuator"
#, python-format
msgid "Adding a new attribute (%s) to the root of the resource is not allowed"
msgstr ""
"Adding a new attribute (%s) to the root of the resource is not allowed"
msgid "Airflow Optimization"
msgstr "Airflow Optimisation"
#, python-format
msgid "An action description with type %(action_type)s is already exist."
msgstr "An action description with type %(action_type)s is already exist."
#, python-format
msgid "An action plan with UUID %(uuid)s already exists"
msgstr "An action plan with UUID %(uuid)s already exists"
#, python-format
msgid "An action with UUID %(uuid)s already exists"
msgstr "An action with UUID %(uuid)s already exists"
#, python-format
msgid "An audit with UUID or name %(audit)s already exists"
msgstr "An audit with UUID or name %(audit)s already exists"
#, python-format
msgid "An audit_template with UUID or name %(audit_template)s already exists"
msgstr "An audit_template with UUID or name %(audit_template)s already exists"
msgid "An indicator value should be a number"
msgstr "An indicator value should be a number"
#, python-format
msgid "An object of class %s is required here"
msgstr "An object of class %s is required here"
msgid "An unknown exception occurred"
msgstr "An unknown exception occurred"
msgid "At least one feature is required"
msgstr "At least one feature is required"
#, python-format
msgid "Audit %(audit)s could not be found"
msgstr "Audit %(audit)s could not be found"
#, python-format
msgid "Audit %(audit)s is invalid"
msgstr "Audit %(audit)s is invalid"
#, python-format
msgid "Audit %(audit)s is referenced by one or multiple action plans"
msgstr "Audit %(audit)s is referenced by one or multiple action plans"
#, python-format
msgid "Audit %(audit)s was not eagerly loaded"
msgstr "Audit %(audit)s was not eagerly loaded"
msgid "Audit Templates"
msgstr "Audit Templates"
#, python-format
msgid "Audit parameter %(parameter)s are not allowed"
msgstr "Audit parameter %(parameter)s are not allowed"
#, python-format
msgid "Audit type %(audit_type)s could not be found"
msgstr "Audit type %(audit_type)s could not be found"
#, python-format
msgid "AuditTemplate %(audit_template)s could not be found"
msgstr "AuditTemplate %(audit_template)s could not be found"
#, python-format
msgid ""
"AuditTemplate %(audit_template)s is referenced by one or multiple audits"
msgstr ""
"AuditTemplate %(audit_template)s is referenced by one or multiple audits"
msgid "Audits"
msgstr "Audits"
msgid "Basic offline consolidation"
msgstr "Basic offline consolidation"
msgid "CDMCs"
msgstr "CDMCs"
msgid "Cannot compile public API routes"
msgstr "Cannot compile public API routes"
msgid "Cannot create an action directly"
msgstr "Cannot create an action directly"
msgid "Cannot delete an action directly"
msgstr "Cannot delete an action directly"
msgid "Cannot modify an action directly"
msgstr "Cannot modify an action directly"
msgid "Cannot overwrite UUID for an existing Action Plan."
msgstr "Cannot overwrite UUID for an existing Action Plan."
msgid "Cannot overwrite UUID for an existing Action."
msgstr "Cannot overwrite UUID for an existing Action."
msgid "Cannot overwrite UUID for an existing Audit Template."
msgstr "Cannot overwrite UUID for an existing Audit Template."
msgid "Cannot overwrite UUID for an existing Audit."
msgstr "Cannot overwrite UUID for an existing Audit."
msgid "Cannot overwrite UUID for an existing Goal."
msgstr "Cannot overwrite UUID for an existing Goal."
msgid "Cannot overwrite UUID for an existing Scoring Engine."
msgstr "Cannot overwrite UUID for an existing Scoring Engine."
msgid "Cannot overwrite UUID for an existing Strategy."
msgstr "Cannot overwrite UUID for an existing Strategy."
msgid "Cannot overwrite UUID for an existing efficacy indicator."
msgstr "Cannot overwrite UUID for an existing efficacy indicator."
msgid "Cannot remove 'goal' attribute from an audit template"
msgstr "Cannot remove 'goal' attribute from an audit template"
msgid "Conflict"
msgstr "Conflict"
#, python-format
msgid ""
"Could not compute the global efficacy for the '%(goal)s' goal using the "
"'%(strategy)s' strategy."
msgstr ""
"Could not compute the global efficacy for the '%(goal)s' goal using the "
"'%(strategy)s' strategy."
#, python-format
msgid "Could not load any strategy for goal %(goal)s"
msgstr "Could not load any strategy for goal %(goal)s"
#, python-format
msgid "Couldn't apply patch '%(patch)s'. Reason: %(reason)s"
msgstr "Couldn't apply patch '%(patch)s'. Reason: %(reason)s"
#, python-format
msgid "Couldn't delete when state is '%(state)s'."
msgstr "Couldn't delete when state is '%(state)s'."
#, python-format
msgid "Datasource %(datasource)s is not available."
msgstr "Datasource %(datasource)s is not available."
#, python-format
msgid "Datasource %(datasource)s is not supported by strategy %(strategy)s"
msgstr "Datasource %(datasource)s is not supported by strategy %(strategy)s"
msgid "Do you want to delete objects up to the specified maximum number? [y/N]"
msgstr ""
"Do you want to delete objects up to the specified maximum number? [y/N]"
#, python-format
msgid "Domain name seems ambiguous: %s"
msgstr "Domain name seems ambiguous: %s"
#, python-format
msgid "Domain not Found: %s"
msgstr "Domain not Found: %s"
msgid "Dummy Strategy using sample Scoring Engines"
msgstr "Dummy Strategy using sample Scoring Engines"
msgid "Dummy goal"
msgstr "Dummy goal"
msgid "Dummy strategy"
msgstr "Dummy strategy"
msgid "Dummy strategy with resize"
msgstr "Dummy strategy with resize"
#, python-format
msgid "Efficacy indicator %(efficacy_indicator)s could not be found"
msgstr "Efficacy indicator %(efficacy_indicator)s could not be found"
#, python-format
msgid "Error loading plugin '%(name)s'"
msgstr "Error loading plugin '%(name)s'"
#, python-format
msgid "ErrorDocumentMiddleware received an invalid status %s"
msgstr "ErrorDocumentMiddleware received an invalid status %s"
#, python-format
msgid "Expected a logical name but received %(name)s"
msgstr "Expected a logical name but received %(name)s"
#, python-format
msgid "Expected a logical name or uuid but received %(name)s"
msgstr "Expected a logical name or UUID but received %(name)s"
#, python-format
msgid "Expected a uuid but received %(uuid)s"
msgstr "Expected a UUID but received %(uuid)s"
#, python-format
msgid "Expected a uuid or int but received %(identity)s"
msgstr "Expected a UUID or int but received %(identity)s"
#, python-format
msgid "Expected an interval or cron syntax but received %(name)s"
msgstr "Expected an interval or cron syntax but received %(name)s"
#, python-format
msgid "Failed to create volume '%(volume)s. "
msgstr "Failed to create volume '%(volume)s. "
#, python-format
msgid "Failed to delete volume '%(volume)s. "
msgstr "Failed to delete volume '%(volume)s. "
#, python-format
msgid "Filter operator is not valid: %(operator)s not in %(valid_operators)s"
msgstr "Filter operator is not valid: %(operator)s not in %(valid_operators)s"
msgid "Filtering actions on both audit and action-plan is prohibited"
msgstr "Filtering actions on both audit and action-plan is prohibited"
msgid "Goal"
msgstr "Goal"
#, python-format
msgid "Goal %(goal)s could not be found"
msgstr "Goal %(goal)s could not be found"
#, python-format
msgid "Goal %(goal)s is invalid"
msgstr "Goal %(goal)s is invalid"
msgid "Goals"
msgstr "Goals"
msgid "Hardware Maintenance"
msgstr "Hardware Maintenance"
#, python-format
msgid "Here below is a table containing the objects that can be purged%s:"
msgstr "Here below is a table containing the objects that can be purged%s:"
msgid "Illegal argument"
msgstr "Illegal argument"
#, python-format
msgid ""
"Incorrect mapping: could not find associated weight for %s in weight dict."
msgstr ""
"Incorrect mapping: could not find associated weight for %s in weight dict."
#, python-format
msgid "Interval of audit must be specified for %(audit_type)s."
msgstr "Interval of audit must be specified for %(audit_type)s."
#, python-format
msgid "Interval of audit must not be set for %(audit_type)s."
msgstr "Interval of audit must not be set for %(audit_type)s."
#, python-format
msgid "Invalid filter: %s"
msgstr "Invalid filter: %s"
msgid "Invalid number of features, expected 9"
msgstr "Invalid number of features, expected 9"
#, python-format
msgid "Invalid query: %(start_time)s > %(end_time)s"
msgstr "Invalid query: %(start_time)s > %(end_time)s"
#, python-format
msgid "Invalid sort direction: %s. Acceptable values are 'asc' or 'desc'"
msgstr "Invalid sort direction: %s. Acceptable values are 'asc' or 'desc'"
msgid "Invalid state for swapping volume"
msgstr "Invalid state for swapping volume"
#, python-format
msgid "Invalid state: %(state)s"
msgstr "Invalid state: %(state)s"
msgid "JSON list expected in feature argument"
msgstr "JSON list expected in feature argument"
msgid "Keystone API endpoint is missing"
msgstr "Keystone API endpoint is missing"
msgid "Limit must be positive"
msgstr "Limit must be positive"
msgid "Limit should be positive"
msgstr "Limit should be positive"
msgid "Maximum time since last check-in for up service."
msgstr "Maximum time since last check-in for up service."
#, python-format
msgid "Migration of type '%(migration_type)s' is not supported."
msgstr "Migration of type '%(migration_type)s' is not supported."
msgid ""
"Name of this node. This can be an opaque identifier. It is not necessarily a "
"hostname, FQDN, or IP address. However, the node name must be valid within "
"an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address."
msgstr ""
"Name of this node. This can be an opaque identifier. It is not necessarily a "
"hostname, FQDN, or IP address. However, the node name must be valid within "
"an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address."
#, python-format
msgid "No %(metric)s metric for %(host)s found."
msgstr "No %(metric)s metric for %(host)s found."
msgid "No rows were returned"
msgstr "No rows were returned"
#, python-format
msgid "No strategy could be found to achieve the '%(goal)s' goal."
msgstr "No strategy could be found to achieve the '%(goal)s' goal."
msgid "No such metric"
msgstr "No such metric"
#, python-format
msgid "No values returned by %(resource_id)s for %(metric_name)s."
msgstr "No values returned by %(resource_id)s for %(metric_name)s."
msgid "Noisy Neighbor"
msgstr "Noisy Neighbour"
msgid "Not authorized"
msgstr "Not authorised"
msgid "Not supported"
msgstr "Not supported"
msgid "Operation not permitted"
msgstr "Operation not permitted"
msgid "Outlet temperature based strategy"
msgstr "Outlet temperature based strategy"
#, python-format
msgid ""
"Payload not populated when trying to send notification \"%(class_name)s\""
msgstr ""
"Payload not populated when trying to send notification \"%(class_name)s\""
msgid "Plugins"
msgstr "Plugins"
#, python-format
msgid "Policy doesn't allow %(action)s to be performed."
msgstr "Policy doesn't allow %(action)s to be performed."
#, python-format
msgid "Project name seems ambiguous: %s"
msgstr "Project name seems ambiguous: %s"
#, python-format
msgid "Project not Found: %s"
msgstr "Project not Found: %s"
#, python-format
msgid "Provided %(action_type) is not supported yet"
msgstr "Provided %(action_type) is not supported yet"
#, python-format
msgid "Provided cron is invalid: %(message)s"
msgstr "Provided cron is invalid: %(message)s"
#, python-format
msgid "Purge results summary%s:"
msgstr "Purge results summary%s:"
msgid ""
"Ratio of actual attached volumes migrated to planned attached volumes "
"migrate."
msgstr ""
"Ratio of actual attached volumes migrated to planned attached volumes "
"migrate."
msgid ""
"Ratio of actual cold migrated instances to planned cold migrate instances."
msgstr ""
"Ratio of actual cold migrated instances to planned cold migrate instances."
msgid ""
"Ratio of actual detached volumes migrated to planned detached volumes "
"migrate."
msgstr ""
"Ratio of actual detached volumes migrated to planned detached volumes "
"migrate."
msgid ""
"Ratio of actual live migrated instances to planned live migrate instances."
msgstr ""
"Ratio of actual live migrated instances to planned live migrate instances."
msgid ""
"Ratio of released compute nodes divided by the total number of enabled "
"compute nodes."
msgstr ""
"Ratio of released compute nodes divided by the total number of enabled "
"compute nodes."
#, python-format
msgid "Role name seems ambiguous: %s"
msgstr "Role name seems ambiguous: %s"
#, python-format
msgid "Role not Found: %s"
msgstr "Role not Found: %s"
msgid "Saving Energy"
msgstr "Saving Energy"
msgid "Saving Energy Strategy"
msgstr "Saving Energy Strategy"
#, python-format
msgid "Scoring Engine with name=%s not found"
msgstr "Scoring Engine with name=%s not found"
#, python-format
msgid "ScoringEngine %(scoring_engine)s could not be found"
msgstr "ScoringEngine %(scoring_engine)s could not be found"
msgid "Seconds between running periodic tasks."
msgstr "Seconds between running periodic tasks."
msgid "Server Consolidation"
msgstr "Server Consolidation"
msgid ""
"Specifies the minimum level for which to send notifications. If not set, no "
"notifications will be sent. The default is for this option to be at the "
"`INFO` level."
msgstr ""
"Specifies the minimum level for which to send notifications. If not set, no "
"notifications will be sent. The default is for this option to be at the "
"`INFO` level."
msgid ""
"Specify parameters but no predefined strategy for audit, or no parameter "
"spec in predefined strategy"
msgstr ""
"Specify parameters but no predefined strategy for audit, or no parameter "
"spec in predefined strategy"
#, python-format
msgid "State transition not allowed: (%(initial_state)s -> %(new_state)s)"
msgstr "State transition not allowed: (%(initial_state)s -> %(new_state)s)"
msgid "Storage Capacity Balance Strategy"
msgstr "Storage Capacity Balance Strategy"
msgid "Strategies"
msgstr "Strategies"
#, python-format
msgid "Strategy %(strategy)s could not be found"
msgstr "Strategy %(strategy)s could not be found"
#, python-format
msgid "Strategy %(strategy)s is invalid"
msgstr "Strategy %(strategy)s is invalid"
#, python-format
msgid "The %(name)s %(id)s could not be found"
msgstr "The %(name)s %(id)s could not be found"
#, python-format
msgid "The %(name)s resource %(id)s could not be found"
msgstr "The %(name)s resource %(id)s could not be found"
#, python-format
msgid "The %(name)s resource %(id)s is not soft deleted"
msgstr "The %(name)s resource %(id)s is not soft deleted"
#, python-format
msgid "The action %(action_id)s execution failed."
msgstr "The action %(action_id)s execution failed."
#, python-format
msgid "The action description %(action_id)s cannot be found."
msgstr "The action description %(action_id)s cannot be found."
msgid "The audit template UUID or name specified is invalid"
msgstr "The audit template UUID or name specified is invalid"
#, python-format
msgid "The baremetal resource '%(name)s' could not be found"
msgstr "The baremetal resource '%(name)s' could not be found"
#, python-format
msgid "The capacity %(capacity)s is not defined for '%(resource)s'"
msgstr "The capacity %(capacity)s is not defined for '%(resource)s'"
#, python-format
msgid "The cluster data model '%(cdm)s' could not be built"
msgstr "The cluster data model '%(cdm)s' could not be built"
msgid "The cluster state is not defined"
msgstr "The cluster state is not defined"
msgid "The cluster state is stale"
msgstr "The cluster state is stale"
#, python-format
msgid "The compute node %(name)s could not be found"
msgstr "The compute node %(name)s could not be found"
#, python-format
msgid "The compute resource '%(name)s' could not be found"
msgstr "The compute resource '%(name)s' could not be found"
#, python-format
msgid "The identifier '%(name)s' is a reserved word"
msgstr "The identifier '%(name)s' is a reserved word"
#, python-format
msgid ""
"The indicator '%(name)s' with value '%(value)s' and spec type "
"'%(spec_type)s' is invalid."
msgstr ""
"The indicator '%(name)s' with value '%(value)s' and spec type "
"'%(spec_type)s' is invalid."
#, python-format
msgid "The instance '%(name)s' could not be found"
msgstr "The instance '%(name)s' could not be found"
#, python-format
msgid "The ironic node %(uuid)s could not be found"
msgstr "The Ironic node %(uuid)s could not be found"
msgid "The list of compute node(s) in the cluster is empty"
msgstr "The list of compute node(s) in the cluster is empty"
msgid "The list of storage node(s) in the cluster is empty"
msgstr "The list of storage node(s) in the cluster is empty"
msgid "The metrics resource collector is not defined"
msgstr "The metrics resource collector is not defined"
msgid "The number of VM migrations to be performed."
msgstr "The number of VM migrations to be performed."
msgid "The number of attached volumes actually migrated."
msgstr "The number of attached volumes actually migrated."
msgid "The number of attached volumes planned to migrate."
msgstr "The number of attached volumes planned to migrate."
msgid "The number of compute nodes to be released."
msgstr "The number of compute nodes to be released."
msgid "The number of detached volumes actually migrated."
msgstr "The number of detached volumes actually migrated."
msgid "The number of detached volumes planned to migrate."
msgstr "The number of detached volumes planned to migrate."
msgid "The number of instances actually cold migrated."
msgstr "The number of instances actually cold migrated."
msgid "The number of instances actually live migrated."
msgstr "The number of instances actually live migrated."
msgid "The number of instances planned to cold migrate."
msgstr "The number of instances planned to cold migrate."
msgid "The number of instances planned to live migrate."
msgstr "The number of instances planned to live migrate."
#, python-format
msgid ""
"The number of objects (%(num)s) to delete from the database exceeds the "
"maximum number of objects (%(max_number)s) specified."
msgstr ""
"The number of objects (%(num)s) to delete from the database exceeds the "
"maximum number of objects (%(max_number)s) specified."
#, python-format
msgid "The pool %(name)s could not be found"
msgstr "The pool %(name)s could not be found"
#, python-format
msgid "The service %(service)s cannot be found."
msgstr "The service %(service)s cannot be found."
#, python-format
msgid "The storage node %(name)s could not be found"
msgstr "The storage node %(name)s could not be found"
#, python-format
msgid "The storage resource '%(name)s' could not be found"
msgstr "The storage resource '%(name)s' could not be found"
msgid "The target state is not defined"
msgstr "The target state is not defined"
msgid "The total number of enabled compute nodes."
msgstr "The total number of enabled compute nodes."
#, python-format
msgid "The volume '%(name)s' could not be found"
msgstr "The volume '%(name)s' could not be found"
#, python-format
msgid "There are %(count)d objects set for deletion. Continue? [y/N]"
msgstr "There are %(count)d objects set for deletion. Continue? [y/N]"
msgid "Thermal Optimization"
msgstr "Thermal Optimisation"
msgid "Total"
msgstr "Total"
msgid "Unable to parse features: "
msgstr "Unable to parse features: "
#, python-format
msgid "Unable to parse features: %s"
msgstr "Unable to parse features: %s"
msgid "Unacceptable parameters"
msgstr "Unacceptable parameters"
msgid "Unclassified"
msgstr "Unclassified"
#, python-format
msgid "Unexpected keystone client error occurred: %s"
msgstr "Unexpected Keystone client error occurred: %s"
msgid "Uniform airflow migration strategy"
msgstr "Uniform airflow migration strategy"
#, python-format
msgid "User name seems ambiguous: %s"
msgstr "User name seems ambiguous: %s"
#, python-format
msgid "User not Found: %s"
msgstr "User not Found: %s"
msgid "VM Workload Consolidation Strategy"
msgstr "VM Workload Consolidation Strategy"
msgid "Volume type must be different for retyping"
msgstr "Volume type must be different for retyping"
msgid "Volume type must be same for migrating"
msgstr "Volume type must be same for migrating"
msgid ""
"Watcher database schema is already under version control; use upgrade() "
"instead"
msgstr ""
"Watcher database schema is already under version control; use upgrade() "
"instead"
#, python-format
msgid "Workflow execution error: %(error)s"
msgstr "Workflow execution error: %(error)s"
msgid "Workload Balance Migration Strategy"
msgstr "Workload Balance Migration Strategy"
msgid "Workload Balancing"
msgstr "Workload Balancing"
msgid "Workload stabilization"
msgstr "Workload stabilisation"
#, python-format
msgid "Wrong type. Expected '%(type)s', got '%(value)s'"
msgstr "Wrong type. Expected '%(type)s', got '%(value)s'"
#, python-format
msgid ""
"You shouldn't use any other IDs of %(resource)s if you use wildcard "
"character."
msgstr ""
"You shouldn't use any other IDs of %(resource)s if you use wildcard "
"character."
msgid "Zone migration"
msgstr "Zone migration"
msgid "destination type is required when migration type is swap"
msgstr "destination type is required when migration type is swap"
msgid "host_aggregates can't be included and excluded together"
msgstr "host_aggregates can't be included and excluded together"

@@ -16,7 +16,7 @@ import sys
 import six
 from watcher.notifications import base as notificationbase
-from watcher.objects import base as base
+from watcher.objects import base
 from watcher.objects import fields as wfields

@@ -296,6 +296,8 @@ class TestListAction(api_base.FunctionalTest):
             uuid=utils.generate_uuid())
         ap2_action_list.append(action)
+        action_plan1.state = objects.action_plan.State.CANCELLED
+        action_plan1.save()
         self.delete('/action_plans/%s' % action_plan1.uuid)
         response = self.get_json('/actions')

@@ -147,6 +147,11 @@ class TestListActionPlan(api_base.FunctionalTest):
             audit_id=audit2.id)
         action_plan_list.append(action_plan.uuid)
+        new_state = objects.audit.State.CANCELLED
+        self.patch_json(
+            '/audits/%s' % audit1.uuid,
+            [{'path': '/state', 'value': new_state,
+              'op': 'replace'}])
         self.delete('/audits/%s' % audit1.uuid)
         response = self.get_json('/action_plans')
@@ -304,6 +309,13 @@ class TestDelete(api_base.FunctionalTest):
         action_plan.destroy()

     def test_delete_action_plan_without_action(self):
+        response = self.delete('/action_plans/%s' % self.action_plan.uuid,
+                               expect_errors=True)
+        self.assertEqual(400, response.status_int)
+        self.assertEqual('application/json', response.content_type)
+        self.assertTrue(response.json['error_message'])
+        self.action_plan.state = objects.action_plan.State.SUCCEEDED
+        self.action_plan.save()
         self.delete('/action_plans/%s' % self.action_plan.uuid)
         response = self.get_json('/action_plans/%s' % self.action_plan.uuid,
                                  expect_errors=True)
@@ -315,6 +327,8 @@
         action = obj_utils.create_test_action(
             self.context, id=1)
+        self.action_plan.state = objects.action_plan.State.SUCCEEDED
+        self.action_plan.save()
         self.delete('/action_plans/%s' % self.action_plan.uuid)
         ap_response = self.get_json('/action_plans/%s' % self.action_plan.uuid,
                                     expect_errors=True)

@@ -721,7 +721,7 @@ class TestPost(api_base.FunctionalTest):
         self.assertEqual('application/json', response.content_type)
         self.assertEqual(400, response.status_int)
         expected_error_msg = ('Specify parameters but no predefined '
-                              'strategy for audit template, or no '
+                              'strategy for audit, or no '
                               'parameter spec in predefined strategy')
         self.assertTrue(response.json['error_message'])
         self.assertIn(expected_error_msg, response.json['error_message'])
@@ -743,7 +743,7 @@
         self.assertEqual('application/json', response.content_type)
         self.assertEqual(400, response.status_int)
         expected_error_msg = ('Specify parameters but no predefined '
-                              'strategy for audit template, or no '
+                              'strategy for audit, or no '
                               'parameter spec in predefined strategy')
         self.assertTrue(response.json['error_message'])
         self.assertIn(expected_error_msg, response.json['error_message'])
@@ -806,6 +806,35 @@
             strategy_id=strategy['id'], uuid=template_uuid, name=template_name)
         return audit_template

+    @mock.patch.object(deapi.DecisionEngineAPI, 'trigger_audit')
+    @mock.patch('oslo_utils.timeutils.utcnow')
+    def test_create_audit_with_name(self, mock_utcnow, mock_trigger_audit):
+        mock_trigger_audit.return_value = mock.ANY
+        test_time = datetime.datetime(2000, 1, 1, 0, 0)
+        mock_utcnow.return_value = test_time
+        audit_dict = post_get_test_audit()
+        normal_name = 'this audit name is just for test'
+        # long_name length exceeds 63 characters
+        long_name = normal_name+audit_dict['uuid']
+        del audit_dict['uuid']
+        del audit_dict['state']
+        del audit_dict['interval']
+        del audit_dict['scope']
+        del audit_dict['next_run_time']
+        audit_dict['name'] = normal_name
+        response = self.post_json('/audits', audit_dict)
+        self.assertEqual('application/json', response.content_type)
+        self.assertEqual(201, response.status_int)
+        self.assertEqual(normal_name, response.json['name'])
+        audit_dict['name'] = long_name
+        response = self.post_json('/audits', audit_dict)
+        self.assertEqual('application/json', response.content_type)
+        self.assertEqual(201, response.status_int)
+        self.assertNotEqual(long_name, response.json['name'])
+
 class TestDelete(api_base.FunctionalTest):
@@ -828,6 +857,23 @@ class TestDelete(api_base.FunctionalTest):
     def test_delete_audit(self, mock_utcnow):
         test_time = datetime.datetime(2000, 1, 1, 0, 0)
         mock_utcnow.return_value = test_time
+        new_state = objects.audit.State.ONGOING
+        self.patch_json(
+            '/audits/%s' % self.audit.uuid,
+            [{'path': '/state', 'value': new_state,
+              'op': 'replace'}])
+        response = self.delete('/audits/%s' % self.audit.uuid,
+                               expect_errors=True)
+        self.assertEqual(400, response.status_int)
+        self.assertEqual('application/json', response.content_type)
+        self.assertTrue(response.json['error_message'])
+        new_state = objects.audit.State.CANCELLED
+        self.patch_json(
+            '/audits/%s' % self.audit.uuid,
+            [{'path': '/state', 'value': new_state,
+              'op': 'replace'}])
         self.delete('/audits/%s' % self.audit.uuid)
         response = self.get_json('/audits/%s' % self.audit.uuid,
                                  expect_errors=True)

@@ -10,11 +10,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import mock
 from oslo_config import cfg
 from oslo_serialization import jsonutils
 from six.moves.urllib import parse as urlparse

 from watcher.common import utils
+from watcher.decision_engine import rpcapi as deapi
 from watcher.tests.api import base as api_base
 from watcher.tests.objects import utils as obj_utils
@@ -31,6 +34,28 @@ class TestListStrategy(api_base.FunctionalTest):
         for field in strategy_fields:
             self.assertIn(field, strategy)

+    @mock.patch.object(deapi.DecisionEngineAPI, 'get_strategy_info')
+    def test_state(self, mock_strategy_info):
+        strategy = obj_utils.create_test_strategy(self.context)
+        mock_state = [
+            {"type": "Datasource", "mandatory": True, "comment": "",
+             "state": "gnocchi: True"},
+            {"type": "Metrics", "mandatory": False, "comment": "",
+             "state": [{"compute.node.cpu.percent": "available"},
+                       {"cpu_util": "available"}]},
+            {"type": "CDM", "mandatory": True, "comment": "",
+             "state": [{"compute_model": "available"},
+                       {"storage_model": "not available"}]},
+            {"type": "Name", "mandatory": "", "comment": "",
+             "state": strategy.name}
+        ]
+        mock_strategy_info.return_value = mock_state
+        response = self.get_json('/strategies/%s/state' % strategy.uuid)
+        strategy_name = [requirement["state"] for requirement in response
+                         if requirement["type"] == "Name"][0]
+        self.assertEqual(strategy.name, strategy_name)
+
     def test_one(self):
         strategy = obj_utils.create_test_strategy(self.context)
         response = self.get_json('/strategies')
@@ -234,6 +259,13 @@ class TestStrategyPolicyEnforcement(api_base.FunctionalTest):
             '/strategies/detail',
             expect_errors=True)

+    def test_policy_disallow_state(self):
+        strategy = obj_utils.create_test_strategy(self.context)
+        self._common_policy_check(
+            "strategy:get", self.get_json,
+            '/strategies/%s/state' % strategy.uuid,
+            expect_errors=True)
+
 class TestStrategyEnforcementWithAdminContext(
         TestListStrategy, api_base.AdminRoleTest):
@@ -245,4 +277,5 @@
             "default": "rule:admin_api",
             "strategy:detail": "rule:default",
             "strategy:get": "rule:default",
-            "strategy:get_all": "rule:default"})
+            "strategy:get_all": "rule:default",
+            "strategy:state": "rule:default"})

@@ -84,7 +84,7 @@ class TestMigration(base.TestCase):
         self.action_swap.input_parameters = self.input_parameters_swap
         self.input_parameters_migrate = {
-            "migration_type": "cold",
+            "migration_type": "migrate",
             "destination_node": "storage1-poolname",
             "destination_type": "",
             baction.BaseAction.RESOURCE_ID: self.VOLUME_UUID,
@@ -93,7 +93,7 @@
         self.action_migrate.input_parameters = self.input_parameters_migrate
         self.input_parameters_retype = {
-            "migration_type": "cold",
+            "migration_type": "retype",
             "destination_node": "",
             "destination_type": "storage1-typename",
             baction.BaseAction.RESOURCE_ID: self.VOLUME_UUID,
@@ -130,7 +130,7 @@
     def test_parameters_migrate(self):
         params = {baction.BaseAction.RESOURCE_ID:
                   self.VOLUME_UUID,
-                  self.action.MIGRATION_TYPE: 'cold',
+                  self.action.MIGRATION_TYPE: 'migrate',
                   self.action.DESTINATION_NODE: 'node-1',
                   self.action.DESTINATION_TYPE: None}
         self.action_migrate.input_parameters = params
@@ -139,7 +139,7 @@
     def test_parameters_retype(self):
         params = {baction.BaseAction.RESOURCE_ID:
                   self.VOLUME_UUID,
-                  self.action.MIGRATION_TYPE: 'cold',
+                  self.action.MIGRATION_TYPE: 'retype',
                   self.action.DESTINATION_NODE: None,
                   self.action.DESTINATION_TYPE: 'type-1'}
         self.action_retype.input_parameters = params
@@ -157,7 +157,6 @@
     def test_migrate_success(self):
         volume = self.fake_volume()
-        self.m_c_helper.can_cold.return_value = True
         self.m_c_helper.get_volume.return_value = volume
         result = self.action_migrate.execute()
         self.assertTrue(result)
@@ -166,16 +165,9 @@
             "storage1-poolname"
         )

-    def test_migrate_fail(self):
-        self.m_c_helper.can_cold.return_value = False
-        result = self.action_migrate.execute()
-        self.assertFalse(result)
-        self.m_c_helper.migrate.assert_not_called()
-
     def test_retype_success(self):
         volume = self.fake_volume()
-        self.m_c_helper.can_cold.return_value = True
         self.m_c_helper.get_volume.return_value = volume
         result = self.action_retype.execute()
         self.assertTrue(result)
@@ -184,12 +176,6 @@
             "storage1-typename",
         )

-    def test_retype_fail(self):
-        self.m_c_helper.can_cold.return_value = False
-        result = self.action_migrate.execute()
-        self.assertFalse(result)
-        self.m_c_helper.migrate.assert_not_called()
-
     def test_swap_success(self):
         volume = self.fake_volume(
             status='in-use', attachments=[{'server_id': 'server_id'}])

@@ -112,7 +112,7 @@ class TestCinderHelper(base.TestCase):
         volume_type_name = cinder_util.get_volume_type_by_backendname(
             'backend')
-        self.assertEqual(volume_type_name, volume_type1.name)
+        self.assertEqual(volume_type_name[0], volume_type1.name)

     def test_get_volume_type_by_backendname_with_no_backend_exist(
             self, mock_cinder):
@@ -122,7 +122,7 @@
         volume_type_name = cinder_util.get_volume_type_by_backendname(
             'nobackend')
-        self.assertEqual("", volume_type_name)
+        self.assertEqual([], volume_type_name)

     @staticmethod
     def fake_volume(**kwargs):
@@ -136,33 +136,6 @@
         volume.volume_type = kwargs.get('volume_type', 'fake_type')
         return volume

-    def test_can_cold_success(self, mock_cinder):
-        cinder_util = cinder_helper.CinderHelper()
-        volume = self.fake_volume()
-        cinder_util.cinder.volumes.get.return_value = volume
-        result = cinder_util.can_cold(volume)
-        self.assertTrue(result)
-
-    def test_can_cold_fail(self, mock_cinder):
-        cinder_util = cinder_helper.CinderHelper()
-        volume = self.fake_volume(status='in-use')
-        cinder_util.cinder.volumes.get.return_value = volume
-        result = cinder_util.can_cold(volume)
-        self.assertFalse(result)
-        volume = self.fake_volume(snapshot_id='snapshot_id')
-        cinder_util.cinder.volumes.get.return_value = volume
-        result = cinder_util.can_cold(volume)
-        self.assertFalse(result)
-        volume = self.fake_volume()
-        setattr(volume, 'os-vol-host-attr:host', 'host@backend#pool')
-        cinder_util.cinder.volumes.get.return_value = volume
-        result = cinder_util.can_cold(volume, 'host@backend#pool')
-        self.assertFalse(result)
-
     @mock.patch.object(time, 'sleep', mock.Mock())
     def test_migrate_success(self, mock_cinder):

@@ -0,0 +1,63 @@
+# -*- encoding: utf-8 -*-
+# Copyright (c) 2017 ZTE Corporation
+#
+# Authors:Yumeng Bao <bao.yumeng@zte.com.cn>
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import mock
+
+from watcher.common import clients
+from watcher.common import exception
+from watcher.common import ironic_helper
+from watcher.common import utils as w_utils
+from watcher.tests import base
+
+
+class TestIronicHelper(base.TestCase):
+
+    def setUp(self):
+        super(TestIronicHelper, self).setUp()
+        osc = clients.OpenStackClients()
+        p_ironic = mock.patch.object(osc, 'ironic')
+        p_ironic.start()
+        self.addCleanup(p_ironic.stop)
+        self.ironic_util = ironic_helper.IronicHelper(osc=osc)
+
+    @staticmethod
+    def fake_ironic_node():
+        node = mock.MagicMock()
+        node.uuid = w_utils.generate_uuid()
+        return node
+
+    def test_get_ironic_node_list(self):
+        node1 = self.fake_ironic_node()
+        self.ironic_util.ironic.node.list.return_value = [node1]
+        rt_nodes = self.ironic_util.get_ironic_node_list()
+        self.assertEqual(rt_nodes, [node1])
+
+    def test_get_ironic_node_by_uuid_success(self):
+        node1 = self.fake_ironic_node()
+        self.ironic_util.ironic.node.get.return_value = node1
+        node = self.ironic_util.get_ironic_node_by_uuid(node1.uuid)
+        self.assertEqual(node, node1)
+
+    def test_get_ironic_node_by_uuid_failure(self):
+        self.ironic_util.ironic.node.get.return_value = None
+        self.assertRaisesRegex(
+            exception.IronicNodeNotFound,
+            "The ironic node node1 could not be found",
+            self.ironic_util.get_ironic_node_by_uuid, 'node1')

@@ -55,7 +55,8 @@ class TestCeilometerHelper(base.BaseTestCase):
         val = cm.statistic_aggregation(
             resource_id="INSTANCE_ID",
             meter_name="cpu_util",
-            period="7300"
+            period="7300",
+            granularity=None
         )
         self.assertEqual(expected_result, val)
@@ -93,3 +94,124 @@
         cm = ceilometer_helper.CeilometerHelper()
         val = cm.statistic_list(meter_name="cpu_util")
         self.assertEqual(expected_value, val)
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_host_cpu_usage(self, mock_aggregation, mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_host_cpu_usage('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_cpu_usage'], 600, None,
+            aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_instance_cpu_usage(self, mock_aggregation, mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_instance_cpu_usage('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['instance_cpu_usage'], 600, None,
+            aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_host_memory_usage(self, mock_aggregation, mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_host_memory_usage('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_memory_usage'], 600, None,
+            aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_instance_memory_usage(self, mock_aggregation,
+                                       mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_instance_memory_usage('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['instance_ram_usage'], 600, None,
+            aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_instance_l3_cache_usage(self, mock_aggregation,
+                                         mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_instance_l3_cache_usage('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['instance_l3_cache_usage'], 600,
+            None, aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_instance_ram_allocated(self, mock_aggregation,
+                                        mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_instance_ram_allocated('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['instance_ram_allocated'], 600, None,
+            aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_instance_root_disk_allocated(self, mock_aggregation,
+                                              mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_instance_root_disk_allocated('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['instance_root_disk_size'], 600,
+            None, aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_host_outlet_temperature(self, mock_aggregation,
+                                         mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_host_outlet_temperature('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_outlet_temp'], 600, None,
+            aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_host_inlet_temperature(self, mock_aggregation,
+                                        mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_host_inlet_temperature('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_inlet_temp'], 600, None,
+            aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_host_airflow(self, mock_aggregation, mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_host_airflow('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_airflow'], 600, None,
+            aggregate='mean')
+
+    @mock.patch.object(ceilometer_helper.CeilometerHelper,
+                       'statistic_aggregation')
+    def test_get_host_power(self, mock_aggregation, mock_ceilometer):
+        helper = ceilometer_helper.CeilometerHelper()
+        helper.get_host_power('compute1', 600, 'mean')
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_power'], 600, None,
+            aggregate='mean')
+
+    def test_check_availability(self, mock_ceilometer):
+        ceilometer = mock.MagicMock()
+        ceilometer.resources.list.return_value = True
+        mock_ceilometer.return_value = ceilometer
+        helper = ceilometer_helper.CeilometerHelper()
+        result = helper.check_availability()
+        self.assertEqual('available', result)
+
+    def test_check_availability_with_failure(self, mock_ceilometer):
+        ceilometer = mock.MagicMock()
+        ceilometer.resources.list.side_effect = Exception()
+        mock_ceilometer.return_value = ceilometer
+        helper = ceilometer_helper.CeilometerHelper()
+        self.assertEqual('not available', helper.check_availability())

@@ -16,10 +16,8 @@
 import mock
 from oslo_config import cfg
 from oslo_utils import timeutils

-from watcher.common import clients
-from watcher.common import exception
 from watcher.datasource import gnocchi as gnocchi_helper
 from watcher.tests import base
@@ -41,28 +39,134 @@
         helper = gnocchi_helper.GnocchiHelper()
         result = helper.statistic_aggregation(
             resource_id='16a86790-327a-45f9-bc82-45839f062fdc',
-            metric='cpu_util',
+            meter_name='cpu_util',
             period=300,
             granularity=360,
-            start_time=timeutils.parse_isotime("2017-02-02T09:00:00.000000"),
-            stop_time=timeutils.parse_isotime("2017-02-02T10:00:00.000000"),
-            aggregation='mean'
+            dimensions=None,
+            aggregation='mean',
+            group_by='*'
         )
         self.assertEqual(expected_result, result)

-    def test_gnocchi_wrong_datetime(self, mock_gnocchi):
-        gnocchi = mock.MagicMock()
-        expected_measures = [["2017-02-02T09:00:00.000000", 360, 5.5]]
-        gnocchi.metric.get_measures.return_value = expected_measures
-        mock_gnocchi.return_value = gnocchi
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_host_cpu_usage(self, mock_aggregation, mock_gnocchi):
         helper = gnocchi_helper.GnocchiHelper()
-        self.assertRaises(
-            exception.InvalidParameter, helper.statistic_aggregation,
-            resource_id='16a86790-327a-45f9-bc82-45839f062fdc',
-            metric='cpu_util',
-            granularity=360,
-            start_time="2017-02-02T09:00:00.000000",
-            stop_time=timeutils.parse_isotime("2017-02-02T10:00:00.000000"),
+        helper.get_host_cpu_usage('compute1', 600, 'mean', granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_cpu_usage'], 600, 300,
+            aggregation='mean')
+
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_instance_cpu_usage(self, mock_aggregation, mock_gnocchi):
+        helper = gnocchi_helper.GnocchiHelper()
+        helper.get_instance_cpu_usage('compute1', 600, 'mean', granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['instance_cpu_usage'], 600, 300,
+            aggregation='mean')
+
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_host_memory_usage(self, mock_aggregation, mock_gnocchi):
+        helper = gnocchi_helper.GnocchiHelper()
+        helper.get_host_memory_usage('compute1', 600, 'mean', granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_memory_usage'], 600, 300,
+            aggregation='mean')
+
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_instance_memory_usage(self, mock_aggregation, mock_gnocchi):
+        helper = gnocchi_helper.GnocchiHelper()
+        helper.get_instance_memory_usage('compute1', 600, 'mean',
+                                         granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['instance_ram_usage'], 600, 300,
+            aggregation='mean')
+
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_instance_ram_allocated(self, mock_aggregation, mock_gnocchi):
+        helper = gnocchi_helper.GnocchiHelper()
+        helper.get_instance_ram_allocated('compute1', 600, 'mean',
+                                          granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['instance_ram_allocated'], 600, 300,
+            aggregation='mean')
+
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_instance_root_disk_allocated(self, mock_aggregation,
+                                              mock_gnocchi):
+        helper = gnocchi_helper.GnocchiHelper()
+        helper.get_instance_root_disk_allocated('compute1', 600, 'mean',
+                                                granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['instance_root_disk_size'], 600,
+            300, aggregation='mean')
+
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_host_outlet_temperature(self, mock_aggregation, mock_gnocchi):
+        helper = gnocchi_helper.GnocchiHelper()
+        helper.get_host_outlet_temperature('compute1', 600, 'mean',
+                                           granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_outlet_temp'], 600, 300,
+            aggregation='mean')
+
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_host_inlet_temperature(self, mock_aggregation, mock_gnocchi):
+        helper = gnocchi_helper.GnocchiHelper()
+        helper.get_host_inlet_temperature('compute1', 600, 'mean',
+                                          granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_inlet_temp'], 600, 300,
+            aggregation='mean')
+
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_host_airflow(self, mock_aggregation, mock_gnocchi):
+        helper = gnocchi_helper.GnocchiHelper()
+        helper.get_host_airflow('compute1', 600, 'mean', granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_airflow'], 600, 300,
+            aggregation='mean')
+
+    @mock.patch.object(gnocchi_helper.GnocchiHelper, 'statistic_aggregation')
+    def test_get_host_power(self, mock_aggregation, mock_gnocchi):
+        helper = gnocchi_helper.GnocchiHelper()
+        helper.get_host_power('compute1', 600, 'mean', granularity=300)
+        mock_aggregation.assert_called_once_with(
+            'compute1', helper.METRIC_MAP['host_power'], 600, 300,
+            aggregation='mean')
+
+    def test_gnocchi_check_availability(self, mock_gnocchi):
+        gnocchi = mock.MagicMock()
+        gnocchi.status.get.return_value = True
+        mock_gnocchi.return_value = gnocchi
+        helper = gnocchi_helper.GnocchiHelper()
+        result = helper.check_availability()
+        self.assertEqual('available', result)
+
+    def test_gnocchi_check_availability_with_failure(self, mock_gnocchi):
+        cfg.CONF.set_override("query_max_retries", 1,
+                              group='gnocchi_client')
+        gnocchi = mock.MagicMock()
+        gnocchi.status.get.side_effect = Exception()
+        mock_gnocchi.return_value = gnocchi
+        helper = gnocchi_helper.GnocchiHelper()
+        self.assertEqual('not available', helper.check_availability())
+
     def test_gnocchi_list_metrics(self, mock_gnocchi):
gnocchi = mock.MagicMock()
metrics = [{"name": "metric1"}, {"name": "metric2"}]
expected_metrics = set(["metric1", "metric2"])
gnocchi.metric.list.return_value = metrics
mock_gnocchi.return_value = gnocchi
helper = gnocchi_helper.GnocchiHelper()
result = helper.list_metrics()
self.assertEqual(expected_metrics, result)
def test_gnocchi_list_metrics_with_failure(self, mock_gnocchi):
cfg.CONF.set_override("query_max_retries", 1,
group='gnocchi_client')
gnocchi = mock.MagicMock()
gnocchi.metric.list.side_effect = Exception()
mock_gnocchi.return_value = gnocchi
helper = gnocchi_helper.GnocchiHelper()
self.assertFalse(helper.list_metrics())


@@ -0,0 +1,43 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 Servionica
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock

from watcher.common import exception
from watcher.datasource import gnocchi as gnoc
from watcher.datasource import manager as ds_manager
from watcher.tests import base


class TestDataSourceManager(base.BaseTestCase):

    @mock.patch.object(gnoc, 'GnocchiHelper')
    def test_get_backend(self, mock_gnoc):
        manager = ds_manager.DataSourceManager(
            config=mock.MagicMock(
                datasources=['gnocchi', 'ceilometer', 'monasca']),
            osc=mock.MagicMock())
        backend = manager.get_backend(['host_cpu_usage',
                                       'instance_cpu_usage'])
        self.assertEqual(backend, manager.gnocchi)

    def test_get_backend_wrong_metric(self):
        manager = ds_manager.DataSourceManager(
            config=mock.MagicMock(
                datasources=['gnocchi', 'ceilometer', 'monasca']),
            osc=mock.MagicMock())
        self.assertRaises(exception.NoSuchMetric, manager.get_backend,
                          ['host_cpu', 'instance_cpu_usage'])


@@ -16,7 +16,6 @@
 import mock
 from oslo_config import cfg
-from oslo_utils import timeutils
 from watcher.common import clients
 from watcher.datasource import monasca as monasca_helper
@@ -30,7 +29,7 @@ class TestMonascaHelper(base.BaseTestCase):
     def test_monasca_statistic_aggregation(self, mock_monasca):
         monasca = mock.MagicMock()
-        expected_result = [{
+        expected_stat = [{
             'columns': ['timestamp', 'avg'],
             'dimensions': {
                 'hostname': 'rdev-indeedsrv001',
@@ -39,23 +38,38 @@ class TestMonascaHelper(base.BaseTestCase):
             'name': 'cpu.percent',
             'statistics': [
                 ['2016-07-29T12:45:00Z', 0.0],
-                ['2016-07-29T12:50:00Z', 0.9100000000000001],
-                ['2016-07-29T12:55:00Z', 0.9111111111111112]]}]
+                ['2016-07-29T12:50:00Z', 0.9],
+                ['2016-07-29T12:55:00Z', 0.9]]}]
-        monasca.metrics.list_statistics.return_value = expected_result
+        monasca.metrics.list_statistics.return_value = expected_stat
         mock_monasca.return_value = monasca
         helper = monasca_helper.MonascaHelper()
         result = helper.statistic_aggregation(
             resource_id=None,
             meter_name='cpu.percent',
-            dimensions={'hostname': 'NODE_UUID'},
-            start_time=timeutils.parse_isotime("2016-06-06T10:33:22.063176"),
-            end_time=None,
             period=7200,
-            aggregate='avg',
+            granularity=300,
+            dimensions={'hostname': 'NODE_UUID'},
+            aggregation='avg',
+            group_by='*',
         )
-        self.assertEqual(expected_result, result)
+        self.assertEqual(0.6, result)
+
+    def test_check_availability(self, mock_monasca):
+        monasca = mock.MagicMock()
+        monasca.metrics.list.return_value = True
+        mock_monasca.return_value = monasca
+        helper = monasca_helper.MonascaHelper()
+        result = helper.check_availability()
+        self.assertEqual('available', result)
+
+    def test_check_availability_with_failure(self, mock_monasca):
+        monasca = mock.MagicMock()
+        monasca.metrics.list.side_effect = Exception()
+        mock_monasca.return_value = monasca
+        helper = monasca_helper.MonascaHelper()
+        self.assertEqual('not available', helper.check_availability())

     def test_monasca_statistic_list(self, mock_monasca):
         monasca = mock.MagicMock()
@@ -98,3 +112,18 @@ class TestMonascaHelper(base.BaseTestCase):
         helper = monasca_helper.MonascaHelper()
         val = helper.statistics_list(meter_name="cpu.percent", dimensions={})
         self.assertEqual(expected_result, val)
+
+    @mock.patch.object(monasca_helper.MonascaHelper, 'statistic_aggregation')
+    def test_get_host_cpu_usage(self, mock_aggregation, mock_monasca):
+        node = "compute1_compute1"
+        mock_aggregation.return_value = 0.6
+        helper = monasca_helper.MonascaHelper()
+        cpu_usage = helper.get_host_cpu_usage(node, 600, 'mean')
+        self.assertEqual(0.6, cpu_usage)
+
+    @mock.patch.object(monasca_helper.MonascaHelper, 'statistic_aggregation')
+    def test_get_instance_cpu_usage(self, mock_aggregation, mock_monasca):
+        mock_aggregation.return_value = 0.6
+        helper = monasca_helper.MonascaHelper()
+        cpu_usage = helper.get_instance_cpu_usage('vm1', 600, 'mean')
+        self.assertEqual(0.6, cpu_usage)


@@ -383,3 +383,37 @@ class TestContinuousAuditHandler(base.DbTestCase):
         audit_handler.execute_audit(self.audits[0], self.context)
         m_execute.assert_called_once_with(self.audits[0], self.context)
         self.assertIsNotNone(self.audits[0].next_run_time)
+
+    @mock.patch.object(objects.service.Service, 'list')
+    @mock.patch.object(sq_api, 'get_engine')
+    @mock.patch.object(scheduling.BackgroundSchedulerService, 'remove_job')
+    @mock.patch.object(scheduling.BackgroundSchedulerService, 'add_job')
+    @mock.patch.object(scheduling.BackgroundSchedulerService, 'get_jobs')
+    @mock.patch.object(objects.audit.Audit, 'list')
+    def test_launch_audits_periodically_with_diff_interval(
+            self, mock_list, mock_jobs, m_add_job, m_remove_job,
+            m_engine, m_service):
+        audit_handler = continuous.ContinuousAuditHandler()
+        mock_list.return_value = self.audits
+        self.audits[0].next_run_time = (datetime.datetime.now() -
+                                        datetime.timedelta(seconds=1800))
+        m_job1 = mock.MagicMock()
+        m_job1.name = 'execute_audit'
+        m_audit = mock.MagicMock()
+        m_audit.uuid = self.audits[0].uuid
+        m_audit.interval = 60
+        m_job1.args = [m_audit]
+        mock_jobs.return_value = [m_job1]
+        m_engine.return_value = mock.MagicMock()
+        m_add_job.return_value = mock.MagicMock()
+        audit_handler.launch_audits_periodically()
+        m_service.assert_called()
+        m_engine.assert_called()
+        m_add_job.assert_called()
+        mock_jobs.assert_called()
+        self.assertIsNotNone(self.audits[0].next_run_time)
+        self.assertIsNone(self.audits[1].next_run_time)
+        audit_handler.launch_audits_periodically()
+        m_remove_job.assert_called()


@@ -26,8 +26,9 @@ class FakeCeilometerMetrics(object):
     def empty_one_metric(self, emptytype):
         self.emptytype = emptytype

-    def mock_get_statistics(self, resource_id, meter_name, period,
-                            aggregate='avg'):
+    def mock_get_statistics(self, resource_id=None, meter_name=None,
+                            period=None, granularity=None, dimensions=None,
+                            aggregation='avg', group_by='*'):
         result = 0
         if meter_name == "hardware.cpu.util":
             result = self.get_usage_node_cpu(resource_id)
@@ -50,7 +51,8 @@ class FakeCeilometerMetrics(object):
         return result

     def mock_get_statistics_wb(self, resource_id, meter_name, period,
-                               aggregate='avg'):
+                               granularity, dimensions=None,
+                               aggregation='avg', group_by='*'):
         result = 0.0
         if meter_name == "cpu_util":
             result = self.get_average_usage_instance_cpu_wb(resource_id)
@@ -58,12 +60,12 @@ class FakeCeilometerMetrics(object):
             result = self.get_average_usage_instance_memory_wb(resource_id)
         return result

-    def mock_get_statistics_nn(self, resource_id, meter_name, period,
-                               aggregate='avg'):
+    def mock_get_statistics_nn(self, resource_id, period,
+                               aggregation, granularity=300):
         result = 0.0
-        if meter_name == "cpu_l3_cache" and period == 100:
+        if period == 100:
             result = self.get_average_l3_cache_current(resource_id)
-        if meter_name == "cpu_l3_cache" and period == 200:
+        if period == 200:
             result = self.get_average_l3_cache_previous(resource_id)
         return result
@@ -152,12 +154,13 @@ class FakeCeilometerMetrics(object):
         return mock[str(uuid)]

     @staticmethod
-    def get_usage_node_cpu(uuid):
+    def get_usage_node_cpu(*args, **kwargs):
         """The last VM CPU usage values to average

         :param uuid:00
         :return:
         """
+        uuid = args[0]
         # query influxdb stream
         # compute in stream
@@ -176,6 +179,8 @@ class FakeCeilometerMetrics(object):
         # node 3
         mock['Node_6_hostname_6'] = 8
+        # This node doesn't send metrics
+        mock['LOST_NODE_hostname_7'] = None
         mock['Node_19_hostname_19'] = 10
         # node 4
         mock['INSTANCE_7_hostname_7'] = 4
@@ -190,7 +195,10 @@ class FakeCeilometerMetrics(object):
             # mock[uuid] = random.randint(1, 4)
             mock[uuid] = 8
-        return float(mock[str(uuid)])
+        if mock[str(uuid)] is not None:
+            return float(mock[str(uuid)])
+        else:
+            return mock[str(uuid)]

     @staticmethod
     def get_average_usage_instance_cpu_wb(uuid):
@@ -228,12 +236,13 @@ class FakeCeilometerMetrics(object):
         return mock[str(uuid)]

     @staticmethod
-    def get_average_usage_instance_cpu(uuid):
+    def get_average_usage_instance_cpu(*args, **kwargs):
         """The last VM CPU usage values to average

         :param uuid:00
         :return:
         """
+        uuid = args[0]
         # query influxdb stream
         # compute in stream
@@ -255,6 +264,8 @@ class FakeCeilometerMetrics(object):
         # node 4
         mock['INSTANCE_7'] = 4
+        mock['LOST_INSTANCE'] = None
+
         if uuid not in mock.keys():
             # mock[uuid] = random.randint(1, 4)
             mock[uuid] = 8


@@ -0,0 +1,12 @@
<ModelRoot>
<IronicNode uuid="c5941348-5a87-4016-94d4-4f9e0ce2b87a" power_state="power on" maintenance="false" maintenance_reason="null">
<extra>
<compute_node_id> 1</compute_node_id>
</extra>
</IronicNode>
<IronicNode uuid="c5941348-5a87-4016-94d4-4f9e0ce2b87c" power_state="power on" maintenance="false" maintenance_reason="null">
<extra>
<compute_node_id> 2</compute_node_id>
</extra>
</IronicNode>
</ModelRoot>

Some files were not shown because too many files have changed in this diff.