Compare commits

..

126 Commits

Author SHA1 Message Date
OpenDev Sysadmins
5307f5a80e OpenDev Migration Patch
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:

http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
2019-04-19 19:40:46 +00:00
Ian Wienand
b5467a2a1f Replace openstack.org git:// URLs with https://
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.

This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.

This update should result in no functional change.

For more information see the thread at

 http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html

Change-Id: I886b29ba8a1814cf876e70b5b20504a221d32fa1
2019-03-24 20:36:26 +00:00
Alexander Chadin
83411ec89f Fix stop_watcher function
Apache should be reloaded after watcher-api is disabled.

Change-Id: Ifee0e7701849348630568aa36b3f3c4c62d3382e
2018-12-10 13:55:44 +00:00
licanwei
08750536e7 optimize get_instances_by_node
We can set the host field in search_opts.
refer to:
https://developer.openstack.org/api-ref/compute/?expanded=list-servers-detail#list-servers

Change-Id: I36b27167d7223f3bf6bb05995210af41ad01fc6d
2018-11-06 13:39:14 +00:00
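The optimization above can be sketched in plain Python. The `FakeNovaClient` below is a stand-in for python-novaclient's server list call; only the idea of passing `host` in `search_opts` (so the Compute API filters server-side) comes from the commit:

```python
# Hypothetical sketch: push the host filter into search_opts instead of
# listing every server and filtering locally. FakeNovaClient stands in
# for novaclient; names here are illustrative.

class FakeNovaClient:
    def __init__(self, servers):
        self._servers = servers  # list of dicts with a 'host' key

    def list(self, search_opts=None):
        servers = self._servers
        if search_opts and 'host' in search_opts:
            # the real Compute API applies this filter server-side
            servers = [s for s in servers if s['host'] == search_opts['host']]
        return servers

client = FakeNovaClient([
    {'name': 'vm1', 'host': 'node-1'},
    {'name': 'vm2', 'host': 'node-2'},
])

def get_instances_by_node(client, node_name):
    # one filtered API call instead of fetching all servers
    return client.list(search_opts={'host': node_name})
```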
Tatiana Kholkina
9f7ccfe408 Use limit -1 for nova servers list
By default nova has a limit for returned items in a single response [1].
We should pass limit=-1 to get all items.

[1] https://docs.openstack.org/nova/rocky/configuration/config.html

Change-Id: I1fabd909c4c0356ef5fcb7c51718fb4513e6befa
2018-10-16 08:37:45 +00:00
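The pagination behavior the commit works around can be illustrated with a stand-alone sketch. Nova caps a single list response at a configured maximum (`osapi_max_limit`, 1000 by default); passing `limit=-1` through python-novaclient makes the client page through until every item is returned. `MAX_LIMIT` and the helper below are illustrative, not nova's real code:

```python
MAX_LIMIT = 3  # stand-in for nova's osapi_max_limit

def list_servers(all_servers, limit=None):
    if limit == -1:
        # the client keeps following pagination links until exhausted
        return list(all_servers)
    # otherwise a single response is capped at MAX_LIMIT items
    page_size = MAX_LIMIT if limit is None else min(limit, MAX_LIMIT)
    return all_servers[:page_size]

servers = ['vm%d' % i for i in range(5)]
truncated = list_servers(servers)            # capped, items are silently lost
everything = list_servers(servers, limit=-1)  # the fix: ask for all items
```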
Tatiana Kholkina
fb2619e538 Provide region name while initialize clients
Add new option 'region_name' to config for each client section.

Change-Id: Ifad8908852f4be69dd294a4c4ab28d2e1df265e8
Closes-Bug: #1787937
(cherry picked from commit 925b971377)
2018-09-21 12:31:04 +00:00
Nguyen Hai
6bd857fa0e import zuul job settings from project-config
This is a mechanically generated patch to complete step 1 of moving
the zuul job settings out of project-config and into each project
repository.

Because there will be a separate patch on each branch, the branch
specifiers for branch-specific jobs have been removed.

Because this patch is generated by a script, there may be some
cosmetic changes to the layout of the YAML file(s) as the contents are
normalized.

See the python3-first goal document for details:
https://governance.openstack.org/tc/goals/stein/python3-first.html

Change-Id: I35a8ce3dc54cb662ee9154e343cf50fe96f64807
Story: #2002586
Task: #24344
2018-08-19 00:59:08 +09:00
Clark Boylan
e0faeea608 Remove undefined job
The legacy-rally-dsvm-watcher-rally job does not exist but it is listed
in the .zuul.yaml config. This is a zuul configuration error. Remove
this job which does not exist to fix zuul.

Change-Id: I1bbfd373ad12b98696ab2ddb78e56e6503cc4c4d
2018-07-03 13:27:12 -07:00
Zuul
61aca40e6e Merge "Update auth_uri option to www_authenticate_uri" into stable/queens 2018-06-05 07:49:22 +00:00
caoyuan
b293389734 Delete the unnecessary '-'
fix a typo

Change-Id: I4ecdb827d94ef0ae88e2f37db9d1a53525140947
(cherry picked from commit 4844baa816)
2018-05-16 05:03:45 +00:00
caoyuan
050e6d58f1 Update auth_uri option to www_authenticate_uri
Option auth_uri from group keystone_authtoken is deprecated in Queens [1].
Use option www_authenticate_uri from group keystone_authtoken.

[1] https://review.openstack.org/#/c/508522/

Change-Id: I2ef330d7f9b632e9a81d22a8edec3c88eb532ff5
(cherry picked from commit 8c916930c8)
2018-05-15 07:57:53 +00:00
Zuul
7223d35c47 Merge "Imported Translations from Zanata" into stable/queens 2018-03-06 05:30:53 +00:00
Zuul
57f1971982 Merge "Add a hacking rule for string interpolation at logging" into stable/queens 2018-03-06 02:42:13 +00:00
OpenStack Proposal Bot
c9b2b2aa39 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Ia00d11dd76a27a5c052c7a512cadaefa168d0340
2018-03-03 07:22:16 +00:00
Andreas Jaeger
a42c31c221 Fix exception string format
The string %(action) is not valid; it is missing the conversion type.
Add 's' for string.

Note that this leads to an untranslatable string, since our translation
tools check for valid formats and fail. In this case the failure comes
from an error in the source string itself.

Change-Id: I2e630928dc32542a8a7c02657a9f0ab1eaab62ff
2018-03-02 20:57:41 +00:00
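The failure mode described above can be reproduced stand-alone: `%(action)` with no trailing conversion type makes %-formatting fail, while `%(action)s` formats correctly. The surrounding message text is illustrative, not the actual Watcher string:

```python
# minimal reproduction of the bug: '%(action)' at the end of a format
# string lacks the conversion type, so interpolation raises.
params = {'action': 'CHANGED'}

try:
    broken = 'Unsupported value %(action)' % params
except (ValueError, TypeError):  # "incomplete format"
    broken = None

# with the 's' conversion type the string formats correctly
# (and stays translatable):
fixed = 'Unsupported value %(action)s' % params
```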
ForestLee
403ec94bc1 Add a hacking rule for string interpolation at logging
String interpolation should be delayed to be handled by
the logging code, rather than being done at the point
of the logging call.
See the oslo i18n guideline
* https://docs.openstack.org/oslo.i18n/latest/user/guidelines.html#adding-variables-to-log-messages
and
* https://github.com/openstack-dev/hacking/blob/master/hacking/checks/other.py#L39
Closes-Bug: #1596829

Change-Id: Ibba5791669c137be1483805db657beb907030227
2018-02-28 12:13:10 +00:00
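The difference the hacking rule enforces is easy to see with the standard `logging` module: eager interpolation builds the message string even when the log level discards it, while the delayed form lets logging interpolate only when a record is actually emitted.

```python
import logging

logging.basicConfig(level=logging.WARNING)  # INFO records are discarded
LOG = logging.getLogger('demo')

node = 'compute-1'

# Eager: the string is built here even though the record is dropped
# (this is the pattern the hacking check flags).
LOG.info('Auditing node %s' % node)

# Delayed: logging interpolates only if a handler emits the record.
LOG.info('Auditing node %s', node)

# when a record IS emitted, the delayed form yields the same text:
record = logging.LogRecord('demo', logging.INFO, __file__, 0,
                           'Auditing node %s', (node,), None)
```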
OpenStack Release Bot
3431b77388 Update UPPER_CONSTRAINTS_FILE for stable/queens
The new stable upper-constraints file is only available
after the openstack/requirements repository is branched.
This will happen around the RC1 timeframe.

Recheck and merge this change once the requirements
repository has been branched.

The CI system will work with this patch before the requirements
repository is branched because zuul configures the job to run
with a local copy of the file and defaults to the master branch.
However, accepting the patch will break the test configuration
on developers' local systems, so please wait until after the
requirements repository is branched to merge the patch.

Change-Id: I8ec196a62e7c0146f25045e643073f414ae69249
2018-02-08 16:34:03 +00:00
OpenStack Release Bot
eb4cacc00e Update .gitreview for stable/queens
Change-Id: I4ac0da37285c34471654bb5125c034b415c6031d
2018-02-08 16:33:58 +00:00
Zuul
40a653215f Merge "Zuul: Remove project name" 2018-02-07 07:24:53 +00:00
Zuul
1492f5d8dc Merge "Replace Chinese double quotes with English double quotes" 2018-02-07 07:22:41 +00:00
Zuul
76263f149a Merge "Fix issues with aggregate and granularity attributes" 2018-02-06 06:05:50 +00:00
James E. Blair
028006d15d Zuul: Remove project name
Zuul no longer requires the project-name for in-repo configuration.
Omitting it makes forking or renaming projects easier.

Change-Id: Ib3be82015be1d6853c44cf53faacb238237ad701
2018-02-05 14:18:38 -08:00
Alexander Chadin
d27ba8cc2a Fix issues with aggregate and granularity attributes
This patch set fixes issues that appeared after merging the
watcher-multi-datasource and strategy-requirements patches.
It is the final commit of the watcher-multi-datasource blueprint.

Partially-Implements: blueprint watcher-multi-datasource
Change-Id: I25b4cb0e1b85379ff0c4da9d0c1474380d75ce3a
2018-02-05 11:08:48 +00:00
chengebj5238
33750ce7a9 Replace Chinese double quotes with English double quotes
Change-Id: I566ce10064c3dc51b875fc973c0ad9b58449001c
2018-02-05 17:59:08 +08:00
Zuul
cb8d1a98d6 Merge "Fix get_compute_node_by_hostname in nova_helper" 2018-02-05 06:47:10 +00:00
Hidekazu Nakamura
f32252d510 Fix get_compute_node_by_hostname in nova_helper
If the hostname differs from the uuid in the compute CDM, the
get_compute_node_by_hostname method returns an empty result.
This patch set fixes it to return a compute node even when the
hostname differs from the uuid.

Change-Id: I6cbc0be1a79cc238f480caed9adb8dc31256754a
Closes-Bug: #1746162
2018-02-02 14:26:20 +09:00
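The shape of the fix can be sketched with a minimal stand-in for the compute CDM (nodes keyed by uuid, each carrying a hostname that may differ from the uuid). The helper below is illustrative, not Watcher's real nova_helper code:

```python
# a toy compute model: uuid -> node attributes
model = {
    'a1b2': {'hostname': 'compute-1'},
    'c3d4': {'hostname': 'compute-2'},
}

def get_compute_node_by_hostname(model, hostname):
    # search by the hostname attribute instead of assuming the key
    # (uuid) equals the hostname
    matches = [uuid for uuid, node in model.items()
               if node['hostname'] == hostname]
    if not matches:
        raise KeyError(hostname)
    return matches[0]
```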
Zuul
4849f8dde9 Merge "Add zone migration strategy document" 2018-02-02 04:51:26 +00:00
Hidekazu Nakamura
0cafdcdee9 Add zone migration strategy document
This patch set adds zone migration strategy document.

Change-Id: Ifd9d85d635977900929efd376f0d7990a6fec627
2018-02-02 09:35:58 +09:00
OpenStack Proposal Bot
3a70225164 Updated from global requirements
Change-Id: Ifb8d8d6cb1248eaf8715c84539d74fa04dd753dd
2018-02-01 07:36:19 +00:00
Zuul
892c766ac4 Merge "Fixed AttributeError in storage_model" 2018-01-31 13:58:53 +00:00
Zuul
63a3fd84ae Merge "Remove redundant import alias" 2018-01-31 12:45:21 +00:00
Zuul
287ace1dcc Merge "Update zone_migration comment" 2018-01-31 06:14:15 +00:00
Zuul
4b302e415e Merge "Zuul: Remove project name" 2018-01-30 12:22:41 +00:00
licanwei
f24744c910 Fixed AttributeError in storage_model
self.audit.scope should be self.audit_scope

Closes-Bug: #1746191

Change-Id: I0cce165a2bc1afd4c9e09c51e4d3250ee70d3705
2018-01-30 00:32:19 -08:00
Zuul
d9a85eda2c Merge "Imported Translations from Zanata" 2018-01-29 14:12:36 +00:00
Zuul
82c8633e42 Merge "[Doc] Add actuator strategy doc" 2018-01-29 14:12:35 +00:00
Hidekazu Nakamura
d3f23795f5 Update zone_migration comment
This patch updates zone_migration comment for document and
removes unnecessary TODO.

Change-Id: Ib1eadad6496fe202e406108f432349c82696ea88
2018-01-29 17:48:48 +09:00
Hoang Trung Hieu
e7f4456a80 Zuul: Remove project name
Zuul no longer requires the project-name for in-repo configuration[1].
Omitting it makes forking or renaming projects easier.

[1] https://docs.openstack.org/infra/manual/drivers.html#consistent-naming-for-jobs-with-zuul-v3

Change-Id: Iddf89707289a22ea322c14d1b11f58840871304d
2018-01-29 07:24:44 +00:00
OpenStack Proposal Bot
a36a309e2e Updated from global requirements
Change-Id: I29ebfe2e3398dab6f2e22f3d97c16b72843f1e34
2018-01-29 00:42:54 +00:00
Hidekazu Nakamura
8e3affd9ac [Doc] Add actuator strategy doc
This patch adds actuator strategy document.

Change-Id: I5f0415754c83e4f152155988625ada2208d6c35a
2018-01-28 20:00:05 +09:00
OpenStack Proposal Bot
71e979cae0 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Ie34aafe6d9b54bb97469844d21de38d7c6249031
2018-01-28 07:16:20 +00:00
Luong Anh Tuan
6edfd34a53 Remove redundant import alias
This patch removes redundant import aliases and adds a pep8 hacking
check for redundant import aliases.

Co-Authored-By: Dao Cong Tien <tiendc@vn.fujitsu.com>

Change-Id: I3207cb9f0eb4b4a029b7e822b9c59cf48d1e0f9d
Closes-Bug: #1745527
2018-01-26 09:11:43 +07:00
Alexander Chadin
0c8c32e69e Fix strategy state
Change-Id: I003bb3b41aac69cc40a847f52a50c7bc4cc8d020
2018-01-25 15:41:34 +03:00
Alexander Chadin
9138b7bacb Add datasources to strategies
This patch set adds datasources instead of datasource.

Change-Id: I94f17ae3a0b6a8990293dc9e33be1a2bd3432a14
2018-01-24 20:51:38 +03:00
Zuul
072822d920 Merge "Add baremetal strategy validation" 2018-01-24 14:59:14 +00:00
Zuul
f67ce8cca5 Merge "Add zone migration strategy" 2018-01-24 14:56:07 +00:00
Zuul
9e6f768263 Merge "Strategy requirements" 2018-01-24 14:53:47 +00:00
Zuul
ba9c89186b Merge "Update unreachable link" 2018-01-24 14:21:49 +00:00
Alexander Chadin
16e7d9c13b Add baremetal strategy validation
This patch set adds validation of baremetal model.

It also fixes PEP issues with storage capacity balance
strategy.

Change-Id: I53e37d91fa6c65f7c3d290747169007809100304
Depends-On: I177b443648301eb50da0da63271ecbfd9408bd4f
2018-01-24 14:35:52 +03:00
Zuul
c3536406bd Merge "Audit scoper for storage CDM" 2018-01-24 10:57:37 +00:00
Alexander Chadin
0c66fe2e65 Strategy requirements
This patch set adds a /state resource to the strategy API,
which allows retrieving strategy requirements.

Partially-Implements: blueprint check-strategy-requirements
Change-Id: I177b443648301eb50da0da63271ecbfd9408bd4f
2018-01-24 13:39:42 +03:00
Zuul
74933bf0ba Merge "Fix workload_stabilization unavailable nodes and instances" 2018-01-24 10:35:25 +00:00
Hidekazu Nakamura
1dae83da57 Add zone migration strategy
This patch adds hardware maintenance goal, efficacy and zone
migration strategy.

Change-Id: I5bfee421780233ffeea8c1539aba720ae554983d
Implements: blueprint zone-migration-strategy
2018-01-24 19:33:22 +09:00
Zuul
5ec8932182 Merge "Add storage capacity balance Strategy" 2018-01-24 10:22:25 +00:00
Alexander Chadin
701b258dc7 Fix workload_stabilization unavailable nodes and instances
This patch set excludes nodes and instances from auditing
if appropriate metrics aren't available.

Change-Id: I87c6c249e3962f45d082f92d7e6e0be04e101799
Closes-Bug: #1736982
2018-01-24 11:37:43 +03:00
gaofei
f7fcdf14d0 Update unreachable link
Change-Id: I74bbe5a8c4ca9df550f1279aa80a836d6a2f8a93
2018-01-24 14:40:43 +08:00
OpenStack Proposal Bot
47ba6c0808 Updated from global requirements
Change-Id: I4cbf5308061707e28c202f22e8a9bf8492742040
2018-01-24 01:42:12 +00:00
Zuul
5b5fbbedb4 Merge "Fix compute api ref link" 2018-01-23 15:16:19 +00:00
Zuul
a1c575bfc5 Merge "check audit name length" 2018-01-23 11:21:14 +00:00
deepak_mourya
27e887556d Fix compute api ref link
This is to fix some compute api ref links.

Change-Id: Id5acc4d0f635f3d19b916721b6839a0eef544b2a
2018-01-23 09:23:55 +00:00
Alexander Chadin
891f6bc241 Adapt workload_balance strategy to multiple datasource backend
This patch set:
1. Removes nova, ceilometer and gnocchi properties.
2. Adds using of datasource_backend properties along with
   statistic_aggregation method.
3. Changes type of datasource config.

Change-Id: I09d2dce00378f0ee5381d7c85006752aea6975d2
Partially-Implements: blueprint watcher-multi-datasource
2018-01-23 11:51:02 +03:00
Alexander Chadin
5dd6817d47 Adapt noisy_neighbor strategy to multiple datasource backend
Partially-Implements: blueprint watcher-multi-datasource
Change-Id: Ibcd5d0776280bb68ed838f88ebfcde27fc1a3d35
2018-01-23 11:51:02 +03:00
Alexander Chadin
7cdcb4743e Adapt basic_consolidation strategy to multiple datasource backend
Change-Id: Ie30308fd08ed1fd103b70f58f1d17b3749a6fe04
2018-01-23 11:51:02 +03:00
licanwei
6d03c4c543 check audit name length
No more than 63 characters

Change-Id: I52adbd7e9f12dd4a8b6977756d788ee0e5d6391a
Closes-Bug: #1744231
2018-01-23 00:47:26 -08:00
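The validation above is a simple length guard; a hedged sketch (the constant and helper names are illustrative, not Watcher's actual API validation code):

```python
MAX_AUDIT_NAME_LENGTH = 63

def validate_audit_name(name):
    # reject names longer than the 63-character limit from the commit
    if len(name) > MAX_AUDIT_NAME_LENGTH:
        raise ValueError('audit name exceeds %d characters'
                         % MAX_AUDIT_NAME_LENGTH)
    return name
```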
aditi
bcc129cf94 Audit scoper for storage CDM
This patch adds audit scoper for Storage CDM.

Change-Id: I0c5b3b652027e1394fd7744d904397ce87ed35a1
Implements: blueprint audit-scoper-for-storage-data-model
2018-01-23 13:53:31 +05:30
Zuul
40cff311c6 Merge "Adapt workload_stabilization strategy to new datasource backend" 2018-01-23 01:08:32 +00:00
OpenStack Proposal Bot
1a48a7fc57 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I19a628bc7a0623e2f1ff8ab8794658bfe25801f5
2018-01-20 07:21:59 +00:00
Zuul
652aa54586 Merge "Update link address" 2018-01-19 11:40:25 +00:00
zhangdebo
42a3886ded Update link address
Link to new measurements is out of date and should be updated.
Change
https://docs.openstack.org/ceilometer/latest/contributor/new_meters.html#meters
to
https://docs.openstack.org/ceilometer/latest/contributor/measurements.html#new-measurements

Change-Id: Idc77e29a69a1f1eb9f8827fa74c9fde79e5619df
2018-01-19 07:59:15 +00:00
licanwei
3430493de1 Fix tempest devstack error
Devstack failed because mysql wasn't enabled.

Change-Id: Ifc1c00f2dddd0f3d67c6672d3b9d3d4bd78a4a90
Closes-Bug: #1744224
2018-01-18 23:33:08 -08:00
licanwei
f5bcf9d355 Add storage capacity balance Strategy
This patch adds Storage Capacity Balance Strategy to balance the
storage capacity through volume migration.

Change-Id: I52ea7ce00deb609a2f668db330f1fbc1c9932613
Implements: blueprint storage-workload-balance
2018-01-18 22:18:18 -08:00
Zuul
d809523bef Merge "Add baremetal data model" 2018-01-18 10:38:12 +00:00
Zuul
bfe3c28986 Merge "Fix compute scope test bug" 2018-01-18 09:37:24 +00:00
OpenStack Proposal Bot
3c8caa3d0a Updated from global requirements
Change-Id: I4814a236f5d015ee25b9de95dd1f3f97e375d382
2018-01-18 03:39:36 +00:00
Zuul
766d064dd0 Merge "Update pike install supermark to queens" 2018-01-17 12:34:35 +00:00
Alexander Chadin
ce196b68c4 Adapt workload_stabilization strategy to new datasource backend
This patch set:
1. Removes nova, ceilometer and gnocchi properties.
2. Adds using of datasource_backend properties along with
   statistic_aggregation method.
3. Changes type of datasource config.

Change-Id: I4a2f05772248fddd97a41e27be4094eb59ee0bdb
Partially-Implements: blueprint watcher-multi-datasource
2018-01-17 13:01:05 +03:00
OpenStack Proposal Bot
42130c42a1 Updated from global requirements
Change-Id: I4ef734eeaeee414c3e6340490f1146d537370127
2018-01-16 12:57:22 +00:00
caoyuan
1a8639d256 Update pike install supermark to queens
Change-Id: If981c77518d0605b4113f4bb4345d152545ffc52
2018-01-15 11:56:36 +00:00
zhang.lei
1702fe1a83 Add the title of API Guide
Currently, The title of API Guide is missing.[1] We should add a
title just like other projects.[2]

[1] https://docs.openstack.org/watcher/latest/api
[2] https://developer.openstack.org/api-ref/application-catalog

Change-Id: I012d746e99a68fc5f259a189188d9cea00d5a4f7
2018-01-13 08:04:36 +00:00
aditi
354ebd35cc Fix compute scope test bug
We were excluding 'INSTANCE_6' from scope, which belongs to 'NODE_3'
in scenario_1.xml [1]. But NODE_3 was already removed from the model
because it is not in scope.

So this patch adds 'AZ3' to fake_scope.

[1] https://github.com/openstack/watcher/blob/master/watcher/tests/decision_engine/model/data/scenario_1.xml
Closes-Bug: #1737901

Change-Id: Ib1aaca7045908418ad0c23b718887cd89db98a83
2018-01-12 16:17:25 +05:30
Zuul
7297603f65 Merge "reset job interval when audit was updated" 2018-01-11 09:12:38 +00:00
Zuul
9626cb1356 Merge "check actionplan state when deleting actionplan" 2018-01-11 09:12:37 +00:00
Zuul
9e027940d7 Merge "use current weighted sd as min_sd when starting to simulate migrations" 2018-01-11 08:48:43 +00:00
Zuul
3754938d96 Merge "Set apscheduler logs to WARN level" 2018-01-11 05:39:10 +00:00
Zuul
8a7f930a64 Merge "update audit API description" 2018-01-11 05:32:50 +00:00
Zuul
f7e506155b Merge "Fix configuration doc link" 2018-01-10 17:02:26 +00:00
Yumeng_Bao
54da2a75fb Add baremetal data model
Change-Id: I57b7bb53b3bc84ad383ae485069274f5c5362c50
Implements: blueprint build-baremetal-data-model-in-watcher
2018-01-10 14:46:41 +08:00
Zuul
5cbb9aca7e Merge "bug fix remove volume migration type 'cold'" 2018-01-10 06:15:01 +00:00
Alexander Chadin
bd79882b16 Set apscheduler logs to WARN level
This patch set defines level of apscheduler logs as WARN.

Closes-Bug: #1742153
Change-Id: Idbb4b3e16187afc5c3969096deaf3248fcef2164
2018-01-09 16:30:14 +03:00
licanwei
960c50ba45 Fix configuration doc link
Change-Id: I7b144194287514144948f8547bc45d6bc4551a52
2018-01-07 23:36:20 -08:00
licanwei
9411f85cd2 update audit API description
Change-Id: I1d3eb9364fb5597788a282d275c71f5a314a0923
2018-01-02 23:51:05 -08:00
licanwei
b4370f0461 update action API description
POST/PATCH/DELETE action APIs aren't permitted.

Change-Id: I4126bcc6bf6fe2628748d1f151617a38be06efd8
2017-12-28 22:06:33 -08:00
Zuul
97799521f9 Merge "correct audit parameter typo" 2017-12-28 10:54:57 +00:00
suzhengwei
96fa7f33ac use current weighted sd as min_sd when starting to simulate migrations
If a specific value (usually 1 or 2) is used as min_sd when starting
to simulate migrations, the first simulated migration will always fall
below min_sd and enter the solution, even if the migration increases
the weighted sd. This is unreasonable and makes instances migrate back
and forth among hosts.

Change-Id: I7813c4c92c380c489c349444b85187c5611d9c92
Closes-Bug: #1739723
2017-12-27 15:00:57 +03:00
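The fix can be sketched with the standard library: instead of a fixed threshold, the current weighted standard deviation of host loads becomes the starting min_sd, so a simulated migration only enters the solution if it actually reduces imbalance. A hedged illustration, not Watcher's actual strategy code:

```python
import statistics

current_loads = [0.9, 0.1, 0.5]            # per-host weighted load
min_sd = statistics.pstdev(current_loads)  # baseline: today's imbalance

def improves_balance(candidate_loads):
    # accept a simulated migration only if it lowers the weighted sd
    # below the current baseline
    return statistics.pstdev(candidate_loads) < min_sd

balanced = improves_balance([0.5, 0.5, 0.5])  # reduces the sd
worse = improves_balance([1.0, 0.0, 0.5])     # increases the sd
```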
Zuul
1c2d0aa1f2 Merge "Updated from global requirements" 2017-12-27 10:00:01 +00:00
licanwei
070aed7076 correct audit parameter typo
Change-Id: Id98294a093ac9a704791cdbf52046ce1377f1796
2017-12-25 23:52:43 -08:00
Zuul
2b402d3cbf Merge "Fix watcher audit list command" 2017-12-26 04:49:49 +00:00
Zuul
cca3e75ac1 Merge "Add Datasource Abstraction" 2017-12-26 03:02:36 +00:00
OpenStack Proposal Bot
6f27275f44 Updated from global requirements
Change-Id: I26c1f4be398496b88b69094ec1804b07f7c1d7f1
2017-12-23 10:18:41 +00:00
Alexander Chadin
95548af426 Fix watcher audit list command
This patch set adds a data migration version that fills unnamed audits
with a name of the form strategy.name + '-' + audit.created_at.

Closes-Bug: #1738758
Change-Id: I1d65b3110166e9f64ce5b80a34672d24d629807d
2017-12-22 08:43:28 +00:00
licanwei
cdc847d352 check actionplan state when deleting actionplan
If actionplan is 'ONGOING' or 'PENDING',
don't delete it.

Change-Id: I8bfa31a70bba0a7adb1bfd09fc22e6a66b9ebf3a
Closes-Bug: #1738360
2017-12-21 22:32:09 -08:00
Zuul
b69244f8ef Merge "TrivialFix: remove redundant import alias" 2017-12-21 15:43:42 +00:00
Dao Cong Tien
cbd6d88025 TrivialFix: remove redundant import alias
Change-Id: Idf53683def6588e626144ecc3b74033d57ab3f87
2017-12-21 20:09:07 +07:00
Zuul
028d7c939c Merge "check audit state when deleting audit" 2017-12-20 09:04:02 +00:00
licanwei
a8fa969379 check audit state when deleting audit
If audit is 'ONGOING' or 'PENDING', don't delete audit.

Change-Id: Iac714e7e78e7bb5b52f401e5b2ad0e1d8af8bb45
Closes-Bug: #1738358
2017-12-19 18:09:42 -08:00
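The guard described in this commit (and the matching action plan one above) can be sketched as a state check before deletion. The state names come from the commit; the helper itself is illustrative, not Watcher's real API layer:

```python
ACTIVE_STATES = ('ONGOING', 'PENDING')

def delete_audit(audit):
    # refuse to delete audits that are still running or queued
    if audit['state'] in ACTIVE_STATES:
        raise RuntimeError('cannot delete audit in state %s'
                          % audit['state'])
    audit['state'] = 'DELETED'
    return audit
```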
licanwei
80ee4b29f5 reset job interval when audit was updated
When we update an existing audit's interval, the interval of the
'execute_audit' job still has the old value.
We need to update the interval of the 'execute_audit' job as well.

Change-Id: I402efaa6b2fd3a454717c3df9746c827927ffa91
Closes-Bug: #1738140
2017-12-19 17:57:37 -08:00
Zuul
e562c9173c Merge "Updated from global requirements" 2017-12-19 16:38:39 +00:00
OpenStack Proposal Bot
ec0c359037 Updated from global requirements
Change-Id: I96d4a5a7e2b05df3f06d7c08f64cd9bcf83ff99b
2017-12-19 01:52:42 +00:00
Andreas Jaeger
3b6bef180b Fix releasenotes build
Remove a stray import of watcher project that breaks releasenotes build.

Change-Id: I4d107449b88adb19a3f269b2f33221addef0d9d6
2017-12-18 15:39:25 +01:00
Zuul
640e4e1fea Merge "Update getting scoped storage CDM" 2017-12-18 14:31:39 +00:00
Zuul
eeb817cd6e Merge "listen to 'compute.instance.rebuild.end' event" 2017-12-18 13:12:26 +00:00
Hidekazu Nakamura
c6afa7c320 Update getting scoped storage CDM
Now that CDM scoping is implemented, retrieval of the scoped storage
model has to be updated.
This patch updates how the storage cluster data model is fetched.

Change-Id: Iefc22b54995aa8d2f3a7b3698575f6eb800d4289
2017-12-16 15:20:58 +00:00
OpenStack Proposal Bot
9ccd17e40b Updated from global requirements
Change-Id: I0af2c9fd266f925af5e3e8731b37a00dab91d6a8
2017-12-15 22:24:15 +00:00
Zuul
2a7e0d652c Merge "'get_volume_type_by_backendname' returns a list" 2017-12-14 06:18:04 +00:00
Zuul
a94e35b60e Merge "Fix 'unable to exclude instance'" 2017-12-14 05:38:34 +00:00
Zuul
72e3d5c7f9 Merge "Add and identify excluded instances in compute CDM" 2017-12-13 13:34:33 +00:00
aditi
be56441e55 Fix 'unable to exclude instance'
Change-Id: I1599a86a2ba7d3af755fb1412a5e38516c736957
Closes-Bug: #1736129
2017-12-12 10:29:35 +00:00
Zuul
aa2b213a45 Merge "Register default policies in code" 2017-12-12 03:38:13 +00:00
Zuul
668513d771 Merge "Updated from global requirements" 2017-12-12 02:57:47 +00:00
Lance Bragstad
0242d33adb Register default policies in code
This commit registers all policies formally kept in policy.json as
defaults in code. This is an effort to make policy management easier
for operators. More information on this initiative can be found
below:

  https://governance.openstack.org/tc/goals/queens/policy-in-code.html

bp policy-and-docs-in-code

Change-Id: Ibab08f8e1c95b86e08737c67a39c293566dbabc7
2017-12-11 15:19:10 +03:00
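The policy-in-code idea registers defaults in the code base so an operator's policy file only needs to contain overrides. A hedged sketch of that merge (the rule strings are illustrative, not Watcher's real policies, and the real implementation uses oslo.policy's registration API rather than plain dicts):

```python
# in-code defaults, formerly kept in policy.json
DEFAULT_POLICIES = {
    'audit:create': 'role:admin',
    'audit:get': 'rule:admin_or_owner',
    'audit:delete': 'role:admin',
}

def effective_policy(operator_overrides):
    # start from the registered defaults, then apply any overrides
    # the operator shipped in a policy file
    merged = dict(DEFAULT_POLICIES)
    merged.update(operator_overrides)
    return merged
```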
suzhengwei
c38dc9828b listen to 'compute.instance.rebuild.end' event
In an integrated cloud environment, many systems may relocate compute
resources. Watcher should listen to all notifications that represent
compute resource changes in order to update the compute CDM. Otherwise
the compute CDM becomes stale and Watcher cannot work reliably.

Change-Id: I793131dd8f24f1ac5f5a6a070bb4fe7980c8dfb2
Implements:blueprint listen-all-necessary-notifications
2017-12-08 16:18:35 +08:00
OpenStack Proposal Bot
4ce1a9096b Updated from global requirements
Change-Id: I04a2a04de3b32570bb0afaf9eb736976e888a031
2017-12-07 13:53:09 +00:00
Yumeng_Bao
02163d64aa bug fix remove volume migration type 'cold'
Migration action 'cold' is not intuitive for developers and users,
so this patch replaces it with 'migrate' and 'retype'.

Change-Id: I58acac741499f47e79630a6031d44088681e038a
Closes-Bug: #1733247
2017-12-06 18:03:25 +08:00
suzhengwei
d91f0bff22 Add and identify excluded instances in compute CDM
Change-Id: If03893c5e9b6a37e1126ad91e4f3bfafe0f101d9
Implements:blueprint compute-cdm-include-all-instances
2017-12-06 17:43:42 +08:00
aditi
e401cb7c9d Add Datasource Abstraction
This patch set adds a datasource abstraction layer.

Change-Id: Id828e427b998aa34efa07e04e615c82c5730d3c9
Partially-Implements: blueprint watcher-multi-datasource
2017-12-05 17:33:04 +03:00
licanwei
fa31341bbb 'get_volume_type_by_backendname' returns a list
A storage pool can have many volume types, so
'get_volume_type_by_backendname' should return a list of types.

Closes-Bug: #1733257
Change-Id: I877d5886259e482089ed0f9944d97bb99f375824
2017-11-26 23:28:56 -08:00
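The change can be sketched by mapping a backend name to all matching volume types rather than just the first. The data and helper below are illustrative stand-ins for cinder's volume types and Watcher's cinder helper:

```python
# toy volume types, each tagged with its backend via extra_specs
volume_types = [
    {'name': 'fast', 'extra_specs': {'volume_backend_name': 'pool1'}},
    {'name': 'slow', 'extra_specs': {'volume_backend_name': 'pool1'}},
    {'name': 'cheap', 'extra_specs': {'volume_backend_name': 'pool2'}},
]

def get_volume_type_by_backendname(backend):
    # return ALL matching types, since one backend can serve several
    return [vt['name'] for vt in volume_types
            if vt['extra_specs'].get('volume_backend_name') == backend]
```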
151 changed files with 8420 additions and 1449 deletions

View File

@@ -1,4 +1,5 @@
 [gerrit]
-host=review.openstack.org
+host=review.opendev.org
 port=29418
 project=openstack/watcher.git
+defaultbranch=stable/queens

View File

@@ -1,10 +1,16 @@
 - project:
-    name: openstack/watcher
+    templates:
+      - openstack-python-jobs
+      - openstack-python35-jobs
+      - publish-openstack-sphinx-docs
+      - check-requirements
+      - release-notes-jobs
     check:
       jobs:
         - watcher-tempest-multinode
-        - legacy-rally-dsvm-watcher-rally
     gate:
       queue: watcher

 - job:
     name: watcher-tempest-base-multinode
     parent: legacy-dsvm-base-multinode
@@ -12,7 +18,7 @@
     post-run: playbooks/legacy/watcher-tempest-base-multinode/post.yaml
     timeout: 4200
     required-projects:
-      - openstack-infra/devstack-gate
+      - openstack/devstack-gate
       - openstack/python-openstackclient
       - openstack/python-watcherclient
       - openstack/watcher
@@ -32,8 +38,8 @@
     post-run: playbooks/legacy/watcherclient-tempest-functional/post.yaml
     timeout: 4200
     required-projects:
-      - openstack-dev/devstack
-      - openstack-infra/devstack-gate
+      - openstack/devstack
+      - openstack/devstack-gate
       - openstack/python-openstackclient
       - openstack/python-watcherclient
       - openstack/watcher

View File

@@ -42,7 +42,7 @@ WATCHER_AUTH_CACHE_DIR=${WATCHER_AUTH_CACHE_DIR:-/var/cache/watcher}
 WATCHER_CONF_DIR=/etc/watcher
 WATCHER_CONF=$WATCHER_CONF_DIR/watcher.conf
-WATCHER_POLICY_JSON=$WATCHER_CONF_DIR/policy.json
+WATCHER_POLICY_YAML=$WATCHER_CONF_DIR/policy.yaml.sample
 WATCHER_DEVSTACK_DIR=$WATCHER_DIR/devstack
 WATCHER_DEVSTACK_FILES_DIR=$WATCHER_DEVSTACK_DIR/files
@@ -106,7 +106,25 @@ function configure_watcher {
     # Put config files in ``/etc/watcher`` for everyone to find
     sudo install -d -o $STACK_USER $WATCHER_CONF_DIR

-    install_default_policy watcher
+    local project=watcher
+    local project_uc
+    project_uc=$(echo watcher|tr a-z A-Z)
+    local conf_dir="${project_uc}_CONF_DIR"
+    # eval conf dir to get the variable
+    conf_dir="${!conf_dir}"
+    local project_dir="${project_uc}_DIR"
+    # eval project dir to get the variable
+    project_dir="${!project_dir}"
+    local sample_conf_dir="${project_dir}/etc/${project}"
+    local sample_policy_dir="${project_dir}/etc/${project}/policy.d"
+    local sample_policy_generator="${project_dir}/etc/${project}/oslo-policy-generator/watcher-policy-generator.conf"
+    # first generate policy.yaml
+    oslopolicy-sample-generator --config-file $sample_policy_generator
+    # then optionally copy over policy.d
+    if [[ -d $sample_policy_dir ]]; then
+        cp -r $sample_policy_dir $conf_dir/policy.d
+    fi

     # Rebuild the config file from scratch
     create_watcher_conf
@@ -163,7 +181,7 @@ function create_watcher_conf {
     iniset $WATCHER_CONF api host "$WATCHER_SERVICE_HOST"
     iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT"

-    iniset $WATCHER_CONF oslo_policy policy_file $WATCHER_POLICY_JSON
+    iniset $WATCHER_CONF oslo_policy policy_file $WATCHER_POLICY_YAML

     iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_userid $RABBIT_USERID
     iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
@@ -296,6 +314,7 @@ function start_watcher {
 function stop_watcher {
     if [[ "$WATCHER_USE_MOD_WSGI" == "True" ]]; then
         disable_apache_site watcher-api
+        restart_apache_server
     else
         stop_process watcher-api
     fi

View File

@@ -35,7 +35,7 @@ VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
 NOVA_INSTANCES_PATH=/opt/stack/data/instances

 # Enable the Ceilometer plugin for the compute agent
-enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
+enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
 disable_service ceilometer-acentral,ceilometer-collector,ceilometer-api

 LOGFILE=$DEST/logs/stack.sh.log

View File

@@ -25,13 +25,13 @@ MULTI_HOST=1
 disable_service n-cpu

 # Enable the Watcher Dashboard plugin
-enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard
+enable_plugin watcher-dashboard https://git.openstack.org/openstack/watcher-dashboard

 # Enable the Watcher plugin
-enable_plugin watcher git://git.openstack.org/openstack/watcher
+enable_plugin watcher https://git.openstack.org/openstack/watcher

 # Enable the Ceilometer plugin
-enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
+enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer

 # This is the controller node, so disable the ceilometer compute agent
 disable_service ceilometer-acompute

View File

@@ -3,6 +3,9 @@
 # Make sure rabbit is enabled
 enable_service rabbit

+# Make sure mysql is enabled
+enable_service mysql
+
 # Enable Watcher services
 enable_service watcher-api
 enable_service watcher-decision-engine

View File

@@ -1,3 +1,7 @@
+==================================================
+OpenStack Infrastructure Optimization Service APIs
+==================================================
+
 .. toctree::
    :maxdepth: 1

View File

@@ -165,7 +165,7 @@ You can easily generate and update a sample configuration file
 named :ref:`watcher.conf.sample <watcher_sample_configuration_files>` by using
 these following commands::

-    $ git clone git://git.openstack.org/openstack/watcher
+    $ git clone https://git.openstack.org/openstack/watcher
     $ cd watcher/
     $ tox -e genconfig
     $ vi etc/watcher/watcher.conf.sample
@@ -200,8 +200,8 @@ configuration file, in order:
 Although some configuration options are mentioned here, it is recommended that
-you review all the `available options
-<https://git.openstack.org/cgit/openstack/watcher/tree/etc/watcher/watcher.conf.sample>`_
+you review all the :ref:`available options
+<watcher_sample_configuration_files>`
 so that the watcher service is configured for your needs.

 #. The Watcher Service stores information in a database. This guide uses the
@@ -391,7 +391,7 @@ Ceilometer is designed to collect measurements from OpenStack services and from
 other external components. If you would like to add new meters to the currently
 existing ones, you need to follow the documentation below:

-#. https://docs.openstack.org/ceilometer/latest/contributor/new_meters.html#meters
+#. https://docs.openstack.org/ceilometer/latest/contributor/measurements.html#new-measurements

 The Ceilometer collector uses a pluggable storage system, meaning that you can
 pick any database system you prefer.

View File

@@ -19,7 +19,7 @@ model. To enable the Watcher plugin with DevStack, add the following to the
 `[[local|localrc]]` section of your controller's `local.conf` to enable the
 Watcher plugin::

-    enable_plugin watcher git://git.openstack.org/openstack/watcher
+    enable_plugin watcher https://git.openstack.org/openstack/watcher

 For more detailed instructions, see `Detailed DevStack Instructions`_. Check
 out the `DevStack documentation`_ for more information regarding DevStack.


@@ -263,7 +263,7 @@ requires new metrics not covered by Ceilometer, you can add them through a
`Ceilometer plugin`_.
.. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/decision_engine/cluster/history/ceilometer.py
.. _`Helper`: https://github.com/openstack/watcher/blob/master/watcher/datasource/ceilometer.py
.. _`Ceilometer developer guide`: https://docs.openstack.org/ceilometer/latest/contributor/architecture.html#storing-accessing-the-data
.. _`Ceilometer`: https://docs.openstack.org/ceilometer/latest
.. _`Monasca`: https://github.com/openstack/monasca-api/blob/master/docs/monasca-api-spec.md


@@ -267,7 +267,7 @@ the same goal and same workload of the :ref:`Cluster <cluster_definition>`.
Project
=======
:ref:`Projects <project_definition>` represent the base unit of ownership
:ref:`Projects <project_definition>` represent the base unit of "ownership"
in OpenStack, in that all :ref:`resources <managed_resource_definition>` in
OpenStack should be owned by a specific :ref:`project <project_definition>`.
In OpenStack Identity, a :ref:`project <project_definition>` must be owned by a


@@ -26,7 +26,7 @@
[keystone_authtoken]
...
auth_uri = http://controller:5000
www_authenticate_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password


@@ -10,7 +10,7 @@ Infrastructure Optimization service
verify.rst
next-steps.rst
The Infrastructure Optimization service (watcher) provides
The Infrastructure Optimization service (Watcher) provides
a flexible and scalable resource optimization service for
multi-tenant OpenStack-based clouds.
@@ -21,7 +21,7 @@ applier. This provides a robust framework to realize a wide
range of cloud optimization goals, including the reduction
of data center operating costs, increased system performance
via intelligent virtual machine migration, increased energy
efficiencyand more!
efficiency and more!
Watcher also supports a pluggable architecture by which custom
optimization algorithms, data metrics and data profilers can be
@@ -36,4 +36,4 @@ https://docs.openstack.org/watcher/latest/glossary.html
This chapter assumes a working setup of OpenStack following the
`OpenStack Installation Tutorial
<https://docs.openstack.org/pike/install/>`_.
<https://docs.openstack.org/queens/install/>`_.


@@ -6,4 +6,4 @@ Next steps
Your OpenStack environment now includes the watcher service.
To add additional services, see
https://docs.openstack.org/pike/install/.
https://docs.openstack.org/queens/install/.


@@ -0,0 +1,86 @@
=============
Actuator
=============
Synopsis
--------
**display name**: ``Actuator``
**goal**: ``unclassified``
.. watcher-term:: watcher.decision_engine.strategy.strategies.actuation
Requirements
------------
Metrics
*******
None
Cluster data model
******************
None
Actions
*******
Watcher's default actions.
Planner
*******
Watcher's default planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
==================== ====== ===================== =============================
parameter type default Value description
==================== ====== ===================== =============================
``actions`` array None Actions to be executed.
==================== ====== ===================== =============================
The elements of actions array are:
==================== ====== ===================== =============================
parameter type default Value description
==================== ====== ===================== =============================
``action_type`` string None Action name defined in
setup.cfg (mandatory)
``resource_id`` string None Resource_id of the action.
``input_parameters`` object None Input_parameters of the
action (mandatory).
==================== ====== ===================== =============================
Efficacy Indicator
------------------
None
Algorithm
---------
This strategy creates an action plan from a predefined set of actions.
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 unclassified --strategy actuator
$ openstack optimize audit create -a at1 \
-p actions='[{"action_type": "migrate", "resource_id": "56a40802-6fde-4b59-957c-c84baec7eaed", "input_parameters": {"migration_type": "live", "source_node": "s01"}}]'
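Per the parameter tables above, ``action_type`` and ``input_parameters`` are mandatory for every element of ``actions``, while ``resource_id`` is optional. A minimal sketch of assembling that payload in Python — the ``build_actions_param`` helper is hypothetical, not part of Watcher:

```python
import json

def build_actions_param(actions):
    """Hypothetical helper: serialize the actuator "actions" parameter,
    checking the mandatory keys listed in the tables above."""
    for action in actions:
        for key in ("action_type", "input_parameters"):
            if key not in action:
                raise ValueError("%s is mandatory" % key)
    return json.dumps(actions)

# Same payload as the audit create example above.
payload = build_actions_param([{
    "action_type": "migrate",
    "resource_id": "56a40802-6fde-4b59-957c-c84baec7eaed",
    "input_parameters": {"migration_type": "live", "source_node": "s01"},
}])
```

The resulting string is what would be passed as ``-p actions='...'`` on the audit create command.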
External Links
--------------
None


@@ -0,0 +1,154 @@
==============
Zone migration
==============
Synopsis
--------
**display name**: ``Zone migration``
**goal**: ``hardware_maintenance``
.. watcher-term:: watcher.decision_engine.strategy.strategies.zone_migration
Requirements
------------
Metrics
*******
None
Cluster data model
******************
Watcher's default Compute cluster data model:
.. watcher-term:: watcher.decision_engine.model.collector.nova.NovaClusterDataModelCollector
Storage cluster data model is also required:
.. watcher-term:: watcher.decision_engine.model.collector.cinder.CinderClusterDataModelCollector
Actions
*******
Watcher's default actions:
.. list-table::
:widths: 30 30
:header-rows: 1
* - action
- description
* - ``migrate``
- .. watcher-term:: watcher.applier.actions.migration.Migrate
* - ``volume_migrate``
- .. watcher-term:: watcher.applier.actions.volume_migration.VolumeMigrate
Planner
*******
Watcher's default planner:
.. watcher-term:: watcher.decision_engine.planner.weight.WeightPlanner
Configuration
-------------
Strategy parameters are:
======================== ======== ============= ==============================
parameter type default Value description
======================== ======== ============= ==============================
``compute_nodes`` array None Compute nodes to migrate.
``storage_pools`` array None Storage pools to migrate.
``parallel_total`` integer 6 The number of actions to be
run in parallel in total.
``parallel_per_node`` integer 2 The number of actions to be
run in parallel per compute
node.
``parallel_per_pool`` integer 2 The number of actions to be
run in parallel per storage
pool.
``priority`` object None Priority list for instances
and volumes.
``with_attached_volume`` boolean False False: Instances will migrate
after all volumes migrate.
True: An instance will migrate
after the attached volumes
migrate.
======================== ======== ============= ==============================
The elements of compute_nodes array are:
============= ======= =============== =============================
parameter type default Value description
============= ======= =============== =============================
``src_node`` string None Compute node from which
instances migrate (mandatory).
``dst_node`` string None Compute node to which
instances migrate.
============= ======= =============== =============================
The elements of storage_pools array are:
============= ======= =============== ==============================
parameter type default Value description
============= ======= =============== ==============================
``src_pool`` string None Storage pool from which
volumes migrate (mandatory).
``dst_pool`` string None Storage pool to which
volumes migrate.
``src_type`` string None Source volume type (mandatory).
``dst_type`` string None Destination volume type
(mandatory).
============= ======= =============== ==============================
The elements of priority object are:
================ ======= =============== ======================
parameter type default Value description
================ ======= =============== ======================
``project`` array None Project names.
``compute_node`` array None Compute node names.
``storage_pool`` array None Storage pool names.
``compute`` enum None Instance attributes.
|compute|
``storage`` enum None Volume attributes.
|storage|
================ ======= =============== ======================
.. |compute| replace:: ["vcpu_num", "mem_size", "disk_size", "created_at"]
.. |storage| replace:: ["size", "created_at"]
Efficacy Indicator
------------------
.. watcher-func::
:format: literal_block
watcher.decision_engine.goal.efficacy.specs.HardwareMaintenance.get_global_efficacy_indicator
Algorithm
---------
For more information on the zone migration strategy, please refer
to: http://specs.openstack.org/openstack/watcher-specs/specs/queens/implemented/zone-migration-strategy.html
How to use it?
---------------
.. code-block:: shell
$ openstack optimize audittemplate create \
at1 hardware_maintenance --strategy zone_migration
$ openstack optimize audit create -a at1 \
-p compute_nodes='[{"src_node": "s01", "dst_node": "d01"}]'
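The mandatory fields from the tables above (``src_node`` per compute_nodes element; ``src_pool``, ``src_type`` and ``dst_type`` per storage_pools element) can be checked before submitting the audit. A minimal sketch — the ``validate_zone_migration`` helper is hypothetical rather than Watcher code:

```python
def validate_zone_migration(params):
    """Hypothetical check of zone migration strategy parameters against
    the mandatory fields listed in the tables above."""
    for node in params.get("compute_nodes", []):
        if "src_node" not in node:
            raise ValueError("src_node is mandatory")
    for pool in params.get("storage_pools", []):
        for key in ("src_pool", "src_type", "dst_type"):
            if key not in pool:
                raise ValueError("%s is mandatory" % key)
    return params

# Mirrors the audit create example above.
validate_zone_migration({"compute_nodes": [{"src_node": "s01", "dst_node": "d01"}]})
```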
External Links
--------------
None


@@ -0,0 +1,3 @@
[DEFAULT]
output_file = /etc/watcher/policy.yaml.sample
namespace = watcher


@@ -1,45 +0,0 @@
{
"admin_api": "role:admin or role:administrator",
"show_password": "!",
"default": "rule:admin_api",
"action:detail": "rule:default",
"action:get": "rule:default",
"action:get_all": "rule:default",
"action_plan:delete": "rule:default",
"action_plan:detail": "rule:default",
"action_plan:get": "rule:default",
"action_plan:get_all": "rule:default",
"action_plan:update": "rule:default",
"audit:create": "rule:default",
"audit:delete": "rule:default",
"audit:detail": "rule:default",
"audit:get": "rule:default",
"audit:get_all": "rule:default",
"audit:update": "rule:default",
"audit_template:create": "rule:default",
"audit_template:delete": "rule:default",
"audit_template:detail": "rule:default",
"audit_template:get": "rule:default",
"audit_template:get_all": "rule:default",
"audit_template:update": "rule:default",
"goal:detail": "rule:default",
"goal:get": "rule:default",
"goal:get_all": "rule:default",
"scoring_engine:detail": "rule:default",
"scoring_engine:get": "rule:default",
"scoring_engine:get_all": "rule:default",
"strategy:detail": "rule:default",
"strategy:get": "rule:default",
"strategy:get_all": "rule:default",
"service:detail": "rule:default",
"service:get": "rule:default",
"service:get_all": "rule:default"
}


@@ -13,12 +13,12 @@
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack-infra/devstack-gate
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
git://git.openstack.org \
openstack-infra/devstack-gate
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
@@ -30,9 +30,9 @@
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
TEMPEST_PLUGINS='/opt/stack/new/watcher-tempest-plugin'
enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
enable_plugin ceilometer https://opendev.org/openstack/ceilometer
# Enable watcher devstack plugin.
enable_plugin watcher git://git.openstack.org/openstack/watcher
enable_plugin watcher https://opendev.org/openstack/watcher
EOF
executable: /bin/bash


@@ -13,12 +13,12 @@
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack-infra/devstack-gate
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
git://git.openstack.org \
openstack-infra/devstack-gate
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
@@ -29,7 +29,7 @@
set -x
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
enable_plugin watcher git://git.openstack.org/openstack/watcher
enable_plugin watcher https://opendev.org/openstack/watcher
EOF
executable: /bin/bash


@@ -0,0 +1,5 @@
---
features:
- |
Adds an audit scoper for the storage data model; Watcher users can now
specify an audit scope for the storage CDM in the same manner as the
compute scope.


@@ -0,0 +1,4 @@
---
features:
- |
Adds a baremetal data model to Watcher


@@ -0,0 +1,6 @@
---
features:
- Added a way to check the state of a strategy before an audit's execution.
Administrators can use the "watcher strategy state <strategy_name>" command
to get information about metric availability, datasource availability
and CDM availability.


@@ -0,0 +1,4 @@
---
features:
- |
Added storage capacity balance strategy.


@@ -0,0 +1,6 @@
---
features:
- |
Added the "Zone migration" strategy and its goal "Hardware maintenance".
The strategy automatically migrates many instances and volumes efficiently
with minimal downtime.


@@ -24,7 +24,6 @@
import os
import sys
from watcher import version as watcher_version
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the


@@ -1,207 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
# Andi Chandler <andi@gowling.com>, 2017. #zanata
msgid ""
msgstr ""
"Project-Id-Version: watcher 1.4.1.dev113\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-10-23 04:03+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2017-10-21 06:22+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "0.29.0"
msgstr "0.29.0"
msgid "0.33.0"
msgstr "0.33.0"
msgid "0.34.0"
msgstr "0.34.0"
msgid "1.0.0"
msgstr "1.0.0"
msgid "1.1.0"
msgstr "1.1.0"
msgid "1.3.0"
msgstr "1.3.0"
msgid "1.4.0"
msgstr "1.4.0"
msgid "1.4.1"
msgstr "1.4.1"
msgid "Add a service supervisor to watch Watcher deamons."
msgstr "Add a service supervisor to watch Watcher daemons."
msgid "Add action for compute node power on/off"
msgstr "Add action for compute node power on/off"
msgid ""
"Add description property for dynamic action. Admin can see detail "
"information of any specify action."
msgstr ""
"Add description property for dynamic action. Admin can see detail "
"information of any specify action."
msgid "Add notifications related to Action object."
msgstr "Add notifications related to Action object."
msgid "Add notifications related to Action plan object."
msgstr "Add notifications related to Action plan object."
msgid "Add notifications related to Audit object."
msgstr "Add notifications related to Audit object."
msgid "Add notifications related to Service object."
msgstr "Add notifications related to Service object."
msgid ""
"Add superseded state for an action plan if the cluster data model has "
"changed after it has been created."
msgstr ""
"Add superseded state for an action plan if the cluster data model has "
"changed after it has been created."
msgid "Added SUSPENDED audit state"
msgstr "Added SUSPENDED audit state"
msgid ""
"Added a generic scoring engine module, which will standardize interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgstr ""
"Added a generic scoring engine module, which will standardize interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgid ""
"Added a generic scoring engine module, which will standarize interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgstr ""
"Added a generic scoring engine module, which will standardise interactions "
"with scoring engines through the common API. It is possible to use the "
"scoring engine by different Strategies, which improve the code and data "
"model re-use."
msgid ""
"Added a new strategy based on the airflow of servers. This strategy makes "
"decisions to migrate VMs to make the airflow uniform."
msgstr ""
"Added a new strategy based on the airflow of servers. This strategy makes "
"decisions to migrate VMs to make the airflow uniform."
msgid ""
"Added a standard way to both declare and fetch configuration options so that "
"whenever the administrator generates the Watcher configuration sample file, "
"it contains the configuration options of the plugins that are currently "
"available."
msgstr ""
"Added a standard way to both declare and fetch configuration options so that "
"whenever the administrator generates the Watcher configuration sample file, "
"it contains the configuration options of the plugins that are currently "
"available."
msgid ""
"Added a strategy based on the VM workloads of hypervisors. This strategy "
"makes decisions to migrate workloads to make the total VM workloads of each "
"hypervisor balanced, when the total VM workloads of hypervisor reaches "
"threshold."
msgstr ""
"Added a strategy based on the VM workloads of hypervisors. This strategy "
"makes decisions to migrate workloads to make the total VM workloads of each "
"hypervisor balanced, when the total VM workloads of hypervisor reaches "
"threshold."
msgid ""
"Added a strategy that monitors if there is a higher load on some hosts "
"compared to other hosts in the cluster and re-balances the work across hosts "
"to minimize the standard deviation of the loads in the cluster."
msgstr ""
"Added a strategy that monitors if there is a higher load on some hosts "
"compared to other hosts in the cluster and re-balances the work across hosts "
"to minimise the standard deviation of the loads in the cluster."
msgid ""
"Added a way to add a new action without having to amend the source code of "
"the default planner."
msgstr ""
"Added a way to add a new action without having to amend the source code of "
"the default planner."
msgid ""
"Added a way to compare the efficacy of different strategies for a give "
"optimization goal."
msgstr ""
"Added a way to compare the efficacy of different strategies for a give "
"optimisation goal."
msgid ""
"Added a way to create periodic audit to be able to optimize continuously the "
"cloud infrastructure."
msgstr ""
"Added a way to create periodic audit to be able to continuously optimise the "
"cloud infrastructure."
msgid ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decision engine is running."
msgstr ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decision engine is running."
msgid ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decison engine is running."
msgstr ""
"Added a way to return the of available goals depending on which strategies "
"have been deployed on the node where the decision engine is running."
msgid ""
"Added an in-memory cache of the cluster model built up and kept fresh via "
"notifications from services of interest in addition to periodic syncing "
"logic."
msgstr ""
"Added an in-memory cache of the cluster model built up and kept fresh via "
"notifications from services of interest in addition to periodic syncing "
"logic."
msgid ""
"Added binding between apscheduler job and Watcher decision engine service. "
"It will allow to provide HA support in the future."
msgstr ""
"Added binding between apscheduler job and Watcher decision engine service. "
"It will allow to provide HA support in the future."
msgid "Added cinder cluster data model"
msgstr "Added cinder cluster data model"
msgid ""
"Added gnocchi support as data source for metrics. Administrator can change "
"data source for each strategy using config file."
msgstr ""
"Added Gnocchi support as data source for metrics. Administrator can change "
"data source for each strategy using config file."
msgid "Added policies to handle user rights to access Watcher API."
msgstr "Added policies to handle user rights to access Watcher API."
#, fuzzy
msgid "Contents:"
msgstr "Contents:"
#, fuzzy
msgid "Current Series Release Notes"
msgstr "Current Series Release Notes"


@@ -1,33 +0,0 @@
# Gérald LONLAS <g.lonlas@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: watcher 1.0.1.dev51\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-03-21 11:57+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-10-22 06:44+0000\n"
"Last-Translator: Gérald LONLAS <g.lonlas@gmail.com>\n"
"Language-Team: French\n"
"Language: fr\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n > 1)\n"
msgid "0.29.0"
msgstr "0.29.0"
msgid "Contents:"
msgstr "Contenu :"
msgid "Current Series Release Notes"
msgstr "Note de la release actuelle"
msgid "New Features"
msgstr "Nouvelles fonctionnalités"
msgid "Newton Series Release Notes"
msgstr "Note de release pour Newton"
msgid "Welcome to watcher's Release Notes documentation!"
msgstr "Bienvenue dans la documentation de la note de Release de Watcher"


@@ -10,20 +10,20 @@ jsonschema<3.0.0,>=2.6.0 # MIT
keystonemiddleware>=4.17.0 # Apache-2.0
lxml!=3.7.0,>=3.4.1 # BSD
croniter>=0.3.4 # MIT License
oslo.concurrency>=3.20.0 # Apache-2.0
oslo.concurrency>=3.25.0 # Apache-2.0
oslo.cache>=1.26.0 # Apache-2.0
oslo.config>=5.1.0 # Apache-2.0
oslo.context>=2.19.2 # Apache-2.0
oslo.db>=4.27.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.30.0 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.policy>=1.23.0 # Apache-2.0
oslo.policy>=1.30.0 # Apache-2.0
oslo.reports>=1.18.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service>=1.24.0 # Apache-2.0
oslo.utils>=3.31.0 # Apache-2.0
oslo.versionedobjects>=1.28.0 # Apache-2.0
oslo.service!=1.28.1,>=1.24.0 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
oslo.versionedobjects>=1.31.2 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
@@ -31,18 +31,18 @@ PrettyTable<0.8,>=0.7.1 # BSD
voluptuous>=0.8.9 # BSD License
gnocchiclient>=3.3.1 # Apache-2.0
python-ceilometerclient>=2.5.0 # Apache-2.0
python-cinderclient>=3.2.0 # Apache-2.0
python-cinderclient>=3.3.0 # Apache-2.0
python-glanceclient>=2.8.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-monascaclient>=1.7.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
python-openstackclient>=3.12.0 # Apache-2.0
python-ironicclient>=1.14.0 # Apache-2.0
python-ironicclient>=2.2.0 # Apache-2.0
six>=1.10.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0
taskflow>=2.7.0 # Apache-2.0
taskflow>=2.16.0 # Apache-2.0
WebOb>=1.7.1 # MIT
WSME>=0.8.0 # MIT
networkx<2.0,>=1.10 # BSD


@@ -32,6 +32,12 @@ setup-hooks =
oslo.config.opts =
watcher = watcher.conf.opts:list_opts
oslo.policy.policies =
watcher = watcher.common.policies:list_rules
oslo.policy.enforcer =
watcher = watcher.common.policy:get_enforcer
console_scripts =
watcher-api = watcher.cmd.api:main
watcher-db-manage = watcher.cmd.dbmanage:main
@@ -51,6 +57,7 @@ watcher_goals =
airflow_optimization = watcher.decision_engine.goal.goals:AirflowOptimization
noisy_neighbor = watcher.decision_engine.goal.goals:NoisyNeighborOptimization
saving_energy = watcher.decision_engine.goal.goals:SavingEnergy
hardware_maintenance = watcher.decision_engine.goal.goals:HardwareMaintenance
watcher_scoring_engines =
dummy_scorer = watcher.decision_engine.scoring.dummy_scorer:DummyScorer
@@ -71,6 +78,8 @@ watcher_strategies =
workload_balance = watcher.decision_engine.strategy.strategies.workload_balance:WorkloadBalance
uniform_airflow = watcher.decision_engine.strategy.strategies.uniform_airflow:UniformAirflow
noisy_neighbor = watcher.decision_engine.strategy.strategies.noisy_neighbor:NoisyNeighbor
storage_capacity_balance = watcher.decision_engine.strategy.strategies.storage_capacity_balance:StorageCapacityBalance
zone_migration = watcher.decision_engine.strategy.strategies.zone_migration:ZoneMigration
watcher_actions =
migrate = watcher.applier.actions.migration:Migrate
@@ -91,6 +100,7 @@ watcher_planners =
watcher_cluster_data_model_collectors =
compute = watcher.decision_engine.model.collector.nova:NovaClusterDataModelCollector
storage = watcher.decision_engine.model.collector.cinder:CinderClusterDataModelCollector
baremetal = watcher.decision_engine.model.collector.ironic:BaremetalClusterDataModelCollector
[pbr]


@@ -7,15 +7,15 @@ doc8>=0.6.0 # Apache-2.0
freezegun>=0.3.6 # Apache-2.0
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
mock>=2.0.0 # BSD
oslotest>=1.10.0 # Apache-2.0
oslotest>=3.2.0 # Apache-2.0
os-testr>=1.0.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=2.2.0 # MIT
# Doc requirements
openstackdocstheme>=1.17.0 # Apache-2.0
sphinx>=1.6.2 # BSD
openstackdocstheme>=1.18.1 # Apache-2.0
sphinx!=1.6.6,>=1.6.2 # BSD
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0


@@ -7,7 +7,7 @@ skipsdist = True
usedevelop = True
whitelist_externals = find
rm
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
@@ -46,12 +46,16 @@ sitepackages = False
commands =
oslo-config-generator --config-file etc/watcher/oslo-config-generator/watcher.conf
[testenv:genpolicy]
commands =
oslopolicy-sample-generator --config-file etc/watcher/oslo-policy-generator/watcher-policy-generator.conf
[flake8]
filename = *.py,app.wsgi
show-source=True
ignore= H105,E123,E226,N320,H202
builtins= _
enable-extensions = H106,H203
enable-extensions = H106,H203,H904
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build,*sqlalchemy/alembic/versions/*,demo/,releasenotes
[testenv:wheel]


@@ -341,7 +341,7 @@ class ActionsController(rest.RestController):
@wsme_pecan.wsexpose(Action, body=Action, status_code=201)
def post(self, action):
"""Create a new action.
"""Create a new action (forbidden).
:param action: an action within the request body.
"""
@@ -364,7 +364,7 @@ class ActionsController(rest.RestController):
@wsme.validate(types.uuid, [ActionPatchType])
@wsme_pecan.wsexpose(Action, types.uuid, body=[ActionPatchType])
def patch(self, action_uuid, patch):
"""Update an existing action.
"""Update an existing action (forbidden).
:param action_uuid: UUID of an action.
:param patch: a json PATCH document to apply to this action.
@@ -401,7 +401,7 @@ class ActionsController(rest.RestController):
@wsme_pecan.wsexpose(None, types.uuid, status_code=204)
def delete(self, action_uuid):
"""Delete a action.
"""Delete an action (forbidden).
:param action_uuid: UUID of an action.
"""


@@ -460,6 +460,15 @@ class ActionPlansController(rest.RestController):
policy.enforce(context, 'action_plan:delete', action_plan,
action='action_plan:delete')
allowed_states = (ap_objects.State.SUCCEEDED,
ap_objects.State.RECOMMENDED,
ap_objects.State.FAILED,
ap_objects.State.SUPERSEDED,
ap_objects.State.CANCELLED)
if action_plan.state not in allowed_states:
raise exception.DeleteError(
state=action_plan.state)
action_plan.soft_delete()
@wsme.validate(types.uuid, [ActionPlanPatchType])
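The guard added above reduces to a membership test against the recommended and terminal states; a standalone sketch, assuming the ``ap_objects.State`` constants are the plain strings shown:

```python
# Assumed string values of the ap_objects.State constants used above.
ALLOWED_DELETE_STATES = (
    "SUCCEEDED", "RECOMMENDED", "FAILED", "SUPERSEDED", "CANCELLED")

def can_delete(state):
    # An action plan may only be soft-deleted from a recommended or
    # terminal state; e.g. an in-flight (ONGOING) plan is rejected,
    # which is what raises DeleteError above.
    return state in ALLOWED_DELETE_STATES
```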


@@ -37,6 +37,8 @@ import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from oslo_log import log
from watcher._i18n import _
from watcher.api.controllers import base
from watcher.api.controllers import link
@@ -49,6 +51,8 @@ from watcher.common import utils
from watcher.decision_engine import rpcapi
from watcher import objects
LOG = log.getLogger(__name__)
class AuditPostType(wtypes.Base):
@@ -129,6 +133,11 @@ class AuditPostType(wtypes.Base):
goal = objects.Goal.get(context, self.goal)
self.name = "%s-%s" % (goal.name,
datetime.datetime.utcnow().isoformat())
# No more than 63 characters
if len(self.name) > 63:
LOG.warning("Audit: %s length exceeds 63 characters",
self.name)
self.name = self.name[0:63]
return Audit(
name=self.name,
@@ -166,10 +175,10 @@ class AuditPatchType(types.JsonPatchType):
class Audit(base.APIBase):
"""API representation of a audit.
"""API representation of an audit.
This class enforces type checking and value constraints, and converts
between the internal object model and the API representation of a audit.
between the internal object model and the API representation of an audit.
"""
_goal_uuid = None
_goal_name = None
@@ -264,19 +273,19 @@ class Audit(base.APIBase):
goal_uuid = wsme.wsproperty(
wtypes.text, _get_goal_uuid, _set_goal_uuid, mandatory=True)
"""Goal UUID the audit template refers to"""
"""Goal UUID the audit refers to"""
goal_name = wsme.wsproperty(
wtypes.text, _get_goal_name, _set_goal_name, mandatory=False)
"""The name of the goal this audit template refers to"""
"""The name of the goal this audit refers to"""
strategy_uuid = wsme.wsproperty(
wtypes.text, _get_strategy_uuid, _set_strategy_uuid, mandatory=False)
"""Strategy UUID the audit template refers to"""
"""Strategy UUID the audit refers to"""
strategy_name = wsme.wsproperty(
wtypes.text, _get_strategy_name, _set_strategy_name, mandatory=False)
"""The name of the strategy this audit template refers to"""
"""The name of the strategy this audit refers to"""
parameters = {wtypes.text: types.jsontype}
"""The strategy parameters for this audit"""
@@ -511,7 +520,7 @@ class AuditsController(rest.RestController):
def get_one(self, audit):
"""Retrieve information about the given audit.
:param audit_uuid: UUID or name of an audit.
:param audit: UUID or name of an audit.
"""
if self.from_audits:
raise exception.OperationNotPermitted
@@ -526,7 +535,7 @@ class AuditsController(rest.RestController):
def post(self, audit_p):
"""Create a new audit.
:param audit_p: a audit within the request body.
:param audit_p: an audit within the request body.
"""
context = pecan.request.context
policy.enforce(context, 'audit:create',
@@ -556,7 +565,7 @@ class AuditsController(rest.RestController):
if no_schema and audit.parameters:
raise exception.Invalid(_('Specify parameters but no predefined '
'strategy for audit template, or no '
'strategy for audit, or no '
'parameter spec in predefined strategy'))
audit_dict = audit.as_dict()
@@ -579,7 +588,7 @@ class AuditsController(rest.RestController):
def patch(self, audit, patch):
"""Update an existing audit.
:param auditd: UUID or name of a audit.
:param audit: UUID or name of an audit.
:param patch: a json PATCH document to apply to this audit.
"""
if self.from_audits:
@@ -636,4 +645,11 @@ class AuditsController(rest.RestController):
policy.enforce(context, 'audit:update', audit_to_delete,
action='audit:update')
initial_state = audit_to_delete.state
new_state = objects.audit.State.DELETED
if not objects.audit.AuditStateTransitionManager(
).check_transition(initial_state, new_state):
raise exception.DeleteError(
state=initial_state)
audit_to_delete.soft_delete()
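The auto-naming and 63-character cap introduced earlier in this file can be sketched in isolation (``make_audit_name`` is a hypothetical stand-in for the ``AuditPostType`` logic above):

```python
import datetime

def make_audit_name(goal_name):
    # "<goal>-<ISO timestamp>", truncated to 63 characters as in the
    # hunk above; slicing is a no-op for shorter names.
    name = "%s-%s" % (goal_name, datetime.datetime.utcnow().isoformat())
    return name[:63]
```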


@@ -41,6 +41,7 @@ from watcher.api.controllers.v1 import utils as api_utils
from watcher.common import exception
from watcher.common import policy
from watcher.common import utils as common_utils
from watcher.decision_engine import rpcapi
from watcher import objects
@@ -205,6 +206,7 @@ class StrategiesController(rest.RestController):
_custom_actions = {
'detail': ['GET'],
'state': ['GET'],
}
def _get_strategies_collection(self, filters, marker, limit, sort_key,
@@ -288,6 +290,26 @@ class StrategiesController(rest.RestController):
return self._get_strategies_collection(
filters, marker, limit, sort_key, sort_dir, expand, resource_url)
@wsme_pecan.wsexpose(wtypes.text, wtypes.text)
def state(self, strategy):
"""Retrieve a inforamation about strategy requirements.
:param strategy: name of the strategy.
"""
context = pecan.request.context
policy.enforce(context, 'strategy:state', action='strategy:state')
parents = pecan.request.path.split('/')[:-1]
if parents[-2] != "strategies":
raise exception.HTTPNotFound
rpc_strategy = api_utils.get_resource('Strategy', strategy)
de_client = rpcapi.DecisionEngineAPI()
strategy_state = de_client.get_strategy_info(context,
rpc_strategy.name)
strategy_state.extend([{
'type': 'Name', 'state': rpc_strategy.name,
'mandatory': '', 'comment': ''}])
return strategy_state
@wsme_pecan.wsexpose(Strategy, wtypes.text)
def get_one(self, strategy):
"""Retrieve information about the given strategy.

View File

@@ -63,7 +63,7 @@ class ContextHook(hooks.PecanHook):
auth_url = headers.get('X-Auth-Url')
if auth_url is None:
importutils.import_module('keystonemiddleware.auth_token')
auth_url = cfg.CONF.keystone_authtoken.auth_uri
auth_url = cfg.CONF.keystone_authtoken.www_authenticate_uri
state.request.context = context.make_context(
auth_token=auth_token,

View File

@@ -113,8 +113,10 @@ class Migrate(base.BaseAction):
dest_hostname=destination)
except nova_helper.nvexceptions.ClientException as e:
LOG.debug("Nova client exception occurred while live "
"migrating instance %s.Exception: %s" %
(self.instance_uuid, e))
"migrating instance "
"%(instance)s.Exception: %(exception)s",
{'instance': self.instance_uuid, 'exception': e})
except Exception as e:
LOG.exception(e)
LOG.critical("Unexpected error occurred. Migration failed for "

View File

@@ -36,13 +36,16 @@ class VolumeMigrate(base.BaseAction):
By using this action, you will be able to migrate cinder volume.
Migration type 'swap' can only be used for migrating attached volume.
Migration type 'cold' can only be used for migrating detached volume.
Migration type 'migrate' can be used for migrating detached volume to
the pool of same volume type.
Migration type 'retype' can be used for changing volume type of
detached volume.
The action schema is::
schema = Schema({
'resource_id': str, # should be a UUID
'migration_type': str, # choices -> "swap", "cold"
'migration_type': str, # choices -> "swap", "migrate","retype"
'destination_node': str,
'destination_type': str,
})
@@ -60,7 +63,8 @@ class VolumeMigrate(base.BaseAction):
MIGRATION_TYPE = 'migration_type'
SWAP = 'swap'
COLD = 'cold'
RETYPE = 'retype'
MIGRATE = 'migrate'
DESTINATION_NODE = "destination_node"
DESTINATION_TYPE = "destination_type"
@@ -85,7 +89,7 @@ class VolumeMigrate(base.BaseAction):
},
'migration_type': {
'type': 'string',
"enum": ["swap", "cold"]
"enum": ["swap", "retype", "migrate"]
},
'destination_node': {
"anyof": [
@@ -127,20 +131,6 @@ class VolumeMigrate(base.BaseAction):
def destination_type(self):
return self.input_parameters.get(self.DESTINATION_TYPE)
def _cold_migrate(self, volume, dest_node, dest_type):
if not self.cinder_util.can_cold(volume, dest_node):
raise exception.Invalid(
message=(_("Invalid state for cold migration")))
if dest_node:
return self.cinder_util.migrate(volume, dest_node)
elif dest_type:
return self.cinder_util.retype(volume, dest_type)
else:
raise exception.Invalid(
message=(_("destination host or destination type is "
"required when migration type is cold")))
def _can_swap(self, volume):
"""Judge volume can be swapped"""
@@ -212,12 +202,14 @@ class VolumeMigrate(base.BaseAction):
try:
volume = self.cinder_util.get_volume(volume_id)
if self.migration_type == self.COLD:
return self._cold_migrate(volume, dest_node, dest_type)
elif self.migration_type == self.SWAP:
if self.migration_type == self.SWAP:
if dest_node:
LOG.warning("dest_node is ignored")
return self._swap_volume(volume, dest_type)
elif self.migration_type == self.RETYPE:
return self.cinder_util.retype(volume, dest_type)
elif self.migration_type == self.MIGRATE:
return self.cinder_util.migrate(volume, dest_node)
else:
raise exception.Invalid(
message=(_("Migration of type '%(migration_type)s' is not "

View File
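The diff above drops the `cold` migration type and splits the detached-volume case into `migrate` and `retype`. A hypothetical sketch of the resulting dispatch (the return values are stand-ins, not the real `cinder_util` API):

```python
def execute(migration_type, dest_node=None, dest_type=None):
    """Dispatch a volume migration by type; mirrors the updated action."""
    if migration_type == 'swap':
        # dest_node is ignored for swap, as the action now warns
        return ('swap', dest_type)
    elif migration_type == 'retype':
        return ('retype', dest_type)
    elif migration_type == 'migrate':
        return ('migrate', dest_node)
    raise ValueError(
        "Migration of type '%s' is not supported" % migration_type)

print(execute('migrate', dest_node='host2@backend#pool'))
```

Passing the removed `cold` type now falls through to the error branch rather than being handled.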

@@ -40,10 +40,10 @@ def main():
if host == '127.0.0.1':
LOG.info('serving on 127.0.0.1:%(port)s, '
'view at %(protocol)s://127.0.0.1:%(port)s' %
'view at %(protocol)s://127.0.0.1:%(port)s',
dict(protocol=protocol, port=port))
else:
LOG.info('serving on %(protocol)s://%(host)s:%(port)s' %
LOG.info('serving on %(protocol)s://%(host)s:%(port)s',
dict(protocol=protocol, host=host, port=port))
api_schedule = scheduling.APISchedulingService()

View File

@@ -22,7 +22,7 @@ import sys
from oslo_log import log
from watcher.common import service as service
from watcher.common import service
from watcher import conf
from watcher.decision_engine import sync

View File

@@ -70,16 +70,18 @@ class CinderHelper(object):
def get_volume_type_list(self):
return self.cinder.volume_types.list()
def get_volume_snapshots_list(self):
return self.cinder.volume_snapshots.list(
search_opts={'all_tenants': True})
def get_volume_type_by_backendname(self, backendname):
"""Retrun a list of volume type"""
volume_type_list = self.get_volume_type_list()
volume_type = [volume_type for volume_type in volume_type_list
volume_type = [volume_type.name for volume_type in volume_type_list
if volume_type.extra_specs.get(
'volume_backend_name') == backendname]
if volume_type:
return volume_type[0].name
else:
return ""
return volume_type
def get_volume(self, volume):
@@ -111,23 +113,6 @@ class CinderHelper(object):
return True
return False
def can_cold(self, volume, host=None):
"""Judge volume can be migrated"""
can_cold = False
status = self.get_volume(volume).status
snapshot = self._has_snapshot(volume)
same_host = False
if host and getattr(volume, 'os-vol-host-attr:host') == host:
same_host = True
if (status == 'available' and
snapshot is False and
same_host is False):
can_cold = True
return can_cold
def get_deleting_volume(self, volume):
volume = self.get_volume(volume)
all_volume = self.get_volume_list()
@@ -154,13 +139,13 @@ class CinderHelper(object):
volume = self.get_volume(volume.id)
time.sleep(retry_interval)
retry -= 1
LOG.debug("retry count: %s" % retry)
LOG.debug("Waiting to complete deletion of volume %s" % volume.id)
LOG.debug("retry count: %s", retry)
LOG.debug("Waiting to complete deletion of volume %s", volume.id)
if self._can_get_volume(volume.id):
LOG.error("Volume deletion error: %s" % volume.id)
LOG.error("Volume deletion error: %s", volume.id)
return False
LOG.debug("Volume %s was deleted successfully." % volume.id)
LOG.debug("Volume %s was deleted successfully.", volume.id)
return True
def check_migrated(self, volume, retry_interval=10):
@@ -194,8 +179,7 @@ class CinderHelper(object):
LOG.error(error_msg)
return False
LOG.debug(
"Volume migration succeeded : "
"volume %s is now on host '%s'." % (
"Volume migration succeeded : volume %s is now on host '%s'.", (
volume.id, host_name))
return True
@@ -204,13 +188,13 @@ class CinderHelper(object):
volume = self.get_volume(volume)
dest_backend = self.backendname_from_poolname(dest_node)
dest_type = self.get_volume_type_by_backendname(dest_backend)
if volume.volume_type != dest_type:
if volume.volume_type not in dest_type:
raise exception.Invalid(
message=(_("Volume type must be same for migrating")))
source_node = getattr(volume, 'os-vol-host-attr:host')
LOG.debug("Volume %s found on host '%s'."
% (volume.id, source_node))
LOG.debug("Volume %s found on host '%s'.",
(volume.id, source_node))
self.cinder.volumes.migrate_volume(
volume, dest_node, False, True)
@@ -226,8 +210,8 @@ class CinderHelper(object):
source_node = getattr(volume, 'os-vol-host-attr:host')
LOG.debug(
"Volume %s found on host '%s'." % (
volume.id, source_node))
"Volume %s found on host '%s'.",
(volume.id, source_node))
self.cinder.volumes.retype(
volume, dest_type, "on-demand")
@@ -249,14 +233,14 @@ class CinderHelper(object):
LOG.debug('Waiting volume creation of {0}'.format(new_volume))
time.sleep(retry_interval)
retry -= 1
LOG.debug("retry count: %s" % retry)
LOG.debug("retry count: %s", retry)
if getattr(new_volume, 'status') != 'available':
error_msg = (_("Failed to create volume '%(volume)s. ") %
{'volume': new_volume.id})
raise Exception(error_msg)
LOG.debug("Volume %s was created successfully." % new_volume)
LOG.debug("Volume %s was created successfully.", new_volume)
return new_volume
def delete_volume(self, volume):

View File
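Many of the hunks above replace `LOG.debug("... %s" % value)` with `LOG.debug("... %s", value)`. The point is that the logging module interpolates lazily: when the level is disabled, the arguments are never formatted. A small self-contained demonstration:

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG is disabled
LOG = logging.getLogger("watcher.demo")

class Expensive:
    """Object whose __str__ is costly; counts how often it is rendered."""
    renders = 0
    def __str__(self):
        Expensive.renders += 1
        return "expensive"

obj = Expensive()

# Eager interpolation: the string is built even though DEBUG is off.
LOG.debug("value: %s" % obj)

# Lazy interpolation: logging skips formatting when the level is off.
LOG.debug("value: %s", obj)

print(Expensive.renders)
```

Only the eager call renders the object, so the lazy form saves the formatting cost on every suppressed debug line.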

@@ -83,8 +83,10 @@ class OpenStackClients(object):
novaclient_version = self._get_client_option('nova', 'api_version')
nova_endpoint_type = self._get_client_option('nova', 'endpoint_type')
nova_region_name = self._get_client_option('nova', 'region_name')
self._nova = nvclient.Client(novaclient_version,
endpoint_type=nova_endpoint_type,
region_name=nova_region_name,
session=self.session)
return self._nova
@@ -96,8 +98,10 @@ class OpenStackClients(object):
glanceclient_version = self._get_client_option('glance', 'api_version')
glance_endpoint_type = self._get_client_option('glance',
'endpoint_type')
glance_region_name = self._get_client_option('glance', 'region_name')
self._glance = glclient.Client(glanceclient_version,
interface=glance_endpoint_type,
region_name=glance_region_name,
session=self.session)
return self._glance
@@ -110,8 +114,11 @@ class OpenStackClients(object):
'api_version')
gnocchiclient_interface = self._get_client_option('gnocchi',
'endpoint_type')
gnocchiclient_region_name = self._get_client_option('gnocchi',
'region_name')
adapter_options = {
"interface": gnocchiclient_interface
"interface": gnocchiclient_interface,
"region_name": gnocchiclient_region_name
}
self._gnocchi = gnclient.Client(gnocchiclient_version,
@@ -127,8 +134,10 @@ class OpenStackClients(object):
cinderclient_version = self._get_client_option('cinder', 'api_version')
cinder_endpoint_type = self._get_client_option('cinder',
'endpoint_type')
cinder_region_name = self._get_client_option('cinder', 'region_name')
self._cinder = ciclient.Client(cinderclient_version,
endpoint_type=cinder_endpoint_type,
region_name=cinder_region_name,
session=self.session)
return self._cinder
@@ -141,9 +150,12 @@ class OpenStackClients(object):
'api_version')
ceilometer_endpoint_type = self._get_client_option('ceilometer',
'endpoint_type')
ceilometer_region_name = self._get_client_option('ceilometer',
'region_name')
self._ceilometer = ceclient.get_client(
ceilometerclient_version,
endpoint_type=ceilometer_endpoint_type,
region_name=ceilometer_region_name,
session=self.session)
return self._ceilometer
@@ -156,6 +168,8 @@ class OpenStackClients(object):
'monasca', 'api_version')
monascaclient_interface = self._get_client_option(
'monasca', 'interface')
monascaclient_region = self._get_client_option(
'monasca', 'region_name')
token = self.session.get_token()
watcher_clients_auth_config = CONF.get(_CLIENTS_AUTH_GROUP)
service_type = 'monitoring'
@@ -172,7 +186,8 @@ class OpenStackClients(object):
'password': watcher_clients_auth_config.password,
}
endpoint = self.session.get_endpoint(service_type=service_type,
interface=monascaclient_interface)
interface=monascaclient_interface,
region_name=monascaclient_region)
self._monasca = monclient.Client(
monascaclient_version, endpoint, **monasca_kwargs)
@@ -188,9 +203,11 @@ class OpenStackClients(object):
'api_version')
neutron_endpoint_type = self._get_client_option('neutron',
'endpoint_type')
neutron_region_name = self._get_client_option('neutron', 'region_name')
self._neutron = netclient.Client(neutronclient_version,
endpoint_type=neutron_endpoint_type,
region_name=neutron_region_name,
session=self.session)
self._neutron.format = 'json'
return self._neutron
@@ -202,7 +219,9 @@ class OpenStackClients(object):
ironicclient_version = self._get_client_option('ironic', 'api_version')
endpoint_type = self._get_client_option('ironic', 'endpoint_type')
ironic_region_name = self._get_client_option('ironic', 'region_name')
self._ironic = irclient.get_client(ironicclient_version,
os_endpoint_type=endpoint_type,
region_name=ironic_region_name,
session=self.session)
return self._ironic

View File
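Each client accessor above follows the same shape: read options from config, build the client once, cache it on the instance, and (with this change) thread `region_name` through. A simplified sketch of that pattern, with a dict standing in for the real client constructor:

```python
class Clients:
    """Sketch of the cached-client pattern used by OpenStackClients."""

    def __init__(self, conf):
        self._conf = conf
        self._nova = None

    def _get_client_option(self, service, option):
        return self._conf[service][option]

    @property
    def nova(self):
        if self._nova is None:
            region = self._get_client_option('nova', 'region_name')
            endpoint_type = self._get_client_option('nova', 'endpoint_type')
            # the real code builds novaclient.client.Client(...) here
            self._nova = {'region_name': region,
                          'endpoint_type': endpoint_type}
        return self._nova

conf = {'nova': {'region_name': 'RegionOne', 'endpoint_type': 'public'}}
c = Clients(conf)
print(c.nova['region_name'])
```

In the real class the accessors are methods wrapped with keystone exception handling rather than properties; the caching and option lookup are the same.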

@@ -305,7 +305,7 @@ class ActionFilterCombinationProhibited(Invalid):
class UnsupportedActionType(UnsupportedError):
msg_fmt = _("Provided %(action_type) is not supported yet")
msg_fmt = _("Provided %(action_type)s is not supported yet")
class EfficacyIndicatorNotFound(ResourceNotFound):
@@ -332,6 +332,10 @@ class PatchError(Invalid):
msg_fmt = _("Couldn't apply patch '%(patch)s'. Reason: %(reason)s")
class DeleteError(Invalid):
msg_fmt = _("Couldn't delete when state is '%(state)s'.")
# decision engine
class WorkflowExecutionException(WatcherException):
@@ -362,6 +366,14 @@ class ClusterEmpty(WatcherException):
msg_fmt = _("The list of compute node(s) in the cluster is empty")
class ComputeClusterEmpty(WatcherException):
msg_fmt = _("The list of compute node(s) in the cluster is empty")
class StorageClusterEmpty(WatcherException):
msg_fmt = _("The list of storage node(s) in the cluster is empty")
class MetricCollectorNotDefined(WatcherException):
msg_fmt = _("The metrics resource collector is not defined")
@@ -405,6 +417,10 @@ class UnsupportedDataSource(UnsupportedError):
"by strategy %(strategy)s")
class DataSourceNotAvailable(WatcherException):
msg_fmt = _("Datasource %(datasource)s is not available.")
class NoSuchMetricForHost(WatcherException):
msg_fmt = _("No %(metric)s metric for %(host)s found.")
@@ -469,6 +485,14 @@ class VolumeNotFound(StorageResourceNotFound):
msg_fmt = _("The volume '%(name)s' could not be found")
class BaremetalResourceNotFound(WatcherException):
msg_fmt = _("The baremetal resource '%(name)s' could not be found")
class IronicNodeNotFound(BaremetalResourceNotFound):
msg_fmt = _("The ironic node %(uuid)s could not be found")
class LoadingError(WatcherException):
msg_fmt = _("Error loading plugin '%(name)s'")

View File
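The one-character fix to `UnsupportedActionType` above is worth spelling out: without the trailing `s`, `%(action_type)` followed by a space and `i` parses as an integer conversion with a space flag, so formatting the message fails at runtime:

```python
params = {'action_type': 'resize'}

# Buggy template: '%(action_type) i...' is read as an integer
# conversion, so formatting a string value raises.
buggy = "Provided %(action_type) is not supported yet"
fixed = "Provided %(action_type)s is not supported yet"

try:
    buggy % params
    result = "no error"
except (TypeError, ValueError):
    result = "error"

print(result)
print(fixed % params)
```

So the broken template would have turned a clean "unsupported action" message into a formatting exception.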

@@ -0,0 +1,49 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corporation
#
# Authors:Yumeng Bao <bao.yumeng@zte.com.cn>
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from oslo_log import log
from watcher.common import clients
from watcher.common import exception
from watcher.common import utils
LOG = log.getLogger(__name__)
class IronicHelper(object):
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
self.osc = osc if osc else clients.OpenStackClients()
self.ironic = self.osc.ironic()
def get_ironic_node_list(self):
return self.ironic.node.list()
def get_ironic_node_by_uuid(self, node_uuid):
"""Get ironic node by node UUID"""
try:
node = self.ironic.node.get(utils.Struct(uuid=node_uuid))
if not node:
raise exception.IronicNodeNotFound(uuid=node_uuid)
except Exception as exc:
LOG.exception(exc)
raise exception.IronicNodeNotFound(uuid=node_uuid)
# We need to pass an object with an 'uuid' attribute to make it work
return node

View File
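`IronicHelper.get_ironic_node_by_uuid` passes `utils.Struct(uuid=node_uuid)` because the client wants an object with a `uuid` attribute. Roughly how such a `Struct` behaves — this is an assumed stand-in, not necessarily watcher's exact implementation:

```python
class Struct(dict):
    """Dict that also exposes its keys as attributes."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

node = Struct(uuid='1234-abcd')
print(node.uuid)
```

Any API that only reads `obj.uuid` (or `obj.id`, as in the hypervisor lookup) is then satisfied without constructing a full resource object.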

@@ -52,20 +52,31 @@ class NovaHelper(object):
return self.nova.hypervisors.get(utils.Struct(id=node_id))
def get_compute_node_by_hostname(self, node_hostname):
"""Get compute node by ID (*not* UUID)"""
# We need to pass an object with an 'id' attribute to make it work
"""Get compute node by hostname"""
try:
compute_nodes = self.nova.hypervisors.search(node_hostname)
if len(compute_nodes) != 1:
hypervisors = [hv for hv in self.get_compute_node_list()
if hv.service['host'] == node_hostname]
if len(hypervisors) != 1:
# TODO(hidekazu)
# this may occur if VMware vCenter driver is used
raise exception.ComputeNodeNotFound(name=node_hostname)
else:
compute_nodes = self.nova.hypervisors.search(
hypervisors[0].hypervisor_hostname)
if len(compute_nodes) != 1:
raise exception.ComputeNodeNotFound(name=node_hostname)
return self.get_compute_node_by_id(compute_nodes[0].id)
return self.get_compute_node_by_id(compute_nodes[0].id)
except Exception as exc:
LOG.exception(exc)
raise exception.ComputeNodeNotFound(name=node_hostname)
def get_instance_list(self):
return self.nova.servers.list(search_opts={'all_tenants': True})
return self.nova.servers.list(search_opts={'all_tenants': True},
limit=-1)
def get_flavor_list(self):
return self.nova.flavors.list(**{'is_public': None})
def get_service(self, service_id):
return self.nova.services.find(id=service_id)
@@ -96,7 +107,7 @@ class NovaHelper(object):
return True
else:
LOG.debug("confirm resize failed for the "
"instance %s" % instance.id)
"instance %s", instance.id)
return False
def wait_for_volume_status(self, volume, status, timeout=60,
@@ -144,19 +155,20 @@ class NovaHelper(object):
"""
new_image_name = ""
LOG.debug(
"Trying a non-live migrate of instance '%s' " % instance_id)
"Trying a non-live migrate of instance '%s' ", instance_id)
# Looking for the instance to migrate
instance = self.find_instance(instance_id)
if not instance:
LOG.debug("Instance %s not found !" % instance_id)
LOG.debug("Instance %s not found !", instance_id)
return False
else:
# NOTE: If destination node is None call Nova API to migrate
# instance
host_name = getattr(instance, "OS-EXT-SRV-ATTR:host")
LOG.debug(
"Instance %s found on host '%s'." % (instance_id, host_name))
"Instance %(instance)s found on host '%(host)s'.",
{'instance': instance_id, 'host': host_name})
if dest_hostname is None:
previous_status = getattr(instance, 'status')
@@ -176,12 +188,12 @@ class NovaHelper(object):
return False
LOG.debug(
"cold migration succeeded : "
"instance %s is now on host '%s'." % (
"instance %s is now on host '%s'.", (
instance_id, new_hostname))
return True
else:
LOG.debug(
"cold migration for instance %s failed" % instance_id)
"cold migration for instance %s failed", instance_id)
return False
if not keep_original_image_name:
@@ -210,7 +222,7 @@ class NovaHelper(object):
for network_name, network_conf_obj in addresses.items():
LOG.debug(
"Extracting network configuration for network '%s'" %
"Extracting network configuration for network '%s'",
network_name)
network_names_list.append(network_name)
@@ -231,7 +243,7 @@ class NovaHelper(object):
stopped_ok = self.stop_instance(instance_id)
if not stopped_ok:
LOG.debug("Could not stop instance: %s" % instance_id)
LOG.debug("Could not stop instance: %s", instance_id)
return False
# Building the temporary image which will be used
@@ -241,7 +253,7 @@ class NovaHelper(object):
if not image_uuid:
LOG.debug(
"Could not build temporary image of instance: %s" %
"Could not build temporary image of instance: %s",
instance_id)
return False
@@ -289,8 +301,10 @@ class NovaHelper(object):
blocks.append(
block_device_mapping_v2_item)
LOG.debug("Detaching volume %s from instance: %s" % (
volume_id, instance_id))
LOG.debug(
"Detaching volume %(volume)s from "
"instance: %(instance)s",
{'volume': volume_id, 'instance': instance_id})
# volume.detach()
self.nova.volumes.delete_server_volume(instance_id,
volume_id)
@@ -298,11 +312,12 @@ class NovaHelper(object):
if not self.wait_for_volume_status(volume, "available", 5,
10):
LOG.debug(
"Could not detach volume %s from instance: %s" % (
volume_id, instance_id))
"Could not detach volume %(volume)s "
"from instance: %(instance)s",
{'volume': volume_id, 'instance': instance_id})
return False
except ciexceptions.NotFound:
LOG.debug("Volume '%s' not found " % image_id)
LOG.debug("Volume '%s' not found ", image_id)
return False
# We create the new instance from
@@ -321,18 +336,21 @@ class NovaHelper(object):
if not new_instance:
LOG.debug(
"Could not create new instance "
"for non-live migration of instance %s" % instance_id)
"for non-live migration of instance %s", instance_id)
return False
try:
LOG.debug("Detaching floating ip '%s' from instance %s" % (
floating_ip, instance_id))
LOG.debug(
"Detaching floating ip '%(floating_ip)s' "
"from instance %(instance)s",
{'floating_ip': floating_ip, 'instance': instance_id})
# We detach the floating ip from the current instance
instance.remove_floating_ip(floating_ip)
LOG.debug(
"Attaching floating ip '%s' to the new instance %s" % (
floating_ip, new_instance.id))
"Attaching floating ip '%(ip)s' to the new "
"instance %(id)s",
{'ip': floating_ip, 'id': new_instance.id})
# We attach the same floating ip to the new instance
new_instance.add_floating_ip(floating_ip)
@@ -344,12 +362,12 @@ class NovaHelper(object):
# Deleting the old instance (because no more useful)
delete_ok = self.delete_instance(instance_id)
if not delete_ok:
LOG.debug("Could not delete instance: %s" % instance_id)
LOG.debug("Could not delete instance: %s", instance_id)
return False
LOG.debug(
"Instance %s has been successfully migrated "
"to new host '%s' and its new id is %s." % (
"to new host '%s' and its new id is %s.", (
instance_id, new_host_name, new_instance.id))
return True
@@ -366,8 +384,10 @@ class NovaHelper(object):
:param instance_id: the unique id of the instance to resize.
:param flavor: the name or ID of the flavor to resize to.
"""
LOG.debug("Trying a resize of instance %s to flavor '%s'" % (
instance_id, flavor))
LOG.debug(
"Trying a resize of instance %(instance)s to "
"flavor '%(flavor)s'",
{'instance': instance_id, 'flavor': flavor})
# Looking for the instance to resize
instance = self.find_instance(instance_id)
@@ -384,17 +404,17 @@ class NovaHelper(object):
"instance %s. Exception: %s", instance_id, e)
if not flavor_id:
LOG.debug("Flavor not found: %s" % flavor)
LOG.debug("Flavor not found: %s", flavor)
return False
if not instance:
LOG.debug("Instance not found: %s" % instance_id)
LOG.debug("Instance not found: %s", instance_id)
return False
instance_status = getattr(instance, 'OS-EXT-STS:vm_state')
LOG.debug(
"Instance %s is in '%s' status." % (instance_id,
instance_status))
"Instance %(id)s is in '%(status)s' status.",
{'id': instance_id, 'status': instance_status})
instance.resize(flavor=flavor_id)
while getattr(instance,
@@ -432,17 +452,20 @@ class NovaHelper(object):
destination_node is None, nova scheduler choose
the destination host
"""
LOG.debug("Trying to live migrate instance %s " % (instance_id))
LOG.debug(
"Trying a live migrate instance %(instance)s ",
{'instance': instance_id})
# Looking for the instance to migrate
instance = self.find_instance(instance_id)
if not instance:
LOG.debug("Instance not found: %s" % instance_id)
LOG.debug("Instance not found: %s", instance_id)
return False
else:
host_name = getattr(instance, 'OS-EXT-SRV-ATTR:host')
LOG.debug(
"Instance %s found on host '%s'." % (instance_id, host_name))
"Instance %(instance)s found on host '%(host)s'.",
{'instance': instance_id, 'host': host_name})
# From nova api version 2.25(Mitaka release), the default value of
# block_migration is None which is mapped to 'auto'.
@@ -464,7 +487,7 @@ class NovaHelper(object):
if host_name != new_hostname and instance.status == 'ACTIVE':
LOG.debug(
"Live migration succeeded : "
"instance %s is now on host '%s'." % (
"instance %s is now on host '%s'.", (
instance_id, new_hostname))
return True
else:
@@ -475,7 +498,7 @@ class NovaHelper(object):
and retry:
instance = self.nova.servers.get(instance.id)
if not getattr(instance, 'OS-EXT-STS:task_state'):
LOG.debug("Instance task state: %s is null" % instance_id)
LOG.debug("Instance task state: %s is null", instance_id)
break
LOG.debug(
'Waiting the migration of {0} to {1}'.format(
@@ -491,13 +514,13 @@ class NovaHelper(object):
LOG.debug(
"Live migration succeeded : "
"instance %s is now on host '%s'." % (
instance_id, host_name))
"instance %(instance)s is now on host '%(host)s'.",
{'instance': instance_id, 'host': host_name})
return True
def abort_live_migrate(self, instance_id, source, destination, retry=240):
LOG.debug("Aborting live migration of instance %s" % instance_id)
LOG.debug("Aborting live migration of instance %s", instance_id)
migration = self.get_running_migration(instance_id)
if migration:
migration_id = getattr(migration[0], "id")
@@ -510,7 +533,7 @@ class NovaHelper(object):
LOG.exception(e)
else:
LOG.debug(
"No running migrations found for instance %s" % instance_id)
"No running migrations found for instance %s", instance_id)
while retry:
instance = self.nova.servers.get(instance_id)
@@ -551,7 +574,7 @@ class NovaHelper(object):
return False
def set_host_offline(self, hostname):
# See API on http://developer.openstack.org/api-ref-compute-v2.1.html
# See API on https://developer.openstack.org/api-ref/compute/
# especially the PUT request
# regarding this resource : /v2.1/os-hosts/{host_name}
#
@@ -575,7 +598,7 @@ class NovaHelper(object):
host = self.nova.hosts.get(hostname)
if not host:
LOG.debug("host not found: %s" % hostname)
LOG.debug("host not found: %s", hostname)
return False
else:
host[0].update(
@@ -597,18 +620,19 @@ class NovaHelper(object):
key-value pairs to associate to the image as metadata.
"""
LOG.debug(
"Trying to create an image from instance %s ..." % instance_id)
"Trying to create an image from instance %s ...", instance_id)
# Looking for the instance
instance = self.find_instance(instance_id)
if not instance:
LOG.debug("Instance not found: %s" % instance_id)
LOG.debug("Instance not found: %s", instance_id)
return None
else:
host_name = getattr(instance, 'OS-EXT-SRV-ATTR:host')
LOG.debug(
"Instance %s found on host '%s'." % (instance_id, host_name))
"Instance %(instance)s found on host '%(host)s'.",
{'instance': instance_id, 'host': host_name})
# We need to wait for an appropriate status
# of the instance before we can build an image from it
@@ -635,14 +659,15 @@ class NovaHelper(object):
if not image:
break
status = image.status
LOG.debug("Current image status: %s" % status)
LOG.debug("Current image status: %s", status)
if not image:
LOG.debug("Image not found: %s" % image_uuid)
LOG.debug("Image not found: %s", image_uuid)
else:
LOG.debug(
"Image %s successfully created for instance %s" % (
image_uuid, instance_id))
"Image %(image)s successfully created for "
"instance %(instance)s",
{'image': image_uuid, 'instance': instance_id})
return image_uuid
return None
@@ -651,16 +676,16 @@ class NovaHelper(object):
:param instance_id: the unique id of the instance to delete.
"""
LOG.debug("Trying to remove instance %s ..." % instance_id)
LOG.debug("Trying to remove instance %s ...", instance_id)
instance = self.find_instance(instance_id)
if not instance:
LOG.debug("Instance not found: %s" % instance_id)
LOG.debug("Instance not found: %s", instance_id)
return False
else:
self.nova.servers.delete(instance_id)
LOG.debug("Instance %s removed." % instance_id)
LOG.debug("Instance %s removed.", instance_id)
return True
def stop_instance(self, instance_id):
@@ -668,21 +693,21 @@ class NovaHelper(object):
:param instance_id: the unique id of the instance to stop.
"""
LOG.debug("Trying to stop instance %s ..." % instance_id)
LOG.debug("Trying to stop instance %s ...", instance_id)
instance = self.find_instance(instance_id)
if not instance:
LOG.debug("Instance not found: %s" % instance_id)
LOG.debug("Instance not found: %s", instance_id)
return False
elif getattr(instance, 'OS-EXT-STS:vm_state') == "stopped":
LOG.debug("Instance has been stopped: %s" % instance_id)
LOG.debug("Instance has been stopped: %s", instance_id)
return True
else:
self.nova.servers.stop(instance_id)
if self.wait_for_instance_state(instance, "stopped", 8, 10):
LOG.debug("Instance %s stopped." % instance_id)
LOG.debug("Instance %s stopped.", instance_id)
return True
else:
return False
@@ -723,11 +748,11 @@ class NovaHelper(object):
return False
while instance.status not in status_list and retry:
LOG.debug("Current instance status: %s" % instance.status)
LOG.debug("Current instance status: %s", instance.status)
time.sleep(sleep)
instance = self.nova.servers.get(instance.id)
retry -= 1
LOG.debug("Current instance status: %s" % instance.status)
LOG.debug("Current instance status: %s", instance.status)
return instance.status in status_list
def create_instance(self, node_id, inst_name="test", image_id=None,
@@ -743,26 +768,26 @@ class NovaHelper(object):
It returns the unique id of the created instance.
"""
LOG.debug(
"Trying to create new instance '%s' "
"from image '%s' with flavor '%s' ..." % (
inst_name, image_id, flavor_name))
"Trying to create new instance '%(inst)s' "
"from image '%(image)s' with flavor '%(flavor)s' ...",
{'inst': inst_name, 'image': image_id, 'flavor': flavor_name})
try:
self.nova.keypairs.findall(name=keypair_name)
except nvexceptions.NotFound:
LOG.debug("Key pair '%s' not found " % keypair_name)
LOG.debug("Key pair '%s' not found ", keypair_name)
return
try:
image = self.glance.images.get(image_id)
except glexceptions.NotFound:
LOG.debug("Image '%s' not found " % image_id)
LOG.debug("Image '%s' not found ", image_id)
return
try:
flavor = self.nova.flavors.find(name=flavor_name)
except nvexceptions.NotFound:
LOG.debug("Flavor '%s' not found " % flavor_name)
LOG.debug("Flavor '%s' not found ", flavor_name)
return
# Make sure all security groups exist
@@ -770,7 +795,7 @@ class NovaHelper(object):
group_id = self.get_security_group_id_from_name(sec_group_name)
if not group_id:
LOG.debug("Security group '%s' not found " % sec_group_name)
LOG.debug("Security group '%s' not found ", sec_group_name)
return
net_list = list()
@@ -779,7 +804,7 @@ class NovaHelper(object):
nic_id = self.get_network_id_from_name(network_name)
if not nic_id:
LOG.debug("Network '%s' not found " % network_name)
LOG.debug("Network '%s' not found ", network_name)
return
net_obj = {"net-id": nic_id}
net_list.append(net_obj)
@@ -805,14 +830,16 @@ class NovaHelper(object):
if create_new_floating_ip and instance.status == 'ACTIVE':
LOG.debug(
"Creating a new floating IP"
" for instance '%s'" % instance.id)
" for instance '%s'", instance.id)
# Creating floating IP for the new instance
floating_ip = self.nova.floating_ips.create()
instance.add_floating_ip(floating_ip)
LOG.debug("Instance %s associated to Floating IP '%s'" % (
instance.id, floating_ip.ip))
LOG.debug(
"Instance %(instance)s associated to "
"Floating IP '%(ip)s'",
{'instance': instance.id, 'ip': floating_ip.ip})
return instance
@@ -845,8 +872,9 @@ class NovaHelper(object):
def get_instances_by_node(self, host):
return [instance for instance in
self.nova.servers.list(search_opts={"all_tenants": True})
if self.get_hostname(instance) == host]
self.nova.servers.list(search_opts={"all_tenants": True,
"host": host},
limit=-1)]
def get_hostname(self, instance):
return str(getattr(instance, 'OS-EXT-SRV-ATTR:host'))
@@ -886,7 +914,7 @@ class NovaHelper(object):
LOG.debug('Waiting volume update to {0}'.format(new_volume))
time.sleep(retry_interval)
retry -= 1
LOG.debug("retry count: %s" % retry)
LOG.debug("retry count: %s", retry)
if getattr(new_volume, 'status') != "in-use":
LOG.error("Volume update retry timeout or error")
return False
@@ -894,5 +922,6 @@ class NovaHelper(object):
host_name = getattr(new_volume, "os-vol-host-attr:host")
LOG.debug(
"Volume update succeeded : "
"Volume %s is now on host '%s'." % (new_volume.id, host_name))
"Volume %s is now on host '%s'.",
(new_volume.id, host_name))
return True

View File
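The `get_instances_by_node` change above replaces client-side filtering with the API's own `host` filter in `search_opts`, plus `limit=-1` so nova returns all items instead of a single page. A sketch with a fake client showing the two approaches agree (the fake is an assumption; it only mimics the filter semantics):

```python
class FakeServers:
    """Stand-in for nova.servers: honors a server-side 'host' filter."""

    def __init__(self, servers):
        self._servers = servers

    def list(self, search_opts=None, limit=None):
        search_opts = search_opts or {}
        result = self._servers
        if 'host' in search_opts:
            result = [s for s in result if s['host'] == search_opts['host']]
        return result

servers = FakeServers([
    {'name': 'vm1', 'host': 'node-1'},
    {'name': 'vm2', 'host': 'node-2'},
])

# Old approach: fetch everything, filter client-side.
old = [s for s in servers.list(search_opts={'all_tenants': True})
       if s['host'] == 'node-1']

# New approach: let the API filter by host, with limit=-1 to disable paging.
new = servers.list(search_opts={'all_tenants': True, 'host': 'node-1'},
                   limit=-1)

print(old == new, len(new))
```

Against a real cloud the server-side filter also avoids transferring every tenant's servers just to keep one host's worth.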

@@ -0,0 +1,37 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
from watcher.common.policies import action
from watcher.common.policies import action_plan
from watcher.common.policies import audit
from watcher.common.policies import audit_template
from watcher.common.policies import base
from watcher.common.policies import goal
from watcher.common.policies import scoring_engine
from watcher.common.policies import service
from watcher.common.policies import strategy
def list_rules():
return itertools.chain(
base.list_rules(),
action.list_rules(),
action_plan.list_rules(),
audit.list_rules(),
audit_template.list_rules(),
goal.list_rules(),
scoring_engine.list_rules(),
service.list_rules(),
strategy.list_rules(),
)


@@ -0,0 +1,57 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
ACTION = 'action:%s'
rules = [
policy.DocumentedRuleDefault(
name=ACTION % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of actions with detail.',
operations=[
{
'path': '/v1/actions/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION % 'get',
check_str=base.RULE_ADMIN_API,
description='Retrieve information about a given action.',
operations=[
{
'path': '/v1/actions/{action_id}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of all actions.',
operations=[
{
'path': '/v1/actions',
'method': 'GET'
}
]
)
]
def list_rules():
return rules
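Each rule name above is produced from the `ACTION = 'action:%s'` template. A self-contained sketch of that naming convention (the helper is illustrative; the real module just writes the names inline):

```python
ACTION = 'action:%s'  # same template as in the policy module


def rule_names(operations):
    """Expand the 'action:%s' template for each operation suffix."""
    return [ACTION % op for op in operations]


print(rule_names(['detail', 'get', 'get_all']))
# ['action:detail', 'action:get', 'action:get_all']
```

Keeping the prefix in one template means a rename of the resource only touches a single constant.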


@@ -0,0 +1,79 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
ACTION_PLAN = 'action_plan:%s'
rules = [
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'delete',
check_str=base.RULE_ADMIN_API,
description='Delete an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_uuid}',
'method': 'DELETE'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of action plans with detail.',
operations=[
{
'path': '/v1/action_plans/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'get',
check_str=base.RULE_ADMIN_API,
description='Get an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_id}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all action plans.',
operations=[
{
'path': '/v1/action_plans',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=ACTION_PLAN % 'update',
check_str=base.RULE_ADMIN_API,
description='Update an action plan.',
operations=[
{
'path': '/v1/action_plans/{action_plan_uuid}',
'method': 'PATCH'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,90 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
AUDIT = 'audit:%s'
rules = [
policy.DocumentedRuleDefault(
name=AUDIT % 'create',
check_str=base.RULE_ADMIN_API,
description='Create a new audit.',
operations=[
{
'path': '/v1/audits',
'method': 'POST'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'delete',
check_str=base.RULE_ADMIN_API,
description='Delete an audit.',
operations=[
{
'path': '/v1/audits/{audit_uuid}',
'method': 'DELETE'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve audit list with details.',
operations=[
{
'path': '/v1/audits/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'get',
check_str=base.RULE_ADMIN_API,
description='Get an audit.',
operations=[
{
'path': '/v1/audits/{audit_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all audits.',
operations=[
{
'path': '/v1/audits',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT % 'update',
check_str=base.RULE_ADMIN_API,
description='Update an audit.',
operations=[
{
'path': '/v1/audits/{audit_uuid}',
'method': 'PATCH'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,90 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
AUDIT_TEMPLATE = 'audit_template:%s'
rules = [
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'create',
check_str=base.RULE_ADMIN_API,
description='Create an audit template.',
operations=[
{
'path': '/v1/audit_templates',
'method': 'POST'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'delete',
check_str=base.RULE_ADMIN_API,
description='Delete an audit template.',
operations=[
{
'path': '/v1/audit_templates/{audit_template_uuid}',
'method': 'DELETE'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of audit templates with details.',
operations=[
{
'path': '/v1/audit_templates/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'get',
check_str=base.RULE_ADMIN_API,
description='Get an audit template.',
operations=[
{
'path': '/v1/audit_templates/{audit_template_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get a list of all audit templates.',
operations=[
{
'path': '/v1/audit_templates',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=AUDIT_TEMPLATE % 'update',
check_str=base.RULE_ADMIN_API,
description='Update an audit template.',
operations=[
{
'path': '/v1/audit_templates/{audit_template_uuid}',
'method': 'PATCH'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,32 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
RULE_ADMIN_API = 'rule:admin_api'
ROLE_ADMIN_OR_ADMINISTRATOR = 'role:admin or role:administrator'
ALWAYS_DENY = '!'
rules = [
policy.RuleDefault(
name='admin_api',
check_str=ROLE_ADMIN_OR_ADMINISTRATOR
),
policy.RuleDefault(
name='show_password',
check_str=ALWAYS_DENY
)
]
def list_rules():
return rules


@@ -0,0 +1,57 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
GOAL = 'goal:%s'
rules = [
policy.DocumentedRuleDefault(
name=GOAL % 'detail',
check_str=base.RULE_ADMIN_API,
description='Retrieve a list of goals with detail.',
operations=[
{
'path': '/v1/goals/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=GOAL % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a goal.',
operations=[
{
'path': '/v1/goals/{goal_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=GOAL % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all goals.',
operations=[
{
'path': '/v1/goals',
'method': 'GET'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,66 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
SCORING_ENGINE = 'scoring_engine:%s'
rules = [
# FIXME(lbragstad): Find someone from watcher to double check this
# information. This API isn't listed in watcher's API reference
# documentation.
policy.DocumentedRuleDefault(
name=SCORING_ENGINE % 'detail',
check_str=base.RULE_ADMIN_API,
description='List scoring engines with details.',
operations=[
{
'path': '/v1/scoring_engines/detail',
'method': 'GET'
}
]
),
# FIXME(lbragstad): Find someone from watcher to double check this
# information. This API isn't listed in watcher's API reference
# documentation.
policy.DocumentedRuleDefault(
name=SCORING_ENGINE % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a scoring engine.',
operations=[
{
'path': '/v1/scoring_engines/{scoring_engine_id}',
'method': 'GET'
}
]
),
# FIXME(lbragstad): Find someone from watcher to double check this
# information. This API isn't listed in watcher's API reference
# documentation.
policy.DocumentedRuleDefault(
name=SCORING_ENGINE % 'get_all',
check_str=base.RULE_ADMIN_API,
description='Get all scoring engines.',
operations=[
{
'path': '/v1/scoring_engines',
'method': 'GET'
}
]
)
]
def list_rules():
return rules


@@ -0,0 +1,57 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
SERVICE = 'service:%s'
rules = [
policy.DocumentedRuleDefault(
name=SERVICE % 'detail',
check_str=base.RULE_ADMIN_API,
description='List services with detail.',
operations=[
{
'path': '/v1/services/',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=SERVICE % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a specific service.',
operations=[
{
'path': '/v1/services/{service_id}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=SERVICE % 'get_all',
check_str=base.RULE_ADMIN_API,
description='List all services.',
operations=[
{
'path': '/v1/services/',
'method': 'GET'
}
]
),
]
def list_rules():
return rules


@@ -0,0 +1,68 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from watcher.common.policies import base
STRATEGY = 'strategy:%s'
rules = [
policy.DocumentedRuleDefault(
name=STRATEGY % 'detail',
check_str=base.RULE_ADMIN_API,
description='List strategies with detail.',
operations=[
{
'path': '/v1/strategies/detail',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=STRATEGY % 'get',
check_str=base.RULE_ADMIN_API,
description='Get a strategy.',
operations=[
{
'path': '/v1/strategies/{strategy_uuid}',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=STRATEGY % 'get_all',
check_str=base.RULE_ADMIN_API,
description='List all strategies.',
operations=[
{
'path': '/v1/strategies',
'method': 'GET'
}
]
),
policy.DocumentedRuleDefault(
name=STRATEGY % 'state',
check_str=base.RULE_ADMIN_API,
description='Get state of strategy.',
operations=[
{
'path': '/v1/strategies/{strategy_uuid}/state',
'method': 'GET'
}
]
)
]
def list_rules():
return rules


@@ -15,11 +15,13 @@
"""Policy Engine For Watcher."""
import sys
from oslo_config import cfg
from oslo_policy import policy
from watcher.common import exception
from watcher.common import policies
_ENFORCER = None
CONF = cfg.CONF
@@ -56,6 +58,7 @@ def init(policy_file=None, rules=None,
default_rule=default_rule,
use_conf=use_conf,
overwrite=overwrite)
_ENFORCER.register_defaults(policies.list_rules())
return _ENFORCER
@@ -92,3 +95,23 @@ def enforce(context, rule=None, target=None,
'user_id': context.user_id}
return enforcer.enforce(rule, target, credentials,
do_raise=do_raise, exc=exc, *args, **kwargs)
def get_enforcer():
# This method is for use by oslopolicy CLI scripts. Those scripts need the
# 'output-file' and 'namespace' options, but having those in sys.argv means
# loading the Watcher config options will fail as those are not expected
# to be present. So we pass in an arg list with those stripped out.
conf_args = []
# Start at 1 because cfg.CONF expects the equivalent of sys.argv[1:]
i = 1
while i < len(sys.argv):
if sys.argv[i].strip('-') in ['namespace', 'output-file']:
i += 2
continue
conf_args.append(sys.argv[i])
i += 1
cfg.CONF(conf_args, project='watcher')
init()
return _ENFORCER
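The argv-stripping loop in `get_enforcer()` can be isolated as a pure function. A sketch under the same assumption the comment states (each option arrives as two tokens, flag then value); the helper name is illustrative and takes an already-sliced argv rather than starting at index 1:

```python
def strip_policy_cli_args(argv):
    """Drop --namespace/--output-file (and their values) from an argv
    list so cfg.CONF does not fail on options it never registered.

    Mirrors the loop in get_enforcer(): each option is assumed to be
    two tokens, e.g. ['--namespace', 'watcher'].
    """
    conf_args = []
    i = 0
    while i < len(argv):
        if argv[i].strip('-') in ('namespace', 'output-file'):
            i += 2  # skip the option and its value
            continue
        conf_args.append(argv[i])
        i += 1
    return conf_args


print(strip_policy_cli_args(
    ['--config-file', 'watcher.conf', '--namespace', 'watcher']))
# ['--config-file', 'watcher.conf']
```

Note the two-token assumption: a `--namespace=watcher` form would not be caught by this loop.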


@@ -69,7 +69,8 @@ _DEFAULT_LOG_LEVELS = ['amqp=WARN', 'amqplib=WARN', 'qpid.messaging=INFO',
'keystoneclient=INFO', 'stevedore=INFO',
'eventlet.wsgi.server=WARN', 'iso8601=WARN',
'paramiko=WARN', 'requests=WARN', 'neutronclient=WARN',
'glanceclient=WARN', 'watcher.openstack.common=WARN']
'glanceclient=WARN', 'watcher.openstack.common=WARN',
'apscheduler=WARN']
Singleton = service.Singleton


@@ -30,7 +30,10 @@ CEILOMETER_CLIENT_OPTS = [
 default='internalURL',
 help='Type of endpoint to use in ceilometerclient.'
      'Supported values: internalURL, publicURL, adminURL'
-     'The default is internalURL.')]
+     'The default is internalURL.'),
+cfg.StrOpt('region_name',
+           help='Region in Identity service catalog to use for '
+                'communication with the OpenStack service.')]
def register_opts(conf):


@@ -29,7 +29,10 @@ CINDER_CLIENT_OPTS = [
 default='publicURL',
 help='Type of endpoint to use in cinderclient.'
      'Supported values: internalURL, publicURL, adminURL'
-     'The default is publicURL.')]
+     'The default is publicURL.'),
+cfg.StrOpt('region_name',
+           help='Region in Identity service catalog to use for '
+                'communication with the OpenStack service.')]
def register_opts(conf):


@@ -29,7 +29,10 @@ GLANCE_CLIENT_OPTS = [
 default='publicURL',
 help='Type of endpoint to use in glanceclient.'
      'Supported values: internalURL, publicURL, adminURL'
-     'The default is publicURL.')]
+     'The default is publicURL.'),
+cfg.StrOpt('region_name',
+           help='Region in Identity service catalog to use for '
+                'communication with the OpenStack service.')]
def register_opts(conf):


@@ -30,6 +30,9 @@ GNOCCHI_CLIENT_OPTS = [
 help='Type of endpoint to use in gnocchi client.'
      'Supported values: internal, public, admin'
      'The default is public.'),
+cfg.StrOpt('region_name',
+           help='Region in Identity service catalog to use for '
+                'communication with the OpenStack service.'),
 cfg.IntOpt('query_max_retries',
            default=10,
            help='How many times Watcher is trying to query again'),


@@ -29,7 +29,10 @@ IRONIC_CLIENT_OPTS = [
 default='publicURL',
 help='Type of endpoint to use in ironicclient.'
      'Supported values: internalURL, publicURL, adminURL'
-     'The default is publicURL.')]
+     'The default is publicURL.'),
+cfg.StrOpt('region_name',
+           help='Region in Identity service catalog to use for '
+                'communication with the OpenStack service.')]
def register_opts(conf):


@@ -29,7 +29,10 @@ MONASCA_CLIENT_OPTS = [
 default='internal',
 help='Type of interface used for monasca endpoint.'
      'Supported values: internal, public, admin'
-     'The default is internal.')]
+     'The default is internal.'),
+cfg.StrOpt('region_name',
+           help='Region in Identity service catalog to use for '
+                'communication with the OpenStack service.')]
def register_opts(conf):


@@ -29,7 +29,10 @@ NEUTRON_CLIENT_OPTS = [
 default='publicURL',
 help='Type of endpoint to use in neutronclient.'
      'Supported values: internalURL, publicURL, adminURL'
-     'The default is publicURL.')]
+     'The default is publicURL.'),
+cfg.StrOpt('region_name',
+           help='Region in Identity service catalog to use for '
+                'communication with the OpenStack service.')]
def register_opts(conf):


@@ -29,7 +29,10 @@ NOVA_CLIENT_OPTS = [
 default='publicURL',
 help='Type of endpoint to use in novaclient.'
      'Supported values: internalURL, publicURL, adminURL'
-     'The default is publicURL.')]
+     'The default is publicURL.'),
+cfg.StrOpt('region_name',
+           help='Region in Identity service catalog to use for '
+                'communication with the OpenStack service.')]
def register_opts(conf):

watcher/datasource/base.py

@@ -0,0 +1,126 @@
# -*- encoding: utf-8 -*-
# Copyright 2017 NEC Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc

import six


@six.add_metaclass(abc.ABCMeta)
class DataSourceBase(object):
METRIC_MAP = dict(
ceilometer=dict(host_cpu_usage='compute.node.cpu.percent',
instance_cpu_usage='cpu_util',
instance_l3_cache_usage='cpu_l3_cache',
host_outlet_temp=(
'hardware.ipmi.node.outlet_temperature'),
host_airflow='hardware.ipmi.node.airflow',
host_inlet_temp='hardware.ipmi.node.temperature',
host_power='hardware.ipmi.node.power',
instance_ram_usage='memory.resident',
instance_ram_allocated='memory',
instance_root_disk_size='disk.root.size',
host_memory_usage='hardware.memory.used', ),
gnocchi=dict(host_cpu_usage='compute.node.cpu.percent',
instance_cpu_usage='cpu_util',
instance_l3_cache_usage='cpu_l3_cache',
host_outlet_temp='hardware.ipmi.node.outlet_temperature',
host_airflow='hardware.ipmi.node.airflow',
host_inlet_temp='hardware.ipmi.node.temperature',
host_power='hardware.ipmi.node.power',
instance_ram_usage='memory.resident',
instance_ram_allocated='memory',
instance_root_disk_size='disk.root.size',
host_memory_usage='hardware.memory.used'
),
monasca=dict(host_cpu_usage='cpu.percent',
instance_cpu_usage='vm.cpu.utilization_perc',
instance_l3_cache_usage=None,
host_outlet_temp=None,
host_airflow=None,
host_inlet_temp=None,
host_power=None,
instance_ram_usage=None,
instance_ram_allocated=None,
instance_root_disk_size=None,
host_memory_usage=None
),
)
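The `METRIC_MAP` above translates a datasource-neutral metric name into each backend's meter name, with `None` marking metrics the backend cannot supply. A standalone sketch of the lookup (the map below is a small subset of the real one, and the helper name is illustrative):

```python
# Subset of DataSourceBase.METRIC_MAP, for illustration only.
METRIC_MAP = {
    'ceilometer': {'host_cpu_usage': 'compute.node.cpu.percent',
                   'host_power': 'hardware.ipmi.node.power'},
    'monasca': {'host_cpu_usage': 'cpu.percent',
                'host_power': None},
}


def resolve_meter(datasource, metric):
    """Translate a generic metric name to the backend's meter name.

    Returns None when the backend has no equivalent meter, which
    callers should treat as 'metric unavailable'.
    """
    return METRIC_MAP[datasource].get(metric)


print(resolve_meter('ceilometer', 'host_cpu_usage'))
print(resolve_meter('monasca', 'host_power'))  # None -> unavailable
```

This indirection is what lets the strategy code ask for `host_cpu_usage` without caring which telemetry backend is configured.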
@abc.abstractmethod
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
pass
@abc.abstractmethod
def list_metrics(self):
pass
@abc.abstractmethod
def check_availability(self):
pass
@abc.abstractmethod
def get_host_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_memory_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_memory_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_l3_cache_usage(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_ram_allocated(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_instance_root_disk_allocated(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_outlet_temperature(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_inlet_temperature(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_airflow(self, resource_id, period, aggregate,
granularity=None):
pass
@abc.abstractmethod
def get_host_power(self, resource_id, period, aggregate, granularity=None):
pass


@@ -24,9 +24,14 @@ from oslo_utils import timeutils
from watcher._i18n import _
from watcher.common import clients
from watcher.common import exception
from watcher.datasource import base
class CeilometerHelper(object):
class CeilometerHelper(base.DataSourceBase):
NAME = 'ceilometer'
METRIC_MAP = base.DataSourceBase.METRIC_MAP['ceilometer']
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
self.osc = osc if osc else clients.OpenStackClients()
@@ -110,6 +115,13 @@ class CeilometerHelper(object):
except Exception:
raise
def check_availability(self):
try:
self.query_retry(self.ceilometer.resources.list)
except Exception:
return 'not available'
return 'available'
def query_sample(self, meter_name, query, limit=1):
return self.query_retry(f=self.ceilometer.samples.list,
meter_name=meter_name,
@@ -124,28 +136,37 @@ class CeilometerHelper(object):
period=period)
return statistics
-def meter_list(self, query=None):
+def list_metrics(self):
     """List the user's meters."""
-    meters = self.query_retry(f=self.ceilometer.meters.list,
-                              query=query)
-    return meters
+    try:
+        meters = self.query_retry(f=self.ceilometer.meters.list)
+    except Exception:
+        return set()
+    else:
+        return meters
-def statistic_aggregation(self,
-                          resource_id,
-                          meter_name,
-                          period,
-                          aggregate='avg'):
+def statistic_aggregation(self, resource_id=None, meter_name=None,
+                          period=300, granularity=300, dimensions=None,
+                          aggregation='avg', group_by='*'):
"""Representing a statistic aggregate by operators
 :param resource_id: id of resource to list statistics for.
 :param meter_name: Name of meter to list statistics for.
 :param period: Period in seconds over which to group samples.
-:param aggregate: Available aggregates are: count, cardinality,
-    min, max, sum, stddev, avg. Defaults to avg.
+:param granularity: frequency of marking metric point, in seconds.
+    This param isn't used in Ceilometer datasource.
+:param dimensions: dimensions (dict). This param isn't used in
+    Ceilometer datasource.
+:param aggregation: Available aggregates are: count, cardinality,
+    min, max, sum, stddev, avg. Defaults to avg.
+:param group_by: list of columns to group the metrics to be returned.
+    This param isn't used in Ceilometer datasource.
 :return: Return the latest statistical data, None if no data.
"""
 end_time = datetime.datetime.utcnow()
+if aggregation == 'mean':
+    aggregation = 'avg'
 start_time = end_time - datetime.timedelta(seconds=int(period))
query = self.build_query(
resource_id=resource_id, start_time=start_time, end_time=end_time)
@@ -154,11 +175,11 @@ class CeilometerHelper(object):
 q=query,
 period=period,
 aggregates=[
-    {'func': aggregate}])
+    {'func': aggregation}])
 item_value = None
 if statistic:
-    item_value = statistic[-1]._info.get('aggregate').get(aggregate)
+    item_value = statistic[-1]._info.get('aggregate').get(aggregation)
 return item_value
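The method above derives its query window from "now minus period" and normalizes Gnocchi-style `'mean'` to Ceilometer's `'avg'`. A self-contained sketch of just that preamble (the helper name is illustrative):

```python
import datetime


def build_window(period, aggregation='avg'):
    """Compute the [start, end] sample window used by
    statistic_aggregation and normalize Gnocchi-style 'mean'
    to Ceilometer's 'avg' aggregate name."""
    if aggregation == 'mean':
        aggregation = 'avg'
    end_time = datetime.datetime.utcnow()
    start_time = end_time - datetime.timedelta(seconds=int(period))
    return start_time, end_time, aggregation


start, end, agg = build_window(300, 'mean')
print(agg)                    # avg
print((end - start).seconds)  # 300
```

Normalizing the aggregate name here is what lets callers use one vocabulary regardless of which datasource backs the request.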
def get_last_sample_values(self, resource_id, meter_name, limit=1):
@@ -182,3 +203,69 @@ class CeilometerHelper(object):
return samples[-1]._info['counter_volume']
else:
return False
def get_host_cpu_usage(self, resource_id, period, aggregate,
                       granularity=None):
    meter_name = self.METRIC_MAP.get('host_cpu_usage')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_instance_cpu_usage(self, resource_id, period, aggregate,
                           granularity=None):
    meter_name = self.METRIC_MAP.get('instance_cpu_usage')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_host_memory_usage(self, resource_id, period, aggregate,
                          granularity=None):
    meter_name = self.METRIC_MAP.get('host_memory_usage')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_instance_memory_usage(self, resource_id, period, aggregate,
                              granularity=None):
    meter_name = self.METRIC_MAP.get('instance_ram_usage')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_instance_l3_cache_usage(self, resource_id, period, aggregate,
                                granularity=None):
    meter_name = self.METRIC_MAP.get('instance_l3_cache_usage')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_instance_ram_allocated(self, resource_id, period, aggregate,
                               granularity=None):
    meter_name = self.METRIC_MAP.get('instance_ram_allocated')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_instance_root_disk_allocated(self, resource_id, period, aggregate,
                                     granularity=None):
    meter_name = self.METRIC_MAP.get('instance_root_disk_size')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_host_outlet_temperature(self, resource_id, period, aggregate,
                                granularity=None):
    meter_name = self.METRIC_MAP.get('host_outlet_temp')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_host_inlet_temperature(self, resource_id, period, aggregate,
                               granularity=None):
    meter_name = self.METRIC_MAP.get('host_inlet_temp')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_host_airflow(self, resource_id, period, aggregate,
                     granularity=None):
    meter_name = self.METRIC_MAP.get('host_airflow')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)

def get_host_power(self, resource_id, period, aggregate,
                   granularity=None):
    meter_name = self.METRIC_MAP.get('host_power')
    return self.statistic_aggregation(resource_id, meter_name, period,
                                      granularity, aggregation=aggregate)


@@ -17,6 +17,7 @@
# limitations under the License.
from datetime import datetime
from datetime import timedelta
import time
from oslo_config import cfg
@@ -25,12 +26,16 @@ from oslo_log import log
 from watcher.common import clients
 from watcher.common import exception
 from watcher.common import utils as common_utils
+from watcher.datasource import base
 CONF = cfg.CONF
 LOG = log.getLogger(__name__)
-class GnocchiHelper(object):
+class GnocchiHelper(base.DataSourceBase):
+
+    NAME = 'gnocchi'
+    METRIC_MAP = base.DataSourceBase.METRIC_MAP['gnocchi']
""":param osc: an OpenStackClients instance"""
@@ -44,34 +49,44 @@ class GnocchiHelper(object):
 except Exception as e:
     LOG.exception(e)
     time.sleep(CONF.gnocchi_client.query_timeout)
-raise
+raise exception.DataSourceNotAvailable(datasource='gnocchi')
-def statistic_aggregation(self,
-                          resource_id,
-                          metric,
-                          granularity,
-                          start_time=None,
-                          stop_time=None,
-                          aggregation='mean'):
+def check_availability(self):
+    try:
+        self.query_retry(self.gnocchi.status.get)
+    except Exception:
+        return 'not available'
+    return 'available'
+
+def list_metrics(self):
+    """List the user's meters."""
+    try:
+        response = self.query_retry(f=self.gnocchi.metric.list)
+    except Exception:
+        return set()
+    else:
+        return set([metric['name'] for metric in response])
+
+def statistic_aggregation(self, resource_id=None, meter_name=None,
+                          period=300, granularity=300, dimensions=None,
+                          aggregation='avg', group_by='*'):
"""Representing a statistic aggregate by operators
:param metric: metric name of which we want the statistics
:param resource_id: id of resource to list statistics for
:param start_time: Start datetime from which metrics will be used
:param stop_time: End datetime from which metrics will be used
:param granularity: frequency of marking metric point, in seconds
:param resource_id: id of resource to list statistics for.
:param meter_name: meter name of which we want the statistics.
:param period: Period in seconds over which to group samples.
:param granularity: frequency of marking metric point, in seconds.
:param dimensions: dimensions (dict). This param isn't used in
Gnocchi datasource.
:param aggregation: Should be chosen in accordance with policy
aggregations
aggregations.
:param group_by: list of columns to group the metrics to be returned.
This param isn't used in Gnocchi datasource.
:return: value of aggregated metric
"""
if start_time is not None and not isinstance(start_time, datetime):
raise exception.InvalidParameter(parameter='start_time',
parameter_type=datetime)
if stop_time is not None and not isinstance(stop_time, datetime):
raise exception.InvalidParameter(parameter='stop_time',
parameter_type=datetime)
stop_time = datetime.utcnow()
start_time = stop_time - timedelta(seconds=(int(period)))
if not common_utils.is_uuid_like(resource_id):
kwargs = dict(query={"=": {"original_resource_id": resource_id}},
@@ -85,7 +100,7 @@ class GnocchiHelper(object):
resource_id = resources[0]['id']
 raw_kwargs = dict(
-    metric=metric,
+    metric=meter_name,
     start=start_time,
stop=stop_time,
resource_id=resource_id,
@@ -102,3 +117,69 @@ class GnocchiHelper(object):
# return value of latest measure
# measure has structure [time, granularity, value]
return statistics[-1][2]
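As the comment above notes, each Gnocchi measure has the structure `[time, granularity, value]` and the helper returns the value of the latest one. A standalone sketch of that extraction (the function name is illustrative):

```python
def latest_measure_value(measures):
    """Each Gnocchi measure is [timestamp, granularity, value];
    return the value of the newest one, or None when the backend
    returned no data."""
    if not measures:
        return None
    return measures[-1][2]


measures = [
    ['2018-01-01T00:00:00', 300, 0.5],
    ['2018-01-01T00:05:00', 300, 0.8],
]
print(latest_measure_value(measures))  # 0.8
```

Guarding the empty case matters here: without it, an empty measures list would raise `IndexError` instead of signalling "no data" to the strategy.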
def get_host_cpu_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_cpu_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_cpu_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_cpu_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_memory_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_memory_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_memory_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_ram_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_l3_cache_usage(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_l3_cache_usage')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_ram_allocated(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_ram_allocated')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_instance_root_disk_allocated(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('instance_root_disk_size')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_outlet_temperature(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_outlet_temp')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_inlet_temperature(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_inlet_temp')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_airflow(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_airflow')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
def get_host_power(self, resource_id, period, aggregate,
granularity=300):
meter_name = self.METRIC_MAP.get('host_power')
return self.statistic_aggregation(resource_id, meter_name, period,
granularity, aggregation=aggregate)
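The wrapper methods above all share one shape: look up a meter name in METRIC_MAP, then delegate to statistic_aggregation. A minimal sketch of that dispatch, with a stub helper and made-up metric names standing in for Watcher's real classes:

```python
class MiniGnocchiHelper:
    # Simplified stand-in for Watcher's real METRIC_MAP; names are assumptions.
    METRIC_MAP = {
        'host_cpu_usage': 'compute.node.cpu.percent',
        'instance_cpu_usage': 'cpu_util',
    }

    def statistic_aggregation(self, resource_id, meter_name, period,
                              granularity, aggregation='mean'):
        # A real helper would query Gnocchi here; this stub echoes the call.
        return (resource_id, meter_name, period, granularity, aggregation)

    def get_host_cpu_usage(self, resource_id, period, aggregate,
                           granularity=300):
        # Same pattern as the methods above: resolve the meter, delegate.
        meter_name = self.METRIC_MAP.get('host_cpu_usage')
        return self.statistic_aggregation(resource_id, meter_name, period,
                                          granularity, aggregation=aggregate)

helper = MiniGnocchiHelper()
print(helper.get_host_cpu_usage('node-1', 300, 'mean'))
# ('node-1', 'compute.node.cpu.percent', 300, 300, 'mean')
```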

View File

@@ -0,0 +1,78 @@
# -*- encoding: utf-8 -*-
# Copyright 2017 NEC Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher.common import exception
from watcher.datasource import base
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.datasource import monasca as mon
LOG = log.getLogger(__name__)
class DataSourceManager(object):
def __init__(self, config=None, osc=None):
self.osc = osc
self.config = config
self._ceilometer = None
self._monasca = None
self._gnocchi = None
self.metric_map = base.DataSourceBase.METRIC_MAP
self.datasources = self.config.datasources
@property
def ceilometer(self):
if self._ceilometer is None:
self._ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, ceilometer):
self._ceilometer = ceilometer
@property
def monasca(self):
if self._monasca is None:
self._monasca = mon.MonascaHelper(osc=self.osc)
return self._monasca
@monasca.setter
def monasca(self, monasca):
self._monasca = monasca
@property
def gnocchi(self):
if self._gnocchi is None:
self._gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
def get_backend(self, metrics):
for datasource in self.datasources:
no_metric = False
for metric in metrics:
if (metric not in self.metric_map[datasource] or
self.metric_map[datasource].get(metric) is None):
no_metric = True
break
if not no_metric:
return getattr(self, datasource)
raise exception.NoSuchMetric()
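get_backend() above returns the first configured datasource whose metric map covers every requested metric; a metric that is missing from the map, or mapped to None, disqualifies that datasource. A runnable sketch of the same selection, with an illustrative metric map that is an assumption rather than Watcher's real METRIC_MAP:

```python
# Illustrative metric map: datasource name -> {generic metric -> backend meter}.
METRIC_MAP = {
    'gnocchi': {'host_cpu_usage': 'compute.node.cpu.percent',
                'host_power': None},  # declared but not mapped -> disqualifies
    'monasca': {'host_cpu_usage': 'cpu.percent',
                'host_power': 'power.watts'},
}

def get_backend(datasources, metrics):
    # First datasource (in configured order) that maps every metric wins.
    for datasource in datasources:
        mapped = METRIC_MAP[datasource]
        if all(mapped.get(metric) is not None for metric in metrics):
            return datasource
    raise LookupError('no configured datasource provides: %s' % metrics)

print(get_backend(['gnocchi', 'monasca'], ['host_cpu_usage']))  # gnocchi
print(get_backend(['gnocchi', 'monasca'],
                  ['host_cpu_usage', 'host_power']))            # monasca
```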

View File

@@ -21,9 +21,14 @@ import datetime
from monascaclient import exc
from watcher.common import clients
from watcher.common import exception
from watcher.datasource import base
class MonascaHelper(base.DataSourceBase):
NAME = 'monasca'
METRIC_MAP = base.DataSourceBase.METRIC_MAP['monasca']
def __init__(self, osc=None):
""":param osc: an OpenStackClients instance"""
@@ -61,6 +66,18 @@ class MonascaHelper(object):
return start_timestamp, end_timestamp, period
def check_availability(self):
try:
self.query_retry(self.monasca.metrics.list)
except Exception:
return 'not available'
return 'available'
def list_metrics(self):
# TODO(alexchadin): this method should be implemented in accordance
# with the Monasca API.
pass
def statistics_list(self, meter_name, dimensions, start_time=None,
end_time=None, period=None,):
"""List of statistics."""
@@ -81,38 +98,42 @@ class MonascaHelper(object):
return statistics
def statistic_aggregation(self, resource_id=None, meter_name=None,
period=300, granularity=300, dimensions=None,
aggregation='avg', group_by='*'):
"""Representing a statistic aggregate by operators
:param resource_id: id of resource to list statistics for.
This param isn't used in Monasca datasource.
:param meter_name: meter names of which we want the statistics.
:param period: Sampling `period`: In seconds. If no period is given,
only one aggregate statistic is returned. If given, a
faceted result will be returned, divided into given
periods. Periods with no data are ignored.
:param granularity: frequency of marking metric point, in seconds.
This param isn't used in Ceilometer datasource.
:param dimensions: dimensions (dict).
:param aggregation: Should be either 'avg', 'count', 'min' or 'max'.
:param group_by: list of columns to group the metrics to be returned.
:return: A list of dict with each dict being a distinct result row
"""
if dimensions is None:
raise exception.UnsupportedDataSource(datasource='Monasca')
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(seconds=(int(period)))
if aggregation == 'mean':
aggregation = 'avg'
raw_kwargs = dict(
name=meter_name,
start_time=start_time.isoformat(),
end_time=stop_time.isoformat(),
dimensions=dimensions,
period=period,
statistics=aggregation,
group_by=group_by,
)
@@ -121,4 +142,69 @@ class MonascaHelper(object):
statistics = self.query_retry(
f=self.monasca.metrics.list_statistics, **kwargs)
cpu_usage = None
for stat in statistics:
avg_col_idx = stat['columns'].index(aggregation)
values = [r[avg_col_idx] for r in stat['statistics']]
value = float(sum(values)) / len(values)
cpu_usage = value
return cpu_usage
def get_host_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
metric_name = self.METRIC_MAP.get('host_cpu_usage')
node_uuid = resource_id.split('_')[0]
return self.statistic_aggregation(
meter_name=metric_name,
dimensions=dict(hostname=node_uuid),
period=period,
aggregation=aggregate
)
def get_instance_cpu_usage(self, resource_id, period, aggregate,
granularity=None):
metric_name = self.METRIC_MAP.get('instance_cpu_usage')
return self.statistic_aggregation(
meter_name=metric_name,
dimensions=dict(resource_id=resource_id),
period=period,
aggregation=aggregate
)
def get_host_memory_usage(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_instance_memory_usage(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_instance_l3_cache_usage(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_instance_ram_allocated(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_instance_root_disk_allocated(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_host_outlet_temperature(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_host_inlet_temperature(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_host_airflow(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
def get_host_power(self, resource_id, period, aggregate,
granularity=None):
raise NotImplementedError
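The averaging loop added to statistic_aggregation above works on Monasca's list_statistics response shape: each result carries a 'columns' list naming the fields and a 'statistics' list of rows, and the helper averages the column matching the requested aggregation. A sketch against a hand-written payload of that shape:

```python
# Hand-written illustration of a Monasca list_statistics response.
statistics = [{
    'columns': ['timestamp', 'avg'],
    'statistics': [
        ['2018-01-01T00:00:00Z', 10.0],
        ['2018-01-01T00:05:00Z', 20.0],
    ],
}]

aggregation = 'avg'
result = None
for stat in statistics:
    # Find which column holds the requested aggregation, then average it.
    avg_col_idx = stat['columns'].index(aggregation)
    values = [row[avg_col_idx] for row in stat['statistics']]
    result = float(sum(values)) / len(values)

print(result)  # 15.0
```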

View File

@@ -0,0 +1,34 @@
"""Set name for Audit as part of backward compatibility
Revision ID: a86240e89a29
Revises: 3cfc94cecf4e
Create Date: 2017-12-21 13:00:09.278587
"""
# revision identifiers, used by Alembic.
revision = 'a86240e89a29'
down_revision = '3cfc94cecf4e'
from alembic import op
from sqlalchemy.orm import sessionmaker
from watcher.db.sqlalchemy import models
def upgrade():
connection = op.get_bind()
session = sessionmaker()
s = session(bind=connection)
for audit in s.query(models.Audit).filter(models.Audit.name.is_(None)).all():
strategy_name = s.query(models.Strategy).filter_by(id=audit.strategy_id).one().name
audit.update({'name': strategy_name + '-' + str(audit.created_at)})
s.commit()
def downgrade():
connection = op.get_bind()
session = sessionmaker()
s = session(bind=connection)
for audit in s.query(models.Audit).filter(models.Audit.name.isnot(None)).all():
audit.update({'name': None})
s.commit()
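The upgrade/downgrade filters above test for NULL names. Worth noting: in SQLAlchemy, `models.Audit.name is None` is a plain Python identity check evaluated before the query is built, so it is always False and matches no rows; the NULL test is spelled `.is_(None)` (and `.isnot(None)`). A minimal stand-in illustrating the difference; the `Column` class below is a hand-rolled assumption mimicking one SQLAlchemy behaviour, not the real library:

```python
class Column:
    """Stand-in mimicking the one relevant SQLAlchemy behaviour (an assumption)."""
    def __init__(self, name):
        self.name = name

    def is_(self, other):
        # SQLAlchemy renders this as a real "col IS NULL" SQL criterion.
        if other is None:
            return '%s IS NULL' % self.name
        raise NotImplementedError

name_col = Column('name')

# `col is None` is evaluated by Python itself, before the ORM is involved:
print(name_col is None)    # False -- so filter(... is None) matches nothing
print(name_col.is_(None))  # name IS NULL
```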

View File

@@ -129,14 +129,25 @@ class ContinuousAuditHandler(base.AuditHandler):
audits = objects.Audit.list(
audit_context, filters=audit_filters, eager=True)
scheduler_job_args = [
(job.args[0].uuid, job) for job
in self.scheduler.get_jobs()
if job.name == 'execute_audit']
scheduler_jobs = dict(scheduler_job_args)
# if audit isn't in active states, audit's job should be removed
for job in list(scheduler_jobs.values()):
if self._is_audit_inactive(job.args[0]):
scheduler_jobs.pop(job.args[0].uuid)
for audit in audits:
existing_job = scheduler_jobs.get(audit.uuid, None)
# if audit is not presented in scheduled audits yet,
# just add a new audit job.
# if audit is already in the job queue, and interval has changed,
# we need to remove the old job and add a new one.
if (existing_job is None) or (
existing_job and
audit.interval != existing_job.args[0].interval):
if existing_job:
self.scheduler.remove_job(existing_job.id)
# if interval is provided with seconds
if utils.is_int_like(audit.interval):
# if audit has already been provided and we need
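The rescheduling change above replaces the flat list of job args with a dict keyed by audit UUID, so a changed interval can be detected and the stale job replaced. A simplified sketch of that decision, with stand-in Audit objects and a plain dict in place of the APScheduler job list:

```python
class Audit:
    # Stand-in for a Watcher audit object; only the fields used here.
    def __init__(self, uuid, interval):
        self.uuid, self.interval = uuid, interval

# uuid -> (job id, scheduled audit), as built from the scheduler's jobs.
scheduler_jobs = {'a1': ('job-1', Audit('a1', 60))}
audits = [Audit('a1', 120), Audit('a2', 60)]

to_reschedule = []
for audit in audits:
    existing = scheduler_jobs.get(audit.uuid)
    # New audit, or interval changed: the old job (if any) must be replaced.
    if existing is None or audit.interval != existing[1].interval:
        to_reschedule.append(audit.uuid)

print(to_reschedule)  # ['a1', 'a2']
```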

View File

@@ -23,7 +23,10 @@ Unclassified = goals.Unclassified
WorkloadBalancing = goals.WorkloadBalancing
NoisyNeighbor = goals.NoisyNeighborOptimization
SavingEnergy = goals.SavingEnergy
HardwareMaintenance = goals.HardwareMaintenance
__all__ = ("Dummy", "ServerConsolidation", "ThermalOptimization",
"Unclassified", "WorkloadBalancing",
"NoisyNeighborOptimization", "SavingEnergy",
"HardwareMaintenance")

View File

@@ -112,3 +112,118 @@ class InstanceMigrationsCount(IndicatorSpecification):
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class LiveInstanceMigrateCount(IndicatorSpecification):
def __init__(self):
super(LiveInstanceMigrateCount, self).__init__(
name="live_migrate_instance_count",
description=_("The number of instances actually live migrated."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class PlannedLiveInstanceMigrateCount(IndicatorSpecification):
def __init__(self):
super(PlannedLiveInstanceMigrateCount, self).__init__(
name="planned_live_migrate_instance_count",
description=_("The number of instances planned to live migrate."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class ColdInstanceMigrateCount(IndicatorSpecification):
def __init__(self):
super(ColdInstanceMigrateCount, self).__init__(
name="cold_migrate_instance_count",
description=_("The number of instances actually cold migrated."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class PlannedColdInstanceMigrateCount(IndicatorSpecification):
def __init__(self):
super(PlannedColdInstanceMigrateCount, self).__init__(
name="planned_cold_migrate_instance_count",
description=_("The number of instances planned to cold migrate."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class VolumeMigrateCount(IndicatorSpecification):
def __init__(self):
super(VolumeMigrateCount, self).__init__(
name="volume_migrate_count",
description=_("The number of detached volumes actually migrated."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class PlannedVolumeMigrateCount(IndicatorSpecification):
def __init__(self):
super(PlannedVolumeMigrateCount, self).__init__(
name="planned_volume_migrate_count",
description=_("The number of detached volumes planned"
" to migrate."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class VolumeUpdateCount(IndicatorSpecification):
def __init__(self):
super(VolumeUpdateCount, self).__init__(
name="volume_update_count",
description=_("The number of attached volumes actually"
" migrated."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)
class PlannedVolumeUpdateCount(IndicatorSpecification):
def __init__(self):
super(PlannedVolumeUpdateCount, self).__init__(
name="planned_volume_update_count",
description=_("The number of attached volumes planned to"
" migrate."),
unit=None,
)
@property
def schema(self):
return voluptuous.Schema(
voluptuous.Range(min=0), required=True)

View File

@@ -53,3 +53,86 @@ class ServerConsolidation(base.EfficacySpecification):
))
return global_efficacy
class HardwareMaintenance(base.EfficacySpecification):
def get_indicators_specifications(self):
return [
indicators.LiveInstanceMigrateCount(),
indicators.PlannedLiveInstanceMigrateCount(),
indicators.ColdInstanceMigrateCount(),
indicators.PlannedColdInstanceMigrateCount(),
indicators.VolumeMigrateCount(),
indicators.PlannedVolumeMigrateCount(),
indicators.VolumeUpdateCount(),
indicators.PlannedVolumeUpdateCount()
]
def get_global_efficacy_indicator(self, indicators_map=None):
li_value = 0
if (indicators_map and
indicators_map.planned_live_migrate_instance_count > 0):
li_value = (
float(indicators_map.planned_live_migrate_instance_count)
/ float(indicators_map.live_migrate_instance_count)
* 100
)
li_indicator = efficacy.Indicator(
name="live_instance_migrate_ratio",
description=_("Ratio of actual live migrated instances "
"to planned live migrate instances."),
unit='%',
value=li_value)
ci_value = 0
if (indicators_map and
indicators_map.planned_cold_migrate_instance_count > 0):
ci_value = (
float(indicators_map.planned_cold_migrate_instance_count)
/ float(indicators_map.cold_migrate_instance_count)
* 100
)
ci_indicator = efficacy.Indicator(
name="cold_instance_migrate_ratio",
description=_("Ratio of actual cold migrated instances "
"to planned cold migrate instances."),
unit='%',
value=ci_value)
dv_value = 0
if (indicators_map and
indicators_map.planned_volume_migrate_count > 0):
dv_value = (float(indicators_map.planned_volume_migrate_count) /
float(indicators_map.
volume_migrate_count)
* 100)
dv_indicator = efficacy.Indicator(
name="volume_migrate_ratio",
description=_("Ratio of actual detached volumes migrated to"
" planned detached volumes migrate."),
unit='%',
value=dv_value)
av_value = 0
if (indicators_map and
indicators_map.planned_volume_update_count > 0):
av_value = (float(indicators_map.planned_volume_update_count) /
float(indicators_map.
volume_update_count)
* 100)
av_indicator = efficacy.Indicator(
name="volume_update_ratio",
description=_("Ratio of actual attached volumes migrated to"
" planned attached volumes migrate."),
unit='%',
value=av_value)
return [li_indicator,
ci_indicator,
dv_indicator,
av_indicator]
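Each indicator value above is computed as the planned count divided by the actual count, times 100, and only when the planned count is positive. A sketch of that calculation; the additional guard against a zero actual count is an added assumption, not part of the code above:

```python
def ratio(planned, actual):
    # Mirrors the pattern above: planned / actual * 100 when planned > 0.
    # Guarding actual > 0 as well (an assumption) avoids ZeroDivisionError.
    if planned > 0 and actual > 0:
        return float(planned) / float(actual) * 100
    return 0

print(ratio(4, 5))  # 80.0
print(ratio(0, 0))  # 0
```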

View File

@@ -216,3 +216,28 @@ class SavingEnergy(base.Goal):
def get_efficacy_specification(cls):
"""The efficacy spec for the current goal"""
return specs.Unclassified()
class HardwareMaintenance(base.Goal):
"""HardwareMaintenance
This goal is to migrate instances and volumes away from a set of
compute and storage nodes that are under maintenance
"""
@classmethod
def get_name(cls):
return "hardware_maintenance"
@classmethod
def get_display_name(cls):
return _("Hardware Maintenance")
@classmethod
def get_translatable_display_name(cls):
return "Hardware Maintenance"
@classmethod
def get_efficacy_specification(cls):
"""The efficacy spec for the current goal"""
return specs.HardwareMaintenance()

View File

@@ -40,6 +40,8 @@ See :doc:`../architecture` for more details on this component.
from watcher.common import service_manager
from watcher.decision_engine.messaging import audit_endpoint
from watcher.decision_engine.model.collector import manager
from watcher.decision_engine.strategy.strategies import base \
as strategy_endpoint
from watcher import conf
@@ -70,7 +72,8 @@ class DecisionEngineManager(service_manager.ServiceManager):
@property
def conductor_endpoints(self):
return [audit_endpoint.AuditEndpoint,
strategy_endpoint.StrategyEndpoint]
@property
def notification_endpoints(self):

View File

@@ -48,7 +48,7 @@ class AuditEndpoint(object):
self._oneshot_handler.execute(audit, context)
def trigger_audit(self, context, audit_uuid):
LOG.debug("Trigger audit %s", audit_uuid)
self.executor.submit(self.do_trigger_audit,
context,
audit_uuid)

View File

@@ -23,6 +23,7 @@ from watcher.decision_engine.model.collector import base
from watcher.decision_engine.model import element
from watcher.decision_engine.model import model_root
from watcher.decision_engine.model.notification import cinder
from watcher.decision_engine.scope import storage as storage_scope
LOG = log.getLogger(__name__)
@@ -33,6 +34,85 @@ class CinderClusterDataModelCollector(base.BaseClusterDataModelCollector):
The Cinder cluster data model collector creates an in-memory
representation of the resources exposed by the storage service.
"""
SCHEMA = {
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "array",
"items": {
"type": "object",
"properties": {
"availability_zones": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
}
},
"additionalProperties": False
}
},
"volume_types": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
}
},
"additionalProperties": False
}
},
"exclude": {
"type": "array",
"items": {
"type": "object",
"properties": {
"storage_pools": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
}
},
"additionalProperties": False
}
},
"volumes": {
"type": "array",
"items": {
"type": "object",
"properties": {
"uuid": {
"type": "string"
}
},
"additionalProperties": False
}
},
"projects": {
"type": "array",
"items": {
"type": "object",
"properties": {
"uuid": {
"type": "string"
}
},
"additionalProperties": False
}
},
"additionalProperties": False
}
}
}
},
"additionalProperties": False
}
}
def __init__(self, config, osc=None):
super(CinderClusterDataModelCollector, self).__init__(config, osc)
@@ -55,7 +135,9 @@ class CinderClusterDataModelCollector(base.BaseClusterDataModelCollector):
]
def get_audit_scope_handler(self, audit_scope):
self._audit_scope_handler = storage_scope.StorageScope(
audit_scope, self.config)
return self._audit_scope_handler
def execute(self):
"""Build the storage cluster data model"""
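The SCHEMA above describes the storage audit scope: an array of objects, each naming availability_zones, volume_types, or an exclude list of storage_pools, volumes, or projects. An example scope document of that shape (all names and UUIDs are invented for illustration), with a minimal structural check standing in for full jsonschema validation:

```python
# Invented example of a storage audit scope matching the SCHEMA above.
scope = [
    {"availability_zones": [{"name": "nova"}]},
    {"volume_types": [{"name": "lvmdriver-1"}]},
    {"exclude": [
        {"storage_pools": [{"name": "host0@backend0#pool0"}]},
        {"volumes": [{"uuid": "00000000-0000-0000-0000-000000000001"}]},
    ]},
]

# Minimal structural check: collect the top-level keys of each scope entry.
top_level_keys = {k for entry in scope for k in entry}
print(sorted(top_level_keys))
# ['availability_zones', 'exclude', 'volume_types']
```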

View File

@@ -0,0 +1,97 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corporation
#
# Authors:Yumeng Bao <bao.yumeng@zte.com.cn>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher.common import ironic_helper
from watcher.decision_engine.model.collector import base
from watcher.decision_engine.model import element
from watcher.decision_engine.model import model_root
LOG = log.getLogger(__name__)
class BaremetalClusterDataModelCollector(base.BaseClusterDataModelCollector):
"""Baremetal cluster data model collector
The Baremetal cluster data model collector creates an in-memory
representation of the resources exposed by the baremetal service.
"""
def __init__(self, config, osc=None):
super(BaremetalClusterDataModelCollector, self).__init__(config, osc)
@property
def notification_endpoints(self):
"""Associated notification endpoints
:return: Associated notification endpoints
:rtype: List of :py:class:`~.EventsNotificationEndpoint` instances
"""
return None
def get_audit_scope_handler(self, audit_scope):
return None
def execute(self):
"""Build the baremetal cluster data model"""
LOG.debug("Building latest Baremetal cluster data model")
builder = ModelBuilder(self.osc)
return builder.execute()
class ModelBuilder(object):
"""Build the graph-based model
This model builder adds the following data:
- Baremetal-related knowledge (Ironic)
"""
def __init__(self, osc):
self.osc = osc
self.model = model_root.BaremetalModelRoot()
self.ironic_helper = ironic_helper.IronicHelper(osc=self.osc)
def add_ironic_node(self, node):
# Build and add base node.
ironic_node = self.build_ironic_node(node)
self.model.add_node(ironic_node)
def build_ironic_node(self, node):
"""Build a Baremetal node from an Ironic node
:param node: An Ironic node
:type node: :py:class:`~ironicclient.v1.node.Node`
"""
# build up the ironic node.
node_attributes = {
"uuid": node.uuid,
"power_state": node.power_state,
"maintenance": node.maintenance,
"maintenance_reason": node.maintenance_reason,
"extra": {"compute_node_id": node.extra.compute_node_id}
}
ironic_node = element.IronicNode(**node_attributes)
return ironic_node
def execute(self):
for node in self.ironic_helper.get_ironic_node_list():
self.add_ironic_node(node)
return self.model

View File

@@ -158,6 +158,7 @@ class NovaClusterDataModelCollector(base.BaseClusterDataModelCollector):
nova.LegacyInstanceDeletedEnd(self),
nova.LegacyLiveMigratedEnd(self),
nova.LegacyInstanceResizeConfirmEnd(self),
nova.LegacyInstanceRebuildEnd(self),
]
def get_audit_scope_handler(self, audit_scope):

View File

@@ -23,6 +23,7 @@ from watcher.decision_engine.model.element import volume
ServiceState = node.ServiceState
ComputeNode = node.ComputeNode
StorageNode = node.StorageNode
IronicNode = node.IronicNode
Pool = node.Pool
InstanceState = instance.InstanceState
@@ -37,4 +38,5 @@ __all__ = ['ServiceState',
'StorageNode',
'Pool',
'VolumeState',
'Volume']
'Volume',
'IronicNode']

View File

@@ -0,0 +1,33 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import six
from watcher.decision_engine.model.element import base
from watcher.objects import fields as wfields
@six.add_metaclass(abc.ABCMeta)
class BaremetalResource(base.Element):
VERSION = '1.0'
fields = {
"uuid": wfields.StringField(),
"human_id": wfields.StringField(default=""),
}

View File

@@ -42,6 +42,9 @@ class InstanceState(enum.Enum):
class Instance(compute_resource.ComputeResource):
fields = {
# If the resource is excluded by the scope,
# 'watcher_exclude' property will be set True.
"watcher_exclude": wfields.BooleanField(default=False),
"state": wfields.StringField(default=InstanceState.ACTIVE.value),
"memory": wfields.NonNegativeIntegerField(),

View File

@@ -16,6 +16,7 @@
import enum
from watcher.decision_engine.model.element import baremetal_resource
from watcher.decision_engine.model.element import compute_resource
from watcher.decision_engine.model.element import storage_resource
from watcher.objects import base
@@ -56,7 +57,7 @@ class StorageNode(storage_resource.StorageResource):
"zone": wfields.StringField(),
"status": wfields.StringField(default=ServiceState.ENABLED.value),
"state": wfields.StringField(default=ServiceState.ONLINE.value),
"volume_type": wfields.ListOfStringsField()
}
def accept(self, visitor):
@@ -78,3 +79,17 @@ class Pool(storage_resource.StorageResource):
def accept(self, visitor):
raise NotImplementedError()
@base.WatcherObjectRegistry.register_if(False)
class IronicNode(baremetal_resource.BaremetalResource):
fields = {
"power_state": wfields.StringField(),
"maintenance": wfields.BooleanField(),
"maintenance_reason": wfields.StringField(),
"extra": wfields.DictField()
}
def accept(self, visitor):
raise NotImplementedError()

View File

@@ -508,7 +508,13 @@ class StorageModelRoot(nx.DiGraph, base.Model):
root = etree.fromstring(data)
for cn in root.findall('.//StorageNode'):
ndata = {}
for attr, val in cn.items():
ndata[attr] = val
volume_type = ndata.get('volume_type')
if volume_type:
ndata['volume_type'] = [volume_type]
node = element.StorageNode(**ndata)
model.add_node(node)
for p in root.findall('.//Pool'):
@@ -539,3 +545,85 @@ class StorageModelRoot(nx.DiGraph, base.Model):
def is_isomorphic(cls, G1, G2):
return nx.algorithms.isomorphism.isomorph.is_isomorphic(
G1, G2)
class BaremetalModelRoot(nx.DiGraph, base.Model):
"""Cluster graph for an Openstack cluster: Baremetal Cluster."""
def __init__(self, stale=False):
super(BaremetalModelRoot, self).__init__()
self.stale = stale
def __nonzero__(self):
return not self.stale
__bool__ = __nonzero__
@staticmethod
def assert_node(obj):
if not isinstance(obj, element.IronicNode):
raise exception.IllegalArgumentException(
message=_("'obj' argument type is not valid: %s") % type(obj))
@lockutils.synchronized("baremetal_model")
def add_node(self, node):
self.assert_node(node)
super(BaremetalModelRoot, self).add_node(node.uuid, node)
@lockutils.synchronized("baremetal_model")
def remove_node(self, node):
self.assert_node(node)
try:
super(BaremetalModelRoot, self).remove_node(node.uuid)
except nx.NetworkXError as exc:
LOG.exception(exc)
raise exception.IronicNodeNotFound(name=node.uuid)
@lockutils.synchronized("baremetal_model")
def get_all_ironic_nodes(self):
return {uuid: cn for uuid, cn in self.nodes(data=True)
if isinstance(cn, element.IronicNode)}
@lockutils.synchronized("baremetal_model")
def get_node_by_uuid(self, uuid):
try:
return self._get_by_uuid(uuid)
except exception.BaremetalResourceNotFound:
raise exception.IronicNodeNotFound(name=uuid)
def _get_by_uuid(self, uuid):
try:
return self.node[uuid]
except Exception as exc:
LOG.exception(exc)
raise exception.BaremetalResourceNotFound(name=uuid)
def to_string(self):
return self.to_xml()
def to_xml(self):
root = etree.Element("ModelRoot")
# Build Ironic node tree
for cn in sorted(self.get_all_ironic_nodes().values(),
key=lambda cn: cn.uuid):
ironic_node_el = cn.as_xml_element()
root.append(ironic_node_el)
return etree.tostring(root, pretty_print=True).decode('utf-8')
@classmethod
def from_xml(cls, data):
model = cls()
root = etree.fromstring(data)
for cn in root.findall('.//IronicNode'):
node = element.IronicNode(**cn.attrib)
model.add_node(node)
return model
@classmethod
def is_isomorphic(cls, G1, G2):
return nx.algorithms.isomorphism.isomorph.is_isomorphic(
G1, G2)
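to_xml()/from_xml() above serialize the baremetal model as one <IronicNode> element per node under a <ModelRoot> root, and rebuild nodes from element attributes when parsing. A round-trip sketch using the stdlib ElementTree in place of lxml and plain dicts in place of IronicNode objects:

```python
import xml.etree.ElementTree as ET

# Plain dicts stand in for IronicNode objects; attribute values are strings.
nodes = [{'uuid': 'n1', 'power_state': 'power on', 'maintenance': 'False'}]

# Serialize: one <IronicNode> element per node under a <ModelRoot> root,
# sorted by uuid as in to_xml() above.
root = ET.Element('ModelRoot')
for attrs in sorted(nodes, key=lambda n: n['uuid']):
    ET.SubElement(root, 'IronicNode', attrs)
data = ET.tostring(root)

# Parse back: rebuild the nodes from each element's attributes.
parsed = [dict(el.attrib)
          for el in ET.fromstring(data).findall('.//IronicNode')]
print(parsed == nodes)  # True
```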

View File

@@ -255,7 +255,7 @@ class CapacityNotificationEndpoint(CinderNotification):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -286,7 +286,7 @@ class VolumeCreateEnd(VolumeNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -311,7 +311,7 @@ class VolumeUpdateEnd(VolumeNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -369,7 +369,7 @@ class VolumeDeleteEnd(VolumeNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))

View File

@@ -229,7 +229,7 @@ class ServiceUpdated(VersionedNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -275,7 +275,7 @@ class InstanceCreated(VersionedNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -310,7 +310,7 @@ class InstanceUpdated(VersionedNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -337,7 +337,7 @@ class InstanceDeletedEnd(VersionedNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -372,7 +372,7 @@ class LegacyInstanceUpdated(UnversionedNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -399,7 +399,7 @@ class LegacyInstanceCreatedEnd(UnversionedNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -426,7 +426,7 @@ class LegacyInstanceDeletedEnd(UnversionedNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -459,7 +459,7 @@ class LegacyLiveMigratedEnd(UnversionedNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
@@ -486,7 +486,34 @@ class LegacyInstanceResizeConfirmEnd(UnversionedNotificationEndpoint):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s" %
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))
LOG.debug(payload)
instance_uuid = payload['instance_id']
node_uuid = payload.get('node')
instance = self.get_or_create_instance(instance_uuid, node_uuid)
self.legacy_update_instance(instance, payload)
class LegacyInstanceRebuildEnd(UnversionedNotificationEndpoint):
@property
def filter_rule(self):
"""Nova compute.instance.rebuild.end filter"""
return filtering.NotificationFilter(
publisher_id=self.publisher_id_regex,
event_type='compute.instance.rebuild.end',
)
def info(self, ctxt, publisher_id, event_type, payload, metadata):
ctxt.request_id = metadata['message_id']
ctxt.project_domain = event_type
LOG.info("Event '%(event)s' received from %(publisher)s "
"with metadata %(metadata)s",
dict(event=event_type,
publisher=publisher_id,
metadata=metadata))

@@ -40,6 +40,10 @@ class DecisionEngineAPI(service.Service):
self.conductor_client.cast(
context, 'trigger_audit', audit_uuid=audit_uuid)
def get_strategy_info(self, context, strategy_name):
return self.conductor_client.call(
context, 'get_strategy_info', strategy_name=strategy_name)
class DecisionEngineAPIManager(service_manager.ServiceManager):

@@ -0,0 +1,46 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2018 ZTE Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher.decision_engine.scope import base
LOG = log.getLogger(__name__)
class BaremetalScope(base.BaseScope):
"""Baremetal Audit Scope Handler"""
def __init__(self, scope, config, osc=None):
super(BaremetalScope, self).__init__(scope, config)
self._osc = osc
def get_scoped_model(self, cluster_model):
"""Leave only nodes and instances proposed in the audit scope"""
if not cluster_model:
return None
for scope in self.scope:
baremetal_scope = scope.get('baremetal')
if not baremetal_scope:
return cluster_model
# TODO(yumeng-bao): currently self.scope is always []
# Audit scoper for baremetal data model will be implemented:
# https://blueprints.launchpad.net/watcher/+spec/audit-scoper-for-baremetal-data-model
return cluster_model

@@ -36,6 +36,12 @@ class ComputeScope(base.BaseScope):
node = cluster_model.get_node_by_uuid(node_name)
cluster_model.delete_instance(instance, node)
def update_exclude_instance(self, cluster_model, instance, node_name):
node = cluster_model.get_node_by_uuid(node_name)
cluster_model.unmap_instance(instance, node)
instance.update({"watcher_exclude": True})
cluster_model.map_instance(instance, node)
def _check_wildcard(self, aggregate_list):
if '*' in aggregate_list:
if len(aggregate_list) == 1:
@@ -108,8 +114,9 @@ class ComputeScope(base.BaseScope):
self.remove_instance(cluster_model, instance, node_uuid)
cluster_model.remove_node(node)
-    def remove_instances_from_model(self, instances_to_remove, cluster_model):
-        for instance_uuid in instances_to_remove:
+    def update_exclude_instance_in_model(
+            self, instances_to_exclude, cluster_model):
+        for instance_uuid in instances_to_exclude:
try:
node_name = cluster_model.get_node_by_instance_uuid(
instance_uuid).uuid
@@ -119,7 +126,7 @@ class ComputeScope(base.BaseScope):
" instance was hosted on.",
instance_uuid)
continue
-            self.update_exclude_instance(
+            self.update_exclude_instance(
cluster_model,
cluster_model.get_instance_by_uuid(instance_uuid),
node_name)
@@ -147,12 +154,19 @@ class ComputeScope(base.BaseScope):
nodes_to_remove = set()
instances_to_exclude = []
instance_metadata = []
compute_scope = []
model_hosts = list(cluster_model.get_all_compute_nodes().keys())
if not self.scope:
return cluster_model
-        for rule in self.scope:
+        for scope in self.scope:
+            compute_scope = scope.get('compute')
+            if not compute_scope:
+                return cluster_model
+            for rule in compute_scope:
if 'host_aggregates' in rule:
self._collect_aggregates(rule['host_aggregates'],
allowed_nodes)
@@ -165,7 +179,7 @@ class ComputeScope(base.BaseScope):
nodes=nodes_to_exclude,
instance_metadata=instance_metadata)
-        instances_to_remove = set(instances_to_exclude)
+        instances_to_exclude = set(instances_to_exclude)
if allowed_nodes:
nodes_to_remove = set(model_hosts) - set(allowed_nodes)
nodes_to_remove.update(nodes_to_exclude)
@@ -174,8 +188,9 @@ class ComputeScope(base.BaseScope):
if instance_metadata and self.config.check_optimize_metadata:
self.exclude_instances_with_given_metadata(
-                instance_metadata, cluster_model, instances_to_remove)
+                instance_metadata, cluster_model, instances_to_exclude)
-        self.remove_instances_from_model(instances_to_remove, cluster_model)
+        self.update_exclude_instance_in_model(instances_to_exclude,
+                                              cluster_model)
return cluster_model
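The loop change in this hunk reflects the new nested audit-scope layout: top-level entries are dicts keyed by data model ("compute", "storage", ...), each holding its own rule list. A rough standalone sketch of walking that layout (the scope values below are made-up examples, not a real Watcher scope):

```python
# Hypothetical audit-scope document in the nested layout the hunk parses:
# find the "compute" entry, then iterate its rules.
audit_scope = [
    {"compute": [
        {"host_aggregates": [{"id": "*"}]},
        {"exclude": [{"instances": [{"uuid": "INSTANCE_1"}]}]},
    ]},
    {"storage": []},
]

compute_rules = []
for scope in audit_scope:
    rules = scope.get("compute")
    if rules:
        compute_rules = rules
        break

# Each rule is a single-key dict; collect the rule kinds in order.
rule_keys = [next(iter(rule)) for rule in compute_rules]
```

With the example above, `rule_keys` comes out as `["host_aggregates", "exclude"]`, matching the `if 'host_aggregates' in rule` dispatch in the hunk.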

@@ -0,0 +1,165 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log
from watcher.common import cinder_helper
from watcher.common import exception
from watcher.decision_engine.scope import base
LOG = log.getLogger(__name__)
class StorageScope(base.BaseScope):
"""Storage Audit Scope Handler"""
def __init__(self, scope, config, osc=None):
super(StorageScope, self).__init__(scope, config)
self._osc = osc
self.wrapper = cinder_helper.CinderHelper(osc=self._osc)
def _collect_vtype(self, volume_types, allowed_nodes):
service_list = self.wrapper.get_storage_node_list()
vt_names = [volume_type['name'] for volume_type in volume_types]
include_all_nodes = False
if '*' in vt_names:
if len(vt_names) == 1:
include_all_nodes = True
else:
raise exception.WildcardCharacterIsUsed(
resource="volume_types")
for service in service_list:
if include_all_nodes:
allowed_nodes.append(service.host)
continue
backend = service.host.split('@')[1]
v_types = self.wrapper.get_volume_type_by_backendname(
backend)
for volume_type in v_types:
if volume_type in vt_names:
# Note(adisky): It can generate duplicate values
# but it will later converted to set
allowed_nodes.append(service.host)
def _collect_zones(self, availability_zones, allowed_nodes):
service_list = self.wrapper.get_storage_node_list()
zone_names = [zone['name'] for zone
in availability_zones]
include_all_nodes = False
if '*' in zone_names:
if len(zone_names) == 1:
include_all_nodes = True
else:
raise exception.WildcardCharacterIsUsed(
resource="availability zones")
for service in service_list:
if service.zone in zone_names or include_all_nodes:
allowed_nodes.append(service.host)
def exclude_resources(self, resources, **kwargs):
pools_to_exclude = kwargs.get('pools')
volumes_to_exclude = kwargs.get('volumes')
projects_to_exclude = kwargs.get('projects')
for resource in resources:
if 'storage_pools' in resource:
pools_to_exclude.extend(
[storage_pool['name'] for storage_pool
in resource['storage_pools']])
elif 'volumes' in resource:
volumes_to_exclude.extend(
[volume['uuid'] for volume in
resource['volumes']])
elif 'projects' in resource:
projects_to_exclude.extend(
[project['uuid'] for project in
resource['projects']])
def exclude_pools(self, pools_to_exclude, cluster_model):
for pool_name in pools_to_exclude:
pool = cluster_model.get_pool_by_pool_name(pool_name)
volumes = cluster_model.get_pool_volumes(pool)
for volume in volumes:
cluster_model.remove_volume(volume)
cluster_model.remove_pool(pool)
def exclude_volumes(self, volumes_to_exclude, cluster_model):
for volume_uuid in volumes_to_exclude:
volume = cluster_model.get_volume_by_uuid(volume_uuid)
cluster_model.remove_volume(volume)
def exclude_projects(self, projects_to_exclude, cluster_model):
all_volumes = cluster_model.get_all_volumes()
for volume_uuid in all_volumes:
volume = all_volumes.get(volume_uuid)
if volume.project_id in projects_to_exclude:
cluster_model.remove_volume(volume)
def remove_nodes_from_model(self, nodes_to_remove, cluster_model):
for hostname in nodes_to_remove:
node = cluster_model.get_node_by_name(hostname)
pools = cluster_model.get_node_pools(node)
for pool in pools:
volumes = cluster_model.get_pool_volumes(pool)
for volume in volumes:
cluster_model.remove_volume(volume)
cluster_model.remove_pool(pool)
cluster_model.remove_node(node)
def get_scoped_model(self, cluster_model):
"""Leave only nodes, pools and volumes proposed in the audit scope"""
if not cluster_model:
return None
allowed_nodes = []
nodes_to_remove = set()
volumes_to_exclude = []
projects_to_exclude = []
pools_to_exclude = []
model_hosts = list(cluster_model.get_all_storage_nodes().keys())
storage_scope = []
for scope in self.scope:
storage_scope = scope.get('storage')
if not storage_scope:
return cluster_model
for rule in storage_scope:
if 'volume_types' in rule:
self._collect_vtype(rule['volume_types'],
allowed_nodes, cluster_model)
elif 'availability_zones' in rule:
self._collect_zones(rule['availability_zones'],
allowed_nodes)
elif 'exclude' in rule:
self.exclude_resources(
rule['exclude'], pools=pools_to_exclude,
volumes=volumes_to_exclude,
projects=projects_to_exclude)
if allowed_nodes:
nodes_to_remove = set(model_hosts) - set(allowed_nodes)
self.remove_nodes_from_model(nodes_to_remove, cluster_model)
self.exclude_pools(pools_to_exclude, cluster_model)
self.exclude_volumes(volumes_to_exclude, cluster_model)
self.exclude_projects(projects_to_exclude, cluster_model)
return cluster_model
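Both `_collect_vtype` and `_collect_zones` above apply the same wildcard rule: a lone `'*'` selects every node, while `'*'` mixed with explicit names raises `WildcardCharacterIsUsed`. A minimal sketch of that check (a standalone stand-in, not the Watcher helper itself, using `ValueError` in place of the Watcher exception):

```python
def check_wildcard(names):
    """Return True when names is exactly ['*']; reject mixed wildcard use."""
    if '*' in names:
        if len(names) == 1:
            return True  # include all nodes
        raise ValueError("wildcard '*' cannot be combined with other names")
    return False
```

For example, `check_wildcard(['*'])` returns `True`, `check_wildcard(['lvm', 'ceph'])` returns `False`, and `check_wildcard(['*', 'lvm'])` raises.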

@@ -91,16 +91,16 @@ def _reload_scoring_engines(refresh=False):
for name in engines.keys():
se_impl = default.DefaultScoringLoader().load(name)
LOG.debug("Found Scoring Engine plugin: %s" % se_impl.get_name())
LOG.debug("Found Scoring Engine plugin: %s", se_impl.get_name())
_scoring_engine_map[se_impl.get_name()] = se_impl
engine_containers = \
default.DefaultScoringContainerLoader().list_available()
for container_id, container_cls in engine_containers.items():
LOG.debug("Found Scoring Engine container plugin: %s" %
LOG.debug("Found Scoring Engine container plugin: %s",
container_id)
for se in container_cls.get_scoring_engine_list():
LOG.debug("Found Scoring Engine plugin: %s" %
LOG.debug("Found Scoring Engine plugin: %s",
se.get_name())
_scoring_engine_map[se.get_name()] = se

@@ -21,11 +21,15 @@ from watcher.decision_engine.strategy.strategies import dummy_with_scorer
from watcher.decision_engine.strategy.strategies import noisy_neighbor
from watcher.decision_engine.strategy.strategies import outlet_temp_control
from watcher.decision_engine.strategy.strategies import saving_energy
from watcher.decision_engine.strategy.strategies import \
storage_capacity_balance
from watcher.decision_engine.strategy.strategies import uniform_airflow
from watcher.decision_engine.strategy.strategies import \
vm_workload_consolidation
from watcher.decision_engine.strategy.strategies import workload_balance
from watcher.decision_engine.strategy.strategies import workload_stabilization
from watcher.decision_engine.strategy.strategies import zone_migration
Actuator = actuation.Actuator
BasicConsolidation = basic_consolidation.BasicConsolidation
@@ -33,13 +37,16 @@ OutletTempControl = outlet_temp_control.OutletTempControl
DummyStrategy = dummy_strategy.DummyStrategy
DummyWithScorer = dummy_with_scorer.DummyWithScorer
SavingEnergy = saving_energy.SavingEnergy
StorageCapacityBalance = storage_capacity_balance.StorageCapacityBalance
VMWorkloadConsolidation = vm_workload_consolidation.VMWorkloadConsolidation
WorkloadBalance = workload_balance.WorkloadBalance
WorkloadStabilization = workload_stabilization.WorkloadStabilization
UniformAirflow = uniform_airflow.UniformAirflow
NoisyNeighbor = noisy_neighbor.NoisyNeighbor
ZoneMigration = zone_migration.ZoneMigration
__all__ = ("Actuator", "BasicConsolidation", "OutletTempControl",
"DummyStrategy", "DummyWithScorer", "VMWorkloadConsolidation",
"WorkloadBalance", "WorkloadStabilization", "UniformAirflow",
"NoisyNeighbor", "SavingEnergy")
"NoisyNeighbor", "SavingEnergy", "StorageCapacityBalance",
"ZoneMigration")

@@ -46,12 +46,76 @@ from watcher.common import context
from watcher.common import exception
from watcher.common.loader import loadable
from watcher.common import utils
from watcher.datasource import manager as ds_manager
from watcher.decision_engine.loading import default as loading
from watcher.decision_engine.model.collector import manager
from watcher.decision_engine.solution import default
from watcher.decision_engine.strategy.common import level
class StrategyEndpoint(object):
def __init__(self, messaging):
self._messaging = messaging
def _collect_metrics(self, strategy, datasource):
metrics = []
if not datasource:
return {'type': 'Metrics', 'state': metrics,
'mandatory': False, 'comment': ''}
else:
ds_metrics = datasource.list_metrics()
if ds_metrics is None:
raise exception.DataSourceNotAvailable(
datasource=datasource.NAME)
else:
for metric in strategy.DATASOURCE_METRICS:
original_metric_name = datasource.METRIC_MAP.get(metric)
if original_metric_name in ds_metrics:
metrics.append({original_metric_name: 'available'})
else:
metrics.append({original_metric_name: 'not available'})
return {'type': 'Metrics', 'state': metrics,
'mandatory': False, 'comment': ''}
def _get_datasource_status(self, strategy, datasource):
if not datasource:
state = "Datasource is not presented for this strategy"
else:
state = "%s: %s" % (datasource.NAME,
datasource.check_availability())
return {'type': 'Datasource',
'state': state,
'mandatory': True, 'comment': ''}
def _get_cdm(self, strategy):
models = []
for model in ['compute_model', 'storage_model', 'baremetal_model']:
try:
getattr(strategy, model)
except Exception:
models.append({model: 'not available'})
else:
models.append({model: 'available'})
return {'type': 'CDM', 'state': models,
'mandatory': True, 'comment': ''}
def get_strategy_info(self, context, strategy_name):
strategy = loading.DefaultStrategyLoader().load(strategy_name)
try:
is_datasources = getattr(strategy.config, 'datasources', None)
if is_datasources:
datasource = getattr(strategy, 'datasource_backend')
else:
datasource = getattr(strategy, strategy.config.datasource)
except (AttributeError, IndexError):
datasource = []
available_datasource = self._get_datasource_status(strategy,
datasource)
available_metrics = self._collect_metrics(strategy, datasource)
available_cdm = self._get_cdm(strategy)
return [available_datasource, available_metrics, available_cdm]
@six.add_metaclass(abc.ABCMeta)
class BaseStrategy(loadable.Loadable):
"""A base class for all the strategies
@@ -60,6 +124,8 @@ class BaseStrategy(loadable.Loadable):
Solution for a given Goal.
"""
DATASOURCE_METRICS = []
def __init__(self, config, osc=None):
"""Constructor: the signature should be identical within the subclasses
@@ -82,8 +148,10 @@ class BaseStrategy(loadable.Loadable):
self._collector_manager = None
self._compute_model = None
self._storage_model = None
self._baremetal_model = None
self._input_parameters = utils.Struct()
self._audit_scope = None
self._datasource_backend = None
@classmethod
@abc.abstractmethod
@@ -203,7 +271,9 @@ class BaseStrategy(loadable.Loadable):
if self._storage_model is None:
collector = self.collector_manager.get_cluster_model_collector(
'storage', osc=self.osc)
-            self._storage_model = self.audit_scope_handler.get_scoped_model(
+            audit_scope_handler = collector.get_audit_scope_handler(
+                audit_scope=self.audit_scope)
+            self._storage_model = audit_scope_handler.get_scoped_model(
collector.get_latest_cluster_data_model())
if not self._storage_model:
@@ -214,6 +284,29 @@ class BaseStrategy(loadable.Loadable):
return self._storage_model
@property
def baremetal_model(self):
"""Cluster data model
:returns: Cluster data model the strategy is executed on
:rtype model: :py:class:`~.ModelRoot` instance
"""
if self._baremetal_model is None:
collector = self.collector_manager.get_cluster_model_collector(
'baremetal', osc=self.osc)
audit_scope_handler = collector.get_audit_scope_handler(
audit_scope=self.audit_scope)
self._baremetal_model = audit_scope_handler.get_scoped_model(
collector.get_latest_cluster_data_model())
if not self._baremetal_model:
raise exception.ClusterStateNotDefined()
if self._baremetal_model.stale:
raise exception.ClusterStateStale()
return self._baremetal_model
@classmethod
def get_schema(cls):
"""Defines a Schema that the input parameters shall comply to
@@ -223,6 +316,15 @@ class BaseStrategy(loadable.Loadable):
"""
return {}
@property
def datasource_backend(self):
if not self._datasource_backend:
self._datasource_backend = ds_manager.DataSourceManager(
config=self.config,
osc=self.osc
).get_backend(self.DATASOURCE_METRICS)
return self._datasource_backend
@property
def input_parameters(self):
return self._input_parameters
@@ -361,3 +463,11 @@ class SavingEnergyBaseStrategy(BaseStrategy):
@classmethod
def get_goal_name(cls):
return "saving_energy"
@six.add_metaclass(abc.ABCMeta)
class ZoneMigrationBaseStrategy(BaseStrategy):
@classmethod
def get_goal_name(cls):
return "hardware_maintenance"

@@ -35,16 +35,11 @@ migration is possible on your OpenStack cluster.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.datasource import monasca as mon
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -57,6 +52,8 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
HOST_CPU_USAGE_METRIC_NAME = 'compute.node.cpu.percent'
INSTANCE_CPU_USAGE_METRIC_NAME = 'cpu_util'
DATASOURCE_METRICS = ['host_cpu_usage', 'instance_cpu_usage']
METRIC_NAMES = dict(
ceilometer=dict(
host_cpu_usage='compute.node.cpu.percent',
@@ -91,10 +88,6 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
# set default value for the efficacy
self.efficacy = 100
self._ceilometer = None
self._monasca = None
self._gnocchi = None
# TODO(jed): improve threshold overbooking?
self.threshold_mem = 1
self.threshold_disk = 1
@@ -155,11 +148,14 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
@classmethod
def get_config_opts(cls):
return [
cfg.StrOpt(
"datasource",
help="Data source to use in order to query the needed metrics",
default="gnocchi",
choices=["ceilometer", "monasca", "gnocchi"]),
cfg.ListOpt(
"datasources",
help="Datasources to use in order to query the needed metrics."
" If one of strategy metric isn't available in the first"
" datasource, the next datasource will be chosen.",
item_type=cfg.types.String(choices=['gnocchi', 'ceilometer',
'monasca']),
default=['gnocchi', 'ceilometer', 'monasca']),
cfg.BoolOpt(
"check_optimize_metadata",
help="Check optimize metadata field in instance before "
@@ -167,36 +163,6 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
default=False),
]
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, ceilometer):
self._ceilometer = ceilometer
@property
def monasca(self):
if self._monasca is None:
self.monasca = mon.MonascaHelper(osc=self.osc)
return self._monasca
@monasca.setter
def monasca(self, monasca):
self._monasca = monasca
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, gnocchi):
self._gnocchi = gnocchi
def get_available_compute_nodes(self):
default_node_scope = [element.ServiceState.ENABLED.value,
element.ServiceState.DISABLED.value]
@@ -290,87 +256,13 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
return (score_cores + score_disk + score_memory) / 3
def get_node_cpu_usage(self, node):
-        metric_name = self.METRIC_NAMES[
-            self.config.datasource]['host_cpu_usage']
-        if self.config.datasource == "ceilometer":
-            resource_id = "%s_%s" % (node.uuid, node.hostname)
-            return self.ceilometer.statistic_aggregation(
-                resource_id=resource_id,
-                meter_name=metric_name,
-                period=self.period,
-                aggregate='avg',
-            )
-        elif self.config.datasource == "gnocchi":
-            resource_id = "%s_%s" % (node.uuid, node.hostname)
-            stop_time = datetime.datetime.utcnow()
-            start_time = stop_time - datetime.timedelta(
-                seconds=int(self.period))
-            return self.gnocchi.statistic_aggregation(
-                resource_id=resource_id,
-                metric=metric_name,
-                granularity=self.granularity,
-                start_time=start_time,
-                stop_time=stop_time,
-                aggregation='mean'
-            )
-        elif self.config.datasource == "monasca":
-            statistics = self.monasca.statistic_aggregation(
-                meter_name=metric_name,
-                dimensions=dict(hostname=node.uuid),
-                period=self.period,
-                aggregate='avg'
-            )
-            cpu_usage = None
-            for stat in statistics:
-                avg_col_idx = stat['columns'].index('avg')
-                values = [r[avg_col_idx] for r in stat['statistics']]
-                value = float(sum(values)) / len(values)
-                cpu_usage = value
-            return cpu_usage
-        raise exception.UnsupportedDataSource(
-            strategy=self.name, datasource=self.config.datasource)
+        resource_id = "%s_%s" % (node.uuid, node.hostname)
+        return self.datasource_backend.get_host_cpu_usage(
+            resource_id, self.period, 'mean', granularity=300)
def get_instance_cpu_usage(self, instance):
-        metric_name = self.METRIC_NAMES[
-            self.config.datasource]['instance_cpu_usage']
-        if self.config.datasource == "ceilometer":
-            return self.ceilometer.statistic_aggregation(
-                resource_id=instance.uuid,
-                meter_name=metric_name,
-                period=self.period,
-                aggregate='avg'
-            )
-        elif self.config.datasource == "gnocchi":
-            stop_time = datetime.datetime.utcnow()
-            start_time = stop_time - datetime.timedelta(
-                seconds=int(self.period))
-            return self.gnocchi.statistic_aggregation(
-                resource_id=instance.uuid,
-                metric=metric_name,
-                granularity=self.granularity,
-                start_time=start_time,
-                stop_time=stop_time,
-                aggregation='mean',
-            )
-        elif self.config.datasource == "monasca":
-            statistics = self.monasca.statistic_aggregation(
-                meter_name=metric_name,
-                dimensions=dict(resource_id=instance.uuid),
-                period=self.period,
-                aggregate='avg'
-            )
-            cpu_usage = None
-            for stat in statistics:
-                avg_col_idx = stat['columns'].index('avg')
-                values = [r[avg_col_idx] for r in stat['statistics']]
-                value = float(sum(values)) / len(values)
-                cpu_usage = value
-            return cpu_usage
-        raise exception.UnsupportedDataSource(
-            strategy=self.name, datasource=self.config.datasource)
+        return self.datasource_backend.get_instance_cpu_usage(
+            instance.uuid, self.period, 'mean', granularity=300)
def calculate_score_node(self, node):
"""Calculate the score that represent the utilization level
@@ -385,7 +277,7 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
resource_id = "%s_%s" % (node.uuid, node.hostname)
LOG.error(
"No values returned by %(resource_id)s "
"for %(metric_name)s" % dict(
"for %(metric_name)s", dict(
resource_id=resource_id,
metric_name=self.METRIC_NAMES[
self.config.datasource]['host_cpu_usage']))
@@ -405,7 +297,7 @@ class BasicConsolidation(base.ServerConsolidationBaseStrategy):
if instance_cpu_utilization is None:
LOG.error(
"No values returned by %(resource_id)s "
"for %(metric_name)s" % dict(
"for %(metric_name)s", dict(
resource_id=instance.uuid,
metric_name=self.METRIC_NAMES[
self.config.datasource]['instance_cpu_usage']))

@@ -16,19 +16,23 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.decision_engine.strategy.strategies import base
LOG = log.getLogger(__name__)
CONF = cfg.CONF
class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
MIGRATION = "migrate"
DATASOURCE_METRICS = ['instance_l3_cache_usage']
# The meter to report L3 cache in ceilometer
METER_NAME_L3 = "cpu_l3_cache"
DEFAULT_WATCHER_PRIORITY = 5
@@ -45,17 +49,6 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
super(NoisyNeighbor, self).__init__(config, osc)
self.meter_name = self.METER_NAME_L3
self._ceilometer = None
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@classmethod
def get_name(cls):
@@ -81,32 +74,41 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
"default": 35.0
},
"period": {
"description": "Aggregate time period of ceilometer",
"description": "Aggregate time period of "
"ceilometer and gnocchi",
"type": "number",
"default": 100.0
},
},
}
@classmethod
def get_config_opts(cls):
return [
cfg.ListOpt(
"datasources",
help="Datasources to use in order to query the needed metrics."
" If one of strategy metric isn't available in the first"
" datasource, the next datasource will be chosen.",
item_type=cfg.types.String(choices=['gnocchi', 'ceilometer',
'monasca']),
default=['gnocchi', 'ceilometer', 'monasca'])
]
def get_current_and_previous_cache(self, instance):
try:
-            current_cache = self.ceilometer.statistic_aggregation(
-                resource_id=instance.uuid,
-                meter_name=self.meter_name, period=self.period,
-                aggregate='avg')
+            curr_cache = self.datasource_backend.get_instance_l3_cache_usage(
+                instance.uuid, self.period, 'mean', granularity=300)
previous_cache = 2 * (
-                self.ceilometer.statistic_aggregation(
-                    resource_id=instance.uuid,
-                    meter_name=self.meter_name,
-                    period=2*self.period, aggregate='avg')) - current_cache
+                self.datasource_backend.get_instance_l3_cache_usage(
+                    instance.uuid, 2 * self.period,
+                    'mean', granularity=300)) - curr_cache
except Exception as exc:
LOG.exception(exc)
-            return None
+            return None, None
-        return current_cache, previous_cache
+        return curr_cache, previous_cache
def find_priority_instance(self, instance):
@@ -114,7 +116,7 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
self.get_current_and_previous_cache(instance)
if None in (current_cache, previous_cache):
LOG.warning("Ceilometer unable to pick L3 Cache "
LOG.warning("Datasource unable to pick L3 Cache "
"values. Skipping the instance")
return None
@@ -130,7 +132,7 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
self.get_current_and_previous_cache(instance)
if None in (noisy_current_cache, noisy_previous_cache):
LOG.warning("Ceilometer unable to pick "
LOG.warning("Datasource unable to pick "
"L3 Cache. Skipping the instance")
return None
@@ -197,10 +199,10 @@ class NoisyNeighbor(base.NoisyNeighborBaseStrategy):
hosts_need_release[node.uuid] = {
'priority_vm': potential_priority_instance,
'noisy_vm': potential_noisy_instance}
LOG.debug("Priority VM found: %s" % (
potential_priority_instance.uuid))
LOG.debug("Noisy VM found: %s" % (
potential_noisy_instance.uuid))
LOG.debug("Priority VM found: %s",
potential_priority_instance.uuid)
LOG.debug("Noisy VM found: %s",
potential_noisy_instance.uuid)
loop_break_flag = True
break
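The `previous_cache` expression in `get_current_and_previous_cache` recovers the earlier period's mean from two overlapping averages: if `avg_2T` covers the last `2*period` and `current` the last `period`, the two halves average to `avg_2T`, so `previous = 2*avg_2T - current`. A quick numeric check of that identity (sample values are illustrative):

```python
# Two equal-length sample windows of the L3 cache metric.
period_now = [40.0, 60.0]     # most recent period, mean 50
period_before = [10.0, 30.0]  # the period just before it, mean 20

current = sum(period_now) / len(period_now)
both = period_now + period_before
avg_2t = sum(both) / len(both)          # mean over the doubled window
previous = 2 * avg_2t - current         # reconstructs the earlier mean
```

Here `avg_2t` is 35.0 and `previous` comes out as 20.0, the true mean of the earlier window, which is exactly what the strategy compares `current` against.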

@@ -28,15 +28,11 @@ Outlet (Exhaust Air) Temperature is one of the important thermal
telemetries to measure thermal/workload status of server.
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import exception as wexc
from watcher.datasource import ceilometer as ceil
from watcher.datasource import gnocchi as gnoc
from watcher.decision_engine.model import element
from watcher.decision_engine.strategy.strategies import base
@@ -77,6 +73,8 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
# The meter to report outlet temperature in ceilometer
MIGRATION = "migrate"
DATASOURCE_METRICS = ['host_outlet_temp']
METRIC_NAMES = dict(
ceilometer=dict(
host_outlet_temp='hardware.ipmi.node.outlet_temperature'),
@@ -93,8 +91,6 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
:type osc: :py:class:`~.OpenStackClients` instance, optional
"""
super(OutletTempControl, self).__init__(config, osc)
self._ceilometer = None
self._gnocchi = None
@classmethod
def get_name(cls):
@@ -137,26 +133,6 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
},
}
@property
def ceilometer(self):
if self._ceilometer is None:
self.ceilometer = ceil.CeilometerHelper(osc=self.osc)
return self._ceilometer
@ceilometer.setter
def ceilometer(self, c):
self._ceilometer = c
@property
def gnocchi(self):
if self._gnocchi is None:
self.gnocchi = gnoc.GnocchiHelper(osc=self.osc)
return self._gnocchi
@gnocchi.setter
def gnocchi(self, g):
self._gnocchi = g
@property
def granularity(self):
return self.input_parameters.get('granularity', 300)
@@ -206,31 +182,20 @@ class OutletTempControl(base.ThermalOptimizationBaseStrategy):
resource_id = node.uuid
outlet_temp = None
if self.config.datasource == "ceilometer":
outlet_temp = self.ceilometer.statistic_aggregation(
resource_id=resource_id,
meter_name=metric_name,
period=self.period,
aggregate='avg'
)
elif self.config.datasource == "gnocchi":
stop_time = datetime.datetime.utcnow()
start_time = stop_time - datetime.timedelta(
seconds=int(self.period))
outlet_temp = self.gnocchi.statistic_aggregation(
resource_id=resource_id,
metric=metric_name,
granularity=self.granularity,
start_time=start_time,
stop_time=stop_time,
aggregation='mean'
)
outlet_temp = self.datasource_backend.statistic_aggregation(
resource_id=resource_id,
meter_name=metric_name,
period=self.period,
granularity=self.granularity,
)
# some hosts may not have outlet temp meters, remove from target
if outlet_temp is None:
LOG.warning("%s: no outlet temp data", resource_id)
continue
LOG.debug("%s: outlet temperature %f" % (resource_id, outlet_temp))
LOG.debug("%(resource)s: outlet temperature %(temp)f",
{'resource': resource_id, 'temp': outlet_temp})
instance_data = {'node': node, 'outlet_temp': outlet_temp}
if outlet_temp >= self.threshold:
# mark the node to release resources

@@ -0,0 +1,411 @@
# -*- encoding: utf-8 -*-
# Copyright (c) 2017 ZTE Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
*Workload balance using cinder volume migration*
*Description*
This strategy migrates volumes based on the workload of the
cinder pools.
It decides to migrate a volume whenever a pool's used
capacity percentage is higher than the specified threshold. The
volume to be moved should bring the pool close to the average
workload of all cinder pools.
*Requirements*
* You must have at least 2 cinder volume pools to run
this strategy.
"""
from oslo_config import cfg
from oslo_log import log
from watcher._i18n import _
from watcher.common import cinder_helper
from watcher.decision_engine.strategy.strategies import base
LOG = log.getLogger(__name__)
class StorageCapacityBalance(base.WorkloadStabilizationBaseStrategy):
"""Storage capacity balance using cinder volume migration
*Description*
This strategy migrates volumes based on the workload of the
cinder pools.
It decides to migrate a volume whenever a pool's used
capacity percentage is higher than the specified threshold. The
volume to be moved should bring the pool close to the average
workload of all cinder pools.
*Requirements*
* You must have at least 2 cinder volume pools to run
this strategy.
"""
def __init__(self, config, osc=None):
"""VolumeMigrate using cinder volume migration
:param config: A mapping containing the configuration of this strategy
:type config: :py:class:`~.Struct` instance
:param osc: :py:class:`~.OpenStackClients` instance
"""
super(StorageCapacityBalance, self).__init__(config, osc)
self._cinder = None
self.volume_threshold = 80.0
self.pool_type_cache = dict()
self.source_pools = []
self.dest_pools = []
@property
def cinder(self):
if not self._cinder:
self._cinder = cinder_helper.CinderHelper(osc=self.osc)
return self._cinder
@classmethod
def get_name(cls):
return "storage_capacity_balance"
@classmethod
def get_display_name(cls):
return _("Storage Capacity Balance Strategy")
@classmethod
def get_translatable_display_name(cls):
return "Storage Capacity Balance Strategy"
@classmethod
def get_schema(cls):
# Mandatory default setting for each element
return {
"properties": {
"volume_threshold": {
"description": "volume threshold for capacity balance",
"type": "number",
"default": 80.0
},
},
}
@classmethod
def get_config_opts(cls):
return [
cfg.ListOpt(
"ex_pools",
help="Pools to exclude from balancing",
default=['local_vstorage']),
]
def get_pools(self, cinder):
"""Get all volume pools except those listed in ex_pools.
:param cinder: cinder client
:return: volume pools
"""
ex_pools = self.config.ex_pools
pools = cinder.get_storage_pool_list()
filtered_pools = [p for p in pools
if p.pool_name not in ex_pools]
return filtered_pools
def get_volumes(self, cinder):
"""Get all volumes whose status is 'available' or 'in-use' and which have no snapshots.
:param cinder: cinder client
:return: all volumes
"""
all_volumes = cinder.get_volume_list()
valid_status = ['in-use', 'available']
volume_snapshots = cinder.get_volume_snapshots_list()
snapshot_volume_ids = []
for snapshot in volume_snapshots:
snapshot_volume_ids.append(snapshot.volume_id)
nosnap_volumes = list(filter(lambda v: v.id not in snapshot_volume_ids,
all_volumes))
LOG.info("volumes in snap: %s", snapshot_volume_ids)
status_volumes = list(
filter(lambda v: v.status in valid_status, nosnap_volumes))
valid_volumes = [v for v in status_volumes
if getattr(v, 'migration_status') == 'success' or
getattr(v, 'migration_status') is None]
LOG.info("valid volumes: %s", valid_volumes)
return valid_volumes
def group_pools(self, pools, threshold):
"""Group volume pools by threshold.
:param pools: all volume pools
:param threshold: volume threshold
:return: under and over threshold pools
"""
under_pools = list(
filter(lambda p: float(p.total_capacity_gb) -
float(p.free_capacity_gb) <
float(p.total_capacity_gb) * threshold, pools))
over_pools = list(
filter(lambda p: float(p.total_capacity_gb) -
float(p.free_capacity_gb) >=
float(p.total_capacity_gb) * threshold, pools))
return over_pools, under_pools
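group_pools above partitions pools by comparing used capacity (total minus free) against total * threshold. The same arithmetic as a standalone sketch, with a hypothetical Pool tuple standing in for cinder's pool objects:

```python
from collections import namedtuple

# Hypothetical stand-in for a cinder pool; only the fields the
# grouping logic reads are modeled (capacities arrive as strings).
Pool = namedtuple('Pool', 'name total_capacity_gb free_capacity_gb')


def group_pools(pools, threshold):
    """Split pools into (over, under) the utilization threshold."""
    def used(p):
        return float(p.total_capacity_gb) - float(p.free_capacity_gb)

    over = [p for p in pools
            if used(p) >= float(p.total_capacity_gb) * threshold]
    under = [p for p in pools
             if used(p) < float(p.total_capacity_gb) * threshold]
    return over, under
```

Pools in `over` become migration sources; pools in `under` are the candidate destinations.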
def get_volume_type_by_name(self, cinder, backendname):
# Return the volume types whose volume_backend_name matches backendname
if backendname in self.pool_type_cache.keys():
return self.pool_type_cache.get(backendname)
volume_type_list = cinder.get_volume_type_list()
volume_type = list(filter(
lambda volume_type:
volume_type.extra_specs.get(
'volume_backend_name') == backendname, volume_type_list))
if volume_type:
self.pool_type_cache[backendname] = volume_type
return self.pool_type_cache.get(backendname)
else:
return []
def migrate_fit(self, volume, threshold):
target_pool_name = None
if volume.volume_type:
LOG.info("volume %s type %s", volume.id, volume.volume_type)
return target_pool_name
self.dest_pools.sort(
key=lambda p: float(p.free_capacity_gb) /
float(p.total_capacity_gb))
for pool in reversed(self.dest_pools):
total_cap = float(pool.total_capacity_gb)
allocated = float(pool.allocated_capacity_gb)
ratio = pool.max_over_subscription_ratio
if total_cap * ratio < allocated + float(volume.size):
LOG.info("pool %s allocated over", pool.name)
continue
free_cap = float(pool.free_capacity_gb) - float(volume.size)
if free_cap > (1 - threshold) * total_cap:
target_pool_name = pool.name
index = self.dest_pools.index(pool)
setattr(self.dest_pools[index], 'free_capacity_gb',
str(free_cap))
LOG.info("volume: get pool %s for vol %s", target_pool_name,
volume.name)
break
return target_pool_name
def check_pool_type(self, volume, dest_pool):
target_type = None
# check type feature
if not volume.volume_type:
return target_type
volume_type_list = self.cinder.get_volume_type_list()
volume_type = list(filter(
lambda volume_type:
volume_type.name == volume.volume_type, volume_type_list))
if volume_type:
src_extra_specs = volume_type[0].extra_specs
src_extra_specs.pop('volume_backend_name', None)
backendname = getattr(dest_pool, 'volume_backend_name')
dst_pool_type = self.get_volume_type_by_name(self.cinder, backendname)
for src_key in src_extra_specs.keys():
dst_pool_type = [pt for pt in dst_pool_type
if pt.extra_specs.get(src_key) ==
src_extra_specs.get(src_key)]
if dst_pool_type:
if volume.volume_type:
if dst_pool_type[0].name != volume.volume_type:
target_type = dst_pool_type[0].name
else:
target_type = dst_pool_type[0].name
return target_type
def retype_fit(self, volume, threshold):
target_type = None
self.dest_pools.sort(
key=lambda p: float(p.free_capacity_gb) /
float(p.total_capacity_gb))
for pool in reversed(self.dest_pools):
backendname = getattr(pool, 'volume_backend_name')
pool_type = self.get_volume_type_by_name(self.cinder, backendname)
LOG.info("volume: pool %s, type %s", pool.name, pool_type)
if pool_type is None:
continue
total_cap = float(pool.total_capacity_gb)
allocated = float(pool.allocated_capacity_gb)
ratio = pool.max_over_subscription_ratio
if total_cap * ratio < allocated + float(volume.size):
LOG.info("pool %s allocated over", pool.name)
continue
free_cap = float(pool.free_capacity_gb) - float(volume.size)
if free_cap > (1 - threshold) * total_cap:
target_type = self.check_pool_type(volume, pool)
if target_type is None:
continue
index = self.dest_pools.index(pool)
setattr(self.dest_pools[index], 'free_capacity_gb',
str(free_cap))
LOG.info("volume: get type %s for vol %s", target_type,
volume.name)
break
return target_type
def get_actions(self, pool, volumes, threshold):
"""Get retype and migrate actions for volumes in one pool.
:return: retype and migrate dicts, keyed by volume id
"""
retype_dicts = dict()
migrate_dicts = dict()
total_cap = float(pool.total_capacity_gb)
used_cap = float(pool.total_capacity_gb) - float(pool.free_capacity_gb)
seek_flag = True
volumes_in_pool = list(
filter(lambda v: getattr(v, 'os-vol-host-attr:host') == pool.name,
volumes))
LOG.info("volumes in pool: %s", str(volumes_in_pool))
if not volumes_in_pool:
return retype_dicts, migrate_dicts
ava_volumes = list(filter(lambda v: v.status == 'available',
volumes_in_pool))
ava_volumes.sort(key=lambda v: float(v.size))
LOG.info("available volumes in pool: %s ", str(ava_volumes))
for vol in ava_volumes:
vol_flag = False
migrate_pool = self.migrate_fit(vol, threshold)
if migrate_pool:
migrate_dicts[vol.id] = migrate_pool
vol_flag = True
else:
target_type = self.retype_fit(vol, threshold)
if target_type:
retype_dicts[vol.id] = target_type
vol_flag = True
if vol_flag:
used_cap -= float(vol.size)
if used_cap < threshold * total_cap:
seek_flag = False
break
if seek_flag:
noboot_volumes = list(
filter(lambda v: v.bootable.lower() == 'false' and
v.status == 'in-use', volumes_in_pool))
noboot_volumes.sort(key=lambda v: float(v.size))
LOG.info("noboot volumes: %s ", str(noboot_volumes))
for vol in noboot_volumes:
vol_flag = False
migrate_pool = self.migrate_fit(vol, threshold)
if migrate_pool:
migrate_dicts[vol.id] = migrate_pool
vol_flag = True
else:
target_type = self.retype_fit(vol, threshold)
if target_type:
retype_dicts[vol.id] = target_type
vol_flag = True
if vol_flag:
used_cap -= float(vol.size)
if used_cap < threshold * total_cap:
seek_flag = False
break
if seek_flag:
boot_volumes = list(
filter(lambda v: v.bootable.lower() == 'true' and
v.status == 'in-use', volumes_in_pool)
)
boot_volumes.sort(key=lambda v: float(v.size))
LOG.info("boot volumes: %s ", str(boot_volumes))
for vol in boot_volumes:
vol_flag = False
migrate_pool = self.migrate_fit(vol, threshold)
if migrate_pool:
migrate_dicts[vol.id] = migrate_pool
vol_flag = True
else:
target_type = self.retype_fit(vol, threshold)
if target_type:
retype_dicts[vol.id] = target_type
vol_flag = True
if vol_flag:
used_cap -= float(vol.size)
if used_cap < threshold * total_cap:
seek_flag = False
break
return retype_dicts, migrate_dicts
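get_actions above runs the same inner loop three times over progressively riskier volume groups: available volumes first, then non-bootable in-use volumes, then bootable in-use ones, each sorted smallest-first. A compact sketch of that ordering, with a hypothetical volume tuple:

```python
from collections import namedtuple

# Hypothetical volume stand-in; only the fields the ordering reads.
Vol = namedtuple('Vol', 'id size status bootable')


def ordered_candidates(volumes):
    """Yield volumes in the strategy's preference order: available,
    then non-boot in-use, then boot in-use, smallest-first in each."""
    groups = (
        [v for v in volumes if v.status == 'available'],
        [v for v in volumes
         if v.status == 'in-use' and v.bootable.lower() == 'false'],
        [v for v in volumes
         if v.status == 'in-use' and v.bootable.lower() == 'true'],
    )
    for group in groups:
        for v in sorted(group, key=lambda v: float(v.size)):
            yield v
```

Stopping as soon as enough capacity is freed (the `seek_flag` logic) means bootable, attached volumes are only touched as a last resort.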
def pre_execute(self):
"""Pre-execution phase
This can be used to fetch some pre-requisites or data.
"""
LOG.info("Initializing Storage Capacity Balance Strategy")
self.volume_threshold = self.input_parameters.volume_threshold
def do_execute(self, audit=None):
"""Strategy execution phase
This phase is where you should put the main logic of your strategy.
"""
all_pools = self.get_pools(self.cinder)
all_volumes = self.get_volumes(self.cinder)
threshold = float(self.volume_threshold) / 100
self.source_pools, self.dest_pools = self.group_pools(
all_pools, threshold)
LOG.info("source pools: %s, dest pools: %s",
self.source_pools, self.dest_pools)
if not self.source_pools:
LOG.info("No pools require optimization")
return
if not self.dest_pools:
LOG.info("Not enough pools for optimization")
return
for source_pool in self.source_pools:
retype_actions, migrate_actions = self.get_actions(
source_pool, all_volumes, threshold)
for vol_id, pool_type in retype_actions.items():
vol = [v for v in all_volumes if v.id == vol_id]
parameters = {'migration_type': 'retype',
'destination_type': pool_type,
'resource_name': vol[0].name}
self.solution.add_action(action_type='volume_migrate',
resource_id=vol_id,
input_parameters=parameters)
for vol_id, pool_name in migrate_actions.items():
vol = [v for v in all_volumes if v.id == vol_id]
parameters = {'migration_type': 'migrate',
'destination_node': pool_name,
'resource_name': vol[0].name}
self.solution.add_action(action_type='volume_migrate',
resource_id=vol_id,
input_parameters=parameters)
def post_execute(self):
"""Post-execution phase
"""
pass
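The per-pool fit test used by migrate_fit and retype_fit reduces to two checks: a thin-provisioning bound (total capacity times the over-subscription ratio must cover current allocation plus the volume) and a post-move free-space bound (remaining free space must exceed (1 - threshold) of total, so the destination does not itself become a source pool). A self-contained sketch of that arithmetic, with hypothetical numeric inputs:

```python
def pool_fits(total_gb, free_gb, allocated_gb, ratio,
              volume_size_gb, threshold):
    """Replicate the two per-pool checks for a candidate volume."""
    # Thin-provisioning check: would allocation exceed total * ratio?
    if total_gb * ratio < allocated_gb + volume_size_gb:
        return False
    # Free-space check: after the move, free space must stay above
    # (1 - threshold) of total capacity.
    return (free_gb - volume_size_gb) > (1 - threshold) * total_gb
```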
