Compare commits: xena-em...15.0.0.0rc (292 commits)
Commits (SHA1):

61cca16dcd, f3d0ec5869, fe56660c44, 6cb4e2fa83, 9b1adaa7c7, f21df7ce1e, b1aad46209, 90009aac84, e5b18afa01, fedc74a5b0,
a4b785e4f1, cdde0fb41e, ef0f35192d, c9bfb763c2, eb3fdb1e97, 848cde3606, 63cf35349c, 7106a12251, 03c09825f7, 2452c1e541,
d91b550fc9, 1668b9b9f8, 5e05b50048, 4d8f86b432, 05d8f0e3c8, 1a87abc666, fa4552b93f, a07bfa141d, a6668a1b39, 534c340df1,
a963e0ff85, 457819072f, 6d155c4be6, 83fea206df, 00a3edeac6, b69642181b, 616c8f4cc4, cc26b3b334, 9003906bdc, e06f1b0475,
6d35be11ec, 1009c3781b, 5048a6e3ba, 84742be8c2, 1fb89aeac3, 1a9f17748e, 90f0c2264c, 3742e0a79c, 8309d9848a, 355671e979,
9becb68495, 37faf614e2, 4080d5767d, 9925fd2cc9, 27baff5184, 8ca794cdbb, f879b10b05, 95d975f339, 0435200fb1, adfe3858aa,
a1e7156c7e, 71470dac73, 5ba086095c, 3e8392b8f1, 20cd4a0394, 374750847f, 2fe3b0cdbe, 9b9965265a, 98b56b66ac, 081cd5fae9,
1ab5babbb6, d771d00c5a, e3b813e27e, c0a5abe29c, bbe30f93f2, 3bc5c72039, 203b926be0, e64709ea08, 94d8676db8, 828bcadf6a,
93366df264, aa67096fe8, 6f72e33de5, 56d0a0d6ea, de9eb2cd80, 76de167171, 70032aa477, 16131e5cac, bfbd136f4b, fe8d8c8839,
b8e0e6b01c, 6ea362da0b, 0f78386462, 1529e3fadd, 31879d26f4, efbae9321e, 0599618add, 1d50c12e15, 3860de0b1e, 15981117ee,
4f8c14646d, 520ec0b79b, f42cb8557b, b788a67c52, 73f8728d22, bf6a28bd1e, 1256b24133, a559c0505e, 59757249bb, 58b25101e6,
690a389369, 1cdd392f96, 20f231054a, 077c36be8a, 88d81c104e, 8ac8a29fda, cd2910b0e9, 167fb61b4e, 188e583dcb, 26e36e1620,
a016b3f4ea, 9f6c8725ed, 040a7f5c41, 3585e0cc3e, ba8370e1ad, 97c4e70847, c6302edeca, 0651fff910, b36ba8399e, 4629402f38,
86a260a2c7, 63626d6fc3, 0f5b6a07d0, 7d90a079b0, 891119470c, 9dea55bd64, b4ef969eec, 322c89d982, 59607f616a, 3f6c7e406a,
fd3d8b67ff, c73f126b15, 17d1cf535a, ae48f65f20, 0ed3d4de83, 6c5845721b, 77e7e4ef7b, f38ab70ba4, 7aabd6dd5a, 1b12e80882,
9f685a8cf1, 57b248f9fe, 278cb7e98c, 2c76da2868, c4acce91d6, adbcac9319, c9a1d06e7c, 25c1a8207f, 0702cb3869, 03c107a4ce,
c7158b08d1, 035e6584c7, 253e97678c, c7bb1fe52d, a65e7e9b59, b671550c91, 52bba70fec, f2ee231f14, 3861701f4a, d167134265,
539be503f0, bbf5c41cab, df3d67a4ed, 82f1c720dd, 77a30ef281, 383751904c, 6a1f19d314, 342fe8882a, 7fcca0cc46, 977f014cba,
753c44b0c4, dd0082c343, 5f6fbaea56, 6b81b34b27, 961bbb9460, d56e8ee65a, 4527f89d8d, e535177bc0, 022d150d20, 136e5d927c,
1968334b29, 0b78f31e3a, 56b8c1211a, 3f26dc47f2, 1b6f723cc3, d6cb38289e, 406be36c45, 6bb761a803, a169d42b1f, 4827d6e766,
2a2db362e3, 32756dc7b4, ee447a2281, 4d8bb57c8d, 70ba13ca6d, da23fdc621, 2ab27c0dfe, 811a704f80, 99fea33fac, 9d37d705e4,
fbb290b223, c80c940a4f, f07694ba6c, 9abec18c8b, 1f8d06e075, 29c94c102b, 3f3e660367, 2eefaeed14, 5fadd0de57, c5edad2246,
405bb93030, 5f79ab87c7, 4d5022ab94, 6adaedf696, f3ff65f233, b5e45b43b9, 61afdd3df7, e8f9e31541, 38288dd9c8, 9d8b990fd1,
0f96f99404, 57177aebb2, 2c4fb7a990, 61a7dd85ca, a7dd51390c, a47cedecfa, 566a830f64, 5c627a3aa3, a9dc3794a6, d6f169197e,
2bc49149b3, bc5922c684, f0935fb3e1, 762686e99e, 0f0527abc1, 6e26e41519, 954fc282ee, 9d58a6d457, c95ce4ec17, 9492c2190e,
808f1bcee3, 3b224b5629, 424e9a76af, 40e93407c7, 721aec1cb6, 8a3ee8f931, 00fea975e2, fd6562382e, ec90891636, 7336a48057,
922478fbda, 9f0eca2343, 1e11c490a7, 8a7a8db661, 0610070e59, a0997a0423, 4ea3eada3e, cd1c0f3054, 684350977d, d28630b759,
f7fbaf46a2, e7cda537e7, c7be34fbaa, 52da088011, 6ac3a6febf, e36b77ad6d, 6003322711, f4ffca01b8, 5d70c207cd, 0b2e641d00,
ff84b052a5, a43b040ebc, 749fa2507a, 76d61362ee, c55143bc21, 7609df3370, b57eac12cb, ac6911d3c4, 23c2010681, 01d74d0a87,
e4fab0ce7f, 76ecaaeb3a
.pre-commit-config.yaml (new file, 62 lines)

```yaml
# @@ -0,0 +1,62 @@
---
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      # whitespace
      - id: trailing-whitespace
      - id: mixed-line-ending
        args: ['--fix', 'lf']
        exclude: '.*\.(svg)$'
      - id: check-byte-order-marker
      # file format and permissions
      - id: check-ast
      - id: debug-statements
      - id: check-json
        files: .*\.json$
      - id: check-yaml
        files: .*\.(yaml|yml)$
      - id: check-executables-have-shebangs
      - id: check-shebang-scripts-are-executable
      # git
      - id: check-added-large-files
      - id: check-case-conflict
      - id: detect-private-key
      - id: check-merge-conflict
  - repo: https://github.com/Lucas-C/pre-commit-hooks
    rev: v1.5.5
    hooks:
      - id: remove-tabs
        exclude: '.*\.(svg)$'
  - repo: https://opendev.org/openstack/hacking
    rev: 7.0.0
    hooks:
      - id: hacking
        additional_dependencies: []
        exclude: '^(doc|releasenotes|tools)/.*$'
  - repo: https://github.com/PyCQA/bandit
    rev: 1.8.3
    hooks:
      - id: bandit
        args: ['-x', 'tests', '-s', 'B101,B311,B320']
  - repo: https://github.com/hhatto/autopep8
    rev: v2.3.2
    hooks:
      - id: autopep8
        files: '^.*\.py$'
  - repo: https://github.com/codespell-project/codespell
    rev: v2.4.1
    hooks:
      - id: codespell
        args: ['--ignore-words=doc/dictionary.txt']
  - repo: https://github.com/sphinx-contrib/sphinx-lint
    rev: v1.0.0
    hooks:
      - id: sphinx-lint
        args: [--enable=default-role]
        files: ^doc/|^releasenotes/|^api-guide/
        types: [rst]
  - repo: https://github.com/PyCQA/doc8
    rev: v1.1.2
    hooks:
      - id: doc8
```
.zuul.yaml (361 lines changed)

```
@@ -1,95 +1,24 @@
- project:
    templates:
      - check-requirements
      - openstack-cover-jobs
      - openstack-python3-xena-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - watcher-tempest-functional
        - watcher-grenade
        - watcher-tempest-strategies
        - watcher-tempest-actuator
        - watcherclient-tempest-functional
        - watcher-tempest-functional-ipv6-only
    gate:
      queue: watcher
      jobs:
        - watcher-tempest-functional
        - watcher-tempest-functional-ipv6-only

- job:
    name: watcher-tempest-dummy_optim
    parent: watcher-tempest-multinode
    vars:
      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_dummy_optim

- job:
    name: watcher-tempest-actuator
    parent: watcher-tempest-multinode
    vars:
      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_actuator

- job:
    name: watcher-tempest-basic_optim
    parent: watcher-tempest-multinode
    vars:
      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_basic_optim

- job:
    name: watcher-tempest-vm_workload_consolidation
    parent: watcher-tempest-multinode
    vars:
      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_vm_workload_consolidation
      devstack_local_conf:
        test-config:
          $WATCHER_CONFIG:
            watcher_strategies.vm_workload_consolidation:
              datasource: ceilometer

- job:
    name: watcher-tempest-workload_balancing
    parent: watcher-tempest-multinode
    vars:
      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_workload_balancing

- job:
    name: watcher-tempest-zone_migration
    parent: watcher-tempest-multinode
    vars:
      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_zone_migration

- job:
    name: watcher-tempest-host_maintenance
    parent: watcher-tempest-multinode
    vars:
      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_host_maintenance

- job:
    name: watcher-tempest-storage_balance
    parent: watcher-tempest-multinode
    vars:
      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_storage_balance
      devstack_local_conf:
        test-config:
          $TEMPEST_CONFIG:
            volume:
              backend_names: ['BACKEND_1', 'BACKEND_2']
            volume-feature-enabled:
              multi_backend: true

- job:
    name: watcher-tempest-strategies
    parent: watcher-tempest-multinode
    vars:
      tempest_concurrency: 1
      tempest_test_regex: watcher_tempest_plugin.tests.scenario.test_execute_strategies
      # All tests inside watcher_tempest_plugin.tests.scenario with tag "strategy"
      # or test_execute_strategies file
      # excluding tests with tag "real_load"
      tempest_test_regex: (^watcher_tempest_plugin.tests.scenario)(.*\[.*\bstrategy\b.*\].*)|(^watcher_tempest_plugin.tests.scenario.test_execute_strategies)
      tempest_exclude_regex: .*\[.*\breal_load\b.*\].*

- job:
    name: watcher-tempest-multinode
    parent: watcher-tempest-functional
    nodeset: openstack-two-node-focal
    nodeset: openstack-two-node-noble
    roles:
      - zuul: openstack/tempest
    group-vars:
@@ -103,10 +32,16 @@
              period: 120
            watcher_cluster_data_model_collectors.storage:
              period: 120
          $CINDER_CONF:
            # enable notifications in compute node, by default they are only
            # configured in the controller
            oslo_messaging_notifications:
              driver: messagingv2
        devstack_services:
          watcher-api: false
          watcher-decision-engine: true
          watcher-applier: false
          c-bak: false
          ceilometer: false
          ceilometer-acompute: false
          ceilometer-acentral: false
@@ -117,6 +52,13 @@
          rabbit: false
          mysql: false
    vars:
      devstack_localrc:
        GNOCCHI_ARCHIVE_POLICY_TEMPEST: "ceilometer-low-rate"
        CEILOMETER_PIPELINE_INTERVAL: 15
      devstack_services:
        ceilometer-acompute: false
        ceilometer-acentral: true
        ceilometer-anotification: true
      devstack_local_conf:
        post-config:
          $WATCHER_CONF:
@@ -126,6 +68,11 @@
              period: 120
            watcher_cluster_data_model_collectors.storage:
              period: 120
          $CINDER_CONF:
            # enable notifications in compute node, by default they are only
            # configured in the controller
            oslo_messaging_notifications:
              driver: messagingv2
        test-config:
          $TEMPEST_CONFIG:
            compute:
@@ -136,6 +83,8 @@
              block_migration_for_live_migration: true
            placement:
              min_microversion: 1.29
            telemetry:
              ceilometer_polling_interval: 15
      devstack_plugins:
        ceilometer: https://opendev.org/openstack/ceilometer

@@ -185,7 +134,7 @@
      - openstack/python-watcherclient
      - openstack/watcher-tempest-plugin
    vars: *base_vars
    irrelevant-files:
    irrelevant-files: &irrelevent_files
      - ^(test-|)requirements.txt$
      - ^.*\.rst$
      - ^api-ref/.*$
@@ -198,10 +147,256 @@
      - ^tox.ini$

- job:
    # This job is used in python-watcherclient repo
    name: watcherclient-tempest-functional
    parent: watcher-tempest-functional
    timeout: 4200
    name: watcher-sg-core-tempest-base
    parent: devstack-tempest
    nodeset: openstack-two-node-noble
    description: |
      This job is for testing watcher and sg-core/prometheus installation
    abstract: true
    pre-run:
      - playbooks/generate_prometheus_config.yml
    irrelevant-files: *irrelevent_files
    timeout: 7800
    required-projects: &base_sg_required_projects
      - openstack/aodh
      - openstack/ceilometer
      - openstack/tempest
      - openstack-k8s-operators/sg-core
      - openstack/watcher
      - openstack/python-watcherclient
      - openstack/watcher-tempest-plugin
      - openstack/devstack-plugin-prometheus
    vars:
      configure_swap_size: 8192
      devstack_plugins:
        ceilometer: https://opendev.org/openstack/ceilometer
        aodh: https://opendev.org/openstack/aodh
        sg-core: https://github.com/openstack-k8s-operators/sg-core
        watcher: https://opendev.org/openstack/watcher
        devstack-plugin-prometheus: https://opendev.org/openstack/devstack-plugin-prometheus
      devstack_services:
        ceilometer-acompute: true
        watcher-api: true
        watcher-decision-engine: true
        watcher-applier: true
        tempest: true
        # We do not need Swift in this job so disable it for speed
        # Swift services
        s-account: false
        s-container: false
        s-object: false
        s-proxy: false
        # Prometheus related service
        prometheus: true
        node_exporter: true
      devstack_localrc:
        CEILOMETER_BACKENDS: "sg-core"
        CEILOMETER_PIPELINE_INTERVAL: 15
        CEILOMETER_ALARM_THRESHOLD: 6000000000
        PROMETHEUS_CONFIG_FILE: "/home/zuul/prometheus.yml"
      devstack_local_conf:
        post-config:
          $WATCHER_CONF:
            watcher_datasources:
              datasources: prometheus
            prometheus_client:
              host: 127.0.0.1
              port: 9090
            watcher_cluster_data_model_collectors.compute:
              period: 120
            watcher_cluster_data_model_collectors.baremetal:
              period: 120
            watcher_cluster_data_model_collectors.storage:
              period: 120
            compute_model:
              enable_extended_attributes: true
            nova_client:
              api_version: "2.96"
        test-config:
          $TEMPEST_CONFIG:
            compute:
              min_compute_nodes: 2
              min_microversion: 2.56
            compute-feature-enabled:
              live_migration: true
              block_migration_for_live_migration: true
            placement:
              min_microversion: 1.29
            service_available:
              sg_core: True
            telemetry_services:
              metric_backends: prometheus
            telemetry:
              disable_ssl_certificate_validation: True
              ceilometer_polling_interval: 15
            optimize:
              datasource: prometheus
              extended_attributes_nova_microversion: "2.96"
              data_model_collectors_period: 120
      tempest_plugins:
        - watcher-tempest-plugin
      # All tests inside watcher_tempest_plugin.tests.scenario with tag "strategy"
      # and test_execute_strategies, test_data_model files
      # excluding tests with tag "real_load"
      tempest_test_regex: (watcher_tempest_plugin.tests.scenario)(.*\[.*\bstrategy\b.*\].*)|(watcher_tempest_plugin.tests.scenario.(test_execute_strategies|test_data_model))
      tempest_exclude_regex: .*\[.*\breal_load\b.*\].*
      tempest_concurrency: 1
      tempest_test_regex: watcher_tempest_plugin.tests.client_functional
      tox_envlist: all
      zuul_copy_output:
        /etc/prometheus/prometheus.yml: logs
    group-vars:
      subnode:
        devstack_plugins:
          ceilometer: https://opendev.org/openstack/ceilometer
          devstack-plugin-prometheus: https://opendev.org/openstack/devstack-plugin-prometheus
        devstack_services:
          ceilometer-acompute: true
          sg-core: false
          prometheus: false
          node_exporter: true
        devstack_localrc:
          CEILOMETER_BACKEND: "none"
          CEILOMETER_BACKENDS: "none"
        devstack_local_conf:
          post-config:
            $WATCHER_CONF:
              watcher_cluster_data_model_collectors.compute:
                period: 120
              watcher_cluster_data_model_collectors.baremetal:
                period: 120
              watcher_cluster_data_model_collectors.storage:
                period: 120

- job:
    name: watcher-prometheus-integration
    parent: watcher-sg-core-tempest-base
    vars:
      devstack_services:
        ceilometer-acompute: false
        node_exporter: false
    group-vars:
      subnode:
        devstack_services:
          ceilometer-acompute: false
          node_exporter: false

- job:
    name: watcher-aetos-integration
    parent: watcher-sg-core-tempest-base
    description: |
      This job tests Watcher with Aetos reverse-proxy for Prometheus
      using Keystone authentication instead of direct Prometheus access.
    required-projects:
      - openstack/python-observabilityclient
      - openstack/aetos
    vars: &aetos_vars
      devstack_services:
        ceilometer-acompute: false
        node_exporter: false
      devstack_plugins:
        ceilometer: https://opendev.org/openstack/ceilometer
        sg-core: https://github.com/openstack-k8s-operators/sg-core
        watcher: https://opendev.org/openstack/watcher
        devstack-plugin-prometheus: https://opendev.org/openstack/devstack-plugin-prometheus
        aetos: https://opendev.org/openstack/aetos
      devstack_local_conf:
        post-config:
          $WATCHER_CONF:
            watcher_datasources:
              datasources: aetos
            aetos_client:
              interface: public
              region_name: RegionOne
              fqdn_label: fqdn
              instance_uuid_label: resource
        test-config:
          $TEMPEST_CONFIG:
            optimize:
              datasource: prometheus
    group-vars:
      subnode:
        devstack_services:
          ceilometer-acompute: false
          node_exporter: false

- job:
    name: watcher-prometheus-integration-realdata
    parent: watcher-sg-core-tempest-base
    vars: &realdata_vars
      devstack_services:
        ceilometer-acompute: true
        node_exporter: true
      devstack_localrc:
        NODE_EXPORTER_COLLECTOR_EXCLUDE: ""
      devstack_local_conf:
        test-config:
          $TEMPEST_CONFIG:
            optimize:
              datasource: ""
              real_workload_period: 300
      # All tests inside watcher_tempest_plugin.tests.scenario with tag "real_load"
      tempest_test_regex: (^watcher_tempest_plugin.tests.scenario)(.*\[.*\breal_load\b.*\].*)
      tempest_exclude_regex: ""
    group-vars: &realdata_group_vars
      subnode:
        devstack_services:
          ceilometer-acompute: true
          node_exporter: true
        devstack_localrc:
          NODE_EXPORTER_COLLECTOR_EXCLUDE: ""

- job:
    name: watcher-prometheus-integration-threading
    parent: watcher-prometheus-integration
    vars:
      devstack_localrc:
        'SYSTEMD_ENV_VARS["watcher-decision-engine"]': OS_WATCHER_DISABLE_EVENTLET_PATCHING=true

- job:
    name: openstack-tox-py312-threading
    parent: openstack-tox-py312
    description: |
      Run tox with the py3-threading environment.
    vars:
      tox_envlist: py3-threading

- job:
    name: watcher-aetos-integration-realdata
    parent: watcher-aetos-integration
    vars: *realdata_vars
    group-vars: *realdata_group_vars

- project:
    queue: watcher
    templates:
      - check-requirements
      - openstack-cover-jobs
      - openstack-python3-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - openstack-tox-py312-threading
        - watcher-tempest-functional
        - watcher-grenade
        - watcher-tempest-strategies
        - watcher-tempest-actuator
        - python-watcherclient-functional:
            files:
              - ^watcher/api/*
        - watcher-tempest-functional-ipv6-only
        - watcher-prometheus-integration
        - watcher-prometheus-integration-threading
        - watcher-aetos-integration
    gate:
      jobs:
        - watcher-tempest-functional
        - watcher-tempest-functional-ipv6-only
    experimental:
      jobs:
        - watcher-prometheus-integration-realdata
        - watcher-aetos-integration-realdata
    periodic-weekly:
      jobs:
        - watcher-prometheus-integration-realdata
        - watcher-aetos-integration-realdata
```
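The strategy jobs above pair an include regex (scenario tests tagged `strategy`, plus the `test_execute_strategies` module) with `tempest_exclude_regex` to drop tests tagged `real_load`. A small sketch of how that include/exclude pair behaves; the regexes are copied from the job definitions, but the test IDs below are made up for illustration:

```python
import re

# Regexes taken from the watcher-tempest-strategies job definition above.
include = re.compile(
    r"(^watcher_tempest_plugin.tests.scenario)(.*\[.*\bstrategy\b.*\].*)"
    r"|(^watcher_tempest_plugin.tests.scenario.test_execute_strategies)"
)
exclude = re.compile(r".*\[.*\breal_load\b.*\].*")

def selected(test_id: str) -> bool:
    """Mimic tempest's selection: must match include and not match exclude."""
    return bool(include.search(test_id)) and not exclude.search(test_id)

# Hypothetical test IDs, for illustration only.
tests = [
    "watcher_tempest_plugin.tests.scenario.test_x.TestX.test_a[id-1,strategy]",
    "watcher_tempest_plugin.tests.scenario.test_x.TestX.test_b[id-2,strategy,real_load]",
    "watcher_tempest_plugin.tests.scenario.test_execute_strategies.TestS.test_c",
    "watcher_tempest_plugin.tests.api.test_api.TestApi.test_d",
]
print([selected(t) for t in tests])  # [True, False, True, False]
```

The exclude pattern wins even when the include pattern matches, which is why a `strategy` test that also carries the `real_load` tag stays out of the regular job and only runs in the `-realdata` variants.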
api-ref parameter definitions:

```
@@ -189,6 +189,16 @@ action_state:
  in: body
  required: true
  type: string
action_status_message:
  description: |
    Message with additional information about the Action state.
    This field can be set when transitioning an action to SKIPPED state,
    or updated for actions that are already in SKIPPED state to provide
    more detailed explanations, fix typos, or expand on initial reasons.
  in: body
  required: false
  type: string
  min_version: 1.5
action_type:
  description: |
    Action type based on specific API action. Actions in Watcher are
@@ -230,6 +240,13 @@ actionplan_state:
  in: body
  required: false
  type: string
actionplan_status_message:
  description: |
    Message with additional information about the Action Plan state.
  in: body
  required: false
  type: string
  min_version: 1.5

# Audit
audit_autotrigger:
@@ -320,6 +337,13 @@ audit_state:
  in: body
  required: true
  type: string
audit_status_message:
  description: |
    Message with additional information about the Audit state.
  in: body
  required: false
  type: string
  min_version: 1.5
audit_strategy:
  description: |
    The UUID or name of the Strategy.
@@ -420,12 +444,24 @@ links:
  type: array

# Data Model Node
node_disabled_reason:
  description: |
    The Disabled Reason of the node.
  in: body
  required: true
  type: string
node_disk:
  description: |
    The Disk of the node(in GiB).
  in: body
  required: true
  type: integer
node_disk_gb_reserved:
  description: |
    The Disk Reserved of the node (in GiB).
  in: body
  required: true
  type: integer
node_disk_ratio:
  description: |
    The Disk Ratio of the node.
@@ -444,6 +480,12 @@ node_memory:
  in: body
  required: true
  type: integer
node_memory_mb_reserved:
  description: |
    The Memory Reserved of the node(in MiB).
  in: body
  required: true
  type: integer
node_memory_ratio:
  description: |
    The Memory Ratio of the node.
@@ -456,6 +498,12 @@ node_state:
  in: body
  required: true
  type: string
node_status:
  description: |
    The Status of the node.
  in: body
  required: true
  type: string
node_uuid:
  description: |
    The Unique UUID of the node.
@@ -468,13 +516,18 @@ node_vcpu_ratio:
  in: body
  required: true
  type: float
node_vcpu_reserved:
  description: |
    The Vcpu Reserved of the node.
  in: body
  required: true
  type: integer
node_vcpus:
  description: |
    The Vcpu of the node.
  in: body
  required: true
  type: integer

# Scoring Engine
scoring_engine_description:
  description: |
@@ -502,18 +555,50 @@ server_disk:
  in: body
  required: true
  type: integer
server_flavor_extra_specs:
  description: |
    The flavor extra specs of the server.
  in: body
  required: true
  type: JSON
  min_version: 1.6
server_locked:
  description: |
    Whether the server is locked.
  in: body
  required: true
  type: boolean
server_memory:
  description: |
    The Memory of server.
  in: body
  required: true
  type: integer
server_metadata:
  description: |
    The metadata associated with the server.
  in: body
  required: true
  type: JSON
server_name:
  description: |
    The Name of the server.
  in: body
  required: true
  type: string
server_pinned_az:
  description: |
    The pinned availability zone of the server.
  in: body
  required: true
  type: string
  min_version: 1.6
server_project_id:
  description: |
    The project ID of the server.
  in: body
  required: true
  type: string
server_state:
  description: |
    The State of the server.
@@ -532,6 +617,12 @@ server_vcpus:
  in: body
  required: true
  type: integer
server_watcher_exclude:
  description: |
    Whether the server is excluded from the scope.
  in: body
  required: true
  type: boolean
# Service
service_host:
  description: |
```
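The new `*_status_message` fields are gated on API microversion 1.5, and `server_flavor_extra_specs`/`server_pinned_az` on 1.6. A minimal sketch of how a server could strip versioned fields from a response body for older clients; the helper and field table below are illustrative stand-ins, not Watcher's actual implementation:

```python
# Hypothetical field -> minimum-microversion table, modeled on the
# min_version values documented above.
VERSIONED_FIELDS = {
    "status_message": (1, 5),
    "server_flavor_extra_specs": (1, 6),
    "server_pinned_az": (1, 6),
}

def filter_body(body: dict, requested: str) -> dict:
    """Drop fields the requested microversion is too old to see."""
    major, minor = (int(p) for p in requested.split("."))
    return {
        k: v for k, v in body.items()
        if VERSIONED_FIELDS.get(k, (0, 0)) <= (major, minor)
    }

action = {"uuid": "54acc7a0", "state": "SKIPPED", "status_message": None}
print(filter_body(action, "1.4"))  # {'uuid': '54acc7a0', 'state': 'SKIPPED'}
print(filter_body(action, "1.5"))  # status_message is now included
```

Comparing version tuples rather than strings keeps "1.10" ordering after "1.9", which string comparison would get wrong.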
api-ref/source/samples/action-skip-request-with-message.json (new file, 12 lines)

```json
[
    {
        "op": "replace",
        "value": "SKIPPED",
        "path": "/state"
    },
    {
        "op": "replace",
        "value": "Skipping due to maintenance window",
        "path": "/status_message"
    }
]
```

api-ref/source/samples/action-skip-request.json (new file, 7 lines)

```json
[
    {
        "op": "replace",
        "value": "SKIPPED",
        "path": "/state"
    }
]
```

api-ref/source/samples/action-skip-response.json (new file, 29 lines)

```json
{
    "state": "SKIPPED",
    "description": "Migrate instance to another compute node",
    "parents": [
        "b4529294-1de6-4302-b57a-9b5d5dc363c6"
    ],
    "links": [
        {
            "rel": "self",
            "href": "http://controller:9322/v1/actions/54acc7a0-91b0-46ea-a5f7-4ae2b9df0b0a"
        },
        {
            "rel": "bookmark",
            "href": "http://controller:9322/actions/54acc7a0-91b0-46ea-a5f7-4ae2b9df0b0a"
        }
    ],
    "action_plan_uuid": "4cbc4ede-0d25-481b-b86e-998dbbd4f8bf",
    "uuid": "54acc7a0-91b0-46ea-a5f7-4ae2b9df0b0a",
    "deleted_at": null,
    "updated_at": "2018-04-10T12:15:44.026973+00:00",
    "input_parameters": {
        "migration_type": "live",
        "destination_node": "compute-2",
        "resource_id": "a1b2c3d4-e5f6-7890-1234-567890abcdef"
    },
    "action_type": "migrate",
    "created_at": "2018-04-10T11:59:12.725147+00:00",
    "status_message": "Action skipped by user. Reason:Skipping due to maintenance window"
}
```
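The skip request samples above are JSON Patch (RFC 6902) documents sent via PATCH to the action resource. A minimal sketch of applying such `replace` operations to a flat action representation; this is a stand-in to show the semantics, not Watcher's server-side code:

```python
def apply_json_patch(doc: dict, patch: list) -> dict:
    """Apply RFC 6902 'replace' operations to a flat (one-level) document."""
    result = dict(doc)
    for op in patch:
        if op["op"] != "replace":
            raise NotImplementedError(op["op"])
        # Only top-level paths like "/state" are handled in this sketch.
        key = op["path"].lstrip("/")
        result[key] = op["value"]
    return result

action = {"state": "PENDING", "status_message": None}
patch = [
    {"op": "replace", "path": "/state", "value": "SKIPPED"},
    {"op": "replace", "path": "/status_message",
     "value": "Skipping due to maintenance window"},
]
print(apply_json_patch(action, patch))
# {'state': 'SKIPPED', 'status_message': 'Skipping due to maintenance window'}
```

A real client would send this list as the PATCH body with `Content-Type: application/json-patch+json`; the response sample above shows the server folding the supplied reason into `status_message`.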
New sample file, 7 lines (filename not captured in this view):

```json
[
    {
        "op": "replace",
        "value": "Action skipped due to scheduled maintenance window",
        "path": "/status_message"
    }
]
```

New sample file, 29 lines (filename not captured in this view):

```json
{
    "state": "SKIPPED",
    "description": "Migrate instance to another compute node",
    "parents": [
        "b4529294-1de6-4302-b57a-9b5d5dc363c6"
    ],
    "links": [
        {
            "rel": "self",
            "href": "http://controller:9322/v1/actions/54acc7a0-91b0-46ea-a5f7-4ae2b9df0b0a"
        },
        {
            "rel": "bookmark",
            "href": "http://controller:9322/actions/54acc7a0-91b0-46ea-a5f7-4ae2b9df0b0a"
        }
    ],
    "action_plan_uuid": "4cbc4ede-0d25-481b-b86e-998dbbd4f8bf",
    "uuid": "54acc7a0-91b0-46ea-a5f7-4ae2b9df0b0a",
    "deleted_at": null,
    "updated_at": "2018-04-10T12:20:15.123456+00:00",
    "input_parameters": {
        "migration_type": "live",
        "destination_node": "compute-2",
        "resource_id": "a1b2c3d4-e5f6-7890-1234-567890abcdef"
    },
    "action_type": "migrate",
    "created_at": "2018-04-10T11:59:12.725147+00:00",
    "status_message": "Action skipped by user. Reason: Action skipped due to scheduled maintenance window"
}
```
Updates to existing sample files, adding the `status_message` field:

```diff
@@ -21,7 +21,8 @@
         "uuid": "4cbc4ede-0d25-481b-b86e-998dbbd4f8bf",
         "audit_uuid": "7d100b05-0a86-491f-98a7-f93da19b272a",
         "created_at": "2018-04-10T11:59:52.640067+00:00",
-        "hostname": "controller"
+        "hostname": "controller",
+        "status_message": null
         }
     ]
 }
@@ -17,5 +17,6 @@
     "strategy_name": "dummy_with_resize",
     "uuid": "4cbc4ede-0d25-481b-b86e-998dbbd4f8bf",
     "audit_uuid": "7d100b05-0a86-491f-98a7-f93da19b272a",
-    "hostname": "controller"
-}
+    "hostname": "controller",
+    "status_message": null
+}
@@ -24,7 +24,8 @@
             "duration": 3.2
         },
         "action_type": "sleep",
-        "created_at": "2018-03-26T11:56:08.235226+00:00"
+        "created_at": "2018-03-26T11:56:08.235226+00:00",
+        "status_message": null
         }
     ]
 }
@@ -22,5 +22,6 @@
         "message": "Welcome"
     },
     "action_type": "nop",
-    "created_at": "2018-04-10T11:59:12.725147+00:00"
-}
+    "created_at": "2018-04-10T11:59:12.725147+00:00",
+    "status_message": null
+}
@@ -51,5 +51,6 @@
     "updated_at": null,
     "hostname": null,
     "start_time": null,
-    "end_time": null
+    "end_time": null,
+    "status_message": null
 }
@@ -30,7 +30,7 @@
         }
     },
     "auto_trigger": false,
-    "force": false,
+    "force": false,
     "uuid": "65a5da84-5819-4aea-8278-a28d2b489028",
     "goal_name": "workload_balancing",
     "scope": [],
@@ -53,7 +53,8 @@
     "updated_at": "2018-04-06T09:44:01.604146+00:00",
     "hostname": "controller",
     "start_time": null,
-    "end_time": null
+    "end_time": null,
+    "status_message": null
     }
     ]
 }
@@ -51,5 +51,6 @@
     "updated_at": "2018-04-06T11:54:01.266447+00:00",
     "hostname": "controller",
     "start_time": null,
-    "end_time": null
+    "end_time": null,
+    "status_message": null
 }
```
@@ -1,38 +1,62 @@
{
    "context": [
        {
            "server_uuid": "1bf91464-9b41-428d-a11e-af691e5563bb",
            "server_watcher_exclude": false,
            "server_name": "chenke-test1",
            "server_vcpus": "1",
            "server_state": "active",
            "server_memory": "512",
            "server_disk": "1",
            "server_state": "active",
            "node_uuid": "253e5dd0-9384-41ab-af13-4f2c2ce26112",
            "server_vcpus": "1",
            "server_metadata": {},
            "server_project_id": "baea342fc74b4a1785b4a40c69a8d958",
            "server_locked": false,
            "server_uuid": "1bf91464-9b41-428d-a11e-af691e5563bb",
            "server_pinned_az": "nova",
            "server_flavor_extra_specs": {
                "hw_rng:allowed": true
            },
            "node_hostname": "localhost.localdomain",
            "node_vcpus": "4",
            "node_vcpu_ratio": "16.0",
            "node_memory": "16383",
            "node_memory_ratio": "1.5",
            "node_disk": "37"
            "node_disk_ratio": "1.0",
            "node_status": "enabled",
            "node_disabled_reason": null,
            "node_state": "up",
            "node_memory": "16383",
            "node_memory_mb_reserved": "512",
            "node_disk": "37",
            "node_disk_gb_reserved": "0",
            "node_vcpus": "4",
            "node_vcpu_reserved": "0",
            "node_memory_ratio": "1.5",
            "node_vcpu_ratio": "16.0",
            "node_disk_ratio": "1.0",
            "node_uuid": "253e5dd0-9384-41ab-af13-4f2c2ce26112"
        },
        {
            "server_uuid": "e2cb5f6f-fa1d-4ba2-be1e-0bf02fa86ba4",
            "server_watcher_exclude": false,
            "server_name": "chenke-test2",
            "server_vcpus": "1",
            "server_state": "active",
            "server_memory": "512",
            "server_disk": "1",
            "server_state": "active",
            "node_uuid": "253e5dd0-9384-41ab-af13-4f2c2ce26112",
            "server_vcpus": "1",
            "server_metadata": {},
            "server_project_id": "baea342fc74b4a1785b4a40c69a8d958",
            "server_locked": false,
            "server_uuid": "e2cb5f6f-fa1d-4ba2-be1e-0bf02fa86ba4",
            "server_pinned_az": "nova",
            "server_flavor_extra_specs": {},
            "node_hostname": "localhost.localdomain",
            "node_vcpus": "4",
            "node_vcpu_ratio": "16.0",
            "node_memory": "16383",
            "node_memory_ratio": "1.5",
            "node_disk": "37"
            "node_disk_ratio": "1.0",
            "node_status": "enabled",
            "node_disabled_reason": null,
            "node_state": "up",
            "node_memory": "16383",
            "node_memory_mb_reserved": "512",
            "node_disk": "37",
            "node_disk_gb_reserved": "0",
            "node_vcpus": "4",
            "node_vcpu_reserved": "0",
            "node_memory_ratio": "1.5",
            "node_vcpu_ratio": "16.0",
            "node_disk_ratio": "1.0",
            "node_uuid": "253e5dd0-9384-41ab-af13-4f2c2ce26112"
        }
    ]
}

@@ -139,6 +139,7 @@ Response
- global_efficacy: actionplan_global_efficacy
- links: links
- hostname: actionplan_hostname
- status_message: actionplan_status_message

**Example JSON representation of an Action Plan:**

@@ -177,6 +178,7 @@ Response
- global_efficacy: actionplan_global_efficacy
- links: links
- hostname: actionplan_hostname
- status_message: actionplan_status_message

**Example JSON representation of an Audit:**

@@ -233,6 +235,7 @@ version 1:
- global_efficacy: actionplan_global_efficacy
- links: links
- hostname: actionplan_hostname
- status_message: actionplan_status_message

**Example JSON representation of an Action Plan:**


@@ -23,6 +23,9 @@ following:

- **PENDING** : the ``Action`` has not been executed yet by the
  ``Watcher Applier``.
- **SKIPPED** : the ``Action`` will not be executed because a predefined
  skipping condition is found by the ``Watcher Applier``, or it is explicitly
  skipped by the ``Administrator``.
- **ONGOING** : the ``Action`` is currently being processed by the
  ``Watcher Applier``.
- **SUCCEEDED** : the ``Action`` has been executed successfully
@@ -111,6 +114,7 @@ Response
- description: action_description
- input_parameters: action_input_parameters
- links: links
- status_message: action_status_message

**Example JSON representation of an Action:**

@@ -148,8 +152,111 @@ Response
- description: action_description
- input_parameters: action_input_parameters
- links: links
- status_message: action_status_message

**Example JSON representation of an Action:**

.. literalinclude:: samples/actions-show-response.json
   :language: javascript

Skip Action
===========

.. rest_method:: PATCH /v1/actions/{action_ident}

Skips an Action resource by changing its state to SKIPPED.

.. note::
   Only Actions in PENDING state can be skipped. The Action must belong to
   an Action Plan in RECOMMENDED or PENDING state. This operation requires
   API microversion 1.5 or later.

Normal response codes: 200

Error codes: 400,404,403,409

Request
-------

.. rest_parameters:: parameters.yaml

- action_ident: action_ident

**Example Action skip request:**

.. literalinclude:: samples/action-skip-request.json
   :language: javascript

**Example Action skip request with custom status message:**

.. literalinclude:: samples/action-skip-request-with-message.json
   :language: javascript

Response
--------

.. rest_parameters:: parameters.yaml

- uuid: uuid
- action_type: action_type
- state: action_state
- action_plan_uuid: action_action_plan_uuid
- parents: action_parents
- description: action_description
- input_parameters: action_input_parameters
- links: links
- status_message: action_status_message

**Example JSON representation of a skipped Action:**

.. literalinclude:: samples/action-skip-response.json
   :language: javascript

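The skip request above is a small JSON-Patch document. As a minimal sketch (the helper name and exact patch layout are illustrative assumptions based on the sample request files, not code from Watcher itself), the body for both sample requests can be assembled like this:

```python
import json

def build_skip_patch(status_message=None):
    # JSON-Patch body moving an Action from PENDING to SKIPPED; the
    # optional status_message records why the action was skipped.
    # (Hypothetical helper; layout mirrors the sample request files.)
    patch = [{"op": "replace", "path": "/state", "value": "SKIPPED"}]
    if status_message is not None:
        patch.append({"op": "replace", "path": "/status_message",
                      "value": status_message})
    return json.dumps(patch)
```

Without an argument this yields the plain skip request; with one, the "custom status message" variant.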
Update Action Status Message
============================

.. rest_method:: PATCH /v1/actions/{action_ident}

Updates the status_message of an Action that is already in SKIPPED state.

.. note::
   The status_message field can only be updated for Actions that are currently
   in SKIPPED state. This allows administrators to fix typos, provide more
   detailed explanations, or expand on reasons that were initially omitted.
   This operation requires API microversion 1.5 or later.

Normal response codes: 200

Error codes: 400,404,403,409

Request
-------

.. rest_parameters:: parameters.yaml

- action_ident: action_ident

**Example status_message update request for a SKIPPED action:**

.. literalinclude:: samples/action-update-status-message-request.json
   :language: javascript

Response
--------

.. rest_parameters:: parameters.yaml

- uuid: uuid
- action_type: action_type
- state: action_state
- action_plan_uuid: action_action_plan_uuid
- parents: action_parents
- description: action_description
- input_parameters: action_input_parameters
- links: links
- status_message: action_status_message

**Example JSON representation of an Action with updated status_message:**

.. literalinclude:: samples/action-update-status-message-response.json
   :language: javascript
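Putting the pieces of such a request together: a sketch (the function name is hypothetical, and the microversion header value assumes the usual OpenStack convention with Watcher's ``infra-optim`` service type) of the URL, headers, and JSON-Patch body for a status_message update:

```python
import json

def build_status_message_update(base_url, action_uuid, message):
    # Assemble URL, headers, and JSON-Patch body for updating the
    # status_message of a SKIPPED Action; sending it is left to any
    # HTTP client. (Illustrative sketch, not Watcher client code.)
    url = "%s/v1/actions/%s" % (base_url.rstrip("/"), action_uuid)
    headers = {
        "Content-Type": "application/json",
        # status_message editing requires API microversion 1.5 or later
        "OpenStack-API-Version": "infra-optim 1.5",
    }
    body = json.dumps([{"op": "replace", "path": "/status_message",
                        "value": message}])
    return url, headers, body
```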
@@ -85,6 +85,7 @@ version 1:
- start_time: audit_starttime_resp
- end_time: audit_endtime_resp
- force: audit_force
- status_message: audit_status_message

**Example JSON representation of an Audit:**

@@ -184,6 +185,7 @@ Response
- start_time: audit_starttime_resp
- end_time: audit_endtime_resp
- force: audit_force
- status_message: audit_status_message

**Example JSON representation of an Audit:**

@@ -231,6 +233,7 @@ Response
- start_time: audit_starttime_resp
- end_time: audit_endtime_resp
- force: audit_force
- status_message: audit_status_message

**Example JSON representation of an Audit:**

@@ -286,6 +289,7 @@ version 1:
- start_time: audit_starttime_resp
- end_time: audit_endtime_resp
- force: audit_force
- status_message: audit_status_message

**Example JSON representation of an Audit:**

@@ -341,6 +345,7 @@ Response
- start_time: audit_starttime_resp
- end_time: audit_endtime_resp
- force: audit_force
- status_message: audit_status_message

**Example JSON representation of an Audit:**


@@ -35,21 +35,32 @@ Response

.. rest_parameters:: parameters.yaml

- server_uuid: server_uuid
- server_watcher_exclude: server_watcher_exclude
- server_name: server_name
- server_vcpus: server_vcpus
- server_state: server_state
- server_memory: server_memory
- server_disk: server_disk
- server_state: server_state
- node_uuid: node_uuid
- server_vcpus: server_vcpus
- server_metadata: server_metadata
- server_project_id: server_project_id
- server_locked: server_locked
- server_uuid: server_uuid
- server_pinned_az: server_pinned_az
- server_flavor_extra_specs: server_flavor_extra_specs
- node_hostname: node_hostname
- node_vcpus: node_vcpus
- node_vcpu_ratio: node_vcpu_ratio
- node_memory: node_memory
- node_memory_ratio: node_memory_ratio
- node_disk: node_disk
- node_disk_ratio: node_disk_ratio
- node_status: node_status
- node_disabled_reason: node_disabled_reason
- node_state: node_state
- node_memory: node_memory
- node_memory_mb_reserved: node_memory_mb_reserved
- node_disk: node_disk
- node_disk_gb_reserved: node_disk_gb_reserved
- node_vcpus: node_vcpus
- node_vcpu_reserved: node_vcpu_reserved
- node_memory_ratio: node_memory_ratio
- node_vcpu_ratio: node_vcpu_ratio
- node_disk_ratio: node_disk_ratio
- node_uuid: node_uuid

**Example JSON representation of a Data Model:**


@@ -12,7 +12,7 @@ Here are some examples of ``Goals``:
- minimize the energy consumption
- minimize the number of compute nodes (consolidation)
- balance the workload among compute nodes
- minimize the license cost (some softwares have a licensing model which is
- minimize the license cost (some software have a licensing model which is
  based on the number of sockets or cores where the software is deployed)
- find the most appropriate moment for a planned maintenance on a
  given group of hosts (which may be an entire availability zone):
@@ -123,4 +123,4 @@ Response
**Example JSON representation of a Goal:**

.. literalinclude:: samples/goal-show-response.json
   :language: javascript
   :language: javascript

23
bindep.txt
Normal file
@@ -0,0 +1,23 @@
# This is a cross-platform list tracking distribution packages needed for install and tests;
# see https://docs.openstack.org/infra/bindep/ for additional information.

mysql [platform:rpm !platform:redhat test]
mysql-client [platform:dpkg !platform:debian test]
mysql-devel [platform:rpm !platform:redhat test]
mysql-server [!platform:redhat !platform:debian test]
mariadb-devel [platform:rpm platform:redhat test]
mariadb-server [platform:rpm platform:redhat platform:debian test]
python3-all [platform:dpkg test]
python3-all-dev [platform:dpkg test]
python3 [platform:rpm test]
python3-devel [platform:rpm test]
sqlite-devel [platform:rpm test]
# gettext and graphviz are needed by doc builds only.
gettext [doc]
graphviz [doc]
# fonts-freefont-otf is needed for pdf docs builds with the 'xelatex' engine
fonts-freefont-otf [pdf-docs]
texlive [pdf-docs]
texlive-latex-recommended [pdf-docs]
texlive-xetex [pdf-docs]
latexmk [pdf-docs]
@@ -1,42 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# This is an example Apache2 configuration file for using the
# Watcher API through mod_wsgi. This version assumes you are
# running devstack to configure the software.

Listen %WATCHER_SERVICE_PORT%

<VirtualHost *:%WATCHER_SERVICE_PORT%>
    WSGIDaemonProcess watcher-api user=%USER% processes=%APIWORKERS% threads=1 display-name=%{GROUP}
    WSGIScriptAlias / %WATCHER_WSGI_DIR%/app.wsgi
    WSGIApplicationGroup %{GLOBAL}
    WSGIProcessGroup watcher-api
    WSGIPassAuthorization On

    ErrorLogFormat "%M"
    ErrorLog /var/log/%APACHE_NAME%/watcher-api.log
    CustomLog /var/log/%APACHE_NAME%/watcher-api-access.log combined


    <Directory %WATCHER_WSGI_DIR%>
        WSGIProcessGroup watcher-api
        WSGIApplicationGroup %{GLOBAL}
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
@@ -1,5 +1,3 @@
#!/bin/bash
#
# lib/watcher
# Functions to control the configuration and operation of the watcher services

@@ -38,7 +36,6 @@ GITBRANCH["python-watcherclient"]=${WATCHERCLIENT_BRANCH:-master}
GITDIR["python-watcherclient"]=$DEST/python-watcherclient

WATCHER_STATE_PATH=${WATCHER_STATE_PATH:=$DATA_DIR/watcher}
WATCHER_AUTH_CACHE_DIR=${WATCHER_AUTH_CACHE_DIR:-/var/cache/watcher}

WATCHER_CONF_DIR=/etc/watcher
WATCHER_CONF=$WATCHER_CONF_DIR/watcher.conf
@@ -58,29 +55,16 @@ else
    WATCHER_BIN_DIR=$(get_python_exec_prefix)
fi

# There are 2 modes, which is "uwsgi" which runs with an apache
# proxy uwsgi in front of it, or "mod_wsgi", which runs in
# apache. mod_wsgi is deprecated, don't use it.
WATCHER_USE_WSGI_MODE=${WATCHER_USE_WSGI_MODE:-$WSGI_MODE}
WATCHER_UWSGI=$WATCHER_BIN_DIR/watcher-api-wsgi
WATCHER_UWSGI=watcher.wsgi.api:application
WATCHER_UWSGI_CONF=$WATCHER_CONF_DIR/watcher-uwsgi.ini

if is_suse; then
    WATCHER_WSGI_DIR=${WATCHER_WSGI_DIR:-/srv/www/htdocs/watcher}
else
    WATCHER_WSGI_DIR=${WATCHER_WSGI_DIR:-/var/www/watcher}
fi
WATCHER_WSGI_DIR=${WATCHER_WSGI_DIR:-/var/www/watcher}
# Public facing bits
WATCHER_SERVICE_HOST=${WATCHER_SERVICE_HOST:-$SERVICE_HOST}
WATCHER_SERVICE_PORT=${WATCHER_SERVICE_PORT:-9322}
WATCHER_SERVICE_PORT_INT=${WATCHER_SERVICE_PORT_INT:-19322}
WATCHER_SERVICE_PROTOCOL=${WATCHER_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}

if [[ "$WATCHER_USE_WSGI_MODE" == "uwsgi" ]]; then
    WATCHER_API_URL="$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST/infra-optim"
else
    WATCHER_API_URL="$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT"
fi
WATCHER_API_URL="$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST/infra-optim"

# Entry Points
# ------------
@@ -103,12 +87,8 @@ function _cleanup_watcher_apache_wsgi {
# cleanup_watcher() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_watcher {
    sudo rm -rf $WATCHER_STATE_PATH $WATCHER_AUTH_CACHE_DIR
    if [[ "$WATCHER_USE_WSGI_MODE" == "uwsgi" ]]; then
        remove_uwsgi_config "$WATCHER_UWSGI_CONF" "$WATCHER_UWSGI"
    else
        _cleanup_watcher_apache_wsgi
    fi
    sudo rm -rf $WATCHER_STATE_PATH
    remove_uwsgi_config "$WATCHER_UWSGI_CONF" "$WATCHER_UWSGI"
}

# configure_watcher() - Set config files, create data dirs, etc
@@ -157,31 +137,6 @@ function create_watcher_accounts {
        "$WATCHER_API_URL"
}

# _config_watcher_apache_wsgi() - Set WSGI config files of watcher
function _config_watcher_apache_wsgi {
    local watcher_apache_conf
    if [[ "$WATCHER_USE_WSGI_MODE" == "mod_wsgi" ]]; then
        local service_port=$WATCHER_SERVICE_PORT
        if is_service_enabled tls-proxy; then
            service_port=$WATCHER_SERVICE_PORT_INT
            service_protocol="http"
        fi
        sudo mkdir -p $WATCHER_WSGI_DIR
        sudo cp $WATCHER_DIR/watcher/api/app.wsgi $WATCHER_WSGI_DIR/app.wsgi
        watcher_apache_conf=$(apache_site_config_for watcher-api)
        sudo cp $WATCHER_DEVSTACK_FILES_DIR/apache-watcher-api.template $watcher_apache_conf
        sudo sed -e "
            s|%WATCHER_SERVICE_PORT%|$service_port|g;
            s|%WATCHER_WSGI_DIR%|$WATCHER_WSGI_DIR|g;
            s|%USER%|$STACK_USER|g;
            s|%APIWORKERS%|$API_WORKERS|g;
            s|%APACHE_NAME%|$APACHE_NAME|g;
        " -i $watcher_apache_conf
        enable_apache_site watcher-api
    fi

}

# create_watcher_conf() - Create a new watcher.conf file
function create_watcher_conf {
    # (Re)create ``watcher.conf``
@@ -199,21 +154,16 @@ function create_watcher_conf {
        iniset $WATCHER_CONF api host "$(ipv6_unquote $WATCHER_SERVICE_HOST)"
        iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT_INT"
        # iniset $WATCHER_CONF api enable_ssl_api "True"
    else
        if [[ "$WATCHER_USE_WSGI_MODE" == "mod_wsgi" ]]; then
            iniset $WATCHER_CONF api host "$(ipv6_unquote $WATCHER_SERVICE_HOST)"
            iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT"
        fi
    fi

    iniset $WATCHER_CONF oslo_policy policy_file $WATCHER_POLICY_YAML

    iniset $WATCHER_CONF oslo_messaging_notifications driver "messagingv2"

    configure_auth_token_middleware $WATCHER_CONF watcher $WATCHER_AUTH_CACHE_DIR
    configure_auth_token_middleware $WATCHER_CONF watcher $WATCHER_AUTH_CACHE_DIR "watcher_clients_auth"
    configure_keystone_authtoken_middleware $WATCHER_CONF watcher
    configure_keystone_authtoken_middleware $WATCHER_CONF watcher "watcher_clients_auth"

    if is_fedora || is_suse; then
    if is_fedora; then
        # watcher defaults to /usr/local/bin, but fedora and suse pip like to
        # install things in /usr/bin
        iniset $WATCHER_CONF DEFAULT bindir "/usr/bin"
@@ -231,12 +181,8 @@ function create_watcher_conf {
    # Format logging
    setup_logging $WATCHER_CONF

    #config apache files
    if [[ "$WATCHER_USE_WSGI_MODE" == "uwsgi" ]]; then
        write_uwsgi_config "$WATCHER_UWSGI_CONF" "$WATCHER_UWSGI" "/infra-optim"
    else
        _config_watcher_apache_wsgi
    fi
    write_uwsgi_config "$WATCHER_UWSGI_CONF" "$WATCHER_UWSGI" "/infra-optim" "" "watcher-api"

    # Register SSL certificates if provided
    if is_ssl_enabled_service watcher; then
        ensure_certificates WATCHER
@@ -248,13 +194,6 @@ function create_watcher_conf {
    fi
}

# create_watcher_cache_dir() - Part of the init_watcher() process
function create_watcher_cache_dir {
    # Create cache dir
    sudo install -d -o $STACK_USER $WATCHER_AUTH_CACHE_DIR
    rm -rf $WATCHER_AUTH_CACHE_DIR/*
}

# init_watcher() - Initialize databases, etc.
function init_watcher {
    # clean up from previous (possibly aborted) runs
@@ -266,7 +205,6 @@ function init_watcher {
        # Create watcher schema
        $WATCHER_BIN_DIR/watcher-db-manage --config-file $WATCHER_CONF upgrade
    fi
    create_watcher_cache_dir
}

# install_watcherclient() - Collect source and prepare
@@ -275,15 +213,15 @@ function install_watcherclient {
        git_clone_by_name "python-watcherclient"
        setup_dev_lib "python-watcherclient"
    fi
    if [[ "$GLOBAL_VENV" == "True" ]]; then
        sudo ln -sf /opt/stack/data/venv/bin/watcher /usr/local/bin
    fi
}

# install_watcher() - Collect source and prepare
function install_watcher {
    git_clone $WATCHER_REPO $WATCHER_DIR $WATCHER_BRANCH
    setup_develop $WATCHER_DIR
    if [[ "$WATCHER_USE_WSGI_MODE" == "mod_wsgi" ]]; then
        install_apache_wsgi
    fi
}

# start_watcher_api() - Start the API process ahead of other things
@@ -297,19 +235,10 @@ function start_watcher_api {
        service_port=$WATCHER_SERVICE_PORT_INT
        service_protocol="http"
    fi
    if [[ "$WATCHER_USE_WSGI_MODE" == "uwsgi" ]]; then
        run_process "watcher-api" "$(which uwsgi) --procname-prefix watcher-api --ini $WATCHER_UWSGI_CONF"
        watcher_url=$service_protocol://$SERVICE_HOST/infra-optim
    else
        watcher_url=$service_protocol://$SERVICE_HOST:$service_port
        enable_apache_site watcher-api
        restart_apache_server
        # Start proxies if enabled
        if is_service_enabled tls-proxy; then
            start_tls_proxy watcher '*' $WATCHER_SERVICE_PORT $WATCHER_SERVICE_HOST $WATCHER_SERVICE_PORT_INT
        fi
    fi

    run_process "watcher-api" "$(which uwsgi) --procname-prefix watcher-api --ini $WATCHER_UWSGI_CONF"
    watcher_url=$service_protocol://$SERVICE_HOST/infra-optim
    # TODO(sean-k-mooney): we should probably check that we can hit
    # the microversion endpoint and get a valid response.
    echo "Waiting for watcher-api to start..."
    if ! wait_for_service $SERVICE_TIMEOUT $watcher_url; then
        die $LINENO "watcher-api did not start"
@@ -327,17 +256,25 @@ function start_watcher {

# stop_watcher() - Stop running processes (non-screen)
function stop_watcher {
    if [[ "$WATCHER_USE_WSGI_MODE" == "uwsgi" ]]; then
        stop_process watcher-api
    else
        disable_apache_site watcher-api
        restart_apache_server
    fi
    stop_process watcher-api
    for serv in watcher-decision-engine watcher-applier; do
        stop_process $serv
    done
}

# configure_tempest_for_watcher() - Configure Tempest for watcher
function configure_tempest_for_watcher {
    # Set default microversion for watcher-tempest-plugin
    # Please make sure to update this when the microversion is updated, otherwise
    # new tests may be skipped.
    TEMPEST_WATCHER_MIN_MICROVERSION=${TEMPEST_WATCHER_MIN_MICROVERSION:-"1.0"}
    TEMPEST_WATCHER_MAX_MICROVERSION=${TEMPEST_WATCHER_MAX_MICROVERSION:-"1.6"}

    # Set microversion options in tempest.conf
    iniset $TEMPEST_CONFIG optimize min_microversion $TEMPEST_WATCHER_MIN_MICROVERSION
    iniset $TEMPEST_CONFIG optimize max_microversion $TEMPEST_WATCHER_MAX_MICROVERSION
}

# Restore xtrace
$_XTRACE_WATCHER


@@ -26,7 +26,7 @@ GLANCE_HOSTPORT=${SERVICE_HOST}:9292
DATABASE_TYPE=mysql

# Enable services (including neutron)
ENABLED_SERVICES=n-cpu,n-api-meta,c-vol,q-agt,placement-client
ENABLED_SERVICES=n-cpu,n-api-meta,c-vol,q-agt,placement-client,node-exporter

NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
@@ -42,6 +42,10 @@ disable_service ceilometer-acentral,ceilometer-collector,ceilometer-api
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2

CEILOMETER_BACKEND="none"
CEILOMETER_BACKENDS="none"
enable_plugin devstack-plugin-prometheus https://opendev.org/openstack/devstack-plugin-prometheus

[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_monitors=cpu.virt_driver

@@ -18,6 +18,10 @@ NETWORK_GATEWAY=10.254.1.1 # Change this for your network

MULTI_HOST=1

CEILOMETER_ALARM_THRESHOLD="6000000000"
CEILOMETER_BACKENDS="sg-core"
CEILOMETER_PIPELINE_INTERVAL="15"


#Set this to FALSE if do not want to run watcher-api behind mod-wsgi
#WATCHER_USE_MOD_WSGI=TRUE
@@ -40,8 +44,10 @@ disable_service ceilometer-acompute
# Enable the ceilometer api explicitly(bug:1667678)
enable_service ceilometer-api

# Enable the Gnocchi plugin
enable_plugin gnocchi https://github.com/gnocchixyz/gnocchi
enable_service prometheus
enable_plugin aodh https://opendev.org/openstack/aodh
enable_plugin devstack-plugin-prometheus https://opendev.org/openstack/devstack-plugin-prometheus
enable_plugin sg-core https://github.com/openstack-k8s-operators/sg-core main

LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
@@ -55,3 +61,42 @@ compute_monitors=cpu.virt_driver
# can change this to just versioned when ceilometer handles versioned
# notifications from nova: https://bugs.launchpad.net/ceilometer/+bug/1665449
notification_format=both

[[post-config|$WATCHER_CONF]]
[prometheus_client]
host = 127.0.0.1
port = 9090

[watcher_cluster_data_model_collectors.baremetal]
period = 120

[watcher_cluster_data_model_collectors.compute]
period = 120

[watcher_cluster_data_model_collectors.storage]
period = 120

[watcher_datasources]
datasources = prometheus

[[test-config|$TEMPEST_CONFIG]]
[optimize]
datasource = prometheus

[service_available]
sg_core = True

[telemetry]
ceilometer_polling_interval = 15
disable_ssl_certificate_validation = True

[telemetry_services]
metric_backends = prometheus

[compute]
min_compute_nodes = 2
min_microversion = 2.56

[compute-feature-enabled]
block_migration_for_live_migration = True
live_migration = True

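The `[prometheus_client]` host and port above point Watcher's Prometheus data source at the server started by devstack-plugin-prometheus. As a sketch of the instant-query URL that configuration implies (the helper name is hypothetical; `/api/v1/query` is the standard Prometheus HTTP API endpoint):

```python
from urllib.parse import urlencode

def prometheus_query_url(host, port, promql):
    # Build an instant-query URL against the Prometheus HTTP API,
    # using the host/port from watcher.conf's [prometheus_client].
    return "http://%s:%s/api/v1/query?%s" % (
        host, port, urlencode({"query": promql}))
```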
53
devstack/local_gnocchi.conf.compute
Normal file
@@ -0,0 +1,53 @@
# Sample ``local.conf`` for compute node for Watcher development
# NOTE: Copy this file to the root DevStack directory for it to work properly.

[[local|localrc]]

ADMIN_PASSWORD=nomoresecrete
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=azertytoken

HOST_IP=192.168.42.2 # Change this to this compute node's IP address
#HOST_IPV6=2001:db8::7
FLAT_INTERFACE=eth0

FIXED_RANGE=10.254.1.0/24 # Change this to whatever your network is
NETWORK_GATEWAY=10.254.1.1 # Change this for your network

MULTI_HOST=1

SERVICE_HOST=192.168.42.1 # Change this to the IP of your controller node
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=${SERVICE_HOST}:9292

DATABASE_TYPE=mysql

# Enable services (including neutron)
ENABLED_SERVICES=n-cpu,n-api-meta,c-vol,q-agt,placement-client

NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP # or HOST_IPV6

NOVA_INSTANCES_PATH=/opt/stack/data/instances

# Enable the Ceilometer plugin for the compute agent
enable_plugin ceilometer https://opendev.org/openstack/ceilometer
disable_service ceilometer-acentral,ceilometer-collector,ceilometer-api

LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2

[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_monitors=cpu.virt_driver
[notifications]
# Enable both versioned and unversioned notifications. Watcher only
# uses versioned notifications but ceilometer uses unversioned. We
# can change this to just versioned when ceilometer handles versioned
# notifications from nova: https://bugs.launchpad.net/ceilometer/+bug/1665449
notification_format=both
57
devstack/local_gnocchi.conf.controller
Normal file
@@ -0,0 +1,57 @@
# Sample ``local.conf`` for controller node for Watcher development
# NOTE: Copy this file to the root DevStack directory for it to work properly.

[[local|localrc]]

ADMIN_PASSWORD=nomoresecrete
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=azertytoken

HOST_IP=192.168.42.1 # Change this to your controller node IP address
#HOST_IPV6=2001:db8::7
FLAT_INTERFACE=eth0

FIXED_RANGE=10.254.1.0/24 # Change this to whatever your network is
NETWORK_GATEWAY=10.254.1.1 # Change this for your network

MULTI_HOST=1


#Set this to FALSE if do not want to run watcher-api behind mod-wsgi
#WATCHER_USE_MOD_WSGI=TRUE

# This is the controller node, so disable nova-compute
disable_service n-cpu

# Enable the Watcher Dashboard plugin
enable_plugin watcher-dashboard https://opendev.org/openstack/watcher-dashboard

# Enable the Watcher plugin
enable_plugin watcher https://opendev.org/openstack/watcher

# Enable the Ceilometer plugin
enable_plugin ceilometer https://opendev.org/openstack/ceilometer

# This is the controller node, so disable the ceilometer compute agent
disable_service ceilometer-acompute

# Enable the ceilometer api explicitly(bug:1667678)
enable_service ceilometer-api

# Enable the Gnocchi plugin
enable_plugin gnocchi https://github.com/gnocchixyz/gnocchi

LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2

[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_monitors=cpu.virt_driver
[notifications]
# Enable both versioned and unversioned notifications. Watcher only
# uses versioned notifications but ceilometer uses unversioned. We
# can change this to just versioned when ceilometer handles versioned
# notifications from nova: https://bugs.launchpad.net/ceilometer/+bug/1665449
notification_format=both
@@ -1,5 +1,3 @@
#!/bin/bash
#
# plugin.sh - DevStack plugin script to install watcher

# Save trace setting
@@ -38,6 +36,9 @@ if is_service_enabled watcher-api watcher-decision-engine watcher-applier; then
        # Start the watcher components
        echo_summary "Starting watcher"
        start_watcher
    elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then
        echo_summary "Configuring tempest for watcher"
        configure_tempest_for_watcher
    fi

if [[ "$1" == "unstack" ]]; then

16
devstack/prometheus.yml
Normal file
@@ -0,0 +1,16 @@
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["controller:3000"]
      - targets: ["controller:9100"]
        labels:
          fqdn: "controller" # change the hostname here to your controller hostname
      - targets: ["compute-1:9100"]
        labels:
          fqdn: "compute-1" # change the hostname here to your first compute hostname
      - targets: ["compute-2:9100"]
        labels:
          fqdn: "compute-2" # change the hostname here to your second compute hostname
# add as many blocks as compute nodes you have
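The per-host blocks in this scrape config all follow the same shape. As an illustration only (the helper name is hypothetical, and node_exporter's default port 9100 is assumed from the sample above, not taken from Watcher code), such blocks can be generated for any list of compute hosts:

```python
# Hypothetical helper mirroring the sample prometheus.yml above: one "node"
# scrape job with a static_configs entry per host (node_exporter assumed on 9100).
def build_scrape_configs(controller, computes, port=9100):
    static_configs = [{"targets": ["%s:%d" % (controller, port)],
                       "labels": {"fqdn": controller}}]
    for host in computes:
        static_configs.append({"targets": ["%s:%d" % (host, port)],
                               "labels": {"fqdn": host}})
    return [{"job_name": "node", "static_configs": static_configs}]

configs = build_scrape_configs("controller", ["compute-1", "compute-2"])
print(len(configs[0]["static_configs"]))  # → 3
```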
@@ -1,5 +1,3 @@
#!/usr/bin/env bash

# ``upgrade-watcher``

function configure_watcher_upgrade {

@@ -70,7 +70,7 @@ then write_uwsgi_config "$WATCHER_UWSGI_CONF" "$WATCHER_UWSGI" "/infra-optim"
fi

# Migrate the database
watcher-db-manage upgrade || die $LINENO "DB migration error"
$WATCHER_BIN_DIR/watcher-db-manage upgrade || die $LINENO "DB migration error"

start_watcher
4
doc/dictionary.txt
Normal file
@@ -0,0 +1,4 @@
thirdparty
assertin
notin

@@ -52,7 +52,7 @@ class BaseWatcherDirective(rst.Directive):
obj_raw_docstring = obj.__init__.__doc__

if not obj_raw_docstring:
# Raise a warning to make the tests fail wit doc8
# Raise a warning to make the tests fail with doc8
raise self.error("No docstring available for %s!" % obj)

obj_docstring = inspect.cleandoc(obj_raw_docstring)
@@ -14,6 +14,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLED",
"status_message": null,
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
@@ -24,6 +25,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLING",
"status_message": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null

@@ -24,6 +24,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "FAILED",
"status_message": null,
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
@@ -34,6 +35,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLING",
"status_message": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null

@@ -14,6 +14,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLING",
"status_message": null,
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
@@ -24,6 +25,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "CANCELLING",
"status_message": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null

@@ -13,6 +13,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "PENDING",
"status_message": null,
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
@@ -23,6 +24,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"status_message": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null

@@ -13,6 +13,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "DELETED",
"status_message": null,
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
@@ -23,6 +24,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"status_message": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null

@@ -14,6 +14,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "SUCCEEDED",
"status_message": null,
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
@@ -24,6 +25,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"status_message": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null

@@ -24,6 +24,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "FAILED",
"status_message": "Action execution failed",
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
@@ -34,6 +35,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"status_message": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null

@@ -14,6 +14,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"status_message": null,
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
@@ -24,6 +25,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"status_message": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null

@@ -18,10 +18,12 @@
"watcher_object.name": "ActionStateUpdatePayload",
"watcher_object.data": {
"old_state": "PENDING",
"state": "ONGOING"
"state": "ONGOING",
"status_message": null
}
},
"state": "ONGOING",
"status_message": null,
"action_plan": {
"watcher_object.namespace": "watcher",
"watcher_object.version": "1.0",
@@ -32,6 +34,7 @@
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null,
"state": "ONGOING",
"status_message": null,
"audit_uuid": "10a47dd1-4874-4298-91cf-eff046dbdb8d",
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"deleted_at": null

@@ -21,6 +21,7 @@
"scope": [],
"audit_type": "ONESHOT",
"state": "SUCCEEDED",
"status_message": null,
"parameters": {},
"interval": null,
"updated_at": null
@@ -29,6 +30,7 @@
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"fault": null,
"state": "CANCELLED",
"status_message": null,
"global_efficacy": [],
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"strategy": {

@@ -52,13 +52,15 @@
"scope": [],
"updated_at": null,
"audit_type": "ONESHOT",
"status_message": null,
"interval": null,
"deleted_at": null,
"state": "SUCCEEDED"
}
},
"global_efficacy": [],
"state": "CANCELLING"
"state": "CANCELLING",
"status_message": null
}
},
"timestamp": "2016-10-18 09:52:05.219414"

@@ -21,6 +21,7 @@
"scope": [],
"audit_type": "ONESHOT",
"state": "SUCCEEDED",
"status_message": null,
"parameters": {},
"interval": null,
"updated_at": null
@@ -29,6 +30,7 @@
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"fault": null,
"state": "CANCELLING",
"status_message": null,
"global_efficacy": [],
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"strategy": {

@@ -33,6 +33,7 @@
"interval": null,
"deleted_at": null,
"state": "PENDING",
"status_message": null,
"created_at": "2016-10-18T09:52:05Z",
"updated_at": null
},
@@ -43,6 +44,7 @@
"global_efficacy": {},
"deleted_at": null,
"state": "RECOMMENDED",
"status_message": null,
"updated_at": null
},
"watcher_object.namespace": "watcher",

@@ -18,6 +18,7 @@
"updated_at": null,
"deleted_at": null,
"state": "PENDING",
"status_message": null,
"created_at": "2016-10-18T09:52:05Z",
"parameters": {}
},
@@ -43,7 +44,8 @@
"watcher_object.name": "StrategyPayload",
"watcher_object.namespace": "watcher"
},
"state": "DELETED"
"state": "DELETED",
"status_message": null
},
"watcher_object.version": "1.0",
"watcher_object.name": "ActionPlanDeletePayload",

@@ -22,6 +22,7 @@
"scope": [],
"audit_type": "ONESHOT",
"state": "SUCCEEDED",
"status_message": null,
"parameters": {},
"interval": null,
"updated_at": null
@@ -30,6 +31,7 @@
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"fault": null,
"state": "ONGOING",
"status_message": null,
"global_efficacy": [],
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"strategy": {

@@ -55,11 +55,13 @@
"audit_type": "ONESHOT",
"interval": null,
"deleted_at": null,
"state": "PENDING"
"state": "PENDING",
"status_message": null
}
},
"global_efficacy": [],
"state": "ONGOING"
"state": "ONGOING",
"status_message": null
}
},
"timestamp": "2016-10-18 09:52:05.219414"

@@ -22,6 +22,7 @@
"scope": [],
"audit_type": "ONESHOT",
"state": "PENDING",
"status_message": null,
"parameters": {},
"interval": null,
"updated_at": null
@@ -30,6 +31,7 @@
"uuid": "76be87bd-3422-43f9-93a0-e85a577e3061",
"fault": null,
"state": "ONGOING",
"status_message": null,
"global_efficacy": [],
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"strategy": {

@@ -16,6 +16,7 @@
"interval": null,
"updated_at": null,
"state": "PENDING",
"status_message": null,
"deleted_at": null,
"parameters": {}
},
@@ -35,6 +36,7 @@
"watcher_object.name": "ActionPlanStateUpdatePayload"
},
"state": "ONGOING",
"status_message": null,
"deleted_at": null,
"strategy_uuid": "cb3d0b58-4415-4d90-b75b-1e96878730e3",
"strategy": {

@@ -9,6 +9,7 @@
"para1": 3.2
},
"state": "PENDING",
"status_message": null,
"updated_at": null,
"deleted_at": null,
"goal_uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",

@@ -9,6 +9,7 @@
"para1": 3.2
},
"state": "DELETED",
"status_message": null,
"updated_at": null,
"deleted_at": null,
"goal_uuid": "bc830f84-8ae3-4fc6-8bc6-e3dd15e8b49a",

@@ -9,6 +9,7 @@
"para1": 3.2
},
"state": "ONGOING",
"status_message": null,
"updated_at": null,
"deleted_at": null,
"fault": null,

@@ -9,6 +9,7 @@
"para1": 3.2
},
"state": "ONGOING",
"status_message": null,
"updated_at": null,
"deleted_at": null,
"fault": {

@@ -9,6 +9,7 @@
"para1": 3.2
},
"state": "ONGOING",
"status_message": null,
"updated_at": null,
"deleted_at": null,
"fault": null,

@@ -9,6 +9,7 @@
"para1": 3.2
},
"state": "ONGOING",
"status_message": null,
"updated_at": null,
"deleted_at": null,
"fault": null,

@@ -9,6 +9,7 @@
"para1": 3.2
},
"state": "ONGOING",
"status_message": null,
"updated_at": null,
"deleted_at": null,
"fault": {

@@ -9,6 +9,7 @@
"para1": 3.2
},
"state": "ONGOING",
"status_message": null,
"updated_at": null,
"deleted_at": null,
"fault": null,

@@ -70,6 +70,7 @@
"interval": null,
"updated_at": null,
"state": "ONGOING",
"status_message": null,
"audit_type": "ONESHOT"
},
"watcher_object.namespace": "watcher",
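The sample-file hunks above all make the same change: every payload object that carries a ``state`` field gains a sibling ``status_message`` field. A small sketch (not Watcher code) of checking that invariant on a sample payload:

```python
# Recursively verify that any object with a "state" key also carries
# "status_message" -- the invariant the sample-file hunks above establish.
import json

def states_have_status_message(obj):
    if isinstance(obj, dict):
        if "state" in obj and "status_message" not in obj:
            return False
        return all(states_have_status_message(v) for v in obj.values())
    if isinstance(obj, list):
        return all(states_have_status_message(v) for v in obj)
    return True

sample = json.loads('{"state": "ONGOING", "status_message": null,'
                    ' "action_plan": {"state": "ONGOING", "status_message": null}}')
print(states_have_status_message(sample))  # → True
```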
@@ -1,10 +1,10 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
openstackdocstheme>=2.2.1 # Apache-2.0
sphinx>=2.0.0,!=2.1.0 # BSD
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
sphinx>=2.1.1 # BSD
sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD
reno>=3.1.0 # Apache-2.0
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
sphinxcontrib-apidoc>=0.2.0 # BSD
# openstack
os-api-ref>=1.4.0 # Apache-2.0
openstackdocstheme>=2.2.1 # Apache-2.0
# releasenotes
reno>=3.1.0 # Apache-2.0
@@ -34,7 +34,7 @@ own sections. However, the base *GMR* consists of several sections:

Package
Shows information about the package to which this process belongs, including
version informations.
version information.

Threads
Shows stack traces and thread ids for each of the threads within this
@@ -285,7 +285,7 @@ Audit and interval (in case of CONTINUOUS type). There is three types of Audit:
ONESHOT, CONTINUOUS and EVENT. ONESHOT Audit is launched once and if it
succeeded executed new action plan list will be provided; CONTINUOUS Audit
creates action plans with specified interval (in seconds or cron format, cron
inteval can be used like: `*/5 * * * *`), if action plan
interval can be used like: ``*/5 * * * *``), if action plan
has been created, all previous action plans get CANCELLED state;
EVENT audit is launched when receiving webhooks API.
@@ -384,7 +384,9 @@ following methods of the :ref:`Action <action_definition>` handler:

- **preconditions()**: this method will make sure that all conditions are met
before executing the action (for example, it makes sure that an instance
still exists before trying to migrate it).
still exists before trying to migrate it). If action specific preconditions
are not met in this phase, the Action is set to **SKIPPED** state and will
not be executed.
- **execute()**: this method is what triggers real commands on other
OpenStack services (such as Nova, ...) in order to change target resource
state. If the action is successfully executed, a notification message is
@@ -479,6 +481,39 @@ change to a new value:

.. image:: ./images/action_plan_state_machine.png
:width: 100%
.. _action_state_machine:

Action State Machine
-------------------------

An :ref:`Action <action_definition>` has a life-cycle and its current state may
be one of the following:

- **PENDING** : the :ref:`Action <action_definition>` has not been executed
  yet by the :ref:`Watcher Applier <watcher_applier_definition>`
- **SKIPPED** : the :ref:`Action <action_definition>` will not be executed
  because a predefined skipping condition is found by
  :ref:`Watcher Applier <watcher_applier_definition>` or is explicitly
  skipped by the :ref:`Administrator <administrator_definition>`.
- **ONGOING** : the :ref:`Action <action_definition>` is currently being
  processed by the :ref:`Watcher Applier <watcher_applier_definition>`
- **SUCCEEDED** : the :ref:`Action <action_definition>` has been executed
  successfully
- **FAILED** : an error occurred while trying to execute the
  :ref:`Action <action_definition>`
- **DELETED** : the :ref:`Action <action_definition>` is still stored in the
  :ref:`Watcher database <watcher_database_definition>` but is not returned
  any more through the Watcher APIs.
- **CANCELLED** : the :ref:`Action <action_definition>` was in **PENDING** or
  **ONGOING** state and was cancelled by the
  :ref:`Administrator <administrator_definition>`

The following diagram shows the different possible states of an
:ref:`Action <action_definition>` and what event makes the state change
to a new value:

.. image:: ./images/action_state_machine.png
   :width: 100%

.. _Watcher API: https://docs.openstack.org/api-ref/resource-optimization/
22
doc/source/conf.py
Executable file → Normal file
@@ -56,8 +56,8 @@ source_suffix = '.rst'
master_doc = 'index'

# General information about the project.
project = u'Watcher'
copyright = u'OpenStack Foundation'
project = 'Watcher'
copyright = 'OpenStack Foundation'

# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['watcher.']
@@ -91,14 +91,14 @@ pygments_style = 'native'
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual'

man_pages = [
('man/watcher-api', 'watcher-api', u'Watcher API Server',
[u'OpenStack'], 1),
('man/watcher-applier', 'watcher-applier', u'Watcher Applier',
[u'OpenStack'], 1),
('man/watcher-api', 'watcher-api', 'Watcher API Server',
['OpenStack'], 1),
('man/watcher-applier', 'watcher-applier', 'Watcher Applier',
['OpenStack'], 1),
('man/watcher-db-manage', 'watcher-db-manage',
u'Watcher Db Management Utility', [u'OpenStack'], 1),
'Watcher Db Management Utility', ['OpenStack'], 1),
('man/watcher-decision-engine', 'watcher-decision-engine',
u'Watcher Decision Engine', [u'OpenStack'], 1),
'Watcher Decision Engine', ['OpenStack'], 1),
]

# -- Options for HTML output --------------------------------------------------
@@ -115,7 +115,7 @@ html_theme = 'openstackdocs'
htmlhelp_basename = '%sdoc' % project


#openstackdocstheme options
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/watcher'
openstackdocs_pdf_link = True
openstackdocs_auto_name = False
@@ -128,8 +128,8 @@ openstackdocs_bug_tag = ''
latex_documents = [
('index',
'doc-watcher.tex',
u'Watcher Documentation',
u'OpenStack Foundation', 'manual'),
'Watcher Documentation',
'OpenStack Foundation', 'manual'),
]

# If false, no module index is generated.
@@ -194,11 +194,14 @@ The configuration file is organized into the following sections:
* ``[watcher_applier]`` - Watcher Applier module configuration
* ``[watcher_decision_engine]`` - Watcher Decision Engine module configuration
* ``[oslo_messaging_rabbit]`` - Oslo Messaging RabbitMQ driver configuration
* ``[ceilometer_client]`` - Ceilometer client configuration
* ``[cinder_client]`` - Cinder client configuration
* ``[glance_client]`` - Glance client configuration
* ``[gnocchi_client]`` - Gnocchi client configuration
* ``[ironic_client]`` - Ironic client configuration
* ``[keystone_client]`` - Keystone client configuration
* ``[nova_client]`` - Nova client configuration
* ``[neutron_client]`` - Neutron client configuration
* ``[placement_client]`` - Placement client configuration

The Watcher configuration file is expected to be named
``watcher.conf``. When starting Watcher, you can specify a different
@@ -372,7 +375,7 @@ You can configure and install Ceilometer by following the documentation below :
#. https://docs.openstack.org/ceilometer/latest

The built-in strategy 'basic_consolidation' provided by watcher requires
"**compute.node.cpu.percent**" and "**cpu_util**" measurements to be collected
"**compute.node.cpu.percent**" and "**cpu**" measurements to be collected
by Ceilometer.
The measurements available depend on the hypervisors that OpenStack manages on
the specific implementation.
@@ -426,20 +429,38 @@ Configure Cinder Notifications

Watcher can also consume notifications generated by the Cinder services, in
order to build or update, in real time, its cluster data model related to
storage resources. To do so, you have to update the Cinder configuration
file on controller and volume nodes, in order to let Watcher receive Cinder
notifications in a dedicated ``watcher_notifications`` channel.
storage resources.

* In the file ``/etc/cinder/cinder.conf``, update the section
``[oslo_messaging_notifications]``, by redefining the list of topics
into which Cinder services will publish events ::
Cinder emits notifications on the ``notifications`` topic, in the openstack
control exchange (as it can be seen in the `Cinder conf`_).

* In the file ``/etc/cinder/cinder.conf``, the value of driver in the section
``[oslo_messaging_notifications]`` cannot be ``noop``.

[oslo_messaging_notifications]
driver = messagingv2
topics = notifications,watcher_notifications

* Restart the Cinder services.
.. _`Cinder conf`: https://docs.openstack.org/cinder/latest/configuration/block-storage/samples/cinder.conf.html

Configure Watcher listening to the Notifications
================================================

To consume either Cinder or Nova notifications (or both), Watcher must be
configured to listen to the notifications topics that Cinder and Nova emit.

Use the `notification_topics`_ config option to indicate to Watcher that it
should listen to the correct topics. By default, Cinder emits notifications
on ``openstack.notifications``, while Nova emits notifications on
``nova.versioned_notifications``. The Watcher conf should have the topics for
the desired notifications; below is an example for both Cinder and Nova::

[watcher_decision_engine]

...

notification_topics = nova.versioned_notifications,openstack.notifications

.. _`notification_topics`: https://docs.openstack.org/watcher/latest/configuration/watcher.html#watcher_decision_engine.notification_topics
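For illustration only (Watcher actually parses this option via oslo.config), the comma-separated value above resolves to the list of topics the decision engine subscribes to:

```python
# Split a notification_topics value into individual topics -- a plain-Python
# stand-in for oslo.config's ListOpt parsing, shown only to clarify the format.
raw = "nova.versioned_notifications,openstack.notifications"
topics = [t.strip() for t in raw.split(",") if t.strip()]
print(topics)  # → ['nova.versioned_notifications', 'openstack.notifications']
```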
Workers
=======

@@ -52,18 +52,43 @@ types of concurrency used in various services of Watcher.
.. _wait_for_any: https://docs.openstack.org/futurist/latest/reference/index.html#waiters


Concurrency modes
#################

Eventlet has been the main concurrency library within the OpenStack community
for the last 10 years since the removal of twisted. Over the last few years,
the maintenance of eventlet has decreased, and the efforts to remove the GIL
from Python (PEP 703) have fundamentally changed how concurrency is handled,
making eventlet no longer viable. While transitioning to a new native thread
solution, Watcher services will support both modes, with the usage of
native threading mode initially classified as ``experimental``.

It is possible to enable the new native threading mode by setting the following
environment variable in the corresponding service configuration:

.. code:: bash

   OS_WATCHER_DISABLE_EVENTLET_PATCHING=true

.. note::

   The only service that supports two different concurrency modes is the
   ``decision engine``.
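A minimal sketch of how such an environment flag can gate eventlet monkey patching at service startup (the variable name comes from the doc above; the gating logic itself is illustrative, not Watcher's actual code):

```python
import os

# Illustrative gate: eventlet patching stays on unless the flag disables it.
def eventlet_patching_enabled(environ=os.environ):
    flag = environ.get("OS_WATCHER_DISABLE_EVENTLET_PATCHING", "false")
    return flag.lower() not in ("1", "true", "yes")

print(eventlet_patching_enabled({"OS_WATCHER_DISABLE_EVENTLET_PATCHING": "true"}))  # → False
print(eventlet_patching_enabled({}))  # → True
```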
Decision engine concurrency
***************************

The concurrency in the decision engine is governed by two independent
threadpools. Both of these threadpools are GreenThreadPoolExecutor_ from the
futurist_ library. One of these is used automatically and most contributors
threadpools. These threadpools can be configured as GreenThreadPoolExecutor_
or ThreadPoolExecutor_, both from the futurist_ library, depending on the
service configuration. One of these is used automatically and most contributors
will not interact with it while developing new features. The other threadpool
can frequently be used while developing new features or updating existing ones.
It is known as the DecisionEngineThreadpool and makes it possible to achieve
performance improvements in network or I/O bound operations.

.. _GreenThreadPoolExecutor: https://docs.openstack.org/futurist/latest/reference/index.html#executors
.. _GreenThreadPoolExecutor: https://docs.openstack.org/futurist/latest/reference/index.html#futurist.GreenThreadPoolExecutor
.. _ThreadPoolExecutor: https://docs.openstack.org/futurist/latest/reference/index.html#futurist.ThreadPoolExecutor
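The I/O-bound fan-out that such a threadpool enables can be sketched with the stdlib executor, whose ``submit()``/``result()`` interface futurist's executors mirror (the metric-fetching function here is a made-up stand-in, not a Watcher API):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_node_metrics(node):
    # stand-in for a network-bound datasource call made once per compute node
    return (node, 42)

# Fan the per-node calls out over worker threads and gather the results.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fetch_node_metrics, n)
               for n in ("compute-1", "compute-2")]
    results = dict(f.result() for f in futures)

print(results["compute-1"])  # → 42
```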
AuditEndpoint
#############
@@ -221,7 +246,7 @@ workflow engine can halt or take other actions while the action plan is being
executed based on the success or failure of individual actions. However, the
base workflow engine simply uses these notifications to store the result of
individual actions in the database. Additionally, since taskflow uses a graph
flow if any of the tasks would fail all childs of this tasks not be executed
flow, if any of the tasks fails, all children of this task will not be executed
while ``do_revert`` will be triggered for all parents.

.. code-block:: python
@@ -16,7 +16,7 @@ multinode environment to use.
You can set up the Watcher services quickly and easily using a Watcher
DevStack plugin. See `PluginModelDocs`_ for information on DevStack's plugin
model. To enable the Watcher plugin with DevStack, add the following to the
`[[local|localrc]]` section of your controller's `local.conf` to enable the
``[[local|localrc]]`` section of your controller's ``local.conf`` to enable the
Watcher plugin::

enable_plugin watcher https://opendev.org/openstack/watcher
@@ -31,66 +31,104 @@ Quick Devstack Instructions with Datasources
============================================

Watcher requires a datasource to collect metrics from compute nodes and
instances in order to execute most strategies. To enable this a
`[[local|localrc]]` to setup DevStack for some of the supported datasources
is provided. These examples specify the minimal configuration parameters to
get both Watcher and the datasource working but can be expanded is desired.
instances in order to execute most strategies. To enable this, two possible
examples of ``[[local|localrc]]`` to set up DevStack for some of the
supported datasources are provided. These examples specify the minimal
configuration parameters to get both Watcher and the datasource working
but can be expanded as desired.
The first example configures watcher to use Prometheus as a datasource, while
the second example shows how to use gnocchi as the datasource. The procedure is
equivalent; it just requires using the ``local.conf.controller`` and
``local.conf.compute`` in the first example and
``local_gnocchi.conf.controller`` and ``local_gnocchi.conf.compute`` in the
second.

Prometheus
----------

With the Prometheus datasource most of the metrics for compute nodes and
instances will work with the provided configuration, but metrics that
require Ironic such as ``host_airflow`` and ``host_power`` will still be
unavailable, as well as ``instance_l3_cpu_cache``

.. code-block:: ini

   [[local|localrc]]

   enable_plugin watcher https://opendev.org/openstack/watcher
   enable_plugin watcher-dashboard https://opendev.org/openstack/watcher-dashboard
   enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git
   enable_plugin aodh https://opendev.org/openstack/aodh
   enable_plugin devstack-plugin-prometheus https://opendev.org/openstack/devstack-plugin-prometheus
   enable_plugin sg-core https://github.com/openstack-k8s-operators/sg-core main


   CEILOMETER_BACKEND=sg-core
   [[post-config|$NOVA_CONF]]
   [DEFAULT]
   compute_monitors=cpu.virt_driver

Gnocchi
-------

With the Gnocchi datasource most of the metrics for compute nodes and
instances will work with the provided configuration but metrics that
require Ironic such as `host_airflow and` `host_power` will still be
unavailable as well as `instance_l3_cpu_cache`::
require Ironic such as ``host_airflow`` and ``host_power`` will still be
unavailable, as well as ``instance_l3_cpu_cache``

[[local|localrc]]
enable_plugin watcher https://opendev.org/openstack/watcher
.. code-block:: ini

enable_plugin watcher-dashboard https://opendev.org/openstack/watcher-dashboard
[[local|localrc]]

enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git
CEILOMETER_BACKEND=gnocchi
enable_plugin watcher https://opendev.org/openstack/watcher
enable_plugin watcher-dashboard https://opendev.org/openstack/watcher-dashboard
enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git
enable_plugin aodh https://opendev.org/openstack/aodh
enable_plugin panko https://opendev.org/openstack/panko

enable_plugin aodh https://opendev.org/openstack/aodh
enable_plugin panko https://opendev.org/openstack/panko

[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_monitors=cpu.virt_driver
CEILOMETER_BACKEND=gnocchi
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_monitors=cpu.virt_driver
Detailed DevStack Instructions
|
||||
==============================
|
||||
|
||||
#. Obtain N (where N >= 1) servers (virtual machines preferred for DevStack).
   One of these servers will be the controller node, while the others will be
   compute nodes. N is preferably >= 3 so that you have at least 2 compute
   nodes, but in order to stand up the Watcher services only 1 server is
   needed (i.e., no computes are needed if you want to just experiment with
   the Watcher services). These servers can be VMs running on your local
   machine via VirtualBox if you prefer. DevStack currently recommends that
   you use Ubuntu 16.04 LTS. The servers should also have connections to the
   same network such that they are all able to communicate with one another.

#. For each server, clone the DevStack repository and create the stack user

   .. code-block:: bash

      sudo apt-get update
      sudo apt-get install git
      git clone https://opendev.org/openstack/devstack.git
      sudo ./devstack/tools/create-stack-user.sh

   Now you have a stack user that is used to run the DevStack processes. You
   may want to give your stack user a password to allow SSH via a password

   .. code-block:: bash

      sudo passwd stack

#. Switch to the stack user and clone the DevStack repo again

   .. code-block:: bash

      sudo su stack
      cd ~
      git clone https://opendev.org/openstack/devstack.git

#. For each compute node, copy the provided `local.conf.compute`_
   (`local_gnocchi.conf.compute`_ if deploying with gnocchi) example file
   to the compute node's system at ~/devstack/local.conf. Make sure the
   HOST_IP and SERVICE_HOST values are changed appropriately - i.e., HOST_IP
   is set to the IP address of the compute node and SERVICE_HOST is set to the
   to configure similar configuration options for the projects providing those
   metrics.

#. For the controller node, copy the provided `local.conf.controller`_
   (`local_gnocchi.conf.controller`_ if deploying with gnocchi) example
   file to the controller node's system at ~/devstack/local.conf. Make sure
   the HOST_IP value is changed appropriately - i.e., HOST_IP is set to the IP
   address of the controller node.

   .. note::

      If you want to use another Watcher git repository (such as a local
      one), then change the enable plugin line

      .. code-block:: bash

         enable_plugin watcher <your_local_git_repo> [optional_branch]

      If you do this, then the Watcher DevStack plugin will try to pull the
      python-watcherclient repo from ``<your_local_git_repo>/../``, so either
      make sure that is also available or specify WATCHERCLIENT_REPO in the
      ``local.conf`` file.

   .. note::

      If you want to use a specific branch, specify WATCHER_BRANCH in the
      ``local.conf`` file. By default it will use the master branch.

   .. note::

      watcher-api will run under apache/httpd by default; set the variable
      WATCHER_USE_MOD_WSGI=FALSE if you do not wish to run under apache/httpd.
      For a development environment it is suggested to set WATCHER_USE_MOD_WSGI
      to FALSE. For a production environment it is suggested to keep the
      default TRUE value.

#. If you want to use Prometheus as a datasource, you need to provide a
   Prometheus configuration with the compute nodes set as targets, so
   it can consume their node-exporter metrics (if you are deploying Watcher
   with gnocchi as the datasource you can skip this step altogether). Copy the
   provided `prometheus.yml`_ example file and set the appropriate hostnames
   for all the compute nodes (the example configures 2 of them plus the
   controller, but you should add all of them if using more than 2 compute
   nodes). Set the value of ``PROMETHEUS_CONFIG_FILE`` in the local.conf file
   to the path of the file you created (the sample local.conf file uses
   ``$DEST`` as the default value for the prometheus config path).

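   For example, a minimal ``local.conf`` fragment could look like this
   (the exact path is illustrative; any location readable by DevStack works):

   .. code-block:: ini

      [[local|localrc]]
      # Hypothetical location of the prometheus.yml you copied and edited
      PROMETHEUS_CONFIG_FILE=$DEST/prometheus.yml
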
#. Start stacking from the controller node.

#. Start stacking on each of the compute nodes using the same command.

.. seealso::

   Configure the environment for live migration via NFS. See the
   `Multi-Node DevStack Environment`_ section for more details.

.. _local.conf.controller: https://github.com/openstack/watcher/tree/master/devstack/local.conf.controller
.. _local.conf.compute: https://github.com/openstack/watcher/tree/master/devstack/local.conf.compute
.. _local_gnocchi.conf.controller: https://github.com/openstack/watcher/tree/master/devstack/local_gnocchi.conf.controller
.. _local_gnocchi.conf.compute: https://github.com/openstack/watcher/tree/master/devstack/local_gnocchi.conf.compute
.. _prometheus.yml: https://github.com/openstack/watcher/tree/master/devstack/prometheus.yml

Multi-Node DevStack Environment
===============================

Since deploying Watcher with only a single compute node is not very useful, a
few tips are given here for enabling a multi-node environment with live
migration.

.. note::

   Nova supports live migration with local block storage, so by default NFS
   is not required and is considered an advanced configuration.
   The minimum requirements for live migration are:

   - all hostnames are resolvable on each host
   - all hosts have a passwordless ssh key that is trusted by the other hosts
   - all hosts have a known_hosts file that lists each host

   If these requirements are met, live migration will be possible.
   Shared storage such as ceph, booting from a cinder volume, or NFS is
   recommended when testing evacuate if you want to preserve VM data.

Setting up SSH keys between compute nodes to enable live migration
------------------------------------------------------------------

must exist in every other compute node's stack user's authorized_keys file and
every compute node's public ECDSA key needs to be in every other compute
node's root user's known_hosts file.

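As an illustrative sketch (``compute1`` and ``compute2`` are placeholder
hostnames, not values from this guide), the key distribution described above
could be done roughly as follows:

.. code-block:: bash

   # Run as the stack user on each compute node; compute1/compute2 are
   # placeholders for the other compute nodes' hostnames.
   ssh-keygen -t ecdsa -N '' -f ~/.ssh/id_ecdsa
   for host in compute1 compute2; do
       # trust this node's key in the other host's stack user authorized_keys
       ssh-copy-id -i ~/.ssh/id_ecdsa.pub stack@"$host"
       # record the other host's ECDSA host key in root's known_hosts
       ssh-keyscan "$host" | sudo tee -a /root/.ssh/known_hosts
   done
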
Configuring NFS Server (ADVANCED)
---------------------------------

If you would like to use live migration for shared storage, then the controller
can serve as the NFS server if needed

.. code-block:: bash

   sudo apt-get install nfs-kernel-server
   sudo mkdir -p /nfs/instances
   sudo chown stack:stack /nfs/instances

Add an entry to ``/etc/exports`` with the appropriate gateway and netmask
information

.. code-block:: bash

   /nfs/instances <gateway>/<netmask>(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)

Export the NFS directories

.. code-block:: bash

   sudo exportfs -ra

Make sure the NFS server is running

.. code-block:: bash

   sudo service nfs-kernel-server status

If the server is not running, then start it

.. code-block:: bash

   sudo service nfs-kernel-server start

Configuring NFS on Compute Node (ADVANCED)
------------------------------------------

Each compute node needs to use the NFS server to hold the instance data

.. code-block:: bash

   sudo apt-get install rpcbind nfs-common
   mkdir -p /opt/stack/data/instances
   sudo mount <nfs-server-ip>:/nfs/instances /opt/stack/data/instances

If you would like to have the NFS directory automatically mounted on reboot,
then add the following to ``/etc/fstab``

.. code-block:: bash

   <nfs-server-ip>:/nfs/instances /opt/stack/data/instances nfs auto 0 0

Configuring libvirt to listen on tcp (ADVANCED)
-----------------------------------------------

.. note::

   By default nova will use ssh as a transport for live migration;
   if you have a low bandwidth connection you can use tcp instead,
   however this is generally not recommended.

Edit ``/etc/libvirt/libvirtd.conf`` to make sure the following values are set

.. code-block:: ini

   listen_tls = 0
   listen_tcp = 1
   auth_tcp = "none"

Edit ``/etc/default/libvirt-bin``

.. code-block:: ini

   libvirtd_opts="-d -l"

Restart the libvirt service

.. code-block:: bash

   sudo service libvirt-bin restart

VNC server configuration
------------------------

The VNC server listening parameter needs to be set to any address so
that the server can accept connections from all of the compute nodes.

On both the controller and compute node, in ``/etc/nova/nova.conf``

.. code-block:: ini

   [vnc]
   server_listen = "0.0.0.0"

Alternatively, in devstack's ``local.conf``:

.. code-block:: bash

   VNCSERVER_LISTEN="0.0.0.0"

Environment final checkup
-------------------------

Getting the latest code
=======================

Make a clone of the code from our ``Git repository``:

.. code-block:: bash

   git clone https://opendev.org/openstack/watcher.git

These dependencies can be installed from PyPi_ using the Python tool pip_.

.. _PyPi: https://pypi.org/
.. _pip: https://pypi.org/project/pip

However, your system *may* need additional dependencies that ``pip`` (and by
extension, PyPi) cannot satisfy. These dependencies should be installed
prior to using ``pip``, and the installation method may vary depending on
your platform.

* Ubuntu 16.04::

.. code-block:: bash

   $ workon watcher

You should then be able to ``import watcher`` using Python without issue:

.. code-block:: bash

   $ python -c "import watcher"

   devstack
   testing
   rally_link
   release-guide

Using that you can now query the values for that specific metric:

.. code-block:: py

   avg_meter = self.datasource_backend.statistic_aggregation(
       instance.uuid, 'instance_cpu_usage', self.periods['instance'],
       self.granularity,
       aggregation=self.aggregation_method['instance'])

doc/source/contributor/release-guide.rst (new file, 462 lines)

..
   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

Chronological Release Liaison Guide
===================================

This is a reference guide that a release liaison may use as an aid, if
they choose.

Watcher uses the `Distributed Project Leadership (DPL)`__ model, where
traditional PTL responsibilities are distributed among various
liaisons. The release liaison is responsible for requesting releases,
reviewing Feature Freeze Exception (FFE) requests, and coordinating
release-related activities with the team.

.. __: https://governance.openstack.org/tc/reference/distributed-project-leadership.html

How to Use This Guide
---------------------

This guide is organized chronologically to follow the OpenStack release
cycle from PTG planning through post-release activities. You can use it
in two ways:

**For New Release Liaisons**
   Read through the entire guide to understand the full release cycle,
   then bookmark it for reference during your term.

**For Experienced Release Liaisons**
   Jump directly to the relevant section for your current phase in the
   release cycle. Each major section corresponds to a specific time period.

**Key Navigation Tips**
   * The :ref:`glossary` defines all acronyms and terminology used
   * Time-sensitive activities are clearly marked by milestone phases
   * DPL coordination notes indicate when team collaboration is required

DPL Liaison Coordination
------------------------

Under the DPL model, the release liaison coordinates with other project
liaisons and the broader team for effective release management. The release
liaison has authority for release-specific decisions (FFE approvals, release
timing, etc.) while major process changes and strategic decisions require
team consensus.

This coordination approach ensures that:

* Release activities are properly managed by a dedicated liaison
* Team input is gathered for significant decisions
* Other liaisons are informed of release-related developments that may
  affect their areas
* Release processes remain responsive while maintaining team alignment

Project Context
---------------

* Coordinate with the watcher meeting (chair rotates each meeting, with
  volunteers requested at the end of each meeting)

  * Meeting etherpad: https://etherpad.opendev.org/p/openstack-watcher-irc-meeting
  * IRC channel: #openstack-watcher

* Get acquainted with the release schedule

  * Example: https://releases.openstack.org/<current-release>/schedule.html

* Familiarize with Watcher project repositories and tracking:

  Watcher Main Repository
     `Primary codebase for the Watcher service <https://opendev.org/openstack/watcher>`__

  Watcher Dashboard
     `Horizon plugin for Watcher UI <https://opendev.org/openstack/watcher-dashboard>`__

  Watcher Tempest Plugin
     `Integration tests <https://opendev.org/openstack/watcher-tempest-plugin>`__ (follows tempest cycle)

  Python Watcher Client
     `Command-line client and Python library <https://opendev.org/openstack/python-watcherclient>`__

  Watcher Specifications
     `Design specifications <https://opendev.org/openstack/watcher-specs>`__ (not released)

  Watcher Launchpad (Main)
     `Primary bug and feature tracking <https://launchpad.net/watcher>`__

  Watcher Dashboard Launchpad
     `Dashboard-specific tracking <https://launchpad.net/watcher-dashboard/>`__

  Watcher Tempest Plugin Launchpad
     `Test plugin tracking <https://launchpad.net/watcher-tempest-plugin>`__

  Python Watcher Client Launchpad
     `Client library tracking <https://launchpad.net/python-watcherclient>`__

Project Team Gathering
----------------------

Event Liaison Coordination
~~~~~~~~~~~~~~~~~~~~~~~~~~

* Work with the project team to select an event liaison for PTG coordination.
  The event liaison is responsible for:

  * Reserving sufficient space at PTG for the project team's meetings
  * Putting out an agenda for team meetings
  * Ensuring meetings are organized and facilitated
  * Documenting meeting results

* If no event liaison is selected, these duties revert to the release liaison.

* Monitor for OpenStack Events team queries on the mailing list requesting
  event liaison volunteers - teams not responding may lose event
  representation.

PTG Planning and Execution
~~~~~~~~~~~~~~~~~~~~~~~~~~

* Create a PTG planning etherpad and a retrospective etherpad, and announce
  them in the watcher meeting and on the dev mailing list

  * Example: https://etherpad.opendev.org/p/apr2025-ptg-watcher

* Run sessions at the PTG (if no event liaison is selected)

* Do a retro of the previous cycle

* Coordinate with the team to establish agreement on the agenda for this
  release:

  Review Days Planning
     Determine the number of review days allocated for specs and
     implementation work

  Freeze Dates Coordination
     Define spec approval and feature freeze dates through team collaboration

  Release Schedule Modifications
     Modify the OpenStack release schedule if needed by proposing new dates
     (Example: https://review.opendev.org/c/openstack/releases/+/877094)

* Discuss the implications of the `SLURP or non-SLURP`__ current release

  .. __: https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html

* Sign up for the group photo at the PTG (if applicable)

After PTG
---------

* Send PTG session summaries to the dev mailing list

* Add `RFE bugs`__ if you have action items that are simple to do but
  do not have an owner yet.

  .. __: https://bugs.launchpad.net/watcher/+bugs?field.tag=rfe

* Update the IRC #openstack-watcher channel topic to point to the new
  development-planning etherpad.

A few weeks before milestone 1
------------------------------

* Plan a spec review day

* Periodically check the series goals others have proposed in the “Set series
  goals” link:

  * Example: https://blueprints.launchpad.net/watcher/<current-release>/+setgoals

Milestone 1
-----------

* Release watcher and python-watcherclient via the openstack/releases repo.
  Watcher follows the `cycle-with-intermediary`__ release model:

  .. __: https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary

  * Create actual releases (not just launchpad bookkeeping) at milestone points
  * No launchpad milestone releases are created for intermediary releases
  * When releasing the first version of a library for the cycle, bump
    the minor version to leave room for future stable branch releases

* Release stable branches of watcher

Stable Branch Release Process
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prepare the stable branch for evaluation:

.. code-block:: bash

   git checkout <stable branch>
   git log --no-merges <last tag>..

Analyze the commits to determine the version bump according to semantic
versioning.

Semantic Versioning Guidelines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Choose the version bump based on the changes since the last release:

Major Version (X)
   Backward-incompatible changes that break existing APIs

Minor Version (Y)
   New features that maintain backward compatibility

Patch Version (Z)
   Bug fixes that maintain backward compatibility

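As an illustrative sketch (not part of the release tooling), the bump
decision above can be expressed as a small function, where the change kinds
map to major/minor/patch in priority order:

```python
def next_version(last, changes):
    """Pick the next x.y.z given the kinds of changes since `last`.

    `changes` is any subset of {"breaking", "feature", "bugfix"},
    mirroring the major/minor/patch guidelines above.
    """
    major, minor, patch = (int(p) for p in last.split("."))
    if "breaking" in changes:
        return f"{major + 1}.0.0"   # backward-incompatible: major bump
    if "feature" in changes:
        return f"{major}.{minor + 1}.0"   # compatible feature: minor bump
    if "bugfix" in changes:
        return f"{major}.{minor}.{patch + 1}"   # compatible fix: patch bump
    return last  # nothing to release

print(next_version("14.0.1", {"feature", "bugfix"}))  # minor bump wins: 14.1.0
```

The highest-impact change kind always decides the bump, which matches how a
mixed bag of features and bug fixes is released as a single minor release.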
Release Command Usage
~~~~~~~~~~~~~~~~~~~~~

Generate the release using OpenStack tooling:

* Use the `new-release command
  <https://releases.openstack.org/reference/using.html#using-new-release-command>`__
* Propose the release with a version following the chosen semver format
  (x.y.z)

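As a sketch of what an invocation might look like (the series name and
release type here are illustrative values, and the command is run from a
checkout of the openstack/releases repository as its documentation describes):

.. code-block:: bash

   # From a clone of openstack/releases; "2025.1" and "bugfix" are
   # placeholder values for the series and release type.
   tox -e venv -- new-release 2025.1 watcher bugfix
   # Review the generated deliverable file, then submit it for review.
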
Summit
------

**Responsibility Precedence for Summit Activities:**

1. **Project Update/Onboarding Liaisons** (if appointed):

   * **Project Update Liaison**: responsible for giving the project update
     showcasing the team's achievements for the cycle to the community
   * **Project Onboarding Liaison**: responsible for giving/facilitating
     onboarding sessions during events for the project's community

2. **Event Liaison** (if no Project Update/Onboarding liaisons exist):

   * Coordinates all Summit activities, including project updates and
     onboarding

3. **Release Liaison** (if no Event Liaison is appointed):

   * Work with the team to ensure Summit activities are properly handled:

     * Prepare the project update presentation
     * Prepare the on-boarding session materials
     * Prepare the operator meet-and-greet session

.. note::

   The team can choose to not have a Summit presence if desired.

A few weeks before milestone 2
------------------------------

* Plan a spec review day (optional)

Milestone 2
-----------

* Spec freeze (unless changed by team agreement at PTG)

* Release watcher and python-watcherclient (if needed)

* Stable branch releases of watcher

Shortly after spec freeze
-------------------------

* Create a blueprint status etherpad to help track blueprint work, especially
  non-priority blueprints, so that things get done by Feature Freeze (FF).
  Example:

  * https://etherpad.opendev.org/p/watcher-<release>-blueprint-status

* Create or review a patch to add the next release's specs directory so people
  can propose specs for the next release after the spec freeze for the current
  release

Milestone 3
-----------

* Feature freeze day

* Client library freeze; release python-watcherclient

* Close out all blueprints, including “catch all” blueprints like mox and
  versioned notifications

* Stable branch releases of watcher

* Start writing the `cycle highlights
  <https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights>`__

Week following milestone 3
--------------------------

* If warranted, announce the FFE (feature freeze exception) process to
  have people propose FFE requests to a special etherpad where they will
  be reviewed.
  FFE requests should first be discussed in the IRC meeting with the
  requester present.
  The release liaison has the final decision on granting exceptions.

  .. note::

     If there is only a short time between FF and RC1 (lately it has been 2
     weeks), then the only likely candidates will be low-risk things that are
     almost done. In general Feature Freeze exceptions should not be granted;
     instead, features should be deferred and reproposed for the next
     development cycle. FFEs never extend beyond RC1.

* Mark the max microversion for the release in the
  :doc:`/contributor/api_microversion_history`

A few weeks before RC
---------------------

* Update the release status etherpad with RC1 todos and keep track
  of them in meetings

* Go through the bug list, identify any rc-potential bugs, and tag them

RC
--

* Follow the standard OpenStack release checklist process

* If we want to drop backward-compat RPC code, we have to do a major RPC
  version bump and coordinate it just before the major release:

  * https://wiki.openstack.org/wiki/RpcMajorVersionUpdates
  * Example: https://review.opendev.org/541035

* “Merge latest translations” means translation patches

  * Check for translations with:

    * https://review.opendev.org/#/q/status:open+project:openstack/watcher+branch:master+topic:zanata/translations

* Should NOT plan to have more than one RC if possible. RC2 should only happen
  if there was a mistake and something was missed for RC, or a new regression
  was discovered

* Write the reno prelude for the release GA

  * Example: https://review.opendev.org/644412

* Push the cycle-highlights in marketing-friendly sentences and propose them
  to the openstack/releases repo. They are usually based on the reno prelude
  but made more readable and friendly

  * Example: https://review.opendev.org/644697

Immediately after RC
--------------------

* Look for bot-proposed changes to reno and stable/<cycle>

* Create the launchpad series for the next cycle

* Set the development focus of the project to the new cycle series

* Set the status of the new series to “active development”

* Set the last series status to “current stable branch release”

* Set the previous-to-last series status to “supported”

* Repeat the launchpad steps above for all watcher deliverables.

* Make sure the specs directory for the next cycle gets created so people can
  start proposing new specs

* Make sure to move implemented specs from the previous release

  * Move implemented specs manually (TODO: add tox command in future)

  * Remove template files:

    .. code-block:: bash

       rm doc/source/specs/<release>/index.rst
       rm doc/source/specs/<release>/template.rst

* Ensure liaison handoff: either transition to the new release liaison or
  confirm reappointment for the next cycle

.. _glossary:

Glossary
--------

DPL
   Distributed Project Leadership - A governance model where traditional PTL
   responsibilities are distributed among various specialized liaisons.

FFE
   Feature Freeze Exception - A request to add a feature after the feature
   freeze deadline. Should be used sparingly for low-risk, nearly
   complete features.

GA
   General Availability - The final release of a software version for
   production use.

PTG
   Project Team Gathering - A collaborative event where OpenStack project
   teams meet to plan and coordinate development activities.

RC
   Release Candidate - A pre-release version that is potentially the final
   version, pending testing and bug fixes.

RFE
   Request for Enhancement - A type of bug report requesting a new feature
   or enhancement to existing functionality.

SLURP
   Skip Level Upgrade Release Process - An extended maintenance release
   that allows skipping intermediate versions during upgrades.

Summit
   OpenStack Summit - A conference where the OpenStack community gathers
   for presentations, discussions, and project updates.


Miscellaneous Notes
-------------------

How to track launchpad blueprint approvals
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Core team approves blueprints through team consensus. The release liaison
ensures launchpad status is updated correctly after core team approval:

* Set the approver as the core team member who approved the spec

* Set Direction => Approved and Definition => Approved and make sure the
  Series goal is set to the current release. If code is already proposed, set
  Implementation => Needs Code Review

* Optional: add a comment to the Whiteboard explaining the approval, with a
  date (launchpad does not record approval dates). For example: “We discussed
  this in the team meeting and agreed to approve this for <release>.
  -- <nick> <YYYYMMDD>”

How to complete a launchpad blueprint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* Set Implementation => Implemented. The completion date will be recorded by
  launchpad

doc/source/datasources/aetos.rst (new file, 157 lines)

================
Aetos datasource
================

Synopsis
--------

The Aetos datasource allows Watcher to use an Aetos reverse proxy server as the
source for collected metrics used by the Watcher decision engine. Aetos is a
multi-tenant aware reverse proxy that sits in front of a Prometheus server and
provides Keystone authentication and role-based access control. The Aetos
datasource uses Keystone service discovery to locate the Aetos endpoint and
requires authentication via Keystone tokens.

Requirements
------------

The Aetos datasource has the following requirements:

* An Aetos reverse proxy server deployed in front of Prometheus
* Aetos service registered in Keystone with service type 'metric-storage'
* Valid Keystone credentials for Watcher with admin or service role
* Prometheus metrics with appropriate labels (same as direct Prometheus access)

Like the Prometheus datasource, it is required that Prometheus metrics contain
a label to identify the hostname of the exporter from which the metric was
collected. This is used to match against the Watcher cluster model
``ComputeNode.hostname``. The default for this label is ``fqdn`` and in the
Prometheus scrape configs it would look like:

.. code-block:: yaml

   scrape_configs:
     - job_name: node
       static_configs:
         - targets: ['10.1.2.3:9100']
           labels:
             fqdn: "testbox.controlplane.domain"

This default can be overridden when a deployer uses a different label to
identify the exporter host (for example ``hostname`` or ``host``, or any other
label, as long as it identifies the host).

Internally this label is used in creating ``fqdn_instance_labels``, containing
the list of values assigned to the label in the Prometheus targets.
The elements of the resulting ``fqdn_instance_labels`` are expected to match
the ``ComputeNode.hostname`` used in the Watcher decision engine cluster
model. An example ``fqdn_instance_labels`` is the following:

.. code-block::

   [
       'ena.controlplane.domain',
       'dio.controlplane.domain',
       'tria.controlplane.domain',
   ]

For instance metrics, it is required that Prometheus contains a label
with the uuid of the OpenStack instance in each relevant metric. By default,
the datasource will look for the label ``resource``. The
``instance_uuid_label`` config option in watcher.conf allows deployers to
override this default to any other label name that stores the ``uuid``.

Limitations
-----------

The Aetos datasource shares the same limitations as the Prometheus datasource:

The current implementation doesn't support the ``statistic_series`` function of
the Watcher ``class DataSourceBase``. It is expected that the
``statistic_aggregation`` function (which is implemented) is sufficient for
providing the **current** state of the managed resources in the cluster.
The ``statistic_aggregation`` function defaults to querying back 300 seconds,
starting from the present time (the time period is a function parameter and
can be set to a value as required). Implementing ``statistic_series`` can
always be re-visited if the requisite interest and work cycles are volunteered
by the interested parties.

One further note about a limitation in the implemented
``statistic_aggregation`` function. This function is defined with a
``granularity`` parameter, to be used when querying whichever of the Watcher
``DataSourceBase`` metrics providers. In the case of Aetos (like Prometheus),
we do not fetch and then process individual metrics across the specified time
period. Instead we use the PromQL querying operators and functions, so that
the server itself will process the request across the specified parameters and
then return the result. So the ``granularity`` parameter is redundant and
remains unused in the Aetos implementation of ``statistic_aggregation``. The
granularity of the data fetched by the Prometheus server is specified in its
configuration as the server ``scrape_interval`` (current default 15 seconds).
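
To make the server-side aggregation concrete, a hypothetical PromQL query of
the kind the ``statistic_aggregation`` implementation delegates to the server
could look like the following (the metric and label names are illustrative,
not the exact query Watcher builds):

.. code-block::

   avg_over_time(node_cpu_seconds_total{fqdn="testbox.controlplane.domain"}[300s])

Here the 300 second range corresponds to the default query-back period, and
the averaging happens on the server rather than in Watcher.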

Additionally, there is a slight performance impact compared to direct
Prometheus access. Since Aetos acts as a reverse proxy in front of Prometheus,
there is an additional step for each request, resulting in slightly longer
delays.

Configuration
-------------

A deployer must set the ``datasources`` parameter to include ``aetos``
under the ``[watcher_datasources]`` section of watcher.conf (or add ``aetos``
in datasources for a specific strategy if preferred, e.g. under the
``[watcher_strategies.workload_stabilization]`` section).

.. note::
   Having both Prometheus and Aetos datasources configured at the same time
   is not supported and will result in a configuration error. Allowing this
   can be investigated in the future if a need or a proper use case is
   identified.

The watcher.conf configuration file is also used to set the parameter values
required by the Watcher Aetos data source. The configuration can be
added under the ``[aetos_client]`` section and the available options are
duplicated below from the code as they are self-documenting:

.. code-block:: python

   cfg.StrOpt('interface',
              default='public',
              choices=['internal', 'public', 'admin'],
              help="Type of endpoint to use in keystoneclient."),
   cfg.StrOpt('region_name',
              help="Region in Identity service catalog to use for "
                   "communication with the OpenStack service."),
   cfg.StrOpt('fqdn_label',
              default='fqdn',
              help="The label that Prometheus uses to store the fqdn of "
                   "exporters. Defaults to 'fqdn'."),
   cfg.StrOpt('instance_uuid_label',
              default='resource',
              help="The label that Prometheus uses to store the uuid of "
                   "OpenStack instances. Defaults to 'resource'."),

Authentication and Service Discovery
------------------------------------

Unlike the Prometheus datasource, which requires explicit host and port
configuration, the Aetos datasource uses Keystone service discovery to
automatically locate the Aetos endpoint. The datasource:

1. Uses the configured Keystone credentials to authenticate
2. Searches the service catalog for a service with type 'metric-storage'
3. Uses the discovered endpoint URL to connect to Aetos
4. Attaches a Keystone token to each request for authentication

If the Aetos service is not registered in Keystone, the datasource will
fail to initialize and prevent the decision engine from starting.
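
Before enabling the datasource, an operator can confirm that the catalog
entry relied on by step 2 above exists (illustrative commands; the exact
output depends on the deployment):

.. code-block:: console

   $ openstack service list | grep metric-storage
   $ openstack endpoint list --service metric-storage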

So a sample watcher.conf configured to use the Aetos datasource would look
like the following:

.. code-block:: ini

   [watcher_datasources]

   datasources = aetos

   [aetos_client]

   interface = public
   region_name = RegionOne
   fqdn_label = fqdn

@@ -90,15 +90,15 @@ parameter will need to specify the type of http protocol and the use of
plain text http is strongly discouraged due to the transmission of the access
token. Additionally, the path to the proxy interface needs to be supplied as
well in case Grafana is placed in a subdirectory of the web server. An example
would be: ``https://mygrafana.org/api/datasource/proxy/`` where
``/api/datasource/proxy`` is the default path without any subdirectories.
Likewise, this parameter cannot be placed in the yaml.

To prevent many errors from occurring and potentially filling the log files,
it is advised to specify the desired datasource in the configuration, as this
prevents the datasource manager from having to iterate and try possible
datasources with the launch of each audit. To do this specify
``datasources`` in the ``[watcher_datasources]`` group.

The current configuration that is required to be placed in the traditional
configuration file would look like the following:

@@ -120,7 +120,7 @@ traditional configuration file or in the yaml, however, it is not advised to
mix and match, but in the case it does occur the yaml would override the
settings from the traditional configuration file. All five of these parameters
are dictionaries mapping specific metrics to a configuration parameter. For
instance the ``project_id_map`` will specify the specific project id in Grafana
to be used. The parameters are named as follows:

* project_id_map

@@ -149,10 +149,10 @@ project_id

The project id's can only be determined by someone with the admin role in
Grafana as that role is required to open the list of projects. The list of
projects can be found on ``/datasources`` in the web interface but
unfortunately it does not immediately display the project id. To display the
id one can best hover the mouse over the projects and the url will show the
project id's, for example ``/datasources/edit/7563``. Alternatively the entire
list of projects can be retrieved using the `REST api`_. To easily make
requests to the REST api a tool such as Postman can be used.

@@ -239,18 +239,24 @@ conversion from bytes to megabytes.

    SELECT value/1000000 FROM memory...

Queries will be formatted using the ``.format`` string method within Python.
This format will currently have five attributes exposed to it, labeled
``{0}`` through ``{4}``.
Every occurrence of these characters within the string will be replaced
with the specific attribute.

{0}
    is the aggregate, typically ``mean``, ``min``, ``max``, but ``count``
    is also supported.
{1}
    is the attribute as specified in the attribute parameter.
{2}
    is the period of time to aggregate data over in seconds.
{3}
    is the granularity or the interval between data points in seconds.
{4}
    is translator specific and in the case of InfluxDB it will be used for
    retention_periods.
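
To make the substitution concrete, here is a minimal sketch of how such a
template is expanded with Python's ``str.format``. The query template below
is invented for illustration; it is not one of the query templates shipped
with Watcher.

```python
# Hypothetical InfluxDB-style template: the five positional placeholders
# correspond to the attributes documented above.
template = ('SELECT {0}("{1}") FROM "instance_metrics" '
            'WHERE time > now() - {2}s GROUP BY time({3}s) {4}')

# {0}=aggregate, {1}=attribute, {2}=period (s), {3}=granularity (s),
# {4}=translator specific (left empty here).
query = template.format('mean', 'cpu_util', 300, 60, '')
```

The resulting string is the query sent to the backing time-series database.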

**InfluxDB**

@@ -1,6 +1,11 @@

Datasources
===========

.. note::
   The Monasca datasource is deprecated for removal and optional. To use it,
   install the optional extra: ``pip install watcher[monasca]``. If Monasca
   is configured without installing the extra, Watcher will raise an error
   guiding you to install the client.

.. toctree::
   :glob:
   :maxdepth: 1

doc/source/datasources/prometheus.rst (new file, 140 lines)

=====================
Prometheus datasource
=====================

Synopsis
--------

The Prometheus datasource allows Watcher to use a Prometheus server as the
source for collected metrics used by the Watcher decision engine. At minimum,
deployers must configure the ``host`` and ``port`` at which the Prometheus
server is listening.

Requirements
------------

It is required that Prometheus metrics contain a label to identify the hostname
of the exporter from which the metric was collected. This is used to match
against the Watcher cluster model ``ComputeNode.hostname``. The default for
this label is ``fqdn`` and in the Prometheus scrape configs it would look like:

.. code-block:: yaml

   scrape_configs:
     - job_name: node
       static_configs:
         - targets: ['10.1.2.3:9100']
           labels:
             fqdn: "testbox.controlplane.domain"

This default can be overridden when a deployer uses a different label to
identify the exporter host (for example ``hostname`` or ``host``, or any other
label, as long as it identifies the host).

Internally this label is used in creating ``fqdn_instance_labels``, containing
the list of values assigned to the label in the Prometheus targets.
The elements of the resulting ``fqdn_instance_labels`` are expected to match
the ``ComputeNode.hostname`` used in the Watcher decision engine cluster
model. An example ``fqdn_instance_labels`` is the following:

.. code-block::

   [
       'ena.controlplane.domain',
       'dio.controlplane.domain',
       'tria.controlplane.domain',
   ]
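
As an illustrative sketch (not actual Watcher code; the hostnames are the
example values above), the matching requirement amounts to a membership check
of each label value against the cluster model's hostnames:

```python
# Example label values collected from the Prometheus targets.
fqdn_instance_labels = [
    'ena.controlplane.domain',
    'dio.controlplane.domain',
    'tria.controlplane.domain',
]

# Hostnames as the decision engine's cluster model would report them
# (illustrative values).
compute_node_hostnames = {
    'ena.controlplane.domain',
    'dio.controlplane.domain',
    'tria.controlplane.domain',
}

# Metrics can only be resolved for label values that match a ComputeNode;
# any leftover entries here would have no node to attach metrics to.
unmatched = [label for label in fqdn_instance_labels
             if label not in compute_node_hostnames]
```

In a healthy deployment ``unmatched`` is empty; a non-empty list points at a
label/hostname mismatch to fix in the scrape configuration.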

For instance metrics, it is required that Prometheus contains a label
with the uuid of the OpenStack instance in each relevant metric. By default,
the datasource will look for the label ``resource``. The
``instance_uuid_label`` config option in watcher.conf allows deployers to
override this default to any other label name that stores the ``uuid``.

Limitations
-----------

The current implementation doesn't support the ``statistic_series`` function of
the Watcher ``class DataSourceBase``. It is expected that the
``statistic_aggregation`` function (which is implemented) is sufficient for
providing the **current** state of the managed resources in the cluster.
The ``statistic_aggregation`` function defaults to querying back 300 seconds,
starting from the present time (the time period is a function parameter and
can be set to a value as required). Implementing ``statistic_series`` can
always be re-visited if the requisite interest and work cycles are volunteered
by the interested parties.

One further note about a limitation in the implemented
``statistic_aggregation`` function. This function is defined with a
``granularity`` parameter, to be used when querying whichever of the Watcher
``DataSourceBase`` metrics providers. In the case of Prometheus, we do not
fetch and then process individual metrics across the specified time period.
Instead we use the PromQL querying operators and functions, so that the
server itself will process the request across the specified parameters and
then return the result. So the ``granularity`` parameter is redundant and
remains unused in the Prometheus implementation of ``statistic_aggregation``.
The granularity of the data fetched by the Prometheus server is specified in
its configuration as the server ``scrape_interval`` (current default 15
seconds).

Configuration
-------------

A deployer must set the ``datasources`` parameter to include ``prometheus``
under the ``[watcher_datasources]`` section of watcher.conf (or add
``prometheus`` in datasources for a specific strategy if preferred, e.g. under
the ``[watcher_strategies.workload_stabilization]`` section).

The watcher.conf configuration file is also used to set the parameter values
required by the Watcher Prometheus data source. The configuration can be
added under the ``[prometheus_client]`` section and the available options are
duplicated below from the code as they are self-documenting:

.. code-block:: python

   cfg.StrOpt('host',
              help="The hostname or IP address for the prometheus server."),
   cfg.StrOpt('port',
              help="The port number used by the prometheus server."),
   cfg.StrOpt('fqdn_label',
              default="fqdn",
              help="The label that Prometheus uses to store the fqdn of "
                   "exporters. Defaults to 'fqdn'."),
   cfg.StrOpt('instance_uuid_label',
              default="resource",
              help="The label that Prometheus uses to store the uuid of "
                   "OpenStack instances. Defaults to 'resource'."),
   cfg.StrOpt('username',
              help="The basic_auth username to use to authenticate with the "
                   "Prometheus server."),
   cfg.StrOpt('password',
              secret=True,
              help="The basic_auth password to use to authenticate with the "
                   "Prometheus server."),
   cfg.StrOpt('cafile',
              help="Path to the CA certificate for establishing a TLS "
                   "connection with the Prometheus server."),
   cfg.StrOpt('certfile',
              help="Path to the client certificate for establishing a TLS "
                   "connection with the Prometheus server."),
   cfg.StrOpt('keyfile',
              help="Path to the client key for establishing a TLS "
                   "connection with the Prometheus server."),

The ``host`` and ``port`` are **required** configuration options which have
no set default. These specify the hostname (or IP) and port at which
the Prometheus server is listening. The ``fqdn_label`` allows deployers to
override the required metric label used to match Prometheus node exporters
against the Watcher ComputeNodes in the Watcher decision engine cluster data
model. The default is ``fqdn`` and deployers can specify any other value
(e.g. if they have an equivalent but different label such as ``host``).

So a sample watcher.conf configured to use the Prometheus server at
``10.2.3.4:9090`` would look like the following:

.. code-block:: ini

   [watcher_datasources]

   datasources = prometheus

   [prometheus_client]

   host = 10.2.3.4
   port = 9090
   fqdn_label = fqdn

doc/source/image_src/plantuml/action_state_machine.txt (new file, 23 lines)

@startuml

skinparam ArrowColor DarkRed
skinparam StateBorderColor DarkRed
skinparam StateBackgroundColor LightYellow
skinparam Shadowing true

[*] --> PENDING: The Watcher Planner\ncreates the Action
PENDING --> SKIPPED: The Action detects skipping condition\nin pre_condition or was\nskipped by cloud Admin.
PENDING --> FAILED: The Action fails unexpectedly\nin pre_condition.
PENDING --> ONGOING: The Watcher Applier starts executing\nthe action.
ONGOING --> FAILED: Something failed while executing\nthe Action in the Watcher Applier
ONGOING --> SUCCEEDED: The Watcher Applier executed\nthe Action successfully
FAILED --> DELETED : Administrator removes\nAction Plan
SUCCEEDED --> DELETED : Administrator removes\nthe Action
ONGOING --> CANCELLED : The Action was cancelled\nas part of an Action Plan cancellation.
PENDING --> CANCELLED : The Action was cancelled\nas part of an Action Plan cancellation.
CANCELLED --> DELETED
FAILED --> DELETED
SKIPPED --> DELETED
DELETED --> [*]

@enduml

doc/source/images/action_state_machine.png (new binary file, 75 KiB; not shown)

@@ -42,6 +42,7 @@ specific prior release.

   user/index
   configuration/index
   contributor/plugin/index
   integrations/index
   man/index

.. toctree::

@@ -9,7 +9,7 @@

   ...
   connection = mysql+pymysql://watcher:WATCHER_DBPASS@controller/watcher?charset=utf8

* In the ``[DEFAULT]`` section, configure the transport url for RabbitMQ message broker.

.. code-block:: ini

@@ -20,7 +20,7 @@

Replace the RABBIT_PASS with the password you chose for the OpenStack user in RabbitMQ.

* In the ``[keystone_authtoken]`` section, configure Identity service access.

.. code-block:: ini

@@ -39,7 +39,7 @@

Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.

* Watcher interacts with other OpenStack projects via project clients; in order to instantiate these
  clients, Watcher requests a new session from the Identity service. In the ``[watcher_clients_auth]`` section,
  configure the identity service access to interact with other OpenStack project clients.

.. code-block:: ini

@@ -56,7 +56,7 @@

Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.

* In the ``[api]`` section, configure the host option.

.. code-block:: ini

@@ -66,7 +66,7 @@

Replace controller with the IP address of the management network interface on your controller node, typically 10.0.0.11 for the first node in the example architecture.

* In the ``[oslo_messaging_notifications]`` section, configure the messaging driver.

.. code-block:: ini

doc/source/integrations/index.rst (new file, 126 lines)

============
Integrations
============

The following table provides the integration status of the different services
with which Watcher interacts. Some integrations are marked as Supported,
while others are Experimental due to a lack of testing and proper
documentation.

Integration Status Matrix
-------------------------

.. list-table::
   :widths: 20 20 20 20
   :header-rows: 1

   * - Service Name
     - Integration Status
     - Documentation
     - Testing
   * - :ref:`Cinder <cinder_integration>`
     - Supported
     - Minimal
     - Unit
   * - :ref:`Glance <glance_integration>`
     - Experimental
     - Missing
     - None
   * - :ref:`Ironic <ironic_integration>`
     - Experimental
     - Minimal
     - Unit
   * - :ref:`Keystone <keystone_integration>`
     - Supported
     - Minimal
     - Integration
   * - :ref:`MAAS <maas_integration>`
     - Experimental
     - Missing
     - Unit
   * - :ref:`Neutron <neutron_integration>`
     - Experimental
     - Missing
     - Unit
   * - :ref:`Nova <nova_integration>`
     - Supported
     - Minimal
     - Unit and Integration
   * - :ref:`Placement <placement_integration>`
     - Supported
     - Minimal
     - Unit and Integration

.. note::
   Minimal documentation covers only basic configuration and, if available,
   how to enable notifications.

.. _cinder_integration:

Cinder
^^^^^^

The OpenStack Block Storage service integration includes a cluster data
model collector that creates an in-memory representation of the storage
resources, strategies that propose solutions based on storage capacity,
and Actions that perform volume migration.

.. _glance_integration:

Glance
^^^^^^

The Image service integration is consumed by Nova Helper to create instances
from images, which was used by older releases of Watcher to cold migrate
instances. This procedure is not used by Watcher anymore and this integration
is classified as Experimental and may be removed in future releases.

.. _ironic_integration:

Ironic
^^^^^^

The Bare Metal service integration includes a data model collector that
creates an in-memory representation of Ironic resources and Actions that
allow the management of the power state of nodes. This integration is
classified as Experimental and may be removed in future releases.

.. _keystone_integration:

Keystone
^^^^^^^^

The Identity service integration includes authentication with other services
and retrieving information about domains, projects and users.

.. _maas_integration:

MAAS (Metal As A Service)
^^^^^^^^^^^^^^^^^^^^^^^^^

This integration allows managing bare metal servers of a MAAS service,
which includes Actions that manage the power state of nodes. This
integration is classified as Experimental and may be removed in future
releases.

.. _neutron_integration:

Neutron
^^^^^^^

The Neutron integration is currently consumed by Nova Helper to create
instances, which was used by older releases of Watcher to cold migrate
instances. This procedure is not used by Watcher anymore and this integration
is classified as Experimental and may be removed in future releases.

.. _nova_integration:

Nova
^^^^

The Nova service integration includes a cluster data model collector that
creates an in-memory representation of the compute resources available in
the cloud, strategies that propose solutions based on available resources,
and Actions that perform instance migrations.

.. _placement_integration:

Placement
^^^^^^^^^

The Placement integration allows Watcher to track resource provider
inventories and usage information, building an in-memory representation of
those resources that can be used by strategies when calculating new
solutions.

@@ -48,7 +48,7 @@

   logging configuration to any other existing logging
   options. Please see the Python logging module documentation
   for details on logging configuration files. The log-config
   name for this option is deprecated.

**--log-format FORMAT**
   A logging.Formatter log message format string which may use any

@@ -26,8 +26,7 @@ metric service name plugins comment

                                                  ``compute_monitors`` option
                                                  to ``cpu.virt_driver`` in
                                                  the nova.conf.
``cpu``                      ceilometer_  none
============================ ============ ======= ===========================

.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute

@@ -11,10 +11,6 @@ Synopsis

.. watcher-term:: watcher.decision_engine.strategy.strategies.host_maintenance.HostMaintenance

Metrics
*******

@@ -56,15 +52,29 @@ Configuration

Strategy parameters are:

========================== ======== ========================== ==========
parameter                  type     description                required
========================== ======== ========================== ==========
``maintenance_node``       String   The name of the            Required
                                    compute node
                                    which needs maintenance.
``backup_node``            String   The name of the compute    Optional
                                    node which will backup
                                    the maintenance node.
``disable_live_migration`` Boolean  False: Active instances    Optional
                                    will be live migrated.
                                    True: Active instances
                                    will be cold migrated
                                    if cold migration is
                                    not disabled. Otherwise,
                                    they will be stopped.
                                    False by default.
``disable_cold_migration`` Boolean  False: Inactive instances  Optional
                                    will be cold migrated.
                                    True: Inactive instances
                                    will not be cold migrated.
                                    False by default.
========================== ======== ========================== ==========

Efficacy Indicator
------------------
@@ -80,13 +90,46 @@ to: https://specs.openstack.org/openstack/watcher-specs/specs/queens/approved/cl
|
||||
How to use it ?
|
||||
---------------
|
||||
|
||||
Run an audit using Host Maintenance strategy.
|
||||
Executing the actions will move the servers from compute01 host
|
||||
to a host determined by the Nova scheduler service.
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
$ openstack optimize audit create \
|
||||
-g cluster_maintaining -s host_maintenance \
|
||||
-p maintenance_node=compute01
|
||||
|
||||
Run an audit using Host Maintenance strategy with a backup node specified.
Executing the actions will move the servers from compute01 host
to compute02 host.

.. code-block:: shell

    $ openstack optimize audit create \
      -g cluster_maintaining -s host_maintenance \
      -p maintenance_node=compute01 \
      -p backup_node=compute02

Run an audit using Host Maintenance strategy with migration disabled.
This will only stop active instances on compute01, which is useful for
maintenance scenarios where operators do not want to migrate workloads
to other hosts.

.. code-block:: shell

    $ openstack optimize audit create \
      -g cluster_maintaining -s host_maintenance \
      -p maintenance_node=compute01 \
      -p disable_live_migration=True \
      -p disable_cold_migration=True

Note that after executing this strategy, the *maintenance_node* will be
marked as disabled, with the reason set to ``watcher_maintaining``.
To enable the node again:

.. code-block:: shell

    $ openstack compute service set --enable compute01

External Links
--------------

@@ -6,3 +6,67 @@ Strategies

   :maxdepth: 1

   ./*

Strategies status matrix
------------------------

.. list-table::
   :widths: 20 20 20 20
   :header-rows: 1

   * - Strategy Name
     - Status
     - Testing
     - Can Be Triggered from Horizon (UI)
   * - :doc:`actuation`
     - Experimental
     - Unit, Integration
     - No
   * - :doc:`basic-server-consolidation`
     - Experimental
     - Missing
     - Yes, with default values
   * - :doc:`host_maintenance`
     - Supported
     - Unit, Integration
     - No (requires parameters)
   * - :doc:`node_resource_consolidation`
     - Supported
     - Unit, Integration
     - Yes, with default values
   * - :doc:`noisy_neighbor`
     - Deprecated
     - Unit
     - N/A
   * - :doc:`outlet_temp_control`
     - Experimental
     - Unit
     - Yes, with default values
   * - :doc:`saving_energy`
     - Experimental
     - Unit
     - Yes, with default values
   * - :doc:`storage_capacity_balance`
     - Experimental
     - Unit
     - Yes, with default values
   * - :doc:`uniform_airflow`
     - Experimental
     - Unit
     - Yes, with default values
   * - :doc:`vm_workload_consolidation`
     - Supported
     - Unit, Integration
     - Yes, with default values
   * - :doc:`workload-stabilization`
     - Experimental
     - Missing
     - Yes, with default values
   * - :doc:`workload_balance`
     - Supported
     - Unit, Integration
     - Yes, with default values
   * - :doc:`zone_migration`
     - Supported (Instance migrations), Experimental (Volume migration)
     - Unit, Some Integration
     - No

@@ -89,9 +89,9 @@ step 2: Create audit to do optimization

.. code-block:: shell

    $ openstack optimize audittemplate create \
      saving_energy_template1 saving_energy --strategy saving_energy

    $ openstack optimize audit create -a saving_energy_template1 \
      -p free_used_percent=20.0

External Links
--------------

@@ -35,6 +35,11 @@ power ceilometer_ kwapi_ one point every 60s

.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
.. _monasca: https://github.com/openstack/monasca-agent/blob/master/docs/Libvirt.md

.. note::
    The Monasca datasource is deprecated for removal and optional. If a
    strategy requires Monasca metrics, ensure the Monasca optional extra is
    installed: ``pip install watcher[monasca]``.

.. _kwapi: https://kwapi.readthedocs.io/en/latest/index.html

@@ -22,14 +22,19 @@ The *vm_workload_consolidation* strategy requires the following metrics:

============================ ============ ======= =========================
metric                       service name plugins comment
============================ ============ ======= =========================
``cpu_util``                 ceilometer_  none    cpu_util has been removed
                                                  since Stein.
``cpu``                      ceilometer_  none
``memory.resident``          ceilometer_  none
``memory``                   ceilometer_  none
``disk.root.size``           ceilometer_  none
``compute.node.cpu.percent`` ceilometer_  none    (optional) need to set the
                                                  ``compute_monitors`` option
                                                  to ``cpu.virt_driver`` in the
                                                  nova.conf.
``hardware.memory.used``     ceilometer_  SNMP_   (optional)
============================ ============ ======= =========================

.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute
.. _SNMP: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#snmp-based-meters

Cluster data model
******************

@@ -1,6 +1,6 @@

===============================
Workload Stabilization Strategy
===============================

Synopsis
--------

@@ -19,21 +19,20 @@ Metrics

The *workload_stabilization* strategy requires the following metrics:

============================ ==================================================
metric                       description
============================ ==================================================
``instance_ram_usage``       RAM usage in an instance, as a float in megabytes
``instance_cpu_usage``       CPU usage in an instance, as a float ranging
                             between 0 and 100 and representing the total CPU
                             usage as a percentage
``host_ram_usage``           RAM usage in a compute node, as a float in
                             megabytes
``host_cpu_usage``           CPU usage in a compute node, as a float ranging
                             between 0 and 100 and representing the total CPU
                             usage as a percentage
============================ ==================================================

Cluster data model
******************

@@ -69,23 +68,49 @@ Configuration

Strategy parameters are:

====================== ====== =================== =============================
parameter              type   default Value       description
====================== ====== =================== =============================
``metrics``            array  |metrics|           Metrics used as rates of
                                                  cluster loads.
``thresholds``         object |thresholds|        Dict where key is a metric
                                                  and value is a trigger value.
                                                  The strategy will only look
                                                  for an action plan when the
                                                  standard deviation of the
                                                  usage of one of the resources
                                                  included in the metrics,
                                                  taken as a normalized usage
                                                  between 0 and 1 among the
                                                  hosts, is higher than the
                                                  threshold. The standard
                                                  deviation of a perfectly
                                                  balanced cluster would be 0,
                                                  while in a totally unbalanced
                                                  one it would be 0.5, which
                                                  should be the maximum value.
``weights``            object |weights|           These weights are used to
                                                  calculate the common standard
                                                  deviation when optimizing the
                                                  resources usage. The name of
                                                  a weight contains the meter
                                                  name and a _weight suffix.
                                                  Higher values imply the
                                                  metric will be prioritized
                                                  when calculating an optimal
                                                  resulting cluster
                                                  distribution.
``instance_metrics``   object |instance_metrics|  Mapping from each instance
                                                  metric listed in ``metrics``
                                                  to the compute node metric
                                                  that represents the same
                                                  resource usage.
``host_choice``        string retry               Method of choosing a
                                                  destination host for
                                                  instances. There are cycle,
                                                  retry and fullsearch methods.
                                                  Cycle will iterate hosts in
                                                  cycle. Retry will return
                                                  random hosts (count defined
                                                  by the retry_count option).
                                                  Fullsearch will return each
                                                  host from list.
``retry_count``        number 1                   Count of random returned
                                                  hosts.
``periods``            object |periods|           Time, in seconds, used to get
                                                  statistical values for
                                                  resources usage for instance
                                                  and host metrics. Watcher
                                                  will use the last period to
                                                  calculate resource usage.
``granularity``        number 300                 NOT RECOMMENDED TO MODIFY:
                                                  The time between two measures
                                                  in an aggregated timeseries
                                                  of a metric.
``aggregation_method`` object |aggn_method|       NOT RECOMMENDED TO MODIFY:
                                                  Function used to aggregate
                                                  multiple measures into an
                                                  aggregated value.
====================== ====== =================== =============================

.. |metrics| replace:: ["instance_cpu_usage", "instance_ram_usage"]
.. |thresholds| replace:: {"instance_cpu_usage": 0.2, "instance_ram_usage": 0.2}
.. |weights| replace:: {"instance_cpu_usage_weight": 1.0, "instance_ram_usage_weight": 1.0}
.. |instance_metrics| replace:: {"instance_cpu_usage": "host_cpu_usage", "instance_ram_usage": "host_ram_usage"}
.. |periods| replace:: {"instance": 720, "node": 600}
.. |aggn_method| replace:: {"instance": 'mean', "compute_node": 'mean'}
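Because the object- and array-typed parameters above are passed on the command
line as JSON strings, it can help to build the ``-p`` flags programmatically.
A minimal sketch, using the documented default values (the helper itself is
illustrative and not part of Watcher):

```python
import json

# Values mirroring the documented defaults for the workload_stabilization
# strategy; adjust to your own metrics as needed.
metrics = ["instance_cpu_usage", "instance_ram_usage"]
thresholds = {"instance_cpu_usage": 0.2, "instance_ram_usage": 0.2}
weights = {"instance_cpu_usage_weight": 1.0, "instance_ram_usage_weight": 1.0}

# Each -p flag expects its value JSON-encoded, e.g.
#   -p thresholds='{"instance_cpu_usage": 0.2, ...}'
flags = []
for name, value in [("metrics", metrics),
                    ("thresholds", thresholds),
                    ("weights", weights)]:
    flags += ["-p", "%s=%s" % (name, json.dumps(value))]

print(" ".join(flags))
```

Encoding the values with ``json.dumps`` avoids quoting mistakes that are easy
to make when typing nested dicts by hand.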

Efficacy Indicator
------------------

Global efficacy indicator:

.. watcher-func::
    :format: literal_block

    watcher.decision_engine.goal.efficacy.specs.WorkloadBalancing.get_global_efficacy_indicator

Other efficacy indicators of the goal are:

- ``instance_migrations_count``: The number of VM migrations to be performed
- ``instances_count``: The total number of audited instances in the strategy
- ``standard_deviation_after_audit``: The resulting standard deviation
- ``standard_deviation_before_audit``: The original standard deviation

Algorithm
---------

@@ -136,10 +178,10 @@ How to use it ?

      at1 workload_balancing --strategy workload_stabilization

    $ openstack optimize audit create -a at1 \
      -p thresholds='{"instance_ram_usage": 0.05}' \
      -p metrics='["instance_ram_usage"]'

External Links
--------------

None

@@ -11,26 +11,35 @@ Synopsis

.. watcher-term:: watcher.decision_engine.strategy.strategies.workload_balance.WorkloadBalance

Requirements
------------

None.

Metrics
*******

The ``workload_balance`` strategy requires the following metrics:

======================= ============ ======= =========== ======================
metric                  service name plugins unit        comment
======================= ============ ======= =========== ======================
``cpu``                 ceilometer_  none    percentage  CPU of the instance.
                                                         Used to calculate the
                                                         threshold.
``memory.resident``     ceilometer_  none    MB          RAM of the instance.
                                                         Used to calculate the
                                                         threshold.
======================= ============ ======= =========== ======================

.. _ceilometer: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#openstack-compute


.. note::
    * The parameters above reference the instance CPU or RAM usage, but
      the threshold calculation is based on the CPU/RAM usage on the
      hypervisor.
    * The RAM usage can be calculated from the RAM consumed by the instance
      and the available RAM on the hypervisor.
    * The CPU percentage calculation relies on the CPU load, but also on the
      number of CPUs on the hypervisor.
    * The host memory metric is calculated by summing the RAM usage of each
      instance on the host. This measure is close to the real usage, but is
      not the exact usage on the host.

Cluster data model
******************

@@ -65,15 +74,28 @@ Configuration

Strategy parameters are:

================ ====== ==================== ==================================
parameter        type   default value        description
================ ====== ==================== ==================================
``metrics``      String instance_cpu_usage   Workload balance based on CPU or
                                             RAM utilization. Choices:
                                             ['instance_cpu_usage',
                                             'instance_ram_usage']
``threshold``    Number 25.0                 Workload threshold for migration.
                                             Used for both the source and the
                                             destination calculations.
                                             The threshold is always a
                                             percentage.
``period``       Number 300                  Aggregate time period of
                                             ceilometer
``granularity``  Number 300                  The time between two measures in
                                             an aggregated timeseries of a
                                             metric. This parameter is only
                                             used with the Gnocchi data
                                             source, and it must match one of
                                             the valid archive policies for
                                             the metric.
================ ====== ==================== ==================================

Efficacy Indicator
------------------

@@ -89,13 +111,35 @@ to: https://specs.openstack.org/openstack/watcher-specs/specs/mitaka/implemented

How to use it ?
---------------

Create an audit template using the Workload Balancing strategy.

.. code-block:: shell

    $ openstack optimize audittemplate create \
      at1 workload_balancing --strategy workload_balance

Run an audit using the Workload Balance strategy. The result of
the audit should be an action plan to move VMs from any host
where the CPU usage is over the threshold of 26%, to a host
where the CPU utilization is under the threshold.
The measurements of CPU utilization are taken from the configured
datasource plugin with an aggregate period of 310 seconds.

.. code-block:: shell

    $ openstack optimize audit create -a at1 -p threshold=26.0 \
      -p period=310 -p metrics=instance_cpu_usage

Run an audit using the Workload Balance strategy to
obtain a plan to balance VMs over hosts with a threshold of 20%.
In this case, the CPU utilization measurement window is determined
by a combination of the period and granularity parameters.

.. code-block:: shell

    $ openstack optimize audit create -a at1 \
      -p granularity=30 -p threshold=20 -p period=300 \
      -p metrics=instance_cpu_usage --auto-trigger

External Links
--------------

@@ -11,6 +11,13 @@ Synopsis

.. watcher-term:: watcher.decision_engine.strategy.strategies.zone_migration.ZoneMigration

.. note::
    The term ``Zone`` in the strategy name is not a reference to
    `Openstack availability zones <https://docs.openstack.org/nova/latest/admin/availability-zones.html>`_
    but rather a user-defined set of Compute nodes and storage pools.
    Currently, migrations across actual availability zones are not fully
    tested and might not work in all cluster configurations.

Requirements
------------

@@ -59,66 +66,83 @@ Configuration

Strategy parameters are:

======================== ======== ======== ========= ==========================
parameter                type     default  required  description
======================== ======== ======== ========= ==========================
``compute_nodes``        array    None     Optional  Compute nodes to migrate.
``storage_pools``        array    None     Optional  Storage pools to migrate.
``parallel_total``       integer  6        Optional  The number of actions to
                                                     be run in parallel in
                                                     total.
``parallel_per_node``    integer  2        Optional  The number of actions to
                                                     be run in parallel per
                                                     compute node in one
                                                     action plan.
``parallel_per_pool``    integer  2        Optional  The number of actions to
                                                     be run in parallel per
                                                     storage pool.
``priority``             object   None     Optional  List that prioritizes
                                                     instances and volumes.
``with_attached_volume`` boolean  False    Optional  False: Instances will
                                                     migrate after all volumes
                                                     migrate.
                                                     True: An instance will
                                                     migrate after the
                                                     attached volumes migrate.
======================== ======== ======== ========= ==========================

.. note::
    * All parameters in the table above have defaults, and therefore the
      user can create an audit without specifying a value. However,
      if **only** default parameters are used, there will be nothing
      actionable for the audit.
    * The ``parallel_*`` parameters do not control concurrency; rather,
      they limit the number of actions added to the action plan.
    * ``compute_nodes``, ``storage_pools``, and ``priority`` are optional
      parameters; however, if they are passed, they **require** the
      parameters in the tables below:

The elements of the ``compute_nodes`` array are:

============= ======= ======== ========= ========================
parameter     type    default  required  description
============= ======= ======== ========= ========================
``src_node``  string  None     Required  Compute node from which
                                         instances migrate.
``dst_node``  string  None     Optional  Compute node to which
                                         instances migrate.
                                         If omitted, nova will
                                         choose the destination
                                         node automatically.
============= ======= ======== ========= ========================

The elements of the ``storage_pools`` array are:

============= ======= ======== ========= ========================
parameter     type    default  required  description
============= ======= ======== ========= ========================
``src_pool``  string  None     Required  Storage pool from which
                                         volumes migrate.
``dst_pool``  string  None     Optional  Storage pool to which
                                         volumes migrate.
``src_type``  string  None     Required  Source volume type.
``dst_type``  string  None     Required  Destination volume type.
============= ======= ======== ========= ========================

The elements of the ``priority`` object are:

================ ======= ======== ========= =====================
parameter        type    default  required  description
================ ======= ======== ========= =====================
``project``      array   None     Optional  Project names.
``compute_node`` array   None     Optional  Compute node names.
``storage_pool`` array   None     Optional  Storage pool names.
``compute``      enum    None     Optional  Instance attributes.
                                            |compute|
``storage``      enum    None     Optional  Volume attributes.
                                            |storage|
================ ======= ======== ========= =====================

.. |compute| replace:: ["vcpu_num", "mem_size", "disk_size", "created_at"]
.. |storage| replace:: ["size", "created_at"]
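Parameters such as ``compute_nodes`` and ``priority`` are passed on the CLI as
JSON values, so composing them in code can reduce quoting errors. A sketch
under the assumption of hypothetical node names (``s01``, ``d01``); the
``compute`` list reuses the attribute values from the |compute| substitution
above:

```python
import json

# Hypothetical node names for illustration only; replace with the names of
# compute nodes in your own environment.
params = {
    "compute_nodes": [{"src_node": "s01", "dst_node": "d01"}],
    "priority": {
        "compute_node": ["s01"],
        "compute": ["vcpu_num", "mem_size", "disk_size", "created_at"],
    },
}

# Render the -p flags as they would appear on the audit create command line.
cli = " ".join("-p %s='%s'" % (k, json.dumps(v)) for k, v in params.items())
print(cli)
```

The resulting string can be appended to
``openstack optimize audit create -a <audit_template>``.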

@@ -126,11 +150,26 @@ parameter type default Value description

Efficacy Indicator
------------------

The efficacy indicators for action plans built from the command line
are:

.. watcher-func::
    :format: literal_block

    watcher.decision_engine.goal.efficacy.specs.HardwareMaintenance.get_global_efficacy_indicator

In **Horizon**, these indicators are shown with alternative text.

* ``live_migrate_instance_count`` is shown as
  ``The number of instances actually live migrated`` in Horizon
* ``planned_live_migrate_instance_count`` is shown as
  ``The number of instances planned to live migrate`` in Horizon
* ``planned_live_migration_instance_count`` refers to the instances planned
  to live migrate in the action plan.
* ``live_migrate_instance_count`` tracks all the instances that could be
  migrated according to the audit input.

Algorithm
---------

@@ -148,6 +187,22 @@ How to use it ?

    $ openstack optimize audit create -a at1 \
      -p compute_nodes='[{"src_node": "s01", "dst_node": "d01"}]'

.. note::
    * Currently, the strategy will not generate both volume migrations and
      instance migrations in the same audit. If both are requested,
      only volume migrations will be included in the action plan.
    * The Cinder model collector is not enabled by default.
      If the Cinder model collector is not enabled while deploying Watcher,
      the model will become outdated and eventually cause errors.
      See the `Configuration option to enable the storage collector <https://docs.openstack.org/watcher/latest/configuration/watcher.html#collector.collector_plugins>`_ documentation.

Support caveats
---------------

This strategy offers the option to perform both Instance migrations and
Volume migrations. Currently, Instance migrations are ready for production
use while Volume migrations remain experimental.

External Links
--------------

@@ -132,8 +132,8 @@ audit) that you want to use.

    $ openstack optimize audit create -a <your_audit_template>

If your_audit_template was created by --strategy <your_strategy>, and it
defines some parameters (use the command ``watcher strategy show`` to check
the parameter format), you can append ``-p`` to input the required parameters:

.. code:: bash

playbooks/generate_prometheus_config.yml (new file, 9 lines)
@@ -0,0 +1,9 @@

---
- hosts: all
  tasks:
    - name: Generate prometheus.yml config file
      delegate_to: controller
      template:
        src: "templates/prometheus.yml.j2"
        dest: "/home/zuul/prometheus.yml"
        mode: "0644"

playbooks/templates/prometheus.yml.j2 (new file, 13 lines)
@@ -0,0 +1,13 @@

global:
  scrape_interval: 10s
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["localhost:3000"]
{% if 'compute' in groups %}
{% for host in groups['compute'] %}
      - targets: ["{{ hostvars[host]['ansible_fqdn'] }}:9100"]
        labels:
          fqdn: "{{ hostvars[host]['ansible_fqdn'] }}"
{% endfor %}
{% endif %}
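For reference, with a single compute host whose FQDN is (hypothetically)
``compute-0.example.com`` in the ``compute`` inventory group, the template
above would render to roughly:

```yaml
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["localhost:3000"]
      - targets: ["compute-0.example.com:9100"]
        labels:
          fqdn: "compute-0.example.com"
```

Each compute host in the group contributes one additional scrape target on
port 9100, labelled with its FQDN.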

pyproject.toml (new file, 3 lines)
@@ -0,0 +1,3 @@

[build-system]
requires = ["pbr>=6.0.0", "setuptools>=64.0.0"]
build-backend = "pbr.build"

@@ -1,7 +1,8 @@

Rally job
=========

We provide, with Watcher, a Rally plugin you can use to benchmark
the optimization service.

To launch this task with a configured Rally you just need to run:


releasenotes/notes/2025.1-prelude-8be97eece4e1d1ff.yaml (new file, 33 lines)
@@ -0,0 +1,33 @@

---
prelude: |
  The ``OpenStack 2025.1`` release (``Watcher 14.0.0``) includes several new
  features, deprecations, and removals. After a period of inactivity, the
  Watcher project moved to the distributed leadership model in ``2025.1``,
  with several new contributors working to modernize the code base.
  Activity this cycle was mainly focused on paying down technical debt
  related to supporting newer testing runtimes. With this release,
  ``Ubuntu 24.04`` is now officially tested and supported.

  ``Ubuntu 24.04`` brings a new default Python runtime, ``3.12``, and with it
  improvements to eventlet and SQLAlchemy 2.0 compatibility where required.
  ``2025.1`` is the last release to officially support and test with
  ``Ubuntu 22.04``.

  ``2025.1`` is the second official skip-level upgrade release, supporting
  upgrades from either ``2024.1`` or ``2024.2``.

  Another area of focus in this cycle was the data sources supported by
  Watcher. The long obsolete `Ceilometer` API data source has been removed,
  the untested `Monasca` data source has been deprecated, and a new
  `Prometheus` data source has been added.
  https://specs.openstack.org/openstack/watcher-specs/specs/2025.1/approved/prometheus-datasource.html
fixes:
  - https://bugs.launchpad.net/watcher/+bug/2086710 watcher compatibility
    between eventlet, apscheduler, and Python 3.12
  - https://bugs.launchpad.net/watcher/+bug/2067815 refactoring of the
    SQLAlchemy database layer to improve compatibility with eventlet on
    newer Pythons
  - A number of linting issues were addressed with the introduction of
    pre-commit. The issues include, but are not limited to, spelling and
    grammar fixes across all documentation and code, numerous Sphinx
    documentation build warnings, and incorrect file permissions such as
    files having the execute bit set when not required. While none of these
    changes should affect the runtime behavior of Watcher, they generally
    improve the maintainability and quality of the codebase.
23
releasenotes/notes/2025.2-prelude-a9f4c7b2e8d15692.yaml
Normal file
@@ -0,0 +1,23 @@
---
prelude: |
  The ``OpenStack 2025.2`` (``Watcher 15.0.0``) release delivers stronger
  security, granular operational control, and comprehensive monitoring
  capabilities for cloud optimization. This release strengthens the foundation
  for reliable, large-scale cloud operations while giving administrators
  complete flexibility in managing optimization workflows.

  Cloud operators gain secure and reliable volume migration processes that
  eliminate data loss scenarios and ensure tenant isolation. The Host
  Maintenance strategy now provides granular control over migration behavior,
  including the ability to disable live or cold migration and safely stop
  instances when migration cannot proceed.

  The new ``Aetos`` data source adds secure, role-based access to Prometheus
  metrics through Keystone authentication. This enables multi-tenant
  monitoring while maintaining access controls across your cloud
  infrastructure.

  Administrators can now exercise precise control over optimization workflows
  by manually skipping actions or allowing Watcher to automatically skip
  actions based on detected conditions. Custom status messages document why
  administrators or Watcher took specific actions, improving operational
  visibility and troubleshooting.
@@ -0,0 +1,14 @@
---
features:
  - |
    Three new parameters have been added to the ``nop`` action:

    * ``fail_pre_condition``: When set to `true`, the action
      fails during the pre_condition step.

    * ``fail_execute``: When set to `true`, the action fails
      during the execute step.

    * ``fail_post_condition``: When set to `true`, the action
      fails during the post_condition step.
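The failure-injection behavior the three parameters describe can be sketched as follows. This is a hypothetical illustration, not the actual Watcher `Nop` action class: the class name, constructor signature, and exception type here are assumptions chosen for clarity.

```python
# Hypothetical sketch of an action whose three lifecycle steps can be
# forced to fail via input parameters, mirroring the new ``nop``
# parameters described above. Not the real Watcher implementation.

class NopSketch:
    def __init__(self, fail_pre_condition=False, fail_execute=False,
                 fail_post_condition=False):
        self.fail_pre_condition = fail_pre_condition
        self.fail_execute = fail_execute
        self.fail_post_condition = fail_post_condition

    def pre_condition(self):
        # Fails before execution when requested.
        if self.fail_pre_condition:
            raise RuntimeError("pre_condition failed on request")

    def execute(self):
        # Fails during the execute step when requested.
        if self.fail_execute:
            raise RuntimeError("execute failed on request")
        return True

    def post_condition(self):
        # Fails after execution when requested.
        if self.fail_post_condition:
            raise RuntimeError("post_condition failed on request")
```

Parameters like these are mainly useful for testing action-plan error handling: an action plan containing a `nop` with `fail_execute: true` exercises the failure and rollback paths without touching real resources.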
@@ -0,0 +1,6 @@
---
features:
  - |
    Support for instance metrics has been added to the Prometheus data source.
    The included metrics are `instance_cpu_usage`, `instance_ram_usage`,
    `instance_ram_allocated` and `instance_root_disk_size`.
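Fetching one of these metrics from Prometheus boils down to issuing a PromQL query that selects the metric for a given instance. The sketch below builds such query strings; the label name (`resource`) and the helper itself are assumptions for illustration, not necessarily what Watcher's data source uses internally.

```python
# Hypothetical sketch: building PromQL instant-vector selectors for the
# instance metrics listed above. The label name ("resource") is an
# assumption for illustration only.

INSTANCE_METRICS = (
    "instance_cpu_usage",
    "instance_ram_usage",
    "instance_ram_allocated",
    "instance_root_disk_size",
)

def build_query(metric, instance_uuid, label="resource"):
    """Return a PromQL selector for one instance metric.

    e.g. instance_cpu_usage{resource="<uuid>"}
    """
    if metric not in INSTANCE_METRICS:
        raise ValueError("unsupported instance metric: %s" % metric)
    return '%s{%s="%s"}' % (metric, label, instance_uuid)
```

Such a selector can then be passed to Prometheus's HTTP query API (or a client library) to retrieve the current value for that instance.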
Some files were not shown because too many files have changed in this diff.