Compare commits


120 Commits

Author SHA1 Message Date
zhoulinhui
583c946061 Use importlib in place of the imp module
The imp module has been deprecated[1] since version 3.4; use importlib
instead

1: https://docs.python.org/3/library/imp.html#imp.reload

Change-Id: Ic126bc8e0936e5d7a2c7a910b54b7348026fedcb
2020-08-29 16:12:52 +00:00
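As a minimal illustration of the replacement (the module chosen here is arbitrary):

```python
import importlib
import json

# imp.reload() has been deprecated since Python 3.4; importlib.reload()
# is the drop-in replacement for re-importing an already-loaded module.
reloaded = importlib.reload(json)
assert reloaded is json  # reload returns the same module object
```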
Zuul
25a0b184a1 Merge "option to rollback action_plan when it fails" 2020-08-18 07:55:59 +00:00
Luigi Toscano
ed59145354 Native Zuul v3 watcher-grenade job + some cleanup
Create a native Zuul v3 grenade job. It matches the existing job,
even though it doesn't call any local hook as the current legacy
job does (because no local hook exists and it should be rewritten
as zuul configuration if it did).

The new job reuses the variable definition of the devstack watcher
job, so clean up that job as well:
- do not depend on devstack-gate, which is not needed and will be
  deprecated soon anyway;
- use the new way (tempest_plugins) to define which tempest plugin
  should be installed;
- remove the definition of USE_PYTHON3: true and simply inherit
  the value set by devstack;
- remove the definition of PYTHONUNBUFFERED, not really set
  anywhere else and only useful back in the days in Jenkins.

Change-Id: Ib0ed3c0f395e1b85b8f25f6e438c414165baab32
2020-07-29 09:45:17 +02:00
suzhengwei
19adfda3b9 option to rollback action_plan when it fails
Rolling back an action_plan has costs, so give users
an option to decide whether to roll it back
when the action_plan fails.

Change-Id: I20c0afded795eda7fb1b57ffdd2ae1ca36c45301
2020-07-10 10:31:26 +08:00
Zuul
fa56bc715e Merge "resize action don't support revert" 2020-07-06 01:48:00 +00:00
Zuul
350ce66d3c Merge "Watcher API supports strategy name when creating audit template" 2020-07-06 01:47:59 +00:00
licanwei
1667046f58 resize action don't support revert
Change-Id: Ia2df0e0a4f242392915aa2a89d4fbae39b6c70e9
2020-07-02 14:48:55 +08:00
limin0801
3f7a508a2e Watcher API supports strategy name when creating audit template
When directly using the `curl` command to create an audit template,
the strategy name can be accepted.

Closes-Bug: #1884174

Change-Id: I7c0ca760a7fa414faca03c5293df34a84aad6fac
2020-07-01 01:46:44 +00:00
Zuul
f7f5659bca Merge "Revert "Don't revert Migrate action"" 2020-06-25 03:28:11 +00:00
suzhengwei
57f55190ff Revert "Don't revert Migrate action"
Whether to revert the migrate action when the action_plan fails is determined by the 'rollback_actionplan' option.

This reverts commit c522e881b1.

Change-Id: I5379018b7838dff4caf0ee0ce06cfa32e7b37b12
2020-06-22 09:26:46 +00:00
Zuul
237550ad57 Merge "remove mox3" 2020-06-19 07:24:56 +00:00
Zuul
cad67702d6 Merge "Use unittest.mock instead of mock" 2020-06-19 02:19:21 +00:00
licanwei
ae678dfaaa remove mox3
Change-Id: Ia7a4dce8ccc8d9062d6fcca74b8184d85ee7fccb
2020-06-19 09:49:32 +08:00
Zuul
5ad3960286 Merge "voting watcher-grenade" 2020-06-18 08:09:19 +00:00
licanwei
dbd86be363 voting watcher-grenade
Change-Id: I69ef17b545c62fe5b17e002b4c154e80e7fa5ffa
2020-06-18 10:14:01 +08:00
licanwei
9f0138e1cf Check if scope is None
if scope is None, don't create data model

Change-Id: Icf611966c9b0a3882615d778ee6c72a8da73841d
Closed-Bug: #1881920
2020-06-18 00:58:16 +00:00
zhurong
097ac06f0b Use uwsgi binary from path and mark grenade non-voting
Change-Id: Iaa6283e3f34166210cc2d0c918e610484bfd3ab9
2020-06-16 08:02:26 +00:00
Hervé Beraud
0869b1c75c Use unittest.mock instead of mock
The mock third party library was needed for mock support in py2
runtimes. Since we now only support py36 and later, we can use the
standard lib unittest.mock module instead.

Change-Id: I4ee01710d04d650a3ad5ae069015255d3f674c74
2020-06-09 12:20:06 +02:00
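The switch itself is mechanical; a brief sketch of the standard library usage that replaces the third party import:

```python
from unittest import mock  # previously: import mock

import os

# unittest.mock offers the same patching API as the third party library.
with mock.patch("os.getpid", return_value=1234):
    assert os.getpid() == 1234  # patched inside the context manager
```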
Zuul
527578a147 Merge "Compatible with old scope format" 2020-06-09 07:32:12 +00:00
Hervé Beraud
b0c411b22a Cap jsonschema 3.2.0 as the minimal version
Previous versions of jsonschema (<3.2.0) don't support python 3.8 [1].
Python 3.8 is part of the victoria supported runtimes [2], so we now force
the use of jsonschema version 3.2.0 to avoid issues, remove ambiguity and
ensure that everything works with python 3 in general.

[1] https://github.com/Julian/jsonschema/pull/627
[2] https://governance.openstack.org/tc/reference/runtimes/victoria.html#python-runtimes-for-victoria

Change-Id: Id476227552c3fa91eecadbc6c4370c354f56a40d
2020-06-05 03:39:13 +00:00
licanwei
4a1915bec4 Compatible with old scope format
Scope format changed from old to new after bp cdm-scoping.

old format:
  - availability_zones:
    - name: nova
  - host_aggregates:
    - id: 1
    - name: agg
  - exclude:
    - compute_nodes:
      - name: w012

new format:
- compute:
  - availability_zones:
    - name: nova
  - host_aggregates:
    - id: 1
    - name: agg
  - exclude:
    - compute_nodes:
      - name: w012

Change-Id: I2b5cd4d1cee19f5588e4d2185eb074343fff1187
Closed-Bug: #1882049
2020-06-04 17:24:41 +08:00
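A hypothetical compatibility shim for this change could simply wrap an old-style scope under the new top-level compute key (the function name is illustrative, not watcher's actual implementation):

```python
def upgrade_scope(scope):
    """Wrap an old-format scope list under the new 'compute' key."""
    if scope and isinstance(scope[0], dict) and "compute" in scope[0]:
        return scope  # already new format
    return [{"compute": scope}]

# Old format, as shown in the commit message above.
old = [
    {"availability_zones": [{"name": "nova"}]},
    {"host_aggregates": [{"id": 1}, {"name": "agg"}]},
    {"exclude": [{"compute_nodes": [{"name": "w012"}]}]},
]
new = upgrade_scope(old)
assert new == [{"compute": old}]
assert upgrade_scope(new) == new  # idempotent on new-format input
```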
Sean McGinnis
751027858b Use unittest.mock instead of third party mock
Now that we no longer support py27, we can use the standard library
unittest.mock module instead of the third party mock lib.

Change-Id: I6cdd4c35a52a014ba3c4dfe4cc2bd4d670c96bc3
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
2020-05-29 13:48:06 -05:00
Zuul
12bd9c0590 Merge "Remove translation sections from setup.cfg" 2020-05-28 02:30:26 +00:00
Andreas Jaeger
1ff940598f Switch to newer openstackdocstheme and reno versions
Switch to openstackdocstheme 2.2.1 and reno 3.1.0. Using
these versions will, in particular, allow:
* Linking from HTML to PDF document
* Allow parallel building of documents
* Fix some rendering problems

Update Sphinx version as well.

Set openstackdocs_pdf_link to link to PDF file. Note that
the link to the published document only works on docs.openstack.org
where the PDF file is placed in the top-level html directory. The
site-preview places the PDF in a pdf directory.

Set openstackdocs_auto_name to False to use 'project' variable as name.

Change pygments_style to 'native' since the old theme version always used
'native'; the theme now respects the setting, and using 'sphinx' can
lead to some strange rendering.

Remove docs requirements from lower-constraints, they are not needed
during install or test but only for docs building.

openstackdocstheme renames some variables, so follow the renames
before the next release removes them. A couple of variables are also
not needed anymore, remove them.

See also
http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014971.html

Change-Id: Ia9a3fb804fb59bb70edc150a3eb20c07a279170b
2020-05-21 15:15:16 +00:00
Andreas Jaeger
9d495618d2 Fix requirements check
Remove python_version so that requirements-check passes again.

Change-Id: I46c6118d9b29a17a3186b3fd5f47115236913a16
2020-05-21 12:35:46 +02:00
jacky06
c6d2690aa3 Remove translation sections from setup.cfg
These translation sections are not needed anymore, Babel can
generate translation files without them.

Change-Id: I95bde8575638511449edaa1e546e3399bf0e6451
2020-05-15 00:56:16 +08:00
Zuul
623e44ecf9 Merge "Monkey patch original current_thread _active" 2020-05-14 03:54:43 +00:00
zhangbailin
5c34b6bc47 hacking: force explicit import of python's mock
Since we dropped support for python 2 [1], we no longer need to use the
mock library, which existed to backport py3 functionality into py2.
Explicitly importing python's mock must be done by saying::

    from unittest import mock

...because if you say::

    import mock

...you definitely will not be getting the standard library mock.
That will always import the third party mock library.

This commit adds hacking check N366 to enforce the former.

This check can be removed in the future (and we can start saying
``import mock`` again) if we manage to purge these transitive
dependencies. I'm not holding my breath.

[1]https://review.opendev.org/#/c/717540

Change-Id: I8c8c99024e8de61d9151480d70543f809a100998
2020-05-13 15:42:42 +08:00
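A hacking-style check for this rule might look roughly like the following (the regex and function name here are illustrative, not watcher's actual N366 implementation):

```python
import re

# Flag any logical line that imports the third party mock library.
_RE_THIRD_PARTY_MOCK = re.compile(r"^\s*(import\s+mock\b|from\s+mock\b)")


def check_explicit_mock_import(logical_line):
    """N366: use 'from unittest import mock' instead of 'import mock'."""
    if _RE_THIRD_PARTY_MOCK.match(logical_line):
        yield 0, "N366: use 'from unittest import mock'"


# 'from unittest import mock' passes; 'import mock' is flagged.
assert list(check_explicit_mock_import("from unittest import mock")) == []
assert list(check_explicit_mock_import("import mock"))
```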
zhangbailin
8a36ad5f87 Use unittest.mock instead of third party mock
Now that we no longer support py27, we can use the standard library
unittest.mock module instead of the third party mock lib.

The remainder was auto-generated with the following (hacky) script, with
one or two manual tweaks after the fact:

  import glob

  for path in glob.glob('watcher/tests/**/*.py', recursive=True):
      with open(path) as fh:
          lines = fh.readlines()
      if 'import mock\n' not in lines:
          continue
      import_group_found = False
      create_first_party_group = False
      for num, line in enumerate(lines):
          line = line.strip()
          if line.startswith('import ') or line.startswith('from '):
              tokens = line.split()
              for lib in (
                  'ddt', 'six', 'webob', 'fixtures', 'testtools',
                  'neutron', 'cinder', 'ironic', 'keystone', 'oslo',
              ):
                  if lib in tokens[1]:
                      create_first_party_group = True
                      break
              if create_first_party_group:
                  break
              import_group_found = True
          if not import_group_found:
              continue
          if line.startswith('import ') or line.startswith('from '):
              tokens = line.split()
              if tokens[1] > 'unittest':
                  break
              elif tokens[1] == 'unittest' and (
                  len(tokens) == 2 or tokens[4] > 'mock'
              ):
                  break
          elif not line:
              break
      if create_first_party_group:
          lines.insert(num, 'from unittest import mock\n\n')
      else:
          lines.insert(num, 'from unittest import mock\n')
      del lines[lines.index('import mock\n')]
      with open(path, 'w+') as fh:
          fh.writelines(lines)

Co-Authored-By: Sean McGinnis <sean.mcginnis@gmail.com>

Change-Id: Icf35d3a6c10c529e07d1a4edaa36f504e5bf553a
2020-05-13 15:41:55 +08:00
Ghanshyam Mann
6ff95efaf6 Fix hacking min version to 3.0.1
flake8's new release 3.8.0 added new checks and the gate pep8
job started failing. hacking 3.0.1 fixes the pinning of flake8 to
avoid bringing in a new version with new checks.

Though this is fixed in the latest hacking, 2.0 and 3.0 cap
flake8 as <4.0.0, which means a new flake8 version 3.9.0 could also
break the pep8 job if new checks are added.

To avoid similar gate breaks in the future, we need to bump the hacking min
version.

- http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014828.html

Change-Id: I1fe394ebd1f161eb73f53bfa17d2ccc860b9f51b
2020-05-12 21:35:41 -05:00
Zuul
ba2f1804b0 Merge "Add py38 package metadata" 2020-05-07 09:27:32 +00:00
Zuul
44061326e9 Merge "Remove future imports" 2020-05-07 08:57:16 +00:00
Chris MacNaughton
0b4c4f1de6 Monkey patch original current_thread _active
Monkey patch the original current_thread to use the up-to-date _active
global variable. This solution is based on that documented at:
https://github.com/eventlet/eventlet/issues/592

Change-Id: I194eedd505d45137963eb40d1b1d5da2309caeac
Closes-Bug: #1863021
2020-05-06 10:21:32 +02:00
Sean McGinnis
9652571437 Add py38 package metadata
Now that we are running the Victoria tests that include a
voting py38 job, we can add the Python 3.8 metadata to the
package information to reflect that support.

Change-Id: Icf85483ff64055d16d35f189755e5fb01fabf574
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
2020-05-02 07:48:18 -05:00
zhangbailin
f0f15f89c6 Remove future imports
These particular imports are no longer needed in a Python 3-only world.

Change-Id: I5e9e15556c04871c451f6363380f2a7ac026c968
2020-05-02 00:33:39 +00:00
qiufossen
075e374b3d Remove Babel requirement
Babel is not needed as requirement, remove it.

See also
http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014227.html

Change-Id: Id5c54668738e3de8ded900f389b646dcdef5d007
2020-04-29 15:38:43 +08:00
Zuul
eaa0dfea4b Merge "Remove six[8] remove requirement&low-requirement" 2020-04-29 03:03:40 +00:00
Zuul
b7956de761 Merge "Remove six[7]" 2020-04-29 03:03:39 +00:00
OpenStack Proposal Bot
a30dbdd724 Imported Translations from Zanata
For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I7cdff6bcc91edf445f60365a1cb921bb582c7c13
2020-04-26 09:05:59 +00:00
OpenStack Release Bot
60a829e982 Add Python3 victoria unit tests
This is an automatically generated patch to ensure unit testing
is in place for all of the tested runtimes for victoria.

See also the PTI in governance [1].

[1]: https://governance.openstack.org/tc/reference/project-testing-interface.html

Change-Id: Ia59e92394115c4b672c86772840a1e188695079f
2020-04-23 09:48:55 +00:00
OpenStack Release Bot
74cfa0fc8c Update master for stable/ussuri
Add file to the reno documentation build to show release notes for
stable/ussuri.

Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/ussuri.

Change-Id: I63fc3e49802f89ac2d967ee089a9dd9dffbe9c78
Sem-Ver: feature
2020-04-23 09:48:53 +00:00
chenker
5071c8f8fa Remove six[8] remove requirement&low-requirement
Change-Id: I84de517a08a87936f6a9015de350dcda2e24bcef
2020-04-22 16:04:00 +08:00
chenke
0ef0f165cb Remove six[7]
Since our code will only support py3, removing six is necessary.

Change-Id: I3738118b1898421ee41e9e2902c255ead73f3915
2020-04-22 15:59:15 +08:00
Zuul
25f313a3ef Merge "Remove six[6]" 2020-04-18 09:26:36 +00:00
Zuul
7218947c5b Merge "Remove six[5]" 2020-04-18 09:26:35 +00:00
Zuul
0aa5bb3265 Merge "Remove six[4]" 2020-04-18 09:26:34 +00:00
Zuul
7591beb65d Merge "Remove six[3]" 2020-04-18 09:26:33 +00:00
Zuul
7e9236939f Merge "Remove six[2]" 2020-04-18 09:25:59 +00:00
Zuul
e48c0893e7 Merge "Remove six[1]" 2020-04-18 09:24:34 +00:00
licanwei
38649b2df0 convert EfficacyIndicator.value to float type
EfficacyIndicator.value is of Decimal type, which is
not JSON serializable, so we convert the value type
before serialization.

Closed-Bug: #1873377
Change-Id: Id38969775c446bece71f7a85c5c5d3efee9befa0
2020-04-17 10:43:26 +08:00
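The underlying problem and the fix can be reproduced with the standard library (the field name is illustrative):

```python
import decimal
import json

value = decimal.Decimal("0.85")

# Decimal is not JSON serializable out of the box...
try:
    json.dumps({"value": value})
    raised = False
except TypeError:
    raised = True
assert raised

# ...so convert the value to float before serialization, as the commit does.
payload = json.dumps({"value": float(value)})
assert json.loads(payload)["value"] == 0.85
```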
chenke
0ff8248f91 Remove six[6]
Change-Id: I7bf782ac1aa2ff404bef180d9ff37ffcfb29001a
2020-04-16 16:14:54 +08:00
chenke
bf2caf8b1d Remove six[5]
Change-Id: I27b341cb8f48313bd2aad6b7996cd9cbbad94217
2020-04-16 16:12:36 +08:00
chenke
6a6dbc1491 Remove six[4]
Change-Id: I3026b5d3eb20f71d4218873646c69d8328db054d
2020-04-16 16:09:48 +08:00
chenke
244e02c3d5 Remove six[3]
Change-Id: I92535c69f7055a7431ff14d3b9722149950e7f91
2020-04-16 16:04:32 +08:00
chenke
591e4a8f38 Remove six[2]
Change-Id: Id952d00e689c1077d741c742175be06778af6ec1
2020-04-16 16:02:39 +08:00
chenke
4bf59cfe51 Remove six[1]
Change-Id: I2738db925d650af5921b77d0315ec0a8d4ee985b
2020-04-16 16:00:37 +08:00
licanwei
de9d250537 update description about audit argument interval
Change-Id: I5ae8ab672edac1637c2bef4201fec30e896cd8ed
2020-04-13 08:31:11 +00:00
licanwei
3a3a487c71 remove wsmeext.sphinxext
Error when importing wsmeext.sphinxext
Could not import extension wsmeext.sphinxext
(exception: cannot import name 'l_')

Change-Id: Id23c9c1fd35153d67d4ffb50dc1cd40f30b7ab41
2020-04-13 11:27:45 +08:00
zhangbailin
f3c427bdef Cleanup py27 support
This commit cleans up the following:
- Remove python 2.7 stanza from setup.py
- Add requires on python >= 3.6 to setup.cfg so that pypi and pip
  know about the requirement
- Add "ignore_basepython_conflict=True" to tox.ini

Change-Id: Ic4fcc1fb15f214ca4204f56ee1ea15dc6a782fc2
2020-04-09 02:37:00 +00:00
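In rough terms the metadata side of such a cleanup looks like this (a sketch; the exact keys and values in watcher's setup.cfg and tox.ini may differ):

```ini
# setup.cfg
[metadata]
python-requires = >=3.6

# tox.ini
[tox]
ignore_basepython_conflict = True
```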
zhangbailin
6a0fe94e5c Block Sphinx 3.0.0
Sphinx 3.0.0 breaks the building here, block it for now.

Depends-On: https://review.opendev.org/#/c/717949/

Change-Id: Ibf0c93ea79fec647fbf749257835f1fa99d5f59d
2020-04-08 06:20:24 +00:00
Andreas Jaeger
1bb2aefec3 Update hacking for Python3
The repo is Python 3 now, so update hacking to version 3.0 which
supports Python 3.

Fix problems found.

Update local hacking checks for new flake8.

Remove hacking and friends from lower-constraints, they are not needed
to be installed at run-time.

Change-Id: Ia6af344ec8441dc98a0820176373dcff3a8c80d5
2020-04-02 07:50:02 +02:00
licanwei
60a3f1f072 Removed py27 in testing doc
Change-Id: Ib7e45aec73c4d3b11eaf5288d739edad3b12c4ee
2020-03-23 10:22:31 +08:00
chenke
c17e96d38b Add procname for uwsgi based service watcher-api
Code in grenade and elsewhere relies on the process/service name:
when one runs "ps auxw", a grep such as "grep -e watcher-api"
checks if the service is running. With uwsgi, let us make sure
we use a process name prefix so it is easier to spot the services
and stay compatible with code elsewhere that relies on this.

Reference:
https://review.opendev.org/#/c/494531/

Change-Id: I69dbe8840e87a8cb0b2720caa95fb17fb7a30848
2020-03-02 16:21:07 +08:00
Zuul
fa37036304 Merge "Add config option enable_webhooks_auth" 2020-02-26 13:25:24 +00:00
Zuul
8140173aa3 Merge "just set necessary config options" 2020-02-22 01:34:24 +00:00
licanwei
18a516b87a just set necessary config options
There are many warnings in the Watcher API log file;
the reason is that keystonemiddleware only needs the config
section keystone_authtoken.
Refer to https://docs.openstack.org/keystonemiddleware/latest/
Closes-Bug: #1864129

Change-Id: Ie790277d55b3a2d93c26781f7e5e8f66b87227d8
2020-02-21 01:29:46 +00:00
licanwei
e71aaa66db simplify doc directory
Change-Id: I11bfa4d3bcb7c01ef638c0fa97cb872e96698e29
2020-02-17 17:19:13 +08:00
licanwei
4255d5b28f Add config option enable_webhooks_auth
Partially Implements: blueprint event-driven-optimization-based

Change-Id: I6cdfc18661b279f0d7200f39212ecdb31e500723
2020-02-15 14:21:13 +08:00
Zuul
2591b03625 Merge "Add releasenote for event-driven-optimization-based" 2020-02-13 07:05:04 +00:00
Zuul
11d55bc9fc Merge "Doc: Add EVENT audit description" 2020-02-13 07:04:44 +00:00
Zuul
42f001d34c Merge "api-ref: Add webhook API reference" 2020-02-12 08:57:52 +00:00
Zuul
4cf722161b Merge "Add api version history" 2020-02-12 00:56:19 +00:00
licanwei
145fccdd23 api-ref: Add webhook API reference
Change-Id: I75c5b2de55df276d414633f16ad9735a9871b59d
Implements: blueprint event-driven-optimization-based
2020-02-12 00:47:09 +00:00
Zuul
3e4eda2a80 Merge "Community Goal: Project PTL & Contrib Docs Update" 2020-02-10 14:44:40 +00:00
Zuul
16b08c39e6 Merge "releasenotes: Fix reference url" 2020-02-10 03:43:58 +00:00
licanwei
9b6629054a Doc: Add EVENT audit description
Change-Id: Ia6db6f8c21282a4755997cf47fd618670148c23f
Implements: blueprint event-driven-optimization-based
2020-02-10 11:13:06 +08:00
licanwei
56b2f113ed Community Goal: Project PTL & Contrib Docs Update
Change-Id: I07de7b94ed51eebc31886793aa5a1e87353dfbc6
Story: #2007236
Task: #38570
2020-02-10 10:09:23 +08:00
licanwei
83d37d2bee Add api version history
Change-Id: I4079f015e59b8acd5460574c67af58b45c46dc4d
Implements: blueprint event-driven-optimization-based
2020-02-06 10:40:24 +08:00
licanwei
58083bb67b releasenotes: Fix reference url
Change-Id: I0da6021f6d39cb7d6e79e8f637046d8dd0285647
2020-02-05 16:48:49 +08:00
licanwei
f79321ceeb Add releasenote for event-driven-optimization-based
Change-Id: If8fa82dab2e7f0ae359805eb68cc8562cfc641e3
Implements: blueprint event-driven-optimization-based
2020-02-04 03:46:32 +00:00
licanwei
05e81c3d88 doc: move Concurrency doc to admin guide
Change-Id: Ia1b034a5f79a5c7eeffdba2df727fd26cf13d1cc
2020-02-03 09:55:23 +00:00
licanwei
ae83ef02e7 doc for event type audit
Partially Implements: blueprint event-driven-optimization-based

Change-Id: I11211b606afd55dfa46a0942132be58dc30e28a4
2020-01-14 17:07:24 +08:00
licanwei
91b58a6775 Move install doc to user guide
Change-Id: I7b4b4ddffbe66a00fdcec4d497c6efa2e9e7729e
2020-01-11 10:16:27 +08:00
Zuul
8835576374 Merge "Add audit type: event" 2020-01-10 03:30:03 +00:00
Zuul
3bc05eaa00 Merge "Add webhook api" 2020-01-10 03:30:02 +00:00
licanwei
693d214166 Update user guide doc
Change-Id: I881b429552e15f13ddcd0ccf1663fb0e2f4123aa
2020-01-09 19:09:15 +08:00
licanwei
775be27719 Add webhook api
Add a new webhook api; its microversion is 1.4.

Partially Implements: blueprint event-driven-optimization-based

Change-Id: I50f7c824e52f3c5fc775d5064898ed422e375a99
2020-01-08 09:41:03 +08:00
zhufl
db709691be Fix duplicated words issue like "an active instance instance"
This is to fix the duplicated words issue like
"Pick up an active instance instance to migrate".

Change-Id: I74de4eb06aa1e462f0b499e3fd62a7cdc7570b31
2020-01-06 15:29:25 +08:00
licanwei
6a173a9161 Add audit type: event
This patchset adds a new audit type, event,
and the handler to execute event audits.

Partially Implements: blueprint event-driven-optimization-based

Change-Id: I287471ee4d1dcc42af7a6bcc15f8509d4ce73072
2019-12-13 15:14:41 +08:00
Zuul
0c02b08a6a Merge "Add list datamodel microversion to api-ref" 2019-12-04 03:07:50 +00:00
Zuul
58eb481e19 Merge "Add a new microversion for data model API" 2019-12-03 04:09:55 +00:00
licanwei
002ea535ae Add list datamodel microversion to api-ref
Change-Id: I0703a935fa6d9d61f62374dbd7afb09b1dfffd5c
Related-Bug: #1854121
2019-12-03 11:03:16 +08:00
licanwei
6f43f2b003 Add a new microversion for data model API
microversion 1.3 for list data model API

Change-Id: Ibf8774a48c3d13ca9762bd5319f5e1ce2ed82b2f
Closes-Bug: #1854121
2019-12-02 14:37:11 +08:00
Dantali0n
ba43f766b8 Releasenote for decision engine threadpool
Add the releasenote for the general purpose decision engine threadpool.
Including config parameters and how contributors can find relevant
documentation.

Implements: blueprint general-purpose-decision-engine-threadpool

Change-Id: I3560069b4e34f13305950559a0f05f7921f7867e
2019-11-30 03:13:15 +00:00
Zuul
42fea1c568 Merge "Documentation on concurrency for contributors" 2019-11-30 02:59:19 +00:00
Zuul
b7baa88010 Merge "Use threadpool when building compute data model" 2019-11-30 02:23:51 +00:00
Zuul
65ec309050 Merge "General purpose threadpool for decision engine" 2019-11-30 02:22:13 +00:00
Zuul
f4fb4981f0 Merge "Migrate grenade jobs to py3" 2019-11-30 02:22:11 +00:00
Zuul
8ae9375e6b Merge "replace host_url with application_url" 2019-11-30 02:22:11 +00:00
Zuul
012c653432 Merge "Use enum class define microversions" 2019-11-30 02:22:09 +00:00
licanwei
a2f1089038 Use enum class define microversions
Related-Bug: #1854121
Change-Id: I53b51e149be7252093aefcf2878684f42a3209c7
2019-11-29 10:55:20 +08:00
Q.hongtao
5171d84b8d Start README.rst with a better title
Now that we are using gitea the contents of our README.rst are
more prominently displayed. Starting it with a "Team and repository
tags" title is a bit confusing. This change makes it start with the
name of the project instead.

Change-Id: Icfce3764aa9e1aabf5e78443cf7ce102de63a052
2019-11-28 09:56:55 +08:00
licanwei
4a269ba039 Change self.node to self.nodes in model_root
networkx removed G.node in version 2.4 [1].
G.node was replaced by G.nodes in version 2.0 [2],
and networkx supports Python 2.7, 3.5, 3.6 and 3.7 from 2.2,
so the lower constraint version is 2.2.
The taskflow library also invokes networkx,
so the taskflow version also needs to be updated.
[1]: https://networkx.github.io/documentation/stable/release/release_2.4.html
[2]: https://networkx.github.io/documentation/stable/release/release_2.0.html
Change-Id: I268bcf57ec977bd8132a9f1573b28b681cb4ce1e
Closes-Bug: #1854132
2019-11-27 17:19:29 +08:00
Dantali0n
b5f8e9a910 Documentation on concurrency for contributors
Documentation with details on general concurrency as well as OpenStack
specific libraries.

Describes how different libraries are effectively used across different
Watcher services. This includes describing how futurist is used in the
Decision Engine and how taskflow is used in the Applier.

Finally, this documentation describes how contributors can use the
new DecisionEngineThreadpool effectively and includes examples.

https://docs.openstack.org/futurist/latest/
https://docs.openstack.org/taskflow/latest/

Change-Id: Ic1cd1f3733a0e9a239c9b8d49951e1e4ece49f3a
Partially Implements: blueprint general-purpose-decision-engine-threadpool
2019-11-27 08:48:16 +01:00
licanwei
0032ed9237 replace host_url with application_url
For the url http://localhost/infra-optim,
pecan.request.host_url is http://localhost
and pecan.request.application_url is http://localhost/infra-optim.
We should use application_url to make the href in links.

Change-Id: I5d7746b3da196ea2e072fbdf1adb1523ba2bffaf
Closes-Bug: #1854119
2019-11-27 14:47:19 +08:00
Zuul
89055577e6 Merge "[ussuri][goal] Drop python 2.7 support and testing" 2019-11-22 07:12:55 +00:00
Zuul
cc0c2d227e Merge "Refactoring the codes about getting used and free resources" 2019-11-20 03:18:38 +00:00
Ghanshyam Mann
ab9a68c784 Migrate grenade jobs to py3
As part of the community goal of dropping py27 support [1], we are
moving devstack to py3 by default [2]. That will make the grenade job
perform an upgrade from py2 to py3, which will not work (we have seen
the failure in the neutron-grenade job).

To avoid breaking the existing grenade py2 job, this commit moves grenade
jobs to py3, which is what we planned as part of dropping py2 support.

[1] https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html
[2] https://review.opendev.org/#/c/649097/12
    http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010938.html

Depends-On: https://review.opendev.org/#/c/649097/

Change-Id: I36229a3fc0bbcd994907154b638d24737959e6e3
2019-11-19 23:38:04 +00:00
Ghanshyam Mann
17f5a65a62 [ussuri][goal] Drop python 2.7 support and testing
OpenStack is dropping py2.7 support in the ussuri cycle.

Watcher is ready with python 3 and it is ok to drop
python 2.7 support.

Complete discussion & schedule can be found in
- http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010142.html
- https://etherpad.openstack.org/p/drop-python2-support

Ussuri Community-wide goal:
https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html

Depends-On: https://review.opendev.org/#/c/693631/

Change-Id: I603c6d2c22779e8ef2e70eb6369fc521a77c9c3a
2019-11-16 14:55:01 +00:00
licanwei
689ae25ef5 Refactoring the codes about getting used and free resources
We have provided functions to get used and free resources in
class ModelRoot. So strategies can invoke the functions to
get used and free resources.

Change-Id: I3c74d56539ac6c6eb16b0d254a76260bc791567c
2019-11-12 16:22:09 +08:00
Zuul
b3a3c686bf Merge "tox: Keeping going with docs" 2019-11-12 03:25:16 +00:00
Dantali0n
c644e23ca0 Use threadpool when building compute data model
Use the general purpose threadpool when building the nova compute
data model. Additionally, this adds a thorough explanation of the
theory of operation.

Updates related test cases to better ensure the correct operation
of add_physical_layer.

Partially Implements: blueprint general-purpose-decision-engine-threadpool

Change-Id: I53ed32a4b2a089b05d1ffede629c9f4c5cb720c8
2019-11-01 13:44:15 +01:00
Dantali0n
2b6ee38327 General purpose threadpool for decision engine
Implements the singleton general purpose threadpool for the decision
engine and associated tests.

A threadpool is a collection of one or more threads, typically called
'workers', to which tasks can be submitted. These submitted tasks will
be scheduled by the threadpool and subsequently executed. How many
tasks will be executed concurrently is managed by the underlying
threadpool and its configuration. In Python the submission of tasks
to a threadpool returns an object called a 'future'. Futures provide
a method to interface with the task being executed that allows
retrieving information about its state, such as whether it is currently
being executed, whether it is waiting on a condition and whether it has
completed successfully. Finally, futures allow retrieving what has been
returned by the submitted task.

In the case of most OpenStack projects, instead of interfacing with native
Python concurrency, the futurist library is used. This library provides
very similar interfaces to native concurrency with some extras such as
the wait_for_any method.

For more information about futurist or Python concurrency the following
references can be consulted:
https://docs.python.org/3/library/concurrent.futures.html
https://docs.openstack.org/futurist/latest/reference/index.html#executors

Partially Implements: blueprint general-purpose-decision-engine-threadpool

Change-Id: I94bd9a17290967f011762f2b9c787ee7c46ff930
2019-11-01 11:33:59 +01:00
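The workflow described above can be sketched with the standard library's executor, to which futurist's interface is very close (a generic illustration, not watcher's DecisionEngineThreadPool):

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait


def square(n):
    # A task submitted to the pool; its return value is captured by a future.
    return n * n


with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(square, n) for n in range(4)]
    # Block until at least one future has completed (cf. futurist's
    # waiter helpers such as wait_for_any).
    done, not_done = wait(futures, return_when=FIRST_COMPLETED)
    # result() blocks until each future has finished, then returns
    # whatever the submitted task returned.
    results = sorted(f.result() for f in futures)

assert results == [0, 1, 4, 9]
```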
jacky06
7d2191d4e6 tox: Keeping going with docs
Sphinx 1.8 introduced [1] the '--keep-going' argument which, as its name
suggests, keeps the build running when it encounters non-fatal errors.
This is exceptionally useful in avoiding a continuous edit-build loop
when undertaking large doc reworks where multiple errors may be
introduced.

[1] https://github.com/sphinx-doc/sphinx/commit/e3483e9b045

Change-Id: If2bbfd8ae6d1fc75cbc494578310c1dc03c367e6
2019-10-24 00:45:42 +00:00
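In a tox.ini docs environment this typically ends up as something like the following (a sketch, not watcher's exact configuration):

```ini
[testenv:docs]
deps = -r{toxinidir}/doc/requirements.txt
commands =
  sphinx-build -W --keep-going -b html doc/source doc/build/html
```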
sunjia
a7b24ac6a5 Switch to Ussuri jobs
Change-Id: I681f324c243860255a9ede0794e7d96026bca5a3
2019-10-22 13:39:11 +08:00
Zuul
ff5bc51052 Merge "Don't throw exception when missing metrics" 2019-10-17 12:59:18 +00:00
licanwei
f685bf62ab Don't throw exception when missing metrics
When querying data from a datasource, it is possible to miss some data.
In this case, if we throw an exception, the audit will fail because of
the exception. We should remove the exception and give the decision
to the strategy.

Change-Id: I1b0e6b78b3bba4df9ba16e093b3910aab1de922e
Closes-Bug: #1847434
2019-10-16 21:01:39 -07:00
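A hypothetical before/after sketch of this change (names are illustrative; watcher's actual datasource helpers differ):

```python
def get_instance_cpu_usage(samples, instance_uuid):
    """Return the metric value, or None when the datasource has no data.

    Previously a missing sample raised an exception, which failed the
    whole audit; now the calling strategy decides how to handle None.
    """
    return samples.get(instance_uuid)  # was: raise an exception if missing


samples = {"uuid-1": 42.0}
assert get_instance_cpu_usage(samples, "uuid-1") == 42.0
assert get_instance_cpu_usage(samples, "uuid-2") is None
```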
Zuul
066f9e02e2 Merge "Remove print()" 2019-10-14 07:43:25 +00:00
licanwei
aa36e6a881 Remove print()
Change-Id: Ida31237b77e98c803cb1ccb3bd5b190289434207
2019-10-11 14:59:14 +08:00
OpenStack Release Bot
e835efaa3f Update master for stable/train
Add file to the reno documentation build to show release notes for
stable/train.

Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/train.

Change-Id: I0d5ae49a33583514925ad966de067afaa8881ff3
Sem-Ver: feature
2019-09-25 08:46:33 +00:00
257 changed files with 2128 additions and 1172 deletions

View File

@@ -3,8 +3,7 @@
       - check-requirements
       - openstack-cover-jobs
       - openstack-lower-constraints-jobs
-      - openstack-python-jobs
-      - openstack-python3-train-jobs
+      - openstack-python3-victoria-jobs
       - publish-openstack-docs-pti
       - release-notes-jobs-python3
     check:
@@ -161,7 +160,6 @@
     timeout: 7200
     required-projects: &base_required_projects
       - openstack/ceilometer
-      - openstack/devstack-gate
       - openstack/python-openstackclient
       - openstack/python-watcherclient
       - openstack/watcher
@@ -180,13 +178,10 @@
         s-container: false
         s-object: false
         s-proxy: false
-      devstack_localrc:
-        TEMPEST_PLUGINS: /opt/stack/watcher-tempest-plugin
+      tempest_plugins:
+        - watcher-tempest-plugin
       tempest_test_regex: watcher_tempest_plugin.tests.api
       tox_envlist: all
-      tox_environment:
-        # Do we really need to set this? It's cargo culted
-        PYTHONUNBUFFERED: 'true'
       zuul_copy_output:
         /etc/hosts: logs
@@ -200,10 +195,12 @@
 - job:
     name: watcher-grenade
-    parent: legacy-dsvm-base
-    timeout: 10800
-    run: playbooks/legacy/grenade-devstack-watcher/run.yaml
-    post-run: playbooks/legacy/grenade-devstack-watcher/post.yaml
+    parent: grenade
+    required-projects:
+      - openstack/watcher
+      - openstack/python-watcherclient
+      - openstack/watcher-tempest-plugin
+    vars: *base_vars
     irrelevant-files:
       - ^(test-|)requirements.txt$
       - ^.*\.rst$
@@ -215,12 +212,6 @@
       - ^setup.cfg$
       - ^tools/.*$
       - ^tox.ini$
-    required-projects:
-      - openstack/grenade
-      - openstack/devstack-gate
-      - openstack/watcher
-      - openstack/python-watcherclient
-      - openstack/watcher-tempest-plugin
 - job:
     # This job is used in python-watcherclient repo

View File

@@ -1,6 +1,6 @@
======================== =======
Team and repository tags Watcher
======================== =======
.. image:: https://governance.openstack.org/tc/badges/watcher.svg .. image:: https://governance.openstack.org/tc/badges/watcher.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html :target: https://governance.openstack.org/tc/reference/tags/index.html
@@ -13,10 +13,6 @@ Team and repository tags
https://creativecommons.org/licenses/by/3.0/ https://creativecommons.org/licenses/by/3.0/
=======
Watcher
=======
OpenStack Watcher provides a flexible and scalable resource optimization OpenStack Watcher provides a flexible and scalable resource optimization
service for multi-tenant OpenStack-based clouds. service for multi-tenant OpenStack-based clouds.
Watcher provides a robust framework to realize a wide range of cloud Watcher provides a robust framework to realize a wide range of cloud

View File

@@ -22,9 +22,6 @@
# All configuration values have a default; values that are commented out # All configuration values have a default; values that are commented out
# serve to show the default. # serve to show the default.
from watcher import version as watcher_version
extensions = [ extensions = [
'openstackdocstheme', 'openstackdocstheme',
'os_api_ref', 'os_api_ref',
@@ -46,21 +43,13 @@ project = u'Infrastructure Optimization API Reference'
copyright = u'2010-present, OpenStack Foundation' copyright = u'2010-present, OpenStack Foundation'
# openstackdocstheme options # openstackdocstheme options
repository_name = 'openstack/watcher' openstackdocs_repo_name = 'openstack/watcher'
bug_project = 'watcher' openstackdocs_auto_name = False
bug_tag = '' openstackdocs_bug_project = 'watcher'
openstackdocs_bug_tag = ''
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The full version, including alpha/beta/rc tags.
release = watcher_version.version_info.release_string()
# The short X.Y version.
version = watcher_version.version_string
# The name of the Pygments (syntax highlighting) style to use. # The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx' pygments_style = 'native'
# -- Options for HTML output -------------------------------------------------- # -- Options for HTML output --------------------------------------------------
@@ -75,10 +64,6 @@ html_theme_options = {
"sidebar_mode": "toc", "sidebar_mode": "toc",
} }
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# -- Options for LaTeX output ------------------------------------------------- # -- Options for LaTeX output -------------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples # Grouping the document tree into LaTeX files. List of tuples

View File

@@ -16,3 +16,4 @@ Watcher API
.. include:: watcher-api-v1-services.inc .. include:: watcher-api-v1-services.inc
.. include:: watcher-api-v1-scoring_engines.inc .. include:: watcher-api-v1-scoring_engines.inc
.. include:: watcher-api-v1-datamodel.inc .. include:: watcher-api-v1-datamodel.inc
.. include:: watcher-api-v1-webhooks.inc

View File

@@ -4,6 +4,8 @@
Data Model Data Model
========== ==========
.. versionadded:: 1.3
``Data Model`` is very important for Watcher to generate resource ``Data Model`` is very important for Watcher to generate resource
optimization solutions. Users can easily view the data model by the optimization solutions. Users can easily view the data model by the
API. API.
@@ -18,7 +20,7 @@ Returns the information about Data Model.
Normal response codes: 200 Normal response codes: 200
Error codes: 400,401 Error codes: 400,401,406
Request Request
------- -------

View File

@@ -0,0 +1,26 @@
.. -*- rst -*-
========
Webhooks
========
.. versionadded:: 1.4
Triggers an event based Audit.
Trigger EVENT Audit
===================
.. rest_method:: POST /v1/webhooks/{audit_ident}
Normal response codes: 202
Error codes: 400,404
Request
-------
.. rest_parameters:: parameters.yaml
- audit_ident: audit_ident
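For illustration, here is a minimal sketch of building the webhook request with Python's standard library. Only the ``/v1/webhooks/{audit_ident}`` path and the POST method come from the API reference above; the host, port, and ``X-Auth-Token`` header are placeholder assumptions.

```python
# Sketch of triggering an EVENT audit via the webhooks endpoint using
# only the standard library. Host, port, and token are placeholders.
import urllib.request

audit_ident = "a3326a6a-c18e-4e8e-adba-d0c61ad404c5"  # audit UUID or name
req = urllib.request.Request(
    "http://controller:9322/v1/webhooks/%s" % audit_ident,
    method="POST",
    headers={"X-Auth-Token": "<keystone-token>"},  # assumed auth scheme
)
# urllib.request.urlopen(req) would send the request; a 202 response
# means the event-based audit was accepted for triggering.
print(req.get_method())  # POST
```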

View File

@@ -1,2 +0,0 @@
[python: **.py]

View File

@@ -298,7 +298,7 @@ function start_watcher_api {
service_protocol="http" service_protocol="http"
fi fi
if [[ "$WATCHER_USE_WSGI_MODE" == "uwsgi" ]]; then if [[ "$WATCHER_USE_WSGI_MODE" == "uwsgi" ]]; then
run_process "watcher-api" "$WATCHER_BIN_DIR/uwsgi --ini $WATCHER_UWSGI_CONF" run_process "watcher-api" "$(which uwsgi) --procname-prefix watcher-api --ini $WATCHER_UWSGI_CONF"
watcher_url=$service_protocol://$SERVICE_HOST/infra-optim watcher_url=$service_protocol://$SERVICE_HOST/infra-optim
else else
watcher_url=$service_protocol://$SERVICE_HOST:$service_port watcher_url=$service_protocol://$SERVICE_HOST:$service_port

View File

@@ -13,8 +13,6 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
from __future__ import unicode_literals
import importlib import importlib
import inspect import inspect

View File

@@ -1,11 +1,10 @@
# The order of packages is significant, because pip processes them in the order # The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration # of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later. # process, which may cause wedges in the gate later.
openstackdocstheme>=1.20.0 # Apache-2.0 openstackdocstheme>=2.2.1 # Apache-2.0
sphinx>=1.6.5,!=1.6.6,!=1.6.7,<2.0.0;python_version=='2.7' # BSD sphinx>=2.0.0,!=2.1.0 # BSD
sphinx>=1.6.5,!=1.6.6,!=1.6.7,!=2.1.0;python_version>='3.4' # BSD
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0 sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD
reno>=2.7.0 # Apache-2.0 reno>=3.1.0 # Apache-2.0
sphinxcontrib-apidoc>=0.2.0 # BSD sphinxcontrib-apidoc>=0.2.0 # BSD
os-api-ref>=1.4.0 # Apache-2.0 os-api-ref>=1.4.0 # Apache-2.0

View File

@@ -8,6 +8,7 @@ Administrator Guide
apache-mod-wsgi apache-mod-wsgi
gmr gmr
policy policy
ways-to-install
../strategies/index ../strategies/index
../datasources/index ../datasources/index
../contributor/notifications
../contributor/concurrency

View File

@@ -281,11 +281,13 @@ previously created :ref:`Audit template <audit_template_definition>`:
:width: 100% :width: 100%
The :ref:`Administrator <administrator_definition>` also can specify type of The :ref:`Administrator <administrator_definition>` also can specify type of
Audit and interval (in case of CONTINUOUS type). There are two types of Audit: Audit and interval (in case of CONTINUOUS type). There are three types of Audit:
ONESHOT and CONTINUOUS. Oneshot Audit is launched once and if it succeeded ONESHOT, CONTINUOUS and EVENT. ONESHOT Audit is launched once and if it
executed new action plan list will be provided. Continuous Audit creates succeeded executed new action plan list will be provided; CONTINUOUS Audit
action plans with specified interval (in seconds); if action plan creates action plans with specified interval (in seconds or cron format, cron
has been created, all previous action plans get CANCELLED state. interval can be used like: `*/5 * * * *`), if action plan
has been created, all previous action plans get CANCELLED state;
EVENT audit is launched when receiving webhooks API.
A message is sent on the :ref:`AMQP bus <amqp_bus_definition>` which triggers A message is sent on the :ref:`AMQP bus <amqp_bus_definition>` which triggers
the Audit in the the Audit in the

View File

@@ -14,7 +14,6 @@
import os import os
import sys import sys
from watcher import version as watcher_version
from watcher import objects from watcher import objects
objects.register_all() objects.register_all()
@@ -36,7 +35,6 @@ extensions = [
'sphinxcontrib.httpdomain', 'sphinxcontrib.httpdomain',
'sphinxcontrib.pecanwsme.rest', 'sphinxcontrib.pecanwsme.rest',
'stevedore.sphinxext', 'stevedore.sphinxext',
'wsmeext.sphinxext',
'ext.term', 'ext.term',
'ext.versioned_notifications', 'ext.versioned_notifications',
'oslo_config.sphinxconfiggen', 'oslo_config.sphinxconfiggen',
@@ -61,16 +59,6 @@ master_doc = 'index'
project = u'Watcher' project = u'Watcher'
copyright = u'OpenStack Foundation' copyright = u'OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# The full version, including alpha/beta/rc tags.
release = watcher_version.version_info.release_string()
# The short X.Y version.
version = watcher_version.version_string
# A list of ignored prefixes for module index sorting. # A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['watcher.'] modindex_common_prefix = ['watcher.']
@@ -95,7 +83,7 @@ add_module_names = True
suppress_warnings = ['app.add_directive'] suppress_warnings = ['app.add_directive']
# The name of the Pygments (syntax highlighting) style to use. # The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx' pygments_style = 'native'
# -- Options for man page output -------------------------------------------- # -- Options for man page output --------------------------------------------
@@ -126,12 +114,13 @@ html_theme = 'openstackdocs'
# Output file base name for HTML help builder. # Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project htmlhelp_basename = '%sdoc' % project
html_last_updated_fmt = '%Y-%m-%d %H:%M'
#openstackdocstheme options #openstackdocstheme options
repository_name = 'openstack/watcher' openstackdocs_repo_name = 'openstack/watcher'
bug_project = 'watcher' openstackdocs_pdf_link = True
bug_tag = '' openstackdocs_auto_name = False
openstackdocs_bug_project = 'watcher'
openstackdocs_bug_tag = ''
# Grouping the document tree into LaTeX files. List of tuples # Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass # (source start file, target name, title, author, documentclass
@@ -139,7 +128,7 @@ bug_tag = ''
latex_documents = [ latex_documents = [
('index', ('index',
'doc-watcher.tex', 'doc-watcher.tex',
u'%s Documentation' % project, u'Watcher Documentation',
u'OpenStack Foundation', 'manual'), u'OpenStack Foundation', 'manual'),
] ]

View File

@@ -0,0 +1,248 @@
===========
Concurrency
===========
Introduction
************
Modern processors typically contain multiple cores, all capable of executing
instructions in parallel. Ensuring applications can fully utilize the
underlying hardware requires developing with these concepts in mind. The
OpenStack community maintains a number of libraries to facilitate this;
combined with constructs like CPython's GIL_, the proper use of these
concepts becomes more straightforward than in many other programming
languages.
The primary libraries maintained by OpenStack to facilitate concurrency are
futurist_ and taskflow_. futurist is the more straightforward and lightweight
of the two, while taskflow is more advanced, supporting features like rollback
mechanisms. Watcher uses both libraries to facilitate concurrency.
.. _GIL: https://wiki.python.org/moin/GlobalInterpreterLock
.. _futurist: https://docs.openstack.org/futurist/latest/
.. _taskflow: https://docs.openstack.org/taskflow/latest/
Threadpool
**********
A threadpool is a collection of one or more threads, typically called
*workers*, to which tasks can be submitted. Submitted tasks are scheduled by
the threadpool and subsequently executed. In Python, tasks are typically bound
or unbound methods, while other programming languages like Java require
implementing an interface.
The order and degree of concurrency with which these tasks are executed is up
to the threadpool to decide. Some libraries like taskflow allow for either
strong or loose ordering of tasks, while others like futurist might only
support loose ordering. Taskflow supports building tree-based hierarchies of
dependent tasks, for example.
Upon submission of a task to a threadpool, a so-called future_ is returned.
These objects allow determining information about the task, such as whether it
is currently being executed or has finished execution. Once the task has
finished executing, the future can also be used to retrieve the value returned
by the method.
Some libraries like futurist provide synchronization primitives for collections
of futures such as wait_for_any_. The following sections will cover different
types of concurrency used in various services of Watcher.
.. _future: https://docs.python.org/3/library/concurrent.futures.html
.. _wait_for_any: https://docs.openstack.org/futurist/latest/reference/index.html#waiters
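As a minimal sketch of these submit/future/wait semantics, the example below uses the standard library's ``concurrent.futures``, whose executor interface the futurist executors also expose; the task function is illustrative and not part of Watcher.

```python
# Sketch of threadpool/future semantics using the standard library's
# concurrent.futures; futurist executors offer the same submit()/Future
# interface. The collect() task is a stand-in, not Watcher code.
from concurrent.futures import ThreadPoolExecutor, wait

def collect(name):
    # Stand-in for a network- or I/O-bound task.
    return "collected %s" % name

with ThreadPoolExecutor(max_workers=2) as executor:
    # submit() follows the common (fn, *args, **kwargs) signature and
    # immediately returns a future for the scheduled task.
    futures = [executor.submit(collect, n) for n in ("aggregates", "zones")]
    # Block until every future has completed, then read the results.
    wait(futures)
    results = [f.result() for f in futures]

print(results)  # ['collected aggregates', 'collected zones']
```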
Decision engine concurrency
***************************
The concurrency in the decision engine is governed by two independent
threadpools, both instances of GreenThreadPoolExecutor_ from the futurist_
library. One of them is used automatically, and most contributors will not
interact with it while developing new features. The other, known as the
DecisionEngineThreadpool, is frequently useful while developing new features
or updating existing ones, as it allows achieving performance improvements in
network- or I/O-bound operations.
.. _GreenThreadPoolExecutor: https://docs.openstack.org/futurist/latest/reference/index.html#executors
AuditEndpoint
#############
The first threadpool is used to allow multiple audits to run in parallel.
In practice, however, only one audit runs at a time. This is because the data
model used by audits is a singleton: to prevent audits from destroying each
other's data model, one audit must wait for another to complete before being
allowed to access it. A performance improvement could be achieved by being
more intelligent in the use, caching and construction of these data models.
DecisionEngineThreadPool
########################
The second threadpool is used for generic tasks; networking and I/O bound
operations typically benefit the most from it. Upon execution of an audit,
this threadpool can be utilized to retrieve information from the Nova compute
service, for instance. This second threadpool is a singleton shared amongst
concurrently running audits; as a result, its number of workers is static and
independent of the number of workers in the first threadpool. The use of the
:class:`~.DecisionEngineThreadpool` while building the Nova compute data model
is demonstrated below to show how it can be used effectively.
In the following example, a reference to the
:class:`~.DecisionEngineThreadpool` is stored in ``self.executor``. Two tasks
are submitted: one for the function ``self._collect_aggregates`` and the other
for ``self._collect_zones``. In both ``self.executor.submit`` calls, all
subsequent arguments are passed to the function being submitted as a task,
following the common ``(fn, *args, **kwargs)`` signature. One of the original
signatures is ``def _collect_aggregates(host_aggregates, compute_nodes)``,
for example.
.. code-block:: python
zone_aggregate_futures = {
self.executor.submit(
self._collect_aggregates, host_aggregates, compute_nodes),
self.executor.submit(
self._collect_zones, availability_zones, compute_nodes)
}
waiters.wait_for_all(zone_aggregate_futures)
The last statement of the example above waits for all futures to complete.
Similarly, ``waiters.wait_for_any`` waits for any future of the specified
collection to complete. To simplify the usage of ``wait_for_any``, the
:class:`~.DecisionEngineThreadpool` defines a ``do_while_futures`` method.
This method iterates in a do-while loop over a collection of futures until
all of them have completed. The advantage of ``do_while_futures`` is that it
allows a method to be called as soon as a future finishes. The arguments
for this callback method can be supplied when calling ``do_while_futures``;
however, the first argument to the callback is always the future itself. If
the collection of futures can safely be modified, ``do_while_futures_modify``
can be used instead and should have slightly better performance. The following
example shows how ``do_while_futures`` is used in the decision engine.
.. code-block:: python
# For every compute node in compute_nodes, submit a task to gather that node's information.
# A list comprehension stores the futures of all submitted tasks in node_futures.
node_futures = [self.executor.submit(
self.nova_helper.get_compute_node_by_name,
node, servers=True, detailed=True)
for node in compute_nodes]
LOG.debug("submitted {0} jobs".format(len(compute_nodes)))
future_instances = []
# do_while_futures_modify iterates over node_futures and, upon completion of a
# future, calls self._compute_node_future with the future and future_instances.
self.executor.do_while_futures_modify(
node_futures, self._compute_node_future, future_instances)
# Wait for all instance jobs to finish
waiters.wait_for_all(future_instances)
Finally, let's demonstrate how powerful ``do_while_futures`` can be by showing
what the ``_compute_node_future`` callback does. First, it retrieves the
result from the future and adds the compute node to the data model. Afterwards,
it checks whether the compute node has any associated instances and, if so,
submits an additional task to the :class:`~.DecisionEngineThreadpool`. The
future is appended to ``future_instances`` so that ``waiters.wait_for_all``
can be called on this list. This is important, as otherwise the building of
the data model might return before all instance tasks have finished.
.. code-block:: python
# Get the result from the future.
node_info = future.result()[0]
# Filter out baremetal nodes.
if node_info.hypervisor_type == 'ironic':
LOG.debug("filtering out baremetal node: %s", node_info)
return
# Add the compute node to the data model.
self.add_compute_node(node_info)
# Get the instances from the compute node.
instances = getattr(node_info, "servers", None)
# Do not submit job if there are no instances on compute node.
if instances is None:
LOG.info("No instances on compute_node: {0}".format(node_info))
return
# Submit a job to retrieve detailed information about the instances.
future_instances.append(
self.executor.submit(
self.add_instance_node, node_info, instances)
)
Without ``do_while_futures``, an additional ``waiters.wait_for_all`` would be
required between the compute node tasks and the instance tasks. This would
cause the progress of the decision engine to stall as fewer and fewer tasks
remained active before the instance tasks could be submitted. This
demonstrates how ``do_while_futures`` can be used to achieve more constant
utilization of the underlying hardware.
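The process-each-future-as-it-completes pattern described here can be approximated with the standard library's ``as_completed``, as in the sketch below. The task functions and names are illustrative stand-ins, not Watcher's actual API.

```python
# Approximation of the do_while_futures pattern with the standard
# library: handle each future as soon as it completes and, from the
# callback position, submit follow-up tasks immediately. Names are
# illustrative stand-ins, not Watcher's actual helpers.
from concurrent.futures import ThreadPoolExecutor, as_completed, wait

def fetch_node(node):
    # Stand-in for retrieving a compute node and its server list.
    return {"name": node, "instances": ["i1"] if node != "empty" else []}

def fetch_instances(node_info):
    # Stand-in for retrieving detailed instance information.
    return ["%s/%s" % (node_info["name"], i) for i in node_info["instances"]]

executor = ThreadPoolExecutor(max_workers=4)
node_futures = [executor.submit(fetch_node, n)
                for n in ("node1", "node2", "empty")]

instance_futures = []
# Instead of waiting for all node tasks first, submit instance tasks as
# each node future finishes, keeping the pool busy throughout.
for future in as_completed(node_futures):
    node_info = future.result()
    if node_info["instances"]:
        instance_futures.append(executor.submit(fetch_instances, node_info))

wait(instance_futures)
instances = sorted(sum((f.result() for f in instance_futures), []))
executor.shutdown()
print(instances)  # ['node1/i1', 'node2/i1']
```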
Applier concurrency
*******************
The applier does not use the futurist_ GreenThreadPoolExecutor_ directly but
instead uses taskflow_, which still utilizes a green threadpool underneath.
This threadpool is initialized in the workflow engine called
:class:`~.DefaultWorkFlowEngine`. Currently Watcher ships one workflow engine,
but the base class allows contributors to develop other workflow engines as
well. In taskflow, tasks are created using different types of flows, such as
linear, unordered or graph flows. The linear and graph flows allow for strong
ordering between individual tasks, and it is for this reason that the workflow
engine utilizes a graph flow. The creation of tasks, linking them into a
graph-like structure and submitting them is shown below.
.. code-block:: python
self.execution_rule = self.get_execution_rule(actions)
flow = gf.Flow("watcher_flow")
actions_uuid = {}
for a in actions:
task = TaskFlowActionContainer(a, self)
flow.add(task)
actions_uuid[a.uuid] = task
for a in actions:
for parent_id in a.parents:
flow.link(actions_uuid[parent_id], actions_uuid[a.uuid],
decider=self.decider)
e = engines.load(
flow, executor='greenthreaded', engine='parallel',
max_workers=self.config.max_workers)
e.run()
return flow
In the applier, tasks are contained in a :class:`~.TaskFlowActionContainer`,
which allows them to trigger events in the workflow engine. This way the
workflow engine can halt or take other actions while the action plan is being
executed, based on the success or failure of individual actions. The base
workflow engine simply uses these notifications to store the result of
individual actions in the database. Additionally, since taskflow uses a graph
flow, if any task fails, none of its children will be executed, while
``do_revert`` will be triggered for all of its parents.
.. code-block:: python
class TaskFlowActionContainer(...):
...
def do_execute(self, *args, **kwargs):
...
result = self.action.execute()
if result is True:
return self.engine.notify(self._db_action,
objects.action.State.SUCCEEDED)
else:
self.engine.notify(self._db_action,
objects.action.State.FAILED)
class BaseWorkFlowEngine(...):
...
def notify(self, action, state):
db_action = objects.Action.get_by_uuid(self.context, action.uuid,
eager=True)
db_action.state = state
db_action.save()
return db_action

View File

@@ -1,71 +1,111 @@
.. ============================
Except where otherwise noted, this document is licensed under Creative So You Want to Contribute...
Commons Attribution 3.0 License. You can view the license at: ============================
https://creativecommons.org/licenses/by/3.0/ For general information on contributing to OpenStack, please check out the
`contributor guide <https://docs.openstack.org/contributors/>`_ to get started.
It covers all the basics that are common to all OpenStack projects:
the accounts you need, the basics of interacting with our Gerrit review system,
how we communicate as a community, etc.
.. _contributing: Below will cover the more project specific information you need to get started
with Watcher.
======================= Communication
Contributing to Watcher ~~~~~~~~~~~~~~
======================= .. This would be a good place to put the channel you chat in as a project; when/
where your meeting is, the tags you prepend to your ML threads, etc.
If you're interested in contributing to the Watcher project,
the following will help get you started.
Contributor License Agreement
-----------------------------
.. index::
single: license; agreement
In order to contribute to the Watcher project, you need to have
signed OpenStack's contributor's agreement.
.. seealso::
* https://docs.openstack.org/infra/manual/developers.html
* https://wiki.openstack.org/CLA
LaunchPad Project
-----------------
Most of the tools used for OpenStack depend on a launchpad.net ID for
authentication. After signing up for a launchpad account, join the
"openstack" team to have access to the mailing list and receive
notifications of important events.
.. seealso::
* https://launchpad.net
* https://launchpad.net/watcher
* https://launchpad.net/openstack
Project Hosting Details
-----------------------
Bug tracker
https://launchpad.net/watcher
Mailing list (prefix subjects with ``[watcher]`` for faster responses)
http://lists.openstack.org/pipermail/openstack-discuss/
Wiki
https://wiki.openstack.org/Watcher
Code Hosting
https://opendev.org/openstack/watcher
Code Review
https://review.opendev.org/#/q/status:open+project:openstack/watcher,n,z
IRC Channel IRC Channel
``#openstack-watcher`` (changelog_) ``#openstack-watcher`` (changelog_)
Mailing list(prefix subjects with ``[watcher]``)
http://lists.openstack.org/pipermail/openstack-discuss/
Weekly Meetings Weekly Meetings
Bi-weekly, on Wednesdays at 08:00 UTC on odd weeks in the Bi-weekly, on Wednesdays at 08:00 UTC on odd weeks in the
``#openstack-meeting-alt`` IRC channel (`meetings logs`_) ``#openstack-meeting-alt`` IRC channel (`meetings logs`_)
Meeting Agenda
https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda
.. _changelog: http://eavesdrop.openstack.org/irclogs/%23openstack-watcher/ .. _changelog: http://eavesdrop.openstack.org/irclogs/%23openstack-watcher/
.. _meetings logs: http://eavesdrop.openstack.org/meetings/watcher/ .. _meetings logs: http://eavesdrop.openstack.org/meetings/watcher/
Contacting the Core Team
~~~~~~~~~~~~~~~~~~~~~~~~~
.. This section should list the core team, their irc nicks, emails, timezones etc.
If all this info is maintained elsewhere (i.e. a wiki), you can link to that
instead of enumerating everyone here.
+--------------------+---------------+------------------------------------+
| Name | IRC | Email |
+====================+===============+====================================+
| `Li Canwei`_ | licanwei | li.canwei2@zte.com.cn |
+--------------------+---------------+------------------------------------+
| `chen ke`_ | chenke | chen.ke14@zte.com.cn |
+--------------------+---------------+------------------------------------+
| `Corne Lukken`_ | dantalion | info@dantalion.nl |
+--------------------+---------------+------------------------------------+
| `su zhengwei`_ | suzhengwei | sugar-2008@163.com |
+--------------------+---------------+------------------------------------+
| `Yumeng Bao`_ | Yumeng | yumeng_bao@yahoo.com |
+--------------------+---------------+------------------------------------+
.. _Corne Lukken: https://launchpad.net/~dantalion
.. _Li Canwei: https://launchpad.net/~li-canwei2
.. _su zhengwei: https://launchpad.net/~sue.sam
.. _Yumeng Bao: https://launchpad.net/~yumeng-bao
.. _chen ke: https://launchpad.net/~chenker
New Feature Planning
~~~~~~~~~~~~~~~~~~~~
.. This section is for talking about the process to get a new feature in. Some
projects use blueprints, some want specs, some want both! Some projects
stick to a strict schedule when selecting what new features will be reviewed
for a release.
New features are discussed via IRC or the mailing list (with the [watcher]
prefix). The Watcher team uses blueprints in `Launchpad`_ to manage new
features.
.. _Launchpad: https://launchpad.net/watcher
Task Tracking
~~~~~~~~~~~~~~
.. This section is about where you track tasks- launchpad? storyboard?
is there more than one launchpad project? what's the name of the project
group in storyboard?
We track our tasks in Launchpad.
If you're looking for a smaller, easier work item to pick up and get started
on, search for the 'low-hanging-fruit' tag.
.. NOTE: If your tag is not 'low-hanging-fruit' please change the text above.
Reporting a Bug
~~~~~~~~~~~~~~~
.. Pretty self explanatory section, link directly to where people should report bugs for
your project.
You found an issue and want to make sure we are aware of it? You can do so
`HERE`_.
.. _HERE: https://bugs.launchpad.net/watcher
Getting Your Patch Merged
~~~~~~~~~~~~~~~~~~~~~~~~~
.. This section should have info about what it takes to get something merged.
Do you require one or two +2's before +W? Do some of your repos require
unit test changes with all patches? etc.
Due to the small number of core reviewers of the Watcher project,
we only need one +2 before +W (merge). All patches except documentation
or typo fixes must have unit tests.
Project Team Lead Duties
------------------------
.. this section is where you can put PTL specific duties not already listed in
the common PTL guide (linked below) or if you already have them written
up elsewhere, you can link to that doc here.
All common PTL duties are enumerated here in the `PTL guide <https://docs.openstack.org/project-team-guide/ptl.html>`_.

View File

@@ -1,8 +1,12 @@
.. toctree:: ==================
:maxdepth: 1 Contribution Guide
==================
.. toctree::
:maxdepth: 2
contributing
environment environment
devstack devstack
notifications
testing testing
rally_link rally_link

View File

@@ -1,3 +1,7 @@
============
Plugin Guide
============
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1

View File

@@ -4,9 +4,9 @@
https://creativecommons.org/licenses/by/3.0/ https://creativecommons.org/licenses/by/3.0/
======= =================
Testing Developer Testing
======= =================
.. _unit_tests: .. _unit_tests:
@@ -15,7 +15,7 @@ Unit tests
All unit tests should be run using `tox`_. Before running the unit tests, you All unit tests should be run using `tox`_. Before running the unit tests, you
should download the latest `watcher`_ from the github. To run the same unit should download the latest `watcher`_ from the github. To run the same unit
tests that are executing onto `Gerrit`_ which includes ``py35``, ``py27`` and tests that are executing onto `Gerrit`_ which includes ``py36``, ``py37`` and
``pep8``, you can issue the following command:: ``pep8``, you can issue the following command::
$ git clone https://opendev.org/openstack/watcher $ git clone https://opendev.org/openstack/watcher
@@ -26,8 +26,8 @@ tests that are executing onto `Gerrit`_ which includes ``py35``, ``py27`` and
If you only want to run one of the aforementioned, you can then issue one of If you only want to run one of the aforementioned, you can then issue one of
the following:: the following::
$ tox -e py35 $ tox -e py36
$ tox -e py27 $ tox -e py37
$ tox -e pep8 $ tox -e pep8
.. _tox: https://tox.readthedocs.org/ .. _tox: https://tox.readthedocs.org/
@@ -38,7 +38,7 @@ If you only want to run specific unit test code and don't like to waste time
waiting for all unit tests to execute, you can add parameters ``--`` followed waiting for all unit tests to execute, you can add parameters ``--`` followed
by a regex string:: by a regex string::
$ tox -e py27 -- watcher.tests.api $ tox -e py37 -- watcher.tests.api
.. _tempest_tests: .. _tempest_tests:

View File

@@ -32,91 +32,21 @@ specific prior release.
.. _python-watcherclient: https://opendev.org/openstack/python-watcherclient/ .. _python-watcherclient: https://opendev.org/openstack/python-watcherclient/
.. _watcher-dashboard: https://opendev.org/openstack/watcher-dashboard/ .. _watcher-dashboard: https://opendev.org/openstack/watcher-dashboard/
Developer Guide
===============
Introduction
------------
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 2
glossary
architecture architecture
contributor/contributing
Getting Started
---------------
.. toctree::
:maxdepth: 1
contributor/index contributor/index
Installation
============
.. toctree::
:maxdepth: 2
install/index install/index
Admin Guide
===========
.. toctree::
:maxdepth: 2
admin/index admin/index
User Guide
==========
.. toctree::
:maxdepth: 2
user/index user/index
configuration/index
API References contributor/plugin/index
============== man/index
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
API Reference <https://docs.openstack.org/api-ref/resource-optimization/> API Reference <https://docs.openstack.org/api-ref/resource-optimization/>
Watcher API Microversion History </contributor/api_microversion_history> Watcher API Microversion History </contributor/api_microversion_history>
glossary
Plugins
-------
.. toctree::
:maxdepth: 1
contributor/plugin/index
Watcher Configuration Options
=============================
.. toctree::
:maxdepth: 2
configuration/index
Watcher Manual Pages
====================
.. toctree::
:glob:
:maxdepth: 1
man/index
.. only:: html
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@@ -1,6 +1,6 @@
=================================== =============
Infrastructure Optimization service Install Guide
=================================== =============
.. toctree::
:maxdepth: 2


@@ -1,3 +1,7 @@
====================
Watcher Manual Pages
====================
.. toctree::
:glob:
:maxdepth: 1


@@ -0,0 +1,195 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
======================
Audit using Aodh alarm
======================
An audit with EVENT type can be triggered by a special alarm. This guide walks
you through the steps to build an event-driven optimization solution by
integrating Watcher with Ceilometer/Aodh.
Step 1: Create an audit with EVENT type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first step is to create an audit with EVENT type.
You can first create an audit template:
.. code-block:: bash
$ openstack optimize audittemplate create your_template_name <your_goal> \
--strategy <your_strategy>
or create an audit directly with a specific goal and strategy:
.. code-block:: bash
$ openstack optimize audit create --goal <your_goal> \
--strategy <your_strategy> --audit_type EVENT
Here is an example of creating an audit with the dummy strategy:
.. code-block:: bash
$ openstack optimize audit create --goal dummy \
--strategy dummy --audit_type EVENT
+---------------+--------------------------------------+
| Field | Value |
+---------------+--------------------------------------+
| UUID | a3326a6a-c18e-4e8e-adba-d0c61ad404c5 |
| Name | dummy-2020-01-14T03:21:19.168467 |
| Created At | 2020-01-14T03:21:19.200279+00:00 |
| Updated At | None |
| Deleted At | None |
| State | PENDING |
| Audit Type | EVENT |
| Parameters | {u'para2': u'hello', u'para1': 3.2} |
| Interval | None |
| Goal | dummy |
| Strategy | dummy |
| Audit Scope | [] |
| Auto Trigger | False |
| Next Run Time | None |
| Hostname | None |
| Start Time | None |
| End Time | None |
| Force | False |
+---------------+--------------------------------------+
We need to build the Aodh alarm-action URL using the Watcher webhook API.
For convenience, we export the URL into environment variables:
.. code-block:: bash
$ export AUDIT_UUID=a3326a6a-c18e-4e8e-adba-d0c61ad404c5
$ export ALARM_URL="trust+http://localhost/infra-optim/v1/webhooks/$AUDIT_UUID"
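For illustration, the alarm-action URL exported above is just the Watcher webhook endpoint for the audit, prefixed with Aodh's ``trust+`` scheme. A minimal Python sketch (the host and path prefix are assumptions; substitute your actual Watcher API endpoint):

```python
# Compose an Aodh alarm-action URL for a Watcher EVENT audit.
# The "trust+" prefix tells Aodh to authenticate the webhook call
# via a Keystone trust; the endpoint below is illustrative only.
def build_alarm_url(audit_uuid, endpoint="http://localhost/infra-optim"):
    return "trust+{0}/v1/webhooks/{1}".format(endpoint, audit_uuid)

alarm_url = build_alarm_url("a3326a6a-c18e-4e8e-adba-d0c61ad404c5")
print(alarm_url)
```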
Step 2: Create Aodh Alarm
~~~~~~~~~~~~~~~~~~~~~~~~~
Once the audit is created, we can continue by creating an Aodh alarm and
setting its alarm action to the Watcher webhook API. The alarm type can be
event (e.g. ``compute.instance.create.end``) or gnocchi_resources_threshold
(e.g. ``cpu_util``); for more information, refer to alarm-creation_.
For example:
.. code-block:: bash
$ openstack alarm create \
--type event --name instance_create \
--event-type "compute.instance.create.end" \
--enable True --repeat-actions False \
--alarm-action $ALARM_URL
+---------------------------+------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------------------------------------------------+
| alarm_actions | [u'trust+http://localhost/infra-optim/v1/webhooks/a3326a6a-c18e-4e8e-adba-d0c61ad404c5'] |
| alarm_id | b9e381fc-8e3e-4943-82ee-647e7a2ef644 |
| description | Alarm when compute.instance.create.end event occurred. |
| enabled | True |
| event_type | compute.instance.create.end |
| insufficient_data_actions | [] |
| name | instance_create |
| ok_actions | [] |
| project_id | 728d66e18c914af1a41e2a585cf766af |
| query | |
| repeat_actions | False |
| severity | low |
| state | insufficient data |
| state_reason | Not evaluated yet |
| state_timestamp | 2020-01-14T03:56:26.894416 |
| time_constraints | [] |
| timestamp | 2020-01-14T03:56:26.894416 |
| type | event |
| user_id | 88c40156af7445cc80580a1e7e3ba308 |
+---------------------------+------------------------------------------------------------------------------------------+
.. _alarm-creation: https://docs.openstack.org/aodh/latest/admin/telemetry-alarms.html#alarm-creation
Step 3: Trigger the alarm
~~~~~~~~~~~~~~~~~~~~~~~~~
In this example, you can create a new instance to trigger the alarm.
The alarm state will transition from ``insufficient data`` to ``alarm``.
.. code-block:: bash
$ openstack alarm show b9e381fc-8e3e-4943-82ee-647e7a2ef644
+---------------------------+-------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+-------------------------------------------------------------------------------------------------------------------+
| alarm_actions | [u'trust+http://localhost/infra-optim/v1/webhooks/a3326a6a-c18e-4e8e-adba-d0c61ad404c5'] |
| alarm_id | b9e381fc-8e3e-4943-82ee-647e7a2ef644 |
| description | Alarm when compute.instance.create.end event occurred. |
| enabled | True |
| event_type | compute.instance.create.end |
| insufficient_data_actions | [] |
| name | instance_create |
| ok_actions | [] |
| project_id | 728d66e18c914af1a41e2a585cf766af |
| query | |
| repeat_actions | False |
| severity | low |
| state | alarm |
| state_reason | Event <id=67dd0afa-2082-45a4-8825-9573b2cc60e5,event_type=compute.instance.create.end> hits the query <query=[]>. |
| state_timestamp | 2020-01-14T03:56:26.894416 |
| time_constraints | [] |
| timestamp | 2020-01-14T06:17:40.350649 |
| type | event |
| user_id | 88c40156af7445cc80580a1e7e3ba308 |
+---------------------------+-------------------------------------------------------------------------------------------------------------------+
Step 4: Verify the audit
~~~~~~~~~~~~~~~~~~~~~~~~
You can verify that the audit state is now ``SUCCEEDED``:
.. code-block:: bash
$ openstack optimize audit show a3326a6a-c18e-4e8e-adba-d0c61ad404c5
+---------------+--------------------------------------+
| Field | Value |
+---------------+--------------------------------------+
| UUID | a3326a6a-c18e-4e8e-adba-d0c61ad404c5 |
| Name | dummy-2020-01-14T03:21:19.168467 |
| Created At | 2020-01-14T03:21:19+00:00 |
| Updated At | 2020-01-14T06:26:40+00:00 |
| Deleted At | None |
| State | SUCCEEDED |
| Audit Type | EVENT |
| Parameters | {u'para2': u'hello', u'para1': 3.2} |
| Interval | None |
| Goal | dummy |
| Strategy | dummy |
| Audit Scope | [] |
| Auto Trigger | False |
| Next Run Time | None |
| Hostname | ubuntudbs |
| Start Time | None |
| End Time | None |
| Force | False |
+---------------+--------------------------------------+
and you can use the following command to check whether the action plan
was created:
.. code-block:: bash
$ openstack optimize actionplan list --audit a3326a6a-c18e-4e8e-adba-d0c61ad404c5
+--------------------------------------+--------------------------------------+-------------+------------+-----------------+
| UUID | Audit | State | Updated At | Global efficacy |
+--------------------------------------+--------------------------------------+-------------+------------+-----------------+
| 673b3fcb-8c16-4a41-9ee3-2956d9f6ca9e | a3326a6a-c18e-4e8e-adba-d0c61ad404c5 | RECOMMENDED | None | |
+--------------------------------------+--------------------------------------+-------------+------------+-----------------+


@@ -1,4 +1,10 @@
==========
User Guide
==========
.. toctree::
:maxdepth: 2
ways-to-install
user-guide
event_type_audit


@@ -4,8 +4,6 @@
https://creativecommons.org/licenses/by/3.0/
.. _user-guide:
==================
Watcher User Guide
==================
@@ -60,8 +58,8 @@ plugin installation guide`_.
.. _`OpenStack CLI`: https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html
.. _`Watcher CLI`: https://docs.openstack.org/python-watcherclient/latest/cli/index.html
Seeing what the Watcher CLI can do ? Watcher CLI Command
------------------------------------ -------------------
We can see all of the commands available with Watcher CLI by running the
watcher binary without options.
@@ -69,8 +67,8 @@ watcher binary without options.
$ openstack help optimize
How do I run an audit of my cluster ? Running an audit of the cluster
------------------------------------- -------------------------------
First, you need to find the :ref:`goal <goal_definition>` you want to achieve:


@@ -5,7 +5,6 @@ appdirs==1.4.3
APScheduler==3.5.1 APScheduler==3.5.1
asn1crypto==0.24.0 asn1crypto==0.24.0
automaton==1.14.0 automaton==1.14.0
Babel==2.5.3
beautifulsoup4==4.6.0 beautifulsoup4==4.6.0
cachetools==2.0.1 cachetools==2.0.1
certifi==2018.1.18 certifi==2018.1.18
@@ -30,15 +29,12 @@ eventlet==0.20.0
extras==1.0.0 extras==1.0.0
fasteners==0.14.1 fasteners==0.14.1
fixtures==3.0.0 fixtures==3.0.0
flake8==2.5.5
freezegun==0.3.10 freezegun==0.3.10
future==0.16.0
futurist==1.8.0 futurist==1.8.0
gitdb2==2.0.3 gitdb2==2.0.3
GitPython==2.1.8 GitPython==2.1.8
gnocchiclient==7.0.1 gnocchiclient==7.0.1
greenlet==0.4.13 greenlet==0.4.13
hacking==0.12.0
idna==2.6 idna==2.6
imagesize==1.0.0 imagesize==1.0.0
iso8601==0.1.12 iso8601==0.1.12
@@ -46,7 +42,7 @@ Jinja2==2.10
jmespath==0.9.3 jmespath==0.9.3
jsonpatch==1.21 jsonpatch==1.21
jsonpointer==2.0 jsonpointer==2.0
jsonschema==2.6.0 jsonschema==3.2.0
keystoneauth1==3.4.0 keystoneauth1==3.4.0
keystonemiddleware==4.21.0 keystonemiddleware==4.21.0
kombu==4.1.0 kombu==4.1.0
@@ -57,15 +53,12 @@ Mako==1.0.7
MarkupSafe==1.0 MarkupSafe==1.0
mccabe==0.2.1 mccabe==0.2.1
microversion_parse==0.2.1 microversion_parse==0.2.1
mock==2.0.0
monotonic==1.4 monotonic==1.4
mox3==0.25.0
msgpack==0.5.6 msgpack==0.5.6
munch==2.2.0 munch==2.2.0
netaddr==0.7.19 netaddr==0.7.19
netifaces==0.10.6 netifaces==0.10.6
networkx==1.11 networkx==2.2
openstackdocstheme==1.20.0
openstacksdk==0.12.0 openstacksdk==0.12.0
os-api-ref===1.4.0 os-api-ref===1.4.0
os-client-config==1.29.0 os-client-config==1.29.0
@@ -95,14 +88,12 @@ Paste==2.0.3
PasteDeploy==1.5.2 PasteDeploy==1.5.2
pbr==3.1.1 pbr==3.1.1
pecan==1.3.2 pecan==1.3.2
pep8==1.5.7
pika==0.10.0 pika==0.10.0
pika-pool==0.1.3 pika-pool==0.1.3
prettytable==0.7.2 prettytable==0.7.2
psutil==5.4.3 psutil==5.4.3
pycadf==2.7.0 pycadf==2.7.0
pycparser==2.18 pycparser==2.18
pyflakes==0.8.1
Pygments==2.2.0 Pygments==2.2.0
pyinotify==0.9.6 pyinotify==0.9.6
pyOpenSSL==17.5.0 pyOpenSSL==17.5.0
@@ -123,7 +114,6 @@ python-openstackclient==3.14.0
python-subunit==1.2.0 python-subunit==1.2.0
pytz==2018.3 pytz==2018.3
PyYAML==3.12 PyYAML==3.12
reno==2.7.0
repoze.lru==0.7 repoze.lru==0.7
requests==2.18.4 requests==2.18.4
requestsexceptions==1.4.0 requestsexceptions==1.4.0
@@ -132,20 +122,15 @@ rfc3986==1.1.0
Routes==2.4.1 Routes==2.4.1
simplegeneric==0.8.1 simplegeneric==0.8.1
simplejson==3.13.2 simplejson==3.13.2
six==1.11.0
smmap2==2.0.3 smmap2==2.0.3
snowballstemmer==1.2.1 snowballstemmer==1.2.1
Sphinx==1.6.5
sphinxcontrib-httpdomain==1.6.1
sphinxcontrib-pecanwsme==0.8.0
sphinxcontrib-websupport==1.0.1
SQLAlchemy==1.2.5 SQLAlchemy==1.2.5
sqlalchemy-migrate==0.11.0 sqlalchemy-migrate==0.11.0
sqlparse==0.2.4 sqlparse==0.2.4
statsd==3.2.2 statsd==3.2.2
stestr==2.0.0 stestr==2.0.0
stevedore==1.28.0 stevedore==1.28.0
taskflow==3.1.0 taskflow==3.7.1
Tempita==0.5.2 Tempita==0.5.2
tenacity==4.9.0 tenacity==4.9.0
testresources==2.0.1 testresources==2.0.1


@@ -1,15 +0,0 @@
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs


@@ -1,60 +0,0 @@
- hosts: all
name: legacy-grenade-dsvm-watcher
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
export PYTHONUNBUFFERED=true
export PROJECTS="openstack/grenade $PROJECTS"
export PROJECTS="openstack/watcher $PROJECTS"
export PROJECTS="openstack/watcher-tempest-plugin $PROJECTS"
export PROJECTS="openstack/python-watcherclient $PROJECTS"
export DEVSTACK_PROJECT_FROM_GIT="python-watcherclient $DEVSTACK_PROJECT_FROM_GIT"
export GRENADE_PLUGINRC="enable_grenade_plugin watcher https://opendev.org/openstack/watcher"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"export TEMPEST_PLUGINS='/opt/stack/new/watcher-tempest-plugin'"
export DEVSTACK_GATE_TEMPEST_NOTESTS=1
export DEVSTACK_GATE_GRENADE=pullup
export BRANCH_OVERRIDE=default
if [ "$BRANCH_OVERRIDE" != "default" ] ; then
export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
fi
# Add configuration values for enabling security features in local.conf
function pre_test_hook {
if [ -f /opt/stack/old/watcher-tempest-plugin/tools/pre_test_hook.sh ] ; then
. /opt/stack/old/watcher-tempest-plugin/tools/pre_test_hook.sh
fi
}
export -f pre_test_hook
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'


@@ -0,0 +1,6 @@
---
upgrade:
- |
Python 2.7 support has been dropped. The last release of Watcher to
support Python 2.7 is OpenStack Train. The minimum version of Python now
supported by Watcher is Python 3.6.


@@ -0,0 +1,8 @@
---
features:
- |
Added a new webhook API and a new audit type EVENT; the microversion is 1.4.
Watcher users can now create an audit with EVENT type, and the audit will be
triggered via the webhook API.
The user guide is available online:
https://docs.openstack.org/watcher/latest/user/event_type_audit.html


@@ -0,0 +1,20 @@
---
prelude: >
Many operations in the decision engine will block on I/O. Such I/O
operations can stall the execution of a sequential application
significantly. To reduce this potential bottleneck, the general
purpose decision engine threadpool is introduced.
features:
- |
A new threadpool for the decision engine that contributors can use to
improve the performance of many operations, primarily I/O-bound ones.
The number of workers used by the decision engine threadpool can be
configured to scale according to the available infrastructure using
the `watcher_decision_engine.max_general_workers` config option.
Documentation for contributors to effectively use this threadpool is
available online:
https://docs.openstack.org/watcher/latest/contributor/concurrency.html
- |
The building of the compute (Nova) data model will be done using the
decision engine threadpool, thereby significantly reducing the total
time required to build it.
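The idea behind this note can be sketched with the standard library (Watcher's own pool is built on futurist and sized by ``max_general_workers``; this is only an analogy, not the actual implementation):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def collect(node):
    # Stand-in for an I/O-bound call, e.g. fetching one compute
    # node's details from the Nova API while building the model.
    time.sleep(0.01)
    return "model:" + node

# Running the calls through a shared, bounded pool overlaps the
# waiting time instead of paying it once per node sequentially.
with ThreadPoolExecutor(max_workers=4) as pool:
    models = list(pool.map(collect, ["node-1", "node-2", "node-3"]))
```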


@@ -53,7 +53,6 @@ source_suffix = '.rst'
master_doc = 'index' master_doc = 'index'
# General information about the project. # General information about the project.
project = u'watcher'
copyright = u'2016, Watcher developers' copyright = u'2016, Watcher developers'
# Release notes are version independent # Release notes are version independent
@@ -91,11 +90,15 @@ exclude_patterns = ['_build']
#show_authors = False #show_authors = False
# The name of the Pygments (syntax highlighting) style to use. # The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx' pygments_style = 'native'
# A list of ignored prefixes for module index sorting. # A list of ignored prefixes for module index sorting.
#modindex_common_prefix = [] #modindex_common_prefix = []
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/watcher'
openstackdocs_bug_project = 'watcher'
openstackdocs_bug_tag = ''
# -- Options for HTML output -------------------------------------------------- # -- Options for HTML output --------------------------------------------------


@@ -21,6 +21,8 @@ Contents:
:maxdepth: 1
unreleased
ussuri
train
stein
rocky
queens


@@ -0,0 +1,6 @@
==========================
Train Series Release Notes
==========================
.. release-notes::
:branch: stable/train


@@ -0,0 +1,6 @@
===========================
Ussuri Series Release Notes
===========================
.. release-notes::
:branch: stable/ussuri


@@ -3,10 +3,9 @@
# process, which may cause wedges in the gate later. # process, which may cause wedges in the gate later.
apscheduler>=3.5.1 # MIT License apscheduler>=3.5.1 # MIT License
enum34>=1.1.6;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
jsonpatch>=1.21 # BSD jsonpatch>=1.21 # BSD
keystoneauth1>=3.4.0 # Apache-2.0 keystoneauth1>=3.4.0 # Apache-2.0
jsonschema>=2.6.0 # MIT jsonschema>=3.2.0 # MIT
keystonemiddleware>=4.21.0 # Apache-2.0 keystonemiddleware>=4.21.0 # Apache-2.0
lxml>=4.1.1 # BSD lxml>=4.1.1 # BSD
croniter>=0.3.20 # MIT License croniter>=0.3.20 # MIT License
@@ -40,14 +39,11 @@ python-neutronclient>=6.7.0 # Apache-2.0
python-novaclient>=14.1.0 # Apache-2.0 python-novaclient>=14.1.0 # Apache-2.0
python-openstackclient>=3.14.0 # Apache-2.0 python-openstackclient>=3.14.0 # Apache-2.0
python-ironicclient>=2.5.0 # Apache-2.0 python-ironicclient>=2.5.0 # Apache-2.0
six>=1.11.0 # MIT
SQLAlchemy>=1.2.5 # MIT SQLAlchemy>=1.2.5 # MIT
stevedore>=1.28.0 # Apache-2.0 stevedore>=1.28.0 # Apache-2.0
taskflow>=3.1.0 # Apache-2.0 taskflow>=3.7.1 # Apache-2.0
WebOb>=1.8.5 # MIT WebOb>=1.8.5 # MIT
WSME>=0.9.2 # MIT WSME>=0.9.2 # MIT
# NOTE(fdegir): NetworkX 2.3 dropped support for Python 2 networkx>=2.2 # BSD
networkx>=1.11,<2.3;python_version<'3.0' # BSD
networkx>=1.11;python_version>='3.4' # BSD
microversion_parse>=0.2.1 # Apache-2.0 microversion_parse>=0.2.1 # Apache-2.0
futurist>=1.8.0 # Apache-2.0 futurist>=1.8.0 # Apache-2.0


@@ -6,6 +6,7 @@ description-file =
author = OpenStack author = OpenStack
author-email = openstack-discuss@lists.openstack.org author-email = openstack-discuss@lists.openstack.org
home-page = https://docs.openstack.org/watcher/latest/ home-page = https://docs.openstack.org/watcher/latest/
python-requires = >=3.6
classifier = classifier =
Environment :: OpenStack Environment :: OpenStack
Intended Audience :: Information Technology Intended Audience :: Information Technology
@@ -13,11 +14,12 @@ classifier =
License :: OSI Approved :: Apache Software License License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux Operating System :: POSIX :: Linux
Programming Language :: Python Programming Language :: Python
Programming Language :: Python :: 2 Programming Language :: Python :: Implementation :: CPython
Programming Language :: Python :: 2.7 Programming Language :: Python :: 3 :: Only
Programming Language :: Python :: 3 Programming Language :: Python :: 3
Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7 Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
[files] [files]
packages = packages =
@@ -25,10 +27,6 @@ packages =
data_files = data_files =
etc/ = etc/* etc/ = etc/*
[global]
setup-hooks =
pbr.hooks.setup_hook
[entry_points] [entry_points]
oslo.config.opts = oslo.config.opts =
watcher = watcher.conf.opts:list_opts watcher = watcher.conf.opts:list_opts
@@ -110,18 +108,3 @@ watcher_cluster_data_model_collectors =
compute = watcher.decision_engine.model.collector.nova:NovaClusterDataModelCollector compute = watcher.decision_engine.model.collector.nova:NovaClusterDataModelCollector
storage = watcher.decision_engine.model.collector.cinder:CinderClusterDataModelCollector storage = watcher.decision_engine.model.collector.cinder:CinderClusterDataModelCollector
baremetal = watcher.decision_engine.model.collector.ironic:BaremetalClusterDataModelCollector baremetal = watcher.decision_engine.model.collector.ironic:BaremetalClusterDataModelCollector
[compile_catalog]
directory = watcher/locale
domain = watcher
[update_catalog]
domain = watcher
output_dir = watcher/locale
input_file = watcher/locale/watcher.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext _LI _LW _LE _LC
mapping_file = babel.cfg
output_file = watcher/locale/watcher.pot


@@ -13,17 +13,8 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup( setuptools.setup(
setup_requires=['pbr>=2.0.0'], setup_requires=['pbr>=2.0.0'],
pbr=True) pbr=True)


@@ -5,8 +5,7 @@
coverage>=4.5.1 # Apache-2.0 coverage>=4.5.1 # Apache-2.0
doc8>=0.8.0 # Apache-2.0 doc8>=0.8.0 # Apache-2.0
freezegun>=0.3.10 # Apache-2.0 freezegun>=0.3.10 # Apache-2.0
hacking>=1.1.0,<1.2.0 # Apache-2.0 hacking>=3.0.1,<3.1.0 # Apache-2.0
mock>=2.0.0 # BSD
oslotest>=3.3.0 # Apache-2.0 oslotest>=3.3.0 # Apache-2.0
os-testr>=1.0.0 # Apache-2.0 os-testr>=1.0.0 # Apache-2.0
testscenarios>=0.5.0 # Apache-2.0/BSD testscenarios>=0.5.0 # Apache-2.0/BSD

tox.ini

@@ -1,9 +1,11 @@
[tox] [tox]
minversion = 2.0 minversion = 2.0
envlist = py36,py37,py27,pep8 envlist = py36,py37,pep8
skipsdist = True skipsdist = True
ignore_basepython_conflict = True
[testenv] [testenv]
basepython = python3
usedevelop = True usedevelop = True
whitelist_externals = find whitelist_externals = find
rm rm
@@ -21,14 +23,12 @@ commands =
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
[testenv:pep8] [testenv:pep8]
basepython = python3
commands = commands =
doc8 doc/source/ CONTRIBUTING.rst HACKING.rst README.rst doc8 doc/source/ CONTRIBUTING.rst HACKING.rst README.rst
flake8 flake8
bandit -r watcher -x watcher/tests/* -n5 -ll -s B320 bandit -r watcher -x watcher/tests/* -n5 -ll -s B320,B322
[testenv:venv] [testenv:venv]
basepython = python3
setenv = PYTHONHASHSEED=0 setenv = PYTHONHASHSEED=0
deps = deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
@@ -38,7 +38,6 @@ deps =
commands = {posargs} commands = {posargs}
[testenv:cover] [testenv:cover]
basepython = python3
setenv = setenv =
PYTHON=coverage run --source watcher --parallel-mode PYTHON=coverage run --source watcher --parallel-mode
commands = commands =
@@ -49,51 +48,66 @@ commands =
coverage report coverage report
[testenv:docs] [testenv:docs]
basepython = python3
setenv = PYTHONHASHSEED=0 setenv = PYTHONHASHSEED=0
deps = -r{toxinidir}/doc/requirements.txt deps = -r{toxinidir}/doc/requirements.txt
commands = commands =
rm -fr doc/build doc/source/api/ .autogenerated rm -fr doc/build doc/source/api/ .autogenerated
sphinx-build -W -b html doc/source doc/build/html sphinx-build -W --keep-going -b html doc/source doc/build/html
[testenv:api-ref] [testenv:api-ref]
basepython = python3
deps = -r{toxinidir}/doc/requirements.txt deps = -r{toxinidir}/doc/requirements.txt
whitelist_externals = bash whitelist_externals = bash
commands = commands =
bash -c 'rm -rf api-ref/build' bash -c 'rm -rf api-ref/build'
sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html sphinx-build -W --keep-going -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html
[testenv:debug] [testenv:debug]
basepython = python3
commands = oslo_debug_helper -t watcher/tests {posargs} commands = oslo_debug_helper -t watcher/tests {posargs}
[testenv:genconfig] [testenv:genconfig]
basepython = python3
sitepackages = False sitepackages = False
commands = commands =
oslo-config-generator --config-file etc/watcher/oslo-config-generator/watcher.conf oslo-config-generator --config-file etc/watcher/oslo-config-generator/watcher.conf
[testenv:genpolicy] [testenv:genpolicy]
basepython = python3
commands = commands =
oslopolicy-sample-generator --config-file etc/watcher/oslo-policy-generator/watcher-policy-generator.conf oslopolicy-sample-generator --config-file etc/watcher/oslo-policy-generator/watcher-policy-generator.conf
[flake8] [flake8]
filename = *.py,app.wsgi filename = *.py,app.wsgi
show-source=True show-source=True
ignore= H105,E123,E226,N320,H202 # W504 line break after binary operator
ignore= H105,E123,E226,N320,H202,W504
builtins= _ builtins= _
enable-extensions = H106,H203,H904 enable-extensions = H106,H203,H904
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build,*sqlalchemy/alembic/versions/*,demo/,releasenotes exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build,*sqlalchemy/alembic/versions/*,demo/,releasenotes
[testenv:wheel] [testenv:wheel]
basepython = python3
commands = python setup.py bdist_wheel commands = python setup.py bdist_wheel
[hacking] [hacking]
import_exceptions = watcher._i18n import_exceptions = watcher._i18n
local-check-factory = watcher.hacking.checks.factory
[flake8:local-plugins]
extension =
N319 = checks:no_translate_debug_logs
N321 = checks:use_jsonutils
N322 = checks:check_assert_called_once_with
N325 = checks:check_python3_xrange
N326 = checks:check_no_basestring
N327 = checks:check_python3_no_iteritems
N328 = checks:check_asserttrue
N329 = checks:check_assertfalse
N330 = checks:check_assertempty
N331 = checks:check_assertisinstance
N332 = checks:check_assertequal_for_httpcode
N333 = checks:check_log_warn_deprecated
N340 = checks:check_oslo_i18n_wrapper
N341 = checks:check_builtins_gettext
N342 = checks:no_redundant_import_alias
N366 = checks:import_stock_mock
paths = ./watcher/hacking
[doc8] [doc8]
extension=.rst extension=.rst
@@ -101,7 +115,6 @@ extension=.rst
ignore-path=doc/source/image_src,doc/source/man,doc/source/api ignore-path=doc/source/image_src,doc/source/man,doc/source/api
[testenv:pdf-docs] [testenv:pdf-docs]
basepython = python3
envdir = {toxworkdir}/docs envdir = {toxworkdir}/docs
deps = {[testenv:docs]deps} deps = {[testenv:docs]deps}
whitelist_externals = whitelist_externals =
@@ -109,21 +122,18 @@ whitelist_externals =
make make
commands = commands =
rm -rf doc/build/pdf rm -rf doc/build/pdf
sphinx-build -W -b latex doc/source doc/build/pdf sphinx-build -W --keep-going -b latex doc/source doc/build/pdf
make -C doc/build/pdf make -C doc/build/pdf
[testenv:releasenotes] [testenv:releasenotes]
basepython = python3
deps = -r{toxinidir}/doc/requirements.txt deps = -r{toxinidir}/doc/requirements.txt
commands = sphinx-build -a -W -E -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html commands = sphinx-build -a -W -E -d releasenotes/build/doctrees --keep-going -b html releasenotes/source releasenotes/build/html
[testenv:bandit] [testenv:bandit]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt deps = -r{toxinidir}/test-requirements.txt
commands = bandit -r watcher -x watcher/tests/* -n5 -ll -s B320 commands = bandit -r watcher -x watcher/tests/* -n5 -ll -s B320
[testenv:lower-constraints] [testenv:lower-constraints]
basepython = python3
deps = deps =
-c{toxinidir}/lower-constraints.txt -c{toxinidir}/lower-constraints.txt
-r{toxinidir}/test-requirements.txt -r{toxinidir}/test-requirements.txt


@@ -37,5 +37,5 @@ def install(app, conf, public_routes):
if not CONF.get('enable_authentication'): if not CONF.get('enable_authentication'):
return app return app
return auth_token.AuthTokenMiddleware(app, return auth_token.AuthTokenMiddleware(app,
conf=dict(conf), conf=dict(conf.keystone_authtoken),
public_api_routes=public_routes) public_api_routes=public_routes)


@@ -13,8 +13,6 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
from __future__ import unicode_literals
from oslo_config import cfg from oslo_config import cfg
from watcher.api import hooks from watcher.api import hooks
@@ -27,6 +25,10 @@ server = {
# Pecan Application Configurations # Pecan Application Configurations
# See https://pecan.readthedocs.org/en/latest/configuration.html#application-configuration # noqa # See https://pecan.readthedocs.org/en/latest/configuration.html#application-configuration # noqa
acl_public_routes = ['/']
if not cfg.CONF.api.get("enable_webhooks_auth"):
acl_public_routes.append('/v1/webhooks/.*')
app = { app = {
'root': 'watcher.api.controllers.root.RootController', 'root': 'watcher.api.controllers.root.RootController',
'modules': ['watcher.api'], 'modules': ['watcher.api'],
@@ -36,9 +38,7 @@ app = {
], ],
'static_root': '%(confdir)s/public', 'static_root': '%(confdir)s/public',
'enable_acl': True, 'enable_acl': True,
'acl_public_routes': [ 'acl_public_routes': acl_public_routes,
'/',
],
} }
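To see why appending ``'/v1/webhooks/.*'`` makes webhook calls reachable without a token, note that ``acl_public_routes`` entries are regular-expression patterns matched against the request path. A standalone sketch (stdlib ``re`` only; the real matching happens in the auth middleware, and the config flag below is an illustrative stand-in):

```python
import re

acl_public_routes = ['/']
enable_webhooks_auth = False  # stand-in for the enable_webhooks_auth option

if not enable_webhooks_auth:
    acl_public_routes.append('/v1/webhooks/.*')

def is_public(path):
    # A request is exempt from authentication if any pattern
    # matches its full path.
    return any(re.fullmatch(pattern, path) for pattern in acl_public_routes)

webhook_public = is_public('/v1/webhooks/a3326a6a-c18e-4e8e-adba-d0c61ad404c5')
audits_public = is_public('/v1/audits')
```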
# WSME Configurations # WSME Configurations


@@ -23,7 +23,7 @@ from watcher.api.controllers import base
def build_url(resource, resource_args, bookmark=False, base_url=None): def build_url(resource, resource_args, bookmark=False, base_url=None):
if base_url is None: if base_url is None:
base_url = pecan.request.host_url base_url = pecan.request.application_url
template = '%(url)s/%(res)s' if bookmark else '%(url)s/v1/%(res)s' template = '%(url)s/%(res)s' if bookmark else '%(url)s/v1/%(res)s'
# FIXME(lucasagomes): I'm getting a 404 when doing a GET on # FIXME(lucasagomes): I'm getting a 404 when doing a GET on
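The practical difference in this change is that ``host_url`` is only the scheme and host, while ``application_url`` also carries any path prefix the WSGI application is mounted under (e.g. behind a proxy at ``/infra-optim``, an assumed prefix here). A sketch of the template logic above:

```python
def build_url(resource, base_url, bookmark=False):
    # Same template as in the diff: bookmark links skip the /v1 segment.
    template = '%(url)s/%(res)s' if bookmark else '%(url)s/v1/%(res)s'
    return template % {'url': base_url, 'res': resource}

# With application_url, a deployment prefix survives in generated links:
link = build_url('audits', 'http://controller/infra-optim')
bookmark = build_url('audits', 'http://controller/infra-optim', bookmark=True)
```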


@@ -30,3 +30,12 @@ audits.
--- ---
Added ``force`` into create audit request. If ``force`` is true, Added ``force`` into create audit request. If ``force`` is true,
audit will be executed despite of ongoing actionplan. audit will be executed despite of ongoing actionplan.
1.3
---
Added list data model API.
1.4
---
Added the Watcher webhook API. It can be used to trigger an audit
with ``event`` type.
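Clients opt into a specific microversion per request via the ``OpenStack-API-Version`` header. A sketch of the headers needed to use the 1.4 webhook feature (``infra-optim`` is the service type string Watcher uses for this header, but verify against your deployment's API reference):

```python
# Request headers for calling the webhook API at microversion 1.4.
headers = {
    "OpenStack-API-Version": "infra-optim 1.4",
    "Content-Type": "application/json",
}
```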


@@ -59,7 +59,8 @@ class Version(base.APIBase):
version.status = status version.status = status
version.max_version = v.max_version_string() version.max_version = v.max_version_string()
version.min_version = v.min_version_string() version.min_version = v.min_version_string()
version.links = [link.Link.make_link('self', pecan.request.host_url, version.links = [link.Link.make_link('self',
pecan.request.application_url,
id, '', bookmark=True)] id, '', bookmark=True)]
return version return version


@@ -40,7 +40,9 @@ from watcher.api.controllers.v1 import goal
 from watcher.api.controllers.v1 import scoring_engine
 from watcher.api.controllers.v1 import service
 from watcher.api.controllers.v1 import strategy
+from watcher.api.controllers.v1 import utils
 from watcher.api.controllers.v1 import versions
+from watcher.api.controllers.v1 import webhooks
 def min_version():
@@ -130,6 +132,9 @@ class V1(APIBase):
     services = [link.Link]
     """Links to the services resource"""
+    webhooks = [link.Link]
+    """Links to the webhooks resource"""
     links = [link.Link]
     """Links that point to a specific URL for this version and documentation"""
@@ -137,7 +142,8 @@ class V1(APIBase):
     def convert():
         v1 = V1()
         v1.id = "v1"
-        v1.links = [link.Link.make_link('self', pecan.request.host_url,
+        base_url = pecan.request.application_url
+        v1.links = [link.Link.make_link('self', base_url,
                                         'v1', '', bookmark=True),
                     link.Link.make_link('describedby',
                                         'http://docs.openstack.org',
@@ -148,57 +154,66 @@ class V1(APIBase):
         v1.media_types = [MediaType('application/json',
                           'application/vnd.openstack.watcher.v1+json')]
         v1.audit_templates = [link.Link.make_link('self',
-                                                  pecan.request.host_url,
+                                                  base_url,
                                                   'audit_templates', ''),
                               link.Link.make_link('bookmark',
-                                                  pecan.request.host_url,
+                                                  base_url,
                                                   'audit_templates', '',
                                                   bookmark=True)
                               ]
-        v1.audits = [link.Link.make_link('self', pecan.request.host_url,
+        v1.audits = [link.Link.make_link('self', base_url,
                                          'audits', ''),
                      link.Link.make_link('bookmark',
-                                         pecan.request.host_url,
+                                         base_url,
                                          'audits', '',
                                          bookmark=True)
                      ]
-        v1.data_model = [link.Link.make_link('self', pecan.request.host_url,
-                                             'data_model', ''),
-                         link.Link.make_link('bookmark',
-                                             pecan.request.host_url,
-                                             'data_model', '',
-                                             bookmark=True)
-                         ]
-        v1.actions = [link.Link.make_link('self', pecan.request.host_url,
+        if utils.allow_list_datamodel():
+            v1.data_model = [link.Link.make_link('self', base_url,
+                                                 'data_model', ''),
+                             link.Link.make_link('bookmark',
+                                                 base_url,
+                                                 'data_model', '',
+                                                 bookmark=True)
+                             ]
+        v1.actions = [link.Link.make_link('self', base_url,
                                           'actions', ''),
                       link.Link.make_link('bookmark',
-                                          pecan.request.host_url,
+                                          base_url,
                                           'actions', '',
                                           bookmark=True)
                       ]
         v1.action_plans = [link.Link.make_link(
-            'self', pecan.request.host_url, 'action_plans', ''),
+            'self', base_url, 'action_plans', ''),
             link.Link.make_link('bookmark',
-                                pecan.request.host_url,
+                                base_url,
                                 'action_plans', '',
                                 bookmark=True)
             ]
         v1.scoring_engines = [link.Link.make_link(
-            'self', pecan.request.host_url, 'scoring_engines', ''),
+            'self', base_url, 'scoring_engines', ''),
             link.Link.make_link('bookmark',
-                                pecan.request.host_url,
+                                base_url,
                                 'scoring_engines', '',
                                 bookmark=True)
             ]
         v1.services = [link.Link.make_link(
-            'self', pecan.request.host_url, 'services', ''),
+            'self', base_url, 'services', ''),
             link.Link.make_link('bookmark',
-                                pecan.request.host_url,
+                                base_url,
                                 'services', '',
                                 bookmark=True)
             ]
+        if utils.allow_webhook_api():
+            v1.webhooks = [link.Link.make_link(
+                'self', base_url, 'webhooks', ''),
+                link.Link.make_link('bookmark',
+                                    base_url,
+                                    'webhooks', '',
+                                    bookmark=True)
+                ]
         return v1
@@ -214,6 +229,7 @@ class Controller(rest.RestController):
     services = service.ServicesController()
     strategies = strategy.StrategiesController()
     data_model = data_model.DataModelController()
+    webhooks = webhooks.WebhookController()
     @wsme_pecan.wsexpose(V1)
     def get(self):

View File

@@ -165,7 +165,7 @@ class ActionPlan(base.APIBase):
                 name=indicator.name,
                 description=indicator.description,
                 unit=indicator.unit,
-                value=indicator.value,
+                value=float(indicator.value),
             )
             efficacy_indicators.append(efficacy_indicator.as_dict())
         self._efficacy_indicators = efficacy_indicators
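The `float()` cast above matters because an efficacy-indicator value can come back from the database as `decimal.Decimal`, which the JSON layer refuses to serialize. A standalone sketch of the failure mode (plain `json`, not Watcher's WSME types):

```python
import json
from decimal import Decimal

# A Decimal value, as a database numeric column might return it.
raw_value = Decimal("87.5")

# json.dumps() rejects Decimal outright ...
try:
    json.dumps({"value": raw_value})
    serialized = True
except TypeError:
    serialized = False

# ... while the float cast used in the diff serializes cleanly.
payload = json.dumps({"value": float(raw_value)})
```

The cast trades exactness for serializability, which is acceptable for display-only efficacy numbers.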

View File

@@ -138,6 +138,9 @@ class AuditTemplatePostType(wtypes.Base):
             raise exception.InvalidGoal(goal=audit_template.goal)
         if audit_template.scope:
+            keys = [list(s)[0] for s in audit_template.scope]
+            if keys[0] not in ('compute', 'storage'):
+                audit_template.scope = [dict(compute=audit_template.scope)]
             common_utils.Draft4Validator(
                 AuditTemplatePostType._build_schema()
             ).validate(audit_template.scope)
@@ -158,18 +161,23 @@ class AuditTemplatePostType(wtypes.Base):
                     "included and excluded together"))
         if audit_template.strategy:
-            available_strategies = objects.Strategy.list(
-                AuditTemplatePostType._ctx)
-            available_strategies_map = {
-                s.uuid: s for s in available_strategies}
-            if audit_template.strategy not in available_strategies_map:
+            try:
+                if (common_utils.is_uuid_like(audit_template.strategy) or
+                        common_utils.is_int_like(audit_template.strategy)):
+                    strategy = objects.Strategy.get(
+                        AuditTemplatePostType._ctx, audit_template.strategy)
+                else:
+                    strategy = objects.Strategy.get_by_name(
+                        AuditTemplatePostType._ctx, audit_template.strategy)
+            except Exception:
                 raise exception.InvalidStrategy(
                     strategy=audit_template.strategy)
-            strategy = available_strategies_map[audit_template.strategy]
         # Check that the strategy we indicate is actually related to the
         # specified goal
         if strategy.goal_id != goal.id:
+            available_strategies = objects.Strategy.list(
+                AuditTemplatePostType._ctx)
             choices = ["'%s' (%s)" % (s.uuid, s.name)
                        for s in available_strategies]
             raise exception.InvalidStrategy(
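The new lookup accepts a UUID, an integer ID, or a strategy name. The dispatch can be sketched standalone, with a rough `is_uuid_like` standing in for the `oslo_utils` helper and two hypothetical lookup tables in place of `objects.Strategy.get()` / `get_by_name()`:

```python
import uuid

def is_uuid_like(value):
    # Rough stand-in for oslo_utils.uuidutils.is_uuid_like.
    try:
        uuid.UUID(value)
        return True
    except (TypeError, ValueError, AttributeError):
        return False

def find_strategy(ident, by_id, by_name):
    # Mirror the diff: UUID-like or integer-like identifiers go through
    # Strategy.get(); anything else through Strategy.get_by_name().
    if is_uuid_like(ident) or ident.isdigit():
        return by_id.get(ident)
    return by_name.get(ident)

# Hypothetical stores keyed the two ways the API now accepts.
by_id = {"2b6b3dce-9b8c-4d1a-8a6f-0f0d6f7f8a9b": "workload_stabilization"}
by_name = {"dummy": "dummy_strategy"}

named = find_strategy("dummy", by_id, by_name)
by_uuid = find_strategy("2b6b3dce-9b8c-4d1a-8a6f-0f0d6f7f8a9b",
                        by_id, by_name)
```

This is what lets the commit titled "Watcher API supports strategy name when creating audit template" accept a human-readable name where previously only a UUID worked.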

View File

@@ -24,6 +24,7 @@ from wsme import types as wtypes
 import wsmeext.pecan as wsme_pecan
 from watcher.api.controllers.v1 import types
+from watcher.api.controllers.v1 import utils
 from watcher.common import exception
 from watcher.common import policy
 from watcher.decision_engine import rpcapi
@@ -49,6 +50,8 @@ class DataModelController(rest.RestController):
         :param audit_uuid: The UUID of the audit, used to filter data model
                            by the scope in audit.
         """
+        if not utils.allow_list_datamodel():
+            raise exception.NotAcceptable
         if self.from_data_model:
             raise exception.OperationNotPermitted
         allowed_data_model_type = [

View File

@@ -184,7 +184,7 @@ class MultiType(wtypes.UserType):
 class JsonPatchType(wtypes.Base):
     """A complex type that represents a single json-patch operation."""
-    path = wtypes.wsattr(wtypes.StringType(pattern='^(/[\w-]+)+$'),
+    path = wtypes.wsattr(wtypes.StringType(pattern=r'^(/[\w-]+)+$'),
                          mandatory=True)
     op = wtypes.wsattr(wtypes.Enum(str, 'add', 'replace', 'remove'),
                        mandatory=True)

View File

@@ -164,7 +164,8 @@ def allow_start_end_audit_time():
     Version 1.1 of the API added support for start and end time of continuous
     audits.
     """
-    return pecan.request.version.minor >= versions.MINOR_1_START_END_TIMING
+    return pecan.request.version.minor >= (
+        versions.VERSIONS.MINOR_1_START_END_TIMING.value)
 def allow_force():
@@ -173,4 +174,23 @@ def allow_force():
     Version 1.2 of the API added support for forced audits that allows to
     launch audit when other action plan is ongoing.
     """
-    return pecan.request.version.minor >= versions.MINOR_2_FORCE
+    return pecan.request.version.minor >= (
+        versions.VERSIONS.MINOR_2_FORCE.value)
+def allow_list_datamodel():
+    """Check if we should support list data model API.
+    Version 1.3 of the API added support to list data model.
+    """
+    return pecan.request.version.minor >= (
+        versions.VERSIONS.MINOR_3_DATAMODEL.value)
+def allow_webhook_api():
+    """Check if we should support webhook API.
+    Version 1.4 of the API added support to trigger webhook.
+    """
+    return pecan.request.version.minor >= (
+        versions.VERSIONS.MINOR_4_WEBHOOK_API.value)
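All of these helpers follow the same microversion-gating pattern: compare the minor version the client negotiated against the enum value at which the feature appeared. A standalone sketch of that pattern (no pecan request object, just an integer minor version):

```python
import enum

class VERSIONS(enum.Enum):
    # Mirrors the enum this series introduces in versions.py.
    MINOR_0_ROCKY = 0
    MINOR_1_START_END_TIMING = 1
    MINOR_2_FORCE = 2
    MINOR_3_DATAMODEL = 3
    MINOR_4_WEBHOOK_API = 4

def allow_webhook_api(requested_minor):
    # The feature is only visible at microversion 1.4 and above.
    return requested_minor >= VERSIONS.MINOR_4_WEBHOOK_API.value

old_client = allow_webhook_api(2)   # client pinned to 1.2: feature hidden
new_client = allow_webhook_api(4)   # client at 1.4: feature exposed
```

Because older clients simply never see the new links or endpoints, the API can grow without breaking existing automation.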

View File

@@ -14,25 +14,25 @@
 # License for the specific language governing permissions and limitations
 # under the License.
+import enum
+
+
+class VERSIONS(enum.Enum):
+    MINOR_0_ROCKY = 0  # v1.0: corresponds to Rocky API
+    MINOR_1_START_END_TIMING = 1  # v1.1: Add start/end time for audit
+    MINOR_2_FORCE = 2  # v1.2: Add force field to audit
+    MINOR_3_DATAMODEL = 3  # v1.3: Add list datamodel API
+    MINOR_4_WEBHOOK_API = 4  # v1.4: Add webhook trigger API
+    MINOR_MAX_VERSION = 4
+
 # This is the version 1 API
 BASE_VERSION = 1
-# Here goes a short log of changes in every version.
-#
-# v1.0: corresponds to Rocky API
-# v1.1: Add start/end time for continuous audit
-# v1.2: Add force field to audit
-MINOR_0_ROCKY = 0
-MINOR_1_START_END_TIMING = 1
-MINOR_2_FORCE = 2
-MINOR_MAX_VERSION = MINOR_2_FORCE
 # String representations of the minor and maximum versions
-_MIN_VERSION_STRING = '{}.{}'.format(BASE_VERSION, MINOR_0_ROCKY)
-_MAX_VERSION_STRING = '{}.{}'.format(BASE_VERSION, MINOR_MAX_VERSION)
+_MIN_VERSION_STRING = '{}.{}'.format(BASE_VERSION,
+                                     VERSIONS.MINOR_0_ROCKY.value)
+_MAX_VERSION_STRING = '{}.{}'.format(BASE_VERSION,
+                                     VERSIONS.MINOR_MAX_VERSION.value)
def service_type_string(): def service_type_string():

View File

@@ -0,0 +1,62 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""
+Webhook endpoint for Watcher v1 REST API.
+"""
+
+from oslo_log import log
+import pecan
+from pecan import rest
+from wsme import types as wtypes
+import wsmeext.pecan as wsme_pecan
+
+from watcher.api.controllers.v1 import types
+from watcher.api.controllers.v1 import utils
+from watcher.common import exception
+from watcher.decision_engine import rpcapi
+from watcher import objects
+
+LOG = log.getLogger(__name__)
+
+
+class WebhookController(rest.RestController):
+    """REST controller for webhooks resource."""
+
+    def __init__(self):
+        super(WebhookController, self).__init__()
+        self.dc_client = rpcapi.DecisionEngineAPI()
+
+    @wsme_pecan.wsexpose(None, wtypes.text, body=types.jsontype,
+                         status_code=202)
+    def post(self, audit_ident, body):
+        """Trigger the given audit.
+
+        :param audit_ident: UUID or name of an audit.
+        """
+        LOG.debug("Webhook trigger Audit: %s.", audit_ident)
+        context = pecan.request.context
+        audit = utils.get_resource('Audit', audit_ident)
+        if audit is None:
+            raise exception.AuditNotFound(audit=audit_ident)
+        if audit.audit_type != objects.audit.AuditType.EVENT.value:
+            raise exception.AuditTypeNotAllowed(audit_type=audit.audit_type)
+        allowed_state = (
+            objects.audit.State.PENDING,
+            objects.audit.State.SUCCEEDED,
+        )
+        if audit.state not in allowed_state:
+            raise exception.AuditStateNotAllowed(state=audit.state)
+        # trigger decision-engine to run the audit
+        self.dc_client.trigger_audit(context, audit.uuid)
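The controller only re-triggers audits that are of ``EVENT`` type and currently ``PENDING`` or ``SUCCEEDED``. That gate can be sketched standalone, with plain enums standing in for Watcher's object constants:

```python
import enum

class AuditType(enum.Enum):
    ONESHOT = 'ONESHOT'
    CONTINUOUS = 'CONTINUOUS'
    EVENT = 'EVENT'

class State(enum.Enum):
    PENDING = 'PENDING'
    ONGOING = 'ONGOING'
    SUCCEEDED = 'SUCCEEDED'

def can_trigger(audit_type, state):
    # Mirror the controller's checks: only EVENT audits that are PENDING
    # or SUCCEEDED may be (re-)triggered through the webhook.
    if audit_type != AuditType.EVENT:
        return False
    return state in (State.PENDING, State.SUCCEEDED)

ok = can_trigger(AuditType.EVENT, State.SUCCEEDED)
busy = can_trigger(AuditType.EVENT, State.ONGOING)
wrong_type = can_trigger(AuditType.ONESHOT, State.PENDING)
```

Allowing ``SUCCEEDED`` as well as ``PENDING`` is what makes an event audit re-runnable: each webhook call after a successful run starts a fresh evaluation.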

View File

@@ -15,9 +15,9 @@
 # under the License.
+from http import client as http_client
 from oslo_config import cfg
 from pecan import hooks
-from six.moves import http_client
 from watcher.common import context

View File

@@ -34,7 +34,7 @@ class AuthTokenMiddleware(auth_token.AuthProtocol):
     """
     def __init__(self, app, conf, public_api_routes=()):
-        route_pattern_tpl = '%s(\.json|\.xml)?$'
+        route_pattern_tpl = r'%s(\.json|\.xml)?$'
         try:
             self.public_api_routes = [re.compile(route_pattern_tpl % route_tpl)

View File

@@ -24,7 +24,6 @@ from xml import etree as et
 from oslo_log import log
 from oslo_serialization import jsonutils
-import six
 import webob
 from watcher._i18n import _
@@ -84,12 +83,10 @@ class ParsableErrorMiddleware(object):
                 '</error_message>' % state['status_code']]
             state['headers'].append(('Content-Type', 'application/xml'))
         else:
-            if six.PY3:
-                app_iter = [i.decode('utf-8') for i in app_iter]
+            app_iter = [i.decode('utf-8') for i in app_iter]
             body = [jsonutils.dumps(
                 {'error_message': '\n'.join(app_iter)})]
-            if six.PY3:
-                body = [item.encode('utf-8') for item in body]
+            body = [item.encode('utf-8') for item in body]
             state['headers'].append(('Content-Type', 'application/json'))
             state['headers'].append(('Content-Length', str(len(body[0]))))
         else:

View File

@@ -20,7 +20,6 @@ import itertools
 from oslo_config import cfg
 from oslo_log import log
 from oslo_utils import timeutils
-import six
 from watcher.common import context as watcher_context
 from watcher.common import scheduling
@@ -83,7 +82,7 @@ class APISchedulingService(scheduling.BackgroundSchedulerService):
         service = objects.Service.get(context, service_id)
         last_heartbeat = (service.last_seen_up or service.updated_at or
                           service.created_at)
-        if isinstance(last_heartbeat, six.string_types):
+        if isinstance(last_heartbeat, str):
             # NOTE(russellb) If this service came in over rpc via
             # conductor, then the timestamp will be a string and needs to be
             # converted back to a datetime.

View File

@@ -18,11 +18,9 @@
# #
 #
 import abc
-import six
-@six.add_metaclass(abc.ABCMeta)
-class BaseActionPlanHandler(object):
+class BaseActionPlanHandler(object, metaclass=abc.ABCMeta):
     @abc.abstractmethod
     def execute(self):
         raise NotImplementedError()

View File

@@ -19,14 +19,12 @@
 import abc
 import jsonschema
-import six
 from watcher.common import clients
 from watcher.common.loader import loadable
-@six.add_metaclass(abc.ABCMeta)
-class BaseAction(loadable.Loadable):
+class BaseAction(loadable.Loadable, metaclass=abc.ABCMeta):
     # NOTE(jed): by convention we decided
     # that the attribute "resource_id" is the unique id of
     # the resource to which the Action applies to allow us to use it in the
@@ -140,7 +138,7 @@ class BaseAction(loadable.Loadable):
         raise NotImplementedError()
     def check_abort(self):
-        if self.__class__.__name__ is 'Migrate':
+        if self.__class__.__name__ == 'Migrate':
             if self.migration_type == self.LIVE_MIGRATION:
                 return True
             else:
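The `is`-to-`==` fix deserves a note: `is` tests object identity, so the old comparison only worked because CPython happens to intern class names, which is an implementation detail rather than a guarantee. A quick demonstration:

```python
# 'is' checks whether two names refer to the same object; '==' checks value
# equality. Two equal strings need not be the same object.
class Migrate:
    pass

interned = Migrate.__name__             # 'Migrate', interned by CPython
runtime = "".join(["Mig", "rate"])      # equal value, built at runtime

same_value = runtime == interned        # what check_abort() should test
same_object = runtime is interned       # what the old code tested
```

Newer CPython releases also emit a `SyntaxWarning` for `is` with a literal, which is likely what prompted this cleanup.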

View File

@@ -15,8 +15,6 @@
 # limitations under the License.
 #
-from __future__ import unicode_literals
 from oslo_log import log
 from watcher.applier.loading import default

View File

@@ -186,7 +186,7 @@ class Migrate(base.BaseAction):
         return self.migrate(destination=self.destination_node)
     def revert(self):
-        LOG.info('Migrate action do not revert!')
+        return self.migrate(destination=self.source_node)
     def abort(self):
         nova = nova_helper.NovaHelper(osc=self.osc)

View File

@@ -47,24 +47,24 @@ class Resize(base.BaseAction):
     @property
     def schema(self):
         return {
-                'type': 'object',
-                'properties': {
-                    'resource_id': {
-                        'type': 'string',
-                        'minlength': 1,
-                        'pattern': ('^([a-fA-F0-9]){8}-([a-fA-F0-9]){4}-'
-                                    '([a-fA-F0-9]){4}-([a-fA-F0-9]){4}-'
-                                    '([a-fA-F0-9]){12}$')
-                    },
-                    'flavor': {
-                        'type': 'string',
-                        'minlength': 1,
-                    },
-                },
-                'required': ['resource_id', 'flavor'],
-                'additionalProperties': False,
-            }
+            'type': 'object',
+            'properties': {
+                'resource_id': {
+                    'type': 'string',
+                    'minlength': 1,
+                    'pattern': ('^([a-fA-F0-9]){8}-([a-fA-F0-9]){4}-'
+                                '([a-fA-F0-9]){4}-([a-fA-F0-9]){4}-'
+                                '([a-fA-F0-9]){12}$')
+                },
+                'flavor': {
+                    'type': 'string',
+                    'minlength': 1,
+                },
+            },
+            'required': ['resource_id', 'flavor'],
+            'additionalProperties': False,
+        }
@property @property
def instance_uuid(self): def instance_uuid(self):
@@ -95,7 +95,7 @@ class Resize(base.BaseAction):
         return self.resize()
     def revert(self):
-        return self.migrate(destination=self.source_node)
+        LOG.warning("revert not supported")
     def pre_condition(self):
         # TODO(jed): check if the instance exists / check if the instance is on

View File

@@ -26,11 +26,9 @@ See: :doc:`../architecture` for more details on this component.
 """
 import abc
-import six
-@six.add_metaclass(abc.ABCMeta)
-class BaseApplier(object):
+class BaseApplier(object, metaclass=abc.ABCMeta):
     @abc.abstractmethod
     def execute(self, action_plan_uuid):
         raise NotImplementedError()

View File

@@ -11,9 +11,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from __future__ import unicode_literals
 from watcher.common.loader import default

View File

@@ -17,7 +17,6 @@
 #
 import abc
-import six
 import time
 import eventlet
@@ -40,8 +39,7 @@ CANCEL_STATE = [objects.action_plan.State.CANCELLING,
                 objects.action_plan.State.CANCELLED]
-@six.add_metaclass(abc.ABCMeta)
-class BaseWorkFlowEngine(loadable.Loadable):
+class BaseWorkFlowEngine(loadable.Loadable, metaclass=abc.ABCMeta):
     def __init__(self, config, context=None, applier_manager=None):
         """Constructor

View File

@@ -25,8 +25,11 @@ from taskflow import task as flow_task
 from watcher.applier.workflow_engine import base
 from watcher.common import exception
+from watcher import conf
 from watcher import objects
+CONF = conf.CONF
 LOG = log.getLogger(__name__)
@@ -112,7 +115,7 @@ class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
             return flow
-        except exception.ActionPlanCancelled as e:
+        except exception.ActionPlanCancelled:
             raise
         except tf_exception.WrappedFailure as e:
@@ -127,9 +130,11 @@ class DefaultWorkFlowEngine(base.BaseWorkFlowEngine):
 class TaskFlowActionContainer(base.BaseTaskFlowActionContainer):
     def __init__(self, db_action, engine):
-        name = "action_type:{0} uuid:{1}".format(db_action.action_type,
-                                                 db_action.uuid)
-        super(TaskFlowActionContainer, self).__init__(name, db_action, engine)
+        self.name = "action_type:{0} uuid:{1}".format(db_action.action_type,
+                                                      db_action.uuid)
+        super(TaskFlowActionContainer, self).__init__(self.name,
+                                                      db_action,
+                                                      engine)
     def do_pre_execute(self):
         db_action = self.engine.notify(self._db_action,
@@ -158,6 +163,12 @@ class TaskFlowActionContainer(base.BaseTaskFlowActionContainer):
         self.action.post_condition()
     def do_revert(self, *args, **kwargs):
+        # NOTE: Not rollback action plan
+        if not CONF.watcher_applier.rollback_when_actionplan_failed:
+            LOG.info("Failed actionplan rollback option is turned off, and "
+                     "the following action will be skipped: %s", self.name)
+            return
         LOG.warning("Revert action: %s", self.name)
         try:
             # TODO(jed): do we need to update the states in case of failure?
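Combined with the new `rollback_when_actionplan_failed` option (default `False`), this guard means a failed action plan no longer rolls back unless the operator opts in. The control flow, reduced to a standalone sketch with a plain boolean in place of the config object:

```python
def do_revert(rollback_enabled, revert_action, log):
    # Standalone analogue of the guard in TaskFlowActionContainer: when
    # rollback_when_actionplan_failed is False (the default), the revert
    # step is skipped entirely and the actions are left as-is.
    if not rollback_enabled:
        log.append("rollback disabled, skipping revert")
        return False
    log.append("reverting action")
    revert_action()
    return True

log = []
reverted = do_revert(False, lambda: None, log)   # default behaviour
log2 = []
forced = do_revert(True, lambda: None, log2)     # operator opted in
```

Skipping the revert by default matches the commit rationale: rolling back has a cost of its own, so it should be a deliberate choice.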

View File

@@ -18,3 +18,10 @@
 import eventlet
 eventlet.monkey_patch()
+
+# Monkey patch the original current_thread to use the up-to-date _active
+# global variable. See https://bugs.launchpad.net/bugs/1863021 and
+# https://github.com/eventlet/eventlet/issues/592
+import __original_module_threading as orig_threading  # noqa
+import threading  # noqa
+orig_threading.current_thread.__globals__['_active'] = threading._active

View File

@@ -15,7 +15,6 @@
 import sys
 from oslo_upgradecheck import upgradecheck
-import six
 from watcher._i18n import _
 from watcher.common import clients
@@ -38,7 +37,7 @@ class Checks(upgradecheck.UpgradeCommands):
             clients.check_min_nova_api_version(CONF.nova_client.api_version)
         except ValueError as e:
             return upgradecheck.Result(
-                upgradecheck.Code.FAILURE, six.text_type(e))
+                upgradecheck.Code.FAILURE, str(e))
         return upgradecheck.Result(upgradecheck.Code.SUCCESS)
     _upgrade_checks = (
_upgrade_checks = ( _upgrade_checks = (

View File

@@ -13,7 +13,6 @@
 from oslo_context import context
 from oslo_log import log
 from oslo_utils import timeutils
-import six
 LOG = log.getLogger(__name__)
@@ -69,7 +68,7 @@ class RequestContext(context.RequestContext):
         self.project_id = project_id
         if not timestamp:
             timestamp = timeutils.utcnow()
-        if isinstance(timestamp, six.string_types):
+        if isinstance(timestamp, str):
             timestamp = timeutils.parse_isotime(timestamp)
         self.timestamp = timestamp
         self.user_name = user_name
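The `six.string_types`-to-`str` change is safe here because on Python 3 a timestamp arriving over RPC is always a `str` that needs parsing back into a `datetime`, while an in-process `datetime` passes through untouched. A sketch with `datetime.fromisoformat` standing in for `oslo_utils.timeutils.parse_isotime`:

```python
from datetime import datetime

def normalize_timestamp(timestamp):
    # Analogue of the RequestContext logic: strings get parsed, datetime
    # objects pass through unchanged.
    if isinstance(timestamp, str):
        return datetime.fromisoformat(timestamp)
    return timestamp

as_string = normalize_timestamp("2020-07-10T10:31:26")
as_object = normalize_timestamp(datetime(2020, 7, 10, 10, 31, 26))
```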

View File

@@ -28,7 +28,6 @@ import sys
 from keystoneclient import exceptions as keystone_exceptions
 from oslo_config import cfg
 from oslo_log import log
-import six
 from watcher._i18n import _
@@ -97,19 +96,16 @@ class WatcherException(Exception):
     def __str__(self):
         """Encode to utf-8 then wsme api can consume it as well"""
-        if not six.PY3:
-            return six.text_type(self.args[0]).encode('utf-8')
-        else:
-            return self.args[0]
+        return self.args[0]
     def __unicode__(self):
-        return six.text_type(self.args[0])
+        return str(self.args[0])
     def format_message(self):
         if self.__class__.__name__.endswith('_Remote'):
             return self.args[0]
         else:
-            return six.text_type(self)
+            return str(self)
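With `six` gone, `__str__` can return `args[0]` directly: Python 3 strings are already text, so the old Python 2 encode branch was dead code (and `__unicode__` is now vestigial, kept only for compatibility). A minimal sketch of the simplified exception:

```python
class WatcherException(Exception):
    # Minimal sketch of the pattern, not Watcher's full class: the message
    # template is formatted once and stored in args[0] as plain text.
    msg_fmt = "An unknown exception occurred"

    def __init__(self, message=None):
        if message is None:
            message = self.msg_fmt
        super().__init__(message)

    def __str__(self):
        # On Python 3, args[0] is already a str; no encoding needed.
        return self.args[0]

rendered = str(WatcherException("Audit state ONGOING is disallowed."))
```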
class UnsupportedError(WatcherException): class UnsupportedError(WatcherException):
@@ -243,6 +239,14 @@ class AuditTypeNotFound(Invalid):
     msg_fmt = _("Audit type %(audit_type)s could not be found")
+class AuditTypeNotAllowed(Invalid):
+    msg_fmt = _("Audit type %(audit_type)s is disallowed.")
+class AuditStateNotAllowed(Invalid):
+    msg_fmt = _("Audit state %(state)s is disallowed.")
 class AuditParameterNotAllowed(Invalid):
     msg_fmt = _("Audit parameter %(parameter)s are not allowed")

View File

@@ -14,14 +14,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from __future__ import unicode_literals
 import abc
-import six
-@six.add_metaclass(abc.ABCMeta)
-class BaseLoader(object):
+class BaseLoader(object, metaclass=abc.ABCMeta):
     @abc.abstractmethod
     def list_available(self):

View File

@@ -14,8 +14,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from __future__ import unicode_literals
 from oslo_config import cfg
 from oslo_log import log
 from stevedore import driver as drivermanager

View File

@@ -16,13 +16,10 @@
 import abc
-import six
 from watcher.common import service
-@six.add_metaclass(abc.ABCMeta)
-class Loadable(object):
+class Loadable(object, metaclass=abc.ABCMeta):
     """Generic interface for dynamically loading a driver/entry point.
     This defines the contract in order to let the loader manager inject
@@ -48,8 +45,7 @@ LoadableSingletonMeta = type(
     "LoadableSingletonMeta", (abc.ABCMeta, service.Singleton), {})
-@six.add_metaclass(LoadableSingletonMeta)
-class LoadableSingleton(object):
+class LoadableSingleton(object, metaclass=LoadableSingletonMeta):
     """Generic interface for dynamically loading a driver as a singleton.
     This defines the contract in order to let the loader manager inject

View File

@@ -37,6 +37,7 @@ class GreenThreadPoolExecutor(BasePoolExecutor):
         pool = futurist.GreenThreadPoolExecutor(int(max_workers))
         super(GreenThreadPoolExecutor, self).__init__(pool)
 executors = {
     'default': GreenThreadPoolExecutor(),
 }

View File

@@ -15,11 +15,9 @@
 # under the License.
 import abc
-import six
-@six.add_metaclass(abc.ABCMeta)
-class ServiceManager(object):
+class ServiceManager(object, metaclass=abc.ABCMeta):
     @abc.abstractproperty
     def service_name(self):

View File

@@ -28,7 +28,6 @@ from oslo_config import cfg
 from oslo_log import log
 from oslo_utils import strutils
 from oslo_utils import uuidutils
-import six
 from watcher.common import exception
@@ -82,7 +81,7 @@ def safe_rstrip(value, chars=None):
     :return: Stripped value.
     """
-    if not isinstance(value, six.string_types):
+    if not isinstance(value, str):
         LOG.warning(
             "Failed to remove trailing character. Returning original object."
             "Supplied object is not a string: %s,", value)
@@ -104,7 +103,7 @@ def is_hostname_safe(hostname):
     """
     m = r'^[a-z0-9]([a-z0-9\-]{0,61}[a-z0-9])?$'
-    return (isinstance(hostname, six.string_types) and
+    return (isinstance(hostname, str) and
             (re.match(m, hostname) is not None))
@@ -153,6 +152,7 @@ def extend_with_strict_schema(validator_class):
     return validators.extend(validator_class, {"properties": strict_schema})
 StrictDefaultValidatingDraft4Validator = extend_with_default(
     extend_with_strict_schema(validators.Draft4Validator))

View File

@@ -55,6 +55,11 @@ API_SERVICE_OPTS = [
                  "the service, this option should be False; note, you "
                  "will want to change public API endpoint to represent "
                  "SSL termination URL with 'public_endpoint' option."),
+    cfg.BoolOpt('enable_webhooks_auth',
+                default=True,
+                help='This option enables or disables webhook request '
+                     'authentication via keystone. Default value is True.'),
 ]

View File

@@ -43,11 +43,20 @@ APPLIER_MANAGER_OPTS = [
             help='Select the engine to use to execute the workflow'),
 ]

+APPLIER_OPTS = [
+    cfg.BoolOpt('rollback_when_actionplan_failed',
+                default=False,
+                help='If set True, the failed actionplan will rollback '
+                     'when executing. Default value is False.'),
+]
+

 def register_opts(conf):
     conf.register_group(watcher_applier)
     conf.register_opts(APPLIER_MANAGER_OPTS, group=watcher_applier)
+    conf.register_opts(APPLIER_OPTS, group=watcher_applier)


 def list_opts():
-    return [(watcher_applier, APPLIER_MANAGER_OPTS)]
+    return [(watcher_applier, APPLIER_MANAGER_OPTS),
+            (watcher_applier, APPLIER_OPTS)]
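The new `rollback_when_actionplan_failed` option makes rollback of a failed action plan opt-in. A hedged, oslo.config-free toy of the gating behaviour (class and action names are illustrative, not Watcher's applier API):

```python
class ActionPlanRunner:
    """Toy runner: rollback of completed actions is opt-in, mirroring the
    semantics of rollback_when_actionplan_failed (default False)."""

    def __init__(self, rollback_when_failed=False):
        self.rollback_when_failed = rollback_when_failed
        self.rolled_back = []

    def execute(self, actions):
        done = []
        for name, fn in actions:
            try:
                fn()
                done.append(name)
            except Exception:
                if self.rollback_when_failed:
                    # undo completed actions in reverse order
                    self.rolled_back = list(reversed(done))
                return False
        return True


def boom():
    raise RuntimeError("action failed")


runner = ActionPlanRunner(rollback_when_failed=True)
ok = runner.execute([("resize", lambda: None), ("migrate", boom)])
print(ok, runner.rolled_back)   # False ['resize']
```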

View File

@@ -40,11 +40,18 @@ WATCHER_DECISION_ENGINE_OPTS = [
                default='watcher.decision.api',
                help='The identifier used by the Watcher '
                     'module on the message broker'),
-    cfg.IntOpt('max_workers',
+    cfg.IntOpt('max_audit_workers',
                default=2,
                required=True,
                help='The maximum number of threads that can be used to '
-                    'execute strategies'),
+                    'execute audits in parallel.'),
+    cfg.IntOpt('max_general_workers',
+               default=4,
+               required=True,
+               help='The maximum number of threads that can be used to '
+                    'execute general tasks in parallel. The number of general '
+                    'workers will not increase depending on the number of '
+                    'audit workers!'),
     cfg.IntOpt('action_plan_expiry',
                default=24,
                mutable=True,

View File

@@ -18,7 +18,6 @@ Base classes for storage engines
 import abc

 from oslo_config import cfg
 from oslo_db import api as db_api
-import six

 _BACKEND_MAPPING = {'sqlalchemy': 'watcher.db.sqlalchemy.api'}
 IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING,
@@ -30,8 +29,7 @@ def get_instance():
     return IMPL

-@six.add_metaclass(abc.ABCMeta)
-class BaseConnection(object):
+class BaseConnection(object, metaclass=abc.ABCMeta):
     """Base class for storage system connections."""

     @abc.abstractmethod

View File

@@ -15,8 +15,6 @@
 # limitations under the License.
 #

-from __future__ import print_function
-
 import collections
 import datetime
 import itertools
@@ -25,7 +23,6 @@ import sys
 from oslo_log import log
 from oslo_utils import strutils
 import prettytable as ptable
-from six.moves import input

 from watcher._i18n import _
 from watcher._i18n import lazy_translation_enabled

View File

@@ -1125,8 +1125,8 @@ class Connection(api.BaseConnection):
     def get_action_description_by_id(self, context,
                                      action_id, eager=False):
         return self._get_action_description(
             context, fieldname="id", value=action_id, eager=eager)

     def get_action_description_by_type(self, context,
                                        action_type, eager=False):

View File

@@ -18,7 +18,6 @@
 # limitations under the License.
 #

 import abc
-import six

 from oslo_config import cfg
 from oslo_log import log
@@ -36,9 +35,11 @@ CONF = cfg.CONF
 LOG = log.getLogger(__name__)

-@six.add_metaclass(abc.ABCMeta)
-@six.add_metaclass(service.Singleton)
-class BaseAuditHandler(object):
+class BaseMetaClass(service.Singleton, abc.ABCMeta):
+    pass
+
+
+class BaseAuditHandler(object, metaclass=BaseMetaClass):

     @abc.abstractmethod
     def execute(self, audit, request_context):
@@ -57,8 +58,7 @@ class BaseAuditHandler(object):
         raise NotImplementedError()

-@six.add_metaclass(abc.ABCMeta)
-class AuditHandler(BaseAuditHandler):
+class AuditHandler(BaseAuditHandler, metaclass=abc.ABCMeta):

     def __init__(self):
         super(AuditHandler, self).__init__()
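The hunk above shows the one non-mechanical part of dropping `six.add_metaclass`: two stacked decorators collapse into a single derived metaclass, since a Python 3 class can only have one. A self-contained sketch, where `Singleton` is a simplified stand-in for watcher's `service.Singleton`:

```python
import abc


# Python 3 spelling of @six.add_metaclass(abc.ABCMeta)
class BaseConnection(object, metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def upgrade(self):
        raise NotImplementedError()


class Singleton(type):
    """Simplified stand-in for watcher's service.Singleton metaclass."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


# Two metaclasses must be merged into one derived metaclass,
# exactly as the BaseMetaClass in the diff does.
class BaseMetaClass(Singleton, abc.ABCMeta):
    pass


class Handler(object, metaclass=BaseMetaClass):
    pass


print(Handler() is Handler())   # True: singleton behaviour preserved
```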

View File

@@ -0,0 +1,27 @@
+# -*- encoding: utf-8 -*-
+# Copyright (c) 2019 ZTE Corporation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from watcher.decision_engine.audit import base
+from watcher import objects
+
+
+class EventAuditHandler(base.AuditHandler):
+    def post_execute(self, audit, solution, request_context):
+        super(EventAuditHandler, self).post_execute(audit, solution,
+                                                    request_context)
+        # change state of the audit to SUCCEEDED
+        self.update_audit_state(audit, objects.audit.State.SUCCEEDED)

View File

@@ -19,8 +19,6 @@ import time
 from oslo_config import cfg
 from oslo_log import log

-from watcher.common import exception
-
 CONF = cfg.CONF
 LOG = log.getLogger(__name__)
@@ -79,7 +77,6 @@ class DataSourceBase(object):
             LOG.warning("Retry {0} of {1} while retrieving metrics retry "
                         "in {2} seconds".format(i+1, num_retries, timeout))
             time.sleep(timeout)
-        raise exception.DataSourceNotAvailable(datasource=self.NAME)

     @abc.abstractmethod
     def query_retry_reset(self, exception_instance):
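This hunk changes the `query_retry` contract: rather than raising `DataSourceNotAvailable` after exhausting retries, it falls through and returns `None`, which is why the ceilometer/gnocchi/monasca helpers below switch from `try/except` to testing the result. A minimal stdlib sketch of that contract (the retry loop shape is inferred from the surrounding context, not copied from Watcher):

```python
import time


def query_retry(f, num_retries=3, timeout=0, *args, **kwargs):
    """Retry f; on total failure return None instead of raising,
    so callers can test the result for availability."""
    for i in range(num_retries):
        try:
            return f(*args, **kwargs)
        except Exception:
            time.sleep(timeout)
    return None  # callers treat a falsy result as "not available"


calls = {"n": 0}


def flaky():
    # fails twice, succeeds on the third attempt
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient")
    return ["metric.a"]


print(query_retry(flaky))           # ['metric.a'] on the 3rd try
print(query_retry(lambda: 1 / 0))   # None after exhausting retries
```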

View File

@@ -136,19 +136,18 @@ class CeilometerHelper(base.DataSourceBase):
     def list_metrics(self):
         """List the user's meters."""
-        try:
-            meters = self.query_retry(f=self.ceilometer.meters.list)
-        except Exception:
+        meters = self.query_retry(f=self.ceilometer.meters.list)
+        if not meters:
             return set()
         else:
             return meters

     def check_availability(self):
-        try:
-            self.query_retry(self.ceilometer.resources.list)
-        except Exception:
-            return 'not available'
-        return 'available'
+        status = self.query_retry(self.ceilometer.resources.list)
+        if status:
+            return 'available'
+        else:
+            return 'not available'

     def query_sample(self, meter_name, query, limit=1):
         return self.query_retry(f=self.ceilometer.samples.list,
@@ -189,7 +188,7 @@ class CeilometerHelper(base.DataSourceBase):
         item_value = None
         if statistic:
             item_value = statistic[-1]._info.get('aggregate').get(aggregate)
-            if meter_name is 'host_airflow':
+            if meter_name == 'host_airflow':
                 # Airflow from hardware.ipmi.node.airflow is reported as
                 # 1/10 th of actual CFM
                 item_value *= 10
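The `is` → `==` fixes here (and in the gnocchi, grafana, and manager hunks below) matter because `is` tests object identity, not equality, and CPython only interns some strings, so the old checks could silently miss. A two-line demonstration:

```python
a = 'host_airflow'
b = ''.join(['host_', 'airflow'])   # equal value, distinct object at runtime

print(a == b)   # True: same characters
print(a is b)   # False: different objects, so `is` comparison is unreliable
```

Python 3.8+ also emits a `SyntaxWarning` for `is` with a literal, which is how several of these were likely caught.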

View File

@@ -52,17 +52,16 @@ class GnocchiHelper(base.DataSourceBase):
         self.gnocchi = self.osc.gnocchi()

     def check_availability(self):
-        try:
-            self.query_retry(self.gnocchi.status.get)
-        except Exception:
-            return 'not available'
-        return 'available'
+        status = self.query_retry(self.gnocchi.status.get)
+        if status:
+            return 'available'
+        else:
+            return 'not available'

     def list_metrics(self):
         """List the user's meters."""
-        try:
-            response = self.query_retry(f=self.gnocchi.metric.list)
-        except Exception:
+        response = self.query_retry(f=self.gnocchi.metric.list)
+        if not response:
             return set()
         else:
             return set([metric['name'] for metric in response])
@@ -91,8 +90,9 @@ class GnocchiHelper(base.DataSourceBase):
             f=self.gnocchi.resource.search, **kwargs)

         if not resources:
-            raise exception.ResourceNotFound(name='gnocchi',
-                                             id=resource_id)
+            LOG.warning("The {0} resource {1} could not be "
+                        "found".format(self.NAME, resource_id))
+            return

         resource_id = resources[0]['id']
@@ -110,17 +110,18 @@ class GnocchiHelper(base.DataSourceBase):
         statistics = self.query_retry(
             f=self.gnocchi.metric.get_measures, **kwargs)

+        return_value = None
         if statistics:
             # return value of latest measure
             # measure has structure [time, granularity, value]
             return_value = statistics[-1][2]

-            if meter_name is 'host_airflow':
+            if meter_name == 'host_airflow':
                 # Airflow from hardware.ipmi.node.airflow is reported as
                 # 1/10 th of actual CFM
                 return_value *= 10

         return return_value

     def get_host_cpu_usage(self, resource, period, aggregate,
                            granularity=300):

View File

@@ -16,9 +16,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+from urllib import parse as urlparse
+
 from oslo_config import cfg
 from oslo_log import log
-import six.moves.urllib.parse as urlparse

 from watcher.common import clients
 from watcher.common import exception
@@ -72,7 +73,7 @@ class GrafanaHelper(base.DataSourceBase):
         # Very basic url parsing
         parse = urlparse.urlparse(self._base_url)
-        if parse.scheme is '' or parse.netloc is '' or parse.path is '':
+        if parse.scheme == '' or parse.netloc == '' or parse.path == '':
             LOG.critical("GrafanaHelper url not properly configured, "
                          "check base_url and project_id")
             return
@@ -179,6 +180,9 @@ class GrafanaHelper(base.DataSourceBase):
         kwargs = {k: v for k, v in raw_kwargs.items() if k and v}

         resp = self.query_retry(self._request, **kwargs)
+        if not resp:
+            LOG.warning("Datasource {0} is not available.".format(self.NAME))
+            return

         result = translator.extract_result(resp.content)

View File

@@ -112,10 +112,10 @@ class DataSourceManager(object):
         datasource is attempted.
         """
-        if not self.datasources or len(self.datasources) is 0:
+        if not self.datasources or len(self.datasources) == 0:
             raise exception.NoDatasourceAvailable
-        if not metrics or len(metrics) is 0:
+        if not metrics or len(metrics) == 0:
             LOG.critical("Can not retrieve datasource without specifying "
                          "list of required metrics.")
             raise exception.InvalidParameter(parameter='metrics',
@@ -125,11 +125,11 @@ class DataSourceManager(object):
             no_metric = False
             for metric in metrics:
                 if (metric not in self.metric_map[datasource] or
                         self.metric_map[datasource].get(metric) is None):
                     no_metric = True
                     LOG.warning("Datasource: {0} could not be used due to "
                                 "metric: {1}".format(datasource, metric))
                     break
             if not no_metric:
                 # Try to use a specific datasource but attempt additional
                 # datasources upon exceptions (if config has more datasources)

View File

@@ -73,11 +73,11 @@ class MonascaHelper(base.DataSourceBase):
         self.monasca = self.osc.monasca()

     def check_availability(self):
-        try:
-            self.query_retry(self.monasca.metrics.list)
-        except Exception:
-            return 'not available'
-        return 'available'
+        result = self.query_retry(self.monasca.metrics.list)
+        if result:
+            return 'available'
+        else:
+            return 'not available'

     def list_metrics(self):
         # TODO(alexchadin): this method should be implemented in accordance to

View File

@@ -15,13 +15,11 @@
 # limitations under the License.

 import abc
-import six

 from watcher.common.loader import loadable

-@six.add_metaclass(abc.ABCMeta)
-class Goal(loadable.Loadable):
+class Goal(loadable.Loadable, metaclass=abc.ABCMeta):

     def __init__(self, config):
         super(Goal, self).__init__(config)

View File

@@ -27,11 +27,8 @@ import abc
 import jsonschema
 from oslo_serialization import jsonutils
-import six

-@six.add_metaclass(abc.ABCMeta)
-class EfficacySpecification(object):
+class EfficacySpecification(object, metaclass=abc.ABCMeta):

     def __init__(self):
         self._indicators_specs = self.get_indicators_specifications()

View File

@@ -18,7 +18,6 @@ import abc
 import jsonschema
 from jsonschema import SchemaError
 from jsonschema import ValidationError
-import six

 from oslo_log import log
 from oslo_serialization import jsonutils
@@ -29,8 +28,7 @@ from watcher.common import exception
 LOG = log.getLogger(__name__)

-@six.add_metaclass(abc.ABCMeta)
-class IndicatorSpecification(object):
+class IndicatorSpecification(object, metaclass=abc.ABCMeta):

     def __init__(self, name=None, description=None, unit=None, required=True):
         self.name = name

View File

@@ -19,9 +19,6 @@
 # limitations under the License.
 #

-from __future__ import unicode_literals
-
 from watcher.common.loader import default

View File

@@ -22,6 +22,7 @@ from oslo_config import cfg
 from oslo_log import log

 from watcher.decision_engine.audit import continuous as c_handler
+from watcher.decision_engine.audit import event as e_handler
 from watcher.decision_engine.audit import oneshot as o_handler
 from watcher import objects
@@ -35,9 +36,10 @@ class AuditEndpoint(object):
     def __init__(self, messaging):
         self._messaging = messaging
         self._executor = futurist.GreenThreadPoolExecutor(
-            max_workers=CONF.watcher_decision_engine.max_workers)
+            max_workers=CONF.watcher_decision_engine.max_audit_workers)
         self._oneshot_handler = o_handler.OneShotAuditHandler()
         self._continuous_handler = c_handler.ContinuousAuditHandler().start()
+        self._event_handler = e_handler.EventAuditHandler()

     @property
     def executor(self):
@@ -45,7 +47,10 @@ class AuditEndpoint(object):
     def do_trigger_audit(self, context, audit_uuid):
         audit = objects.Audit.get_by_uuid(context, audit_uuid, eager=True)
-        self._oneshot_handler.execute(audit, context)
+        if audit.audit_type == objects.audit.AuditType.ONESHOT.value:
+            self._oneshot_handler.execute(audit, context)
+        if audit.audit_type == objects.audit.AuditType.EVENT.value:
+            self._event_handler.execute(audit, context)

     def trigger_audit(self, context, audit_uuid):
         LOG.debug("Trigger audit %s", audit_uuid)

View File

@@ -25,11 +25,9 @@ See: :doc:`../architecture` for more details on this component.
 """

 import abc
-import six

-@six.add_metaclass(abc.ABCMeta)
-class Model(object):
+class Model(object, metaclass=abc.ABCMeta):

     @abc.abstractmethod
     def to_string(self):

View File

@@ -110,7 +110,6 @@ import time
 from oslo_config import cfg
 from oslo_log import log
-import six

 from watcher.common import clients
 from watcher.common.loader import loadable
@@ -120,8 +119,8 @@ LOG = log.getLogger(__name__)
 CONF = cfg.CONF

-@six.add_metaclass(abc.ABCMeta)
-class BaseClusterDataModelCollector(loadable.LoadableSingleton):
+class BaseClusterDataModelCollector(loadable.LoadableSingleton,
+                                    metaclass=abc.ABCMeta):

     STALE_MODEL = model_root.ModelRoot(stale=True)

View File

@@ -13,8 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import six
-
 from oslo_log import log

 from watcher.common import cinder_helper
@@ -152,6 +150,9 @@ class CinderClusterDataModelCollector(base.BaseClusterDataModelCollector):
         if self._audit_scope_handler is None:
             LOG.debug("No audit, Don't Build storage data model")
             return
+        if self._data_model_scope is None:
+            LOG.debug("No audit scope, Don't Build storage data model")
+            return

         builder = CinderModelBuilder(self.osc)
         return builder.execute(self._data_model_scope)
@@ -286,7 +287,7 @@ class CinderModelBuilder(base.BaseModelBuilder):
         :param instance: Cinder Volume object.
         :return: A volume node for the graph.
         """
-        attachments = [{k: v for k, v in six.iteritems(d) if k in (
+        attachments = [{k: v for k, v in iter(d.items()) if k in (
             'server_id', 'attachment_id')} for d in volume.attachments]

         volume_attributes = {
volume_attributes = { volume_attributes = {

View File

@@ -63,6 +63,9 @@ class BaremetalClusterDataModelCollector(base.BaseClusterDataModelCollector):
         if self._audit_scope_handler is None:
             LOG.debug("No audit, Don't Build Baremetal data model")
             return
+        if self._data_model_scope is None:
+            LOG.debug("No audit scope, Don't Build Baremetal data model")
+            return

         builder = BareMetalModelBuilder(self.osc)
         return builder.execute(self._data_model_scope)

View File

@@ -16,6 +16,8 @@
 import os_resource_classes as orc
 from oslo_log import log

+from futurist import waiters
+
 from watcher.common import nova_helper
 from watcher.common import placement_helper
 from watcher.decision_engine.model.collector import base
@@ -23,6 +25,7 @@ from watcher.decision_engine.model import element
 from watcher.decision_engine.model import model_root
 from watcher.decision_engine.model.notification import nova
 from watcher.decision_engine.scope import compute as compute_scope
+from watcher.decision_engine import threading

 LOG = log.getLogger(__name__)
@@ -181,6 +184,9 @@ class NovaClusterDataModelCollector(base.BaseClusterDataModelCollector):
         if self._audit_scope_handler is None:
             LOG.debug("No audit, Don't Build compute data model")
             return
+        if self._data_model_scope is None:
+            LOG.debug("No audit scope, Don't Build compute data model")
+            return

         builder = NovaModelBuilder(self.osc)
         return builder.execute(self._data_model_scope)
@@ -212,8 +218,12 @@ class NovaModelBuilder(base.BaseModelBuilder):
         self.nova = osc.nova()
         self.nova_helper = nova_helper.NovaHelper(osc=self.osc)
         self.placement_helper = placement_helper.PlacementHelper(osc=self.osc)
+        self.executor = threading.DecisionEngineThreadPool()

     def _collect_aggregates(self, host_aggregates, _nodes):
+        if not host_aggregates:
+            return
+
         aggregate_list = self.call_retry(f=self.nova_helper.get_aggregate_list)
         aggregate_ids = [aggregate['id'] for aggregate
                          in host_aggregates if 'id' in aggregate]
@@ -229,6 +239,9 @@ class NovaModelBuilder(base.BaseModelBuilder):
                 _nodes.update(aggregate.hosts)

     def _collect_zones(self, availability_zones, _nodes):
+        if not availability_zones:
+            return
+
         service_list = self.call_retry(f=self.nova_helper.get_service_list)
         zone_names = [zone['name'] for zone
                       in availability_zones]
@@ -239,20 +252,71 @@ class NovaModelBuilder(base.BaseModelBuilder):
             if service.zone in zone_names or include_all_nodes:
                 _nodes.add(service.host)

-    def _add_physical_layer(self):
-        """Add the physical layer of the graph.
-
-        This includes components which represent actual infrastructure
-        hardware.
-        """
+    def _compute_node_future(self, future, future_instances):
+        """Add compute node information to model and schedule instance info job
+
+        :param future: The future from the finished execution
+        :rtype future: :py:class:`futurist.GreenFuture`
+        :param future_instances: list of futures for instance jobs
+        :rtype future_instances: list :py:class:`futurist.GreenFuture`
+        """
+        try:
+            node_info = future.result()[0]
+
+            # filter out baremetal node
+            if node_info.hypervisor_type == 'ironic':
+                LOG.debug("filtering out baremetal node: %s", node_info)
+                return
+            self.add_compute_node(node_info)
+            # node.servers is a list of server objects
+            # New in nova version 2.53
+            instances = getattr(node_info, "servers", None)
+            # Do not submit job if there are no instances on compute node
+            if instances is None:
+                LOG.info("No instances on compute_node: {0}".format(node_info))
+                return
+            future_instances.append(
+                self.executor.submit(
+                    self.add_instance_node, node_info, instances)
+            )
+        except Exception:
+            LOG.error("compute node from aggregate / "
+                      "availability_zone could not be found")
+
+    def _add_physical_layer(self):
+        """Collects all information on compute nodes and instances
+
+        Will collect all required compute node and instance information based
+        on the host aggregates and availability zones. If aggregates and zones
+        do not specify any compute nodes, all nodes are retrieved instead.
+
+        The collection of information happens concurrently using the
+        DecisionEngineThreadpool. The collection is parallelized in three
+        steps: first, information about aggregates and zones is gathered.
+        Secondly, for each of the compute nodes a task is submitted to get
+        detailed information about the compute node. Finally, each of these
+        submitted tasks will submit an additional task if the compute node
+        contains instances. Before returning from this function all instance
+        tasks are waited upon to complete.
+        """
+
         compute_nodes = set()
         host_aggregates = self.model_scope.get("host_aggregates")
         availability_zones = self.model_scope.get("availability_zones")
-        if host_aggregates:
-            self._collect_aggregates(host_aggregates, compute_nodes)
-        if availability_zones:
-            self._collect_zones(availability_zones, compute_nodes)
+
+        # Submit tasks to gather compute nodes from availability zones and
+        # host aggregates. Each task adds compute nodes to the set; this set
+        # is threadsafe under the assumption that CPython is used with the
+        # GIL enabled.
+        zone_aggregate_futures = {
+            self.executor.submit(
+                self._collect_aggregates, host_aggregates, compute_nodes),
+            self.executor.submit(
+                self._collect_zones, availability_zones, compute_nodes)
+        }
+        waiters.wait_for_all(zone_aggregate_futures)
+
+        # if zones and aggregates did not contain any nodes get every node.
         if not compute_nodes:
             self.no_model_scope_flag = True
             all_nodes = self.call_retry(
@@ -260,24 +324,20 @@ class NovaModelBuilder(base.BaseModelBuilder):
             compute_nodes = set(
                 [node.hypervisor_hostname for node in all_nodes])
         LOG.debug("compute nodes: %s", compute_nodes)
-        for node_name in compute_nodes:
-            cnode = self.call_retry(
-                self.nova_helper.get_compute_node_by_name,
-                node_name, servers=True, detailed=True)
-            if cnode:
-                node_info = cnode[0]
-                # filter out baremetal node
-                if node_info.hypervisor_type == 'ironic':
-                    LOG.debug("filtering out baremetal node: %s", node_name)
-                    continue
-                self.add_compute_node(node_info)
-                # node.servers is a list of server objects
-                # New in nova version 2.53
-                instances = getattr(node_info, "servers", None)
-                self.add_instance_node(node_info, instances)
-            else:
-                LOG.error("compute_node from aggregate / availability_zone "
-                          "could not be found: {0}".format(node_name))
+
+        node_futures = [self.executor.submit(
+            self.nova_helper.get_compute_node_by_name,
+            node, servers=True, detailed=True)
+            for node in compute_nodes]
+        LOG.debug("submitted {0} jobs".format(len(compute_nodes)))
+
+        # Futures will concurrently be added, only safe with CPython GIL
+        future_instances = []
+        self.executor.do_while_futures_modify(
+            node_futures, self._compute_node_future, future_instances)
+
+        # Wait for all instance jobs to finish
+        waiters.wait_for_all(future_instances)

     def add_compute_node(self, node):
         # Build and add base node.
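The parallelized `_add_physical_layer` above fans out one future per compute node and, as each completes, fans out a second wave of instance-collection futures before waiting on them all. A hedged, stdlib-only sketch of that fan-out pattern using `concurrent.futures` in place of futurist's `DecisionEngineThreadPool` and `waiters.wait_for_all` (`fetch_node`/`fetch_instances` are illustrative stand-ins, not Watcher APIs):

```python
from concurrent.futures import ThreadPoolExecutor, wait


def fetch_node(name):
    # stand-in for get_compute_node_by_name(node, servers=True, detailed=True)
    return {"name": name, "servers": ["vm-%s" % name]}


def fetch_instances(node):
    # stand-in for add_instance_node(node_info, instances)
    return node["servers"]


executor = ThreadPoolExecutor(max_workers=4)

# Step 1: one task per compute node
node_futures = [executor.submit(fetch_node, n) for n in ("n1", "n2", "n3")]

# Step 2: as each node task completes, fan out an instance task
instance_futures = []
for f in node_futures:
    node = f.result()
    instance_futures.append(executor.submit(fetch_instances, node))

# Step 3: block until every instance task is done
# (the waiters.wait_for_all analogue)
wait(instance_futures)
instances = sorted(i for f in instance_futures for i in f.result())
print(instances)   # ['vm-n1', 'vm-n2', 'vm-n3']
executor.shutdown()
```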

Some files were not shown because too many files have changed in this diff.