Implements the configuration parameters required to support
Grafana as a datasource, including the influxdb translator.
Change-Id: If1f27dc01e853c5b24bdb21f1e810f64eaee2e5c
Partially-implements: blueprint grafana-proxy-datasource
Replaces the NoSuchMetric exception with MetricNotAvailable
and adds test cases to prevent regression.
The changes in the exceptions were introduced in:
https://review.opendev.org/#/c/658127/
Change-Id: Id0f872e916aaa5dec59ed1ae6c0f653553b3fe46
In get_node_by_instance_uuid, a ComputeNodeNotFound exception
is raised if a node cannot be found for the given instance uuid.
However, the exception message substitutes the instance uuid where
the node name belongs, which is misleading, so we define a new
exception.
Closes-Bug: #1832156
Change-Id: Ic6c44ae44da7c3b9a1c20e9b24a036063af266ba
Moves the query_retry method into the base class and makes the query
retry and timeout options part of the watcher_datasources config group.
This makes the query_retry behavior uniform across all datasources.
A new base class method named query_retry_reset is added so datasources
can define operations to perform when recovering from a query error.
Test cases are added to verify the behavior of query_retry.
The query_max_retries and query_timeout config parameters are
deprecated in the gnocchi_client group and will be removed in a future
release.
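The retry behavior described above can be sketched roughly as follows; the class and attribute names here are illustrative stand-ins, not Watcher's actual implementation, and the config values would come from the watcher_datasources group rather than class attributes:

```python
import time


class DataSourceBase(object):
    """Illustrative sketch of the shared retry behavior."""

    # In Watcher these come from the watcher_datasources config group.
    query_max_retries = 3
    query_timeout = 0  # seconds between retries; 0 keeps the sketch fast

    def query_retry(self, f, *args, **kwargs):
        """Call f, retrying up to query_max_retries times on failure."""
        for _ in range(self.query_max_retries):
            try:
                return f(*args, **kwargs)
            except Exception:
                # Hook so a datasource can recover from a query error,
                # e.g. by rebuilding its client connection.
                self.query_retry_reset()
                time.sleep(self.query_timeout)
        return None

    def query_retry_reset(self):
        """No-op by default; subclasses override with recovery logic."""
```

Centralizing this in the base class is what makes the retry behavior uniform: every datasource inherits the same loop and only overrides the reset hook.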
Change-Id: I33e9dc2d1f5ba8f83fcf1488ff583ca5be5529cc
In Python, decorating a method with @property wraps it in the
property descriptor, so merely accessing the attribute invokes it.
When we call self.strategy.datasource_backend()[1],
two things actually happen:
1. accessing self.strategy.datasource_backend invokes the property getter
2. the getter's return value[2] is then called
[1]. https://github.com/openstack/watcher/blob/bd8636f3f/watcher/tests/decision_engine/strategy/strategies/test_base.py#L87
[2]. https://github.com/openstack/watcher/blob/bd8636f3f/watcher/decision_engine/strategy/strategies/base.py#L368
But in this part we only want the first step to happen,
so we have to use self.strategy.datasource_backend instead of
self.strategy.datasource_backend().
The unit test does not report an error because the returned
value is a mock object, so the second step executes without
error, for example:
python -m unittest watcher.tests.decision_engine.strategy.strategies.test_base
(Pdb) x=self.strategy.datasource_backend
(Pdb) type(x)
<class 'mock.mock.MagicMock'>
(Pdb) x
<MagicMock name='DataSourceManager().get_backend()' id='139740418102608'>
(Pdb) x()
<MagicMock name='DataSourceManager().get_backend()()' id='139740410824976'>
(Pdb) self.strategy.datasource_backend()
<MagicMock name='DataSourceManager().get_backend()()' id='139740410824976'>
To make the tests more robust, the underlying backend function
is mocked to be not callable.
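The failure mode and the fix can be demonstrated with a toy stand-in for the strategy class (the Strategy class below is illustrative, not Watcher's real code): with a plain MagicMock the accidental extra call succeeds silently, while a NonCallableMagicMock turns it into a TypeError.

```python
from unittest import mock


class Strategy(object):
    """Toy stand-in for the strategy class; illustrative only."""

    def __init__(self, datasource_manager):
        self._manager = datasource_manager

    @property
    def datasource_backend(self):
        # Merely accessing the attribute runs this getter.
        return self._manager.get_backend()


manager = mock.MagicMock()
strategy = Strategy(manager)

backend = strategy.datasource_backend  # only the getter runs
extra = strategy.datasource_backend()  # getter runs, then its result is called

# Making the backend non-callable, as the tests now do, turns the
# accidental second call into a hard failure:
manager.get_backend.return_value = mock.NonCallableMagicMock()
strategy.datasource_backend            # fine: attribute access only
try:
    strategy.datasource_backend()      # raises TypeError
    raised = False
except TypeError:
    raised = True
```

This is why mocking the underlying backend as not callable makes the tests robust: any code that still writes `datasource_backend()` now fails loudly instead of passing by accident.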
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Change-Id: I3305d9afe8ed79e1dc3affe02ba067ac06cece42
This patch adds Placement support to Watcher.
We plan to improve the data model and strategies in
future specs.
Change-Id: I7141459eef66557cd5d525b5887bd2a381cdac3f
Implements: blueprint support-placement-api
This makes the ConfFixture extend the Config fixture from
oslo.config which handles cleanup for us. The module level
import_opt calls are also removed since they are no longer
needed.
Change-Id: I869e89c53284c8da45e0b1293f2d35011f5bfbf9
Running apidoc currently produces some errors, and we don't
actually need apidoc, so remove it.
Closes-Bug: #1831515
Change-Id: I3b91a2c05ed62ae7bbd30a29e9db51d0e021410f
The get_compute_node_by_hostname method is given a
compute service hostname and then does two queries to
find the matching hypervisor (compute node) with details:
1. List hypervisors with details and find the one that
matches the given compute service hostname.
2. Using that node, search for hypervisors with the
matching hypervisor_hostname.
There are two issues here:
1. The first query is inefficient in that it has to list
all hypervisors in the deployment to try and match the
one with the compute service hostname client side.
2. The second query is a fuzzy match on the server side [1]
so even though we have matched on the node we want,
get_compute_node_by_name can still return more than
one hypervisor which will result in the helper method
raising ComputeNodeNotFound. Consider having compute
hosts with names compute1, compute10, compute11, compute100,
and so on. The fuzzy match on compute1 would return all of
those hypervisors.
For non-ironic nodes in nova, the compute service host and
hypervisor should be 1:1, meaning the hypervisor.service['host']
should be the same as hypervisor.hypervisor_hostname. Knowing
this, we can simplify the code to search just on the given
compute service hostname and if we get more than one result, it
is because of the fuzzy match and we can then do our client-side
filtering on the compute service hostname.
[1] https://github.com/openstack/nova/blob/d4f58f5eb/nova/db/sqlalchemy/api.py#L676
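The simplified lookup can be sketched as below; the `nova` object is assumed to expose a novaclient-like `hypervisors.search(match, detailed=True)`, and the exception class is a local stand-in for Watcher's ComputeNodeNotFound:

```python
class ComputeNodeNotFound(Exception):
    pass


def get_compute_node_by_hostname(nova, compute_service_host):
    """Sketch: search once on the compute service hostname, then
    filter out fuzzy matches client-side."""
    hypervisors = nova.hypervisors.search(compute_service_host,
                                          detailed=True)
    if len(hypervisors) > 1:
        # The server-side match is fuzzy (compute1 also matches
        # compute10, compute100, ...), so filter on the exact
        # compute service hostname.
        hypervisors = [hv for hv in hypervisors
                       if hv.service['host'] == compute_service_host]
    if len(hypervisors) != 1:
        raise ComputeNodeNotFound(compute_service_host)
    return hypervisors[0]
```

This relies on the 1:1 service-host-to-hypervisor mapping for non-ironic nodes noted above, so the exact-hostname filter is guaranteed to narrow the fuzzy result set to at most one node.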
Change-Id: I84f387982f665d7cc11bffe8ec390cc7e7ed5278
The nova CDM builder code and notification handling
code had some inefficiencies when it came to looking
up a hypervisor to get details. The general pattern
used before was:
1. get the minimal hypervisor information by hypervisor_hostname
2. make another query to get the hypervisor details by id
In the notifications case, it was actually three calls because
the first is listing hypervisors to filter client-side by service
host.
This change collapses 1 and 2 above into a single API call
to get the hypervisor by hypervisor_hostname with details
which will include the service (compute) host information
which is what get_compute_node_by_id() was being used for.
Now that nothing is using get_compute_node_by_id it is removed.
There is more work we could do in get_compute_node_by_hostname
if the compute API allowed filtering hypervisors by service
host so a TODO is left for that.
One final thing: the TODO in get_compute_node_by_hostname about
there being more than one hypervisor per compute service host
for vmware vcenter is not accurate - nova's vcenter driver
hasn't supported a host:node 1:M topology like that since the
Liberty release [1]. The only in-tree driver in nova that supports
1:M is the ironic baremetal driver, so the comment is updated.
[1] Ifc17c5049e3ed29c8dd130339207907b00433960
Depends-On: https://review.opendev.org/661785/
Change-Id: I5e0e88d7b2dd1a69117ab03e0e66851c687606da
Some of the methods for retrieving data about instances were
placed at the bottom of nova_helper instead of close to the
other instance-based methods.
Change-Id: I68475883529610e514aa82f1881105ab0cf24ec3
This does two things:
1. Rather than make an API call per server on the host,
get all of the servers in a single API call by
filtering on the host. This requires a bit of refactoring
of the os-hypervisors API results in use, since
get_compute_node_by_name does not include the service
entry and get_compute_node_by_id does not include the
servers entry. A TODO is added to clean that up
with a single call to os-hypervisors once we have the
support in python-novaclient.
2. Pulls get_node_by_uuid() out of the loop.
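Point 1 can be sketched as follows, assuming a novaclient-like `servers.list(search_opts=...)` interface (the function name is illustrative):

```python
def get_instance_list_per_host(nova, host):
    """Sketch: fetch every server on a compute host in a single API
    call instead of one call per server."""
    # all_tenants is needed so instances owned by other projects
    # on the host are included as well.
    return nova.servers.list(search_opts={'host': host,
                                          'all_tenants': True})
```

Filtering server-side on the host collapses N per-server calls into one, which is the main efficiency win of this change.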
A test is added for the nova_helper get_instance_list method
since one did not exist before.
The fake compute node mocks in test_nova_cdmc_execute are
also cleaned up since, as noted above, get_compute_node_by_name
and get_compute_node_by_id don't both return all the details.
Change-Id: Ifd9f83c2f399d4c1765b0c520f4d5a62ad0f5fbd
Fixes the list of required metrics for a datasource when testing
whether those metrics exist in the metric map.
Change-Id: I19b7408a98893bc942c32edb09f1b3798ec8dc79
With change Id34938c7bb8a5ca934d997e52cac3b365414c006
we require nova API version 2.56 or greater so we can
remove the compatibliity check in the
watcher_non_live_migrate_instance method.
The _check_nova_api_version method is left in place
for future compatibility checks.
Change-Id: I69040fbc13b03d90b9687c0d11104d4a5bae51d3
The [nova_client]/api_version defaults to 2.56 since
change Idd6ebc94f81ad5d65256c80885f2addc1aaeaae1. There
is compatibility code for that change but if 2.56 is
not available watcher_non_live_migrate_instance will
still fail if a destination host is used.
Since 2.56 has been available since the Queens version of
nova, it should be reasonable to require at least that
version of nova in order to use Watcher.
This adds code which enforces the minimum version along
with a release note and "watcher-status upgrade check"
check method.
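The enforcement boils down to a numeric version comparison; a rough sketch follows (Watcher's real check goes through novaclient's api_versions helpers rather than bare tuples, and the function name here is hypothetical):

```python
def check_min_nova_api_version(current, minimum=(2, 56)):
    """Sketch of the minimum-microversion enforcement."""
    # Compare numerically, not lexically: the string "2.7" sorts
    # *after* "2.56" even though the 2.7 microversion is older.
    parsed = tuple(int(part) for part in current.split('.'))
    if parsed < minimum:
        raise RuntimeError(
            'nova API version %d.%d or greater is required, got %s'
            % (minimum[0], minimum[1], current))
```

The upgrade check runs the same comparison at deploy time so operators learn about an incompatible nova before Watcher fails mid-action-plan.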
Note that it's kind of weird for watcher to have a config
option like nova_client.api_version since compute API
microversions are per API request even though novaclient
is constructed with the single configured version. It should
really be something the client (watcher in this case) determines
using version discovery, gracefully enabling features if
the required nova API version is available, but that's a bigger
change.
Change-Id: Id34938c7bb8a5ca934d997e52cac3b365414c006
MetricNotAvailable and NoDatasourceAvailable make it possible to
differentiate between having no datasources configured and a required
metric being unavailable from the datasource. Both exceptions have
comments so that their use case is clear.
The input validation of the get_backend method in the datasource
manager is improved.
Additional logging makes it possible to identify which metric caused
an available datasource to be discarded.
Tests are updated to validate the correct functionality of the new
exceptions.
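The distinction between the two exceptions, and the new per-metric logging, can be sketched as below; the dict-based `get_backend` signature is an illustrative simplification of the datasource manager, not its real interface:

```python
import logging

LOG = logging.getLogger(__name__)


class NoDatasourceAvailable(Exception):
    """No datasource is configured at all."""


class MetricNotAvailable(Exception):
    """No configured datasource provides a required metric."""


def get_backend(datasources, required_metrics):
    """Sketch: datasources maps a datasource name to the set of
    metrics it provides."""
    if not datasources:
        raise NoDatasourceAvailable()
    for name, metrics in datasources.items():
        missing = [m for m in required_metrics if m not in metrics]
        if missing:
            # Log which metric caused this datasource to be
            # discarded, per the logging improvement above.
            LOG.warning("datasource %s discarded; missing metrics: %s",
                        name, missing)
            continue
        return name
    raise MetricNotAvailable(required_metrics)
```

Separating the two failure modes lets callers distinguish a deployment misconfiguration (no datasources at all) from a strategy whose metric requirements simply cannot be met.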
Change-Id: I512976cce2401dbcd249d42686b78843e111a0e7