[docs] apply sphinx-lint to docs

This change corrects the sphinx-lint issues detected in the existing
docs and updates the contributor devstack guide to call out
required and advanced configuration.

Mostly the changes were simple fixes like replacing the configurable
default role with explicit literal syntax: `term` -> ``term``.

Some inline Note: comments have been promoted to .. note:: blocks,
and literal blocks (::) have been promoted to .. code-block:: <language>
directives.
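
For example, the typical rewrite looked like this (an illustrative
sketch, not a quote of any single file):

    Before:

        Note: this may take a while.

        To enable the plugin add the following to `local.conf`::

            enable_plugin watcher https://opendev.org/openstack/watcher

    After:

        .. note::

           this may take a while.

        To enable the plugin add the following to ``local.conf``

        .. code-block:: bash

            enable_plugin watcher https://opendev.org/openstack/watcher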

Change-Id: I6320c313d22bf542ad407169e6538dc6acf79901
Author: Sean Mooney
Date: 2024-11-08 01:37:58 +00:00
parent 5fadd0de57
commit 1f8d06e075
9 changed files with 215 additions and 157 deletions


@@ -16,7 +16,7 @@ multinode environment to use.
 You can set up the Watcher services quickly and easily using a Watcher
 DevStack plugin. See `PluginModelDocs`_ for information on DevStack's plugin
 model. To enable the Watcher plugin with DevStack, add the following to the
-`[[local|localrc]]` section of your controller's `local.conf` to enable the
+``[[local|localrc]]`` section of your controller's ``local.conf`` to enable the
 Watcher plugin::

     enable_plugin watcher https://opendev.org/openstack/watcher
@@ -32,7 +32,7 @@ Quick Devstack Instructions with Datasources
 Watcher requires a datasource to collect metrics from compute nodes and
 instances in order to execute most strategies. To enable this, a
-`[[local|localrc]]` to setup DevStack for some of the supported datasources
+``[[local|localrc]]`` to set up DevStack for some of the supported datasources
 is provided. These examples specify the minimal configuration parameters to
 get both Watcher and the datasource working but can be expanded as desired.
@@ -41,54 +41,60 @@ Gnocchi
 With the Gnocchi datasource most of the metrics for compute nodes and
 instances will work with the provided configuration but metrics that
-require Ironic such as `host_airflow and` `host_power` will still be
-unavailable as well as `instance_l3_cpu_cache`::
-
-    [[local|localrc]]
-    enable_plugin watcher https://opendev.org/openstack/watcher
-    enable_plugin watcher-dashboard https://opendev.org/openstack/watcher-dashboard
-    enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git
-    CEILOMETER_BACKEND=gnocchi
-    enable_plugin aodh https://opendev.org/openstack/aodh
-    enable_plugin panko https://opendev.org/openstack/panko
-
-    [[post-config|$NOVA_CONF]]
-    [DEFAULT]
-    compute_monitors=cpu.virt_driver
+require Ironic such as ``host_airflow`` and ``host_power`` will still be
+unavailable as well as ``instance_l3_cpu_cache``
+
+.. code-block:: ini
+
+    [[local|localrc]]
+    enable_plugin watcher https://opendev.org/openstack/watcher
+    enable_plugin watcher-dashboard https://opendev.org/openstack/watcher-dashboard
+    enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git
+    enable_plugin aodh https://opendev.org/openstack/aodh
+    enable_plugin panko https://opendev.org/openstack/panko
+    CEILOMETER_BACKEND=gnocchi
+
+    [[post-config|$NOVA_CONF]]
+    [DEFAULT]
+    compute_monitors=cpu.virt_driver
 Detailed DevStack Instructions
 ==============================

 #. Obtain N (where N >= 1) servers (virtual machines preferred for DevStack).
    One of these servers will be the controller node while the others will be
    compute nodes. N is preferably >= 3 so that you have at least 2 compute
    nodes, but in order to stand up the Watcher services only 1 server is
    needed (i.e., no computes are needed if you want to just experiment with
    the Watcher services). These servers can be VMs running on your local
    machine via VirtualBox if you prefer. DevStack currently recommends that
    you use Ubuntu 16.04 LTS. The servers should also have connections to the
    same network such that they are all able to communicate with one another.
-#. For each server, clone the DevStack repository and create the stack user::
-
-    sudo apt-get update
-    sudo apt-get install git
-    git clone https://opendev.org/openstack/devstack.git
-    sudo ./devstack/tools/create-stack-user.sh
+#. For each server, clone the DevStack repository and create the stack user
+
+   .. code-block:: bash
+
+      sudo apt-get update
+      sudo apt-get install git
+      git clone https://opendev.org/openstack/devstack.git
+      sudo ./devstack/tools/create-stack-user.sh

    Now you have a stack user that is used to run the DevStack processes. You
-   may want to give your stack user a password to allow SSH via a password::
-
-    sudo passwd stack
+   may want to give your stack user a password to allow SSH via a password
+
+   .. code-block:: bash
+
+      sudo passwd stack

-#. Switch to the stack user and clone the DevStack repo again::
-
-    sudo su stack
-    cd ~
-    git clone https://opendev.org/openstack/devstack.git
+#. Switch to the stack user and clone the DevStack repo again
+
+   .. code-block:: bash
+
+      sudo su stack
+      cd ~
+      git clone https://opendev.org/openstack/devstack.git

 #. For each compute node, copy the provided `local.conf.compute`_ example file
    to the compute node's system at ~/devstack/local.conf. Make sure the
@@ -111,24 +117,30 @@ Detailed DevStack Instructions
 the HOST_IP value is changed appropriately - i.e., HOST_IP is set to the IP
 address of the controller node.

-Note: if you want to use another Watcher git repository (such as a local
-one), then change the enable plugin line::
-
-    enable_plugin watcher <your_local_git_repo> [optional_branch]
+.. note::
+
+   If you want to use another Watcher git repository (such as a local
+   one), then change the enable plugin line
+
+   .. code-block:: bash
+
+      enable_plugin watcher <your_local_git_repo> [optional_branch]

 If you do this, then the Watcher DevStack plugin will try to pull the
-python-watcherclient repo from <your_local_git_repo>/../, so either make
-sure that is also available or specify WATCHERCLIENT_REPO in the local.conf
+python-watcherclient repo from ``<your_local_git_repo>/../``, so either make
+sure that is also available or specify WATCHERCLIENT_REPO in the ``local.conf``
 file.

-Note: if you want to use a specific branch, specify WATCHER_BRANCH in the
-local.conf file. By default it will use the master branch.
+.. note::
+
+   If you want to use a specific branch, specify WATCHER_BRANCH in the
+   ``local.conf`` file. By default it will use the master branch.

-Note: watcher-api will default run under apache/httpd, set the variable
-WATCHER_USE_MOD_WSGI=FALSE if you do not wish to run under apache/httpd.
-For development environment it is suggested to set WATHCER_USE_MOD_WSGI
-to FALSE. For Production environment it is suggested to keep it at the
-default TRUE value.
+.. note::
+
+   watcher-api will by default run under apache/httpd; set the variable
+   WATCHER_USE_MOD_WSGI=FALSE if you do not wish to run under apache/httpd.
+   For a development environment it is suggested to set WATCHER_USE_MOD_WSGI
+   to FALSE. For a production environment it is suggested to keep it at the
+   default TRUE value.
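
Taken together, the two notes above suggest a development-oriented
``local.conf`` fragment along these lines (a hedged sketch: only
WATCHER_BRANCH and WATCHER_USE_MOD_WSGI come from the text above, and the
branch name is purely illustrative):

.. code-block:: ini

    [[local|localrc]]
    enable_plugin watcher https://opendev.org/openstack/watcher
    # hypothetical branch; substitute one that exists in your repository
    WATCHER_BRANCH=stable/2024.1
    # run watcher-api standalone instead of under apache/httpd
    WATCHER_USE_MOD_WSGI=FALSE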
 #. Start stacking from the controller node::
@@ -136,8 +148,9 @@ Detailed DevStack Instructions
 #. Start stacking on each of the compute nodes using the same command.

-#. Configure the environment for live migration via NFS. See the
-   `Multi-Node DevStack Environment`_ section for more details.
+.. seealso::
+
+   Configure the environment for live migration via NFS. See the
+   `Multi-Node DevStack Environment`_ section for more details.

 .. _local.conf.controller: https://github.com/openstack/watcher/tree/master/devstack/local.conf.controller
 .. _local.conf.compute: https://github.com/openstack/watcher/tree/master/devstack/local.conf.compute
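
Once stacking has finished on every node, a quick sanity check from the
controller can confirm that both Watcher and the datasource came up; a
hedged sketch (command names assume the ``python-watcherclient`` and
``gnocchiclient`` CLIs that the plugins above install):

.. code-block:: bash

    # load admin credentials the DevStack way
    . ~/devstack/openrc admin admin

    # Watcher should answer with its list of optimization goals
    openstack optimize goal list

    # with the Gnocchi datasource enabled, metrics should start appearing
    gnocchi metric list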
@@ -149,60 +162,19 @@ Since deploying Watcher with only a single compute node is not very useful, a
 few tips are given here for enabling a multi-node environment with live
 migration.

-Configuring NFS Server
-----------------------
-
-If you would like to use live migration with shared storage, then the controller
-can serve as the NFS server if needed::
-
-    sudo apt-get install nfs-kernel-server
-    sudo mkdir -p /nfs/instances
-    sudo chown stack:stack /nfs/instances
-
-Add an entry to `/etc/exports` with the appropriate gateway and netmask
-information::
-
-    /nfs/instances <gateway>/<netmask>(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)
-
-Export the NFS directories::
-
-    sudo exportfs -ra
-
-Make sure the NFS server is running::
-
-    sudo service nfs-kernel-server status
-
-If the server is not running, then start it::
-
-    sudo service nfs-kernel-server start
-
-Configuring NFS on Compute Node
--------------------------------
-
-Each compute node needs to use the NFS server to hold the instance data::
-
-    sudo apt-get install rpcbind nfs-common
-    mkdir -p /opt/stack/data/instances
-    sudo mount <nfs-server-ip>:/nfs/instances /opt/stack/data/instances
-
-If you would like to have the NFS directory automatically mounted on reboot,
-then add the following to `/etc/fstab`::
-
-    <nfs-server-ip>:/nfs/instances /opt/stack/data/instances nfs auto 0 0
-
-Edit `/etc/libvirt/libvirtd.conf` to make sure the following values are set::
-
-    listen_tls = 0
-    listen_tcp = 1
-    auth_tcp = "none"
-
-Edit `/etc/default/libvirt-bin`::
-
-    libvirtd_opts="-d -l"
-
-Restart the libvirt service::
-
-    sudo service libvirt-bin restart
+.. note::
+
+   Nova supports live migration with local block storage, so NFS is not
+   required by default and is considered an advanced configuration.
+
+The minimum requirements for live migration are:
+
+- all hostnames are resolvable on each host
+- all hosts have a passwordless ssh key that is trusted by the other hosts
+- all hosts have a known_hosts file that lists each host
+
+If these requirements are met, live migration will be possible. Shared
+storage such as ceph, booting from a cinder volume, or NFS is recommended
+when testing evacuate if you want to preserve VM data.
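
The three requirements listed above can be checked and satisfied from each
host with standard tooling; a hedged sketch (the ``compute-2`` hostname is
purely illustrative):

.. code-block:: bash

    # every hostname must resolve on every host
    getent hosts compute-2

    # create a passwordless key and have each peer trust it
    ssh-keygen -t ecdsa -N '' -f ~/.ssh/id_ecdsa
    ssh-copy-id stack@compute-2

    # pre-populate known_hosts so the first migration is not blocked by a prompt
    ssh-keyscan compute-2 >> ~/.ssh/known_hosts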
 Setting up SSH keys between compute nodes to enable live migration
 ------------------------------------------------------------------
@@ -231,22 +203,91 @@ must exist in every other compute node's stack user's authorized_keys file and
 every compute node's public ECDSA key needs to be in every other compute
 node's root user's known_hosts file.

-Disable serial console
-----------------------
-
-Serial console needs to be disabled for live migration to work.
-
-On both the controller and compute node, in /etc/nova/nova.conf
-
-    [serial_console]
-    enabled = False
-
-Alternatively, in devstack's local.conf:
-
-    [[post-config|$NOVA_CONF]]
-    [serial_console]
-    #enabled=false
+Configuring NFS Server (ADVANCED)
+---------------------------------
+
+If you would like to use live migration with shared storage, then the controller
+can serve as the NFS server if needed
+
+.. code-block:: bash
+
+    sudo apt-get install nfs-kernel-server
+    sudo mkdir -p /nfs/instances
+    sudo chown stack:stack /nfs/instances
+
+Add an entry to ``/etc/exports`` with the appropriate gateway and netmask
+information
+
+.. code-block:: bash
+
+    /nfs/instances <gateway>/<netmask>(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)
+
+Export the NFS directories
+
+.. code-block:: bash
+
+    sudo exportfs -ra
+
+Make sure the NFS server is running
+
+.. code-block:: bash
+
+    sudo service nfs-kernel-server status
+
+If the server is not running, then start it
+
+.. code-block:: bash
+
+    sudo service nfs-kernel-server start
+
+Configuring NFS on Compute Node (ADVANCED)
+------------------------------------------
+
+Each compute node needs to use the NFS server to hold the instance data
+
+.. code-block:: bash
+
+    sudo apt-get install rpcbind nfs-common
+    mkdir -p /opt/stack/data/instances
+    sudo mount <nfs-server-ip>:/nfs/instances /opt/stack/data/instances
+
+If you would like to have the NFS directory automatically mounted on reboot,
+then add the following to ``/etc/fstab``
+
+.. code-block:: bash
+
+    <nfs-server-ip>:/nfs/instances /opt/stack/data/instances nfs auto 0 0
+
+Configuring libvirt to listen on tcp (ADVANCED)
+-----------------------------------------------
+
+.. note::
+
+   By default nova will use ssh as the transport for live migration.
+   If you have a low bandwidth connection you can use tcp instead,
+   however this is generally not recommended.
+
+Edit ``/etc/libvirt/libvirtd.conf`` to make sure the following values are set
+
+.. code-block:: ini
+
+    listen_tls = 0
+    listen_tcp = 1
+    auth_tcp = "none"
+
+Edit ``/etc/default/libvirt-bin``
+
+.. code-block:: ini
+
+    libvirtd_opts="-d -l"
+
+Restart the libvirt service
+
+.. code-block:: bash
+
+    sudo service libvirt-bin restart
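
When libvirt is switched to tcp as above, nova's live migration transport
has to match; a hedged ``local.conf`` sketch (``live_migration_uri`` is an
existing nova option rather than something introduced by this change, and
newer releases may prefer ``live_migration_scheme``):

.. code-block:: ini

    [[post-config|$NOVA_CONF]]
    [libvirt]
    # use tcp instead of the default ssh transport
    live_migration_uri = qemu+tcp://%s/system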
 VNC server configuration
 ------------------------
@@ -254,13 +295,18 @@ VNC server configuration
 The VNC server listening parameter needs to be set to any address so
 that the server can accept connections from all of the compute nodes.

-On both the controller and compute node, in /etc/nova/nova.conf
-
-    vncserver_listen = 0.0.0.0
+On both the controller and compute node, in ``/etc/nova/nova.conf``
+
+.. code-block:: ini
+
+    [vnc]
+    server_listen = "0.0.0.0"

-Alternatively, in devstack's local.conf:
-
-    VNCSERVER_LISTEN=0.0.0.0
+Alternatively, in devstack's ``local.conf``:
+
+.. code-block:: bash
+
+    VNCSERVER_LISTEN="0.0.0.0"
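
After restarting the nova services it is worth confirming that the VNC
server really is bound to all addresses; a hedged check (assumes the
default VNC port range starting at 5900):

.. code-block:: bash

    sudo ss -ltnp | grep -E ':59[0-9][0-9]'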
 Environment final checkup