# Watcher Visio
OpenStack Watcher dashboard — monitor your cluster and visualise audit recommendations.
Watcher Visio is a web dashboard for OpenStack operators. It shows region and host counts, physical/virtual CPU and RAM usage, VM statistics, top flavors, and OpenStack Watcher audit results with migration recommendations and CPU load charts per host. Data is pulled from OpenStack (SDK and Watcher API) and Prometheus (node_exporter, libvirt, placement metrics).
## Table of contents
- Features
- Configuration
- Running locally
- Running with Docker
- Frontend build
- API
- Project structure
- Architecture
- Running tests
- CI
## Features
- Cluster overview — compute region name, total hosts, and aggregate resource usage
- CPU & RAM — physical and virtual CPU/RAM usage and overcommit ratios (from Prometheus)
- VM statistics — total and active VM counts
- Top flavors — most used OpenStack flavors
- Watcher audits — list of audits with action plans and migration recommendations
- CPU load charts — per-host current and projected CPU load for audit actions (Chart.js)
- Mock mode — `USE_MOCK_DATA=true` for local development without OpenStack or Prometheus
## Configuration

### Environment variables
Copy `.env.example` to `.env` and set values as needed. For Docker Compose you can use `env_file: [.env]` in `docker-compose.yml`.
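As a sketch, the Compose service entry could wire the env file in like this (the `watcher-visio` service name matches the test command later in this README; the `build` and port mapping are assumptions, not copied from the repo's actual `docker-compose.yml`):

```yaml
services:
  watcher-visio:
    build: .
    ports:
      - "8080:8000"   # assumption: host 8080 -> Django dev server on 8000
    env_file:
      - .env
```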
| Variable | Description |
|---|---|
| `PROMETHEUS_URL` | Prometheus base URL (e.g. `http://10.0.0.1:9090/`). |
| `OPENSTACK_CLOUD` | Cloud name from `clouds.yaml` (e.g. `distlab`). |
| `OPENSTACK_REGION_NAME` | OpenStack region (e.g. `cl2k1distlab`). |
| `USE_MOCK_DATA` | Set to `true`/`1`/`yes` to serve mock data (no OpenStack/Prometheus needed). Useful for local development. |
| `SECRET_KEY` | Django secret key; override in production. |
| `WATCHER_ENDPOINT_NAME` | Watcher service type (default: `infra-optim`). |
| `WATCHER_INTERFACE_NAME` | Watcher API interface (default: `public`). |
Defaults for Prometheus, OpenStack, and Watcher are in `watcher_visio/settings.py`. Cache TTLs: `DASHBOARD_CACHE_TTL` (120 s) for stats/audits, `SOURCE_STATUS_CACHE_TTL` (30 s) for source status; override in settings if needed.
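For illustration, an override in `watcher_visio/settings.py` could look like this. Reading the values from environment variables is a hypothetical pattern, not necessarily what the project does; the defaults match the documented values.

```python
import os

# Cache TTLs in seconds. Defaults match the documented values
# (120 s for stats/audits, 30 s for source status).
DASHBOARD_CACHE_TTL = int(os.environ.get("DASHBOARD_CACHE_TTL", "120"))
SOURCE_STATUS_CACHE_TTL = int(os.environ.get("SOURCE_STATUS_CACHE_TTL", "30"))
```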
### OpenStack (`clouds.yaml`)
Authentication uses OpenStack's standard `clouds.yaml`. The cloud name must match `OPENSTACK_CLOUD`. Place `clouds.yaml` in the project root (or a standard OpenStack config location). Do not commit real credentials; use a local or CI-specific file and keep production secrets out of the repo.
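A minimal `clouds.yaml` following the standard OpenStack layout might look like the sketch below. The `distlab` and `cl2k1distlab` names come from the examples in the table above; all other values are placeholders.

```yaml
clouds:
  distlab:                      # must match OPENSTACK_CLOUD
    region_name: cl2k1distlab   # must match OPENSTACK_REGION_NAME
    auth:
      auth_url: https://keystone.example.com:5000/v3   # placeholder
      username: CHANGE_ME
      password: CHANGE_ME
      project_name: CHANGE_ME
      user_domain_name: Default
      project_domain_name: Default
```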
## Running locally

1. Create a virtualenv and install dependencies:

   ```shell
   python -m venv .venv
   source .venv/bin/activate  # or .venv\Scripts\activate on Windows
   pip install -r requirements.txt
   ```

2. Optionally build the frontend CSS (see [Frontend build](#frontend-build)).

3. Configure `clouds.yaml` and the environment (e.g. via `.env`, or export `PROMETHEUS_URL`, `OPENSTACK_CLOUD`, `OPENSTACK_REGION_NAME`). For development without OpenStack/Prometheus, set `USE_MOCK_DATA=true`.

4. Run migrations and start the server:

   ```shell
   python manage.py migrate
   python manage.py runserver
   ```

   Open http://127.0.0.1:8000/ (or the port shown). With `USE_MOCK_DATA=true`, the dashboard is filled with mock data; otherwise the page loads a skeleton and fetches data from the API.
## Running with Docker

Production-like (built image, no volume mount):

```shell
docker compose up --build
```

The app is available at http://localhost:8080. The healthcheck hits `GET /`.

Development (mounted code, mock data, no OpenStack/Prometheus):

```shell
docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build
```

This uses `USE_MOCK_DATA=true` and mounts the project directory for live code changes. Build the CSS before building the image so `static/css/output.css` is present, i.e. run `npm run build` locally before `docker compose ... up --build`.
## Frontend build

CSS is built with Tailwind and DaisyUI (see `package.json`).

- Install: `npm install`
- One-off build: `npm run build`
- Watch mode: `npm run dev`

Source: `static/css/main.css`. Output: `static/css/output.css`. For Docker, run `npm run build` before building the image so the image includes `output.css`.
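The npm scripts presumably wrap the Tailwind CLI; a sketch of the relevant `package.json` entries (exact flags are an assumption, only the input/output paths come from this README):

```json
{
  "scripts": {
    "build": "tailwindcss -i static/css/main.css -o static/css/output.css --minify",
    "dev": "tailwindcss -i static/css/main.css -o static/css/output.css --watch"
  }
}
```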
## API
| Endpoint | Description |
|---|---|
| `GET /` | Dashboard page. With `USE_MOCK_DATA=true`, rendered with mock context; otherwise a skeleton is rendered and data is loaded via the API. |
| `GET /api/stats/` | JSON: region, pCPU/vCPU, pRAM/vRAM, VM stats, top flavors. Cached for `DASHBOARD_CACHE_TTL` seconds (see settings). |
| `GET /api/audits/` | JSON: `{ "audits": [ ... ] }` — list of Watcher audits with migrations and chart data (host labels, `cpu_current`, `cpu_projected`). Same cache TTL. |
| `GET /api/source-status/` | JSON: status of the data sources (Prometheus, OpenStack) — `ok`/`error`/`mock` per source. Cached for `SOURCE_STATUS_CACHE_TTL` seconds (see settings). |
## Project structure
| Path | Description |
|---|---|
| `watcher_visio/` | Django project: settings, root URL config, WSGI/ASGI. |
| `dashboard/` | Main app: views (`index`, `api_stats`, `api_audits`), `openstack_utils` (connect, flavors, audits), `prometheus_utils` (queries), `mock_data`, templatetags (`mathfilters`), tests. |
| `templates/`, `static/` | HTML templates and static assets (Tailwind output, Chart.js, etc.). |
| `clouds.yaml` (optional, project root) | OpenStack config; not in the repo — do not commit production secrets. |
| `Dockerfile`, `docker-entrypoint.sh` | Image build and entrypoint (migrate, then run server). |
| `docker-compose.yml`, `docker-compose.dev.yml` | Compose files: base (prod-like) and dev override (mount + mock). |
## Architecture
```mermaid
flowchart LR
    subgraph sources [Data sources]
        OS[OpenStack SDK]
        Prom[Prometheus]
        Watcher[Watcher API]
    end
    subgraph app [Django]
        Views[views]
        Cache[(Cache)]
    end
    subgraph out [Output]
        HTML[HTML]
        API[JSON API]
    end
    subgraph frontend [Frontend]
        Chart[Chart.js]
    end
    OS --> Views
    Prom --> Views
    Watcher --> Views
    Views --> Cache
    Cache --> Views
    Views --> HTML
    Views --> API
    HTML --> Chart
    API --> Chart
```
OpenStack (region, servers, flavors), Prometheus (metrics), and the Watcher API (audits, action plans, actions) are queried in Django views; results are cached. The dashboard page is either rendered with mock/skeleton data or loads stats and audits via `/api/stats/` and `/api/audits/`; Chart.js draws the CPU and other charts.
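The read-through caching described above can be sketched in plain Python. This is an illustration of the cache-aside pattern with a hand-rolled TTL cache; the real views presumably use Django's cache framework instead.

```python
import time


class TTLCache:
    """Minimal TTL cache illustrating the cache-aside pattern."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: drop and report a miss
            return None
        return value

    def set(self, key, value, ttl):
        self._store[key] = (time.monotonic() + ttl, value)


def get_stats(cache, fetch, ttl=120):
    """Return cached stats, querying the backends only on a cache miss."""
    stats = cache.get("dashboard_stats")
    if stats is None:
        stats = fetch()  # e.g. query OpenStack + Prometheus
        cache.set("dashboard_stats", stats, ttl)
    return stats


# Demonstrate that the expensive fetch runs only once within the TTL.
calls = []

def fake_fetch():
    calls.append(1)
    return {"hosts": 12}

cache = TTLCache()
get_stats(cache, fake_fetch)
get_stats(cache, fake_fetch)
print(len(calls))  # 1 — the second call is served from the cache
```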
## Running tests

From the project root (with Django and dependencies installed, e.g. in a virtualenv):

```shell
python manage.py test dashboard
```

Run a specific test module:

```shell
python manage.py test dashboard.tests.test_mathfilters
```

### Running tests in Docker

Use the dev compose file so the project directory is mounted; the container then runs tests against your current code (no image rebuild needed):

```shell
docker compose -f docker-compose.yml -f docker-compose.dev.yml run --rm watcher-visio python3 manage.py test dashboard
```

With the base compose file only, the container uses the code baked into the image at build time. After code or test changes, either rebuild the image or use the dev override above so tests see the latest files.
## CI

Pipelines are in `.gitea/workflows/`:

- `ci.yml` — on push/PR to `main` and `develop`: set up Python 3.12, install dependencies, run Ruff lint, `python manage.py test dashboard`, and a Bandit security check.
- `docker-build.yml` — on push to `main`: build the Docker image, push it to the Gitea Container Registry, and create a release with a version tag.
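For reference, a `ci.yml` covering the steps above could be shaped roughly like this (Gitea workflows use GitHub-Actions-style syntax; the job layout, action versions, and tool invocations here are assumptions, not the repo's actual file):

```yaml
name: ci
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt ruff bandit
      - run: ruff check .
      - run: python manage.py test dashboard
      - run: bandit -r .
```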