
# Deploy

Shipping a change to the VM. Manual; never auto.

There is no GitHub Actions / GitLab CI pipeline yet. Deploys are fully manual: push to GitLab, then SSH into the VM and rebuild the affected container. Never deploy without an explicit ask from the operator.

## Hosts

| What | Where |
| --- | --- |
| VM | `ubuntu@13.62.60.156` (VPN required — see access) |
| Repo on VM | `~/heartbeat-dashboard` |
| Git remote | `gitlab.stchl.eu:Ivan_Soko1ov/satchel-heartbeat.git` (`main`) |
| Public URL | https://heartbeat.stchl.eu (Caddy terminates TLS, proxies to `web:3000`) |
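The host details above can be folded into an SSH config entry so the commands on this page shorten to `ssh heartbeat-vm '…'`. A sketch (the `heartbeat-vm` alias is made up; the VPN still has to be up):

```
Host heartbeat-vm
    HostName 13.62.60.156
    User ubuntu
```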

## Standard "ship a change" flow

```bash
# 1. local — commit & push
git add <files> && git commit -m "..." && git push

# 2. VM — pull + rebuild only what changed
ssh ubuntu@13.62.60.156 'cd ~/heartbeat-dashboard && git pull --ff-only && \
  docker compose -f docker-compose.prod.yml --env-file .env up -d --build <service>'
```

## What to rebuild

Match the touched path to the right rebuild target:

| You touched | Rebuild |
| --- | --- |
| `dashboard/**` | `web` |
| `dashboard/content/docs/**` (this site) | `web` |
| `api/**` | `api` |
| `api/ingest_lib.py` (dlt registry edit) | `api` (then re-run the pipeline manually if backfilling a new source — see sources › how to land a new source) |
| `docker-compose.prod.yml`, `.env` | `up -d` (no `--build`) |
| `db/metrics/**` | `./bin/deploy.sh` (build + psql + compute_metrics + refresh_benchmarks) |
| `api/audit/cuts/**`, `db/audit_schema.sql` | `./bin/deploy.sh` (or at minimum: build `api` + apply schema + refresh_audit) |
| Anything spanning the above | `./bin/deploy.sh` |

A bare `git pull` is not enough for `dashboard/` or `api/` changes — both run from baked Docker images, so they need `--build`.
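The table's mapping can be sketched as a shell function (a hypothetical helper, not something that exists in the repo). Pattern order matters: the more specific audit paths have to match before the `api/*` catch-all.

```bash
# Hypothetical helper mirroring the table above: given a changed path,
# print the rebuild target. Specific patterns precede the api/* catch-all.
rebuild_target() {
  case "$1" in
    api/audit/cuts/*|db/audit_schema.sql) echo './bin/deploy.sh' ;;
    db/metrics/*)                         echo './bin/deploy.sh' ;;
    docker-compose.prod.yml|.env)         echo 'up -d (no --build)' ;;
    dashboard/*)                          echo 'web' ;;
    api/*)                                echo 'api' ;;
    *)                                    echo './bin/deploy.sh' ;;  # spanning / unknown
  esac
}

rebuild_target dashboard/content/docs/deploy.md   # -> web
rebuild_target api/ingest_lib.py                  # -> api
rebuild_target db/metrics/revenue.sql             # -> ./bin/deploy.sh
```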

## Full redeploy

```bash
ssh ubuntu@13.62.60.156 'cd ~/heartbeat-dashboard && ./bin/deploy.sh'
```

This does:

  1. `git pull --ff-only`.
  2. `docker compose build api web marts-db`.
  3. `docker compose up -d`.
  4. Assemble `db/metrics/` via `db/build_metrics.sh` and pipe it into `psql`.
  5. Run `compute_metrics` — writes the first `metric_history` row for any newly added metric.
  6. Run `refresh_benchmarks` — recomputes μ/σ for `rolling_stat` metrics.
  7. Run `refresh_audit` — syncs `audit_cut_registry` from `api/audit/cuts/` and refreshes snapshots.

It does not run ingest pipelines — those run on the cron schedule described in cron.
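For orientation, the sequence above can be sketched as a script. This is a dry-run reconstruction from the step list, not the real `bin/deploy.sh` (the exact flags and how compute_metrics / refresh_* are invoked are assumptions); `step()` only numbers and prints each command, so nothing real executes.

```bash
# Dry-run sketch of the deploy sequence described above.
N=0
step() { N=$((N+1)); echo "$N. $*"; }

step git pull --ff-only
step docker compose -f docker-compose.prod.yml --env-file .env build api web marts-db
step docker compose -f docker-compose.prod.yml --env-file .env up -d
step "db/build_metrics.sh | psql"   # assemble db/metrics/ and load it
step compute_metrics                # seed metric_history for new metrics
step refresh_benchmarks             # recompute mu/sigma for rolling_stat metrics
step refresh_audit                  # sync audit_cut_registry, refresh snapshots
```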

## Verifying a deploy

```bash
# every container should be Up (healthy), including caddy
ssh ubuntu@13.62.60.156 \
  'cd ~/heartbeat-dashboard && docker compose -f docker-compose.prod.yml ps'

# open the UI directly
open https://heartbeat.stchl.eu

# logs if something looks off
ssh ubuntu@13.62.60.156 \
  'cd ~/heartbeat-dashboard && docker compose -f docker-compose.prod.yml logs --tail=100 web'

# Caddy logs (TLS issues, 502s)
ssh ubuntu@13.62.60.156 \
  'cd ~/heartbeat-dashboard && docker compose -f docker-compose.prod.yml logs --tail=100 caddy'
```

See access for the full tunnel + diagnostics toolkit.
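If eyeballing the `ps` output gets old, the health check can be scripted. A sketch, assuming a Compose version that supports `ps --format json` (one JSON object per line) and that `jq` is installed; shown here against a canned sample rather than a live Docker socket, and the container names in the sample are invented:

```bash
# Flag any container that is not running (and healthy, if it has a healthcheck).
# In real use, replace the canned sample with:
#   docker compose -f docker-compose.prod.yml ps --format json
sample='{"Name":"heartbeat_web","State":"running","Health":"healthy"}
{"Name":"heartbeat_caddy","State":"exited","Health":""}'

bad=$(printf '%s\n' "$sample" |
  jq -r 'select(.State != "running" or (.Health != "healthy" and .Health != "")) | .Name')
echo "unhealthy: ${bad:-none}"   # -> unhealthy: heartbeat_caddy
```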

## Long-running ingest backfills

dlt backfills (especially the first one for a new source — webbank took ~2.5 h, 12 GB) must not ride on the `api` container's lifecycle. Use a one-off container on the `dlt_data` volume:

```bash
ssh ubuntu@13.62.60.156
cd ~/heartbeat-dashboard
docker compose -f docker-compose.prod.yml run --rm --no-deps \
  --name heartbeat_ingest api \
  python -m scripts.ingest --source <name>
```

The reasons (a parallel `docker compose up -d --build api` will wipe mid-flight extract state; a VM reboot will too) are documented in decisions › 2026-05-04. The hourly incremental cron stays on `docker exec heartbeat_api` because it's seconds, not hours.
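One related wrinkle: `docker compose run` stays attached to your terminal, so a dropped SSH session can still take the backfill down with it. A hedged sketch of detaching the job with `nohup` (tmux or screen work just as well), demonstrated with a stand-in command instead of the real multi-hour pipeline:

```bash
# Stand-in for the long docker compose run command: detach with nohup, log to
# a file, then tail the log from any later session.
nohup sh -c 'echo "backfill step 1"; echo "backfill done"' \
  > /tmp/heartbeat_ingest.log 2>&1 &
wait $!                               # demo only; in real use, just disconnect
tail -n 1 /tmp/heartbeat_ingest.log   # -> backfill done
```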