# Deploy

Shipping a change to the VM. Manual; never auto.
There is no GitHub Actions / GitLab CI yet. Deploys are fully manual: push to GitLab, then SSH into the VM and rebuild the affected container. Never deploy without an explicit ask from the operator.
## Hosts
| What | Where |
|---|---|
| VM | `ubuntu@13.62.60.156` (VPN required; see access) |
| Repo on VM | `~/heartbeat-dashboard` |
| Git remote | `gitlab.stchl.eu:Ivan_Soko1ov/satchel-heartbeat.git` (`main`) |
| Public URL | https://heartbeat.stchl.eu (Caddy terminates TLS, proxies to `web:3000`) |
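To avoid retyping the IP, a host alias can go in `~/.ssh/config` (a convenience sketch, not part of the repo; the alias name `heartbeat-vm` is made up, and the VPN still has to be up first):

```
Host heartbeat-vm
    HostName 13.62.60.156
    User ubuntu
```

With that in place, `ssh heartbeat-vm` works anywhere the examples on this page say `ssh ubuntu@13.62.60.156`.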
## Standard "ship a change" flow
```shell
# 1. local — commit & push
git add <files> && git commit -m "..." && git push

# 2. VM — pull + rebuild only what changed
ssh ubuntu@13.62.60.156 'cd ~/heartbeat-dashboard && git pull --ff-only && \
  docker compose -f docker-compose.prod.yml --env-file .env up -d --build <service>'
```

## What to rebuild
Match the touched path to the right rebuild target:
| You touched | Rebuild |
|---|---|
| `dashboard/**` | `web` |
| `dashboard/content/docs/**` (this site) | `web` |
| `api/**` | `api` |
| `api/ingest_lib.py` (dlt registry edit) | `api` (then re-run the pipeline manually if backfilling a new source; see sources › how to land a new source) |
| `docker-compose.prod.yml`, `.env` | `up -d` (no `--build`) |
| `db/metrics/**` | `./bin/deploy.sh` (build + psql + `compute_metrics` + `refresh_benchmarks`) |
| `api/audit/cuts/**`, `db/audit_schema.sql` | `./bin/deploy.sh` (or at minimum: build `api` + apply schema + `refresh_audit`) |
| Anything spanning the above | `./bin/deploy.sh` |
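The table above can be sketched as a small path-matching helper. This is hypothetical, not part of the repo — the function name and the `up-only` sentinel are made up; `./bin/deploy.sh` stays the catch-all:

```shell
#!/usr/bin/env bash
# rebuild_target: map a touched path to the thing to rebuild.
# Hypothetical helper mirroring the table above; not part of the repo.
rebuild_target() {
  case "$1" in
    # deploy.sh paths must come before the broader api/* and db-ish globs
    db/metrics/*|api/audit/cuts/*|db/audit_schema.sql) echo "deploy.sh" ;;
    dashboard/*)                  echo "web" ;;
    api/*)                        echo "api" ;;
    docker-compose.prod.yml|.env) echo "up-only" ;;   # up -d, no --build
    *)                            echo "deploy.sh" ;; # anything spanning the above
  esac
}

rebuild_target "dashboard/content/docs/deploy.md"   # → web
rebuild_target "api/ingest_lib.py"                  # → api
rebuild_target "db/metrics/revenue.sql"             # → deploy.sh
```

The order of the `case` arms matters: the `./bin/deploy.sh` paths are checked first so that `api/audit/cuts/**` does not fall through to the plain `api/*` rebuild.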
A bare `git pull` is not enough for `dashboard/` or `api/` changes — both run from baked Docker images, so they need `--build`.
## Full redeploy
```shell
ssh ubuntu@13.62.60.156 'cd ~/heartbeat-dashboard && ./bin/deploy.sh'
```

This does:

- `git pull --ff-only`.
- `docker compose build api web marts-db`.
- `docker compose up -d`.
- Assemble `db/metrics/` via `db/build_metrics.sh` and pipe it into psql.
- Run `compute_metrics` — writes the first `metric_history` row for any newly added metric.
- Run `refresh_benchmarks` — recomputes μ/σ for `rolling_stat` metrics.
- Run `refresh_audit` — syncs `audit_cut_registry` from `api/audit/cuts/` and refreshes snapshots.
It does not run ingest pipelines — those run on the cron schedule described in cron.
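Pieced together from that step list, `bin/deploy.sh` plausibly looks something like the sketch below. This is a reconstruction, not the real script: the exact `compute_metrics` / `refresh_benchmarks` / `refresh_audit` invocations are guesses, and the `DRY_RUN` switch (defaulting to on here, so the sketch only prints its plan) is an addition for safe review:

```shell
#!/usr/bin/env bash
# Sketch of bin/deploy.sh, reconstructed from the step list above.
# DRY_RUN=1 (the default here) only prints the plan; DRY_RUN=0 would execute.
set -euo pipefail

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

compose="docker compose -f docker-compose.prod.yml --env-file .env"

run git pull --ff-only
run $compose build api web marts-db
run $compose up -d
# assemble db/metrics/ and pipe into psql (script name from the list above,
# invocation assumed)
run ./db/build_metrics.sh
run $compose exec api python -m scripts.compute_metrics     # assumed entrypoint
run $compose exec api python -m scripts.refresh_benchmarks  # assumed entrypoint
run $compose exec api python -m scripts.refresh_audit       # assumed entrypoint
```

The dry-run wrapper is just a reviewing aid; the real script presumably executes the commands directly.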
## Verifying a deploy
```shell
# every container should be Up (healthy), including caddy
ssh ubuntu@13.62.60.156 \
  'cd ~/heartbeat-dashboard && docker compose -f docker-compose.prod.yml ps'

# open the UI directly
open https://heartbeat.stchl.eu

# logs if something looks off
ssh ubuntu@13.62.60.156 \
  'cd ~/heartbeat-dashboard && docker compose -f docker-compose.prod.yml logs --tail=100 web'

# Caddy logs (TLS issues, 502s)
ssh ubuntu@13.62.60.156 \
  'cd ~/heartbeat-dashboard && docker compose -f docker-compose.prod.yml logs --tail=100 caddy'
```

See access for the full tunnel + diagnostics toolkit.
## Long-running ingest backfills
dlt backfills (especially the first one for a new source — `webbank` took ~2.5 h, 12 GB) must not ride on the `api` container's lifecycle. Use a one-off container on the `dlt_data` volume:
```shell
ssh ubuntu@13.62.60.156
cd ~/heartbeat-dashboard
docker compose -f docker-compose.prod.yml run --rm --no-deps \
  --name heartbeat_ingest api \
  python -m scripts.ingest --source <name>
```

The reasons (a parallel `docker compose up -d --build api` will wipe mid-flight extract state; a VM reboot will too) are documented in decisions › 2026-05-04. The hourly incremental cron stays on `docker exec heartbeat_api` because it's seconds, not hours.
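One way to make the rebuild/backfill collision structurally impossible — a hypothetical guard, not something the repo currently has — is for both `deploy.sh` and any backfill wrapper to take the same `flock` before doing anything. The lock-file path here is made up:

```shell
#!/usr/bin/env bash
# Hypothetical mutual-exclusion guard: if both deploy.sh and the backfill
# wrapper start with this, a rebuild can't wipe a backfill mid-flight.
# (It does not help against a VM reboot; the lock path is made up.)
set -euo pipefail

exec 9>"/tmp/heartbeat-deploy.lock"
if ! flock -n 9; then
  echo "another deploy/backfill holds the lock; try again later" >&2
  exit 1
fi
echo "lock acquired"
# ...docker compose run / deploy steps would go here, under the lock...
```

The lock is released automatically when the script exits, so there is nothing to clean up after a crash.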