# Access (/docs/operations/access)

The VM is not reachable without the VPN. The cockpit is bound to
`127.0.0.1:3000` on the VM and is reached via an SSH tunnel. `marts-db`
is published only on the VM's loopback interface by docker-compose, so
it is accessed either via an SSH tunnel or `docker exec`.

## VPN + SSH [#vpn--ssh]

VPN profile: `Ivan_Sokolov_TG` (34.89.201.118). Without it the VM
does not respond.

SSH alias is configured in `~/.ssh/config` as `satchel-vm` →
`ubuntu@13.62.60.156`. Sanity check:

```bash
ssh satchel-vm 'uptime'
```
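For reference, a minimal `~/.ssh/config` entry backing the alias might look like this (the `IdentityFile` path is an assumption; substitute the key you actually use):

```
Host satchel-vm
  HostName 13.62.60.156
  User ubuntu
  # assumption: adjust to your actual key
  IdentityFile ~/.ssh/id_ed25519
```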

If `uptime` does not return, connect the VPN first, then retry.

## Open the dashboard in a browser [#open-the-dashboard-in-a-browser]

Local port `3000` is often already in use on a laptop, so tunnel to local port `3030`:

```bash
ssh -fNT -L 3030:localhost:3000 satchel-vm
open http://localhost:3030
# kill the tunnel:
pkill -f "ssh.*3030:localhost:3000"
```

## psql to marts-db [#psql-to-marts-db]

Through an SSH tunnel:

```bash
ssh -fNT -L 5434:localhost:5434 satchel-vm
psql -h localhost -p 5434 -U heartbeat heartbeat_marts
```

Or directly inside the container on the VM:

```bash
ssh satchel-vm 'docker exec -it heartbeat_marts psql -U heartbeat -d heartbeat_marts'
```

## Container status / logs [#container-status--logs]

```bash
# all four containers should be Up (healthy)
ssh satchel-vm 'cd ~/heartbeat-dashboard && \
  docker compose -f docker-compose.prod.yml ps'

# tail logs
ssh satchel-vm 'cd ~/heartbeat-dashboard && \
  docker compose -f docker-compose.prod.yml logs --tail=100 web'
ssh satchel-vm 'cd ~/heartbeat-dashboard && \
  docker compose -f docker-compose.prod.yml logs --tail=100 api'
```
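For a quick pass/fail instead of eyeballing the `ps` table, Compose v2 can emit JSON lines that are easy to grep. A minimal sketch; the `count_healthy` helper is hypothetical, and the `"Health"` field name should be verified against your Compose version:

```shell
# hypothetical helper: count containers reporting "healthy" in
# `docker compose ps --format json` (JSON-lines) output
count_healthy() { grep -c '"Health":"healthy"'; }

# demo on sample lines; on the VM, pipe the real output instead:
#   ssh satchel-vm 'cd ~/heartbeat-dashboard && \
#     docker compose -f docker-compose.prod.yml ps --format json' | count_healthy
printf '%s\n' \
  '{"Name":"web","State":"running","Health":"healthy"}' \
  '{"Name":"api","State":"running","Health":"healthy"}' \
  | count_healthy   # prints 2
```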

## Manual ingest run [#manual-ingest-run]

For ad-hoc runs and debugging only; the hourly cron handles routine refreshes:

```bash
ssh satchel-vm 'cd ~/heartbeat-dashboard && \
  docker compose -f docker-compose.prod.yml exec -T api \
  python -m scripts.ingest --source <name>'
```

For a long-running first-time backfill use the one-off container
pattern documented in
[deploy › long-running ingest backfills](/docs/operations/deploy#long-running-ingest-backfills).

## Re-run compute manually [#re-run-compute-manually]

After tuning thresholds or fixing a metric file, force an
out-of-schedule compute:

```bash
# every metric
ssh satchel-vm 'cd ~/heartbeat-dashboard && \
  docker compose -f docker-compose.prod.yml exec -T api \
  python3 scripts/compute_metrics.py'

# one metric
ssh satchel-vm 'cd ~/heartbeat-dashboard && \
  docker compose -f docker-compose.prod.yml exec -T api \
  python3 scripts/compute_metrics.py --metric <metric_id>'

# refresh benchmarks
ssh satchel-vm 'cd ~/heartbeat-dashboard && \
  docker compose -f docker-compose.prod.yml exec -T api \
  python -m api.scripts.refresh_benchmarks'
```

## Where things live on the VM [#where-things-live-on-the-vm]

| What                          | Path                                                                                   |
| ----------------------------- | -------------------------------------------------------------------------------------- |
| Repo                          | `/home/ubuntu/heartbeat-dashboard`                                                     |
| Cluster A schema dumps        | `/home/ubuntu/postgres/*.sql` (aml, webbank, vss, ...)                                 |
| Cluster B (SEPA) schema dumps | `/home/ubuntu/postgres-sepa/*.sql`                                                     |
| dlt working dir               | named volume `dlt_data` mounted at `/var/dlt` (`DLT_PIPELINES_DIR=/var/dlt/pipelines`) |
