Compare commits
7 Commits
master
...
b66e38ba72
| Author | SHA1 | Date | |
|---|---|---|---|
|
b66e38ba72
|
|||
|
8e5606d4bb
|
|||
|
319af90bd4
|
|||
|
40024cd370
|
|||
|
0d45e9f9d6
|
|||
|
cae1663526
|
|||
|
8bc4eee38e
|
@@ -19,7 +19,7 @@ You may receive:
|
||||
## Audit Log Structure
|
||||
|
||||
Logs are shipped to Loki via promtail. Audit events use these labels:
|
||||
- `hostname` - hostname
|
||||
- `host` - hostname
|
||||
- `systemd_unit` - typically `auditd.service` for audit logs
|
||||
- `job` - typically `systemd-journal`
|
||||
|
||||
@@ -36,7 +36,7 @@ Audit log entries contain structured data:
|
||||
|
||||
Find SSH logins and session activity:
|
||||
```logql
|
||||
{hostname="<hostname>", systemd_unit="sshd.service"}
|
||||
{host="<hostname>", systemd_unit="sshd.service"}
|
||||
```
|
||||
|
||||
Look for:
|
||||
@@ -48,7 +48,7 @@ Look for:
|
||||
|
||||
Query executed commands (filter out noise):
|
||||
```logql
|
||||
{hostname="<hostname>"} |= "EXECVE" != "PATH item" != "PROCTITLE" != "SYSCALL" != "BPF"
|
||||
{host="<hostname>"} |= "EXECVE" != "PATH item" != "PROCTITLE" != "SYSCALL" != "BPF"
|
||||
```
|
||||
|
||||
Further filtering:
|
||||
@@ -60,28 +60,28 @@ Further filtering:
|
||||
|
||||
Check for privilege escalation:
|
||||
```logql
|
||||
{hostname="<hostname>"} |= "sudo" |= "COMMAND"
|
||||
{host="<hostname>"} |= "sudo" |= "COMMAND"
|
||||
```
|
||||
|
||||
Or via audit:
|
||||
```logql
|
||||
{hostname="<hostname>"} |= "USER_CMD"
|
||||
{host="<hostname>"} |= "USER_CMD"
|
||||
```
|
||||
|
||||
### 4. Service Manipulation
|
||||
|
||||
Check if services were manually stopped/started:
|
||||
```logql
|
||||
{hostname="<hostname>"} |= "EXECVE" |= "systemctl"
|
||||
{host="<hostname>"} |= "EXECVE" |= "systemctl"
|
||||
```
|
||||
|
||||
### 5. File Operations
|
||||
|
||||
Look for file modifications (if auditd rules are configured):
|
||||
```logql
|
||||
{hostname="<hostname>"} |= "EXECVE" |= "vim"
|
||||
{hostname="<hostname>"} |= "EXECVE" |= "nano"
|
||||
{hostname="<hostname>"} |= "EXECVE" |= "rm"
|
||||
{host="<hostname>"} |= "EXECVE" |= "vim"
|
||||
{host="<hostname>"} |= "EXECVE" |= "nano"
|
||||
{host="<hostname>"} |= "EXECVE" |= "rm"
|
||||
```
|
||||
|
||||
## Query Guidelines
|
||||
@@ -99,7 +99,7 @@ Look for file modifications (if auditd rules are configured):
|
||||
**Time-bounded queries:**
|
||||
When investigating around a specific event:
|
||||
```logql
|
||||
{hostname="<hostname>"} |= "EXECVE" != "systemd"
|
||||
{host="<hostname>"} |= "EXECVE" != "systemd"
|
||||
```
|
||||
With `start: "2026-02-08T14:30:00Z"` and `end: "2026-02-08T14:35:00Z"`
|
||||
|
||||
|
||||
@@ -41,13 +41,13 @@ Search for relevant log entries using `query_logs`. Focus on service-specific lo
|
||||
**Query strategies (start narrow, expand if needed):**
|
||||
- Start with `limit: 20-30`, increase only if needed
|
||||
- Use tight time windows: `start: "15m"` or `start: "30m"` initially
|
||||
- Filter to specific services: `{hostname="<hostname>", systemd_unit="<service>.service"}`
|
||||
- Search for errors: `{hostname="<hostname>"} |= "error"` or `|= "failed"`
|
||||
- Filter to specific services: `{host="<hostname>", systemd_unit="<service>.service"}`
|
||||
- Search for errors: `{host="<hostname>"} |= "error"` or `|= "failed"`
|
||||
|
||||
**Common patterns:**
|
||||
- Service logs: `{hostname="<hostname>", systemd_unit="<service>.service"}`
|
||||
- All errors on host: `{hostname="<hostname>"} |= "error"`
|
||||
- Journal for a unit: `{hostname="<hostname>", systemd_unit="nginx.service"} |= "failed"`
|
||||
- Service logs: `{host="<hostname>", systemd_unit="<service>.service"}`
|
||||
- All errors on host: `{host="<hostname>"} |= "error"`
|
||||
- Journal for a unit: `{host="<hostname>", systemd_unit="nginx.service"} |= "failed"`
|
||||
|
||||
**Avoid:**
|
||||
- Using `start: "1h"` with no filters on busy hosts
|
||||
@@ -130,7 +130,7 @@ get_commit_info(<hash>) # Get full details of a specific change
|
||||
```
|
||||
|
||||
**Example workflow for a service-related alert:**
|
||||
1. Query `nixos_flake_info{hostname="monitoring02"}` → `current_rev: 8959829`
|
||||
1. Query `nixos_flake_info{hostname="monitoring01"}` → `current_rev: 8959829`
|
||||
2. `resolve_ref("master")` → `4633421`
|
||||
3. `is_ancestor("8959829", "4633421")` → Yes, host is behind
|
||||
4. `commits_between("8959829", "4633421")` → 7 commits missing
|
||||
|
||||
@@ -30,13 +30,11 @@ Use the `lab-monitoring` MCP server tools:
|
||||
### Label Reference
|
||||
|
||||
Available labels for log queries:
|
||||
- `hostname` - Hostname (e.g., `ns1`, `monitoring02`, `ha1`) - matches the Prometheus `hostname` label
|
||||
- `host` - Hostname (e.g., `ns1`, `monitoring01`, `ha1`)
|
||||
- `systemd_unit` - Systemd unit name (e.g., `nsd.service`, `nixos-upgrade.service`)
|
||||
- `job` - Either `systemd-journal` (most logs), `varlog` (file-based logs), or `bootstrap` (VM bootstrap logs)
|
||||
- `filename` - For `varlog` job, the log file path
|
||||
- `tier` - Deployment tier (`test` or `prod`)
|
||||
- `role` - Host role (e.g., `dns`, `vault`, `monitoring`) - matches the Prometheus `role` label
|
||||
- `level` - Log level mapped from journal PRIORITY (`critical`, `error`, `warning`, `notice`, `info`, `debug`) - journal scrape only
|
||||
- `hostname` - Alternative to `host` for some streams
|
||||
|
||||
### Log Format
|
||||
|
||||
@@ -49,12 +47,12 @@ Journal logs are JSON-formatted. Key fields:
|
||||
|
||||
**Logs from a specific service on a host:**
|
||||
```logql
|
||||
{hostname="ns1", systemd_unit="nsd.service"}
|
||||
{host="ns1", systemd_unit="nsd.service"}
|
||||
```
|
||||
|
||||
**All logs from a host:**
|
||||
```logql
|
||||
{hostname="monitoring02"}
|
||||
{host="monitoring01"}
|
||||
```
|
||||
|
||||
**Logs from a service across all hosts:**
|
||||
@@ -64,31 +62,17 @@ Journal logs are JSON-formatted. Key fields:
|
||||
|
||||
**Substring matching (case-sensitive):**
|
||||
```logql
|
||||
{hostname="ha1"} |= "error"
|
||||
{host="ha1"} |= "error"
|
||||
```
|
||||
|
||||
**Exclude pattern:**
|
||||
```logql
|
||||
{hostname="ns1"} != "routine"
|
||||
{host="ns1"} != "routine"
|
||||
```
|
||||
|
||||
**Regex matching:**
|
||||
```logql
|
||||
{systemd_unit="victoriametrics.service"} |~ "scrape.*failed"
|
||||
```
|
||||
|
||||
**Filter by level (journal scrape only):**
|
||||
```logql
|
||||
{level="error"} # All errors across the fleet
|
||||
{level=~"critical|error", tier="prod"} # Prod errors and criticals
|
||||
{hostname="ns1", level="warning"} # Warnings from a specific host
|
||||
```
|
||||
|
||||
**Filter by tier/role:**
|
||||
```logql
|
||||
{tier="prod"} |= "error" # All errors on prod hosts
|
||||
{role="dns"} # All DNS server logs
|
||||
{tier="test", job="systemd-journal"} # Journal logs from test hosts
|
||||
{systemd_unit="prometheus.service"} |~ "scrape.*failed"
|
||||
```
|
||||
|
||||
**File-based logs (caddy access logs, etc):**
|
||||
@@ -109,7 +93,7 @@ Default lookback is 1 hour. Use `start` parameter for older logs:
|
||||
Useful systemd units for troubleshooting:
|
||||
- `nixos-upgrade.service` - Daily auto-upgrade logs
|
||||
- `nsd.service` - DNS server (ns1/ns2)
|
||||
- `victoriametrics.service` - Metrics collection
|
||||
- `prometheus.service` - Metrics collection
|
||||
- `loki.service` - Log aggregation
|
||||
- `caddy.service` - Reverse proxy
|
||||
- `home-assistant.service` - Home automation
|
||||
@@ -122,7 +106,7 @@ Useful systemd units for troubleshooting:
|
||||
|
||||
VMs provisioned from template2 send bootstrap progress directly to Loki via curl (before promtail is available). These logs use `job="bootstrap"` with additional labels:
|
||||
|
||||
- `hostname` - Target hostname
|
||||
- `host` - Target hostname
|
||||
- `branch` - Git branch being deployed
|
||||
- `stage` - Bootstrap stage (see table below)
|
||||
|
||||
@@ -143,7 +127,7 @@ VMs provisioned from template2 send bootstrap progress directly to Loki via curl
|
||||
|
||||
```logql
|
||||
{job="bootstrap"} # All bootstrap logs
|
||||
{job="bootstrap", hostname="myhost"} # Specific host
|
||||
{job="bootstrap", host="myhost"} # Specific host
|
||||
{job="bootstrap", stage="failed"} # All failures
|
||||
{job="bootstrap", stage=~"building|success"} # Track build progress
|
||||
```
|
||||
@@ -152,7 +136,7 @@ VMs provisioned from template2 send bootstrap progress directly to Loki via curl
|
||||
|
||||
Parse JSON and filter on fields:
|
||||
```logql
|
||||
{systemd_unit="victoriametrics.service"} | json | PRIORITY="3"
|
||||
{systemd_unit="prometheus.service"} | json | PRIORITY="3"
|
||||
```
|
||||
|
||||
---
|
||||
@@ -242,11 +226,12 @@ All available Prometheus job names:
|
||||
- `unbound` - DNS resolver metrics (ns1, ns2)
|
||||
- `wireguard` - VPN tunnel metrics (http-proxy)
|
||||
|
||||
**Monitoring stack (localhost on monitoring02):**
|
||||
- `victoriametrics` - VictoriaMetrics self-metrics
|
||||
**Monitoring stack (localhost on monitoring01):**
|
||||
- `prometheus` - Prometheus self-metrics
|
||||
- `loki` - Loki self-metrics
|
||||
- `grafana` - Grafana self-metrics
|
||||
- `alertmanager` - Alertmanager metrics
|
||||
- `pushgateway` - Push-based metrics gateway
|
||||
|
||||
**External/infrastructure:**
|
||||
- `pve-exporter` - Proxmox hypervisor metrics
|
||||
@@ -261,7 +246,7 @@ All scrape targets have these labels:
|
||||
**Standard labels:**
|
||||
- `instance` - Full target address (`<hostname>.home.2rjus.net:<port>`)
|
||||
- `job` - Job name (e.g., `node-exporter`, `unbound`, `nixos-exporter`)
|
||||
- `hostname` - Short hostname (e.g., `ns1`, `monitoring02`) - use this for host filtering
|
||||
- `hostname` - Short hostname (e.g., `ns1`, `monitoring01`) - use this for host filtering
|
||||
|
||||
**Host metadata labels** (when configured in `homelab.host`):
|
||||
- `role` - Host role (e.g., `dns`, `build-host`, `vault`)
|
||||
@@ -274,7 +259,7 @@ Use the `hostname` label for easy host filtering across all jobs:
|
||||
|
||||
```promql
|
||||
{hostname="ns1"} # All metrics from ns1
|
||||
node_load1{hostname="monitoring02"} # Specific metric by hostname
|
||||
node_load1{hostname="monitoring01"} # Specific metric by hostname
|
||||
up{hostname="ha1"} # Check if ha1 is up
|
||||
```
|
||||
|
||||
@@ -282,10 +267,10 @@ This is simpler than wildcarding the `instance` label:
|
||||
|
||||
```promql
|
||||
# Old way (still works but verbose)
|
||||
up{instance=~"monitoring02.*"}
|
||||
up{instance=~"monitoring01.*"}
|
||||
|
||||
# New way (preferred)
|
||||
up{hostname="monitoring02"}
|
||||
up{hostname="monitoring01"}
|
||||
```
|
||||
|
||||
### Filtering by Role/Tier
|
||||
@@ -323,8 +308,8 @@ Current host labels:
|
||||
|
||||
1. Check `up{job="<service>"}` or `up{hostname="<host>"}` for scrape failures
|
||||
2. Use `list_targets` to see target health details
|
||||
3. Query service logs: `{hostname="<host>", systemd_unit="<service>.service"}`
|
||||
4. Search for errors: `{hostname="<host>"} |= "error"`
|
||||
3. Query service logs: `{host="<host>", systemd_unit="<service>.service"}`
|
||||
4. Search for errors: `{host="<host>"} |= "error"`
|
||||
5. Check `list_alerts` for related alerts
|
||||
6. Use role filters for group issues: `up{role="dns"}` to check all DNS servers
|
||||
|
||||
@@ -339,17 +324,17 @@ Current host labels:
|
||||
|
||||
When provisioning new VMs, track bootstrap progress:
|
||||
|
||||
1. Watch bootstrap logs: `{job="bootstrap", hostname="<hostname>"}`
|
||||
2. Check for failures: `{job="bootstrap", hostname="<hostname>", stage="failed"}`
|
||||
1. Watch bootstrap logs: `{job="bootstrap", host="<hostname>"}`
|
||||
2. Check for failures: `{job="bootstrap", host="<hostname>", stage="failed"}`
|
||||
3. After success, verify host appears in metrics: `up{hostname="<hostname>"}`
|
||||
4. Check logs are flowing: `{hostname="<hostname>"}`
|
||||
4. Check logs are flowing: `{host="<hostname>"}`
|
||||
|
||||
See [docs/host-creation.md](../../../docs/host-creation.md) for the full host creation pipeline.
|
||||
|
||||
### Debug SSH/Access Issues
|
||||
|
||||
```logql
|
||||
{hostname="<host>", systemd_unit="sshd.service"}
|
||||
{host="<host>", systemd_unit="sshd.service"}
|
||||
```
|
||||
|
||||
### Check Recent Upgrades
|
||||
|
||||
@@ -73,7 +73,6 @@ Additional context, caveats, or references.
|
||||
- **Reference existing patterns**: Mention how this fits with existing infrastructure
|
||||
- **Tables for comparisons**: Use markdown tables when comparing options
|
||||
- **Practical focus**: Emphasize what needs to happen, not theory
|
||||
- **Mermaid diagrams**: Use mermaid code blocks for architecture diagrams, flow charts, or other graphs when relevant to the plan. Keep node labels short and use `<br/>` for line breaks
|
||||
|
||||
## Examples of Good Plans
|
||||
|
||||
|
||||
14
.github/workflows/flake-check.yaml
vendored
Normal file
14
.github/workflows/flake-check.yaml
vendored
Normal file
@@ -0,0 +1,14 @@
|
||||
name: Run nix flake check
|
||||
on:
|
||||
push:
|
||||
pull_request:
|
||||
|
||||
jobs:
|
||||
flake-check:
|
||||
runs-on: ubuntu-latest
|
||||
container:
|
||||
image: ghcr.io/catthehacker/ubuntu:runner-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- uses: cachix/install-nix-action@v27
|
||||
- run: nix flake check
|
||||
27
.github/workflows/flake-update.yaml
vendored
Normal file
27
.github/workflows/flake-update.yaml
vendored
Normal file
@@ -0,0 +1,27 @@
|
||||
---
|
||||
name: Periodic flake update
|
||||
on: # yamllint disable-line rule:truthy
|
||||
schedule:
|
||||
- cron: "0 0 * * *"
|
||||
|
||||
permissions:
|
||||
contents: write
|
||||
|
||||
jobs:
|
||||
flake-update:
|
||||
runs-on: ubuntu-latest
|
||||
container:
|
||||
image: ghcr.io/catthehacker/ubuntu:runner-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
with:
|
||||
ref: master
|
||||
- uses: cachix/install-nix-action@v27
|
||||
- name: configure git
|
||||
run: |
|
||||
git config --global user.name 'torjus-bot'
|
||||
git config --global user.email 'torjus-bot@git.t-juice.club'
|
||||
- name: flake update
|
||||
run: nix flake update --commit-lock-file
|
||||
- name: push
|
||||
run: git push
|
||||
3
.gitignore
vendored
3
.gitignore
vendored
@@ -2,9 +2,6 @@
|
||||
result
|
||||
result-*
|
||||
|
||||
# MCP config (contains secrets)
|
||||
.mcp.json
|
||||
|
||||
# Terraform/OpenTofu
|
||||
terraform/.terraform/
|
||||
terraform/.terraform.lock.hcl
|
||||
|
||||
@@ -2,47 +2,45 @@
|
||||
"mcpServers": {
|
||||
"nixpkgs-options": {
|
||||
"command": "nix",
|
||||
"args": ["run", "git+https://code.t-juice.club/torjus/labmcp#nixpkgs-search", "--", "options", "serve"],
|
||||
"args": ["run", "git+https://git.t-juice.club/torjus/labmcp#nixpkgs-search", "--", "options", "serve"],
|
||||
"env": {
|
||||
"NIXPKGS_SEARCH_DATABASE": "sqlite:///run/user/1000/labmcp/nixpkgs-search.db"
|
||||
}
|
||||
},
|
||||
"nixpkgs-packages": {
|
||||
"command": "nix",
|
||||
"args": ["run", "git+https://code.t-juice.club/torjus/labmcp#nixpkgs-search", "--", "packages", "serve"],
|
||||
"args": ["run", "git+https://git.t-juice.club/torjus/labmcp#nixpkgs-search", "--", "packages", "serve"],
|
||||
"env": {
|
||||
"NIXPKGS_SEARCH_DATABASE": "sqlite:///run/user/1000/labmcp/nixpkgs-search.db"
|
||||
}
|
||||
},
|
||||
"lab-monitoring": {
|
||||
"command": "nix",
|
||||
"args": ["run", "git+https://code.t-juice.club/torjus/labmcp#lab-monitoring", "--", "serve", "--enable-silences"],
|
||||
"args": ["run", "git+https://git.t-juice.club/torjus/labmcp#lab-monitoring", "--", "serve", "--enable-silences"],
|
||||
"env": {
|
||||
"PROMETHEUS_URL": "https://prometheus.home.2rjus.net",
|
||||
"ALERTMANAGER_URL": "https://alertmanager.home.2rjus.net",
|
||||
"LOKI_URL": "https://loki.home.2rjus.net",
|
||||
"LOKI_USERNAME": "promtail",
|
||||
"LOKI_PASSWORD": "<password from: bao kv get -field=password secret/shared/loki/push-auth>"
|
||||
"LOKI_URL": "http://monitoring01.home.2rjus.net:3100"
|
||||
}
|
||||
},
|
||||
"homelab-deploy": {
|
||||
"command": "nix",
|
||||
"args": [
|
||||
"run",
|
||||
"git+https://code.t-juice.club/torjus/homelab-deploy",
|
||||
"git+https://git.t-juice.club/torjus/homelab-deploy",
|
||||
"--",
|
||||
"mcp",
|
||||
"--nats-url", "nats://nats1.home.2rjus.net:4222",
|
||||
"--nkey-file", "/home/torjus/.config/homelab-deploy/test-deployer.nkey",
|
||||
"--enable-builds"
|
||||
"--nkey-file", "/home/torjus/.config/homelab-deploy/test-deployer.nkey"
|
||||
]
|
||||
},
|
||||
"git-explorer": {
|
||||
"command": "nix",
|
||||
"args": ["run", "git+https://code.t-juice.club/torjus/labmcp#git-explorer", "--", "serve"],
|
||||
"args": ["run", "git+https://git.t-juice.club/torjus/labmcp#git-explorer", "--", "serve"],
|
||||
"env": {
|
||||
"GIT_REPO_PATH": "/home/torjus/git/nixos-servers"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
87
CLAUDE.md
87
CLAUDE.md
@@ -39,30 +39,6 @@ Do not automatically deploy changes. Deployments are usually done by updating th
|
||||
|
||||
Do not run SSH commands directly. If a command needs to be run on a remote host, provide the command to the user and ask them to run it manually.
|
||||
|
||||
### Sharing Command Output via Loki
|
||||
|
||||
All hosts have the `pipe-to-loki` script for sending command output or terminal sessions to Loki, allowing users to share output with Claude without copy-pasting.
|
||||
|
||||
**Pipe mode** - send command output:
|
||||
```bash
|
||||
command | pipe-to-loki # Auto-generated ID
|
||||
command | pipe-to-loki --id my-test # Custom ID
|
||||
```
|
||||
|
||||
**Session mode** - record interactive terminal session:
|
||||
```bash
|
||||
pipe-to-loki --record # Start recording, exit to send
|
||||
pipe-to-loki --record --id my-session # With custom ID
|
||||
```
|
||||
|
||||
The script prints the session ID which the user can share. Query results with:
|
||||
```logql
|
||||
{job="pipe-to-loki"} # All entries
|
||||
{job="pipe-to-loki", id="my-test"} # Specific ID
|
||||
{job="pipe-to-loki", hostname="testvm01"} # From specific host
|
||||
{job="pipe-to-loki", type="session"} # Only sessions
|
||||
```
|
||||
|
||||
### Testing Feature Branches on Hosts
|
||||
|
||||
All hosts have the `nixos-rebuild-test` helper script for testing feature branches before merging:
|
||||
@@ -114,12 +90,6 @@ nix develop -c tofu -chdir=terraform/vault apply
|
||||
cd terraform && tofu plan
|
||||
```
|
||||
|
||||
### Ansible
|
||||
|
||||
Ansible configuration and playbooks are in `/ansible/`. See [ansible/README.md](ansible/README.md) for inventory groups, available playbooks, and usage examples.
|
||||
|
||||
The devshell sets `ANSIBLE_CONFIG` automatically, so no `-i` flag is needed.
|
||||
|
||||
### Secrets Management
|
||||
|
||||
Secrets are managed by OpenBao (Vault) using AppRole authentication. Most hosts use the
|
||||
@@ -132,8 +102,6 @@ Terraform manages the secrets and AppRole policies in `terraform/vault/`.
|
||||
|
||||
**Important:** Never amend commits to `master` unless the user explicitly asks for it. Amending rewrites history and causes issues for deployed configurations.
|
||||
|
||||
**Important:** Never force push to `master`. If a commit on master has an error, fix it with a new commit rather than rewriting history.
|
||||
|
||||
**Important:** Do not use `gh pr create` to create pull requests. The git server does not support GitHub CLI for PR creation. Instead, push the branch and let the user create the PR manually via the web interface.
|
||||
|
||||
When starting a new plan or task, the first step should typically be to create and checkout a new branch with an appropriate name (e.g., `git checkout -b dns-automation` or `git checkout -b fix-nginx-config`).
|
||||
@@ -247,7 +215,7 @@ nix develop -c homelab-deploy -- deploy \
|
||||
deploy.prod.<hostname>
|
||||
```
|
||||
|
||||
Subject format: `deploy.<tier>.<hostname>` (e.g., `deploy.prod.monitoring02`, `deploy.test.testvm01`)
|
||||
Subject format: `deploy.<tier>.<hostname>` (e.g., `deploy.prod.monitoring01`, `deploy.test.testvm01`)
|
||||
|
||||
**Verifying Deployments:**
|
||||
|
||||
@@ -287,10 +255,7 @@ The `current_rev` label contains the git commit hash of the deployed flake confi
|
||||
- `/docs/` - Documentation and plans
|
||||
- `plans/` - Future plans and proposals
|
||||
- `plans/completed/` - Completed plans (moved here when done)
|
||||
- `/ansible/` - Ansible configuration and playbooks
|
||||
- `ansible.cfg` - Ansible configuration (inventory path, defaults)
|
||||
- `inventory/` - Dynamic and static inventory sources
|
||||
- `playbooks/` - Ansible playbooks for fleet management
|
||||
- `/playbooks/` - Ansible playbooks for fleet management
|
||||
|
||||
### Configuration Inheritance
|
||||
|
||||
@@ -309,16 +274,29 @@ All hosts automatically get:
|
||||
- OpenBao (Vault) secrets management via AppRole
|
||||
- Internal ACME CA integration (OpenBao PKI at vault.home.2rjus.net)
|
||||
- Daily auto-upgrades with auto-reboot
|
||||
- Prometheus node-exporter + Promtail (logs to monitoring02)
|
||||
- Prometheus node-exporter + Promtail (logs to monitoring01)
|
||||
- Monitoring scrape target auto-registration via `homelab.monitoring` options
|
||||
- Custom root CA trust
|
||||
- DNS zone auto-registration via `homelab.dns` options
|
||||
|
||||
### Hosts
|
||||
### Active Hosts
|
||||
|
||||
Host configurations are in `/hosts/<hostname>/`. See `flake.nix` for the complete list of `nixosConfigurations`.
|
||||
Production servers:
|
||||
- `ns1`, `ns2` - Primary/secondary DNS servers (10.69.13.5/6)
|
||||
- `vault01` - OpenBao (Vault) secrets server + PKI CA
|
||||
- `ha1` - Home Assistant + Zigbee2MQTT + Mosquitto
|
||||
- `http-proxy` - Reverse proxy
|
||||
- `monitoring01` - Full observability stack (Prometheus, Grafana, Loki, Tempo, Pyroscope)
|
||||
- `jelly01` - Jellyfin media server
|
||||
- `nix-cache01` - Binary cache server + GitHub Actions runner
|
||||
- `pgdb1` - PostgreSQL database
|
||||
- `nats1` - NATS messaging server
|
||||
|
||||
Use `nix flake show` or `nix develop -c ansible-inventory --graph` to list all hosts.
|
||||
Test/staging hosts:
|
||||
- `testvm01`, `testvm02`, `testvm03` - Test-tier VMs for branch testing and deployment validation
|
||||
|
||||
Template hosts:
|
||||
- `template1`, `template2` - Base templates for cloning new hosts
|
||||
|
||||
### Flake Inputs
|
||||
|
||||
@@ -326,7 +304,7 @@ Use `nix flake show` or `nix develop -c ansible-inventory --graph` to list all h
|
||||
- `nixpkgs-unstable` - Unstable channel (available via overlay as `pkgs.unstable.<package>`)
|
||||
- `nixos-exporter` - NixOS module for exposing flake revision metrics (used to verify deployments)
|
||||
- `homelab-deploy` - NATS-based remote deployment tool for test-tier hosts
|
||||
- Custom packages from code.t-juice.club:
|
||||
- Custom packages from git.t-juice.club:
|
||||
- `alerttonotify` - Alert routing
|
||||
|
||||
### Network Architecture
|
||||
@@ -335,7 +313,7 @@ Use `nix flake show` or `nix develop -c ansible-inventory --graph` to list all h
|
||||
- Infrastructure subnet: `10.69.13.x`
|
||||
- DNS: ns1/ns2 provide authoritative DNS with primary-secondary setup
|
||||
- Internal CA for ACME certificates (no Let's Encrypt)
|
||||
- Centralized monitoring at monitoring02
|
||||
- Centralized monitoring at monitoring01
|
||||
- Static networking via systemd-networkd
|
||||
|
||||
### Secrets Management
|
||||
@@ -349,13 +327,13 @@ Most hosts use OpenBao (Vault) for secrets:
|
||||
- `extractKey` option extracts a single key from vault JSON as a plain file
|
||||
- Secrets fetched at boot by `vault-secret-<name>.service` systemd units
|
||||
- Fallback to cached secrets in `/var/lib/vault/cache/` when Vault is unreachable
|
||||
- Provision AppRole credentials: `nix develop -c ansible-playbook ansible/playbooks/provision-approle.yml -l <hostname>`
|
||||
- Provision AppRole credentials: `nix develop -c ansible-playbook playbooks/provision-approle.yml -e hostname=<host>`
|
||||
|
||||
### Auto-Upgrade System
|
||||
|
||||
All hosts pull updates daily from:
|
||||
```
|
||||
git+https://code.t-juice.club/torjus/nixos-servers.git
|
||||
git+https://git.t-juice.club/torjus/nixos-servers.git
|
||||
```
|
||||
|
||||
Configured in `/system/autoupgrade.nix`:
|
||||
@@ -373,7 +351,7 @@ Template VMs are built from `hosts/template2` and deployed to Proxmox using Ansi
|
||||
|
||||
```bash
|
||||
# Build NixOS image and deploy to Proxmox as template
|
||||
nix develop -c ansible-playbook ansible/playbooks/build-and-deploy-template.yml
|
||||
nix develop -c ansible-playbook -i playbooks/inventory.ini playbooks/build-and-deploy-template.yml
|
||||
```
|
||||
|
||||
This playbook:
|
||||
@@ -448,7 +426,7 @@ This means:
|
||||
- `tofu plan` won't show spurious changes for Proxmox-managed defaults
|
||||
|
||||
**When rebuilding the template:**
|
||||
1. Run `nix develop -c ansible-playbook ansible/playbooks/build-and-deploy-template.yml`
|
||||
1. Run `nix develop -c ansible-playbook -i playbooks/inventory.ini playbooks/build-and-deploy-template.yml`
|
||||
2. Update `default_template_name` in `terraform/variables.tf` if the name changed
|
||||
3. Run `tofu plan` - should show no VM recreations (only template name in state)
|
||||
4. Run `tofu apply` - updates state without touching existing VMs
|
||||
@@ -480,21 +458,23 @@ See [docs/host-creation.md](docs/host-creation.md) for the complete host creatio
|
||||
|
||||
### Monitoring Stack
|
||||
|
||||
All hosts ship metrics and logs to `monitoring02`:
|
||||
- **Metrics**: VictoriaMetrics scrapes node-exporter from all hosts
|
||||
- **Logs**: Promtail ships logs to Loki on monitoring02
|
||||
- **Access**: Grafana at monitoring02 for visualization
|
||||
All hosts ship metrics and logs to `monitoring01`:
|
||||
- **Metrics**: Prometheus scrapes node-exporter from all hosts
|
||||
- **Logs**: Promtail ships logs to Loki on monitoring01
|
||||
- **Access**: Grafana at monitoring01 for visualization
|
||||
- **Tracing**: Tempo for distributed tracing
|
||||
- **Profiling**: Pyroscope for continuous profiling
|
||||
|
||||
**Scrape Target Auto-Generation:**
|
||||
|
||||
VictoriaMetrics scrape targets are automatically generated from host configurations, following the same pattern as DNS zone generation:
|
||||
Prometheus scrape targets are automatically generated from host configurations, following the same pattern as DNS zone generation:
|
||||
|
||||
- **Node-exporter**: All flake hosts with static IPs are automatically added as node-exporter targets
|
||||
- **Service targets**: Defined via `homelab.monitoring.scrapeTargets` in service modules
|
||||
- **External targets**: Non-flake hosts defined in `/services/monitoring/external-targets.nix`
|
||||
- **Library**: `lib/monitoring.nix` provides `generateNodeExporterTargets` and `generateScrapeConfigs`
|
||||
|
||||
Service modules declare their scrape targets directly via `homelab.monitoring.scrapeTargets`. The VictoriaMetrics config on monitoring02 auto-generates scrape configs from all hosts. See "Homelab Module Options" section for available options.
|
||||
Service modules declare their scrape targets directly via `homelab.monitoring.scrapeTargets`. The Prometheus config on monitoring01 auto-generates scrape configs from all hosts. See "Homelab Module Options" section for available options.
|
||||
|
||||
To add monitoring targets for non-NixOS hosts, edit `/services/monitoring/external-targets.nix`.
|
||||
|
||||
@@ -529,7 +509,6 @@ The `modules/homelab/` directory defines custom options used across hosts for au
|
||||
- `priority` - Alerting priority: `high` or `low`. Controls alerting thresholds for the host.
|
||||
- `role` - Primary role designation (e.g., `dns`, `database`, `bastion`, `vault`)
|
||||
- `labels` - Free-form key-value metadata for host categorization
|
||||
- `ansible = "false"` - Exclude host from Ansible dynamic inventory
|
||||
|
||||
**DNS options (`homelab.dns.*`):**
|
||||
- `enable` (default: `true`) - Include host in DNS zone generation
|
||||
|
||||
@@ -10,9 +10,9 @@ NixOS Flake-based configuration repository for a homelab infrastructure. All hos
|
||||
| `ca` | Internal Certificate Authority |
|
||||
| `ha1` | Home Assistant + Zigbee2MQTT + Mosquitto |
|
||||
| `http-proxy` | Reverse proxy |
|
||||
| `monitoring02` | VictoriaMetrics, Grafana, Loki, Alertmanager |
|
||||
| `monitoring01` | Prometheus, Grafana, Loki, Tempo, Pyroscope |
|
||||
| `jelly01` | Jellyfin media server |
|
||||
| `nix-cache02` | Nix binary cache + NATS-based build service |
|
||||
| `nix-cache01` | Nix binary cache |
|
||||
| `nats1` | NATS messaging |
|
||||
| `vault01` | OpenBao (Vault) secrets management |
|
||||
| `template1`, `template2` | VM templates for cloning new hosts |
|
||||
@@ -121,4 +121,4 @@ No manual intervention is required after `tofu apply`.
|
||||
- Infrastructure subnet: `10.69.13.0/24`
|
||||
- DNS: ns1/ns2 authoritative with primary-secondary AXFR
|
||||
- Internal CA for TLS certificates (migrating from step-ca to OpenBao PKI)
|
||||
- Centralized monitoring at monitoring02
|
||||
- Centralized monitoring at monitoring01
|
||||
|
||||
@@ -1,120 +0,0 @@
|
||||
# Ansible Configuration
|
||||
|
||||
This directory contains Ansible configuration for fleet management tasks.
|
||||
|
||||
## Structure
|
||||
|
||||
```
|
||||
ansible/
|
||||
├── ansible.cfg # Ansible configuration
|
||||
├── inventory/
|
||||
│ ├── dynamic_flake.py # Dynamic inventory from NixOS flake
|
||||
│ ├── static.yml # Non-flake hosts (Proxmox, etc.)
|
||||
│ └── group_vars/
|
||||
│ └── all.yml # Common variables
|
||||
└── playbooks/
|
||||
├── build-and-deploy-template.yml
|
||||
├── provision-approle.yml
|
||||
├── restart-service.yml
|
||||
└── run-upgrade.yml
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
The devshell automatically configures `ANSIBLE_CONFIG`, so commands work without extra flags:
|
||||
|
||||
```bash
|
||||
# List inventory groups
|
||||
nix develop -c ansible-inventory --graph
|
||||
|
||||
# List hosts in a specific group
|
||||
nix develop -c ansible-inventory --list | jq '.role_dns'
|
||||
|
||||
# Run a playbook
|
||||
nix develop -c ansible-playbook ansible/playbooks/run-upgrade.yml -l tier_test
|
||||
```
|
||||
|
||||
## Inventory
|
||||
|
||||
The inventory combines dynamic and static sources automatically.
|
||||
|
||||
### Dynamic Inventory (from flake)
|
||||
|
||||
The `dynamic_flake.py` script extracts hosts from the NixOS flake using `homelab.host.*` options:
|
||||
|
||||
**Groups generated:**
|
||||
- `flake_hosts` - All NixOS hosts from the flake
|
||||
- `tier_test`, `tier_prod` - By `homelab.host.tier`
|
||||
- `role_dns`, `role_vault`, `role_monitoring`, etc. - By `homelab.host.role`
|
||||
|
||||
**Host variables set:**
|
||||
- `tier` - Deployment tier (test/prod)
|
||||
- `role` - Host role
|
||||
- `short_hostname` - Hostname without domain
|
||||
|
||||
### Static Inventory
|
||||
|
||||
Non-flake hosts are defined in `inventory/static.yml`:
|
||||
|
||||
- `proxmox` - Proxmox hypervisors
|
||||
|
||||
## Playbooks
|
||||
|
||||
| Playbook | Description | Example |
|
||||
|----------|-------------|---------|
|
||||
| `run-upgrade.yml` | Trigger nixos-upgrade on hosts | `-l tier_prod` |
|
||||
| `restart-service.yml` | Restart a systemd service | `-l role_dns -e service=unbound` |
|
||||
| `reboot.yml` | Rolling reboot (one host at a time) | `-l tier_test` |
|
||||
| `provision-approle.yml` | Deploy Vault credentials (single host only) | `-l testvm01` |
|
||||
| `build-and-deploy-template.yml` | Build and deploy Proxmox template | (no limit needed) |
|
||||
|
||||
### Examples
|
||||
|
||||
```bash
|
||||
# Restart unbound on all DNS servers
|
||||
nix develop -c ansible-playbook ansible/playbooks/restart-service.yml \
|
||||
-l role_dns -e service=unbound
|
||||
|
||||
# Trigger upgrade on all test hosts
|
||||
nix develop -c ansible-playbook ansible/playbooks/run-upgrade.yml -l tier_test
|
||||
|
||||
# Provision Vault credentials for a specific host
|
||||
nix develop -c ansible-playbook ansible/playbooks/provision-approle.yml -l testvm01
|
||||
|
||||
# Build and deploy Proxmox template
|
||||
nix develop -c ansible-playbook ansible/playbooks/build-and-deploy-template.yml
|
||||
|
||||
# Rolling reboot of test hosts (one at a time, waits for each to come back)
|
||||
nix develop -c ansible-playbook ansible/playbooks/reboot.yml -l tier_test
|
||||
```
|
||||
|
||||
## Excluding Flake Hosts
|
||||
|
||||
To exclude a flake host from the dynamic inventory, add the `ansible = "false"` label in the host's configuration:
|
||||
|
||||
```nix
|
||||
homelab.host.labels.ansible = "false";
|
||||
```
|
||||
|
||||
Hosts with `homelab.dns.enable = false` are also excluded automatically.
|
||||
|
||||
## Adding Non-Flake Hosts
|
||||
|
||||
Edit `inventory/static.yml` to add hosts not managed by the NixOS flake:
|
||||
|
||||
```yaml
|
||||
all:
|
||||
children:
|
||||
my_group:
|
||||
hosts:
|
||||
host1.example.com:
|
||||
ansible_user: admin
|
||||
```
|
||||
|
||||
## Common Variables
|
||||
|
||||
Variables in `inventory/group_vars/all.yml` apply to all hosts:
|
||||
|
||||
- `ansible_user` - Default SSH user (root)
|
||||
- `domain` - Domain name (home.2rjus.net)
|
||||
- `vault_addr` - Vault server URL
|
||||
@@ -1,17 +0,0 @@
|
||||
[defaults]
|
||||
inventory = inventory/
|
||||
remote_user = root
|
||||
host_key_checking = False
|
||||
|
||||
# Reduce SSH connection overhead
|
||||
forks = 10
|
||||
pipelining = True
|
||||
|
||||
# Output formatting (YAML output via builtin default callback)
|
||||
stdout_callback = default
|
||||
callbacks_enabled = profile_tasks
|
||||
result_format = yaml
|
||||
|
||||
[ssh_connection]
|
||||
# Reuse SSH connections
|
||||
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
|
||||
@@ -1,162 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Dynamic Ansible inventory script that extracts host information from the NixOS flake.
|
||||
|
||||
Generates groups:
|
||||
- flake_hosts: All hosts defined in the flake
|
||||
- tier_test, tier_prod: Hosts by deployment tier
|
||||
- role_<name>: Hosts by role (dns, vault, monitoring, etc.)
|
||||
|
||||
Usage:
|
||||
./dynamic_flake.py --list # Return full inventory
|
||||
./dynamic_flake.py --host X # Return host vars (not used, but required by Ansible)
|
||||
"""
|
||||
|
||||
import json
|
||||
import subprocess
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
def get_flake_dir() -> Path:
|
||||
"""Find the flake root directory."""
|
||||
script_dir = Path(__file__).resolve().parent
|
||||
# ansible/inventory/dynamic_flake.py -> repo root
|
||||
return script_dir.parent.parent
|
||||
|
||||
|
||||
def evaluate_flake() -> dict:
|
||||
"""Evaluate the flake and extract host metadata."""
|
||||
flake_dir = get_flake_dir()
|
||||
|
||||
# Nix expression to extract relevant config from each host
|
||||
nix_expr = """
|
||||
configs: builtins.mapAttrs (name: cfg: {
|
||||
hostname = cfg.config.networking.hostName;
|
||||
domain = cfg.config.networking.domain or "home.2rjus.net";
|
||||
tier = cfg.config.homelab.host.tier;
|
||||
role = cfg.config.homelab.host.role;
|
||||
labels = cfg.config.homelab.host.labels;
|
||||
dns_enabled = cfg.config.homelab.dns.enable;
|
||||
}) configs
|
||||
"""
|
||||
|
||||
try:
|
||||
result = subprocess.run(
|
||||
[
|
||||
"nix",
|
||||
"eval",
|
||||
"--json",
|
||||
f"{flake_dir}#nixosConfigurations",
|
||||
"--apply",
|
||||
nix_expr,
|
||||
],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
check=True,
|
||||
cwd=flake_dir,
|
||||
)
|
||||
return json.loads(result.stdout)
|
||||
except subprocess.CalledProcessError as e:
|
||||
print(f"Error evaluating flake: {e.stderr}", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
except json.JSONDecodeError as e:
|
||||
print(f"Error parsing nix output: {e}", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def sanitize_group_name(name: str) -> str:
|
||||
"""Sanitize a string for use as an Ansible group name.
|
||||
|
||||
Ansible group names should contain only alphanumeric characters and underscores.
|
||||
"""
|
||||
return name.replace("-", "_")
|
||||
|
||||
|
||||
def build_inventory(hosts_data: dict) -> dict:
|
||||
"""Build Ansible inventory structure from host data."""
|
||||
inventory = {
|
||||
"_meta": {"hostvars": {}},
|
||||
"flake_hosts": {"hosts": []},
|
||||
}
|
||||
|
||||
# Track groups we need to create
|
||||
tier_groups: dict[str, list[str]] = {}
|
||||
role_groups: dict[str, list[str]] = {}
|
||||
|
||||
for _config_name, host_info in hosts_data.items():
|
||||
hostname = host_info["hostname"]
|
||||
domain = host_info["domain"]
|
||||
tier = host_info["tier"]
|
||||
role = host_info["role"]
|
||||
labels = host_info["labels"]
|
||||
dns_enabled = host_info["dns_enabled"]
|
||||
|
||||
# Skip hosts that have DNS disabled (like templates)
|
||||
if not dns_enabled:
|
||||
continue
|
||||
|
||||
# Skip hosts with ansible = "false" label
|
||||
if labels.get("ansible") == "false":
|
||||
continue
|
||||
|
||||
fqdn = f"{hostname}.{domain}"
|
||||
|
||||
# Use short hostname as inventory name, FQDN for connection
|
||||
inventory_name = hostname
|
||||
|
||||
# Add to flake_hosts group
|
||||
inventory["flake_hosts"]["hosts"].append(inventory_name)
|
||||
|
||||
# Add host variables
|
||||
inventory["_meta"]["hostvars"][inventory_name] = {
|
||||
"ansible_host": fqdn, # Connect using FQDN
|
||||
"fqdn": fqdn,
|
||||
"tier": tier,
|
||||
"role": role,
|
||||
}
|
||||
|
||||
# Group by tier
|
||||
tier_group = f"tier_{sanitize_group_name(tier)}"
|
||||
if tier_group not in tier_groups:
|
||||
tier_groups[tier_group] = []
|
||||
tier_groups[tier_group].append(inventory_name)
|
||||
|
||||
# Group by role (if set)
|
||||
if role:
|
||||
role_group = f"role_{sanitize_group_name(role)}"
|
||||
if role_group not in role_groups:
|
||||
role_groups[role_group] = []
|
||||
role_groups[role_group].append(inventory_name)
|
||||
|
||||
# Add tier groups to inventory
|
||||
for group_name, hosts in tier_groups.items():
|
||||
inventory[group_name] = {"hosts": hosts}
|
||||
|
||||
# Add role groups to inventory
|
||||
for group_name, hosts in role_groups.items():
|
||||
inventory[group_name] = {"hosts": hosts}
|
||||
|
||||
return inventory
|
||||
|
||||
|
||||
def main():
|
||||
if len(sys.argv) < 2:
|
||||
print("Usage: dynamic_flake.py --list | --host <hostname>", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
if sys.argv[1] == "--list":
|
||||
hosts_data = evaluate_flake()
|
||||
inventory = build_inventory(hosts_data)
|
||||
print(json.dumps(inventory, indent=2))
|
||||
elif sys.argv[1] == "--host":
|
||||
# Ansible calls this to get vars for a specific host
|
||||
# We provide all vars in _meta.hostvars, so just return empty
|
||||
print(json.dumps({}))
|
||||
else:
|
||||
print(f"Unknown option: {sys.argv[1]}", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -1,5 +0,0 @@
|
||||
# Common variables for all hosts
|
||||
|
||||
ansible_user: root
|
||||
domain: home.2rjus.net
|
||||
vault_addr: https://vault01.home.2rjus.net:8200
|
||||
@@ -1,13 +0,0 @@
|
||||
# Static inventory for non-flake hosts
|
||||
#
|
||||
# Hosts defined here are merged with the dynamic flake inventory.
|
||||
# Use this for infrastructure that isn't managed by NixOS.
|
||||
#
|
||||
# Use short hostnames as inventory names with ansible_host for FQDN.
|
||||
|
||||
all:
|
||||
children:
|
||||
proxmox:
|
||||
hosts:
|
||||
pve1:
|
||||
ansible_host: pve1.home.2rjus.net
|
||||
@@ -1,48 +0,0 @@
|
||||
---
|
||||
# Reboot hosts with rolling strategy to avoid taking down redundant services
|
||||
#
|
||||
# Usage examples:
|
||||
# # Reboot a single host
|
||||
# ansible-playbook reboot.yml -l testvm01
|
||||
#
|
||||
# # Reboot all test hosts (one at a time)
|
||||
# ansible-playbook reboot.yml -l tier_test
|
||||
#
|
||||
# # Reboot all DNS servers safely (one at a time)
|
||||
# ansible-playbook reboot.yml -l role_dns
|
||||
#
|
||||
# Safety features:
|
||||
# - serial: 1 ensures only one host reboots at a time
|
||||
# - Waits for host to come back online before proceeding
|
||||
# - Groups hosts by role to avoid rebooting same-role hosts consecutively
|
||||
|
||||
- name: Reboot hosts (rolling)
|
||||
hosts: all
|
||||
serial: 1
|
||||
order: shuffle # Randomize to spread out same-role hosts
|
||||
gather_facts: false
|
||||
|
||||
vars:
|
||||
reboot_timeout: 300 # 5 minutes to wait for host to come back
|
||||
|
||||
tasks:
|
||||
- name: Display reboot target
|
||||
ansible.builtin.debug:
|
||||
msg: "Rebooting {{ inventory_hostname }} (role: {{ role | default('none') }})"
|
||||
|
||||
- name: Reboot the host
|
||||
ansible.builtin.systemd:
|
||||
name: reboot.target
|
||||
state: started
|
||||
async: 1
|
||||
poll: 0
|
||||
ignore_errors: true
|
||||
|
||||
- name: Wait for host to come back online
|
||||
ansible.builtin.wait_for_connection:
|
||||
delay: 5
|
||||
timeout: "{{ reboot_timeout }}"
|
||||
|
||||
- name: Display reboot result
|
||||
ansible.builtin.debug:
|
||||
msg: "{{ inventory_hostname }} rebooted successfully"
|
||||
@@ -1,40 +0,0 @@
|
||||
---
|
||||
# Restart a systemd service on target hosts
|
||||
#
|
||||
# Usage examples:
|
||||
# # Restart unbound on all DNS servers
|
||||
# ansible-playbook restart-service.yml -l role_dns -e service=unbound
|
||||
#
|
||||
# # Restart nginx on a specific host
|
||||
# ansible-playbook restart-service.yml -l http-proxy.home.2rjus.net -e service=nginx
|
||||
#
|
||||
# # Restart promtail on all prod hosts
|
||||
# ansible-playbook restart-service.yml -l tier_prod -e service=promtail
|
||||
|
||||
- name: Restart systemd service
|
||||
hosts: all
|
||||
gather_facts: false
|
||||
|
||||
tasks:
|
||||
- name: Validate service name provided
|
||||
ansible.builtin.fail:
|
||||
msg: |
|
||||
The 'service' variable is required.
|
||||
Usage: ansible-playbook restart-service.yml -l <target> -e service=<name>
|
||||
|
||||
Examples:
|
||||
-e service=nginx
|
||||
-e service=unbound
|
||||
-e service=promtail
|
||||
when: service is not defined
|
||||
run_once: true
|
||||
|
||||
- name: Restart {{ service }}
|
||||
ansible.builtin.systemd:
|
||||
name: "{{ service }}"
|
||||
state: restarted
|
||||
register: restart_result
|
||||
|
||||
- name: Display result
|
||||
ansible.builtin.debug:
|
||||
msg: "Service {{ service }} restarted on {{ inventory_hostname }}"
|
||||
@@ -50,7 +50,7 @@ homelab.host.tier = "test"; # or "prod"
|
||||
During the bootstrap process, status updates are sent to Loki. Query bootstrap logs with:
|
||||
|
||||
```
|
||||
{job="bootstrap", hostname="<hostname>"}
|
||||
{job="bootstrap", host="<hostname>"}
|
||||
```
|
||||
|
||||
### Bootstrap Stages
|
||||
@@ -72,7 +72,7 @@ The bootstrap process reports these stages via the `stage` label:
|
||||
|
||||
```
|
||||
# All bootstrap activity for a host
|
||||
{job="bootstrap", hostname="myhost"}
|
||||
{job="bootstrap", host="myhost"}
|
||||
|
||||
# Track all failures
|
||||
{job="bootstrap", stage="failed"}
|
||||
@@ -87,7 +87,7 @@ Once the VM reboots with its full configuration, it will start publishing metric
|
||||
|
||||
1. Check bootstrap completed successfully:
|
||||
```
|
||||
{job="bootstrap", hostname="<hostname>", stage="success"}
|
||||
{job="bootstrap", host="<hostname>", stage="success"}
|
||||
```
|
||||
|
||||
2. Verify the host is up and reporting metrics:
|
||||
@@ -102,7 +102,7 @@ Once the VM reboots with its full configuration, it will start publishing metric
|
||||
|
||||
4. Check logs are flowing:
|
||||
```
|
||||
{hostname="<hostname>"}
|
||||
{host="<hostname>"}
|
||||
```
|
||||
|
||||
5. Confirm expected services are running and producing logs
|
||||
@@ -119,7 +119,7 @@ Once the VM reboots with its full configuration, it will start publishing metric
|
||||
|
||||
1. Check bootstrap logs in Loki - if they never progress past `building`, the rebuild likely consumed all resources:
|
||||
```
|
||||
{job="bootstrap", hostname="<hostname>"}
|
||||
{job="bootstrap", host="<hostname>"}
|
||||
```
|
||||
|
||||
2. **USER**: SSH into the host and check the bootstrap service:
|
||||
@@ -149,7 +149,7 @@ Usually caused by running the `create-host` script without proper credentials, o
|
||||
|
||||
2. Check bootstrap logs for vault-related stages:
|
||||
```
|
||||
{job="bootstrap", hostname="<hostname>", stage=~"vault.*"}
|
||||
{job="bootstrap", host="<hostname>", stage=~"vault.*"}
|
||||
```
|
||||
|
||||
3. **USER**: Regenerate and provision credentials manually:
|
||||
|
||||
@@ -151,30 +151,11 @@ Rationale:
|
||||
- Well above NixOS system users (typically <1000)
|
||||
- Avoids Podman/container issues with very high GIDs
|
||||
|
||||
### Completed (2026-02-08) - OAuth2/OIDC for Grafana
|
||||
|
||||
**OAuth2 client deployed for Grafana on monitoring02:**
|
||||
- Client ID: `grafana`
|
||||
- Redirect URL: `https://grafana-test.home.2rjus.net/login/generic_oauth`
|
||||
- Scope maps: `openid`, `profile`, `email`, `groups` for `users` group
|
||||
- Role mapping: `admins` group → Grafana Admin, others → Viewer
|
||||
|
||||
**Configuration locations:**
|
||||
- Kanidm OAuth2 client: `services/kanidm/default.nix`
|
||||
- Grafana OIDC config: `services/grafana/default.nix`
|
||||
- Vault secret: `services/grafana/oauth2-client-secret`
|
||||
|
||||
**Key findings:**
|
||||
- PKCE is required by Kanidm - enable `use_pkce = true` in Grafana
|
||||
- Must set `email_attribute_path`, `login_attribute_path`, `name_attribute_path` to extract from userinfo
|
||||
- Users need: primary credential (password + TOTP for MFA), membership in `users` group, email address set
|
||||
- Unix password is separate from primary credential (web login requires primary credential)
|
||||
|
||||
### Next Steps
|
||||
|
||||
1. Enable PAM/NSS on production hosts (after test tier validation)
|
||||
2. Configure TrueNAS LDAP client for NAS integration testing
|
||||
3. Add OAuth2 clients for other services as needed
|
||||
3. Add OAuth2 clients (Grafana first)
|
||||
|
||||
## References
|
||||
|
||||
@@ -1,46 +0,0 @@
|
||||
# Garage S3 Storage Server
|
||||
|
||||
## Overview
|
||||
|
||||
Deploy a Garage instance for self-hosted S3-compatible object storage.
|
||||
|
||||
## Garage Basics
|
||||
|
||||
- S3-compatible distributed object storage designed for self-hosting
|
||||
- Supports per-key, per-bucket permissions (read/write/owner)
|
||||
- Keys without explicit grants have no access
|
||||
|
||||
## NixOS Module
|
||||
|
||||
Available as `services.garage` with these key options:
|
||||
|
||||
- `services.garage.enable` - Enable the service
|
||||
- `services.garage.package` - Must be set explicitly
|
||||
- `services.garage.settings` - Freeform TOML config (replication mode, ports, RPC, etc.)
|
||||
- `services.garage.settings.metadata_dir` - Metadata storage (SSD recommended)
|
||||
- `services.garage.settings.data_dir` - Data block storage (supports multiple dirs since v0.9)
|
||||
- `services.garage.environmentFile` - For secrets like `GARAGE_RPC_SECRET`
|
||||
- `services.garage.logLevel` - error/warn/info/debug/trace
|
||||
|
||||
The NixOS module only manages the server daemon. Buckets and keys are managed externally.
|
||||
|
||||
## Bucket/Key Management
|
||||
|
||||
No declarative NixOS options for buckets or keys. Two options:
|
||||
|
||||
1. **Terraform provider** - `jkossis/terraform-provider-garage` manages buckets, keys, and permissions via the Garage Admin API v2. Could live in `terraform/garage/` similar to `terraform/vault/`.
|
||||
2. **CLI** - `garage key create`, `garage bucket create`, `garage bucket allow`
|
||||
|
||||
## Integration Ideas
|
||||
|
||||
- Store Garage API keys in Vault, fetch via `vault.secrets` on consuming hosts
|
||||
- Terraform manages both Vault secrets and Garage buckets/keys
|
||||
- Enable admin API with token for Terraform provider access
|
||||
- Add Prometheus metrics scraping (Garage exposes metrics endpoint)
|
||||
|
||||
## Open Questions
|
||||
|
||||
- Single-node or multi-node replication?
|
||||
- Which host to deploy on?
|
||||
- What to store? (backups, media, app data)
|
||||
- Expose via HTTP proxy or direct S3 API only?
|
||||
@@ -1,244 +0,0 @@
|
||||
# Media PC Replacement
|
||||
|
||||
## Overview
|
||||
|
||||
Replace the aging Linux+Kodi media PC connected to the TV with a modern, compact solution. Primary use cases are Jellyfin/Kodi playback and watching Twitch/YouTube. The current machine (`media`, 10.69.31.50) is on VLAN 31.
|
||||
|
||||
## Current State
|
||||
|
||||
### Hardware
|
||||
- **CPU**: Intel Core i7-4770K @ 3.50GHz (Haswell, 4C/8T, 2013)
|
||||
- **GPU**: Nvidia GeForce GT 710 (Kepler, GK208B)
|
||||
- **OS**: Ubuntu 22.04.5 LTS (Jammy)
|
||||
- **Software**: Kodi
|
||||
- **Network**: `media.home.2rjus.net` at `10.69.31.50` (VLAN 31)
|
||||
|
||||
### Control & Display
|
||||
- **Input**: Wireless keyboard (works well, useful for browser)
|
||||
- **TV**: 1080p (no 4K/HDR currently, but may upgrade TV later)
|
||||
- **Audio**: Surround system connected via HDMI ARC from TV (PC → HDMI → TV → ARC → surround)
|
||||
|
||||
### Notes on Current Hardware
|
||||
- The i7-4770K is massively overpowered for media playback — it's a full desktop CPU from 2013
|
||||
- The GT 710 is a low-end passive GPU; supports NVDEC for H.264/H.265 hardware decode but limited to 4K@30Hz over HDMI 1.4
|
||||
- Ubuntu 22.04 is approaching EOL (April 2027) and is not managed by this repo
|
||||
- The whole system is likely in a full-size or mid-tower case — not ideal for a TV setup
|
||||
|
||||
### Integration
|
||||
- **Media source**: Jellyfin on `jelly01` (10.69.13.14) serves media from NAS via NFS
|
||||
- **DNS**: A record in `services/ns/external-hosts.nix`
|
||||
- **Not managed**: Not a NixOS host in this repo, no monitoring/auto-updates
|
||||
|
||||
## Options
|
||||
|
||||
### Option 1: Dedicated Streaming Device (Apple TV / Nvidia Shield)
|
||||
|
||||
| Aspect | Apple TV 4K | Nvidia Shield Pro |
|
||||
|--------|-------------|-------------------|
|
||||
| **Price** | ~$130-180 | ~$200 |
|
||||
| **Jellyfin** | Swiftfin app (good) | Jellyfin Android TV (good) |
|
||||
| **Kodi** | Not available (tvOS) | Full Kodi support |
|
||||
| **Twitch** | Native app | Native app |
|
||||
| **YouTube** | Native app | Native app |
|
||||
| **HDR/DV** | Dolby Vision + HDR10 | Dolby Vision + HDR10 |
|
||||
| **4K** | Yes | Yes |
|
||||
| **Form factor** | Tiny, silent | Small, silent |
|
||||
| **Remote** | Excellent Siri remote | Decent, supports CEC |
|
||||
| **Homelab integration** | None | Minimal (Plex/Kodi only) |
|
||||
|
||||
**Pros:**
|
||||
- Zero maintenance - appliance experience
|
||||
- Excellent app ecosystem (native Twitch, YouTube, streaming services)
|
||||
- Silent, tiny form factor
|
||||
- Great remote control / CEC support
|
||||
- Hardware-accelerated codec support out of the box
|
||||
|
||||
**Cons:**
|
||||
- No NixOS management, monitoring, or auto-updates
|
||||
- Can't run arbitrary software
|
||||
- Jellyfin clients are decent but not as mature as Kodi
|
||||
- Vendor lock-in (Apple ecosystem / Google ecosystem)
|
||||
- No SSH access for troubleshooting
|
||||
|
||||
### Option 2: NixOS Mini PC (Kodi Appliance)
|
||||
|
||||
A small form factor PC (Intel NUC, Beelink, MinisForum, etc.) running NixOS with Kodi as the desktop environment.
|
||||
|
||||
**NixOS has built-in support:**
|
||||
- `services.xserver.desktopManager.kodi.enable` - boots directly into Kodi
|
||||
- `kodi-gbm` package - Kodi with direct DRM/KMS rendering (no X11/Wayland needed)
|
||||
- `kodiPackages.jellycon` - Jellyfin integration for Kodi
|
||||
- `kodiPackages.sendtokodi` - plays streams via yt-dlp (Twitch, YouTube)
|
||||
- `kodiPackages.inputstream-adaptive` - adaptive streaming support
|
||||
|
||||
**Example NixOS config sketch:**
|
||||
```nix
|
||||
{ pkgs, ... }:
|
||||
{
|
||||
services.xserver.desktopManager.kodi = {
|
||||
enable = true;
|
||||
package = pkgs.kodi.withPackages (p: [
|
||||
p.jellycon
|
||||
p.sendtokodi
|
||||
p.inputstream-adaptive
|
||||
]);
|
||||
};
|
||||
|
||||
# Auto-login to Kodi session
|
||||
services.displayManager.autoLogin = {
|
||||
enable = true;
|
||||
user = "kodi";
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
**Pros:**
|
||||
- Full NixOS management (monitoring, auto-updates, vault, promtail)
|
||||
- Kodi is a proven TV interface with excellent remote/CEC support
|
||||
- JellyCon integrates Jellyfin library directly into Kodi
|
||||
- Twitch/YouTube via sendtokodi + yt-dlp or Kodi browser addons
|
||||
- Can run arbitrary services (e.g., Home Assistant dashboard)
|
||||
- Declarative, reproducible config in this repo
|
||||
|
||||
**Cons:**
|
||||
- More maintenance than an appliance
|
||||
- NixOS + Kodi on bare metal needs GPU driver setup (Intel iGPU is usually fine)
|
||||
- Kodi YouTube/Twitch addons are less polished than native apps
|
||||
- Need to buy hardware (~$150-400 for a decent mini PC)
|
||||
- Power consumption higher than a streaming device
|
||||
|
||||
### Option 3: NixOS Mini PC (Wayland Desktop)
|
||||
|
||||
A mini PC running NixOS with a lightweight Wayland compositor, launching Kodi for media and a browser for Twitch/YouTube.
|
||||
|
||||
**Pros:**
|
||||
- Best of both worlds: Kodi for media, Firefox/Chromium for Twitch/YouTube
|
||||
- Full NixOS management
|
||||
- Can switch between Kodi and browser easily
|
||||
- Native web experience for streaming sites
|
||||
|
||||
**Cons:**
|
||||
- More complex setup (compositor + Kodi + browser)
|
||||
- Harder to get a good "10-foot UI" experience
|
||||
- Keyboard/mouse may be needed alongside remote
|
||||
- Significantly more maintenance
|
||||
|
||||
## Comparison
|
||||
|
||||
| Criteria | Dedicated Device | NixOS Kodi | NixOS Desktop |
|
||||
|----------|-----------------|------------|---------------|
|
||||
| **Maintenance** | None | Low | Medium |
|
||||
| **Media experience** | Excellent | Excellent | Good |
|
||||
| **Twitch/YouTube** | Excellent (native apps) | Good (addons/yt-dlp) | Excellent (browser) |
|
||||
| **Homelab integration** | None | Full | Full |
|
||||
| **Form factor** | Tiny | Small | Small |
|
||||
| **Cost** | $130-200 | $150-400 | $150-400 |
|
||||
| **Silent operation** | Yes | Likely (fanless options) | Likely |
|
||||
| **CEC remote** | Yes | Yes (Kodi) | Partial |
|
||||
|
||||
## Decision: NixOS Mini PC with Kodi (Option 2)
|
||||
|
||||
**Rationale:**
|
||||
- Already comfortable with Kodi + wireless keyboard workflow
|
||||
- Browser access for Twitch/YouTube is important — Kodi can launch a browser when needed
|
||||
- Homelab integration comes for free (monitoring, auto-updates, vault)
|
||||
- Natural fit alongside the other 16 NixOS hosts in this repo
|
||||
- Dedicated devices lose the browser/keyboard workflow
|
||||
|
||||
### Display Server: Sway/Hyprland
|
||||
|
||||
Options evaluated:
|
||||
|
||||
| Approach | Pros | Cons |
|
||||
|----------|------|------|
|
||||
| Cage (kiosk) | Simplest, single-app | No browser without TTY switching |
|
||||
| kodi-gbm (no compositor) | Best HDR support | No browser at all, ALSA-only audio |
|
||||
| **Sway/Hyprland** | **Workspace switching, VA-API in browser** | **Slightly more config** |
|
||||
| Full DE (GNOME/KDE) | Everything works | Overkill, heavy |
|
||||
|
||||
**Decision: Sway or Hyprland** (Hyprland preferred — same as desktop)
|
||||
|
||||
- Kodi fullscreen on workspace 1, Firefox on workspace 2
|
||||
- Switch via keybinding on wireless keyboard
|
||||
- Auto-start both on login via greetd
|
||||
- Minimal config — no bar, no decorations, just workspaces
|
||||
- VA-API hardware decode works in Firefox on Wayland (important for YouTube/Twitch)
|
||||
- Can revisit kodi-gbm later if HDR becomes a priority (just a config change)
|
||||
|
||||
### Twitch/YouTube
|
||||
|
||||
Firefox on workspace 2, switched to via keyboard. Kodi addons (sendtokodi, YouTube plugin) available as secondary options but a real browser is the primary approach.
|
||||
|
||||
### Media Playback: Kodi + JellyCon + NFS Direct Path
|
||||
|
||||
Three options were evaluated for media playback:
|
||||
|
||||
| Approach | Transcoding | Library management | Watch state sync |
|
||||
|----------|-------------|-------------------|-----------------|
|
||||
| Jellyfin only (browser) | Yes — browsers lack codec support for DTS, PGS subs, etc. | Jellyfin | Jellyfin |
|
||||
| Kodi + NFS only | No — Kodi plays everything natively | Kodi local DB | None |
|
||||
| **Kodi + JellyCon + NFS** | **No — Kodi's native player, direct path via NFS** | **Jellyfin** | **Jellyfin** |
|
||||
|
||||
**Decision: Kodi + JellyCon with NFS direct path**
|
||||
|
||||
- JellyCon presents the Jellyfin library inside Kodi's UI (browse, search, metadata, artwork)
|
||||
- Playback uses Kodi's native player — direct play, no transcoding, full codec support including surround passthrough
|
||||
- JellyCon's "direct path" mode maps Jellyfin paths to local NFS mounts, so playback goes straight over NFS without streaming through Jellyfin's HTTP layer
|
||||
- Watch state, resume position, etc. sync back to Jellyfin — accessible from other devices too
|
||||
- NFS mount follows the same pattern as jelly01 (`nas.home.2rjus.net:/mnt/hdd-pool/media`); see the sketch below
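For reference, a minimal sketch of that NFS mount in NixOS; the local mount point and the mount options are assumptions:

```nix
{
  fileSystems."/mnt/media" = {
    device = "nas.home.2rjus.net:/mnt/hdd-pool/media";
    fsType = "nfs";
    # read-only is enough for playback; don't block boot if the NAS is down
    options = [ "ro" "noatime" "nofail" ];
  };
}
```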
|
||||
|
||||
### Audio Passthrough
|
||||
|
||||
Kodi on NixOS supports HDMI audio passthrough for surround formats (AC3, DTS, etc.). The ARC chain (PC → HDMI → TV → ARC → surround) works transparently — Kodi just needs to be configured for passthrough rather than decoding audio locally.
|
||||
|
||||
## Hardware
|
||||
|
||||
### Leading Candidate: GMKtec G3
|
||||
|
||||
- **CPU**: Intel N100 (Alder Lake-N, 4C/4T)
|
||||
- **RAM**: 16GB
|
||||
- **Storage**: 512GB NVMe
|
||||
- **Price**: ~NOK 2800 (~$250 USD)
|
||||
- **Source**: AliExpress
|
||||
|
||||
The N100 supports hardware decode for all relevant 4K codecs:
|
||||
|
||||
| Codec | Support | Used by |
|
||||
|-------|---------|---------|
|
||||
| H.264/AVC | Yes (Quick Sync) | Older media |
|
||||
| H.265/HEVC 10-bit | Yes (Quick Sync) | Most 4K media, HDR |
|
||||
| VP9 | Yes (Quick Sync) | YouTube 4K |
|
||||
| AV1 | Yes (Quick Sync) | YouTube, Twitch, newer encodes |
|
||||
|
||||
16GB RAM is comfortable for Kodi + browser + NixOS system services (node-exporter, promtail, etc.) with plenty of headroom.
|
||||
|
||||
### Key Requirements
|
||||
- HDMI 2.0+ for 4K future-proofing (current TV is 1080p)
|
||||
- Hardware video decode via VA-API / Intel Quick Sync
|
||||
- HDR support (for future TV upgrade)
|
||||
- Fanless or near-silent operation
|
||||
|
||||
## Implementation Steps
|
||||
|
||||
1. **Choose and order hardware**
|
||||
2. **Create host configuration** (`hosts/media1/`, sketched after this list)
|
||||
- Kodi desktop manager with Jellyfin + streaming addons
|
||||
- Intel/AMD iGPU driver and VA-API hardware decode
|
||||
- HDMI audio passthrough for surround
|
||||
- NFS mount for media (same pattern as jelly01)
|
||||
- Browser package (Firefox/Chromium) for Twitch/YouTube fallback
|
||||
- Standard system modules (monitoring, promtail, vault, auto-upgrade)
|
||||
3. **Install NixOS** on the mini PC
|
||||
4. **Configure Kodi** (Jellyfin server, addons, audio passthrough)
|
||||
5. **Update DNS** - point `media.home.2rjus.net` to new IP (or keep on VLAN 31)
|
||||
6. **Retire old media PC**
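A rough sketch of what `hosts/media1/` could pull together; the package names (notably the Wayland Kodi variant and the JellyCon addon attribute) and the option choices here are assumptions to be refined during implementation:

```nix
{ pkgs, ... }:
{
  # Kodi (Wayland build) with JellyCon, plus a browser for Twitch/YouTube
  environment.systemPackages = with pkgs; [
    (kodi-wayland.withPackages (kp: [ kp.jellycon ]))
    firefox
  ];

  # Intel iGPU with VA-API hardware decode
  hardware.graphics = {
    enable = true;
    extraPackages = [ pkgs.intel-media-driver ];
  };

  # Media library over NFS (same export as jelly01; see the mount sketch above)
  fileSystems."/mnt/media" = {
    device = "nas.home.2rjus.net:/mnt/hdd-pool/media";
    fsType = "nfs";
  };

  # PipeWire for HDMI audio; surround passthrough itself is configured inside Kodi
  services.pipewire = {
    enable = true;
    alsa.enable = true;
  };
}
```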
|
||||
|
||||
## Open Questions
|
||||
|
||||
- [x] What are the current media PC specs? — i7-4770K, GT 710, Ubuntu 22.04. Overkill CPU, weak GPU, large form factor. Not worth reusing if goal is compact/silent.
|
||||
- [x] VLAN? — Keep on VLAN 31 for now, same as current media PC. Can revisit later.
|
||||
- [x] Is CEC needed? — No, not using it currently. Can add later if desired.
|
||||
- [x] Is 4K HDR output needed? — TV is 1080p now, but want 4K/HDR capability for future TV upgrade
|
||||
- [x] Audio setup? — Surround system via HDMI ARC from TV. Media PC outputs HDMI to TV, TV passes audio to surround via ARC. Kodi/any player just needs HDMI audio output with surround passthrough.
|
||||
- [x] Are there streaming service apps needed? — No. Only Twitch/YouTube, which work fine in any browser.
|
||||
- [x] Budget? — ~NOK 2800 for GMKtec G3 (N100, 16GB, 512GB NVMe)
|
||||
@@ -1,156 +0,0 @@
|
||||
# Monitoring Stack Migration to VictoriaMetrics
|
||||
|
||||
## Overview
|
||||
|
||||
Migrate from Prometheus to VictoriaMetrics on a new host (monitoring02) to gain better compression
|
||||
and longer retention. Run in parallel with monitoring01 until validated, then switch over using
|
||||
a `monitoring` CNAME for seamless transition.
|
||||
|
||||
## Current State
|
||||
|
||||
**monitoring02** (10.69.13.24) - **PRIMARY**:
|
||||
- 4 CPU cores, 8GB RAM, 60GB disk
|
||||
- VictoriaMetrics with 3-month retention
|
||||
- vmalert with alerting enabled (routes to local Alertmanager)
|
||||
- Alertmanager -> alerttonotify -> NATS notification pipeline
|
||||
- Grafana with Kanidm OIDC (`grafana.home.2rjus.net`)
|
||||
- Loki (log aggregation)
|
||||
- CNAMEs: monitoring, alertmanager, grafana, grafana-test, metrics, vmalert, loki
|
||||
|
||||
**monitoring01** (10.69.13.13) - **SHUT DOWN**:
|
||||
- No longer running, pending decommission
|
||||
|
||||
## Decision: VictoriaMetrics
|
||||
|
||||
Per `docs/plans/long-term-metrics-storage.md`, VictoriaMetrics is the recommended starting point:
|
||||
- Single binary replacement for Prometheus
|
||||
- 5-10x better compression (30 days could become 180+ days in same space)
|
||||
- Same PromQL query language (Grafana dashboards work unchanged)
|
||||
- Same scrape config format (existing auto-generated configs work)
|
||||
|
||||
If multi-year retention with downsampling becomes necessary later, Thanos can be evaluated.
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ monitoring02 │
|
||||
│ VictoriaMetrics│
|
||||
│ + Grafana │
|
||||
monitoring │ + Loki │
|
||||
CNAME ──────────│ + Alertmanager │
|
||||
│ (vmalert) │
|
||||
└─────────────────┘
|
||||
▲
|
||||
│ scrapes
|
||||
┌───────────────┼───────────────┐
|
||||
│ │ │
|
||||
┌────┴────┐ ┌─────┴────┐ ┌─────┴────┐
|
||||
│ ns1 │ │ ha1 │ │ ... │
|
||||
│ :9100 │ │ :9100 │ │ :9100 │
|
||||
└─────────┘ └──────────┘ └──────────┘
|
||||
```
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
### Phase 1: Create monitoring02 Host [COMPLETE]
|
||||
|
||||
Host created and deployed at 10.69.13.24 (prod tier) with:
|
||||
- 4 CPU cores, 8GB RAM, 60GB disk
|
||||
- Vault integration enabled
|
||||
- NATS-based remote deployment enabled
|
||||
- Grafana with Kanidm OIDC deployed as test instance (`grafana-test.home.2rjus.net`)
|
||||
|
||||
### Phase 2: Set Up VictoriaMetrics Stack [COMPLETE]
|
||||
|
||||
New service module at `services/victoriametrics/` for VictoriaMetrics + vmalert + Alertmanager.
|
||||
Imported by monitoring02 alongside the existing Grafana service.
|
||||
|
||||
1. **VictoriaMetrics** (port 8428):
|
||||
- `services.victoriametrics.enable = true`
|
||||
- `retentionPeriod = "3"` (3 months)
|
||||
- All scrape configs migrated from Prometheus (22 jobs including auto-generated)
|
||||
- Static user override (DynamicUser disabled) for credential file access
|
||||
- OpenBao token fetch service + 30min refresh timer
|
||||
- Apiary bearer token via vault.secrets
|
||||
|
||||
2. **vmalert** for alerting rules:
|
||||
- Points to VictoriaMetrics datasource at localhost:8428
|
||||
- Reuses existing `services/monitoring/rules.yml` directly via `settings.rule`
|
||||
- Notifier sends to local Alertmanager at localhost:9093
|
||||
|
||||
3. **Alertmanager** (port 9093):
|
||||
- Same configuration as monitoring01 (alerttonotify webhook routing)
|
||||
- alerttonotify imported on monitoring02, routes alerts via NATS
|
||||
|
||||
4. **Grafana** (port 3000):
|
||||
- VictoriaMetrics datasource (localhost:8428) as default
|
||||
- Loki datasource pointing to localhost:3100
|
||||
|
||||
5. **Loki** (port 3100):
|
||||
- Same configuration as monitoring01 in standalone `services/loki/` module
|
||||
- Grafana datasource updated to localhost:3100
|
||||
|
||||
**Note:** pve-exporter and pushgateway scrape targets are not included on monitoring02.
|
||||
pve-exporter requires a local exporter instance; pushgateway is replaced by VictoriaMetrics
|
||||
native push support.
|
||||
|
||||
### Phase 3: Parallel Operation [COMPLETE]
|
||||
|
||||
Ran both monitoring01 and monitoring02 simultaneously to validate data collection and dashboards.
|
||||
|
||||
### Phase 4: Add monitoring CNAME [COMPLETE]
|
||||
|
||||
Added CNAMEs to monitoring02: monitoring, alertmanager, grafana, metrics, vmalert, loki.
|
||||
|
||||
### Phase 5: Update References [COMPLETE]
|
||||
|
||||
- Moved alertmanager, grafana, prometheus CNAMEs from http-proxy to monitoring02
|
||||
- Removed corresponding Caddy reverse proxy entries from http-proxy
|
||||
- monitoring02 Caddy serves alertmanager, grafana, metrics, vmalert directly
|
||||
|
||||
### Phase 6: Enable Alerting [COMPLETE]
|
||||
|
||||
- Switched vmalert from blackhole mode to local Alertmanager
|
||||
- alerttonotify service running on monitoring02 (NATS nkey from Vault)
|
||||
- prometheus-metrics Vault policy added for OpenBao scraping
|
||||
- Full alerting pipeline verified: vmalert -> Alertmanager -> alerttonotify -> NATS
|
||||
|
||||
### Phase 7: Cutover and Decommission [IN PROGRESS]
|
||||
|
||||
- monitoring01 shut down (2026-02-17)
|
||||
- Vault AppRole moved from approle.tf to hosts-generated.tf with extra_policies support
|
||||
|
||||
**Remaining cleanup (separate branch):**
|
||||
- [ ] Update `system/monitoring/logs.nix` - Promtail still points to monitoring01
|
||||
- [ ] Update `hosts/template2/bootstrap.nix` - Bootstrap Loki URL still points to monitoring01
|
||||
- [ ] Remove monitoring01 from flake.nix and host configuration
|
||||
- [ ] Destroy monitoring01 VM in Proxmox
|
||||
- [ ] Remove monitoring01 from terraform state
|
||||
- [ ] Remove or archive `services/monitoring/` (Prometheus config)
|
||||
|
||||
## Completed
|
||||
|
||||
- 2026-02-08: Phase 1 - monitoring02 host created
|
||||
- 2026-02-17: Phase 2 - VictoriaMetrics, vmalert, Alertmanager, Loki, Grafana configured
|
||||
- 2026-02-17: Phase 6 - Alerting enabled, CNAMEs migrated, monitoring01 shut down
|
||||
|
||||
## VictoriaMetrics Service Configuration
|
||||
|
||||
Implemented in `services/victoriametrics/default.nix`. Key design decisions:
|
||||
|
||||
- **Static user**: VictoriaMetrics NixOS module uses `DynamicUser`, overridden with a static
|
||||
`victoriametrics` user so vault.secrets and credential files work correctly (see the sketch after this list)
|
||||
- **Shared rules**: vmalert reuses `services/monitoring/rules.yml` via `settings.rule` path
|
||||
reference (no YAML-to-Nix conversion needed)
|
||||
- **Scrape config reuse**: Uses the same `lib/monitoring.nix` functions and
|
||||
`services/monitoring/external-targets.nix` as Prometheus for auto-generated targets
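A hedged sketch of the static-user override described above; the actual attribute layout in `services/victoriametrics/default.nix` may differ:

```nix
{ lib, ... }:
{
  users.users.victoriametrics = {
    isSystemUser = true;
    group = "victoriametrics";
  };
  users.groups.victoriametrics = { };

  # Override the upstream module's DynamicUser so vault.secrets and
  # credential files can be owned by a stable user
  systemd.services.victoriametrics.serviceConfig = {
    DynamicUser = lib.mkForce false;
    User = "victoriametrics";
    Group = "victoriametrics";
  };
}
```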
|
||||
|
||||
## Notes
|
||||
|
||||
- VictoriaMetrics uses port 8428 vs Prometheus 9090
|
||||
- PromQL compatibility is excellent
|
||||
- VictoriaMetrics native push replaces Pushgateway (remove from http-proxy if not needed)
|
||||
- monitoring02 deployed via OpenTofu using `create-host` script
|
||||
- Grafana dashboards defined declaratively via NixOS, not imported from monitoring01 state
|
||||
- Tempo and Pyroscope deferred (not actively used; can be added later if needed)
|
||||
@@ -1,135 +0,0 @@
|
||||
# monitoring02 Reboot Alert Investigation
|
||||
|
||||
**Date:** 2026-02-10
|
||||
**Status:** Completed - False positive identified
|
||||
|
||||
## Summary
|
||||
|
||||
A `host_reboot` alert fired for monitoring02 at 16:27:36 UTC. Investigation determined this was a **false positive** caused by NTP clock adjustments, not an actual reboot.
|
||||
|
||||
## Alert Details
|
||||
|
||||
- **Alert:** `host_reboot`
|
||||
- **Rule:** `changes(node_boot_time_seconds[10m]) > 0`
|
||||
- **Host:** monitoring02
|
||||
- **Time:** 2026-02-10T16:27:36Z
|
||||
|
||||
## Investigation Findings
|
||||
|
||||
### Evidence Against Actual Reboot
|
||||
|
||||
1. **Uptime:** System had been up for ~40 hours (143,751 seconds) at time of alert
|
||||
2. **Consistent BOOT_ID:** All logs showed the same systemd BOOT_ID (`fd26e7f3d86f4cd688d1b1d7af62f2ad`) from Feb 9 through the alert time
|
||||
3. **No log gaps:** Logs were continuous - no shutdown/restart cycle visible
|
||||
4. **Prometheus metrics:** `node_boot_time_seconds` showed a 1-second fluctuation, then returned to normal
|
||||
|
||||
### Root Cause: NTP Clock Adjustment
|
||||
|
||||
The `node_boot_time_seconds` metric fluctuated by 1 second due to how Linux calculates boot time:
|
||||
|
||||
```
|
||||
btime = current_wall_clock_time - monotonic_uptime
|
||||
```
|
||||
|
||||
When NTP adjusts the wall clock, `btime` shifts by the same amount. The `node_timex_*` metrics confirmed this:
|
||||
|
||||
| Metric | Value |
|
||||
|--------|-------|
|
||||
| `node_timex_maxerror_seconds` (max in 3h) | 1.02 seconds |
|
||||
| `node_timex_maxerror_seconds` (max in 24h) | 2.05 seconds |
|
||||
| `node_timex_sync_status` | 1 (synced) |
|
||||
| Current `node_timex_offset_seconds` | ~9ms (normal) |
|
||||
|
||||
The kernel's estimated maximum clock error spiked to over 1 second, causing the boot time calculation to drift momentarily.
|
||||
|
||||
Additionally, `systemd-resolved` logged "Clock change detected. Flushing caches." at 16:26:53Z, corroborating the NTP adjustment.
|
||||
|
||||
## Current Time Sync Configuration
|
||||
|
||||
### NixOS Guests
|
||||
- **NTP client:** systemd-timesyncd (NixOS default)
|
||||
- **No explicit configuration** in the codebase
|
||||
- Uses default NixOS NTP server pool
|
||||
|
||||
### Proxmox VMs
|
||||
- **Clocksource:** `kvm-clock` (optimal for KVM VMs)
|
||||
- **QEMU guest agent:** Enabled
|
||||
- **No additional QEMU timing args** configured
|
||||
|
||||
## Potential Improvements
|
||||
|
||||
### 1. Improve Alert Rule (Recommended)
|
||||
|
||||
Add tolerance to filter out small NTP adjustments:
|
||||
|
||||
```yaml
|
||||
# Current rule (triggers on any change)
|
||||
expr: changes(node_boot_time_seconds[10m]) > 0
|
||||
|
||||
# Improved rule (requires >60 second shift)
|
||||
expr: changes(node_boot_time_seconds[10m]) > 0 and abs(delta(node_boot_time_seconds[10m])) > 60
|
||||
```
|
||||
|
||||
### 2. Switch to Chrony (Optional)
|
||||
|
||||
Chrony handles time adjustments more gracefully than systemd-timesyncd:
|
||||
|
||||
```nix
|
||||
# In common/vm/qemu-guest.nix
|
||||
{
|
||||
services.qemuGuest.enable = true;
|
||||
|
||||
services.timesyncd.enable = false;
|
||||
services.chrony = {
|
||||
enable = true;
|
||||
extraConfig = ''
|
||||
makestep 1 3
|
||||
rtcsync
|
||||
'';
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Add QEMU Timing Args (Optional)
|
||||
|
||||
In `terraform/vms.tf`:
|
||||
|
||||
```hcl
|
||||
args = "-global kvm-pit.lost_tick_policy=delay -rtc driftfix=slew"
|
||||
```
|
||||
|
||||
### 4. Local NTP Server (Optional)
|
||||
|
||||
Running a local NTP server (e.g., on ns1/ns2) would reduce latency and improve sync stability across all hosts.
|
||||
|
||||
## Monitoring NTP Health
|
||||
|
||||
The `node_timex_*` metrics from node_exporter provide visibility into NTP health:
|
||||
|
||||
```promql
|
||||
# Clock offset from reference
|
||||
node_timex_offset_seconds
|
||||
|
||||
# Sync status (1 = synced)
|
||||
node_timex_sync_status
|
||||
|
||||
# Maximum estimated error - useful for alerting
|
||||
node_timex_maxerror_seconds
|
||||
```
|
||||
|
||||
A potential alert for NTP issues:
|
||||
|
||||
```yaml
|
||||
- alert: ntp_clock_drift
|
||||
expr: node_timex_maxerror_seconds > 1
|
||||
for: 5m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: "High clock drift on {{ $labels.hostname }}"
|
||||
description: "NTP max error is {{ $value }}s on {{ $labels.hostname }}"
|
||||
```
|
||||
|
||||
## Conclusion
|
||||
|
||||
No action required for the alert itself - the system was healthy. Consider implementing the improved alert rule to prevent future false positives from NTP adjustments.
|
||||
@@ -1,181 +0,0 @@
|
||||
# Native Nix Forgejo Runner on nix-cache02
|
||||
|
||||
## Goal
|
||||
|
||||
Add a second Forgejo Actions runner instance on nix-cache02 that executes jobs directly on the host (no containers). This allows CI builds to populate the nix binary cache automatically, reducing reliance on manually triggered builds before deployments.
|
||||
|
||||
## Motivation
|
||||
|
||||
- **Nix store caching**: The container-based `nix` label runs in ephemeral Podman containers, losing all nix store paths between jobs. Native execution uses the host's persistent store, so builds reuse cached paths automatically.
|
||||
- **Binary cache integration**: nix-cache02 *is* the binary cache server (Harmonia). Paths built by CI are immediately available to all hosts.
|
||||
- **Faster deploy cycle**: Currently updating a flake input (e.g. nixos-exporter) requires pushing to master, then waiting for the scheduled builder or manually triggering a build. With a native runner, repos can have CI workflows that run `nix build`, and those derivations are in the cache by the time hosts auto-upgrade.
|
||||
- **NixOS config builds**: Enables future workflows that build `nixosConfigurations.*` from this repo, populating the cache as a side effect of CI.
|
||||
|
||||
## Design
|
||||
|
||||
### Two Runner Instances
|
||||
|
||||
- **actions1** (existing) — Container-based, global runner available to all Forgejo repos. Unchanged.
|
||||
- **actions-native** (new) — Host-based, registered as a user-level runner under the `torjus` Forgejo account, so only repos owned by that user can target it.
|
||||
|
||||
### Trusted Repos
|
||||
|
||||
Repos that should be allowed to use the native runner:
|
||||
|
||||
- `torjus/nixos-servers`
|
||||
- `torjus/nixos-exporter`
|
||||
- `torjus/nixos` (gunter/magicman configs)
|
||||
- Other repos with nix builds that benefit from cache population (add as needed)
|
||||
|
||||
Restriction is configured in the Forgejo web UI when registering the runner — scope it to the user or specific repos.
|
||||
|
||||
### Label Configuration
|
||||
|
||||
```nix
|
||||
labels = [ "native-nix:host" ];
|
||||
```
|
||||
|
||||
Workflow files in trusted repos target this with `runs-on: native-nix`.
|
||||
|
||||
### Host Packages
|
||||
|
||||
The runner needs nix and basic tools available on the host:
|
||||
|
||||
```nix
|
||||
hostPackages = with pkgs; [
|
||||
bash
|
||||
coreutils
|
||||
curl
|
||||
gawk
|
||||
git
|
||||
gnused
|
||||
nodejs
|
||||
wget
|
||||
nix
|
||||
];
|
||||
```
|
||||
|
||||
## Security Analysis
|
||||
|
||||
### What the runner CAN access
|
||||
|
||||
- **Nix store** — Can read and write derivations. This is the whole point; harmonia serves the store to all hosts.
|
||||
- **Network** — Full network access during job execution.
|
||||
- **World-readable files** — Standard for any process on the system.
|
||||
|
||||
### What the runner CANNOT access
|
||||
|
||||
- **Cache signing key** — `/run/secrets/cache-secret` is mode `0400` root-owned. Harmonia signs derivations on serve, not on store write.
|
||||
- **Vault AppRole credentials** — `/var/lib/vault/approle/` is root-owned.
|
||||
- **Other vault secrets** — All in `/run/secrets/` with restrictive permissions.
|
||||
|
||||
### Mitigations
|
||||
|
||||
- **User-level runner** — Registered to the `torjus` user on Forgejo (not global), so only repos owned by that user can submit jobs.
|
||||
- **DynamicUser** — The runner uses systemd DynamicUser, so no persistent user account. Each invocation gets an ephemeral UID.
|
||||
- **Nix sandbox** — Nix builds already run sandboxed by default. Non-nix `run:` steps execute as the runner's system user but have no special privileges.
|
||||
- **Separate instance** — Container-based jobs (untrusted repos) remain on actions1 and never get host access.
|
||||
|
||||
### Accepted Risks
|
||||
|
||||
- A compromised trusted repo could inject bad derivations into the nix store/cache. This is an accepted risk since those repos already have deploy access to production hosts.
|
||||
- Jobs can consume host resources (CPU, memory, disk). The `runner.capacity` setting limits concurrent jobs.
|
||||
|
||||
## Implementation
|
||||
|
||||
### 1. Register runner on Forgejo and store token in Vault
|
||||
|
||||
- In Forgejo web UI: go to user settings > Actions > Runners, create a new runner registration token.
|
||||
- Store the token in Vault via Terraform.
|
||||
|
||||
**terraform/vault/variables.tf** — add variable:
|
||||
```hcl
|
||||
variable "forgejo_native_runner_token" {
|
||||
description = "Forgejo Actions runner token for native nix runner on nix-cache02"
|
||||
type = string
|
||||
default = "PLACEHOLDER"
|
||||
sensitive = true
|
||||
}
|
||||
```
|
||||
|
||||
**terraform/vault/secrets.tf** — add secret:
|
||||
```hcl
|
||||
"hosts/nix-cache02/forgejo-native-runner-token" = {
|
||||
auto_generate = false
|
||||
data = { token = var.forgejo_native_runner_token }
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Add NixOS configuration for native runner instance
|
||||
|
||||
Note: nix-cache02 already has an AppRole with access to `secret/data/hosts/nix-cache02/*` (defined in `terraform/vault/hosts-generated.tf`), so no approle changes are needed.
|
||||
|
||||
**File:** `hosts/nix-cache02/actions-runner.nix`
|
||||
|
||||
Add vault secret and runner instance alongside the existing overrides:
|
||||
|
||||
```nix
|
||||
# Fetch native runner token from Vault
|
||||
vault.secrets.forgejo-native-runner-token = {
|
||||
secretPath = "hosts/nix-cache02/forgejo-native-runner-token";
|
||||
extractKey = "token";
|
||||
mode = "0444";
|
||||
services = [ "gitea-runner-actions-native" ];
|
||||
};
|
||||
|
||||
# Native nix runner instance
|
||||
services.gitea-actions-runner.instances.actions-native = {
|
||||
enable = true;
|
||||
name = "${config.networking.hostName}-native";
|
||||
url = "https://code.t-juice.club";
|
||||
tokenFile = "/run/secrets/forgejo-native-runner-token";
|
||||
labels = [ "native-nix:host" ];
|
||||
hostPackages = with pkgs; [
|
||||
bash coreutils curl gawk git gnused nodejs wget nix
|
||||
];
|
||||
settings = {
|
||||
runner.capacity = 4;
|
||||
cache = {
|
||||
enabled = true;
|
||||
dir = "/var/lib/gitea-runner/actions-native/cache";
|
||||
};
|
||||
};
|
||||
};
|
||||
```
|
||||
|
||||
### 3. Build and deploy
|
||||
|
||||
1. Create feature branch
|
||||
2. Apply Terraform changes (variables + secrets + approle policy)
|
||||
3. Set the actual token value in `terraform.tfvars`
|
||||
4. Run `tofu apply` in `terraform/vault/`
|
||||
5. Build the NixOS configuration: `nix build .#nixosConfigurations.nix-cache02.config.system.build.toplevel`
|
||||
6. Deploy to nix-cache02
|
||||
7. Verify the native runner appears as online in Forgejo UI
|
||||
|
||||
### 4. Test with a workflow
|
||||
|
||||
In a trusted repo (e.g. nixos-exporter):
|
||||
|
||||
```yaml
|
||||
name: Build
|
||||
on: [push]
|
||||
jobs:
|
||||
build:
|
||||
runs-on: native-nix
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- run: nix build
|
||||
```
|
||||
|
||||
## Future Work
|
||||
|
||||
- **NixOS config CI**: Workflow that builds all `nixosConfigurations` on push to master, populating the binary cache.
|
||||
- **Nix store GC policy**: CI builds will accumulate store paths. Since this host is the binary cache, GC needs to be conservative — only delete paths not referenced by current system configurations. Defer to a follow-up.
|
||||
- **Resource limits**: Consider systemd MemoryMax/CPUQuota on the native runner if resource contention becomes an issue (see the sketch after this list).
|
||||
- **Additional host packages**: Evaluate whether tools like `cachix` or `nix-prefetch-*` should be added.
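If resource caps do become necessary, a possible shape is below; the unit name follows the `actions-native` instance naming used above, and the values are guesses:

```nix
{
  systemd.services."gitea-runner-actions-native".serviceConfig = {
    MemoryMax = "8G";  # hard cap; jobs exceeding it are OOM-killed
    CPUQuota = "400%"; # at most four cores' worth of CPU time
  };
}
```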
|
||||
|
||||
## Open Questions
|
||||
|
||||
- Should `hostPackages` include additional tools beyond the basics listed above?
|
||||
- Do we want a separate capacity for the native runner vs container runner, or is 4 fine for both?
|
||||
@@ -1,156 +0,0 @@
|
||||
# Nix Cache Host Reprovision
|
||||
|
||||
## Overview
|
||||
|
||||
Reprovision `nix-cache01` using the OpenTofu workflow, and improve the build/cache system with:
|
||||
1. NATS-based remote build triggering (replacing the current bash script)
|
||||
2. Safer flake update workflow that validates builds before pushing to master
|
||||
|
||||
## Status
|
||||
|
||||
**Phase 1: New Build Host** - COMPLETE
|
||||
**Phase 2: NATS Build Triggering** - COMPLETE
|
||||
**Phase 3: Safe Flake Update Workflow** - NOT STARTED
|
||||
**Phase 4: Complete Migration** - COMPLETE
|
||||
**Phase 5: Scheduled Builds** - COMPLETE
|
||||
|
||||
## Completed Work
|
||||
|
||||
### New Build Host (nix-cache02)
|
||||
|
||||
Instead of reprovisioning nix-cache01 in-place, we created a new host `nix-cache02` at 10.69.13.25:
|
||||
|
||||
- **Specs**: 8 CPU cores, 16GB RAM (temporarily, will increase to 24GB after nix-cache01 decommissioned), 200GB disk
|
||||
- **Provisioned via OpenTofu** with automatic Vault credential bootstrapping
|
||||
- **Builder service** configured with two repos:
|
||||
- `nixos-servers` → `git+https://git.t-juice.club/torjus/nixos-servers.git`
|
||||
- `nixos` (gunter) → `git+https://git.t-juice.club/torjus/nixos.git`
|
||||
|
||||
### NATS-Based Build Triggering
|
||||
|
||||
The `homelab-deploy` tool was extended with a builder mode:
|
||||
|
||||
**NATS Subjects:**
|
||||
- `build.<repo>.<target>` - e.g., `build.nixos-servers.all` or `build.nixos-servers.ns1`
|
||||
|
||||
**NATS Permissions (in DEPLOY account):**
|
||||
| User | Publish | Subscribe |
|
||||
|------|---------|-----------|
|
||||
| Builder | `build.responses.>` | `build.>` |
|
||||
| Test deployer | `deploy.test.>`, `deploy.discover`, `build.>` | `deploy.responses.>`, `deploy.discover`, `build.responses.>` |
|
||||
| Admin deployer | `deploy.>`, `build.>` | `deploy.>`, `build.responses.>` |
|
||||
|
||||
**Vault Secrets:**
|
||||
- `shared/homelab-deploy/builder-nkey` - NKey seed for builder authentication
|
||||
|
||||
**NixOS Configuration:**
|
||||
- `hosts/nix-cache02/builder.nix` - Builder service configuration
|
||||
- `services/nats/default.nix` - Updated with builder NATS user
|
||||
|
||||
**MCP Integration:**
|
||||
- `.mcp.json` updated with `--enable-builds` flag
|
||||
- Build tool available via MCP for Claude Code
|
||||
|
||||
**Tested:**
|
||||
- Single host build: `build nixos-servers testvm01` (~30s)
|
||||
- All hosts build: `build nixos-servers all` (16 hosts in ~226s)
|
||||
|
||||
### Harmonia Binary Cache
|
||||
|
||||
- Parameterized `services/nix-cache/harmonia.nix` to use hostname-based Vault paths
|
||||
- Parameterized `services/nix-cache/proxy.nix` for hostname-based domain
|
||||
- New signing key: `nix-cache02.home.2rjus.net-1`
|
||||
- Vault secret: `hosts/nix-cache02/cache-secret`
|
||||
- Removed unused Gitea Actions runner from nix-cache01
|
||||
|
||||
## Current State
|
||||
|
||||
### nix-cache02 (Active)
|
||||
- Running at 10.69.13.25
|
||||
- Serving `https://nix-cache.home.2rjus.net` (canonical URL)
|
||||
- Builder service active, responding to NATS build requests
|
||||
- Metrics exposed on port 9973 (`homelab-deploy-builder` job)
|
||||
- Harmonia binary cache server running
|
||||
- Signing key: `nix-cache02.home.2rjus.net-1`
|
||||
- Prod tier with `build-host` role
|
||||
|
||||
### nix-cache01 (Decommissioned)
|
||||
- VM deleted from Proxmox
|
||||
- Host configuration removed from repo
|
||||
- Vault AppRole and secrets removed
|
||||
- Old signing key removed from trusted-public-keys
|
||||
|
||||
## Remaining Work
|
||||
|
||||
### Phase 3: Safe Flake Update Workflow
|
||||
|
||||
1. Create `.github/workflows/flake-update-safe.yaml`
|
||||
2. Disable or remove old `flake-update.yaml`
|
||||
3. Test manually with `workflow_dispatch`
|
||||
4. Monitor first automated run
|
||||
|
||||
### Phase 4: Complete Migration ✅
|
||||
|
||||
1. ~~**Add Harmonia to nix-cache02**~~ ✅ Done - new signing key, parameterized service
|
||||
2. ~~**Add trusted public key to all hosts**~~ ✅ Done - `system/nix.nix` updated
|
||||
3. ~~**Test cache from other hosts**~~ ✅ Done - verified from testvm01
|
||||
4. ~~**Update proxy and DNS**~~ ✅ Done - `nix-cache.home.2rjus.net` CNAME now points to nix-cache02
|
||||
5. ~~**Deploy to all hosts**~~ ✅ Done - all hosts have new trusted key
|
||||
6. ~~**Decommission nix-cache01**~~ ✅ Done - 2026-02-10:
|
||||
- Removed `hosts/nix-cache01/` directory
|
||||
- Removed `services/nix-cache/build-flakes.{nix,sh}`
|
||||
- Removed Vault AppRole and secrets
|
||||
- Removed old signing key from `system/nix.nix`
|
||||
- Removed from `flake.nix`
|
||||
- Deleted VM from Proxmox
|
||||
|
||||
### Phase 5: Scheduled Builds ✅
|
||||
|
||||
Implemented a systemd timer on nix-cache02 that triggers builds every 2 hours:
|
||||
|
||||
- **Timer**: `scheduled-build.timer` runs every 2 hours with 5m random jitter (a sketch follows the file list below)
|
||||
- **Service**: `scheduled-build.service` calls `homelab-deploy build` for both repos
|
||||
- **Authentication**: Dedicated scheduler NKey stored in Vault
|
||||
- **NATS user**: Added to DEPLOY account with publish `build.>` and subscribe `build.responses.>`
|
||||
|
||||
Files:
|
||||
- `hosts/nix-cache02/scheduler.nix` - Timer and service configuration
|
||||
- `services/nats/default.nix` - Scheduler NATS user
|
||||
- `terraform/vault/secrets.tf` - Scheduler NKey secret
|
||||
- `terraform/vault/variables.tf` - Variable for scheduler NKey
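A hedged sketch of the timer/service wiring in `hosts/nix-cache02/scheduler.nix`; the unit names follow the plan, while the exact `homelab-deploy` invocation and the boot/jitter values are assumptions:

```nix
{ ... }:
{
  systemd.timers.scheduled-build = {
    wantedBy = [ "timers.target" ];
    timerConfig = {
      OnBootSec = "15m";
      OnUnitActiveSec = "2h";
      RandomizedDelaySec = "5m";
    };
  };

  systemd.services.scheduled-build = {
    serviceConfig.Type = "oneshot";
    # Trigger builds for both repos over NATS; exact CLI flags are illustrative
    script = ''
      homelab-deploy build nixos-servers all
      homelab-deploy build nixos all
    '';
  };
}
```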
|
||||
|
||||
## Resolved Questions
|
||||
|
||||
- **Parallel vs sequential builds?** Sequential - hosts share most packages, so subsequent builds are fast after the first
|
||||
- **What about gunter?** Configured as `nixos` repo in builder settings
|
||||
- **Disk size?** 200GB for new host
|
||||
- **Build host specs?** 8 cores, 16-24GB RAM matches current nix-cache01
|
||||
|
||||
### Phase 6: Observability
|
||||
|
||||
1. **Alerting rules** for build failures:
|
||||
```promql
|
||||
# Alert if any build fails
|
||||
increase(homelab_deploy_build_host_total{status="failure"}[1h]) > 0
|
||||
|
||||
# Alert if no successful builds in 24h (scheduled builds stopped)
|
||||
time() - homelab_deploy_build_last_success_timestamp > 86400
|
||||
```
|
||||
|
||||
2. **Grafana dashboard** for build metrics:
|
||||
- Build success/failure rate over time
|
||||
- Average build duration per host (histogram)
|
||||
- Build frequency (builds per hour/day)
|
||||
- Last successful build timestamp per repo
|
||||
|
||||
Available metrics:
|
||||
- `homelab_deploy_builds_total{repo, status}` - total builds by repo and status
|
||||
- `homelab_deploy_build_host_total{repo, host, status}` - per-host build counts
|
||||
- `homelab_deploy_build_duration_seconds_{bucket,sum,count}` - build duration histogram
|
||||
- `homelab_deploy_build_last_timestamp{repo}` - last build attempt
|
||||
- `homelab_deploy_build_last_success_timestamp{repo}` - last successful build
|
||||
|
||||
## Open Questions
|
||||
|
||||
- [x] ~~When to cut over DNS from nix-cache01 to nix-cache02?~~ Done - 2026-02-10
|
||||
- [ ] Implement safe flake update workflow before or after full migration?
|
||||
@@ -1,87 +0,0 @@
|
||||
# OpenBao + Kanidm OIDC Integration
|
||||
|
||||
## Status: Completed
|
||||
|
||||
Implemented 2026-02-09.
|
||||
|
||||
## Overview
|
||||
|
||||
Enable Kanidm users to authenticate to OpenBao (Vault) using OIDC for Web UI access. Members of the `admins` group get full read/write access to secrets.
|
||||
|
||||
## Implementation
|
||||
|
||||
### Files Modified
|
||||
|
||||
| File | Changes |
|
||||
|------|---------|
|
||||
| `terraform/vault/oidc.tf` | New - OIDC auth backend and roles |
|
||||
| `terraform/vault/policies.tf` | Added oidc-admin and oidc-default policies |
|
||||
| `terraform/vault/secrets.tf` | Added OAuth2 client secret |
|
||||
| `terraform/vault/approle.tf` | Granted kanidm01 access to openbao secrets |
|
||||
| `services/kanidm/default.nix` | Added openbao OAuth2 client, enabled imperative group membership |
|
||||
|
||||
### Kanidm Configuration
|
||||
|
||||
OAuth2 client `openbao` with:
|
||||
- Confidential client (uses client secret)
|
||||
- Web UI callback only: `https://vault.home.2rjus.net:8200/ui/vault/auth/oidc/oidc/callback`
|
||||
- Legacy crypto enabled (RS256 for OpenBao compatibility)
|
||||
- Scope maps for `admins` and `users` groups
|
||||
|
||||
Group membership is now managed imperatively (`overwriteMembers = false`) to prevent provisioning from resetting group memberships on service restart.
|
||||
|
||||
### OpenBao Configuration
|
||||
|
||||
OIDC auth backend at `/oidc` with two roles:
|
||||
|
||||
| Role | Bound Claims | Policy | Access |
|
||||
|------|--------------|--------|--------|
|
||||
| `admin` | `groups = admins@home.2rjus.net` | `oidc-admin` | Full read/write to secrets, system health/metrics |
|
||||
| `default` | (none) | `oidc-default` | Token lookup-self, system health |
|
||||
|
||||
Both roles request scopes: `openid`, `profile`, `email`, `groups`
|
||||
|
||||
### Policies
|
||||
|
||||
**oidc-admin:**
|
||||
- `secret/*` - create, read, update, delete, list
|
||||
- `sys/health` - read
|
||||
- `sys/metrics` - read
|
||||
- `sys/auth` - read
|
||||
- `sys/mounts` - read
|
||||
|
||||
**oidc-default:**
|
||||
- `auth/token/lookup-self` - read
|
||||
- `sys/health` - read
|
||||
|
||||
## Usage
|
||||
|
||||
### Web UI Login
|
||||
1. Navigate to https://vault.home.2rjus.net:8200
|
||||
2. Select "OIDC" authentication method
|
||||
3. Enter role: `admin` (for admins) or `default` (for any user)
|
||||
4. Click "Sign in with OIDC"
|
||||
5. Authenticate with Kanidm
|
||||
|
||||
### Group Management
|
||||
Add users to admins group for full access:
|
||||
```bash
|
||||
kanidm group add-members admins <username>
|
||||
```
|
||||
|
||||
## Limitations
|
||||
|
||||
**CLI login not supported:** Kanidm requires HTTPS for all redirect URIs on confidential (non-public) OAuth2 clients. OpenBao CLI uses `http://localhost:8250/oidc/callback` which Kanidm rejects. Public clients would allow localhost redirects, but OpenBao requires a client secret for OIDC auth.
|
||||
|
||||
## Lessons Learned
|
||||
|
||||
1. **Kanidm group names:** Groups are returned as `groupname@domain` (e.g., `admins@home.2rjus.net`), not just the short name
|
||||
2. **RS256 required:** OpenBao only supports RS256 for JWT signing; Kanidm defaults to ES256, requiring `enableLegacyCrypto = true`
|
||||
3. **Scope request:** OIDC roles must explicitly request the `groups` scope via `oidc_scopes`
|
||||
4. **Provisioning resets:** Kanidm provisioning with default `overwriteMembers = true` resets group memberships on restart
|
||||
5. **Two-phase Terraform:** Secret must exist before OIDC backend can validate discovery URL
|
||||
|
||||
## References
|
||||
|
||||
- [OpenBao JWT/OIDC Auth Method](https://openbao.org/docs/auth/jwt/)
|
||||
- [Kanidm OAuth2 Documentation](https://kanidm.github.io/kanidm/stable/integrations/oauth2.html)
|
||||
@@ -20,9 +20,9 @@ Hosts to migrate:
|
||||
| http-proxy | Stateless | Reverse proxy, recreate |
|
||||
| nats1 | Stateless | Messaging, recreate |
|
||||
| ha1 | Stateful | Home Assistant + Zigbee2MQTT + Mosquitto |
|
||||
| ~~monitoring01~~ | ~~Decommission~~ | ✓ Complete — replaced by monitoring02 (VictoriaMetrics) |
|
||||
| monitoring01 | Stateful | Prometheus, Grafana, Loki |
|
||||
| jelly01 | Stateful | Jellyfin metadata, watch history, config |
|
||||
| ~~pgdb1~~ | ~~Decommission~~ | ✓ Complete |
|
||||
| pgdb1 | Decommission | Only used by Open WebUI on gunter, migrating to local postgres |
|
||||
| ~~jump~~ | ~~Decommission~~ | ✓ Complete |
|
||||
| ~~auth01~~ | ~~Decommission~~ | ✓ Complete |
|
||||
| ~~ca~~ | ~~Deferred~~ | ✓ Complete |
|
||||
@@ -31,12 +31,10 @@ Hosts to migrate:
|
||||
|
||||
Before migrating any stateful host, ensure restic backups are in place and verified.
|
||||
|
||||
### ~~1a. Expand monitoring01 Grafana Backup~~ ✓ N/A
|
||||
### 1a. Expand monitoring01 Grafana Backup
|
||||
|
||||
~~The existing backup only covers `/var/lib/grafana/plugins` and a sqlite dump of `grafana.db`.
|
||||
Expand to back up all of `/var/lib/grafana/` to capture config directory and any other state.~~
|
||||
|
||||
No longer needed — monitoring01 decommissioned, replaced by monitoring02 with declarative Grafana dashboards.
|
||||
The existing backup only covers `/var/lib/grafana/plugins` and a sqlite dump of `grafana.db`.
|
||||
Expand to back up all of `/var/lib/grafana/` to capture config directory and any other state.
|
||||
|
||||
### 1b. Add Jellyfin Backup to jelly01
|
||||
|
||||
@@ -96,17 +94,15 @@ For each stateful host, the procedure is:
|
||||
7. Start services and verify functionality
|
||||
8. Decommission the old VM
|
||||
|
||||
### 3a. monitoring01 ✓ COMPLETE
|
||||
### 3a. monitoring01
|
||||
|
||||
~~1. Run final Grafana backup~~
|
||||
~~2. Provision new monitoring01 via OpenTofu~~
|
||||
~~3. After bootstrap, restore `/var/lib/grafana/` from restic~~
|
||||
~~4. Restart Grafana, verify dashboards and datasources are intact~~
|
||||
~~5. Prometheus and Loki start fresh with empty data (acceptable)~~
|
||||
~~6. Verify all scrape targets are being collected~~
|
||||
~~7. Decommission old VM~~
|
||||
|
||||
Replaced by monitoring02 with VictoriaMetrics, standalone Loki and Grafana modules. Host configuration, old service modules, and terraform resources removed.
|
||||
1. Run final Grafana backup
|
||||
2. Provision new monitoring01 via OpenTofu
|
||||
3. After bootstrap, restore `/var/lib/grafana/` from restic
|
||||
4. Restart Grafana, verify dashboards and datasources are intact
|
||||
5. Prometheus and Loki start fresh with empty data (acceptable)
|
||||
6. Verify all scrape targets are being collected
|
||||
7. Decommission old VM
|
||||
|
||||
### 3b. jelly01
|
||||
|
||||
@@ -167,19 +163,19 @@ Host was already removed from flake.nix and VM destroyed. Configuration cleaned
|
||||
|
||||
Host configuration, services, and VM already removed.
|
||||
|
||||
### pgdb1 ✓ COMPLETE
|
||||
### pgdb1 (in progress)
|
||||
|
||||
~~Only consumer was Open WebUI on gunter, which has been migrated to use local PostgreSQL.~~
|
||||
Only consumer was Open WebUI on gunter, which has been migrated to use local PostgreSQL.
|
||||
|
||||
~~1. Verify Open WebUI on gunter is using local PostgreSQL (not pgdb1)~~
|
||||
~~2. Remove host configuration from `hosts/pgdb1/`~~
|
||||
~~3. Remove `services/postgres/` (only used by pgdb1)~~
|
||||
~~4. Remove from `flake.nix`~~
|
||||
~~5. Remove Vault AppRole from `terraform/vault/approle.tf`~~
|
||||
~~6. Destroy the VM in Proxmox~~
|
||||
~~7. Commit cleanup~~
|
||||
1. ~~Verify Open WebUI on gunter is using local PostgreSQL (not pgdb1)~~ ✓
|
||||
2. ~~Remove host configuration from `hosts/pgdb1/`~~ ✓
|
||||
3. ~~Remove `services/postgres/` (only used by pgdb1)~~ ✓
|
||||
4. ~~Remove from `flake.nix`~~ ✓
|
||||
5. ~~Remove Vault AppRole from `terraform/vault/approle.tf`~~ ✓
|
||||
6. Destroy the VM in Proxmox
|
||||
7. ~~Commit cleanup~~ ✓
|
||||
|
||||
Host configuration, services, terraform resources, and VM removed. See `docs/plans/pgdb1-decommission.md` for detailed plan.
|
||||
See `docs/plans/pgdb1-decommission.md` for detailed plan.
|
||||
|
||||
## Phase 5: Decommission ca Host ✓ COMPLETE
|
||||
|
||||
|
||||
@@ -1,79 +0,0 @@
|
||||
# Local NTP with Chrony
|
||||
|
||||
## Overview/Goal
|
||||
|
||||
Set up pve1 as a local NTP server and switch all NixOS VMs from systemd-timesyncd to chrony, pointing at pve1 as the sole time source. This eliminates clock drift issues that cause false `host_reboot` alerts.
|
||||
|
||||
## Current State
|
||||
|
||||
- All NixOS hosts use `systemd-timesyncd` with default NixOS pool servers (`0.nixos.pool.ntp.org` etc.)
|
||||
- No NTP/timesyncd configuration exists in the repo — all defaults
|
||||
- pve1 (Proxmox, bare metal) already runs chrony but only as a client
|
||||
- VMs drift noticeably — ns1 (~19ms) and jelly01 (~39ms) are worst offenders
|
||||
- Clock step corrections from timesyncd trigger false `host_reboot` alerts via `changes(node_boot_time_seconds[10m]) > 0`
|
||||
- pve1 itself stays at 0ms offset thanks to chrony
|
||||
|
||||
## Why systemd-timesyncd is Insufficient
|
||||
|
||||
- Minimal SNTP client, no proper clock discipline or frequency tracking
|
||||
- Backs off polling interval when it thinks clock is stable, missing drift
|
||||
- Corrects via step adjustments rather than gradual slewing, causing metric jumps
|
||||
- Each VM resolves to different pool servers with varying accuracy
|
||||
|
||||
## Implementation Steps
|
||||
|
||||
### 1. Configure pve1 as NTP Server
|
||||
|
||||
Add to pve1's `/etc/chrony/chrony.conf`:
|
||||
|
||||
```
|
||||
# Allow NTP clients from the infrastructure subnet
|
||||
allow 10.69.13.0/24
|
||||
```
|
||||
|
||||
Restart chrony on pve1.
|
||||
|
||||
### 2. Add Chrony to NixOS System Config
|
||||
|
||||
Create `system/chrony.nix` (applied to all hosts via system imports):
|
||||
|
||||
```nix
|
||||
{
|
||||
# Disable systemd-timesyncd (chrony takes over)
|
||||
services.timesyncd.enable = false;
|
||||
|
||||
# Enable chrony pointing at pve1
|
||||
services.chrony = {
|
||||
enable = true;
|
||||
servers = [ "pve1.home.2rjus.net" ];
|
||||
serverOption = "iburst";
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Optional: Add Chrony Exporter
|
||||
|
||||
For better visibility into NTP sync quality:
|
||||
|
||||
```nix
|
||||
services.prometheus.exporters.chrony.enable = true;
|
||||
```
|
||||
|
||||
Add chrony exporter scrape targets via `homelab.monitoring.scrapeTargets` and create a Grafana dashboard for NTP offset across all hosts.
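A hedged sketch of that wiring; the chrony exporter port and the exact shape expected by `homelab.monitoring.scrapeTargets` are assumptions:

```nix
{
  services.prometheus.exporters.chrony = {
    enable = true;
    port = 9123; # chrony_exporter default, assumed
  };

  # Shape of the repo's scrapeTargets option is assumed here
  homelab.monitoring.scrapeTargets = [
    {
      job_name = "chrony";
      port = 9123;
    }
  ];
}
```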
|
||||
|
||||
### 4. Roll Out
|
||||
|
||||
- Deploy to a test-tier host first to verify
|
||||
- Then deploy to all hosts via auto-upgrade
|
||||
|
||||
## Open Questions
|
||||
|
||||
- [ ] Does pve1's chrony config need `local stratum 10` as fallback if upstream is unreachable?
|
||||
- [ ] Should we also enable `enableRTCTrimming` for the VMs?
|
||||
- [ ] Worth adding a chrony exporter on pve1 as well (manual install like node-exporter)?
|
||||
|
||||
## Notes
|
||||
|
||||
- No fallback NTP servers needed on VMs — if pve1 is down, all VMs are down too
|
||||
- The `host_reboot` alert rule (`changes(node_boot_time_seconds[10m]) > 0`) should stop false-firing once clock corrections are slewed instead of stepped
|
||||
- pn01/pn02 are bare metal but still benefit from syncing to pve1 for consistency
|
||||
@@ -1,196 +0,0 @@
|
||||
# Loki Setup Improvements
|
||||
|
||||
## Overview
|
||||
|
||||
The current Loki deployment on monitoring01 is functional but minimal. It lacks retention policies, rate limiting, and uses local filesystem storage. This plan evaluates improvement options across several dimensions: retention management, storage backend, resource limits, and operational improvements.
|
||||
|
||||
## Current State
|
||||
|
||||
**Loki** on monitoring01 (`services/monitoring/loki.nix`):
|
||||
- Single-node deployment, no HA
|
||||
- Filesystem storage at `/var/lib/loki/chunks` (~6.8 GB as of 2026-02-13)
|
||||
- TSDB index (v13 schema, 24h period)
|
||||
- 30-day compactor-based retention with basic rate limits
|
||||
- No caching layer
|
||||
- Auth disabled (trusted network)
|
||||
|
||||
**Promtail** on all 16 hosts (`system/monitoring/logs.nix`):
|
||||
- Ships systemd journal (JSON) + `/var/log/**/*.log`
|
||||
- Labels: `hostname`, `tier`, `role`, `level`, `job` (systemd-journal/varlog), `systemd_unit`
|
||||
- `level` label mapped from journal PRIORITY (critical/error/warning/notice/info/debug)
|
||||
- Hardcoded to `http://monitoring01.home.2rjus.net:3100`
|
||||
|
||||
**Additional log sources:**
|
||||
- `pipe-to-loki` script (manual log submission, `job=pipe-to-loki`)
|
||||
- Bootstrap logs from template2 (`job=bootstrap`)
|
||||
|
||||
**Context:** The VictoriaMetrics migration plan (`docs/plans/monitoring-migration-victoriametrics.md`) includes moving Loki to monitoring02 with "same configuration as current". These improvements could be applied either before or after that migration.
|
||||
|
||||
## Improvement Areas
|
||||
|
||||
### 1. Retention Policy
|
||||
|
||||
**Implemented.** Compactor-based retention with 30-day period. Note: Loki 3.6.3 requires `delete_request_store = "filesystem"` when retention is enabled (not documented in older guides).
|
||||
|
||||
```nix
|
||||
compactor = {
|
||||
working_directory = "/var/lib/loki/compactor";
|
||||
compaction_interval = "10m";
|
||||
retention_enabled = true;
|
||||
retention_delete_delay = "2h";
|
||||
retention_delete_worker_count = 150;
|
||||
delete_request_store = "filesystem";
|
||||
};
|
||||
|
||||
limits_config = {
|
||||
retention_period = "30d";
|
||||
};
|
||||
```
|
||||
|
||||
### 2. Storage Backend
|
||||
|
||||
**Decision:** Stay with filesystem storage for now. Garage S3 was considered but ruled out - the current single-node Garage (replication_factor=1) offers no real durability benefit over local disk. S3 storage can be revisited after the NAS migration, when a more robust S3-compatible solution will likely be available.
|
||||
|
||||
### 3. Limits Configuration
|
||||
|
||||
**Implemented.** Basic guardrails added alongside retention in `limits_config`:
|
||||
|
||||
```nix
|
||||
limits_config = {
|
||||
retention_period = "30d";
|
||||
ingestion_rate_mb = 10; # MB/s per tenant
|
||||
ingestion_burst_size_mb = 20; # Burst allowance
|
||||
max_streams_per_user = 10000; # Prevent label explosion
|
||||
max_query_series = 500; # Limit query resource usage
|
||||
max_query_parallelism = 8;
|
||||
};
|
||||
```
|
||||
|
||||
### 4. Promtail Label Improvements
|
||||
|
||||
**Problem:** Label inconsistencies and missing useful metadata:
|
||||
- The `varlog` scrape config uses `hostname` while journal uses `host` (different label name)
|
||||
- No `tier` or `role` labels, making it hard to filter logs by deployment tier or host function
|
||||
|
||||
**Implemented:** Standardized on `hostname` to match Prometheus labels. The journal scrape previously used a relabel from `__journal__hostname` to `host`; now both scrape configs use a static `hostname` label from `config.networking.hostName`. Also updated `pipe-to-loki` and bootstrap scripts to use `hostname` instead of `host`.
|
||||
|
||||
1. **Standardized label:** Both scrape configs use `hostname` (matching Prometheus) via shared `hostLabels`
|
||||
2. **Added `tier` label:** Static label from `config.homelab.host.tier` (`test`/`prod`) on both scrape configs
|
||||
3. **Added `role` label:** Static label from `config.homelab.host.role` on both scrape configs (conditionally, only when non-null)
|
||||
|
||||
No cardinality impact - `tier` and `role` are 1:1 with `hostname`, so they add metadata to existing streams without creating new ones.
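A sketch of how the shared labels could be wired in `system/monitoring/logs.nix`; the `homelab.host` option paths are taken from the description above, the rest is illustrative:

```nix
{ config, lib, ... }:
let
  hostLabels = {
    hostname = config.networking.hostName;
    tier = config.homelab.host.tier;
  } // lib.optionalAttrs (config.homelab.host.role != null) {
    role = config.homelab.host.role;
  };
in
{
  services.promtail.configuration.scrape_configs = [
    {
      job_name = "journal";
      journal = {
        max_age = "12h";
        labels = { job = "systemd-journal"; } // hostLabels;
      };
    }
    # the varlog scrape config would reuse the same hostLabels set
  ];
}
```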
|
||||
|
||||
This enables queries like:
|
||||
- `{tier="prod"} |= "error"` - all errors on prod hosts
|
||||
- `{role="dns"}` - all DNS server logs
|
||||
- `{tier="test", job="systemd-journal"}` - journal logs from test hosts
|
||||
|
||||
### 5. Journal Priority → Level Label
|
||||
|
||||
**Implemented.** Promtail pipeline stages map journal `PRIORITY` to a `level` label:
|
||||
|
||||
| PRIORITY | level |
|
||||
|----------|-------|
|
||||
| 0-2 | critical |
|
||||
| 3 | error |
|
||||
| 4 | warning |
|
||||
| 5 | notice |
|
||||
| 6 | info |
|
||||
| 7 | debug |
|
||||
|
||||
Uses a `json` stage to extract PRIORITY, `template` to map to level name, and `labels` to attach it. This gives reliable level filtering for all journal logs, unlike Loki's `detected_level` which only works for apps that embed level keywords in message text.
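A sketch of those stages as they might appear in the promtail scrape config (Nix attrset form); the Go-template expression is illustrative and untested:

```nix
{
  pipeline_stages = [
    # pull PRIORITY out of the JSON journal entry
    { json.expressions.priority = "PRIORITY"; }
    # map numeric priority to a level name
    {
      template = {
        source = "level";
        template = "{{ if le .priority \"2\" }}critical{{ else if eq .priority \"3\" }}error{{ else if eq .priority \"4\" }}warning{{ else if eq .priority \"5\" }}notice{{ else if eq .priority \"6\" }}info{{ else }}debug{{ end }}";
      };
    }
    # attach the extracted value as a label
    { labels.level = ""; }
  ];
}
```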
|
||||
|
||||
Example queries:
|
||||
- `{level="error"}` - all errors across the fleet
|
||||
- `{level=~"critical|error", tier="prod"}` - prod errors and criticals
|
||||
- `{level="warning", role="dns"}` - warnings from DNS servers
|
||||
|
||||
### 6. Enable JSON Logging on Services
|
||||
|
||||
**Problem:** Many services support structured JSON log output but may be using plain text by default. JSON logs are significantly easier to query in Loki - `| json` cleanly extracts all fields, whereas plain text requires fragile regex or pattern matching.
|
||||
|
||||
**Audit results (2026-02-13):**
|
||||
|
||||
**Already logging JSON:**
|
||||
- Caddy (all instances) - JSON by default for access logs
|
||||
- homelab-deploy (listener/builder) - Go app, logs structured JSON
|
||||
|
||||
**Supports JSON, not configured (high value):**
|
||||
|
||||
| Service | How to enable | Config file |
|
||||
|---------|--------------|-------------|
|
||||
| Prometheus | `--log.format=json` | `services/monitoring/prometheus.nix` |
|
||||
| Alertmanager | `--log.format=json` | `services/monitoring/prometheus.nix` |
|
||||
| Loki | `--log.format=json` | `services/monitoring/loki.nix` |
|
||||
| Grafana | `log.console.format = "json"` | `services/monitoring/grafana.nix` |
|
||||
| Tempo | `log_format: json` in config | `services/monitoring/tempo.nix` |
|
||||
| OpenBao | `log_format = "json"` | `services/vault/default.nix` |
|
||||
|
||||
**Supports JSON, not configured (lower value - minimal log output):**
|
||||
|
||||
| Service | How to enable |
|
||||
|---------|--------------|
|
||||
| Pyroscope | `--log.format=json` (OCI container) |
|
||||
| Blackbox Exporter | `--log.format=json` |
|
||||
| Node Exporter | `--log.format=json` (all 16 hosts) |
|
||||
| Systemd Exporter | `--log.format=json` (all 16 hosts) |
|
||||
|
||||
**No JSON support (syslog/text only):**
|
||||
- NSD, Unbound, OpenSSH, Mosquitto
|
||||
|
||||
**Needs verification:**
|
||||
- Kanidm, Jellyfin, Home Assistant, Harmonia, Zigbee2MQTT, NATS
|
||||
|
||||
**Recommendation:** Start with the monitoring stack (Prometheus, Alertmanager, Loki, Grafana, Tempo) since they're all Go apps with the same `--log.format=json` flag. Then OpenBao. The exporters are lower priority since they produce minimal log output.
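A possible starting point for the monitoring-stack portion, assuming the upstream NixOS modules expose the options used below (worth verifying per module before rollout):

```nix
{
  services.prometheus.extraFlags = [ "--log.format=json" ];
  services.prometheus.alertmanager.extraFlags = [ "--log.format=json" ];
  # Grafana's ini section is [log.console]; the settings path below is assumed
  services.grafana.settings."log.console".format = "json";
}
```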
|
||||
|
||||
### 7. Monitoring CNAME for Promtail Target
|
||||
|
||||
**Problem:** Promtail hardcodes `monitoring01.home.2rjus.net:3100`. The VictoriaMetrics migration plan already addresses this by switching to a `monitoring` CNAME.
|
||||
|
||||
**Recommendation:** This should happen as part of the monitoring02 migration, not independently. If we do Loki improvements before that migration, keep pointing to monitoring01.
|
||||
|
||||
## Priority Ranking
|
||||
|
||||
| # | Improvement | Effort | Impact | Status |
|
||||
|---|-------------|--------|--------|--------|
|
||||
| 1 | **Retention policy** | Low | High | Done (30d compactor retention) |
|
||||
| 2 | **Limits config** | Low | Medium | Done (rate limits + stream guards) |
|
||||
| 3 | **Promtail labels** | Trivial | Low | Done (hostname/tier/role/level) |
|
||||
| 4 | **Journal priority → level** | Low-medium | Medium | Done (pipeline stages) |
|
||||
| 5 | **JSON logging audit** | Low-medium | Medium | Audited, not yet enabled |
|
||||
| 6 | **Monitoring CNAME** | Low | Medium | Part of monitoring02 migration |
|
||||
|
||||
## Implementation Steps
|
||||
|
||||
### Phase 1: Retention + Labels (done 2026-02-13)
|
||||
|
||||
1. ~~Add `compactor` section to `services/monitoring/loki.nix`~~ Done
|
||||
2. ~~Add `limits_config` with 30-day retention and basic rate limits~~ Done
|
||||
3. ~~Update `system/monitoring/logs.nix`~~ Done:
|
||||
- Standardized on `hostname` label (matching Prometheus) for both scrape configs
|
||||
- Added `tier` and `role` static labels from `homelab.host` options
|
||||
- Added pipeline stages for journal PRIORITY → `level` label mapping
|
||||
4. ~~Update `pipe-to-loki` and bootstrap scripts to use `hostname`~~ Done
|
||||
5. ~~Deploy and verify labels~~ Done - all 15 hosts reporting with correct labels
|
||||
|
||||
### Phase 2: JSON Logging (not started)
|
||||
|
||||
Enable JSON logging on services that support it, starting with the monitoring stack:
|
||||
1. Prometheus, Alertmanager, Loki, Grafana, Tempo (`--log.format=json`)
|
||||
2. OpenBao (`log_format = "json"`)
|
||||
3. Lower priority: exporters (node-exporter, systemd-exporter, blackbox)
|
||||
|
||||
### Phase 3 (future): S3 Storage Migration
|
||||
|
||||
Revisit after NAS migration when a proper S3-compatible storage solution is available. At that point, add a new schema period with `object_store = "s3"` - the old filesystem period will continue serving historical data until it ages out past retention.
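When that happens, the change would look roughly like the sketch below; the cutover date, endpoint, and credentials handling are placeholders:

```nix
{
  services.loki.configuration = {
    schema_config.configs = [
      # the existing filesystem period stays in place and keeps serving old data
      {
        from = "2027-01-01"; # placeholder cutover date
        store = "tsdb";
        object_store = "s3";
        schema = "v13";
        index = { prefix = "index_"; period = "24h"; };
      }
    ];
    storage_config.aws = {
      s3 = "s3://ACCESS_KEY:SECRET_KEY@s3.example.internal/loki-chunks";
      s3forcepathstyle = true;
    };
  };
}
```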
|
||||
|
||||
## Open Questions
|
||||
|
||||
- [ ] Do we want per-stream retention (e.g., keep bootstrap/pipe-to-loki longer)?
|
||||
|
||||
## Notes
|
||||
|
||||
- Loki schema changes require adding a new period entry (not modifying existing ones). The old period continues serving historical data.
|
||||
- Loki 3.6.3 requires `delete_request_store = "filesystem"` in the compactor config when retention is enabled.
|
||||
- S3 storage deferred until post-NAS migration when a proper solution is available.
|
||||
- As of 2026-02-13, Loki uses ~6.8 GB for ~30 days of logs from 16 hosts. Prometheus uses ~7.6 GB on the same disk (33 GB total, ~8 GB free).
|
||||
docs/plans/monitoring-migration-victoriametrics.md (new file, 219 lines)
@@ -0,0 +1,219 @@
|
||||
# Monitoring Stack Migration to VictoriaMetrics
|
||||
|
||||
## Overview
|
||||
|
||||
Migrate from Prometheus to VictoriaMetrics on a new host (monitoring02) to gain better compression
|
||||
and longer retention. Run in parallel with monitoring01 until validated, then switch over using
|
||||
a `monitoring` CNAME for seamless transition.
|
||||
|
||||
## Current State
|
||||
|
||||
**monitoring01** (10.69.13.13):
|
||||
- 4 CPU cores, 4GB RAM, 33GB disk
|
||||
- Prometheus with 30-day retention (15s scrape interval)
|
||||
- Alertmanager (routes to alerttonotify webhook)
|
||||
- Grafana (dashboards, datasources)
|
||||
- Loki (log aggregation from all hosts via Promtail)
|
||||
- Tempo (distributed tracing)
|
||||
- Pyroscope (continuous profiling)
|
||||
|
||||
**Hardcoded References to monitoring01:**
|
||||
- `system/monitoring/logs.nix` - Promtail sends logs to `http://monitoring01.home.2rjus.net:3100`
|
||||
- `hosts/template2/bootstrap.nix` - Bootstrap logs to Loki (keep as-is until decommission)
|
||||
- `services/http-proxy/proxy.nix` - Caddy proxies Prometheus, Alertmanager, Grafana, Pyroscope, Pushgateway
|
||||
|
||||
**Auto-generated:**
|
||||
- Prometheus scrape targets (from `lib/monitoring.nix` + `homelab.monitoring.scrapeTargets`)
|
||||
- Node-exporter targets (from all hosts with static IPs)
|
||||
|
||||
## Decision: VictoriaMetrics
|
||||
|
||||
Per `docs/plans/long-term-metrics-storage.md`, VictoriaMetrics is the recommended starting point:
|
||||
- Single binary replacement for Prometheus
|
||||
- 5-10x better compression (30 days could become 180+ days in same space)
|
||||
- Same PromQL query language (Grafana dashboards work unchanged)
|
||||
- Same scrape config format (existing auto-generated configs work)
|
||||
|
||||
If multi-year retention with downsampling becomes necessary later, Thanos can be evaluated.
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ monitoring02 │
|
||||
│ VictoriaMetrics│
|
||||
│ + Grafana │
|
||||
monitoring │ + Loki │
|
||||
CNAME ──────────│ + Tempo │
|
||||
│ + Pyroscope │
|
||||
│ + Alertmanager │
|
||||
│ (vmalert) │
|
||||
└─────────────────┘
|
||||
▲
|
||||
│ scrapes
|
||||
┌───────────────┼───────────────┐
|
||||
│ │ │
|
||||
┌────┴────┐ ┌─────┴────┐ ┌─────┴────┐
|
||||
│ ns1 │ │ ha1 │ │ ... │
|
||||
│ :9100 │ │ :9100 │ │ :9100 │
|
||||
└─────────┘ └──────────┘ └──────────┘
|
||||
```
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
### Phase 1: Create monitoring02 Host
|
||||
|
||||
Use `create-host` script which handles flake.nix and terraform/vms.tf automatically.
|
||||
|
||||
1. **Run create-host**: `nix develop -c create-host monitoring02 10.69.13.24`
|
||||
2. **Update VM resources** in `terraform/vms.tf`:
|
||||
- 4 cores (same as monitoring01)
|
||||
- 8GB RAM (double, for VictoriaMetrics headroom)
|
||||
- 100GB disk (for 3+ months retention with compression)
|
||||
3. **Update host configuration**: Import monitoring services
|
||||
4. **Create Vault AppRole**: Add to `terraform/vault/approle.tf`
|
||||
|
||||
### Phase 2: Set Up VictoriaMetrics Stack
|
||||
|
||||
Create new service module at `services/monitoring/victoriametrics/` for testing alongside existing
|
||||
Prometheus config. Once validated, this can replace the Prometheus module.
|
||||
|
||||
1. **VictoriaMetrics** (port 8428):
|
||||
- `services.victoriametrics.enable = true`
|
||||
- `services.victoriametrics.retentionPeriod = "3m"` (3 months, increase later based on disk usage)
|
||||
- Migrate scrape configs via `prometheusConfig`
|
||||
- Use native push support (replaces Pushgateway)
|
||||
|
||||
2. **vmalert** for alerting rules:
|
||||
- `services.vmalert.enable = true`
|
||||
- Point to VictoriaMetrics for metrics evaluation
|
||||
- Keep rules in separate `rules.yml` file (same format as Prometheus)
|
||||
- No receiver configured during parallel operation (prevents duplicate alerts)
|
||||
|
||||
3. **Alertmanager** (port 9093):
|
||||
- Keep existing configuration (alerttonotify webhook routing)
|
||||
- Only enable receiver after cutover from monitoring01
|
||||
|
||||
4. **Loki** (port 3100):
|
||||
- Same configuration as current
|
||||
|
||||
5. **Grafana** (port 3000):
|
||||
- Define dashboards declaratively via NixOS options (not imported from monitoring01)
|
||||
- Reference existing dashboards on monitoring01 for content inspiration
|
||||
- Configure VictoriaMetrics datasource (port 8428)
|
||||
- Configure Loki datasource
|
||||
|
||||
6. **Tempo** (ports 3200, 3201):
|
||||
- Same configuration
|
||||
|
||||
7. **Pyroscope** (port 4040):
|
||||
- Same Docker-based deployment
|
||||
|
||||
### Phase 3: Parallel Operation
|
||||
|
||||
Run both monitoring01 and monitoring02 simultaneously:
|
||||
|
||||
1. **Dual scraping**: Both hosts scrape the same targets
|
||||
- Validates VictoriaMetrics is collecting data correctly
|
||||
|
||||
2. **Dual log shipping**: Configure Promtail to send logs to both Loki instances
|
||||
- Add a second Promtail client in `system/monitoring/logs.nix` pointing to monitoring02 (see the sketch after this list)
|
||||
|
||||
3. **Validate dashboards**: Access Grafana on monitoring02, verify dashboards work
|
||||
|
||||
4. **Validate alerts**: Verify vmalert evaluates rules correctly (no receiver = no notifications)
|
||||
|
||||
5. **Compare resource usage**: Monitor disk/memory consumption between hosts
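
A minimal sketch of the dual-shipping change from step 2, assuming Promtail is configured through the module's freeform `services.promtail.configuration` settings (the actual option layout in `system/monitoring/logs.nix` may differ):

```nix
# Ship logs to both Loki instances during parallel operation.
services.promtail.configuration.clients = [
  { url = "http://monitoring01.home.2rjus.net:3100/loki/api/v1/push"; }
  # Second client added for parallel operation; remove after cutover (Phase 5).
  { url = "http://monitoring02.home.2rjus.net:3100/loki/api/v1/push"; }
];
```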
|
||||
|
||||
### Phase 4: Add monitoring CNAME
|
||||
|
||||
Add CNAME to monitoring02 once validated:
|
||||
|
||||
```nix
|
||||
# hosts/monitoring02/configuration.nix
|
||||
homelab.dns.cnames = [ "monitoring" ];
|
||||
```
|
||||
|
||||
This creates `monitoring.home.2rjus.net` pointing to monitoring02.
|
||||
|
||||
### Phase 5: Update References
|
||||
|
||||
Update hardcoded references to use the CNAME:
|
||||
|
||||
1. **system/monitoring/logs.nix**:
|
||||
- Remove dual-shipping, point only to `http://monitoring.home.2rjus.net:3100`
|
||||
|
||||
2. **services/http-proxy/proxy.nix**: Update reverse proxy backends:
|
||||
- prometheus.home.2rjus.net -> monitoring.home.2rjus.net:8428
|
||||
- alertmanager.home.2rjus.net -> monitoring.home.2rjus.net:9093
|
||||
- grafana.home.2rjus.net -> monitoring.home.2rjus.net:3000
|
||||
- pyroscope.home.2rjus.net -> monitoring.home.2rjus.net:4040
|
||||
|
||||
Note: `hosts/template2/bootstrap.nix` stays pointed at monitoring01 until decommission.
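
A hedged sketch of one updated backend, assuming `services/http-proxy/proxy.nix` uses the stock Caddy module's `virtualHosts`/`extraConfig` options (the real module layout may differ):

```nix
services.caddy.virtualHosts."grafana.home.2rjus.net".extraConfig = ''
  # Backend now resolved via the monitoring CNAME instead of monitoring01
  reverse_proxy monitoring.home.2rjus.net:3000
'';
```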
|
||||
|
||||
### Phase 6: Enable Alerting
|
||||
|
||||
Once ready to cut over:
|
||||
1. Enable Alertmanager receiver on monitoring02
|
||||
2. Verify test alerts route correctly
|
||||
|
||||
### Phase 7: Cutover and Decommission
|
||||
|
||||
1. **Stop monitoring01**: Prevent duplicate alerts during transition
|
||||
2. **Update bootstrap.nix**: Point to `monitoring.home.2rjus.net`
|
||||
3. **Verify all targets scraped**: Check VictoriaMetrics UI
|
||||
4. **Verify logs flowing**: Check Loki on monitoring02
|
||||
5. **Decommission monitoring01**:
|
||||
- Remove from flake.nix
|
||||
- Remove host configuration
|
||||
- Destroy VM in Proxmox
|
||||
- Remove from terraform state
|
||||
|
||||
## Open Questions
|
||||
|
||||
- [ ] What disk size for monitoring02? 100GB should allow 3+ months with VictoriaMetrics compression
|
||||
- [ ] Which dashboards to recreate declaratively? (Review monitoring01 Grafana for current set)
|
||||
|
||||
## VictoriaMetrics Service Configuration
|
||||
|
||||
Example NixOS configuration for monitoring02:
|
||||
|
||||
```nix
# VictoriaMetrics replaces Prometheus
services.victoriametrics = {
  enable = true;
  retentionPeriod = "3m"; # 3 months, increase based on disk usage
  prometheusConfig = {
    global.scrape_interval = "15s";
    scrape_configs = [
      # Auto-generated node-exporter targets
      # Service-specific scrape targets
      # External targets
    ];
  };
};

# vmalert for alerting rules (no receiver during parallel operation)
services.vmalert = {
  enable = true;
  datasource.url = "http://localhost:8428";
  # notifier.alertmanager.url = "http://localhost:9093"; # Enable after cutover
  rule = [ ./rules.yml ];
};
```
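
For the declaratively defined dashboards and datasources from Phase 2, a minimal sketch assuming the standard `services.grafana.provision` options; datasource names and the dashboards path are illustrative:

```nix
services.grafana.provision = {
  enable = true;
  datasources.settings.datasources = [
    {
      name = "VictoriaMetrics";
      type = "prometheus"; # PromQL-compatible, so the Prometheus datasource type works
      url = "http://localhost:8428";
      isDefault = true;
    }
    {
      name = "Loki";
      type = "loki";
      url = "http://localhost:3100";
    }
  ];
  dashboards.settings.providers = [
    {
      name = "homelab";
      options.path = ./dashboards; # dashboard JSON checked into the repo
    }
  ];
};
```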
|
||||
|
||||
## Rollback Plan
|
||||
|
||||
If issues arise after cutover:
|
||||
1. Move `monitoring` CNAME back to monitoring01
|
||||
2. Restart monitoring01 services
|
||||
3. Revert Promtail config to point only to monitoring01
|
||||
4. Revert http-proxy backends
|
||||
|
||||
## Notes
|
||||
|
||||
- VictoriaMetrics uses port 8428 vs Prometheus 9090
|
||||
- PromQL compatibility is excellent
|
||||
- VictoriaMetrics native push replaces Pushgateway (remove from http-proxy if not needed)
|
||||
- monitoring02 deployed via OpenTofu using `create-host` script
|
||||
- Grafana dashboards defined declaratively via NixOS, not imported from monitoring01 state
|
||||
@@ -1,145 +0,0 @@
|
||||
# New Service Candidates
|
||||
|
||||
Ideas for additional services to deploy in the homelab. These lean more enterprise/obscure
|
||||
than the typical self-hosted fare.
|
||||
|
||||
## Litestream
|
||||
|
||||
Continuous SQLite replication to S3-compatible storage. Streams WAL changes in near-real-time,
|
||||
providing point-in-time recovery without scheduled backup jobs.
|
||||
|
||||
**Why:** Several services use SQLite (Home Assistant, potentially others). Litestream would
|
||||
give continuous backup to Garage S3 with minimal resource overhead and near-zero configuration.
|
||||
Replaces cron-based backup scripts with a small daemon per database.
|
||||
|
||||
**Integration points:**
|
||||
- Garage S3 as replication target (already deployed)
|
||||
- Home Assistant SQLite database is the primary candidate
|
||||
- Could also cover any future SQLite-backed services
|
||||
|
||||
**Complexity:** Low. Single Go binary, minimal config (source DB path + S3 endpoint).
|
||||
|
||||
**NixOS packaging:** Available in nixpkgs as `litestream`.
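
A hedged sketch of what this could look like, assuming the nixpkgs module exposes a freeform `settings` option mapping to `litestream.yml`; the database path, bucket, and Garage endpoint are placeholders:

```nix
services.litestream = {
  enable = true;
  settings.dbs = [
    {
      path = "/var/lib/hass/home-assistant_v2.db"; # Home Assistant SQLite DB (placeholder path)
      replicas = [
        {
          url = "s3://litestream-backups/home-assistant"; # placeholder bucket
          endpoint = "http://garage.home.2rjus.net:3900"; # placeholder Garage S3 endpoint
        }
      ];
    }
  ];
};
```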
|
||||
|
||||
---
|
||||
|
||||
## ntopng
|
||||
|
||||
Deep network traffic analysis and flow monitoring. Provides real-time visibility into bandwidth
|
||||
usage, protocol distribution, top talkers, and anomaly detection via a web UI.
|
||||
|
||||
**Why:** We have host-level metrics (node-exporter) and logs (Loki) but no network-level
|
||||
visibility. ntopng would show traffic patterns across the infrastructure — NFS throughput to
|
||||
the NAS, DNS query volume, inter-host traffic, and bandwidth anomalies. Useful for capacity
|
||||
planning and debugging network issues.
|
||||
|
||||
**Integration points:**
|
||||
- Could export metrics to Prometheus via its built-in exporter
|
||||
- Web UI behind http-proxy with Kanidm OIDC (if supported) or Pomerium
|
||||
- NetFlow/sFlow from managed switches (if available)
|
||||
- Passive traffic capture on a mirror port or the monitoring host itself
|
||||
|
||||
**Complexity:** Medium. Needs network tap or mirror port for full visibility, or can run
|
||||
in host-local mode. May need a dedicated interface or VLAN mirror.
|
||||
|
||||
**NixOS packaging:** Available in nixpkgs as `ntopng`.
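
A hedged sketch for a host-local deployment, assuming the nixpkgs ntopng module's `interfaces`/`httpPort` options; the capture interface is a placeholder:

```nix
services.ntopng = {
  enable = true;
  interfaces = [ "ens18" ]; # interface to monitor (placeholder; a mirror port would go here)
  httpPort = 3001;          # web UI, to be fronted by http-proxy
};
```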
|
||||
|
||||
---
|
||||
|
||||
## Renovate
|
||||
|
||||
Automated dependency update bot that understands Nix flakes natively. Creates branches/PRs
|
||||
to bump flake inputs on a configurable schedule.
|
||||
|
||||
**Why:** Currently `nix flake update` is manual. Renovate can automatically propose updates
|
||||
to individual flake inputs (nixpkgs, homelab-deploy, nixos-exporter, etc.), group related
|
||||
updates, and respect schedules. More granular than updating everything at once — can bump
|
||||
nixpkgs weekly but hold back other inputs, auto-merge patch-level changes, etc.
|
||||
|
||||
**Integration points:**
|
||||
- Runs against git.t-juice.club repositories
|
||||
- Understands `flake.lock` format natively
|
||||
- Could target both `nixos-servers` and `nixos` repos
|
||||
- Update branches would be validated by homelab-deploy builder
|
||||
|
||||
**Complexity:** Medium. Needs git forge integration (Gitea/Forgejo API). Self-hosted runner
|
||||
mode available. Configuration via `renovate.json` in each repo.
|
||||
|
||||
**NixOS packaging:** Available in nixpkgs as `renovate`.
|
||||
|
||||
---
|
||||
|
||||
## Pomerium
|
||||
|
||||
Identity-aware reverse proxy implementing zero-trust access. Every request is authenticated
|
||||
and authorized based on identity, device, and context — not just network location.
|
||||
|
||||
**Why:** Currently Caddy terminates TLS but doesn't enforce authentication on most services.
|
||||
Pomerium would put Kanidm OIDC authentication in front of every internal service, with
|
||||
per-route authorization policies (e.g., "only admins can access Prometheus," "require re-auth
|
||||
for Vault UI"). Directly addresses the security hardening plan's goals.
|
||||
|
||||
**Integration points:**
|
||||
- Kanidm as OIDC identity provider (already deployed)
|
||||
- Could replace or sit in front of Caddy for internal services
|
||||
- Per-route policies based on Kanidm groups (admins, users, ssh-users)
|
||||
- Centralizes access logging and audit trail
|
||||
|
||||
**Complexity:** Medium-high. Needs careful integration with existing Caddy reverse proxy.
|
||||
Decision needed on whether Pomerium replaces Caddy or works alongside it (Pomerium for
|
||||
auth, Caddy for TLS termination and routing, or Pomerium handles everything).
|
||||
|
||||
**NixOS packaging:** Available in nixpkgs as `pomerium`.
|
||||
|
||||
---
|
||||
|
||||
## Apache Guacamole
|
||||
|
||||
Clientless remote desktop and SSH gateway. Provides browser-based access to hosts via
|
||||
RDP, VNC, SSH, and Telnet with no client software required. Supports session recording
|
||||
and playback.
|
||||
|
||||
**Why:** Provides an alternative remote access path that doesn't require VPN software or
|
||||
SSH keys on the client device. Useful for accessing hosts from untrusted machines (phone,
|
||||
borrowed laptop) or providing temporary access to others. Session recording gives an audit
|
||||
trail. Could complement the WireGuard remote access plan rather than replace it.
|
||||
|
||||
**Integration points:**
|
||||
- Kanidm for authentication (OIDC or LDAP)
|
||||
- Behind http-proxy or Pomerium for TLS
|
||||
- SSH access to all hosts in the fleet
|
||||
- Session recordings could be stored on Garage S3
|
||||
- Could serve as the "emergency access" path when VPN is unavailable
|
||||
|
||||
**Complexity:** Medium. Java-based (guacd + web app), typically needs PostgreSQL for
|
||||
connection/user storage (already available). Docker is the common deployment method but
|
||||
native packaging exists.
|
||||
|
||||
**NixOS packaging:** Available in nixpkgs as `guacamole-server` and `guacamole-client`.
|
||||
|
||||
---
|
||||
|
||||
## CrowdSec
|
||||
|
||||
Collaborative intrusion prevention system with crowd-sourced threat intelligence.
|
||||
Parses logs to detect attack patterns, applies remediation (firewall bans, CAPTCHA),
|
||||
and shares/receives threat signals from a global community network.
|
||||
|
||||
**Why:** Goes beyond fail2ban with behavioral detection, crowd-sourced IP reputation,
|
||||
and a scenario-based engine. Fits the security hardening plan. The community blocklist
|
||||
means we benefit from threat intelligence gathered across thousands of deployments.
|
||||
Could parse SSH logs, HTTP access logs, and other service logs to detect and block
|
||||
malicious activity.
|
||||
|
||||
**Integration points:**
|
||||
- Could consume logs from Loki or directly from journald/log files
|
||||
- Firewall bouncer for iptables/nftables remediation
|
||||
- Caddy bouncer for HTTP-level blocking
|
||||
- Prometheus metrics exporter for alert integration
|
||||
- Scenarios available for SSH brute force, HTTP scanning, and more
|
||||
- Feeds into existing alerting pipeline (Alertmanager -> alerttonotify)
|
||||
|
||||
**Complexity:** Medium. Agent (log parser + decision engine) on each host or centralized.
|
||||
Bouncers (enforcement) on edge hosts. Free community tier includes threat intel access.
|
||||
|
||||
**NixOS packaging:** Available in nixpkgs as `crowdsec`.
|
||||
docs/plans/nix-cache-reprovision.md
Normal file
@@ -0,0 +1,212 @@
|
||||
# Nix Cache Host Reprovision
|
||||
|
||||
## Overview
|
||||
|
||||
Reprovision `nix-cache01` using the OpenTofu workflow, and improve the build/cache system with:
|
||||
1. NATS-based remote build triggering (replacing the current bash script)
|
||||
2. Safer flake update workflow that validates builds before pushing to master
|
||||
|
||||
## Current State
|
||||
|
||||
### Host Configuration
|
||||
- `nix-cache01` at 10.69.13.15 serves the binary cache via Harmonia
|
||||
- Runs Gitea Actions runner for CI workflows
|
||||
- Has `homelab.deploy.enable = true` (already supports NATS-based deployment)
|
||||
- Uses a dedicated XFS volume at `/nix` for cache storage
|
||||
|
||||
### Current Build System (`services/nix-cache/build-flakes.sh`)
|
||||
- Runs every 30 minutes via systemd timer
|
||||
- Clones/pulls two repos: `nixos-servers` and `nixos` (gunter)
|
||||
- Builds all hosts with `nixos-rebuild build` (no blacklist despite docs mentioning it)
|
||||
- Pushes success/failure metrics to pushgateway
|
||||
- Simple but has no filtering, no parallelism, no remote triggering
|
||||
|
||||
### Current Flake Update Workflow (`.github/workflows/flake-update.yaml`)
|
||||
- Runs daily at midnight via cron
|
||||
- Runs `nix flake update --commit-lock-file`
|
||||
- Pushes directly to master
|
||||
- No build validation — can push broken inputs
|
||||
|
||||
## Improvement 1: NATS-Based Remote Build Triggering
|
||||
|
||||
### Design
|
||||
|
||||
Extend the existing `homelab-deploy` tool to support a "build" command that triggers builds on the cache host. This reuses the NATS infrastructure already in place.
|
||||
|
||||
| Approach | Pros | Cons |
|
||||
|----------|------|------|
|
||||
| Extend homelab-deploy | Reuses existing NATS auth, NKey handling, CLI | Adds scope to existing tool |
|
||||
| New nix-cache-tool | Clean separation | Duplicate NATS boilerplate, new credentials |
|
||||
| Gitea Actions webhook | No custom tooling | Less flexible, tied to Gitea |
|
||||
|
||||
**Recommendation:** Extend `homelab-deploy` with a build subcommand. The tool already has NATS client code, authentication handling, and a listener module in NixOS.
|
||||
|
||||
### Implementation
|
||||
|
||||
1. Add new message type to homelab-deploy: `build.<host>` subject
|
||||
2. Listener on nix-cache01 subscribes to `build.>` wildcard
|
||||
3. On message receipt, builds the specified host and returns success/failure
|
||||
4. CLI command: `homelab-deploy build <hostname>` or `homelab-deploy build --all`
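
A purely illustrative sketch of how the listener side might be enabled on nix-cache01; the `homelab.deploy.buildListener` option does not exist yet and is hypothetical:

```nix
# Hypothetical option names — a sketch of the proposed design only.
homelab.deploy = {
  enable = true;          # existing NATS deploy listener
  buildListener = {
    enable = true;        # hypothetical: subscribe to build.> subjects
    repos = [ "nixos-servers" "nixos" ];
  };
};
```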
|
||||
|
||||
### Benefits
|
||||
- Trigger rebuild for specific host to ensure it's cached
|
||||
- Could be called from CI after merging PRs
|
||||
- Reuses existing NATS infrastructure and auth
|
||||
- Progress/status could stream back via NATS reply
|
||||
|
||||
## Improvement 2: Smarter Flake Update Workflow
|
||||
|
||||
### Current Problems
|
||||
1. Updates can push breaking changes to master
|
||||
2. No visibility into what broke when it does
|
||||
3. Hosts that auto-update can pull broken configs
|
||||
|
||||
### Proposed Workflow
|
||||
|
||||
```
┌────────────────────────────────────────────────────────────┐
│                   Flake Update Workflow                    │
├────────────────────────────────────────────────────────────┤
│  1. nix flake update (on feature branch)                   │
│  2. Build ALL hosts locally                                 │
│  3. If all pass → fast-forward merge to master             │
│  4. If any fail → create PR with failure logs attached     │
└────────────────────────────────────────────────────────────┘
```
|
||||
|
||||
### Implementation Options
|
||||
|
||||
| Option | Description | Pros | Cons |
|
||||
|--------|-------------|------|------|
|
||||
| **A: Self-hosted runner** | Build on nix-cache01 | Fast (local cache), simple | Ties up cache host during build |
|
||||
| **B: Gitea Actions only** | Use container runner | Clean separation | Slow (no cache), resource limits |
|
||||
| **C: Hybrid** | Trigger builds on nix-cache01 via NATS from Actions | Best of both | More complex |
|
||||
|
||||
**Recommendation:** Option A with nix-cache01 as the runner. The host is already running Gitea Actions runner and has the cache. Building all ~16 hosts is disk I/O heavy but feasible on dedicated hardware.
|
||||
|
||||
### Workflow Steps
|
||||
|
||||
1. Workflow runs on schedule (daily or weekly)
|
||||
2. Creates branch `flake-update/YYYY-MM-DD`
|
||||
3. Runs `nix flake update --commit-lock-file`
|
||||
4. Builds each host: `nix build .#nixosConfigurations.<host>.config.system.build.toplevel`
|
||||
5. If all succeed:
|
||||
- Fast-forward merge to master
|
||||
- Delete feature branch
|
||||
6. If any fail:
|
||||
- Create PR from the update branch
|
||||
- Attach build logs as PR comment
|
||||
- Label PR with `needs-review` or `build-failure`
|
||||
- Do NOT merge automatically
|
||||
|
||||
### Workflow File Changes
|
||||
|
||||
```yaml
# New: .github/workflows/flake-update-safe.yaml
name: Safe flake update
on:
  schedule:
    - cron: "0 2 * * 0" # Weekly on Sunday at 2 AM
  workflow_dispatch: # Manual trigger

jobs:
  update-and-validate:
    runs-on: homelab # Use self-hosted runner on nix-cache01
    steps:
      - uses: actions/checkout@v4
        with:
          ref: master
          fetch-depth: 0 # Need full history for merge

      - name: Create update branch
        run: |
          BRANCH="flake-update/$(date +%Y-%m-%d)"
          git checkout -b "$BRANCH"
          # Persist the branch name so later steps can reference it
          echo "BRANCH=$BRANCH" >> "$GITHUB_ENV"

      - name: Update flake
        run: nix flake update --commit-lock-file

      - name: Build all hosts
        id: build
        run: |
          set -o pipefail # make the tee pipeline report nix build failures
          FAILED=""
          for host in $(nix flake show --json | jq -r '.nixosConfigurations | keys[]'); do
            echo "Building $host..."
            if ! nix build ".#nixosConfigurations.$host.config.system.build.toplevel" 2>&1 | tee "build-$host.log"; then
              FAILED="$FAILED $host"
            fi
          done
          echo "failed=$FAILED" >> "$GITHUB_OUTPUT"

      - name: Merge to master (if all pass)
        if: steps.build.outputs.failed == ''
        run: |
          git checkout master
          git merge --ff-only "$BRANCH"
          git push origin master
          git push origin --delete "$BRANCH"

      - name: Create PR (if any fail)
        if: steps.build.outputs.failed != ''
        run: |
          git push origin "$BRANCH"
          # Create PR via Gitea API with build logs
          # ... (PR creation with log attachment)
```
|
||||
|
||||
## Migration Steps
|
||||
|
||||
### Phase 1: Reprovision Host via OpenTofu
|
||||
|
||||
1. Add `nix-cache01` to `terraform/vms.tf`:
|
||||
```hcl
"nix-cache01" = {
  ip        = "10.69.13.15/24"
  cpu_cores = 4
  memory    = 8192
  disk_size = "100G" # Larger for nix store
}
```
|
||||
|
||||
2. Shut down existing nix-cache01 VM
|
||||
3. Run `tofu apply` to provision new VM
|
||||
4. Verify bootstrap completes and cache is serving
|
||||
|
||||
**Note:** The cache will be cold after reprovision. Run initial builds to populate.
|
||||
|
||||
### Phase 2: Add Build Triggering to homelab-deploy
|
||||
|
||||
1. Add `build` command to homelab-deploy CLI
|
||||
2. Add listener handler in NixOS module for `build.*` subjects
|
||||
3. Update nix-cache01 config to enable build listener
|
||||
4. Test with `homelab-deploy build testvm01`
|
||||
|
||||
### Phase 3: Implement Safe Flake Update Workflow
|
||||
|
||||
1. Create `.github/workflows/flake-update-safe.yaml`
|
||||
2. Disable or remove old `flake-update.yaml`
|
||||
3. Test manually with `workflow_dispatch`
|
||||
4. Monitor first automated run
|
||||
|
||||
### Phase 4: Remove Old Build Script
|
||||
|
||||
1. After new workflow is stable, remove:
|
||||
- `services/nix-cache/build-flakes.nix`
|
||||
- `services/nix-cache/build-flakes.sh`
|
||||
2. The new workflow handles scheduled builds
|
||||
|
||||
## Open Questions
|
||||
|
||||
- [ ] What runner labels should the self-hosted runner use for the update workflow?
|
||||
- [ ] Should we build hosts in parallel (faster) or sequentially (easier to debug)?
|
||||
- [ ] How long to keep flake-update PRs open before auto-closing stale ones?
|
||||
- [ ] Should successful updates trigger a NATS notification to rebuild all hosts?
|
||||
- [ ] What to do about `gunter` (external nixos repo) - include in validation?
|
||||
- [ ] Disk size for new nix-cache01 - is 100G enough for cache + builds?
|
||||
|
||||
## Notes
|
||||
|
||||
- The existing `homelab.deploy.enable = true` on nix-cache01 means it already has NATS connectivity
|
||||
- The Harmonia service and cache signing key will work the same after reprovision
|
||||
- Actions runner token is in Vault, will be provisioned automatically
|
||||
- Consider adding a `homelab.host.role = "build-host"` label for monitoring/filtering
|
||||
@@ -1,232 +0,0 @@
|
||||
# NixOS Hypervisor
|
||||
|
||||
## Overview
|
||||
|
||||
Experiment with running a NixOS-based hypervisor as an alternative/complement to the current Proxmox setup. Goal is better homelab integration — declarative config, monitoring, auto-updates — while retaining the ability to run VMs with a Terraform-like workflow.
|
||||
|
||||
## Motivation
|
||||
|
||||
- Proxmox works but doesn't integrate with the NixOS-managed homelab (no monitoring, no auto-updates, no vault, no declarative config)
|
||||
- The PN51 units (once stable) are good candidates for experimentation — test-tier, plenty of RAM (32-64GB), 8C/16T
|
||||
- Long-term: could reduce reliance on Proxmox or provide a secondary hypervisor pool
|
||||
- **VM migration**: Currently all VMs (including both nameservers) run on a single Proxmox host. Being able to migrate VMs between hypervisors would allow rebooting a host for kernel updates without downtime for critical services like DNS.
|
||||
|
||||
## Hardware Candidates
|
||||
|
||||
| | pn01 | pn02 |
|
||||
|---|---|---|
|
||||
| **CPU** | Ryzen 7 5700U (8C/16T) | Ryzen 7 5700U (8C/16T) |
|
||||
| **RAM** | 64GB (2x32GB) | 32GB (1x32GB, second slot available) |
|
||||
| **Storage** | 1TB NVMe | 1TB SATA SSD (NVMe planned) |
|
||||
| **Status** | Stability testing | Stability testing |
|
||||
|
||||
## Options
|
||||
|
||||
### Option 1: Incus
|
||||
|
||||
Fork of LXD (after Canonical made LXD proprietary). Supports both containers (LXC) and VMs (QEMU/KVM).
|
||||
|
||||
**NixOS integration:**
|
||||
- `virtualisation.incus.enable` module in nixpkgs
|
||||
- Manages storage pools, networks, and instances
|
||||
- REST API for automation
|
||||
- CLI tool (`incus`) for management
|
||||
|
||||
**Terraform integration:**
|
||||
- `lxd` provider works with Incus (API-compatible)
|
||||
- Dedicated `incus` Terraform provider also exists
|
||||
- Can define VMs/containers in OpenTofu, similar to current Proxmox workflow
|
||||
|
||||
**Migration:**
|
||||
- Built-in live and offline migration via `incus move <instance> --target <host>`
|
||||
- Clustering makes hosts aware of each other — migration is a first-class operation
|
||||
- Shared storage (NFS, Ceph) or Incus can transfer storage during migration
|
||||
- Stateful stop-and-move also supported for offline migration
|
||||
|
||||
**Pros:**
|
||||
- Supports both containers and VMs
|
||||
- REST API + CLI for automation
|
||||
- Built-in clustering and migration — closest to Proxmox experience
|
||||
- Good NixOS module support
|
||||
- Image-based workflow (can build NixOS images and import)
|
||||
- Active development and community
|
||||
|
||||
**Cons:**
|
||||
- Another abstraction layer on top of QEMU/KVM
|
||||
- Less mature Terraform provider than libvirt
|
||||
- Container networking can be complex
|
||||
- NixOS guests in Incus VMs need some setup
|
||||
|
||||
### Option 2: libvirt/QEMU
|
||||
|
||||
Standard Linux virtualization stack. Thin wrapper around QEMU/KVM.
|
||||
|
||||
**NixOS integration:**
|
||||
- `virtualisation.libvirtd.enable` module in nixpkgs
|
||||
- Mature and well-tested
|
||||
- virsh CLI for management
|
||||
|
||||
**Terraform integration:**
|
||||
- `dmacvicar/libvirt` provider — mature, well-maintained
|
||||
- Supports cloud-init, volume management, network config
|
||||
- Very similar workflow to current Proxmox+OpenTofu setup
|
||||
- Can reuse cloud-init patterns from existing `terraform/` config
|
||||
|
||||
**Migration:**
|
||||
- Supports live and offline migration via `virsh migrate`
|
||||
- Requires shared storage (NFS, Ceph, or similar) for live migration
|
||||
- Requires matching CPU models between hosts (or CPU model masking)
|
||||
- Works but is manual — no cluster awareness, must specify target URI
|
||||
- No built-in orchestration for multi-host scenarios
|
||||
|
||||
**Pros:**
|
||||
- Closest to current Proxmox+Terraform workflow
|
||||
- Most mature Terraform provider
|
||||
- Minimal abstraction — direct QEMU/KVM management
|
||||
- Well-understood, massive community
|
||||
- Cloud-init works identically to Proxmox workflow
|
||||
- Can reuse existing template-building patterns
|
||||
|
||||
**Cons:**
|
||||
- VMs only (no containers without adding LXC separately)
|
||||
- No built-in REST API (would need to expose libvirt socket)
|
||||
- No web UI without adding cockpit or virt-manager
|
||||
- Migration works but requires manual setup — no clustering, no orchestration
|
||||
- Less feature-rich than Incus for multi-host scenarios
|
||||
|
||||
### Option 3: microvm.nix
|
||||
|
||||
NixOS-native microVM framework. VMs defined as NixOS modules in the host's flake.
|
||||
|
||||
**NixOS integration:**
|
||||
- VMs are NixOS configurations in the same flake
|
||||
- Supports multiple backends: cloud-hypervisor, QEMU, firecracker, kvmtool
|
||||
- Lightweight — shares host's nix store with guests via virtiofs
|
||||
- Declarative network, storage, and resource allocation
|
||||
|
||||
**Terraform integration:**
|
||||
- None — everything is defined in Nix
|
||||
- Fundamentally different workflow from current Proxmox+Terraform approach
|
||||
|
||||
**Pros:**
|
||||
- Most NixOS-native approach
|
||||
- VMs defined right alongside host configs in this repo
|
||||
- Very lightweight — fast boot, minimal overhead
|
||||
- Shares nix store with host (no duplicate packages)
|
||||
- No cloud-init needed — guest config is part of the flake
|
||||
|
||||
**Migration:**
|
||||
- No migration support — VMs are tied to the host's NixOS config
|
||||
- Moving a VM means rebuilding it on another host
|
||||
|
||||
**Cons:**
|
||||
- Very niche, smaller community
|
||||
- Different mental model from current workflow
|
||||
- Only NixOS guests (no Ubuntu, FreeBSD, etc.)
|
||||
- No Terraform integration
|
||||
- No migration support
|
||||
- Less isolation than full QEMU VMs
|
||||
- Would need to learn a new deployment pattern
|
||||
|
||||
## Comparison
|
||||
|
||||
| Criteria | Incus | libvirt | microvm.nix |
|
||||
|----------|-------|---------|-------------|
|
||||
| **Workflow similarity** | Medium | High | Low |
|
||||
| **Terraform support** | Yes (lxd/incus provider) | Yes (mature provider) | No |
|
||||
| **NixOS module** | Yes | Yes | Yes |
|
||||
| **Containers + VMs** | Both | VMs only | VMs only |
|
||||
| **Non-NixOS guests** | Yes | Yes | No |
|
||||
| **Live migration** | Built-in (first-class) | Yes (manual setup) | No |
|
||||
| **Offline migration** | Built-in | Yes (manual setup) | No (rebuild) |
|
||||
| **Clustering** | Built-in | Manual | No |
|
||||
| **Learning curve** | Medium | Low | Medium |
|
||||
| **Community/maturity** | Growing | Very mature | Niche |
|
||||
| **Overhead** | Low | Minimal | Minimal |
|
||||
|
||||
## Recommendation
|
||||
|
||||
Start with **Incus**. Migration and clustering are key requirements:
|
||||
- Built-in clustering makes two PN51s a proper hypervisor pool
|
||||
- Live and offline migration are first-class operations, similar to Proxmox
|
||||
- Can move VMs between hosts for maintenance (kernel updates, hardware work) without downtime
|
||||
- Supports both containers and VMs — flexibility for future use
|
||||
- Terraform provider exists (less mature than libvirt's, but functional)
|
||||
- REST API enables automation beyond what Terraform covers
|
||||
|
||||
libvirt could achieve similar results but requires significantly more manual setup for migration and has no clustering awareness. For a two-node setup where migration is a priority, Incus provides much more out of the box.
|
||||
|
||||
**microvm.nix** is off the table given the migration requirement.
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
### Phase 1: Single-Node Setup (on one PN51)
|
||||
|
||||
1. Enable `virtualisation.incus` on pn01 (or whichever is stable)
|
||||
2. Initialize Incus (`incus admin init`) — configure storage pool (local NVMe) and network bridge
|
||||
3. Configure bridge networking for VM traffic on VLAN 12
|
||||
4. Build a NixOS VM image and import it into Incus
|
||||
5. Create a test VM manually with `incus launch` to validate the setup
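
A minimal sketch for step 1, assuming the stock nixpkgs Incus module; the bridge name matches the Incus default, and the storage pool and network are created interactively via `incus admin init`:

```nix
virtualisation.incus.enable = true;
networking.nftables.enable = true; # Incus applies its firewall rules via nftables
# Trust the default Incus bridge so guest traffic is not blocked by the host firewall
networking.firewall.trustedInterfaces = [ "incusbr0" ];
```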
|
||||
|
||||
### Phase 2: Two-Node Cluster (PN51s only)
|
||||
|
||||
1. Enable Incus on the second PN51
|
||||
2. Form a cluster between both nodes
|
||||
3. Configure shared storage (NFS from NAS, or Ceph if warranted)
|
||||
4. Test offline migration: `incus move <vm> --target <other-node>`
|
||||
5. Test live migration with shared storage
|
||||
6. CPU compatibility is not an issue here — both nodes have identical Ryzen 7 5700U CPUs
|
||||
|
||||
### Phase 3: Terraform Integration
|
||||
|
||||
1. Add Incus Terraform provider to `terraform/`
|
||||
2. Define a test VM in OpenTofu (cloud-init, static IP, vault provisioning)
|
||||
3. Verify the full pipeline: tofu apply -> VM boots -> cloud-init -> vault credentials -> NixOS rebuild
|
||||
4. Compare workflow with existing Proxmox pipeline
|
||||
|
||||
### Phase 4: Evaluate and Expand
|
||||
|
||||
- Is the workflow comparable to Proxmox?
|
||||
- Migration reliability — does live migration work cleanly?
|
||||
- Performance overhead acceptable on Ryzen 5700U?
|
||||
- Worth migrating some test-tier VMs from Proxmox?
|
||||
- Could ns1/ns2 run on separate Incus nodes instead of the single Proxmox host?
|
||||
|
||||
### Phase 5: Proxmox Replacement (optional)
|
||||
|
||||
If Incus works well on the PN51s, consider replacing Proxmox entirely for a three-node cluster.
|
||||
|
||||
**CPU compatibility for mixed cluster:**
|
||||
|
||||
| Node | CPU | Architecture | x86-64-v3 |
|
||||
|------|-----|-------------|-----------|
|
||||
| Proxmox host | AMD Ryzen 9 3900X (12C/24T) | Zen 2 | Yes |
|
||||
| pn01 | AMD Ryzen 7 5700U (8C/16T) | Zen 3 | Yes |
|
||||
| pn02 | AMD Ryzen 7 5700U (8C/16T) | Zen 3 | Yes |
|
||||
|
||||
All three CPUs are AMD and support `x86-64-v3`. The 3900X (Zen 2) is the oldest, so it defines the feature ceiling — but `x86-64-v3` is well within its capabilities. VMs configured with `x86-64-v3` can migrate freely between all three nodes.
|
||||
|
||||
Being all-AMD also avoids the trickier Intel/AMD cross-vendor migration edge cases (different CPUID layouts, virtualization extensions).
|
||||
|
||||
The 3900X (12C/24T) would be the most powerful node, making it the natural home for heavier workloads, with the PN51s (8C/16T each) handling lighter VMs or serving as migration targets during maintenance.
|
||||
|
||||
Steps:
|
||||
1. Install NixOS + Incus on the Proxmox host (or a replacement machine)
|
||||
2. Join it to the existing Incus cluster with `x86-64-v3` CPU baseline
|
||||
3. Migrate VMs from Proxmox to the Incus cluster
|
||||
4. Decommission Proxmox
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [ ] PN51 units pass stability testing (see `pn51-stability.md`)
|
||||
- [ ] Decide which unit to use first (pn01 preferred — 64GB RAM, NVMe, currently more stable)
|
||||
|
||||
## Open Questions
|
||||
|
||||
- How to handle VM storage? Local NVMe, NFS from NAS, or Ceph between the two nodes?
|
||||
- Network topology: bridge on VLAN 12, or trunk multiple VLANs to the PN51?
|
||||
- Should VMs be on the same VLAN as the hypervisor host, or separate?
|
||||
- Incus clustering with only two nodes — any quorum issues? Three nodes (with Proxmox replacement) would solve this
|
||||
- How to handle NixOS guest images? Build with nixos-generators, or use Incus image builder?
|
||||
- ~~What CPU does the current Proxmox host have?~~ AMD Ryzen 9 3900X (Zen 2) — `x86-64-v3` confirmed, all-AMD cluster
|
||||
- If replacing Proxmox: migrate VMs first, or fresh start and rebuild?
|
||||
@@ -1,182 +0,0 @@
|
||||
# NixOS Router — Replace EdgeRouter
|
||||
|
||||
Replace the aging Ubiquiti EdgeRouter (gw, 10.69.10.1) with a NixOS-based router.
|
||||
The EdgeRouter is suspected to be a throughput bottleneck. A NixOS router integrates
|
||||
naturally with the existing fleet: same config management, same monitoring pipeline,
|
||||
same deployment workflow.
|
||||
|
||||
## Goals
|
||||
|
||||
- Eliminate the EdgeRouter throughput bottleneck
|
||||
- Full integration with existing monitoring (node-exporter, promtail, Prometheus, Loki)
|
||||
- Declarative firewall and routing config managed in the flake
|
||||
- Inter-VLAN routing for all existing subnets
|
||||
- DHCP server for client subnets
|
||||
- NetFlow/traffic accounting for future ntopng integration
|
||||
- Foundation for WireGuard remote access (see remote-access.md)
|
||||
|
||||
## Current Network Topology
|
||||
|
||||
**Subnets (known VLANs):**
|
||||
| VLAN/Subnet | Purpose | Notable hosts |
|
||||
|----------------|------------------|----------------------------------------|
|
||||
| 10.69.10.0/24 | Gateway | gw (10.69.10.1) |
|
||||
| 10.69.12.0/24 | Core services | nas, pve1, arr jails, restic |
|
||||
| 10.69.13.0/24 | Infrastructure | All NixOS servers (static IPs) |
|
||||
| 10.69.22.0/24 | WLAN | unifi-ctrl |
|
||||
| 10.69.30.0/24 | Workstations | gunter |
|
||||
| 10.69.31.0/24 | Media | media |
|
||||
| 10.69.99.0/24 | Management | sw1 (MikroTik CRS326-24G-2S+) |
|
||||
|
||||
**DNS:** ns1 (10.69.13.5) and ns2 (10.69.13.6) handle all resolution. Upstream is
|
||||
Cloudflare/Google over DoT via Unbound.
|
||||
|
||||
**Switch:** MikroTik CRS326-24G-2S+ — L2 switching with VLAN trunking. Capable of
|
||||
L3 routing via RouterOS but not ideal for sustained routing throughput.
|
||||
|
||||
## Hardware
|
||||
|
||||
Needs a small x86 box with:
|
||||
- At least 2 NICs (WAN + LAN trunk). Dual 2.5GbE preferred.
|
||||
- Enough CPU for nftables NAT at line rate (any modern x86 is fine)
|
||||
- 4-8 GB RAM (plenty for routing + DHCP + NetFlow accounting)
|
||||
- Low power consumption, fanless preferred for always-on use
|
||||
|
||||
**Leading candidate:** [Topton Solid Mini PC](https://www.aliexpress.com/item/1005008981218625.html)
|
||||
with Intel i3-N300 (8 E-cores), 2x10GbE SFP+ + 3x2.5GbE (~NOK 3000 barebones). The N300
|
||||
gives headroom for ntopng DPI and potential Suricata IDS without being overkill.
|
||||
|
||||
### Hardware Alternatives
|
||||
|
||||
Domestic availability for firewall mini PCs is limited — likely ordering from AliExpress.
|
||||
|
||||
Key things to verify:
|
||||
- NIC chipset: Intel i225-V/i226-V preferred over Realtek for Linux driver support
|
||||
- RAM/storage: some listings are barebones, check what's included
|
||||
- Import duties: factor in ~25% on top of listing price
|
||||
|
||||
| Option | NICs | Notes | Price |
|
||||
|--------|------|-------|-------|
|
||||
| [Topton Solid Firewall Router](https://www.aliexpress.com/item/1005008059819023.html) | 2x10GbE SFP+, 4x2.5GbE | No RAM/SSD, only Intel N150 available currently | ~NOK 2500 |
|
||||
| [Topton Solid Mini PC](https://www.aliexpress.com/item/1005008981218625.html) | 2x10GbE SFP+, 3x2.5GbE | No RAM/SSD, only Intel i3-N300 available currently | ~NOK 3000 |
|
||||
| [MINISFORUM MS-01](https://www.aliexpress.com/item/1005007308262492.html) | 2x10GbE SFP+, 2x2.5GbE | No RAM/SSD, i5-12600H | ~NOK 4500 |
|
||||
|
||||
The LAN port would carry a VLAN trunk to the MikroTik switch, with sub-interfaces
|
||||
for each VLAN. WAN port connects to the ISP uplink.
|
||||
|
||||
## NixOS Configuration
|
||||
|
||||
### Stability Policy
|
||||
|
||||
The router is treated differently from the rest of the fleet:
|
||||
- **No auto-upgrade** — `system.autoUpgrade.enable = false`
|
||||
- **No homelab-deploy listener** — `homelab.deploy.enable = false`
|
||||
- **Manual updates only** — update every few months, test-build first
|
||||
- **Use `nixos-rebuild boot`** — changes take effect on next deliberate reboot
|
||||
- **Tier: prod, priority: high** — alerts treated with highest priority
|
||||
|
||||
### Core Services
|
||||
|
||||
**Routing & NAT:**
|
||||
- `systemd-networkd` for all interface config (consistent with rest of fleet)
|
||||
- VLAN sub-interfaces on the LAN trunk (one per subnet)
|
||||
- `networking.nftables` for stateful firewall and NAT
|
||||
- IP forwarding enabled (`net.ipv4.ip_forward = 1`)
|
||||
- Masquerade outbound traffic on WAN interface
|
||||
|
||||
**DHCP:**
|
||||
- Kea or dnsmasq for DHCP on client subnets (WLAN, workstations, media)
|
||||
- Infrastructure subnet (10.69.13.0/24) stays static — no DHCP needed
|
||||
- Static leases for known devices
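
If Kea is chosen, a hedged sketch using the stock `services.kea.dhcp4` module; interface names, pool ranges, and the example WLAN subnet are illustrative:

```nix
services.kea.dhcp4 = {
  enable = true;
  settings = {
    interfaces-config.interfaces = [ "lan0.22" "lan0.30" "lan0.31" ];
    subnet4 = [
      {
        id = 22;
        subnet = "10.69.22.0/24";
        pools = [ { pool = "10.69.22.100 - 10.69.22.200"; } ];
        option-data = [
          { name = "routers"; data = "10.69.22.1"; }
          { name = "domain-name-servers"; data = "10.69.13.5, 10.69.13.6"; }
        ];
      }
      # ... one subnet4 entry per client VLAN
    ];
  };
};
```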
|
||||
|
||||
**Firewall (nftables):**
|
||||
- Default deny between VLANs
|
||||
- Explicit allow rules for known cross-VLAN traffic:
|
||||
- All subnets → ns1/ns2 (DNS)
|
||||
- All subnets → monitoring01 (metrics/logs)
|
||||
- Infrastructure → all (management access)
|
||||
- Workstations → media, core services
|
||||
- NAT masquerade on WAN
|
||||
- Rate limiting on WAN-facing services
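
A hedged skeleton of the routing and NAT pieces using stock NixOS options; interface names (`wan0`, `lan0`) and VLAN IDs are illustrative:

```nix
boot.kernel.sysctl."net.ipv4.ip_forward" = 1;

networking.nftables.enable = true;
networking.nat = {
  enable = true;
  externalInterface = "wan0";
  internalInterfaces = [ "lan0.13" "lan0.22" "lan0.30" "lan0.31" ];
};

# One VLAN sub-interface per routed subnet on the LAN trunk
systemd.network.netdevs."20-vlan13" = {
  netdevConfig = { Name = "lan0.13"; Kind = "vlan"; };
  vlanConfig.Id = 13;
};
```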
|
||||
|
||||
**Traffic Accounting:**
|
||||
- nftables flow accounting or softflowd for NetFlow export
|
||||
- Export to future ntopng instance (see new-services.md)
|
||||
|
||||
**IDS/IPS (future consideration):**
|
||||
- Suricata for inline intrusion detection/prevention on the WAN interface
|
||||
- Signature-based threat detection, protocol anomaly detection
|
||||
- CPU-intensive — feasible at typical home internet speeds (500Mbps-1Gbps) on the N300
|
||||
- Not a day-one requirement, but the hardware should support it
|
||||
|
||||
### Monitoring Integration
|
||||
|
||||
Since this is a NixOS host in the flake, it gets the standard monitoring stack for free:
|
||||
- node-exporter for system metrics (CPU, memory, NIC throughput per interface)
|
||||
- promtail shipping logs to Loki
|
||||
- Prometheus scrape target auto-registration
|
||||
- Alertmanager alerts for host-down, high CPU, etc.
|
||||
|
||||
Additional router-specific monitoring:
|
||||
- Per-VLAN interface traffic metrics via node-exporter (automatic for all interfaces)
|
||||
- NAT connection tracking table size
|
||||
- WAN uplink status and throughput
|
||||
- DHCP lease metrics (if Kea, it has a Prometheus exporter)
|
||||
|
||||
This is a significant advantage over the EdgeRouter — full observability through
|
||||
the existing Grafana dashboards and Loki log search, debuggable via the monitoring
|
||||
MCP tools.
|
||||
|
||||
### WireGuard Integration
|
||||
|
||||
The remote access plan (remote-access.md) currently proposes a separate `extgw01`
|
||||
gateway host. With a NixOS router, there's a decision to make:
|
||||
|
||||
**Option A:** WireGuard terminates on the router itself. Simplest topology — the
|
||||
router is already the gateway, so VPN traffic doesn't need extra hops or firewall
|
||||
rules. But adds complexity to the router, which should stay simple.
|
||||
|
||||
**Option B:** Keep extgw01 as a separate host (original plan). Router just routes
|
||||
traffic to it. Better separation of concerns, router stays minimal.
|
||||
|
||||
Recommendation: Start with option B (keep it separate). The router should do routing
|
||||
and nothing else. WireGuard can move to the router later if extgw01 feels redundant.
|
||||
|
||||
## Migration Plan
|
||||
|
||||
### Phase 1: Build and lab test
|
||||
- Acquire hardware
|
||||
- Create host config in the flake (routing, NAT, DHCP, firewall)
|
||||
- Test-build on workstation: `nix build .#nixosConfigurations.router01.config.system.build.toplevel`
|
||||
- Lab test with a temporary setup if possible (two NICs, isolated VLAN)
|
||||
|
||||
### Phase 2: Prepare cutover
|
||||
- Pre-configure the MikroTik switch trunk port for the new router
|
||||
- Document current EdgeRouter config (port forwarding, NAT rules, DHCP leases)
|
||||
- Replicate all rules in the NixOS config
|
||||
- Verify DNS, DHCP, and inter-VLAN routing work in test
|
||||
|
||||
### Phase 3: Cutover
|
||||
- Schedule a maintenance window (brief downtime expected)
|
||||
- Swap WAN cable from EdgeRouter to new router
|
||||
- Swap LAN trunk from EdgeRouter to new router
|
||||
- Verify connectivity from each VLAN
|
||||
- Verify internet access, DNS resolution, inter-VLAN routing
|
||||
- Monitor via Prometheus/Loki (immediately available since it's a fleet host)
|
||||
|
||||
### Phase 4: Decommission EdgeRouter
|
||||
- Keep EdgeRouter available as fallback for a few weeks
|
||||
- Remove `gw` entry from external-hosts.nix, replace with flake-managed host
|
||||
- Update any references to 10.69.10.1 if the router IP changes
|
||||
|
||||
## Open Questions
|
||||
|
||||
- **Router IP:** Keep 10.69.10.1 or move to a different address? Each VLAN
|
||||
sub-interface needs an IP (the gateway address for that subnet).
|
||||
- **ISP uplink:** What type of WAN connection? PPPoE, DHCP, static IP?
|
||||
- **Port forwarding:** What ports are currently forwarded on the EdgeRouter?
|
||||
These need to be replicated in nftables.
|
||||
- **DHCP scope:** Which subnets currently get DHCP from the EdgeRouter vs
|
||||
other sources (UniFi controller for WLAN?)?
|
||||
- **UPnP/NAT-PMP:** Needed for any devices? (gaming consoles, etc.)
|
||||
- **Hardware preference:** Fanless mini PC budget and preferred vendor?
|
||||
@@ -1,104 +0,0 @@
|
||||
# NixOS OpenStack Image
|
||||
|
||||
## Overview
|
||||
|
||||
Build and upload a NixOS base image to the OpenStack cluster at work, enabling NixOS-based VPS instances to replace the current Debian+Podman setup. This image will serve as the foundation for multiple external services:
|
||||
|
||||
- **Forgejo** (replacing Gitea on docker2)
|
||||
- **WireGuard gateway** (replacing docker2's tunnel role, feeding into the remote-access plan)
|
||||
- Any future externally-hosted services
|
||||
|
||||
## Current State
|
||||
|
||||
- VPS hosting runs on an OpenStack cluster with a personal quota
|
||||
- Current VPS (`docker2.t-juice.club`) runs Debian with Podman containers
|
||||
- Homelab already has a working Proxmox image pipeline: `template2` builds via `nixos-rebuild build-image --image-variant proxmox`, deployed via Ansible
|
||||
- nixpkgs has a built-in `openstack` image variant in the same `image.modules` system used for Proxmox
|
||||
|
||||
## Decisions
|
||||
|
||||
- **No cloud-init dependency** - SSH key baked into the image, no need for metadata service
|
||||
- **No bootstrap script** - VPS deployments are infrequent; manual `nixos-rebuild` after first boot is fine
|
||||
- **No Vault access** - secrets handled manually until WireGuard access is set up (see remote-access plan)
|
||||
- **Separate from homelab services** - no logging/metrics integration initially; revisit after remote-access WireGuard is in place
|
||||
- **Repo placement TBD** - keep in this flake for now for convenience, but external hosts may move to a separate flake later since they can't use most shared `system/` modules (no Vault, no internal DNS, no Promtail)
|
||||
- **OpenStack CLI in devshell** - add `openstackclient` package; credentials (`clouds.yaml`) stay outside the repo
|
||||
- **Parallel deployment** - new Forgejo instance runs alongside docker2 initially, then CNAME moves over
|
||||
|
||||
## Approach
|
||||
|
||||
Follow the same pattern as the Proxmox template (`hosts/template2`), but targeting OpenStack's qcow2 format.
|
||||
|
||||
### What nixpkgs provides
|
||||
|
||||
The `image.modules.openstack` module produces a qcow2 image with:
|
||||
- `openstack-config.nix`: EC2 metadata fetcher, SSH enabled, GRUB bootloader, serial console, auto-growing root partition
|
||||
- `qemu-guest.nix` profile (virtio drivers)
|
||||
- ext4 root filesystem with `autoResize`
|
||||
|
||||
### What we need to customize
|
||||
|
||||
The stock OpenStack image pulls SSH keys and hostname from EC2-style metadata. Since we're baking the SSH key into the image, we need a simpler configuration:
|
||||
|
||||
- SSH authorized keys baked into the image
|
||||
- Base packages (age, vim, wget, git)
|
||||
- Nix substituters (`cache.nixos.org` only - internal cache not reachable)
|
||||
- systemd-networkd with DHCP
|
||||
- GRUB bootloader
|
||||
- Firewall enabled (public-facing host)
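
A minimal sketch of what `hosts/openstack-template/configuration.nix` could contain, using stock NixOS options; the SSH key and interface match are placeholders:

```nix
{ pkgs, ... }:
{
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAA... admin" # placeholder key baked into the image
  ];

  environment.systemPackages = with pkgs; [ age vim wget git ];

  nix.settings.substituters = [ "https://cache.nixos.org" ];

  networking.useNetworkd = true;
  systemd.network.networks."10-uplink" = {
    matchConfig.Name = "en*"; # placeholder match for the OpenStack-provided NIC
    networkConfig.DHCP = "yes";
  };

  boot.loader.grub.enable = true;
  networking.firewall.enable = true; # public-facing host
}
```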
|
||||
|
||||
### Differences from template2
|
||||
|
||||
| Aspect | template2 (Proxmox) | openstack-template (OpenStack) |
|
||||
|--------|---------------------|-------------------------------|
|
||||
| Image format | VMA (`.vma.zst`) | qcow2 (`.qcow2`) |
|
||||
| Image variant | `proxmox` | `openstack` |
|
||||
| Cloud-init | ConfigDrive + NoCloud | Not used (SSH key baked in) |
|
||||
| Nix cache | Internal + nixos.org | `cache.nixos.org` only |
|
||||
| Vault | AppRole via wrapped token | None |
|
||||
| Bootstrap | Automatic nixos-rebuild on first boot | Manual |
|
||||
| Network | Internal DHCP | OpenStack DHCP |
|
||||
| DNS | Internal ns1/ns2 | Public DNS |
|
||||
| Firewall | Disabled (trusted network) | Enabled |
|
||||
| System modules | Full `../../system` import | Minimal (sshd, packages only) |
|
||||
|
||||
## Implementation Steps
|
||||
|
||||
### Phase 1: Build the image
|
||||
|
||||
1. Create `hosts/openstack-template/` with minimal configuration
|
||||
- `default.nix` - imports (only sshd and packages from `system/`, not the full set)
|
||||
- `configuration.nix` - base config: SSH key, DHCP, GRUB, base packages, firewall on
|
||||
- `hardware-configuration.nix` - qemu-guest profile with virtio drivers
|
||||
- Exclude from DNS and monitoring (`homelab.dns.enable = false`, `homelab.monitoring.enable = false`)
|
||||
- May need to override parts of `image.modules.openstack` to disable the EC2 metadata fetcher if it causes boot delays
|
||||
2. Build with `nixos-rebuild build-image --image-variant openstack --flake .#openstack-template`
|
||||
3. Verify the qcow2 image is produced in `result/`
|
||||
|
||||
### Phase 2: Upload and test
|
||||
|
||||
1. Add `openstackclient` to the devshell
|
||||
2. Upload image: `openstack image create --disk-format qcow2 --file result/<image>.qcow2 nixos-template`
|
||||
3. Boot a test instance from the image
|
||||
4. Verify: SSH access works, DHCP networking, Nix builds work
|
||||
5. Test manual `nixos-rebuild switch --flake` against the instance
|
||||
|
||||
### Phase 3: Automation (optional, later)
|
||||
|
||||
Consider an Ansible playbook similar to `build-and-deploy-template.yml` for image builds + uploads. Low priority since this will be done rarely.
|
||||
|
||||
## Open Questions
|
||||
|
||||
- [ ] Should external VPS hosts eventually move to a separate flake? (Depends on how different they end up being from homelab hosts)
|
||||
- [ ] Will the stock `openstack-config.nix` metadata fetcher cause boot delays/errors if the metadata service isn't reachable? May need to disable it.
|
||||
- [ ] **Flavor selection** - investigate what flavors are available in the quota. The standard small flavors likely have insufficient root disk for a NixOS host (Nix store grows fast). Options:
|
||||
- Use a larger flavor with adequate root disk
|
||||
- Create a custom flavor (if permissions allow)
|
||||
- Cinder block storage is an option in theory, but was very slow last time it was tested - avoid if possible
|
||||
- [ ] Consolidation opportunity - currently running multiple smaller VMs on OpenStack. Could a single larger NixOS VM replace several of them?
|
||||
|
||||
## Notes
|
||||
|
||||
- `nixos-rebuild build-image --image-variant openstack` uses the same `image.modules` system as Proxmox
|
||||
- nixpkgs also has an `openstack-zfs` variant if ZFS root is ever wanted
|
||||
- The stock OpenStack module imports `ec2-data.nix` and `amazon-init.nix` - these may need to be disabled or overridden if they cause issues without a metadata service
|
||||
@@ -1,239 +0,0 @@
|
||||
# ASUS PN51 Stability Testing
|
||||
|
||||
## Overview
|
||||
|
||||
Two ASUS PN51-E1 mini PCs (Ryzen 7 5700U) purchased years ago but shelved due to stability issues. Revisiting them to potentially add to the homelab.
|
||||
|
||||
## Hardware
|
||||
|
||||
| | pn01 (10.69.12.60) | pn02 (10.69.12.61) |
|
||||
|---|---|---|
|
||||
| **CPU** | AMD Ryzen 7 5700U (8C/16T) | AMD Ryzen 7 5700U (8C/16T) |
|
||||
| **RAM** | 2x 32GB DDR4 SO-DIMM (64GB) | 1x 32GB DDR4 SO-DIMM (32GB) |
|
||||
| **Storage** | 1TB NVMe | 1TB Samsung 870 EVO (SATA SSD) |
|
||||
| **BIOS** | 0508 (2023-11-08) | Updated 2026-02-21 (latest from ASUS) |
|
||||
|
||||
## Original Issues
|
||||
|
||||
- **pn01**: Would boot but freeze randomly after some time. No console errors, completely unresponsive. memtest86 passed.
|
||||
- **pn02**: Had trouble booting — would start loading kernel from installer USB then instantly reboot. When it did boot, would also freeze randomly.
|
||||
|
||||
## Debugging Steps
|
||||
|
||||
### 2026-02-21: Initial Setup
|
||||
|
||||
1. **Disabled fTPM** (labeled "Security Device" in ASUS BIOS) on both units
|
||||
- AMD Ryzen 5000 series had a known fTPM bug causing random hard freezes with no console output
|
||||
- Both units booted the NixOS installer successfully after this change
|
||||
2. Installed NixOS on both, added to repo as `pn01` and `pn02` on VLAN 12
|
||||
3. Configured monitoring (node-exporter, promtail, nixos-exporter)
|
||||
|
||||
### 2026-02-21: pn02 First Freeze
|
||||
|
||||
- pn02 froze approximately 1 hour after boot
|
||||
- All three Prometheus targets went down simultaneously — hard freeze, not graceful shutdown
|
||||
- Journal on next boot: `system.journal corrupted or uncleanly shut down`
|
||||
- Kernel warnings from boot log before freeze:
|
||||
- **TSC clocksource unstable**: `Marking clocksource 'tsc' as unstable because the skew is too large` — TSC skewing ~3.8ms over 500ms relative to HPET watchdog
|
||||
- **AMD PSP error**: `psp gfx command LOAD_TA(0x1) failed and response status is (0x7)` — Platform Security Processor failing to load trusted application
|
||||
- pn01 did not show these warnings on this particular boot, but has shown them historically (see below)
|
||||
|
||||
### 2026-02-21: pn02 BIOS Update
|
||||
|
||||
- Updated pn02 BIOS to latest version from ASUS website
|
||||
- **TSC still unstable** after BIOS update — same ~3.8ms skew
|
||||
- **PSP LOAD_TA still failing** after BIOS update
|
||||
- Monitoring back up, letting it run to see if freeze recurs
|
||||
|
||||
### 2026-02-22: TSC/PSP Confirmed on Both Units
|
||||
|
||||
- Checked kernel logs after ~9 hours uptime — both units still running
|
||||
- **pn01 now shows TSC unstable and PSP LOAD_TA failure** on this boot (same ~3.8ms TSC skew, same PSP error)
|
||||
- pn01 had these same issues historically when tested years ago — the earlier clean boot was just lucky TSC calibration timing
|
||||
- **Conclusion**: TSC instability and PSP LOAD_TA are platform-level quirks of the PN51-E1 / Ryzen 5700U, present on both units
|
||||
- The kernel handles TSC instability gracefully (falls back to HPET), and PSP LOAD_TA is non-fatal
|
||||
- Neither issue is likely the cause of the hard freezes — the fTPM bug remains the primary suspect
|
||||
|
||||
### 2026-02-22: Stress Test (1 hour)
|
||||
|
||||
- Ran `stress-ng --cpu 16 --vm 2 --vm-bytes 8G --timeout 1h` on both units
|
||||
- CPU temps peaked at ~85°C, settled to ~80°C sustained (throttle limit is 105°C)
|
||||
- Both survived the full hour with no freezes, no MCE errors, no kernel issues
|
||||
- No concerning log entries during or after the test
|
||||
|
||||
### 2026-02-22: TSC Runtime Switch Test
|
||||
|
||||
- Attempted to switch clocksource back to TSC at runtime on pn01:
|
||||
```
|
||||
echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource
|
||||
```
|
||||
- Kernel watchdog immediately reverted to HPET — TSC skew is ongoing, not just a boot-time issue
|
||||
- **Conclusion**: TSC is genuinely unstable on the PN51-E1 platform. HPET is the correct clocksource.
|
||||
- For virtualization (Incus), this means guest VMs will use HPET-backed timing. Performance impact is minimal for typical server workloads (DNS, monitoring, light services) but would matter for latency-sensitive applications.
|
||||
|
||||
### 2026-02-22: BIOS Tweaks (Both Units)
|
||||
|
||||
- Disabled ErP Ready on both (EU power efficiency mode — aggressively cuts power in idle)
|
||||
- Disabled WiFi and Bluetooth in BIOS on both
|
||||
- **TSC still unstable** after these changes — same ~3.8ms skew on both units
|
||||
- ErP/power states are not the cause of the TSC issue
|
||||
|
||||
### 2026-02-22: pn02 Second Freeze
|
||||
|
||||
- pn02 froze again ~5.5 hours after boot (at idle, not under load)
|
||||
- All Prometheus targets down simultaneously — same hard freeze pattern
|
||||
- Last log entry was normal nix-daemon activity — zero warning/error logs before crash
|
||||
- Survived the 1h stress test earlier but froze at idle later — not thermal
|
||||
- pn01 remains stable throughout
|
||||
- **Action**: Blacklisted the `amdgpu` kernel module on pn02 (`boot.blacklistedKernelModules = [ "amdgpu" ]`) to rule out GPU/PSP firmware interactions as a cause. This removes local console/display output, but the host is managed over SSH anyway.
|
||||
- **Action**: Added diagnostic/recovery config to pn02:
|
||||
- `panic=10` + `nmi_watchdog=1` kernel params — auto-reboot after 10s on panic
|
||||
- `softlockup_panic` + `hardlockup_panic` sysctls — convert lockups to panics with stack traces
|
||||
- `hardware.rasdaemon` with recording — logs hardware errors (MCE, PCIe AER, memory) to sqlite database, survives reboots
|
||||
- Check recorded errors: `ras-mc-ctl --summary`, `ras-mc-ctl --errors`
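
The diagnostic/recovery config above roughly translates to the following NixOS snippet (the `hardware.rasdaemon.record` option name is an assumption):

```nix
boot.blacklistedKernelModules = [ "amdgpu" ];
boot.kernelParams = [ "panic=10" "nmi_watchdog=1" ];
boot.kernel.sysctl = {
  "kernel.softlockup_panic" = 1;
  "kernel.hardlockup_panic" = 1;
};
hardware.rasdaemon = {
  enable = true;
  record = true; # assumed option: persist errors to the sqlite DB across reboots
};
```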
|
||||
|
||||
## Benign Kernel Errors (Both Units)
|
||||
|
||||
These appear on both units and can be ignored:
|
||||
- `clocksource: Marking clocksource 'tsc' as unstable` — TSC skew vs HPET, kernel falls back gracefully. Platform-level quirk on PN51-E1, not always reproducible on every boot.
|
||||
- `psp gfx command LOAD_TA(0x1) failed` — AMD PSP firmware error, non-fatal. Present on both units across all BIOS versions.
|
||||
- `pcie_mp2_amd: amd_sfh_hid_client_init failed err -95` — AMD Sensor Fusion Hub, no sensors connected
|
||||
- `Bluetooth: hci0: Reading supported features failed` — Bluetooth init quirk
|
||||
- `Serial bus multi instantiate pseudo device driver INT3515:00: error -ENXIO` — unused serial bus device
|
||||
- `snd_hda_intel: no codecs found` — no audio device connected, headless server
|
||||
- `ata2.00: supports DRM functions and may not be fully accessible` — Samsung SSD DRM quirk (pn02 only)
|
||||
|
||||
### 2026-02-23: processor.max_cstate=1 and Proxmox Forums
|
||||
|
||||
- Found a thread on the Proxmox forums about PN51 units with similar freeze issues
|
||||
- Many users reporting identical symptoms — random hard freezes, no log evidence
|
||||
- No conclusive fix. Some have frequent freezes, others only a few times a month
|
||||
- Some reported BIOS updates helped, but results inconsistent
|
||||
- Added `processor.max_cstate=1` kernel parameter to pn02 — limits CPU to C1 halt state, preventing deep C-state sleep transitions that may trigger freezes on AMD mobile chips
|
||||
- Also applied: amdgpu blacklist, panic=10, nmi_watchdog=1, softlockup/hardlockup panic, rasdaemon
|
||||
|
||||
### 2026-02-23: logind D-Bus Deadlock (pn02)

- node-exporter alert fired — but the host was NOT frozen
- logind was running (PID 871) but deadlocked on D-Bus — not responding to `org.freedesktop.login1` requests
- Every node-exporter scrape blocked for 25s waiting for logind, causing scrape timeouts
- Likely related to the amdgpu blacklist — with no DRM device there is no graphical seat, and logind may have deadlocked during seat enumeration at boot
- Fix: `systemctl restart systemd-logind` + `systemctl restart prometheus-node-exporter`
- After the restart, logind responded normally and reported seat0

### 2026-02-27: pn02 Third Freeze

- pn02 crashed again after ~2 days 21 hours of uptime (longest run so far)
- Evidence of the crash:
  - Journal file corrupted: `system.journal corrupted or uncleanly shut down`
  - Boot partition fsck: `Dirty bit is set. Fs was not properly unmounted`
  - No orderly shutdown logs from the previous boot
  - No auto-upgrade triggered
- **NMI watchdog did NOT fire** — no kernel panic logged. This is a true hard lockup below NMI level
- **rasdaemon recorded nothing** — no MCE, AER, or memory errors in the sqlite database
- **Positive**: The system auto-rebooted this time (likely via the hardware watchdog), unlike previous freezes that required a manual power cycle
- `processor.max_cstate=1` may have extended uptime (2d21h vs the previous 1h and 5.5h) but did not prevent the freeze

### 2026-02-27 to 2026-03-03: Relative Stability

- pn02 ran without crashes for approximately one week after the third freeze
- pn01 remained completely stable throughout this period
- Auto-upgrade reboots continued daily (~4am) on both units — these are planned and healthy

### 2026-03-04: pn02 Fourth Crash — sched_ext Kernel Oops (pstore captured)

- pn02 crashed after ~5.8 days of uptime (504566s)
- **First crash captured by pstore** — kernel oops and panic stack traces preserved across the reboot
- Journal corruption confirmed: `system.journal corrupted or uncleanly shut down`
- **Crash location**: `RIP: 0010:set_next_task_scx+0x6e/0x210` — a crash in the **sched_ext (SCX) scheduler** subsystem
- **Call trace**: `sysvec_apic_timer_interrupt` → `cpuidle_enter_state` — crashed during CPU idle, triggered by an APIC timer interrupt
- **CR2**: `ffffffffffffff89` — dereference of an obviously invalid kernel pointer
- **Kernel**: 6.12.74 (NixOS 25.11)
- **Significance**: This is the first crash with actual diagnostic output. Previous crashes were silent sub-NMI freezes. The sched_ext scheduler path is a new finding — earlier crashes were assumed to be hardware-level.

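For reference, a minimal sketch of how this and later pstore dumps can be inspected after the reboot. Paths are the standard pstore and systemd-pstore locations; exact dump file names vary by backend:

```bash
# Raw dumps live under /sys/fs/pstore until systemd-pstore.service
# archives them to /var/lib/systemd/pstore on the next boot.
ls -l /sys/fs/pstore/ /var/lib/systemd/pstore/ 2>/dev/null

# Show the most recent archived oops/panic text (file names vary by pstore backend)
find /var/lib/systemd/pstore -type f -name 'dmesg*' -print -exec tail -n 40 {} \; 2>/dev/null

# Confirm the previous boot ended without an orderly shutdown
journalctl --list-boots | tail -n 3
journalctl -b -1 -k -p err --no-pager | tail -n 20
```
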
### 2026-03-06: pn02 Fifth Crash

- pn02 crashed again — journal corruption found on the next boot
- No pstore data was captured for this crash

### 2026-03-07: pn02 Sixth and Seventh Crashes — Two in One Day

**First crash (~11:06 UTC):**

- ~26.6 hours of uptime (95994s)
- **pstore captured both the oops and the panic**
- **Crash location**: scheduler code path — `pick_next_task_fair` → `__pick_next_task`
- **CR2**: `000000c000726000` — invalid pointer dereference
- **Notable**: `dbus-daemon` segfaulted ~50 minutes before the kernel crash (`segfault at 0` in `libdbus-1.so.3.32.4` on CPU 0) — may indicate memory corruption preceding the kernel crash

**Second crash (~21:15 UTC):**

- Journal corruption confirmed on the next boot
- No pstore data captured

### 2026-03-12: pn02 Memtest86 — 38 Passes, Zero Errors

- Ran memtest86 for ~109 hours (4.5 days), completing 38 full passes
- **Zero errors found** — the RAM appears healthy
- This makes hardware-induced memory corruption less likely as the sole cause of the crashes
- Memtest cannot rule out CPU cache errors, PCIe/IOMMU issues, or kernel bugs triggered by platform quirks
- **Next step**: Boot back into NixOS with sched_ext disabled to test the kernel-scheduler hypothesis

### 2026-03-07: pn01 Status

- pn01 has had **zero crashes** since the initial setup on Feb 21
- Zero journal corruptions and zero pstore dumps in 30 days
- The same BOOT_ID persists between daily auto-upgrade reboots — no unplanned restarts, consistently clean shutdown/reboot cycles
- All 8 reboots in 30 days were planned auto-upgrade reboots
- **pn01 is fully stable**

## Crash Summary

| Date   | Uptime Before Crash | Crash Type              | Diagnostic Data |
|--------|---------------------|-------------------------|-----------------|
| Feb 21 | ~1h                 | Silent freeze           | None — sub-NMI |
| Feb 22 | ~5.5h               | Silent freeze           | None — sub-NMI |
| Feb 27 | ~2d 21h             | Silent freeze           | None — sub-NMI, rasdaemon empty |
| Mar 4  | ~5.8d               | **Kernel oops**         | pstore: `set_next_task_scx` (sched_ext) |
| Mar 6  | Unknown             | Crash                   | Journal corruption only |
| Mar 7  | ~26.6h              | **Kernel oops + panic** | pstore: `pick_next_task_fair` (scheduler) + dbus segfault |
| Mar 7  | Unknown             | Crash                   | Journal corruption only |

## Conclusion

**pn02 is unreliable.** After exhausting mitigations (fTPM disabled, BIOS updated, WiFi/BT disabled, ErP disabled, amdgpu blacklisted, processor.max_cstate=1, NMI watchdog, rasdaemon), the unit still crashes every few days: 26 reboots in 30 days (7 unclean crashes plus daily auto-upgrade reboots).

The pstore crash dumps from March reveal a new dimension: at least some crashes are **kernel scheduler bugs in sched_ext**, not just silent hardware-level freezes. The `set_next_task_scx` and `pick_next_task_fair` crash sites, combined with the dbus-daemon segfault before one crash, suggest possible memory corruption that manifests in the scheduler. Memtest86 ran 38 passes (109 hours) with zero errors, which makes possibility 2 below less likely. Remaining possibilities:

1. A sched_ext kernel bug exposed by the PN51's hardware quirks (unstable TSC, C-state behavior)
2. ~~Hardware-induced memory corruption that happens to hit scheduler data structures~~ — unlikely after the clean memtest
3. A pure software bug in the 6.12.74 kernel's sched_ext implementation

**pn01 is stable** — zero crashes in 30 days of continuous operation. Both units run an identical kernel and NixOS configuration (minus pn02's diagnostic mitigations), so the difference points toward a hardware defect specific to the pn02 board.

## Next Steps

- **~~pn02 memtest~~**: ~~Run memtest86 for 24h+~~ — Done (2026-03-12): 38 passes over 109 hours, zero errors. RAM is not the issue.
- **pn02 sched_ext test**: Disable sched_ext (`boot.kernelParams = [ "sched_ext.enabled=0" ]` or an equivalent mechanism; see the sketch after this list) and run for 1-2 weeks to test whether the crashes stop. This would help distinguish a kernel bug from a hardware defect.
- **pn02**: If disabling sched_ext doesn't help, consider scrapping the unit or repurposing it for non-critical workloads that tolerate random reboots (auto-recovery via the hardware watchdog is working)
- **pn01**: Continue monitoring. If it remains stable long-term, it is viable for light workloads
- If pn01 eventually crashes, apply the same mitigations (amdgpu blacklist, max_cstate=1) to see if they help
- For the Incus hypervisor plan: likely need different hardware. Evaluating the GMKtec G3 (Intel) as an alternative. Note: a mixed Intel/AMD cluster complicates live migration

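A minimal sketch for verifying the sched_ext state before and after the change. The `/sys/kernel/sched_ext` attribute name is an assumption for kernels built with `CONFIG_SCHED_CLASS_EXT` and may differ between kernel versions:

```bash
# Is a sched_ext (SCX) scheduler currently active?
# (sysfs attribute name is an assumption and may vary by kernel version)
cat /sys/kernel/sched_ext/state 2>/dev/null || echo "sched_ext sysfs not present"

# Any userspace SCX scheduler (scx_rusty, scx_lavd, ...) shows up as a process
pgrep -a 'scx_' || echo "no scx_* scheduler running"

# After switching generations, confirm the kernel command line
cat /proc/cmdline
```
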
## Diagnostics and Auto-Recovery (pn02)

Currently deployed on pn02:

```nix
boot.blacklistedKernelModules = [ "amdgpu" ];
boot.kernelParams = [ "panic=10" "nmi_watchdog=1" "processor.max_cstate=1" ];
boot.kernel.sysctl."kernel.softlockup_panic" = 1;
boot.kernel.sysctl."kernel.hardlockup_panic" = 1;
hardware.rasdaemon.enable = true;
hardware.rasdaemon.record = true;
```

**Crash recovery is working**: pstore now captures kernel oops/panic data, and the system auto-reboots via `panic=10` or the SP5100 TCO hardware watchdog.

**After reboot, check:**

- `ras-mc-ctl --summary` — overview of recorded hardware errors
- `ras-mc-ctl --errors` — detailed error list
- `journalctl -b -1 -p err` — kernel logs from the crashed boot (if a panic was logged)
- pstore data is automatically archived by `systemd-pstore.service` and forwarded to Loki via promtail

@@ -4,118 +4,119 @@
|
||||
|
||||
## Goal
|
||||
|
||||
Enable personal remote access to selected homelab services from outside the internal network, without exposing anything directly to the internet.
|
||||
Enable remote access to some or all homelab services from outside the internal network, without exposing anything directly to the internet.
|
||||
|
||||
## Current State
|
||||
|
||||
- All services are only accessible from the internal 10.69.13.x network
|
||||
- http-proxy has a WireGuard tunnel (`wg0`, `10.69.222.0/24`) to a VPS (`docker2.t-juice.club`) on an OpenStack cluster
|
||||
- VPS runs Traefik which proxies selected services (including Jellyfin) back through the tunnel to http-proxy's Caddy
|
||||
- No other services are directly exposed to the public internet
|
||||
- Exception: jelly01 has a WireGuard link to an external VPS
|
||||
- No services are directly exposed to the public internet
|
||||
|
||||
## Decision: WireGuard Gateway
|
||||
## Constraints
|
||||
|
||||
After evaluating WireGuard gateway vs Headscale (self-hosted Tailscale), the **WireGuard gateway** approach was chosen:
|
||||
- Nothing should be directly accessible from the outside
|
||||
- Must use VPN or overlay network (no port forwarding of services)
|
||||
- Self-hosted solutions preferred over managed services
|
||||
|
||||
- Only 2 client devices (laptop + phone), so Headscale's device management UX isn't needed
|
||||
- Split DNS works fine on Linux laptop via systemd-resolved; all-or-nothing DNS on phone is acceptable for occasional use
|
||||
- Simpler infrastructure - no control server to maintain
|
||||
- Builds on existing WireGuard experience and setup
|
||||
## Options
|
||||
|
||||
## Architecture

### 1. WireGuard Gateway (Internal Router)

A dedicated NixOS host on the internal network with a WireGuard tunnel out to the VPS. The VPS becomes the public entry point, and the gateway routes traffic to internal services. Firewall rules on the gateway control which services are reachable.

```mermaid
graph TD
    clients["Laptop / Phone"]
    vps["VPS<br/>(WireGuard endpoint)"]
    extgw["extgw01<br/>(gateway + bastion)"]
    grafana["Grafana<br/>monitoring01:3000"]
    jellyfin["Jellyfin<br/>jelly01:8096"]
    arr["arr stack<br/>*-jail hosts"]

    clients -->|WireGuard| vps
    vps -->|WireGuard tunnel| extgw
    extgw -->|allowed traffic| grafana
    extgw -->|allowed traffic| jellyfin
    extgw -->|allowed traffic| arr
```

**Pros:**
|
||||
- Simple, well-understood technology
|
||||
- Already running WireGuard for jelly01
|
||||
- Full control over routing and firewall rules
|
||||
- Excellent NixOS module support
|
||||
- No extra dependencies
|
||||
|
||||
### Existing path (unchanged)
|
||||
**Cons:**
|
||||
- Hub-and-spoke topology (all traffic goes through VPS)
|
||||
- Manual peer management
|
||||
- Adding a new client device means editing configs on both VPS and gateway
|
||||
|
||||
The current public access path stays as-is:
|
||||
### 2. WireGuard Mesh (No Relay)
|
||||
|
||||
```
|
||||
Internet → VPS (Traefik) → WireGuard → http-proxy (Caddy) → internal services
|
||||
```
|
||||
Each client device connects directly to a WireGuard endpoint. Could be on the VPS which forwards to the homelab, or if there is a routable IP at home, directly to an internal host.
|
||||
|
||||
This handles public Jellyfin access and any other publicly-exposed services.
|
||||
**Pros:**
|
||||
- Simple and fast
|
||||
- No extra software
|
||||
|
||||
### New path (personal VPN)
|
||||
**Cons:**
|
||||
- Manual key and endpoint management for every peer
|
||||
- Doesn't scale well
|
||||
- If behind CGNAT, still needs the VPS as intermediary
|
||||
|
||||
A separate WireGuard tunnel for personal remote access with restricted firewall rules:
|
||||
### 3. Headscale (Self-Hosted Tailscale)
|
||||
|
||||
```
|
||||
Laptop/Phone → VPS (WireGuard peers) → tunnel → extgw01 (firewall) → allowed services
|
||||
```
|
||||
Run a Headscale control server (on the VPS or internally) and install the Tailscale client on homelab hosts and personal devices. Gets the Tailscale mesh networking UX without depending on Tailscale's infrastructure.
|
||||
|
||||
### Access tiers
|
||||
**Pros:**
|
||||
- Mesh topology - devices communicate directly via NAT traversal (DERP relay as fallback)
|
||||
- Easy to add/remove devices
|
||||
- ACL support for granular access control
|
||||
- MagicDNS for service discovery
|
||||
- Good NixOS support for both headscale server and tailscale client
|
||||
- Subnet routing lets you expose the entire 10.69.13.x network or specific hosts without installing tailscale on every host
|
||||
|
||||
1. **VPN (default)**: Laptop/phone connect to VPS WireGuard endpoint, traffic routed through extgw01 firewall. Only whitelisted services are reachable.
|
||||
2. **SSH + 2FA (escalated)**: SSH into extgw01 for full network access when needed.
|
||||
**Cons:**
|
||||
- More moving parts than plain WireGuard
|
||||
- Headscale is a third-party reimplementation, can lag behind Tailscale features
|
||||
- Need to run and maintain the control server
|
||||
|
||||
## New Host: extgw01
|
||||
### 4. Tailscale (Managed)
|
||||
|
||||
A NixOS host on the internal network acting as both WireGuard gateway and SSH bastion.
|
||||
Same as Headscale but using Tailscale's hosted control plane.
|
||||
|
||||
### Responsibilities
|
||||
**Pros:**
|
||||
- Zero infrastructure to manage on the control plane side
|
||||
- Polished UX, well-maintained clients
|
||||
- Free tier covers personal use
|
||||
|
||||
- **WireGuard tunnel** to the VPS for client traffic
|
||||
- **Firewall** with allowlist controlling which internal services are reachable through the VPN
|
||||
- **SSH bastion** with 2FA for full network access when needed
|
||||
- **DNS**: Clients get split DNS config (laptop via systemd-resolved routing domain, phone uses internal DNS for all queries)
|
||||
**Cons:**
|
||||
- Dependency on Tailscale's service
|
||||
- Less aligned with self-hosting preference
|
||||
- Coordination metadata goes through their servers (data plane is still peer-to-peer)
|
||||
|
||||
### Firewall allowlist (initial)
|
||||
### 5. Netbird (Self-Hosted)
|
||||
|
||||
| Service | Destination | Port |
|
||||
|------------|------------------------------|-------|
|
||||
| Grafana | monitoring01.home.2rjus.net | 3000 |
|
||||
| Jellyfin | jelly01.home.2rjus.net | 8096 |
|
||||
| Sonarr | sonarr-jail.home.2rjus.net | 8989 |
|
||||
| Radarr | radarr-jail.home.2rjus.net | 7878 |
|
||||
| NZBget | nzbget-jail.home.2rjus.net | 6789 |
|
||||
Open-source alternative to Tailscale with a self-hostable management server. WireGuard-based, supports ACLs and NAT traversal.
|
||||
|
||||
### SSH 2FA options (to be decided)
|
||||
**Pros:**
|
||||
- Fully self-hostable
|
||||
- Web UI for management
|
||||
- ACL and peer grouping support
|
||||
|
||||
- **Kanidm**: Already deployed on kanidm01, supports RADIUS/OAuth2 for PAM integration
|
||||
- **SSH certificates via OpenBao**: Fits existing Vault infrastructure, short-lived certs
|
||||
- **TOTP via PAM**: Simplest fallback, Google Authenticator / similar
|
||||
**Cons:**
|
||||
- Heavier to self-host (needs multiple components: management server, signal server, TURN relay)
|
||||
- Less mature NixOS module support compared to Tailscale/Headscale
|
||||
|
||||
## VPS Configuration
|
||||
### 6. Nebula (by Defined Networking)
|
||||
|
||||
The VPS needs a new WireGuard interface (separate from the existing http-proxy tunnel):
|
||||
Certificate-based mesh VPN. Each node gets a certificate from a CA you control. No central coordination server needed at runtime.
|
||||
|
||||
- WireGuard endpoint listening on a public UDP port
|
||||
- 2 peers: laptop, phone
|
||||
- Routes client traffic through tunnel to extgw01
|
||||
- Minimal config - just routing, no firewall policy (that lives on extgw01)
|
||||
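As a rough illustration of the VPS side (the real setup would be declarative via wg-quick or the NixOS WireGuard module), a hand-run sketch; the interface name, listen port, tunnel addressing, and the extgw01 tunnel interface name are all assumptions:

```bash
# Hypothetical sketch: second WireGuard interface on the VPS for personal clients.
umask 077
wg genkey | tee vps-ext.key | wg pubkey > vps-ext.pub   # key pair for the endpoint

# Create the interface and assign a tunnel address (10.69.223.0/24 is assumed)
ip link add dev wg-ext type wireguard
ip addr add 10.69.223.1/24 dev wg-ext
wg set wg-ext private-key ./vps-ext.key listen-port 51821
ip link set wg-ext up

# Add the two client peers (public keys generated on the devices)
wg set wg-ext peer "<laptop-pubkey>" allowed-ips 10.69.223.10/32
wg set wg-ext peer "<phone-pubkey>"  allowed-ips 10.69.223.11/32

# Forward client traffic towards the homelab through the extgw01 tunnel
sysctl -w net.ipv4.ip_forward=1
ip route add 10.69.13.0/24 dev wg-extgw   # hypothetical tunnel interface to extgw01
```
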
**Pros:**
|
||||
- No always-on control plane
|
||||
- Certificate-based identity
|
||||
- Lightweight
|
||||
|
||||
## Implementation Steps
|
||||
**Cons:**
|
||||
- Less convenient for ad-hoc device addition (need to issue certs)
|
||||
- NAT traversal less mature than Tailscale's
|
||||
- Smaller community/ecosystem
|
||||
|
||||
1. **Create extgw01 host configuration** in this repo
|
||||
- VM provisioned via OpenTofu (same as other hosts)
|
||||
- WireGuard interface for VPS tunnel
|
||||
- nftables/iptables firewall with service allowlist
|
||||
- IP forwarding enabled
|
||||
2. **Configure VPS WireGuard** for client peers
|
||||
- New WireGuard interface with laptop + phone peers
|
||||
- Routing for 10.69.13.0/24 through extgw01 tunnel
|
||||
3. **Set up client configs**
|
||||
- Laptop: WireGuard config + systemd-resolved split DNS for `home.2rjus.net`
|
||||
- Phone: WireGuard app config with DNS pointing at internal nameservers
|
||||
4. **Set up SSH 2FA** on extgw01
|
||||
- Evaluate Kanidm integration vs OpenBao SSH certs vs TOTP
|
||||
5. **Test and verify**
|
||||
- VPN access to allowed services only
|
||||
- Firewall blocks everything else
|
||||
- SSH + 2FA grants full access
|
||||
- Existing public access path unaffected
|
||||
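For step 3 above, a minimal sketch of the laptop-side split DNS with systemd-resolved, assuming a hypothetical WireGuard interface name of `wg-home` and the internal nameservers already used elsewhere in this repo:

```bash
# Point only the home.2rjus.net domain at the internal nameservers over the VPN
resolvectl dns wg-home 10.69.13.5 10.69.13.6
resolvectl domain wg-home '~home.2rjus.net'

# Verify that internal names resolve through the tunnel
resolvectl query grafana.home.2rjus.net
```
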
## Key Decision Points
|
||||
|
||||
- **Static public IP vs CGNAT?** Determines whether clients can connect directly to home network or need VPS relay.
|
||||
- **Number of client devices?** If just phone and laptop, plain WireGuard via VPS is fine. More devices favors Headscale.
|
||||
- **Per-service vs per-network access?** Gateway with firewall rules gives per-service control. Headscale ACLs can also do this. Plain WireGuard gives network-level access with gateway firewall for finer control.
|
||||
- **Subnet routing vs per-host agents?** With Headscale/Tailscale, can either install client on every host, or use a single subnet router that advertises the 10.69.13.x range. The latter is closer to the gateway approach and avoids touching every host.
|
||||
|
||||
## Leading Candidates
|
||||
|
||||
Based on existing WireGuard experience, self-hosting preference, and NixOS stack:
|
||||
|
||||
1. **Headscale with a subnet router** - Best balance of convenience and self-hosting
|
||||
2. **WireGuard gateway via VPS** - Simplest, most transparent, builds on existing setup
|
||||
|
||||
@@ -39,17 +39,23 @@ Expand storage capacity for the main hdd-pool. Since we need to add disks anyway
|
||||
- nzbget: NixOS service or OCI container
|
||||
- NFS exports: `services.nfs.server`
|
||||
|
||||
### Filesystem: Keep ZFS
|
||||
### Filesystem: BTRFS RAID1
|
||||
|
||||
**Decision**: Keep existing ZFS pool, import on NixOS
|
||||
**Decision**: Migrate from ZFS to BTRFS with RAID1
|
||||
|
||||
**Rationale**:
|
||||
- **No data migration needed**: Existing ZFS pool can be imported directly on NixOS
|
||||
- **Proven reliability**: Pool has been running reliably on TrueNAS
|
||||
- **NixOS ZFS support**: Well-supported, declarative configuration via `boot.zfs` and `services.zfs`
|
||||
- **BTRFS RAID5/6 unreliable**: Research showed BTRFS RAID5/6 write hole is still unresolved
|
||||
- **BTRFS RAID1 wasteful**: With mixed disk sizes, RAID1 wastes significant capacity vs ZFS mirrors
|
||||
- Checksumming, snapshots, compression (lz4/zstd) all available
|
||||
- **In-kernel**: No out-of-tree module issues like ZFS
|
||||
- **Flexible expansion**: Add individual disks, not required to buy pairs
|
||||
- **Mixed disk sizes**: Better handling than ZFS multi-vdev approach
|
||||
- **RAID level conversion**: Can convert between RAID levels in place
|
||||
- Built-in checksumming, snapshots, compression (zstd)
|
||||
- NixOS has good BTRFS support
|
||||
|
||||
**BTRFS RAID1 notes**:
|
||||
- "RAID1" means 2 copies of all data
|
||||
- Distributes across all available devices
|
||||
- With 6+ disks, provides redundancy + capacity scaling
|
||||
- RAID5/6 avoided (known issues), RAID1/10 are stable
|
||||
|
||||
### Hardware: Keep Existing + Add Disks
|
||||
|
||||
@@ -63,94 +69,83 @@ Expand storage capacity for the main hdd-pool. Since we need to add disks anyway
|
||||
|
||||
**Storage architecture**:
|
||||
|
||||
**hdd-pool** (ZFS mirrors):
|
||||
- Current: 3 mirror vdevs (2x16TB + 2x8TB + 2x8TB) = 32TB usable
|
||||
- Add: mirror-3 with 2x 24TB = +24TB usable
|
||||
- Total after expansion: ~56TB usable
|
||||
**Bulk storage** (BTRFS RAID1 on HDDs):
|
||||
- Current: 6x HDDs (2x16TB + 2x8TB + 2x8TB)
|
||||
- Add: 2x new HDDs (size TBD)
|
||||
- Use: Media, downloads, backups, non-critical data
|
||||
- Risk tolerance: High (data mostly replaceable)
|
||||
|
||||
**Critical data** (small volume):
|
||||
- Use 2x 240GB SSDs in mirror (BTRFS or ZFS)
|
||||
- Or use 2TB NVMe for critical data
|
||||
- Risk tolerance: Low (data important but small)
|
||||
|
||||
### Disk Purchase Decision
|
||||
|
||||
**Decision**: 2x 24TB drives (ordered, arriving 2026-02-21)
|
||||
**Options under consideration**:
|
||||
|
||||
**Option A: 2x 16TB drives**
|
||||
- Matches largest current drives
|
||||
- Enables potential future RAID5 if desired (6x 16TB array)
|
||||
- More conservative capacity increase
|
||||
|
||||
**Option B: 2x 20-24TB drives**
|
||||
- Larger capacity headroom
|
||||
- Better $/TB ratio typically
|
||||
- Future-proofs better
|
||||
|
||||
**Initial purchase**: 2 drives (chassis has space for 2 more without modifications)
|
||||
|
||||
## Migration Strategy
|
||||
|
||||
### High-Level Plan
|
||||
|
||||
1. **Expand ZFS pool** (on TrueNAS):
|
||||
- Install 2x 24TB drives (may need new drive trays - order from abroad if needed)
|
||||
- If chassis space is limited, temporarily replace the two oldest 8TB drives (da0/ada4)
|
||||
- Add as mirror-3 vdev to hdd-pool
|
||||
- Verify pool health and resilver completes
|
||||
- Check SMART data on old 8TB drives (all healthy as of 2026-02-20, no reallocated sectors)
|
||||
- Burn-in: at minimum short + long SMART test before adding to pool
|
||||
1. **Preparation**:
|
||||
- Purchase 2x new HDDs (16TB or 20-24TB)
|
||||
- Create NixOS configuration for new storage host
|
||||
- Set up bare metal NixOS installation
|
||||
|
||||
2. **Prepare NixOS configuration**:
|
||||
- Create host configuration (`hosts/nas1/` or similar)
|
||||
- Configure ZFS pool import (`boot.zfs.extraPools`)
|
||||
- Set up services: radarr, sonarr, nzbget, restic-rest, NFS
|
||||
- Configure monitoring (node-exporter, promtail, smartctl-exporter)
|
||||
2. **Initial BTRFS pool**:
|
||||
- Install 2 new disks
|
||||
- Create BTRFS filesystem in RAID1
|
||||
- Mount and test NFS exports
|
||||
|
||||
3. **Install NixOS**:
|
||||
- `zpool export hdd-pool` on TrueNAS before shutdown (clean export)
|
||||
- Wipe TrueNAS boot-pool SSDs, set up as mdadm RAID1 for NixOS root
|
||||
- Install NixOS on mdadm mirror (keeps boot path ZFS-independent)
|
||||
- Import hdd-pool via `boot.zfs.extraPools`
|
||||
- Verify all datasets mount correctly
|
||||
3. **Data migration**:
|
||||
- Copy data from TrueNAS ZFS pool to new BTRFS pool over 10GbE
|
||||
- Verify data integrity
|
||||
|
||||
4. **Service migration**:
|
||||
- Configure NixOS services to use ZFS dataset paths
|
||||
- Update NFS exports
|
||||
- Test from consuming hosts
|
||||
4. **Expand pool**:
|
||||
- As old ZFS pool is emptied, wipe drives and add to BTRFS pool
|
||||
- Pool grows incrementally: 2 → 4 → 6 → 8 disks
|
||||
- BTRFS rebalances data across new devices
|
||||
|
||||
5. **Cutover**:
|
||||
- Update DNS/client mounts if IP changes
|
||||
- Verify monitoring integration
|
||||
5. **Service migration**:
|
||||
- Set up radarr/sonarr/nzbget/restic as NixOS services
|
||||
- Update NFS client mounts on consuming hosts
|
||||
|
||||
6. **Cutover**:
|
||||
- Point consumers to new NAS host
|
||||
- Decommission TrueNAS
|
||||
|
||||
### Post-Expansion: Vdev Rebalancing

ZFS has no built-in rebalance command. After adding the new 24TB vdev, ZFS will write new data preferentially to it (most free space), leaving the old vdevs packed at ~97%. This is suboptimal but not urgent, since overall pool usage drops to ~50% once the new vdev is added.

To gradually rebalance, rewrite files in place so ZFS redistributes blocks across all vdevs proportional to free space:

```bash
# Rewrite files individually (spreads blocks across all vdevs)
find /pool/dataset -type f -exec sh -c '
  for f; do cp "$f" "$f.rebal" && mv "$f.rebal" "$f"; done
' _ {} +
```

|
||||
Avoid `zfs send/recv` for large datasets (e.g. 20TB), as this would concentrate the data on the emptiest vdev rather than spreading it evenly.

**Recommendation**: Do this after the NixOS migration is stable. It is not urgent - the pool will function fine with uneven distribution, just slightly suboptimally for performance.

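To watch how unevenly the vdevs are filled before and after rewriting data (assuming the pool keeps its current name, `hdd-pool`):

```bash
# Per-vdev capacity and utilization (the new mirror should show far more free space)
zpool list -v hdd-pool

# Watch per-vdev write distribution while data is being rewritten
zpool iostat -v hdd-pool 5 3
```
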
- Repurpose hardware or keep as spare
|
||||
|
||||
### Migration Advantages
|
||||
|
||||
- **No data migration**: ZFS pool imported directly, no copying terabytes of data
|
||||
- **Low risk**: Pool expansion done on stable TrueNAS before OS swap
|
||||
- **Reversible**: Can boot back to TrueNAS if NixOS has issues (ZFS pool is OS-independent)
|
||||
- **Quick cutover**: Once NixOS config is ready, the OS swap is fast
|
||||
- **Low risk**: New pool created independently, old data remains intact during migration
|
||||
- **Incremental**: Can add old disks one at a time as space allows
|
||||
- **Flexible**: BTRFS handles mixed disk sizes gracefully
|
||||
- **Reversible**: Keep TrueNAS running until fully validated
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. ~~Decide on disk size~~ - 2x 24TB ordered
|
||||
2. Install drives and add mirror vdev to ZFS pool
|
||||
3. Check SMART data on 8TB drives - decide whether to keep or retire
|
||||
4. Design NixOS host configuration (`hosts/nas1/`)
|
||||
5. Document NFS export mapping (current -> new)
|
||||
6. Plan NixOS installation and cutover
|
||||
1. Decide on disk size (16TB vs 20-24TB)
|
||||
2. Purchase disks
|
||||
3. Design NixOS host configuration (`hosts/nas1/`)
|
||||
4. Plan detailed migration timeline
|
||||
5. Document NFS export mapping (current → new)
|
||||
|
||||
## Open Questions
|
||||
|
||||
- [ ] Final decision on disk size?
|
||||
- [ ] Hostname for new NAS host? (nas1? storage1?)
|
||||
- [ ] IP address/subnet: NAS and Proxmox are both on 10GbE to the same switch but different subnets, forcing traffic through the router (bottleneck). Move to same subnet during migration.
|
||||
- [x] Boot drive: Reuse TrueNAS boot-pool SSDs as mdadm RAID1 for NixOS root (no ZFS on boot path)
|
||||
- [ ] Retire old 8TB drives? (SMART looks healthy, keep unless chassis space is needed)
|
||||
- [x] Drive trays: ordered domestically (expected 2026-02-25 to 2026-03-03)
|
||||
- [ ] Timeline/maintenance window for NixOS swap?
|
||||
- [ ] IP address allocation (keep 10.69.12.50 or new IP?)
|
||||
- [ ] Timeline/maintenance window for migration?
|
||||
|
||||
@@ -43,21 +43,11 @@ kanidm person posix set-password <username>
|
||||
kanidm person posix set <username> --shell /bin/zsh
|
||||
```
|
||||
|
||||
### Setting Email Address
|
||||
|
||||
Email is required for OAuth2/OIDC login (e.g., Grafana):
|
||||
|
||||
```bash
|
||||
kanidm person update <username> --mail <email>
|
||||
```
|
||||
|
||||
### Example: Full User Creation
|
||||
|
||||
```bash
|
||||
kanidm person create testuser "Test User"
|
||||
kanidm person update testuser --mail testuser@home.2rjus.net
|
||||
kanidm group add-members ssh-users testuser
|
||||
kanidm group add-members users testuser # Required for OAuth2 scopes
|
||||
kanidm person posix set testuser
|
||||
kanidm person posix set-password testuser
|
||||
kanidm person get testuser
|
||||
@@ -139,40 +129,6 @@ Kanidm auto-assigns UIDs/GIDs from its configured range. For manually assigned G
|
||||
| 65,536+ | Users (auto-assigned) |
|
||||
| 68,000 - 68,999 | Groups (manually assigned) |
|
||||
|
||||
## OAuth2/OIDC Login (Web Services)

For OAuth2/OIDC login to web services like Grafana, users need:

1. **Primary credential** - Password set via `credential update` (separate from unix password)
2. **MFA** - TOTP or passkey (Kanidm requires MFA for primary credentials)
3. **Group membership** - Member of `users` group (for OAuth2 scope mapping)
4. **Email address** - Set via `person update --mail`

### Setting Up Primary Credential (Web Login)

The primary credential is different from the unix/POSIX password:

```bash
# Interactive credential setup
kanidm person credential update <username>

# In the interactive prompt:
# 1. Type 'password' to set a password
# 2. Type 'totp' to add TOTP (scan QR with authenticator app)
# 3. Type 'commit' to save
```

### Verifying OAuth2 Readiness

```bash
kanidm person get <username>
```

Check for:
- `mail:` - Email address set
- `memberof:` - Includes `users@home.2rjus.net`
- Primary credential status (check via `credential update` → `status`)

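A small convenience sketch for the same check from a shell, assuming the field names shown in the checklist above:

```bash
# Quick OAuth2-readiness check for a user
kanidm person get <username> | grep -E '^(mail|memberof)'
```
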
## PAM/NSS Client Configuration
|
||||
|
||||
Enable central authentication on a host:
|
||||
|
||||
48
flake.lock
generated
@@ -7,18 +7,18 @@
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1773079666,
|
||||
"narHash": "sha256-midgZRnFEybsH3uJazCJcF9i5Tm5hYVH7+oDLAFpLtU=",
|
||||
"lastModified": 1739310461,
|
||||
"narHash": "sha256-GscftfATX84Aae9FObrQOe+hr5MsEma2Fc5fdzuu3hA=",
|
||||
"ref": "master",
|
||||
"rev": "d8c08778f941a459fccae932e3768f9b9fe1783d",
|
||||
"revCount": 11,
|
||||
"rev": "53915cec6356be1a2d44ac2cbd0a71b32d679e6f",
|
||||
"revCount": 7,
|
||||
"type": "git",
|
||||
"url": "https://code.t-juice.club/torjus/alerttonotify"
|
||||
"url": "https://git.t-juice.club/torjus/alerttonotify"
|
||||
},
|
||||
"original": {
|
||||
"ref": "master",
|
||||
"type": "git",
|
||||
"url": "https://code.t-juice.club/torjus/alerttonotify"
|
||||
"url": "https://git.t-juice.club/torjus/alerttonotify"
|
||||
}
|
||||
},
|
||||
"homelab-deploy": {
|
||||
@@ -28,18 +28,18 @@
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1773081467,
|
||||
"narHash": "sha256-K22nYBq4FXe/1NJ/wg0uUbFrutgw2j9axbA/1NvvK8E=",
|
||||
"lastModified": 1770481834,
|
||||
"narHash": "sha256-Xx9BYnI0C/qgPbwr9nj6NoAdQTbYLunrdbNSaUww9oY=",
|
||||
"ref": "master",
|
||||
"rev": "713d1e7584c1e076fcf8e6248e2d022027832e86",
|
||||
"revCount": 38,
|
||||
"rev": "fd0d63b103dfaf21d1c27363266590e723021c67",
|
||||
"revCount": 24,
|
||||
"type": "git",
|
||||
"url": "https://code.t-juice.club/torjus/homelab-deploy"
|
||||
"url": "https://git.t-juice.club/torjus/homelab-deploy"
|
||||
},
|
||||
"original": {
|
||||
"ref": "master",
|
||||
"type": "git",
|
||||
"url": "https://code.t-juice.club/torjus/homelab-deploy"
|
||||
"url": "https://git.t-juice.club/torjus/homelab-deploy"
|
||||
}
|
||||
},
|
||||
"nixos-exporter": {
|
||||
@@ -49,26 +49,26 @@
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1773081113,
|
||||
"narHash": "sha256-99hs9Gvzc+M9hSTY7zSHL7TmhPkOYZ/9li9OhN3kXWc=",
|
||||
"lastModified": 1770422522,
|
||||
"narHash": "sha256-WmIFnquu4u58v8S2bOVWmknRwHn4x88CRfBFTzJ1inQ=",
|
||||
"ref": "refs/heads/master",
|
||||
"rev": "79900ae92df5607235f6ddb28eda67270d996819",
|
||||
"revCount": 16,
|
||||
"rev": "cf0ce858997af4d8dcc2ce10393ff393e17fc911",
|
||||
"revCount": 11,
|
||||
"type": "git",
|
||||
"url": "https://code.t-juice.club/torjus/nixos-exporter"
|
||||
"url": "https://git.t-juice.club/torjus/nixos-exporter"
|
||||
},
|
||||
"original": {
|
||||
"type": "git",
|
||||
"url": "https://code.t-juice.club/torjus/nixos-exporter"
|
||||
"url": "https://git.t-juice.club/torjus/nixos-exporter"
|
||||
}
|
||||
},
|
||||
"nixpkgs": {
|
||||
"locked": {
|
||||
"lastModified": 1772822230,
|
||||
"narHash": "sha256-yf3iYLGbGVlIthlQIk5/4/EQDZNNEmuqKZkQssMljuw=",
|
||||
"lastModified": 1770136044,
|
||||
"narHash": "sha256-tlFqNG/uzz2++aAmn4v8J0vAkV3z7XngeIIB3rM3650=",
|
||||
"owner": "nixos",
|
||||
"repo": "nixpkgs",
|
||||
"rev": "71caefce12ba78d84fe618cf61644dce01cf3a96",
|
||||
"rev": "e576e3c9cf9bad747afcddd9e34f51d18c855b4e",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
@@ -80,11 +80,11 @@
|
||||
},
|
||||
"nixpkgs-unstable": {
|
||||
"locked": {
|
||||
"lastModified": 1772773019,
|
||||
"narHash": "sha256-E1bxHxNKfDoQUuvriG71+f+s/NT0qWkImXsYZNFFfCs=",
|
||||
"lastModified": 1770197578,
|
||||
"narHash": "sha256-AYqlWrX09+HvGs8zM6ebZ1pwUqjkfpnv8mewYwAo+iM=",
|
||||
"owner": "nixos",
|
||||
"repo": "nixpkgs",
|
||||
"rev": "aca4d95fce4914b3892661bcb80b8087293536c6",
|
||||
"rev": "00c21e4c93d963c50d4c0c89bfa84ed6e0694df2",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
|
||||
108
flake.nix
@@ -6,15 +6,15 @@
|
||||
nixpkgs-unstable.url = "github:nixos/nixpkgs?ref=nixos-unstable";
|
||||
|
||||
alerttonotify = {
|
||||
url = "git+https://code.t-juice.club/torjus/alerttonotify?ref=master";
|
||||
url = "git+https://git.t-juice.club/torjus/alerttonotify?ref=master";
|
||||
inputs.nixpkgs.follows = "nixpkgs-unstable";
|
||||
};
|
||||
nixos-exporter = {
|
||||
url = "git+https://code.t-juice.club/torjus/nixos-exporter";
|
||||
url = "git+https://git.t-juice.club/torjus/nixos-exporter";
|
||||
inputs.nixpkgs.follows = "nixpkgs-unstable";
|
||||
};
|
||||
homelab-deploy = {
|
||||
url = "git+https://code.t-juice.club/torjus/homelab-deploy?ref=master";
|
||||
url = "git+https://git.t-juice.club/torjus/homelab-deploy?ref=master";
|
||||
inputs.nixpkgs.follows = "nixpkgs-unstable";
|
||||
};
|
||||
};
|
||||
@@ -92,6 +92,15 @@
|
||||
./hosts/http-proxy
|
||||
];
|
||||
};
|
||||
monitoring01 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/monitoring01
|
||||
];
|
||||
};
|
||||
jelly01 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
@@ -101,6 +110,15 @@
|
||||
./hosts/jelly01
|
||||
];
|
||||
};
|
||||
nix-cache01 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/nix-cache01
|
||||
];
|
||||
};
|
||||
nats1 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
@@ -173,87 +191,6 @@
|
||||
./hosts/kanidm01
|
||||
];
|
||||
};
|
||||
monitoring02 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/monitoring02
|
||||
];
|
||||
};
|
||||
nix-cache02 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/nix-cache02
|
||||
];
|
||||
};
|
||||
garage01 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/garage01
|
||||
];
|
||||
};
|
||||
pn01 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/pn01
|
||||
];
|
||||
};
|
||||
pn02 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/pn02
|
||||
];
|
||||
};
|
||||
nrec-nixos01 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/nrec-nixos01
|
||||
];
|
||||
};
|
||||
nrec-nixos02 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/nrec-nixos02
|
||||
];
|
||||
};
|
||||
media1 = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/media1
|
||||
];
|
||||
};
|
||||
openstack-template = nixpkgs.lib.nixosSystem {
|
||||
inherit system;
|
||||
specialArgs = {
|
||||
inherit inputs self;
|
||||
};
|
||||
modules = commonModules ++ [
|
||||
./hosts/openstack-template
|
||||
];
|
||||
};
|
||||
};
|
||||
packages = forAllSystems (
|
||||
{ pkgs }:
|
||||
@@ -271,12 +208,9 @@
|
||||
pkgs.opentofu
|
||||
pkgs.openbao
|
||||
pkgs.kanidm_1_8
|
||||
pkgs.nkeys
|
||||
pkgs.openstackclient
|
||||
(pkgs.callPackage ./scripts/create-host { })
|
||||
homelab-deploy.packages.${pkgs.system}.default
|
||||
];
|
||||
ANSIBLE_CONFIG = "./ansible/ansible.cfg";
|
||||
};
|
||||
}
|
||||
);
|
||||
|
||||
@@ -1,72 +0,0 @@
|
||||
{
|
||||
config,
|
||||
lib,
|
||||
pkgs,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
imports = [
|
||||
../template2/hardware-configuration.nix
|
||||
|
||||
../../system
|
||||
../../common/vm
|
||||
];
|
||||
|
||||
# Host metadata (adjust as needed)
|
||||
homelab.host = {
|
||||
tier = "test"; # Start in test tier, move to prod after validation
|
||||
role = "storage";
|
||||
};
|
||||
|
||||
homelab.dns.cnames = [ "s3" ];
|
||||
|
||||
# Enable Vault integration
|
||||
vault.enable = true;
|
||||
|
||||
# Enable remote deployment via NATS
|
||||
homelab.deploy.enable = true;
|
||||
|
||||
nixpkgs.config.allowUnfree = true;
|
||||
boot.loader.grub.enable = true;
|
||||
boot.loader.grub.device = "/dev/vda";
|
||||
|
||||
networking.hostName = "garage01";
|
||||
networking.domain = "home.2rjus.net";
|
||||
networking.useNetworkd = true;
|
||||
networking.useDHCP = false;
|
||||
services.resolved.enable = true;
|
||||
networking.nameservers = [
|
||||
"10.69.13.5"
|
||||
"10.69.13.6"
|
||||
];
|
||||
|
||||
systemd.network.enable = true;
|
||||
systemd.network.networks."ens18" = {
|
||||
matchConfig.Name = "ens18";
|
||||
address = [
|
||||
"10.69.13.26/24"
|
||||
];
|
||||
routes = [
|
||||
{ Gateway = "10.69.13.1"; }
|
||||
];
|
||||
linkConfig.RequiredForOnline = "routable";
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
wget
|
||||
git
|
||||
];
|
||||
|
||||
# Open ports in the firewall.
|
||||
# networking.firewall.allowedTCPPorts = [ ... ];
|
||||
# networking.firewall.allowedUDPPorts = [ ... ];
|
||||
# Or disable the firewall altogether.
|
||||
networking.firewall.enable = false;
|
||||
|
||||
system.stateVersion = "25.11"; # Did you read the comment?
|
||||
}
|
||||
@@ -1,6 +0,0 @@
|
||||
{ ... }: {
|
||||
imports = [
|
||||
./configuration.nix
|
||||
../../services/garage
|
||||
];
|
||||
}
|
||||
@@ -13,8 +13,6 @@
|
||||
../../common/vm
|
||||
];
|
||||
|
||||
homelab.host.role = "home-automation";
|
||||
|
||||
nixpkgs.config.allowUnfree = true;
|
||||
# Use the systemd-boot EFI boot loader.
|
||||
boot.loader.grub = {
|
||||
@@ -46,7 +44,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
@@ -84,7 +85,6 @@
|
||||
"--keep-monthly 6"
|
||||
"--keep-within 1d"
|
||||
];
|
||||
extraOptions = [ "--retry-lock=5m" ];
|
||||
};
|
||||
|
||||
# Open ports in the firewall.
|
||||
|
||||
@@ -11,14 +11,18 @@
|
||||
../../common/vm
|
||||
];
|
||||
|
||||
homelab.host.role = "proxy";
|
||||
homelab.dns.cnames = [
|
||||
"nzbget"
|
||||
"radarr"
|
||||
"sonarr"
|
||||
"ha"
|
||||
"z2m"
|
||||
"grafana"
|
||||
"prometheus"
|
||||
"alertmanager"
|
||||
"jelly"
|
||||
"pyroscope"
|
||||
"pushgw"
|
||||
];
|
||||
|
||||
nixpkgs.config.allowUnfree = true;
|
||||
@@ -52,7 +56,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
vault.enable = true;
|
||||
homelab.deploy.enable = true;
|
||||
|
||||
|
||||
@@ -11,8 +11,6 @@
|
||||
../../common/vm
|
||||
];
|
||||
|
||||
homelab.host.role = "media";
|
||||
|
||||
nixpkgs.config.allowUnfree = true;
|
||||
# Use the systemd-boot EFI boot loader.
|
||||
boot.loader.grub = {
|
||||
@@ -44,7 +42,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -14,8 +14,9 @@
|
||||
../../services/kanidm
|
||||
];
|
||||
|
||||
# Host metadata
|
||||
homelab.host = {
|
||||
tier = "prod";
|
||||
tier = "test";
|
||||
role = "auth";
|
||||
};
|
||||
|
||||
@@ -55,7 +56,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -1,84 +0,0 @@
|
||||
{
|
||||
pkgs,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
imports = [
|
||||
./hardware-configuration.nix
|
||||
../../system
|
||||
];
|
||||
|
||||
boot.loader.systemd-boot.enable = true;
|
||||
boot.loader.efi.canTouchEfiVariables = true;
|
||||
|
||||
networking.hostName = "media1";
|
||||
networking.domain = "home.2rjus.net";
|
||||
networking.useNetworkd = true;
|
||||
networking.useDHCP = false;
|
||||
networking.firewall.enable = false;
|
||||
services.resolved.enable = true;
|
||||
networking.nameservers = [
|
||||
"10.69.13.5"
|
||||
"10.69.13.6"
|
||||
];
|
||||
|
||||
systemd.network.enable = true;
|
||||
systemd.network.networks."10-lan" = {
|
||||
matchConfig.Name = "enp*";
|
||||
address = [
|
||||
"10.69.31.51/24"
|
||||
];
|
||||
routes = [
|
||||
{ Gateway = "10.69.31.1"; }
|
||||
];
|
||||
linkConfig.RequiredForOnline = "routable";
|
||||
};
|
||||
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
homelab.host = {
|
||||
tier = "prod";
|
||||
priority = "low";
|
||||
role = "media";
|
||||
};
|
||||
|
||||
# Intel N100 (Alder Lake-N) graphics
|
||||
hardware.graphics = {
|
||||
enable = true;
|
||||
extraPackages = with pkgs; [
|
||||
intel-media-driver # VA-API driver for Broadwell+
|
||||
];
|
||||
};
|
||||
|
||||
# NFS for media access
|
||||
environment.systemPackages = with pkgs; [
|
||||
nfs-utils
|
||||
];
|
||||
services.rpcbind.enable = true;
|
||||
|
||||
systemd.mounts = [
|
||||
{
|
||||
type = "nfs";
|
||||
mountConfig = {
|
||||
Options = "ro,soft,noatime";
|
||||
};
|
||||
what = "nas.home.2rjus.net:/mnt/hdd-pool/media";
|
||||
where = "/mnt/nas/media";
|
||||
}
|
||||
];
|
||||
|
||||
systemd.automounts = [
|
||||
{
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
automountConfig = {
|
||||
TimeoutIdleSec = "5min";
|
||||
};
|
||||
where = "/mnt/nas/media";
|
||||
}
|
||||
];
|
||||
|
||||
vault.enable = true;
|
||||
|
||||
system.stateVersion = "25.11";
|
||||
}
|
||||
@@ -1,33 +0,0 @@
|
||||
# Do not modify this file! It was generated by 'nixos-generate-config'
|
||||
# and may be overwritten by future invocations. Please make changes
|
||||
# to /etc/nixos/configuration.nix instead.
|
||||
{ config, lib, pkgs, modulesPath, ... }:
|
||||
|
||||
{
|
||||
imports =
|
||||
[ (modulesPath + "/installer/scan/not-detected.nix")
|
||||
];
|
||||
|
||||
boot.initrd.availableKernelModules = [ "xhci_pci" "ahci" "nvme" "usbhid" "usb_storage" "sd_mod" "sdhci_pci" ];
|
||||
boot.initrd.kernelModules = [ ];
|
||||
boot.kernelModules = [ "kvm-intel" ];
|
||||
boot.extraModulePackages = [ ];
|
||||
|
||||
fileSystems."/" =
|
||||
{ device = "/dev/disk/by-uuid/0e1f61fd-18c6-4114-942e-f113a1e4b347";
|
||||
fsType = "ext4";
|
||||
};
|
||||
|
||||
fileSystems."/boot" =
|
||||
{ device = "/dev/disk/by-uuid/03C8-7DFE";
|
||||
fsType = "vfat";
|
||||
options = [ "fmask=0022" "dmask=0022" ];
|
||||
};
|
||||
|
||||
swapDevices =
|
||||
[ { device = "/dev/disk/by-uuid/0871bf99-9db6-4cd7-b307-3cebbb0a4e60"; }
|
||||
];
|
||||
|
||||
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
|
||||
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
|
||||
}
|
||||
@@ -1,166 +0,0 @@
|
||||
{
|
||||
config,
|
||||
pkgs,
|
||||
lib,
|
||||
...
|
||||
}:
|
||||
|
||||
let
|
||||
kodiPkg = pkgs.kodi-wayland.withPackages (p: [
|
||||
p.jellycon
|
||||
p.sendtokodi
|
||||
p.inputstream-adaptive
|
||||
]);
|
||||
|
||||
hyprlandConfig = ''
|
||||
# Monitor — auto-detect, native resolution
|
||||
monitor = , preferred, auto, 1
|
||||
|
||||
# Keyboard layout
|
||||
input {
|
||||
kb_layout = no
|
||||
}
|
||||
|
||||
# Launch Kodi, Firefox, and kitty on login
|
||||
exec-once = ${lib.getExe' kodiPkg "kodi"}
|
||||
exec-once = ${lib.getExe pkgs.firefox}
|
||||
exec-once = ${lib.getExe pkgs.kitty}
|
||||
|
||||
# Workspace rules — Kodi on 1, Firefox on 2, kitty on 3
|
||||
windowrulev2 = workspace 1 silent, class:^(kodi)$
|
||||
windowrulev2 = workspace 2 silent, class:^(firefox)$
|
||||
windowrulev2 = workspace 3 silent, class:^(kitty)$
|
||||
windowrulev2 = fullscreen, class:^(kodi)$
|
||||
windowrulev2 = fullscreen, class:^(firefox)$
|
||||
windowrulev2 = fullscreen, class:^(kitty)$
|
||||
|
||||
# Start on workspace 1 (Kodi)
|
||||
workspace = 1, default:true
|
||||
|
||||
# Switch workspaces with Super+1/2/3
|
||||
bind = SUPER, 1, workspace, 1
|
||||
bind = SUPER, 2, workspace, 2
|
||||
bind = SUPER, 3, workspace, 3
|
||||
|
||||
# No gaps, no borders — TV setup
|
||||
general {
|
||||
gaps_in = 0
|
||||
gaps_out = 0
|
||||
border_size = 0
|
||||
}
|
||||
|
||||
decoration {
|
||||
rounding = 0
|
||||
}
|
||||
|
||||
# Disable animations for snappy switching
|
||||
animations {
|
||||
enabled = false
|
||||
}
|
||||
|
||||
misc {
|
||||
# Disable Hyprland logo/splash
|
||||
disable_hyprland_logo = true
|
||||
disable_splash_rendering = true
|
||||
}
|
||||
'';
|
||||
in
|
||||
{
|
||||
# Hyprland compositor with UWSM for proper dbus/systemd session management
|
||||
programs.hyprland = {
|
||||
enable = true;
|
||||
withUWSM = true;
|
||||
portalPackage = pkgs.xdg-desktop-portal-hyprland;
|
||||
};
|
||||
|
||||
# greetd for auto-login — UWSM starts Hyprland as a systemd session
|
||||
services.greetd = {
|
||||
enable = true;
|
||||
settings = {
|
||||
default_session = {
|
||||
command = "uwsm start hyprland-uwsm.desktop";
|
||||
user = "kodi";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
# Deploy Hyprland config to kodi user's XDG config dir
|
||||
systemd.tmpfiles.rules = [
|
||||
"d /home/kodi/.config/hypr 0755 kodi kodi -"
|
||||
];
|
||||
environment.etc."skel/hypr/hyprland.conf".text = hyprlandConfig;
|
||||
system.activationScripts.hyprlandConfig = lib.stringAfter [ "users" ] ''
|
||||
install -D -o kodi -g kodi -m 0644 /dev/stdin /home/kodi/.config/hypr/hyprland.conf <<'HYPRCONF'
|
||||
${hyprlandConfig}
|
||||
HYPRCONF
|
||||
'';
|
||||
|
||||
# Kodi user
|
||||
users.users.kodi = {
|
||||
isNormalUser = true;
|
||||
home = "/home/kodi";
|
||||
homeMode = "750";
|
||||
group = "kodi";
|
||||
extraGroups = [
|
||||
"video"
|
||||
"audio"
|
||||
"input"
|
||||
];
|
||||
};
|
||||
users.groups.kodi = { };
|
||||
|
||||
# Allow promtail to read kodi logs
|
||||
users.users.promtail.extraGroups = [ "kodi" ];
|
||||
systemd.services.promtail.serviceConfig.ProtectHome = lib.mkForce "read-only";
|
||||
|
||||
# Packages available on the system
|
||||
environment.systemPackages = [
|
||||
kodiPkg
|
||||
pkgs.firefox
|
||||
pkgs.kitty
|
||||
pkgs.yt-dlp
|
||||
];
|
||||
|
||||
# PipeWire for audio (HDMI passthrough support)
|
||||
services.pipewire = {
|
||||
enable = true;
|
||||
alsa.enable = true;
|
||||
pulse.enable = true;
|
||||
wireplumber.extraConfig."60-hdmi-default" = {
|
||||
"monitor.alsa.rules" = [
|
||||
{
|
||||
matches = [{ "node.name" = "~alsa_output.pci-.*hdmi.*"; }];
|
||||
actions.update-props = {
|
||||
"priority.session" = 2000;
|
||||
"priority.driver" = 2000;
|
||||
};
|
||||
}
|
||||
];
|
||||
};
|
||||
};
|
||||
|
||||
# Allow VA-API hardware decode in Firefox
|
||||
environment.sessionVariables = {
|
||||
MOZ_ENABLE_WAYLAND = "1";
|
||||
LIBVA_DRIVER_NAME = "iHD";
|
||||
};
|
||||
|
||||
# Ship Kodi logs to Loki
|
||||
services.promtail.configuration.scrape_configs = [
|
||||
{
|
||||
job_name = "kodi";
|
||||
static_configs = [
|
||||
{
|
||||
targets = [ "localhost" ];
|
||||
labels = {
|
||||
job = "kodi";
|
||||
hostname = config.networking.hostName;
|
||||
tier = config.homelab.host.tier;
|
||||
role = config.homelab.host.role;
|
||||
__path__ = "/home/kodi/.kodi/temp/kodi.log";
|
||||
};
|
||||
}
|
||||
];
|
||||
}
|
||||
];
|
||||
}
|
||||
110
hosts/monitoring01/configuration.nix
Normal file
@@ -0,0 +1,110 @@
|
||||
{
|
||||
pkgs,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
imports = [
|
||||
./hardware-configuration.nix
|
||||
|
||||
../../system
|
||||
../../common/vm
|
||||
];
|
||||
|
||||
nixpkgs.config.allowUnfree = true;
|
||||
# Use the systemd-boot EFI boot loader.
|
||||
boot.loader.grub = {
|
||||
enable = true;
|
||||
device = "/dev/sda";
|
||||
configurationLimit = 3;
|
||||
};
|
||||
|
||||
networking.hostName = "monitoring01";
|
||||
networking.domain = "home.2rjus.net";
|
||||
networking.useNetworkd = true;
|
||||
networking.useDHCP = false;
|
||||
services.resolved.enable = true;
|
||||
networking.nameservers = [
|
||||
"10.69.13.5"
|
||||
"10.69.13.6"
|
||||
];
|
||||
|
||||
systemd.network.enable = true;
|
||||
systemd.network.networks."ens18" = {
|
||||
matchConfig.Name = "ens18";
|
||||
address = [
|
||||
"10.69.13.13/24"
|
||||
];
|
||||
routes = [
|
||||
{ Gateway = "10.69.13.1"; }
|
||||
];
|
||||
linkConfig.RequiredForOnline = "routable";
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
wget
|
||||
git
|
||||
sqlite
|
||||
];
|
||||
|
||||
services.qemuGuest.enable = true;
|
||||
|
||||
# Vault secrets management
|
||||
vault.enable = true;
|
||||
homelab.deploy.enable = true;
|
||||
vault.secrets.backup-helper = {
|
||||
secretPath = "shared/backup/password";
|
||||
extractKey = "password";
|
||||
outputDir = "/run/secrets/backup_helper_secret";
|
||||
services = [ "restic-backups-grafana" "restic-backups-grafana-db" ];
|
||||
};
|
||||
|
||||
services.restic.backups.grafana = {
|
||||
repository = "rest:http://10.69.12.52:8000/backup-nix";
|
||||
passwordFile = "/run/secrets/backup_helper_secret";
|
||||
paths = [ "/var/lib/grafana/plugins" ];
|
||||
timerConfig = {
|
||||
OnCalendar = "daily";
|
||||
Persistent = true;
|
||||
RandomizedDelaySec = "2h";
|
||||
};
|
||||
pruneOpts = [
|
||||
"--keep-daily 7"
|
||||
"--keep-weekly 4"
|
||||
"--keep-monthly 6"
|
||||
"--keep-within 1d"
|
||||
];
|
||||
};
|
||||
|
||||
services.restic.backups.grafana-db = {
|
||||
repository = "rest:http://10.69.12.52:8000/backup-nix";
|
||||
passwordFile = "/run/secrets/backup_helper_secret";
|
||||
command = [ "${pkgs.sqlite}/bin/sqlite3" "/var/lib/grafana/data/grafana.db" ".dump" ];
|
||||
timerConfig = {
|
||||
OnCalendar = "daily";
|
||||
Persistent = true;
|
||||
RandomizedDelaySec = "2h";
|
||||
};
|
||||
pruneOpts = [
|
||||
"--keep-daily 7"
|
||||
"--keep-weekly 4"
|
||||
"--keep-monthly 6"
|
||||
"--keep-within 1d"
|
||||
];
|
||||
};
|
||||
|
||||
# Open ports in the firewall.
|
||||
# networking.firewall.allowedTCPPorts = [ ... ];
|
||||
# networking.firewall.allowedUDPPorts = [ ... ];
|
||||
# Or disable the firewall altogether.
|
||||
networking.firewall.enable = false;
|
||||
|
||||
system.stateVersion = "23.11"; # Did you read the comment?
|
||||
}
|
||||
@@ -2,6 +2,6 @@
|
||||
{
|
||||
imports = [
|
||||
./configuration.nix
|
||||
./media-desktop.nix
|
||||
../../services/monitoring
|
||||
];
|
||||
}
|
||||
42
hosts/monitoring01/hardware-configuration.nix
Normal file
@@ -0,0 +1,42 @@
|
||||
{
|
||||
config,
|
||||
lib,
|
||||
pkgs,
|
||||
modulesPath,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
imports = [
|
||||
(modulesPath + "/profiles/qemu-guest.nix")
|
||||
];
|
||||
boot.initrd.availableKernelModules = [
|
||||
"ata_piix"
|
||||
"uhci_hcd"
|
||||
"virtio_pci"
|
||||
"virtio_scsi"
|
||||
"sd_mod"
|
||||
"sr_mod"
|
||||
];
|
||||
boot.initrd.kernelModules = [ "dm-snapshot" ];
|
||||
boot.kernelModules = [
|
||||
"ptp_kvm"
|
||||
];
|
||||
boot.extraModulePackages = [ ];
|
||||
|
||||
fileSystems."/" = {
|
||||
device = "/dev/disk/by-label/root";
|
||||
fsType = "xfs";
|
||||
};
|
||||
|
||||
swapDevices = [ { device = "/dev/disk/by-label/swap"; } ];
|
||||
|
||||
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
|
||||
# (the default) this is the recommended approach. When using systemd-networkd it's
|
||||
# still possible to use this option, but it's recommended to use it in conjunction
|
||||
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
|
||||
networking.useDHCP = lib.mkDefault true;
|
||||
# networking.interfaces.ens18.useDHCP = lib.mkDefault true;
|
||||
|
||||
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
|
||||
}
|
||||
@@ -1,71 +0,0 @@
|
||||
{
|
||||
config,
|
||||
lib,
|
||||
pkgs,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
imports = [
|
||||
../template2/hardware-configuration.nix
|
||||
|
||||
../../system
|
||||
../../common/vm
|
||||
];
|
||||
|
||||
homelab.host = {
|
||||
tier = "prod";
|
||||
role = "monitoring";
|
||||
};
|
||||
|
||||
homelab.dns.cnames = [ "monitoring" "alertmanager" "grafana" "grafana-test" "metrics" "vmalert" "loki" ];
|
||||
|
||||
# Enable Vault integration
|
||||
vault.enable = true;
|
||||
|
||||
# Enable remote deployment via NATS
|
||||
homelab.deploy.enable = true;
|
||||
|
||||
nixpkgs.config.allowUnfree = true;
|
||||
boot.loader.grub.enable = true;
|
||||
boot.loader.grub.device = "/dev/vda";
|
||||
|
||||
networking.hostName = "monitoring02";
|
||||
networking.domain = "home.2rjus.net";
|
||||
networking.useNetworkd = true;
|
||||
networking.useDHCP = false;
|
||||
services.resolved.enable = true;
|
||||
networking.nameservers = [
|
||||
"10.69.13.5"
|
||||
"10.69.13.6"
|
||||
];
|
||||
|
||||
systemd.network.enable = true;
|
||||
systemd.network.networks."ens18" = {
|
||||
matchConfig.Name = "ens18";
|
||||
address = [
|
||||
"10.69.13.24/24"
|
||||
];
|
||||
routes = [
|
||||
{ Gateway = "10.69.13.1"; }
|
||||
];
|
||||
linkConfig.RequiredForOnline = "routable";
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
wget
|
||||
git
|
||||
];
|
||||
|
||||
# Open ports in the firewall.
|
||||
# networking.firewall.allowedTCPPorts = [ ... ];
|
||||
# networking.firewall.allowedUDPPorts = [ ... ];
|
||||
# Or disable the firewall altogether.
|
||||
networking.firewall.enable = false;
|
||||
|
||||
system.stateVersion = "25.11"; # Did you read the comment?
|
||||
}
|
||||
@@ -1,12 +0,0 @@
|
||||
{ ... }: {
|
||||
imports = [
|
||||
./configuration.nix
|
||||
../../services/grafana
|
||||
../../services/victoriametrics
|
||||
../../services/loki
|
||||
../../services/monitoring/alerttonotify.nix
|
||||
../../services/monitoring/blackbox.nix
|
||||
../../services/monitoring/exportarr.nix
|
||||
../../services/monitoring/pve.nix
|
||||
];
|
||||
}
|
||||
@@ -11,8 +11,6 @@
|
||||
../../common/vm
|
||||
];
|
||||
|
||||
homelab.host.role = "messaging";
|
||||
|
||||
nixpkgs.config.allowUnfree = true;
|
||||
# Use the systemd-boot EFI boot loader.
|
||||
boot.loader.grub = {
|
||||
@@ -44,7 +42,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -1,36 +1,33 @@
{
config,
lib,
pkgs,
...
}:

{
imports = [
../template2/hardware-configuration.nix
./hardware-configuration.nix

../../system
../../common/vm
];

homelab.host = {
tier = "prod";
role = "build-host";
homelab.dns.cnames = [ "nix-cache" "actions1" ];

homelab.host.role = "build-host";

fileSystems."/nix" = {
device = "/dev/disk/by-label/nixcache";
fsType = "xfs";
};
nixpkgs.config.allowUnfree = true;
# Use the systemd-boot EFI boot loader.
boot.loader.grub = {
enable = true;
device = "/dev/sda";
configurationLimit = 3;
};

homelab.dns.cnames = [ "nix-cache" ];

# Enable Vault integration
vault.enable = true;

# Enable remote deployment via NATS
homelab.deploy.enable = true;

nixpkgs.config.allowUnfree = true;
boot.loader.grub.enable = true;
boot.loader.grub.device = "/dev/vda";

networking.hostName = "nix-cache02";
networking.hostName = "nix-cache01";
networking.domain = "home.2rjus.net";
networking.useNetworkd = true;
networking.useDHCP = false;
@@ -44,7 +41,7 @@
systemd.network.networks."ens18" = {
matchConfig.Name = "ens18";
address = [
"10.69.13.25/24"
"10.69.13.15/24"
];
routes = [
{ Gateway = "10.69.13.1"; }
@@ -53,6 +50,12 @@
};
time.timeZone = "Europe/Oslo";

nix.settings.experimental-features = [
"nix-command"
"flakes"
];
vault.enable = true;
homelab.deploy.enable = true;

nix.settings.tarball-ttl = 0;
environment.systemPackages = with pkgs; [
@@ -61,11 +64,13 @@
git
];

services.qemuGuest.enable = true;

# Open ports in the firewall.
# networking.firewall.allowedTCPPorts = [ ... ];
# networking.firewall.allowedUDPPorts = [ ... ];
# Or disable the firewall altogether.
networking.firewall.enable = false;

system.stateVersion = "25.11"; # Did you read the comment?
}
system.stateVersion = "24.05"; # Did you read the comment?
}
@@ -1,10 +1,8 @@
{ ... }: {
{ ... }:
{
imports = [
./configuration.nix
./builder.nix
./scheduler.nix
./actions-runner.nix
../../services/nix-cache
../../services/actions-runner
];
}
}
42
hosts/nix-cache01/hardware-configuration.nix
Normal file
@@ -0,0 +1,42 @@
|
||||
{
|
||||
config,
|
||||
lib,
|
||||
pkgs,
|
||||
modulesPath,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
imports = [
|
||||
(modulesPath + "/profiles/qemu-guest.nix")
|
||||
];
|
||||
boot.initrd.availableKernelModules = [
|
||||
"ata_piix"
|
||||
"uhci_hcd"
|
||||
"virtio_pci"
|
||||
"virtio_scsi"
|
||||
"sd_mod"
|
||||
"sr_mod"
|
||||
];
|
||||
boot.initrd.kernelModules = [ "dm-snapshot" ];
|
||||
boot.kernelModules = [
|
||||
"ptp_kvm"
|
||||
];
|
||||
boot.extraModulePackages = [ ];
|
||||
|
||||
fileSystems."/" = {
|
||||
device = "/dev/disk/by-label/root";
|
||||
fsType = "xfs";
|
||||
};
|
||||
|
||||
swapDevices = [ { device = "/dev/disk/by-label/swap"; } ];
|
||||
|
||||
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
|
||||
# (the default) this is the recommended approach. When using systemd-networkd it's
|
||||
# still possible to use this option, but it's recommended to use it in conjunction
|
||||
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
|
||||
networking.useDHCP = lib.mkDefault true;
|
||||
# networking.interfaces.ens18.useDHCP = lib.mkDefault true;
|
||||
|
||||
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
|
||||
}
|
||||
@@ -1,51 +0,0 @@
|
||||
{ config, pkgs, ... }:
|
||||
{
|
||||
# Fetch runner token from Vault
|
||||
vault.secrets.forgejo-runner-token = {
|
||||
secretPath = "hosts/nix-cache02/forgejo-runner-token";
|
||||
extractKey = "token";
|
||||
mode = "0444";
|
||||
services = [ "gitea-runner-actions1" ];
|
||||
};
|
||||
|
||||
# Override token source and runner capacity
|
||||
services.gitea-actions-runner.instances.actions1 = {
|
||||
tokenFile = "/run/secrets/forgejo-runner-token";
|
||||
settings.runner.capacity = 4;
|
||||
};
|
||||
|
||||
# Fetch native runner token from Vault
|
||||
vault.secrets.forgejo-native-runner-token = {
|
||||
secretPath = "hosts/nix-cache02/forgejo-native-runner-token";
|
||||
extractKey = "token";
|
||||
mode = "0444";
|
||||
services = [ "gitea-runner-actions-native" ];
|
||||
};
|
||||
|
||||
# Native nix runner instance (user-level, no containers)
|
||||
services.gitea-actions-runner.instances.actions-native = {
|
||||
enable = true;
|
||||
name = "${config.networking.hostName}-native";
|
||||
url = "https://code.t-juice.club";
|
||||
tokenFile = "/run/secrets/forgejo-native-runner-token";
|
||||
labels = [ "native-nix:host" ];
|
||||
hostPackages = with pkgs; [
|
||||
bash
|
||||
coreutils
|
||||
curl
|
||||
gawk
|
||||
git
|
||||
gnused
|
||||
nodejs
|
||||
wget
|
||||
nix
|
||||
];
|
||||
settings = {
|
||||
runner.capacity = 4;
|
||||
cache = {
|
||||
enabled = true;
|
||||
dir = "/var/lib/gitea-runner/actions-native/cache";
|
||||
};
|
||||
};
|
||||
};
|
||||
}
|
||||
@@ -1,45 +0,0 @@
|
||||
{ config, ... }:
|
||||
{
|
||||
# Fetch builder NKey from Vault
|
||||
vault.secrets.builder-nkey = {
|
||||
secretPath = "shared/homelab-deploy/builder-nkey";
|
||||
extractKey = "nkey";
|
||||
outputDir = "/run/secrets/builder-nkey";
|
||||
services = [ "homelab-deploy-builder" ];
|
||||
};
|
||||
|
||||
# Configure the builder service
|
||||
services.homelab-deploy.builder = {
|
||||
enable = true;
|
||||
natsUrl = "nats://nats1.home.2rjus.net:4222";
|
||||
nkeyFile = "/run/secrets/builder-nkey";
|
||||
|
||||
settings.repos = {
|
||||
nixos-servers = {
|
||||
url = "git+https://code.t-juice.club/torjus/nixos-servers.git";
|
||||
defaultBranch = "master";
|
||||
};
|
||||
nixos = {
|
||||
url = "git+https://code.t-juice.club/torjus/nixos.git";
|
||||
defaultBranch = "master";
|
||||
};
|
||||
};
|
||||
|
||||
timeout = 14400;
|
||||
metrics.enable = true;
|
||||
};
|
||||
|
||||
# Expose builder metrics for Prometheus scraping
|
||||
homelab.monitoring.scrapeTargets = [
|
||||
{
|
||||
job_name = "homelab-deploy-builder";
|
||||
port = 9973;
|
||||
}
|
||||
];
|
||||
|
||||
# Ensure builder starts after vault secret is available
|
||||
systemd.services.homelab-deploy-builder = {
|
||||
after = [ "vault-secret-builder-nkey.service" ];
|
||||
requires = [ "vault-secret-builder-nkey.service" ];
|
||||
};
|
||||
}
|
||||
@@ -1,61 +0,0 @@
|
||||
{ config, pkgs, lib, inputs, ... }:
|
||||
let
|
||||
homelab-deploy = inputs.homelab-deploy.packages.${pkgs.system}.default;
|
||||
|
||||
scheduledBuildScript = pkgs.writeShellApplication {
|
||||
name = "scheduled-build";
|
||||
runtimeInputs = [ homelab-deploy ];
|
||||
text = ''
|
||||
NATS_URL="nats://nats1.home.2rjus.net:4222"
|
||||
NKEY_FILE="/run/secrets/scheduler-nkey"
|
||||
|
||||
echo "Starting scheduled builds at $(date)"
|
||||
|
||||
# Build all nixos-servers hosts
|
||||
homelab-deploy build \
|
||||
--nats-url "$NATS_URL" \
|
||||
--nkey-file "$NKEY_FILE" \
|
||||
nixos-servers --all
|
||||
|
||||
# Build all nixos (gunter) hosts
|
||||
homelab-deploy build \
|
||||
--nats-url "$NATS_URL" \
|
||||
--nkey-file "$NKEY_FILE" \
|
||||
nixos --all
|
||||
|
||||
echo "Scheduled builds completed at $(date)"
|
||||
'';
|
||||
};
|
||||
in
|
||||
{
|
||||
# Fetch scheduler NKey from Vault
|
||||
vault.secrets.scheduler-nkey = {
|
||||
secretPath = "shared/homelab-deploy/scheduler-nkey";
|
||||
extractKey = "nkey";
|
||||
outputDir = "/run/secrets/scheduler-nkey";
|
||||
services = [ "scheduled-build" ];
|
||||
};
|
||||
|
||||
# Timer: every 2 hours
|
||||
systemd.timers.scheduled-build = {
|
||||
description = "Trigger scheduled Nix builds";
|
||||
wantedBy = [ "timers.target" ];
|
||||
timerConfig = {
|
||||
OnCalendar = "*-*-* 00/2:00:00"; # Every 2 hours at :00
|
||||
Persistent = true; # Run missed builds on boot
|
||||
RandomizedDelaySec = "5m"; # Slight jitter
|
||||
};
|
||||
};
|
||||
|
||||
# Service: oneshot that triggers builds
|
||||
systemd.services.scheduled-build = {
|
||||
description = "Trigger builds for all hosts via NATS";
|
||||
after = [ "network-online.target" "vault-secret-scheduler-nkey.service" ];
|
||||
requires = [ "vault-secret-scheduler-nkey.service" ];
|
||||
wants = [ "network-online.target" ];
|
||||
serviceConfig = {
|
||||
Type = "oneshot";
|
||||
ExecStart = lib.getExe scheduledBuildScript;
|
||||
};
|
||||
};
|
||||
}
|
||||
@@ -1,83 +0,0 @@
|
||||
{
|
||||
lib,
|
||||
pkgs,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
services.openssh = {
|
||||
enable = true;
|
||||
settings = {
|
||||
PermitRootLogin = lib.mkForce "no";
|
||||
PasswordAuthentication = false;
|
||||
};
|
||||
};
|
||||
|
||||
users.users.nixos = {
|
||||
isNormalUser = true;
|
||||
extraGroups = [ "wheel" ];
|
||||
shell = pkgs.zsh;
|
||||
openssh.authorizedKeys.keys = [
|
||||
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwfb2jpKrBnCw28aevnH8HbE5YbcMXpdaVv2KmueDu6 torjus@gunter"
|
||||
];
|
||||
};
|
||||
security.sudo.wheelNeedsPassword = false;
|
||||
programs.zsh.enable = true;
|
||||
|
||||
homelab.dns.enable = false;
|
||||
homelab.monitoring.enable = false;
|
||||
homelab.host.labels.ansible = "false";
|
||||
|
||||
fileSystems."/" = {
|
||||
device = "/dev/disk/by-label/nixos";
|
||||
fsType = "ext4";
|
||||
autoResize = true;
|
||||
};
|
||||
|
||||
fileSystems."/var/lib/forgejo/data/packages" = {
|
||||
device = "/dev/disk/by-uuid/25a84568-b36a-47b3-a6d0-b959209cfdaf";
|
||||
fsType = "ext4";
|
||||
};
|
||||
|
||||
boot.loader.grub.enable = true;
|
||||
boot.loader.grub.device = "/dev/vda";
|
||||
networking.hostName = "nrec-nixos01";
|
||||
networking.useNetworkd = true;
|
||||
networking.useDHCP = false;
|
||||
services.resolved.enable = true;
|
||||
|
||||
systemd.network.enable = true;
|
||||
systemd.network.networks."ens3" = {
|
||||
matchConfig.Name = "ens3";
|
||||
networkConfig.DHCP = "ipv4";
|
||||
linkConfig.RequiredForOnline = "routable";
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
networking.firewall.enable = true;
|
||||
networking.firewall.allowedTCPPorts = [
|
||||
22
|
||||
80
|
||||
443
|
||||
];
|
||||
|
||||
nix.settings.substituters = [
|
||||
"https://cache.nixos.org"
|
||||
];
|
||||
nix.settings.trusted-public-keys = [
|
||||
"cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
|
||||
];
|
||||
|
||||
services.caddy = {
|
||||
enable = true;
|
||||
virtualHosts."code.t-juice.club" = {
|
||||
extraConfig = ''
|
||||
reverse_proxy 127.0.0.1:3000
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
zramSwap.enable = true;
|
||||
|
||||
system.stateVersion = "25.11";
|
||||
}
|
||||
@@ -1,9 +0,0 @@
|
||||
{ modulesPath, ... }:
|
||||
{
|
||||
imports = [
|
||||
./configuration.nix
|
||||
../../system/packages.nix
|
||||
../../services/forgejo
|
||||
(modulesPath + "/profiles/qemu-guest.nix")
|
||||
];
|
||||
}
|
||||
@@ -1,85 +0,0 @@
|
||||
{ lib, pkgs, ... }:
|
||||
|
||||
{
|
||||
services.openssh = {
|
||||
enable = true;
|
||||
settings = {
|
||||
PermitRootLogin = lib.mkForce "no";
|
||||
PasswordAuthentication = false;
|
||||
};
|
||||
};
|
||||
|
||||
users.users.nixos = {
|
||||
isNormalUser = true;
|
||||
extraGroups = [ "wheel" ];
|
||||
shell = pkgs.zsh;
|
||||
openssh.authorizedKeys.keys = [
|
||||
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwfb2jpKrBnCw28aevnH8HbE5YbcMXpdaVv2KmueDu6 torjus@gunter"
|
||||
];
|
||||
};
|
||||
security.sudo.wheelNeedsPassword = false;
|
||||
programs.zsh.enable = true;
|
||||
|
||||
homelab.dns.enable = false;
|
||||
homelab.monitoring.enable = false;
|
||||
homelab.host.labels.ansible = "false";
|
||||
|
||||
fileSystems."/" = {
|
||||
device = "/dev/disk/by-label/nixos";
|
||||
fsType = "ext4";
|
||||
autoResize = true;
|
||||
};
|
||||
|
||||
boot.loader.grub.enable = true;
|
||||
boot.loader.grub.device = "/dev/vda";
|
||||
networking.hostName = "nrec-nixos02";
|
||||
networking.useNetworkd = true;
|
||||
networking.useDHCP = false;
|
||||
services.resolved.enable = true;
|
||||
|
||||
systemd.network.enable = true;
|
||||
systemd.network.networks."ens3" = {
|
||||
matchConfig.Name = "ens3";
|
||||
networkConfig.DHCP = "ipv4";
|
||||
linkConfig.RequiredForOnline = "routable";
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
networking.firewall.enable = true;
|
||||
networking.firewall.allowedTCPPorts = [
|
||||
22
|
||||
80
|
||||
443
|
||||
];
|
||||
|
||||
nix.settings.substituters = [
|
||||
"https://cache.nixos.org"
|
||||
];
|
||||
nix.settings.trusted-public-keys = [
|
||||
"cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
|
||||
];
|
||||
|
||||
services.pocket-id = {
|
||||
enable = true;
|
||||
settings = {
|
||||
APP_URL = "https://oidc.t-juice.club";
|
||||
TRUST_PROXY = true;
|
||||
ANALYTICS_DISABLED = true;
|
||||
VERSION_CHECK_DISABLED = true;
|
||||
HOST = "127.0.0.1";
|
||||
};
|
||||
};
|
||||
|
||||
services.caddy = {
|
||||
enable = true;
|
||||
virtualHosts."oidc.t-juice.club" = {
|
||||
extraConfig = ''
|
||||
reverse_proxy 127.0.0.1:1411
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
zramSwap.enable = true;
|
||||
|
||||
system.stateVersion = "25.11";
|
||||
}
|
||||
@@ -1,9 +0,0 @@
|
||||
{ modulesPath, ... }:
|
||||
{
|
||||
imports = [
|
||||
./configuration.nix
|
||||
../../system/packages.nix
|
||||
../../services/actions-runner
|
||||
(modulesPath + "/profiles/qemu-guest.nix")
|
||||
];
|
||||
}
|
||||
@@ -58,7 +58,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -58,7 +58,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -1,72 +0,0 @@
|
||||
{
|
||||
lib,
|
||||
pkgs,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
services.openssh = {
|
||||
enable = true;
|
||||
settings = {
|
||||
PermitRootLogin = lib.mkForce "no";
|
||||
PasswordAuthentication = false;
|
||||
};
|
||||
};
|
||||
|
||||
users.users.nixos = {
|
||||
isNormalUser = true;
|
||||
extraGroups = [ "wheel" ];
|
||||
shell = pkgs.zsh;
|
||||
openssh.authorizedKeys.keys = [
|
||||
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwfb2jpKrBnCw28aevnH8HbE5YbcMXpdaVv2KmueDu6 torjus@gunter"
|
||||
];
|
||||
};
|
||||
security.sudo.wheelNeedsPassword = false;
|
||||
programs.zsh.enable = true;
|
||||
|
||||
homelab.dns.enable = false;
|
||||
homelab.monitoring.enable = false;
|
||||
homelab.host.labels.ansible = "false";
|
||||
|
||||
# Minimal fileSystems for evaluation; openstack-config.nix overrides this at image build time
|
||||
fileSystems."/" = {
|
||||
device = lib.mkDefault "/dev/vda1";
|
||||
fsType = lib.mkDefault "ext4";
|
||||
};
|
||||
|
||||
boot.loader.grub.enable = true;
|
||||
boot.loader.grub.device = "/dev/vda";
|
||||
networking.hostName = "nixos-openstack-template";
|
||||
networking.useNetworkd = true;
|
||||
networking.useDHCP = false;
|
||||
services.resolved.enable = true;
|
||||
|
||||
systemd.network.enable = true;
|
||||
systemd.network.networks."ens3" = {
|
||||
matchConfig.Name = "ens3";
|
||||
networkConfig.DHCP = "ipv4";
|
||||
linkConfig.RequiredForOnline = "routable";
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
networking.firewall.enable = true;
|
||||
networking.firewall.allowedTCPPorts = [ 22 ];
|
||||
|
||||
nix.settings.substituters = [
|
||||
"https://cache.nixos.org"
|
||||
];
|
||||
nix.settings.trusted-public-keys = [
|
||||
"cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
|
||||
];
|
||||
|
||||
environment.systemPackages = with pkgs; [
|
||||
age
|
||||
vim
|
||||
wget
|
||||
git
|
||||
];
|
||||
|
||||
zramSwap.enable = true;
|
||||
|
||||
system.stateVersion = "25.11";
|
||||
}
|
||||
@@ -1,7 +0,0 @@
|
||||
{ ... }:
|
||||
{
|
||||
imports = [
|
||||
./configuration.nix
|
||||
../../system/packages.nix
|
||||
];
|
||||
}
|
||||
@@ -1,54 +0,0 @@
|
||||
{
|
||||
config,
|
||||
lib,
|
||||
pkgs,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
imports = [
|
||||
./hardware-configuration.nix
|
||||
../../system
|
||||
];
|
||||
|
||||
boot.loader.systemd-boot.enable = true;
|
||||
boot.loader.systemd-boot.memtest86.enable = true;
|
||||
boot.loader.efi.canTouchEfiVariables = true;
|
||||
|
||||
networking.hostName = "pn01";
|
||||
networking.domain = "home.2rjus.net";
|
||||
networking.useNetworkd = true;
|
||||
networking.useDHCP = false;
|
||||
networking.firewall.enable = false;
|
||||
services.resolved.enable = true;
|
||||
networking.nameservers = [
|
||||
"10.69.13.5"
|
||||
"10.69.13.6"
|
||||
];
|
||||
|
||||
systemd.network.enable = true;
|
||||
systemd.network.networks."enp2s0" = {
|
||||
matchConfig.Name = "enp2s0";
|
||||
address = [
|
||||
"10.69.12.60/24"
|
||||
];
|
||||
routes = [
|
||||
{ Gateway = "10.69.12.1"; }
|
||||
];
|
||||
linkConfig.RequiredForOnline = "routable";
|
||||
};
|
||||
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
homelab.host = {
|
||||
tier = "test";
|
||||
priority = "low";
|
||||
role = "compute";
|
||||
};
|
||||
|
||||
vault.enable = true;
|
||||
|
||||
nixpkgs.config.allowUnfree = true;
|
||||
|
||||
system.stateVersion = "25.11";
|
||||
}
|
||||
@@ -1,5 +0,0 @@
|
||||
{ ... }: {
|
||||
imports = [
|
||||
./configuration.nix
|
||||
];
|
||||
}
|
||||
@@ -1,33 +0,0 @@
|
||||
# Do not modify this file! It was generated by ‘nixos-generate-config’
|
||||
# and may be overwritten by future invocations. Please make changes
|
||||
# to /etc/nixos/configuration.nix instead.
|
||||
{ config, lib, pkgs, modulesPath, ... }:
|
||||
|
||||
{
|
||||
imports =
|
||||
[ (modulesPath + "/installer/scan/not-detected.nix")
|
||||
];
|
||||
|
||||
boot.initrd.availableKernelModules = [ "xhci_pci" "nvme" "ahci" "usb_storage" "usbhid" "sd_mod" "rtsx_usb_sdmmc" ];
|
||||
boot.initrd.kernelModules = [ ];
|
||||
boot.kernelModules = [ "kvm-amd" ];
|
||||
boot.extraModulePackages = [ ];
|
||||
|
||||
fileSystems."/" =
|
||||
{ device = "/dev/disk/by-uuid/9444cf54-80e0-4315-adca-8ddd5037217c";
|
||||
fsType = "ext4";
|
||||
};
|
||||
|
||||
fileSystems."/boot" =
|
||||
{ device = "/dev/disk/by-uuid/D897-146F";
|
||||
fsType = "vfat";
|
||||
options = [ "fmask=0022" "dmask=0022" ];
|
||||
};
|
||||
|
||||
swapDevices =
|
||||
[ { device = "/dev/disk/by-uuid/6c1e775f-342e-463a-a7f9-d7ce6593a482"; }
|
||||
];
|
||||
|
||||
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
|
||||
hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
|
||||
}
|
||||
@@ -1,61 +0,0 @@
|
||||
{
|
||||
config,
|
||||
lib,
|
||||
pkgs,
|
||||
...
|
||||
}:
|
||||
|
||||
{
|
||||
imports = [
|
||||
./hardware-configuration.nix
|
||||
../../system
|
||||
];
|
||||
|
||||
boot.loader.systemd-boot.enable = true;
|
||||
boot.loader.systemd-boot.memtest86.enable = true;
|
||||
boot.loader.efi.canTouchEfiVariables = true;
|
||||
boot.blacklistedKernelModules = [ "amdgpu" ];
|
||||
boot.kernelParams = [ "panic=10" "nmi_watchdog=1" "processor.max_cstate=1" "sched_ext.enabled=0" ];
|
||||
boot.kernel.sysctl."kernel.softlockup_panic" = 1;
|
||||
boot.kernel.sysctl."kernel.hardlockup_panic" = 1;
|
||||
|
||||
hardware.rasdaemon.enable = true;
|
||||
hardware.rasdaemon.record = true;
|
||||
|
||||
networking.hostName = "pn02";
|
||||
networking.domain = "home.2rjus.net";
|
||||
networking.useNetworkd = true;
|
||||
networking.useDHCP = false;
|
||||
networking.firewall.enable = false;
|
||||
services.resolved.enable = true;
|
||||
networking.nameservers = [
|
||||
"10.69.13.5"
|
||||
"10.69.13.6"
|
||||
];
|
||||
|
||||
systemd.network.enable = true;
|
||||
systemd.network.networks."enp2s0" = {
|
||||
matchConfig.Name = "enp2s0";
|
||||
address = [
|
||||
"10.69.12.61/24"
|
||||
];
|
||||
routes = [
|
||||
{ Gateway = "10.69.12.1"; }
|
||||
];
|
||||
linkConfig.RequiredForOnline = "routable";
|
||||
};
|
||||
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
homelab.host = {
|
||||
tier = "test";
|
||||
priority = "low";
|
||||
role = "compute";
|
||||
};
|
||||
|
||||
vault.enable = true;
|
||||
|
||||
nixpkgs.config.allowUnfree = true;
|
||||
|
||||
system.stateVersion = "25.11";
|
||||
}
|
||||
@@ -1,5 +0,0 @@
|
||||
{ ... }: {
|
||||
imports = [
|
||||
./configuration.nix
|
||||
];
|
||||
}
|
||||
@@ -1,33 +0,0 @@
|
||||
# Do not modify this file! It was generated by ‘nixos-generate-config’
|
||||
# and may be overwritten by future invocations. Please make changes
|
||||
# to /etc/nixos/configuration.nix instead.
|
||||
{ config, lib, pkgs, modulesPath, ... }:
|
||||
|
||||
{
|
||||
imports =
|
||||
[ (modulesPath + "/installer/scan/not-detected.nix")
|
||||
];
|
||||
|
||||
boot.initrd.availableKernelModules = [ "xhci_pci" "ahci" "usb_storage" "usbhid" "sd_mod" "rtsx_usb_sdmmc" ];
|
||||
boot.initrd.kernelModules = [ ];
|
||||
boot.kernelModules = [ "kvm-amd" ];
|
||||
boot.extraModulePackages = [ ];
|
||||
|
||||
fileSystems."/" =
|
||||
{ device = "/dev/disk/by-uuid/1d28b629-51ae-4f0e-b440-9388c2e48413";
|
||||
fsType = "ext4";
|
||||
};
|
||||
|
||||
fileSystems."/boot" =
|
||||
{ device = "/dev/disk/by-uuid/A5A7-C7B2";
|
||||
fsType = "vfat";
|
||||
options = [ "fmask=0022" "dmask=0022" ];
|
||||
};
|
||||
|
||||
swapDevices =
|
||||
[ { device = "/dev/disk/by-uuid/f2570894-0922-4746-84c7-2b2fe7601ea1"; }
|
||||
];
|
||||
|
||||
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
|
||||
hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
|
||||
}
|
||||
@@ -6,8 +6,7 @@ let
text = ''
set -euo pipefail

LOKI_URL="https://loki.home.2rjus.net/loki/api/v1/push"
LOKI_AUTH_FILE="/run/secrets/promtail-loki-auth"
LOKI_URL="http://monitoring01.home.2rjus.net:3100/loki/api/v1/push"

# Send a log entry to Loki with bootstrap status
# Usage: log_to_loki <stage> <message>
@@ -29,7 +28,7 @@ let
streams: [{
stream: {
job: "bootstrap",
hostname: $host,
host: $host,
stage: $stage,
branch: $branch
},
@@ -37,14 +36,8 @@ let
}]
}')

local auth_args=()
if [[ -f "$LOKI_AUTH_FILE" ]]; then
auth_args=(-u "promtail:$(cat "$LOKI_AUTH_FILE")")
fi

curl -s --connect-timeout 2 --max-time 5 \
-X POST \
"''${auth_args[@]}" \
-H "Content-Type: application/json" \
-d "$payload" \
"$LOKI_URL" >/dev/null 2>&1 || true
@@ -70,10 +63,10 @@ let
echo "Waiting for network connectivity..."

# Verify we can reach the git server via HTTPS (doesn't respond to ping)
if ! curl -s --connect-timeout 5 --max-time 10 https://code.t-juice.club >/dev/null 2>&1; then
echo "ERROR: Cannot reach code.t-juice.club via HTTPS"
if ! curl -s --connect-timeout 5 --max-time 10 https://git.t-juice.club >/dev/null 2>&1; then
echo "ERROR: Cannot reach git.t-juice.club via HTTPS"
echo "Check network configuration and DNS settings"
log_to_loki "failed" "Network check failed - cannot reach code.t-juice.club"
log_to_loki "failed" "Network check failed - cannot reach git.t-juice.club"
exit 1
fi

@@ -134,7 +127,7 @@ let
log_to_loki "building" "Starting nixos-rebuild boot"

# Build and activate the host-specific configuration
FLAKE_URL="git+https://code.t-juice.club/torjus/nixos-servers.git?ref=$BRANCH#''${HOSTNAME}"
FLAKE_URL="git+https://git.t-juice.club/torjus/nixos-servers.git?ref=$BRANCH#''${HOSTNAME}"

if nixos-rebuild boot --flake "$FLAKE_URL"; then
echo "Successfully built configuration for $HOSTNAME"
@@ -35,7 +35,6 @@
|
||||
homelab.host = {
|
||||
tier = "test";
|
||||
priority = "low";
|
||||
labels.ansible = "false"; # Exclude from Ansible inventory
|
||||
};
|
||||
|
||||
boot.loader.grub.enable = true;
|
||||
@@ -54,7 +53,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
nix.settings.substituters = [
|
||||
"https://nix-cache.home.2rjus.net"
|
||||
|
||||
@@ -14,9 +14,9 @@
|
||||
../../common/ssh-audit.nix
|
||||
];
|
||||
|
||||
# Host metadata (adjust as needed)
|
||||
homelab.host = {
|
||||
tier = "test";
|
||||
role = "test";
|
||||
tier = "test"; # Start in test tier, move to prod after validation
|
||||
};
|
||||
|
||||
# Enable Vault integration
|
||||
@@ -55,7 +55,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -14,9 +14,9 @@
|
||||
../../common/ssh-audit.nix
|
||||
];
|
||||
|
||||
# Host metadata (adjust as needed)
|
||||
homelab.host = {
|
||||
tier = "test";
|
||||
role = "test";
|
||||
tier = "test"; # Start in test tier, move to prod after validation
|
||||
};
|
||||
|
||||
# Enable Vault integration
|
||||
@@ -55,7 +55,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -14,9 +14,9 @@
|
||||
../../common/ssh-audit.nix
|
||||
];
|
||||
|
||||
# Host metadata (adjust as needed)
|
||||
homelab.host = {
|
||||
tier = "test";
|
||||
role = "test";
|
||||
tier = "test"; # Start in test tier, move to prod after validation
|
||||
};
|
||||
|
||||
# Enable Vault integration
|
||||
@@ -55,7 +55,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -45,7 +45,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -58,9 +58,10 @@ let
|
||||
};
|
||||
|
||||
# Build effective labels for a host
|
||||
# Always includes hostname and tier; only includes priority/role if non-default
|
||||
# Always includes hostname; only includes tier/priority/role if non-default
|
||||
buildEffectiveLabels = host:
|
||||
{ hostname = host.hostname; tier = host.tier; }
|
||||
{ hostname = host.hostname; }
|
||||
// (lib.optionalAttrs (host.tier != "prod") { tier = host.tier; })
|
||||
// (lib.optionalAttrs (host.priority != "high") { priority = host.priority; })
|
||||
// (lib.optionalAttrs (host.role != null) { role = host.role; })
|
||||
// host.labels;
|
||||
@@ -94,15 +95,7 @@ let
|
||||
})
|
||||
(externalTargets.nodeExporter or [ ]);
|
||||
|
||||
# Node-exporter-only external targets (no systemd-exporter)
|
||||
externalOnlyEntries = map
|
||||
(target: {
|
||||
inherit target;
|
||||
labels = { hostname = extractHostnameFromTarget target; };
|
||||
})
|
||||
(externalTargets.nodeExporterOnly or [ ]);
|
||||
|
||||
allEntries = flakeEntries ++ externalEntries ++ externalOnlyEntries;
|
||||
allEntries = flakeEntries ++ externalEntries;
|
||||
|
||||
# Group entries by their label set for efficient static_configs
|
||||
# Convert labels attrset to a string key for grouping
|
||||
@@ -211,18 +204,7 @@ let
|
||||
in
|
||||
flakeScrapeConfigs ++ externalScrapeConfigs;
|
||||
|
||||
# Generate systemd-exporter targets (excludes nodeExporterOnly hosts)
|
||||
generateSystemdExporterTargets = self: externalTargets:
|
||||
let
|
||||
nodeTargets = generateNodeExporterTargets self (externalTargets // { nodeExporterOnly = [ ]; });
|
||||
in
|
||||
map
|
||||
(cfg: cfg // {
|
||||
targets = map (t: builtins.replaceStrings [ ":9100" ] [ ":9558" ] t) cfg.targets;
|
||||
})
|
||||
nodeTargets;
|
||||
|
||||
in
|
||||
{
|
||||
inherit extractHostMonitoring generateNodeExporterTargets generateScrapeConfigs generateSystemdExporterTargets;
|
||||
inherit extractHostMonitoring generateNodeExporterTargets generateScrapeConfigs;
|
||||
}
|
||||
|
||||
@@ -15,13 +15,13 @@
- name: Build NixOS image
ansible.builtin.command:
cmd: "nixos-rebuild build-image --image-variant proxmox --flake .#template2"
chdir: "{{ playbook_dir }}/../.."
chdir: "{{ playbook_dir }}/.."
register: build_result
changed_when: true

- name: Find built image file
ansible.builtin.find:
paths: "{{ playbook_dir}}/../../result"
paths: "{{ playbook_dir}}/../result"
patterns: "*.vma.zst"
recurse: true
register: image_files
@@ -105,7 +105,7 @@
gather_facts: false

vars:
terraform_dir: "{{ playbook_dir }}/../../terraform"
terraform_dir: "{{ playbook_dir }}/../terraform"

tasks:
- name: Get image filename from earlier play
5
playbooks/inventory.ini
Normal file
@@ -0,0 +1,5 @@
[proxmox]
pve1.home.2rjus.net

[proxmox:vars]
ansible_user=root
@@ -1,60 +1,54 @@
|
||||
---
|
||||
# Provision OpenBao AppRole credentials to a host
|
||||
#
|
||||
# Usage: ansible-playbook ansible/playbooks/provision-approle.yml -l <hostname>
|
||||
# Provision OpenBao AppRole credentials to an existing host
|
||||
# Usage: nix develop -c ansible-playbook playbooks/provision-approle.yml -e hostname=ha1
|
||||
# Requires: BAO_ADDR and BAO_TOKEN environment variables set
|
||||
#
|
||||
# IMPORTANT: This playbook must target exactly one host to prevent
|
||||
# accidentally regenerating credentials for multiple hosts.
|
||||
|
||||
- name: Validate single host target
|
||||
hosts: all
|
||||
gather_facts: false
|
||||
|
||||
tasks:
|
||||
- name: Fail if targeting multiple hosts
|
||||
ansible.builtin.fail:
|
||||
msg: |
|
||||
This playbook must target exactly one host.
|
||||
Use: ansible-playbook provision-approle.yml -l <hostname>
|
||||
|
||||
Targeting multiple hosts would regenerate credentials for all of them,
|
||||
potentially breaking existing services.
|
||||
when: ansible_play_hosts | length != 1
|
||||
run_once: true
|
||||
|
||||
- name: Provision AppRole credentials
|
||||
hosts: all
|
||||
- name: Fetch AppRole credentials from OpenBao
|
||||
hosts: localhost
|
||||
connection: local
|
||||
gather_facts: false
|
||||
|
||||
vars:
|
||||
target_hostname: "{{ inventory_hostname.split('.')[0] }}"
|
||||
vault_addr: "{{ lookup('env', 'BAO_ADDR') | default('https://vault01.home.2rjus.net:8200', true) }}"
|
||||
domain: "home.2rjus.net"
|
||||
|
||||
tasks:
|
||||
- name: Display target host
|
||||
ansible.builtin.debug:
|
||||
msg: "Provisioning AppRole credentials for: {{ target_hostname }}"
|
||||
- name: Validate hostname is provided
|
||||
ansible.builtin.fail:
|
||||
msg: "hostname variable is required. Use: -e hostname=<name>"
|
||||
when: hostname is not defined
|
||||
|
||||
- name: Get role-id for host
|
||||
ansible.builtin.command:
|
||||
cmd: "bao read -field=role_id auth/approle/role/{{ target_hostname }}/role-id"
|
||||
cmd: "bao read -field=role_id auth/approle/role/{{ hostname }}/role-id"
|
||||
environment:
|
||||
BAO_ADDR: "{{ vault_addr }}"
|
||||
BAO_SKIP_VERIFY: "1"
|
||||
register: role_id_result
|
||||
changed_when: false
|
||||
delegate_to: localhost
|
||||
|
||||
- name: Generate secret-id for host
|
||||
ansible.builtin.command:
|
||||
cmd: "bao write -field=secret_id -f auth/approle/role/{{ target_hostname }}/secret-id"
|
||||
cmd: "bao write -field=secret_id -f auth/approle/role/{{ hostname }}/secret-id"
|
||||
environment:
|
||||
BAO_ADDR: "{{ vault_addr }}"
|
||||
BAO_SKIP_VERIFY: "1"
|
||||
register: secret_id_result
|
||||
changed_when: true
|
||||
delegate_to: localhost
|
||||
|
||||
- name: Add target host to inventory
|
||||
ansible.builtin.add_host:
|
||||
name: "{{ hostname }}.{{ domain }}"
|
||||
groups: vault_target
|
||||
ansible_user: root
|
||||
vault_role_id: "{{ role_id_result.stdout }}"
|
||||
vault_secret_id: "{{ secret_id_result.stdout }}"
|
||||
|
||||
- name: Deploy AppRole credentials to host
|
||||
hosts: vault_target
|
||||
gather_facts: false
|
||||
|
||||
tasks:
|
||||
- name: Create AppRole directory
|
||||
ansible.builtin.file:
|
||||
path: /var/lib/vault/approle
|
||||
@@ -65,7 +59,7 @@
|
||||
|
||||
- name: Write role-id
|
||||
ansible.builtin.copy:
|
||||
content: "{{ role_id_result.stdout }}"
|
||||
content: "{{ vault_role_id }}"
|
||||
dest: /var/lib/vault/approle/role-id
|
||||
mode: "0600"
|
||||
owner: root
|
||||
@@ -73,7 +67,7 @@
|
||||
|
||||
- name: Write secret-id
|
||||
ansible.builtin.copy:
|
||||
content: "{{ secret_id_result.stdout }}"
|
||||
content: "{{ vault_secret_id }}"
|
||||
dest: /var/lib/vault/approle/secret-id
|
||||
mode: "0600"
|
||||
owner: root
|
||||
@@ -56,7 +56,10 @@
|
||||
};
|
||||
time.timeZone = "Europe/Oslo";
|
||||
|
||||
|
||||
nix.settings.experimental-features = [
|
||||
"nix-command"
|
||||
"flakes"
|
||||
];
|
||||
nix.settings.tarball-ttl = 0;
|
||||
environment.systemPackages = with pkgs; [
|
||||
vim
|
||||
|
||||
@@ -20,10 +20,10 @@ vault-fetch <secret-path> <output-directory> [cache-directory]

```bash
# Fetch Grafana admin secrets
vault-fetch hosts/ha1/mqtt-password /run/secrets/grafana /var/lib/vault/cache/grafana
vault-fetch hosts/monitoring01/grafana-admin /run/secrets/grafana /var/lib/vault/cache/grafana

# Use default cache location
vault-fetch hosts/ha1/mqtt-password /run/secrets/grafana
vault-fetch hosts/monitoring01/grafana-admin /run/secrets/grafana
```

## How It Works
@@ -53,13 +53,13 @@ If Vault is unreachable or authentication fails:
This tool is designed to be called from systemd service `ExecStartPre` hooks via the `vault.secrets` NixOS module:

```nix
vault.secrets.mqtt-password = {
secretPath = "hosts/ha1/mqtt-password";
vault.secrets.grafana-admin = {
secretPath = "hosts/monitoring01/grafana-admin";
};

# Service automatically gets secrets fetched before start
systemd.services.mosquitto.serviceConfig = {
EnvironmentFile = "/run/secrets/mqtt-password/password";
systemd.services.grafana.serviceConfig = {
EnvironmentFile = "/run/secrets/grafana-admin/password";
};
```

@@ -5,7 +5,7 @@ set -euo pipefail
#
# Usage: vault-fetch <secret-path> <output-directory> [cache-directory]
#
# Example: vault-fetch hosts/ha1/mqtt-password /run/secrets/grafana /var/lib/vault/cache/grafana
# Example: vault-fetch hosts/monitoring01/grafana-admin /run/secrets/grafana /var/lib/vault/cache/grafana
#
# This script:
# 1. Authenticates to Vault using AppRole credentials from /var/lib/vault/approle/
@@ -17,7 +17,7 @@ set -euo pipefail
# Parse arguments
if [ $# -lt 2 ]; then
echo "Usage: vault-fetch <secret-path> <output-directory> [cache-directory]" >&2
echo "Example: vault-fetch hosts/ha1/mqtt-password /run/secrets/grafana /var/lib/vault/cache/grafana" >&2
echo "Example: vault-fetch hosts/monitoring01/grafana /run/secrets/grafana /var/lib/vault/cache/grafana" >&2
exit 1
fi
@@ -1,37 +1,57 @@
|
||||
{ config, lib, pkgs, ... }:
|
||||
{ pkgs, config, ... }:
|
||||
{
|
||||
# Trust podman interfaces so containers can reach the runner's cache service.
|
||||
# "podman+" is a wildcard matching any interface starting with "podman".
|
||||
networking.firewall.trustedInterfaces = [ "podman+" ];
|
||||
vault.secrets.actions-token = {
|
||||
secretPath = "hosts/nix-cache01/actions-token";
|
||||
extractKey = "token";
|
||||
outputDir = "/run/secrets/actions-token-1";
|
||||
services = [ "gitea-runner-actions1" ];
|
||||
};
|
||||
|
||||
virtualisation.podman = {
|
||||
enable = true;
|
||||
dockerCompat = true;
|
||||
dockerSocket.enable = true;
|
||||
};
|
||||
|
||||
services.gitea-actions-runner = {
|
||||
package = pkgs.forgejo-runner;
|
||||
|
||||
instances.actions1 = {
|
||||
services.gitea-actions-runner.instances = {
|
||||
actions1 = {
|
||||
enable = true;
|
||||
name = config.networking.hostName;
|
||||
url = "https://code.t-juice.club";
|
||||
tokenFile = lib.mkDefault "/var/lib/forgejo-runner/token";
|
||||
labels = [
|
||||
"nix:docker://code.t-juice.club/torjus/runner-images/nix:latest"
|
||||
"node-bookworm:docker://node:lts-bookworm-slim"
|
||||
"alpine:docker://alpine:latest"
|
||||
"golang:docker://code.t-juice.club/torjus/runner-images/golang:latest"
|
||||
];
|
||||
tokenFile = "/run/secrets/actions-token-1";
|
||||
name = "actions1.home.2rjus.net";
|
||||
settings = {
|
||||
runner.capacity = lib.mkDefault 2;
|
||||
log = {
|
||||
level = "debug";
|
||||
};
|
||||
|
||||
runner = {
|
||||
file = ".runner";
|
||||
capacity = 4;
|
||||
timeout = "2h";
|
||||
shutdown_timeout = "10m";
|
||||
insecure = false;
|
||||
fetch_timeout = "10s";
|
||||
fetch_interval = "30s";
|
||||
};
|
||||
|
||||
cache = {
|
||||
enabled = true;
|
||||
dir = "/var/lib/gitea-runner/actions1/cache";
|
||||
dir = "/var/cache/gitea-actions1";
|
||||
};
|
||||
|
||||
container = {
|
||||
privileged = false;
|
||||
};
|
||||
container.privileged = false;
|
||||
};
|
||||
labels =
|
||||
builtins.map (n: "${n}:docker://gitea/runner-images:${n}") [
|
||||
"ubuntu-latest"
|
||||
"ubuntu-latest-slim"
|
||||
"ubuntu-latest-full"
|
||||
]
|
||||
++ [
|
||||
"homelab"
|
||||
];
|
||||
|
||||
url = "https://git.t-juice.club";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
||||
@@ -1,20 +0,0 @@
|
||||
{ pkgs, ... }:
|
||||
{
|
||||
services.forgejo = {
|
||||
package = pkgs.forgejo;
|
||||
enable = true;
|
||||
database.type = "sqlite3";
|
||||
lfs.enable = true;
|
||||
settings = {
|
||||
server = {
|
||||
DOMAIN = "code.t-juice.club";
|
||||
ROOT_URL = "https://code.t-juice.club/";
|
||||
HTTP_ADDR = "127.0.0.1";
|
||||
HTTP_PORT = 3000;
|
||||
};
|
||||
service.DISABLE_REGISTRATION = true;
|
||||
"service.explore".REQUIRE_SIGNIN_VIEW = true;
|
||||
session.COOKIE_SECURE = true;
|
||||
};
|
||||
};
|
||||
}
|
||||
@@ -1,64 +0,0 @@
|
||||
{ config, pkgs, ... }:
|
||||
{
|
||||
homelab.monitoring.scrapeTargets = [
|
||||
{
|
||||
job_name = "garage";
|
||||
port = 3903;
|
||||
metrics_path = "/metrics";
|
||||
}
|
||||
{
|
||||
job_name = "caddy";
|
||||
port = 9117;
|
||||
}
|
||||
];
|
||||
|
||||
vault.secrets.garage-env = {
|
||||
secretPath = "hosts/${config.networking.hostName}/garage";
|
||||
extractKey = "env";
|
||||
outputDir = "/run/secrets/garage-env";
|
||||
services = [ "garage" ];
|
||||
};
|
||||
|
||||
services.garage = {
|
||||
enable = true;
|
||||
package = pkgs.garage;
|
||||
environmentFile = "/run/secrets/garage-env";
|
||||
settings = {
|
||||
metadata_dir = "/var/lib/garage/meta";
|
||||
data_dir = "/var/lib/garage/data";
|
||||
replication_factor = 1;
|
||||
rpc_bind_addr = "[::]:3901";
|
||||
rpc_public_addr = "garage01.home.2rjus.net:3901";
|
||||
s3_api = {
|
||||
api_bind_addr = "[::]:3900";
|
||||
s3_region = "garage";
|
||||
root_domain = ".s3.home.2rjus.net";
|
||||
};
|
||||
admin = {
|
||||
api_bind_addr = "[::]:3903";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
services.caddy = {
|
||||
enable = true;
|
||||
package = pkgs.unstable.caddy;
|
||||
configFile = pkgs.writeText "Caddyfile" ''
|
||||
{
|
||||
acme_ca https://vault.home.2rjus.net:8200/v1/pki_int/acme/directory
|
||||
metrics
|
||||
}
|
||||
|
||||
s3.home.2rjus.net {
|
||||
reverse_proxy http://localhost:3900
|
||||
}
|
||||
|
||||
http://garage01.home.2rjus.net:9117 {
|
||||
handle /metrics {
|
||||
metrics
|
||||
}
|
||||
respond 404
|
||||
}
|
||||
'';
|
||||
};
|
||||
}
|
||||
@@ -1,492 +0,0 @@
|
||||
{
|
||||
"uid": "apiary-homelab",
|
||||
"title": "Apiary - Honeypot",
|
||||
"tags": ["apiary", "honeypot", "prometheus", "homelab"],
|
||||
"timezone": "browser",
|
||||
"schemaVersion": 39,
|
||||
"version": 1,
|
||||
"refresh": "1m",
|
||||
"time": {
|
||||
"from": "now-24h",
|
||||
"to": "now"
|
||||
},
|
||||
"templating": {
|
||||
"list": []
|
||||
},
|
||||
"panels": [
|
||||
{
|
||||
"id": 1,
|
||||
"title": "SSH Connections",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 6, "x": 0, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(oubliette_ssh_connections_total{job=\"apiary\"})",
|
||||
"legendFormat": "Total",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "blue", "value": null}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Total SSH connections across all outcomes"
|
||||
},
|
||||
{
|
||||
"id": 2,
|
||||
"title": "Active Sessions",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 6, "x": 6, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "oubliette_sessions_active{job=\"apiary\"}",
|
||||
"legendFormat": "Active",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 5},
|
||||
{"color": "red", "value": 20}
|
||||
]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Currently active honeypot sessions"
|
||||
},
|
||||
{
|
||||
"id": 3,
|
||||
"title": "Unique IPs",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 6, "x": 12, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "oubliette_storage_unique_ips{job=\"apiary\"}",
|
||||
"legendFormat": "IPs",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "purple", "value": null}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Total unique source IPs observed"
|
||||
},
|
||||
{
|
||||
"id": 4,
|
||||
"title": "Total Login Attempts",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 6, "x": 18, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "oubliette_storage_login_attempts_total{job=\"apiary\"}",
|
||||
"legendFormat": "Attempts",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "orange", "value": null}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Total login attempts stored"
|
||||
},
|
||||
{
|
||||
"id": 5,
|
||||
"title": "SSH Connections Over Time",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 4},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"interval": "60s",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(oubliette_ssh_connections_total{job=\"apiary\"}[$__rate_interval])",
|
||||
"legendFormat": "{{outcome}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "cps",
|
||||
"custom": {
|
||||
"drawStyle": "line",
|
||||
"lineInterpolation": "smooth",
|
||||
"fillOpacity": 20,
|
||||
"pointSize": 5,
|
||||
"showPoints": "auto",
|
||||
"stacking": {"mode": "none"}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "list", "placement": "bottom"},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "SSH connection rate by outcome"
|
||||
},
|
||||
{
|
||||
"id": 6,
|
||||
"title": "Auth Attempts Over Time",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 4},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"interval": "60s",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(oubliette_auth_attempts_total{job=\"apiary\"}[$__rate_interval])",
|
||||
"legendFormat": "{{reason}} - {{result}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "cps",
|
||||
"custom": {
|
||||
"drawStyle": "line",
|
||||
"lineInterpolation": "smooth",
|
||||
"fillOpacity": 20,
|
||||
"pointSize": 5,
|
||||
"showPoints": "auto",
|
||||
"stacking": {"mode": "none"}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "list", "placement": "bottom"},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "Authentication attempt rate by reason and result"
|
||||
},
|
||||
{
|
||||
"id": 7,
|
||||
"title": "Sessions by Shell",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 22},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"interval": "60s",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(oubliette_sessions_total{job=\"apiary\"}[$__rate_interval])",
|
||||
"legendFormat": "{{shell}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "cps",
|
||||
"custom": {
|
||||
"drawStyle": "line",
|
||||
"lineInterpolation": "smooth",
|
||||
"fillOpacity": 20,
|
||||
"pointSize": 5,
|
||||
"showPoints": "auto",
|
||||
"stacking": {"mode": "normal"}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "list", "placement": "bottom"},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "Session creation rate by shell type"
|
||||
},
|
||||
{
|
||||
"id": 8,
|
||||
"title": "Attempts by Country",
|
||||
"type": "geomap",
|
||||
"gridPos": {"h": 10, "w": 24, "x": 0, "y": 12},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "oubliette_auth_attempts_by_country_total{job=\"apiary\"}",
|
||||
"legendFormat": "{{country}}",
|
||||
"refId": "A",
|
||||
"instant": true,
|
||||
"format": "table"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 10},
|
||||
{"color": "orange", "value": 50},
|
||||
{"color": "red", "value": 200}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"view": {
|
||||
"id": "zero",
|
||||
"lat": 30,
|
||||
"lon": 10,
|
||||
"zoom": 2
|
||||
},
|
||||
"basemap": {
|
||||
"type": "default"
|
||||
},
|
||||
"layers": [
|
||||
{
|
||||
"type": "markers",
|
||||
"name": "Auth Attempts",
|
||||
"config": {
|
||||
"showLegend": true,
|
||||
"style": {
|
||||
"size": {
|
||||
"field": "Value",
|
||||
"min": 3,
|
||||
"max": 20
|
||||
},
|
||||
"color": {
|
||||
"field": "Value"
|
||||
},
|
||||
"symbol": {
|
||||
"mode": "fixed",
|
||||
"fixed": "img/icons/marker/circle.svg"
|
||||
}
|
||||
}
|
||||
},
|
||||
"location": {
|
||||
"mode": "lookup",
|
||||
"lookup": "country",
|
||||
"gazetteer": "public/gazetteer/countries.json"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"description": "Authentication attempts by country (geo lookup from country code)"
|
||||
},
|
||||
{
|
||||
"id": 9,
|
||||
"title": "Session Duration Distribution",
|
||||
"type": "heatmap",
|
||||
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 30},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"interval": "60s",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(oubliette_session_duration_seconds_bucket{job=\"apiary\"}[$__rate_interval])",
|
||||
"legendFormat": "{{le}}",
|
||||
"refId": "A",
|
||||
"format": "heatmap"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {
|
||||
"scaleDistribution": {
|
||||
"type": "log",
|
||||
"log": 2
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"calculate": false,
|
||||
"yAxis": {
|
||||
"unit": "s"
|
||||
},
|
||||
"color": {
|
||||
"scheme": "Oranges",
|
||||
"mode": "scheme"
|
||||
},
|
||||
"cellGap": 1,
|
||||
"tooltip": {
|
||||
"show": true
|
||||
}
|
||||
},
|
||||
"description": "Distribution of session durations"
|
||||
},
|
||||
{
|
||||
"id": 10,
|
||||
"title": "Commands Executed by Shell",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 22},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"interval": "60s",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(oubliette_commands_executed_total{job=\"apiary\"}[$__rate_interval])",
|
||||
"legendFormat": "{{shell}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "cps",
|
||||
"custom": {
|
||||
"drawStyle": "line",
|
||||
"lineInterpolation": "smooth",
|
||||
"fillOpacity": 20,
|
||||
"pointSize": 5,
|
||||
"showPoints": "auto",
|
||||
"stacking": {"mode": "normal"}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "list", "placement": "bottom"},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "Rate of commands executed in honeypot shells"
|
||||
},
|
||||
{
|
||||
"id": 11,
|
||||
"title": "Storage Query Duration by Method",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 38},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"interval": "60s",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(oubliette_storage_query_duration_seconds_sum{job=\"apiary\"}[$__rate_interval]) / rate(oubliette_storage_query_duration_seconds_count{job=\"apiary\"}[$__rate_interval])",
|
||||
"legendFormat": "{{method}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s",
|
||||
"custom": {
|
||||
"drawStyle": "line",
|
||||
"lineInterpolation": "smooth",
|
||||
"fillOpacity": 10,
|
||||
"pointSize": 5,
|
||||
"showPoints": "auto",
|
||||
"stacking": {"mode": "none"}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "list", "placement": "bottom"},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "Average query duration per storage method over time"
|
||||
},
|
||||
{
|
||||
"id": 12,
|
||||
"title": "Storage Query Rate by Method",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 38},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"interval": "60s",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(oubliette_storage_query_duration_seconds_count{job=\"apiary\"}[$__rate_interval])",
|
||||
"legendFormat": "{{method}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "ops",
|
||||
"custom": {
|
||||
"drawStyle": "line",
|
||||
"lineInterpolation": "smooth",
|
||||
"fillOpacity": 10,
|
||||
"pointSize": 5,
|
||||
"showPoints": "auto",
|
||||
"stacking": {"mode": "none"}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "list", "placement": "bottom"},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "Query execution rate per storage method"
|
||||
},
|
||||
{
|
||||
"id": 13,
|
||||
"title": "Storage Query Errors",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 6, "x": 0, "y": 46},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(oubliette_storage_query_errors_total{job=\"apiary\"})",
|
||||
"legendFormat": "Errors",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 1},
|
||||
{"color": "red", "value": 10}
|
||||
]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Total storage query errors"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,446 +0,0 @@
|
||||
{
|
||||
"uid": "certificates-homelab",
|
||||
"title": "TLS Certificates",
|
||||
"tags": ["certificates", "tls", "security", "homelab"],
|
||||
"timezone": "browser",
|
||||
"schemaVersion": 39,
|
||||
"version": 1,
|
||||
"refresh": "5m",
|
||||
"time": {
|
||||
"from": "now-7d",
|
||||
"to": "now"
|
||||
},
|
||||
"panels": [
|
||||
{
|
||||
"id": 1,
|
||||
"title": "Endpoints Monitored",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 0, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count(probe_ssl_earliest_cert_expiry{job=\"blackbox_tls\"})",
|
||||
"legendFormat": "Total",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "blue", "value": null}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Total number of TLS endpoints being monitored"
|
||||
},
|
||||
{
|
||||
"id": 2,
|
||||
"title": "Probe Failures",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 4, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count(probe_success{job=\"blackbox_tls\"} == 0) or vector(0)",
|
||||
"legendFormat": "Failing",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "red", "value": 1}
|
||||
]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Number of endpoints where TLS probe is failing"
|
||||
},
|
||||
{
|
||||
"id": 3,
|
||||
"title": "Expiring Soon (< 7d)",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 8, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count((probe_ssl_earliest_cert_expiry{job=\"blackbox_tls\"} - time()) < 86400 * 7) or vector(0)",
|
||||
"legendFormat": "Warning",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 1}
|
||||
]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Certificates expiring within 7 days"
|
||||
},
|
||||
{
|
||||
"id": 4,
|
||||
"title": "Expiring Critical (< 24h)",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 12, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count((probe_ssl_earliest_cert_expiry{job=\"blackbox_tls\"} - time()) < 86400) or vector(0)",
|
||||
"legendFormat": "Critical",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "red", "value": 1}
|
||||
]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Certificates expiring within 24 hours"
|
||||
},
|
||||
{
|
||||
"id": 5,
|
||||
"title": "Minimum Days Remaining",
|
||||
"type": "gauge",
|
||||
"gridPos": {"h": 4, "w": 8, "x": 16, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "min((probe_ssl_earliest_cert_expiry{job=\"blackbox_tls\"} - time()) / 86400)",
|
||||
"legendFormat": "Days",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "d",
|
||||
"min": 0,
|
||||
"max": 90,
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "red", "value": null},
|
||||
{"color": "orange", "value": 7},
|
||||
{"color": "yellow", "value": 14},
|
||||
{"color": "green", "value": 30}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"showThresholdLabels": false,
|
||||
"showThresholdMarkers": true
|
||||
},
|
||||
"description": "Shortest time until any certificate expires"
|
||||
},
|
||||
{
|
||||
"id": 6,
|
||||
"title": "Certificate Expiry by Endpoint",
|
||||
"type": "table",
|
||||
"gridPos": {"h": 12, "w": 12, "x": 0, "y": 4},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "(probe_ssl_earliest_cert_expiry{job=\"blackbox_tls\"} - time()) / 86400",
|
||||
"legendFormat": "{{instance}}",
|
||||
"refId": "A",
|
||||
"instant": true,
|
||||
"format": "table"
|
||||
}
|
||||
],
|
||||
"transformations": [
|
||||
{
|
||||
"id": "organize",
|
||||
"options": {
|
||||
"excludeByName": {"Time": true, "job": true, "__name__": true},
|
||||
"renameByName": {"instance": "Endpoint", "Value": "Days Until Expiry"}
|
||||
}
|
||||
},
|
||||
{
|
||||
"id": "sortBy",
|
||||
"options": {
|
||||
"sort": [{"field": "Days Until Expiry", "desc": false}]
|
||||
}
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {
|
||||
"align": "left"
|
||||
}
|
||||
},
|
||||
"overrides": [
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Days Until Expiry"},
|
||||
"properties": [
|
||||
{"id": "unit", "value": "d"},
|
||||
{"id": "decimals", "value": 1},
|
||||
{"id": "custom.width", "value": 150},
|
||||
{
|
||||
"id": "thresholds",
|
||||
"value": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "red", "value": null},
|
||||
{"color": "orange", "value": 7},
|
||||
{"color": "yellow", "value": 14},
|
||||
{"color": "green", "value": 30}
|
||||
]
|
||||
}
|
||||
},
|
||||
{"id": "custom.cellOptions", "value": {"type": "color-background"}}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
"options": {
|
||||
"showHeader": true,
|
||||
"sortBy": [{"displayName": "Days Until Expiry", "desc": false}]
|
||||
},
|
||||
"description": "All monitored endpoints sorted by days until certificate expiry"
|
||||
},
|
||||
{
|
||||
"id": 7,
|
||||
"title": "Probe Status",
|
||||
"type": "table",
|
||||
"gridPos": {"h": 12, "w": 12, "x": 12, "y": 4},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "probe_success{job=\"blackbox_tls\"}",
|
||||
"legendFormat": "{{instance}}",
|
||||
"refId": "A",
|
||||
"instant": true,
|
||||
"format": "table"
|
||||
},
|
||||
{
|
||||
"expr": "probe_http_status_code{job=\"blackbox_tls\"}",
|
||||
"legendFormat": "{{instance}}",
|
||||
"refId": "B",
|
||||
"instant": true,
|
||||
"format": "table"
|
||||
},
|
||||
{
|
||||
"expr": "probe_duration_seconds{job=\"blackbox_tls\"}",
|
||||
"legendFormat": "{{instance}}",
|
||||
"refId": "C",
|
||||
"instant": true,
|
||||
"format": "table"
|
||||
}
|
||||
],
|
||||
"transformations": [
|
||||
{
|
||||
"id": "joinByField",
|
||||
"options": {
|
||||
"byField": "instance",
|
||||
"mode": "outer"
|
||||
}
|
||||
},
|
||||
{
|
||||
"id": "organize",
|
||||
"options": {
|
||||
"excludeByName": {"Time": true, "Time 1": true, "Time 2": true, "Time 3": true, "job": true, "job 1": true, "job 2": true, "job 3": true, "__name__": true},
|
||||
"renameByName": {
|
||||
"instance": "Endpoint",
|
||||
"Value #A": "Success",
|
||||
"Value #B": "HTTP Status",
|
||||
"Value #C": "Duration"
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {"align": "left"}
|
||||
},
|
||||
"overrides": [
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Success"},
|
||||
"properties": [
|
||||
{"id": "custom.width", "value": 80},
|
||||
{"id": "mappings", "value": [
|
||||
{"type": "value", "options": {"0": {"text": "FAIL", "color": "red"}}},
|
||||
{"type": "value", "options": {"1": {"text": "OK", "color": "green"}}}
|
||||
]},
|
||||
{"id": "custom.cellOptions", "value": {"type": "color-text"}}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "HTTP Status"},
|
||||
"properties": [
|
||||
{"id": "custom.width", "value": 100}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Duration"},
|
||||
"properties": [
|
||||
{"id": "unit", "value": "s"},
|
||||
{"id": "decimals", "value": 3},
|
||||
{"id": "custom.width", "value": 100}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
"options": {
|
||||
"showHeader": true
|
||||
},
|
||||
"description": "Probe success status, HTTP response code, and probe duration"
|
||||
},
|
||||
{
|
||||
"id": 8,
|
||||
"title": "Certificate Expiry Over Time",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 16},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "(probe_ssl_earliest_cert_expiry{job=\"blackbox_tls\"} - time()) / 86400",
|
||||
"legendFormat": "{{instance}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "d",
|
||||
"custom": {
|
||||
"lineWidth": 2,
|
||||
"fillOpacity": 10,
|
||||
"showPoints": "never"
|
||||
},
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "red", "value": null},
|
||||
{"color": "orange", "value": 7},
|
||||
{"color": "yellow", "value": 14},
|
||||
{"color": "green", "value": 30}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "table", "placement": "right", "calcs": ["lastNotNull"]},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "Days until certificate expiry over time - useful for spotting renewal patterns"
|
||||
},
|
||||
{
|
||||
"id": 9,
|
||||
"title": "Probe Success Rate",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 24},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "avg(probe_success{job=\"blackbox_tls\"}) * 100",
|
||||
"legendFormat": "Success Rate",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "percent",
|
||||
"min": 0,
|
||||
"max": 100,
|
||||
"custom": {
|
||||
"lineWidth": 2,
|
||||
"fillOpacity": 20,
|
||||
"showPoints": "never"
|
||||
},
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "red", "value": null},
|
||||
{"color": "yellow", "value": 90},
|
||||
{"color": "green", "value": 100}
|
||||
]
|
||||
},
|
||||
"color": {"mode": "thresholds"}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "list", "placement": "bottom"},
|
||||
"tooltip": {"mode": "single"}
|
||||
},
|
||||
"description": "Overall probe success rate across all endpoints"
|
||||
},
|
||||
{
|
||||
"id": 10,
|
||||
"title": "Probe Duration",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 24},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "probe_duration_seconds{job=\"blackbox_tls\"}",
|
||||
"legendFormat": "{{instance}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s",
|
||||
"custom": {
|
||||
"lineWidth": 1,
|
||||
"fillOpacity": 0,
|
||||
"showPoints": "never"
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "table", "placement": "right", "calcs": ["mean", "max"]},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "Time taken to complete TLS probe for each endpoint"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,85 +0,0 @@
{
"uid": "logs-homelab",
"title": "Logs - Homelab",
"tags": ["loki", "logs", "homelab"],
"timezone": "browser",
"schemaVersion": 39,
"version": 1,
"refresh": "30s",
"templating": {
"list": [
{
"name": "host",
"type": "query",
"datasource": {"type": "loki", "uid": "loki"},
"query": "label_values(host)",
"refresh": 2,
"includeAll": true,
"multi": false,
"current": {"text": "All", "value": "$__all"}
},
{
"name": "job",
"type": "query",
"datasource": {"type": "loki", "uid": "loki"},
"query": "label_values(job)",
"refresh": 2,
"includeAll": true,
"multi": false,
"current": {"text": "All", "value": "$__all"}
},
{
"name": "search",
"type": "textbox",
"current": {"text": "", "value": ""},
"label": "Search"
}
]
},
"panels": [
{
"id": 1,
"title": "Log Volume",
"type": "timeseries",
"gridPos": {"h": 6, "w": 24, "x": 0, "y": 0},
"datasource": {"type": "loki", "uid": "loki"},
"targets": [
{
"expr": "sum by (host) (count_over_time({host=~\"$host\", job=~\"$job\"} |~ \"$search\" [1m]))",
"legendFormat": "{{host}}",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "short"
}
},
"options": {
"legend": {"displayMode": "list", "placement": "bottom"}
}
},
{
"id": 2,
"title": "Logs",
"type": "logs",
"gridPos": {"h": 18, "w": 24, "x": 0, "y": 6},
"datasource": {"type": "loki", "uid": "loki"},
"targets": [
{
"expr": "{host=~\"$host\", job=~\"$job\"} |~ \"$search\"",
"refId": "A"
}
],
"options": {
"showTime": true,
"showLabels": true,
"showCommonLabels": false,
"wrapLogMessage": true,
"prettifyLogMessage": false,
"enableLogDetails": true,
"sortOrder": "Descending"
}
}
]
}
@@ -1,949 +0,0 @@
|
||||
{
|
||||
"uid": "nixos-fleet-homelab",
|
||||
"title": "NixOS Fleet - Homelab",
|
||||
"tags": ["nixos", "fleet", "homelab"],
|
||||
"timezone": "browser",
|
||||
"schemaVersion": 39,
|
||||
"version": 1,
|
||||
"refresh": "1m",
|
||||
"time": {
|
||||
"from": "now-7d",
|
||||
"to": "now"
|
||||
},
|
||||
"templating": {
|
||||
"list": [
|
||||
{
|
||||
"name": "tier",
|
||||
"type": "query",
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"query": "label_values(nixos_flake_info, tier)",
|
||||
"refresh": 2,
|
||||
"includeAll": true,
|
||||
"multi": false,
|
||||
"current": {"text": "All", "value": "$__all"}
|
||||
}
|
||||
]
|
||||
},
|
||||
"panels": [
|
||||
{
|
||||
"id": 1,
|
||||
"title": "Hosts Behind Remote",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 0, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count(nixos_flake_revision_behind{tier=~\"$tier\"} == 1)",
|
||||
"legendFormat": "Behind",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 1},
|
||||
{"color": "red", "value": 5}
|
||||
]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none",
|
||||
"textMode": "auto"
|
||||
},
|
||||
"description": "Number of hosts where current revision differs from remote master"
|
||||
},
|
||||
{
|
||||
"id": 2,
|
||||
"title": "Hosts Needing Reboot",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 4, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count(nixos_config_mismatch{tier=~\"$tier\"} == 1)",
|
||||
"legendFormat": "Need Reboot",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 1},
|
||||
{"color": "orange", "value": 3},
|
||||
{"color": "red", "value": 5}
|
||||
]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Hosts where booted generation differs from current (switched but not rebooted)"
|
||||
},
|
||||
{
|
||||
"id": 3,
|
||||
"title": "Total Hosts",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 3, "x": 8, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count(nixos_flake_info{tier=~\"$tier\"})",
|
||||
"legendFormat": "Hosts",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [{"color": "blue", "value": null}]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
}
|
||||
},
|
||||
{
|
||||
"id": 4,
|
||||
"title": "Nixpkgs Age",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 3, "x": 11, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "max(nixos_flake_input_age_seconds{input=\"nixpkgs\", tier=~\"$tier\"})",
|
||||
"legendFormat": "Nixpkgs",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s",
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 604800},
|
||||
{"color": "orange", "value": 1209600},
|
||||
{"color": "red", "value": 2592000}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Age of nixpkgs flake input (yellow >7d, orange >14d, red >30d)"
|
||||
},
|
||||
{
|
||||
"id": 5,
|
||||
"title": "Hosts Up-to-date",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 3, "x": 14, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count(nixos_flake_revision_behind{tier=~\"$tier\"} == 0)",
|
||||
"legendFormat": "Up-to-date",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [{"color": "green", "value": null}]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
}
|
||||
},
|
||||
{
|
||||
"id": 13,
|
||||
"title": "Deployments (24h)",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 3, "x": 17, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(increase(homelab_deploy_deployments_total{status=\"completed\"}[24h]))",
|
||||
"legendFormat": "Deployments",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [{"color": "blue", "value": null}]
|
||||
},
|
||||
"noValue": "0",
|
||||
"decimals": 0
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Total successful deployments in the last 24 hours"
|
||||
},
|
||||
{
|
||||
"id": 14,
|
||||
"title": "Avg Deploy Time",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 20, "y": 0},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(increase(homelab_deploy_deployment_duration_seconds_sum{success=\"true\"}[24h])) / sum(increase(homelab_deploy_deployment_duration_seconds_count{success=\"true\"}[24h]))",
|
||||
"legendFormat": "Avg Time",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s",
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 30},
|
||||
{"color": "red", "value": 60}
|
||||
]
|
||||
},
|
||||
"noValue": "-"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Average deployment duration over the last 24 hours (yellow >30s, red >60s)"
|
||||
},
|
||||
{
|
||||
"id": 6,
|
||||
"title": "Fleet Status",
|
||||
"type": "table",
|
||||
"gridPos": {"h": 10, "w": 24, "x": 0, "y": 4},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "nixos_flake_info{tier=~\"$tier\"}",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"refId": "info"
|
||||
},
|
||||
{
|
||||
"expr": "nixos_flake_revision_behind{tier=~\"$tier\"}",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"refId": "behind"
|
||||
},
|
||||
{
|
||||
"expr": "nixos_config_mismatch{tier=~\"$tier\"}",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"refId": "mismatch"
|
||||
},
|
||||
{
|
||||
"expr": "nixos_generation_age_seconds{tier=~\"$tier\"}",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"refId": "age"
|
||||
},
|
||||
{
|
||||
"expr": "nixos_generation_count{tier=~\"$tier\"}",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"refId": "count"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {},
|
||||
"overrides": [
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Hostname"},
|
||||
"properties": [{"id": "custom.width", "value": 120}]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Current Rev"},
|
||||
"properties": [{"id": "custom.width", "value": 90}]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Remote Rev"},
|
||||
"properties": [{"id": "custom.width", "value": 90}]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Behind"},
|
||||
"properties": [
|
||||
{"id": "custom.width", "value": 70},
|
||||
{"id": "mappings", "value": [
|
||||
{"type": "value", "options": {"0": {"text": "No", "color": "green"}}},
|
||||
{"type": "value", "options": {"1": {"text": "Yes", "color": "red"}}}
|
||||
]},
|
||||
{"id": "custom.cellOptions", "value": {"type": "color-text"}}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Need Reboot"},
|
||||
"properties": [
|
||||
{"id": "custom.width", "value": 100},
|
||||
{"id": "mappings", "value": [
|
||||
{"type": "value", "options": {"0": {"text": "No", "color": "green"}}},
|
||||
{"type": "value", "options": {"1": {"text": "Yes", "color": "orange"}}}
|
||||
]},
|
||||
{"id": "custom.cellOptions", "value": {"type": "color-text"}}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Config Age"},
|
||||
"properties": [
|
||||
{"id": "unit", "value": "s"},
|
||||
{"id": "custom.width", "value": 100}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Generations"},
|
||||
"properties": [{"id": "custom.width", "value": 100}]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Tier"},
|
||||
"properties": [{"id": "custom.width", "value": 60}]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Role"},
|
||||
"properties": [{"id": "custom.width", "value": 80}]
|
||||
}
|
||||
]
|
||||
},
|
||||
"options": {
|
||||
"showHeader": true,
|
||||
"sortBy": [{"displayName": "Hostname", "desc": false}]
|
||||
},
|
||||
"transformations": [
|
||||
{
|
||||
"id": "joinByField",
|
||||
"options": {"byField": "hostname", "mode": "outer"}
|
||||
},
|
||||
{
|
||||
"id": "organize",
|
||||
"options": {
|
||||
"excludeByName": {
|
||||
"Time": true,
|
||||
"Time 1": true,
|
||||
"Time 2": true,
|
||||
"Time 3": true,
|
||||
"Time 4": true,
|
||||
"Time 5": true,
|
||||
"Value #info": true,
|
||||
"__name__": true,
|
||||
"__name__ 1": true,
|
||||
"__name__ 2": true,
|
||||
"__name__ 3": true,
|
||||
"__name__ 4": true,
|
||||
"__name__ 5": true,
|
||||
"dns_role": true,
|
||||
"dns_role 1": true,
|
||||
"dns_role 2": true,
|
||||
"dns_role 3": true,
|
||||
"dns_role 4": true,
|
||||
"instance": true,
|
||||
"instance 1": true,
|
||||
"instance 2": true,
|
||||
"instance 3": true,
|
||||
"instance 4": true,
|
||||
"job": true,
|
||||
"job 1": true,
|
||||
"job 2": true,
|
||||
"job 3": true,
|
||||
"job 4": true,
|
||||
"nixos_version": true,
|
||||
"nixpkgs_rev": true,
|
||||
"role 1": true,
|
||||
"role 2": true,
|
||||
"role 3": true,
|
||||
"role 4": true,
|
||||
"tier 1": true,
|
||||
"tier 2": true,
|
||||
"tier 3": true,
|
||||
"tier 4": true
|
||||
},
|
||||
"indexByName": {
|
||||
"hostname": 0,
|
||||
"tier": 1,
|
||||
"role": 2,
|
||||
"current_rev": 3,
|
||||
"remote_rev": 4,
|
||||
"Value #behind": 5,
|
||||
"Value #mismatch": 6,
|
||||
"Value #age": 7,
|
||||
"Value #count": 8
|
||||
},
|
||||
"renameByName": {
|
||||
"hostname": "Hostname",
|
||||
"tier": "Tier",
|
||||
"role": "Role",
|
||||
"current_rev": "Current Rev",
|
||||
"remote_rev": "Remote Rev",
|
||||
"Value #behind": "Behind",
|
||||
"Value #mismatch": "Need Reboot",
|
||||
"Value #age": "Config Age",
|
||||
"Value #count": "Generations"
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 7,
|
||||
"title": "Generation Age by Host",
|
||||
"type": "bargauge",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 14},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sort_desc(nixos_generation_age_seconds{tier=~\"$tier\"})",
|
||||
"legendFormat": "{{hostname}}",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s",
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 86400},
|
||||
{"color": "orange", "value": 259200},
|
||||
{"color": "red", "value": 604800}
|
||||
]
|
||||
},
|
||||
"min": 0
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"orientation": "horizontal",
|
||||
"displayMode": "gradient",
|
||||
"showUnfilled": true
|
||||
},
|
||||
"description": "How long ago each host's current config was deployed (yellow >1d, orange >3d, red >7d)"
|
||||
},
|
||||
{
|
||||
"id": 8,
|
||||
"title": "Generations per Host",
|
||||
"type": "bargauge",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 14},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sort_desc(nixos_generation_count{tier=~\"$tier\"})",
|
||||
"legendFormat": "{{hostname}}",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "blue", "value": null},
|
||||
{"color": "purple", "value": 50}
|
||||
]
|
||||
},
|
||||
"min": 0
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"orientation": "horizontal",
|
||||
"displayMode": "gradient",
|
||||
"showUnfilled": true
|
||||
},
|
||||
"description": "Total number of NixOS generations on each host"
|
||||
},
|
||||
{
|
||||
"id": 9,
|
||||
"title": "Deployment Activity (Generation Age Over Time)",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 22},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "nixos_generation_age_seconds{tier=~\"$tier\"}",
|
||||
"legendFormat": "{{hostname}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s",
|
||||
"custom": {
|
||||
"lineWidth": 1,
|
||||
"fillOpacity": 0,
|
||||
"showPoints": "never",
|
||||
"stacking": {"mode": "none"}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {
|
||||
"displayMode": "list",
|
||||
"placement": "bottom"
|
||||
},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "Generation age increases over time, drops to near-zero when deployed. Useful to see deployment patterns."
|
||||
},
|
||||
{
|
||||
"id": 10,
|
||||
"title": "Flake Input Ages",
|
||||
"type": "table",
|
||||
"gridPos": {"h": 6, "w": 12, "x": 0, "y": 30},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "max by (input) (nixos_flake_input_age_seconds)",
|
||||
"format": "table",
|
||||
"instant": true,
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s"
|
||||
},
|
||||
"overrides": [
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "input"},
|
||||
"properties": [{"id": "custom.width", "value": 150}]
|
||||
}
|
||||
]
|
||||
},
|
||||
"options": {
|
||||
"showHeader": true,
|
||||
"sortBy": [{"displayName": "Value", "desc": true}]
|
||||
},
|
||||
"transformations": [
|
||||
{
|
||||
"id": "organize",
|
||||
"options": {
|
||||
"excludeByName": {"Time": true},
|
||||
"renameByName": {
|
||||
"input": "Flake Input",
|
||||
"Value": "Age"
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"description": "Age of each flake input across the fleet"
|
||||
},
|
||||
{
|
||||
"id": 11,
|
||||
"title": "Hosts by Revision",
|
||||
"type": "piechart",
|
||||
"gridPos": {"h": 6, "w": 6, "x": 12, "y": 30},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count by (current_rev) (nixos_flake_info{tier=~\"$tier\"})",
|
||||
"legendFormat": "{{current_rev}}",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"legend": {"displayMode": "table", "placement": "right", "values": ["value"]},
|
||||
"pieType": "pie"
|
||||
},
|
||||
"description": "Distribution of hosts by their current flake revision"
|
||||
},
|
||||
{
|
||||
"id": 12,
|
||||
"title": "Hosts by Tier",
|
||||
"type": "piechart",
|
||||
"gridPos": {"h": 6, "w": 6, "x": 18, "y": 30},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count by (tier) (nixos_flake_info)",
|
||||
"legendFormat": "{{tier}}",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"legend": {"displayMode": "table", "placement": "right", "values": ["value"]},
|
||||
"pieType": "pie"
|
||||
},
|
||||
"transformations": [
|
||||
{
|
||||
"id": "renameByRegex",
|
||||
"options": {
|
||||
"regex": "^$",
|
||||
"renamePattern": "prod"
|
||||
}
|
||||
}
|
||||
],
|
||||
"description": "Distribution of hosts by tier (test vs prod)"
|
||||
},
|
||||
{
|
||||
"id": 15,
|
||||
"title": "Build Service",
|
||||
"type": "row",
|
||||
"gridPos": {"h": 1, "w": 24, "x": 0, "y": 36},
|
||||
"collapsed": false
|
||||
},
|
||||
{
|
||||
"id": 16,
|
||||
"title": "Builds (24h)",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 0, "y": 37},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(increase(homelab_deploy_build_host_total{status=\"success\"}[24h]))",
|
||||
"legendFormat": "Builds",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [{"color": "green", "value": null}]
|
||||
},
|
||||
"noValue": "0",
|
||||
"decimals": 0
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Successful host builds in the last 24 hours"
|
||||
},
|
||||
{
|
||||
"id": 17,
|
||||
"title": "Failed Builds (24h)",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 4, "y": 37},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(increase(homelab_deploy_build_host_total{status=\"failure\"}[24h])) or vector(0)",
|
||||
"legendFormat": "Failed",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 1},
|
||||
{"color": "red", "value": 5}
|
||||
]
|
||||
},
|
||||
"noValue": "0",
|
||||
"decimals": 0
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Failed host builds in the last 24 hours"
|
||||
},
|
||||
{
|
||||
"id": 18,
|
||||
"title": "Last Build",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 8, "y": 37},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "time() - max(homelab_deploy_build_last_timestamp)",
|
||||
"legendFormat": "Last Build",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s",
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 86400},
|
||||
{"color": "red", "value": 604800}
|
||||
]
|
||||
},
|
||||
"noValue": "-"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Time since last build attempt (yellow >1d, red >7d)"
|
||||
},
|
||||
{
|
||||
"id": 19,
|
||||
"title": "Avg Build Time",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 12, "y": 37},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(increase(homelab_deploy_build_duration_seconds_sum[24h])) / sum(increase(homelab_deploy_build_duration_seconds_count[24h]))",
|
||||
"legendFormat": "Avg Time",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s",
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 30},
|
||||
{"color": "red", "value": 60}
|
||||
]
|
||||
},
|
||||
"noValue": "-"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Average build duration per host over the last 24 hours"
|
||||
},
|
||||
{
|
||||
"id": 20,
|
||||
"title": "Total Hosts Built",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 16, "y": 37},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "count(homelab_deploy_build_duration_seconds_count)",
|
||||
"legendFormat": "Hosts",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [{"color": "blue", "value": null}]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Total number of unique hosts that have been built"
|
||||
},
|
||||
{
|
||||
"id": 21,
|
||||
"title": "Build Jobs (24h)",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 4, "x": 20, "y": 37},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(increase(homelab_deploy_builds_total[24h]))",
|
||||
"legendFormat": "Jobs",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [{"color": "purple", "value": null}]
|
||||
},
|
||||
"noValue": "0",
|
||||
"decimals": 0
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Total build jobs (each job may build multiple hosts) in the last 24 hours"
|
||||
},
|
||||
{
|
||||
"id": 22,
|
||||
"title": "Build Time by Host",
|
||||
"type": "bargauge",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 41},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sort_desc(homelab_deploy_build_duration_seconds_sum / homelab_deploy_build_duration_seconds_count)",
|
||||
"legendFormat": "{{host}}",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "s",
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "yellow", "value": 15},
|
||||
{"color": "orange", "value": 25},
|
||||
{"color": "red", "value": 45}
|
||||
]
|
||||
},
|
||||
"min": 0
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"orientation": "horizontal",
|
||||
"displayMode": "gradient",
|
||||
"showUnfilled": true
|
||||
},
|
||||
"description": "Average build time per host (green <15s, yellow <25s, orange <45s, red >45s)"
|
||||
},
|
||||
{
|
||||
"id": 23,
|
||||
"title": "Build Count by Host",
|
||||
"type": "bargauge",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 41},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sort_desc(sum by (host) (homelab_deploy_build_host_total))",
|
||||
"legendFormat": "{{host}}",
|
||||
"refId": "A",
|
||||
"instant": true
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "blue", "value": null},
|
||||
{"color": "purple", "value": 10}
|
||||
]
|
||||
},
|
||||
"min": 0
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"orientation": "horizontal",
|
||||
"displayMode": "gradient",
|
||||
"showUnfilled": true
|
||||
},
|
||||
"description": "Total build count per host (all time)"
|
||||
},
|
||||
{
|
||||
"id": 24,
|
||||
"title": "Build Activity",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 49},
|
||||
"datasource": {"type": "prometheus", "uid": "victoriametrics"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(increase(homelab_deploy_build_host_total{status=\"success\"}[1h]))",
|
||||
"legendFormat": "Successful",
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"expr": "sum(increase(homelab_deploy_build_host_total{status=\"failure\"}[1h]))",
|
||||
"legendFormat": "Failed",
|
||||
"refId": "B"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {
|
||||
"lineWidth": 1,
|
||||
"fillOpacity": 30,
|
||||
"showPoints": "never",
|
||||
"stacking": {"mode": "none"}
|
||||
}
|
||||
},
|
||||
"overrides": [
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Successful"},
|
||||
"properties": [{"id": "color", "value": {"mode": "fixed", "fixedColor": "green"}}]
|
||||
},
|
||||
{
|
||||
"matcher": {"id": "byName", "options": "Failed"},
|
||||
"properties": [{"id": "color", "value": {"mode": "fixed", "fixedColor": "red"}}]
|
||||
}
|
||||
]
|
||||
},
|
||||
"options": {
|
||||
"legend": {
|
||||
"displayMode": "list",
|
||||
"placement": "bottom"
|
||||
},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "Build activity over time (successful vs failed builds per hour)"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,296 +0,0 @@
|
||||
{
|
||||
"uid": "nixos-operations",
|
||||
"title": "NixOS Operations",
|
||||
"tags": ["loki", "nixos", "operations", "homelab"],
|
||||
"timezone": "browser",
|
||||
"schemaVersion": 39,
|
||||
"version": 1,
|
||||
"refresh": "1m",
|
||||
"time": {
|
||||
"from": "now-24h",
|
||||
"to": "now"
|
||||
},
|
||||
"templating": {
|
||||
"list": [
|
||||
{
|
||||
"name": "host",
|
||||
"type": "query",
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"query": "label_values(host)",
|
||||
"refresh": 2,
|
||||
"includeAll": true,
|
||||
"multi": true,
|
||||
"current": {"text": "All", "value": "$__all"}
|
||||
}
|
||||
]
|
||||
},
|
||||
"panels": [
|
||||
{
|
||||
"id": 1,
|
||||
"title": "Upgrade Log Volume",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 6, "x": 0, "y": 0},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(count_over_time({systemd_unit=\"nixos-upgrade.service\", host=~\"$host\"} [$__range]))",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [{"color": "blue", "value": null}]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Total log entries from nixos-upgrade.service in selected time range"
|
||||
},
|
||||
{
|
||||
"id": 2,
|
||||
"title": "Successful Upgrades",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 6, "x": 6, "y": 0},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(count_over_time({systemd_unit=\"nixos-upgrade.service\", host=~\"$host\"} |= \"Done. The new configuration is\" [$__range]))",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [{"color": "green", "value": null}]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Upgrades that completed successfully"
|
||||
},
|
||||
{
|
||||
"id": 3,
|
||||
"title": "Upgrade Errors",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 6, "x": 12, "y": 0},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(count_over_time({systemd_unit=\"nixos-upgrade.service\", host=~\"$host\"} |~ \"(?i)error|failed\" [$__range]))",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{"color": "green", "value": null},
|
||||
{"color": "red", "value": 1}
|
||||
]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Upgrade log entries containing errors"
|
||||
},
|
||||
{
|
||||
"id": 4,
|
||||
"title": "Bootstrap Events",
|
||||
"type": "stat",
|
||||
"gridPos": {"h": 4, "w": 6, "x": 18, "y": 0},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(count_over_time({job=\"bootstrap\", host=~\"$host\"} [$__range]))",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [{"color": "purple", "value": null}]
|
||||
},
|
||||
"noValue": "0"
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"reduceOptions": {"calcs": ["lastNotNull"]},
|
||||
"colorMode": "value",
|
||||
"graphMode": "none"
|
||||
},
|
||||
"description": "Bootstrap log entries from new VM deployments"
|
||||
},
|
||||
{
|
||||
"id": 5,
|
||||
"title": "Upgrade Activity by Host",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 4},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum by (host) (count_over_time({systemd_unit=\"nixos-upgrade.service\", host=~\"$host\"} [5m]))",
|
||||
"legendFormat": "{{host}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "short",
|
||||
"custom": {
|
||||
"lineWidth": 1,
|
||||
"fillOpacity": 30,
|
||||
"showPoints": "never",
|
||||
"stacking": {"mode": "normal"}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "list", "placement": "bottom"},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "When upgrades ran on each host"
|
||||
},
|
||||
{
|
||||
"id": 6,
|
||||
"title": "ACME Certificate Activity",
|
||||
"type": "timeseries",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 4},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum by (host) (count_over_time({systemd_unit=~\"acme.*\", host=~\"$host\"} [5m]))",
|
||||
"legendFormat": "{{host}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"unit": "short",
|
||||
"custom": {
|
||||
"lineWidth": 1,
|
||||
"fillOpacity": 30,
|
||||
"showPoints": "never",
|
||||
"stacking": {"mode": "normal"}
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"legend": {"displayMode": "list", "placement": "bottom"},
|
||||
"tooltip": {"mode": "multi", "sort": "desc"}
|
||||
},
|
||||
"description": "ACME certificate renewal activity"
|
||||
},
|
||||
{
|
||||
"id": 7,
|
||||
"title": "Recent Upgrade Completions",
|
||||
"type": "logs",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 12},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "{systemd_unit=\"nixos-upgrade.service\", host=~\"$host\"} |= \"Done. The new configuration is\" | json | line_format \"{{.MESSAGE}}\" | keep host",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"options": {
|
||||
"showTime": true,
|
||||
"showLabels": true,
|
||||
"showCommonLabels": false,
|
||||
"wrapLogMessage": true,
|
||||
"prettifyLogMessage": false,
|
||||
"enableLogDetails": true,
|
||||
"sortOrder": "Descending"
|
||||
},
|
||||
"description": "Successful upgrade completion messages showing the new system path"
|
||||
},
|
||||
{
|
||||
"id": 8,
|
||||
"title": "Build Activity",
|
||||
"type": "logs",
|
||||
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 12},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "{systemd_unit=\"nixos-upgrade.service\", host=~\"$host\"} |= \"building\" | json | line_format \"{{.MESSAGE}}\" | keep host",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"options": {
|
||||
"showTime": true,
|
||||
"showLabels": true,
|
||||
"showCommonLabels": false,
|
||||
"wrapLogMessage": true,
|
||||
"prettifyLogMessage": false,
|
||||
"enableLogDetails": true,
|
||||
"sortOrder": "Descending"
|
||||
},
|
||||
"description": "Derivations being built during upgrades"
|
||||
},
|
||||
{
|
||||
"id": 9,
|
||||
"title": "Bootstrap Logs",
|
||||
"type": "logs",
|
||||
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 20},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "{job=\"bootstrap\", host=~\"$host\"}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"options": {
|
||||
"showTime": true,
|
||||
"showLabels": true,
|
||||
"showCommonLabels": false,
|
||||
"wrapLogMessage": true,
|
||||
"prettifyLogMessage": false,
|
||||
"enableLogDetails": true,
|
||||
"sortOrder": "Descending"
|
||||
},
|
||||
"description": "Logs from VM bootstrap process (new deployments)"
|
||||
},
|
||||
{
|
||||
"id": 10,
|
||||
"title": "Upgrade Errors & Failures",
|
||||
"type": "logs",
|
||||
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 28},
|
||||
"datasource": {"type": "loki", "uid": "loki"},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "{systemd_unit=\"nixos-upgrade.service\", host=~\"$host\"} |~ \"(?i)error|failed\" | json | line_format \"{{.MESSAGE}}\" | keep host",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"options": {
|
||||
"showTime": true,
|
||||
"showLabels": true,
|
||||
"showCommonLabels": false,
|
||||
"wrapLogMessage": true,
|
||||
"prettifyLogMessage": false,
|
||||
"enableLogDetails": true,
|
||||
"sortOrder": "Descending"
|
||||
},
|
||||
"description": "Errors and failures during NixOS upgrades"
|
||||
}
|
||||
]
|
||||
}
Some files were not shown because too many files have changed in this diff.