Adds a system-wide script for sending command output or interactive
sessions to Loki for easy sharing with Claude.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- CLI workflows for creating users and groups
- Troubleshooting guide (nscd, cache invalidation)
- Home directory behavior (UUID-based with symlinks)
- Update auth-system-replacement plan with progress
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Keep base groups (admins, users, ssh-users) provisioned declaratively
but manage regular users via the kanidm CLI. This allows setting POSIX
attributes and passwords in a single workflow.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add homelab.kanidm.enable option for central authentication via Kanidm.
The module configures:
- PAM/NSS integration with kanidm-unixd
- Client connection to auth.home.2rjus.net
- Login authorization for ssh-users group
Enable on testvm01-03 for testing.
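A minimal sketch of the config side of such a module (option paths from the upstream services.kanidm NixOS module; the exact wiring behind homelab.kanidm.enable is an assumption):

```nix
{ config, lib, ... }:
{
  options.homelab.kanidm.enable = lib.mkEnableOption "Kanidm client authentication";

  config = lib.mkIf config.homelab.kanidm.enable {
    services.kanidm = {
      enableClient = true;
      enablePam = true;                                      # PAM/NSS via kanidm-unixd
      clientSettings.uri = "https://auth.home.2rjus.net";
      unixSettings.pam_allowed_login_groups = [ "ssh-users" ];
    };
  };
}
```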
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Provides compressed swap in RAM to prevent OOM kills during
nixos-rebuild on low-memory VMs (2GB). Removes duplicate zram
configs from jelly01 and nix-cache01.
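The shared module boils down to something like this (memoryPercent value illustrative):

```nix
{
  zramSwap = {
    enable = true;
    # Size of the compressed swap device relative to total RAM; on a 2GB VM
    # this adds headroom during nixos-rebuild without touching disk.
    memoryPercent = 50;
  };
}
```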
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Based on security review findings, covering SSH hardening, firewall
enablement, log transport TLS, security alerting, and secrets management.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Changed section 4 from "if needed" to always spawn auditor
- Added explicit "Do NOT query audit logs yourself" guidance
- Listed specific scenarios requiring auditor (service stopped, etc.)
- Added manual intervention as first common cause
- Updated guidelines to emphasize mandatory delegation
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add new auditor agent for security-focused audit log analysis:
- SSH session tracking, command execution, sudo usage
- Suspicious activity detection patterns
- Can be used standalone or as sub-agent by investigate-alarm
Update investigate-alarm to delegate audit analysis to auditor
and add git-explorer MCP for configuration drift detection.
Add git-explorer to .mcp.json for repository inspection.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Kanidm does not expose a Prometheus /metrics endpoint.
The scrape target was causing 404 errors after the TLS
certificate issue was fixed.
Also add SSH command restriction to CLAUDE.md.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Include both auth.home.2rjus.net (CNAME) and kanidm01.home.2rjus.net
(A record) as SANs in the TLS certificate. This fixes Prometheus
scraping which connects via the hostname, not the CNAME.
Fixes: x509: certificate is valid for auth.home.2rjus.net, not kanidm01.home.2rjus.net
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add best practices for querying Loki to avoid overwhelming responses:
- Start with narrow filters and small limits
- Filter audit logs to EXECVE only
- Exclude verbose noise (PATH, PROCTITLE, SYSCALL, BPF)
- Expand queries incrementally if needed
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Enable Linux audit to log execve syscalls from interactive SSH sessions.
Uses an auid filter to exclude system services and nix builds.
Logs forwarded to journald for Loki ingestion. Query with:
{host="testvmXX"} |= "EXECVE"
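A sketch of the corresponding NixOS config (the actual rule in this commit may differ):

```nix
{
  security.auditd.enable = true;
  security.audit.enable = true;
  security.audit.rules = [
    # Log execve from real login sessions only: auid >= 1000 skips system
    # services, auid != 4294967295 skips processes with no login session
    # (e.g. nix-daemon builds).
    "-a exit,always -F arch=b64 -S execve -F auid>=1000 -F auid!=4294967295"
  ];
}
```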
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Sub-agent for investigating system alarms using Prometheus metrics
and Loki logs. Provides root cause analysis with timeline of events.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Move detailed Prometheus/Loki reference from CLAUDE.md to the
observability skill
- Add complete list of Prometheus jobs organized by category
- Add bootstrap log documentation with stages table
- Add kanidm01 to host labels table
- CLAUDE.md now references the skill instead of duplicating info
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Document the end-to-end host creation workflow including:
- Prerequisites and step-by-step process
- Tier specification (test vs prod)
- Bootstrap observability via Loki
- Verification steps
- Troubleshooting guide
- Related files reference
Update CLAUDE.md to reference the new document.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Plan for migrating from Prometheus to VictoriaMetrics on new monitoring02
host with parallel operation, declarative Grafana dashboards, and CNAME-based
cutover.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Mark completed implementation steps
- Document deployed kanidm01 configuration
- Record UID/GID range decision (65536-69999)
- Add verified working items (WebUI, LDAP, certs)
- Update next steps and resolved questions
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Set owner/group to kanidm so the post-start provisioning
script can read the idm_admin password.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- New test-tier VM at 10.69.13.23 with role=auth
- Kanidm 1.8 server with HTTPS (443) and LDAPS (636)
- ACME certificate from internal CA (auth.home.2rjus.net)
- Provisioned groups: admins, users, ssh-users
- Provisioned user: torjus
- Daily backups at 22:00 (7 versions)
- Prometheus monitoring scrape target
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
New VMs bootstrapped from template2 will now use the local nix cache
during initial nixos-rebuild, speeding up bootstrap times.
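Roughly what the template gains (the signing key below is a placeholder, not the real cache key):

```nix
{
  nix.settings = {
    substituters = [ "https://nix-cache.home.2rjus.net" ];
    # Placeholder key; the real cache's public key goes here.
    trusted-public-keys = [ "nix-cache.home.2rjus.net:AAAAexampleKey=" ];
  };
}
```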
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
ns1 needs access to shared/dns/* for zone transfer key and
shared/homelab-deploy/* for the NATS listener.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Old VM had incorrect hardware-configuration.nix with hardcoded UUIDs
that didn't match actual disk layout, causing boot failure (emergency mode).
Recreated using template2-based configuration for OpenTofu provisioning.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove pgdb1 host configuration and postgres service module.
The only consumer (Open WebUI on gunter) has migrated to local PostgreSQL.
Removed:
- hosts/pgdb1/ - host configuration
- services/postgres/ - service module (only used by pgdb1)
- postgres_rules from monitoring rules
- rebuild-all.sh (obsolete script)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Only consumer was Open WebUI on gunter, which will migrate to local
PostgreSQL. Removed pgdb1 backup/migration phases and added to
decommission list.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- ns2 migrated to OpenTofu
- testvm02, testvm03 added to managed hosts
- Remove vaulttest01 (no longer exists)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Configure Unbound to query both ns1 and ns2 for the home.2rjus.net
zone, in addition to local NSD. This provides redundancy during
bootstrap or if local NSD is temporarily unavailable.
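A sketch of the Unbound stub zone (the ns1/ns2 addresses are illustrative):

```nix
{
  services.unbound.settings.stub-zone = [{
    name = "home.2rjus.net.";
    # Local NSD first, then ns1/ns2 as fallback (addresses illustrative).
    stub-addr = [ "127.0.0.1@5353" "10.69.13.5" "10.69.13.6" ];
  }];
}
```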
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Bootstrap times can be improved by configuring the base template
to use the local nix cache during initial builds.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove hosts/template/ (legacy template1) and give each legacy host
its own hardware-configuration.nix copy
- Recreate ns2 using create-host with template2 base
- Add secondary DNS services (NSD + Unbound resolver)
- Configure Vault policy for shared DNS secrets
- Fix create-host IP uniqueness validator to check CIDR notation
(prevents false positives from DNS resolver entries)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove secrets/ directory (sops-nix no longer in use, all hosts use Vault)
- Move TODO.md to docs/plans/completed/automated-host-deployment-pipeline.md
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
All secrets are now managed by OpenBao (Vault). Remove the legacy
sops-nix infrastructure that is no longer in use.
Removed:
- sops-nix flake input
- system/sops.nix module
- .sops.yaml configuration file
- Age key generation from template prepare-host scripts
Updated:
- flake.nix - removed sops-nix references from all hosts
- flake.lock - removed sops-nix input
- scripts/create-host/ - removed sops references
- CLAUDE.md - removed SOPS documentation
Note: secrets/ directory should be manually removed by the user.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove the step-ca host and labmon flake input now that ACME has been
migrated to OpenBao PKI.
Removed:
- hosts/ca/ - step-ca host configuration
- services/ca/ - step-ca service module
- labmon flake input and module (no longer used)
Updated:
- flake.nix - removed ca host and labmon references
- flake.lock - removed labmon input
- rebuild-all.sh - removed ca from host list
- CLAUDE.md - updated documentation
Note: secrets/ca/ should be manually removed by the user.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add note that gh pr create is not supported
- Remove labmon from Prometheus job names list
- Remove labmon from flake inputs list
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Set up a simple nginx server with an ACME certificate from the new
OpenBao PKI infrastructure. This allows testing the ACME migration
before deploying to production hosts.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Switch all ACME certificate issuance from step-ca (ca.home.2rjus.net)
to OpenBao PKI (vault.home.2rjus.net:8200/v1/pki_int/acme/directory).
- Update default ACME server in system/acme.nix
- Update Caddy acme_ca in http-proxy and nix-cache services
- Remove labmon service from monitoring01 (step-ca monitoring)
- Remove labmon scrape target and certificate_rules alerts
- Remove alloy.nix (only used for labmon profiling)
- Add docs/plans/cert-monitoring.md for future cert monitoring needs
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Enable vault.enable and homelab.deploy.enable on vault01 so it can
receive NATS-based remote deployments. Vault fetches secrets from
itself using AppRole after auto-unseal.
Add systemd ordering to ensure vault-secret services wait for openbao
to be unsealed before attempting to fetch secrets.
Also add a vault01 AppRole entry to Terraform.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Enable vault.enable and homelab.deploy.enable for these hosts to
allow NATS-based remote deployments and expose metrics on port 9972.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add instructions for deploying to prod hosts using the CLI directly,
since the MCP server only handles test-tier deployments.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Document the new hostname and host metadata labels available on all
Prometheus scrape targets:
- hostname: short hostname for easy filtering
- role: host role (dns, build-host, vault)
- tier: deployment tier (test for test VMs)
- dns_role: primary/secondary for DNS servers
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add a `hostname` label to all Prometheus scrape targets, making it easy
to query all metrics for a host without wildcarding the instance label.
Example queries:
- {hostname="ns1"} - all metrics from ns1
- node_cpu_seconds_total{hostname="monitoring01"} - specific metric
For external targets (like gunter), the hostname is extracted from the
target string.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Extract homelab.host metadata (tier, priority, role, labels) from host
configurations and propagate them to Prometheus scrape targets. This
enables semantic alert filtering using labels instead of hardcoded
instance names.
Changes:
- lib/monitoring.nix: Extract host metadata, group targets by labels
- prometheus.nix: Use structured static_configs with labels
- rules.yml: Replace instance filters with role-based filters
Example labels in Prometheus:
- ns1/ns2: role=dns, dns_role=primary/secondary
- nix-cache01: role=build-host
- testvm*: tier=test
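Roughly the shape lib/monitoring.nix emits per target group (values here are examples drawn from the list above, not the actual generated output):

```nix
{
  job_name = "node";
  static_configs = [{
    targets = [ "ns1.home.2rjus.net:9100" ];
    labels = { hostname = "ns1"; role = "dns"; dns_role = "primary"; };
  }];
}
```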
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Move nats-deploy-service.md to completed/ folder
- Update prometheus-scrape-target-labels.md with implementation status
- Add status table showing which steps are complete/partial/not started
- Update cross-references to point to new location
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds log_to_loki function that pushes structured log entries to Loki
at key bootstrap stages (starting, network_ok, vault_*, building,
success, failed). Enables querying bootstrap state via LogQL without
console access.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
TTY output was causing nixos-rebuild to fail. Keep the custom
greeting line to indicate bootstrap image, but use journal+console
for reliable logging.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add a third play to build-and-deploy-template.yml that updates
terraform/variables.tf with the new template name after deploying
to Proxmox. Only updates if the template name has changed.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Display bootstrap banner and live progress on tty1 instead of login prompt
- Add custom getty greeting on other ttys indicating this is a bootstrap image
- Disable getty on tty1 during bootstrap so output is visible
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Fixes false "Some deployments failed" warning in MCP server when
deployments are still in progress.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add three permanent test hosts (testvm01, testvm02, testvm03) with:
- Static IPs: 10.69.13.20-22
- Vault AppRole integration with homelab-deploy policy
- Remote deployment via NATS (homelab.deploy.enable)
- Test tier configuration
Also updates create-host template to include vault.enable and
homelab.deploy.enable by default.
VMs are now bootstrapped and running. Remove temporary flake_branch
and vault_wrapped_token settings so they use master going forward.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The homelab-deploy listener requires access to shared/homelab-deploy/*
secrets. Update hosts-generated.tf and the generator script to include
this policy automatically.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add vault.enable = true to testvm01, testvm02, testvm03
- Add homelab.deploy.enable = true for remote deployment via NATS
- Update create-host template to include these by default
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Three permanent test hosts for validating deployment and bootstrapping
workflow. Each host configured with:
- Static IP (10.69.13.20-22/24)
- Vault AppRole integration
- Bootstrap from deploy-test-hosts branch
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove host entries from terraform/vault/approle.tf on --remove
- Detect and warn about secrets in terraform/vault/secrets.tf
- Include vault kv delete commands in removal instructions
- Update check_entries_exist to return approle status
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The regex patterns expected 6 spaces of indentation but flake.nix uses
8 spaces for host entries. Also updated generated entry template to
match current flake.nix style (using commonModules ++).
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update homelab-deploy input to get metrics support
- Enable metrics endpoint on port 9972
- Add scrape target for Prometheus auto-discovery
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Compare VictoriaMetrics and Thanos as options for extending
metrics retention beyond 30 days while managing disk usage.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add homelab.deploy.enable option (requires vault.enable)
- Create shared homelab-deploy Vault policy for all hosts
- Enable homelab.deploy on all vault-enabled hosts
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add system/homelab-deploy.nix module that automatically enables the
listener on all hosts with vault.enable=true. Uses homelab.host.tier
and homelab.host.role for NATS subject subscriptions.
- Add homelab-deploy access to all host AppRole policies
- Remove manual listener config from vaulttest01 (now handled by system module)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update homelab-deploy to include bugfix. Add CLI to devShell for
easier testing and deployment operations.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Ensure homelab-deploy-listener waits for the NKey secret to be
fetched from Vault before starting.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add homelab-deploy flake input and NixOS module for message-based
deployments across the fleet. Configure DEPLOY account in NATS with
tiered access control (listener, test-deployer, admin-deployer).
Enable listener on vaulttest01 as initial test host.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add a shared `homelab.host` module that provides host metadata for
multiple consumers:
- tier: deployment tier (test/prod) for future homelab-deploy service
- priority: alerting priority (high/low) for Prometheus label filtering
- role: primary role of the host (dns, database, monitoring, etc.)
- labels: free-form labels for additional metadata
Host configurations updated with appropriate values:
- ns1, ns2: role=dns with dns_role labels
- nix-cache01: priority=low, role=build-host
- vault01: role=vault
- jump: role=bastion
- template, template2, testvm01, vaulttest01: tier=test, priority=low
The module is now imported via commonModules in flake.nix, making it
available to all hosts including minimal configurations like template2.
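A sketch of the option declarations (defaults are assumptions):

```nix
{ lib, ... }:
{
  options.homelab.host = {
    # Deployment tier, consumed by homelab-deploy for subject routing.
    tier = lib.mkOption {
      type = lib.types.enum [ "test" "prod" ];
      default = "prod";
    };
    # Alerting priority, propagated as a Prometheus label.
    priority = lib.mkOption {
      type = lib.types.enum [ "high" "low" ];
      default = "high";
    };
    # Primary role of the host (dns, database, monitoring, ...).
    role = lib.mkOption {
      type = lib.types.nullOr lib.types.str;
      default = null;
    };
    # Free-form extra labels.
    labels = lib.mkOption {
      type = lib.types.attrsOf lib.types.str;
      default = { };
    };
  };
}
```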
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
MCP exposes two tools:
- deploy: test-tier only, always available
- deploy_admin: all tiers, requires --enable-admin flag
Three security layers: CLI flag, NATS authz, Claude Code permissions.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Support deploying to all hosts in a tier or all hosts with a role:
- deploy.<tier>.all - broadcast to all hosts in tier
- deploy.<tier>.role.<role> - broadcast to hosts with matching role
MCP can deploy to all test hosts at once, admin can deploy to any group.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add plan for NATS-based deployment service (homelab-deploy) that enables
on-demand NixOS configuration updates via messaging. Features tiered
permissions (test/prod) enforced at NATS layer.
Update prometheus-scrape-target-labels plan to share the homelab.host
module for host metadata (tier, priority, role, labels) - single source
of truth for both deployment tiers and prometheus labels.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Increase duration from 5m to 10m and demote severity from critical to
warning. Brief degraded states during nixos-rebuild are normal and were
causing false positive alerts.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Merge Prometheus metrics and Loki logs into a unified troubleshooting
skill. Adds LogQL query patterns, label reference, and common service
units for log searching.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Reference guide for exploring Prometheus metrics when troubleshooting
homelab issues, including the new nixos_flake_info metrics.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update nixos-exporter to 0.2.3
- Set system.configurationRevision for all hosts so the exporter
can report the flake's git revision
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add nixos-exporter prometheus exporter to track NixOS generation metrics
and flake revision status across all hosts.
Changes:
- Add nixos-exporter flake input
- Add commonModules list in flake.nix for modules shared by all hosts
- Enable nixos-exporter in system/monitoring/metrics.nix
- Configure Prometheus to scrape nixos-exporter on all hosts
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
RemainAfterExit=true kept the service in "active" state, which
prevented OnUnitActiveSec from scheduling new triggers since there
was no new "activation" event. Removing it allows the service to
properly go inactive, enabling the timer to reschedule correctly.
Also fix ExecStart to use lib.getExe for proper path resolution
with writeShellApplication.
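The fixed unit pair looks roughly like this (unit name and refreshScript are illustrative):

```nix
{ lib, ... }:
{
  systemd.services.refresh-job = {
    serviceConfig = {
      Type = "oneshot";
      # No RemainAfterExit: the unit returns to "inactive" after each run,
      # so OnUnitActiveSec below can schedule the next activation.
      ExecStart = lib.getExe refreshScript;
    };
  };
  systemd.timers.refresh-job = {
    wantedBy = [ "timers.target" ];
    timerConfig.OnUnitActiveSec = "30min";
  };
}
```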
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add proposed dns_role label to distinguish primary/secondary DNS
resolvers. This addresses the unbound_low_cache_hit_ratio alert
firing on ns2, which has a cold cache due to low traffic.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Backups to the shared restic repository were all scheduled at exactly
midnight, causing lock conflicts. Adding RandomizedDelaySec spreads
them out over a 2-hour window to prevent simultaneous access.
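Per backup job this amounts to (job name illustrative):

```nix
{
  services.restic.backups.system.timerConfig = {
    OnCalendar = "daily";        # fires at midnight
    RandomizedDelaySec = "2h";   # each host picks a random start within 2h
  };
}
```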
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The homeassistant override key should match the entity type in the
MQTT discovery topic path. For battery sensors, the topic is
homeassistant/sensor/<device>/battery/config, so the key should be
"battery" not "sensor_battery".
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Displays FQDN and flake commit hash with timestamp on login.
Templates can override with their own MOTD via mkDefault.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Convert remaining writeShellScript usages to writeShellApplication for
shellcheck validation and strict bash options.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds a helper script deployed to all hosts for testing feature branches.
Usage: nixos-rebuild-test <action> <branch>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Instead of creating a long-lived Vault token in Terraform (which gets
invalidated when Terraform recreates it), monitoring01 now uses its
existing AppRole credentials to fetch a fresh token for Prometheus.
Changes:
- Add prometheus-metrics policy to monitoring01's AppRole
- Remove vault_token.prometheus_metrics resource from Terraform
- Remove openbao-token KV secret from Terraform
- Add systemd service to fetch AppRole token on boot
- Add systemd timer to refresh token every 30 minutes
This ensures Prometheus always has a valid token without depending on
Terraform state or manual intervention.
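A sketch of the service/timer pair (paths, unit name, and the exact bao invocation are assumptions):

```nix
{ pkgs, ... }:
{
  systemd.services.prometheus-vault-token = {
    serviceConfig.Type = "oneshot";
    path = [ pkgs.openbao ];
    script = ''
      # Log in with the host's AppRole credentials and write a fresh token
      # for Prometheus to use (file locations illustrative).
      bao write -field=token auth/approle/login \
        role_id=@/run/keys/role-id secret_id=@/run/keys/secret-id \
        > /run/keys/prometheus-token
    '';
  };
  systemd.timers.prometheus-vault-token = {
    wantedBy = [ "timers.target" ];
    timerConfig = {
      OnBootSec = "1m";
      OnUnitActiveSec = "30m";
    };
  };
}
```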
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove auth01 host configuration and associated services in preparation
for new auth stack with different provisioning system.
Removed:
- hosts/auth01/ - host configuration
- services/authelia/ - authelia service module
- services/lldap/ - lldap service module
- secrets/auth01/ - sops secrets
- Reverse proxy entries for auth and lldap
- Monitoring alert rules for authelia and lldap
- SOPS configuration for auth01
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Document TrueNAS CORE LDAP integration approach (NFS-only) and
future NixOS NAS migration path with native Kanidm PAM/NSS.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Evaluate options for replacing LLDAP+Authelia with a unified auth solution.
Recommends Kanidm for its native NixOS PAM/NSS integration and built-in OIDC.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Updated plan with:
- Full device inventory from ha1
- Backup verification details
- Branch and commit references
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
WSDCGQ12LM sensors report battery: 0 due to firmware quirk. Override
battery calculation using voltage via homeassistant value_template.
Also adds zigbee_sensor_stale alert for detecting dead sensors regardless
of battery reporting accuracy (1 hour threshold).
Device configuration moved from external devices.yaml to inline NixOS
config for declarative management.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
NATS HTTP monitoring endpoint serves JSON, not Prometheus format.
Use the prometheus-nats-exporter which queries the NATS endpoint
and exposes proper Prometheus metrics.
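In NixOS terms, roughly (option names from the nixpkgs NATS exporter module; the url default is an assumption):

```nix
{
  services.prometheus.exporters.nats = {
    enable = true;
    url = "http://127.0.0.1:8222";  # NATS HTTP monitoring endpoint (JSON)
  };
}
```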
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add exporters and scrape targets for services lacking monitoring:
- PostgreSQL: postgres-exporter on pgdb1
- Authelia: native telemetry metrics on auth01
- Unbound: unbound-exporter with remote-control on ns1/ns2
- NATS: HTTP monitoring endpoint on nats1
- OpenBao: telemetry config and Prometheus scrape with token auth
- Systemd: systemd-exporter on all hosts for per-service metrics
Add alert rules for postgres, auth (authelia + lldap), jellyfin,
vault (openbao), plus extend existing nats and unbound rules.
Add Terraform config for Prometheus metrics policy and token. The
token is created via vault_token resource and stored in KV, so no
manual token creation is needed.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Document Loki log query labels and patterns, and Prometheus job names
with example queries for the lab-monitoring MCP server.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update CLAUDE.md and README.md to reflect that secrets are now managed
by OpenBao, with sops only remaining for ca. Update migration plans
with sops cleanup checklist and auth01 decommission.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The read-based loop split multiline values on newlines, causing only
the first line to be written. Use jq -j to write each key's value
directly to files, preserving multiline content.
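A minimal sketch of the fixed pattern (secret layout and output path are illustrative): jq -j writes each value raw, so embedded newlines survive, unlike a while-read loop that stops at the first newline.

```shell
# Illustrative secret payload with a multiline value.
json='{"data":{"cert":"line1\nline2"}}'
mkdir -p /tmp/secrets
for key in $(printf '%s' "$json" | jq -r '.data | keys[]'); do
  # -j (join-output) prints the raw string with no trailing newline,
  # preserving multiline content exactly.
  printf '%s' "$json" | jq -j --arg k "$key" '.data[$k]' > "/tmp/secrets/$key"
done
```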
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove backup_helper_secret variable and switch shared/backup/password
to auto_generate. New password will be added alongside existing restic
repository key.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace sops-nix secrets with OpenBao vault secrets across all hosts.
Hardcode root password hash, add extractKey option to vault-secrets
module, update Terraform with secrets/policies for all hosts, and
create AppRole provisioning playbook.
Hosts migrated: ha1, monitoring01, ns1, ns2, http-proxy, nix-cache01
Wave 1 hosts (nats1, jelly01, pgdb1) get AppRole policies only.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
nix-cache01 regularly hits high CPU during nix builds, causing flappy
alerts. Keep the 15m threshold for all other hosts.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The step-ca serving certificate is auto-renewed with a 24h lifetime,
so it always triggers the general < 86400s threshold. Exclude it and
add a dedicated step_ca_serving_cert_expiring alert at < 1h instead.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move nix-cache_caddy back to a manual config in prometheus.nix using the
service CNAME (nix-cache.home.2rjus.net) instead of the hostname. The
auto-generated target used nix-cache01.home.2rjus.net which doesn't
match the TLS certificate SAN.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add homelab.monitoring NixOS options (enable, scrapeTargets) following
the same pattern as homelab.dns. Prometheus scrape configs are now
auto-generated from flake host configurations and external targets,
replacing hardcoded target lists.
Also cleans up alert rules: snake_case naming, fix zigbee2mqtt typo,
remove duplicate pushgateway alert, add for clauses to monitoring_rules,
remove hardcoded WireGuard public key, and add new alerts for
certificates, proxmox, caddy, smartctl temperature, filesystem
prediction, systemd state, file descriptors, and host reboots.
Fix the Grafana scrape target port from 3100 to 3000.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add /modules/ and /lib/ to directory structure
- Document homelab.dns options and zone auto-generation
- Update "Adding a New Host" workflow (no manual zone editing)
- Expand DNS Architecture section with auto-generation details
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace static zone file with dynamically generated records:
- Add homelab.dns module with enable/cnames options
- Extract IPs from systemd.network configs (filters VPN interfaces)
- Use git commit timestamp as zone serial number
- Move external hosts to separate external-hosts.nix
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Rename nixos-options to nixpkgs-options and add new nixpkgs-packages
server for package search functionality. Update CLAUDE.md to document
both MCP servers and their available tools.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace custom backup-helper flake input with NixOS native
services.restic.backups module for ha1, monitoring01, and nixos-test1.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Fix bug where new hosts were added outside of nixosConfigurations block
instead of inside it.
Issues fixed:
1. Pattern was looking for "packages =" but actual text is "packages = forAllSystems"
2. Replacement was putting new entry AFTER closing brace instead of BEFORE
3. testvm01 was at top-level flake output instead of in nixosConfigurations
Changes:
- Update pattern to match "packages = forAllSystems"
- Put new entry BEFORE the closing brace of nixosConfigurations
- Move testvm01 to correct location inside nixosConfigurations block
Result: nix flake show now correctly shows testvm01 as NixOS configuration
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fix error "500 can't upload to storage type 'zfspool'" by using "local"
storage pool for cloud-init disks instead of the VM's storage pool.
Cloud-init disks require storage that supports ISO/snippet content types,
which zfspool does not. The "local" storage pool (directory-based) supports
this content type.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fix OpenTofu error where static IP and DHCP branches had different object
structures in the subnets array. Move conditional to network_config level
so both branches return complete, consistent yamlencode() results.
Error was: "The true and false result expressions must have consistent types"
Solution: Make network_config itself conditional rather than the subnets
array, ensuring both branches return the same type (string from yamlencode).
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Remove mention of .generated/ directory and clarify that cloud-init.tf
manages all cloud-init disks, not just branch-specific ones.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Replace SSH upload approach with native proxmox_cloud_init_disk resource
for cleaner, more maintainable cloud-init management.
Changes:
- Use proxmox_cloud_init_disk for all VMs (not just branch-specific ones)
- Include SSH keys, network config, and metadata in cloud-init disk
- Conditionally include NIXOS_FLAKE_BRANCH for VMs with flake_branch set
- Replace ide2 cloudinit disk with cdrom reference to cloud-init disk
- Remove built-in cloud-init parameters (ciuser, sshkeys, etc.)
- Remove cicustom parameter (no longer needed)
- Remove proxmox_host variable (no SSH uploads required)
- Remove .gitignore entry for .generated/ directory
Benefits:
- No SSH access to Proxmox required
- All cloud-init config managed in Terraform
- Consistent approach for all VMs
- Cleaner state management
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Implement dual improvements to enable efficient testing of pipeline changes
without polluting master branch:
1. Add --force flag to create-host script
- Skip hostname/IP uniqueness validation
- Overwrite existing host configurations
- Update entries in flake.nix and terraform/vms.tf (no duplicates)
- Useful for iterating on configurations during testing
2. Add branch support to bootstrap mechanism
- Bootstrap service reads NIXOS_FLAKE_BRANCH environment variable
- Defaults to master if not set
- Uses branch in git URL via ?ref= parameter
- Service loads environment from /etc/environment
3. Add cloud-init disk support for branch configuration
- VMs can specify flake_branch field in terraform/vms.tf
- Automatically generates cloud-init snippet setting NIXOS_FLAKE_BRANCH
- Uploads snippet to Proxmox via SSH
- Production VMs omit flake_branch and use master
4. Update documentation
- Document --force flag usage in create-host README
- Add branch testing examples in terraform README
- Update TODO.md with testing workflow
- Add .generated/ to gitignore
Testing workflow: Create feature branch, set flake_branch in VM definition,
deploy with terraform, iterate with --force flag, clean up before merging.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add filesystem configuration matching Proxmox image builder output
to allow template2 to build with both `nixos-rebuild build` and
`nixos-rebuild build-image --image-variant proxmox`.
Filesystem specs discovered from running VM:
- ext4 filesystem with label "nixos"
- x-systemd.growfs option for automatic partition growth
- No swap partition
Using lib.mkDefault ensures these definitions work for normal builds
while allowing the Proxmox image builder to override when needed.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add systemd service that automatically bootstraps freshly deployed VMs
with their host-specific NixOS configuration from the flake repository.
Changes:
- hosts/template2/bootstrap.nix: New systemd oneshot service that:
- Runs after cloud-init completes (ensures hostname is set)
- Reads hostname from hostnamectl (set by cloud-init from Terraform)
- Checks network connectivity via HTTPS (curl)
- Runs nixos-rebuild boot with flake URL
- Reboots on success, fails gracefully with clear errors on failure
- hosts/template2/configuration.nix: Configure cloud-init datasource
- Changed from NoCloud to ConfigDrive (used by Proxmox)
- Allows cloud-init to receive config from Proxmox
- hosts/template2/default.nix: Import bootstrap.nix module
- terraform/vms.tf: Add cloud-init disk to VMs
- Configure disks.ide.ide2.cloudinit block
- Removed invalid cloudinit_cdrom_storage parameter
- Enables Proxmox to inject cloud-init configuration
- TODO.md: Mark Phase 3 as completed
This eliminates the manual nixos-rebuild step from the deployment workflow.
VMs now automatically pull and apply their configuration on first boot.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Implements Phase 2 of the automated deployment pipeline.
This commit adds a Python CLI tool that automates the creation of NixOS host
configurations, eliminating manual boilerplate and reducing errors.
Features:
- Python CLI using typer framework with rich terminal UI
- Comprehensive validation (hostname format/uniqueness, IP subnet/uniqueness)
- Jinja2 templates for NixOS configurations
- Automatic updates to flake.nix and terraform/vms.tf
- Support for both static IP and DHCP configurations
- Dry-run mode for safe previews
- Packaged as Nix derivation and added to devShell
Usage:
create-host --hostname myhost --ip 10.69.13.50/24
The tool generates:
- hosts/<hostname>/default.nix
- hosts/<hostname>/configuration.nix
- Updates flake.nix with new nixosConfigurations entry
- Updates terraform/vms.tf with new VM definition
All generated configurations include full system imports (monitoring, SOPS,
autoupgrade, etc.) and are validated with nix flake check and tofu validate.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Phase 1 is now fully implemented with parameterized multi-VM deployments
via OpenTofu. Updated status, tasks, and added implementation details.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Implements Phase 1 of the OpenTofu deployment plan:
- Replace single-VM configuration with locals-based for_each pattern
- Support multiple VMs in single deployment
- Automatic DHCP vs static IP detection
- Configurable defaults with per-VM overrides
- Dynamic outputs for VM IPs and specifications
New files:
- outputs.tf: Dynamic outputs for deployed VMs
- vms.tf: VM definitions using locals.vms map
Updated files:
- variables.tf: Added default variables for VM configuration
- README.md: Comprehensive documentation and examples
Removed files:
- vm.tf: Replaced by new vms.tf (archived as vm.tf.old, then removed)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Document multi-phase plan for automating NixOS host creation, deployment, and configuration on Proxmox including OpenTofu parameterization, config generation, bootstrap mechanism, secrets management, and Nix-based DNS automation.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add automated workflow for building and deploying NixOS VMs on Proxmox including template2 host configuration, Ansible playbook for image building/deployment, and OpenTofu configuration for VM provisioning with cloud-init.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
description: Analyzes audit logs to investigate user activity, command execution, and suspicious behavior on hosts. Can be used standalone for security reviews or called by other agents for behavioral context.
tools: Read, Grep, Glob
mcpServers:
- lab-monitoring
---
You are a security auditor for a NixOS homelab infrastructure. Your task is to analyze audit logs and reconstruct user activity on hosts.
## Input
You may receive:
- A host or list of hosts to investigate
- A time window (e.g., "last hour", "today", "between 14:00 and 15:00")
- Optional context: specific events to look for, user to focus on, or suspicious activity to investigate
- Optional context from a parent investigation (e.g., "a service stopped at 14:32, what happened around that time?")
## Audit Log Structure
Logs are shipped to Loki via promtail. Audit events use these labels:
- `host` - hostname
- `systemd_unit` - typically `auditd.service` for audit logs
description: Investigates a single system alarm by querying Prometheus metrics and Loki logs, analyzing configuration files for affected hosts/services, and providing root cause analysis.
tools: Read, Grep, Glob
mcpServers:
- lab-monitoring
- git-explorer
---
You are an alarm investigation specialist for a NixOS homelab infrastructure. Your task is to analyze a single alarm and determine its root cause.
## Input
You will receive information about an alarm, which may include:
- Alert name and severity
- Affected host or service
- Alert expression/threshold
- Current value or status
- When it started firing
## Investigation Process
### 1. Understand the Alert Context
Start by understanding what the alert is measuring:
- Use `get_alert` if you have a fingerprint, or `list_alerts` to find matching alerts
- Use `get_metric_metadata` to understand the metric being monitored
- Use `search_metrics` to find related metrics
### 2. Query Current State
Gather evidence about the current system state:
- Use `query` to check the current metric values and related metrics
- Use `list_targets` to verify the host/service is being scraped successfully
- Look for correlated metrics that might explain the issue
### 3. Check Service Logs
Search for relevant log entries using `query_logs`. Focus on service-specific logs and errors.
**Query strategies (start narrow, expand if needed):**
- Start with `limit: 20-30`, increase only if needed
- Use tight time windows: `start: "15m"` or `start: "30m"` initially
- Filter to specific services: `{host="<hostname>", systemd_unit="<service>.service"}`
- Search for errors: `{host="<hostname>"} |= "error"` or `|= "failed"`
**Common patterns:**
- Service logs: `{host="<hostname>", systemd_unit="<service>.service"}`
- All errors on host: `{host="<hostname>"} |= "error"`
- Journal for a unit: `{host="<hostname>", systemd_unit="nginx.service"} |= "failed"`
**Avoid:**
- Using `start: "1h"` with no filters on busy hosts
- Limits over 50 without specific filters
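The strategies above can be combined into an incremental sequence (host and service names here are illustrative):
```
# 1. Start narrow: the suspect service, last 15 minutes, small limit
{host="web01", systemd_unit="nginx.service"}

# 2. If nothing relevant turns up, widen to all errors on the host
{host="web01"} |= "error"

# 3. Only then extend the time window or raise the limit
{host="web01", systemd_unit="nginx.service"} |= "failed"
```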
### 4. Investigate User Activity
For any analysis of user activity, **always spawn the `auditor` agent**. Do not query audit logs (EXECVE, USER_LOGIN, etc.) directly - delegate this to the auditor.
**Always call the auditor when:**
- A service stopped unexpectedly (may have been manually stopped)
- A process was killed or a config was changed
- You need to know who was logged in around the time of an incident
- You need to understand what commands led to the current state
- The cause isn't obvious from service logs alone
**Do NOT try to query audit logs yourself.** The auditor is specialized for:
- Parsing EXECVE records and reconstructing command lines
- Correlating SSH sessions with commands executed
- Identifying suspicious patterns
- Filtering out systemd/nix-store noise
**Example prompt for auditor:**
```
Investigate user activity on <hostname> between <start_time> and <end_time>.
Context: The prometheus-node-exporter service stopped at 14:32.
Determine if it was manually stopped and by whom.
```
Incorporate the auditor's findings into your timeline and root cause analysis.
### 5. Check Configuration (if relevant)
If the alert relates to a NixOS-managed service:
- Check host configuration in `/hosts/<hostname>/`
- Check service modules in `/services/<service>/`
- Look for thresholds, resource limits, or misconfigurations
- Check `homelab.host` options for tier/priority/role metadata
### 6. Check for Configuration Drift
Use the git-explorer MCP server to compare the host's deployed configuration against the current master branch. This helps identify:
- Hosts running outdated configurations
- Recent changes that might have caused the issue
- Whether a fix has already been committed but not deployed
**Step 1: Get the deployed revision from Prometheus**
```promql
nixos_flake_info{hostname="<hostname>"}
```
The `current_rev` label contains the deployed git commit hash.
**Step 2: Check if the host is behind master**
```
resolve_ref("master") # Get current master commit
is_ancestor(deployed, master) # Check if host is behind
```
**Step 3: See what commits are missing**
```
commits_between(deployed, master) # List commits not yet deployed
```
**Step 4: Check which files changed**
```
get_diff_files(deployed, master) # Files modified since deployment
```
Look for files in `hosts/<hostname>/`, `services/<relevant-service>/`, or `system/` that affect this host.
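Outside the MCP tools, the same checks map onto plain git commands in a local clone. The sketch below demonstrates them on a throwaway repository; in practice `deployed` comes from the `current_rev` label and `master` from the real remote:

```shell
# Throwaway repo standing in for the flake repository
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example -c user.name=ci commit -q --allow-empty -m "deployed state"
deployed=$(git rev-parse HEAD)
git -c user.email=ci@example -c user.name=ci commit -q --allow-empty -m "newer change"
master=$(git rev-parse HEAD)

# is_ancestor: exits 0 when the deployed commit is behind master
if git merge-base --is-ancestor "$deployed" "$master"; then behind=yes; else behind=no; fi

# commits_between: how many commits are not yet deployed
missing=$(git rev-list --count "$deployed..$master")

# get_diff_files: files modified since deployment
git diff --name-only "$deployed" "$master"
echo "behind=$behind missing=$missing"
```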
**Step 5: View configuration at the deployed revision**
- **Scrape failures**: Service down, firewall issues, port changes
**Note:** If a service stopped unexpectedly and service logs don't show a crash or error, it was likely manual intervention - call the auditor to investigate.
## Output Format
Provide a concise report with one of two outcomes:
### If Root Cause Identified:
```
## Root Cause
[1-2 sentence summary of the root cause]
## Timeline
[Chronological sequence of relevant events leading to the alert]
- HH:MM:SSZ - [Event description]
- HH:MM:SSZ - [Event description]
- HH:MM:SSZ - [Alert fired]
### Timeline sources
- HH:MM:SSZ - [Source for information about this event. Which metric or log file]
- HH:MM:SSZ - [Source for information about this event. Which metric or log file]
- HH:MM:SSZ - [Alert fired]
## Evidence
- [Specific metric values or log entries that support the conclusion]
- [Configuration details if relevant]
## Recommended Actions
1. [Specific remediation step]
2. [Follow-up actions if any]
```
### If Root Cause Unclear:
```
## Investigation Summary
[What was checked and what was found]
## Possible Causes
- [Hypothesis 1 with supporting/contradicting evidence]
- [Hypothesis 2 with supporting/contradicting evidence]
## Additional Information Needed
- [Specific data, logs, or access that would help]
- [Suggested queries or checks for the operator]
```
## Guidelines
- Be concise and actionable
- Reference specific metric names and values as evidence
- Include log snippets when they're informative
- Don't speculate without evidence
- If the alert is a false positive or expected behavior, explain why
- Consider the host's tier (test vs prod) when assessing severity
- Build a timeline from log timestamps and metrics to show the sequence of events
- **Query logs incrementally**: start with narrow filters and small limits, expand only if needed
- **Always delegate to the auditor agent** for any user activity analysis - never query EXECVE or audit logs directly
description: Reference guide for exploring Prometheus metrics and Loki logs when troubleshooting homelab issues. Use when investigating system state, deployments, service health, or searching logs.
---
# Observability Troubleshooting Guide
Quick reference for exploring Prometheus metrics and Loki logs to troubleshoot homelab issues.
## Available Tools
Use the `lab-monitoring` MCP server tools:
**Metrics:**
- `search_metrics` - Find metrics by name substring
- `get_metric_metadata` - Get type/help for a specific metric
- `query` - Execute PromQL queries
- `list_targets` - Check scrape target health
- `list_alerts` / `get_alert` - View active alerts
**Logs:**
- `query_logs` - Execute LogQL queries against Loki
- `list_labels` - List available log labels
- `list_label_values` - List values for a specific label
VMs provisioned from template2 send bootstrap progress directly to Loki via curl (before promtail is available). These logs use `job="bootstrap"` with additional labels:
- `host` - Target hostname
- `branch` - Git branch being deployed
- `stage` - Bootstrap stage (see table below)
**Bootstrap stages:**
| Stage | Message | Meaning |
|-------|---------|---------|
| `starting` | Bootstrap starting for \<host\> (branch: \<branch\>) | Bootstrap service has started |
| `network_ok` | Network connectivity confirmed | Can reach git server |
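For example, to follow a single VM's bootstrap progress, or to check that it passed the network stage, query by the labels above:
```
{job="bootstrap", host="<hostname>"}
{job="bootstrap", host="<hostname>", stage="network_ok"}
```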
description: Create a planning document for a future homelab project. Use when the user wants to document ideas for future work without implementing immediately.
argument-hint: [topic or feature to plan]
---
# Quick Plan Generator
Create a planning document for a future homelab infrastructure project. Plans are for documenting ideas and approaches that will be implemented later, not immediately.
## Input
The user provides: $ARGUMENTS
## Process
1. **Understand the topic**: Research the codebase to understand:
   - Current state of related systems
   - Existing patterns and conventions
   - Relevant NixOS options or packages
   - Any constraints or dependencies
2. **Evaluate options**: If there are multiple approaches, research and compare them with pros/cons.
3. **Draft the plan**: Create a markdown document following the structure below.
4. **Save the plan**: Write to `docs/plans/<topic-slug>.md` using a kebab-case filename derived from the topic.
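The kebab-case slug in step 4 can be derived mechanically; a minimal sketch (the topic string is illustrative):

```shell
topic="Remote Access VPN (WireGuard)"
# lowercase, collapse runs of non-alphanumerics to "-", trim edge dashes
slug=$(printf '%s' "$topic" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-//; s/-$//')
path="docs/plans/${slug}.md"
echo "$path"   # docs/plans/remote-access-vpn-wireguard.md
```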
## Plan Structure
Use these sections as appropriate (not all plans need every section):
```markdown
# Title
## Overview/Goal
Brief description of what this plan addresses and why.
## Current State
What exists today that's relevant to this plan.
## Options Evaluated (if multiple approaches)
For each option:
- **Option Name**
- **Pros:** bullet points
- **Cons:** bullet points
- **Verdict:** brief assessment
Or use a comparison table for structured evaluation.
## Recommendation/Decision
What approach is recommended and why. Include rationale.
## Implementation Steps
Numbered phases or steps. Be specific but not overly detailed.
Can use sub-sections for major phases.
## Open Questions
Things still to be determined. Use checkbox format:
- [ ] Question 1?
- [ ] Question 2?
## Notes (optional)
Additional context, caveats, or references.
```
## Style Guidelines
- **Concise**: Use bullet points, avoid verbose paragraphs
- **Technical but accessible**: Include NixOS config snippets when relevant
- **Future-oriented**: These are plans, not specifications
- **Acknowledge uncertainty**: Use "Open Questions" for unresolved decisions
- **Reference existing patterns**: Mention how this fits with existing infrastructure
- **Tables for comparisons**: Use markdown tables when comparing options
- **Practical focus**: Emphasize what needs to happen, not theory
## Examples of Good Plans
Reference these existing plans for style guidance:
- `docs/plans/auth-system-replacement.md` - Good option evaluation with table
- `docs/plans/truenas-migration.md` - Good decision documentation with rationale
- `docs/plans/remote-access.md` - Good multi-option comparison
- `docs/plans/prometheus-scrape-target-labels.md` - Good implementation detail level
## After Creating the Plan
1. Tell the user the plan was saved to `docs/plans/<filename>.md`
2. Summarize the key points
3. Ask if they want any adjustments before committing
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Repository Overview
This is a Nix Flake-based NixOS configuration repository for managing a homelab infrastructure consisting of 16 server configurations. The repository uses a modular architecture with shared system configurations, reusable service modules, and per-host customization.
## Common Commands
### Building Configurations
```bash
# List all available configurations
nix flake show
# Build a specific host configuration locally (without deploying)
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
```
**Important:** Do NOT pipe `nix build` commands to other commands like `tail` or `head`. Piping can hide errors and make builds appear successful when they actually failed. Always run `nix build` without piping to see the full output.
```bash
# BAD - hides errors
nix build .#create-host 2>&1 | tail -20
# GOOD - shows all output and errors
nix build .#create-host
```
### Deployment
Do not automatically deploy changes. Deployments are usually done by updating the master branch, and then triggering the auto update on the specific host.
### SSH Commands
Do not run SSH commands directly. If a command needs to be run on a remote host, provide the command to the user and ask them to run it manually.
### Testing Feature Branches on Hosts
All hosts have the `nixos-rebuild-test` helper script for testing feature branches before merging:
```bash
# On the target host, test a feature branch
nixos-rebuild-test boot <branch-name>
nixos-rebuild-test switch <branch-name>
# Additional arguments are passed through to nixos-rebuild
nixos-rebuild-test boot my-feature --show-trace
```
When working on a feature branch that requires testing on a live host, suggest using this command instead of the full flake URL syntax.
### Flake Management
```bash
# Check flake for errors
nix flake check
```
Do not run `nix flake update`. Should only be done manually by user.
### Development Environment
```bash
# Enter development shell
nix develop
```
The devshell provides: `ansible`, `tofu` (OpenTofu), `bao` (OpenBao CLI), `create-host`, and `homelab-deploy`.
**Important:** When suggesting commands that use devshell tools, always use `nix develop -c <command>` syntax rather than assuming the user is already in a devshell. For example:
```bash
# Good - works regardless of current shell
nix develop -c tofu plan
# Avoid - requires user to be in devshell
tofu plan
```
**OpenTofu:** Use the `-chdir` option instead of `cd` when running tofu commands in subdirectories:
```bash
# Good - uses -chdir option
nix develop -c tofu -chdir=terraform plan
nix develop -c tofu -chdir=terraform/vault apply
# Avoid - changing directories
cd terraform && tofu plan
```
### Secrets Management
Secrets are managed by OpenBao (Vault) using AppRole authentication. Most hosts use the
`vault.secrets` option defined in `system/vault-secrets.nix` to fetch secrets at boot.
Terraform manages the secrets and AppRole policies in `terraform/vault/`.
### Git Workflow
**Important:** Never commit directly to `master` unless the user explicitly asks for it. Always create a feature branch for changes.
**Important:** Never amend commits to `master` unless the user explicitly asks for it. Amending rewrites history and causes issues for deployed configurations.
**Important:** Do not use `gh pr create` to create pull requests. The git server does not support GitHub CLI for PR creation. Instead, push the branch and let the user create the PR manually via the web interface.
When starting a new plan or task, the first step should typically be to create and checkout a new branch with an appropriate name (e.g., `git checkout -b dns-automation` or `git checkout -b fix-nginx-config`).
### Plan Management
When creating plans for large features, follow this workflow:
1. When implementation begins, save a copy of the plan to `docs/plans/` (e.g., `docs/plans/feature-name.md`)
2. Once the feature is fully implemented, move the plan to `docs/plans/completed/`
### Git Commit Messages
Commit messages should follow the format: `topic: short description`
Examples:
- `flake: add opentofu to devshell`
- `template2: add proxmox image configuration`
- `terraform: add VM deployment configuration`
### Clipboard
To copy text to the clipboard, pipe to `wl-copy` (Wayland):
```bash
echo "text" | wl-copy
```
### NixOS Options and Packages Lookup
Two MCP servers are available for searching NixOS options and packages:
- **nixpkgs-options** - Search and lookup NixOS configuration option documentation
- **nixpkgs-packages** - Search and lookup Nix packages from nixpkgs
**Session Setup:** At the start of each session, index the nixpkgs revision from `flake.lock` to ensure documentation matches the project's nixpkgs version:
1. Read `flake.lock` and find the `nixpkgs` node's `rev` field
2. Call `index_revision` with that git hash (both servers share the same index)
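Step 1 can be scripted; a sketch using `jq` against the standard flake.lock layout (the lock file below is a minimal synthetic stand-in):

```shell
lock=$(mktemp)
cat > "$lock" <<'EOF'
{"nodes":{"nixpkgs":{"locked":{"rev":"0123456789abcdef0123456789abcdef01234567"}}}}
EOF
# In the repo this is simply: jq -r '.nodes.nixpkgs.locked.rev' flake.lock
rev=$(jq -r '.nodes.nixpkgs.locked.rev' "$lock")
echo "$rev"
```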
**Options Tools (nixpkgs-options):**
- `search_options` - Search for options by name or description (e.g., query "nginx" or "postgresql")
- `get_option` - Get full details for a specific option (e.g., `services.loki.configuration`)
- `get_file` - Fetch the source file from nixpkgs that declares an option
**Package Tools (nixpkgs-packages):**
- `search_packages` - Search for packages by name or description (e.g., query "nginx" or "python")
- `get_package` - Get full details for a specific package by attribute path (e.g., `firefox`, `python312Packages.requests`)
- `get_file` - Fetch the source file from nixpkgs that defines a package
This ensures documentation matches the exact nixpkgs version (currently NixOS 25.11) used by this flake.
### Lab Monitoring
The **lab-monitoring** MCP server provides access to Prometheus metrics and Loki logs. Use the `/observability` skill for detailed reference on:
- Available Prometheus jobs and exporters
- Loki labels and LogQL query syntax
- Bootstrap log monitoring for new VMs
- Common troubleshooting workflows
The skill contains up-to-date information about all scrape targets, host labels, and example queries.
### Deploying to Test Hosts
The **homelab-deploy** MCP server enables remote deployments to test-tier hosts via NATS messaging.
**Available Tools:**
- `deploy` - Deploy NixOS configuration to test-tier hosts
- `list_hosts` - List available deployment targets
**Deploy Parameters:**
- `hostname` - Target a specific host (e.g., `vaulttest01`)
- `role` - Deploy to all hosts with a specific role (e.g., `vault`)
- Automatic Vault credential provisioning via `vault_wrapped_token`
OpenTofu outputs the VM's IP address after deployment for easy SSH access.
**Automatic Vault Credential Provisioning:**
VMs can receive Vault (OpenBao) credentials automatically during bootstrap:
1. OpenTofu generates a wrapped token via `terraform/vault/` and stores it in the VM configuration
2. Cloud-init passes `VAULT_WRAPPED_TOKEN` and `NIXOS_FLAKE_BRANCH` to the bootstrap script
3. The bootstrap script unwraps the token to obtain AppRole credentials
4. Credentials are written to `/var/lib/vault/approle/` before the NixOS rebuild
This eliminates the need for manual `provision-approle.yml` playbook runs on new VMs. Bootstrap progress is logged to Loki with `job="bootstrap"` labels.
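Step 3 roughly corresponds to the following CLI flow. This is a sketch: `bao unwrap` mirrors `vault unwrap`, and the exact response field paths shown are illustrative, not confirmed against the bootstrap script:
```bash
# Unwrap the one-time token into AppRole credentials (field paths illustrative)
bao unwrap -format=json "$VAULT_WRAPPED_TOKEN" > /tmp/approle.json
jq -r '.data.role_id'   /tmp/approle.json > /var/lib/vault/approle/role_id
jq -r '.data.secret_id' /tmp/approle.json > /var/lib/vault/approle/secret_id
```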
#### Template Rebuilding and Terraform State
When the Proxmox template is rebuilt (via `build-and-deploy-template.yml`), the template name may change. This would normally cause Terraform to want to recreate all existing VMs, but that's unnecessary since VMs are independent once cloned.
**Solution**: The `terraform/vms.tf` file includes a lifecycle rule to ignore certain attributes that don't need management:
```hcl
lifecycle {
ignore_changes = [
clone, # Template name can change without recreating VMs
startup_shutdown, # Proxmox sets defaults (-1) that we don't need to manage
]
}
```
This means:
- **clone**: Existing VMs are not affected by template name changes; only new VMs use the updated template
- **startup_shutdown**: Proxmox sets default startup order/delay values (-1) that Terraform would otherwise try to remove
- You can safely update `default_template_name` in `terraform/variables.tf` without recreating VMs
-`tofu plan` won't show spurious changes for Proxmox-managed defaults
**When rebuilding the template:**
1. Run `nix develop -c ansible-playbook -i playbooks/inventory.ini playbooks/build-and-deploy-template.yml`
2. Update `default_template_name` in `terraform/variables.tf` if the name changed
3. Run `tofu plan` - should show no VM recreations (only template name in state)
4. Run `tofu apply` - updates state without touching existing VMs
5. New VMs created after this point will use the new template
### Adding a New Host
See [docs/host-creation.md](docs/host-creation.md) for the complete host creation pipeline, including:
- Using the `create-host` script to generate host configurations
- Deploying VMs and secrets with OpenTofu
- Monitoring the bootstrap process via Loki
- Verification and troubleshooting steps
**Note:** DNS A records and Prometheus node-exporter scrape targets are auto-generated from the host's `systemd.network.networks` static IP configuration. No manual zone file or Prometheus config editing is required.
### Important Patterns
**Overlay usage**: Access unstable packages via `pkgs.unstable.<package>` (defined in flake.nix overlay-unstable)
**Service composition**: Services in `/services/` are designed to be imported by multiple hosts. Keep them modular and reusable.
**Hardware configuration reuse**: Multiple hosts share `/hosts/template/hardware-configuration.nix` for VM instances.
**State version**: All hosts use stateVersion `"23.11"` - do not change this on existing hosts.
**Firewall**: Disabled on most hosts (trusted network). Enable selectively in host configuration if needed.
**Shell scripts**: Use `pkgs.writeShellApplication` instead of `pkgs.writeShellScript` or `pkgs.writeShellScriptBin` for creating shell scripts. `writeShellApplication` provides automatic shellcheck validation, sets strict bash options (`set -euo pipefail`), and allows declaring `runtimeInputs` for dependencies. When referencing the executable path (e.g., in `ExecStart`), use `lib.getExe myScript` to get the proper `bin/` path.
### Monitoring Stack
All hosts ship metrics and logs to `monitoring01`:
- **Metrics**: Prometheus scrapes node-exporter from all hosts
- **Logs**: Promtail ships logs to Loki on monitoring01
- **Access**: Grafana at monitoring01 for visualization
- **Tracing**: Tempo for distributed tracing
- **Profiling**: Pyroscope for continuous profiling
**Scrape Target Auto-Generation:**
Prometheus scrape targets are automatically generated from host configurations, following the same pattern as DNS zone generation:
- **Node-exporter**: All flake hosts with static IPs are automatically added as node-exporter targets
- **Service targets**: Defined via `homelab.monitoring.scrapeTargets` in service modules
- **External targets**: Non-flake hosts defined in `/services/monitoring/external-targets.nix`
- **Library**: `lib/monitoring.nix` provides `generateNodeExporterTargets` and `generateScrapeConfigs`
Service modules declare their scrape targets directly via `homelab.monitoring.scrapeTargets`. The Prometheus config on monitoring01 auto-generates scrape configs from all hosts. See "Homelab Module Options" section for available options.
To add monitoring targets for non-NixOS hosts, edit `/services/monitoring/external-targets.nix`.
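As an illustration only (the attribute names below are hypothetical; the actual schema lives in `modules/homelab/`), a service module might declare:
```nix
# Hypothetical shape: expose an extra exporter on this host
homelab.monitoring.scrapeTargets = [
  { job = "nginx"; port = 9113; }
];
```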
### DNS Architecture
- `ns1` (10.69.13.5) - Primary authoritative DNS + resolver
- `ns2` (10.69.13.6) - Secondary authoritative DNS (AXFR from ns1)
- All hosts point to ns1/ns2 for DNS resolution
**Zone Auto-Generation:**
DNS zone entries are automatically generated from host configurations:
- **Flake-managed hosts**: A records extracted from `systemd.network.networks` static IPs
- **CNAMEs**: Defined via `homelab.dns.cnames` option in host configs
- **External hosts**: Non-flake hosts defined in `/services/ns/external-hosts.nix`
- **Excluded interfaces**: Interfaces matching VPN/tunnel patterns (`wg*`, `tun*`, `tap*`) are skipped during zone generation
To add DNS entries for non-NixOS hosts, edit `/services/ns/external-hosts.nix`.
### Homelab Module Options
The `modules/homelab/` directory defines custom options used across hosts for automation and metadata.
**Host options (`homelab.host.*`):**
- `tier` - Deployment tier: `test` or `prod`. Test-tier hosts can receive remote deployments and have different credential access.
- `priority` - Alerting priority: `high` or `low`. Controls alerting thresholds for the host.
- `role` - Primary role designation (e.g., `dns`, `database`, `bastion`, `vault`)
- `labels` - Free-form key-value metadata for host categorization
**DNS options (`homelab.dns.*`):**
- `enable` (default: `true`) - Include host in DNS zone generation
- `cnames` (default: `[]`) - List of CNAME aliases pointing to this host
**Monitoring options (`homelab.monitoring.*`):**
- `enable` (default: `true`) - Include host in Prometheus node-exporter scrape targets
- `scrapeTargets` (default: `[]`) - Additional scrape targets exposed by this host
**Deploy options (`homelab.deploy.*`):**
- `enable` (default: `false`) - Enable NATS-based remote deployment listener. When enabled, the host listens for deployment commands via NATS and can be targeted by the `homelab-deploy` MCP server.
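A test host combining these options might look like (values illustrative):
```nix
homelab.host = {
  tier = "test";       # eligible for remote deployments
  priority = "low";    # relaxed alerting thresholds
  role = "vault";
  labels = { env = "staging"; };  # free-form metadata; shape illustrative
};
homelab.deploy.enable = true;     # listen for NATS deployment commands
```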
NixOS Flake-based configuration repository for a homelab infrastructure. All hosts run NixOS 25.11 and are managed declaratively through this single repository.
## Hosts
| Host | Role |
|------|------|
| `ns1`, `ns2` | Primary/secondary authoritative DNS |
| `ca` | Internal Certificate Authority |
| `ha1` | Home Assistant + Zigbee2MQTT + Mosquitto |
**Automatic DNS zone generation** - A records are derived from each host's static IP configuration. CNAME aliases are defined via `homelab.dns.cnames`. No manual zone file editing required.
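For example, a host serving Grafana behind aliases only needs (alias names illustrative):
```nix
homelab.dns.cnames = [ "grafana" "alerts" ];
```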
**OpenBao (Vault) secrets** - Hosts authenticate via AppRole and fetch secrets at boot. Secrets and policies are managed as code in `terraform/vault/`. Legacy SOPS remains only for the `ca` host.
**Daily auto-upgrades** - All hosts pull from the master branch and automatically rebuild and reboot on a randomized schedule.
**Shared base configuration** - Every host automatically gets SSH, monitoring (node-exporter + Promtail), internal ACME certificates, and Nix binary cache access via the `system/` modules.
**Proxmox VM provisioning** - Build VM templates with Ansible and deploy VMs with OpenTofu from `terraform/`.
**OpenBao (Vault) secrets** - Centralized secrets management with AppRole authentication, PKI infrastructure, and automated bootstrap. Managed as code in `terraform/vault/`.
## Usage
```bash
# Enter dev shell (provides ansible, opentofu, openbao, create-host)
nix develop
```
This document describes the process for creating new hosts in the homelab infrastructure.
## Overview
We use the `create-host` script to create new hosts, which generates default configurations from a template. We then use OpenTofu to deploy both secrets and VMs. The VMs boot using a template image (built from `hosts/template2`), which starts a bootstrap process. This bootstrap process applies the host's NixOS configuration and then reboots into the new config.
## Prerequisites
All tools are available in the devshell: `create-host`, `bao` (OpenBao CLI), `tofu`.
```bash
nix develop
```
## Steps
Steps marked with **USER** must be performed by the user due to credential requirements.
1. **USER**: Run `create-host --hostname <name> --ip <ip/prefix>`
2. Edit the auto-generated configurations in `hosts/<hostname>/` to import whatever modules are needed for its purpose
3. Add any secrets needed to `terraform/vault/`
4. Edit the VM specs in `terraform/vms.tf` if needed. To deploy from a branch other than master, add `flake_branch = "<branch>"` to the VM definition
5. Push configuration to master (or the branch specified by `flake_branch`)
6. **USER**: Apply terraform:
```bash
nix develop -c tofu -chdir=terraform/vault apply
nix develop -c tofu -chdir=terraform apply
```
7. Once terraform completes, a VM boots in Proxmox using the template image
8. The VM runs the `nixos-bootstrap` service, which applies the host config and reboots
9. After reboot, the host should be operational
10. Trigger auto-upgrade on `ns1` and `ns2` to propagate DNS records for the new host
11. Trigger auto-upgrade on `monitoring01` to add the host to Prometheus scrape targets
## Tier Specification
New hosts should set `homelab.host.tier` in their configuration:
```nix
homelab.host.tier = "test"; # or "prod"
```
- **test** - Test-tier hosts can receive remote deployments via the `homelab-deploy` MCP server and have different credential access. Use for staging/testing.
- **prod** - Production hosts. Deployments require direct access or the CLI with appropriate credentials.
## Observability
During the bootstrap process, status updates are sent to Loki. Query bootstrap logs with:
```
{job="bootstrap", host="<hostname>"}
```
### Bootstrap Stages
The bootstrap process reports these stages via the `stage` label:
| Stage | Message | Meaning |
|-------|---------|---------|
| `starting` | Bootstrap starting for \<host\> (branch: \<branch\>) | Bootstrap service has started |
| `network_ok` | Network connectivity confirmed | Can reach git server |
5. Confirm expected services are running and producing logs
## Troubleshooting
### Bootstrap Failed
#### Common Issues
* VM has trouble running the initial nixos-rebuild, usually because it must compile packages from scratch when they are not available in our local nix-cache.
#### Troubleshooting
1. Check bootstrap logs in Loki - if they never progress past `building`, the rebuild likely consumed all resources:
```
{job="bootstrap", host="<hostname>"}
```
2. **USER**: SSH into the host and check the bootstrap service:
```bash
ssh root@<hostname>
journalctl -u nixos-bootstrap.service
```
3. If the build failed due to resource constraints, increase VM specs in `terraform/vms.tf` and redeploy, or rerun `nixos-rebuild boot` manually on the host once resources allow.
This document describes the physical and virtual infrastructure components that support the NixOS-managed servers in this repository.
## Overview
The homelab consists of several core infrastructure components:
- **Proxmox VE** - Hypervisor hosting all NixOS VMs
- **TrueNAS** - Network storage and backup target
- **Ubiquiti EdgeRouter** - Primary router and gateway
- **Mikrotik Switch** - Core network switching
All NixOS configurations in this repository run as VMs on Proxmox and rely on these underlying infrastructure components.
## Network Topology
### Subnets
VLAN numbers are based on the third octet of the IP address.
TODO: VLAN naming is currently inconsistent across router/switch/Proxmox configurations. Need to standardize VLAN names and update all device configs to use consistent naming.
- `10.69.8.x` - Kubernetes (no longer in use)
- `10.69.12.x` - Core services
- `10.69.13.x` - NixOS VMs and core services
- `10.69.30.x` - Client network 1
- `10.69.31.x` - Client network 2
- `10.69.99.x` - Management network
### Core Network Services
- **Gateway**: Web UI exposed on 10.69.10.1
- **DNS**: ns1 (10.69.13.5), ns2 (10.69.13.6)
- **Primary DNS Domain**: `home.2rjus.net`
## Hardware Components
### Proxmox Hypervisor
**Purpose**: Hosts all NixOS VMs defined in this repository
**Hardware**:
- CPU: AMD Ryzen 9 3900X 12-Core Processor
- RAM: 96GB (94Gi)
- Storage: 1TB NVMe SSD (nvme0n1)
**Management**:
- Web UI: `https://pve1.home.2rjus.net:8006`
- Cluster: Standalone
- Version: Proxmox VE 8.4.16 (kernel 6.8.12-18-pve)
**VM Provisioning**:
- Template VM: ID 9000 (built from `hosts/template2`)
- See `/terraform` directory for automated VM deployment using OpenTofu
**Storage**:
- ZFS pool: `rpool` on NVMe partition (nvme0n1p3)
- Total capacity: ~900GB (232GB used, 667GB available)
- Configuration: Single disk (no RAID)
- Scrub status: Last scrub completed successfully with 0 errors
TODO: Improve monitoring for physical hosts (proxmox, nas)
TODO: Improve monitoring for networking equipment
All NixOS VMs ship metrics to monitoring01 via node-exporter and logs via Promtail. See `/services/monitoring/` for the observability stack configuration.
Deploy a modern, unified authentication solution for the homelab. Provides central user management, SSO for web services, and consistent UID/GID mapping for NAS permissions.
## Goals
1. **Central user database** - Manage users across all homelab hosts from a single source
2. **Linux PAM/NSS integration** - Users can SSH into hosts using central credentials
3. **UID/GID consistency** - Proper POSIX attributes for NAS share permissions
4. **OIDC provider** - Single sign-on for homelab web services (Grafana, etc.)
## Solution: Kanidm
Kanidm was chosen for the following reasons:
| Requirement | Kanidm Support |
|-------------|----------------|
| Central user database | Native |
| Linux PAM/NSS (host login) | Native NixOS module |
- `certificate_expiring_soon` - Warned when any monitored TLS cert had < 24h validity
- `step_ca_serving_cert_expiring` - Critical alert for step-ca's own serving certificate
- `certificate_check_error` - Warned when TLS connection check failed
- `step_ca_certificate_expiring` - Critical alert for step-ca issued certificates
## Why It Was Removed
1. **step-ca decommissioned**: The primary monitoring target (step-ca) is no longer in use
2. **Outdated codebase**: labmon was a custom tool that required maintenance
3. **Limited value**: With ACME auto-renewal, certificates should renew automatically
## Current State
ACME certificates are now issued by OpenBao PKI at `vault.home.2rjus.net:8200`. The ACME protocol handles automatic renewal, and certificates are typically renewed well before expiration.
## Future Needs
While ACME handles renewal automatically, we should consider monitoring for:
1. **ACME renewal failures**: Alert when a certificate fails to renew
- Could monitor ACME client logs (via Loki queries)
- Could check certificate file modification times
2. **Certificate expiration as backup**: Even with auto-renewal, a last-resort alert for certificates approaching expiration would catch renewal failures
3. **Certificate transparency**: Monitor for unexpected certificate issuance
### Potential Solutions
1. **Prometheus blackbox_exporter**: Can probe TLS endpoints and export certificate expiration metrics
- `probe_ssl_earliest_cert_expiry` metric
- Already a standard tool, well-maintained
2. **Custom Loki alerting**: Query ACME service logs for renewal failures
- Works with existing infrastructure
- No additional services needed
3. **Node-exporter textfile collector**: Script that checks local certificate files and writes expiration metrics
## Status
**Not yet implemented.** This document serves as a placeholder for future work on certificate monitoring.
- [x] Tested ACME issuance on vaulttest01 successfully
- [x] Verified certificate chain and trust
- [x] Migrate vault01's own certificate
- [x] Created `bootstrap-vault-cert` script for initial certificate issuance via bao CLI
- [x] Issued certificate with SANs (vault01.home.2rjus.net + vault.home.2rjus.net)
- [x] Updated service to read certificates from `/var/lib/acme/vault01.home.2rjus.net/`
- [x] Configured ACME for automatic renewals
- [ ] Migrate hosts from step-ca to OpenBao
- [x] Tested on vaulttest01 (non-production host)
- [ ] Standardize hostname usage across all configurations
- [ ] Use `vault.home.2rjus.net` (CNAME) consistently everywhere
- [ ] Update NixOS configurations to use CNAME instead of vault01
- [ ] Update Terraform configurations to use CNAME
- [ ] Audit and fix mixed usage of vault01.home.2rjus.net vs vault.home.2rjus.net
- [ ] Update `system/acme.nix` to use OpenBao ACME endpoint
- [ ] Change server to `https://vault.home.2rjus.net:8200/v1/pki_int/acme/directory`
- [ ] Roll out to all hosts via auto-upgrade
- [ ] Configure SSH CA in OpenBao (optional, future work)
- [ ] Enable SSH secrets engine (`ssh/` mount)
- [ ] Generate SSH signing keys
- [ ] Create roles for host and user certificates
- [ ] Configure TTLs and allowed principals
- [ ] Distribute SSH CA public key to all hosts
- [ ] Update sshd_config to trust OpenBao CA
- [ ] Decommission step-ca
- [ ] Verify all ACME services migrated and working
- [ ] Stop step-ca service on ca host
- [ ] Archive step-ca configuration for backup
- [ ] Update documentation
**Implementation Details (2026-02-03):**
**ACME Configuration Fix:**
The key blocker was that OpenBao's PKI mount was filtering out required ACME response headers. The solution was to add `allowed_response_headers` to the Terraform mount configuration:
```hcl
allowed_response_headers = [
"Replay-Nonce", # Required for ACME nonce generation
"Link", # Required for ACME navigation
"Location" # Required for ACME resource location
]
```
**Cluster Path Configuration:**
ACME requires the configured cluster path to include the full API path to the PKI mount (e.g. `https://vault.home.2rjus.net:8200/v1/pki_int`).
New VMs bootstrapped from template2 don't use our local nix cache (nix-cache.home.2rjus.net) during the initial `nixos-rebuild boot`. This means the first build downloads everything from cache.nixos.org, which is slower and uses more bandwidth.
## Solution
Update the template2 base image to include the nix cache configuration, so new VMs immediately benefit from cached builds during bootstrap.
## Implementation
1. Add nix cache configuration to `hosts/template2/configuration.nix`:
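A sketch of what that configuration could look like, assuming the cache signs its store paths (the public key below is a placeholder, not the real signing key):

```nix
{
  nix.settings = {
    # Prefer the local cache, fall back to the public cache
    substituters = [
      "https://nix-cache.home.2rjus.net"
      "https://cache.nixos.org"
    ];
    # Placeholder -- replace with the cache's actual signing key
    trusted-public-keys = [
      "nix-cache.home.2rjus.net:AAAA...placeholder"
    ];
  };
}
```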
**Goal:** Automatically generate DNS entries from host configurations
**Approach:** Leverage Nix to generate zone file entries from flake host configurations
Since most hosts use static IPs defined in their NixOS configurations, we can extract this information and automatically generate A records. This keeps DNS in sync with the actual host configs.
## Implementation
- [x] Add optional CNAME field to host configurations
- [x] Added `homelab.dns.cnames` option in `modules/homelab/dns.nix`
- [x] Added `homelab.dns.enable` to allow opting out (defaults to true)
- [x] Documented in CLAUDE.md
- [x] Create Nix function to extract DNS records from all hosts
- [x] Created `lib/dns-zone.nix` with extraction functions
- [x] Parses each host's `networking.hostName` and `systemd.network.networks` IP configuration
- [x] Collects CNAMEs from `homelab.dns.cnames`
- [x] Filters out VPN interfaces (wg*, tun*, tap*, vti*)
- [x] Generates complete zone file with A and CNAME records
- [x] Integrate auto-generated records into zone files
- [x] External hosts separated to `services/ns/external-hosts.nix`
- [x] Zone includes comments showing which records are auto-generated vs external
- [x] Update zone file serial number automatically
Audit of services running in the homelab that lack monitoring coverage, either missing Prometheus scrape targets, alerting rules, or both.
## Services with No Monitoring
### PostgreSQL (`pgdb1`)
- **Current state:** No scrape targets, no alert rules
- **Risk:** A database outage would go completely unnoticed by Prometheus
- **Recommendation:** Enable `services.prometheus.exporters.postgres` (available in nixpkgs). This exposes connection counts, query throughput, replication lag, table/index stats, and more. Add alerts for at least `postgres_down` (systemd unit state) and connection pool exhaustion.
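A minimal sketch of enabling the exporter on pgdb1 (option names from the nixpkgs module; running as the local superuser is one common way to let it connect over the unix socket):

```nix
{
  services.prometheus.exporters.postgres = {
    enable = true;
    # Connect as the local postgres superuser over the unix socket
    runAsLocalSuperUser = true;
  };
}
```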
### Authelia (`auth01`)
- **Current state:** No scrape targets, no alert rules
- **Risk:** The authentication gateway being down blocks access to all proxied services
- **Recommendation:** Authelia exposes Prometheus metrics natively at `/metrics`. Add a scrape target and at minimum an `authelia_down` systemd unit state alert.
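The systemd-state fallback could be sketched like this via the node-exporter systemd collector (the exact unit name on NixOS may differ, e.g. an instanced `authelia-<instance>.service`):

```yaml
- alert: authelia_down
  # node_systemd_unit_state is 1 when the unit is in the given state,
  # so == 0 means the unit is not active
  expr: node_systemd_unit_state{name="authelia.service", state="active"} == 0
  for: 5m
```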
### LLDAP (`auth01`)
- **Current state:** No scrape targets, no alert rules
- **Risk:** LLDAP is a dependency of Authelia -- if LDAP is down, authentication breaks even if Authelia is running
- **Recommendation:** Add an `lldap_down` systemd unit state alert. LLDAP does not expose Prometheus metrics natively, so systemd unit monitoring via node-exporter may be sufficient.
### Vault / OpenBao (`vault01`)
- **Current state:** No scrape targets, no alert rules
- **Risk:** Secrets management service failures go undetected
- **Recommendation:** OpenBao supports Prometheus telemetry output natively. Add a scrape target for the telemetry endpoint and alerts for `vault_down` (systemd unit) and seal status.
### Gitea Actions Runner
- **Current state:** No scrape targets, no alert rules
- **Risk:** CI/CD failures go undetected
- **Recommendation:** Add at minimum a systemd unit state alert. The runner itself has limited metrics exposure.
## Services with Partial Monitoring
### Jellyfin (`jelly01`)
- **Current state:** Has scrape targets (port 8096), metrics are being collected, but zero alert rules
- **Metrics available:** 184 metrics, all .NET runtime / ASP.NET Core level. No Jellyfin-specific metrics (active streams, library size, transcoding sessions). Key useful metrics:
- `microsoft_aspnetcore_hosting_failed_requests` - rate of HTTP errors
- **Recommendation:** Add a `jellyfin_down` alert using either `up{job="jellyfin"} == 0` or systemd unit state. Consider alerting on sustained `failed_requests` rate increase.
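The `up`-based variant could look like the following sketch (job name assumed from the existing scrape target):

```yaml
- alert: jellyfin_down
  # up is 0 when Prometheus cannot scrape the target
  expr: up{job="jellyfin"} == 0
  for: 5m
```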
### NATS (`nats1`)
- **Current state:** Has a `nats_down` alert (systemd unit state via node-exporter), but no NATS-specific metrics
- **Metrics available:** NATS has a built-in `/metrics` endpoint exposing connection counts, message throughput, JetStream consumer lag, and more
- **Recommendation:** Add a scrape target for the NATS metrics endpoint. Consider alerts for connection count spikes, slow consumers, and JetStream storage usage.
### DNS - Unbound (`ns1`, `ns2`)
- **Current state:** Has `unbound_down` alert (systemd unit state), but no DNS query metrics
- **Available in nixpkgs:** `services.prometheus.exporters.unbound.enable` (package: `prometheus-unbound-exporter` v0.5.0). Exposes query counts, cache hit ratios, response types (SERVFAIL, NXDOMAIN), upstream latency.
- **Recommendation:** Enable the unbound exporter on ns1/ns2. Add alerts for cache hit ratio drops and SERVFAIL rate spikes.
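Enabling it is essentially a one-liner; treat this as a sketch, since the exporter may additionally need access to unbound's control socket depending on the local setup:

```nix
{
  # Sketch: control-socket/interface wiring may be required
  # for the exporter to talk to the local unbound instance
  services.prometheus.exporters.unbound.enable = true;
}
```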
### DNS - NSD (`ns1`, `ns2`)
- **Current state:** Has `nsd_down` alert (systemd unit state), no NSD-specific metrics
- **Available in nixpkgs:** Nothing. No exporter package or NixOS module. Community `nsd_exporter` exists but is not packaged.
- **Recommendation:** The existing systemd unit alert is likely sufficient. NSD is a simple authoritative-only server with limited operational metrics. Not worth packaging a custom exporter for now.
## Existing Monitoring (for reference)
These services have adequate alerting and/or scrape targets:
No per-service CPU, memory, or IO metrics are collected. The existing node-exporter systemd collector only provides unit state (active/inactive/failed), socket stats, and timer triggers. While systemd tracks per-unit resource usage via cgroups internally (visible in `systemctl status` and `systemd-cgtop`), this data is not exported to Prometheus.
### Available Solution
The `prometheus-systemd-exporter` package (v0.7.0) is available in nixpkgs with a ready-made NixOS module:
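A minimal sketch of enabling it (collector flags, e.g. for IP accounting, may need tuning beyond this):

```nix
{
  services.prometheus.exporters.systemd.enable = true;
}
```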
This exporter reads cgroup data and exposes per-unit metrics including:
- CPU seconds consumed per service
- Memory usage per service
- Task/process counts per service
- Restart counts
- IO usage
### Recommendation
Enable on all hosts via the shared `system/` config (same pattern as node-exporter). Add a corresponding scrape job on monitoring01. This would give visibility into resource consumption per service across the fleet, useful for capacity planning and diagnosing noisy-neighbor issues on shared hosts.
## Suggested Priority
1. **PostgreSQL** - Critical infrastructure, easy to add with existing nixpkgs module
2. **Authelia + LLDAP** - Auth outage affects all proxied services
3. **Unbound exporter** - Ready-to-go NixOS module, just needs enabling
4. **Jellyfin alerts** - Metrics already collected, just needs alert rules
5. **NATS metrics** - Built-in endpoint, just needs a scrape target
Create a message-based deployment system that allows triggering NixOS configuration updates on-demand, rather than waiting for the daily auto-upgrade timer. This enables faster iteration when testing changes and immediate fleet-wide deployments.
## Goals
1. **On-demand deployment** - Trigger config updates immediately via NATS message
2. **Targeted deployment** - Deploy to specific hosts or all hosts
3. **Branch/revision support** - Test feature branches before merging to master
4. **MCP integration** - Allow Claude Code to trigger deployments during development
## Current State
- **Auto-upgrade**: All hosts run `nixos-upgrade.service` daily, pulling from master
- **Manual testing**: `nixos-rebuild-test <action> <branch>` helper exists on all hosts
- **NATS**: Running on nats1 with JetStream enabled, using NKey authentication
- **Accounts**: ADMIN (system) and HOMELAB (user workloads with JetStream)
| Action | Subject | MCP | Admin CLI |
|--------|---------|-----|-----------|
| Deploy to all test hosts | `deploy.test.all` | ✅ | ✅ |
| Deploy to all prod hosts | `deploy.prod.all` | ❌ | ✅ |
| Deploy to all DNS servers | `deploy.prod.role.dns` | ❌ | ✅ |
All NKeys stored in Vault - MCP gets limited credentials, admin CLI gets full-access credentials.
### Host Metadata
Rather than defining `tier` in the listener config, use a central `homelab.host` module that provides host metadata for multiple consumers. This aligns with the approach proposed in `docs/plans/prometheus-scrape-target-labels.md`.
**Status:** The `homelab.host` module is implemented in `modules/homelab/host.nix`.
Hosts can be filtered by tier using `config.homelab.host.tier`.
**Module definition (in `modules/homelab/host.nix`):**
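A sketch of the option declarations, inferred from the options referenced throughout this document (types are assumptions; the `"prod"` and `"high"` defaults are documented elsewhere in this plan):

```nix
{ lib, ... }:
{
  options.homelab.host = {
    tier = lib.mkOption {
      type = lib.types.enum [ "prod" "test" ];
      default = "prod"; # documented default
    };
    priority = lib.mkOption {
      type = lib.types.enum [ "high" "low" ];
      default = "high"; # documented default
    };
    role = lib.mkOption {
      # free-form string, e.g. "dns" or "build-host"
      type = lib.types.nullOr lib.types.str;
      default = null;
    };
    labels = lib.mkOption {
      # extra per-host labels, e.g. labels.dns_role = "primary"
      type = lib.types.attrsOf lib.types.str;
      default = { };
    };
  };
}
```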
- **Audit logging**: Log all deployment requests with source identity
- **Network isolation**: NATS only accessible from internal network
## Decisions
All open questions have been resolved. See Notes section for decision rationale.
## Notes
- The existing `nixos-rebuild-test` helper provides a good reference for the rebuild logic
- Uses NATS request/reply pattern for immediate validation feedback and completion status
- Consider using NATS headers for metadata (request ID, timestamp)
- **Timeout decision**: Metrics show no-change upgrades complete in 5-55 seconds. A 10-minute default provides ample headroom for actual updates with package downloads. Per-host override available for hosts with known longer build times.
- **Rollback**: Not needed as a separate feature - deploy an older commit hash to effectively rollback.
- **Offline hosts**: No message persistence - if host is offline, deploy fails. Daily auto-upgrade is the safety net. Avoids complexity of JetStream deduplication (host coming online and applying 10 queued updates instead of just the latest).
- **Deploy history**: Use existing Loki - listener logs deployments to journald, queryable via Loki. No need for separate JetStream persistence.
- **Naming**: `homelab-deploy` - ties it to the infrastructure rather than implementation details.
Recreate ns1 using the OpenTofu workflow after the existing VM entered emergency mode due to incorrect hardware-configuration.nix (hardcoded UUIDs that don't match actual disk layout).
## Current ns1 Configuration to Preserve
- **IP:** 10.69.13.5/24
- **Gateway:** 10.69.13.1
- **Role:** Primary DNS (authoritative + resolver)
- `{hostname="ns1"}` - all metrics from ns1 (any job/port)
- `node_cpu_seconds_total{hostname="monitoring01"}` - specific metric by hostname
- `up{role="dns"}` - all DNS servers
- `up{tier="test"}` - all test-tier hosts
---
## Goal
Add support for custom per-host labels on Prometheus scrape targets, enabling alert rules to reference host metadata (priority, role) instead of hardcoding instance names.
**Related:** This plan shares the `homelab.host` module with `docs/plans/completed/nats-deploy-service.md`, which uses the same metadata for deployment tier assignment.
## Motivation
Some hosts have workloads that make generic alert thresholds inappropriate. For example, `nix-cache01` regularly hits high CPU during builds, requiring a longer `for` duration on `high_cpu_load`. Currently this is handled by excluding specific instance names in PromQL expressions, which is brittle and doesn't scale.
With per-host labels, alert rules can use semantic filters like `{priority!="low"}` instead of `{instance!="nix-cache01.home.2rjus.net:9100"}`.
## Proposed Labels
### `priority`
Indicates alerting importance. Hosts with `priority = "low"` can have relaxed thresholds or longer durations in alert rules.
Values: `"high"` (default), `"low"`
### `role`
Describes the function of the host. Useful for grouping in dashboards and targeting role-specific alert rules.
Values: free-form string, e.g. `"dns"`, `"build-host"`, `"database"`, `"monitoring"`
**Note on multiple roles:** Prometheus labels are strictly string values, not lists. For hosts that serve multiple roles there are a few options:
- **Separate boolean labels:** `role_build_host = "true"`, `role_cache_server = "true"` -- flexible but verbose, and requires updating the module when new roles are added.
- **Delimited string:** `role = "build-host,cache-server"` -- works with regex matchers (`{role=~".*build-host.*"}`), but regex matching is less clean and more error-prone.
- **Pick a primary role:** `role = "build-host"` -- simplest, and probably sufficient since most hosts have one primary role.
Recommendation: start with a single primary role string. If multi-role matching becomes a real need, switch to separate boolean labels.
### `dns_role`
For DNS servers specifically, distinguish between primary and secondary resolvers. The secondary resolver (ns2) receives very little traffic and has a cold cache, making generic cache hit ratio alerts inappropriate.
Values: `"primary"`, `"secondary"`
Example use case: The `unbound_low_cache_hit_ratio` alert fires on ns2 because its cache hit ratio (~62%) is lower than ns1 (~90%). This is expected behavior since ns2 gets ~100x less traffic. With a `dns_role` label, the alert can either exclude secondaries or use different thresholds:
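For example (a sketch only -- the metric names are illustrative, and the actual ratio expression depends on what the exporter exposes):

```yaml
- alert: unbound_low_cache_hit_ratio
  # Only alert on the primary resolver; a low hit ratio on the
  # secondary is expected due to its cold cache
  expr: >
    rate(unbound_cache_hits_total{dns_role="primary"}[15m])
    / rate(unbound_queries_total{dns_role="primary"}[15m]) < 0.8
  for: 30m
```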
This implementation uses a shared `homelab.host` module that provides host metadata for multiple consumers (Prometheus labels, deployment tiers, etc.). See also `docs/plans/completed/nats-deploy-service.md` which uses the same module for deployment tier assignment.
### 1. Create `homelab.host` module
✅ **Complete.** The module is in `modules/homelab/host.nix`.
Create `modules/homelab/host.nix` with shared host metadata options:
This requires grouping hosts by their label attrset and producing one `static_configs` entry per unique label combination. Hosts with default values (priority=high, no role, no labels) get grouped together with no extra labels (preserving current behavior).
✅ **Complete.** Now uses structured static_configs output.
Change the node-exporter scrape config to use the new structured output:
```nix
# Before:
static_configs = [{ targets = nodeExporterTargets; }];
# After:
static_configs = nodeExporterTargets;
```
### 4. Set metadata on hosts
✅ **Complete.** All relevant hosts have metadata configured. Note: The implementation filters by `role` rather than `priority`, which matches the existing nix-cache01 configuration.
Example in `hosts/nix-cache01/configuration.nix`:
```nix
homelab.host = {
  priority = "low"; # relaxed alerting thresholds
  role = "build-host";
};
```
**Note:** Current implementation only sets `role = "build-host"`. Consider adding `priority = "low"` when label propagation is implemented.
Example in `hosts/ns1/configuration.nix`:
```nix
homelab.host = {
  role = "dns";
  labels.dns_role = "primary";
};
```
**Note:** `tier` and `priority` use defaults ("prod" and "high"), which is the intended behavior. The current ns1/ns2 configurations match this pattern.
- `high_cpu_load`: Replaced `instance!="nix-cache01..."` with `role!="build-host"` for standard hosts (15m duration) and `role="build-host"` for build hosts (2h duration).
- `unbound_low_cache_hit_ratio`: Added `dns_role="primary"` filter to only alert on the primary DNS resolver (secondary has a cold cache).
### 6. Labels for `generateScrapeConfigs` (service targets)
✅ **Complete.** Host labels are now propagated to all auto-generated service scrape targets (unbound, homelab-deploy, nixos-exporter, etc.). This enables semantic filtering on any service metric, such as using `dns_role="primary"` with the unbound job.
Three Aqara Zigbee temperature sensors report `battery: 0` in their MQTT payload, making the `hass_sensor_battery_percent` Prometheus metric useless for battery monitoring on these devices.
Affected sensors:
- **Temp Living Room** (`0x54ef441000a54d3c`) — WSDCGQ12LM
### Solution 1: Calculate battery from voltage in Zigbee2MQTT (Implemented)
Override the Home Assistant battery entity's `value_template` in Zigbee2MQTT device configuration to calculate battery percentage from voltage.
**Formula:** `(voltage - 2100) / 9` (maps 2100-3000 mV to 0-100%)
**Changes in `services/home-assistant/default.nix`:**
- Device configuration moved from external `devices.yaml` to inline NixOS config
- Three affected sensors have `homeassistant.sensor_battery.value_template` override
- All 12 devices now declaratively managed
**Expected battery values based on current voltages:**
| Sensor | Voltage | Expected Battery |
|--------|---------|------------------|
| Temp Living Room | 2710 mV | ~68% |
| Temp Office | 2658 mV | ~62% |
| temp_server | 2765 mV | ~74% |
### Solution 2: Alert on sensor staleness (Implemented)
Added Prometheus alert `zigbee_sensor_stale` in `services/monitoring/rules.yml` that fires when a Zigbee temperature sensor hasn't updated in over 1 hour. This provides defense-in-depth for detecting dead sensors regardless of battery reporting accuracy.
- `/var/lib/vault` — Contains AppRole credentials; not backed up (can be re-provisioned via Ansible)
- `/var/lib/sops-nix` — Legacy; ha1 uses Vault now
## Post-Deployment Steps
After deploying to ha1:
1. Restart zigbee2mqtt service (automatic on NixOS rebuild)
2. In Home Assistant, the battery entities may need to be re-discovered:
- Go to Settings → Devices & Services → MQTT
- The new `value_template` should take effect after entity re-discovery
- If not, try disabling and re-enabling the battery entities
## Notes
- Device configuration is now declarative in NixOS. Future device additions via Zigbee2MQTT frontend will need to be added to the NixOS config to persist.
- The `devices.yaml` file on ha1 will be overwritten on service start but can be removed after confirming the new config works.
- The NixOS zigbee2mqtt module defaults to `devices = "devices.yaml"` but our explicit inline config overrides this.
Build a Prometheus exporter for metrics specific to our homelab infrastructure. Unlike the generic nixos-exporter, this covers services and patterns unique to our environment.
## Current State
### Existing Exporters
- **node-exporter** (all hosts): System metrics
- **systemd-exporter** (all hosts): Service restart counts, IP accounting
- **labmon** (monitoring01): TLS certificate monitoring, step-ca health
Current Prometheus configuration retains metrics for 30 days (`retentionTime = "30d"`). Extending retention further raises disk usage concerns on the homelab hypervisor with limited local storage.
Prometheus does not support downsampling - it stores all data at full resolution until the retention period expires, then deletes it entirely.
## Current Configuration
Location: `services/monitoring/prometheus.nix`
- **Retention**: 30 days
- **Scrape interval**: 15s
- **Features**: Alertmanager, Pushgateway, auto-generated scrape configs from flake hosts
- **Storage**: Local disk on monitoring01
## Options Evaluated
### Option 1: VictoriaMetrics
VictoriaMetrics is a Prometheus-compatible TSDB with significantly better compression (5-10x smaller storage footprint).
**NixOS Options Available:**
- `services.victoriametrics.enable`
- `services.victoriametrics.prometheusConfig` - accepts Prometheus scrape config format
- `services.victoriametrics.retentionPeriod` - e.g., "6m" for 6 months
- `services.vmagent` - dedicated scraping agent
- `services.vmalert` - alerting rules evaluation
**Pros:**
- Simple migration - single service replacement
- Same PromQL query language - Grafana dashboards work unchanged
- Same scrape config format - existing auto-generated configs work as-is
- 5-10x better compression means 30 days of Prometheus data could become 180+ days
- Lightweight, single binary
**Cons:**
- No automatic downsampling (relies on compression alone)
- Alerting requires switching to vmalert instead of Prometheus alertmanager integration
- Would need to migrate existing data or start fresh
**Migration Steps:**
1. Replace `services.prometheus` with `services.victoriametrics`
2. Move scrape configs to `prometheusConfig`
3. Set up `services.vmalert` for alerting rules
4. Update Grafana datasource to VictoriaMetrics port (8428)
5. Keep Alertmanager for notification routing
### Option 2: Thanos
Thanos extends Prometheus with long-term storage and automatic downsampling by uploading data to object storage.
**NixOS Options Available:**
- `services.thanos.sidecar` - uploads Prometheus blocks to object storage
- `services.thanos.compact` - compacts and downsamples data
- `services.thanos.query` - unified query gateway
- `services.thanos.query-frontend` - query caching and parallelization
- `services.thanos.downsample` - dedicated downsampling service
**Downsampling Behavior:**
- Raw resolution kept for configurable period (default: indefinite)
- 5-minute resolution created after 40 hours
- 1-hour resolution created after 10 days
**Retention Configuration (in compactor):**
```nix
services.thanos.compact = {
  retention.resolution-raw = "30d"; # Keep raw for 30 days
  retention.resolution-5m = "180d"; # Keep 5m samples for 6 months
  retention.resolution-1h = "2y"; # Keep 1h samples for 2 years
};
```
**Pros:**
- True downsampling - older data uses progressively less storage
- Keep metrics for years with minimal storage impact
- Prometheus continues running unchanged
- Existing Alertmanager integration preserved
**Cons:**
- Requires object storage (MinIO, S3, or local filesystem)
- Multiple services to manage (sidecar, compactor, query)
- More complex architecture
- Additional infrastructure (MinIO) may be needed
**Required Components:**
1. Thanos Sidecar (runs alongside Prometheus)
2. Object storage (MinIO or local filesystem)
3. Thanos Compactor (handles downsampling)
4. Thanos Query (provides unified query endpoint)
**Migration Steps:**
1. Deploy object storage (MinIO or configure filesystem backend)
2. Add Thanos sidecar pointing to Prometheus data directory
3. Add Thanos compactor with retention policies
4. Add Thanos query gateway
5. Update Grafana datasource to Thanos Query port (10902)
| Aspect | VictoriaMetrics | Thanos |
|--------|-----------------|--------|
| Grafana changes | Change port only | Change port only |
| Alerting changes | Need vmalert | Keep existing |
## Recommendation
**Start with VictoriaMetrics** for simplicity. The compression alone may provide 6+ months of retention in the same disk space currently used for 30 days.
If multi-year retention with true downsampling becomes necessary, Thanos can be evaluated later. However, it requires deploying object storage infrastructure (MinIO) which adds operational complexity.
Tracking the zram change to verify it resolves OOM issues during nixos-upgrade on low-memory hosts.
## Background
On 2026-02-08, ns2 (2GB RAM) experienced an OOM kill during nixos-upgrade. The Nix evaluation process consumed ~1.6GB before being killed by the kernel. ns1 (manually increased to 4GB) succeeded with the same upgrade.
Root cause: 2GB RAM is insufficient for Nix flake evaluation without swap.
## Fix Applied
**Commit:** `1674b6a` - system: enable zram swap for all hosts
**Merged:** 2026-02-08 ~12:15 UTC
**Change:** Added `zramSwap.enable = true` to `system/zram.nix`, providing ~2GB compressed swap on all hosts.
## Timeline
| Time (UTC) | Event |
|------------|-------|
| 05:00:46 | ns2 nixos-upgrade OOM killed |
| 05:01:47 | `nixos_upgrade_failed` alert fired |
| 12:15 | zram commit merged to master |
| 12:19 | ns2 rebooted with zram enabled |
| 12:20 | ns1 rebooted (memory reduced to 2GB via tofu) |
## Hosts Affected
All 2GB VMs that run nixos-upgrade:
- ns1, ns2 (DNS)
- vault01
- testvm01, testvm02, testvm03
- kanidm01
## Metrics to Monitor
Check these in Grafana or via PromQL to verify the fix:
### Swap availability (should be ~2GB after upgrade)
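For example (metric names are standard node_exporter; the instance matcher is an assumption about local label values):

```promql
# Total swap should be ~2GB on the affected hosts after reboot
node_memory_SwapTotal_bytes{instance=~"(ns1|ns2|vault01|testvm01|testvm02|testvm03|kanidm01).*"}

# Swap in use (spikes during nixos-upgrade are expected and fine)
node_memory_SwapTotal_bytes - node_memory_SwapFree_bytes
```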
Proxmox supports memory ballooning, which allows VMs to dynamically grow/shrink memory allocation based on demand. The balloon driver inside the guest communicates with the hypervisor to release or reclaim memory pages.
Configuration in `terraform/vms.tf`:
```hcl
memory = 4096 # maximum memory
balloon = 2048 # minimum memory (shrinks to this when idle)
```
Pros:
- VMs get memory on-demand without reboots
- Better host memory utilization
- Solves upgrade OOM without permanently allocating 4GB
Cons:
- Requires QEMU guest agent running in guest
- Guest can experience memory pressure if host is overcommitted
Ballooning and zram are complementary - ballooning provides headroom from the host, zram provides overflow within the guest.
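On the NixOS side, the guest agent requirement is a single option (sketch):

```nix
# Required for ballooning: lets the hypervisor query and reclaim guest memory
services.qemuGuest.enable = true;
```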
Extend the existing `homelab-deploy` tool to support a "build" command that triggers builds on the cache host. This reuses the NATS infrastructure already in place.
| Alternative | Pros | Cons |
|-------------|------|------|
| New nix-cache-tool | Clean separation | Duplicate NATS boilerplate, new credentials |
| Gitea Actions webhook | No custom tooling | Less flexible, tied to Gitea |
**Recommendation:** Extend `homelab-deploy` with a build subcommand. The tool already has NATS client code, authentication handling, and a listener module in NixOS.
### Implementation
1. Add new message type to homelab-deploy: `build.<host>` subject
2. Listener on nix-cache01 subscribes to `build.>` wildcard
3. On message receipt, builds the specified host and returns success/failure
4. CLI command: `homelab-deploy build <hostname>` or `homelab-deploy build --all`
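The listener-side handling in step 3 can be sketched in shell (the `build.<host>` subject shape is from step 1; the actual homelab-deploy internals are not shown here, so names are illustrative):

```shell
#!/usr/bin/env sh
# Hypothetical handler: map an incoming NATS subject to a nix build command.
subject="build.ns2"      # in practice, taken from the received message
host="${subject#build.}" # strip the "build." prefix to get the hostname
echo "nix build .#nixosConfigurations.${host}.config.system.build.toplevel"
```

The real handler would execute the build and publish success/failure on the NATS reply subject.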
### Benefits
- Trigger rebuild for specific host to ensure it's cached
- Could be called from CI after merging PRs
- Reuses existing NATS infrastructure and auth
- Progress/status could stream back via NATS reply
| Option | Approach | Pros | Cons |
|--------|----------|------|------|
| **A: Self-hosted runner** | Build on nix-cache01 | Fast (local cache), simple | Ties up cache host during build |
| **B: Gitea Actions only** | Use container runner | Clean separation | Slow (no cache), resource limits |
| **C: Hybrid** | Trigger builds on nix-cache01 via NATS from Actions | Best of both | More complex |
**Recommendation:** Option A with nix-cache01 as the runner. The host is already running Gitea Actions runner and has the cache. Building all ~16 hosts is disk I/O heavy but feasible on dedicated hardware.
### Workflow Steps
1. Workflow runs on schedule (daily or weekly)
2. Creates branch `flake-update/YYYY-MM-DD`
3. Runs `nix flake update --commit-lock-file`
4. Builds each host: `nix build .#nixosConfigurations.<host>.config.system.build.toplevel`
5. If all succeed:
- Fast-forward merge to master
- Delete feature branch
6. If any fail:
- Create PR from the update branch
- Attach build logs as PR comment
- Label PR with `needs-review` or `build-failure`
- Do NOT merge automatically
### Workflow File Changes
```yaml
# New: .github/workflows/flake-update-safe.yaml
name: Safe flake update
on:
  schedule:
    - cron: "0 2 * * 0" # Weekly on Sunday at 2 AM
  workflow_dispatch: # Manual trigger
jobs:
  update-and-validate:
    runs-on: homelab # Use self-hosted runner on nix-cache01
    steps:
      - uses: actions/checkout@v4
        with:
          ref: master
          fetch-depth: 0 # Need full history for merge
      - name: Create update branch
        run: |
          BRANCH="flake-update/$(date +%Y-%m-%d)"
          git checkout -b "$BRANCH"
          # Each step runs in its own shell, so persist the name for later steps
          echo "BRANCH=$BRANCH" >> "$GITHUB_ENV"
      - name: Update flake
        run: nix flake update --commit-lock-file
      - name: Build all hosts
        id: build
        run: |
          set -o pipefail # don't let tee mask a failed nix build
          FAILED=""
          for host in $(nix flake show --json | jq -r '.nixosConfigurations | keys[]'); do
            echo "Building $host..."
            if ! nix build ".#nixosConfigurations.$host.config.system.build.toplevel" 2>&1 | tee "build-$host.log"; then
              FAILED="$FAILED $host"
            fi
          done
          echo "failed=$FAILED" >> "$GITHUB_OUTPUT"
      - name: Merge to master (if all pass)
        if: steps.build.outputs.failed == ''
        run: |
          git checkout master
          git merge --ff-only "$BRANCH"
          git push origin master
          git push origin --delete "$BRANCH"
      - name: Create PR (if any fail)
        if: steps.build.outputs.failed != ''
        run: |
          git push origin "$BRANCH"
          # Create PR via Gitea API with build logs
          # ... (PR creation with log attachment)
```
## Migration Steps
### Phase 1: Reprovision Host via OpenTofu
1. Add `nix-cache01` to `terraform/vms.tf`:
```hcl
"nix-cache01" = {
ip = "10.69.13.15/24"
cpu_cores = 4
memory = 8192
disk_size = "100G" # Larger for nix store
}
```
2. Shut down existing nix-cache01 VM
3. Run `tofu apply` to provision new VM
4. Verify bootstrap completes and cache is serving
**Note:** The cache will be cold after reprovision. Run initial builds to populate.
### Phase 2: Add Build Triggering to homelab-deploy
1. Add `build` command to homelab-deploy CLI
2. Add listener handler in NixOS module for `build.*` subjects
3. Update nix-cache01 config to enable build listener
This document contains planned improvements to the NixOS infrastructure that are not directly part of the automated deployment pipeline.
## Planned
### Custom NixOS Options for Service and System Configuration
Currently, most service configurations in `services/` and shared system configurations in `system/` are written as plain NixOS module imports without declaring custom options. This means host-specific customization is done by directly setting upstream NixOS options or by duplicating configuration across hosts.
The `homelab.dns` module (`modules/homelab/dns.nix`) is the first example of defining custom options under a `homelab.*` namespace. This pattern should be extended to more of the repository's configuration.
**Goals:**
- Define `homelab.*` options for services and shared configuration where it makes sense, following the pattern established by `homelab.dns`
- Allow hosts to enable/configure services declaratively (e.g. `homelab.monitoring.enable`, `homelab.http-proxy.virtualHosts`) rather than importing opaque module files
- Keep options simple and focused — wrap only the parts that vary between hosts or that benefit from a clearer interface. Not everything needs a custom option.
**Candidate areas:**
- `system/` modules (e.g. auto-upgrade schedule, ACME CA URL, monitoring endpoints)
- `services/` modules where multiple hosts use the same service with different parameters
- Cross-cutting concerns that are currently implicit (e.g. which Loki endpoint promtail ships to)
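As a sketch of the pattern, a hypothetical `homelab.monitoring` module could look like this (names are illustrative, not existing code):

```nix
# modules/homelab/monitoring.nix (hypothetical)
{ config, lib, ... }:
{
  options.homelab.monitoring.enable =
    lib.mkEnableOption "node metrics collection for this host";

  config = lib.mkIf config.homelab.monitoring.enable {
    services.prometheus.exporters.node.enable = true;
  };
}
```

A host would then set `homelab.monitoring.enable = true;` instead of importing an opaque service file.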
## Completed
- [DNS Automation](completed/dns-automation.md) - Automatically generate DNS entries from host configurations
Enable remote access to some or all homelab services from outside the internal network, without exposing anything directly to the internet.
## Current State
- All services are only accessible from the internal 10.69.13.x network
- Exception: jelly01 has a WireGuard link to an external VPS
- No services are directly exposed to the public internet
## Constraints
- Nothing should be directly accessible from the outside
- Must use VPN or overlay network (no port forwarding of services)
- Self-hosted solutions preferred over managed services
## Options
### 1. WireGuard Gateway (Internal Router)
A dedicated NixOS host on the internal network with a WireGuard tunnel out to the VPS. The VPS becomes the public entry point, and the gateway routes traffic to internal services. Firewall rules on the gateway control which services are reachable.
**Pros:**
- Simple, well-understood technology
- Already running WireGuard for jelly01
- Full control over routing and firewall rules
- Excellent NixOS module support
- No extra dependencies
**Cons:**
- Hub-and-spoke topology (all traffic goes through VPS)
- Manual peer management
- Adding a new client device means editing configs on both VPS and gateway
### 2. WireGuard Mesh (No Relay)
Each client device connects directly to a WireGuard endpoint. The endpoint could be the VPS forwarding into the homelab, or, if there is a routable IP at home, an internal host directly.
**Pros:**
- Simple and fast
- No extra software
**Cons:**
- Manual key and endpoint management for every peer
- Doesn't scale well
- If behind CGNAT, still needs the VPS as intermediary
### 3. Headscale (Self-Hosted Tailscale)
Run a Headscale control server (on the VPS or internally) and install the Tailscale client on homelab hosts and personal devices. Gets the Tailscale mesh networking UX without depending on Tailscale's infrastructure.
**Pros:**
- Mesh topology - devices communicate directly via NAT traversal (DERP relay as fallback)
- Easy to add/remove devices
- ACL support for granular access control
- MagicDNS for service discovery
- Good NixOS support for both headscale server and tailscale client
- Subnet routing lets you expose the entire 10.69.13.x network or specific hosts without installing tailscale on every host
**Cons:**
- More moving parts than plain WireGuard
- Headscale is a third-party reimplementation, can lag behind Tailscale features
- Need to run and maintain the control server
### 4. Tailscale (Managed)
Same as Headscale but using Tailscale's hosted control plane.
**Pros:**
- Zero infrastructure to manage on the control plane side
- Polished UX, well-maintained clients
- Free tier covers personal use
**Cons:**
- Dependency on Tailscale's service
- Less aligned with self-hosting preference
- Coordination metadata goes through their servers (data plane is still peer-to-peer)
### 5. Netbird (Self-Hosted)
Open-source alternative to Tailscale with a self-hostable management server. WireGuard-based, supports ACLs and NAT traversal.
**Pros:**
- Fully self-hostable
- Web UI for management
- ACL and peer grouping support
**Cons:**
- Heavier to self-host (needs multiple components: management server, signal server, TURN relay)
- Less mature NixOS module support compared to Tailscale/Headscale
### 6. Nebula (by Defined Networking)
Certificate-based mesh VPN. Each node gets a certificate from a CA you control. No central coordination server needed at runtime.
**Pros:**
- No always-on control plane
- Certificate-based identity
- Lightweight
**Cons:**
- Less convenient for ad-hoc device addition (need to issue certs)
- NAT traversal less mature than Tailscale's
- Smaller community/ecosystem
## Key Decision Points
- **Static public IP vs CGNAT?** Determines whether clients can connect directly to home network or need VPS relay.
- **Number of client devices?** If it's just a phone and a laptop, plain WireGuard via the VPS is fine. More devices favor Headscale.
- **Per-service vs per-network access?** Gateway with firewall rules gives per-service control. Headscale ACLs can also do this. Plain WireGuard gives network-level access with gateway firewall for finer control.
- **Subnet routing vs per-host agents?** With Headscale/Tailscale, can either install client on every host, or use a single subnet router that advertises the 10.69.13.x range. The latter is closer to the gateway approach and avoids touching every host.
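For reference, the subnet-router variant could look roughly like this on the router host (a sketch; the login server URL is a placeholder):

```nix
# Hypothetical subnet router advertising the homelab range
services.tailscale = {
  enable = true;
  useRoutingFeatures = "server"; # enables IP forwarding
  extraUpFlags = [
    "--login-server=https://headscale.example.net"
    "--advertise-routes=10.69.13.0/24"
  ];
};
```

The advertised route then still has to be approved on the Headscale side.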
## Leading Candidates
Based on existing WireGuard experience, self-hosting preference, and NixOS stack:
1. **Headscale with a subnet router** - Best balance of convenience and self-hosting
2. **WireGuard gateway via VPS** - Simplest, most transparent, builds on existing setup
**Network range restrictions:** Consider restricting SSH to the infrastructure subnet (`10.69.13.0/24`) using `extraInputRules` for defense in depth. However, this adds complexity and may not be necessary given the trusted network model.
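If that restriction is adopted, it could be expressed as follows (a sketch using nftables rule syntax; assumes the SSH port is not also opened globally via `allowedTCPPorts`):

```nix
# Accept SSH only from the infrastructure subnet
networking.firewall.extraInputRules = ''
  ip saddr 10.69.13.0/24 tcp dport 22 accept
'';
```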
#### Per-Interface Rules (http-proxy WireGuard)
The `http-proxy` host has a WireGuard interface (`wg0`) that may need different rules than the LAN interface. Use `networking.firewall.interfaces` to apply per-interface policies:
```nix
# Example: http-proxy with different rules per interface
networking.firewall = {
  enable = true;
  # Default: only SSH (via openFirewall)
  allowedTCPPorts = [ ];
  # LAN interface: allow HTTP/HTTPS
  interfaces.ens18 = {
    allowedTCPPorts = [ 80 443 ];
  };
  # WireGuard interface: restrict to specific services or trust fully
  interfaces.wg0 = {
    allowedTCPPorts = [ 80 443 ];
    # Or use trustedInterfaces = [ "wg0" ] if fully trusted
  };
};
```
**TODO:** Investigate current WireGuard usage on http-proxy to determine appropriate rules.
Add `./common/ssh-audit.nix` import to `system/default.nix`:
```nix
imports = [
  # ... existing imports
  ../common/ssh-audit.nix
];
```
## Phase 3: Defense in Depth (P3)
### 3.1 Loki Authentication
Options:
1. **Basic auth via reverse proxy** - Put Loki behind Caddy with auth
2. **Loki multi-tenancy** - Enable `auth_enabled = true` and use tenant IDs
3. **Network isolation** - Bind Loki only to localhost, expose via authenticated proxy
Recommendation: Option 1 (reverse proxy) is simplest for homelab.
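A rough sketch of option 1 (the hostname, the credential, and the assumption that Caddy runs on the Loki host are all placeholders):

```nix
# Hypothetical Caddy vhost in front of Loki (Loki's default port is 3100)
services.caddy.virtualHosts."loki.home.2rjus.net".extraConfig = ''
  basic_auth {
    # bcrypt hash generated with: caddy hash-password
    promtail <bcrypt-hash>
  }
  reverse_proxy 127.0.0.1:3100
'';
```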
### 3.2 AppRole Secret Rotation
Update `terraform/vault/approle.tf`:
```hcl
secret_id_ttl = 2592000 # 30 days
```
Add documentation for manual rotation procedure or implement automated rotation via the existing `restartTrigger` mechanism in `vault-secrets.nix`.
### 3.3 Enable Vault TLS Verification
Change default in `system/vault-secrets.nix`:
```nix
skipTlsVerify = mkOption {
  type = types.bool;
  default = false; # Changed from true
};
```
**Prerequisite:** Verify all hosts trust the internal CA that signed the Vault certificate.
## Implementation Order
1. **Test on test-tier first** - Deploy phases 1-2 to testvm01/02/03
2. **Validate SSH access** - Ensure key-based login works before disabling passwords
3. **Document firewall ports** - Create reference of ports per host before enabling
4. **Phase prod rollout** - Deploy to prod hosts one at a time, verify each
## Open Questions
- [ ] Do all hosts have SSH keys configured for root access?
- [ ] Should firewall rules be per-host or use a central definition with roles?
- [ ] Should Loki authentication use the existing Kanidm setup?
**Resolved:** Password-based SSH access for recovery is not required - most hosts have console access through Proxmox or physical access, which provides an out-of-band recovery path if SSH keys fail.
## Notes
- Firewall changes are the highest risk - test thoroughly on test-tier
- SSH hardening must not lock out access - verify keys first
- Consider creating a "break glass" procedure for emergency access if keys fail
This document describes the one-time setup process for enabling TPM2-based auto-unsealing on vault01.
## Overview
The auto-unseal feature uses systemd's `LoadCredentialEncrypted` with TPM2 to securely store and retrieve an unseal key. On service start, systemd automatically decrypts the credential using the VM's TPM, and the service unseals OpenBao.
## Prerequisites
- OpenBao must be initialized (`bao operator init` completed)
- You must have at least one unseal key from the initialization
- vault01 must have a TPM2 device (virtual TPM for Proxmox VMs)
## Initial Setup
Perform these steps on vault01 after deploying the service configuration:
### 1. Save Unseal Key
```bash
# Create temporary file with one of your unseal keys