35 Commits

4c1debf0a3 Merge pull request 'decommission-ca-host' (#32) from decommission-ca-host into master
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Reviewed-on: #32
2026-02-07 17:50:44 +00:00
f36457ee0d cleanup: remove legacy secrets directory and move TODO.md to completed plans
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Run nix flake check / flake-check (pull_request) Failing after 1s
- Remove secrets/ directory (sops-nix no longer in use, all hosts use Vault)
- Move TODO.md to docs/plans/completed/automated-host-deployment-pipeline.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 18:49:31 +01:00
aedccbd9a0 flake: remove sops-nix (no longer used)
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
All secrets are now managed by OpenBao (Vault). Remove the legacy
sops-nix infrastructure that is no longer in use.

Removed:
- sops-nix flake input
- system/sops.nix module
- .sops.yaml configuration file
- Age key generation from template prepare-host scripts

Updated:
- flake.nix - removed sops-nix references from all hosts
- flake.lock - removed sops-nix input
- scripts/create-host/ - removed sops references
- CLAUDE.md - removed SOPS documentation

Note: secrets/ directory should be manually removed by the user.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 18:46:24 +01:00
bdc6057689 hosts: decommission ca host and remove labmon
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Remove the step-ca host and labmon flake input now that ACME has been
migrated to OpenBao PKI.

Removed:
- hosts/ca/ - step-ca host configuration
- services/ca/ - step-ca service module
- labmon flake input and module (no longer used)

Updated:
- flake.nix - removed ca host and labmon references
- flake.lock - removed labmon input
- rebuild-all.sh - removed ca from host list
- CLAUDE.md - updated documentation

Note: secrets/ca/ should be manually removed by the user.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 18:41:49 +01:00
3a25e3f7bc Merge pull request 'migrate-to-openbao-pki' (#31) from migrate-to-openbao-pki into master
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Reviewed-on: #31
2026-02-07 17:33:46 +00:00
46f03871f1 docs: update CLAUDE.md for PR creation and labmon removal
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Run nix flake check / flake-check (pull_request) Failing after 1s
- Add note that gh pr create is not supported
- Remove labmon from Prometheus job names list
- Remove labmon from flake inputs list

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 18:32:27 +01:00
9d019f2b9a testvm01: add nginx with ACME certificate for PKI testing
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Set up a simple nginx server with an ACME certificate from the new
OpenBao PKI infrastructure. This allows testing the ACME migration
before deploying to production hosts.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 18:22:28 +01:00
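The commit above doesn't include the configuration itself; a minimal sketch of what an nginx vhost with an internal ACME certificate looks like in NixOS follows (option names are from the stock nginx module; the vhost name and document root are assumptions):

```nix
# Sketch, not the actual testvm01 config: nginx vhost with an internal
# ACME certificate (vhost name and webroot are assumptions).
{
  services.nginx = {
    enable = true;
    virtualHosts."testvm01.home.2rjus.net" = {
      enableACME = true;  # cert comes from the default ACME server (OpenBao PKI)
      forceSSL = true;
      root = "/var/www/testvm01";  # hypothetical document root
    };
  };
}
```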
21db7e9573 acme: migrate from step-ca to OpenBao PKI
Switch all ACME certificate issuance from step-ca (ca.home.2rjus.net)
to OpenBao PKI (vault.home.2rjus.net:8200/v1/pki_int/acme/directory).

- Update default ACME server in system/acme.nix
- Update Caddy acme_ca in http-proxy and nix-cache services
- Remove labmon service from monitoring01 (step-ca monitoring)
- Remove labmon scrape target and certificate_rules alerts
- Remove alloy.nix (only used for labmon profiling)
- Add docs/plans/cert-monitoring.md for future cert monitoring needs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 18:20:10 +01:00
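Based on the directory URL above, the default-server change presumably reduces to a one-line option sketch like this (the actual layout of `system/acme.nix` is not shown here):

```nix
# Sketch: point lego-based ACME at the OpenBao PKI ACME directory.
{
  security.acme.defaults.server =
    "https://vault.home.2rjus.net:8200/v1/pki_int/acme/directory";
}
```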
979040aaf7 vault01: enable homelab-deploy listener
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Enable vault.enable and homelab.deploy.enable on vault01 so it can
receive NATS-based remote deployments. Vault fetches secrets from
itself using AppRole after auto-unseal.

Add systemd ordering to ensure vault-secret services wait for openbao
to be unsealed before attempting to fetch secrets.

Also adds vault01 AppRole entry to Terraform.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 17:55:09 +01:00
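The ordering fix likely amounts to something like the following sketch (unit names are assumed, not taken from the repo):

```nix
# Sketch: make a vault-secret fetcher unit wait for the local openbao
# instance to be up (and auto-unsealed) before fetching secrets.
{
  systemd.services."vault-secrets" = {
    after = [ "openbao.service" ];
    wants = [ "openbao.service" ];
  };
}
```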
8791c29402 hosts: enable homelab-deploy listener on pgdb1, nats1, jelly01
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Enable vault.enable and homelab.deploy.enable for these hosts to
allow NATS-based remote deployments and expose metrics on port 9972.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 17:43:06 +01:00
c7a067d7b3 flake: update homelab-deploy input
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 17:31:24 +01:00
c518093578 docs: move prometheus-scrape-target-labels plan to completed
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 17:29:31 +01:00
0b462f0a96 Merge pull request 'prometheus-scrape-target-labels' (#30) from prometheus-scrape-target-labels into master
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Reviewed-on: #30
2026-02-07 16:27:38 +00:00
116abf3bec CLAUDE.md: document homelab-deploy CLI for prod hosts
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Run nix flake check / flake-check (pull_request) Failing after 1s
Add instructions for deploying to prod hosts using the CLI directly,
since the MCP server only handles test-tier deployments.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 17:23:10 +01:00
b794aa89db skills: update observability with new target labels
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Document the new hostname and host metadata labels available on all
Prometheus scrape targets:
- hostname: short hostname for easy filtering
- role: host role (dns, build-host, vault)
- tier: deployment tier (test for test VMs)
- dns_role: primary/secondary for DNS servers

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 17:12:17 +01:00
50a85daa44 docs: update plan with hostname label documentation
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 17:09:46 +01:00
23e561cf49 monitoring: add hostname label to all scrape targets
Add a `hostname` label to all Prometheus scrape targets, making it easy
to query all metrics for a host without wildcarding the instance label.

Example queries:
- {hostname="ns1"} - all metrics from ns1
- node_cpu_seconds_total{hostname="monitoring01"} - specific metric

For external targets (like gunter), the hostname is extracted from the
target string.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 17:09:19 +01:00
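For external targets, the extraction described above can be sketched in shell (assuming targets of the form `host.domain:port`; this is an illustration, not the code from `lib/monitoring.nix`):

```shell
#!/usr/bin/env sh
# Sketch: derive the short `hostname` label from a scrape target string,
# e.g. "gunter.home.2rjus.net:9100" -> "gunter".
short_hostname() {
  target=$1
  host=${target%%:*}   # drop the :port suffix
  echo "${host%%.*}"   # keep only the first DNS label
}

short_hostname "gunter.home.2rjus.net:9100"   # gunter
```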
7d291f85bf monitoring: propagate host labels to Prometheus scrape targets
Extract homelab.host metadata (tier, priority, role, labels) from host
configurations and propagate them to Prometheus scrape targets. This
enables semantic alert filtering using labels instead of hardcoded
instance names.

Changes:
- lib/monitoring.nix: Extract host metadata, group targets by labels
- prometheus.nix: Use structured static_configs with labels
- rules.yml: Replace instance filters with role-based filters

Example labels in Prometheus:
- ns1/ns2: role=dns, dns_role=primary/secondary
- nix-cache01: role=build-host
- testvm*: tier=test

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 17:04:50 +01:00
2a842c655a docs: update plan status and move completed nats-deploy plan
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
- Move nats-deploy-service.md to completed/ folder
- Update prometheus-scrape-target-labels.md with implementation status
- Add status table showing which steps are complete/partial/not started
- Update cross-references to point to new location

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 16:44:00 +01:00
1f4a5571dc CLAUDE.md: update documentation from audit
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
- Fix OpenBao CLI name (bao, not vault)
- Add vault01, testvm01-03 to hosts list
- Document nixos-exporter and homelab-deploy flake inputs
- Add vault/ and actions-runner/ services
- Document homelab.host and homelab.deploy options
- Document automatic Vault credential provisioning via wrapped tokens
- Consolidate homelab module options into dedicated section

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 16:37:38 +01:00
13d6d0ea3a Merge pull request 'improve-bootstrap-visibility' (#29) from improve-bootstrap-visibility into master
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Reviewed-on: #29
2026-02-07 15:00:09 +00:00
eea000b337 CLAUDE.md: document bootstrap logs in Loki
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Run nix flake check / flake-check (pull_request) Failing after 4s
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 15:57:51 +01:00
f19ba2f4b6 CLAUDE.md: use tofu -chdir instead of cd
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 15:41:59 +01:00
a90d9c33d5 CLAUDE.md: prefer nix develop -c for devshell commands
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 15:39:56 +01:00
09c9df1bbe terraform: regenerate wrapped token for testvm01
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 15:36:25 +01:00
ae3039af19 template2: send bootstrap status to Loki for remote monitoring
Adds log_to_loki function that pushes structured log entries to Loki
at key bootstrap stages (starting, network_ok, vault_*, building,
success, failed). Enables querying bootstrap state via LogQL without
console access.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 15:34:47 +01:00
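The actual `log_to_loki` function isn't shown in this log; a minimal sketch of pushing one structured entry to Loki's push API (`/loki/api/v1/push`) under the labels described above might look like:

```shell
#!/usr/bin/env sh
# Sketch of a log_to_loki-style helper (hypothetical; the real template2
# function is not shown here). Pushes one entry to Loki's push API.
LOKI_URL="http://monitoring01.home.2rjus.net:3100"   # assumed Loki address

bootstrap_payload() {
  host=$1 branch=$2 stage=$3 msg=$4
  ts=$(($(date +%s) * 1000000000))   # Loki expects nanosecond timestamps
  printf '{"streams":[{"stream":{"job":"bootstrap","host":"%s","branch":"%s","stage":"%s"},"values":[["%s","%s"]]}]}' \
    "$host" "$branch" "$stage" "$ts" "$msg"
}

log_to_loki() {
  bootstrap_payload "$@" |
    curl -fsS -H 'Content-Type: application/json' \
      -X POST --data-binary @- "$LOKI_URL/loki/api/v1/push"
}

# Example: log_to_loki testvm01 master starting "bootstrap started"
```

The Loki address is an assumption; during a real bootstrap it would come from the template configuration.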
11261c4636 template2: revert to journal+console output for bootstrap
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
TTY output was causing nixos-rebuild to fail. Keep the custom
greeting line to indicate bootstrap image, but use journal+console
for reliable logging.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 15:24:39 +01:00
4ca3c8890f terraform: add flake_branch and token for testvm01
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 15:14:57 +01:00
78e8d7a600 template2: add ncurses for clear command in bootstrap
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 15:10:25 +01:00
0cf72ec191 terraform: update template to nixos-25.11.20260203.e576e3c
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 15:02:16 +01:00
6a3a51407e playbooks: auto-update terraform template name after deploy
Add a third play to build-and-deploy-template.yml that updates
terraform/variables.tf with the new template name after deploying
to Proxmox. Only updates if the template name has changed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 14:59:13 +01:00
a1ae766eb8 template2: show bootstrap progress on tty1
- Display bootstrap banner and live progress on tty1 instead of login prompt
- Add custom getty greeting on other ttys indicating this is a bootstrap image
- Disable getty on tty1 during bootstrap so output is visible

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 14:49:58 +01:00
11999b37f3 flake: update homelab-deploy
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Fixes false "Some deployments failed" warning in MCP server when
deployments are still in progress.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 14:24:41 +01:00
29b2b7db52 Merge branch 'deploy-test-hosts'
Some checks failed
Run nix flake check / flake-check (push) Failing after 1s
Add three permanent test hosts (testvm01, testvm02, testvm03) with:
- Static IPs: 10.69.13.20-22
- Vault AppRole integration with homelab-deploy policy
- Remote deployment via NATS (homelab.deploy.enable)
- Test tier configuration

Also updates create-host template to include vault.enable and
homelab.deploy.enable by default.
2026-02-07 14:09:40 +01:00
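Per the description above, each test host's configuration presumably carries something like this sketch (option shapes inferred from the names mentioned, not copied from the repo):

```nix
# Sketch of a test-tier host enabling NATS-based remote deployment
{
  homelab.host.tier = "test";     # deployment tier label
  homelab.deploy.enable = true;   # NATS deployment listener
  vault.enable = true;            # fetch secrets from OpenBao via AppRole
}
```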
b046a1b862 terraform: remove flake_branch from test VMs
VMs are now bootstrapped and running. Remove temporary flake_branch
and vault_wrapped_token settings so they use master going forward.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 14:09:30 +01:00
54 changed files with 576 additions and 1011 deletions


@@ -185,21 +185,60 @@ Common job names:
 - `home-assistant` - Home automation
 - `step-ca` - Internal CA
-### Instance Label Format
-The `instance` label uses FQDN format:
-```
-<hostname>.home.2rjus.net:<port>
-```
-Example queries filtering by host:
+### Target Labels
+All scrape targets have these labels:
+**Standard labels:**
+- `instance` - Full target address (`<hostname>.home.2rjus.net:<port>`)
+- `job` - Job name (e.g., `node-exporter`, `unbound`, `nixos-exporter`)
+- `hostname` - Short hostname (e.g., `ns1`, `monitoring01`) - use this for host filtering
+**Host metadata labels** (when configured in `homelab.host`):
+- `role` - Host role (e.g., `dns`, `build-host`, `vault`)
+- `tier` - Deployment tier (`test` for test VMs, absent for prod)
+- `dns_role` - DNS-specific role (`primary` or `secondary` for ns1/ns2)
+### Filtering by Host
+Use the `hostname` label for easy host filtering across all jobs:
 ```promql
-up{instance=~"monitoring01.*"}
-node_load1{instance=~"ns1.*"}
+{hostname="ns1"}                     # All metrics from ns1
+node_load1{hostname="monitoring01"}  # Specific metric by hostname
+up{hostname="ha1"}                   # Check if ha1 is up
 ```
+This is simpler than wildcarding the `instance` label:
+```promql
+# Old way (still works but verbose)
+up{instance=~"monitoring01.*"}
+# New way (preferred)
+up{hostname="monitoring01"}
+```
+### Filtering by Role/Tier
+Filter hosts by their role or tier:
+```promql
+up{role="dns"}                             # All DNS servers (ns1, ns2)
+node_cpu_seconds_total{role="build-host"}  # Build hosts only (nix-cache01)
+up{tier="test"}                            # All test-tier VMs
+up{dns_role="primary"}                     # Primary DNS only (ns1)
+```
+Current host labels:
+| Host | Labels |
+|------|--------|
+| ns1 | `role=dns`, `dns_role=primary` |
+| ns2 | `role=dns`, `dns_role=secondary` |
+| nix-cache01 | `role=build-host` |
+| vault01 | `role=vault` |
+| testvm01/02/03 | `tier=test` |
 ---
 ## Troubleshooting Workflows
@@ -212,11 +251,12 @@ node_load1{instance=~"ns1.*"}
 ### Investigate Service Issues
-1. Check `up{job="<service>"}` for scrape failures
+1. Check `up{job="<service>"}` or `up{hostname="<host>"}` for scrape failures
 2. Use `list_targets` to see target health details
 3. Query service logs: `{host="<host>", systemd_unit="<service>.service"}`
 4. Search for errors: `{host="<host>"} |= "error"`
 5. Check `list_alerts` for related alerts
+6. Use role filters for group issues: `up{role="dns"}` to check all DNS servers
 ### After Deploying Changes
@@ -246,5 +286,6 @@ With `start: "24h"` to see last 24 hours of upgrades across all hosts.
 - Default scrape interval is 15s for most metrics targets
 - Default log lookback is 1h - use `start` parameter for older logs
 - Use `rate()` for counter metrics, direct queries for gauges
-- The `instance` label includes the port, use regex matching (`=~`) for hostname-only filters
+- Use the `hostname` label to filter metrics by host (simpler than regex on `instance`)
+- Host metadata labels (`role`, `tier`, `dns_role`) are propagated to all scrape targets
 - Log `MESSAGE` field contains the actual log content in JSON format


@@ -1,52 +0,0 @@
keys:
- &admin_torjus age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u
- &server_ns1 age1hz2lz4k050ru3shrk5j3zk3f8azxmrp54pktw5a7nzjml4saudesx6jsl0
- &server_ns2 age1w2q4gm2lrcgdzscq8du3ssyvk6qtzm4fcszc92z9ftclq23yyydqdga5um
- &server_ha1 age1d2w5zece9647qwyq4vas9qyqegg96xwmg6c86440a6eg4uj6dd2qrq0w3l
- &server_http-proxy age1gq8434ku0xekqmvnseeunv83e779cg03c06gwrusnymdsr3rpufqx6vr3m
- &server_ca age1288993th0ge00reg4zqueyvmkrsvk829cs068eekjqfdprsrkeqql7mljk
- &server_monitoring01 age1vpns76ykll8jgdlu3h05cur4ew2t3k7u03kxdg8y6ypfhsfhq9fqyurjey
- &server_jelly01 age1hchvlf3apn8g8jq2743pw53sd6v6ay6xu6lqk0qufrjeccan9vzsc7hdfq
- &server_nix-cache01 age1w029fksjv0edrff9p7s03tgk3axecdkppqymfpwfn2nu2gsqqefqc37sxq
- &server_pgdb1 age1ha34qeksr4jeaecevqvv2afqem67eja2mvawlmrqsudch0e7fe7qtpsekv
- &server_nats1 age1cxt8kwqzx35yuldazcc49q88qvgy9ajkz30xu0h37uw3ts97jagqgmn2ga
creation_rules:
- path_regex: secrets/[^/]+\.(yaml|json|env|ini)
key_groups:
- age:
- *admin_torjus
- *server_ns1
- *server_ns2
- *server_ha1
- *server_http-proxy
- *server_ca
- *server_monitoring01
- *server_jelly01
- *server_nix-cache01
- *server_pgdb1
- *server_nats1
- path_regex: secrets/ca/[^/]+\.(yaml|json|env|ini|)
key_groups:
- age:
- *admin_torjus
- *server_ca
- path_regex: secrets/monitoring01/[^/]+\.(yaml|json|env|ini)
key_groups:
- age:
- *admin_torjus
- *server_monitoring01
- path_regex: secrets/ca/keys/.+
key_groups:
- age:
- *admin_torjus
- *server_ca
- path_regex: secrets/nix-cache01/.+
key_groups:
- age:
- *admin_torjus
- *server_nix-cache01
- path_regex: secrets/http-proxy/.+
key_groups:
- age:
- *admin_torjus
- *server_http-proxy

CLAUDE.md (142 lines changed)

@@ -61,25 +61,45 @@ Do not run `nix flake update`. Should only be done manually by user.
 ### Development Environment
 ```bash
-# Enter development shell (provides ansible, python3)
+# Enter development shell
 nix develop
 ```
+The devshell provides: `ansible`, `tofu` (OpenTofu), `bao` (OpenBao CLI), `create-host`, and `homelab-deploy`.
+**Important:** When suggesting commands that use devshell tools, always use `nix develop -c <command>` syntax rather than assuming the user is already in a devshell. For example:
+```bash
+# Good - works regardless of current shell
+nix develop -c tofu plan
+# Avoid - requires user to be in devshell
+tofu plan
+```
+**OpenTofu:** Use the `-chdir` option instead of `cd` when running tofu commands in subdirectories:
+```bash
+# Good - uses -chdir option
+nix develop -c tofu -chdir=terraform plan
+nix develop -c tofu -chdir=terraform/vault apply
+# Avoid - changing directories
+cd terraform && tofu plan
+```
 ### Secrets Management
 Secrets are managed by OpenBao (Vault) using AppRole authentication. Most hosts use the
 `vault.secrets` option defined in `system/vault-secrets.nix` to fetch secrets at boot.
 Terraform manages the secrets and AppRole policies in `terraform/vault/`.
-Legacy sops-nix is still present but only actively used by the `ca` host. Do not edit any
-`.sops.yaml` or any file within `secrets/`. Ask the user to modify if necessary.
 ### Git Workflow
 **Important:** Never commit directly to `master` unless the user explicitly asks for it. Always create a feature branch for changes.
 **Important:** Never amend commits to `master` unless the user explicitly asks for it. Amending rewrites history and causes issues for deployed configurations.
+**Important:** Do not use `gh pr create` to create pull requests. The git server does not support GitHub CLI for PR creation. Instead, push the branch and let the user create the PR manually via the web interface.
 When starting a new plan or task, the first step should typically be to create and checkout a new branch with an appropriate name (e.g., `git checkout -b dns-automation` or `git checkout -b fix-nginx-config`).
 ### Plan Management
@@ -140,11 +160,27 @@ The **lab-monitoring** MCP server can query logs from Loki. All hosts ship syste
 - `host` - Hostname (e.g., `ns1`, `ns2`, `monitoring01`, `ha1`). Use this label, not `hostname`.
 - `systemd_unit` - Systemd unit name (e.g., `nsd.service`, `prometheus.service`, `nixos-upgrade.service`)
-- `job` - Either `systemd-journal` (most logs) or `varlog` (file-based logs like caddy access logs)
+- `job` - Either `systemd-journal` (most logs), `varlog` (file-based logs), or `bootstrap` (VM bootstrap logs)
 - `filename` - For `varlog` job, the log file path (e.g., `/var/log/caddy/nix-cache.log`)
 Journal log entries are JSON-formatted with the actual log message in the `MESSAGE` field. Other useful fields include `PRIORITY` and `SYSLOG_IDENTIFIER`.
+**Bootstrap Logs:**
+VMs provisioned from template2 send bootstrap progress directly to Loki via curl (before promtail is available). These logs use `job="bootstrap"` with additional labels:
+- `host` - Target hostname
+- `branch` - Git branch being deployed
+- `stage` - Bootstrap stage: `starting`, `network_ok`, `vault_ok`/`vault_skip`/`vault_warn`, `building`, `success`, `failed`
+Query bootstrap status:
+```
+{job="bootstrap"}                             # All bootstrap logs
+{job="bootstrap", host="testvm01"}            # Specific host
+{job="bootstrap", stage="failed"}             # All failures
+{job="bootstrap", stage=~"building|success"}  # Track build progress
+```
 **Example LogQL queries:**
 ```
 # Logs from a specific service on a host
@@ -171,13 +207,12 @@ The **lab-monitoring** MCP server can query Prometheus metrics via PromQL. The `
 - `home-assistant` - Home automation metrics
 - `jellyfin` - Media server metrics
 - `loki` / `prometheus` / `grafana` - Monitoring stack self-metrics
-- `step-ca` - Internal CA metrics
 - `pve-exporter` - Proxmox hypervisor metrics
 - `smartctl` - Disk SMART health (gunter)
 - `wireguard` - VPN metrics (http-proxy)
 - `pushgateway` - Push-based metrics (e.g., backup results)
 - `restic_rest` - Backup server metrics
-- `labmon` / `ghettoptt` / `alertmanager` - Other service metrics
+- `ghettoptt` / `alertmanager` - Other service metrics
 **Example PromQL queries:**
 ```
@@ -229,6 +264,21 @@ deploy(role="vault", action="switch")
 **Note:** Only test-tier hosts with `homelab.deploy.enable = true` and the listener service running will respond to deployments.
+**Deploying to Prod Hosts:**
+The MCP server only deploys to test-tier hosts. For prod hosts, use the CLI directly:
+```bash
+nix develop -c homelab-deploy -- deploy \
+  --nats-url nats://nats1.home.2rjus.net:4222 \
+  --nkey-file ~/.config/homelab-deploy/admin-deployer.nkey \
+  --branch <branch-name> \
+  --action switch \
+  deploy.prod.<hostname>
+```
+Subject format: `deploy.<tier>.<hostname>` (e.g., `deploy.prod.monitoring01`, `deploy.test.testvm01`)
 **Verifying Deployments:**
 After deploying, use the `nixos_flake_info` metric from nixos-exporter to verify the host is running the expected revision:
@@ -248,10 +298,11 @@ The `current_rev` label contains the git commit hash of the deployed flake confi
 - `default.nix` - Entry point, imports configuration.nix and services
 - `configuration.nix` - Host-specific settings (networking, hardware, users)
 - `/system/` - Shared system-level configurations applied to ALL hosts
-  - Core modules: nix.nix, sshd.nix, sops.nix (legacy), vault-secrets.nix, acme.nix, autoupgrade.nix
+  - Core modules: nix.nix, sshd.nix, vault-secrets.nix, acme.nix, autoupgrade.nix
+  - Additional modules: motd.nix (dynamic MOTD), packages.nix (base packages), root-user.nix (root config), homelab-deploy.nix (NATS listener)
   - Monitoring: node-exporter and promtail on every host
 - `/modules/` - Custom NixOS modules
-  - `homelab/` - Homelab-specific options (DNS automation, monitoring scrape targets)
+  - `homelab/` - Homelab-specific options (see "Homelab Module Options" section below)
 - `/lib/` - Nix library functions
   - `dns-zone.nix` - DNS zone generation functions
   - `monitoring.nix` - Prometheus scrape target generation functions
@@ -259,14 +310,14 @@ The `current_rev` label contains the git commit hash of the deployed flake confi
   - `home-assistant/` - Home automation stack
   - `monitoring/` - Observability stack (Prometheus, Grafana, Loki, Tempo)
   - `ns/` - DNS services (authoritative, resolver, zone generation)
-  - `http-proxy/`, `ca/`, `postgres/`, `nats/`, `jellyfin/`, etc.
-- `/secrets/` - SOPS-encrypted secrets with age encryption (legacy, only used by ca)
+  - `vault/` - OpenBao (Vault) secrets server
+  - `actions-runner/` - GitHub Actions runner
+  - `http-proxy/`, `postgres/`, `nats/`, `jellyfin/`, etc.
 - `/common/` - Shared configurations (e.g., VM guest agent)
 - `/docs/` - Documentation and plans
   - `plans/` - Future plans and proposals
   - `plans/completed/` - Completed plans (moved here when done)
 - `/playbooks/` - Ansible playbooks for fleet management
-- `/.sops.yaml` - SOPS configuration with age keys (legacy, only used by ca)
 ### Configuration Inheritance
@@ -283,7 +334,7 @@ All hosts automatically get:
 - Nix binary cache (nix-cache.home.2rjus.net)
 - SSH with root login enabled
 - OpenBao (Vault) secrets management via AppRole
-- Internal ACME CA integration (ca.home.2rjus.net)
+- Internal ACME CA integration (OpenBao PKI at vault.home.2rjus.net)
 - Daily auto-upgrades with auto-reboot
 - Prometheus node-exporter + Promtail (logs to monitoring01)
 - Monitoring scrape target auto-registration via `homelab.monitoring` options
@@ -292,28 +343,31 @@ All hosts automatically get:
 ### Active Hosts
-Production servers managed by `rebuild-all.sh`:
+Production servers:
 - `ns1`, `ns2` - Primary/secondary DNS servers (10.69.13.5/6)
-- `ca` - Internal Certificate Authority
+- `vault01` - OpenBao (Vault) secrets server + PKI CA
 - `ha1` - Home Assistant + Zigbee2MQTT + Mosquitto
 - `http-proxy` - Reverse proxy
 - `monitoring01` - Full observability stack (Prometheus, Grafana, Loki, Tempo, Pyroscope)
 - `jelly01` - Jellyfin media server
-- `nix-cache01` - Binary cache server
+- `nix-cache01` - Binary cache server + GitHub Actions runner
 - `pgdb1` - PostgreSQL database
 - `nats1` - NATS messaging server
-Template/test hosts:
-- `template1` - Base template for cloning new hosts
+Test/staging hosts:
+- `testvm01`, `testvm02`, `testvm03` - Test-tier VMs for branch testing and deployment validation
+Template hosts:
+- `template1`, `template2` - Base templates for cloning new hosts
 ### Flake Inputs
 - `nixpkgs` - NixOS 25.11 stable (primary)
 - `nixpkgs-unstable` - Unstable channel (available via overlay as `pkgs.unstable.<package>`)
-- `sops-nix` - Secrets management (legacy, only used by ca)
+- `nixos-exporter` - NixOS module for exposing flake revision metrics (used to verify deployments)
+- `homelab-deploy` - NATS-based remote deployment tool for test-tier hosts
 - Custom packages from git.t-juice.club:
   - `alerttonotify` - Alert routing
-  - `labmon` - Lab monitoring
 ### Network Architecture
@@ -337,11 +391,6 @@ Most hosts use OpenBao (Vault) for secrets:
 - Fallback to cached secrets in `/var/lib/vault/cache/` when Vault is unreachable
 - Provision AppRole credentials: `nix develop -c ansible-playbook playbooks/provision-approle.yml -e hostname=<host>`
-Legacy SOPS (only used by `ca` host):
-- SOPS with age encryption, keys in `.sops.yaml`
-- Shared secrets: `/secrets/secrets.yaml`
-- Per-host secrets: `/secrets/<hostname>/`
 ### Auto-Upgrade System
 All hosts pull updates daily from:
@@ -402,9 +451,21 @@ Example VM deployment includes:
- Custom CPU/memory/disk sizing
- VLAN tagging
- QEMU guest agent
- Automatic Vault credential provisioning via `vault_wrapped_token`
OpenTofu outputs the VM's IP address after deployment for easy SSH access.
**Automatic Vault Credential Provisioning:**
VMs can receive Vault (OpenBao) credentials automatically during bootstrap:
1. OpenTofu generates a wrapped token via `terraform/vault/` and stores it in the VM configuration
2. Cloud-init passes `VAULT_WRAPPED_TOKEN` and `NIXOS_FLAKE_BRANCH` to the bootstrap script
3. The bootstrap script unwraps the token to obtain AppRole credentials
4. Credentials are written to `/var/lib/vault/approle/` before the NixOS rebuild
This eliminates the need for manual `provision-approle.yml` playbook runs on new VMs. Bootstrap progress is logged to Loki with `job="bootstrap"` labels.
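The unwrap step above can be sketched in this repo's `writeShellApplication` idiom. This is an illustrative fragment, not the repo's actual bootstrap code: the endpoint is Vault's standard `sys/wrapping/unwrap` API, and the assumption that the wrapped payload carries `role_id` and `secret_id` fields follows the description above.

```nix
# Hypothetical sketch of the unwrap step; names and paths are illustrative.
{ pkgs, ... }:
let
  unwrap-vault-token = pkgs.writeShellApplication {
    name = "unwrap-vault-token.sh";
    runtimeInputs = [ pkgs.curl pkgs.jq ];
    text = ''
      VAULT_ADDR="https://vault.home.2rjus.net:8200"
      # A wrapped token can be unwrapped exactly once by presenting it
      # as the auth token against the unwrap endpoint.
      RESPONSE=$(curl -s -X POST \
        -H "X-Vault-Token: $VAULT_WRAPPED_TOKEN" \
        "$VAULT_ADDR/v1/sys/wrapping/unwrap")
      mkdir -p /var/lib/vault/approle
      jq -r '.data.role_id' <<<"$RESPONSE" > /var/lib/vault/approle/role-id
      jq -r '.data.secret_id' <<<"$RESPONSE" > /var/lib/vault/approle/secret-id
      chmod 600 /var/lib/vault/approle/secret-id
    '';
  };
in
{
  environment.systemPackages = [ unwrap-vault-token ];
}
```

Because response wrapping is single-use, a failed unwrap here is a strong signal the token was already consumed (or expired), which is why the bootstrap logs a warning and continues without secrets.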
#### Template Rebuilding and Terraform State
When the Proxmox template is rebuilt (via `build-and-deploy-template.yml`), the template name may change. This would normally cause Terraform to want to recreate all existing VMs, but that's unnecessary since VMs are independent once cloned.
@@ -484,11 +545,7 @@ Prometheus scrape targets are automatically generated from host configurations,
- **External targets**: Non-flake hosts defined in `/services/monitoring/external-targets.nix`
- **Library**: `lib/monitoring.nix` provides `generateNodeExporterTargets` and `generateScrapeConfigs`
Service modules declare their scrape targets directly via `homelab.monitoring.scrapeTargets`. The Prometheus config on monitoring01 auto-generates scrape configs from all hosts. See "Homelab Module Options" section for available options.
To add monitoring targets for non-NixOS hosts, edit `/services/monitoring/external-targets.nix`.
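A minimal sketch of a service module advertising its own metrics endpoint via the option described above. The job name, port, and path are illustrative values, not taken from a real module in this repo:

```nix
# Illustrative only: declare an extra scrape target on the host so that
# monitoring01 picks it up when generating the Prometheus config.
{ ... }:
{
  homelab.monitoring.scrapeTargets = [
    {
      job_name = "unbound";
      port = 9167;
      metrics_path = "/metrics";
    }
  ];
}
```

The design keeps scrape configuration next to the service that exposes the metrics, so adding or removing a service never requires touching the Prometheus host.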
@@ -507,13 +564,30 @@ DNS zone entries are automatically generated from host configurations:
- **External hosts**: Non-flake hosts defined in `/services/ns/external-hosts.nix`
- **Serial number**: Uses `self.sourceInfo.lastModified` (git commit timestamp)
Hosts are automatically excluded from DNS if:
- `homelab.dns.enable = false` (e.g., template hosts)
- No static IP configured (e.g., DHCP-only hosts)
- Network interface is a VPN/tunnel (wg*, tun*, tap*)
To add DNS entries for non-NixOS hosts, edit `/services/ns/external-hosts.nix`.
### Homelab Module Options
The `modules/homelab/` directory defines custom options used across hosts for automation and metadata.
**Host options (`homelab.host.*`):**
- `tier` - Deployment tier: `test` or `prod`. Test-tier hosts can receive remote deployments and have different credential access.
- `priority` - Alerting priority: `high` or `low`. Controls alerting thresholds for the host.
- `role` - Primary role designation (e.g., `dns`, `database`, `bastion`, `vault`)
- `labels` - Free-form key-value metadata for host categorization
**DNS options (`homelab.dns.*`):**
- `enable` (default: `true`) - Include host in DNS zone generation
- `cnames` (default: `[]`) - List of CNAME aliases pointing to this host
**Monitoring options (`homelab.monitoring.*`):**
- `enable` (default: `true`) - Include host in Prometheus node-exporter scrape targets
- `scrapeTargets` (default: `[]`) - Additional scrape targets exposed by this host
**Deploy options (`homelab.deploy.*`):**
- `enable` (default: `false`) - Enable NATS-based remote deployment listener. When enabled, the host listens for deployment commands via NATS and can be targeted by the `homelab-deploy` MCP server.
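The options above can be combined in a single host configuration. This is a hypothetical snippet with example values, not a host that exists in this repo:

```nix
# Hypothetical host snippet exercising the homelab.* options; every
# value here is an example, not real homelab metadata.
{ ... }:
{
  homelab.host = {
    tier = "test";
    priority = "low";
    role = "database";
    labels.env = "staging";
  };
  homelab.dns = {
    enable = true;
    cnames = [ "db" ]; # db.home.2rjus.net -> this host
  };
  homelab.monitoring.enable = true;
  homelab.deploy.enable = true; # opt in to NATS remote deployment
}
```

Since `tier` and `priority` have defaults (`prod`, `high`), most production hosts only need to set `role` and any extra labels.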

View File

@@ -0,0 +1,72 @@
# Certificate Monitoring Plan
## Summary
This document describes the removal of labmon certificate monitoring and outlines future needs for certificate monitoring in the homelab.
## What Was Removed
### labmon Service
The `labmon` service was a custom Go application that provided:
1. **StepMonitor**: Monitoring for step-ca (Smallstep CA) certificate provisioning and health
2. **TLSConnectionMonitor**: Periodic TLS connection checks to verify certificate validity and expiration
The service exposed Prometheus metrics at `:9969` including:
- `labmon_tlsconmon_certificate_seconds_left` - Time until certificate expiration
- `labmon_tlsconmon_certificate_check_error` - Whether the TLS check failed
- `labmon_stepmon_certificate_seconds_left` - Step-CA internal certificate expiration
### Affected Files
- `hosts/monitoring01/configuration.nix` - Removed labmon configuration block
- `services/monitoring/prometheus.nix` - Removed labmon scrape target
- `services/monitoring/rules.yml` - Removed `certificate_rules` alert group
- `services/monitoring/alloy.nix` - Deleted (was only used for labmon profiling)
- `services/monitoring/default.nix` - Removed alloy.nix import
### Removed Alerts
- `certificate_expiring_soon` - Warned when any monitored TLS cert had < 24h validity
- `step_ca_serving_cert_expiring` - Critical alert for step-ca's own serving certificate
- `certificate_check_error` - Warned when TLS connection check failed
- `step_ca_certificate_expiring` - Critical alert for step-ca issued certificates
## Why It Was Removed
1. **step-ca decommissioned**: The primary monitoring target (step-ca) is no longer in use
2. **Outdated codebase**: labmon was a custom tool that required maintenance
3. **Limited value**: With ACME auto-renewal, certificates should renew automatically
## Current State
ACME certificates are now issued by OpenBao PKI at `vault.home.2rjus.net:8200`. The ACME protocol handles automatic renewal, and certificates are typically renewed well before expiration.
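For reference, pointing the stock NixOS ACME machinery at a Vault/OpenBao PKI directory could look like the sketch below. The `pki` mount name and directory path are assumptions based on Vault's standard ACME endpoint layout, not confirmed from this repo's configuration:

```nix
# Sketch only: NixOS ACME defaults targeting an OpenBao PKI mount.
# The mount name "pki" and the acme/directory path are assumptions.
{ ... }:
{
  security.acme = {
    acceptTerms = true;
    defaults = {
      email = "admin@home.2rjus.net"; # illustrative address
      server = "https://vault.home.2rjus.net:8200/v1/pki/acme/directory";
    };
  };
}
```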
## Future Needs
While ACME handles renewal automatically, we should consider monitoring for:
1. **ACME renewal failures**: Alert when a certificate fails to renew
- Could monitor ACME client logs (via Loki queries)
- Could check certificate file modification times
2. **Certificate expiration as backup**: Even with auto-renewal, a last-resort alert for certificates approaching expiration would catch renewal failures
3. **Certificate transparency**: Monitor for unexpected certificate issuance
### Potential Solutions
1. **Prometheus blackbox_exporter**: Can probe TLS endpoints and export certificate expiration metrics
- `probe_ssl_earliest_cert_expiry` metric
- Already a standard tool, well-maintained
2. **Custom Loki alerting**: Query ACME service logs for renewal failures
- Works with existing infrastructure
- No additional services needed
3. **Node-exporter textfile collector**: Script that checks local certificate files and writes expiration metrics
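Option 1 above could be sketched with the stock NixOS blackbox_exporter module. This is an untested outline; the module definition follows blackbox_exporter's documented TCP/TLS prober, and the probe targets would be supplied by the Prometheus scrape config:

```nix
# Sketch of the blackbox_exporter approach: a TLS-enabled TCP probe
# module exporting probe_ssl_earliest_cert_expiry for scraped targets.
{ pkgs, ... }:
{
  services.prometheus.exporters.blackbox = {
    enable = true;
    configFile = pkgs.writeText "blackbox.yml" ''
      modules:
        tls_probe:
          prober: tcp
          tcp:
            tls: true
    '';
  };
}
```

An alert could then fire when `probe_ssl_earliest_cert_expiry - time()` drops below some last-resort threshold (e.g., 7 days), independently of whether ACME renewal is working.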
## Status
**Not yet implemented.** This document serves as a placeholder for future work on certificate monitoring.

View File

@@ -1,10 +1,38 @@
# Prometheus Scrape Target Labels
## Implementation Status
| Step | Status | Notes |
|------|--------|-------|
| 1. Create `homelab.host` module | ✅ Complete | `modules/homelab/host.nix` |
| 2. Update `lib/monitoring.nix` | ✅ Complete | Labels extracted and propagated |
| 3. Update Prometheus config | ✅ Complete | Uses structured static_configs |
| 4. Set metadata on hosts | ✅ Complete | All relevant hosts configured |
| 5. Update alert rules | ✅ Complete | Role-based filtering implemented |
| 6. Labels for service targets | ✅ Complete | Host labels propagated to all services |
| 7. Add hostname label | ✅ Complete | All targets have `hostname` label for easy filtering |
**Hosts with metadata configured:**
- `ns1`, `ns2`: `role = "dns"`, `labels.dns_role = "primary"/"secondary"`
- `nix-cache01`: `role = "build-host"`
- `vault01`: `role = "vault"`
- `testvm01/02/03`: `tier = "test"`
**Implementation complete.** Branch: `prometheus-scrape-target-labels`
**Query examples:**
- `{hostname="ns1"}` - all metrics from ns1 (any job/port)
- `node_cpu_seconds_total{hostname="monitoring01"}` - specific metric by hostname
- `up{role="dns"}` - all DNS servers
- `up{tier="test"}` - all test-tier hosts
---
## Goal
Add support for custom per-host labels on Prometheus scrape targets, enabling alert rules to reference host metadata (priority, role) instead of hardcoding instance names.
**Related:** This plan shares the `homelab.host` module with `docs/plans/completed/nats-deploy-service.md`, which uses the same metadata for deployment tier assignment.
## Motivation
@@ -54,12 +82,11 @@ or
## Implementation
This implementation uses a shared `homelab.host` module that provides host metadata for multiple consumers (Prometheus labels, deployment tiers, etc.). See also `docs/plans/completed/nats-deploy-service.md` which uses the same module for deployment tier assignment.
### 1. Create `homelab.host` module
**Complete.** The module is in `modules/homelab/host.nix`.
Create `modules/homelab/host.nix` with shared host metadata options:
@@ -98,6 +125,8 @@ Import this module in `modules/homelab/default.nix`.
### 2. Update `lib/monitoring.nix`
**Complete.** Labels are now extracted and propagated.
- `extractHostMonitoring` should also extract `homelab.host` values (priority, role, labels).
- Build the combined label set from `homelab.host`:
@@ -126,6 +155,8 @@ This requires grouping hosts by their label attrset and producing one `static_co
### 3. Update `services/monitoring/prometheus.nix`
**Complete.** Now uses structured static_configs output.
Change the node-exporter scrape config to use the new structured output:
```nix
@@ -138,36 +169,37 @@ static_configs = nodeExporterTargets;
### 4. Set metadata on hosts
**Complete.** All relevant hosts have metadata configured. Note: The implementation filters by `role` rather than `priority`, which matches the existing nix-cache01 configuration.
Example in `hosts/nix-cache01/configuration.nix`:
```nix
homelab.host = {
priority = "low"; # relaxed alerting thresholds
role = "build-host";
};
```
**Note:** Current implementation only sets `role = "build-host"`. Consider adding `priority = "low"` when label propagation is implemented.
Example in `hosts/ns1/configuration.nix`:
```nix
homelab.host = {
role = "dns";
labels.dns_role = "primary";
};
```
**Note:** `tier` and `priority` use defaults ("prod" and "high"), which is the intended behavior. The current ns1/ns2 configurations match this pattern.
### 5. Update alert rules
**Complete.** Updated `services/monitoring/rules.yml`:
- `high_cpu_load`: Replaced `instance!="nix-cache01..."` with `role!="build-host"` for standard hosts (15m duration) and `role="build-host"` for build hosts (2h duration).
- `unbound_low_cache_hit_ratio`: Added `dns_role="primary"` filter to only alert on the primary DNS resolver (secondary has a cold cache).
### 6. Labels for `generateScrapeConfigs` (service targets)
**Complete.** Host labels are now propagated to all auto-generated service scrape targets (unbound, homelab-deploy, nixos-exporter, etc.). This enables semantic filtering on any service metric, such as using `dns_role="primary"` with the unbound job.

flake.lock generated
View File

@@ -28,11 +28,11 @@
]
},
"locked": {
"lastModified": 1770481834,
"narHash": "sha256-Xx9BYnI0C/qgPbwr9nj6NoAdQTbYLunrdbNSaUww9oY=",
"ref": "master",
"rev": "fd0d63b103dfaf21d1c27363266590e723021c67",
"revCount": 24,
"type": "git",
"url": "https://git.t-juice.club/torjus/homelab-deploy"
},
@@ -42,27 +42,6 @@
"url": "https://git.t-juice.club/torjus/homelab-deploy" "url": "https://git.t-juice.club/torjus/homelab-deploy"
} }
}, },
"labmon": {
"inputs": {
"nixpkgs": [
"nixpkgs-unstable"
]
},
"locked": {
"lastModified": 1748983975,
"narHash": "sha256-DA5mOqxwLMj/XLb4hvBU1WtE6cuVej7PjUr8N0EZsCE=",
"ref": "master",
"rev": "040a73e891a70ff06ec7ab31d7167914129dbf7d",
"revCount": 17,
"type": "git",
"url": "https://git.t-juice.club/torjus/labmon"
},
"original": {
"ref": "master",
"type": "git",
"url": "https://git.t-juice.club/torjus/labmon"
}
},
"nixos-exporter": { "nixos-exporter": {
"inputs": { "inputs": {
"nixpkgs": [ "nixpkgs": [
@@ -119,31 +98,9 @@
"inputs": { "inputs": {
"alerttonotify": "alerttonotify", "alerttonotify": "alerttonotify",
"homelab-deploy": "homelab-deploy", "homelab-deploy": "homelab-deploy",
"labmon": "labmon",
"nixos-exporter": "nixos-exporter", "nixos-exporter": "nixos-exporter",
"nixpkgs": "nixpkgs", "nixpkgs": "nixpkgs",
"nixpkgs-unstable": "nixpkgs-unstable", "nixpkgs-unstable": "nixpkgs-unstable"
"sops-nix": "sops-nix"
}
},
"sops-nix": {
"inputs": {
"nixpkgs": [
"nixpkgs-unstable"
]
},
"locked": {
"lastModified": 1770145881,
"narHash": "sha256-ktjWTq+D5MTXQcL9N6cDZXUf9kX8JBLLBLT0ZyOTSYY=",
"owner": "Mic92",
"repo": "sops-nix",
"rev": "17eea6f3816ba6568b8c81db8a4e6ca438b30b7c",
"type": "github"
},
"original": {
"owner": "Mic92",
"repo": "sops-nix",
"type": "github"
}
}
},

View File

@@ -5,18 +5,10 @@
nixpkgs.url = "github:nixos/nixpkgs?ref=nixos-25.11";
nixpkgs-unstable.url = "github:nixos/nixpkgs?ref=nixos-unstable";
sops-nix = {
url = "github:Mic92/sops-nix";
inputs.nixpkgs.follows = "nixpkgs-unstable";
};
alerttonotify = {
url = "git+https://git.t-juice.club/torjus/alerttonotify?ref=master";
inputs.nixpkgs.follows = "nixpkgs-unstable";
};
labmon = {
url = "git+https://git.t-juice.club/torjus/labmon?ref=master";
inputs.nixpkgs.follows = "nixpkgs-unstable";
};
nixos-exporter = {
url = "git+https://git.t-juice.club/torjus/nixos-exporter";
inputs.nixpkgs.follows = "nixpkgs-unstable";
@@ -32,9 +24,7 @@
self,
nixpkgs,
nixpkgs-unstable,
sops-nix,
alerttonotify,
labmon,
nixos-exporter,
homelab-deploy,
...
@@ -50,7 +40,6 @@
commonOverlays = [
overlay-unstable
alerttonotify.overlays.default
labmon.overlays.default
];
# Common modules applied to all hosts
commonModules = [
@@ -61,7 +50,6 @@
system.configurationRevision = self.rev or self.dirtyRev or "dirty";
}
)
sops-nix.nixosModules.sops
nixos-exporter.nixosModules.default
homelab-deploy.nixosModules.default
./modules/homelab
@@ -80,7 +68,7 @@
ns1 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/ns1
@@ -89,7 +77,7 @@
ns2 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/ns2
@@ -98,7 +86,7 @@
ha1 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/ha1
@@ -107,7 +95,7 @@
template1 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/template
@@ -116,7 +104,7 @@
template2 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/template2
@@ -125,35 +113,25 @@
http-proxy = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/http-proxy
];
};
ca = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self sops-nix;
};
modules = commonModules ++ [
./hosts/ca
];
};
monitoring01 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/monitoring01
labmon.nixosModules.labmon
];
};
jelly01 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/jelly01
@@ -162,7 +140,7 @@
nix-cache01 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/nix-cache01
@@ -171,7 +149,7 @@
pgdb1 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/pgdb1
@@ -180,7 +158,7 @@
nats1 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/nats1
@@ -189,7 +167,7 @@
vault01 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/vault01
@@ -198,7 +176,7 @@
testvm01 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/testvm01
@@ -207,7 +185,7 @@
testvm02 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/testvm02
@@ -216,7 +194,7 @@
testvm03 = nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = {
inherit inputs self;
};
modules = commonModules ++ [
./hosts/testvm03

View File

@@ -1,63 +0,0 @@
{
pkgs,
...
}:
{
imports = [
../template/hardware-configuration.nix
../../system
../../common/vm
];
nixpkgs.config.allowUnfree = true;
# Use the systemd-boot EFI boot loader.
boot.loader.grub = {
enable = true;
device = "/dev/sda";
configurationLimit = 3;
};
networking.hostName = "ca";
networking.domain = "home.2rjus.net";
networking.useNetworkd = true;
networking.useDHCP = false;
services.resolved.enable = true;
networking.nameservers = [
"10.69.13.5"
"10.69.13.6"
];
systemd.network.enable = true;
systemd.network.networks."ens18" = {
matchConfig.Name = "ens18";
address = [
"10.69.13.12/24"
];
routes = [
{ Gateway = "10.69.13.1"; }
];
linkConfig.RequiredForOnline = "routable";
};
time.timeZone = "Europe/Oslo";
nix.settings.experimental-features = [
"nix-command"
"flakes"
];
nix.settings.tarball-ttl = 0;
environment.systemPackages = with pkgs; [
vim
wget
git
];
# Open ports in the firewall.
# networking.firewall.allowedTCPPorts = [ ... ];
# networking.firewall.allowedUDPPorts = [ ... ];
# Or disable the firewall altogether.
networking.firewall.enable = false;
system.stateVersion = "23.11"; # Did you read the comment?
}

View File

@@ -1,7 +0,0 @@
{ ... }:
{
imports = [
./configuration.nix
../../services/ca
];
}

View File

@@ -61,6 +61,9 @@
# Or disable the firewall altogether.
networking.firewall.enable = false;
vault.enable = true;
homelab.deploy.enable = true;
zramSwap = {
enable = true;
};

View File

@@ -100,61 +100,6 @@
];
};
labmon = {
enable = true;
settings = {
ListenAddr = ":9969";
Profiling = true;
StepMonitors = [
{
Enabled = true;
BaseURL = "https://ca.home.2rjus.net";
RootID = "3381bda8015a86b9a3cd1851439d1091890a79005e0f1f7c4301fe4bccc29d80";
}
];
TLSConnectionMonitors = [
{
Enabled = true;
Address = "ca.home.2rjus.net:443";
Verify = true;
Duration = "12h";
}
{
Enabled = true;
Address = "jelly.home.2rjus.net:443";
Verify = true;
Duration = "12h";
}
{
Enabled = true;
Address = "grafana.home.2rjus.net:443";
Verify = true;
Duration = "12h";
}
{
Enabled = true;
Address = "prometheus.home.2rjus.net:443";
Verify = true;
Duration = "12h";
}
{
Enabled = true;
Address = "alertmanager.home.2rjus.net:443";
Verify = true;
Duration = "12h";
}
{
Enabled = true;
Address = "pyroscope.home.2rjus.net:443";
Verify = true;
Duration = "12h";
}
];
};
};
# Open ports in the firewall.
# networking.firewall.allowedTCPPorts = [ ... ];
# networking.firewall.allowedUDPPorts = [ ... ];

View File

@@ -59,5 +59,8 @@
# Or disable the firewall altogether.
networking.firewall.enable = false;
vault.enable = true;
homelab.deploy.enable = true;
system.stateVersion = "23.11"; # Did you read the comment?
}

View File

@@ -59,5 +59,8 @@
# Or disable the firewall altogether.
networking.firewall.enable = false;
vault.enable = true;
homelab.deploy.enable = true;
system.stateVersion = "23.11"; # Did you read the comment?
}

View File

@@ -2,7 +2,6 @@
let
prepare-host-script = pkgs.writeShellApplication {
name = "prepare-host.sh";
runtimeInputs = [ pkgs.age ];
text = ''
echo "Removing machine-id"
rm -f /etc/machine-id || true
@@ -22,11 +21,6 @@ let
echo "Removing cache" echo "Removing cache"
rm -rf /var/cache/* || true rm -rf /var/cache/* || true
echo "Generate age key"
rm -rf /var/lib/sops-nix || true
mkdir -p /var/lib/sops-nix
age-keygen -o /var/lib/sops-nix/key.txt
'';
};
in

View File

@@ -6,22 +6,72 @@ let
text = ''
set -euo pipefail
LOKI_URL="http://monitoring01.home.2rjus.net:3100/loki/api/v1/push"
# Send a log entry to Loki with bootstrap status
# Usage: log_to_loki <stage> <message>
# Fails silently if Loki is unreachable
log_to_loki() {
local stage="$1"
local message="$2"
local timestamp_ns
timestamp_ns="$(date +%s)000000000"
local payload
payload=$(jq -n \
--arg host "$HOSTNAME" \
--arg stage "$stage" \
--arg branch "''${BRANCH:-master}" \
--arg ts "$timestamp_ns" \
--arg msg "$message" \
'{
streams: [{
stream: {
job: "bootstrap",
host: $host,
stage: $stage,
branch: $branch
},
values: [[$ts, $msg]]
}]
}')
curl -s --connect-timeout 2 --max-time 5 \
-X POST \
-H "Content-Type: application/json" \
-d "$payload" \
"$LOKI_URL" >/dev/null 2>&1 || true
}
echo "================================================================================"
echo " NIXOS BOOTSTRAP IN PROGRESS"
echo "================================================================================"
echo ""
# Read hostname set by cloud-init (from Terraform VM name via user-data)
# Cloud-init sets the system hostname from user-data.txt, so we read it from hostnamectl
HOSTNAME=$(hostnamectl hostname)
# Read git branch from environment, default to master
BRANCH="''${NIXOS_FLAKE_BRANCH:-master}"
echo "Hostname: $HOSTNAME"
echo ""
echo "Starting NixOS bootstrap for host: $HOSTNAME" echo "Starting NixOS bootstrap for host: $HOSTNAME"
log_to_loki "starting" "Bootstrap starting for $HOSTNAME (branch: $BRANCH)"
echo "Waiting for network connectivity..." echo "Waiting for network connectivity..."
# Verify we can reach the git server via HTTPS (doesn't respond to ping) # Verify we can reach the git server via HTTPS (doesn't respond to ping)
if ! curl -s --connect-timeout 5 --max-time 10 https://git.t-juice.club >/dev/null 2>&1; then if ! curl -s --connect-timeout 5 --max-time 10 https://git.t-juice.club >/dev/null 2>&1; then
echo "ERROR: Cannot reach git.t-juice.club via HTTPS" echo "ERROR: Cannot reach git.t-juice.club via HTTPS"
echo "Check network configuration and DNS settings" echo "Check network configuration and DNS settings"
log_to_loki "failed" "Network check failed - cannot reach git.t-juice.club"
exit 1
fi
echo "Network connectivity confirmed"
log_to_loki "network_ok" "Network connectivity confirmed"
# Unwrap Vault token and store AppRole credentials (if provided)
if [ -n "''${VAULT_WRAPPED_TOKEN:-}" ]; then
@@ -50,6 +100,7 @@ let
chmod 600 /var/lib/vault/approle/secret-id
echo "Vault credentials unwrapped and stored successfully"
log_to_loki "vault_ok" "Vault credentials unwrapped and stored"
else
echo "WARNING: Failed to unwrap Vault token"
if [ -n "$UNWRAP_RESPONSE" ]; then
@@ -63,17 +114,17 @@ let
echo "To regenerate token, run: create-host --hostname $HOSTNAME --force" echo "To regenerate token, run: create-host --hostname $HOSTNAME --force"
echo "" echo ""
echo "Vault secrets will not be available, but continuing bootstrap..." echo "Vault secrets will not be available, but continuing bootstrap..."
log_to_loki "vault_warn" "Failed to unwrap Vault token - continuing without secrets"
fi
else
echo "No Vault wrapped token provided (VAULT_WRAPPED_TOKEN not set)"
echo "Skipping Vault credential setup"
log_to_loki "vault_skip" "No Vault token provided - skipping credential setup"
fi
echo "Fetching and building NixOS configuration from flake..."
# Read git branch from environment, default to master
BRANCH="''${NIXOS_FLAKE_BRANCH:-master}"
echo "Using git branch: $BRANCH" echo "Using git branch: $BRANCH"
log_to_loki "building" "Starting nixos-rebuild boot"
# Build and activate the host-specific configuration # Build and activate the host-specific configuration
FLAKE_URL="git+https://git.t-juice.club/torjus/nixos-servers.git?ref=$BRANCH#''${HOSTNAME}" FLAKE_URL="git+https://git.t-juice.club/torjus/nixos-servers.git?ref=$BRANCH#''${HOSTNAME}"
@@ -81,18 +132,30 @@ let
 if nixos-rebuild boot --flake "$FLAKE_URL"; then
   echo "Successfully built configuration for $HOSTNAME"
   echo "Rebooting into new configuration..."
+  log_to_loki "success" "Build successful - rebooting into new configuration"
   sleep 2
   systemctl reboot
 else
   echo "ERROR: nixos-rebuild failed for $HOSTNAME"
   echo "Check that flake has configuration for this hostname"
   echo "Manual intervention required - system will not reboot"
+  log_to_loki "failed" "nixos-rebuild failed - manual intervention required"
   exit 1
 fi
 '';
 };
 in
 {
+  # Custom greeting line to indicate this is a bootstrap image
+  services.getty.greetingLine = lib.mkForce ''
+    ================================================================================
+      BOOTSTRAP IMAGE - NixOS \V (\l)
+    ================================================================================
+      Bootstrap service is running. Logs are displayed on tty1.
+      Check status: journalctl -fu nixos-bootstrap
+  '';

 systemd.services."nixos-bootstrap" = {
   description = "Bootstrap NixOS configuration from flake on first boot";
@@ -107,12 +170,12 @@ in
 serviceConfig = {
   Type = "oneshot";
   RemainAfterExit = true;
-  ExecStart = "${bootstrap-script}/bin/nixos-bootstrap";
+  ExecStart = lib.getExe bootstrap-script;
   # Read environment variables from cloud-init (set by cloud-init write_files)
   EnvironmentFile = "-/run/cloud-init-env";
-  # Logging to journald
+  # Log to journal and console
   StandardOutput = "journal+console";
   StandardError = "journal+console";
 };

View File

@@ -2,7 +2,6 @@
 let
   prepare-host-script = pkgs.writeShellApplication {
     name = "prepare-host.sh";
-    runtimeInputs = [ pkgs.age ];
     text = ''
       echo "Removing machine-id"
       rm -f /etc/machine-id || true
@@ -22,11 +21,6 @@ let
       echo "Removing cache"
       rm -rf /var/cache/* || true
-      echo "Generate age key"
-      rm -rf /var/lib/sops-nix || true
-      mkdir -p /var/lib/sops-nix
-      age-keygen -o /var/lib/sops-nix/key.txt
     '';
   };
 in

View File

@@ -62,6 +62,39 @@
     git
   ];

+  # Test nginx with ACME certificate from OpenBao PKI
+  services.nginx = {
+    enable = true;
+    virtualHosts."testvm01.home.2rjus.net" = {
+      forceSSL = true;
+      enableACME = true;
+      locations."/" = {
+        root = pkgs.writeTextDir "index.html" ''
+          <!DOCTYPE html>
+          <html>
+          <head>
+            <title>testvm01 - ACME Test</title>
+            <style>
+              body { font-family: monospace; max-width: 600px; margin: 50px auto; padding: 20px; }
+              .joke { background: #f0f0f0; padding: 20px; border-radius: 8px; margin: 20px 0; }
+              .punchline { margin-top: 15px; font-weight: bold; }
+            </style>
+          </head>
+          <body>
+            <h1>OpenBao PKI ACME Test</h1>
+            <p>If you're seeing this over HTTPS, the migration worked!</p>
+            <div class="joke">
+              <p>Why do programmers prefer dark mode?</p>
+              <p class="punchline">Because light attracts bugs.</p>
+            </div>
+            <p><small>Certificate issued by: vault.home.2rjus.net</small></p>
+          </body>
+          </html>
+        '';
+      };
+    };
+  };

   # Open ports in the firewall.
   # networking.firewall.allowedTCPPorts = [ ... ];
   # networking.firewall.allowedUDPPorts = [ ... ];

View File

@@ -62,6 +62,16 @@
   # Or disable the firewall altogether.
   networking.firewall.enable = false;

+  # Vault fetches secrets from itself (after unseal)
+  vault.enable = true;
+  homelab.deploy.enable = true;
+  # Ensure vault-secret services wait for openbao to be unsealed
+  systemd.services.vault-secret-homelab-deploy-nkey = {
+    after = [ "openbao.service" ];
+    wants = [ "openbao.service" ];
+  };

   system.stateVersion = "25.11"; # Did you read the comment?
 }

View File

@@ -21,6 +21,7 @@ let
 cfg = hostConfig.config;
 monConfig = (cfg.homelab or { }).monitoring or { enable = true; scrapeTargets = [ ]; };
 dnsConfig = (cfg.homelab or { }).dns or { enable = true; };
+hostConfig' = (cfg.homelab or { }).host or { };
 hostname = cfg.networking.hostName;
 networks = cfg.systemd.network.networks or { };
@@ -49,20 +50,73 @@ let
   inherit hostname;
   ip = extractIP firstAddress;
   scrapeTargets = monConfig.scrapeTargets or [ ];
+  # Host metadata for label propagation
+  tier = hostConfig'.tier or "prod";
+  priority = hostConfig'.priority or "high";
+  role = hostConfig'.role or null;
+  labels = hostConfig'.labels or { };
 };

+# Build effective labels for a host
+# Always includes hostname; only includes tier/priority/role if non-default
+buildEffectiveLabels = host:
+  { hostname = host.hostname; }
+  // (lib.optionalAttrs (host.tier != "prod") { tier = host.tier; })
+  // (lib.optionalAttrs (host.priority != "high") { priority = host.priority; })
+  // (lib.optionalAttrs (host.role != null) { role = host.role; })
+  // host.labels;

 # Generate node-exporter targets from all flake hosts
+# Returns a list of static_configs entries with labels
 generateNodeExporterTargets = self: externalTargets:
   let
     nixosConfigs = self.nixosConfigurations or { };
     hostList = lib.filter (x: x != null) (
       lib.mapAttrsToList extractHostMonitoring nixosConfigs
     );
-    flakeTargets = map (host: "${host.hostname}.home.2rjus.net:9100") hostList;
+    # Extract hostname from a target string like "gunter.home.2rjus.net:9100"
+    extractHostnameFromTarget = target:
+      builtins.head (lib.splitString "." target);
+    # Build target entries with labels for each host
+    flakeEntries = map
+      (host: {
+        target = "${host.hostname}.home.2rjus.net:9100";
+        labels = buildEffectiveLabels host;
+      })
+      hostList;
+    # External targets get hostname extracted from the target string
+    externalEntries = map
+      (target: {
+        inherit target;
+        labels = { hostname = extractHostnameFromTarget target; };
+      })
+      (externalTargets.nodeExporter or [ ]);
+    allEntries = flakeEntries ++ externalEntries;
+    # Group entries by their label set for efficient static_configs
+    # Convert labels attrset to a string key for grouping
+    labelKey = entry: builtins.toJSON entry.labels;
+    grouped = lib.groupBy labelKey allEntries;
+    # Convert groups to static_configs format
+    # Every flake host now has at least a hostname label
+    staticConfigs = lib.mapAttrsToList
+      (key: entries:
+        let
+          labels = (builtins.head entries).labels;
+        in
+        { targets = map (e: e.target) entries; labels = labels; }
+      )
+      grouped;
   in
-  flakeTargets ++ (externalTargets.nodeExporter or [ ]);
+  staticConfigs;
 # Generate scrape configs from all flake hosts and external targets
+# Host labels are propagated to service targets for semantic alert filtering
 generateScrapeConfigs = self: externalTargets:
   let
     nixosConfigs = self.nixosConfigurations or { };
@@ -70,13 +124,14 @@ let
       lib.mapAttrsToList extractHostMonitoring nixosConfigs
     );
-    # Collect all scrapeTargets from all hosts, grouped by job_name
+    # Collect all scrapeTargets from all hosts, including host labels
     allTargets = lib.flatten (map
       (host:
         map
           (target: {
             inherit (target) job_name port metrics_path scheme scrape_interval honor_labels;
             hostname = host.hostname;
+            hostLabels = buildEffectiveLabels host;
           })
           host.scrapeTargets
       )
@@ -87,22 +142,32 @@ let
     grouped = lib.groupBy (t: t.job_name) allTargets;

     # Generate a scrape config for each job
+    # Within each job, group targets by their host labels for efficient static_configs
     flakeScrapeConfigs = lib.mapAttrsToList
       (jobName: targets:
         let
           first = builtins.head targets;
-          targetAddrs = map
-            (t:
-              let
-                portStr = toString t.port;
-              in
-              "${t.hostname}.home.2rjus.net:${portStr}")
-            targets;
+          # Group targets within this job by their host labels
+          labelKey = t: builtins.toJSON t.hostLabels;
+          groupedByLabels = lib.groupBy labelKey targets;
+          # Every flake host now has at least a hostname label
+          staticConfigs = lib.mapAttrsToList
+            (key: labelTargets:
+              let
+                labels = (builtins.head labelTargets).hostLabels;
+                targetAddrs = map
+                  (t: "${t.hostname}.home.2rjus.net:${toString t.port}")
+                  labelTargets;
+              in
+              { targets = targetAddrs; labels = labels; }
+            )
+            groupedByLabels;
           config = {
             job_name = jobName;
-            static_configs = [{
-              targets = targetAddrs;
-            }];
+            static_configs = staticConfigs;
           }
           // (lib.optionalAttrs (first.metrics_path != "/metrics") {
             metrics_path = first.metrics_path;
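Both functions above use the same trick: serialize each target's label attrset with `builtins.toJSON` and use that string as a `lib.groupBy` key, so targets with identical label sets collapse into one `static_configs` entry. The same idea in Python (a sketch mirroring the Nix logic, not code from this repo; `sort_keys=True` stands in for Nix's deterministic attrset key order):

```python
import json
from collections import defaultdict


def group_targets(entries):
    """Group scrape targets that share an identical label set, one
    static_configs entry per distinct label set."""
    groups = defaultdict(list)
    for entry in entries:
        # JSON-serialized labels act as a hashable grouping key
        key = json.dumps(entry["labels"], sort_keys=True)
        groups[key].append(entry)
    return [
        {"targets": [e["target"] for e in group], "labels": group[0]["labels"]}
        for group in groups.values()
    ]


entries = [
    {"target": "ns1.home.2rjus.net:9100", "labels": {"hostname": "ns1"}},
    {"target": "testvm01.home.2rjus.net:9100",
     "labels": {"hostname": "testvm01", "tier": "test"}},
]
static_configs = group_targets(entries)
```

Note that since every effective label set includes `hostname`, each host lands in its own group; the grouping only pays off for targets that genuinely share labels (or, in Prometheus terms, it keeps the labels attached to the right targets within one `static_configs` list).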

View File

@@ -99,3 +99,48 @@
 - name: Display success message
   ansible.builtin.debug:
     msg: "Template VM {{ template_vmid }} created successfully on {{ storage }}"

+- name: Update Terraform template name
+  hosts: localhost
+  gather_facts: false
+  vars:
+    terraform_dir: "{{ playbook_dir }}/../terraform"
+  tasks:
+    - name: Get image filename from earlier play
+      ansible.builtin.set_fact:
+        image_filename: "{{ hostvars['localhost']['image_filename'] }}"
+    - name: Extract template name from image filename
+      ansible.builtin.set_fact:
+        new_template_name: "{{ image_filename | regex_replace('\\.vma\\.zst$', '') | regex_replace('^vzdump-qemu-', '') }}"
+    - name: Read current Terraform variables file
+      ansible.builtin.slurp:
+        src: "{{ terraform_dir }}/variables.tf"
+      register: variables_tf_content
+    - name: Extract current template name from variables.tf
+      ansible.builtin.set_fact:
+        current_template_name: "{{ (variables_tf_content.content | b64decode) | regex_search('variable \"default_template_name\"[^}]+default\\s*=\\s*\"([^\"]+)\"', '\\1') | first }}"
+    - name: Check if template name has changed
+      ansible.builtin.set_fact:
+        template_name_changed: "{{ current_template_name != new_template_name }}"
+    - name: Display template name status
+      ansible.builtin.debug:
+        msg: "Template name: {{ current_template_name }} -> {{ new_template_name }} ({{ 'changed' if template_name_changed else 'unchanged' }})"
+    - name: Update default_template_name in variables.tf
+      ansible.builtin.replace:
+        path: "{{ terraform_dir }}/variables.tf"
+        regexp: '(variable "default_template_name"[^}]+default\s*=\s*)"[^"]+"'
+        replace: '\1"{{ new_template_name }}"'
+      when: template_name_changed
+    - name: Display update result
+      ansible.builtin.debug:
+        msg: "Updated terraform/variables.tf with new template name: {{ new_template_name }}"
+      when: template_name_changed

View File

@@ -5,7 +5,6 @@ set -euo pipefail
 HOSTS=(
   "ns1"
   "ns2"
-  "ca"
   "ha1"
   "http-proxy"
   "jelly01"

View File

@@ -314,11 +314,10 @@ def handle_remove(
     for secret_path in host_secrets:
         console.print(f"  [white]vault kv delete secret/{secret_path}[/white]")

-    # Warn about secrets directory
+    # Warn about legacy secrets directory
     if secrets_exist:
-        console.print(f"\n[yellow]⚠️ Warning: secrets/{hostname}/ directory exists and will NOT be deleted[/yellow]")
+        console.print(f"\n[yellow]⚠️ Warning: secrets/{hostname}/ directory exists (legacy SOPS)[/yellow]")
         console.print(f"  Manually remove if no longer needed: [white]rm -rf secrets/{hostname}/[/white]")
-        console.print(f"  Also update .sops.yaml to remove the host's age key")

     # Exit if dry run
     if dry_run:

View File

@@ -219,7 +219,7 @@ def update_flake_nix(config: HostConfig, repo_root: Path, force: bool = False) -
 new_entry = f"""  {config.hostname} = nixpkgs.lib.nixosSystem {{
     inherit system;
     specialArgs = {{
-      inherit inputs self sops-nix;
+      inherit inputs self;
     }};
     modules = commonModules ++ [
       ./hosts/{config.hostname}

View File

@@ -1,24 +0,0 @@
{
"data": "ENC[AES256_GCM,data:TgGIuklFPUSCBosD86NFnkAtRvYijQNQP4vvTkKu3dRAOjdDa2li5djZDUS4NEEPEihpOcMXqHBb+ABk3LmoU5nLmsKCeylUp7+DhcGi9f3xw2h1zbHV37mt40OVLTF3cYufRdydIkCGQA3td3q1ue/wCna2ewe73xwGg5j6ZVJCZAtW4VCNZM+rcG+YxPUC0gmBH59+O0VSrZrkvSnifbr+K0dGwg4i17KwAukI4Ac7YMkQoeuAPXq38+ZftlRx4tq9xBUko6wpPY9zOaFzeagWYMF0n1UYqDt+/3XZI/mukPhJc9tzbWneqgkQBOx3OiDwrNglCHvEpnb+bZePIRLOnNHd1ShETgBqhsHGp9OAwwbAt4tO+HFpCQtVz7s2LWQFLbWiN0SCGzYUkFGCgoXae5H58lxFav8=,iv:UzaWlJ+M+VQx3CcPSGbFZh5/rGbKpS2Rq2XVZAIDFiQ=,tag:F3waoAMuEKTvN2xANReSww==,type:str]",
"sops": {
"kms": null,
"gcp_kms": null,
"azure_kv": null,
"hc_vault": null,
"age": [
{
"recipient": "age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBpRGZSVHRSMGlyazAwQU5j\nd1o1L0Y1ckhQMkh4MVZiRmZlR2ozcmdsUW1vCk4xZ1ZibDBrUWZhYmxVVjBUczRn\nYlJtUWF3Y1lHWG56NkhmK2JOUHVGajQKLS0tIDN2S2doQURpTis2U3lWV0NxdWEz\ncjNZaEl1dEQwOXhsNE9xbHhYUzNTV3cKVmVIe05JwgXKSku7AJmrujYXrbBSbpBJ\nnqCuDIhok1w/fiff+XXn8udbgPVq5bC2SOhHbtVxImgBCFzrj5hQ0A==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1288993th0ge00reg4zqueyvmkrsvk829cs068eekjqfdprsrkeqql7mljk",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA4V3NaUEdvMmJvakQ0L1F0\nUnkvQ2F5dEVlZ2pMdlBZcjJac0tERnF5ZWljCmFrdU1NZ29jMkJ1a1ZLdURmVWI0\ncm1vNytFVzZjbVY2aVd2N3laMWNRNFEKLS0tIGgzOTFZY0lxc0JyVmd5cFBlNkRr\nVDBWc0t4c3pVV3RhSTB1UUVpNHd6NUkKNn6Sxb5oxP7iWqTF1+X9nOiYum3U+Rzk\nkryxVnf9EvQIVIFKDaTb+yAEO8otjqj+C4mHA9fannnNEJduOiPWOg==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2024-11-30T13:18:08Z",
"mac": "ENC[AES256_GCM,data:9R9RJzPMr9Bv8aeCDxhExTfbr+R2hjap6FGSk5QxBdbNpOcNS78ica0CLEmkAYVAfjmx/X2jC5ZnsAueSPUK7nAgNX2gJXbUTpY0F+oKt35GJziLrFLl3u/ahpF9lQ50EL9OqqgS+igDqtodJhKme5DXH5/GXQHhz++O3VZkR78=,iv:XgN3PiowiEosi2DmrjP82HhJMvnwaV530tsBE8GQfjs=,tag:U243BrtH7H/DU9LcjN/MMg==,type:str]",
"pgp": null,
"unencrypted_suffix": "_unencrypted",
"version": "3.9.1"
}
}

View File

@@ -1,24 +0,0 @@
{
"data": "ENC[AES256_GCM,data:5AePh5uXcUseYBGWvlztgmg8mGBGy3ngKRa6+QxOaT0/fzSB1pKkaMtZJo76tV9wwjdL6/b6VVUI7GIaCBD5kgdZuA8RdBTXguHyjjdxAlI9xcrQaWWdATd8JJt+eQp/m2Y+0dioyXKaDV2ukI3GtHYjp/ixMoHHWEocnEEb40wG6c3CZcvsLWJvKTkFc2OvcjcU2RTfuNlYtEETidiD9iC/dtCakNQHmLP1UFYgcn0ebXBKmlqD6+x2o7BVT1SLwVCyGNvH3eKA2AWvddZChnhaNCUIXcRwBFCgS8lPs4iXhAhly+nwuj7ssFpuu3sjm5pq196tRS8WQl2iNUEJ2tzoOpceg1kZZ7KHX3wCbdBlCRqhy9Q4JMvWPDssO+zz2aU21+BDEySDTCnTYX9Hu2/iFvZejt++mKY=,iv:u/Ukye0BAj2ka++AA72W8WfXJAZZ/YJ3RC/aydxdoUc=,tag:ihTP5bCCigWEPcLFaYOhMA==,type:str]",
"sops": {
"kms": null,
"gcp_kms": null,
"azure_kv": null,
"hc_vault": null,
"age": [
{
"recipient": "age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB0VElDNHArZXlXa2JRQjd0\nQmVIbGpPWk43NDdiTkFtcEd1bDhRdXJWOUY0CndITHdKTFNJQXFOVFdyUGNtQ09k\nN2hnQmFYR0ZORWtxcUN0ZFhsM0U3N2cKLS0tIFh1TTBpMjFIZ2NYM1QxeDRjYlJx\nYkdrUDZmMUpGbjk3REJCVVRpeFk5Z28KJcia0Bk+3ZoifZnRLwqAko526ODPnkSS\nzymtOj/QYTA0++NP3B1aScIyhWITMEZX1iSoWDmgHj8ZQoNMdkM7AQ==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1288993th0ge00reg4zqueyvmkrsvk829cs068eekjqfdprsrkeqql7mljk",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBZNlNHRWNEcUZGNXNBMDFR\nTzE5RnNMQUMvU1k2OS9XMlpvUktMRzQ5RmxvCnlCS3lzRVpGUHJLRGZ6SWZ2ZktR\na3l0TVN2NUlRVEQwRHByYkNEMDQyWUkKLS0tIEh3RjBWT3c5K2RWeDRjWFpsU1lP\ncStqY2xta3RSNkR6Vkt5YXhYUTZmbDgKvVKmZc8S/RwurJGsGiJ5LhM4waLO9B9k\n2cawxHmcYM3KfXDFwp9UZWhIwF7SRkG56ZE4OjGI3sOL+74ixnePxA==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2024-11-30T13:18:16Z",
"mac": "ENC[AES256_GCM,data:JwjbQ129cYCBNA5Fb8lN9rW7/y4wuVOqLeajIMcYyCzlBcjzCZAV1DKN5n75xMamb/hb1AUkmtp/K82PKM0Vg5X4/lpWTUZXZOzn/TrwHx+yqlJjL9mUdGuHnSY5DwME38Dde3UxdtUa0CVgQOxvMIycW27w8+8NNfO2zxGxkzc=,iv:ZMZASOsqXZOb0NkBqG3GGaqqKgQdjZLiku2yU5QonB8=,tag:/lb/HMxsYOV5XX/5kWnFHA==,type:str]",
"pgp": null,
"unencrypted_suffix": "_unencrypted",
"version": "3.9.1"
}
}

View File

@@ -1,24 +0,0 @@
{
"data": "ENC[AES256_GCM,data:vqQ3HwSmuDlI4UwraLWvwkBSj9zTFeNEWI1xzhVrO/gpx8+WBZOt2F0J7/LSTGAWsWW/9Gov+XXXAOtfnKfjYVzizyT/jE8EQwMuItWiFEVA6hohgwtsk7YKJjXdJIxmiv+WKs73gWb0uFVGh1ArMzsVkGPj1W1AKMFAneDPgsfSCy9aVOMuF8zQwypFC8eaxqOQhLpiN2ncRm8e7khwGurSgYfHDgFghaDr8torgUrZTOPNFk+LEdxB3WcC17+4a8ZyuBapmYdRTrP73czTAuxOF8lMwddJhO99SF7nWuOYVF1FOKLGtK04oKci5/xRIzvWo3I0pGajkxtuF5CyWbd1KblcPfBALIU/J5hU/puGJ7M2sE/qsg/4kaTFxnhq32rPZj291jFb4evDdOhVodfC1axOQUbzAC0=,iv:yOeQ384ikqgDqfthl7GIVSIMNA/n0BYTSIqFN3T9MAY=,tag:Y6nhOCrkWx7MnVpEeKN0Jg==,type:str]",
"sops": {
"kms": null,
"gcp_kms": null,
"azure_kv": null,
"hc_vault": null,
"age": [
{
"recipient": "age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBFTjRMWlNtYVQ2WnJEaGFN\nVFU2TXRTK2FHREpqREhOWHBKemxNc2U4WW44CnV4OWlBdXlFUWhJYi9jTTRuUWJV\nOWFPV2I4UytDRFo3blN3bUtFQ1NGU0kKLS0tIGp2VHlDc1JMMUdDUjlNNDFwUUxj\nVnhHbCtrNVNpZXo0K2dDVU5YTVJJUEkKk9mVTbzQVGZo3RKDLPDwtENknh+in1Q5\njf4DA1cGDDNzcEIWOOYyS+1mzT9WY8gU0hWqihX/bAx7CVsNUallZw==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1288993th0ge00reg4zqueyvmkrsvk829cs068eekjqfdprsrkeqql7mljk",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBrVFNwUGpkOUhkUXFWWERq\nMVdueC9VSE9KbGZkenBVK3NRMjRNVXVmcVRRCjNLa0QzbWVCQks3ZmV3eFVjcEp0\nRmxDSlZIZU1IbEdnbE83WlkxV3VZV1EKLS0tICtsRXArajQ4Um9mNEV5OWZBdS85\nVGFSU2wwODZ3Zm44M3pWcTdDV1dxejQKM2BK5Axb1cF344ea89gkzCLzEX6j4amK\nzxf+boBK7JUX7F6QaPB0sRU8J4Cei9mALz96C8xNHjX00KcD3O2QOA==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2024-11-30T13:18:20Z",
"mac": "ENC[AES256_GCM,data:AllgcWxHnr3igPi/JbfJCbEa6hKtmILnAjiaMojRZNO4p6zYSoF0s8lo9XX05/vIrFUo+YaCtsuacv+kfz9f6vQafPn7Vulbh6PeH1VlAmzyVfJOTmHP3YX8ic3uM56A4+III1jOERCFOIcc/CKsnRLFhLCRQRMgtgT0hTl5aPw=,iv:60dOYhoUTu1HIHzY36eJeRZ66/v6JmRRpIW99W2D+CI=,tag:F7nLSFm933K5M+JE4IvNYw==,type:str]",
"pgp": null,
"unencrypted_suffix": "_unencrypted",
"version": "3.9.1"
}
}

View File

@@ -1,24 +0,0 @@
{
"data": "ENC[AES256_GCM,data:YRdPrTLQH0xdWiIzOyjfEGpvfmuj6me6GzZZcauh9bUUywyA1ranDnWqbJYgawQQxIXsq9dhXD0uco+7mmXq2598kF1NI9jh6uLf3k0H494zZOalRBv/k8u9oJDLIiVAkg9eNNLbGX0PMZr/Yue/qdkuXx2Hg9E7bQJwpU/NXF+jKKs+3NmKT5NBlegwAzUs530D4DUoaq5AhvVvdC6a1UcE+KJzQ8pRiz1GjFIxAB7qX+GVwa3yNdLgo2tlAbOzjGtaDfJnhZIHSNEq+4TEhjlF9lCmFCGFDUVupvMOWs0kBywJEzIrDmxmvGHlPj3FfyytPb7qhlsOXDDDS67IoiwluKOnw+sALAG0Iv9LMrDZ3z8MXeEGvRWu0VDMuGXN905/9kGx/A40mPjcfnZvI+qSRIKjER5R8aU=,iv:qiP2Ml59AnK24MBbs7N/HqJIylf+fXGqJAo2N8iFNB0=,tag:0Dj5fVs6OB07kvV4qzuvfw==,type:str]",
"sops": {
"kms": null,
"gcp_kms": null,
"azure_kv": null,
"hc_vault": null,
"age": [
{
"recipient": "age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBBUFlvNmRNYUlJSHZYUkpJ\nMEloQXFSdENIWGJVVDNIOVY5MS9SYWRoL0FrCnRJc05wZUZBSDRvMHNUUEhNRXQ4\nTWhYOUp6YUNGZFNWUFRrSmlJM1c4aWcKLS0tIFc1b3NlSEo2eFJhdDgwejRqcHlT\nZE5wN01uaE04cTlIbVJMVWQvQ1pXajgKQ1n6UmP7LEBsnIBXVc0BceOqvwCqQzBP\ncI8C5Io4ILgMjY4dr6sd0SeJG6mfDdiMA+k7c6jqoyZCW/Pkd3LANQ==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1288993th0ge00reg4zqueyvmkrsvk829cs068eekjqfdprsrkeqql7mljk",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBtM2lyeXVzdE9nL1k5L3dC\nTkl2MjhMb1FKMFdCeXFPSmNST0pvOTRUaEVvCmdwMnhjSFFHVFhidmIySS9jMEJu\nNTJpRjdFOWpZZ3ZuZFJwZUUrRFU5NnMKLS0tIDJ1UjdVQkpMNm5Pd01JRnZNOEtr\nb1lpMlBkVHpiT2lYdWtZaUQrRW1HUDgKq/JVMf5gdu6lNEmqY6zU2SymbT+jklem\nnUQ9yieJGF+PanutNW6BCJH8jb/fH+Y6AeJ9S+kKCB4Yi75i4d+oHg==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2024-11-30T13:18:24Z",
"mac": "ENC[AES256_GCM,data:6FJTKEdIpCm+Dz7Ua8dZOMZQFaGU0oU/HRP6ly5mWbXCv81LRbZXRBd+5RDY3z9g9nb0PXZrOMNps63F6SKxK52VfzLIOap3UGeMNQn5P4/yyFj7JQHQ5Gjcf2l2z2VZ7NhUdNoSCV/6lwjValbKtids48Q5c3sFX997ZiqIUnY=,iv:nUeyJd/v8d9v7QsLLckziD9K5qjOZKK4vOQJw/ymi18=,tag:6n5EE3oklWdVcedvB2J/zA==,type:str]",
"pgp": null,
"unencrypted_suffix": "_unencrypted",
"version": "3.9.1"
}
}

View File

@@ -1,30 +0,0 @@
ca_root_pw: ENC[AES256_GCM,data:jS5BHS9i/pOykus5aGsW+w==,iv:aQIU7uXnNKaeNXv1UjRpBoSYcRpHo8RjnvCaIw4yCqc=,tag:lkjGm5/Ve93nizqGDQ0ByA==,type:str]
sops:
kms: []
gcp_kms: []
azure_kv: []
hc_vault: []
age:
- recipient: age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA5anlORWxJalhRWkJPeGIy
OStyVG8vMFRTTEZOWHR3Q3N1UWJQbFlxV3pBCmVKQVM1SlJ2L0JOb3U3cTh3YkZ4
WHAxSUpTT1dyRHJHYVd1Qkh1ZWxwYW8KLS0tIEhXeklsSmlGaFlaaWF5L0Nodk5a
clZ4M3hFSlFqaEZ0UWREdHpTQ29GVUEKAxj5P05Ilpwis2oKFe54mJX+1LfTwfUv
2XRFOrEQbFNcK5WFu46p1mc/AAjKTeHWuvb2Yq43CO+sh1+kqKz0XA==
-----END AGE ENCRYPTED FILE-----
- recipient: age1288993th0ge00reg4zqueyvmkrsvk829cs068eekjqfdprsrkeqql7mljk
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBaS0dqQ1p4MEE2d2JaeFRx
UnB4ejhrS3hLekpqeWJhcEJGdnpzMTZDelVRCmFjVGswd3VtRUloWG1WbWY5N0s3
cG9aV2hGU3lFZkkvcUJNWE1rWUIwMmMKLS0tIG1KdlhoQzREWDhPbXVSZVBUQkdE
N1hmcEwxWXBIWkQ3a3BrdGhvUFoxbzgKX6hLoz7o/Du6ymrYwmGDkXp2XT+0+7QE
YhD5qQzGLVQSh3XM/wWExj2Ue5/gw/NqNziHezOh2r9gQljbHjG2/g==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2024-10-21T09:12:26Z"
mac: ENC[AES256_GCM,data:hfPRIXt/kZJa6lsj7rz+5xGlrWhR/LX895S2d8auP/4t3V//80YE/ofIsHeAY9M7eSFsW9ce2Vp0C/WiCQefVWNaNN7nVAwskCfQ6vTWzs23oYz4NYIeCtZggBG3uGgJxb7ZnAFUJWmLwCxkKTQyoVVnn8i/rUDIBrkilbeLWNI=,iv:lm1HVbWtAifHjqKP0D3sxRadsE9+82ugbA2x54yRBTo=,tag:averxmPLa131lJtFrNxcEA==,type:str]
pgp: []
unencrypted_suffix: _unencrypted
version: 3.9.1

View File

@@ -1,25 +0,0 @@
wg_private_key: ENC[AES256_GCM,data:DlC9txcLkTnb7FoEd249oJV/Ehcp50P8uulbE4rY/xU16fkTlnKvPmYZ7u8=,iv:IsiTzdrh+BNSVgx1mfjpMGNV2J0c88q6AoP0kHX2aGY=,tag:OqFsOIyE71SBD1mcNS/PeQ==,type:str]
sops:
age:
- recipient: age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSAzdm9HTTN1amwxQ2Z6MUQv
dGJ0cEgyaHNOZWtWSWlXNXc5bGhUdSsvVlVzCkJkc3ZQdzlBNDNxb3Avdi96bXFt
TExZY29nUDI3RE5vanh6TVBRME1Fa1UKLS0tIG8vSHdCYzkvWmJpd0hNbnRtUmtk
aVcwaFJJclZ3YUlUTTNwR2VESmVyZWMKHvKUJBDuNCqacEcRlapetCXHKRb0Js09
sqxLfEDwiN2LQQjYHZOmnMfCOt/b2rwXVKEHdTcIsXbdIdKOJwuAIQ==
-----END AGE ENCRYPTED FILE-----
- recipient: age1gq8434ku0xekqmvnseeunv83e779cg03c06gwrusnymdsr3rpufqx6vr3m
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBEeU01UTc2V1UyZXRadE5I
VE1aakVZUEZUNnJxbzJ1K3J1R3ZQdFdMbUhBCjZBMDM3ZkYvQWlyNHBtaDZRWkd4
VzY0L3l4N2RNZjJRTDJWZTZyZVhHbW8KLS0tIGVNZ0N0emVmaVRCV09jNmVKRlla
cWVSNkJqWHh5c21KcWFac2FlZTVaMTAK1UvfPgZAZYtwiONKIAo5HlaDpN+UT/S/
JfPUfjxgRQid8P20Eh/jUepxrDY8iXRZdsUMON+OoQ8mpwoAh5eN1A==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2025-05-15T18:56:55Z"
mac: ENC[AES256_GCM,data:J2kHY7pXBJZ0UuNCZOhkU11M8rDqCYNzY71NyuDRmzzRCC9ZiNIbavyQAWj2Dpk1pjGsYjXsVoZvP7ti1wTFqahpaR/YWI5gmphrzAe32b9qFVEWTC3YTnmItnY0YxQZYehYghspBjnJtfUK0BvZxSb17egpoFnvHmAq+u5dyxg=,iv:/aLg02RLuJZ1bRzZfOD74pJuE7gppCBztQvUEt557mU=,tag:toxHHBuv3WRblyc9Sth6Iw==,type:str]
unencrypted_suffix: _unencrypted
version: 3.10.2

View File

@@ -1,33 +0,0 @@
default:
user: ENC[AES256_GCM,data:4Zzjm6/e8GCKSPNivnY=,iv:Y3gR+JSH/GLYvkVu3CN4T/chM5mjGjwVPI0iMB4p1t4=,tag:auyG8iWsd/YGjDnnTC21Ew==,type:str]
password: ENC[AES256_GCM,data:9cyM9U8VnzXBBA==,iv:YMHNNUoQ9Az5+81Df07tjC+LaEWPHV6frUjd4PZrQOs=,tag:3hKR+BhLJODJp19nn4ppkA==,type:str]
verify_ssl: ENC[AES256_GCM,data:Cu5Ucf0=,iv:QFfdV7gDBQ+L2kSZZqlVqCrn9CRg5RNG5DNTFWtVf5Y=,tag:u24ZbpWA65wj3WOwqU1v+g==,type:bool]
sops:
kms: []
gcp_kms: []
azure_kv: []
hc_vault: []
age:
- recipient: age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBuUXdMMG5YaHRJbThQZW9u
RHVBbXFiSHNiUWdLTDdPajIyQjN3OGR0dGpzCm9ZVkdNWjhBakU3dVdhRU9kbU81
aDlCNzJBQ1hvQ3FnTUk2N2RWQkZpUUEKLS0tIEZacTNqa3FWc2p1NXVtRWhwVExj
cUJtYXNjb2Z4QkF4MjlidEZxSUFNa3MKAGHGksPc9oJheSlUQ3ARK5MuR5NFbPmD
kmSDSgRmzbarxT8eJnK8/K4ii3hX5E9vGOohUkyc03w4ENsh/dw43g==
-----END AGE ENCRYPTED FILE-----
- recipient: age1vpns76ykll8jgdlu3h05cur4ew2t3k7u03kxdg8y6ypfhsfhq9fqyurjey
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBOVGhvdGE5Mzl0ckhBM21D
RXJwb09OS25PMGViblViM21wTVZiZWhtWmhFCnAzL1NqeUVyOGZFVDFvdXFPbklQ
ZkJPWDVIdUdCdjZGUjcrcmtvak5CWG8KLS0tIDhLUHJNN2VqNy9CdVh0K0N0b0k1
RUE4U0E0aGxiRkF0NWdwSEIrQTU4MjgKeOU6bIWO6ke9YcG+1E3brnC21sSQxZ9b
SiG2QEnFnTeJ5P50XQoYHqUY3B0qx7nDLvyzatYEi6sDkfLXhmHGbw==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2024-12-03T16:25:12Z"
mac: ENC[AES256_GCM,data:gemq8YpMZQC+gY7lmMM3tfZh9XxL40qdGlLiB2CD4SIG49w0V6E/vY7xygt0WW0zHbhMI9yUIqlRc/PaXn+QfyxJEr3IjaT05rrWUqQAeRP9Zss74Y3NtQehh8fM8SgeyU4j2CQ9f9B/lW9IgdOW/TNgQZVXGg1vXZPEzl7AZ4A=,iv:LG5ojv3hAqk+EvFa/xEn43MBqL457uKFDE3dG5lSgZo=,tag:AxzcUzmdhO411Sw7Vg1itA==,type:str]
pgp: []
unencrypted_suffix: _unencrypted
version: 3.9.1

View File

@@ -1,19 +0,0 @@
{
"data": "ENC[AES256_GCM,data:P84qHFU+xQjwQGK8I1gIdcBsHrskuUg0M1nGMMaA+hFjAdFYUhdhmAN/+y0CO28=,iv:zJtk01zNMTBDQdVtZBTM34CHRaNYDkabolxh7PWGKUI=,tag:8AS80AbZJbh9B3Av3zuI1w==,type:str]",
"sops": {
"age": [
{
"recipient": "age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBkRFB6QTIyWWdwVkV4ZXNB\nWkdSdEhMc0s4cnByWVZXTGhnSWZ0MTdEUWhJCnFlOFQ5TU1hcE91azVyZXVXRCtu\nZjIxalRLYlEreGZ6ZDNoeXNPaFN4b28KLS0tIHY5WVFXN1k4NFVmUjh6VURkcEpv\ncklGcWVhdTdBRnlOdm1qM2h5SS9UUkEKq2RyxSVymDqcsZ+yiNRujDCwk1WOWYRW\nDa4TRKg3FCe7TcCEPkIaev1aBqjLg9J9c/70SYpUm6Zgeps7v5yl3A==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1w029fksjv0edrff9p7s03tgk3axecdkppqymfpwfn2nu2gsqqefqc37sxq",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSArTGVuckp2NlhMZXRNMVhO\naUV3K0h3cmZ5ZGx4Q3dJWHNqZXFJeE1kM0dFCmF4TUFUMm9mTHJlYzlYWVhNa1RH\nR29VNDIrL1IvYUpQYm5SZEYzbWhhbkkKLS0tIEJsK1dwZVdaaHpWQkpOOS90dkhx\nbGhvRXhqdFdqQmhZZmhCdmw4NUtSVG8K3z2do+/cIjAqg6EMJnubOWid1sMeTxvo\nrq6eGJ7YzdgZr2JBVtJdDRtk/KeHXu9In4efbBXwLAPIfn1pU0gm1w==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-08-21T19:08:48Z",
"mac": "ENC[AES256_GCM,data:5CkO09NIqttb4UZPB9iGym8avhTsMeUkTFTKZJlNGjgB1qWyGQNeKCa50A1+SbBCCWE5EwxoynB1so7bi8vnq7k8CPUHbiWG8rLOJSYHQcZ9Tu7ZGtpeWPcCw1zPWJ/PTBsFVeaT5/ufdx/6ut+sTtRoKHOZZtO9oStHmu/Rlfg=,iv:z9iJJlbvhgxJaART5QoCrqvrqlgoVlGj8jlndCALmKU=,tag:ldjmND4NVVQrHUldLrB4Jg==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.10.2"
}
}

View File

@@ -1,19 +0,0 @@
{
"data": "ENC[AES256_GCM,data:MQkR6FQGHK2AuhOmy2was49RY2XlLO5NwaXnUFzFo5Ata/2ufVoAj4Jvotw/dSrKL7f62A6s+2BPAyWrvACJ+pwYFlfyj3T9bNwhxwZPkEmiHEubJjWSiD6jkSW0gOxbY8ib6g/GbyF8I1cPeYr/hJD5qQ==,iv:eBL2Y3MOt9gYTETUZqsHo1D5hPOHxb4JR6Z/DFlzzqI=,tag:Qqbt39xZvQz/QhsggsArsw==,type:str]",
"sops": {
"age": [
{
"recipient": "age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSAwZzFXaEsyUkZGNFV0bVlW\nRkpPRHpUK2VwUHpOQXZCUUpoVzFGa3hycnhvCndTN0toVFdoU2E5N3V3UFhTTjU0\nNDByWTkrV0o3T295dE0zS08rVGpyQjAKLS0tIC96M0VEcWpjRk5DMjJnMFB4ZHI3\nM2Jod2x4ZzMyZm1pbDhZNTFuWGNRUlEKHs5jBSfjml09JOeKiT9vFR0Fykg6OxKG\njhFU/J2+fWB22G7dBc4PI60SNqhxIheUbGTdcz4Yp4BPL6vW3eArIw==\n-----END AGE ENCRYPTED FILE-----\n"
},
{
"recipient": "age1w029fksjv0edrff9p7s03tgk3axecdkppqymfpwfn2nu2gsqqefqc37sxq",
"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBJT3lxamcrQUpFdjZteFlF\nYUQ3aGdadGpuNXd2Z3RtZ3dQU0cvMlFUMUNRClBDR3U0OXZJU0NDamVMSlR5NitN\nYlhvNVlvUE0wRjErYzkwVHFOdGVCVjgKLS0tIEttR1BLTGpDYTRSQ0lUZmVEcnNi\nWkNaMEViUHVBcExVOEpjNE5CZHpjVkEKuX/Rf8kaB3apr1UhAnq3swS6fXiVmwm8\n7Key+SUAPNstbWbz0u6B9m1ev5QcXB2lx2/+Cm7cjW+6VE2gLHjTsQ==\n-----END AGE ENCRYPTED FILE-----\n"
}
],
"lastmodified": "2025-01-24T12:19:16Z",
"mac": "ENC[AES256_GCM,data:X8X91LVP1MMJ8ZYeSNPRO6XHN+NuswLZcHpAkbvoY+E9aTteO8UqS+fsStbNDlpF5jz/mhdMsKElnU8Z/CIWImwolI4GGE6blKy6gyqRkn4VeZotUoXcJadYV/5COud3XP2uSTb694JyQEZnBXFNeYeiHpN0y38zLxoX8kXHFbc=,iv:fFCRfv+Y1Nt2zgJNKsxElrYcuKkATJ3A/jvheUY2IK4=,tag:hYojbMGUAQvx7I4qkO7o9w==,type:str]",
"unencrypted_suffix": "_unencrypted",
"version": "3.9.3"
}
}

View File

@@ -1,109 +0,0 @@
root_password_hash: ENC[AES256_GCM,data:wk/xEuf+qU3ezmondq9y3OIotXPI/L+TOErTjgJz58wEvQkApYkjc3bHaUTzOrmWjQBgDUENObzPmvQ8WKawUSJRVlpfOEr5TQ==,iv:I8Z3xJz3qoXBD7igx087A1fMwf8d29hQ4JEI3imRXdY=,tag:M80osQeWGG9AAA8BrMfhHA==,type:str]
ns_xfer_key: ENC[AES256_GCM,data:VFpK7GChgFeUgQm31tTvVC888bN0yt6BAnHQa6KUTg4iZGP1WL5Bx6Zp8dY=,iv:9RF1eEc7JBxBebDOKfcDjGS2U7XsHkOW/l52yIP+1LA=,tag:L6DR2QlHOfo02kzfWWCrvg==,type:str]
backup_helper_secret: ENC[AES256_GCM,data:EvXEJnDilbfALQ==,iv:Q3dkZ8Ee3qbcjcoi5GxfbaVB4uRIvkIB6ioKVV/dL2Y=,tag:T/UgZvQgYGa740Wh7D0b7Q==,type:str]
nats_nkey: ENC[AES256_GCM,data:N2CVXjdwiE7eSPUtXe+NeKSTzA9eFwK2igxaCdYsXd4Ps0/DjYb/ggnQziQzSy8viESZYjXhJ2VtNw==,iv:Xhcf5wPB01Wu0A+oMw0wzTEHATp+uN+wsaYshxIzy1w=,tag:IauTIOHqfiM75Ufml/JXbg==,type:str]
sops:
age:
- recipient: age1lznyk4ee7e7x8n92cq2n87kz9920473ks5u9jlhd3dczfzq4wamqept56u
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBuWXhzQWFmeCt1R05jREcz
Ui9HZFN5dkxHNVE0RVJGZUJUa3hKK2sxdkhBCktYcGpLeGZIQzZIV3ZZWGs3YzF1
T09sUEhPWkRkOWZFWkltQXBlM1lQV1UKLS0tIERRSlRUYW5QeW9TVjJFSmorOWNI
ZytmaEhzMjVhRXI1S0hielF0NlBrMmcK4I1PtSf7tSvSIJxWBjTnfBCO8GEFHbuZ
BkZskr5fRnWUIs72ZOGoTAVSO5ZNiBglOZ8YChl4Vz1U7bvdOCt0bw==
-----END AGE ENCRYPTED FILE-----
- recipient: age1hz2lz4k050ru3shrk5j3zk3f8azxmrp54pktw5a7nzjml4saudesx6jsl0
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBQcXM0RHlGcmZrYW4yNGZs
S1ZqQzVaYmQ4MGhGaTFMUVIwOTk5K0tZZjB3ClN0QkhVeHRrNXZHdmZWMzFBRnJ6
WTFtaWZyRmx2TitkOXkrVkFiYVd3RncKLS0tIExpeGUvY1VpODNDL2NCaUhtZkp0
cGNVZTI3UGxlNWdFWVZMd3FlS3pDR3cKBulaMeonV++pArXOg3ilgKnW/51IyT6Z
vH9HOJUix+ryEwDIcjv4aWx9pYDHthPFZUDC25kLYG91WrJFQOo2oA==
-----END AGE ENCRYPTED FILE-----
- recipient: age1w2q4gm2lrcgdzscq8du3ssyvk6qtzm4fcszc92z9ftclq23yyydqdga5um
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBabTdsZWxZQjV2TGx2YjNM
ZTgzWktqTjY0S0M3bFpNZXlDRDk5TSt3V2k0CjdWWTN0TlRlK1RpUm9xYW03MFFG
aWN4a3o4VUVnYzBDd2FrelUraWtrMTAKLS0tIE1vTGpKYkhzcWErWDRreml2QmE2
ZkNIWERKb1drdVR6MTBSTnVmdm51VEkKVNDYdyBSrUT7dUn6a4eF7ELQ2B2Pk6V9
Z5fbT75ibuyX1JO315/gl2P/FhxmlRW1K6e+04gQe2R/t/3H11Q7YQ==
-----END AGE ENCRYPTED FILE-----
- recipient: age1d2w5zece9647qwyq4vas9qyqegg96xwmg6c86440a6eg4uj6dd2qrq0w3l
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBVSFhDOFRVbnZWbVlQaG5G
U0NWekU0NzI1SlpRN0NVS1hPN210MXY3Z244CmtFemR5OUpzdlBzMHBUV3g0SFFo
eUtqNThXZDJ2b01yVVVuOFdwQVo2Qm8KLS0tIHpXRWd3OEpPRkpaVDNDTEJLMWEv
ZlZtaFpBdzF0YXFmdjNkNUR3YkxBZU0KAub+HF/OBZQR9bx/SVadZcL6Ms+NQ7yq
21HCcDTWyWHbN4ymUrIYXci1A/0tTOrQL9Mkvaz7IJh4VdHLPZrwwA==
-----END AGE ENCRYPTED FILE-----
- recipient: age1gq8434ku0xekqmvnseeunv83e779cg03c06gwrusnymdsr3rpufqx6vr3m
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBBWkhBL1NTdjFDeEhQcEgv
Z3c3Z213L2ZhWGo0Qm5Zd1A1RTBDY3plUkh3CkNWV2ZtNWkrUjB0eWFzUlVtbHlk
WTdTQjN4eDIzY0c0dyt6ajVXZ0krd1UKLS0tIHB4aEJqTTRMenV3UkFkTGEySjQ2
YVM1a3ZPdUU4T244UU0rc3hVQ3NYczQK10wug4kTjsvv/iOPWi5WrVZMOYUq4/Mf
oXS4sikXeUsqH1T2LUBjVnUieSneQVn7puYZlN+cpDQ0XdK/RZ+91A==
-----END AGE ENCRYPTED FILE-----
- recipient: age1288993th0ge00reg4zqueyvmkrsvk829cs068eekjqfdprsrkeqql7mljk
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBYcEtHbjNWRkdodUxYdHRn
MDBMU08zWDlKa0Z4cHJvc28rZk5pUjhnMjE0CmdzRmVGWDlYQ052Wm1zWnlYSFV6
dURQK3JSbThxQlg3M2ZaL1hGRzVuL0UKLS0tIEI3UGZvbEpvRS9aR2J2Tnc1YmxZ
aUY5Q2MrdHNQWDJNaGt5MWx6MVRrRVEKRPxyAekGHFMKs0Z6spVDayBA4EtPk18e
jiFc97BGVtC5IoSu4icq3ZpKOdxymnkqKEt0YP/p/JTC+8MKvTJFQw==
-----END AGE ENCRYPTED FILE-----
- recipient: age1vpns76ykll8jgdlu3h05cur4ew2t3k7u03kxdg8y6ypfhsfhq9fqyurjey
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBQL3ZMUkI1dUV1T2tTSHhn
SjhyQ3dKTytoaDBNcit1VHpwVGUzWVNpdjBnCklYZWtBYzBpcGxZSDBvM2tIZm9H
bTFjb1ZCaDkrOU1JODVBVTBTbmxFbmcKLS0tIGtGcS9kejZPZlhHRXI5QnI5Wm9Q
VjMxTDdWZEltWThKVDl0S24yWHJxZHcKgzH79zT2I7ZgyTbbbvIhLN/rEcfiomJH
oSZDFvPiXlhPgy8bRyyq3l47CVpWbUI2Y7DFXRuODpLUirt3K3TmCA==
-----END AGE ENCRYPTED FILE-----
- recipient: age1hchvlf3apn8g8jq2743pw53sd6v6ay6xu6lqk0qufrjeccan9vzsc7hdfq
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBPcm9zUm1XUkpLWm1Jb3Uw
RncveGozOW5SRThEM1Y4SFF5RDdxUEhZTUE4CjVESHE5R3JZK0krOXZDL0RHR0oy
Z3JKaEpydjRjeFFHck1ic2JTRU5yZTQKLS0tIGY2ck56eG95YnpDYlNqUDh5RVp1
U3dRYkNleUtsQU1LMWpDbitJbnRIem8K+27HRtZihG8+k7ZC33XVfuXDFjC1e8lA
kffmxp9kOEShZF3IKmAjVHFBiPXRyGk3fGPyQLmSMK2UOOfCy/a/qA==
-----END AGE ENCRYPTED FILE-----
- recipient: age1w029fksjv0edrff9p7s03tgk3axecdkppqymfpwfn2nu2gsqqefqc37sxq
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBTZHlldDdSOEhjTklCSXQr
U2pXajFwZnNqQzZOTzY5b3lkMzlyREhXRWo4CmxId2F6NkNqeHNCSWNrcUJIY0Nw
cGF6NXJaQnovK1FYSXQ2TkJSTFloTUEKLS0tIHRhWk5aZ0lDVkZaZEJobm9FTDNw
a29sZE1GL2ZQSk0vUEc1ZGhkUlpNRkEK9tfe7cNOznSKgxshd5Z6TQiNKp+XW6XH
VvPgMqMitgiDYnUPj10bYo3kqhd0xZH2IhLXMnZnqqQ0I23zfPiNaw==
-----END AGE ENCRYPTED FILE-----
- recipient: age1ha34qeksr4jeaecevqvv2afqem67eja2mvawlmrqsudch0e7fe7qtpsekv
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB5bk9NVjJNWmMxUGd3cXRx
amZ5SWJ3dHpHcnM4UHJxdmh6NnhFVmJQdldzCm95dHN3R21qSkE4Vm9VTnVPREp3
dUQyS1B4MWhhdmd3dk5LQ0htZEtpTWMKLS0tIGFaa3MxVExFYk1MY2loOFBvWm1o
L0NoRStkeW9VZVdpWlhteC8yTnRmMUkKMYjUdE1rGgVR29FnhJ5OEVjTB1Rh5Mtu
M/DvlhW3a7tZU8nDF3IgG2GE5xOXZMDO9QWGdB8zO2RJZAr3Q+YIlA==
-----END AGE ENCRYPTED FILE-----
- recipient: age1cxt8kwqzx35yuldazcc49q88qvgy9ajkz30xu0h37uw3ts97jagqgmn2ga
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBBU0xYMnhqOE0wdXdleStF
THcrY2NBQzNoRHdYTXY3ZmM5YXRZZkQ4aUZnCm9ad0IxSWxYT1JBd2RseUdVT1pi
UXBuNzFxVlN0OWNTQU5BV2NiVEV0RUUKLS0tIGJHY0dzSDczUzcrV0RpTjE0czEy
cWZMNUNlTzBRcEV5MjlRV1BsWGhoaUUKGhYaH8I0oPCfrbs7HbQKVOF/99rg3HXv
RRTXUI71/ejKIuxehOvifClQc3nUW73bWkASFQ0guUvO4R+c0xOgUg==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2025-02-11T21:18:22Z"
mac: ENC[AES256_GCM,data:5//boMp1awc/2XAkSASSCuobpkxa0E6IKf3GR8xHpMoCD30FJsCwV7PgX3fR8OuLEhOJ7UguqMNQdNqG37RMacreuDmI1J8oCFKp+3M2j4kCbXaEo8bw7WAtyjUez+SAXKzZWYmBibH0KOy6jdt+v0fdgy5hMBT4IFDofYRsyD0=,iv:6pD+SLwncpmal/FR4U8It2njvaQfUzzpALBCxa0NyME=,tag:4QN8ZFjdqck5ZgulF+FtbA==,type:str]
unencrypted_suffix: _unencrypted
version: 3.9.4

View File

@@ -1,169 +0,0 @@
{ pkgs, unstable, ... }:
{
homelab.monitoring.scrapeTargets = [{
job_name = "step-ca";
port = 9000;
}];
sops.secrets."ca_root_pw" = {
sopsFile = ../../secrets/ca/secrets.yaml;
owner = "step-ca";
path = "/var/lib/step-ca/secrets/ca_root_pw";
};
sops.secrets."intermediate_ca_key" = {
sopsFile = ../../secrets/ca/keys/intermediate_ca_key;
format = "binary";
owner = "step-ca";
path = "/var/lib/step-ca/secrets/intermediate_ca_key";
};
sops.secrets."root_ca_key" = {
sopsFile = ../../secrets/ca/keys/root_ca_key;
format = "binary";
owner = "step-ca";
path = "/var/lib/step-ca/secrets/root_ca_key";
};
sops.secrets."ssh_host_ca_key" = {
sopsFile = ../../secrets/ca/keys/ssh_host_ca_key;
format = "binary";
owner = "step-ca";
path = "/var/lib/step-ca/secrets/ssh_host_ca_key";
};
sops.secrets."ssh_user_ca_key" = {
sopsFile = ../../secrets/ca/keys/ssh_user_ca_key;
format = "binary";
owner = "step-ca";
path = "/var/lib/step-ca/secrets/ssh_user_ca_key";
};
services.step-ca = {
enable = true;
package = pkgs.step-ca;
intermediatePasswordFile = "/var/lib/step-ca/secrets/ca_root_pw";
address = "0.0.0.0";
port = 443;
settings = {
metricsAddress = ":9000";
authority = {
provisioners = [
{
claims = {
enableSSHCA = true;
maxTLSCertDuration = "3600h";
defaultTLSCertDuration = "48h";
};
encryptedKey = "eyJhbGciOiJQQkVTMi1IUzI1NitBMTI4S1ciLCJjdHkiOiJqd2sranNvbiIsImVuYyI6IkEyNTZHQ00iLCJwMmMiOjYwMDAwMCwicDJzIjoiY1lWOFJPb3lteXFLMWpzcS1WM1ZXQSJ9.WS8tPK-Q4gtnSsw7MhpTzYT_oi-SQx-CsRLh7KwdZnpACtd4YbcOYg.zeyDkmKRx8BIp-eB.OQ8c-KDW07gqJFtEMqHacRBkttrbJRRz0sYR47vQWDCoWhodaXsxM_Bj2pGvUrR26ij1t7irDeypnJoh6WXvUg3n_JaIUL4HgTwKSBrXZKTscXmY7YVmRMionhAb6oS9Jgus9K4QcFDHacC9_WgtGI7dnu3m0G7c-9Ur9dcDfROfyrnAByJp1rSZMzvriQr4t9bNYjDa8E8yu9zq6aAQqF0Xg_AxwiqYqesT-sdcfrxKS61appApRgPlAhW-uuzyY0wlWtsiyLaGlWM7WMfKdHsq-VqcVrI7Gi2i77vi7OqPEberqSt8D04tIri9S_sArKqWEDnBJsL07CC41IY.CqtYfbSa_wlmIsKgNj5u7g";
key = {
alg = "ES256";
crv = "P-256";
kid = "CIjtIe7FNhsNQe1qKGD9Rpj-lrf2ExyTYCXAOd3YDjE";
kty = "EC";
use = "sig";
x = "XRMX-BeobZ-R5-xb-E9YlaRjJUfd7JQxpscaF1NMgFo";
y = "bF9xLp5-jywRD-MugMaOGbpbniPituWSLMlXRJnUUl0";
};
name = "ca@home.2rjus.net";
type = "JWK";
}
{
name = "acme";
type = "ACME";
claims = {
maxTLSCertDuration = "3600h";
defaultTLSCertDuration = "1800h";
};
}
{
claims = {
enableSSHCA = true;
};
name = "sshpop";
type = "SSHPOP";
}
];
};
crt = "/var/lib/step-ca/certs/intermediate_ca.crt";
db = {
badgerFileLoadingMode = "";
dataSource = "/var/lib/step-ca/db";
type = "badgerv2";
};
dnsNames = [
"ca.home.2rjus.net"
"10.69.13.12"
];
federatedRoots = null;
insecureAddress = "";
key = "/var/lib/step-ca/secrets/intermediate_ca_key";
logger = {
format = "text";
};
root = "/var/lib/step-ca/certs/root_ca.crt";
ssh = {
hostKey = "/var/lib/step-ca/secrets/ssh_host_ca_key";
userKey = "/var/lib/step-ca/secrets/ssh_user_ca_key";
};
templates = {
ssh = {
host = [
{
comment = "#";
name = "sshd_config.tpl";
path = "/etc/ssh/sshd_config";
requires = [
"Certificate"
"Key"
];
template = ./templates/ssh/sshd_config.tpl;
type = "snippet";
}
{
comment = "#";
name = "ca.tpl";
path = "/etc/ssh/ca.pub";
template = ./templates/ssh/ca.tpl;
type = "snippet";
}
];
user = [
{
comment = "#";
name = "config.tpl";
path = "~/.ssh/config";
template = ./templates/ssh/config.tpl;
type = "snippet";
}
{
comment = "#";
name = "step_includes.tpl";
path = "\${STEPPATH}/ssh/includes";
template = ./templates/ssh/step_includes.tpl;
type = "prepend-line";
}
{
comment = "#";
name = "step_config.tpl";
path = "ssh/config";
template = ./templates/ssh/step_config.tpl;
type = "file";
}
{
comment = "#";
name = "known_hosts.tpl";
path = "ssh/known_hosts";
template = ./templates/ssh/known_hosts.tpl;
type = "file";
}
];
};
};
tls = {
cipherSuites = [
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
];
maxVersion = 1.3;
minVersion = 1.2;
renegotiation = false;
};
};
};
}

Binary file not shown.

View File

@@ -1,14 +0,0 @@
Host *
{{- if or .User.GOOS "none" | eq "windows" }}
{{- if .User.StepBasePath }}
Include "{{ .User.StepBasePath | replace "\\" "/" | trimPrefix "C:" }}/ssh/includes"
{{- else }}
Include "{{ .User.StepPath | replace "\\" "/" | trimPrefix "C:" }}/ssh/includes"
{{- end }}
{{- else }}
{{- if .User.StepBasePath }}
Include "{{.User.StepBasePath}}/ssh/includes"
{{- else }}
Include "{{.User.StepPath}}/ssh/includes"
{{- end }}
{{- end }}

View File

@@ -1,4 +0,0 @@
@cert-authority * {{.Step.SSH.HostKey.Type}} {{.Step.SSH.HostKey.Marshal | toString | b64enc}}
{{- range .Step.SSH.HostFederatedKeys}}
@cert-authority * {{.Type}} {{.Marshal | toString | b64enc}}
{{- end }}

View File

@@ -1,4 +0,0 @@
Match all
TrustedUserCAKeys /etc/ssh/ca.pub
HostCertificate /etc/ssh/{{.User.Certificate}}
HostKey /etc/ssh/{{.User.Key}}

View File

@@ -1,11 +0,0 @@
Match exec "step ssh check-host{{- if .User.Context }} --context {{ .User.Context }}{{- end }} %h"
{{- if .User.User }}
User {{.User.User}}
{{- end }}
{{- if or .User.GOOS "none" | eq "windows" }}
UserKnownHostsFile "{{.User.StepPath}}\ssh\known_hosts"
ProxyCommand C:\Windows\System32\cmd.exe /c step ssh proxycommand{{- if .User.Context }} --context {{ .User.Context }}{{- end }}{{- if .User.Provisioner }} --provisioner {{ .User.Provisioner }}{{- end }} %r %h %p
{{- else }}
UserKnownHostsFile "{{.User.StepPath}}/ssh/known_hosts"
ProxyCommand step ssh proxycommand{{- if .User.Context }} --context {{ .User.Context }}{{- end }}{{- if .User.Provisioner }} --provisioner {{ .User.Provisioner }}{{- end }} %r %h %p
{{- end }}

View File

@@ -1 +0,0 @@
{{- if or .User.GOOS "none" | eq "windows" }}Include "{{ .User.StepPath | replace "\\" "/" | trimPrefix "C:" }}/ssh/config"{{- else }}Include "{{.User.StepPath}}/ssh/config"{{- end }}

View File

@@ -5,7 +5,7 @@
   package = pkgs.unstable.caddy;
   configFile = pkgs.writeText "Caddyfile" ''
     {
-      acme_ca https://ca.home.2rjus.net/acme/acme/directory
+      acme_ca https://vault.home.2rjus.net:8200/v1/pki_int/acme/directory
       metrics {
         per_host

View File

@@ -1,41 +0,0 @@
{ ... }:
{
services.alloy = {
enable = true;
};
environment.etc."alloy/config.alloy" = {
enable = true;
mode = "0644";
text = ''
pyroscope.write "local_pyroscope" {
endpoint {
url = "http://localhost:4040"
}
}
pyroscope.scrape "labmon" {
targets = [{"__address__" = "localhost:9969", "service_name" = "labmon"}]
forward_to = [pyroscope.write.local_pyroscope.receiver]
profiling_config {
profile.process_cpu {
enabled = true
}
profile.memory {
enabled = true
}
profile.mutex {
enabled = true
}
profile.block {
enabled = true
}
profile.goroutine {
enabled = true
}
}
}
'';
};
}

View File

@@ -7,7 +7,6 @@
     ./pve.nix
     ./alerttonotify.nix
     ./pyroscope.nix
-    ./alloy.nix
     ./tempo.nix
   ];
 }

View File

@@ -121,22 +121,20 @@ in
   scrapeConfigs = [
     # Auto-generated node-exporter targets from flake hosts + external
+    # Each static_config entry may have labels from homelab.host metadata
     {
       job_name = "node-exporter";
-      static_configs = [
-        {
-          targets = nodeExporterTargets;
-        }
-      ];
+      static_configs = nodeExporterTargets;
     }
     # Systemd exporter on all hosts (same targets, different port)
+    # Preserves the same label grouping as node-exporter
     {
       job_name = "systemd-exporter";
-      static_configs = [
-        {
-          targets = map (t: builtins.replaceStrings [":9100"] [":9558"] t) nodeExporterTargets;
-        }
-      ];
+      static_configs = map
+        (cfg: cfg // {
+          targets = map (t: builtins.replaceStrings [ ":9100" ] [ ":9558" ] t) cfg.targets;
+        })
+        nodeExporterTargets;
     }
     # Local monitoring services (not auto-generated)
     {
@@ -180,14 +178,6 @@ in
       }
     ];
   }
-  {
-    job_name = "labmon";
-    static_configs = [
-      {
-        targets = [ "monitoring01.home.2rjus.net:9969" ];
-      }
-    ];
-  }
   # TODO: nix-cache_caddy can't be auto-generated because the cert is issued
   # for nix-cache.home.2rjus.net (service CNAME), not nix-cache01 (hostname).
   # Consider adding a target override to homelab.monitoring.scrapeTargets.
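As an aside on the systemd-exporter hunk above: once `static_configs` entries carry labels, the port rewrite has to map over whole entries rather than bare target strings so the labels survive. A minimal Nix sketch of that transformation, with invented sample data in the shape the diff implies (the hostname and label values are placeholders):

```nix
let
  # Placeholder data: each static_config entry has targets plus labels,
  # as homelab.host metadata would generate them.
  nodeExporterTargets = [
    {
      targets = [ "ns1.home.2rjus.net:9100" ];
      labels = { role = "dns"; dns_role = "primary"; };
    }
  ];
in
map
  (cfg: cfg // {
    # Same entry, same labels; only the exporter port is rewritten.
    targets = map (t: builtins.replaceStrings [ ":9100" ] [ ":9558" ] t) cfg.targets;
  })
  nodeExporterTargets
```

Evaluating this (e.g. with `nix-instantiate --eval --strict`) should produce the same entry with `:9558` targets and the `labels` attrset untouched, which is what lets the role-based alert expressions later in this commit match both exporters.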

View File

@@ -17,8 +17,9 @@ groups:
       annotations:
         summary: "Disk space low on {{ $labels.instance }}"
         description: "Disk space is low on {{ $labels.instance }}. Please check."
+      # Build hosts (e.g., nix-cache01) are expected to have high CPU during builds
       - alert: high_cpu_load
-        expr: max(node_load5{instance!="nix-cache01.home.2rjus.net:9100"}) by (instance) > (count by (instance)(node_cpu_seconds_total{instance!="nix-cache01.home.2rjus.net:9100", mode="idle"}) * 0.7)
+        expr: max(node_load5{role!="build-host"}) by (instance) > (count by (instance)(node_cpu_seconds_total{role!="build-host", mode="idle"}) * 0.7)
         for: 15m
         labels:
           severity: warning
@@ -26,7 +27,7 @@ groups:
           summary: "High CPU load on {{ $labels.instance }}"
           description: "CPU load is high on {{ $labels.instance }}. Please check."
       - alert: high_cpu_load
-        expr: max(node_load5{instance="nix-cache01.home.2rjus.net:9100"}) by (instance) > (count by (instance)(node_cpu_seconds_total{instance="nix-cache01.home.2rjus.net:9100", mode="idle"}) * 0.7)
+        expr: max(node_load5{role="build-host"}) by (instance) > (count by (instance)(node_cpu_seconds_total{role="build-host", mode="idle"}) * 0.7)
         for: 2h
         labels:
           severity: warning
@@ -115,8 +116,9 @@ groups:
       annotations:
         summary: "NSD not running on {{ $labels.instance }}"
         description: "NSD has been down on {{ $labels.instance }} more than 5 minutes."
+      # Only alert on primary DNS (secondary has cold cache after failover)
      - alert: unbound_low_cache_hit_ratio
-        expr: (rate(unbound_cache_hits_total[5m]) / (rate(unbound_cache_hits_total[5m]) + rate(unbound_cache_misses_total[5m]))) < 0.5
+        expr: (rate(unbound_cache_hits_total{dns_role="primary"}[5m]) / (rate(unbound_cache_hits_total{dns_role="primary"}[5m]) + rate(unbound_cache_misses_total{dns_role="primary"}[5m]))) < 0.5
         for: 15m
         labels:
           severity: warning
@@ -336,40 +338,6 @@ groups:
       annotations:
         summary: "Pyroscope service not running on {{ $labels.instance }}"
         description: "Pyroscope service not running on {{ $labels.instance }}"
-  - name: certificate_rules
-    rules:
-      - alert: certificate_expiring_soon
-        expr: labmon_tlsconmon_certificate_seconds_left{address!="ca.home.2rjus.net:443"} < 86400
-        for: 5m
-        labels:
-          severity: warning
-        annotations:
-          summary: "TLS certificate expiring soon for {{ $labels.instance }}"
-          description: "TLS certificate for {{ $labels.address }} is expiring within 24 hours."
-      - alert: step_ca_serving_cert_expiring
-        expr: labmon_tlsconmon_certificate_seconds_left{address="ca.home.2rjus.net:443"} < 3600
-        for: 5m
-        labels:
-          severity: critical
-        annotations:
-          summary: "Step-CA serving certificate expiring"
-          description: "The step-ca serving certificate (24h auto-renewed) has less than 1 hour of validity left. Renewal may have failed."
-      - alert: certificate_check_error
-        expr: labmon_tlsconmon_certificate_check_error == 1
-        for: 5m
-        labels:
-          severity: warning
-        annotations:
-          summary: "Error checking certificate for {{ $labels.address }}"
-          description: "Certificate check is failing for {{ $labels.address }} on {{ $labels.instance }}."
-      - alert: step_ca_certificate_expiring
-        expr: labmon_stepmon_certificate_seconds_left < 3600
-        for: 5m
-        labels:
-          severity: critical
-        annotations:
-          summary: "Step-CA certificate expiring for {{ $labels.instance }}"
-          description: "Step-CA certificate is expiring within 1 hour on {{ $labels.instance }}."
   - name: proxmox_rules
     rules:
       - alert: pve_node_down

View File

@@ -5,7 +5,7 @@
   package = pkgs.unstable.caddy;
   configFile = pkgs.writeText "Caddyfile" ''
     {
-      acme_ca https://ca.home.2rjus.net/acme/acme/directory
+      acme_ca https://vault.home.2rjus.net:8200/v1/pki_int/acme/directory
       metrics
     }

View File

@@ -3,7 +3,7 @@
   security.acme = {
     acceptTerms = true;
     defaults = {
-      server = "https://ca.home.2rjus.net/acme/acme/directory";
+      server = "https://vault.home.2rjus.net:8200/v1/pki_int/acme/directory";
       email = "root@home.2rjus.net";
       dnsPropagationCheck = false;
     };
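With the `security.acme.defaults` change above, every ACME client on every host now requests certificates from the OpenBao `pki_int` ACME directory. A minimal sketch of a consumer under these defaults; the hostname and the `caddy` group are placeholders, not taken from this commit:

```nix
{
  # server, email and dnsPropagationCheck are inherited from
  # security.acme.defaults, so only per-certificate details are needed.
  security.acme.certs."example.home.2rjus.net" = {
    group = "caddy"; # placeholder: whichever service must read the key
  };
}
```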

View File

@@ -10,7 +10,6 @@
     ./nix.nix
     ./root-user.nix
     ./pki/root-ca.nix
-    ./sops.nix
     ./sshd.nix
     ./vault-secrets.nix
   ];

View File

@@ -1,7 +0,0 @@
{ ... }: {
sops = {
defaultSopsFile = ../secrets/secrets.yaml;
age.keyFile = "/var/lib/sops-nix/key.txt";
age.generateKey = true;
};
}

View File

@@ -33,7 +33,7 @@ variable "default_target_node" {
 variable "default_template_name" {
   description = "Default template VM name to clone from"
   type = string
-  default = "nixos-25.11.20260131.41e216c"
+  default = "nixos-25.11.20260203.e576e3c"
 }

 variable "default_ssh_public_key" {

View File

@@ -101,6 +101,13 @@ locals {
       ]
     }
+    # vault01: Vault server itself (fetches secrets from itself)
+    "vault01" = {
+      paths = [
+        "secret/data/hosts/vault01/*",
+      ]
+    }
   }
 }

View File

@@ -43,24 +43,20 @@ locals {
       cpu_cores = 2
       memory = 2048
       disk_size = "20G"
-      flake_branch = "deploy-test-hosts"
-      vault_wrapped_token = "s.YRGRpAZVVtSYEa3wOYOqFmjt"
+      flake_branch = "improve-bootstrap-visibility"
+      vault_wrapped_token = "s.l5q88wzXfEcr5SMDHmO6o96b"
     }
     "testvm02" = {
       ip = "10.69.13.21/24"
       cpu_cores = 2
       memory = 2048
       disk_size = "20G"
-      flake_branch = "deploy-test-hosts"
-      vault_wrapped_token = "s.tvs8yhJOkLjBs548STs6DBw7"
     }
     "testvm03" = {
       ip = "10.69.13.22/24"
       cpu_cores = 2
       memory = 2048
       disk_size = "20G"
-      flake_branch = "deploy-test-hosts"
-      vault_wrapped_token = "s.sQ80FZGeG3z6jgrsuh74IopC"
     }
   }