| name | description |
|---|---|
| observability | Reference guide for exploring Prometheus metrics and Loki logs when troubleshooting homelab issues. Use when investigating system state, deployments, service health, or searching logs. |
Observability Troubleshooting Guide
Quick reference for exploring Prometheus metrics and Loki logs to troubleshoot homelab issues.
Available Tools
Use the lab-monitoring MCP server tools:
Metrics:
- `search_metrics` - Find metrics by name substring
- `get_metric_metadata` - Get type/help for a specific metric
- `query` - Execute PromQL queries
- `list_targets` - Check scrape target health
- `list_alerts` / `get_alert` - View active alerts
Logs:
- `query_logs` - Execute LogQL queries against Loki
- `list_labels` - List available log labels
- `list_label_values` - List values for a specific label
Logs Reference
Label Reference
Available labels for log queries:
- `host` - Hostname (e.g., `ns1`, `monitoring01`, `ha1`)
- `systemd_unit` - Systemd unit name (e.g., `nsd.service`, `nixos-upgrade.service`)
- `job` - Either `systemd-journal` (most logs) or `varlog` (file-based logs)
- `filename` - For the `varlog` job, the log file path
- `hostname` - Alternative to `host` for some streams
Log Format
Journal logs are JSON-formatted. Key fields:
- `MESSAGE` - The actual log message
- `PRIORITY` - Syslog priority (6=info, 4=warning, 3=error)
- `SYSLOG_IDENTIFIER` - Program name
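For example, a host's journal logs can be narrowed to a single program by parsing these fields (a sketch; `sshd` is just an illustrative identifier):
{host="ns1"} | json | SYSLOG_IDENTIFIER="sshd"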
Basic LogQL Queries
Logs from a specific service on a host:
{host="ns1", systemd_unit="nsd.service"}
All logs from a host:
{host="monitoring01"}
Logs from a service across all hosts:
{systemd_unit="nixos-upgrade.service"}
Substring matching (case-sensitive):
{host="ha1"} |= "error"
Exclude pattern:
{host="ns1"} != "routine"
Regex matching:
{systemd_unit="prometheus.service"} |~ "scrape.*failed"
File-based logs (Caddy access logs, etc.):
{job="varlog", hostname="nix-cache01"}
{job="varlog", filename="/var/log/caddy/nix-cache.log"}
Time Ranges
Default lookback is 1 hour. Use the `start` parameter for older logs:
- `start: "1h"` - Last hour (default)
- `start: "24h"` - Last 24 hours
- `start: "168h"` - Last 7 days
Common Services
Useful systemd units for troubleshooting:
- `nixos-upgrade.service` - Daily auto-upgrade logs
- `nsd.service` - DNS server (ns1/ns2)
- `prometheus.service` - Metrics collection
- `loki.service` - Log aggregation
- `caddy.service` - Reverse proxy
- `home-assistant.service` - Home automation
- `step-ca.service` - Internal CA
- `openbao.service` - Secrets management
- `sshd.service` - SSH daemon
- `nix-gc.service` - Nix garbage collection
Extracting JSON Fields
Parse JSON and filter on fields:
{systemd_unit="prometheus.service"} | json | PRIORITY="3"
Metrics Reference
Deployment & Version Status
Check which NixOS revision hosts are running:
nixos_flake_info
Labels:
- `current_rev` - Git commit of the running NixOS configuration
- `remote_rev` - Latest commit on the remote repository
- `nixpkgs_rev` - Nixpkgs revision used to build the system
- `nixos_version` - Full NixOS version string (e.g., `25.11.20260203.e576e3c`)
Check if hosts are behind on updates:
nixos_flake_revision_behind == 1
View flake input versions:
nixos_flake_input_info
Labels: `input` (name), `rev` (revision), `type` (git/github)
Check flake input age:
nixos_flake_input_age_seconds / 86400
Returns age in days for each flake input.
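To surface the most outdated inputs first, wrap the same expression in `topk` (a sketch; adjust the count as needed):
topk(5, nixos_flake_input_age_seconds / 86400)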
System Health
Basic host availability:
up{job="node-exporter"}
CPU usage by host:
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
Memory usage:
1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)
Disk space (root filesystem):
node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}
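The same ratio can serve as a low-space check (a sketch; the 10% threshold is arbitrary):
node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"} < 0.10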
Service-Specific Metrics
Common job names:
- `node-exporter` - System metrics (all hosts)
- `nixos-exporter` - NixOS version/generation metrics
- `caddy` - Reverse proxy metrics
- `prometheus` / `loki` / `grafana` - Monitoring stack
- `home-assistant` - Home automation
- `step-ca` - Internal CA
Instance Label Format
The instance label uses FQDN format:
<hostname>.home.2rjus.net:<port>
Example queries filtering by host:
up{instance=~"monitoring01.*"}
node_load1{instance=~"ns1.*"}
Troubleshooting Workflows
Check Deployment Status Across Fleet
- Query `nixos_flake_info` to see all hosts' current revisions
- Check `nixos_flake_revision_behind` for hosts needing updates (a fleet-wide count is sketched after this list)
- Look at upgrade logs: `{systemd_unit="nixos-upgrade.service"}` with `start: "24h"`
Investigate Service Issues
- Check `up{job="<service>"}` for scrape failures
- Use `list_targets` to see target health details
- Query service logs: `{host="<host>", systemd_unit="<service>.service"}`
- Search for errors: `{host="<host>"} |= "error"` (a priority-based variant is sketched after this list)
- Check `list_alerts` for related alerts
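Beyond plain substring matching, the journal `PRIORITY` field described above can narrow results to warnings and errors; a sketch:
{host="<host>", systemd_unit="<service>.service"} | json | PRIORITY=~"[0-4]"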
After Deploying Changes
- Verify `current_rev` updated in `nixos_flake_info`
- Confirm `nixos_flake_revision_behind == 0`
- Check service logs for startup issues
- Check that service metrics are being scraped
Debug SSH/Access Issues
{host="<host>", systemd_unit="sshd.service"}
Check Recent Upgrades
{systemd_unit="nixos-upgrade.service"}
With `start: "24h"` to see the last 24 hours of upgrades across all hosts.
Notes
- Default scrape interval is 15s for most metrics targets
- Default log lookback is 1h; use the `start` parameter for older logs
- Use `rate()` for counter metrics and direct queries for gauges (sketched below)
- The `instance` label includes the port, so use regex matching (`=~`) for hostname-only filters
- In JSON-formatted journal logs, the `MESSAGE` field contains the actual log content
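A sketch of the counter-vs-gauge distinction noted above, using standard node-exporter metrics:
Counter (wrap in `rate()`):
rate(node_network_receive_bytes_total[5m])
Gauge (query directly):
node_load1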