Compare commits


42 Commits

4ae92b4f85 chore: migrate module path from git.t-juice.club to code.t-juice.club
Update Go module path and all import references for Gitea to Forgejo
host migration.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 19:48:25 +01:00
4276ffbda5 feat: add optional basic auth support for Loki client
Some Loki deployments (e.g., behind a reverse proxy or Grafana Cloud)
require HTTP Basic Authentication. This adds optional --loki-username
and --loki-password flags (and corresponding env vars) to the
lab-monitoring server, along with NixOS module options for secure
credential management via systemd LoadCredential.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 20:32:10 +01:00
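
The optional-credentials pattern this commit describes could be sketched as follows. This is a minimal illustration, not the project's actual client: the `lokiClient` type and `newRequest` method are assumptions; only `net/http`'s `SetBasicAuth` is real API.

```go
package main

import (
	"fmt"
	"net/http"
)

// lokiClient sketches an HTTP client that attaches basic-auth
// credentials only when both username and password are configured.
type lokiClient struct {
	baseURL  string
	username string
	password string
}

func (c *lokiClient) newRequest(path string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, c.baseURL+path, nil)
	if err != nil {
		return nil, err
	}
	// Leave the header unset entirely when no credentials are given,
	// so unauthenticated deployments keep working unchanged.
	if c.username != "" && c.password != "" {
		req.SetBasicAuth(c.username, c.password)
	}
	return req, nil
}

func main() {
	c := &lokiClient{baseURL: "http://localhost:3100", username: "u", password: "p"}
	req, _ := c.newRequest("/loki/api/v1/labels")
	user, _, ok := req.BasicAuth()
	fmt.Println(ok, user) // true u
}
```

Keeping auth optional at request-construction time means the same client works against plain Loki and proxied/Grafana Cloud deployments.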
aff058dcc0 chore: update flake.lock
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 22:22:08 +01:00
dcaeb1f517 chore: remove unused gotools and go-tools from devShell
Both are redundant: staticcheck is covered by golangci-lint, and
gotools (goimports, etc.) is covered by gopls and golangci-lint.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 22:20:48 +01:00
fd40e73f1b feat: add package indexing to MCP index_revision tool
The options server's index_revision now also indexes packages when running
under nixpkgs-search, matching the CLI behavior. The packages server gets
its own index_revision tool for standalone package indexing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 22:12:08 +01:00
a0be405b76 Merge pull request 'feat: add git-explorer MCP server for read-only repository access' (#8) from feature/git-explorer into master
Reviewed-on: #8
2026-02-08 03:30:29 +00:00
75673974a2 feat: add git-explorer MCP server for read-only repository access
Implements a new MCP server that provides read-only access to git
repositories using go-git. Designed for deployment verification by
comparing deployed flake revisions against source repositories.

9 tools: resolve_ref, get_log, get_commit_info, get_diff_files,
get_file_at_commit, is_ancestor, commits_between, list_branches,
search_commits.

Includes CLI commands, NixOS module, and comprehensive tests.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 04:26:38 +01:00
98bad6c9ba chore: switch devShell from go_1_24 to default go
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 20:07:32 +01:00
d024f128b5 chore: update flake.lock
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 20:02:44 +01:00
9b16a5fe86 feat: default list_alerts to active alerts only
Change list_alerts (MCP tool) and alerts (CLI command) to show only
active (non-silenced, non-inhibited) alerts by default. Add state=all
option and --all CLI flag to show all alerts when needed.

- MCP: list_alerts with no state param now returns active alerts only
- MCP: list_alerts with state=all returns all alerts (previous default)
- CLI: alerts command defaults to active, --all shows everything
- Add tests for new default behavior and state=all option
- Update README with new CLI examples
- Bump version to 0.3.0
- Clarify version bumping rules in CLAUDE.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 19:59:37 +01:00
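
The new default can be sketched as a simple filter. The `alert` struct and `filterAlerts` function below are illustrative stand-ins, not the project's actual types:

```go
package main

import "fmt"

// alert is a minimal stand-in for an Alertmanager alert.
type alert struct {
	Name      string
	Silenced  bool
	Inhibited bool
}

// filterAlerts returns only active (non-silenced, non-inhibited) alerts
// unless state == "all", mirroring the new default behavior.
func filterAlerts(alerts []alert, state string) []alert {
	if state == "all" {
		return alerts
	}
	var active []alert
	for _, a := range alerts {
		if !a.Silenced && !a.Inhibited {
			active = append(active, a)
		}
	}
	return active
}

func main() {
	alerts := []alert{{Name: "DiskFull"}, {Name: "Flaky", Silenced: true}}
	fmt.Println(len(filterAlerts(alerts, "")))    // 1
	fmt.Println(len(filterAlerts(alerts, "all"))) // 2
}
```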
9dfe61e170 Merge pull request 'feature/loki-log-queries' (#7) from feature/loki-log-queries into master
Reviewed-on: #7
2026-02-05 20:06:33 +00:00
d97e554dfc fix: cap log query limit and validate direction parameter
Prevent unbounded memory usage by capping the limit parameter to 5000.
Validate direction against allowed values instead of passing through
to Loki unchecked.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-05 20:58:35 +01:00
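
A minimal sketch of the two guards this fix describes (function names and the default of 100 are assumptions; the 5000 cap and the `forward`/`backward` direction values come from the commit and Loki's query_range API):

```go
package main

import "fmt"

const maxLogLimit = 5000

// clampLimit caps the requested limit to a hard maximum to prevent
// unbounded memory usage, falling back to a default for non-positive input.
func clampLimit(limit int) int {
	if limit <= 0 {
		return 100
	}
	if limit > maxLogLimit {
		return maxLogLimit
	}
	return limit
}

// validDirection accepts only the directions Loki's query_range API defines,
// instead of passing arbitrary strings through unchecked.
func validDirection(d string) bool {
	return d == "" || d == "forward" || d == "backward"
}

func main() {
	fmt.Println(clampLimit(10000))          // 5000
	fmt.Println(validDirection("sideways")) // false
}
```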
859e35ab5c feat: add Loki log query support to lab-monitoring
Add 3 opt-in Loki tools (query_logs, list_labels, list_label_values)
that are registered when LOKI_URL is configured. Includes Loki HTTP
client, CLI commands (logs, labels), NixOS module option, formatting,
and tests.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-05 20:55:39 +01:00
f4f859fefa docs: add lab-monitoring to README and update CLAUDE.md planning notes
Add comprehensive lab-monitoring documentation to README including MCP
server description, installation, MCP client config examples, CLI usage,
environment variables, MCP tools table, NixOS module example, and module
options. Also add a reminder in CLAUDE.md to update the README after
implementing a plan.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 23:54:13 +01:00
b491a60105 Merge pull request 'feature/lab-monitoring' (#6) from feature/lab-monitoring into master
Reviewed-on: #6
2026-02-04 22:48:23 +00:00
52f50a1a06 chore: enable silences in lab-monitoring MCP config
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 23:46:15 +01:00
d31a93d3b6 docs: add Loki log query support to lab-monitoring TODO
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 23:36:00 +01:00
5b9eda48f8 chore: update monitoring URLs to production endpoints
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 23:33:44 +01:00
741f02d856 docs: add list_rules and get_rule_group to lab-monitoring TODO
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 23:33:30 +01:00
06e62eb6ad feat: gate create_silence behind --enable-silences flag
The create_silence tool is a write operation that can suppress alerts.
Disable it by default and require explicit opt-in via --enable-silences
CLI flag (or enableSilences NixOS option) as a safety measure.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 23:23:46 +01:00
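
The opt-in gate amounts to conditional tool registration. A sketch with illustrative names (the real registration API is not shown here):

```go
package main

import "fmt"

// registerTools returns the tool set for the monitoring server.
// Write-capable tools are only included when explicitly enabled.
func registerTools(enableSilences bool) []string {
	tools := []string{"list_alerts", "get_alert", "list_silences"}
	if enableSilences {
		tools = append(tools, "create_silence")
	}
	return tools
}

func main() {
	fmt.Println(len(registerTools(false))) // 3
	fmt.Println(len(registerTools(true)))  // 4
}
```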
2a08cdaf2e feat: include active alert count in MCP server instructions
Add InstructionsFunc callback to ServerConfig, called during each
initialize handshake to generate dynamic instructions. The lab-monitoring
server uses this to query Alertmanager and include a count of active
non-silenced alerts, so the LLM can proactively inform the user.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 23:16:52 +01:00
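
The callback shape could look roughly like this. Field and method names are assumptions based on the commit description, and the alert count here is a stub rather than a live Alertmanager query:

```go
package main

import "fmt"

// ServerConfig sketches a config struct with an optional callback that is
// invoked on each initialize handshake to build dynamic instructions.
type ServerConfig struct {
	Instructions     string
	InstructionsFunc func() string
}

// instructions prefers the dynamic callback over the static string.
func (c *ServerConfig) instructions() string {
	if c.InstructionsFunc != nil {
		return c.InstructionsFunc()
	}
	return c.Instructions
}

func main() {
	activeAlerts := 2 // in the real server, this would come from Alertmanager
	cfg := &ServerConfig{
		InstructionsFunc: func() string {
			return fmt.Sprintf("There are currently %d active alerts.", activeAlerts)
		},
	}
	fmt.Println(cfg.instructions())
}
```

Evaluating the callback per handshake (rather than once at startup) is what keeps the alert count fresh for each new client session.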
1755364bba feat: add lab-monitoring MCP server for Prometheus and Alertmanager
New MCP server that queries live Prometheus and Alertmanager HTTP APIs
with 8 tools: list_alerts, get_alert, search_metrics, get_metric_metadata,
query (PromQL), list_targets, list_silences, and create_silence.

Extends the MCP core with ModeCustom and NewGenericServer for servers
that don't require a database. Includes CLI with direct commands
(alerts, query, targets, metrics), NixOS module, and comprehensive
httptest-based tests.

Bumps existing binaries to 0.2.1 due to shared internal/mcp change.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 23:11:53 +01:00
0bd4ed778a Merge pull request 'feature/nixpkgs-search' (#5) from feature/nixpkgs-search into master
Reviewed-on: #5
2026-02-04 17:07:30 +00:00
d1285d1f80 fix: improve package search relevance with exact match priority
Package search now prioritizes results in this order:
1. Exact pname match
2. Exact attr_path match
3. pname starts with query
4. attr_path starts with query
5. FTS ranking (bm25 for SQLite, ts_rank for PostgreSQL)

This ensures searching for "git" returns the "git" package first,
rather than packages that merely mention "git" in their description.

Also update CLAUDE.md to clarify using `nix run` instead of
`go build -o` for testing binaries.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 18:04:06 +01:00
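
The five-tier ordering can be expressed as a sort key. This sketch is an in-memory illustration of the ranking logic (the real implementation lives in SQL); struct and field names are assumptions:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type pkg struct {
	Pname    string
	AttrPath string
	FTSRank  float64 // lower is better, stand-in for bm25/ts_rank
}

// rankTier maps a package to its priority tier for a given query,
// matching the five-level ordering in the commit message.
func rankTier(p pkg, query string) int {
	switch {
	case p.Pname == query:
		return 0
	case p.AttrPath == query:
		return 1
	case strings.HasPrefix(p.Pname, query):
		return 2
	case strings.HasPrefix(p.AttrPath, query):
		return 3
	default:
		return 4
	}
}

func sortResults(results []pkg, query string) {
	sort.SliceStable(results, func(i, j int) bool {
		ti, tj := rankTier(results[i], query), rankTier(results[j], query)
		if ti != tj {
			return ti < tj
		}
		return results[i].FTSRank < results[j].FTSRank // tie-break on FTS score
	})
}

func main() {
	results := []pkg{{Pname: "gitoxide"}, {Pname: "git"}, {Pname: "digit"}}
	sortResults(results, "git")
	fmt.Println(results[0].Pname) // git
}
```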
66145fab6c docs: mark nixpkgs-packages as completed in TODO
The nixpkgs-packages feature has been implemented in nixpkgs-search.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 17:31:09 +01:00
d7ee6048e1 chore: update dev config for nixpkgs-search
- Update .mcp.json to use nixpkgs-search options/packages servers
- Update CLAUDE.md example to use nixpkgs-search

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 17:31:05 +01:00
75addb5a28 docs: update README for nixpkgs-search as primary server
- Document nixpkgs-search as the primary MCP server
- Add package search CLI examples and MCP tools
- Update installation and usage examples
- Add nixpkgs-search-mcp NixOS module documentation
- Mark nixos-options as legacy
- Update environment variable documentation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 17:30:25 +01:00
3625a8dfc3 feat(nix): add nixpkgs-search-mcp NixOS module
Add NixOS module for deploying nixpkgs-search as systemd services:
- Runs separate MCP servers for options (port 8082) and packages (port 8083)
- Shared database configuration (SQLite or PostgreSQL)
- Separate indexing service that runs before servers start
- options.enable and packages.enable flags (both default to true)
- indexFlags option for customizing index command (--no-packages, etc.)

Also update flake.nix:
- Register new module as nixpkgs-search-mcp
- Set as default nixosModule

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 17:30:19 +01:00
ea4c69bc23 feat: add nixpkgs-search binary with package search support
Add a new nixpkgs-search CLI that combines NixOS options search with
Nix package search functionality. This provides two MCP servers from
a single binary:
- `nixpkgs-search options serve` for NixOS options
- `nixpkgs-search packages serve` for Nix packages

Key changes:
- Add packages table to database schema (version 3)
- Add Package type and search methods to database layer
- Create internal/packages/ with indexer and parser for nix-env JSON
- Add MCP server mode (options/packages) with separate tool sets
- Add package handlers: search_packages, get_package
- Create cmd/nixpkgs-search with combined indexing support
- Update flake.nix with nixpkgs-search package (now default)
- Bump version to 0.2.0

The index command can index both options and packages together, or
use --no-packages/--no-options flags for partial indexing.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 17:12:41 +01:00
9efcca217c Merge pull request 'feature/add-linting-tools' (#4) from feature/add-linting-tools into master
Reviewed-on: #4
2026-02-04 00:55:15 +00:00
d6e99161a9 docs: add linting instructions to CLAUDE.md
Document the requirement to run golangci-lint, govulncheck, and go vet
before completing work on a feature branch.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:53:28 +01:00
ea11dd5e14 fix: add nolint:errcheck comments for intentionally unchecked errors
Add //nolint:errcheck comments to intentionally unchecked error returns:

- defer X.Close() calls: errors from closing read-only resources, rows
  after iteration, files, response bodies, and gzip readers are not
  actionable and don't affect correctness

- defer tx.Rollback(): standard Go pattern where rollback after
  successful commit returns an error, which is expected behavior

- defer stmt.Close(): statements are closed with their transactions

- Cleanup operations: DeleteRevision on failure and os.RemoveAll for
  temp directories are best-effort cleanup

- HTTP response encoding: if JSON encoding fails at response time,
  there's nothing useful we can do

- Test/benchmark code: unchecked errors in test setup/cleanup where
  failures will surface through test assertions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:51:20 +01:00
097b661aed fix: resolve ineffassign warnings in postgres SearchOptions
The argNum variable tracks parameter positions, but its final value is
unused. Add an explicit acknowledgment to silence the linter.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:45:21 +01:00
6596ac56a5 fix: resolve staticcheck nil pointer dereference warning
Use t.Fatal instead of t.Error when retrieved session is nil to prevent
subsequent nil pointer dereference on retrieved.ID.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:45:16 +01:00
ad819a3c2c chore: add govulncheck to devshell
Add govulncheck for vulnerability scanning of Go dependencies.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:39:38 +01:00
df9a2f30a9 Merge pull request 'feature/file-metadata-and-range' (#3) from feature/file-metadata-and-range into master
Reviewed-on: #3
2026-02-04 00:37:05 +00:00
c829dd28a9 chore: bump version to 0.1.2
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:30:53 +01:00
9252ddcfae test: add tests for file metadata and range parameters
- testFileRange: test GetFileWithRange with various offset/limit values
- testDeclarationsWithMetadata: test file metadata in declarations
- Verify byte_size and line_count are computed correctly
- Test edge cases: offset beyond EOF, non-indexed files

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:30:49 +01:00
b188ca5088 feat(mcp): add offset/limit params and show file metadata in declarations
- Add offset and limit parameters to get_file tool schema
- Default limit is 250 lines, offset is 0
- Show "Showing lines X-Y of Z total" header when range is applied
- Update handleGetOption to use GetDeclarationsWithMetadata
- Display file size metadata (bytes, lines) in declarations output

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:30:45 +01:00
d9aab773c6 feat(database): add file size metadata and range parameters
- Add byte_size and line_count columns to files table
- Increment SchemaVersion to 2 (requires re-indexing)
- Add DeclarationWithMetadata, FileRange, FileResult types
- Add GetDeclarationsWithMetadata method for file metadata lookup
- Add GetFileWithRange method for paginated file retrieval
- Implement countLines and applyLineRange helpers

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:30:39 +01:00
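
The range helper described above could be sketched like this. The signature is an assumption based on the commit message; it returns the requested slice of lines plus the total count, which is what the "Showing lines X-Y of Z total" header needs:

```go
package main

import (
	"fmt"
	"strings"
)

// applyLineRange returns lines [offset, offset+limit) of content along
// with the total line count, for paginated file retrieval.
func applyLineRange(content string, offset, limit int) (string, int) {
	lines := strings.Split(content, "\n")
	total := len(lines)
	// Offset beyond EOF yields an empty result, not an error.
	if offset >= total {
		return "", total
	}
	end := offset + limit
	if end > total {
		end = total
	}
	return strings.Join(lines[offset:end], "\n"), total
}

func main() {
	content := "a\nb\nc\nd\ne"
	chunk, total := applyLineRange(content, 1, 2)
	fmt.Printf("Showing lines 2-3 of %d total\n%s\n", total, chunk)
}
```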
128cc313dc docs: add TODO items for large file handling and nixpkgs-packages MCP
- Add file size metadata to get_option declarations
- Add range parameters to get_file with sensible defaults
- New MCP server idea for indexing nixpkgs packages

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:14:10 +01:00
1fc9f71c48 Merge pull request 'feature/hm-options' (#2) from feature/hm-options into master
Reviewed-on: #2
2026-02-03 22:40:08 +00:00
56 changed files with 10895 additions and 294 deletions


@@ -1,15 +1,44 @@
{
"mcpServers": {
"nixos-options": {
"nixpkgs-options": {
"command": "nix",
"args": [
"run",
".",
".#nixpkgs-search",
"--",
"options",
"serve"
],
"env": {
"NIXOS_OPTIONS_DATABASE": "sqlite://:memory:"
"NIXPKGS_SEARCH_DATABASE": "sqlite://:memory:"
}
},
"nixpkgs-packages": {
"command": "nix",
"args": [
"run",
".#nixpkgs-search",
"--",
"packages",
"serve"
],
"env": {
"NIXPKGS_SEARCH_DATABASE": "sqlite://:memory:"
}
},
"lab-monitoring": {
"command": "nix",
"args": [
"run",
".#lab-monitoring",
"--",
"serve",
"--enable-silences"
],
"env": {
"PROMETHEUS_URL": "https://prometheus.home.2rjus.net",
"ALERTMANAGER_URL": "https://alertmanager.home.2rjus.net",
"LOKI_URL": "http://monitoring01.home.2rjus.net:3100"
}
}
}

CLAUDE.md (247 changes)

@@ -8,15 +8,34 @@ This file provides context for Claude when working on this project.
## MCP Servers
### NixOS Options (`nixos-options`)
### Nixpkgs Search (`nixpkgs-search`) - **Primary**
Combined search for NixOS options and Nix packages from nixpkgs. Provides two separate MCP servers:
- **Options server**: Search NixOS configuration options (`nixpkgs-search options serve`)
- **Packages server**: Search Nix packages (`nixpkgs-search packages serve`)
### NixOS Options (`nixos-options`) - Legacy
Search and query NixOS configuration options. Uses nixpkgs as source.
*Note: Prefer using `nixpkgs-search options` instead.*
### Home Manager Options (`hm-options`)
Search and query Home Manager configuration options. Uses home-manager repository as source.
Both servers share the same architecture:
- Full-text search across option names and descriptions
- Query specific options with type, default, example, and declarations
### Lab Monitoring (`lab-monitoring`)
Query Prometheus metrics, Alertmanager alerts, and Loki logs. Unlike other servers, this queries live HTTP APIs — no database or indexing needed.
- 8 core tools: list/get alerts, search metrics, get metadata, PromQL query, list targets, list/create silences
- 3 optional Loki tools (when `LOKI_URL` is set): query_logs, list_labels, list_label_values
- Configurable Prometheus, Alertmanager, and Loki URLs via flags or environment variables
- Optional basic auth for Loki (`LOKI_USERNAME`/`LOKI_PASSWORD`)
### Git Explorer (`git-explorer`)
Read-only access to git repository information. Designed for deployment verification.
- 9 tools: resolve_ref, get_log, get_commit_info, get_diff_files, get_file_at_commit, is_ancestor, commits_between, list_branches, search_commits
- Uses go-git library for pure Go implementation
- All operations are read-only (never modifies repository)
The nixpkgs/options/hm servers share a database-backed architecture:
- Full-text search across option/package names and descriptions
- Query specific options/packages with full metadata
- Index multiple revisions (by git hash or channel name)
- Fetch module source files
- PostgreSQL and SQLite backends
@@ -27,13 +46,14 @@ Both servers share the same architecture:
- **Build System**: Nix flakes
- **Databases**: PostgreSQL and SQLite (both fully supported)
- **Protocol**: MCP (Model Context Protocol) - JSON-RPC over STDIO or HTTP/SSE
- **Module Path**: `git.t-juice.club/torjus/labmcp`
- **Module Path**: `code.t-juice.club/torjus/labmcp`
## Project Status
**Complete and maintained** - All core features implemented:
- Full MCP servers with 6 tools each
- PostgreSQL and SQLite backends with FTS
- Full MCP servers (6 tools each for nixpkgs/options, 8-11 tools for monitoring)
- PostgreSQL and SQLite backends with FTS (for nixpkgs/options servers)
- Live API queries for Prometheus/Alertmanager/Loki (monitoring server)
- NixOS modules for deployment
- CLI for manual operations
- Comprehensive test suite
@@ -43,20 +63,26 @@ Both servers share the same architecture:
```
labmcp/
├── cmd/
│ ├── nixpkgs-search/
│ │ └── main.go # Combined options+packages CLI (primary)
│ ├── nixos-options/
│ │ └── main.go # NixOS options CLI
│ ├── hm-options/
│ │ └── main.go # Home Manager options CLI
│ │ └── main.go # NixOS options CLI (legacy)
│ ├── hm-options/
│ │ └── main.go # Home Manager options CLI
│ ├── lab-monitoring/
│ │ └── main.go # Prometheus/Alertmanager CLI
│ └── git-explorer/
│ └── main.go # Git repository explorer CLI
├── internal/
│ ├── database/
│ │ ├── interface.go # Store interface
│ │ ├── interface.go # Store interface (options + packages)
│ │ ├── schema.go # Schema versioning
│ │ ├── postgres.go # PostgreSQL implementation
│ │ ├── sqlite.go # SQLite implementation
│ │ └── *_test.go # Database tests
│ ├── mcp/
│ │ ├── server.go # MCP server core + ServerConfig
│ │ ├── handlers.go # Tool implementations
│ │ ├── server.go # MCP server core + ServerConfig + modes
│ │ ├── handlers.go # Tool implementations (options + packages)
│ │ ├── types.go # Protocol types
│ │ ├── transport.go # Transport interface
│ │ ├── transport_stdio.go # STDIO transport
@@ -66,17 +92,39 @@ labmcp/
│ ├── options/
│ │ └── indexer.go # Shared Indexer interface
│ ├── nixos/
│ │ ├── indexer.go # Nixpkgs indexing
│ │ ├── parser.go # options.json parsing (shared)
│ │ ├── indexer.go # NixOS options indexing
│ │ ├── parser.go # options.json parsing
│ │ ├── types.go # Channel aliases, extensions
│ │ └── *_test.go # Indexer tests
│ ├── homemanager/
│ │ ├── indexer.go # Home Manager indexing
│ │ ├── types.go # Channel aliases, extensions
│ │ └── *_test.go # Indexer tests
│ ├── homemanager/
│ │ ├── indexer.go # Home Manager indexing
│ │ ├── types.go # Channel aliases, extensions
│ │ └── *_test.go # Indexer tests
│ ├── packages/
│ │ ├── indexer.go # Nix packages indexing
│ │ ├── parser.go # nix-env JSON parsing
│ │ ├── types.go # Package types, channel aliases
│ │ └── *_test.go # Parser tests
│ ├── monitoring/
│ │ ├── types.go # Prometheus/Alertmanager/Loki API types
│ │ ├── prometheus.go # Prometheus HTTP client
│ │ ├── alertmanager.go # Alertmanager HTTP client
│ │ ├── loki.go # Loki HTTP client
│ │ ├── handlers.go # MCP tool definitions + handlers
│ │ ├── format.go # Markdown formatting utilities
│ │ └── *_test.go # Tests (httptest-based)
│ └── gitexplorer/
│ ├── client.go # go-git repository wrapper
│ ├── types.go # Type definitions
│ ├── handlers.go # MCP tool definitions + handlers
│ ├── format.go # Markdown formatters
│ ├── validation.go # Path validation
│ └── *_test.go # Tests
├── nix/
│ ├── module.nix # NixOS module for nixos-options
│ ├── hm-options-module.nix # NixOS module for hm-options
│ ├── lab-monitoring-module.nix # NixOS module for lab-monitoring
│ ├── git-explorer-module.nix # NixOS module for git-explorer
│ └── package.nix # Parameterized Nix package
├── testdata/
│ └── options-sample.json # Test fixture
@@ -90,17 +138,58 @@ labmcp/
## MCP Tools
Both servers provide the same 6 tools:
### Options Servers (nixpkgs-search options, nixos-options, hm-options)
| Tool | Description |
|------|-------------|
| `search_options` | Full-text search across option names and descriptions |
| `get_option` | Get full details for a specific option with children |
| `get_file` | Fetch source file contents from indexed repository |
| `index_revision` | Index a revision (by hash or channel name) |
| `index_revision` | Index a revision (options, files, and packages for nixpkgs) |
| `list_revisions` | List all indexed revisions |
| `delete_revision` | Delete an indexed revision |
### Packages Server (nixpkgs-search packages)
| Tool | Description |
|------|-------------|
| `search_packages` | Full-text search across package names and descriptions |
| `get_package` | Get full details for a specific package by attr path |
| `get_file` | Fetch source file contents from nixpkgs |
| `index_revision` | Index a revision to make its packages searchable |
| `list_revisions` | List all indexed revisions |
| `delete_revision` | Delete an indexed revision |
### Monitoring Server (lab-monitoring)
| Tool | Description |
|------|-------------|
| `list_alerts` | List alerts with optional filters (state, severity, receiver) |
| `get_alert` | Get full details for a specific alert by fingerprint |
| `search_metrics` | Search metric names with substring filter, enriched with metadata |
| `get_metric_metadata` | Get type, help text, and unit for a specific metric |
| `query` | Execute instant PromQL query |
| `list_targets` | List scrape targets with health status |
| `list_silences` | List active/pending silences |
| `create_silence` | Create a silence (confirms with user first) |
| `query_logs` | Execute a LogQL range query against Loki (requires `LOKI_URL`) |
| `list_labels` | List available label names from Loki (requires `LOKI_URL`) |
| `list_label_values` | List values for a specific label from Loki (requires `LOKI_URL`) |
### Git Explorer Server (git-explorer)
| Tool | Description |
|------|-------------|
| `resolve_ref` | Resolve a git ref (branch, tag, commit) to its full commit hash |
| `get_log` | Get commit log with optional filters (author, path, limit) |
| `get_commit_info` | Get full details for a specific commit |
| `get_diff_files` | Get list of files changed between two commits |
| `get_file_at_commit` | Get file contents at a specific commit |
| `is_ancestor` | Check if one commit is an ancestor of another |
| `commits_between` | Get all commits between two refs |
| `list_branches` | List all branches in the repository |
| `search_commits` | Search commit messages for a pattern |
## Key Implementation Details
### Database
@@ -136,7 +225,32 @@ Both servers provide the same 6 tools:
## CLI Commands
### nixos-options
### nixpkgs-search (Primary)
```bash
# Options MCP Server
nixpkgs-search options serve # Run options MCP server on STDIO
nixpkgs-search options search <query> # Search options
nixpkgs-search options get <option> # Get option details
# Packages MCP Server
nixpkgs-search packages serve # Run packages MCP server on STDIO
nixpkgs-search packages search <query> # Search packages
nixpkgs-search packages get <attr> # Get package details
# Combined Indexing
nixpkgs-search index <revision> # Index options AND packages
nixpkgs-search index --no-packages <r> # Index options only (faster)
nixpkgs-search index --no-options <r> # Index packages only
nixpkgs-search index --no-files <r> # Skip file indexing
nixpkgs-search index --force <r> # Force re-index
# Shared Commands
nixpkgs-search list # List indexed revisions
nixpkgs-search delete <revision> # Delete indexed revision
nixpkgs-search --version # Show version
```
### nixos-options (Legacy)
```bash
nixos-options serve # Run MCP server on STDIO (default)
nixos-options serve --transport http # Run MCP server on HTTP
@@ -164,34 +278,90 @@ hm-options delete <revision> # Delete indexed revision
hm-options --version # Show version
```
### lab-monitoring
```bash
lab-monitoring serve # Run MCP server on STDIO
lab-monitoring serve --transport http # Run MCP server on HTTP
lab-monitoring alerts # List alerts
lab-monitoring alerts --state active # Filter by state
lab-monitoring query 'up' # Instant PromQL query
lab-monitoring targets # List scrape targets
lab-monitoring metrics node # Search metric names
lab-monitoring logs '{job="varlogs"}' # Query logs (requires LOKI_URL)
lab-monitoring logs '{job="nginx"} |= "error"' --start 2h --limit 50
lab-monitoring labels # List Loki labels
lab-monitoring labels --values job # List values for a label
```
### git-explorer
```bash
git-explorer serve # Run MCP server on STDIO
git-explorer serve --transport http # Run MCP server on HTTP
git-explorer --repo /path resolve <ref> # Resolve ref to commit hash
git-explorer --repo /path log --limit 10 # Show commit log
git-explorer --repo /path show <ref> # Show commit details
git-explorer --repo /path diff <from> <to> # Files changed between commits
git-explorer --repo /path cat <ref> <path> # File contents at commit
git-explorer --repo /path branches # List branches
git-explorer --repo /path search <query> # Search commit messages
git-explorer --version # Show version
```
### Channel Aliases
**nixos-options**: `nixos-unstable`, `nixos-stable`, `nixos-24.11`, `nixos-24.05`, etc.
**nixpkgs-search/nixos-options**: `nixos-unstable`, `nixos-stable`, `nixos-24.11`, `nixos-24.05`, etc.
**hm-options**: `hm-unstable`, `hm-stable`, `master`, `release-24.11`, `release-24.05`, etc.
## Notes for Claude
### Planning
When creating implementation plans, the first step should usually be to **checkout an appropriately named feature branch** (e.g., `git checkout -b feature/lab-monitoring`). This keeps work isolated and makes PRs cleaner.
**After implementing a plan**, update the README.md to reflect any new or changed functionality (new servers, tools, CLI commands, configuration options, NixOS module options, etc.).
### Development Workflow
- **Always run `go fmt ./...` before committing Go code**
- **Run Go commands using `nix develop -c`** (e.g., `nix develop -c go test ./...`)
- **Use `nix run` to run binaries** (e.g., `nix run .#nixos-options -- serve`)
- **Use `nix run` to run/test binaries** (e.g., `nix run .#nixpkgs-search -- options serve`)
- Do NOT use `go build -o /tmp/...` to test binaries - always use `nix run`
- Remember: modified files must be tracked by git for `nix run` to see them
- File paths in responses should use format `path/to/file.go:123`
### Linting
**Before completing work on a feature**, run all linting tools to ensure code quality:
```bash
# Run all linters (should report 0 issues)
nix develop -c golangci-lint run ./...
# Check for known vulnerabilities in dependencies
nix develop -c govulncheck ./...
# Run go vet for additional static analysis
nix develop -c go vet ./...
```
All three tools should pass with no issues before merging a feature branch.
### Nix Build Requirement
**IMPORTANT**: When running `nix build`, `nix run`, or similar commands, new files must be tracked by git first. Nix flakes only see git-tracked files. If you create new files, run `git add <file>` before attempting nix operations.
### Version Bumping
Version bumps should be done once per feature branch, not per commit. Rules:
Version bumps should be done once per feature branch, not per commit. **Only bump versions for packages that were actually changed** — different packages can have different version numbers.
Rules for determining bump type:
- **Patch bump** (0.1.0 → 0.1.1): Changes to Go code within `internal/` that affect a program
- **Minor bump** (0.1.0 → 0.2.0): Changes to Go code outside `internal/` (e.g., `cmd/`)
- **Major bump** (0.1.0 → 1.0.0): Breaking changes to CLI usage or MCP protocol
Version is defined in multiple places that must stay in sync:
- `cmd/nixos-options/main.go`
- `cmd/hm-options/main.go`
- `internal/mcp/server.go` (in `DefaultNixOSConfig` and `DefaultHomeManagerConfig`)
- `nix/package.nix`
Each package's version is defined in multiple places that must stay in sync *for that package*:
- **lab-monitoring**: `cmd/lab-monitoring/main.go` + `internal/mcp/server.go` (`DefaultMonitoringConfig`)
- **nixpkgs-search**: `cmd/nixpkgs-search/main.go` + `internal/mcp/server.go` (`DefaultNixOSConfig`, `DefaultNixpkgsPackagesConfig`)
- **nixos-options**: `cmd/nixos-options/main.go` + `internal/mcp/server.go` (`DefaultNixOSConfig`)
- **hm-options**: `cmd/hm-options/main.go` + `internal/mcp/server.go` (`DefaultHomeManagerConfig`)
- **git-explorer**: `cmd/git-explorer/main.go` + `internal/mcp/server.go` (`DefaultGitExplorerConfig`)
- **nix/package.nix**: Shared across all packages (bump to highest version when any package changes)
### User Preferences
- User prefers PostgreSQL over SQLite (has homelab infrastructure)
@@ -214,19 +384,28 @@ nix develop -c go test -bench=. -benchtime=1x -timeout=30m ./internal/homemanage
### Building
```bash
# Build with nix
nix build .#nixpkgs-search
nix build .#nixos-options
nix build .#hm-options
nix build .#lab-monitoring
nix build .#git-explorer
# Run directly
nix run .#nixos-options -- serve
nix run .#nixpkgs-search -- options serve
nix run .#nixpkgs-search -- packages serve
nix run .#nixpkgs-search -- index nixos-unstable
nix run .#hm-options -- serve
nix run .#nixos-options -- index nixos-unstable
nix run .#hm-options -- index hm-unstable
nix run .#lab-monitoring -- serve
nix run .#git-explorer -- --repo . serve
```
### Indexing Performance
Indexing operations are slow due to Nix evaluation and file downloads. When running index commands, use appropriate timeouts:
- **nixos-options**: ~5-6 minutes for `nixos-unstable` (with files)
- **nixpkgs-search (full)**: ~15-20 minutes for `nixos-unstable` (options + packages + files)
- **nixpkgs-search (options only)**: ~5-6 minutes with `--no-packages`
- **nixpkgs-search (packages only)**: ~10-15 minutes with `--no-options`
- **hm-options**: ~1-2 minutes for `master` (with files)
Use `--no-files` flag for faster indexing (~1-2 minutes) if file content lookup isn't needed.
Use `--no-files` flag to skip file indexing for faster results.
Use `--no-packages` to index only options (matches legacy behavior).

README.md (496 changes)

@@ -4,18 +4,51 @@ A collection of Model Context Protocol (MCP) servers written in Go.
## MCP Servers
### NixOS Options (`nixos-options`)
### Nixpkgs Search (`nixpkgs-search`) - Primary
Search and query NixOS configuration options across multiple nixpkgs revisions. Designed to help Claude (and other MCP clients) answer questions about NixOS configuration.
Combined search for NixOS options and Nix packages from nixpkgs. Provides two separate MCP servers:
- **Options server**: Search NixOS configuration options (`nixpkgs-search options serve`)
- **Packages server**: Search Nix packages (`nixpkgs-search packages serve`)
Both servers share the same database, allowing you to index once and serve both.
### Home Manager Options (`hm-options`)
Search and query Home Manager configuration options across multiple home-manager revisions. Designed to help Claude (and other MCP clients) answer questions about Home Manager configuration.
### Shared Features
### Lab Monitoring (`lab-monitoring`)
- Full-text search across option names and descriptions
- Query specific options with type, default, example, and declarations
Query Prometheus metrics, Alertmanager alerts, and Loki logs from your monitoring stack. Unlike other servers, this queries live HTTP APIs — no database or indexing needed.
- List and inspect alerts from Alertmanager
- Execute PromQL queries against Prometheus
- Search metric names with metadata
- View scrape target health
- Manage alert silences
- Query logs via LogQL (when Loki is configured)
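Since lab-monitoring talks to live HTTP APIs, an instant PromQL query is just a GET against Prometheus's `/api/v1/query` endpoint. A minimal sketch of the URL construction (illustrative only; the server's actual client code may differ):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildInstantQueryURL builds the request URL for a Prometheus
// instant query, escaping the PromQL expression as a query parameter.
func buildInstantQueryURL(base, promql string) string {
	v := url.Values{}
	v.Set("query", promql)
	return base + "/api/v1/query?" + v.Encode()
}

func main() {
	fmt.Println(buildInstantQueryURL("http://localhost:9090", "up"))
	// http://localhost:9090/api/v1/query?query=up
}
```

Because there is no database layer, each tool call maps directly onto one such API request.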
### Git Explorer (`git-explorer`)
Read-only access to git repository information. Designed for deployment verification — comparing deployed flake revisions against source repositories.
- Resolve refs (branches, tags, commits) to commit hashes
- View commit logs with filtering by author, path, or range
- Get full commit details including file change statistics
- Compare commits to see which files changed
- Read file contents at any commit
- Check ancestry relationships between commits
- Search commit messages
All operations are read-only and will never modify the repository.
### NixOS Options (`nixos-options`) - Legacy
Search and query NixOS configuration options. **Note**: Prefer using `nixpkgs-search` instead, which includes this functionality plus package search.
### Shared Features (nixpkgs-search, hm-options, nixos-options)
- Full-text search across option/package names and descriptions
- Query specific options/packages with full metadata
- Index multiple revisions (by git hash or channel name)
- Fetch module source files
- Support for PostgreSQL and SQLite backends
@@ -26,19 +59,25 @@ Search and query Home Manager configuration options across multiple home-manager
```bash
# Build the packages
nix build git+https://git.t-juice.club/torjus/labmcp#nixos-options
nix build git+https://git.t-juice.club/torjus/labmcp#hm-options
nix build git+https://code.t-juice.club/torjus/labmcp#nixpkgs-search
nix build git+https://code.t-juice.club/torjus/labmcp#hm-options
nix build git+https://code.t-juice.club/torjus/labmcp#lab-monitoring
nix build git+https://code.t-juice.club/torjus/labmcp#git-explorer
# Or run directly
nix run git+https://git.t-juice.club/torjus/labmcp#nixos-options -- --help
nix run git+https://git.t-juice.club/torjus/labmcp#hm-options -- --help
nix run git+https://code.t-juice.club/torjus/labmcp#nixpkgs-search -- --help
nix run git+https://code.t-juice.club/torjus/labmcp#hm-options -- --help
nix run git+https://code.t-juice.club/torjus/labmcp#lab-monitoring -- --help
nix run git+https://code.t-juice.club/torjus/labmcp#git-explorer -- --help
```
### From Source
```bash
go install git.t-juice.club/torjus/labmcp/cmd/nixos-options@latest
go install git.t-juice.club/torjus/labmcp/cmd/hm-options@latest
go install code.t-juice.club/torjus/labmcp/cmd/nixpkgs-search@latest
go install code.t-juice.club/torjus/labmcp/cmd/hm-options@latest
go install code.t-juice.club/torjus/labmcp/cmd/lab-monitoring@latest
go install code.t-juice.club/torjus/labmcp/cmd/git-explorer@latest
```
## Usage
@@ -50,11 +89,18 @@ Configure in your MCP client (e.g., Claude Desktop):
```json
{
"mcpServers": {
"nixos-options": {
"command": "nixos-options",
"args": ["serve"],
"nixpkgs-options": {
"command": "nixpkgs-search",
"args": ["options", "serve"],
"env": {
"NIXOS_OPTIONS_DATABASE": "sqlite:///path/to/nixos-options.db"
"NIXPKGS_SEARCH_DATABASE": "sqlite:///path/to/nixpkgs-search.db"
}
},
"nixpkgs-packages": {
"command": "nixpkgs-search",
"args": ["packages", "serve"],
"env": {
"NIXPKGS_SEARCH_DATABASE": "sqlite:///path/to/nixpkgs-search.db"
}
},
"hm-options": {
@@ -63,6 +109,24 @@ Configure in your MCP client (e.g., Claude Desktop):
"env": {
"HM_OPTIONS_DATABASE": "sqlite:///path/to/hm-options.db"
}
},
"lab-monitoring": {
"command": "lab-monitoring",
"args": ["serve"],
"env": {
"PROMETHEUS_URL": "http://prometheus.example.com:9090",
"ALERTMANAGER_URL": "http://alertmanager.example.com:9093",
"LOKI_URL": "http://loki.example.com:3100",
"LOKI_USERNAME": "optional-username",
"LOKI_PASSWORD": "optional-password"
}
},
"git-explorer": {
"command": "git-explorer",
"args": ["serve"],
"env": {
"GIT_REPO_PATH": "/path/to/your/repo"
}
}
}
}
@@ -73,19 +137,42 @@ Alternatively, if you have Nix installed, you can use the flake directly without
```json
{
"mcpServers": {
"nixos-options": {
"nixpkgs-options": {
"command": "nix",
"args": ["run", "git+https://git.t-juice.club/torjus/labmcp#nixos-options", "--", "serve"],
"args": ["run", "git+https://code.t-juice.club/torjus/labmcp#nixpkgs-search", "--", "options", "serve"],
"env": {
"NIXOS_OPTIONS_DATABASE": "sqlite:///path/to/nixos-options.db"
"NIXPKGS_SEARCH_DATABASE": "sqlite:///path/to/nixpkgs-search.db"
}
},
"nixpkgs-packages": {
"command": "nix",
"args": ["run", "git+https://code.t-juice.club/torjus/labmcp#nixpkgs-search", "--", "packages", "serve"],
"env": {
"NIXPKGS_SEARCH_DATABASE": "sqlite:///path/to/nixpkgs-search.db"
}
},
"hm-options": {
"command": "nix",
"args": ["run", "git+https://git.t-juice.club/torjus/labmcp#hm-options", "--", "serve"],
"args": ["run", "git+https://code.t-juice.club/torjus/labmcp#hm-options", "--", "serve"],
"env": {
"HM_OPTIONS_DATABASE": "sqlite:///path/to/hm-options.db"
}
},
"lab-monitoring": {
"command": "nix",
"args": ["run", "git+https://code.t-juice.club/torjus/labmcp#lab-monitoring", "--", "serve"],
"env": {
"PROMETHEUS_URL": "http://prometheus.example.com:9090",
"ALERTMANAGER_URL": "http://alertmanager.example.com:9093",
"LOKI_URL": "http://loki.example.com:3100"
}
},
"git-explorer": {
"command": "nix",
"args": ["run", "git+https://code.t-juice.club/torjus/labmcp#git-explorer", "--", "serve"],
"env": {
"GIT_REPO_PATH": "/path/to/your/repo"
}
}
}
}
@@ -93,20 +180,23 @@ Alternatively, if you have Nix installed, you can use the flake directly without
### As MCP Server (HTTP)
Both servers can run over HTTP with Server-Sent Events (SSE) for web-based MCP clients:
All servers can run over HTTP with Server-Sent Events (SSE) for web-based MCP clients:
```bash
# Start HTTP server on default address (127.0.0.1:8080)
nixos-options serve --transport http
# Start HTTP server on default address
nixpkgs-search options serve --transport http
nixpkgs-search packages serve --transport http
hm-options serve --transport http
lab-monitoring serve --transport http
git-explorer serve --transport http
# Custom address and CORS configuration
nixos-options serve --transport http \
nixpkgs-search options serve --transport http \
--http-address 0.0.0.0:8080 \
--allowed-origins https://example.com
# With TLS
nixos-options serve --transport http \
nixpkgs-search options serve --transport http \
--tls-cert /path/to/cert.pem \
--tls-key /path/to/key.pem
```
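The `--allowed-origins` flag gates browser access by exact Origin match. A sketch of that kind of check, assuming a simple allow-list comparison (the real HTTP transport may implement CORS differently):

```go
package main

import "fmt"

// originAllowed reports whether a request's Origin header is in the
// configured allow-list. An empty list would fall back to the
// transport's default policy (localhost only, per the module docs).
func originAllowed(origin string, allowed []string) bool {
	for _, a := range allowed {
		if a == origin {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"https://example.com"}
	fmt.Println(originAllowed("https://example.com", allowed))  // true
	fmt.Println(originAllowed("https://evil.example", allowed)) // false
}
```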
@@ -118,53 +208,136 @@ HTTP transport endpoints:
### CLI Examples
**Index a revision:**
**Index a revision (nixpkgs-search):**
```bash
# NixOS options - index by channel name
nixos-options index nixos-unstable
# Home Manager options - index by channel name
hm-options index hm-unstable
# Index both options and packages
nixpkgs-search index nixos-unstable
# Index by git hash
nixos-options index e6eae2ee2110f3d31110d5c222cd395303343b08
nixpkgs-search index e6eae2ee2110f3d31110d5c222cd395303343b08
# Index options only (faster, skip packages)
nixpkgs-search index --no-packages nixos-unstable
# Index packages only (skip options)
nixpkgs-search index --no-options nixos-unstable
# Index without file contents (faster, disables get_file tool)
nixos-options index --no-files nixos-unstable
nixpkgs-search index --no-files nixos-unstable
```
**Index a revision (hm-options):**
```bash
# Index by channel name
hm-options index hm-unstable
# Index without file contents
hm-options index --no-files release-24.11
```
**List indexed revisions:**
```bash
nixos-options list
nixpkgs-search list
hm-options list
```
**Search for options:**
```bash
# NixOS options
nixos-options search nginx
nixos-options search -n 10 postgresql
# NixOS options via nixpkgs-search
nixpkgs-search options search nginx
nixpkgs-search options search -n 10 postgresql
# Home Manager options
hm-options search git
hm-options search -n 10 neovim
```
**Get option details:**
**Search for packages:**
```bash
nixos-options get services.nginx.enable
nixpkgs-search packages search firefox
nixpkgs-search packages search -n 10 python
# Filter by status
nixpkgs-search packages search --unfree nvidia
nixpkgs-search packages search --broken deprecated-package
```
**Get option/package details:**
```bash
nixpkgs-search options get services.nginx.enable
nixpkgs-search packages get firefox
hm-options get programs.git.enable
```
**Lab Monitoring CLI:**
```bash
# List alerts (defaults to active only)
lab-monitoring alerts
lab-monitoring alerts --all # Include silenced/inhibited alerts
lab-monitoring alerts --state all # Same as --all
lab-monitoring alerts --severity critical
# Execute PromQL queries
lab-monitoring query 'up'
lab-monitoring query 'rate(http_requests_total[5m])'
# List scrape targets
lab-monitoring targets
# Search metrics
lab-monitoring metrics node
lab-monitoring metrics -n 20 cpu
# Query logs from Loki (requires LOKI_URL)
lab-monitoring logs '{job="varlogs"}'
lab-monitoring logs '{job="nginx"} |= "error"' --start 2h --limit 50
lab-monitoring logs '{job="systemd"}' --direction forward
# List Loki labels
lab-monitoring labels
lab-monitoring labels --values job
```
**Git Explorer CLI:**
```bash
# Resolve a ref to commit hash
git-explorer --repo /path/to/repo resolve main
git-explorer --repo /path/to/repo resolve v1.0.0
# View commit log
git-explorer --repo /path/to/repo log --limit 10
git-explorer --repo /path/to/repo log --author "John" --path src/
# Show commit details
git-explorer --repo /path/to/repo show HEAD
git-explorer --repo /path/to/repo show abc1234
# Compare commits
git-explorer --repo /path/to/repo diff HEAD~5 HEAD
# Show file at specific commit
git-explorer --repo /path/to/repo cat HEAD README.md
# List branches
git-explorer --repo /path/to/repo branches
git-explorer --repo /path/to/repo branches --remote
# Search commit messages
git-explorer --repo /path/to/repo search "fix bug"
```
**Delete an indexed revision:**
```bash
nixos-options delete nixos-23.11
nixpkgs-search delete nixos-23.11
hm-options delete release-23.11
```
@@ -174,20 +347,26 @@ hm-options delete release-23.11
| Variable | Description | Default |
|----------|-------------|---------|
| `NIXOS_OPTIONS_DATABASE` | Database connection string for nixos-options | `sqlite://nixos-options.db` |
| `NIXPKGS_SEARCH_DATABASE` | Database connection string for nixpkgs-search | `sqlite://nixpkgs-search.db` |
| `HM_OPTIONS_DATABASE` | Database connection string for hm-options | `sqlite://hm-options.db` |
| `NIXOS_OPTIONS_DATABASE` | Database connection string for nixos-options (legacy) | `sqlite://nixos-options.db` |
| `PROMETHEUS_URL` | Prometheus base URL for lab-monitoring | `http://localhost:9090` |
| `ALERTMANAGER_URL` | Alertmanager base URL for lab-monitoring | `http://localhost:9093` |
| `LOKI_URL` | Loki base URL for lab-monitoring (optional, enables log tools) | *(none)* |
| `LOKI_USERNAME` | Username for Loki basic auth (optional) | *(none)* |
| `LOKI_PASSWORD` | Password for Loki basic auth (optional) | *(none)* |
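When `LOKI_USERNAME` and `LOKI_PASSWORD` are set, they become a standard HTTP Basic Authentication header on each Loki request. In Go this is one call on the request (sketch with placeholder credentials):

```go
package main

import (
	"fmt"
	"net/http"
)

// basicAuthHeader shows what the Loki credentials turn into on the
// wire: an "Authorization: Basic <base64(user:pass)>" header.
func basicAuthHeader(user, pass string) string {
	req, _ := http.NewRequest(http.MethodGet,
		"http://loki.example.com:3100/loki/api/v1/labels", nil)
	req.SetBasicAuth(user, pass)
	return req.Header.Get("Authorization")
}

func main() {
	fmt.Println(basicAuthHeader("user", "pass")) // Basic dXNlcjpwYXNz
}
```

On NixOS, prefer `lokiPasswordFile` (systemd `LoadCredential`) over the plain env var so the password stays out of the process environment and the Nix store.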
### Database Connection Strings
**SQLite:**
```bash
export NIXOS_OPTIONS_DATABASE="sqlite:///path/to/database.db"
export NIXOS_OPTIONS_DATABASE="sqlite://:memory:" # In-memory
export NIXPKGS_SEARCH_DATABASE="sqlite:///path/to/database.db"
export NIXPKGS_SEARCH_DATABASE="sqlite://:memory:" # In-memory
```
**PostgreSQL:**
```bash
export NIXOS_OPTIONS_DATABASE="postgres://user:pass@localhost/nixos_options?sslmode=disable"
export NIXPKGS_SEARCH_DATABASE="postgres://user:pass@localhost/nixpkgs_search?sslmode=disable"
```
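Both backends are selected by the `scheme://` prefix of the connection string. A minimal sketch of that dispatch, assuming a plain prefix split (labmcp's real parser may differ, e.g. around the `sqlite://:memory:` special case):

```go
package main

import (
	"fmt"
	"strings"
)

// splitDSN separates the backend scheme from the rest of a connection
// string such as "sqlite:///path/to/db.db" or "postgres://user@host/db".
func splitDSN(dsn string) (scheme, rest string, ok bool) {
	parts := strings.SplitN(dsn, "://", 2)
	if len(parts) != 2 {
		return "", "", false
	}
	return parts[0], parts[1], true
}

func main() {
	scheme, rest, _ := splitDSN("sqlite://:memory:")
	fmt.Println(scheme, rest) // sqlite :memory:
}
```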
### Command-Line Flags
@@ -195,42 +374,87 @@ export NIXOS_OPTIONS_DATABASE="postgres://user:pass@localhost/nixos_options?sslm
The database can also be specified via the `-d` or `--database` flag:
```bash
nixos-options -d "postgres://localhost/nixos" serve
nixpkgs-search -d "postgres://localhost/nixpkgs" options serve
nixpkgs-search -d "sqlite://my.db" index nixos-unstable
hm-options -d "sqlite://my.db" index hm-unstable
```
## MCP Tools
Both servers provide the following tools:
### Options Servers (nixpkgs-search options, hm-options)
| Tool | Description |
|------|-------------|
| `search_options` | Search for options by name or description |
| `get_option` | Get full details for a specific option |
| `get_file` | Fetch source file contents from the repository |
| `index_revision` | Index a revision |
| `index_revision` | Index a revision (options, files, and packages for nixpkgs) |
| `list_revisions` | List all indexed revisions |
| `delete_revision` | Delete an indexed revision |
### Packages Server (nixpkgs-search packages)
| Tool | Description |
|------|-------------|
| `search_packages` | Search for packages by name or description |
| `get_package` | Get full details for a specific package |
| `get_file` | Fetch source file contents from nixpkgs |
| `index_revision` | Index a revision to make its packages searchable |
| `list_revisions` | List all indexed revisions |
| `delete_revision` | Delete an indexed revision |
### Monitoring Server (lab-monitoring)
| Tool | Description |
|------|-------------|
| `list_alerts` | List alerts with optional filters (state, severity, receiver). Defaults to active alerts only; use state=all to include silenced/inhibited |
| `get_alert` | Get full details for a specific alert by fingerprint |
| `search_metrics` | Search metric names with substring filter, enriched with metadata |
| `get_metric_metadata` | Get type, help text, and unit for a specific metric |
| `query` | Execute an instant PromQL query |
| `list_targets` | List scrape targets with health status |
| `list_silences` | List active/pending alert silences |
| `create_silence` | Create a new alert silence (requires `--enable-silences` flag) |
| `query_logs` | Execute a LogQL range query against Loki (requires `LOKI_URL`) |
| `list_labels` | List available label names from Loki (requires `LOKI_URL`) |
| `list_label_values` | List values for a specific label from Loki (requires `LOKI_URL`) |
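The `list_alerts` defaults (active-only unless `state=all`, optional severity filter) can be sketched as a simple filter over the alerts Alertmanager returns. The `alert` struct below is a hypothetical minimal shape, not the server's real type:

```go
package main

import "fmt"

type alert struct {
	Name     string
	Severity string
	State    string // "active", "silenced", "inhibited"
}

// filterAlerts mirrors list_alerts' documented behavior: drop
// non-active alerts unless state == "all", then apply an optional
// severity filter.
func filterAlerts(in []alert, state, severity string) []alert {
	var out []alert
	for _, a := range in {
		if state != "all" && a.State != "active" {
			continue
		}
		if severity != "" && a.Severity != severity {
			continue
		}
		out = append(out, a)
	}
	return out
}

func main() {
	alerts := []alert{
		{"DiskFull", "critical", "active"},
		{"HighLoad", "warning", "silenced"},
	}
	fmt.Println(len(filterAlerts(alerts, "", "critical"))) // 1
}
```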
### Git Explorer Server (git-explorer)
| Tool | Description |
|------|-------------|
| `resolve_ref` | Resolve a git ref (branch, tag, commit) to its full commit hash |
| `get_log` | Get commit log with optional filters (author, path, limit) |
| `get_commit_info` | Get full details for a specific commit |
| `get_diff_files` | Get list of files changed between two commits |
| `get_file_at_commit` | Get file contents at a specific commit |
| `is_ancestor` | Check if one commit is an ancestor of another |
| `commits_between` | Get all commits between two refs |
| `list_branches` | List all branches in the repository |
| `search_commits` | Search commit messages for a pattern |
## NixOS Modules
NixOS modules are provided for running both MCP servers as systemd services.
NixOS modules are provided for running the MCP servers as systemd services.
### nixos-options
### nixpkgs-search (Recommended)
The `nixpkgs-search` module runs two separate MCP servers (options and packages) that share a database:
```nix
{
inputs.labmcp.url = "git+https://git.t-juice.club/torjus/labmcp";
inputs.labmcp.url = "git+https://code.t-juice.club/torjus/labmcp";
outputs = { self, nixpkgs, labmcp }: {
nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
labmcp.nixosModules.nixos-options-mcp
labmcp.nixosModules.nixpkgs-search-mcp
{
services.nixos-options-mcp = {
services.nixpkgs-search = {
enable = true;
indexOnStart = [ "nixos-unstable" ];
# Both options and packages servers are enabled by default
};
}
];
@@ -239,11 +463,24 @@ NixOS modules are provided for running both MCP servers as systemd services.
}
```
**Options-only configuration:**
```nix
{
services.nixpkgs-search = {
enable = true;
indexOnStart = [ "nixos-unstable" ];
indexFlags = [ "--no-packages" ]; # Faster indexing
packages.enable = false; # Don't run packages server
};
}
```
### hm-options
```nix
{
inputs.labmcp.url = "git+https://git.t-juice.club/torjus/labmcp";
inputs.labmcp.url = "git+https://code.t-juice.club/torjus/labmcp";
outputs = { self, nixpkgs, labmcp }: {
nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
@@ -262,9 +499,150 @@ NixOS modules are provided for running both MCP servers as systemd services.
}
```
### lab-monitoring
```nix
{
inputs.labmcp.url = "git+https://code.t-juice.club/torjus/labmcp";
outputs = { self, nixpkgs, labmcp }: {
nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
labmcp.nixosModules.lab-monitoring-mcp
{
services.lab-monitoring = {
enable = true;
prometheusUrl = "http://prometheus.example.com:9090";
alertmanagerUrl = "http://alertmanager.example.com:9093";
enableSilences = true; # Optional: enable create_silence tool
};
}
];
};
};
}
```
### git-explorer
```nix
{
inputs.labmcp.url = "git+https://code.t-juice.club/torjus/labmcp";
outputs = { self, nixpkgs, labmcp }: {
nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
labmcp.nixosModules.git-explorer-mcp
{
services.git-explorer = {
enable = true;
repoPath = "/path/to/your/git/repo";
};
}
];
};
};
}
```
### nixos-options (Legacy)
```nix
{
inputs.labmcp.url = "git+https://code.t-juice.club/torjus/labmcp";
outputs = { self, nixpkgs, labmcp }: {
nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
labmcp.nixosModules.nixos-options-mcp
{
services.nixos-options-mcp = {
enable = true;
indexOnStart = [ "nixos-unstable" ];
};
}
];
};
};
}
```
### Module Options
Both modules have similar options. Shown here for `nixos-options-mcp` (replace with `hm-options-mcp` for Home Manager):
#### nixpkgs-search
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable the service |
| `package` | package | from flake | Package to use |
| `database.type` | enum | `"sqlite"` | `"sqlite"` or `"postgres"` |
| `database.name` | string | `"nixpkgs-search.db"` | SQLite database filename |
| `database.connectionString` | string | `""` | PostgreSQL connection URL (stored in Nix store) |
| `database.connectionStringFile` | path | `null` | Path to file with PostgreSQL connection URL (recommended for secrets) |
| `indexOnStart` | list of string | `[]` | Revisions to index on service start |
| `indexFlags` | list of string | `[]` | Additional flags for indexing (e.g., `["--no-packages"]`) |
| `user` | string | `"nixpkgs-search"` | User to run the service as |
| `group` | string | `"nixpkgs-search"` | Group to run the service as |
| `dataDir` | path | `/var/lib/nixpkgs-search` | Directory for data storage |
| `options.enable` | bool | `true` | Enable the options MCP server |
| `options.http.address` | string | `"127.0.0.1:8082"` | HTTP listen address for options server |
| `options.openFirewall` | bool | `false` | Open firewall for options HTTP port |
| `packages.enable` | bool | `true` | Enable the packages MCP server |
| `packages.http.address` | string | `"127.0.0.1:8083"` | HTTP listen address for packages server |
| `packages.openFirewall` | bool | `false` | Open firewall for packages HTTP port |
Both `options.http` and `packages.http` also support:
- `endpoint` (default: `"/mcp"`)
- `allowedOrigins` (default: `[]`)
- `sessionTTL` (default: `"30m"`)
- `tls.enable`, `tls.certFile`, `tls.keyFile`
#### lab-monitoring
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable the service |
| `package` | package | from flake | Package to use |
| `prometheusUrl` | string | `"http://localhost:9090"` | Prometheus base URL |
| `alertmanagerUrl` | string | `"http://localhost:9093"` | Alertmanager base URL |
| `lokiUrl` | nullOr string | `null` | Loki base URL (enables log query tools when set) |
| `lokiUsername` | nullOr string | `null` | Username for Loki basic authentication |
| `lokiPasswordFile` | nullOr path | `null` | Path to file containing Loki password (uses systemd `LoadCredential`) |
| `enableSilences` | bool | `false` | Enable the create_silence tool (write operation) |
| `http.address` | string | `"127.0.0.1:8084"` | HTTP listen address |
| `http.endpoint` | string | `"/mcp"` | HTTP endpoint path |
| `http.allowedOrigins` | list of string | `[]` | Allowed CORS origins (empty = localhost only) |
| `http.sessionTTL` | string | `"30m"` | Session timeout (Go duration format) |
| `http.tls.enable` | bool | `false` | Enable TLS |
| `http.tls.certFile` | path | `null` | TLS certificate file |
| `http.tls.keyFile` | path | `null` | TLS private key file |
| `openFirewall` | bool | `false` | Open firewall for HTTP port |
The lab-monitoring module uses `DynamicUser=true`, so no separate user/group configuration is needed.
#### git-explorer
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable the service |
| `package` | package | from flake | Package to use |
| `repoPath` | string | *(required)* | Path to the git repository to serve |
| `defaultRemote` | string | `"origin"` | Default remote name for ref resolution |
| `http.address` | string | `"127.0.0.1:8085"` | HTTP listen address |
| `http.endpoint` | string | `"/mcp"` | HTTP endpoint path |
| `http.allowedOrigins` | list of string | `[]` | Allowed CORS origins |
| `http.sessionTTL` | string | `"30m"` | Session timeout |
| `http.tls.enable` | bool | `false` | Enable TLS |
| `http.tls.certFile` | path | `null` | TLS certificate file |
| `http.tls.keyFile` | path | `null` | TLS private key file |
| `openFirewall` | bool | `false` | Open firewall for HTTP port |
The git-explorer module uses `DynamicUser=true` and grants read-only access to the repository path.
#### hm-options-mcp / nixos-options-mcp (Legacy)
| Option | Type | Default | Description |
|--------|------|---------|-------------|
@@ -293,18 +671,18 @@ Using `connectionStringFile` (recommended for production with sensitive credenti
```nix
{
services.nixos-options-mcp = {
services.nixpkgs-search = {
enable = true;
database = {
type = "postgres";
# File contains: postgres://user:secret@localhost/nixos_options?sslmode=disable
connectionStringFile = "/run/secrets/nixos-options-db";
# File contains: postgres://user:secret@localhost/nixpkgs_search?sslmode=disable
connectionStringFile = "/run/secrets/nixpkgs-search-db";
};
indexOnStart = [ "nixos-unstable" ];
};
# Example with agenix or sops-nix for secret management
# age.secrets.nixos-options-db.file = ./secrets/nixos-options-db.age;
# age.secrets.nixpkgs-search-db.file = ./secrets/nixpkgs-search-db.age;
}
```
@@ -321,8 +699,10 @@ go test ./...
go test -bench=. ./internal/database/...
# Build
go build ./cmd/nixos-options
go build ./cmd/nixpkgs-search
go build ./cmd/hm-options
go build ./cmd/lab-monitoring
go build ./cmd/git-explorer
```
## License

TODO.md

@@ -4,12 +4,25 @@
- [ ] Progress reporting during indexing ("Fetching nixpkgs... Parsing options... Indexing files...")
- [ ] Add `search_files` MCP tool - search for files by path pattern (e.g., find all nginx-related modules)
- [ ] Include file size metadata in `get_option` declarations (byte size and/or line count) so clients know file sizes before fetching
- [ ] Add range parameters to `get_file` (`offset`, `limit`) with sensible defaults (~200-300 lines) to avoid dumping massive files
## Robustness
- [ ] PostgreSQL integration tests with testcontainers (currently skipped without manual DB setup)
- [ ] Graceful handling of concurrent indexing (what happens if two clients index the same revision?)
## New MCP Servers
- [x] `nixpkgs-packages` - Index and search nixpkgs packages (implemented in `nixpkgs-search packages`)
- [x] `lab-monitoring` - Query Prometheus and Alertmanager APIs (8 tools, no database required)
## Lab Monitoring
- [ ] Add `list_rules` tool - list Prometheus alerting and recording rules (via `/api/v1/rules`)
- [ ] Add `get_rule_group` tool - get details for a specific rule group
- [x] Add Loki log query support - query logs via LogQL (3 tools: `query_logs`, `list_labels`, `list_label_values`), opt-in via `LOKI_URL`
## Nice to Have
- [ ] Option history/diff - compare options between two revisions ("what changed in services.nginx between 24.05 and 24.11?")

cmd/git-explorer/main.go (new file)

@@ -0,0 +1,459 @@
package main
import (
"context"
"fmt"
"log"
"os"
"os/signal"
"syscall"
"time"
"github.com/urfave/cli/v2"
"code.t-juice.club/torjus/labmcp/internal/gitexplorer"
"code.t-juice.club/torjus/labmcp/internal/mcp"
)
const version = "0.1.0"
func main() {
app := &cli.App{
Name: "git-explorer",
Usage: "Read-only MCP server for git repository exploration",
Version: version,
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
Aliases: []string{"r"},
Usage: "Path to git repository",
EnvVars: []string{"GIT_REPO_PATH"},
Value: ".",
},
&cli.StringFlag{
Name: "default-remote",
Usage: "Default remote name",
EnvVars: []string{"GIT_DEFAULT_REMOTE"},
Value: "origin",
},
},
Commands: []*cli.Command{
serveCommand(),
resolveCommand(),
logCommand(),
showCommand(),
diffCommand(),
catCommand(),
branchesCommand(),
searchCommand(),
},
}
if err := app.Run(os.Args); err != nil {
log.Fatal(err)
}
}
func serveCommand() *cli.Command {
return &cli.Command{
Name: "serve",
Usage: "Run MCP server for git exploration",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "transport",
Aliases: []string{"t"},
Usage: "Transport type: 'stdio' or 'http'",
Value: "stdio",
},
&cli.StringFlag{
Name: "http-address",
Usage: "HTTP listen address",
Value: "127.0.0.1:8085",
},
&cli.StringFlag{
Name: "http-endpoint",
Usage: "HTTP endpoint path",
Value: "/mcp",
},
&cli.StringSliceFlag{
Name: "allowed-origins",
Usage: "Allowed Origin headers for CORS",
},
&cli.StringFlag{
Name: "tls-cert",
Usage: "TLS certificate file",
},
&cli.StringFlag{
Name: "tls-key",
Usage: "TLS key file",
},
&cli.DurationFlag{
Name: "session-ttl",
Usage: "Session TTL for HTTP transport",
Value: 30 * time.Minute,
},
},
Action: func(c *cli.Context) error {
return runServe(c)
},
}
}
func resolveCommand() *cli.Command {
return &cli.Command{
Name: "resolve",
Usage: "Resolve a ref to a commit hash",
ArgsUsage: "<ref>",
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("ref argument required")
}
return runResolve(c, c.Args().First())
},
}
}
func logCommand() *cli.Command {
return &cli.Command{
Name: "log",
Usage: "Show commit log",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "ref",
Usage: "Starting ref (default: HEAD)",
Value: "HEAD",
},
&cli.IntFlag{
Name: "limit",
Aliases: []string{"n"},
Usage: "Maximum number of commits",
Value: 10,
},
&cli.StringFlag{
Name: "author",
Usage: "Filter by author",
},
&cli.StringFlag{
Name: "path",
Usage: "Filter by path",
},
},
Action: func(c *cli.Context) error {
return runLog(c)
},
}
}
func showCommand() *cli.Command {
return &cli.Command{
Name: "show",
Usage: "Show commit details",
ArgsUsage: "<ref>",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "stats",
Usage: "Include file statistics",
Value: true,
},
},
Action: func(c *cli.Context) error {
ref := "HEAD"
if c.NArg() > 0 {
ref = c.Args().First()
}
return runShow(c, ref)
},
}
}
func diffCommand() *cli.Command {
return &cli.Command{
Name: "diff",
Usage: "Show files changed between two commits",
ArgsUsage: "<from-ref> <to-ref>",
Action: func(c *cli.Context) error {
if c.NArg() < 2 {
return fmt.Errorf("both from-ref and to-ref arguments required")
}
return runDiff(c, c.Args().Get(0), c.Args().Get(1))
},
}
}
func catCommand() *cli.Command {
return &cli.Command{
Name: "cat",
Usage: "Show file contents at a commit",
ArgsUsage: "<ref> <path>",
Action: func(c *cli.Context) error {
if c.NArg() < 2 {
return fmt.Errorf("both ref and path arguments required")
}
return runCat(c, c.Args().Get(0), c.Args().Get(1))
},
}
}
func branchesCommand() *cli.Command {
return &cli.Command{
Name: "branches",
Usage: "List branches",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "remote",
Aliases: []string{"r"},
Usage: "Include remote branches",
},
},
Action: func(c *cli.Context) error {
return runBranches(c)
},
}
}
func searchCommand() *cli.Command {
return &cli.Command{
Name: "search",
Usage: "Search commit messages",
ArgsUsage: "<query>",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "ref",
Usage: "Starting ref (default: HEAD)",
Value: "HEAD",
},
&cli.IntFlag{
Name: "limit",
Aliases: []string{"n"},
Usage: "Maximum number of results",
Value: 20,
},
},
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("query argument required")
}
return runSearch(c, c.Args().First())
},
}
}
func runServe(c *cli.Context) error {
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()
repoPath := c.String("repo")
client, err := gitexplorer.NewGitClient(repoPath, c.String("default-remote"))
if err != nil {
return fmt.Errorf("failed to open repository: %w", err)
}
logger := log.New(os.Stderr, "[mcp] ", log.LstdFlags)
config := mcp.DefaultGitExplorerConfig()
server := mcp.NewGenericServer(logger, config)
gitexplorer.RegisterHandlers(server, client)
transport := c.String("transport")
switch transport {
case "stdio":
logger.Printf("Starting git-explorer MCP server on stdio (repo: %s)...", repoPath)
return server.Run(ctx, os.Stdin, os.Stdout)
case "http":
httpConfig := mcp.HTTPConfig{
Address: c.String("http-address"),
Endpoint: c.String("http-endpoint"),
AllowedOrigins: c.StringSlice("allowed-origins"),
SessionTTL: c.Duration("session-ttl"),
TLSCertFile: c.String("tls-cert"),
TLSKeyFile: c.String("tls-key"),
}
httpTransport := mcp.NewHTTPTransport(server, httpConfig)
return httpTransport.Run(ctx)
default:
return fmt.Errorf("unknown transport: %s (use 'stdio' or 'http')", transport)
}
}
func getClient(c *cli.Context) (*gitexplorer.GitClient, error) {
return gitexplorer.NewGitClient(c.String("repo"), c.String("default-remote"))
}
func runResolve(c *cli.Context, ref string) error {
client, err := getClient(c)
if err != nil {
return err
}
result, err := client.ResolveRef(ref)
if err != nil {
return err
}
fmt.Printf("%s (%s) -> %s\n", result.Ref, result.Type, result.Commit)
return nil
}
func runLog(c *cli.Context) error {
client, err := getClient(c)
if err != nil {
return err
}
entries, err := client.GetLog(
c.String("ref"),
c.Int("limit"),
c.String("author"),
"",
c.String("path"),
)
if err != nil {
return err
}
if len(entries) == 0 {
fmt.Println("No commits found.")
return nil
}
for _, e := range entries {
fmt.Printf("%s %s\n", e.ShortHash, e.Subject)
fmt.Printf(" Author: %s <%s>\n", e.Author, e.Email)
fmt.Printf(" Date: %s\n\n", e.Date.Format("2006-01-02 15:04:05"))
}
return nil
}
func runShow(c *cli.Context, ref string) error {
client, err := getClient(c)
if err != nil {
return err
}
info, err := client.GetCommitInfo(ref, c.Bool("stats"))
if err != nil {
return err
}
fmt.Printf("commit %s\n", info.Hash)
fmt.Printf("Author: %s <%s>\n", info.Author, info.Email)
fmt.Printf("Date: %s\n", info.Date.Format("2006-01-02 15:04:05"))
if len(info.Parents) > 0 {
fmt.Printf("Parents: %v\n", info.Parents)
}
if info.Stats != nil {
fmt.Printf("\n%d file(s) changed, %d insertions(+), %d deletions(-)\n",
info.Stats.FilesChanged, info.Stats.Additions, info.Stats.Deletions)
}
fmt.Printf("\n%s", info.Message)
return nil
}
func runDiff(c *cli.Context, fromRef, toRef string) error {
client, err := getClient(c)
if err != nil {
return err
}
result, err := client.GetDiffFiles(fromRef, toRef)
if err != nil {
return err
}
if len(result.Files) == 0 {
fmt.Println("No files changed.")
return nil
}
fmt.Printf("Comparing %s..%s\n\n", result.FromCommit[:7], result.ToCommit[:7])
for _, f := range result.Files {
status := f.Status[0:1] // First letter: A, M, D, R
path := f.Path
if f.OldPath != "" {
path = fmt.Sprintf("%s -> %s", f.OldPath, f.Path)
}
fmt.Printf("%s %s (+%d -%d)\n", status, path, f.Additions, f.Deletions)
}
return nil
}
func runCat(c *cli.Context, ref, path string) error {
client, err := getClient(c)
if err != nil {
return err
}
content, err := client.GetFileAtCommit(ref, path)
if err != nil {
return err
}
fmt.Print(content.Content)
return nil
}
func runBranches(c *cli.Context) error {
client, err := getClient(c)
if err != nil {
return err
}
result, err := client.ListBranches(c.Bool("remote"))
if err != nil {
return err
}
if result.Total == 0 {
fmt.Println("No branches found.")
return nil
}
for _, b := range result.Branches {
marker := " "
if b.IsHead {
marker = "*"
}
remoteMarker := ""
if b.IsRemote {
remoteMarker = " (remote)"
}
fmt.Printf("%s %s -> %s%s\n", marker, b.Name, b.Commit[:7], remoteMarker)
}
return nil
}
func runSearch(c *cli.Context, query string) error {
client, err := getClient(c)
if err != nil {
return err
}
result, err := client.SearchCommits(c.String("ref"), query, c.Int("limit"))
if err != nil {
return err
}
if result.Count == 0 {
fmt.Printf("No commits matching '%s'.\n", query)
return nil
}
fmt.Printf("Found %d commit(s) matching '%s':\n\n", result.Count, query)
for _, e := range result.Commits {
fmt.Printf("%s %s\n", e.ShortHash, e.Subject)
fmt.Printf(" Author: %s <%s>\n", e.Author, e.Email)
fmt.Printf(" Date: %s\n\n", e.Date.Format("2006-01-02 15:04:05"))
}
return nil
}

@@ -12,15 +12,15 @@ import (
 	"github.com/urfave/cli/v2"
-	"git.t-juice.club/torjus/labmcp/internal/database"
-	"git.t-juice.club/torjus/labmcp/internal/homemanager"
-	"git.t-juice.club/torjus/labmcp/internal/mcp"
-	"git.t-juice.club/torjus/labmcp/internal/options"
+	"code.t-juice.club/torjus/labmcp/internal/database"
+	"code.t-juice.club/torjus/labmcp/internal/homemanager"
+	"code.t-juice.club/torjus/labmcp/internal/mcp"
+	"code.t-juice.club/torjus/labmcp/internal/options"
 )
 const (
 	defaultDatabase = "sqlite://hm-options.db"
-	version         = "0.1.1"
+	version         = "0.3.0"
 )
func main() {
@@ -191,7 +191,7 @@ func runServe(c *cli.Context) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -234,7 +234,7 @@ func runIndex(c *cli.Context, revision string, indexFiles bool, force bool) erro
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -288,7 +288,7 @@ func runList(c *cli.Context) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -325,7 +325,7 @@ func runSearch(c *cli.Context, query string) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -397,7 +397,7 @@ func runGet(c *cli.Context, name string) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -490,7 +490,7 @@ func runDelete(c *cli.Context, revision string) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)

cmd/lab-monitoring/main.go (new file, +621 lines)

@@ -0,0 +1,621 @@
package main
import (
"context"
"fmt"
"log"
"os"
"os/signal"
"syscall"
"time"
"github.com/urfave/cli/v2"
"code.t-juice.club/torjus/labmcp/internal/mcp"
"code.t-juice.club/torjus/labmcp/internal/monitoring"
)
const version = "0.3.1"
func main() {
app := &cli.App{
Name: "lab-monitoring",
Usage: "MCP server for Prometheus and Alertmanager monitoring",
Version: version,
Flags: []cli.Flag{
&cli.StringFlag{
Name: "prometheus-url",
Usage: "Prometheus base URL",
EnvVars: []string{"PROMETHEUS_URL"},
Value: "http://localhost:9090",
},
&cli.StringFlag{
Name: "alertmanager-url",
Usage: "Alertmanager base URL",
EnvVars: []string{"ALERTMANAGER_URL"},
Value: "http://localhost:9093",
},
&cli.StringFlag{
Name: "loki-url",
Usage: "Loki base URL (optional, enables log query tools)",
EnvVars: []string{"LOKI_URL"},
},
&cli.StringFlag{
Name: "loki-username",
Usage: "Username for Loki basic auth",
EnvVars: []string{"LOKI_USERNAME"},
},
&cli.StringFlag{
Name: "loki-password",
Usage: "Password for Loki basic auth",
EnvVars: []string{"LOKI_PASSWORD"},
},
},
Commands: []*cli.Command{
serveCommand(),
alertsCommand(),
queryCommand(),
targetsCommand(),
metricsCommand(),
logsCommand(),
labelsCommand(),
},
}
if err := app.Run(os.Args); err != nil {
log.Fatal(err)
}
}
func serveCommand() *cli.Command {
return &cli.Command{
Name: "serve",
Usage: "Run MCP server for lab monitoring",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "transport",
Aliases: []string{"t"},
Usage: "Transport type: 'stdio' or 'http'",
Value: "stdio",
},
&cli.StringFlag{
Name: "http-address",
Usage: "HTTP listen address",
Value: "127.0.0.1:8084",
},
&cli.StringFlag{
Name: "http-endpoint",
Usage: "HTTP endpoint path",
Value: "/mcp",
},
&cli.StringSliceFlag{
Name: "allowed-origins",
Usage: "Allowed Origin headers for CORS",
},
&cli.StringFlag{
Name: "tls-cert",
Usage: "TLS certificate file",
},
&cli.StringFlag{
Name: "tls-key",
Usage: "TLS key file",
},
&cli.DurationFlag{
Name: "session-ttl",
Usage: "Session TTL for HTTP transport",
Value: 30 * time.Minute,
},
&cli.BoolFlag{
Name: "enable-silences",
Usage: "Enable the create_silence tool (write operation, disabled by default)",
},
},
Action: func(c *cli.Context) error {
return runServe(c)
},
}
}
func alertsCommand() *cli.Command {
return &cli.Command{
Name: "alerts",
Usage: "List alerts from Alertmanager",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "state",
Usage: "Filter by state: active (default), suppressed, unprocessed, all",
},
&cli.StringFlag{
Name: "severity",
Usage: "Filter by severity label",
},
&cli.BoolFlag{
Name: "all",
Usage: "Show all alerts including silenced and inhibited (shorthand for --state all)",
},
},
Action: func(c *cli.Context) error {
return runAlerts(c)
},
}
}
func queryCommand() *cli.Command {
return &cli.Command{
Name: "query",
Usage: "Execute an instant PromQL query",
ArgsUsage: "<promql>",
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("promql expression required")
}
return runQuery(c, c.Args().First())
},
}
}
func targetsCommand() *cli.Command {
return &cli.Command{
Name: "targets",
Usage: "List scrape targets",
Action: func(c *cli.Context) error {
return runTargets(c)
},
}
}
func metricsCommand() *cli.Command {
return &cli.Command{
Name: "metrics",
Usage: "Search metric names",
ArgsUsage: "<search>",
Flags: []cli.Flag{
&cli.IntFlag{
Name: "limit",
Aliases: []string{"n"},
Usage: "Maximum number of results",
Value: 50,
},
},
Action: func(c *cli.Context) error {
query := ""
if c.NArg() > 0 {
query = c.Args().First()
}
return runMetrics(c, query)
},
}
}
func runServe(c *cli.Context) error {
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()
logger := log.New(os.Stderr, "[mcp] ", log.LstdFlags)
config := mcp.DefaultMonitoringConfig()
prom := monitoring.NewPrometheusClient(c.String("prometheus-url"))
am := monitoring.NewAlertmanagerClient(c.String("alertmanager-url"))
var loki *monitoring.LokiClient
if lokiURL := c.String("loki-url"); lokiURL != "" {
loki = monitoring.NewLokiClient(monitoring.LokiClientOptions{
BaseURL: lokiURL,
Username: c.String("loki-username"),
Password: c.String("loki-password"),
})
}
config.InstructionsFunc = func() string {
return monitoring.AlertSummary(am)
}
server := mcp.NewGenericServer(logger, config)
opts := monitoring.HandlerOptions{
EnableSilences: c.Bool("enable-silences"),
}
monitoring.RegisterHandlers(server, prom, am, loki, opts)
transport := c.String("transport")
switch transport {
case "stdio":
logger.Println("Starting lab-monitoring MCP server on stdio...")
return server.Run(ctx, os.Stdin, os.Stdout)
case "http":
httpConfig := mcp.HTTPConfig{
Address: c.String("http-address"),
Endpoint: c.String("http-endpoint"),
AllowedOrigins: c.StringSlice("allowed-origins"),
SessionTTL: c.Duration("session-ttl"),
TLSCertFile: c.String("tls-cert"),
TLSKeyFile: c.String("tls-key"),
}
httpTransport := mcp.NewHTTPTransport(server, httpConfig)
return httpTransport.Run(ctx)
default:
return fmt.Errorf("unknown transport: %s (use 'stdio' or 'http')", transport)
}
}
func runAlerts(c *cli.Context) error {
ctx := context.Background()
am := monitoring.NewAlertmanagerClient(c.String("alertmanager-url"))
filters := monitoring.AlertFilters{}
// Determine state filter: --all flag takes precedence, then --state, then default to active
state := c.String("state")
if c.Bool("all") {
state = "all"
}
switch state {
case "active", "":
// Default to active alerts only (non-silenced, non-inhibited)
active := true
filters.Active = &active
silenced := false
filters.Silenced = &silenced
inhibited := false
filters.Inhibited = &inhibited
case "suppressed":
active := false
filters.Active = &active
case "unprocessed":
unprocessed := true
filters.Unprocessed = &unprocessed
case "all":
// No filters - return everything
}
if severity := c.String("severity"); severity != "" {
filters.Filter = append(filters.Filter, fmt.Sprintf(`severity="%s"`, severity))
}
alerts, err := am.ListAlerts(ctx, filters)
if err != nil {
return fmt.Errorf("failed to list alerts: %w", err)
}
if len(alerts) == 0 {
fmt.Println("No alerts found.")
return nil
}
for _, a := range alerts {
state := a.Status.State
severity := a.Labels["severity"]
name := a.Labels["alertname"]
fmt.Printf("[%s] %s (severity=%s, fingerprint=%s)\n", state, name, severity, a.Fingerprint)
for k, v := range a.Annotations {
fmt.Printf(" %s: %s\n", k, v)
}
}
return nil
}
func runQuery(c *cli.Context, promql string) error {
ctx := context.Background()
prom := monitoring.NewPrometheusClient(c.String("prometheus-url"))
data, err := prom.Query(ctx, promql, time.Time{})
if err != nil {
return fmt.Errorf("query failed: %w", err)
}
for _, r := range data.Result {
labels := ""
for k, v := range r.Metric {
if labels != "" {
labels += ", "
}
labels += fmt.Sprintf("%s=%q", k, v)
}
value := ""
if len(r.Value) >= 2 {
if v, ok := r.Value[1].(string); ok {
value = v
}
}
fmt.Printf("{%s} %s\n", labels, value)
}
return nil
}
func runTargets(c *cli.Context) error {
ctx := context.Background()
prom := monitoring.NewPrometheusClient(c.String("prometheus-url"))
data, err := prom.Targets(ctx)
if err != nil {
return fmt.Errorf("failed to fetch targets: %w", err)
}
if len(data.ActiveTargets) == 0 {
fmt.Println("No active targets.")
return nil
}
for _, t := range data.ActiveTargets {
job := t.Labels["job"]
instance := t.Labels["instance"]
fmt.Printf("[%s] %s/%s (last scrape: %s, duration: %.3fs)\n",
t.Health, job, instance, t.LastScrape.Format("15:04:05"), t.LastScrapeDuration)
if t.LastError != "" {
fmt.Printf(" error: %s\n", t.LastError)
}
}
return nil
}
func runMetrics(c *cli.Context, query string) error {
ctx := context.Background()
prom := monitoring.NewPrometheusClient(c.String("prometheus-url"))
names, err := prom.LabelValues(ctx, "__name__")
if err != nil {
return fmt.Errorf("failed to fetch metric names: %w", err)
}
limit := c.Int("limit")
count := 0
for _, name := range names {
if query != "" {
// Simple case-insensitive substring match
if !containsIgnoreCase(name, query) {
continue
}
}
fmt.Println(name)
count++
if count >= limit {
fmt.Printf("... (showing first %d matches; use --limit to see more)\n", limit)
break
}
}
if count == 0 {
fmt.Printf("No metrics found matching '%s'\n", query)
}
return nil
}
func logsCommand() *cli.Command {
return &cli.Command{
Name: "logs",
Usage: "Query logs from Loki using LogQL",
ArgsUsage: "<logql>",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "start",
Usage: "Start time: relative duration (e.g., '1h'), RFC3339, or Unix epoch",
Value: "1h",
},
&cli.StringFlag{
Name: "end",
Usage: "End time: relative duration, RFC3339, or Unix epoch",
Value: "now",
},
&cli.IntFlag{
Name: "limit",
Aliases: []string{"n"},
Usage: "Maximum number of entries",
Value: 100,
},
&cli.StringFlag{
Name: "direction",
Usage: "Sort order: 'backward' (newest first) or 'forward' (oldest first)",
Value: "backward",
},
},
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("LogQL expression required")
}
return runLogs(c, c.Args().First())
},
}
}
func labelsCommand() *cli.Command {
return &cli.Command{
Name: "labels",
Usage: "List labels from Loki, or values for a specific label",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "values",
Usage: "Get values for this label name instead of listing labels",
},
},
Action: func(c *cli.Context) error {
return runLabels(c)
},
}
}
func runLogs(c *cli.Context, logql string) error {
lokiURL := c.String("loki-url")
if lokiURL == "" {
return fmt.Errorf("--loki-url or LOKI_URL is required for log queries")
}
ctx := context.Background()
loki := monitoring.NewLokiClient(monitoring.LokiClientOptions{
BaseURL: lokiURL,
Username: c.String("loki-username"),
Password: c.String("loki-password"),
})
now := time.Now()
start, err := parseCLITime(c.String("start"), now.Add(-time.Hour))
if err != nil {
return fmt.Errorf("invalid start time: %w", err)
}
end, err := parseCLITime(c.String("end"), now)
if err != nil {
return fmt.Errorf("invalid end time: %w", err)
}
data, err := loki.QueryRange(ctx, logql, start, end, c.Int("limit"), c.String("direction"))
if err != nil {
return fmt.Errorf("log query failed: %w", err)
}
totalEntries := 0
for _, stream := range data.Result {
totalEntries += len(stream.Values)
}
if totalEntries == 0 {
fmt.Println("No log entries found.")
return nil
}
for _, stream := range data.Result {
// Print stream labels
labels := ""
for k, v := range stream.Stream {
if labels != "" {
labels += ", "
}
labels += fmt.Sprintf("%s=%q", k, v)
}
fmt.Printf("--- {%s} ---\n", labels)
for _, entry := range stream.Values {
ts := formatCLITimestamp(entry[0])
fmt.Printf("[%s] %s\n", ts, entry[1])
}
fmt.Println()
}
return nil
}
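The `LokiClientOptions` above thread optional basic-auth credentials through to the client. The client internals are not part of this diff, but attaching the credentials typically comes down to `http.Request.SetBasicAuth`; a minimal illustrative sketch (`newLokiRequest` is hypothetical, not the repository's implementation):

```go
package main

import (
	"fmt"
	"net/http"
)

// newLokiRequest builds a Loki query_range request and attaches basic
// auth only when credentials were supplied. Illustrative sketch only;
// the real LokiClient's request construction is not shown in this diff.
func newLokiRequest(baseURL, username, password string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+"/loki/api/v1/query_range", nil)
	if err != nil {
		return nil, err
	}
	if username != "" || password != "" {
		req.SetBasicAuth(username, password)
	}
	return req, nil
}

func main() {
	req, err := newLokiRequest("http://localhost:3100", "user", "secret")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Authorization") != "") // true: header was set
}
```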
func runLabels(c *cli.Context) error {
lokiURL := c.String("loki-url")
if lokiURL == "" {
return fmt.Errorf("--loki-url or LOKI_URL is required for label queries")
}
ctx := context.Background()
loki := monitoring.NewLokiClient(monitoring.LokiClientOptions{
BaseURL: lokiURL,
Username: c.String("loki-username"),
Password: c.String("loki-password"),
})
if label := c.String("values"); label != "" {
values, err := loki.LabelValues(ctx, label)
if err != nil {
return fmt.Errorf("failed to list label values: %w", err)
}
if len(values) == 0 {
fmt.Printf("No values found for label '%s'.\n", label)
return nil
}
for _, v := range values {
fmt.Println(v)
}
return nil
}
labels, err := loki.Labels(ctx)
if err != nil {
return fmt.Errorf("failed to list labels: %w", err)
}
if len(labels) == 0 {
fmt.Println("No labels found.")
return nil
}
for _, label := range labels {
fmt.Println(label)
}
return nil
}
// parseCLITime parses a time string for CLI use. Handles "now", relative durations,
// RFC3339, and Unix epoch seconds.
func parseCLITime(s string, defaultTime time.Time) (time.Time, error) {
if s == "now" || s == "" {
return time.Now(), nil
}
// Try as relative duration
if d, err := time.ParseDuration(s); err == nil {
return time.Now().Add(-d), nil
}
// Try as RFC3339
if t, err := time.Parse(time.RFC3339, s); err == nil {
return t, nil
}
// Try as Unix epoch seconds
var epoch int64
validDigits := true
for _, c := range s {
if c >= '0' && c <= '9' {
epoch = epoch*10 + int64(c-'0')
} else {
validDigits = false
break
}
}
if validDigits && len(s) > 0 {
return time.Unix(epoch, 0), nil
}
return defaultTime, fmt.Errorf("cannot parse time '%s'", s)
}
// formatCLITimestamp converts a nanosecond Unix timestamp string to a readable format.
func formatCLITimestamp(nsStr string) string {
var ns int64
for _, c := range nsStr {
if c >= '0' && c <= '9' {
ns = ns*10 + int64(c-'0')
}
}
t := time.Unix(0, ns)
return t.Local().Format("2006-01-02 15:04:05")
}
func containsIgnoreCase(s, substr string) bool {
sLower := make([]byte, len(s))
subLower := make([]byte, len(substr))
for i := range s {
if s[i] >= 'A' && s[i] <= 'Z' {
sLower[i] = s[i] + 32
} else {
sLower[i] = s[i]
}
}
for i := range substr {
if substr[i] >= 'A' && substr[i] <= 'Z' {
subLower[i] = substr[i] + 32
} else {
subLower[i] = substr[i]
}
}
for i := 0; i <= len(sLower)-len(subLower); i++ {
match := true
for j := range subLower {
if sLower[i+j] != subLower[j] {
match = false
break
}
}
if match {
return true
}
}
return false
}
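`containsIgnoreCase` above is a hand-rolled, ASCII-only matcher. For reference, the standard library expresses the same check in one line (and additionally folds non-ASCII letters); a minimal sketch with a hypothetical `containsFold` helper:

```go
package main

import (
	"fmt"
	"strings"
)

// containsFold is a stdlib-based equivalent of the CLI's ASCII-only
// containsIgnoreCase; strings.ToLower also handles non-ASCII letters.
func containsFold(s, substr string) bool {
	return strings.Contains(strings.ToLower(s), strings.ToLower(substr))
}

func main() {
	fmt.Println(containsFold("node_cpu_seconds_total", "CPU")) // true
	fmt.Println(containsFold("go_goroutines", "heap"))         // false
}
```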

@@ -12,14 +12,14 @@ import (
 	"github.com/urfave/cli/v2"
-	"git.t-juice.club/torjus/labmcp/internal/database"
-	"git.t-juice.club/torjus/labmcp/internal/mcp"
-	"git.t-juice.club/torjus/labmcp/internal/nixos"
+	"code.t-juice.club/torjus/labmcp/internal/database"
+	"code.t-juice.club/torjus/labmcp/internal/mcp"
+	"code.t-juice.club/torjus/labmcp/internal/nixos"
 )
 const (
 	defaultDatabase = "sqlite://nixos-options.db"
-	version         = "0.1.1"
+	version         = "0.3.0"
 )
func main() {
@@ -190,7 +190,7 @@ func runServe(c *cli.Context) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -233,7 +233,7 @@ func runIndex(c *cli.Context, revision string, indexFiles bool, force bool) erro
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -283,7 +283,7 @@ func runList(c *cli.Context) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -320,7 +320,7 @@ func runSearch(c *cli.Context, query string) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -392,7 +392,7 @@ func runGet(c *cli.Context, name string) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)
@@ -485,7 +485,7 @@ func runDelete(c *cli.Context, revision string) error {
 	if err != nil {
 		return fmt.Errorf("failed to open database: %w", err)
 	}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // cleanup on exit
 	if err := store.Initialize(ctx); err != nil {
 		return fmt.Errorf("failed to initialize database: %w", err)

cmd/nixpkgs-search/main.go (new file, +863 lines)

@@ -0,0 +1,863 @@
package main
import (
"context"
"fmt"
"log"
"os"
"os/signal"
"strings"
"syscall"
"time"
"github.com/urfave/cli/v2"
"code.t-juice.club/torjus/labmcp/internal/database"
"code.t-juice.club/torjus/labmcp/internal/mcp"
"code.t-juice.club/torjus/labmcp/internal/nixos"
"code.t-juice.club/torjus/labmcp/internal/packages"
)
const (
defaultDatabase = "sqlite://nixpkgs-search.db"
version = "0.4.0"
)
func main() {
app := &cli.App{
Name: "nixpkgs-search",
Usage: "Search nixpkgs options and packages",
Version: version,
Flags: []cli.Flag{
&cli.StringFlag{
Name: "database",
Aliases: []string{"d"},
Usage: "Database connection string (postgres://... or sqlite://...)",
EnvVars: []string{"NIXPKGS_SEARCH_DATABASE"},
Value: defaultDatabase,
},
},
Commands: []*cli.Command{
optionsCommand(),
packagesCommand(),
indexCommand(),
listCommand(),
deleteCommand(),
},
}
if err := app.Run(os.Args); err != nil {
log.Fatal(err)
}
}
// optionsCommand returns the options subcommand.
func optionsCommand() *cli.Command {
return &cli.Command{
Name: "options",
Usage: "NixOS options commands",
Subcommands: []*cli.Command{
{
Name: "serve",
Usage: "Run MCP server for NixOS options",
Flags: serveFlags(),
Action: func(c *cli.Context) error {
return runOptionsServe(c)
},
},
{
Name: "search",
Usage: "Search for options",
ArgsUsage: "<query>",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "revision",
Aliases: []string{"r"},
Usage: "Revision to search (default: most recent)",
},
&cli.IntFlag{
Name: "limit",
Aliases: []string{"n"},
Usage: "Maximum number of results",
Value: 20,
},
},
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("query argument required")
}
return runOptionsSearch(c, c.Args().First())
},
},
{
Name: "get",
Usage: "Get details for a specific option",
ArgsUsage: "<option-name>",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "revision",
Aliases: []string{"r"},
Usage: "Revision to search (default: most recent)",
},
},
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("option name required")
}
return runOptionsGet(c, c.Args().First())
},
},
},
}
}
// packagesCommand returns the packages subcommand.
func packagesCommand() *cli.Command {
return &cli.Command{
Name: "packages",
Usage: "Nix packages commands",
Subcommands: []*cli.Command{
{
Name: "serve",
Usage: "Run MCP server for Nix packages",
Flags: serveFlags(),
Action: func(c *cli.Context) error {
return runPackagesServe(c)
},
},
{
Name: "search",
Usage: "Search for packages",
ArgsUsage: "<query>",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "revision",
Aliases: []string{"r"},
Usage: "Revision to search (default: most recent)",
},
&cli.IntFlag{
Name: "limit",
Aliases: []string{"n"},
Usage: "Maximum number of results",
Value: 20,
},
&cli.BoolFlag{
Name: "broken",
Usage: "Show only broken packages",
},
&cli.BoolFlag{
Name: "unfree",
Usage: "Show only unfree packages",
},
},
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("query argument required")
}
return runPackagesSearch(c, c.Args().First())
},
},
{
Name: "get",
Usage: "Get details for a specific package",
ArgsUsage: "<attr-path>",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "revision",
Aliases: []string{"r"},
Usage: "Revision to search (default: most recent)",
},
},
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("attr path required")
}
return runPackagesGet(c, c.Args().First())
},
},
},
}
}
// indexCommand returns the index command (indexes both options and packages).
func indexCommand() *cli.Command {
return &cli.Command{
Name: "index",
Usage: "Index a nixpkgs revision (options and packages)",
ArgsUsage: "<revision>",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "no-files",
Usage: "Skip indexing file contents",
},
&cli.BoolFlag{
Name: "no-packages",
Usage: "Skip indexing packages (options only)",
},
&cli.BoolFlag{
Name: "no-options",
Usage: "Skip indexing options (packages only)",
},
&cli.BoolFlag{
Name: "force",
Aliases: []string{"f"},
Usage: "Force re-indexing even if revision already exists",
},
},
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("revision argument required")
}
return runIndex(c, c.Args().First())
},
}
}
// listCommand returns the list command.
func listCommand() *cli.Command {
return &cli.Command{
Name: "list",
Usage: "List indexed revisions",
Action: func(c *cli.Context) error {
return runList(c)
},
}
}
// deleteCommand returns the delete command.
func deleteCommand() *cli.Command {
return &cli.Command{
Name: "delete",
Usage: "Delete an indexed revision",
ArgsUsage: "<revision>",
Action: func(c *cli.Context) error {
if c.NArg() < 1 {
return fmt.Errorf("revision argument required")
}
return runDelete(c, c.Args().First())
},
}
}
// serveFlags returns common flags for serve commands.
func serveFlags() []cli.Flag {
return []cli.Flag{
&cli.StringFlag{
Name: "transport",
Aliases: []string{"t"},
Usage: "Transport type: 'stdio' or 'http'",
Value: "stdio",
},
&cli.StringFlag{
Name: "http-address",
Usage: "HTTP listen address",
Value: "127.0.0.1:8080",
},
&cli.StringFlag{
Name: "http-endpoint",
Usage: "HTTP endpoint path",
Value: "/mcp",
},
&cli.StringSliceFlag{
Name: "allowed-origins",
Usage: "Allowed Origin headers for CORS (can be specified multiple times)",
},
&cli.StringFlag{
Name: "tls-cert",
Usage: "TLS certificate file",
},
&cli.StringFlag{
Name: "tls-key",
Usage: "TLS key file",
},
&cli.DurationFlag{
Name: "session-ttl",
Usage: "Session TTL for HTTP transport",
Value: 30 * time.Minute,
},
}
}
// openStore opens a database store based on the connection string.
func openStore(connStr string) (database.Store, error) {
if strings.HasPrefix(connStr, "sqlite://") {
path := strings.TrimPrefix(connStr, "sqlite://")
return database.NewSQLiteStore(path)
}
if strings.HasPrefix(connStr, "postgres://") || strings.HasPrefix(connStr, "postgresql://") {
return database.NewPostgresStore(connStr)
}
// Default to SQLite with the connection string as path
return database.NewSQLiteStore(connStr)
}
func runOptionsServe(c *cli.Context) error {
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()
store, err := openStore(c.String("database"))
if err != nil {
return fmt.Errorf("failed to open database: %w", err)
}
defer store.Close() //nolint:errcheck // cleanup on exit
if err := store.Initialize(ctx); err != nil {
return fmt.Errorf("failed to initialize database: %w", err)
}
logger := log.New(os.Stderr, "[mcp] ", log.LstdFlags)
config := mcp.DefaultNixOSConfig()
server := mcp.NewServer(store, logger, config)
indexer := nixos.NewIndexer(store)
pkgIndexer := packages.NewIndexer(store)
server.RegisterHandlersWithPackages(indexer, pkgIndexer)
transport := c.String("transport")
switch transport {
case "stdio":
logger.Println("Starting NixOS options MCP server on stdio...")
return server.Run(ctx, os.Stdin, os.Stdout)
case "http":
httpConfig := mcp.HTTPConfig{
Address: c.String("http-address"),
Endpoint: c.String("http-endpoint"),
AllowedOrigins: c.StringSlice("allowed-origins"),
SessionTTL: c.Duration("session-ttl"),
TLSCertFile: c.String("tls-cert"),
TLSKeyFile: c.String("tls-key"),
}
httpTransport := mcp.NewHTTPTransport(server, httpConfig)
return httpTransport.Run(ctx)
default:
return fmt.Errorf("unknown transport: %s (use 'stdio' or 'http')", transport)
}
}
func runPackagesServe(c *cli.Context) error {
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()
store, err := openStore(c.String("database"))
if err != nil {
return fmt.Errorf("failed to open database: %w", err)
}
defer store.Close() //nolint:errcheck // cleanup on exit
if err := store.Initialize(ctx); err != nil {
return fmt.Errorf("failed to initialize database: %w", err)
}
logger := log.New(os.Stderr, "[mcp] ", log.LstdFlags)
config := mcp.DefaultNixpkgsPackagesConfig()
server := mcp.NewServer(store, logger, config)
pkgIndexer := packages.NewIndexer(store)
server.RegisterPackageHandlers(pkgIndexer)
transport := c.String("transport")
switch transport {
case "stdio":
logger.Println("Starting nixpkgs packages MCP server on stdio...")
return server.Run(ctx, os.Stdin, os.Stdout)
case "http":
httpConfig := mcp.HTTPConfig{
Address: c.String("http-address"),
Endpoint: c.String("http-endpoint"),
AllowedOrigins: c.StringSlice("allowed-origins"),
SessionTTL: c.Duration("session-ttl"),
TLSCertFile: c.String("tls-cert"),
TLSKeyFile: c.String("tls-key"),
}
httpTransport := mcp.NewHTTPTransport(server, httpConfig)
return httpTransport.Run(ctx)
default:
return fmt.Errorf("unknown transport: %s (use 'stdio' or 'http')", transport)
}
}
func runIndex(c *cli.Context, revision string) error {
ctx := context.Background()
store, err := openStore(c.String("database"))
if err != nil {
return fmt.Errorf("failed to open database: %w", err)
}
defer store.Close() //nolint:errcheck // cleanup on exit
if err := store.Initialize(ctx); err != nil {
return fmt.Errorf("failed to initialize database: %w", err)
}
indexFiles := !c.Bool("no-files")
indexOptions := !c.Bool("no-options")
indexPackages := !c.Bool("no-packages")
force := c.Bool("force")
optionsIndexer := nixos.NewIndexer(store)
pkgIndexer := packages.NewIndexer(store)
// Resolve revision
ref := optionsIndexer.ResolveRevision(revision)
fmt.Printf("Indexing revision: %s\n", revision)
var optionCount, packageCount, fileCount int
var rev *database.Revision
// Index options first (creates the revision record)
if indexOptions {
var result *nixos.IndexResult
if force {
result, err = optionsIndexer.ReindexRevision(ctx, revision)
} else {
result, err = optionsIndexer.IndexRevision(ctx, revision)
}
if err != nil {
return fmt.Errorf("options indexing failed: %w", err)
}
if result.AlreadyIndexed && !force {
fmt.Printf("Revision already indexed (%d options). Use --force to re-index.\n", result.OptionCount)
rev = result.Revision
} else {
optionCount = result.OptionCount
rev = result.Revision
fmt.Printf("Indexed %d options\n", optionCount)
}
} else {
// If not indexing options, check if revision exists
rev, err = store.GetRevision(ctx, ref)
if err != nil {
return fmt.Errorf("failed to get revision: %w", err)
}
if rev == nil {
// Create revision record without options
commitDate, _ := pkgIndexer.GetCommitDate(ctx, ref)
rev = &database.Revision{
GitHash: ref,
ChannelName: pkgIndexer.GetChannelName(revision),
CommitDate: commitDate,
}
if err := store.CreateRevision(ctx, rev); err != nil {
return fmt.Errorf("failed to create revision: %w", err)
}
}
}
// Index files
if indexFiles && rev != nil {
fmt.Println("Indexing files...")
fileCount, err = optionsIndexer.IndexFiles(ctx, rev.ID, rev.GitHash)
if err != nil {
fmt.Printf("Warning: file indexing failed: %v\n", err)
} else {
fmt.Printf("Indexed %d files\n", fileCount)
}
}
// Index packages
if indexPackages && rev != nil {
fmt.Println("Indexing packages...")
pkgResult, err := pkgIndexer.IndexPackages(ctx, rev.ID, rev.GitHash)
if err != nil {
fmt.Printf("Warning: package indexing failed: %v\n", err)
} else {
packageCount = pkgResult.PackageCount
fmt.Printf("Indexed %d packages\n", packageCount)
}
}
// Summary
fmt.Println()
fmt.Printf("Git hash: %s\n", rev.GitHash)
if rev.ChannelName != "" {
fmt.Printf("Channel: %s\n", rev.ChannelName)
}
if optionCount > 0 {
fmt.Printf("Options: %d\n", optionCount)
}
if packageCount > 0 {
fmt.Printf("Packages: %d\n", packageCount)
}
if fileCount > 0 {
fmt.Printf("Files: %d\n", fileCount)
}
return nil
}
func runList(c *cli.Context) error {
ctx := context.Background()
store, err := openStore(c.String("database"))
if err != nil {
return fmt.Errorf("failed to open database: %w", err)
}
defer store.Close() //nolint:errcheck // cleanup on exit
if err := store.Initialize(ctx); err != nil {
return fmt.Errorf("failed to initialize database: %w", err)
}
revisions, err := store.ListRevisions(ctx)
if err != nil {
return fmt.Errorf("failed to list revisions: %w", err)
}
if len(revisions) == 0 {
fmt.Println("No revisions indexed.")
fmt.Println("Use 'nixpkgs-search index <revision>' to index a nixpkgs version.")
return nil
}
fmt.Printf("Indexed revisions (%d):\n\n", len(revisions))
for _, rev := range revisions {
fmt.Printf(" %s", rev.GitHash[:12])
if rev.ChannelName != "" {
fmt.Printf(" (%s)", rev.ChannelName)
}
fmt.Printf("\n Options: %d, Packages: %d, Indexed: %s\n",
rev.OptionCount, rev.PackageCount, rev.IndexedAt.Format("2006-01-02 15:04"))
}
return nil
}
func runDelete(c *cli.Context, revision string) error {
ctx := context.Background()
store, err := openStore(c.String("database"))
if err != nil {
return fmt.Errorf("failed to open database: %w", err)
}
defer store.Close() //nolint:errcheck // cleanup on exit
if err := store.Initialize(ctx); err != nil {
return fmt.Errorf("failed to initialize database: %w", err)
}
// Find revision
rev, err := store.GetRevision(ctx, revision)
if err != nil {
return fmt.Errorf("failed to get revision: %w", err)
}
if rev == nil {
rev, err = store.GetRevisionByChannel(ctx, revision)
if err != nil {
return fmt.Errorf("failed to get revision: %w", err)
}
}
if rev == nil {
return fmt.Errorf("revision '%s' not found", revision)
}
if err := store.DeleteRevision(ctx, rev.ID); err != nil {
return fmt.Errorf("failed to delete revision: %w", err)
}
fmt.Printf("Deleted revision %s\n", rev.GitHash)
return nil
}
// Options search and get functions
func runOptionsSearch(c *cli.Context, query string) error {
ctx := context.Background()
store, err := openStore(c.String("database"))
if err != nil {
return fmt.Errorf("failed to open database: %w", err)
}
defer store.Close() //nolint:errcheck // cleanup on exit
if err := store.Initialize(ctx); err != nil {
return fmt.Errorf("failed to initialize database: %w", err)
}
rev, err := resolveRevision(ctx, store, c.String("revision"))
if err != nil {
return err
}
if rev == nil {
return fmt.Errorf("no indexed revision found")
}
filters := database.SearchFilters{
Limit: c.Int("limit"),
}
options, err := store.SearchOptions(ctx, rev.ID, query, filters)
if err != nil {
return fmt.Errorf("search failed: %w", err)
}
if len(options) == 0 {
fmt.Printf("No options found matching '%s'\n", query)
return nil
}
fmt.Printf("Found %d options matching '%s':\n\n", len(options), query)
for _, opt := range options {
fmt.Printf(" %s\n", opt.Name)
fmt.Printf(" Type: %s\n", opt.Type)
if opt.Description != "" {
desc := opt.Description
if len(desc) > 100 {
desc = desc[:100] + "..."
}
fmt.Printf(" %s\n", desc)
}
fmt.Println()
}
return nil
}
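The description truncation above (`desc[:100]`) slices bytes, which can split a multibyte UTF-8 character mid-sequence. A rune-based variant avoids that; this is a standalone sketch, not code from the repository (the `truncate` helper name is made up):

```go
package main

import "fmt"

// truncate shortens s to at most max runes, appending "..." when cut.
// Slicing a []rune conversion instead of the raw string avoids
// splitting a multibyte UTF-8 sequence the way desc[:100] could.
func truncate(s string, max int) string {
	r := []rune(s)
	if len(r) <= max {
		return s
	}
	return string(r[:max]) + "..."
}

func main() {
	fmt.Println(truncate("héllo wörld", 5)) // prints "héllo..."
	fmt.Println(truncate("ok", 5))          // prints "ok"
}
```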
func runOptionsGet(c *cli.Context, name string) error {
ctx := context.Background()
store, err := openStore(c.String("database"))
if err != nil {
return fmt.Errorf("failed to open database: %w", err)
}
defer store.Close() //nolint:errcheck // cleanup on exit
if err := store.Initialize(ctx); err != nil {
return fmt.Errorf("failed to initialize database: %w", err)
}
rev, err := resolveRevision(ctx, store, c.String("revision"))
if err != nil {
return err
}
if rev == nil {
return fmt.Errorf("no indexed revision found")
}
opt, err := store.GetOption(ctx, rev.ID, name)
if err != nil {
return fmt.Errorf("failed to get option: %w", err)
}
if opt == nil {
return fmt.Errorf("option '%s' not found", name)
}
fmt.Printf("%s\n", opt.Name)
fmt.Printf(" Type: %s\n", opt.Type)
if opt.Description != "" {
fmt.Printf(" Description: %s\n", opt.Description)
}
if opt.DefaultValue != "" && opt.DefaultValue != "null" {
fmt.Printf(" Default: %s\n", opt.DefaultValue)
}
if opt.Example != "" && opt.Example != "null" {
fmt.Printf(" Example: %s\n", opt.Example)
}
if opt.ReadOnly {
fmt.Println(" Read-only: yes")
}
// Get declarations
declarations, err := store.GetDeclarations(ctx, opt.ID)
if err == nil && len(declarations) > 0 {
fmt.Println(" Declared in:")
for _, decl := range declarations {
if decl.Line > 0 {
fmt.Printf(" - %s:%d\n", decl.FilePath, decl.Line)
} else {
fmt.Printf(" - %s\n", decl.FilePath)
}
}
}
// Get children
children, err := store.GetChildren(ctx, rev.ID, opt.Name)
if err == nil && len(children) > 0 {
fmt.Println(" Sub-options:")
for _, child := range children {
shortName := child.Name
if strings.HasPrefix(child.Name, opt.Name+".") {
shortName = child.Name[len(opt.Name)+1:]
}
fmt.Printf(" - %s (%s)\n", shortName, child.Type)
}
}
return nil
}
// Packages search and get functions
func runPackagesSearch(c *cli.Context, query string) error {
ctx := context.Background()
store, err := openStore(c.String("database"))
if err != nil {
return fmt.Errorf("failed to open database: %w", err)
}
defer store.Close() //nolint:errcheck // cleanup on exit
if err := store.Initialize(ctx); err != nil {
return fmt.Errorf("failed to initialize database: %w", err)
}
rev, err := resolveRevision(ctx, store, c.String("revision"))
if err != nil {
return err
}
if rev == nil {
return fmt.Errorf("no indexed revision found")
}
filters := database.PackageSearchFilters{
Limit: c.Int("limit"),
}
if c.IsSet("broken") {
broken := c.Bool("broken")
filters.Broken = &broken
}
if c.IsSet("unfree") {
unfree := c.Bool("unfree")
filters.Unfree = &unfree
}
pkgs, err := store.SearchPackages(ctx, rev.ID, query, filters)
if err != nil {
return fmt.Errorf("search failed: %w", err)
}
if len(pkgs) == 0 {
fmt.Printf("No packages found matching '%s'\n", query)
return nil
}
fmt.Printf("Found %d packages matching '%s':\n\n", len(pkgs), query)
for _, pkg := range pkgs {
fmt.Printf(" %s\n", pkg.AttrPath)
fmt.Printf(" Name: %s", pkg.Pname)
if pkg.Version != "" {
fmt.Printf(" %s", pkg.Version)
}
fmt.Println()
if pkg.Description != "" {
desc := pkg.Description
if len(desc) > 100 {
desc = desc[:100] + "..."
}
fmt.Printf(" %s\n", desc)
}
if pkg.Broken || pkg.Unfree || pkg.Insecure {
var flags []string
if pkg.Broken {
flags = append(flags, "broken")
}
if pkg.Unfree {
flags = append(flags, "unfree")
}
if pkg.Insecure {
flags = append(flags, "insecure")
}
fmt.Printf(" Flags: %s\n", strings.Join(flags, ", "))
}
fmt.Println()
}
return nil
}
func runPackagesGet(c *cli.Context, attrPath string) error {
ctx := context.Background()
store, err := openStore(c.String("database"))
if err != nil {
return fmt.Errorf("failed to open database: %w", err)
}
defer store.Close() //nolint:errcheck // cleanup on exit
if err := store.Initialize(ctx); err != nil {
return fmt.Errorf("failed to initialize database: %w", err)
}
rev, err := resolveRevision(ctx, store, c.String("revision"))
if err != nil {
return err
}
if rev == nil {
return fmt.Errorf("no indexed revision found")
}
pkg, err := store.GetPackage(ctx, rev.ID, attrPath)
if err != nil {
return fmt.Errorf("failed to get package: %w", err)
}
if pkg == nil {
return fmt.Errorf("package '%s' not found", attrPath)
}
fmt.Printf("%s\n", pkg.AttrPath)
fmt.Printf(" Name: %s\n", pkg.Pname)
if pkg.Version != "" {
fmt.Printf(" Version: %s\n", pkg.Version)
}
if pkg.Description != "" {
fmt.Printf(" Description: %s\n", pkg.Description)
}
if pkg.Homepage != "" {
fmt.Printf(" Homepage: %s\n", pkg.Homepage)
}
if pkg.License != "" && pkg.License != "[]" {
fmt.Printf(" License: %s\n", pkg.License)
}
if pkg.Maintainers != "" && pkg.Maintainers != "[]" {
fmt.Printf(" Maintainers: %s\n", pkg.Maintainers)
}
// Status flags
if pkg.Broken || pkg.Unfree || pkg.Insecure {
fmt.Println(" Status:")
if pkg.Broken {
fmt.Println(" - broken")
}
if pkg.Unfree {
fmt.Println(" - unfree")
}
if pkg.Insecure {
fmt.Println(" - insecure")
}
}
return nil
}
// resolveRevision finds a revision by hash or channel, or returns the most recent.
func resolveRevision(ctx context.Context, store database.Store, revisionArg string) (*database.Revision, error) {
if revisionArg != "" {
rev, err := store.GetRevision(ctx, revisionArg)
if err != nil {
return nil, fmt.Errorf("failed to get revision: %w", err)
}
if rev != nil {
return rev, nil
}
rev, err = store.GetRevisionByChannel(ctx, revisionArg)
if err != nil {
return nil, fmt.Errorf("failed to get revision: %w", err)
}
return rev, nil
}
// Return most recent
revisions, err := store.ListRevisions(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list revisions: %w", err)
}
if len(revisions) > 0 {
return revisions[0], nil
}
return nil, nil
}
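The lookup order in `resolveRevision` — exact git hash first, then channel name, then the most recently indexed revision — can be sketched with a map-backed stand-in for `database.Store` (the `resolve` helper below is illustrative only):

```go
package main

import "fmt"

type rev struct{ hash, channel string }

// resolve mirrors resolveRevision's fallback order: when arg is given,
// try it as a git hash, then as a channel name (nil means not found);
// when arg is empty, return the most recent revision, if any.
func resolve(byHash, byChannel map[string]*rev, recent []*rev, arg string) *rev {
	if arg != "" {
		if r, ok := byHash[arg]; ok {
			return r
		}
		return byChannel[arg] // may be nil: caller reports "not found"
	}
	if len(recent) > 0 {
		return recent[0]
	}
	return nil
}

func main() {
	r := &rev{hash: "abc123", channel: "nixos-unstable"}
	byHash := map[string]*rev{"abc123": r}
	byChannel := map[string]*rev{"nixos-unstable": r}
	fmt.Println(resolve(byHash, byChannel, nil, "nixos-unstable").hash) // prints "abc123"
	fmt.Println(resolve(nil, nil, []*rev{r}, "").hash)                  // prints "abc123"
}
```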

flake.lock

@@ -2,11 +2,11 @@
   "nodes": {
     "nixpkgs": {
       "locked": {
-        "lastModified": 1770115704,
-        "narHash": "sha256-KHFT9UWOF2yRPlAnSXQJh6uVcgNcWlFqqiAZ7OVlHNc=",
+        "lastModified": 1770841267,
+        "narHash": "sha256-9xejG0KoqsoKEGp2kVbXRlEYtFFcDTHjidiuX8hGO44=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "e6eae2ee2110f3d31110d5c222cd395303343b08",
+        "rev": "ec7c70d12ce2fc37cb92aff673dcdca89d187bae",
         "type": "github"
       },
       "original": {

flake.nix

@@ -26,7 +26,28 @@
         mainProgram = "hm-options";
         description = "MCP server for Home Manager options search and query";
       };
-      default = self.packages.${system}.nixos-options;
+      nixpkgs-search = pkgs.callPackage ./nix/package.nix {
+        src = ./.;
+        pname = "nixpkgs-search";
+        subPackage = "cmd/nixpkgs-search";
+        mainProgram = "nixpkgs-search";
+        description = "Search nixpkgs options and packages";
+      };
+      lab-monitoring = pkgs.callPackage ./nix/package.nix {
+        src = ./.;
+        pname = "lab-monitoring";
+        subPackage = "cmd/lab-monitoring";
+        mainProgram = "lab-monitoring";
+        description = "MCP server for Prometheus and Alertmanager monitoring";
+      };
+      git-explorer = pkgs.callPackage ./nix/package.nix {
+        src = ./.;
+        pname = "git-explorer";
+        subPackage = "cmd/git-explorer";
+        mainProgram = "git-explorer";
+        description = "Read-only MCP server for git repository exploration";
+      };
+      default = self.packages.${system}.nixpkgs-search;
     });
   devShells = forAllSystems (system:
@@ -36,11 +57,10 @@
     {
       default = pkgs.mkShell {
         buildInputs = with pkgs; [
-          go_1_24
+          go
           gopls
-          gotools
-          go-tools
           golangci-lint
           govulncheck
           postgresql
           sqlite
         ];
@@ -53,6 +73,10 @@
     });
   nixosModules = {
+    nixpkgs-search-mcp = { pkgs, ... }: {
+      imports = [ ./nix/nixpkgs-search-module.nix ];
+      services.nixpkgs-search.package = lib.mkDefault self.packages.${pkgs.system}.nixpkgs-search;
+    };
     nixos-options-mcp = { pkgs, ... }: {
       imports = [ ./nix/module.nix ];
       services.nixos-options-mcp.package = lib.mkDefault self.packages.${pkgs.system}.nixos-options;
@@ -61,7 +85,15 @@
       imports = [ ./nix/hm-options-module.nix ];
       services.hm-options-mcp.package = lib.mkDefault self.packages.${pkgs.system}.hm-options;
     };
-    default = self.nixosModules.nixos-options-mcp;
+    lab-monitoring-mcp = { pkgs, ... }: {
+      imports = [ ./nix/lab-monitoring-module.nix ];
+      services.lab-monitoring.package = lib.mkDefault self.packages.${pkgs.system}.lab-monitoring;
+    };
+    git-explorer-mcp = { pkgs, ... }: {
+      imports = [ ./nix/git-explorer-module.nix ];
+      services.git-explorer.package = lib.mkDefault self.packages.${pkgs.system}.git-explorer;
+    };
+    default = self.nixosModules.nixpkgs-search-mcp;
   };
 };
}

go.mod

@@ -1,24 +1,43 @@
-module git.t-juice.club/torjus/labmcp
+module code.t-juice.club/torjus/labmcp
go 1.24
require (
github.com/go-git/go-git/v5 v5.16.4
github.com/lib/pq v1.10.9
github.com/urfave/cli/v2 v2.27.5
modernc.org/sqlite v1.34.4
)
require (
dario.cat/mergo v1.0.0 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/ProtonMail/go-crypto v1.1.6 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.5 // indirect
github.com/cyphar/filepath-securejoin v0.4.1 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/emirpasic/gods v1.18.1 // indirect
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
github.com/go-git/go-billy/v5 v5.6.2 // indirect
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
github.com/kevinburke/ssh_config v1.2.0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/pjbgf/sha1cd v0.3.2 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 // indirect
github.com/skeema/knownhosts v1.3.1 // indirect
github.com/xanzy/ssh-agent v0.3.3 // indirect
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 // indirect
-	golang.org/x/sys v0.22.0 // indirect
+	golang.org/x/crypto v0.37.0 // indirect
+	golang.org/x/net v0.39.0 // indirect
+	golang.org/x/sys v0.32.0 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect
modernc.org/gc/v3 v3.0.0-20240107210532-573471604cb6 // indirect
modernc.org/libc v1.55.3 // indirect
modernc.org/mathutil v1.6.0 // indirect

go.sum

@@ -1,36 +1,134 @@
dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/ProtonMail/go-crypto v1.1.6 h1:ZcV+Ropw6Qn0AX9brlQLAUXfqLBc7Bl+f/DmNxpLfdw=
github.com/ProtonMail/go-crypto v1.1.6/go.mod h1:rA3QumHc/FZ8pAHreoekgiAbzpNsfQAosU5td4SnOrE=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0=
github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
github.com/cpuguy83/go-md2man/v2 v2.0.5 h1:ZtcqGrnekaHpVLArFSe4HK5DoKx1T0rq2DwVB0alcyc=
github.com/cpuguy83/go-md2man/v2 v2.0.5/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.4.1 h1:JyxxyPEaktOD+GAnqIqTf9A8tHyAG22rowi7HkoSU1s=
github.com/cyphar/filepath-securejoin v0.4.1/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/elazarl/goproxy v1.7.2 h1:Y2o6urb7Eule09PjlhQRGNsqRfPmYI3KKQLFpCAV3+o=
github.com/elazarl/goproxy v1.7.2/go.mod h1:82vkLNir0ALaW14Rc399OTTjyNREgmdL2cVoIbS6XaE=
github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc=
github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FMNAnJvWQ=
github.com/gliderlabs/ssh v0.3.8 h1:a4YXD1V7xMF9g5nTkdfnja3Sxy1PVDCj1Zg4Wb8vY6c=
github.com/gliderlabs/ssh v0.3.8/go.mod h1:xYoytBv1sV0aL3CavoDuJIQNURXkkfPA/wxQ1pL1fAU=
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI=
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
github.com/go-git/go-billy/v5 v5.6.2 h1:6Q86EsPXMa7c3YZ3aLAQsMA0VlWmy43r6FHqa/UNbRM=
github.com/go-git/go-billy/v5 v5.6.2/go.mod h1:rcFC2rAsp/erv7CMz9GczHcuD0D32fWzH+MJAU+jaUU=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399 h1:eMje31YglSBqCdIqdhKBW8lokaMrL3uTkpGYlE2OOT4=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399/go.mod h1:1OCfN199q1Jm3HZlxleg+Dw/mwps2Wbk9frAWm+4FII=
github.com/go-git/go-git/v5 v5.16.4 h1:7ajIEZHZJULcyJebDLo99bGgS0jRrOxzZG4uCk2Yb2Y=
github.com/go-git/go-git/v5 v5.16.4/go.mod h1:4Ge4alE/5gPs30F2H1esi2gPd69R0C39lolkucHBOp8=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/pprof v0.0.0-20240409012703-83162a5b38cd h1:gbpYu9NMq8jhDVbvlGkMFWCjLFlqqEZjEmObmhUy6Vo=
github.com/google/pprof v0.0.0-20240409012703-83162a5b38cd/go.mod h1:kf6iHlnVGwgKolg33glAes7Yg/8iWP8ukqeldJSO7jw=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4=
github.com/kevinburke/ssh_config v1.2.0/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/onsi/gomega v1.34.1 h1:EUMJIKUjM8sKjYbtxQI9A4z2o+rruxnzNvpknOXie6k=
github.com/onsi/gomega v1.34.1/go.mod h1:kU1QgUvBDLXBJq618Xvm2LUX6rSAfRaFRTcdOeDLwwY=
github.com/pjbgf/sha1cd v0.3.2 h1:a9wb0bp1oC2TGwStyn0Umc/IGKQnEgF0vVaZ8QF8eo4=
github.com/pjbgf/sha1cd v0.3.2/go.mod h1:zQWigSxVmsHEZow5qaLtPYxpcKMMQpa09ixqBxuCS6A=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/skeema/knownhosts v1.3.1 h1:X2osQ+RAjK76shCbvhHHHVl3ZlgDm8apHEHFqRjnBY8=
github.com/skeema/knownhosts v1.3.1/go.mod h1:r7KTdC8l4uxWRyK2TpQZ/1o5HaSzh06ePQNxPwTcfiY=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/urfave/cli/v2 v2.27.5 h1:WoHEJLdsXr6dDWoJgMq/CboDmyY/8HMMH1fTECbih+w=
github.com/urfave/cli/v2 v2.27.5/go.mod h1:3Sevf16NykTbInEnD0yKkjDAeZDS0A6bzhBH5hrMvTQ=
github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM=
github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI1Bc68Uw=
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 h1:gEOO8jv9F4OT7lGCjxCBTO/36wtF6j2nSip77qHd4x4=
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1/go.mod h1:Ohn+xnUBiLI6FVj/9LpzZWtj1/D6lUovWYBkxHVV3aM=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/mod v0.16.0 h1:QX4fJ0Rr5cPQCF7O9lh9Se4pmwfwskqZfq5moyldzic=
golang.org/x/mod v0.16.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.31.0 h1:erwDkOK1Msy6offm1mOgvspSkslFnIGsFnxOKoufg3o=
golang.org/x/term v0.31.0/go.mod h1:R4BeIy7D95HzImkxGkTW1UQTtP54tio2RyHz7PwK0aw=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.19.0 h1:tfGCXNR1OsFG+sVdLAitlpjAvD/I6dHDKnYrpEZUHkw=
golang.org/x/tools v0.19.0/go.mod h1:qoJWxmGSIBmAeriMx19ogtrEPrGtDbPK634QFIcLAhc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.21.4 h1:3Be/Rdo1fpr8GrQ7IVw9OHtplU4gWbb+wNgeoBMmGLQ=
modernc.org/cc/v4 v4.21.4/go.mod h1:HM7VJTZbUCR3rV8EYBi9wxnJ0ZBRiGE5OeGXNA0IsLQ=
modernc.org/ccgo/v4 v4.19.2 h1:lwQZgvboKD0jBwdaeVCTouxhxAyN6iawF3STraAal8Y=


@@ -12,7 +12,7 @@ func BenchmarkCreateOptions(b *testing.B) {
if err != nil {
b.Fatalf("Failed to create store: %v", err)
}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // benchmark cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
@@ -53,7 +53,7 @@ func benchmarkBatch(b *testing.B, batchSize int) {
if err != nil {
b.Fatalf("Failed to create store: %v", err)
}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // benchmark cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
@@ -88,9 +88,9 @@ func benchmarkBatch(b *testing.B, batchSize int) {
}
// Clean up for next iteration
-	store.DeleteRevision(ctx, rev.ID)
+	_ = store.DeleteRevision(ctx, rev.ID) //nolint:errcheck // benchmark cleanup
 	rev = &Revision{GitHash: fmt.Sprintf("batchbench%d", i), ChannelName: "bench"}
-	store.CreateRevision(ctx, rev)
+	_ = store.CreateRevision(ctx, rev) //nolint:errcheck // benchmark setup
for _, opt := range opts {
opt.RevisionID = rev.ID
}
@@ -102,7 +102,7 @@ func BenchmarkSearchOptions(b *testing.B) {
if err != nil {
b.Fatalf("Failed to create store: %v", err)
}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // benchmark cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
@@ -144,7 +144,7 @@ func BenchmarkGetChildren(b *testing.B) {
if err != nil {
b.Fatalf("Failed to create store: %v", err)
}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // benchmark cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
@@ -197,7 +197,7 @@ func BenchmarkSchemaInitialize(b *testing.B) {
b.Fatalf("Failed to initialize: %v", err)
}
-	store.Close()
+	store.Close() //nolint:errcheck // benchmark cleanup
}
}
@@ -207,7 +207,7 @@ func BenchmarkRevisionCRUD(b *testing.B) {
if err != nil {
b.Fatalf("Failed to create store: %v", err)
}
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // benchmark cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {


@@ -19,13 +19,15 @@ func runStoreTests(t *testing.T, newStore func(t *testing.T) Store) {
{"OptionChildren", testOptionChildren},
{"Declarations", testDeclarations},
{"Files", testFiles},
{"FileRange", testFileRange},
{"DeclarationsWithMetadata", testDeclarationsWithMetadata},
{"SchemaVersion", testSchemaVersion},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
store := newStore(t)
-	defer store.Close()
+	defer store.Close() //nolint:errcheck // test cleanup
tt.test(t, store)
})
}
@@ -451,6 +453,14 @@ func testFiles(t *testing.T, store Store) {
t.Errorf("Extension = %q, want .nix", got.Extension)
}
// Verify file metadata was computed
if got.ByteSize != len(file.Content) {
t.Errorf("ByteSize = %d, want %d", got.ByteSize, len(file.Content))
}
if got.LineCount != 3 {
t.Errorf("LineCount = %d, want 3", got.LineCount)
}
// Get non-existent file
got, err = store.GetFile(ctx, rev.ID, "nonexistent.nix")
if err != nil {
@@ -476,6 +486,169 @@ func testFiles(t *testing.T, store Store) {
}
}
func testFileRange(t *testing.T, store Store) {
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
t.Fatalf("Initialize failed: %v", err)
}
rev := &Revision{GitHash: "range123", ChannelName: "test"}
if err := store.CreateRevision(ctx, rev); err != nil {
t.Fatalf("CreateRevision failed: %v", err)
}
// Create a multi-line file
content := "line 1\nline 2\nline 3\nline 4\nline 5\nline 6\nline 7\nline 8\nline 9\nline 10"
file := &File{
RevisionID: rev.ID,
FilePath: "multiline.nix",
Extension: ".nix",
Content: content,
}
if err := store.CreateFile(ctx, file); err != nil {
t.Fatalf("CreateFile failed: %v", err)
}
// Test default range (first 250 lines, but we have less)
result, err := store.GetFileWithRange(ctx, rev.ID, "multiline.nix", FileRange{})
if err != nil {
t.Fatalf("GetFileWithRange default failed: %v", err)
}
if result == nil {
t.Fatal("Expected result, got nil")
}
if result.TotalLines != 10 {
t.Errorf("TotalLines = %d, want 10", result.TotalLines)
}
if result.StartLine != 1 {
t.Errorf("StartLine = %d, want 1", result.StartLine)
}
if result.EndLine != 10 {
t.Errorf("EndLine = %d, want 10", result.EndLine)
}
// Test with offset
result, err = store.GetFileWithRange(ctx, rev.ID, "multiline.nix", FileRange{Offset: 2, Limit: 3})
if err != nil {
t.Fatalf("GetFileWithRange with offset failed: %v", err)
}
if result.StartLine != 3 {
t.Errorf("StartLine = %d, want 3", result.StartLine)
}
if result.EndLine != 5 {
t.Errorf("EndLine = %d, want 5", result.EndLine)
}
if result.Content != "line 3\nline 4\nline 5" {
t.Errorf("Content = %q, want lines 3-5", result.Content)
}
// Test offset beyond file
result, err = store.GetFileWithRange(ctx, rev.ID, "multiline.nix", FileRange{Offset: 100})
if err != nil {
t.Fatalf("GetFileWithRange beyond end failed: %v", err)
}
if result.StartLine != 0 {
t.Errorf("StartLine = %d, want 0 for beyond end", result.StartLine)
}
if result.Content != "" {
t.Errorf("Content = %q, want empty for beyond end", result.Content)
}
// Test non-existent file
result, err = store.GetFileWithRange(ctx, rev.ID, "nonexistent.nix", FileRange{})
if err != nil {
t.Fatalf("GetFileWithRange for nonexistent failed: %v", err)
}
if result != nil {
t.Error("Expected nil for nonexistent file")
}
}
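The behavior testFileRange pins down — a 0-based Offset, a Limit defaulting to 250, and an offset past end-of-file yielding StartLine 0 with empty content — can be sketched as a standalone line-window helper (`window` is hypothetical; the real GetFileWithRange implementation is not shown in this diff):

```go
package main

import (
	"fmt"
	"strings"
)

// window returns the requested slice of lines plus 1-based start/end
// line numbers, matching the semantics exercised by testFileRange:
// offset is 0-based, limit <= 0 falls back to the assumed default of
// 250, and an offset at or past EOF yields ("", 0, 0).
func window(content string, offset, limit int) (out string, start, end int) {
	if limit <= 0 {
		limit = 250
	}
	lines := strings.Split(content, "\n")
	if offset >= len(lines) {
		return "", 0, 0
	}
	last := offset + limit
	if last > len(lines) {
		last = len(lines)
	}
	return strings.Join(lines[offset:last], "\n"), offset + 1, last
}

func main() {
	c := "line 1\nline 2\nline 3\nline 4\nline 5"
	s, a, b := window(c, 2, 3)
	fmt.Printf("%q %d %d\n", s, a, b) // prints "line 3\nline 4\nline 5" 3 5
}
```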
func testDeclarationsWithMetadata(t *testing.T, store Store) {
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
t.Fatalf("Initialize failed: %v", err)
}
rev := &Revision{GitHash: "metadata123", ChannelName: "test"}
if err := store.CreateRevision(ctx, rev); err != nil {
t.Fatalf("CreateRevision failed: %v", err)
}
// Create a file
file := &File{
RevisionID: rev.ID,
FilePath: "modules/nginx.nix",
Extension: ".nix",
Content: "line 1\nline 2\nline 3",
}
if err := store.CreateFile(ctx, file); err != nil {
t.Fatalf("CreateFile failed: %v", err)
}
// Create an option with declarations
opt := &Option{
RevisionID: rev.ID,
Name: "services.nginx.enable",
ParentPath: "services.nginx",
Type: "boolean",
}
if err := store.CreateOption(ctx, opt); err != nil {
t.Fatalf("CreateOption failed: %v", err)
}
// Create declarations - one pointing to indexed file, one to non-indexed
decls := []*Declaration{
{OptionID: opt.ID, FilePath: "modules/nginx.nix", Line: 10},
{OptionID: opt.ID, FilePath: "modules/other.nix", Line: 20},
}
if err := store.CreateDeclarationsBatch(ctx, decls); err != nil {
t.Fatalf("CreateDeclarationsBatch failed: %v", err)
}
// Get declarations with metadata
declMetas, err := store.GetDeclarationsWithMetadata(ctx, rev.ID, opt.ID)
if err != nil {
t.Fatalf("GetDeclarationsWithMetadata failed: %v", err)
}
if len(declMetas) != 2 {
t.Fatalf("Expected 2 declarations, got %d", len(declMetas))
}
// Find the declaration for the indexed file
var indexed, notIndexed *DeclarationWithMetadata
for _, d := range declMetas {
if d.FilePath == "modules/nginx.nix" {
indexed = d
} else {
notIndexed = d
}
}
if indexed == nil {
t.Fatal("Expected indexed declaration")
}
if !indexed.HasFile {
t.Error("Expected HasFile=true for indexed file")
}
if indexed.ByteSize != len(file.Content) {
t.Errorf("ByteSize = %d, want %d", indexed.ByteSize, len(file.Content))
}
if indexed.LineCount != 3 {
t.Errorf("LineCount = %d, want 3", indexed.LineCount)
}
if notIndexed == nil {
t.Fatal("Expected not-indexed declaration")
}
if notIndexed.HasFile {
t.Error("Expected HasFile=false for non-indexed file")
}
if notIndexed.ByteSize != 0 {
t.Errorf("ByteSize = %d, want 0 for non-indexed", notIndexed.ByteSize)
}
}
func testSchemaVersion(t *testing.T, store Store) {
ctx := context.Background()

View File

@@ -14,6 +14,7 @@ type Revision struct {
CommitDate time.Time
IndexedAt time.Time
OptionCount int
PackageCount int
}
// Option represents a NixOS configuration option.
@@ -44,6 +45,48 @@ type File struct {
FilePath string
Extension string
Content string
ByteSize int
LineCount int
}
// Package represents a Nix package from nixpkgs.
type Package struct {
ID int64
RevisionID int64
AttrPath string // e.g., "python312Packages.requests"
Pname string // Package name
Version string
Description string
LongDescription string
Homepage string
License string // JSON array
Platforms string // JSON array
Maintainers string // JSON array
Broken bool
Unfree bool
Insecure bool
}
// DeclarationWithMetadata includes declaration info plus file metadata.
type DeclarationWithMetadata struct {
Declaration
ByteSize int // File size in bytes, 0 if file not indexed
LineCount int // Number of lines, 0 if file not indexed
HasFile bool // True if file is indexed
}
// FileRange specifies a range of lines to return from a file.
type FileRange struct {
Offset int // Line offset (0-based)
Limit int // Maximum lines to return (0 = default 250)
}
// FileResult contains a file with range metadata.
type FileResult struct {
*File
TotalLines int // Total lines in the file
StartLine int // First line returned (1-based)
EndLine int // Last line returned (1-based)
}
// SearchFilters contains optional filters for option search.
@@ -55,6 +98,15 @@ type SearchFilters struct {
Offset int
}
// PackageSearchFilters contains optional filters for package search.
type PackageSearchFilters struct {
Broken *bool
Unfree *bool
Insecure *bool
Limit int
Offset int
}
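The pointer-typed filters above are tri-state: nil means "don't filter", while a non-nil value filters on true or false. A hypothetical `boolPtr` helper keeps call sites readable; the struct is mirrored locally so this sketch stands alone:

```go
package main

import "fmt"

// PackageSearchFilters mirrors the store type, redefined here so the
// sketch is self-contained.
type PackageSearchFilters struct {
	Broken   *bool
	Unfree   *bool
	Insecure *bool
	Limit    int
	Offset   int
}

// boolPtr is a hypothetical helper for setting tri-state filters.
func boolPtr(b bool) *bool { return &b }

func main() {
	// Only non-broken, free packages; Insecure stays nil (unfiltered).
	f := PackageSearchFilters{
		Broken: boolPtr(false),
		Unfree: boolPtr(false),
		Limit:  50,
	}
	fmt.Println(*f.Broken, f.Insecure == nil, f.Limit)
}
```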
// Store defines the interface for database operations.
type Store interface {
// Schema operations
@@ -80,9 +132,18 @@ type Store interface {
CreateDeclaration(ctx context.Context, decl *Declaration) error
CreateDeclarationsBatch(ctx context.Context, decls []*Declaration) error
GetDeclarations(ctx context.Context, optionID int64) ([]*Declaration, error)
GetDeclarationsWithMetadata(ctx context.Context, revisionID, optionID int64) ([]*DeclarationWithMetadata, error)
// File operations
CreateFile(ctx context.Context, file *File) error
CreateFilesBatch(ctx context.Context, files []*File) error
GetFile(ctx context.Context, revisionID int64, path string) (*File, error)
GetFileWithRange(ctx context.Context, revisionID int64, path string, r FileRange) (*FileResult, error)
// Package operations
CreatePackage(ctx context.Context, pkg *Package) error
CreatePackagesBatch(ctx context.Context, pkgs []*Package) error
GetPackage(ctx context.Context, revisionID int64, attrPath string) (*Package, error)
SearchPackages(ctx context.Context, revisionID int64, query string, filters PackageSearchFilters) ([]*Package, error)
UpdateRevisionPackageCount(ctx context.Context, id int64, count int) error
}

View File

@@ -22,7 +22,7 @@ func NewPostgresStore(connStr string) (*PostgresStore, error) {
}
if err := db.Ping(); err != nil {
db.Close()
db.Close() //nolint:errcheck // best-effort cleanup on connection failure
return nil, fmt.Errorf("failed to ping database: %w", err)
}
@@ -43,6 +43,7 @@ func (s *PostgresStore) Initialize(ctx context.Context) error {
dropStmts := []string{
DropDeclarations,
DropOptions,
DropPackages,
DropFiles,
DropRevisions,
DropSchemaInfo,
@@ -64,7 +65,8 @@ func (s *PostgresStore) Initialize(ctx context.Context) error {
channel_name TEXT,
commit_date TIMESTAMP,
indexed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
option_count INTEGER NOT NULL DEFAULT 0
option_count INTEGER NOT NULL DEFAULT 0,
package_count INTEGER NOT NULL DEFAULT 0
)`,
`CREATE TABLE IF NOT EXISTS options (
id SERIAL PRIMARY KEY,
@@ -88,12 +90,32 @@ func (s *PostgresStore) Initialize(ctx context.Context) error {
revision_id INTEGER NOT NULL REFERENCES revisions(id) ON DELETE CASCADE,
file_path TEXT NOT NULL,
extension TEXT,
content TEXT NOT NULL
content TEXT NOT NULL,
byte_size INTEGER NOT NULL DEFAULT 0,
line_count INTEGER NOT NULL DEFAULT 0
)`,
`CREATE TABLE IF NOT EXISTS packages (
id SERIAL PRIMARY KEY,
revision_id INTEGER NOT NULL REFERENCES revisions(id) ON DELETE CASCADE,
attr_path TEXT NOT NULL,
pname TEXT NOT NULL,
version TEXT,
description TEXT,
long_description TEXT,
homepage TEXT,
license TEXT,
platforms TEXT,
maintainers TEXT,
broken BOOLEAN NOT NULL DEFAULT FALSE,
unfree BOOLEAN NOT NULL DEFAULT FALSE,
insecure BOOLEAN NOT NULL DEFAULT FALSE
)`,
IndexOptionsRevisionName,
IndexOptionsRevisionParent,
IndexFilesRevisionPath,
IndexDeclarationsOption,
IndexPackagesRevisionAttr,
IndexPackagesRevisionPname,
}
for _, stmt := range createStmts {
@@ -102,13 +124,22 @@ func (s *PostgresStore) Initialize(ctx context.Context) error {
}
}
// Create full-text search index for PostgreSQL
// Create full-text search index for PostgreSQL options
_, err = s.db.ExecContext(ctx, `
CREATE INDEX IF NOT EXISTS idx_options_fts
ON options USING GIN(to_tsvector('english', name || ' ' || COALESCE(description, '')))
`)
if err != nil {
return fmt.Errorf("failed to create FTS index: %w", err)
return fmt.Errorf("failed to create options FTS index: %w", err)
}
// Create full-text search index for PostgreSQL packages
_, err = s.db.ExecContext(ctx, `
CREATE INDEX IF NOT EXISTS idx_packages_fts
ON packages USING GIN(to_tsvector('english', attr_path || ' ' || pname || ' ' || COALESCE(description, '')))
`)
if err != nil {
return fmt.Errorf("failed to create packages FTS index: %w", err)
}
// Set schema version
@@ -131,10 +162,10 @@ func (s *PostgresStore) Close() error {
// CreateRevision creates a new revision record.
func (s *PostgresStore) CreateRevision(ctx context.Context, rev *Revision) error {
err := s.db.QueryRowContext(ctx, `
INSERT INTO revisions (git_hash, channel_name, commit_date, option_count)
VALUES ($1, $2, $3, $4)
INSERT INTO revisions (git_hash, channel_name, commit_date, option_count, package_count)
VALUES ($1, $2, $3, $4, $5)
RETURNING id, indexed_at`,
rev.GitHash, rev.ChannelName, rev.CommitDate, rev.OptionCount,
rev.GitHash, rev.ChannelName, rev.CommitDate, rev.OptionCount, rev.PackageCount,
).Scan(&rev.ID, &rev.IndexedAt)
if err != nil {
return fmt.Errorf("failed to create revision: %w", err)
@@ -146,9 +177,9 @@ func (s *PostgresStore) CreateRevision(ctx context.Context, rev *Revision) error
func (s *PostgresStore) GetRevision(ctx context.Context, gitHash string) (*Revision, error) {
rev := &Revision{}
err := s.db.QueryRowContext(ctx, `
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count, package_count
FROM revisions WHERE git_hash = $1`, gitHash,
).Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount)
).Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount, &rev.PackageCount)
if err == sql.ErrNoRows {
return nil, nil
}
@@ -162,10 +193,10 @@ func (s *PostgresStore) GetRevision(ctx context.Context, gitHash string) (*Revis
func (s *PostgresStore) GetRevisionByChannel(ctx context.Context, channel string) (*Revision, error) {
rev := &Revision{}
err := s.db.QueryRowContext(ctx, `
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count, package_count
FROM revisions WHERE channel_name = $1
ORDER BY indexed_at DESC LIMIT 1`, channel,
).Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount)
).Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount, &rev.PackageCount)
if err == sql.ErrNoRows {
return nil, nil
}
@@ -178,17 +209,17 @@ func (s *PostgresStore) GetRevisionByChannel(ctx context.Context, channel string
// ListRevisions returns all indexed revisions.
func (s *PostgresStore) ListRevisions(ctx context.Context) ([]*Revision, error) {
rows, err := s.db.QueryContext(ctx, `
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count, package_count
FROM revisions ORDER BY indexed_at DESC`)
if err != nil {
return nil, fmt.Errorf("failed to list revisions: %w", err)
}
defer rows.Close()
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var revisions []*Revision
for rows.Next() {
rev := &Revision{}
if err := rows.Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount); err != nil {
if err := rows.Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount, &rev.PackageCount); err != nil {
return nil, fmt.Errorf("failed to scan revision: %w", err)
}
revisions = append(revisions, rev)
@@ -235,7 +266,7 @@ func (s *PostgresStore) CreateOptionsBatch(ctx context.Context, opts []*Option)
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback()
defer tx.Rollback() //nolint:errcheck // Rollback after a successful Commit returns an error, which is expected
stmt, err := tx.PrepareContext(ctx, `
INSERT INTO options (revision_id, name, parent_path, type, default_value, example, description, read_only)
@@ -244,7 +275,7 @@ func (s *PostgresStore) CreateOptionsBatch(ctx context.Context, opts []*Option)
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close()
defer stmt.Close() //nolint:errcheck // statement closed with transaction
for _, opt := range opts {
err := stmt.QueryRowContext(ctx,
@@ -283,7 +314,7 @@ func (s *PostgresStore) GetChildren(ctx context.Context, revisionID int64, paren
if err != nil {
return nil, fmt.Errorf("failed to get children: %w", err)
}
defer rows.Close()
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var options []*Option
for rows.Next() {
@@ -300,7 +331,7 @@ func (s *PostgresStore) GetChildren(ctx context.Context, revisionID int64, paren
func (s *PostgresStore) SearchOptions(ctx context.Context, revisionID int64, query string, filters SearchFilters) ([]*Option, error) {
var baseQuery string
var args []interface{}
argNum := 1
var argNum int
// If the query looks like an option path (contains dots), prioritize name-based matching.
if strings.Contains(query, ".") {
@@ -331,7 +362,7 @@ func (s *PostgresStore) SearchOptions(ctx context.Context, revisionID int64, que
if filters.Namespace != "" {
baseQuery += fmt.Sprintf(" AND name LIKE $%d", argNum)
args = append(args, filters.Namespace+"%")
argNum++
_ = argNum // silence ineffassign: argNum tracks the placeholder position, but its final value is unused
}
if filters.HasDefault != nil {
@@ -355,7 +386,7 @@ func (s *PostgresStore) SearchOptions(ctx context.Context, revisionID int64, que
if err != nil {
return nil, fmt.Errorf("failed to search options: %w", err)
}
defer rows.Close()
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var options []*Option
for rows.Next() {
@@ -388,7 +419,7 @@ func (s *PostgresStore) CreateDeclarationsBatch(ctx context.Context, decls []*De
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback()
defer tx.Rollback() //nolint:errcheck // Rollback after a successful Commit returns an error, which is expected
stmt, err := tx.PrepareContext(ctx, `
INSERT INTO declarations (option_id, file_path, line)
@@ -397,7 +428,7 @@ func (s *PostgresStore) CreateDeclarationsBatch(ctx context.Context, decls []*De
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close()
defer stmt.Close() //nolint:errcheck // statement closed with transaction
for _, decl := range decls {
err := stmt.QueryRowContext(ctx, decl.OptionID, decl.FilePath, decl.Line).Scan(&decl.ID)
@@ -417,7 +448,7 @@ func (s *PostgresStore) GetDeclarations(ctx context.Context, optionID int64) ([]
if err != nil {
return nil, fmt.Errorf("failed to get declarations: %w", err)
}
defer rows.Close()
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var decls []*Declaration
for rows.Next() {
@@ -432,11 +463,19 @@ func (s *PostgresStore) GetDeclarations(ctx context.Context, optionID int64) ([]
// CreateFile creates a new file record.
func (s *PostgresStore) CreateFile(ctx context.Context, file *File) error {
// Compute metadata if not already set
if file.ByteSize == 0 {
file.ByteSize = len(file.Content)
}
if file.LineCount == 0 {
file.LineCount = countLines(file.Content)
}
err := s.db.QueryRowContext(ctx, `
INSERT INTO files (revision_id, file_path, extension, content)
VALUES ($1, $2, $3, $4)
INSERT INTO files (revision_id, file_path, extension, content, byte_size, line_count)
VALUES ($1, $2, $3, $4, $5, $6)
RETURNING id`,
file.RevisionID, file.FilePath, file.Extension, file.Content,
file.RevisionID, file.FilePath, file.Extension, file.Content, file.ByteSize, file.LineCount,
).Scan(&file.ID)
if err != nil {
return fmt.Errorf("failed to create file: %w", err)
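`countLines` is called when computing file metadata but isn't shown in these hunks. A plausible implementation, consistent with the test earlier in the diff (where `"line 1\nline 2\nline 3"` yields a LineCount of 3):

```go
package main

import (
	"fmt"
	"strings"
)

// countLines returns the number of lines in s. A trailing newline ends
// the last line rather than starting an empty one. This is an assumed
// implementation; the real countLines is not in this diff.
func countLines(s string) int {
	if s == "" {
		return 0
	}
	n := strings.Count(s, "\n")
	if !strings.HasSuffix(s, "\n") {
		n++ // last line has no terminating newline
	}
	return n
}

func main() {
	fmt.Println(countLines("line 1\nline 2\nline 3")) // 3
}
```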
@@ -450,19 +489,27 @@ func (s *PostgresStore) CreateFilesBatch(ctx context.Context, files []*File) err
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback()
defer tx.Rollback() //nolint:errcheck // Rollback after a successful Commit returns an error, which is expected
stmt, err := tx.PrepareContext(ctx, `
INSERT INTO files (revision_id, file_path, extension, content)
VALUES ($1, $2, $3, $4)
INSERT INTO files (revision_id, file_path, extension, content, byte_size, line_count)
VALUES ($1, $2, $3, $4, $5, $6)
RETURNING id`)
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close()
defer stmt.Close() //nolint:errcheck // statement closed with transaction
for _, file := range files {
err := stmt.QueryRowContext(ctx, file.RevisionID, file.FilePath, file.Extension, file.Content).Scan(&file.ID)
// Compute metadata if not already set
if file.ByteSize == 0 {
file.ByteSize = len(file.Content)
}
if file.LineCount == 0 {
file.LineCount = countLines(file.Content)
}
err := stmt.QueryRowContext(ctx, file.RevisionID, file.FilePath, file.Extension, file.Content, file.ByteSize, file.LineCount).Scan(&file.ID)
if err != nil {
return fmt.Errorf("failed to insert file: %w", err)
}
@@ -475,9 +522,9 @@ func (s *PostgresStore) CreateFilesBatch(ctx context.Context, files []*File) err
func (s *PostgresStore) GetFile(ctx context.Context, revisionID int64, path string) (*File, error) {
file := &File{}
err := s.db.QueryRowContext(ctx, `
SELECT id, revision_id, file_path, extension, content
SELECT id, revision_id, file_path, extension, content, byte_size, line_count
FROM files WHERE revision_id = $1 AND file_path = $2`, revisionID, path,
).Scan(&file.ID, &file.RevisionID, &file.FilePath, &file.Extension, &file.Content)
).Scan(&file.ID, &file.RevisionID, &file.FilePath, &file.Extension, &file.Content, &file.ByteSize, &file.LineCount)
if err == sql.ErrNoRows {
return nil, nil
}
@@ -486,3 +533,184 @@ func (s *PostgresStore) GetFile(ctx context.Context, revisionID int64, path stri
}
return file, nil
}
// GetDeclarationsWithMetadata retrieves declarations with file metadata for an option.
func (s *PostgresStore) GetDeclarationsWithMetadata(ctx context.Context, revisionID, optionID int64) ([]*DeclarationWithMetadata, error) {
rows, err := s.db.QueryContext(ctx, `
SELECT d.id, d.option_id, d.file_path, d.line,
COALESCE(f.byte_size, 0), COALESCE(f.line_count, 0), (f.id IS NOT NULL)
FROM declarations d
LEFT JOIN files f ON f.revision_id = $1 AND f.file_path = d.file_path
WHERE d.option_id = $2`, revisionID, optionID)
if err != nil {
return nil, fmt.Errorf("failed to get declarations with metadata: %w", err)
}
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var decls []*DeclarationWithMetadata
for rows.Next() {
decl := &DeclarationWithMetadata{}
if err := rows.Scan(&decl.ID, &decl.OptionID, &decl.FilePath, &decl.Line,
&decl.ByteSize, &decl.LineCount, &decl.HasFile); err != nil {
return nil, fmt.Errorf("failed to scan declaration: %w", err)
}
decls = append(decls, decl)
}
return decls, rows.Err()
}
// GetFileWithRange retrieves a file with a specified line range.
func (s *PostgresStore) GetFileWithRange(ctx context.Context, revisionID int64, path string, r FileRange) (*FileResult, error) {
file, err := s.GetFile(ctx, revisionID, path)
if err != nil {
return nil, err
}
if file == nil {
return nil, nil
}
return applyLineRange(file, r), nil
}
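`applyLineRange` is referenced here but defined elsewhere. Based on the FileRange and FileResult contracts declared earlier in the diff (0-based Offset, Limit defaulting to 250, 1-based StartLine/EndLine), an assumed implementation might look like this, with the store types mirrored locally so the sketch compiles on its own:

```go
package main

import (
	"fmt"
	"strings"
)

// Local mirrors of the store types for a standalone sketch.
type File struct{ Content string }

type FileRange struct {
	Offset int // line offset (0-based)
	Limit  int // max lines to return (0 = default 250)
}

type FileResult struct {
	*File
	TotalLines int
	StartLine  int // 1-based
	EndLine    int // 1-based
}

const defaultLineLimit = 250

// applyLineRange slices file content to the requested line window and
// records range metadata. Assumed implementation; not from this diff.
func applyLineRange(file *File, r FileRange) *FileResult {
	lines := strings.Split(file.Content, "\n")
	total := len(lines)

	limit := r.Limit
	if limit <= 0 {
		limit = defaultLineLimit
	}
	start := r.Offset
	if start < 0 {
		start = 0
	}
	if start > total {
		start = total
	}
	end := start + limit
	if end > total {
		end = total
	}

	return &FileResult{
		File:       &File{Content: strings.Join(lines[start:end], "\n")},
		TotalLines: total,
		StartLine:  start + 1,
		EndLine:    end,
	}
}

func main() {
	res := applyLineRange(&File{Content: "a\nb\nc\nd"}, FileRange{Offset: 1, Limit: 2})
	fmt.Println(res.TotalLines, res.StartLine, res.EndLine)
}
```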
// CreatePackage creates a new package record.
func (s *PostgresStore) CreatePackage(ctx context.Context, pkg *Package) error {
err := s.db.QueryRowContext(ctx, `
INSERT INTO packages (revision_id, attr_path, pname, version, description, long_description, homepage, license, platforms, maintainers, broken, unfree, insecure)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)
RETURNING id`,
pkg.RevisionID, pkg.AttrPath, pkg.Pname, pkg.Version, pkg.Description, pkg.LongDescription, pkg.Homepage, pkg.License, pkg.Platforms, pkg.Maintainers, pkg.Broken, pkg.Unfree, pkg.Insecure,
).Scan(&pkg.ID)
if err != nil {
return fmt.Errorf("failed to create package: %w", err)
}
return nil
}
// CreatePackagesBatch creates multiple packages in a batch.
func (s *PostgresStore) CreatePackagesBatch(ctx context.Context, pkgs []*Package) error {
tx, err := s.db.BeginTx(ctx, nil)
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback() //nolint:errcheck // Rollback after a successful Commit returns an error, which is expected
stmt, err := tx.PrepareContext(ctx, `
INSERT INTO packages (revision_id, attr_path, pname, version, description, long_description, homepage, license, platforms, maintainers, broken, unfree, insecure)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)
RETURNING id`)
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close() //nolint:errcheck // statement closed with transaction
for _, pkg := range pkgs {
err := stmt.QueryRowContext(ctx,
pkg.RevisionID, pkg.AttrPath, pkg.Pname, pkg.Version, pkg.Description, pkg.LongDescription, pkg.Homepage, pkg.License, pkg.Platforms, pkg.Maintainers, pkg.Broken, pkg.Unfree, pkg.Insecure,
).Scan(&pkg.ID)
if err != nil {
return fmt.Errorf("failed to insert package %s: %w", pkg.AttrPath, err)
}
}
return tx.Commit()
}
// GetPackage retrieves a package by revision and attr_path.
func (s *PostgresStore) GetPackage(ctx context.Context, revisionID int64, attrPath string) (*Package, error) {
pkg := &Package{}
err := s.db.QueryRowContext(ctx, `
SELECT id, revision_id, attr_path, pname, version, description, long_description, homepage, license, platforms, maintainers, broken, unfree, insecure
FROM packages WHERE revision_id = $1 AND attr_path = $2`, revisionID, attrPath,
).Scan(&pkg.ID, &pkg.RevisionID, &pkg.AttrPath, &pkg.Pname, &pkg.Version, &pkg.Description, &pkg.LongDescription, &pkg.Homepage, &pkg.License, &pkg.Platforms, &pkg.Maintainers, &pkg.Broken, &pkg.Unfree, &pkg.Insecure)
if err == sql.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("failed to get package: %w", err)
}
return pkg, nil
}
// SearchPackages searches for packages matching a query.
func (s *PostgresStore) SearchPackages(ctx context.Context, revisionID int64, query string, filters PackageSearchFilters) ([]*Package, error) {
// Results are ordered by exact-match priority:
// - Priority 0: exact pname match
// - Priority 1: exact attr_path match
// - Priority 2: pname starts with query
// - Priority 3: attr_path starts with query
// - Priority 4: FTS match (ordered by ts_rank)
baseQuery := `
SELECT id, revision_id, attr_path, pname, version, description, long_description, homepage, license, platforms, maintainers, broken, unfree, insecure
FROM packages
WHERE revision_id = $1
AND to_tsvector('english', attr_path || ' ' || pname || ' ' || COALESCE(description, '')) @@ plainto_tsquery('english', $2)`
args := []interface{}{revisionID, query}
argNum := 3
if filters.Broken != nil {
baseQuery += fmt.Sprintf(" AND broken = $%d", argNum)
args = append(args, *filters.Broken)
argNum++
}
if filters.Unfree != nil {
baseQuery += fmt.Sprintf(" AND unfree = $%d", argNum)
args = append(args, *filters.Unfree)
argNum++
}
if filters.Insecure != nil {
baseQuery += fmt.Sprintf(" AND insecure = $%d", argNum)
args = append(args, *filters.Insecure)
argNum++
}
// Order by exact match priority, then ts_rank, then attr_path
// CASE returns priority (lower = better), ts_rank returns positive scores (higher = better, so DESC)
baseQuery += fmt.Sprintf(` ORDER BY
CASE
WHEN pname = $%d THEN 0
WHEN attr_path = $%d THEN 1
WHEN pname LIKE $%d THEN 2
WHEN attr_path LIKE $%d THEN 3
ELSE 4
END,
ts_rank(to_tsvector('english', attr_path || ' ' || pname || ' ' || COALESCE(description, '')), plainto_tsquery('english', $2)) DESC,
attr_path`, argNum, argNum+1, argNum+2, argNum+3)
// For LIKE comparisons, escape % and _ characters for PostgreSQL
likeQuery := strings.ReplaceAll(strings.ReplaceAll(query, "%", "\\%"), "_", "\\_") + "%"
args = append(args, query, query, likeQuery, likeQuery)
if filters.Limit > 0 {
baseQuery += fmt.Sprintf(" LIMIT %d", filters.Limit)
}
if filters.Offset > 0 {
baseQuery += fmt.Sprintf(" OFFSET %d", filters.Offset)
}
rows, err := s.db.QueryContext(ctx, baseQuery, args...)
if err != nil {
return nil, fmt.Errorf("failed to search packages: %w", err)
}
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var packages []*Package
for rows.Next() {
pkg := &Package{}
if err := rows.Scan(&pkg.ID, &pkg.RevisionID, &pkg.AttrPath, &pkg.Pname, &pkg.Version, &pkg.Description, &pkg.LongDescription, &pkg.Homepage, &pkg.License, &pkg.Platforms, &pkg.Maintainers, &pkg.Broken, &pkg.Unfree, &pkg.Insecure); err != nil {
return nil, fmt.Errorf("failed to scan package: %w", err)
}
packages = append(packages, pkg)
}
return packages, rows.Err()
}
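The LIKE-escaping step embedded in SearchPackages can be factored out and exercised in isolation. A minimal sketch (`escapeLike` is a hypothetical name; the original inlines this logic):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLike escapes LIKE metacharacters (% and _) so user input
// matches literally, then appends % for prefix matching, mirroring the
// likeQuery construction in SearchPackages.
func escapeLike(q string) string {
	q = strings.ReplaceAll(q, "%", `\%`)
	q = strings.ReplaceAll(q, "_", `\_`)
	return q + "%"
}

func main() {
	fmt.Println(escapeLike("python_312"))
}
```

Without the escaping, an underscore in the query would act as a single-character wildcard, so `python_312` would also prefix-match attr paths like `pythonX312`.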
// UpdateRevisionPackageCount updates the package count for a revision.
func (s *PostgresStore) UpdateRevisionPackageCount(ctx context.Context, id int64, count int) error {
_, err := s.db.ExecContext(ctx,
"UPDATE revisions SET package_count = $1 WHERE id = $2", count, id)
if err != nil {
return fmt.Errorf("failed to update package count: %w", err)
}
return nil
}

View File

@@ -2,7 +2,7 @@ package database
// SchemaVersion is the current database schema version.
// When this changes, the database will be dropped and recreated.
const SchemaVersion = 1
const SchemaVersion = 3
// Common SQL statements shared between implementations.
const (
@@ -20,7 +20,8 @@ const (
channel_name TEXT,
commit_date TIMESTAMP,
indexed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
option_count INTEGER NOT NULL DEFAULT 0
option_count INTEGER NOT NULL DEFAULT 0,
package_count INTEGER NOT NULL DEFAULT 0
)`
// OptionsTable creates the options table.
@@ -53,7 +54,28 @@ const (
revision_id INTEGER NOT NULL REFERENCES revisions(id) ON DELETE CASCADE,
file_path TEXT NOT NULL,
extension TEXT,
content TEXT NOT NULL
content TEXT NOT NULL,
byte_size INTEGER NOT NULL DEFAULT 0,
line_count INTEGER NOT NULL DEFAULT 0
)`
// PackagesTable creates the packages table.
PackagesTable = `
CREATE TABLE IF NOT EXISTS packages (
id INTEGER PRIMARY KEY,
revision_id INTEGER NOT NULL REFERENCES revisions(id) ON DELETE CASCADE,
attr_path TEXT NOT NULL,
pname TEXT NOT NULL,
version TEXT,
description TEXT,
long_description TEXT,
homepage TEXT,
license TEXT,
platforms TEXT,
maintainers TEXT,
broken BOOLEAN NOT NULL DEFAULT FALSE,
unfree BOOLEAN NOT NULL DEFAULT FALSE,
insecure BOOLEAN NOT NULL DEFAULT FALSE
)`
)
@@ -78,6 +100,16 @@ const (
IndexDeclarationsOption = `
CREATE INDEX IF NOT EXISTS idx_declarations_option
ON declarations(option_id)`
// IndexPackagesRevisionAttr creates an index on packages(revision_id, attr_path).
IndexPackagesRevisionAttr = `
CREATE UNIQUE INDEX IF NOT EXISTS idx_packages_revision_attr
ON packages(revision_id, attr_path)`
// IndexPackagesRevisionPname creates an index on packages(revision_id, pname).
IndexPackagesRevisionPname = `
CREATE INDEX IF NOT EXISTS idx_packages_revision_pname
ON packages(revision_id, pname)`
)
// Drop statements for schema recreation.
@@ -85,6 +117,7 @@ const (
DropSchemaInfo = `DROP TABLE IF EXISTS schema_info`
DropDeclarations = `DROP TABLE IF EXISTS declarations`
DropOptions = `DROP TABLE IF EXISTS options`
DropPackages = `DROP TABLE IF EXISTS packages`
DropFiles = `DROP TABLE IF EXISTS files`
DropRevisions = `DROP TABLE IF EXISTS revisions`
)

View File

@@ -23,7 +23,7 @@ func NewSQLiteStore(path string) (*SQLiteStore, error) {
// Enable foreign keys
if _, err := db.Exec("PRAGMA foreign_keys = ON"); err != nil {
db.Close()
db.Close() //nolint:errcheck // best-effort cleanup on connection failure
return nil, fmt.Errorf("failed to enable foreign keys: %w", err)
}
@@ -44,10 +44,12 @@ func (s *SQLiteStore) Initialize(ctx context.Context) error {
dropStmts := []string{
DropDeclarations,
DropOptions,
DropPackages,
DropFiles,
DropRevisions,
DropSchemaInfo,
"DROP TABLE IF EXISTS options_fts",
"DROP TABLE IF EXISTS packages_fts",
}
for _, stmt := range dropStmts {
if _, err := s.db.ExecContext(ctx, stmt); err != nil {
@@ -63,10 +65,13 @@ func (s *SQLiteStore) Initialize(ctx context.Context) error {
OptionsTable,
DeclarationsTable,
FilesTable,
PackagesTable,
IndexOptionsRevisionName,
IndexOptionsRevisionParent,
IndexFilesRevisionPath,
IndexDeclarationsOption,
IndexPackagesRevisionAttr,
IndexPackagesRevisionPname,
}
for _, stmt := range createStmts {
@@ -88,8 +93,8 @@ func (s *SQLiteStore) Initialize(ctx context.Context) error {
return fmt.Errorf("failed to create FTS table: %w", err)
}
// Create triggers to keep FTS in sync
triggers := []string{
// Create triggers to keep options FTS in sync
optionsTriggers := []string{
`CREATE TRIGGER IF NOT EXISTS options_ai AFTER INSERT ON options BEGIN
INSERT INTO options_fts(rowid, name, description) VALUES (new.id, new.name, new.description);
END`,
@@ -101,9 +106,42 @@ func (s *SQLiteStore) Initialize(ctx context.Context) error {
INSERT INTO options_fts(rowid, name, description) VALUES (new.id, new.name, new.description);
END`,
}
for _, trigger := range triggers {
for _, trigger := range optionsTriggers {
if _, err := s.db.ExecContext(ctx, trigger); err != nil {
return fmt.Errorf("failed to create trigger: %w", err)
return fmt.Errorf("failed to create options trigger: %w", err)
}
}
// Create FTS5 virtual table for packages full-text search
_, err = s.db.ExecContext(ctx, `
CREATE VIRTUAL TABLE IF NOT EXISTS packages_fts USING fts5(
attr_path,
pname,
description,
content='packages',
content_rowid='id'
)
`)
if err != nil {
return fmt.Errorf("failed to create packages FTS table: %w", err)
}
// Create triggers to keep packages FTS in sync
packagesTriggers := []string{
`CREATE TRIGGER IF NOT EXISTS packages_ai AFTER INSERT ON packages BEGIN
INSERT INTO packages_fts(rowid, attr_path, pname, description) VALUES (new.id, new.attr_path, new.pname, new.description);
END`,
`CREATE TRIGGER IF NOT EXISTS packages_ad AFTER DELETE ON packages BEGIN
INSERT INTO packages_fts(packages_fts, rowid, attr_path, pname, description) VALUES('delete', old.id, old.attr_path, old.pname, old.description);
END`,
`CREATE TRIGGER IF NOT EXISTS packages_au AFTER UPDATE ON packages BEGIN
INSERT INTO packages_fts(packages_fts, rowid, attr_path, pname, description) VALUES('delete', old.id, old.attr_path, old.pname, old.description);
INSERT INTO packages_fts(rowid, attr_path, pname, description) VALUES (new.id, new.attr_path, new.pname, new.description);
END`,
}
for _, trigger := range packagesTriggers {
if _, err := s.db.ExecContext(ctx, trigger); err != nil {
return fmt.Errorf("failed to create packages trigger: %w", err)
}
}
@@ -127,9 +165,9 @@ func (s *SQLiteStore) Close() error {
// CreateRevision creates a new revision record.
func (s *SQLiteStore) CreateRevision(ctx context.Context, rev *Revision) error {
result, err := s.db.ExecContext(ctx, `
INSERT INTO revisions (git_hash, channel_name, commit_date, option_count)
VALUES (?, ?, ?, ?)`,
rev.GitHash, rev.ChannelName, rev.CommitDate, rev.OptionCount,
INSERT INTO revisions (git_hash, channel_name, commit_date, option_count, package_count)
VALUES (?, ?, ?, ?, ?)`,
rev.GitHash, rev.ChannelName, rev.CommitDate, rev.OptionCount, rev.PackageCount,
)
if err != nil {
return fmt.Errorf("failed to create revision: %w", err)
@@ -155,9 +193,9 @@ func (s *SQLiteStore) CreateRevision(ctx context.Context, rev *Revision) error {
func (s *SQLiteStore) GetRevision(ctx context.Context, gitHash string) (*Revision, error) {
rev := &Revision{}
err := s.db.QueryRowContext(ctx, `
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count, package_count
FROM revisions WHERE git_hash = ?`, gitHash,
).Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount)
).Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount, &rev.PackageCount)
if err == sql.ErrNoRows {
return nil, nil
}
@@ -171,10 +209,10 @@ func (s *SQLiteStore) GetRevision(ctx context.Context, gitHash string) (*Revisio
func (s *SQLiteStore) GetRevisionByChannel(ctx context.Context, channel string) (*Revision, error) {
rev := &Revision{}
err := s.db.QueryRowContext(ctx, `
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count, package_count
FROM revisions WHERE channel_name = ?
ORDER BY indexed_at DESC LIMIT 1`, channel,
).Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount)
).Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount, &rev.PackageCount)
if err == sql.ErrNoRows {
return nil, nil
}
@@ -187,17 +225,17 @@ func (s *SQLiteStore) GetRevisionByChannel(ctx context.Context, channel string)
// ListRevisions returns all indexed revisions.
func (s *SQLiteStore) ListRevisions(ctx context.Context) ([]*Revision, error) {
rows, err := s.db.QueryContext(ctx, `
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count
SELECT id, git_hash, channel_name, commit_date, indexed_at, option_count, package_count
FROM revisions ORDER BY indexed_at DESC`)
if err != nil {
return nil, fmt.Errorf("failed to list revisions: %w", err)
}
defer rows.Close()
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var revisions []*Revision
for rows.Next() {
rev := &Revision{}
if err := rows.Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount); err != nil {
if err := rows.Scan(&rev.ID, &rev.GitHash, &rev.ChannelName, &rev.CommitDate, &rev.IndexedAt, &rev.OptionCount, &rev.PackageCount); err != nil {
return nil, fmt.Errorf("failed to scan revision: %w", err)
}
revisions = append(revisions, rev)
@@ -249,7 +287,7 @@ func (s *SQLiteStore) CreateOptionsBatch(ctx context.Context, opts []*Option) er
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback()
defer tx.Rollback() //nolint:errcheck // Rollback after a successful Commit returns an error, which is expected
stmt, err := tx.PrepareContext(ctx, `
INSERT INTO options (revision_id, name, parent_path, type, default_value, example, description, read_only)
@@ -257,7 +295,7 @@ func (s *SQLiteStore) CreateOptionsBatch(ctx context.Context, opts []*Option) er
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close()
defer stmt.Close() //nolint:errcheck // statement closed with transaction
for _, opt := range opts {
result, err := stmt.ExecContext(ctx,
@@ -301,7 +339,7 @@ func (s *SQLiteStore) GetChildren(ctx context.Context, revisionID int64, parentP
if err != nil {
return nil, fmt.Errorf("failed to get children: %w", err)
}
defer rows.Close()
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var options []*Option
for rows.Next() {
@@ -384,7 +422,7 @@ func (s *SQLiteStore) SearchOptions(ctx context.Context, revisionID int64, query
if err != nil {
return nil, fmt.Errorf("failed to search options: %w", err)
}
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var options []*Option
for rows.Next() {
@@ -422,7 +460,7 @@ func (s *SQLiteStore) CreateDeclarationsBatch(ctx context.Context, decls []*Decl
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback() //nolint:errcheck // rollback after commit returns error, which is expected
stmt, err := tx.PrepareContext(ctx, `
INSERT INTO declarations (option_id, file_path, line)
@@ -430,7 +468,7 @@ func (s *SQLiteStore) CreateDeclarationsBatch(ctx context.Context, decls []*Decl
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close() //nolint:errcheck // statement closed with transaction
for _, decl := range decls {
result, err := stmt.ExecContext(ctx, decl.OptionID, decl.FilePath, decl.Line)
@@ -455,7 +493,7 @@ func (s *SQLiteStore) GetDeclarations(ctx context.Context, optionID int64) ([]*D
if err != nil {
return nil, fmt.Errorf("failed to get declarations: %w", err)
}
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var decls []*Declaration
for rows.Next() {
@@ -470,10 +508,18 @@ func (s *SQLiteStore) GetDeclarations(ctx context.Context, optionID int64) ([]*D
// CreateFile creates a new file record.
func (s *SQLiteStore) CreateFile(ctx context.Context, file *File) error {
// Compute metadata if not already set
if file.ByteSize == 0 {
file.ByteSize = len(file.Content)
}
if file.LineCount == 0 {
file.LineCount = countLines(file.Content)
}
result, err := s.db.ExecContext(ctx, `
INSERT INTO files (revision_id, file_path, extension, content, byte_size, line_count)
VALUES (?, ?, ?, ?, ?, ?)`,
file.RevisionID, file.FilePath, file.Extension, file.Content, file.ByteSize, file.LineCount,
)
if err != nil {
return fmt.Errorf("failed to create file: %w", err)
@@ -493,18 +539,26 @@ func (s *SQLiteStore) CreateFilesBatch(ctx context.Context, files []*File) error
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback() //nolint:errcheck // rollback after commit returns error, which is expected
stmt, err := tx.PrepareContext(ctx, `
INSERT INTO files (revision_id, file_path, extension, content, byte_size, line_count)
VALUES (?, ?, ?, ?, ?, ?)`)
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close() //nolint:errcheck // statement closed with transaction
for _, file := range files {
// Compute metadata if not already set
if file.ByteSize == 0 {
file.ByteSize = len(file.Content)
}
if file.LineCount == 0 {
file.LineCount = countLines(file.Content)
}
result, err := stmt.ExecContext(ctx, file.RevisionID, file.FilePath, file.Extension, file.Content, file.ByteSize, file.LineCount)
if err != nil {
return fmt.Errorf("failed to insert file: %w", err)
}
@@ -522,9 +576,9 @@ func (s *SQLiteStore) CreateFilesBatch(ctx context.Context, files []*File) error
func (s *SQLiteStore) GetFile(ctx context.Context, revisionID int64, path string) (*File, error) {
file := &File{}
err := s.db.QueryRowContext(ctx, `
SELECT id, revision_id, file_path, extension, content, byte_size, line_count
FROM files WHERE revision_id = ? AND file_path = ?`, revisionID, path,
).Scan(&file.ID, &file.RevisionID, &file.FilePath, &file.Extension, &file.Content, &file.ByteSize, &file.LineCount)
if err == sql.ErrNoRows {
return nil, nil
}
@@ -533,3 +587,262 @@ func (s *SQLiteStore) GetFile(ctx context.Context, revisionID int64, path string
}
return file, nil
}
// GetDeclarationsWithMetadata retrieves declarations with file metadata for an option.
func (s *SQLiteStore) GetDeclarationsWithMetadata(ctx context.Context, revisionID, optionID int64) ([]*DeclarationWithMetadata, error) {
rows, err := s.db.QueryContext(ctx, `
SELECT d.id, d.option_id, d.file_path, d.line,
COALESCE(f.byte_size, 0), COALESCE(f.line_count, 0), (f.id IS NOT NULL)
FROM declarations d
LEFT JOIN files f ON f.revision_id = ? AND f.file_path = d.file_path
WHERE d.option_id = ?`, revisionID, optionID)
if err != nil {
return nil, fmt.Errorf("failed to get declarations with metadata: %w", err)
}
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var decls []*DeclarationWithMetadata
for rows.Next() {
decl := &DeclarationWithMetadata{}
if err := rows.Scan(&decl.ID, &decl.OptionID, &decl.FilePath, &decl.Line,
&decl.ByteSize, &decl.LineCount, &decl.HasFile); err != nil {
return nil, fmt.Errorf("failed to scan declaration: %w", err)
}
decls = append(decls, decl)
}
return decls, rows.Err()
}
// GetFileWithRange retrieves a file with a specified line range.
func (s *SQLiteStore) GetFileWithRange(ctx context.Context, revisionID int64, path string, r FileRange) (*FileResult, error) {
file, err := s.GetFile(ctx, revisionID, path)
if err != nil {
return nil, err
}
if file == nil {
return nil, nil
}
return applyLineRange(file, r), nil
}
// CreatePackage creates a new package record.
func (s *SQLiteStore) CreatePackage(ctx context.Context, pkg *Package) error {
result, err := s.db.ExecContext(ctx, `
INSERT INTO packages (revision_id, attr_path, pname, version, description, long_description, homepage, license, platforms, maintainers, broken, unfree, insecure)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
pkg.RevisionID, pkg.AttrPath, pkg.Pname, pkg.Version, pkg.Description, pkg.LongDescription, pkg.Homepage, pkg.License, pkg.Platforms, pkg.Maintainers, pkg.Broken, pkg.Unfree, pkg.Insecure,
)
if err != nil {
return fmt.Errorf("failed to create package: %w", err)
}
id, err := result.LastInsertId()
if err != nil {
return fmt.Errorf("failed to get last insert id: %w", err)
}
pkg.ID = id
return nil
}
// CreatePackagesBatch creates multiple packages in a batch.
func (s *SQLiteStore) CreatePackagesBatch(ctx context.Context, pkgs []*Package) error {
tx, err := s.db.BeginTx(ctx, nil)
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback() //nolint:errcheck // rollback after commit returns error, which is expected
stmt, err := tx.PrepareContext(ctx, `
INSERT INTO packages (revision_id, attr_path, pname, version, description, long_description, homepage, license, platforms, maintainers, broken, unfree, insecure)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`)
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close() //nolint:errcheck // statement closed with transaction
for _, pkg := range pkgs {
result, err := stmt.ExecContext(ctx,
pkg.RevisionID, pkg.AttrPath, pkg.Pname, pkg.Version, pkg.Description, pkg.LongDescription, pkg.Homepage, pkg.License, pkg.Platforms, pkg.Maintainers, pkg.Broken, pkg.Unfree, pkg.Insecure,
)
if err != nil {
return fmt.Errorf("failed to insert package %s: %w", pkg.AttrPath, err)
}
id, err := result.LastInsertId()
if err != nil {
return fmt.Errorf("failed to get last insert id: %w", err)
}
pkg.ID = id
}
return tx.Commit()
}
// GetPackage retrieves a package by revision and attr_path.
func (s *SQLiteStore) GetPackage(ctx context.Context, revisionID int64, attrPath string) (*Package, error) {
pkg := &Package{}
err := s.db.QueryRowContext(ctx, `
SELECT id, revision_id, attr_path, pname, version, description, long_description, homepage, license, platforms, maintainers, broken, unfree, insecure
FROM packages WHERE revision_id = ? AND attr_path = ?`, revisionID, attrPath,
).Scan(&pkg.ID, &pkg.RevisionID, &pkg.AttrPath, &pkg.Pname, &pkg.Version, &pkg.Description, &pkg.LongDescription, &pkg.Homepage, &pkg.License, &pkg.Platforms, &pkg.Maintainers, &pkg.Broken, &pkg.Unfree, &pkg.Insecure)
if err == sql.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("failed to get package: %w", err)
}
return pkg, nil
}
// SearchPackages searches for packages matching a query.
func (s *SQLiteStore) SearchPackages(ctx context.Context, revisionID int64, query string, filters PackageSearchFilters) ([]*Package, error) {
// Results are ranked by match priority (lower is better):
// - Priority 0: exact pname match
// - Priority 1: exact attr_path match
// - Priority 2: pname starts with query
// - Priority 3: attr_path starts with query
// - Priority 4: FTS match (ordered by bm25 rank)
baseQuery := `
SELECT p.id, p.revision_id, p.attr_path, p.pname, p.version, p.description, p.long_description, p.homepage, p.license, p.platforms, p.maintainers, p.broken, p.unfree, p.insecure
FROM packages p
INNER JOIN packages_fts fts ON p.id = fts.rowid
WHERE p.revision_id = ?
AND packages_fts MATCH ?`
// Escape the query for FTS5 by wrapping in double quotes for literal matching.
escapedQuery := `"` + strings.ReplaceAll(query, `"`, `""`) + `"`
// For LIKE comparisons, escape % and _ characters
likeQuery := strings.ReplaceAll(strings.ReplaceAll(query, "%", "\\%"), "_", "\\_")
args := []interface{}{revisionID, escapedQuery}
if filters.Broken != nil {
baseQuery += " AND p.broken = ?"
args = append(args, *filters.Broken)
}
if filters.Unfree != nil {
baseQuery += " AND p.unfree = ?"
args = append(args, *filters.Unfree)
}
if filters.Insecure != nil {
baseQuery += " AND p.insecure = ?"
args = append(args, *filters.Insecure)
}
// Order by exact match priority, then FTS5 rank, then attr_path
// CASE returns priority (lower = better), bm25 returns negative scores (lower = better)
baseQuery += ` ORDER BY
CASE
WHEN p.pname = ? THEN 0
WHEN p.attr_path = ? THEN 1
WHEN p.pname LIKE ? ESCAPE '\' THEN 2
WHEN p.attr_path LIKE ? ESCAPE '\' THEN 3
ELSE 4
END,
bm25(packages_fts),
p.attr_path`
args = append(args, query, query, likeQuery+"%", likeQuery+"%")
if filters.Limit > 0 {
baseQuery += fmt.Sprintf(" LIMIT %d", filters.Limit)
}
if filters.Offset > 0 {
baseQuery += fmt.Sprintf(" OFFSET %d", filters.Offset)
}
rows, err := s.db.QueryContext(ctx, baseQuery, args...)
if err != nil {
return nil, fmt.Errorf("failed to search packages: %w", err)
}
defer rows.Close() //nolint:errcheck // rows.Err() checked after iteration
var packages []*Package
for rows.Next() {
pkg := &Package{}
if err := rows.Scan(&pkg.ID, &pkg.RevisionID, &pkg.AttrPath, &pkg.Pname, &pkg.Version, &pkg.Description, &pkg.LongDescription, &pkg.Homepage, &pkg.License, &pkg.Platforms, &pkg.Maintainers, &pkg.Broken, &pkg.Unfree, &pkg.Insecure); err != nil {
return nil, fmt.Errorf("failed to scan package: %w", err)
}
packages = append(packages, pkg)
}
return packages, rows.Err()
}
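The two escaping schemes above serve different operators: FTS5 needs the query quoted as a literal phrase, while the `LIKE ? ESCAPE '\'` prefix comparisons need `%` and `_` neutralized. A minimal standalone sketch of both transformations (function names are illustrative, not part of the store API):

```go
package main

import (
	"fmt"
	"strings"
)

// ftsLiteral quotes a user query for FTS5 literal matching:
// wrap in double quotes and double any embedded quotes.
func ftsLiteral(q string) string {
	return `"` + strings.ReplaceAll(q, `"`, `""`) + `"`
}

// likePrefix escapes LIKE wildcards so they match literally under
// ESCAPE '\', then appends the prefix wildcard.
func likePrefix(q string) string {
	q = strings.ReplaceAll(q, "%", `\%`)
	q = strings.ReplaceAll(q, "_", `\_`)
	return q + "%"
}

func main() {
	fmt.Println(ftsLiteral(`go"lang`)) // "go""lang"
	fmt.Println(likePrefix("my_pkg"))  // my\_pkg%
}
```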
// UpdateRevisionPackageCount updates the package count for a revision.
func (s *SQLiteStore) UpdateRevisionPackageCount(ctx context.Context, id int64, count int) error {
_, err := s.db.ExecContext(ctx,
"UPDATE revisions SET package_count = ? WHERE id = ?", count, id)
if err != nil {
return fmt.Errorf("failed to update package count: %w", err)
}
return nil
}
// countLines counts the number of lines in content.
func countLines(content string) int {
if content == "" {
return 0
}
count := 1
for _, c := range content {
if c == '\n' {
count++
}
}
// Don't count trailing newline as extra line
if len(content) > 0 && content[len(content)-1] == '\n' {
count--
}
return count
}
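The trailing-newline rule above is easy to get wrong. A self-contained sketch of the same counting logic (the helper name mirrors the store's unexported countLines):

```go
package main

import (
	"fmt"
	"strings"
)

// countLines mirrors the store helper: one line per newline plus the
// final line, but a trailing newline does not start an extra line.
func countLines(content string) int {
	if content == "" {
		return 0
	}
	n := strings.Count(content, "\n") + 1
	if strings.HasSuffix(content, "\n") {
		n--
	}
	return n
}

func main() {
	fmt.Println(countLines(""))        // 0
	fmt.Println(countLines("a"))       // 1
	fmt.Println(countLines("a\nb\n"))  // 2
	fmt.Println(countLines("a\nb\nc")) // 3
}
```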
// applyLineRange extracts a range of lines from a file.
func applyLineRange(file *File, r FileRange) *FileResult {
lines := strings.Split(file.Content, "\n")
totalLines := len(lines)
// Handle trailing newline
if totalLines > 0 && lines[totalLines-1] == "" {
totalLines--
lines = lines[:totalLines]
}
// Apply defaults
offset := r.Offset
if offset < 0 {
offset = 0
}
limit := r.Limit
if limit <= 0 {
limit = 250 // Default limit
}
// Calculate range
startLine := offset + 1 // 1-based
if offset >= totalLines {
// Beyond end of file
return &FileResult{
File: &File{ID: file.ID, RevisionID: file.RevisionID, FilePath: file.FilePath, Extension: file.Extension, Content: "", ByteSize: file.ByteSize, LineCount: file.LineCount},
TotalLines: totalLines,
StartLine: 0,
EndLine: 0,
}
}
endIdx := offset + limit
if endIdx > totalLines {
endIdx = totalLines
}
endLine := endIdx // 1-based (last line included)
// Extract lines
selectedLines := lines[offset:endIdx]
content := strings.Join(selectedLines, "\n")
return &FileResult{
File: &File{ID: file.ID, RevisionID: file.RevisionID, FilePath: file.FilePath, Extension: file.Extension, Content: content, ByteSize: file.ByteSize, LineCount: file.LineCount},
TotalLines: totalLines,
StartLine: startLine,
EndLine: endLine,
}
}
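The offset/limit math in applyLineRange can be sketched independently of the store types; a hedged, self-contained version (the 0-based offset, 1-based line numbers, and 250-line default are taken from the code above, the function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceLines applies a 0-based offset and a limit to content,
// returning the selected text plus 1-based start/end line numbers
// (0, 0 when the offset is past the end of the file).
func sliceLines(content string, offset, limit int) (string, int, int) {
	lines := strings.Split(content, "\n")
	total := len(lines)
	if total > 0 && lines[total-1] == "" { // drop trailing newline artifact
		total--
		lines = lines[:total]
	}
	if offset < 0 {
		offset = 0
	}
	if limit <= 0 {
		limit = 250 // default page size, as in applyLineRange
	}
	if offset >= total {
		return "", 0, 0
	}
	end := offset + limit
	if end > total {
		end = total
	}
	return strings.Join(lines[offset:end], "\n"), offset + 1, end
}

func main() {
	text, start, end := sliceLines("a\nb\nc\nd\n", 1, 2)
	fmt.Printf("%q lines %d-%d\n", text, start, end) // "b\nc" lines 2-3
}
```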


@@ -0,0 +1,570 @@
package gitexplorer
import (
"errors"
"fmt"
"io"
"strings"
"github.com/go-git/go-git/v5"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/object"
)
var (
// ErrNotFound is returned when a ref, commit, or file is not found.
ErrNotFound = errors.New("not found")
// ErrFileTooLarge is returned when a file exceeds the size limit.
ErrFileTooLarge = errors.New("file too large")
)
// GitClient provides read-only access to a git repository.
type GitClient struct {
repo *git.Repository
defaultRemote string
}
// NewGitClient opens a git repository at the given path.
func NewGitClient(repoPath string, defaultRemote string) (*GitClient, error) {
repo, err := git.PlainOpen(repoPath)
if err != nil {
return nil, fmt.Errorf("failed to open repository: %w", err)
}
if defaultRemote == "" {
defaultRemote = "origin"
}
return &GitClient{
repo: repo,
defaultRemote: defaultRemote,
}, nil
}
// ResolveRef resolves a ref (branch, tag, or commit hash) to a commit hash.
func (c *GitClient) ResolveRef(ref string) (*ResolveResult, error) {
result := &ResolveResult{Ref: ref}
// Try to resolve as a revision
hash, err := c.repo.ResolveRevision(plumbing.Revision(ref))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, ref)
}
result.Commit = hash.String()
// Determine the type of ref
// Check if it's a branch
if _, err := c.repo.Reference(plumbing.NewBranchReferenceName(ref), true); err == nil {
result.Type = "branch"
return result, nil
}
// Check if it's a remote branch
if _, err := c.repo.Reference(plumbing.NewRemoteReferenceName(c.defaultRemote, ref), true); err == nil {
result.Type = "branch"
return result, nil
}
// Check if it's a tag
if _, err := c.repo.Reference(plumbing.NewTagReferenceName(ref), true); err == nil {
result.Type = "tag"
return result, nil
}
// Default to commit
result.Type = "commit"
return result, nil
}
// GetLog returns the commit log starting from the given ref.
func (c *GitClient) GetLog(ref string, limit int, author string, since string, path string) ([]LogEntry, error) {
if limit <= 0 || limit > Limits.MaxLogEntries {
limit = Limits.MaxLogEntries
}
// Resolve the ref to a commit hash
hash, err := c.repo.ResolveRevision(plumbing.Revision(ref))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, ref)
}
logOpts := &git.LogOptions{
From: *hash,
}
// Add path filter if specified
if path != "" {
if err := ValidatePath(path); err != nil {
return nil, err
}
logOpts.PathFilter = func(p string) bool {
return strings.HasPrefix(p, path) || p == path
}
}
iter, err := c.repo.Log(logOpts)
if err != nil {
return nil, fmt.Errorf("failed to get log: %w", err)
}
defer iter.Close()
var entries []LogEntry
err = iter.ForEach(func(commit *object.Commit) error {
// Apply author filter
if author != "" {
authorLower := strings.ToLower(author)
if !strings.Contains(strings.ToLower(commit.Author.Name), authorLower) &&
!strings.Contains(strings.ToLower(commit.Author.Email), authorLower) {
return nil
}
}
// Apply since filter
if since != "" {
// Parse since as a ref and check if this commit is reachable
sinceHash, err := c.repo.ResolveRevision(plumbing.Revision(since))
if err == nil {
// Stop if we've reached the since commit
if commit.Hash == *sinceHash {
return io.EOF
}
}
}
// Get first line of commit message as subject
subject := commit.Message
if idx := strings.Index(subject, "\n"); idx != -1 {
subject = subject[:idx]
}
subject = strings.TrimSpace(subject)
entries = append(entries, LogEntry{
Hash: commit.Hash.String(),
ShortHash: commit.Hash.String()[:7],
Author: commit.Author.Name,
Email: commit.Author.Email,
Date: commit.Author.When,
Subject: subject,
})
if len(entries) >= limit {
return io.EOF
}
return nil
})
// io.EOF is expected when we hit the limit
if err != nil && err != io.EOF {
return nil, fmt.Errorf("failed to iterate log: %w", err)
}
return entries, nil
}
// GetCommitInfo returns full details about a commit.
func (c *GitClient) GetCommitInfo(ref string, includeStats bool) (*CommitInfo, error) {
hash, err := c.repo.ResolveRevision(plumbing.Revision(ref))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, ref)
}
commit, err := c.repo.CommitObject(*hash)
if err != nil {
return nil, fmt.Errorf("failed to get commit: %w", err)
}
info := &CommitInfo{
Hash: commit.Hash.String(),
Author: commit.Author.Name,
Email: commit.Author.Email,
Date: commit.Author.When,
Committer: commit.Committer.Name,
CommitDate: commit.Committer.When,
Message: commit.Message,
}
for _, parent := range commit.ParentHashes {
info.Parents = append(info.Parents, parent.String())
}
if includeStats {
stats, err := c.getCommitStats(commit)
if err == nil {
info.Stats = stats
}
}
return info, nil
}
// getCommitStats computes file change statistics for a commit.
func (c *GitClient) getCommitStats(commit *object.Commit) (*FileStats, error) {
stats, err := commit.Stats()
if err != nil {
return nil, err
}
result := &FileStats{
FilesChanged: len(stats),
}
for _, s := range stats {
result.Additions += s.Addition
result.Deletions += s.Deletion
}
return result, nil
}
// GetDiffFiles returns the files changed between two commits.
func (c *GitClient) GetDiffFiles(fromRef, toRef string) (*DiffResult, error) {
fromHash, err := c.repo.ResolveRevision(plumbing.Revision(fromRef))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, fromRef)
}
toHash, err := c.repo.ResolveRevision(plumbing.Revision(toRef))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, toRef)
}
fromCommit, err := c.repo.CommitObject(*fromHash)
if err != nil {
return nil, fmt.Errorf("failed to get from commit: %w", err)
}
toCommit, err := c.repo.CommitObject(*toHash)
if err != nil {
return nil, fmt.Errorf("failed to get to commit: %w", err)
}
patch, err := fromCommit.Patch(toCommit)
if err != nil {
return nil, fmt.Errorf("failed to get patch: %w", err)
}
result := &DiffResult{
FromCommit: fromHash.String(),
ToCommit: toHash.String(),
}
for i, filePatch := range patch.FilePatches() {
if i >= Limits.MaxDiffFiles {
break
}
from, to := filePatch.Files()
df := DiffFile{}
// Determine status and paths
switch {
case from == nil && to != nil:
df.Status = "added"
df.Path = to.Path()
case from != nil && to == nil:
df.Status = "deleted"
df.Path = from.Path()
case from != nil && to != nil && from.Path() != to.Path():
df.Status = "renamed"
df.Path = to.Path()
df.OldPath = from.Path()
default:
df.Status = "modified"
if to != nil {
df.Path = to.Path()
} else if from != nil {
df.Path = from.Path()
}
}
// Count additions and deletions. Count newlines rather than
// splitting, so a chunk's trailing newline is not counted as an
// extra line.
for _, chunk := range filePatch.Chunks() {
content := chunk.Content()
n := strings.Count(content, "\n")
if len(content) > 0 && !strings.HasSuffix(content, "\n") {
n++
}
switch chunk.Type() {
case 1: // Add
df.Additions += n
case 2: // Delete
df.Deletions += n
}
}
result.Files = append(result.Files, df)
}
return result, nil
}
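Counting changed lines from chunk text is subtle because most chunks end in a newline, which naive splitting counts as an extra empty line. A minimal sketch of a counting rule that avoids that off-by-one (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// chunkLines counts lines in a diff chunk: one per newline, plus one
// if the chunk does not end with a newline. A trailing newline
// therefore never produces a phantom empty line.
func chunkLines(content string) int {
	if content == "" {
		return 0
	}
	n := strings.Count(content, "\n")
	if !strings.HasSuffix(content, "\n") {
		n++
	}
	return n
}

func main() {
	fmt.Println(chunkLines("x\ny\n")) // 2
	fmt.Println(chunkLines("x\ny"))   // 2
	fmt.Println(chunkLines(""))       // 0
}
```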
// GetFileAtCommit returns the content of a file at a specific commit.
func (c *GitClient) GetFileAtCommit(ref, path string) (*FileContent, error) {
if err := ValidatePath(path); err != nil {
return nil, err
}
hash, err := c.repo.ResolveRevision(plumbing.Revision(ref))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, ref)
}
commit, err := c.repo.CommitObject(*hash)
if err != nil {
return nil, fmt.Errorf("failed to get commit: %w", err)
}
file, err := commit.File(path)
if err != nil {
return nil, fmt.Errorf("%w: file '%s'", ErrNotFound, path)
}
// Check file size
if file.Size > Limits.MaxFileContent {
return nil, fmt.Errorf("%w: %d bytes (max %d)", ErrFileTooLarge, file.Size, Limits.MaxFileContent)
}
content, err := file.Contents()
if err != nil {
return nil, fmt.Errorf("failed to read file: %w", err)
}
return &FileContent{
Path: path,
Commit: hash.String(),
Size: file.Size,
Content: content,
}, nil
}
// IsAncestor checks if ancestor is an ancestor of descendant.
func (c *GitClient) IsAncestor(ancestorRef, descendantRef string) (*AncestryResult, error) {
ancestorHash, err := c.repo.ResolveRevision(plumbing.Revision(ancestorRef))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, ancestorRef)
}
descendantHash, err := c.repo.ResolveRevision(plumbing.Revision(descendantRef))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, descendantRef)
}
ancestorCommit, err := c.repo.CommitObject(*ancestorHash)
if err != nil {
return nil, fmt.Errorf("failed to get ancestor commit: %w", err)
}
descendantCommit, err := c.repo.CommitObject(*descendantHash)
if err != nil {
return nil, fmt.Errorf("failed to get descendant commit: %w", err)
}
isAncestor, err := ancestorCommit.IsAncestor(descendantCommit)
if err != nil {
return nil, fmt.Errorf("failed to check ancestry: %w", err)
}
return &AncestryResult{
Ancestor: ancestorHash.String(),
Descendant: descendantHash.String(),
IsAncestor: isAncestor,
}, nil
}
// CommitsBetween returns commits between two refs (exclusive of from, inclusive of to).
func (c *GitClient) CommitsBetween(fromRef, toRef string, limit int) (*CommitRange, error) {
if limit <= 0 || limit > Limits.MaxLogEntries {
limit = Limits.MaxLogEntries
}
fromHash, err := c.repo.ResolveRevision(plumbing.Revision(fromRef))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, fromRef)
}
toHash, err := c.repo.ResolveRevision(plumbing.Revision(toRef))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, toRef)
}
iter, err := c.repo.Log(&git.LogOptions{
From: *toHash,
})
if err != nil {
return nil, fmt.Errorf("failed to get log: %w", err)
}
defer iter.Close()
result := &CommitRange{
FromCommit: fromHash.String(),
ToCommit: toHash.String(),
}
err = iter.ForEach(func(commit *object.Commit) error {
// Stop when we reach the from commit (exclusive)
if commit.Hash == *fromHash {
return io.EOF
}
subject := commit.Message
if idx := strings.Index(subject, "\n"); idx != -1 {
subject = subject[:idx]
}
subject = strings.TrimSpace(subject)
result.Commits = append(result.Commits, LogEntry{
Hash: commit.Hash.String(),
ShortHash: commit.Hash.String()[:7],
Author: commit.Author.Name,
Email: commit.Author.Email,
Date: commit.Author.When,
Subject: subject,
})
if len(result.Commits) >= limit {
return io.EOF
}
return nil
})
if err != nil && err != io.EOF {
return nil, fmt.Errorf("failed to iterate log: %w", err)
}
result.Count = len(result.Commits)
return result, nil
}
// ListBranches returns all branches in the repository.
func (c *GitClient) ListBranches(includeRemote bool) (*BranchList, error) {
result := &BranchList{}
// Get HEAD to determine current branch
head, err := c.repo.Head()
if err == nil && head.Name().IsBranch() {
result.Current = head.Name().Short()
}
// List local branches
branchIter, err := c.repo.Branches()
if err != nil {
return nil, fmt.Errorf("failed to list branches: %w", err)
}
err = branchIter.ForEach(func(ref *plumbing.Reference) error {
if len(result.Branches) >= Limits.MaxBranches {
return io.EOF
}
branch := Branch{
Name: ref.Name().Short(),
Commit: ref.Hash().String(),
IsRemote: false,
IsHead: ref.Name().Short() == result.Current,
}
result.Branches = append(result.Branches, branch)
return nil
})
if err != nil && err != io.EOF {
return nil, fmt.Errorf("failed to iterate branches: %w", err)
}
// List remote branches if requested
if includeRemote {
refs, err := c.repo.References()
if err != nil {
return nil, fmt.Errorf("failed to list references: %w", err)
}
err = refs.ForEach(func(ref *plumbing.Reference) error {
if len(result.Branches) >= Limits.MaxBranches {
return io.EOF
}
if ref.Name().IsRemote() {
branch := Branch{
Name: ref.Name().Short(),
Commit: ref.Hash().String(),
IsRemote: true,
IsHead: false,
}
result.Branches = append(result.Branches, branch)
}
return nil
})
if err != nil && err != io.EOF {
return nil, fmt.Errorf("failed to iterate references: %w", err)
}
}
result.Total = len(result.Branches)
return result, nil
}
// SearchCommits searches commit messages for a pattern.
func (c *GitClient) SearchCommits(ref, query string, limit int) (*SearchResult, error) {
if limit <= 0 || limit > Limits.MaxSearchResult {
limit = Limits.MaxSearchResult
}
hash, err := c.repo.ResolveRevision(plumbing.Revision(ref))
if err != nil {
return nil, fmt.Errorf("%w: ref '%s'", ErrNotFound, ref)
}
iter, err := c.repo.Log(&git.LogOptions{
From: *hash,
})
if err != nil {
return nil, fmt.Errorf("failed to get log: %w", err)
}
defer iter.Close()
result := &SearchResult{
Query: query,
}
queryLower := strings.ToLower(query)
// Matches may be sparse, so scan well beyond the result limit
scanned := 0
maxScan := limit * 100 // Scan up to 100x the limit
err = iter.ForEach(func(commit *object.Commit) error {
scanned++
if scanned > maxScan {
return io.EOF
}
// Search in message (case-insensitive)
if !strings.Contains(strings.ToLower(commit.Message), queryLower) {
return nil
}
subject := commit.Message
if idx := strings.Index(subject, "\n"); idx != -1 {
subject = subject[:idx]
}
subject = strings.TrimSpace(subject)
result.Commits = append(result.Commits, LogEntry{
Hash: commit.Hash.String(),
ShortHash: commit.Hash.String()[:7],
Author: commit.Author.Name,
Email: commit.Author.Email,
Date: commit.Author.When,
Subject: subject,
})
if len(result.Commits) >= limit {
return io.EOF
}
return nil
})
if err != nil && err != io.EOF {
return nil, fmt.Errorf("failed to search commits: %w", err)
}
result.Count = len(result.Commits)
return result, nil
}


@@ -0,0 +1,446 @@
package gitexplorer
import (
"os"
"path/filepath"
"testing"
"time"
"github.com/go-git/go-git/v5"
"github.com/go-git/go-git/v5/plumbing/object"
)
// createTestRepo creates a temporary git repository with some commits for testing.
func createTestRepo(t *testing.T) (string, func()) {
t.Helper()
dir, err := os.MkdirTemp("", "gitexplorer-test-*")
if err != nil {
t.Fatalf("failed to create temp dir: %v", err)
}
cleanup := func() {
_ = os.RemoveAll(dir)
}
repo, err := git.PlainInit(dir, false)
if err != nil {
cleanup()
t.Fatalf("failed to init repo: %v", err)
}
wt, err := repo.Worktree()
if err != nil {
cleanup()
t.Fatalf("failed to get worktree: %v", err)
}
// Create initial file and commit
readme := filepath.Join(dir, "README.md")
if err := os.WriteFile(readme, []byte("# Test Repo\n"), 0644); err != nil {
cleanup()
t.Fatalf("failed to write README: %v", err)
}
if _, err := wt.Add("README.md"); err != nil {
cleanup()
t.Fatalf("failed to add README: %v", err)
}
sig := &object.Signature{
Name: "Test User",
Email: "test@example.com",
When: time.Now().Add(-2 * time.Hour),
}
_, err = wt.Commit("Initial commit", &git.CommitOptions{Author: sig})
if err != nil {
cleanup()
t.Fatalf("failed to create initial commit: %v", err)
}
// Create a second file and commit
subdir := filepath.Join(dir, "src")
if err := os.MkdirAll(subdir, 0755); err != nil {
cleanup()
t.Fatalf("failed to create subdir: %v", err)
}
mainFile := filepath.Join(subdir, "main.go")
if err := os.WriteFile(mainFile, []byte("package main\n\nfunc main() {}\n"), 0644); err != nil {
cleanup()
t.Fatalf("failed to write main.go: %v", err)
}
if _, err := wt.Add("src/main.go"); err != nil {
cleanup()
t.Fatalf("failed to add main.go: %v", err)
}
sig.When = time.Now().Add(-1 * time.Hour)
_, err = wt.Commit("Add main.go", &git.CommitOptions{Author: sig})
if err != nil {
cleanup()
t.Fatalf("failed to create second commit: %v", err)
}
// Update README and commit
if err := os.WriteFile(readme, []byte("# Test Repo\n\nThis is a test repository.\n"), 0644); err != nil {
cleanup()
t.Fatalf("failed to update README: %v", err)
}
if _, err := wt.Add("README.md"); err != nil {
cleanup()
t.Fatalf("failed to add updated README: %v", err)
}
sig.When = time.Now()
_, err = wt.Commit("Update README", &git.CommitOptions{Author: sig})
if err != nil {
cleanup()
t.Fatalf("failed to create third commit: %v", err)
}
return dir, cleanup
}
func TestNewGitClient(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
if client == nil {
t.Fatal("client is nil")
}
if client.defaultRemote != "origin" {
t.Errorf("defaultRemote = %q, want %q", client.defaultRemote, "origin")
}
// Test with invalid path
_, err = NewGitClient("/nonexistent/path", "")
if err == nil {
t.Error("expected error for nonexistent path")
}
}
func TestResolveRef(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
// Test resolving HEAD
result, err := client.ResolveRef("HEAD")
if err != nil {
t.Fatalf("ResolveRef(HEAD) failed: %v", err)
}
if result.Commit == "" {
t.Error("commit hash is empty")
}
// Test resolving master branch
result, err = client.ResolveRef("master")
if err != nil {
t.Fatalf("ResolveRef(master) failed: %v", err)
}
if result.Type != "branch" {
t.Errorf("type = %q, want %q", result.Type, "branch")
}
// Test resolving invalid ref
_, err = client.ResolveRef("nonexistent")
if err == nil {
t.Error("expected error for nonexistent ref")
}
}
func TestGetLog(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
// Get full log
entries, err := client.GetLog("HEAD", 10, "", "", "")
if err != nil {
t.Fatalf("GetLog failed: %v", err)
}
if len(entries) != 3 {
t.Errorf("got %d entries, want 3", len(entries))
}
// Check order (newest first)
if entries[0].Subject != "Update README" {
t.Errorf("first entry subject = %q, want %q", entries[0].Subject, "Update README")
}
// Test with limit
entries, err = client.GetLog("HEAD", 1, "", "", "")
if err != nil {
t.Fatalf("GetLog with limit failed: %v", err)
}
if len(entries) != 1 {
t.Errorf("got %d entries, want 1", len(entries))
}
// Test with author filter
entries, err = client.GetLog("HEAD", 10, "Test User", "", "")
if err != nil {
t.Fatalf("GetLog with author failed: %v", err)
}
if len(entries) != 3 {
t.Errorf("got %d entries, want 3", len(entries))
}
entries, err = client.GetLog("HEAD", 10, "nonexistent", "", "")
if err != nil {
t.Fatalf("GetLog with nonexistent author failed: %v", err)
}
if len(entries) != 0 {
t.Errorf("got %d entries, want 0", len(entries))
}
// Test with path filter
entries, err = client.GetLog("HEAD", 10, "", "", "src")
if err != nil {
t.Fatalf("GetLog with path failed: %v", err)
}
if len(entries) != 1 {
t.Errorf("got %d entries, want 1 (only src/main.go commit)", len(entries))
}
}
func TestGetCommitInfo(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
info, err := client.GetCommitInfo("HEAD", true)
if err != nil {
t.Fatalf("GetCommitInfo failed: %v", err)
}
if info.Author != "Test User" {
t.Errorf("author = %q, want %q", info.Author, "Test User")
}
if info.Email != "test@example.com" {
t.Errorf("email = %q, want %q", info.Email, "test@example.com")
}
if len(info.Parents) != 1 {
t.Errorf("parents = %d, want 1", len(info.Parents))
}
if info.Stats == nil {
t.Error("stats is nil")
}
// Test without stats
info, err = client.GetCommitInfo("HEAD", false)
if err != nil {
t.Fatalf("GetCommitInfo without stats failed: %v", err)
}
if info.Stats != nil {
t.Error("stats should be nil")
}
}
func TestGetDiffFiles(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
result, err := client.GetDiffFiles("HEAD~2", "HEAD")
if err != nil {
t.Fatalf("GetDiffFiles failed: %v", err)
}
if len(result.Files) < 1 {
t.Error("expected at least one changed file")
}
// Check that we have the expected files
foundReadme := false
foundMain := false
for _, f := range result.Files {
if f.Path == "README.md" {
foundReadme = true
}
if f.Path == "src/main.go" {
foundMain = true
}
}
if !foundReadme {
t.Error("expected README.md in diff")
}
if !foundMain {
t.Error("expected src/main.go in diff")
}
}
func TestGetFileAtCommit(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
content, err := client.GetFileAtCommit("HEAD", "README.md")
if err != nil {
t.Fatalf("GetFileAtCommit failed: %v", err)
}
if content.Path != "README.md" {
t.Errorf("path = %q, want %q", content.Path, "README.md")
}
if content.Content == "" {
t.Error("content is empty")
}
// Test nested file
content, err = client.GetFileAtCommit("HEAD", "src/main.go")
if err != nil {
t.Fatalf("GetFileAtCommit for nested file failed: %v", err)
}
if content.Path != "src/main.go" {
t.Errorf("path = %q, want %q", content.Path, "src/main.go")
}
// Test nonexistent file
_, err = client.GetFileAtCommit("HEAD", "nonexistent.txt")
if err == nil {
t.Error("expected error for nonexistent file")
}
// Test path traversal
_, err = client.GetFileAtCommit("HEAD", "../../../etc/passwd")
if err == nil {
t.Error("expected error for path traversal")
}
}
func TestIsAncestor(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
// First commit is ancestor of HEAD
result, err := client.IsAncestor("HEAD~2", "HEAD")
if err != nil {
t.Fatalf("IsAncestor failed: %v", err)
}
if !result.IsAncestor {
t.Error("HEAD~2 should be ancestor of HEAD")
}
// HEAD is not ancestor of first commit
result, err = client.IsAncestor("HEAD", "HEAD~2")
if err != nil {
t.Fatalf("IsAncestor failed: %v", err)
}
if result.IsAncestor {
t.Error("HEAD should not be ancestor of HEAD~2")
}
}
func TestCommitsBetween(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
result, err := client.CommitsBetween("HEAD~2", "HEAD", 10)
if err != nil {
t.Fatalf("CommitsBetween failed: %v", err)
}
// Should have 2 commits (HEAD~1 and HEAD, exclusive of HEAD~2)
if result.Count != 2 {
t.Errorf("count = %d, want 2", result.Count)
}
}
func TestListBranches(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
result, err := client.ListBranches(false)
if err != nil {
t.Fatalf("ListBranches failed: %v", err)
}
if result.Total < 1 {
t.Error("expected at least one branch")
}
foundMaster := false
for _, b := range result.Branches {
if b.Name == "master" {
foundMaster = true
if !b.IsHead {
t.Error("master should be HEAD")
}
}
}
if !foundMaster {
t.Error("expected master branch")
}
}
func TestSearchCommits(t *testing.T) {
repoPath, cleanup := createTestRepo(t)
defer cleanup()
client, err := NewGitClient(repoPath, "")
if err != nil {
t.Fatalf("NewGitClient failed: %v", err)
}
result, err := client.SearchCommits("HEAD", "README", 10)
if err != nil {
t.Fatalf("SearchCommits failed: %v", err)
}
if result.Count < 1 {
t.Error("expected at least one match for 'README'")
}
// Search with no matches
result, err = client.SearchCommits("HEAD", "nonexistent-query-xyz", 10)
if err != nil {
t.Fatalf("SearchCommits for no match failed: %v", err)
}
if result.Count != 0 {
t.Errorf("count = %d, want 0", result.Count)
}
}


@@ -0,0 +1,195 @@
package gitexplorer
import (
"fmt"
"strings"
)
// FormatResolveResult formats a ResolveResult as markdown.
func FormatResolveResult(r *ResolveResult) string {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("**Ref:** %s\n", r.Ref))
sb.WriteString(fmt.Sprintf("**Type:** %s\n", r.Type))
sb.WriteString(fmt.Sprintf("**Commit:** %s\n", r.Commit))
return sb.String()
}
// FormatLogEntries formats a slice of LogEntry as markdown.
func FormatLogEntries(entries []LogEntry) string {
if len(entries) == 0 {
return "No commits found."
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("## Commit Log (%d commits)\n\n", len(entries)))
for _, e := range entries {
sb.WriteString(fmt.Sprintf("### %s %s\n", e.ShortHash, e.Subject))
sb.WriteString(fmt.Sprintf("**Author:** %s <%s>\n", e.Author, e.Email))
sb.WriteString(fmt.Sprintf("**Date:** %s\n\n", e.Date.Format("2006-01-02 15:04:05")))
}
return sb.String()
}
// FormatCommitInfo formats a CommitInfo as markdown.
func FormatCommitInfo(info *CommitInfo) string {
var sb strings.Builder
sb.WriteString("## Commit Details\n\n")
sb.WriteString(fmt.Sprintf("**Hash:** %s\n", info.Hash))
sb.WriteString(fmt.Sprintf("**Author:** %s <%s>\n", info.Author, info.Email))
sb.WriteString(fmt.Sprintf("**Date:** %s\n", info.Date.Format("2006-01-02 15:04:05")))
sb.WriteString(fmt.Sprintf("**Committer:** %s\n", info.Committer))
sb.WriteString(fmt.Sprintf("**Commit Date:** %s\n", info.CommitDate.Format("2006-01-02 15:04:05")))
if len(info.Parents) > 0 {
sb.WriteString(fmt.Sprintf("**Parents:** %s\n", strings.Join(info.Parents, ", ")))
}
if info.Stats != nil {
sb.WriteString(fmt.Sprintf("**Changes:** %d file(s), +%d -%d\n",
info.Stats.FilesChanged, info.Stats.Additions, info.Stats.Deletions))
}
sb.WriteString("\n### Message\n\n")
sb.WriteString("```\n")
sb.WriteString(info.Message)
if !strings.HasSuffix(info.Message, "\n") {
sb.WriteString("\n")
}
sb.WriteString("```\n")
return sb.String()
}
// FormatDiffResult formats a DiffResult as markdown.
func FormatDiffResult(r *DiffResult) string {
if len(r.Files) == 0 {
return "No files changed."
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("## Files Changed (%d files)\n\n", len(r.Files)))
sb.WriteString(fmt.Sprintf("**From:** %s\n", r.FromCommit[:7]))
sb.WriteString(fmt.Sprintf("**To:** %s\n\n", r.ToCommit[:7]))
sb.WriteString("| Status | Path | Changes |\n")
sb.WriteString("|--------|------|--------|\n")
for _, f := range r.Files {
path := f.Path
if f.OldPath != "" {
path = fmt.Sprintf("%s → %s", f.OldPath, f.Path)
}
changes := fmt.Sprintf("+%d -%d", f.Additions, f.Deletions)
sb.WriteString(fmt.Sprintf("| %s | %s | %s |\n", f.Status, path, changes))
}
return sb.String()
}
// FormatFileContent formats a FileContent as markdown.
func FormatFileContent(c *FileContent) string {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("## File: %s\n\n", c.Path))
sb.WriteString(fmt.Sprintf("**Commit:** %s\n", c.Commit[:7]))
sb.WriteString(fmt.Sprintf("**Size:** %d bytes\n\n", c.Size))
// Determine language hint from extension
ext := ""
if idx := strings.LastIndex(c.Path, "."); idx != -1 {
ext = c.Path[idx+1:]
}
sb.WriteString(fmt.Sprintf("```%s\n", ext))
sb.WriteString(c.Content)
if !strings.HasSuffix(c.Content, "\n") {
sb.WriteString("\n")
}
sb.WriteString("```\n")
return sb.String()
}
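The language hint above is taken from the substring after the last dot anywhere in the path, not just in the final path element. A standalone sketch of that detection (`langHint` is a local name for the example, not from the source):

```go
package main

import (
	"fmt"
	"strings"
)

// langHint reproduces the extension detection in FormatFileContent:
// everything after the last dot anywhere in the path becomes the
// code-fence language hint; no dot means no hint.
func langHint(path string) string {
	if idx := strings.LastIndex(path, "."); idx != -1 {
		return path[idx+1:]
	}
	return ""
}

func main() {
	fmt.Println(langHint("src/main.go"))    // go
	fmt.Println(langHint("Makefile"))       // (empty)
	fmt.Println(langHint("archive.tar.gz")) // gz
	fmt.Println(langHint("src.v2/README"))  // v2/README
}
```

Because `LastIndex` scans the whole path, a dotted directory name leaks into the hint (last line above); `filepath.Ext`, which only inspects the final path element, would sidestep that if it ever matters in practice.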
// FormatAncestryResult formats an AncestryResult as markdown.
func FormatAncestryResult(r *AncestryResult) string {
var sb strings.Builder
sb.WriteString("## Ancestry Check\n\n")
sb.WriteString(fmt.Sprintf("**Ancestor:** %s\n", r.Ancestor[:7]))
sb.WriteString(fmt.Sprintf("**Descendant:** %s\n", r.Descendant[:7]))
if r.IsAncestor {
sb.WriteString("\n✓ **Yes**, the first commit is an ancestor of the second.\n")
} else {
sb.WriteString("\n✗ **No**, the first commit is not an ancestor of the second.\n")
}
return sb.String()
}
// FormatCommitRange formats a CommitRange as markdown.
func FormatCommitRange(r *CommitRange) string {
if r.Count == 0 {
return "No commits in range."
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("## Commits Between (%d commits)\n\n", r.Count))
sb.WriteString(fmt.Sprintf("**From:** %s (exclusive)\n", r.FromCommit[:7]))
sb.WriteString(fmt.Sprintf("**To:** %s (inclusive)\n\n", r.ToCommit[:7]))
for _, e := range r.Commits {
sb.WriteString(fmt.Sprintf("- **%s** %s (%s)\n", e.ShortHash, e.Subject, e.Author))
}
return sb.String()
}
// FormatBranchList formats a BranchList as markdown.
func FormatBranchList(r *BranchList) string {
if r.Total == 0 {
return "No branches found."
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("## Branches (%d total)\n\n", r.Total))
if r.Current != "" {
sb.WriteString(fmt.Sprintf("**Current branch:** %s\n\n", r.Current))
}
sb.WriteString("| Branch | Commit | Type |\n")
sb.WriteString("|--------|--------|------|\n")
for _, b := range r.Branches {
branchType := "local"
if b.IsRemote {
branchType = "remote"
}
marker := ""
if b.IsHead {
marker = " ✓"
}
sb.WriteString(fmt.Sprintf("| %s%s | %s | %s |\n", b.Name, marker, b.Commit[:7], branchType))
}
return sb.String()
}
// FormatSearchResult formats a SearchResult as markdown.
func FormatSearchResult(r *SearchResult) string {
if r.Count == 0 {
return fmt.Sprintf("No commits found matching '%s'.", r.Query)
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("## Search Results for '%s' (%d matches)\n\n", r.Query, r.Count))
for _, e := range r.Commits {
sb.WriteString(fmt.Sprintf("### %s %s\n", e.ShortHash, e.Subject))
sb.WriteString(fmt.Sprintf("**Author:** %s <%s>\n", e.Author, e.Email))
sb.WriteString(fmt.Sprintf("**Date:** %s\n\n", e.Date.Format("2006-01-02 15:04:05")))
}
return sb.String()
}


@@ -0,0 +1,440 @@
package gitexplorer
import (
"context"
"fmt"
"code.t-juice.club/torjus/labmcp/internal/mcp"
)
// RegisterHandlers registers all git-explorer tool handlers on the MCP server.
func RegisterHandlers(server *mcp.Server, client *GitClient) {
server.RegisterTool(resolveRefTool(), makeResolveRefHandler(client))
server.RegisterTool(getLogTool(), makeGetLogHandler(client))
server.RegisterTool(getCommitInfoTool(), makeGetCommitInfoHandler(client))
server.RegisterTool(getDiffFilesTool(), makeGetDiffFilesHandler(client))
server.RegisterTool(getFileAtCommitTool(), makeGetFileAtCommitHandler(client))
server.RegisterTool(isAncestorTool(), makeIsAncestorHandler(client))
server.RegisterTool(commitsBetweenTool(), makeCommitsBetweenHandler(client))
server.RegisterTool(listBranchesTool(), makeListBranchesHandler(client))
server.RegisterTool(searchCommitsTool(), makeSearchCommitsHandler(client))
}
// Tool definitions
func resolveRefTool() mcp.Tool {
return mcp.Tool{
Name: "resolve_ref",
Description: "Resolve a git ref (branch, tag, or commit hash) to its full commit hash",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"ref": {
Type: "string",
Description: "Git ref to resolve (e.g., 'main', 'v1.0.0', 'HEAD', commit hash)",
},
},
Required: []string{"ref"},
},
}
}
func getLogTool() mcp.Tool {
return mcp.Tool{
Name: "get_log",
Description: "Get commit log starting from a ref, with optional filters",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"ref": {
Type: "string",
Description: "Starting ref for the log (default: HEAD)",
},
"limit": {
Type: "integer",
Description: fmt.Sprintf("Maximum number of commits to return (default: 20, max: %d)", Limits.MaxLogEntries),
Default: 20,
},
"author": {
Type: "string",
Description: "Filter by author name or email (substring match)",
},
"since": {
Type: "string",
Description: "Stop log at this ref (exclusive)",
},
"path": {
Type: "string",
Description: "Filter commits that affect this path",
},
},
},
}
}
func getCommitInfoTool() mcp.Tool {
return mcp.Tool{
Name: "get_commit_info",
Description: "Get full details for a specific commit including message, author, and optionally file statistics",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"ref": {
Type: "string",
Description: "Commit ref (hash, branch, tag, or HEAD)",
},
"include_stats": {
Type: "boolean",
Description: "Include file change statistics (default: true)",
Default: true,
},
},
Required: []string{"ref"},
},
}
}
func getDiffFilesTool() mcp.Tool {
return mcp.Tool{
Name: "get_diff_files",
Description: "Get list of files changed between two commits with change type and line counts",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"from_ref": {
Type: "string",
Description: "Starting commit ref (the older commit)",
},
"to_ref": {
Type: "string",
Description: "Ending commit ref (the newer commit)",
},
},
Required: []string{"from_ref", "to_ref"},
},
}
}
func getFileAtCommitTool() mcp.Tool {
return mcp.Tool{
Name: "get_file_at_commit",
Description: fmt.Sprintf("Get the contents of a file at a specific commit (max %dKB)", Limits.MaxFileContent/1024),
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"ref": {
Type: "string",
Description: "Commit ref (hash, branch, tag, or HEAD)",
},
"path": {
Type: "string",
Description: "Path to the file relative to repository root",
},
},
Required: []string{"ref", "path"},
},
}
}
func isAncestorTool() mcp.Tool {
return mcp.Tool{
Name: "is_ancestor",
Description: "Check if one commit is an ancestor of another",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"ancestor": {
Type: "string",
Description: "Potential ancestor commit ref",
},
"descendant": {
Type: "string",
Description: "Potential descendant commit ref",
},
},
Required: []string{"ancestor", "descendant"},
},
}
}
func commitsBetweenTool() mcp.Tool {
return mcp.Tool{
Name: "commits_between",
Description: "Get all commits between two refs (from is exclusive, to is inclusive)",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"from_ref": {
Type: "string",
Description: "Starting commit ref (exclusive - commits after this)",
},
"to_ref": {
Type: "string",
Description: "Ending commit ref (inclusive - up to and including this)",
},
"limit": {
Type: "integer",
Description: fmt.Sprintf("Maximum number of commits (default: %d)", Limits.MaxLogEntries),
Default: Limits.MaxLogEntries,
},
},
Required: []string{"from_ref", "to_ref"},
},
}
}
func listBranchesTool() mcp.Tool {
return mcp.Tool{
Name: "list_branches",
Description: "List all branches in the repository",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"include_remote": {
Type: "boolean",
Description: "Include remote-tracking branches (default: false)",
Default: false,
},
},
},
}
}
func searchCommitsTool() mcp.Tool {
return mcp.Tool{
Name: "search_commits",
Description: "Search commit messages for a pattern (case-insensitive)",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"query": {
Type: "string",
Description: "Search pattern to match in commit messages",
},
"ref": {
Type: "string",
Description: "Starting ref for the search (default: HEAD)",
},
"limit": {
Type: "integer",
Description: fmt.Sprintf("Maximum number of results (default: 20, max: %d)", Limits.MaxSearchResult),
Default: 20,
},
},
Required: []string{"query"},
},
}
}
// Handler constructors
func makeResolveRefHandler(client *GitClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
ref, _ := args["ref"].(string)
if ref == "" {
return mcp.ErrorContent(fmt.Errorf("ref is required")), nil
}
result, err := client.ResolveRef(ref)
if err != nil {
return mcp.ErrorContent(err), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(FormatResolveResult(result))},
}, nil
}
}
func makeGetLogHandler(client *GitClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
ref := "HEAD"
if r, ok := args["ref"].(string); ok && r != "" {
ref = r
}
limit := 20
if l, ok := args["limit"].(float64); ok && l > 0 {
limit = int(l)
}
author, _ := args["author"].(string)
since, _ := args["since"].(string)
path, _ := args["path"].(string)
entries, err := client.GetLog(ref, limit, author, since, path)
if err != nil {
return mcp.ErrorContent(err), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(FormatLogEntries(entries))},
}, nil
}
}
func makeGetCommitInfoHandler(client *GitClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
ref, _ := args["ref"].(string)
if ref == "" {
return mcp.ErrorContent(fmt.Errorf("ref is required")), nil
}
includeStats := true
if s, ok := args["include_stats"].(bool); ok {
includeStats = s
}
info, err := client.GetCommitInfo(ref, includeStats)
if err != nil {
return mcp.ErrorContent(err), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(FormatCommitInfo(info))},
}, nil
}
}
func makeGetDiffFilesHandler(client *GitClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
fromRef, _ := args["from_ref"].(string)
if fromRef == "" {
return mcp.ErrorContent(fmt.Errorf("from_ref is required")), nil
}
toRef, _ := args["to_ref"].(string)
if toRef == "" {
return mcp.ErrorContent(fmt.Errorf("to_ref is required")), nil
}
result, err := client.GetDiffFiles(fromRef, toRef)
if err != nil {
return mcp.ErrorContent(err), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(FormatDiffResult(result))},
}, nil
}
}
func makeGetFileAtCommitHandler(client *GitClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
ref, _ := args["ref"].(string)
if ref == "" {
return mcp.ErrorContent(fmt.Errorf("ref is required")), nil
}
path, _ := args["path"].(string)
if path == "" {
return mcp.ErrorContent(fmt.Errorf("path is required")), nil
}
content, err := client.GetFileAtCommit(ref, path)
if err != nil {
return mcp.ErrorContent(err), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(FormatFileContent(content))},
}, nil
}
}
func makeIsAncestorHandler(client *GitClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
ancestor, _ := args["ancestor"].(string)
if ancestor == "" {
return mcp.ErrorContent(fmt.Errorf("ancestor is required")), nil
}
descendant, _ := args["descendant"].(string)
if descendant == "" {
return mcp.ErrorContent(fmt.Errorf("descendant is required")), nil
}
result, err := client.IsAncestor(ancestor, descendant)
if err != nil {
return mcp.ErrorContent(err), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(FormatAncestryResult(result))},
}, nil
}
}
func makeCommitsBetweenHandler(client *GitClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
fromRef, _ := args["from_ref"].(string)
if fromRef == "" {
return mcp.ErrorContent(fmt.Errorf("from_ref is required")), nil
}
toRef, _ := args["to_ref"].(string)
if toRef == "" {
return mcp.ErrorContent(fmt.Errorf("to_ref is required")), nil
}
limit := Limits.MaxLogEntries
if l, ok := args["limit"].(float64); ok && l > 0 {
limit = int(l)
}
result, err := client.CommitsBetween(fromRef, toRef, limit)
if err != nil {
return mcp.ErrorContent(err), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(FormatCommitRange(result))},
}, nil
}
}
func makeListBranchesHandler(client *GitClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
includeRemote := false
if r, ok := args["include_remote"].(bool); ok {
includeRemote = r
}
result, err := client.ListBranches(includeRemote)
if err != nil {
return mcp.ErrorContent(err), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(FormatBranchList(result))},
}, nil
}
}
func makeSearchCommitsHandler(client *GitClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
query, _ := args["query"].(string)
if query == "" {
return mcp.ErrorContent(fmt.Errorf("query is required")), nil
}
ref := "HEAD"
if r, ok := args["ref"].(string); ok && r != "" {
ref = r
}
limit := 20
if l, ok := args["limit"].(float64); ok && l > 0 {
limit = int(l)
}
result, err := client.SearchCommits(ref, query, limit)
if err != nil {
return mcp.ErrorContent(err), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(FormatSearchResult(result))},
}, nil
}
}


@@ -0,0 +1,121 @@
package gitexplorer
import (
"time"
)
// ResolveResult contains the result of resolving a ref to a commit.
type ResolveResult struct {
Ref string `json:"ref"`
Commit string `json:"commit"`
Type string `json:"type"` // "branch", "tag", "commit"
}
// LogEntry represents a single commit in the log.
type LogEntry struct {
Hash string `json:"hash"`
ShortHash string `json:"short_hash"`
Author string `json:"author"`
Email string `json:"email"`
Date time.Time `json:"date"`
Subject string `json:"subject"`
}
// CommitInfo contains full details about a commit.
type CommitInfo struct {
Hash string `json:"hash"`
Author string `json:"author"`
Email string `json:"email"`
Date time.Time `json:"date"`
Committer string `json:"committer"`
CommitDate time.Time `json:"commit_date"`
Message string `json:"message"`
Parents []string `json:"parents"`
Stats *FileStats `json:"stats,omitempty"`
}
// FileStats contains statistics about file changes.
type FileStats struct {
FilesChanged int `json:"files_changed"`
Additions int `json:"additions"`
Deletions int `json:"deletions"`
}
// DiffFile represents a file changed between two commits.
type DiffFile struct {
Path string `json:"path"`
OldPath string `json:"old_path,omitempty"` // For renames
Status string `json:"status"` // "added", "modified", "deleted", "renamed"
Additions int `json:"additions"`
Deletions int `json:"deletions"`
}
// DiffResult contains the list of files changed between two commits.
type DiffResult struct {
FromCommit string `json:"from_commit"`
ToCommit string `json:"to_commit"`
Files []DiffFile `json:"files"`
}
// FileContent represents the content of a file at a specific commit.
type FileContent struct {
Path string `json:"path"`
Commit string `json:"commit"`
Size int64 `json:"size"`
Content string `json:"content"`
}
// AncestryResult contains the result of an ancestry check.
type AncestryResult struct {
Ancestor string `json:"ancestor"`
Descendant string `json:"descendant"`
IsAncestor bool `json:"is_ancestor"`
}
// CommitRange represents commits between two refs.
type CommitRange struct {
FromCommit string `json:"from_commit"`
ToCommit string `json:"to_commit"`
Commits []LogEntry `json:"commits"`
Count int `json:"count"`
}
// Branch represents a git branch.
type Branch struct {
Name string `json:"name"`
Commit string `json:"commit"`
IsRemote bool `json:"is_remote"`
IsHead bool `json:"is_head"`
Upstream string `json:"upstream,omitempty"`
AheadBy int `json:"ahead_by,omitempty"`
BehindBy int `json:"behind_by,omitempty"`
}
// BranchList contains the list of branches.
type BranchList struct {
Branches []Branch `json:"branches"`
Current string `json:"current"`
Total int `json:"total"`
}
// SearchResult represents a commit matching a search query.
type SearchResult struct {
Commits []LogEntry `json:"commits"`
Query string `json:"query"`
Count int `json:"count"`
}
// Limits defines the maximum values for various operations.
var Limits = struct {
MaxFileContent int64 // Maximum file size in bytes
MaxLogEntries int // Maximum commit log entries
MaxBranches int // Maximum branches to list
MaxDiffFiles int // Maximum files in diff
MaxSearchResult int // Maximum search results
}{
MaxFileContent: 100 * 1024, // 100KB
MaxLogEntries: 100,
MaxBranches: 500,
MaxDiffFiles: 1000,
MaxSearchResult: 100,
}
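These caps are presumably enforced inside GitClient, which is not part of this chunk. A hypothetical guard for MaxFileContent, for illustration only (the name `checkSize` and its placement are assumptions, not from the source):

```go
package main

import "fmt"

// maxFileContent mirrors Limits.MaxFileContent (100KB).
const maxFileContent int64 = 100 * 1024

// checkSize is a hypothetical guard; the actual enforcement lives in
// GitClient code not shown in this diff.
func checkSize(size int64) error {
	if size > maxFileContent {
		return fmt.Errorf("file too large: %d bytes (max %d)", size, maxFileContent)
	}
	return nil
}

func main() {
	fmt.Println(checkSize(4 * 1024))   // <nil>
	fmt.Println(checkSize(200 * 1024)) // file too large: 204800 bytes (max 102400)
}
```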


@@ -0,0 +1,57 @@
package gitexplorer
import (
"errors"
"path/filepath"
"slices"
"strings"
)
var (
// ErrPathTraversal is returned when a path attempts to traverse outside the repository.
ErrPathTraversal = errors.New("path traversal not allowed")
// ErrAbsolutePath is returned when an absolute path is provided.
ErrAbsolutePath = errors.New("absolute paths not allowed")
// ErrNullByte is returned when a path contains null bytes.
ErrNullByte = errors.New("null bytes not allowed in path")
// ErrEmptyPath is returned when a path is empty.
ErrEmptyPath = errors.New("path cannot be empty")
)
// ValidatePath validates a file path for security.
// It rejects:
// - Absolute paths
// - Paths containing null bytes
// - Paths that contain a ".." component (directory traversal)
// - Empty paths
func ValidatePath(path string) error {
if path == "" {
return ErrEmptyPath
}
// Check for null bytes
if strings.Contains(path, "\x00") {
return ErrNullByte
}
// Check for absolute paths
if filepath.IsAbs(path) {
return ErrAbsolutePath
}
// Clean the path, then reject any ".." component; this covers a
// leading "..", a bare "..", and traversal in the middle, without
// falsely rejecting legitimate names like "..cache" (which a plain
// HasPrefix(cleaned, "..") check would).
cleaned := filepath.Clean(path)
parts := strings.Split(cleaned, string(filepath.Separator))
if slices.Contains(parts, "..") {
return ErrPathTraversal
}
return nil
}
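A standalone, runnable sketch of this validation (local names for the example; the ".." component scan over the cleaned path is the decisive traversal check):

```go
package main

import (
	"fmt"
	"path/filepath"
	"slices"
	"strings"
)

// validatePath is a local copy of the checks for demonstration:
// reject empty paths, null bytes, absolute paths, and any ".."
// component in the cleaned path.
func validatePath(path string) error {
	if path == "" {
		return fmt.Errorf("path cannot be empty")
	}
	if strings.Contains(path, "\x00") {
		return fmt.Errorf("null bytes not allowed in path")
	}
	if filepath.IsAbs(path) {
		return fmt.Errorf("absolute paths not allowed")
	}
	parts := strings.Split(filepath.Clean(path), string(filepath.Separator))
	if slices.Contains(parts, "..") {
		return fmt.Errorf("path traversal not allowed")
	}
	return nil
}

func main() {
	for _, p := range []string{"src/main.go", "./README.md", "../secret", "/etc/passwd"} {
		fmt.Printf("%q -> %v\n", p, validatePath(p))
	}
}
```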


@@ -0,0 +1,91 @@
package gitexplorer
import (
"testing"
)
func TestValidatePath(t *testing.T) {
tests := []struct {
name string
path string
wantErr error
}{
// Valid paths
{
name: "simple file",
path: "README.md",
wantErr: nil,
},
{
name: "nested file",
path: "internal/gitexplorer/types.go",
wantErr: nil,
},
{
name: "file with dots",
path: "file.test.go",
wantErr: nil,
},
{
name: "current dir prefix",
path: "./README.md",
wantErr: nil,
},
{
name: "deeply nested",
path: "a/b/c/d/e/f/g.txt",
wantErr: nil,
},
// Invalid paths
{
name: "empty path",
path: "",
wantErr: ErrEmptyPath,
},
{
name: "absolute path unix",
path: "/etc/passwd",
wantErr: ErrAbsolutePath,
},
{
name: "parent dir traversal simple",
path: "../secret.txt",
wantErr: ErrPathTraversal,
},
{
name: "parent dir traversal nested",
path: "foo/../../../etc/passwd",
wantErr: ErrPathTraversal,
},
{
name: "parent dir traversal in middle",
path: "foo/bar/../../../secret",
wantErr: ErrPathTraversal,
},
{
name: "null byte",
path: "file\x00.txt",
wantErr: ErrNullByte,
},
{
name: "null byte in middle",
path: "foo/bar\x00baz/file.txt",
wantErr: ErrNullByte,
},
{
name: "double dot only",
path: "..",
wantErr: ErrPathTraversal,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := ValidatePath(tt.path)
if err != tt.wantErr {
t.Errorf("ValidatePath(%q) = %v, want %v", tt.path, err, tt.wantErr)
}
})
}
}


@@ -15,9 +15,9 @@ import (
"strings"
"time"
"git.t-juice.club/torjus/labmcp/internal/database"
"git.t-juice.club/torjus/labmcp/internal/nixos"
"git.t-juice.club/torjus/labmcp/internal/options"
"code.t-juice.club/torjus/labmcp/internal/database"
"code.t-juice.club/torjus/labmcp/internal/nixos"
"code.t-juice.club/torjus/labmcp/internal/options"
)
// revisionPattern validates revision strings to prevent injection attacks.
@@ -97,7 +97,7 @@ func (idx *Indexer) IndexRevision(ctx context.Context, revision string) (*option
if err != nil {
return nil, fmt.Errorf("failed to open options.json: %w", err)
}
defer optionsFile.Close()
defer optionsFile.Close() //nolint:errcheck // read-only file
opts, err := nixos.ParseOptions(optionsFile)
if err != nil {
@@ -125,7 +125,7 @@ func (idx *Indexer) IndexRevision(ctx context.Context, revision string) (*option
// Store options
if err := idx.storeOptions(ctx, rev.ID, opts); err != nil {
// Cleanup on failure
idx.store.DeleteRevision(ctx, rev.ID)
_ = idx.store.DeleteRevision(ctx, rev.ID) //nolint:errcheck // best-effort cleanup
return nil, fmt.Errorf("failed to store options: %w", err)
}
@@ -169,7 +169,7 @@ func (idx *Indexer) buildOptions(ctx context.Context, ref string) (string, func(
}
cleanup := func() {
os.RemoveAll(tmpDir)
_ = os.RemoveAll(tmpDir) //nolint:errcheck // best-effort temp dir cleanup
}
// Build options.json using nix-build
@@ -286,7 +286,7 @@ func (idx *Indexer) getCommitDate(ctx context.Context, ref string) (time.Time, e
if err != nil {
return time.Time{}, err
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // response body read-only
if resp.StatusCode != http.StatusOK {
return time.Time{}, fmt.Errorf("GitHub API returned %d", resp.StatusCode)
@@ -345,7 +345,7 @@ func (idx *Indexer) IndexFiles(ctx context.Context, revisionID int64, ref string
if err != nil {
return 0, fmt.Errorf("failed to download tarball: %w", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // response body read-only
if resp.StatusCode != http.StatusOK {
return 0, fmt.Errorf("download failed with status %d", resp.StatusCode)
@@ -356,7 +356,7 @@ func (idx *Indexer) IndexFiles(ctx context.Context, revisionID int64, ref string
if err != nil {
return 0, fmt.Errorf("failed to create gzip reader: %w", err)
}
defer gz.Close()
defer gz.Close() //nolint:errcheck // gzip reader read-only
tr := tar.NewReader(gz)
count := 0


@@ -6,7 +6,7 @@ import (
"testing"
"time"
"git.t-juice.club/torjus/labmcp/internal/database"
"code.t-juice.club/torjus/labmcp/internal/database"
)
// TestHomeManagerRevision is a known release branch for testing.
@@ -70,7 +70,7 @@ func TestResolveRevision(t *testing.T) {
if err != nil {
t.Fatalf("Failed to create store: %v", err)
}
defer store.Close()
defer store.Close() //nolint:errcheck // benchmark/test cleanup
indexer := NewIndexer(store)
@@ -104,7 +104,7 @@ func TestGetChannelName(t *testing.T) {
if err != nil {
t.Fatalf("Failed to create store: %v", err)
}
defer store.Close()
defer store.Close() //nolint:errcheck // benchmark/test cleanup
indexer := NewIndexer(store)
@@ -144,7 +144,7 @@ func BenchmarkIndexRevision(b *testing.B) {
if err != nil {
b.Fatalf("Failed to create store: %v", err)
}
defer store.Close()
defer store.Close() //nolint:errcheck // benchmark/test cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
@@ -157,7 +157,7 @@ func BenchmarkIndexRevision(b *testing.B) {
for i := 0; i < b.N; i++ {
// Delete any existing revision first (for repeated runs)
if rev, _ := store.GetRevision(ctx, TestHomeManagerRevision); rev != nil {
store.DeleteRevision(ctx, rev.ID)
_ = store.DeleteRevision(ctx, rev.ID) //nolint:errcheck // benchmark cleanup
}
result, err := indexer.IndexRevision(ctx, TestHomeManagerRevision)
@@ -186,7 +186,7 @@ func TestIndexRevision(t *testing.T) {
if err != nil {
t.Fatalf("Failed to create store: %v", err)
}
defer store.Close()
defer store.Close() //nolint:errcheck // benchmark/test cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {


@@ -8,17 +8,45 @@ import (
"strings"
"time"
"git.t-juice.club/torjus/labmcp/internal/database"
"git.t-juice.club/torjus/labmcp/internal/options"
"code.t-juice.club/torjus/labmcp/internal/database"
"code.t-juice.club/torjus/labmcp/internal/options"
"code.t-juice.club/torjus/labmcp/internal/packages"
)
// RegisterHandlers registers all tool handlers on the server.
// RegisterHandlers registers all tool handlers on the server for options mode.
// Used by legacy nixos-options and hm-options servers (no package indexing).
func (s *Server) RegisterHandlers(indexer options.Indexer) {
s.registerOptionsHandlers(indexer, nil)
}
// RegisterHandlersWithPackages registers all tool handlers for options mode
// with additional package indexing support. When pkgIndexer is non-nil,
// index_revision will also index packages, and list_revisions will show package counts.
func (s *Server) RegisterHandlersWithPackages(indexer options.Indexer, pkgIndexer *packages.Indexer) {
s.registerOptionsHandlers(indexer, pkgIndexer)
}
// registerOptionsHandlers is the shared implementation for RegisterHandlers and RegisterHandlersWithPackages.
func (s *Server) registerOptionsHandlers(indexer options.Indexer, pkgIndexer *packages.Indexer) {
s.tools["search_options"] = s.handleSearchOptions
s.tools["get_option"] = s.handleGetOption
s.tools["get_file"] = s.handleGetFile
s.tools["index_revision"] = s.makeIndexHandler(indexer)
s.tools["index_revision"] = s.makeIndexHandler(indexer, pkgIndexer)
if pkgIndexer != nil {
s.tools["list_revisions"] = s.handleListRevisionsWithPackages
} else {
s.tools["list_revisions"] = s.handleListRevisions
}
s.tools["delete_revision"] = s.handleDeleteRevision
}
// RegisterPackageHandlers registers all tool handlers on the server for packages mode.
func (s *Server) RegisterPackageHandlers(pkgIndexer *packages.Indexer) {
s.tools["search_packages"] = s.handleSearchPackages
s.tools["get_package"] = s.handleGetPackage
s.tools["get_file"] = s.handleGetFile
s.tools["index_revision"] = s.makePackageIndexHandler(pkgIndexer)
s.tools["list_revisions"] = s.handleListRevisionsWithPackages
s.tools["delete_revision"] = s.handleDeleteRevision
}
@@ -103,8 +131,8 @@ func (s *Server) handleGetOption(ctx context.Context, args map[string]interface{
return ErrorContent(fmt.Errorf("option '%s' not found", name)), nil
}
// Get declarations
declarations, err := s.store.GetDeclarations(ctx, option.ID)
// Get declarations with file metadata
declarations, err := s.store.GetDeclarationsWithMetadata(ctx, rev.ID, option.ID)
if err != nil {
s.logger.Printf("Failed to get declarations: %v", err)
}
@@ -134,10 +162,15 @@ func (s *Server) handleGetOption(ctx context.Context, args map[string]interface{
sb.WriteString("\n**Declared in:**\n")
for _, decl := range declarations {
if decl.Line > 0 {
sb.WriteString(fmt.Sprintf("- %s:%d\n", decl.FilePath, decl.Line))
sb.WriteString(fmt.Sprintf("- %s:%d", decl.FilePath, decl.Line))
} else {
sb.WriteString(fmt.Sprintf("- %s\n", decl.FilePath))
sb.WriteString(fmt.Sprintf("- %s", decl.FilePath))
}
// Add file metadata if available
if decl.HasFile && decl.ByteSize > 0 {
sb.WriteString(fmt.Sprintf(" (%d bytes, %d lines)", decl.ByteSize, decl.LineCount))
}
sb.WriteString("\n")
}
}
@@ -199,21 +232,40 @@ func (s *Server) handleGetFile(ctx context.Context, args map[string]interface{})
return ErrorContent(fmt.Errorf("no indexed revision available")), nil
}
file, err := s.store.GetFile(ctx, rev.ID, path)
// Parse range parameters
var offset, limit int
if o, ok := args["offset"].(float64); ok {
offset = int(o)
}
if l, ok := args["limit"].(float64); ok {
limit = int(l)
}
// Use GetFileWithRange
fileRange := database.FileRange{Offset: offset, Limit: limit}
result, err := s.store.GetFileWithRange(ctx, rev.ID, path, fileRange)
if err != nil {
return ErrorContent(fmt.Errorf("failed to get file: %w", err)), nil
}
if file == nil {
if result == nil {
return ErrorContent(fmt.Errorf("file '%s' not found (files may not be indexed for this revision)", path)), nil
}
// Format output with range metadata
var sb strings.Builder
if result.TotalLines > 0 && (result.StartLine > 1 || result.EndLine < result.TotalLines) {
sb.WriteString(fmt.Sprintf("Showing lines %d-%d of %d total\n\n", result.StartLine, result.EndLine, result.TotalLines))
}
sb.WriteString(fmt.Sprintf("```%s\n%s\n```", strings.TrimPrefix(result.Extension, "."), result.Content))
return CallToolResult{
Content: []Content{TextContent(fmt.Sprintf("```%s\n%s\n```", strings.TrimPrefix(file.Extension, "."), file.Content))},
Content: []Content{TextContent(sb.String())},
}, nil
}
// makeIndexHandler creates the index_revision handler with the indexer.
func (s *Server) makeIndexHandler(indexer options.Indexer) ToolHandler {
// If pkgIndexer is non-nil, it will also index packages after options and files.
func (s *Server) makeIndexHandler(indexer options.Indexer, pkgIndexer *packages.Indexer) ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (CallToolResult, error) {
revision, _ := args["revision"].(string)
if revision == "" {
@@ -245,6 +297,17 @@ func (s *Server) makeIndexHandler(indexer options.Indexer) ToolHandler {
s.logger.Printf("Warning: file indexing failed: %v", err)
}
// Index packages if package indexer is available
var packageCount int
if pkgIndexer != nil {
pkgResult, pkgErr := pkgIndexer.IndexPackages(ctx, result.Revision.ID, result.Revision.GitHash)
if pkgErr != nil {
s.logger.Printf("Warning: package indexing failed: %v", pkgErr)
} else {
packageCount = pkgResult.PackageCount
}
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("Indexed revision: %s\n", result.Revision.GitHash))
if result.Revision.ChannelName != "" {
@@ -252,6 +315,9 @@ func (s *Server) makeIndexHandler(indexer options.Indexer) ToolHandler {
}
sb.WriteString(fmt.Sprintf("Options: %d\n", result.OptionCount))
sb.WriteString(fmt.Sprintf("Files: %d\n", fileCount))
if packageCount > 0 {
sb.WriteString(fmt.Sprintf("Packages: %d\n", packageCount))
}
// Handle Duration which may be time.Duration or interface{}
if dur, ok := result.Duration.(time.Duration); ok {
sb.WriteString(fmt.Sprintf("Duration: %s\n", dur.Round(time.Millisecond)))
@@ -263,6 +329,85 @@ func (s *Server) makeIndexHandler(indexer options.Indexer) ToolHandler {
}
}
// makePackageIndexHandler creates an index_revision handler for the packages-only server.
// It creates a revision record if needed, then indexes packages.
func (s *Server) makePackageIndexHandler(pkgIndexer *packages.Indexer) ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (CallToolResult, error) {
revision, _ := args["revision"].(string)
if revision == "" {
return ErrorContent(fmt.Errorf("revision is required")), nil
}
if err := packages.ValidateRevision(revision); err != nil {
return ErrorContent(err), nil
}
// Resolve channel aliases to git ref
ref := pkgIndexer.ResolveRevision(revision)
// Check if revision already exists
rev, err := s.store.GetRevision(ctx, ref)
if err != nil {
return ErrorContent(fmt.Errorf("failed to check revision: %w", err)), nil
}
if rev == nil {
// Also try by channel name
rev, err = s.store.GetRevisionByChannel(ctx, revision)
if err != nil {
return ErrorContent(fmt.Errorf("failed to check revision: %w", err)), nil
}
}
if rev == nil {
// Create a new revision record
commitDate, _ := pkgIndexer.GetCommitDate(ctx, ref)
rev = &database.Revision{
GitHash: ref,
ChannelName: pkgIndexer.GetChannelName(revision),
CommitDate: commitDate,
}
if err := s.store.CreateRevision(ctx, rev); err != nil {
return ErrorContent(fmt.Errorf("failed to create revision: %w", err)), nil
}
}
// Check if packages are already indexed for this revision
if rev.PackageCount > 0 {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("Revision already indexed: %s\n", rev.GitHash))
if rev.ChannelName != "" {
sb.WriteString(fmt.Sprintf("Channel: %s\n", rev.ChannelName))
}
sb.WriteString(fmt.Sprintf("Packages: %d\n", rev.PackageCount))
sb.WriteString(fmt.Sprintf("Indexed at: %s\n", rev.IndexedAt.Format("2006-01-02 15:04")))
return CallToolResult{
Content: []Content{TextContent(sb.String())},
}, nil
}
// Index packages
pkgResult, err := pkgIndexer.IndexPackages(ctx, rev.ID, rev.GitHash)
if err != nil {
return ErrorContent(fmt.Errorf("package indexing failed: %w", err)), nil
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("Indexed revision: %s\n", rev.GitHash))
if rev.ChannelName != "" {
sb.WriteString(fmt.Sprintf("Channel: %s\n", rev.ChannelName))
}
sb.WriteString(fmt.Sprintf("Packages: %d\n", pkgResult.PackageCount))
if dur, ok := pkgResult.Duration.(time.Duration); ok {
sb.WriteString(fmt.Sprintf("Duration: %s\n", dur.Round(time.Millisecond)))
}
return CallToolResult{
Content: []Content{TextContent(sb.String())},
}, nil
}
}
// handleListRevisions handles the list_revisions tool.
func (s *Server) handleListRevisions(ctx context.Context, args map[string]interface{}) (CallToolResult, error) {
revisions, err := s.store.ListRevisions(ctx)
@@ -397,3 +542,196 @@ func formatJSON(s string) string {
}
return result
}
// handleSearchPackages handles the search_packages tool.
func (s *Server) handleSearchPackages(ctx context.Context, args map[string]interface{}) (CallToolResult, error) {
query, _ := args["query"].(string)
if query == "" {
return ErrorContent(fmt.Errorf("query is required")), nil
}
revision, _ := args["revision"].(string)
rev, err := s.resolveRevision(ctx, revision)
if err != nil {
return ErrorContent(err), nil
}
if rev == nil {
return ErrorContent(fmt.Errorf("no indexed revision available")), nil
}
filters := database.PackageSearchFilters{
Limit: 50,
}
if broken, ok := args["broken"].(bool); ok {
filters.Broken = &broken
}
if unfree, ok := args["unfree"].(bool); ok {
filters.Unfree = &unfree
}
if limit, ok := args["limit"].(float64); ok && limit > 0 {
filters.Limit = int(limit)
}
pkgs, err := s.store.SearchPackages(ctx, rev.ID, query, filters)
if err != nil {
return ErrorContent(fmt.Errorf("search failed: %w", err)), nil
}
// Format results
var sb strings.Builder
sb.WriteString(fmt.Sprintf("Found %d packages matching '%s' in revision %s:\n\n", len(pkgs), query, rev.GitHash[:8]))
for _, pkg := range pkgs {
sb.WriteString(fmt.Sprintf("## %s\n", pkg.AttrPath))
sb.WriteString(fmt.Sprintf("**Name:** %s", pkg.Pname))
if pkg.Version != "" {
sb.WriteString(fmt.Sprintf(" %s", pkg.Version))
}
sb.WriteString("\n")
if pkg.Description != "" {
desc := pkg.Description
if len(desc) > 200 {
desc = desc[:200] + "..."
}
sb.WriteString(fmt.Sprintf("**Description:** %s\n", desc))
}
if pkg.Broken || pkg.Unfree || pkg.Insecure {
var flags []string
if pkg.Broken {
flags = append(flags, "broken")
}
if pkg.Unfree {
flags = append(flags, "unfree")
}
if pkg.Insecure {
flags = append(flags, "insecure")
}
sb.WriteString(fmt.Sprintf("**Flags:** %s\n", strings.Join(flags, ", ")))
}
sb.WriteString("\n")
}
return CallToolResult{
Content: []Content{TextContent(sb.String())},
}, nil
}
// handleGetPackage handles the get_package tool.
func (s *Server) handleGetPackage(ctx context.Context, args map[string]interface{}) (CallToolResult, error) {
attrPath, _ := args["attr_path"].(string)
if attrPath == "" {
return ErrorContent(fmt.Errorf("attr_path is required")), nil
}
revision, _ := args["revision"].(string)
rev, err := s.resolveRevision(ctx, revision)
if err != nil {
return ErrorContent(err), nil
}
if rev == nil {
return ErrorContent(fmt.Errorf("no indexed revision available")), nil
}
pkg, err := s.store.GetPackage(ctx, rev.ID, attrPath)
if err != nil {
return ErrorContent(fmt.Errorf("failed to get package: %w", err)), nil
}
if pkg == nil {
return ErrorContent(fmt.Errorf("package '%s' not found", attrPath)), nil
}
// Format result
var sb strings.Builder
sb.WriteString(fmt.Sprintf("# %s\n\n", pkg.AttrPath))
sb.WriteString(fmt.Sprintf("**Package name:** %s\n", pkg.Pname))
if pkg.Version != "" {
sb.WriteString(fmt.Sprintf("**Version:** %s\n", pkg.Version))
}
if pkg.Description != "" {
sb.WriteString(fmt.Sprintf("\n**Description:**\n%s\n", pkg.Description))
}
if pkg.LongDescription != "" {
sb.WriteString(fmt.Sprintf("\n**Long description:**\n%s\n", pkg.LongDescription))
}
if pkg.Homepage != "" {
sb.WriteString(fmt.Sprintf("\n**Homepage:** %s\n", pkg.Homepage))
}
if pkg.License != "" && pkg.License != "[]" {
sb.WriteString(fmt.Sprintf("\n**License:** %s\n", formatJSONArray(pkg.License)))
}
if pkg.Maintainers != "" && pkg.Maintainers != "[]" {
sb.WriteString(fmt.Sprintf("\n**Maintainers:** %s\n", formatJSONArray(pkg.Maintainers)))
}
if pkg.Platforms != "" && pkg.Platforms != "[]" {
sb.WriteString(fmt.Sprintf("\n**Platforms:** %s\n", formatJSONArray(pkg.Platforms)))
}
// Status flags
if pkg.Broken || pkg.Unfree || pkg.Insecure {
sb.WriteString("\n**Status:**\n")
if pkg.Broken {
sb.WriteString("- ⚠️ This package is marked as **broken**\n")
}
if pkg.Unfree {
sb.WriteString("- This package has an **unfree** license\n")
}
if pkg.Insecure {
sb.WriteString("- ⚠️ This package is marked as **insecure**\n")
}
}
return CallToolResult{
Content: []Content{TextContent(sb.String())},
}, nil
}
// handleListRevisionsWithPackages handles the list_revisions tool for packages mode.
func (s *Server) handleListRevisionsWithPackages(ctx context.Context, args map[string]interface{}) (CallToolResult, error) {
revisions, err := s.store.ListRevisions(ctx)
if err != nil {
return ErrorContent(fmt.Errorf("failed to list revisions: %w", err)), nil
}
if len(revisions) == 0 {
return CallToolResult{
Content: []Content{TextContent("No revisions indexed. Use the nixpkgs-search CLI to index packages.")},
}, nil
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("Indexed revisions (%d):\n\n", len(revisions)))
for _, rev := range revisions {
sb.WriteString(fmt.Sprintf("- **%s**", rev.GitHash[:12]))
if rev.ChannelName != "" {
sb.WriteString(fmt.Sprintf(" (%s)", rev.ChannelName))
}
sb.WriteString(fmt.Sprintf("\n Options: %d, Packages: %d, Indexed: %s\n",
rev.OptionCount, rev.PackageCount, rev.IndexedAt.Format("2006-01-02 15:04")))
}
return CallToolResult{
Content: []Content{TextContent(sb.String())},
}, nil
}
// formatJSONArray formats a JSON array string as a comma-separated list.
func formatJSONArray(s string) string {
if s == "" || s == "[]" {
return ""
}
var arr []string
if err := json.Unmarshal([]byte(s), &arr); err != nil {
return s
}
return strings.Join(arr, ", ")
}
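To make the fallback behavior of the helper above concrete, here it is reproduced as a self-contained program (same function body, plus the imports it needs): a valid JSON string array is joined with commas, while anything that fails to unmarshal is returned verbatim rather than dropped.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// formatJSONArray formats a JSON array string as a comma-separated
// list; empty or invalid input falls through unchanged.
func formatJSONArray(s string) string {
	if s == "" || s == "[]" {
		return ""
	}
	var arr []string
	if err := json.Unmarshal([]byte(s), &arr); err != nil {
		return s
	}
	return strings.Join(arr, ", ")
}

func main() {
	fmt.Println(formatJSONArray(`["mit", "bsd3"]`)) // mit, bsd3
	fmt.Println(formatJSONArray(`not-json`))        // not-json
}
```

The pass-through on error matters for fields like License, which nixpkgs sometimes stores as a JSON object rather than an array of strings.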


@@ -7,7 +7,19 @@ import (
"io"
"log"
"git.t-juice.club/torjus/labmcp/internal/database"
"code.t-juice.club/torjus/labmcp/internal/database"
)
// ServerMode indicates which type of tools the server should expose.
type ServerMode string
const (
// ModeOptions exposes only option-related tools.
ModeOptions ServerMode = "options"
// ModePackages exposes only package-related tools.
ModePackages ServerMode = "packages"
// ModeCustom exposes externally registered tools (no database required).
ModeCustom ServerMode = "custom"
)
// ServerConfig contains configuration for the MCP server.
@@ -18,19 +30,25 @@ type ServerConfig struct {
Version string
// Instructions are the server instructions sent to clients.
Instructions string
// InstructionsFunc, if set, is called during initialization to generate
// dynamic instructions. Its return value is appended to Instructions.
InstructionsFunc func() string
// DefaultChannel is the default channel to use when no revision is specified.
DefaultChannel string
// SourceName is the name of the source repository (e.g., "nixpkgs", "home-manager").
SourceName string
// Mode specifies which tools to expose (options or packages).
Mode ServerMode
}
// DefaultNixOSConfig returns the default configuration for NixOS options server.
func DefaultNixOSConfig() ServerConfig {
return ServerConfig{
Name: "nixos-options",
Version: "0.1.1",
Version: "0.4.0",
DefaultChannel: "nixos-stable",
SourceName: "nixpkgs",
Mode: ModeOptions,
Instructions: `NixOS Options MCP Server - Search and query NixOS configuration options.
If the current project contains a flake.lock file, you can index the exact nixpkgs revision used by the project:
@@ -39,7 +57,48 @@ If the current project contains a flake.lock file, you can index the exact nixpk
Example: If flake.lock contains "rev": "abc123...", call index_revision with revision "abc123...".
This ensures option documentation matches the nixpkgs version the project actually uses.`,
This ensures option documentation matches the nixpkgs version the project actually uses.
Note: index_revision also indexes packages when available, so both options and packages become searchable.`,
}
}
// DefaultNixpkgsPackagesConfig returns the default configuration for nixpkgs packages server.
func DefaultNixpkgsPackagesConfig() ServerConfig {
return ServerConfig{
Name: "nixpkgs-packages",
Version: "0.4.0",
DefaultChannel: "nixos-stable",
SourceName: "nixpkgs",
Mode: ModePackages,
Instructions: `Nixpkgs Packages MCP Server - Search and query Nix packages from nixpkgs.
If the current project contains a flake.lock file, you can search packages from the exact nixpkgs revision used by the project:
1. Read the flake.lock file to find the nixpkgs "rev" field
2. Call index_revision with that git hash to index packages for that specific version
Example: If flake.lock contains "rev": "abc123...", call index_revision with revision "abc123...".
This ensures package information matches the nixpkgs version the project actually uses.`,
}
}
// DefaultMonitoringConfig returns the default configuration for the lab monitoring server.
func DefaultMonitoringConfig() ServerConfig {
return ServerConfig{
Name: "lab-monitoring",
Version: "0.3.1",
Mode: ModeCustom,
Instructions: `Lab Monitoring MCP Server - Query Prometheus metrics and Alertmanager alerts.
Tools for querying your monitoring stack:
- Search and query Prometheus metrics with PromQL
- List and inspect alerts from Alertmanager
- View scrape target health status
- Manage alert silences
- Query logs via LogQL (when Loki is configured)
All queries are executed against live Prometheus, Alertmanager, and Loki HTTP APIs.`,
}
}
@@ -47,9 +106,10 @@ This ensures option documentation matches the nixpkgs version the project actual
func DefaultHomeManagerConfig() ServerConfig {
return ServerConfig{
Name: "hm-options",
Version: "0.1.1",
Version: "0.3.0",
DefaultChannel: "hm-stable",
SourceName: "home-manager",
Mode: ModeOptions,
Instructions: `Home Manager Options MCP Server - Search and query Home Manager configuration options.
If the current project contains a flake.lock file, you can index the exact home-manager revision used by the project:
@@ -62,11 +122,33 @@ This ensures option documentation matches the home-manager version the project a
}
}
// DefaultGitExplorerConfig returns the default configuration for the git-explorer server.
func DefaultGitExplorerConfig() ServerConfig {
return ServerConfig{
Name: "git-explorer",
Version: "0.1.0",
Mode: ModeCustom,
Instructions: `Git Explorer MCP Server - Read-only access to git repository information.
Tools for exploring git repositories:
- Resolve refs (branches, tags, commits) to commit hashes
- View commit logs with filtering by author, path, or range
- Get full commit details including file change statistics
- Compare commits to see which files changed
- Read file contents at any commit
- Check ancestry relationships between commits
- Search commit messages
All operations are read-only and will never modify the repository.`,
}
}
// Server is an MCP server that handles JSON-RPC requests.
type Server struct {
store database.Store
config ServerConfig
tools map[string]ToolHandler
toolDefs []Tool
initialized bool
logger *log.Logger
}
@@ -74,7 +156,7 @@ type Server struct {
// ToolHandler is a function that handles a tool call.
type ToolHandler func(ctx context.Context, args map[string]interface{}) (CallToolResult, error)
// NewServer creates a new MCP server with the given configuration.
// NewServer creates a new MCP server with a database store.
func NewServer(store database.Store, logger *log.Logger, config ServerConfig) *Server {
if logger == nil {
logger = log.New(io.Discard, "", 0)
@@ -89,6 +171,25 @@ func NewServer(store database.Store, logger *log.Logger, config ServerConfig) *S
return s
}
// NewGenericServer creates a new MCP server without a database store.
// Use RegisterTool to add tools externally.
func NewGenericServer(logger *log.Logger, config ServerConfig) *Server {
if logger == nil {
logger = log.New(io.Discard, "", 0)
}
return &Server{
config: config,
tools: make(map[string]ToolHandler),
logger: logger,
}
}
// RegisterTool registers an externally defined tool with its handler.
func (s *Server) RegisterTool(tool Tool, handler ToolHandler) {
s.toolDefs = append(s.toolDefs, tool)
s.tools[tool.Name] = handler
}
// registerTools registers all available tools.
func (s *Server) registerTools() {
// Tools will be implemented in handlers.go
@@ -172,6 +273,13 @@ func (s *Server) handleInitialize(req *Request) *Response {
s.logger.Printf("Client: %s %s, protocol: %s",
params.ClientInfo.Name, params.ClientInfo.Version, params.ProtocolVersion)
instructions := s.config.Instructions
if s.config.InstructionsFunc != nil {
if extra := s.config.InstructionsFunc(); extra != "" {
instructions += "\n\n" + extra
}
}
result := InitializeResult{
ProtocolVersion: ProtocolVersion,
Capabilities: Capabilities{
@@ -183,7 +291,7 @@ func (s *Server) handleInitialize(req *Request) *Response {
Name: s.config.Name,
Version: s.config.Version,
},
Instructions: s.config.Instructions,
Instructions: instructions,
}
return &Response{
@@ -205,6 +313,22 @@ func (s *Server) handleToolsList(req *Request) *Response {
// getToolDefinitions returns the tool definitions.
func (s *Server) getToolDefinitions() []Tool {
// For custom mode, return externally registered tools
if s.config.Mode == ModeCustom {
return s.toolDefs
}
// For packages mode, return package tools
if s.config.Mode == ModePackages {
return s.getPackageToolDefinitions()
}
// Default: options mode
return s.getOptionToolDefinitions()
}
// getOptionToolDefinitions returns the tool definitions for options mode.
func (s *Server) getOptionToolDefinitions() []Tool {
// Determine naming based on source
optionType := "NixOS"
sourceRepo := "nixpkgs"
@@ -291,13 +415,23 @@ func (s *Server) getToolDefinitions() []Tool {
Type: "string",
Description: "Git hash or channel name. Uses default if not specified.",
},
"offset": {
Type: "integer",
Description: "Line offset (0-based). Default: 0",
Default: 0,
},
"limit": {
Type: "integer",
Description: "Maximum lines to return. Default: 250, use 0 for all lines",
Default: 250,
},
},
Required: []string{"path"},
},
},
{
Name: "index_revision",
Description: fmt.Sprintf("Index a %s revision to make its options searchable", sourceRepo),
Description: s.indexRevisionDescription(sourceRepo),
InputSchema: InputSchema{
Type: "object",
Properties: map[string]Property{
@@ -334,6 +468,137 @@ func (s *Server) getToolDefinitions() []Tool {
}
}
// indexRevisionDescription returns the description for the index_revision tool,
// adjusted based on whether packages are also indexed.
func (s *Server) indexRevisionDescription(sourceRepo string) string {
if s.config.SourceName == "nixpkgs" {
return fmt.Sprintf("Index a %s revision to make its options and packages searchable", sourceRepo)
}
return fmt.Sprintf("Index a %s revision to make its options searchable", sourceRepo)
}
// getPackageToolDefinitions returns the tool definitions for packages mode.
func (s *Server) getPackageToolDefinitions() []Tool {
exampleChannels := "'nixos-unstable', 'nixos-24.05'"
exampleFilePath := "pkgs/applications/networking/browsers/firefox/default.nix"
return []Tool{
{
Name: "search_packages",
Description: "Search for Nix packages by name or description",
InputSchema: InputSchema{
Type: "object",
Properties: map[string]Property{
"query": {
Type: "string",
Description: "Search query (matches package name, attr path, and description)",
},
"revision": {
Type: "string",
Description: fmt.Sprintf("Git hash or channel name (e.g., %s). Uses default if not specified.", exampleChannels),
},
"broken": {
Type: "boolean",
Description: "Filter by broken status (true = only broken, false = only working)",
},
"unfree": {
Type: "boolean",
Description: "Filter by license (true = only unfree, false = only free)",
},
"limit": {
Type: "integer",
Description: "Maximum number of results (default: 50)",
Default: 50,
},
},
Required: []string{"query"},
},
},
{
Name: "get_package",
Description: "Get full details for a specific Nix package by attribute path",
InputSchema: InputSchema{
Type: "object",
Properties: map[string]Property{
"attr_path": {
Type: "string",
Description: "Package attribute path (e.g., 'firefox', 'python312Packages.requests')",
},
"revision": {
Type: "string",
Description: "Git hash or channel name. Uses default if not specified.",
},
},
Required: []string{"attr_path"},
},
},
{
Name: "get_file",
Description: "Fetch the contents of a file from nixpkgs",
InputSchema: InputSchema{
Type: "object",
Properties: map[string]Property{
"path": {
Type: "string",
Description: fmt.Sprintf("File path relative to nixpkgs root (e.g., '%s')", exampleFilePath),
},
"revision": {
Type: "string",
Description: "Git hash or channel name. Uses default if not specified.",
},
"offset": {
Type: "integer",
Description: "Line offset (0-based). Default: 0",
Default: 0,
},
"limit": {
Type: "integer",
Description: "Maximum lines to return. Default: 250, use 0 for all lines",
Default: 250,
},
},
Required: []string{"path"},
},
},
{
Name: "index_revision",
Description: "Index a nixpkgs revision to make its packages searchable",
InputSchema: InputSchema{
Type: "object",
Properties: map[string]Property{
"revision": {
Type: "string",
Description: fmt.Sprintf("Git hash (full or short) or channel name (e.g., %s)", exampleChannels),
},
},
Required: []string{"revision"},
},
},
{
Name: "list_revisions",
Description: "List all indexed nixpkgs revisions",
InputSchema: InputSchema{
Type: "object",
Properties: map[string]Property{},
},
},
{
Name: "delete_revision",
Description: "Delete an indexed revision and all its data",
InputSchema: InputSchema{
Type: "object",
Properties: map[string]Property{
"revision": {
Type: "string",
Description: "Git hash or channel name of the revision to delete",
},
},
Required: []string{"revision"},
},
},
}
}
// handleToolsCall handles a tool invocation.
func (s *Server) handleToolsCall(ctx context.Context, req *Request) *Response {
var params CallToolParams


@@ -8,8 +8,9 @@ import (
"strings"
"testing"
"git.t-juice.club/torjus/labmcp/internal/database"
"git.t-juice.club/torjus/labmcp/internal/nixos"
"code.t-juice.club/torjus/labmcp/internal/database"
"code.t-juice.club/torjus/labmcp/internal/nixos"
"code.t-juice.club/torjus/labmcp/internal/packages"
)
func TestServerInitialize(t *testing.T) {
@@ -145,6 +146,110 @@ func TestServerNotification(t *testing.T) {
}
}
func TestPackagesServerToolsList(t *testing.T) {
store := setupTestStore(t)
server := NewServer(store, nil, DefaultNixpkgsPackagesConfig())
pkgIndexer := packages.NewIndexer(store)
server.RegisterPackageHandlers(pkgIndexer)
input := `{"jsonrpc":"2.0","id":1,"method":"tools/list"}`
resp := runRequest(t, server, input)
if resp.Error != nil {
t.Fatalf("Unexpected error: %v", resp.Error)
}
result, ok := resp.Result.(map[string]interface{})
if !ok {
t.Fatalf("Expected map result, got %T", resp.Result)
}
tools, ok := result["tools"].([]interface{})
if !ok {
t.Fatalf("Expected tools array, got %T", result["tools"])
}
// Should have 6 tools (search_packages, get_package, get_file, index_revision, list_revisions, delete_revision)
if len(tools) != 6 {
t.Errorf("Expected 6 tools, got %d", len(tools))
}
expectedTools := map[string]bool{
"search_packages": false,
"get_package": false,
"get_file": false,
"index_revision": false,
"list_revisions": false,
"delete_revision": false,
}
for _, tool := range tools {
toolMap := tool.(map[string]interface{})
name := toolMap["name"].(string)
if _, ok := expectedTools[name]; ok {
expectedTools[name] = true
}
}
for name, found := range expectedTools {
if !found {
t.Errorf("Tool %q not found in tools list", name)
}
}
}
func TestOptionsServerWithPackagesToolsList(t *testing.T) {
store := setupTestStore(t)
server := NewServer(store, nil, DefaultNixOSConfig())
indexer := nixos.NewIndexer(store)
pkgIndexer := packages.NewIndexer(store)
server.RegisterHandlersWithPackages(indexer, pkgIndexer)
input := `{"jsonrpc":"2.0","id":1,"method":"tools/list"}`
resp := runRequest(t, server, input)
if resp.Error != nil {
t.Fatalf("Unexpected error: %v", resp.Error)
}
result, ok := resp.Result.(map[string]interface{})
if !ok {
t.Fatalf("Expected map result, got %T", resp.Result)
}
tools, ok := result["tools"].([]interface{})
if !ok {
t.Fatalf("Expected tools array, got %T", result["tools"])
}
// Should still have 6 tools (same as options-only)
if len(tools) != 6 {
t.Errorf("Expected 6 tools, got %d", len(tools))
}
// Verify index_revision is present
found := false
for _, tool := range tools {
toolMap := tool.(map[string]interface{})
if toolMap["name"].(string) == "index_revision" {
found = true
// For nixpkgs source, description should mention packages
desc := toolMap["description"].(string)
if !strings.Contains(desc, "packages") {
t.Errorf("index_revision description should mention packages, got: %s", desc)
}
break
}
}
if !found {
t.Error("index_revision tool not found in tools list")
}
}
func TestGetFilePathValidation(t *testing.T) {
store := setupTestStore(t)
server := setupTestServer(t, store)
@@ -245,7 +350,7 @@ func setupTestStore(t *testing.T) database.Store {
}
t.Cleanup(func() {
store.Close()
store.Close() //nolint:errcheck // test cleanup
})
return store


@@ -85,7 +85,7 @@ func TestSessionStoreCreate(t *testing.T) {
// Verify we can retrieve it
retrieved := store.Get(session.ID)
if retrieved == nil {
t.Error("Should be able to retrieve created session")
t.Fatal("Should be able to retrieve created session")
}
if retrieved.ID != session.ID {
t.Error("Retrieved session ID should match")
@@ -179,7 +179,7 @@ func TestSessionStoreCleanup(t *testing.T) {
// Create multiple sessions
for i := 0; i < 5; i++ {
store.Create()
_, _ = store.Create() //nolint:errcheck // test setup, error checked via count
}
if store.Count() != 5 {


@@ -182,6 +182,7 @@ func (t *HTTPTransport) handlePost(w http.ResponseWriter, r *http.Request) {
if err := json.Unmarshal(body, &req); err != nil {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
//nolint:errcheck // response already being written, can't handle encode error
json.NewEncoder(w).Encode(Response{
JSONRPC: "2.0",
Error: &Error{
@@ -237,7 +238,7 @@ func (t *HTTPTransport) handlePost(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(resp)
_ = json.NewEncoder(w).Encode(resp) //nolint:errcheck // response already being written
}
// handleInitialize handles the initialize request and creates a new session.
@@ -271,7 +272,7 @@ func (t *HTTPTransport) handleInitialize(w http.ResponseWriter, r *http.Request,
w.Header().Set("Content-Type", "application/json")
w.Header().Set("Mcp-Session-Id", session.ID)
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(resp)
_ = json.NewEncoder(w).Encode(resp) //nolint:errcheck // response already being written
}
// handleGet handles SSE stream for server-initiated notifications.


@@ -59,7 +59,7 @@ func TestHTTPTransportInitialize(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected 200, got %d", resp.StatusCode)
@@ -103,7 +103,7 @@ func TestHTTPTransportSessionRequired(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusBadRequest {
t.Errorf("Expected 400 without session, got %d", resp.StatusCode)
@@ -129,7 +129,7 @@ func TestHTTPTransportInvalidSession(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusNotFound {
t.Errorf("Expected 404 for invalid session, got %d", resp.StatusCode)
@@ -158,7 +158,7 @@ func TestHTTPTransportValidSession(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected 200 with valid session, got %d", resp.StatusCode)
@@ -185,7 +185,7 @@ func TestHTTPTransportNotificationAccepted(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusAccepted {
t.Errorf("Expected 202 for notification, got %d", resp.StatusCode)
@@ -210,7 +210,7 @@ func TestHTTPTransportDeleteSession(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusNoContent {
t.Errorf("Expected 204 for delete, got %d", resp.StatusCode)
@@ -232,7 +232,7 @@ func TestHTTPTransportDeleteNonexistentSession(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusNotFound {
t.Errorf("Expected 404 for nonexistent session, got %d", resp.StatusCode)
@@ -315,7 +315,7 @@ func TestHTTPTransportOriginValidation(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if tt.expectAllowed && resp.StatusCode == http.StatusForbidden {
t.Error("Expected request to be allowed but was forbidden")
@@ -340,7 +340,7 @@ func TestHTTPTransportSSERequiresAcceptHeader(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusNotAcceptable {
t.Errorf("Expected 406 without Accept header, got %d", resp.StatusCode)
@@ -361,7 +361,7 @@ func TestHTTPTransportSSEStream(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusOK {
t.Fatalf("Expected 200, got %d", resp.StatusCode)
@@ -425,7 +425,7 @@ func TestHTTPTransportSSEKeepalive(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusOK {
t.Fatalf("Expected 200, got %d", resp.StatusCode)
@@ -483,7 +483,7 @@ func TestHTTPTransportParseError(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected 200 (with JSON-RPC error), got %d", resp.StatusCode)
@@ -511,7 +511,7 @@ func TestHTTPTransportMethodNotAllowed(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusMethodNotAllowed {
t.Errorf("Expected 405, got %d", resp.StatusCode)
@@ -530,7 +530,7 @@ func TestHTTPTransportOptionsRequest(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusNoContent {
t.Errorf("Expected 204, got %d", resp.StatusCode)
@@ -641,7 +641,7 @@ func TestHTTPTransportRequestBodyTooLarge(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusRequestEntityTooLarge {
t.Errorf("Expected 413 for oversized request, got %d", resp.StatusCode)
@@ -670,7 +670,7 @@ func TestHTTPTransportSessionLimitReached(t *testing.T) {
if err != nil {
t.Fatalf("Request %d failed: %v", i, err)
}
resp.Body.Close()
resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusOK {
t.Errorf("Request %d: expected 200, got %d", i, resp.StatusCode)
@@ -685,7 +685,7 @@ func TestHTTPTransportSessionLimitReached(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusServiceUnavailable {
t.Errorf("Expected 503 when session limit reached, got %d", resp.StatusCode)
@@ -713,7 +713,7 @@ func TestHTTPTransportRequestBodyWithinLimit(t *testing.T) {
if err != nil {
t.Fatalf("Request failed: %v", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // test cleanup
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected 200 for valid request within limit, got %d", resp.StatusCode)

View File

@@ -0,0 +1,153 @@
package monitoring
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"time"
)
// AlertmanagerClient is an HTTP client for the Alertmanager API v2.
type AlertmanagerClient struct {
baseURL string
httpClient *http.Client
}
// NewAlertmanagerClient creates a new Alertmanager API client.
func NewAlertmanagerClient(baseURL string) *AlertmanagerClient {
return &AlertmanagerClient{
baseURL: strings.TrimRight(baseURL, "/"),
httpClient: &http.Client{
Timeout: 30 * time.Second,
},
}
}
// ListAlerts returns alerts matching the given filters.
func (c *AlertmanagerClient) ListAlerts(ctx context.Context, filters AlertFilters) ([]Alert, error) {
params := url.Values{}
if filters.Active != nil {
params.Set("active", fmt.Sprintf("%t", *filters.Active))
}
if filters.Silenced != nil {
params.Set("silenced", fmt.Sprintf("%t", *filters.Silenced))
}
if filters.Inhibited != nil {
params.Set("inhibited", fmt.Sprintf("%t", *filters.Inhibited))
}
if filters.Unprocessed != nil {
params.Set("unprocessed", fmt.Sprintf("%t", *filters.Unprocessed))
}
if filters.Receiver != "" {
params.Set("receiver", filters.Receiver)
}
for _, f := range filters.Filter {
params.Add("filter", f)
}
u := c.baseURL + "/api/v2/alerts"
if len(params) > 0 {
u += "?" + params.Encode()
}
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u, nil)
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("request failed: %w", err)
}
defer resp.Body.Close() //nolint:errcheck // cleanup on exit
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read response: %w", err)
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("unexpected status %d: %s", resp.StatusCode, string(body))
}
var alerts []Alert
if err := json.Unmarshal(body, &alerts); err != nil {
return nil, fmt.Errorf("failed to parse alerts: %w", err)
}
return alerts, nil
}
// ListSilences returns all silences.
func (c *AlertmanagerClient) ListSilences(ctx context.Context) ([]Silence, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodGet, c.baseURL+"/api/v2/silences", nil)
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("request failed: %w", err)
}
defer resp.Body.Close() //nolint:errcheck // cleanup on exit
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read response: %w", err)
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("unexpected status %d: %s", resp.StatusCode, string(body))
}
var silences []Silence
if err := json.Unmarshal(body, &silences); err != nil {
return nil, fmt.Errorf("failed to parse silences: %w", err)
}
return silences, nil
}
// CreateSilence creates a new silence and returns the silence ID.
func (c *AlertmanagerClient) CreateSilence(ctx context.Context, silence Silence) (string, error) {
data, err := json.Marshal(silence)
if err != nil {
return "", fmt.Errorf("failed to marshal silence: %w", err)
}
req, err := http.NewRequestWithContext(ctx, http.MethodPost, c.baseURL+"/api/v2/silences", bytes.NewReader(data))
if err != nil {
return "", fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
resp, err := c.httpClient.Do(req)
if err != nil {
return "", fmt.Errorf("request failed: %w", err)
}
defer resp.Body.Close() //nolint:errcheck // cleanup on exit
body, err := io.ReadAll(resp.Body)
if err != nil {
return "", fmt.Errorf("failed to read response: %w", err)
}
if resp.StatusCode != http.StatusOK {
return "", fmt.Errorf("unexpected status %d: %s", resp.StatusCode, string(body))
}
var result struct {
SilenceID string `json:"silenceID"`
}
if err := json.Unmarshal(body, &result); err != nil {
return "", fmt.Errorf("failed to parse response: %w", err)
}
return result.SilenceID, nil
}

View File

@@ -0,0 +1,175 @@
package monitoring
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
"time"
)
func TestAlertmanagerClient_ListAlerts(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/api/v2/alerts" {
t.Errorf("unexpected path: %s", r.URL.Path)
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`[
{
"annotations": {"summary": "Target is down"},
"endsAt": "2024-01-01T01:00:00Z",
"fingerprint": "abc123",
"receivers": [{"name": "default"}],
"startsAt": "2024-01-01T00:00:00Z",
"status": {"inhibitedBy": [], "silencedBy": [], "state": "active"},
"updatedAt": "2024-01-01T00:00:00Z",
"generatorURL": "http://prometheus:9090/graph",
"labels": {"alertname": "TargetDown", "severity": "critical", "instance": "node1:9100"}
}
]`))
}))
defer srv.Close()
client := NewAlertmanagerClient(srv.URL)
alerts, err := client.ListAlerts(context.Background(), AlertFilters{})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(alerts) != 1 {
t.Fatalf("expected 1 alert, got %d", len(alerts))
}
if alerts[0].Fingerprint != "abc123" {
t.Errorf("expected fingerprint=abc123, got %s", alerts[0].Fingerprint)
}
if alerts[0].Labels["alertname"] != "TargetDown" {
t.Errorf("expected alertname=TargetDown, got %s", alerts[0].Labels["alertname"])
}
if alerts[0].Status.State != "active" {
t.Errorf("expected state=active, got %s", alerts[0].Status.State)
}
}
func TestAlertmanagerClient_ListAlertsWithFilters(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
q := r.URL.Query()
if q.Get("active") != "true" {
t.Errorf("expected active=true, got %s", q.Get("active"))
}
if q.Get("silenced") != "false" {
t.Errorf("expected silenced=false, got %s", q.Get("silenced"))
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`[]`))
}))
defer srv.Close()
client := NewAlertmanagerClient(srv.URL)
active := true
silenced := false
_, err := client.ListAlerts(context.Background(), AlertFilters{
Active: &active,
Silenced: &silenced,
})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
func TestAlertmanagerClient_ListSilences(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/api/v2/silences" {
t.Errorf("unexpected path: %s", r.URL.Path)
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`[
{
"id": "silence-1",
"matchers": [{"name": "alertname", "value": "TargetDown", "isRegex": false}],
"startsAt": "2024-01-01T00:00:00Z",
"endsAt": "2024-01-01T02:00:00Z",
"createdBy": "admin",
"comment": "Maintenance window",
"status": {"state": "active"}
}
]`))
}))
defer srv.Close()
client := NewAlertmanagerClient(srv.URL)
silences, err := client.ListSilences(context.Background())
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(silences) != 1 {
t.Fatalf("expected 1 silence, got %d", len(silences))
}
if silences[0].ID != "silence-1" {
t.Errorf("expected id=silence-1, got %s", silences[0].ID)
}
if silences[0].CreatedBy != "admin" {
t.Errorf("expected createdBy=admin, got %s", silences[0].CreatedBy)
}
}
func TestAlertmanagerClient_CreateSilence(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
t.Errorf("expected POST, got %s", r.Method)
}
if r.URL.Path != "/api/v2/silences" {
t.Errorf("unexpected path: %s", r.URL.Path)
}
if r.Header.Get("Content-Type") != "application/json" {
t.Errorf("expected Content-Type=application/json, got %s", r.Header.Get("Content-Type"))
}
var silence Silence
if err := json.NewDecoder(r.Body).Decode(&silence); err != nil {
t.Fatalf("failed to decode request body: %v", err)
}
if silence.CreatedBy != "admin" {
t.Errorf("expected createdBy=admin, got %s", silence.CreatedBy)
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{"silenceID": "new-silence-id"}`))
}))
defer srv.Close()
client := NewAlertmanagerClient(srv.URL)
id, err := client.CreateSilence(context.Background(), Silence{
Matchers: []Matcher{
{Name: "alertname", Value: "TargetDown", IsRegex: false},
},
StartsAt: time.Now(),
EndsAt: time.Now().Add(2 * time.Hour),
CreatedBy: "admin",
Comment: "Test silence",
})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if id != "new-silence-id" {
t.Errorf("expected id=new-silence-id, got %s", id)
}
}
func TestAlertmanagerClient_HTTPError(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusInternalServerError)
_, _ = w.Write([]byte("internal error"))
}))
defer srv.Close()
client := NewAlertmanagerClient(srv.URL)
_, err := client.ListAlerts(context.Background(), AlertFilters{})
if err == nil {
t.Fatal("expected error, got nil")
}
}

View File

@@ -0,0 +1,437 @@
package monitoring
import (
"fmt"
"sort"
"strings"
"time"
)
const maxRows = 100
// formatInstantVector formats instant vector results as a markdown table.
func formatInstantVector(results []PromInstantVector) string {
if len(results) == 0 {
return "No results."
}
// Collect all label keys across results (excluding __name__)
labelKeys := collectLabelKeys(results)
var sb strings.Builder
// Header
sb.WriteString("| ")
if _, ok := results[0].Metric["__name__"]; ok {
sb.WriteString("Metric | ")
}
for _, key := range labelKeys {
sb.WriteString(key)
sb.WriteString(" | ")
}
sb.WriteString("Value |\n")
// Separator
sb.WriteString("| ")
if _, ok := results[0].Metric["__name__"]; ok {
sb.WriteString("--- | ")
}
for range labelKeys {
sb.WriteString("--- | ")
}
sb.WriteString("--- |\n")
// Rows
truncated := false
for i, r := range results {
if i >= maxRows {
truncated = true
break
}
sb.WriteString("| ")
if _, ok := results[0].Metric["__name__"]; ok {
sb.WriteString(r.Metric["__name__"])
sb.WriteString(" | ")
}
for _, key := range labelKeys {
sb.WriteString(r.Metric[key])
sb.WriteString(" | ")
}
// Value is at index 1 of the value tuple
if len(r.Value) >= 2 {
if v, ok := r.Value[1].(string); ok {
sb.WriteString(v)
}
}
sb.WriteString(" |\n")
}
if truncated {
sb.WriteString(fmt.Sprintf("\n*Showing %d of %d results (truncated)*\n", maxRows, len(results)))
}
return sb.String()
}
// collectLabelKeys returns sorted label keys across all results, excluding __name__.
func collectLabelKeys(results []PromInstantVector) []string {
keySet := make(map[string]struct{})
for _, r := range results {
for k := range r.Metric {
if k != "__name__" {
keySet[k] = struct{}{}
}
}
}
keys := make([]string, 0, len(keySet))
for k := range keySet {
keys = append(keys, k)
}
sort.Strings(keys)
return keys
}
// formatAlerts formats alerts as grouped markdown.
func formatAlerts(alerts []Alert) string {
if len(alerts) == 0 {
return "No alerts found."
}
// Group by alertname
groups := make(map[string][]Alert)
var order []string
for _, a := range alerts {
name := a.Labels["alertname"]
if _, exists := groups[name]; !exists {
order = append(order, name)
}
groups[name] = append(groups[name], a)
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("**%d alert(s)**\n\n", len(alerts)))
for _, name := range order {
group := groups[name]
sb.WriteString(fmt.Sprintf("## %s (%d)\n\n", name, len(group)))
for i, a := range group {
if i >= maxRows {
sb.WriteString(fmt.Sprintf("*... and %d more*\n", len(group)-maxRows))
break
}
sb.WriteString(fmt.Sprintf("**State:** %s | **Severity:** %s\n", a.Status.State, a.Labels["severity"]))
// Labels (excluding alertname and severity)
var labels []string
for k, v := range a.Labels {
if k != "alertname" && k != "severity" {
labels = append(labels, fmt.Sprintf("%s=%s", k, v))
}
}
sort.Strings(labels)
if len(labels) > 0 {
sb.WriteString(fmt.Sprintf("**Labels:** %s\n", strings.Join(labels, ", ")))
}
// Annotations (keys sorted so output order is deterministic)
annKeys := make([]string, 0, len(a.Annotations))
for k := range a.Annotations {
annKeys = append(annKeys, k)
}
sort.Strings(annKeys)
for _, k := range annKeys {
sb.WriteString(fmt.Sprintf("**%s:** %s\n", k, a.Annotations[k]))
}
sb.WriteString(fmt.Sprintf("**Fingerprint:** %s\n", a.Fingerprint))
sb.WriteString(fmt.Sprintf("**Started:** %s\n", a.StartsAt.Format(time.RFC3339)))
if len(a.Status.SilencedBy) > 0 {
sb.WriteString(fmt.Sprintf("**Silenced by:** %s\n", strings.Join(a.Status.SilencedBy, ", ")))
}
if len(a.Status.InhibitedBy) > 0 {
sb.WriteString(fmt.Sprintf("**Inhibited by:** %s\n", strings.Join(a.Status.InhibitedBy, ", ")))
}
sb.WriteString("\n")
}
}
return sb.String()
}
// formatTargets formats targets as grouped markdown.
func formatTargets(targets *PromTargetsData) string {
if targets == nil || len(targets.ActiveTargets) == 0 {
return "No active targets."
}
// Group by job
groups := make(map[string][]PromTarget)
var order []string
for _, t := range targets.ActiveTargets {
job := t.Labels["job"]
if _, exists := groups[job]; !exists {
order = append(order, job)
}
groups[job] = append(groups[job], t)
}
sort.Strings(order)
var sb strings.Builder
sb.WriteString(fmt.Sprintf("**%d active target(s)**\n\n", len(targets.ActiveTargets)))
// Count health statuses
healthCounts := make(map[string]int)
for _, t := range targets.ActiveTargets {
healthCounts[t.Health]++
}
var healthParts []string
for h, c := range healthCounts {
healthParts = append(healthParts, fmt.Sprintf("%s: %d", h, c))
}
sort.Strings(healthParts)
sb.WriteString(fmt.Sprintf("**Health summary:** %s\n\n", strings.Join(healthParts, ", ")))
for _, job := range order {
group := groups[job]
sb.WriteString(fmt.Sprintf("## %s (%d targets)\n\n", job, len(group)))
sb.WriteString("| Instance | Health | Last Scrape | Duration | Error |\n")
sb.WriteString("| --- | --- | --- | --- | --- |\n")
for _, t := range group {
instance := t.Labels["instance"]
lastScrape := t.LastScrape.Format("15:04:05")
duration := fmt.Sprintf("%.3fs", t.LastScrapeDuration)
lastErr := t.LastError
if lastErr == "" {
lastErr = "-"
}
sb.WriteString(fmt.Sprintf("| %s | %s | %s | %s | %s |\n",
instance, t.Health, lastScrape, duration, lastErr))
}
sb.WriteString("\n")
}
return sb.String()
}
// formatSilences formats silences as markdown.
func formatSilences(silences []Silence) string {
if len(silences) == 0 {
return "No silences found."
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("**%d silence(s)**\n\n", len(silences)))
for _, s := range silences {
state := "unknown"
if s.Status != nil {
state = s.Status.State
}
sb.WriteString(fmt.Sprintf("## Silence %s [%s]\n\n", s.ID, state))
// Matchers
var matchers []string
for _, m := range s.Matchers {
op := "="
if m.IsRegex {
op = "=~"
}
if m.IsEqual != nil && !*m.IsEqual {
if m.IsRegex {
op = "!~"
} else {
op = "!="
}
}
matchers = append(matchers, fmt.Sprintf("%s%s%s", m.Name, op, m.Value))
}
sb.WriteString(fmt.Sprintf("**Matchers:** %s\n", strings.Join(matchers, ", ")))
sb.WriteString(fmt.Sprintf("**Created by:** %s\n", s.CreatedBy))
sb.WriteString(fmt.Sprintf("**Comment:** %s\n", s.Comment))
sb.WriteString(fmt.Sprintf("**Starts:** %s\n", s.StartsAt.Format(time.RFC3339)))
sb.WriteString(fmt.Sprintf("**Ends:** %s\n", s.EndsAt.Format(time.RFC3339)))
sb.WriteString("\n")
}
return sb.String()
}
// formatMetricSearch formats metric search results.
func formatMetricSearch(names []string, metadata map[string][]PromMetadata) string {
if len(names) == 0 {
return "No metrics found matching the search."
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("**%d metric(s) found**\n\n", len(names)))
sb.WriteString("| Metric | Type | Help |\n")
sb.WriteString("| --- | --- | --- |\n")
truncated := false
for i, name := range names {
if i >= maxRows {
truncated = true
break
}
metaType := ""
help := ""
if metas, ok := metadata[name]; ok && len(metas) > 0 {
metaType = metas[0].Type
help = metas[0].Help
if len(help) > 100 {
help = help[:100] + "..."
}
}
sb.WriteString(fmt.Sprintf("| %s | %s | %s |\n", name, metaType, help))
}
if truncated {
sb.WriteString(fmt.Sprintf("\n*Showing %d of %d metrics (truncated)*\n", maxRows, len(names)))
}
return sb.String()
}
const maxLabelValues = 100
const maxLineLength = 500
// formatLogStreams formats Loki log query results as grouped markdown.
func formatLogStreams(data *LokiQueryData) string {
if data == nil || len(data.Result) == 0 {
return "No log results."
}
var sb strings.Builder
totalEntries := 0
for _, s := range data.Result {
totalEntries += len(s.Values)
}
sb.WriteString(fmt.Sprintf("**%d stream(s), %d total log entries**\n\n", len(data.Result), totalEntries))
for _, stream := range data.Result {
// Stream labels header
var labels []string
for k, v := range stream.Stream {
labels = append(labels, fmt.Sprintf("%s=%q", k, v))
}
sort.Strings(labels)
sb.WriteString(fmt.Sprintf("## {%s}\n\n", strings.Join(labels, ", ")))
if len(stream.Values) == 0 {
sb.WriteString("No entries.\n\n")
continue
}
sb.WriteString("| Timestamp | Log Line |\n")
sb.WriteString("| --- | --- |\n")
truncated := false
for i, entry := range stream.Values {
if i >= maxRows {
truncated = true
break
}
ts := formatNanosecondTimestamp(entry[0])
line := entry[1]
if len(line) > maxLineLength {
line = line[:maxLineLength] + "..."
}
// Escape pipe characters in log lines for markdown table
line = strings.ReplaceAll(line, "|", "\\|")
// Replace newlines with spaces for table compatibility
line = strings.ReplaceAll(line, "\n", " ")
sb.WriteString(fmt.Sprintf("| %s | %s |\n", ts, line))
}
if truncated {
sb.WriteString(fmt.Sprintf("\n*Showing %d of %d entries (truncated)*\n", maxRows, len(stream.Values)))
}
sb.WriteString("\n")
}
return sb.String()
}
// formatLabels formats a list of label names as a bullet list.
func formatLabels(labels []string) string {
if len(labels) == 0 {
return "No labels found."
}
sort.Strings(labels)
var sb strings.Builder
sb.WriteString(fmt.Sprintf("**%d label(s)**\n\n", len(labels)))
for _, label := range labels {
sb.WriteString(fmt.Sprintf("- `%s`\n", label))
}
return sb.String()
}
// formatLabelValues formats label values as a bullet list.
func formatLabelValues(label string, values []string) string {
if len(values) == 0 {
return fmt.Sprintf("No values found for label '%s'.", label)
}
sort.Strings(values)
var sb strings.Builder
sb.WriteString(fmt.Sprintf("**%d value(s) for label `%s`**\n\n", len(values), label))
truncated := false
for i, v := range values {
if i >= maxLabelValues {
truncated = true
break
}
sb.WriteString(fmt.Sprintf("- `%s`\n", v))
}
if truncated {
sb.WriteString(fmt.Sprintf("\n*Showing %d of %d values (truncated)*\n", maxLabelValues, len(values)))
}
return sb.String()
}
// formatNanosecondTimestamp converts a nanosecond Unix timestamp string to RFC3339.
func formatNanosecondTimestamp(nsStr string) string {
// Accumulate digits manually, skipping any non-digit characters, so a
// malformed timestamp degrades to a best-effort value instead of an error.
var ns int64
for _, c := range nsStr {
if c >= '0' && c <= '9' {
ns = ns*10 + int64(c-'0')
}
}
t := time.Unix(0, ns)
return t.UTC().Format(time.RFC3339)
}
// formatMetricMetadata formats metadata for a single metric.
func formatMetricMetadata(name string, metas []PromMetadata) string {
if len(metas) == 0 {
return fmt.Sprintf("No metadata found for metric '%s'.", name)
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("# %s\n\n", name))
for _, m := range metas {
sb.WriteString(fmt.Sprintf("**Type:** %s\n", m.Type))
if m.Help != "" {
sb.WriteString(fmt.Sprintf("**Help:** %s\n", m.Help))
}
if m.Unit != "" {
sb.WriteString(fmt.Sprintf("**Unit:** %s\n", m.Unit))
}
}
return sb.String()
}

View File

@@ -0,0 +1,680 @@
package monitoring
import (
"context"
"encoding/json"
"fmt"
"strings"
"time"
"code.t-juice.club/torjus/labmcp/internal/mcp"
)
// AlertSummary queries Alertmanager for active (non-silenced) alerts and returns
// a short summary string. It returns an empty string if there are no alerts or
// if Alertmanager is unreachable.
func AlertSummary(am *AlertmanagerClient) string {
active := true
silenced := false
inhibited := false
alerts, err := am.ListAlerts(context.Background(), AlertFilters{
Active: &active,
Silenced: &silenced,
Inhibited: &inhibited,
})
if err != nil || len(alerts) == 0 {
return ""
}
// Count by severity
severities := make(map[string]int)
for _, a := range alerts {
sev := a.Labels["severity"]
if sev == "" {
sev = "unknown"
}
severities[sev]++
}
var parts []string
// Show critical first if present
if n, ok := severities["critical"]; ok {
parts = append(parts, fmt.Sprintf("%d critical", n))
delete(severities, "critical")
}
if n, ok := severities["warning"]; ok {
parts = append(parts, fmt.Sprintf("%d warning", n))
delete(severities, "warning")
}
for sev, n := range severities {
parts = append(parts, fmt.Sprintf("%d %s", n, sev))
}
return fmt.Sprintf("ALERT STATUS: %d active alert(s) firing (%s). Use list_alerts for details. Let the user know about this.",
len(alerts), strings.Join(parts, ", "))
}
// HandlerOptions configures which handlers are registered.
type HandlerOptions struct {
// EnableSilences enables the create_silence tool, which is a write operation.
// Disabled by default as a safety measure.
EnableSilences bool
}
// RegisterHandlers registers all monitoring tool handlers on the MCP server.
func RegisterHandlers(server *mcp.Server, prom *PrometheusClient, am *AlertmanagerClient, loki *LokiClient, opts HandlerOptions) {
server.RegisterTool(listAlertsTool(), makeListAlertsHandler(am))
server.RegisterTool(getAlertTool(), makeGetAlertHandler(am))
server.RegisterTool(searchMetricsTool(), makeSearchMetricsHandler(prom))
server.RegisterTool(getMetricMetadataTool(), makeGetMetricMetadataHandler(prom))
server.RegisterTool(queryTool(), makeQueryHandler(prom))
server.RegisterTool(listTargetsTool(), makeListTargetsHandler(prom))
server.RegisterTool(listSilencesTool(), makeListSilencesHandler(am))
if opts.EnableSilences {
server.RegisterTool(createSilenceTool(), makeCreateSilenceHandler(am))
}
if loki != nil {
server.RegisterTool(queryLogsTool(), makeQueryLogsHandler(loki))
server.RegisterTool(listLabelsTool(), makeListLabelsHandler(loki))
server.RegisterTool(listLabelValuesTool(), makeListLabelValuesHandler(loki))
}
}
// Tool definitions
func listAlertsTool() mcp.Tool {
return mcp.Tool{
Name: "list_alerts",
Description: "List alerts from Alertmanager with optional filters",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"state": {
Type: "string",
Description: "Filter by alert state: 'active', 'suppressed', 'unprocessed', or 'all' (default: active)",
Enum: []string{"active", "suppressed", "unprocessed", "all"},
},
"severity": {
Type: "string",
Description: "Filter by severity label (e.g., 'critical', 'warning')",
},
"receiver": {
Type: "string",
Description: "Filter by receiver name",
},
},
},
}
}
func getAlertTool() mcp.Tool {
return mcp.Tool{
Name: "get_alert",
Description: "Get full details for a specific alert by fingerprint",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"fingerprint": {
Type: "string",
Description: "Alert fingerprint identifier",
},
},
Required: []string{"fingerprint"},
},
}
}
func searchMetricsTool() mcp.Tool {
return mcp.Tool{
Name: "search_metrics",
Description: "Search Prometheus metric names with optional substring filter, enriched with metadata (type, help text)",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"query": {
Type: "string",
Description: "Substring to filter metric names (e.g., 'cpu', 'memory', 'node_'). Empty returns all metrics.",
},
"limit": {
Type: "integer",
Description: "Maximum number of results (default: 50)",
Default: 50,
},
},
},
}
}
func getMetricMetadataTool() mcp.Tool {
return mcp.Tool{
Name: "get_metric_metadata",
Description: "Get type, help text, and unit for a specific Prometheus metric",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"metric": {
Type: "string",
Description: "Metric name (e.g., 'node_cpu_seconds_total')",
},
},
Required: []string{"metric"},
},
}
}
func queryTool() mcp.Tool {
return mcp.Tool{
Name: "query",
Description: "Execute an instant PromQL query against Prometheus. Supports aggregations like avg_over_time(metric[1h]), rate(), sum(), etc.",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"promql": {
Type: "string",
Description: "PromQL expression to evaluate (e.g., 'up', 'rate(http_requests_total[5m])', 'avg_over_time(node_load1[1h])')",
},
},
Required: []string{"promql"},
},
}
}
func listTargetsTool() mcp.Tool {
return mcp.Tool{
Name: "list_targets",
Description: "List Prometheus scrape targets with health status, grouped by job",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{},
},
}
}
func listSilencesTool() mcp.Tool {
return mcp.Tool{
Name: "list_silences",
Description: "List active and pending alert silences from Alertmanager",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{},
},
}
}
func createSilenceTool() mcp.Tool {
return mcp.Tool{
Name: "create_silence",
Description: `Create a new silence in Alertmanager. IMPORTANT: Always confirm with the user before creating a silence, showing them the matchers, duration, and reason.`,
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"matchers": {
Type: "string",
Description: `JSON array of matchers, e.g. [{"name":"alertname","value":"TargetDown","isRegex":false}]`,
},
"duration": {
Type: "string",
Description: "Silence duration in Go duration format (e.g., '2h', '30m', '1h30m')",
},
"author": {
Type: "string",
Description: "Author of the silence",
},
"comment": {
Type: "string",
Description: "Reason for the silence",
},
},
Required: []string{"matchers", "duration", "author", "comment"},
},
}
}
// Handler constructors
func makeListAlertsHandler(am *AlertmanagerClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
filters := AlertFilters{}
state, _ := args["state"].(string)
switch state {
case "active", "":
// Default to active alerts only (non-silenced, non-inhibited)
active := true
filters.Active = &active
silenced := false
filters.Silenced = &silenced
inhibited := false
filters.Inhibited = &inhibited
case "suppressed":
active := false
filters.Active = &active
case "unprocessed":
unprocessed := true
filters.Unprocessed = &unprocessed
case "all":
// No filters - return everything
}
if severity, ok := args["severity"].(string); ok && severity != "" {
filters.Filter = append(filters.Filter, fmt.Sprintf(`severity="%s"`, severity))
}
if receiver, ok := args["receiver"].(string); ok && receiver != "" {
filters.Receiver = receiver
}
alerts, err := am.ListAlerts(ctx, filters)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("failed to list alerts: %w", err)), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(formatAlerts(alerts))},
}, nil
}
}
func makeGetAlertHandler(am *AlertmanagerClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
fingerprint, _ := args["fingerprint"].(string)
if fingerprint == "" {
return mcp.ErrorContent(fmt.Errorf("fingerprint is required")), nil
}
// Fetch all alerts and find the one matching the fingerprint
alerts, err := am.ListAlerts(ctx, AlertFilters{})
if err != nil {
return mcp.ErrorContent(fmt.Errorf("failed to fetch alerts: %w", err)), nil
}
for _, a := range alerts {
if a.Fingerprint == fingerprint {
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(formatAlerts([]Alert{a}))},
}, nil
}
}
return mcp.ErrorContent(fmt.Errorf("alert with fingerprint '%s' not found", fingerprint)), nil
}
}
func makeSearchMetricsHandler(prom *PrometheusClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
query, _ := args["query"].(string)
limit := 50
if l, ok := args["limit"].(float64); ok && l > 0 {
limit = int(l)
}
// Get all metric names
allNames, err := prom.LabelValues(ctx, "__name__")
if err != nil {
return mcp.ErrorContent(fmt.Errorf("failed to fetch metric names: %w", err)), nil
}
// Filter by substring
var matched []string
queryLower := strings.ToLower(query)
for _, name := range allNames {
if query == "" || strings.Contains(strings.ToLower(name), queryLower) {
matched = append(matched, name)
if len(matched) >= limit {
break
}
}
}
// Fetch metadata for matched metrics
metadata, err := prom.Metadata(ctx, "")
if err != nil {
// Non-fatal: proceed without metadata
metadata = nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(formatMetricSearch(matched, metadata))},
}, nil
}
}
func makeGetMetricMetadataHandler(prom *PrometheusClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
metric, _ := args["metric"].(string)
if metric == "" {
return mcp.ErrorContent(fmt.Errorf("metric is required")), nil
}
metadata, err := prom.Metadata(ctx, metric)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("failed to fetch metadata: %w", err)), nil
}
metas := metadata[metric]
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(formatMetricMetadata(metric, metas))},
}, nil
}
}
func makeQueryHandler(prom *PrometheusClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
promql, _ := args["promql"].(string)
if promql == "" {
return mcp.ErrorContent(fmt.Errorf("promql is required")), nil
}
data, err := prom.Query(ctx, promql, time.Time{})
if err != nil {
return mcp.ErrorContent(fmt.Errorf("query failed: %w", err)), nil
}
var result string
switch data.ResultType {
case "vector":
result = formatInstantVector(data.Result)
case "scalar":
if len(data.Result) > 0 && len(data.Result[0].Value) >= 2 {
if v, ok := data.Result[0].Value[1].(string); ok {
result = fmt.Sprintf("**Scalar result:** %s", v)
}
}
if result == "" {
result = "Scalar query returned no value."
}
default:
result = fmt.Sprintf("Result type: %s\n\n%s", data.ResultType, formatInstantVector(data.Result))
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(result)},
}, nil
}
}
func makeListTargetsHandler(prom *PrometheusClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
data, err := prom.Targets(ctx)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("failed to fetch targets: %w", err)), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(formatTargets(data))},
}, nil
}
}
func makeListSilencesHandler(am *AlertmanagerClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
silences, err := am.ListSilences(ctx)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("failed to fetch silences: %w", err)), nil
}
// Filter to active/pending only
var filtered []Silence
for _, s := range silences {
if s.Status != nil && (s.Status.State == "active" || s.Status.State == "pending") {
filtered = append(filtered, s)
}
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(formatSilences(filtered))},
}, nil
}
}
func makeCreateSilenceHandler(am *AlertmanagerClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
matchersJSON, _ := args["matchers"].(string)
if matchersJSON == "" {
return mcp.ErrorContent(fmt.Errorf("matchers is required")), nil
}
durationStr, _ := args["duration"].(string)
if durationStr == "" {
return mcp.ErrorContent(fmt.Errorf("duration is required")), nil
}
author, _ := args["author"].(string)
if author == "" {
return mcp.ErrorContent(fmt.Errorf("author is required")), nil
}
comment, _ := args["comment"].(string)
if comment == "" {
return mcp.ErrorContent(fmt.Errorf("comment is required")), nil
}
// Parse matchers
var matchers []Matcher
if err := parseJSON(matchersJSON, &matchers); err != nil {
return mcp.ErrorContent(fmt.Errorf("invalid matchers JSON: %w", err)), nil
}
// Parse duration
duration, err := time.ParseDuration(durationStr)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("invalid duration: %w", err)), nil
}
now := time.Now()
silence := Silence{
Matchers: matchers,
StartsAt: now,
EndsAt: now.Add(duration),
CreatedBy: author,
Comment: comment,
}
id, err := am.CreateSilence(ctx, silence)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("failed to create silence: %w", err)), nil
}
var sb strings.Builder
sb.WriteString("Silence created successfully.\n\n")
sb.WriteString(fmt.Sprintf("**ID:** %s\n", id))
sb.WriteString(fmt.Sprintf("**Expires:** %s\n", silence.EndsAt.Format(time.RFC3339)))
sb.WriteString(fmt.Sprintf("**Author:** %s\n", author))
sb.WriteString(fmt.Sprintf("**Comment:** %s\n", comment))
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(sb.String())},
}, nil
}
}
// parseJSON is a helper to unmarshal JSON from a string.
func parseJSON(s string, v interface{}) error {
return json.Unmarshal([]byte(s), v)
}
// Loki tool definitions
func queryLogsTool() mcp.Tool {
return mcp.Tool{
Name: "query_logs",
Description: "Execute a LogQL range query against Loki to search and retrieve log entries",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"logql": {
Type: "string",
Description: `LogQL query expression (e.g., '{job="varlogs"}', '{job="nginx"} |= "error"')`,
},
"start": {
Type: "string",
Description: "Start time: relative duration (e.g., '1h', '30m'), RFC3339 timestamp, or Unix epoch seconds. Default: 1h ago",
},
"end": {
Type: "string",
Description: "End time: relative duration (e.g., '5m'), RFC3339 timestamp, or Unix epoch seconds. Default: now",
},
"limit": {
Type: "integer",
Description: "Maximum number of log entries to return (default: 100)",
Default: 100,
},
"direction": {
Type: "string",
Description: "Sort order for log entries: 'backward' (newest first) or 'forward' (oldest first)",
Enum: []string{"backward", "forward"},
},
},
Required: []string{"logql"},
},
}
}
func listLabelsTool() mcp.Tool {
return mcp.Tool{
Name: "list_labels",
Description: "List available label names from Loki",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{},
},
}
}
func listLabelValuesTool() mcp.Tool {
return mcp.Tool{
Name: "list_label_values",
Description: "List values for a specific label from Loki",
InputSchema: mcp.InputSchema{
Type: "object",
Properties: map[string]mcp.Property{
"label": {
Type: "string",
Description: "Label name to get values for (e.g., 'job', 'instance')",
},
},
Required: []string{"label"},
},
}
}
// Loki handler constructors
func makeQueryLogsHandler(loki *LokiClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
logql, _ := args["logql"].(string)
if logql == "" {
return mcp.ErrorContent(fmt.Errorf("logql is required")), nil
}
now := time.Now()
start := now.Add(-time.Hour)
end := now
if startStr, ok := args["start"].(string); ok && startStr != "" {
parsed, err := parseTimeArg(startStr, now.Add(-time.Hour))
if err != nil {
return mcp.ErrorContent(fmt.Errorf("invalid start time: %w", err)), nil
}
start = parsed
}
if endStr, ok := args["end"].(string); ok && endStr != "" {
parsed, err := parseTimeArg(endStr, now)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("invalid end time: %w", err)), nil
}
end = parsed
}
limit := 100
if l, ok := args["limit"].(float64); ok && l > 0 {
limit = int(l)
}
if limit > 5000 {
limit = 5000
}
direction := "backward"
if d, ok := args["direction"].(string); ok && d != "" {
if d != "backward" && d != "forward" {
return mcp.ErrorContent(fmt.Errorf("direction must be 'backward' or 'forward'")), nil
}
direction = d
}
data, err := loki.QueryRange(ctx, logql, start, end, limit, direction)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("log query failed: %w", err)), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(formatLogStreams(data))},
}, nil
}
}
func makeListLabelsHandler(loki *LokiClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
labels, err := loki.Labels(ctx)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("failed to list labels: %w", err)), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(formatLabels(labels))},
}, nil
}
}
func makeListLabelValuesHandler(loki *LokiClient) mcp.ToolHandler {
return func(ctx context.Context, args map[string]interface{}) (mcp.CallToolResult, error) {
label, _ := args["label"].(string)
if label == "" {
return mcp.ErrorContent(fmt.Errorf("label is required")), nil
}
values, err := loki.LabelValues(ctx, label)
if err != nil {
return mcp.ErrorContent(fmt.Errorf("failed to list label values: %w", err)), nil
}
return mcp.CallToolResult{
Content: []mcp.Content{mcp.TextContent(formatLabelValues(label, values))},
}, nil
}
}
// parseTimeArg parses a time argument that can be:
// - A relative duration (e.g., "1h", "30m", "2h30m") — interpreted as that duration ago from now
// - An RFC3339 timestamp (e.g., "2024-01-15T10:30:00Z")
// - A Unix epoch in seconds (e.g., "1705312200")
// If parsing fails, returns the provided default time along with a descriptive error.
func parseTimeArg(s string, defaultTime time.Time) (time.Time, error) {
// Try as relative duration first
if d, err := time.ParseDuration(s); err == nil {
return time.Now().Add(-d), nil
}
// Try as RFC3339
if t, err := time.Parse(time.RFC3339, s); err == nil {
return t, nil
}
// Try as Unix epoch seconds
var epoch int64
validDigits := true
for _, c := range s {
if c >= '0' && c <= '9' {
epoch = epoch*10 + int64(c-'0')
} else {
validDigits = false
break
}
}
if validDigits && len(s) > 0 {
return time.Unix(epoch, 0), nil
}
return defaultTime, fmt.Errorf("cannot parse time '%s': use relative duration (e.g., '1h'), RFC3339, or Unix epoch seconds", s)
}


@@ -0,0 +1,659 @@
package monitoring
import (
"context"
"encoding/json"
"io"
"log"
"net/http"
"net/http/httptest"
"strings"
"testing"
"code.t-juice.club/torjus/labmcp/internal/mcp"
)
// setupTestServer creates a test MCP server with monitoring handlers backed by test HTTP servers.
func setupTestServer(t *testing.T, promHandler, amHandler http.HandlerFunc, lokiHandler ...http.HandlerFunc) (*mcp.Server, func()) {
t.Helper()
promSrv := httptest.NewServer(promHandler)
amSrv := httptest.NewServer(amHandler)
logger := log.New(io.Discard, "", 0)
config := mcp.DefaultMonitoringConfig()
server := mcp.NewGenericServer(logger, config)
prom := NewPrometheusClient(promSrv.URL)
am := NewAlertmanagerClient(amSrv.URL)
var loki *LokiClient
var lokiSrv *httptest.Server
if len(lokiHandler) > 0 && lokiHandler[0] != nil {
lokiSrv = httptest.NewServer(lokiHandler[0])
loki = NewLokiClient(LokiClientOptions{BaseURL: lokiSrv.URL})
}
RegisterHandlers(server, prom, am, loki, HandlerOptions{EnableSilences: true})
cleanup := func() {
promSrv.Close()
amSrv.Close()
if lokiSrv != nil {
lokiSrv.Close()
}
}
return server, cleanup
}
func TestHandler_ListAlerts(t *testing.T) {
server, cleanup := setupTestServer(t,
nil,
func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`[
{
"annotations": {"summary": "Node is down"},
"endsAt": "2024-01-01T01:00:00Z",
"fingerprint": "fp1",
"receivers": [{"name": "default"}],
"startsAt": "2024-01-01T00:00:00Z",
"status": {"inhibitedBy": [], "silencedBy": [], "state": "active"},
"updatedAt": "2024-01-01T00:00:00Z",
"generatorURL": "",
"labels": {"alertname": "NodeDown", "severity": "critical"}
}
]`))
},
)
defer cleanup()
result := callTool(t, server, "list_alerts", map[string]interface{}{})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "NodeDown") {
t.Errorf("expected output to contain 'NodeDown', got: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "1 alert") {
t.Errorf("expected output to contain '1 alert', got: %s", result.Content[0].Text)
}
}
func TestHandler_ListAlertsDefaultsToActive(t *testing.T) {
// Test that list_alerts with no state param defaults to active filters
server, cleanup := setupTestServer(t,
nil,
func(w http.ResponseWriter, r *http.Request) {
q := r.URL.Query()
// Default should apply active filters
if q.Get("active") != "true" {
t.Errorf("expected default active=true, got %s", q.Get("active"))
}
if q.Get("silenced") != "false" {
t.Errorf("expected default silenced=false, got %s", q.Get("silenced"))
}
if q.Get("inhibited") != "false" {
t.Errorf("expected default inhibited=false, got %s", q.Get("inhibited"))
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`[]`))
},
)
defer cleanup()
result := callTool(t, server, "list_alerts", map[string]interface{}{})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
}
func TestHandler_ListAlertsStateAll(t *testing.T) {
// Test that list_alerts with state=all applies no filters
server, cleanup := setupTestServer(t,
nil,
func(w http.ResponseWriter, r *http.Request) {
q := r.URL.Query()
// state=all should not set any filter params
if q.Get("active") != "" {
t.Errorf("expected no active param for state=all, got %s", q.Get("active"))
}
if q.Get("silenced") != "" {
t.Errorf("expected no silenced param for state=all, got %s", q.Get("silenced"))
}
if q.Get("inhibited") != "" {
t.Errorf("expected no inhibited param for state=all, got %s", q.Get("inhibited"))
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`[
{
"annotations": {},
"endsAt": "2024-01-01T01:00:00Z",
"fingerprint": "fp1",
"receivers": [{"name": "default"}],
"startsAt": "2024-01-01T00:00:00Z",
"status": {"inhibitedBy": [], "silencedBy": [], "state": "active"},
"updatedAt": "2024-01-01T00:00:00Z",
"generatorURL": "",
"labels": {"alertname": "ActiveAlert", "severity": "critical"}
},
{
"annotations": {},
"endsAt": "2024-01-01T01:00:00Z",
"fingerprint": "fp2",
"receivers": [{"name": "default"}],
"startsAt": "2024-01-01T00:00:00Z",
"status": {"inhibitedBy": [], "silencedBy": ["s1"], "state": "suppressed"},
"updatedAt": "2024-01-01T00:00:00Z",
"generatorURL": "",
"labels": {"alertname": "SilencedAlert", "severity": "warning"}
}
]`))
},
)
defer cleanup()
result := callTool(t, server, "list_alerts", map[string]interface{}{
"state": "all",
})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "2 alert") {
t.Errorf("expected output to contain '2 alert', got: %s", result.Content[0].Text)
}
}
func TestHandler_GetAlert(t *testing.T) {
server, cleanup := setupTestServer(t,
nil,
func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`[
{
"annotations": {"summary": "Found it"},
"endsAt": "2024-01-01T01:00:00Z",
"fingerprint": "target-fp",
"receivers": [{"name": "default"}],
"startsAt": "2024-01-01T00:00:00Z",
"status": {"inhibitedBy": [], "silencedBy": [], "state": "active"},
"updatedAt": "2024-01-01T00:00:00Z",
"generatorURL": "",
"labels": {"alertname": "TestAlert", "severity": "warning"}
},
{
"annotations": {},
"endsAt": "2024-01-01T01:00:00Z",
"fingerprint": "other-fp",
"receivers": [{"name": "default"}],
"startsAt": "2024-01-01T00:00:00Z",
"status": {"inhibitedBy": [], "silencedBy": [], "state": "active"},
"updatedAt": "2024-01-01T00:00:00Z",
"generatorURL": "",
"labels": {"alertname": "OtherAlert", "severity": "info"}
}
]`))
},
)
defer cleanup()
result := callTool(t, server, "get_alert", map[string]interface{}{
"fingerprint": "target-fp",
})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "TestAlert") {
t.Errorf("expected output to contain 'TestAlert', got: %s", result.Content[0].Text)
}
}
func TestHandler_GetAlertNotFound(t *testing.T) {
server, cleanup := setupTestServer(t,
nil,
func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`[]`))
},
)
defer cleanup()
result := callTool(t, server, "get_alert", map[string]interface{}{
"fingerprint": "nonexistent",
})
if !result.IsError {
t.Error("expected error result for nonexistent fingerprint")
}
}
func TestHandler_Query(t *testing.T) {
server, cleanup := setupTestServer(t,
func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/api/v1/query" {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": {
"resultType": "vector",
"result": [
{
"metric": {"__name__": "up", "job": "node"},
"value": [1234567890, "1"]
}
]
}
}`))
},
nil,
)
defer cleanup()
result := callTool(t, server, "query", map[string]interface{}{
"promql": "up",
})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "node") {
t.Errorf("expected output to contain 'node', got: %s", result.Content[0].Text)
}
}
func TestHandler_ListTargets(t *testing.T) {
server, cleanup := setupTestServer(t,
func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/api/v1/targets" {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": {
"activeTargets": [
{
"labels": {"instance": "localhost:9090", "job": "prometheus"},
"scrapePool": "prometheus",
"scrapeUrl": "http://localhost:9090/metrics",
"globalUrl": "http://localhost:9090/metrics",
"lastError": "",
"lastScrape": "2024-01-01T00:00:00Z",
"lastScrapeDuration": 0.015,
"health": "up",
"scrapeInterval": "15s",
"scrapeTimeout": "10s"
}
],
"droppedTargets": []
}
}`))
},
nil,
)
defer cleanup()
result := callTool(t, server, "list_targets", map[string]interface{}{})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "prometheus") {
t.Errorf("expected output to contain 'prometheus', got: %s", result.Content[0].Text)
}
}
func TestHandler_SearchMetrics(t *testing.T) {
server, cleanup := setupTestServer(t,
func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
switch r.URL.Path {
case "/api/v1/label/__name__/values":
_, _ = w.Write([]byte(`{
"status": "success",
"data": ["node_cpu_seconds_total", "node_memory_MemTotal_bytes", "up"]
}`))
case "/api/v1/metadata":
_, _ = w.Write([]byte(`{
"status": "success",
"data": {
"node_cpu_seconds_total": [{"type": "counter", "help": "CPU time", "unit": ""}],
"node_memory_MemTotal_bytes": [{"type": "gauge", "help": "Total memory", "unit": "bytes"}]
}
}`))
default:
http.NotFound(w, r)
}
},
nil,
)
defer cleanup()
result := callTool(t, server, "search_metrics", map[string]interface{}{
"query": "node",
})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "node_cpu") {
t.Errorf("expected output to contain 'node_cpu', got: %s", result.Content[0].Text)
}
// "up" should be filtered out since it doesn't match "node"
if strings.Contains(result.Content[0].Text, "| up |") {
t.Errorf("expected 'up' to be filtered out, got: %s", result.Content[0].Text)
}
}
func TestHandler_ListSilences(t *testing.T) {
server, cleanup := setupTestServer(t,
nil,
func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/api/v2/silences" {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`[
{
"id": "s1",
"matchers": [{"name": "alertname", "value": "Test", "isRegex": false}],
"startsAt": "2024-01-01T00:00:00Z",
"endsAt": "2024-01-01T02:00:00Z",
"createdBy": "admin",
"comment": "Testing",
"status": {"state": "active"}
},
{
"id": "s2",
"matchers": [{"name": "job", "value": "node", "isRegex": false}],
"startsAt": "2023-01-01T00:00:00Z",
"endsAt": "2023-01-01T02:00:00Z",
"createdBy": "admin",
"comment": "Old",
"status": {"state": "expired"}
}
]`))
},
)
defer cleanup()
result := callTool(t, server, "list_silences", map[string]interface{}{})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
// Should show active silence but filter out expired
if !strings.Contains(result.Content[0].Text, "s1") {
t.Errorf("expected active silence s1 in output, got: %s", result.Content[0].Text)
}
if strings.Contains(result.Content[0].Text, "s2") {
t.Errorf("expected expired silence s2 to be filtered out, got: %s", result.Content[0].Text)
}
}
func TestHandler_ToolCount(t *testing.T) {
server, cleanup := setupTestServer(t,
func(w http.ResponseWriter, r *http.Request) {},
func(w http.ResponseWriter, r *http.Request) {},
)
defer cleanup()
tools := listTools(t, server)
// Without Loki: 7 base + 1 silence = 8
if len(tools) != 8 {
t.Errorf("expected 8 tools with silences enabled (no Loki), got %d", len(tools))
for _, tool := range tools {
t.Logf(" tool: %s", tool.Name)
}
}
// Verify create_silence is present
found := false
for _, tool := range tools {
if tool.Name == "create_silence" {
found = true
break
}
}
if !found {
t.Error("expected create_silence tool when silences enabled")
}
}
func TestHandler_ToolCountWithLoki(t *testing.T) {
server, cleanup := setupTestServer(t,
func(w http.ResponseWriter, r *http.Request) {},
func(w http.ResponseWriter, r *http.Request) {},
func(w http.ResponseWriter, r *http.Request) {},
)
defer cleanup()
tools := listTools(t, server)
// With Loki: 7 base + 1 silence + 3 loki = 11
if len(tools) != 11 {
t.Errorf("expected 11 tools with silences and Loki enabled, got %d", len(tools))
for _, tool := range tools {
t.Logf(" tool: %s", tool.Name)
}
}
// Verify Loki tools are present
lokiTools := map[string]bool{"query_logs": false, "list_labels": false, "list_label_values": false}
for _, tool := range tools {
if _, ok := lokiTools[tool.Name]; ok {
lokiTools[tool.Name] = true
}
}
for name, found := range lokiTools {
if !found {
t.Errorf("expected %s tool when Loki enabled", name)
}
}
}
func TestHandler_ToolCountWithoutSilences(t *testing.T) {
promSrv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))
amSrv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))
defer promSrv.Close()
defer amSrv.Close()
logger := log.New(io.Discard, "", 0)
config := mcp.DefaultMonitoringConfig()
server := mcp.NewGenericServer(logger, config)
prom := NewPrometheusClient(promSrv.URL)
am := NewAlertmanagerClient(amSrv.URL)
RegisterHandlers(server, prom, am, nil, HandlerOptions{EnableSilences: false})
tools := listTools(t, server)
if len(tools) != 7 {
t.Errorf("expected 7 tools without silences, got %d", len(tools))
for _, tool := range tools {
t.Logf(" tool: %s", tool.Name)
}
}
// Verify create_silence is NOT present
for _, tool := range tools {
if tool.Name == "create_silence" {
t.Error("expected create_silence tool to be absent when silences disabled")
}
}
}
func listTools(t *testing.T, server *mcp.Server) []mcp.Tool {
t.Helper()
req := &mcp.Request{
JSONRPC: "2.0",
ID: 1,
Method: "tools/list",
}
resp := server.HandleRequest(context.Background(), req)
if resp == nil {
t.Fatal("expected response, got nil")
}
if resp.Error != nil {
t.Fatalf("unexpected error: %s", resp.Error.Message)
}
resultJSON, err := json.Marshal(resp.Result)
if err != nil {
t.Fatalf("failed to marshal result: %v", err)
}
var listResult mcp.ListToolsResult
if err := json.Unmarshal(resultJSON, &listResult); err != nil {
t.Fatalf("failed to unmarshal result: %v", err)
}
return listResult.Tools
}
func TestHandler_QueryLogs(t *testing.T) {
server, cleanup := setupTestServer(t,
nil,
nil,
func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/loki/api/v1/query_range" {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": {
"resultType": "streams",
"result": [
{
"stream": {"job": "varlogs", "filename": "/var/log/syslog"},
"values": [
["1704067200000000000", "Jan 1 00:00:00 host kernel: test message"]
]
}
]
}
}`))
},
)
defer cleanup()
result := callTool(t, server, "query_logs", map[string]interface{}{
"logql": `{job="varlogs"}`,
})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "varlogs") {
t.Errorf("expected output to contain 'varlogs', got: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "test message") {
t.Errorf("expected output to contain 'test message', got: %s", result.Content[0].Text)
}
}
func TestHandler_ListLabels(t *testing.T) {
server, cleanup := setupTestServer(t,
nil,
nil,
func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/loki/api/v1/labels" {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": ["job", "instance", "filename"]
}`))
},
)
defer cleanup()
result := callTool(t, server, "list_labels", map[string]interface{}{})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "3 label") {
t.Errorf("expected output to contain '3 label', got: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "job") {
t.Errorf("expected output to contain 'job', got: %s", result.Content[0].Text)
}
}
func TestHandler_ListLabelValues(t *testing.T) {
server, cleanup := setupTestServer(t,
nil,
nil,
func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/loki/api/v1/label/job/values" {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": ["varlogs", "nginx", "systemd"]
}`))
},
)
defer cleanup()
result := callTool(t, server, "list_label_values", map[string]interface{}{
"label": "job",
})
if result.IsError {
t.Fatalf("unexpected error: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "3 value") {
t.Errorf("expected output to contain '3 value', got: %s", result.Content[0].Text)
}
if !strings.Contains(result.Content[0].Text, "nginx") {
t.Errorf("expected output to contain 'nginx', got: %s", result.Content[0].Text)
}
}
// callTool is a test helper that calls a tool through the MCP server.
func callTool(t *testing.T, server *mcp.Server, name string, args map[string]interface{}) mcp.CallToolResult {
t.Helper()
params := mcp.CallToolParams{
Name: name,
Arguments: args,
}
paramsJSON, err := json.Marshal(params)
if err != nil {
t.Fatalf("failed to marshal params: %v", err)
}
req := &mcp.Request{
JSONRPC: "2.0",
ID: 1,
Method: "tools/call",
Params: paramsJSON,
}
resp := server.HandleRequest(context.Background(), req)
if resp == nil {
t.Fatal("expected response, got nil")
}
if resp.Error != nil {
t.Fatalf("JSON-RPC error: %s", resp.Error.Message)
}
resultJSON, err := json.Marshal(resp.Result)
if err != nil {
t.Fatalf("failed to marshal result: %v", err)
}
var result mcp.CallToolResult
if err := json.Unmarshal(resultJSON, &result); err != nil {
t.Fatalf("failed to unmarshal result: %v", err)
}
return result
}

internal/monitoring/loki.go (new file, 137 lines)

@@ -0,0 +1,137 @@
package monitoring
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"time"
)
// LokiClientOptions configures the Loki client.
type LokiClientOptions struct {
BaseURL string
Username string
Password string
}
// LokiClient is an HTTP client for the Loki API.
type LokiClient struct {
baseURL string
username string
password string
httpClient *http.Client
}
// NewLokiClient creates a new Loki API client.
func NewLokiClient(opts LokiClientOptions) *LokiClient {
return &LokiClient{
baseURL: strings.TrimRight(opts.BaseURL, "/"),
username: opts.Username,
password: opts.Password,
httpClient: &http.Client{
Timeout: 30 * time.Second,
},
}
}
// QueryRange executes a LogQL range query against Loki.
func (c *LokiClient) QueryRange(ctx context.Context, logql string, start, end time.Time, limit int, direction string) (*LokiQueryData, error) {
params := url.Values{}
params.Set("query", logql)
params.Set("start", fmt.Sprintf("%d", start.UnixNano()))
params.Set("end", fmt.Sprintf("%d", end.UnixNano()))
if limit > 0 {
params.Set("limit", fmt.Sprintf("%d", limit))
}
if direction != "" {
params.Set("direction", direction)
}
body, err := c.get(ctx, "/loki/api/v1/query_range", params)
if err != nil {
return nil, fmt.Errorf("query range failed: %w", err)
}
var data LokiQueryData
if err := json.Unmarshal(body, &data); err != nil {
return nil, fmt.Errorf("failed to parse query data: %w", err)
}
return &data, nil
}
// Labels returns all available label names from Loki.
func (c *LokiClient) Labels(ctx context.Context) ([]string, error) {
body, err := c.get(ctx, "/loki/api/v1/labels", nil)
if err != nil {
return nil, fmt.Errorf("labels failed: %w", err)
}
var labels []string
if err := json.Unmarshal(body, &labels); err != nil {
return nil, fmt.Errorf("failed to parse labels: %w", err)
}
return labels, nil
}
// LabelValues returns all values for a given label name from Loki.
func (c *LokiClient) LabelValues(ctx context.Context, label string) ([]string, error) {
path := fmt.Sprintf("/loki/api/v1/label/%s/values", url.PathEscape(label))
body, err := c.get(ctx, path, nil)
if err != nil {
return nil, fmt.Errorf("label values failed: %w", err)
}
var values []string
if err := json.Unmarshal(body, &values); err != nil {
return nil, fmt.Errorf("failed to parse label values: %w", err)
}
return values, nil
}
// get performs a GET request and returns the "data" field from the Loki response envelope.
// Loki uses the same {"status":"success","data":...} format as Prometheus.
func (c *LokiClient) get(ctx context.Context, path string, params url.Values) (json.RawMessage, error) {
u := c.baseURL + path
if len(params) > 0 {
u += "?" + params.Encode()
}
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u, nil)
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
if c.username != "" {
req.SetBasicAuth(c.username, c.password)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("request failed: %w", err)
}
defer resp.Body.Close() //nolint:errcheck // cleanup on exit
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read response: %w", err)
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("unexpected status %d: %s", resp.StatusCode, string(body))
}
var promResp PromResponse
if err := json.Unmarshal(body, &promResp); err != nil {
return nil, fmt.Errorf("failed to parse response: %w", err)
}
if promResp.Status != "success" {
return nil, fmt.Errorf("loki error (%s): %s", promResp.ErrorType, promResp.Error)
}
return promResp.Data, nil
}


@@ -0,0 +1,221 @@
package monitoring
import (
"context"
"net/http"
"net/http/httptest"
"testing"
"time"
)
func TestLokiClient_QueryRange(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/loki/api/v1/query_range" {
t.Errorf("unexpected path: %s", r.URL.Path)
}
if r.URL.Query().Get("query") != `{job="varlogs"}` {
t.Errorf("unexpected query param: %s", r.URL.Query().Get("query"))
}
if r.URL.Query().Get("direction") != "backward" {
t.Errorf("unexpected direction: %s", r.URL.Query().Get("direction"))
}
if r.URL.Query().Get("limit") != "10" {
t.Errorf("unexpected limit: %s", r.URL.Query().Get("limit"))
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": {
"resultType": "streams",
"result": [
{
"stream": {"job": "varlogs", "filename": "/var/log/syslog"},
"values": [
["1234567890000000000", "line 1"],
["1234567891000000000", "line 2"]
]
}
]
}
}`))
}))
defer srv.Close()
client := NewLokiClient(LokiClientOptions{BaseURL: srv.URL})
start := time.Unix(0, 1234567890000000000)
end := time.Unix(0, 1234567899000000000)
data, err := client.QueryRange(context.Background(), `{job="varlogs"}`, start, end, 10, "backward")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if data.ResultType != "streams" {
t.Errorf("expected resultType=streams, got %s", data.ResultType)
}
if len(data.Result) != 1 {
t.Fatalf("expected 1 stream, got %d", len(data.Result))
}
if data.Result[0].Stream["job"] != "varlogs" {
t.Errorf("expected job=varlogs, got %s", data.Result[0].Stream["job"])
}
if len(data.Result[0].Values) != 2 {
t.Fatalf("expected 2 entries, got %d", len(data.Result[0].Values))
}
if data.Result[0].Values[0][1] != "line 1" {
t.Errorf("expected first line='line 1', got %s", data.Result[0].Values[0][1])
}
}
func TestLokiClient_QueryRangeError(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "error",
"errorType": "bad_data",
"error": "invalid LogQL query"
}`))
}))
defer srv.Close()
client := NewLokiClient(LokiClientOptions{BaseURL: srv.URL})
_, err := client.QueryRange(context.Background(), "invalid{", time.Now().Add(-time.Hour), time.Now(), 100, "backward")
if err == nil {
t.Fatal("expected error, got nil")
}
if !contains(err.Error(), "invalid LogQL query") {
t.Errorf("expected error to contain 'invalid LogQL query', got: %s", err.Error())
}
}
func TestLokiClient_Labels(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/loki/api/v1/labels" {
t.Errorf("unexpected path: %s", r.URL.Path)
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": ["job", "instance", "filename"]
}`))
}))
defer srv.Close()
client := NewLokiClient(LokiClientOptions{BaseURL: srv.URL})
labels, err := client.Labels(context.Background())
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(labels) != 3 {
t.Fatalf("expected 3 labels, got %d", len(labels))
}
if labels[0] != "job" {
t.Errorf("expected first label=job, got %s", labels[0])
}
}
func TestLokiClient_LabelValues(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/loki/api/v1/label/job/values" {
t.Errorf("unexpected path: %s, expected /loki/api/v1/label/job/values", r.URL.Path)
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": ["varlogs", "nginx", "systemd"]
}`))
}))
defer srv.Close()
client := NewLokiClient(LokiClientOptions{BaseURL: srv.URL})
values, err := client.LabelValues(context.Background(), "job")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(values) != 3 {
t.Fatalf("expected 3 values, got %d", len(values))
}
if values[0] != "varlogs" {
t.Errorf("expected first value=varlogs, got %s", values[0])
}
}
func TestLokiClient_BasicAuth(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
user, pass, ok := r.BasicAuth()
if !ok {
t.Error("expected basic auth to be set")
}
if user != "myuser" {
t.Errorf("expected username=myuser, got %s", user)
}
if pass != "mypass" {
t.Errorf("expected password=mypass, got %s", pass)
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": ["job"]
}`))
}))
defer srv.Close()
client := NewLokiClient(LokiClientOptions{
BaseURL: srv.URL,
Username: "myuser",
Password: "mypass",
})
labels, err := client.Labels(context.Background())
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(labels) != 1 || labels[0] != "job" {
t.Errorf("unexpected labels: %v", labels)
}
}
func TestLokiClient_NoAuthWhenNoCredentials(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if _, _, ok := r.BasicAuth(); ok {
t.Error("expected no basic auth header, but it was set")
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": ["job"]
}`))
}))
defer srv.Close()
client := NewLokiClient(LokiClientOptions{BaseURL: srv.URL})
labels, err := client.Labels(context.Background())
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(labels) != 1 || labels[0] != "job" {
t.Errorf("unexpected labels: %v", labels)
}
}
func TestLokiClient_HTTPError(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusInternalServerError)
_, _ = w.Write([]byte("internal error"))
}))
defer srv.Close()
client := NewLokiClient(LokiClientOptions{BaseURL: srv.URL})
_, err := client.QueryRange(context.Background(), `{job="test"}`, time.Now().Add(-time.Hour), time.Now(), 100, "backward")
if err == nil {
t.Fatal("expected error, got nil")
}
if !contains(err.Error(), "500") {
t.Errorf("expected error to contain status code, got: %s", err.Error())
}
}


@@ -0,0 +1,135 @@
package monitoring
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"time"
)
// PrometheusClient is an HTTP client for the Prometheus API.
type PrometheusClient struct {
baseURL string
httpClient *http.Client
}
// NewPrometheusClient creates a new Prometheus API client.
func NewPrometheusClient(baseURL string) *PrometheusClient {
return &PrometheusClient{
baseURL: strings.TrimRight(baseURL, "/"),
httpClient: &http.Client{
Timeout: 30 * time.Second,
},
}
}
// Query executes an instant PromQL query. If ts is zero, the current time is used.
func (c *PrometheusClient) Query(ctx context.Context, promql string, ts time.Time) (*PromQueryData, error) {
params := url.Values{}
params.Set("query", promql)
if !ts.IsZero() {
params.Set("time", fmt.Sprintf("%d", ts.Unix()))
}
body, err := c.get(ctx, "/api/v1/query", params)
if err != nil {
return nil, fmt.Errorf("query failed: %w", err)
}
var data PromQueryData
if err := json.Unmarshal(body, &data); err != nil {
return nil, fmt.Errorf("failed to parse query data: %w", err)
}
return &data, nil
}
// LabelValues returns all values for a given label name.
func (c *PrometheusClient) LabelValues(ctx context.Context, label string) ([]string, error) {
path := fmt.Sprintf("/api/v1/label/%s/values", url.PathEscape(label))
body, err := c.get(ctx, path, nil)
if err != nil {
return nil, fmt.Errorf("label values failed: %w", err)
}
var values []string
if err := json.Unmarshal(body, &values); err != nil {
return nil, fmt.Errorf("failed to parse label values: %w", err)
}
return values, nil
}
// Metadata returns metadata for metrics. If metric is empty, returns metadata for all metrics.
func (c *PrometheusClient) Metadata(ctx context.Context, metric string) (map[string][]PromMetadata, error) {
params := url.Values{}
if metric != "" {
params.Set("metric", metric)
}
body, err := c.get(ctx, "/api/v1/metadata", params)
if err != nil {
return nil, fmt.Errorf("metadata failed: %w", err)
}
var metadata map[string][]PromMetadata
if err := json.Unmarshal(body, &metadata); err != nil {
return nil, fmt.Errorf("failed to parse metadata: %w", err)
}
return metadata, nil
}
// Targets returns the current scrape targets.
func (c *PrometheusClient) Targets(ctx context.Context) (*PromTargetsData, error) {
body, err := c.get(ctx, "/api/v1/targets", nil)
if err != nil {
return nil, fmt.Errorf("targets failed: %w", err)
}
var data PromTargetsData
if err := json.Unmarshal(body, &data); err != nil {
return nil, fmt.Errorf("failed to parse targets data: %w", err)
}
return &data, nil
}
// get performs a GET request and returns the "data" field from the Prometheus response envelope.
func (c *PrometheusClient) get(ctx context.Context, path string, params url.Values) (json.RawMessage, error) {
u := c.baseURL + path
if len(params) > 0 {
u += "?" + params.Encode()
}
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u, nil)
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("request failed: %w", err)
}
defer resp.Body.Close() //nolint:errcheck // cleanup on exit
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read response: %w", err)
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("unexpected status %d: %s", resp.StatusCode, string(body))
}
var promResp PromResponse
if err := json.Unmarshal(body, &promResp); err != nil {
return nil, fmt.Errorf("failed to parse response: %w", err)
}
if promResp.Status != "success" {
return nil, fmt.Errorf("prometheus error (%s): %s", promResp.ErrorType, promResp.Error)
}
return promResp.Data, nil
}


@@ -0,0 +1,209 @@
package monitoring
import (
"context"
"net/http"
"net/http/httptest"
"testing"
"time"
)
func TestPrometheusClient_Query(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/api/v1/query" {
t.Errorf("unexpected path: %s", r.URL.Path)
}
if r.URL.Query().Get("query") != "up" {
t.Errorf("unexpected query param: %s", r.URL.Query().Get("query"))
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": {
"resultType": "vector",
"result": [
{
"metric": {"__name__": "up", "job": "prometheus", "instance": "localhost:9090"},
"value": [1234567890, "1"]
},
{
"metric": {"__name__": "up", "job": "node", "instance": "localhost:9100"},
"value": [1234567890, "0"]
}
]
}
}`))
}))
defer srv.Close()
client := NewPrometheusClient(srv.URL)
data, err := client.Query(context.Background(), "up", time.Time{})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if data.ResultType != "vector" {
t.Errorf("expected resultType=vector, got %s", data.ResultType)
}
if len(data.Result) != 2 {
t.Fatalf("expected 2 results, got %d", len(data.Result))
}
if data.Result[0].Metric["job"] != "prometheus" {
t.Errorf("expected job=prometheus, got %s", data.Result[0].Metric["job"])
}
}
func TestPrometheusClient_QueryError(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "error",
"errorType": "bad_data",
"error": "invalid expression"
}`))
}))
defer srv.Close()
client := NewPrometheusClient(srv.URL)
_, err := client.Query(context.Background(), "invalid{", time.Time{})
if err == nil {
t.Fatal("expected error, got nil")
}
if !contains(err.Error(), "invalid expression") {
t.Errorf("expected error to contain 'invalid expression', got: %s", err.Error())
}
}
func TestPrometheusClient_LabelValues(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/api/v1/label/__name__/values" {
t.Errorf("unexpected path: %s", r.URL.Path)
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": ["up", "node_cpu_seconds_total", "prometheus_build_info"]
}`))
}))
defer srv.Close()
client := NewPrometheusClient(srv.URL)
values, err := client.LabelValues(context.Background(), "__name__")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(values) != 3 {
t.Fatalf("expected 3 values, got %d", len(values))
}
if values[0] != "up" {
t.Errorf("expected first value=up, got %s", values[0])
}
}
func TestPrometheusClient_Metadata(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/api/v1/metadata" {
t.Errorf("unexpected path: %s", r.URL.Path)
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": {
"up": [{"type": "gauge", "help": "Whether the target is up.", "unit": ""}],
"node_cpu_seconds_total": [{"type": "counter", "help": "CPU seconds spent.", "unit": "seconds"}]
}
}`))
}))
defer srv.Close()
client := NewPrometheusClient(srv.URL)
metadata, err := client.Metadata(context.Background(), "")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(metadata) != 2 {
t.Fatalf("expected 2 metrics, got %d", len(metadata))
}
if metadata["up"][0].Type != "gauge" {
t.Errorf("expected up type=gauge, got %s", metadata["up"][0].Type)
}
}
func TestPrometheusClient_Targets(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/api/v1/targets" {
t.Errorf("unexpected path: %s", r.URL.Path)
}
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"status": "success",
"data": {
"activeTargets": [
{
"labels": {"instance": "localhost:9090", "job": "prometheus"},
"scrapePool": "prometheus",
"scrapeUrl": "http://localhost:9090/metrics",
"globalUrl": "http://localhost:9090/metrics",
"lastError": "",
"lastScrape": "2024-01-01T00:00:00Z",
"lastScrapeDuration": 0.01,
"health": "up",
"scrapeInterval": "15s",
"scrapeTimeout": "10s"
}
],
"droppedTargets": []
}
}`))
}))
defer srv.Close()
client := NewPrometheusClient(srv.URL)
data, err := client.Targets(context.Background())
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(data.ActiveTargets) != 1 {
t.Fatalf("expected 1 active target, got %d", len(data.ActiveTargets))
}
if data.ActiveTargets[0].Health != "up" {
t.Errorf("expected health=up, got %s", data.ActiveTargets[0].Health)
}
}
func TestPrometheusClient_HTTPError(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusInternalServerError)
_, _ = w.Write([]byte("internal error"))
}))
defer srv.Close()
client := NewPrometheusClient(srv.URL)
_, err := client.Query(context.Background(), "up", time.Time{})
if err == nil {
t.Fatal("expected error, got nil")
}
if !contains(err.Error(), "500") {
t.Errorf("expected error to contain status code, got: %s", err.Error())
}
}
// contains reports whether substr is within s. It is a naive substring
// search kept local to avoid a strings import in this test file.
func contains(s, substr string) bool {
return len(s) >= len(substr) && searchString(s, substr)
}
// searchString scans s for substr at every offset.
func searchString(s, substr string) bool {
for i := 0; i <= len(s)-len(substr); i++ {
if s[i:i+len(substr)] == substr {
return true
}
}
return false
}


@@ -0,0 +1,137 @@
package monitoring
import (
"encoding/json"
"time"
)
// Prometheus API response types
// PromResponse is the standard Prometheus API response envelope.
type PromResponse struct {
Status string `json:"status"`
Data json.RawMessage `json:"data,omitempty"`
ErrorType string `json:"errorType,omitempty"`
Error string `json:"error,omitempty"`
}
// PromQueryData represents the data field for query results.
type PromQueryData struct {
ResultType string `json:"resultType"`
Result []PromInstantVector `json:"result"`
}
// PromInstantVector represents a single instant vector result.
type PromInstantVector struct {
Metric map[string]string `json:"metric"`
Value [2]interface{} `json:"value"` // [timestamp, value_string]
}
// PromScalar represents a scalar query result.
type PromScalar [2]interface{} // [timestamp, value_string]
// PromMetadata represents metadata for a single metric.
type PromMetadata struct {
Type string `json:"type"`
Help string `json:"help"`
Unit string `json:"unit"`
}
// PromTarget represents a single scrape target.
type PromTarget struct {
DiscoveredLabels map[string]string `json:"discoveredLabels"`
Labels map[string]string `json:"labels"`
ScrapePool string `json:"scrapePool"`
ScrapeURL string `json:"scrapeUrl"`
GlobalURL string `json:"globalUrl"`
LastError string `json:"lastError"`
LastScrape time.Time `json:"lastScrape"`
LastScrapeDuration float64 `json:"lastScrapeDuration"`
Health string `json:"health"`
ScrapeInterval string `json:"scrapeInterval"`
ScrapeTimeout string `json:"scrapeTimeout"`
}
// PromTargetsData represents the data field for targets results.
type PromTargetsData struct {
ActiveTargets []PromTarget `json:"activeTargets"`
DroppedTargets []PromTarget `json:"droppedTargets"`
}
// Alertmanager API response types
// Alert represents an alert from the Alertmanager API v2.
type Alert struct {
Annotations map[string]string `json:"annotations"`
EndsAt time.Time `json:"endsAt"`
Fingerprint string `json:"fingerprint"`
Receivers []AlertReceiver `json:"receivers"`
StartsAt time.Time `json:"startsAt"`
Status AlertStatus `json:"status"`
UpdatedAt time.Time `json:"updatedAt"`
GeneratorURL string `json:"generatorURL"`
Labels map[string]string `json:"labels"`
}
// AlertReceiver represents an alert receiver.
type AlertReceiver struct {
Name string `json:"name"`
}
// AlertStatus represents the status of an alert.
type AlertStatus struct {
InhibitedBy []string `json:"inhibitedBy"`
SilencedBy []string `json:"silencedBy"`
State string `json:"state"` // "active", "suppressed", "unprocessed"
}
// AlertFilters contains filters for listing alerts.
type AlertFilters struct {
Active *bool
Silenced *bool
Inhibited *bool
Unprocessed *bool
Filter []string // PromQL-style label matchers, e.g. {severity="critical"}
Receiver string
}
// Silence represents a silence from the Alertmanager API v2.
type Silence struct {
ID string `json:"id,omitempty"`
Matchers []Matcher `json:"matchers"`
StartsAt time.Time `json:"startsAt"`
EndsAt time.Time `json:"endsAt"`
CreatedBy string `json:"createdBy"`
Comment string `json:"comment"`
Status *SilenceStatus `json:"status,omitempty"`
}
// SilenceStatus represents the status of a silence.
type SilenceStatus struct {
State string `json:"state"` // "active", "pending", "expired"
}
// Matcher represents a label matcher for silences.
type Matcher struct {
Name string `json:"name"`
Value string `json:"value"`
IsRegex bool `json:"isRegex"`
IsEqual *bool `json:"isEqual,omitempty"`
}
// Loki API response types
// LokiQueryData represents the data field for Loki query results.
type LokiQueryData struct {
ResultType string `json:"resultType"`
Result []LokiStream `json:"result"`
}
// LokiStream represents a single log stream with its entries.
type LokiStream struct {
Stream map[string]string `json:"stream"`
Values []LokiEntry `json:"values"`
}
// LokiEntry represents a log entry as [nanosecond_timestamp, log_line].
type LokiEntry [2]string


@@ -15,8 +15,8 @@ import (
"strings"
"time"
"git.t-juice.club/torjus/labmcp/internal/database"
"git.t-juice.club/torjus/labmcp/internal/options"
"code.t-juice.club/torjus/labmcp/internal/database"
"code.t-juice.club/torjus/labmcp/internal/options"
)
// revisionPattern validates revision strings to prevent injection attacks.
@@ -91,7 +91,7 @@ func (idx *Indexer) IndexRevision(ctx context.Context, revision string) (*IndexR
if err != nil {
return nil, fmt.Errorf("failed to open options.json: %w", err)
}
defer optionsFile.Close()
defer optionsFile.Close() //nolint:errcheck // read-only file
options, err := ParseOptions(optionsFile)
if err != nil {
@@ -119,7 +119,7 @@ func (idx *Indexer) IndexRevision(ctx context.Context, revision string) (*IndexR
// Store options
if err := idx.storeOptions(ctx, rev.ID, options); err != nil {
// Cleanup on failure
idx.store.DeleteRevision(ctx, rev.ID)
_ = idx.store.DeleteRevision(ctx, rev.ID) //nolint:errcheck // best-effort cleanup
return nil, fmt.Errorf("failed to store options: %w", err)
}
@@ -163,7 +163,7 @@ func (idx *Indexer) buildOptions(ctx context.Context, ref string) (string, func(
}
cleanup := func() {
os.RemoveAll(tmpDir)
_ = os.RemoveAll(tmpDir) //nolint:errcheck // best-effort temp dir cleanup
}
// Build options.json using nix-build
@@ -280,7 +280,7 @@ func (idx *Indexer) getCommitDate(ctx context.Context, ref string) (time.Time, e
if err != nil {
return time.Time{}, err
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // response body read-only
if resp.StatusCode != http.StatusOK {
return time.Time{}, fmt.Errorf("GitHub API returned %d", resp.StatusCode)
@@ -362,7 +362,7 @@ func (idx *Indexer) IndexFiles(ctx context.Context, revisionID int64, ref string
if err != nil {
return 0, fmt.Errorf("failed to download tarball: %w", err)
}
defer resp.Body.Close()
defer resp.Body.Close() //nolint:errcheck // response body read-only
if resp.StatusCode != http.StatusOK {
return 0, fmt.Errorf("download failed with status %d", resp.StatusCode)
@@ -373,7 +373,7 @@ func (idx *Indexer) IndexFiles(ctx context.Context, revisionID int64, ref string
if err != nil {
return 0, fmt.Errorf("failed to create gzip reader: %w", err)
}
defer gz.Close()
defer gz.Close() //nolint:errcheck // gzip reader read-only
tr := tar.NewReader(gz)
count := 0


@@ -6,7 +6,7 @@ import (
"testing"
"time"
"git.t-juice.club/torjus/labmcp/internal/database"
"code.t-juice.club/torjus/labmcp/internal/database"
)
// TestNixpkgsRevision is the revision from flake.lock used for testing.
@@ -77,7 +77,7 @@ func BenchmarkIndexRevision(b *testing.B) {
if err != nil {
b.Fatalf("Failed to create store: %v", err)
}
defer store.Close()
defer store.Close() //nolint:errcheck // benchmark/test cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
@@ -90,7 +90,7 @@ func BenchmarkIndexRevision(b *testing.B) {
for i := 0; i < b.N; i++ {
// Delete any existing revision first (for repeated runs)
if rev, _ := store.GetRevision(ctx, TestNixpkgsRevision); rev != nil {
store.DeleteRevision(ctx, rev.ID)
_ = store.DeleteRevision(ctx, rev.ID) //nolint:errcheck // benchmark cleanup
}
result, err := indexer.IndexRevision(ctx, TestNixpkgsRevision)
@@ -119,7 +119,7 @@ func BenchmarkIndexRevisionWithFiles(b *testing.B) {
if err != nil {
b.Fatalf("Failed to create store: %v", err)
}
defer store.Close()
defer store.Close() //nolint:errcheck // benchmark/test cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
@@ -132,7 +132,7 @@ func BenchmarkIndexRevisionWithFiles(b *testing.B) {
for i := 0; i < b.N; i++ {
// Delete any existing revision first
if rev, _ := store.GetRevision(ctx, TestNixpkgsRevision); rev != nil {
store.DeleteRevision(ctx, rev.ID)
_ = store.DeleteRevision(ctx, rev.ID) //nolint:errcheck // benchmark cleanup
}
result, err := indexer.IndexRevision(ctx, TestNixpkgsRevision)
@@ -168,7 +168,7 @@ func BenchmarkIndexFilesOnly(b *testing.B) {
if err != nil {
b.Fatalf("Failed to create store: %v", err)
}
defer store.Close()
defer store.Close() //nolint:errcheck // benchmark/test cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {
@@ -211,7 +211,7 @@ func TestIndexRevision(t *testing.T) {
if err != nil {
t.Fatalf("Failed to create store: %v", err)
}
defer store.Close()
defer store.Close() //nolint:errcheck // benchmark/test cleanup
ctx := context.Background()
if err := store.Initialize(ctx); err != nil {


@@ -4,7 +4,7 @@ package options
import (
"context"
"git.t-juice.club/torjus/labmcp/internal/database"
"code.t-juice.club/torjus/labmcp/internal/database"
)
// IndexResult contains the results of an indexing operation.


@@ -0,0 +1,257 @@
package packages
import (
"context"
"encoding/json"
"fmt"
"net/http"
"os"
"os/exec"
"path/filepath"
"regexp"
"strings"
"time"
"code.t-juice.club/torjus/labmcp/internal/database"
)
// revisionPattern validates revision strings to prevent injection attacks.
// Allows: alphanumeric, hyphens, underscores, dots (for channel names like "nixos-24.11"
// and git hashes). Must be 1-64 characters.
var revisionPattern = regexp.MustCompile(`^[a-zA-Z0-9._-]{1,64}$`)
// Indexer handles indexing of packages from nixpkgs revisions.
type Indexer struct {
store database.Store
httpClient *http.Client
}
// NewIndexer creates a new packages indexer.
func NewIndexer(store database.Store) *Indexer {
return &Indexer{
store: store,
httpClient: &http.Client{
Timeout: 10 * time.Minute, // Longer timeout for package evaluation
},
}
}
// ValidateRevision checks if a revision string is safe to use.
// Returns an error if the revision contains potentially dangerous characters.
func ValidateRevision(revision string) error {
if !revisionPattern.MatchString(revision) {
return fmt.Errorf("invalid revision format: must be 1-64 alphanumeric characters, hyphens, underscores, or dots")
}
return nil
}
// IndexPackages indexes packages for an existing revision.
// The revision must already exist in the database (created by options indexer).
func (idx *Indexer) IndexPackages(ctx context.Context, revisionID int64, ref string) (*IndexResult, error) {
start := time.Now()
// Validate revision to prevent injection attacks
if err := ValidateRevision(ref); err != nil {
return nil, err
}
// Build packages JSON using nix-env
packagesPath, cleanup, err := idx.buildPackages(ctx, ref)
if err != nil {
return nil, fmt.Errorf("failed to build packages: %w", err)
}
defer cleanup()
// Parse and store packages using streaming to reduce memory usage
packagesFile, err := os.Open(packagesPath)
if err != nil {
return nil, fmt.Errorf("failed to open packages.json: %w", err)
}
defer packagesFile.Close() //nolint:errcheck // read-only file
// Store packages in batches
batch := make([]*database.Package, 0, 1000)
count := 0
_, err = ParsePackagesStream(packagesFile, func(pkg *ParsedPackage) error {
dbPkg := &database.Package{
RevisionID: revisionID,
AttrPath: pkg.AttrPath,
Pname: pkg.Pname,
Version: pkg.Version,
Description: pkg.Description,
LongDescription: pkg.LongDescription,
Homepage: pkg.Homepage,
License: pkg.License,
Platforms: pkg.Platforms,
Maintainers: pkg.Maintainers,
Broken: pkg.Broken,
Unfree: pkg.Unfree,
Insecure: pkg.Insecure,
}
batch = append(batch, dbPkg)
count++
// Store in batches
if len(batch) >= 1000 {
if err := idx.store.CreatePackagesBatch(ctx, batch); err != nil {
return fmt.Errorf("failed to store packages batch: %w", err)
}
batch = batch[:0]
}
return nil
})
if err != nil {
return nil, fmt.Errorf("failed to parse packages: %w", err)
}
// Store remaining packages
if len(batch) > 0 {
if err := idx.store.CreatePackagesBatch(ctx, batch); err != nil {
return nil, fmt.Errorf("failed to store final packages batch: %w", err)
}
}
// Update revision package count
if err := idx.store.UpdateRevisionPackageCount(ctx, revisionID, count); err != nil {
return nil, fmt.Errorf("failed to update package count: %w", err)
}
return &IndexResult{
RevisionID: revisionID,
PackageCount: count,
Duration: time.Since(start),
}, nil
}
// buildPackages builds a JSON file containing all packages for a nixpkgs revision.
func (idx *Indexer) buildPackages(ctx context.Context, ref string) (string, func(), error) {
// Create temp directory
tmpDir, err := os.MkdirTemp("", "nixpkgs-packages-*")
if err != nil {
return "", nil, fmt.Errorf("failed to create temp dir: %w", err)
}
cleanup := func() {
_ = os.RemoveAll(tmpDir) //nolint:errcheck // best-effort temp dir cleanup
}
outputPath := filepath.Join(tmpDir, "packages.json")
// First, fetch the nixpkgs tarball to the nix store
// This ensures it's available for nix-env evaluation
nixExpr := fmt.Sprintf(`
builtins.fetchTarball {
url = "https://github.com/NixOS/nixpkgs/archive/%s.tar.gz";
}
`, ref)
fetchCmd := exec.CommandContext(ctx, "nix-instantiate", "--eval", "-E", nixExpr)
fetchCmd.Dir = tmpDir
fetchOutput, err := fetchCmd.Output()
if err != nil {
cleanup()
if exitErr, ok := err.(*exec.ExitError); ok {
return "", nil, fmt.Errorf("nix-instantiate fetch failed: %s", string(exitErr.Stderr))
}
return "", nil, fmt.Errorf("nix-instantiate fetch failed: %w", err)
}
// The output is the store path in quotes, e.g., "/nix/store/xxx-source"
nixpkgsPath := strings.Trim(strings.TrimSpace(string(fetchOutput)), "\"")
// Run nix-env to get all packages as JSON
// Use --json --meta to get full metadata
cmd := exec.CommandContext(ctx, "nix-env",
"-f", nixpkgsPath,
"-qaP", "--json", "--meta",
)
cmd.Dir = tmpDir
// Create output file
outputFile, err := os.Create(outputPath)
if err != nil {
cleanup()
return "", nil, fmt.Errorf("failed to create output file: %w", err)
}
cmd.Stdout = outputFile
// Discard stderr (warnings about unfree/broken packages)
cmd.Stderr = nil
err = cmd.Run()
outputFile.Close() //nolint:errcheck // output file, will check stat below
if err != nil {
cleanup()
// Note: exec.ExitError.Stderr is only populated by cmd.Output(), so
// with cmd.Run() and a discarded stderr there is no detail to report.
return "", nil, fmt.Errorf("nix-env failed: %w", err)
}
// Verify output file exists and has content
stat, err := os.Stat(outputPath)
if err != nil || stat.Size() == 0 {
cleanup()
return "", nil, fmt.Errorf("packages.json not found or empty")
}
return outputPath, cleanup, nil
}
// ResolveRevision resolves a channel name or ref to a git ref.
func (idx *Indexer) ResolveRevision(revision string) string {
if ref, ok := ChannelAliases[revision]; ok {
return ref
}
return revision
}
// GetChannelName returns the channel name if the revision matches one.
func (idx *Indexer) GetChannelName(revision string) string {
if _, ok := ChannelAliases[revision]; ok {
return revision
}
// Check if the revision is a channel ref value
for name, ref := range ChannelAliases {
if ref == revision {
return name
}
}
return ""
}
// GetCommitDate gets the commit date for a git ref using GitHub API.
func (idx *Indexer) GetCommitDate(ctx context.Context, ref string) (time.Time, error) {
url := fmt.Sprintf("https://api.github.com/repos/NixOS/nixpkgs/commits/%s", ref)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
return time.Time{}, err
}
req.Header.Set("Accept", "application/vnd.github.v3+json")
resp, err := idx.httpClient.Do(req)
if err != nil {
return time.Time{}, err
}
defer resp.Body.Close() //nolint:errcheck // response body read-only
if resp.StatusCode != http.StatusOK {
return time.Time{}, fmt.Errorf("GitHub API returned %d", resp.StatusCode)
}
var commit struct {
Commit struct {
Committer struct {
Date time.Time `json:"date"`
} `json:"committer"`
} `json:"commit"`
}
if err := json.NewDecoder(resp.Body).Decode(&commit); err != nil {
return time.Time{}, err
}
return commit.Commit.Committer.Date, nil
}


@@ -0,0 +1,82 @@
package packages
import (
"testing"
)
func TestValidateRevision(t *testing.T) {
tests := []struct {
name string
revision string
expectErr bool
}{
{"valid hash", "abc123def456", false},
{"valid channel", "nixos-unstable", false},
{"valid version channel", "nixos-24.11", false},
{"empty", "", true},
{"too long", "a" + string(make([]byte, 100)), true}, // 101 bytes, over the 64-char limit
{"shell injection", "$(rm -rf /)", true},
{"path traversal", "../../../etc/passwd", true},
{"semicolon", "abc;rm -rf /", true},
{"backtick", "`whoami`", true},
{"space", "abc def", true},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
err := ValidateRevision(tc.revision)
if tc.expectErr && err == nil {
t.Error("Expected error, got nil")
}
if !tc.expectErr && err != nil {
t.Errorf("Expected no error, got %v", err)
}
})
}
}
func TestResolveRevision(t *testing.T) {
idx := &Indexer{}
tests := []struct {
input string
expected string
}{
{"nixos-unstable", "nixos-unstable"},
{"nixos-stable", "nixos-24.11"},
{"nixos-24.11", "nixos-24.11"},
{"abc123", "abc123"},
}
for _, tc := range tests {
t.Run(tc.input, func(t *testing.T) {
result := idx.ResolveRevision(tc.input)
if result != tc.expected {
t.Errorf("Expected %q, got %q", tc.expected, result)
}
})
}
}
func TestGetChannelName(t *testing.T) {
idx := &Indexer{}
tests := []struct {
input string
expected string
}{
{"nixos-unstable", "nixos-unstable"},
{"nixos-stable", "nixos-stable"},
{"nixos-24.11", "nixos-24.11"},
{"abc123", ""},
}
for _, tc := range tests {
t.Run(tc.input, func(t *testing.T) {
result := idx.GetChannelName(tc.input)
if result != tc.expected {
t.Errorf("Expected %q, got %q", tc.expected, result)
}
})
}
}

internal/packages/parser.go

@@ -0,0 +1,199 @@
package packages
import (
"encoding/json"
"fmt"
"io"
"strings"
)
// ParsePackages reads and parses a nix-env JSON output file.
func ParsePackages(r io.Reader) (map[string]*ParsedPackage, error) {
var raw PackagesFile
if err := json.NewDecoder(r).Decode(&raw); err != nil {
return nil, fmt.Errorf("failed to decode packages JSON: %w", err)
}
packages := make(map[string]*ParsedPackage, len(raw))
for attrPath, pkg := range raw {
parsed := &ParsedPackage{
AttrPath: attrPath,
Pname: pkg.Pname,
Version: pkg.Version,
Description: pkg.Meta.Description,
LongDescription: pkg.Meta.LongDescription,
Homepage: normalizeHomepage(pkg.Meta.Homepage),
License: normalizeLicense(pkg.Meta.License),
Platforms: normalizePlatforms(pkg.Meta.Platforms),
Maintainers: normalizeMaintainers(pkg.Meta.Maintainers),
Broken: pkg.Meta.Broken,
Unfree: pkg.Meta.Unfree,
Insecure: pkg.Meta.Insecure,
}
packages[attrPath] = parsed
}
return packages, nil
}
// normalizeHomepage converts homepage to a string.
func normalizeHomepage(v interface{}) string {
if v == nil {
return ""
}
switch hp := v.(type) {
case string:
return hp
case []interface{}:
if len(hp) > 0 {
if s, ok := hp[0].(string); ok {
return s
}
}
}
return ""
}
// normalizeLicense converts license to a JSON array string.
func normalizeLicense(v interface{}) string {
if v == nil {
return "[]"
}
licenses := make([]string, 0)
switch l := v.(type) {
case string:
licenses = append(licenses, l)
case map[string]interface{}:
// Single license object
if spdxID, ok := l["spdxId"].(string); ok {
licenses = append(licenses, spdxID)
} else if fullName, ok := l["fullName"].(string); ok {
licenses = append(licenses, fullName)
} else if shortName, ok := l["shortName"].(string); ok {
licenses = append(licenses, shortName)
}
case []interface{}:
for _, item := range l {
switch li := item.(type) {
case string:
licenses = append(licenses, li)
case map[string]interface{}:
if spdxID, ok := li["spdxId"].(string); ok {
licenses = append(licenses, spdxID)
} else if fullName, ok := li["fullName"].(string); ok {
licenses = append(licenses, fullName)
} else if shortName, ok := li["shortName"].(string); ok {
licenses = append(licenses, shortName)
}
}
}
}
data, _ := json.Marshal(licenses)
return string(data)
}
// normalizePlatforms converts platforms to a JSON array string.
func normalizePlatforms(v []interface{}) string {
if v == nil {
return "[]"
}
platforms := make([]string, 0, len(v))
for _, p := range v {
// Keep plain strings; skip complex platform specs (objects).
if s, ok := p.(string); ok {
platforms = append(platforms, s)
}
}
data, _ := json.Marshal(platforms)
return string(data)
}
// normalizeMaintainers converts maintainers to a JSON array string.
func normalizeMaintainers(maintainers []Maintainer) string {
if len(maintainers) == 0 {
return "[]"
}
names := make([]string, 0, len(maintainers))
for _, m := range maintainers {
name := m.Name
if name == "" && m.Github != "" {
name = "@" + m.Github
}
if name != "" {
names = append(names, name)
}
}
data, _ := json.Marshal(names)
return string(data)
}
// ParsePackagesStream parses packages from a reader using streaming to reduce memory usage.
// It yields parsed packages through a callback function.
func ParsePackagesStream(r io.Reader, callback func(*ParsedPackage) error) (int, error) {
dec := json.NewDecoder(r)
// Read the opening brace
t, err := dec.Token()
if err != nil {
return 0, fmt.Errorf("failed to read opening token: %w", err)
}
if delim, ok := t.(json.Delim); !ok || delim != '{' {
return 0, fmt.Errorf("expected opening brace, got %v", t)
}
count := 0
for dec.More() {
// Read the key (attr path)
t, err := dec.Token()
if err != nil {
return count, fmt.Errorf("failed to read attr path: %w", err)
}
attrPath, ok := t.(string)
if !ok {
return count, fmt.Errorf("expected string key, got %T", t)
}
// Read the value (package)
var pkg RawPackage
if err := dec.Decode(&pkg); err != nil {
// Skip malformed packages
continue
}
parsed := &ParsedPackage{
AttrPath: attrPath,
Pname: pkg.Pname,
Version: pkg.Version,
Description: pkg.Meta.Description,
LongDescription: pkg.Meta.LongDescription,
Homepage: normalizeHomepage(pkg.Meta.Homepage),
License: normalizeLicense(pkg.Meta.License),
Platforms: normalizePlatforms(pkg.Meta.Platforms),
Maintainers: normalizeMaintainers(pkg.Meta.Maintainers),
Broken: pkg.Meta.Broken,
Unfree: pkg.Meta.Unfree,
Insecure: pkg.Meta.Insecure,
}
if err := callback(parsed); err != nil {
return count, fmt.Errorf("callback error for %s: %w", attrPath, err)
}
count++
}
return count, nil
}
// SplitAttrPath splits an attribute path into its components.
// For example, "python312Packages.requests" returns ["python312Packages", "requests"].
func SplitAttrPath(attrPath string) []string {
return strings.Split(attrPath, ".")
}


@@ -0,0 +1,215 @@
package packages
import (
"strings"
"testing"
)
func TestParsePackages(t *testing.T) {
input := `{
"firefox": {
"name": "firefox-120.0",
"pname": "firefox",
"version": "120.0",
"system": "x86_64-linux",
"meta": {
"description": "A web browser built from Firefox source tree",
"homepage": "https://www.mozilla.org/firefox/",
"license": {"spdxId": "MPL-2.0", "fullName": "Mozilla Public License 2.0"},
"maintainers": [
{"name": "John Doe", "github": "johndoe", "githubId": 12345}
],
"platforms": ["x86_64-linux", "aarch64-linux"]
}
},
"python312Packages.requests": {
"name": "python3.12-requests-2.31.0",
"pname": "requests",
"version": "2.31.0",
"system": "x86_64-linux",
"meta": {
"description": "HTTP library for Python",
"homepage": ["https://requests.readthedocs.io/"],
"license": [{"spdxId": "Apache-2.0"}],
"unfree": false
}
}
}`
packages, err := ParsePackages(strings.NewReader(input))
if err != nil {
t.Fatalf("ParsePackages failed: %v", err)
}
if len(packages) != 2 {
t.Errorf("Expected 2 packages, got %d", len(packages))
}
// Check firefox
firefox, ok := packages["firefox"]
if !ok {
t.Fatal("firefox package not found")
}
if firefox.Pname != "firefox" {
t.Errorf("Expected pname 'firefox', got %q", firefox.Pname)
}
if firefox.Version != "120.0" {
t.Errorf("Expected version '120.0', got %q", firefox.Version)
}
if firefox.Homepage != "https://www.mozilla.org/firefox/" {
t.Errorf("Expected homepage 'https://www.mozilla.org/firefox/', got %q", firefox.Homepage)
}
if firefox.License != `["MPL-2.0"]` {
t.Errorf("Expected license '[\"MPL-2.0\"]', got %q", firefox.License)
}
// Check python requests
requests, ok := packages["python312Packages.requests"]
if !ok {
t.Fatal("python312Packages.requests package not found")
}
if requests.Pname != "requests" {
t.Errorf("Expected pname 'requests', got %q", requests.Pname)
}
// Homepage is array, should extract first element
if requests.Homepage != "https://requests.readthedocs.io/" {
t.Errorf("Expected homepage 'https://requests.readthedocs.io/', got %q", requests.Homepage)
}
}
func TestParsePackagesStream(t *testing.T) {
input := `{
"hello": {
"name": "hello-2.12",
"pname": "hello",
"version": "2.12",
"system": "x86_64-linux",
"meta": {
"description": "A program that produces a familiar, friendly greeting"
}
},
"world": {
"name": "world-1.0",
"pname": "world",
"version": "1.0",
"system": "x86_64-linux",
"meta": {}
}
}`
var packages []*ParsedPackage
count, err := ParsePackagesStream(strings.NewReader(input), func(pkg *ParsedPackage) error {
packages = append(packages, pkg)
return nil
})
if err != nil {
t.Fatalf("ParsePackagesStream failed: %v", err)
}
if count != 2 {
t.Errorf("Expected count 2, got %d", count)
}
if len(packages) != 2 {
t.Errorf("Expected 2 packages, got %d", len(packages))
}
}
func TestNormalizeLicense(t *testing.T) {
tests := []struct {
name string
input interface{}
expected string
}{
{"nil", nil, "[]"},
{"string", "MIT", `["MIT"]`},
{"object with spdxId", map[string]interface{}{"spdxId": "MIT"}, `["MIT"]`},
{"object with fullName", map[string]interface{}{"fullName": "MIT License"}, `["MIT License"]`},
{"array of strings", []interface{}{"MIT", "Apache-2.0"}, `["MIT","Apache-2.0"]`},
{"array of objects", []interface{}{
map[string]interface{}{"spdxId": "MIT"},
map[string]interface{}{"spdxId": "Apache-2.0"},
}, `["MIT","Apache-2.0"]`},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := normalizeLicense(tc.input)
if result != tc.expected {
t.Errorf("Expected %q, got %q", tc.expected, result)
}
})
}
}
func TestNormalizeHomepage(t *testing.T) {
tests := []struct {
name string
input interface{}
expected string
}{
{"nil", nil, ""},
{"string", "https://example.com", "https://example.com"},
{"array", []interface{}{"https://example.com", "https://docs.example.com"}, "https://example.com"},
{"empty array", []interface{}{}, ""},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := normalizeHomepage(tc.input)
if result != tc.expected {
t.Errorf("Expected %q, got %q", tc.expected, result)
}
})
}
}
func TestNormalizeMaintainers(t *testing.T) {
tests := []struct {
name string
maintainers []Maintainer
expected string
}{
{"empty", nil, "[]"},
{"with name", []Maintainer{{Name: "John Doe"}}, `["John Doe"]`},
{"with github only", []Maintainer{{Github: "johndoe"}}, `["@johndoe"]`},
{"multiple", []Maintainer{{Name: "Alice"}, {Name: "Bob"}}, `["Alice","Bob"]`},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := normalizeMaintainers(tc.maintainers)
if result != tc.expected {
t.Errorf("Expected %q, got %q", tc.expected, result)
}
})
}
}
func TestSplitAttrPath(t *testing.T) {
tests := []struct {
input string
expected []string
}{
{"firefox", []string{"firefox"}},
{"python312Packages.requests", []string{"python312Packages", "requests"}},
{"haskellPackages.aeson.components.library", []string{"haskellPackages", "aeson", "components", "library"}},
}
for _, tc := range tests {
t.Run(tc.input, func(t *testing.T) {
result := SplitAttrPath(tc.input)
if len(result) != len(tc.expected) {
t.Errorf("Expected %v, got %v", tc.expected, result)
return
}
for i := range result {
if result[i] != tc.expected[i] {
t.Errorf("Expected %v, got %v", tc.expected, result)
return
}
}
})
}
}


@@ -0,0 +1,78 @@
// Package packages contains types and logic for indexing Nix packages.
package packages
// RawPackage represents a package as parsed from nix-env --json output.
type RawPackage struct {
Pname string `json:"pname"`
Version string `json:"version"`
System string `json:"system"`
Meta RawPackageMeta `json:"meta"`
Name string `json:"name"`
OutputName string `json:"outputName,omitempty"`
Outputs map[string]interface{} `json:"outputs,omitempty"`
}
// RawPackageMeta contains package metadata.
type RawPackageMeta struct {
Available bool `json:"available,omitempty"`
Broken bool `json:"broken,omitempty"`
Description string `json:"description,omitempty"`
Homepage interface{} `json:"homepage,omitempty"` // Can be string or []string
Insecure bool `json:"insecure,omitempty"`
License interface{} `json:"license,omitempty"` // Can be string, object, or []interface{}
LongDescription string `json:"longDescription,omitempty"`
Maintainers []Maintainer `json:"maintainers,omitempty"`
Name string `json:"name,omitempty"`
OutputsToInstall []string `json:"outputsToInstall,omitempty"`
Platforms []interface{} `json:"platforms,omitempty"` // Can be strings or objects
Position string `json:"position,omitempty"`
Unfree bool `json:"unfree,omitempty"`
}
// Maintainer represents a package maintainer.
type Maintainer struct {
Email string `json:"email,omitempty"`
Github string `json:"github,omitempty"`
GithubID int `json:"githubId,omitempty"`
Matrix string `json:"matrix,omitempty"`
Name string `json:"name,omitempty"`
}
// ParsedPackage represents a package ready for database storage.
type ParsedPackage struct {
AttrPath string
Pname string
Version string
Description string
LongDescription string
Homepage string
License string // JSON array
Platforms string // JSON array
Maintainers string // JSON array
Broken bool
Unfree bool
Insecure bool
}
// PackagesFile represents the top-level structure of nix-env JSON output.
// It's a map from attr path to package definition.
type PackagesFile map[string]RawPackage
// ChannelAliases maps friendly channel names to their git branch/ref patterns.
// These are the same as NixOS options since packages come from the same repo.
var ChannelAliases = map[string]string{
"nixos-unstable": "nixos-unstable",
"nixos-stable": "nixos-24.11",
"nixos-24.11": "nixos-24.11",
"nixos-24.05": "nixos-24.05",
"nixos-23.11": "nixos-23.11",
"nixos-23.05": "nixos-23.05",
}
// IndexResult contains the results of a package indexing operation.
type IndexResult struct {
RevisionID int64
PackageCount int
Duration interface{} // holds a time.Duration; typed loosely so this file needs no imports
AlreadyIndexed bool // True if revision already has packages
}

nix/git-explorer-module.nix

@@ -0,0 +1,141 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.git-explorer;
mkHttpFlags = httpCfg: lib.concatStringsSep " " ([
"--transport http"
"--http-address '${httpCfg.address}'"
"--http-endpoint '${httpCfg.endpoint}'"
"--session-ttl '${httpCfg.sessionTTL}'"
] ++ lib.optionals (httpCfg.allowedOrigins != []) (
map (origin: "--allowed-origins '${origin}'") httpCfg.allowedOrigins
) ++ lib.optionals httpCfg.tls.enable [
"--tls-cert '${httpCfg.tls.certFile}'"
"--tls-key '${httpCfg.tls.keyFile}'"
]);
in
{
options.services.git-explorer = {
enable = lib.mkEnableOption "Git Explorer MCP server";
package = lib.mkPackageOption pkgs "git-explorer" { };
repoPath = lib.mkOption {
type = lib.types.str;
description = "Path to the git repository to serve.";
};
defaultRemote = lib.mkOption {
type = lib.types.str;
default = "origin";
description = "Default remote name for ref resolution.";
};
http = {
address = lib.mkOption {
type = lib.types.str;
default = "127.0.0.1:8085";
description = "HTTP listen address for the MCP server.";
};
endpoint = lib.mkOption {
type = lib.types.str;
default = "/mcp";
description = "HTTP endpoint path for MCP requests.";
};
allowedOrigins = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = "Allowed Origin headers for CORS.";
};
sessionTTL = lib.mkOption {
type = lib.types.str;
default = "30m";
description = "Session TTL for HTTP transport.";
};
tls = {
enable = lib.mkEnableOption "TLS for HTTP transport";
certFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = "Path to TLS certificate file.";
};
keyFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = "Path to TLS private key file.";
};
};
};
openFirewall = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to open the firewall for the MCP HTTP server.";
};
};
config = lib.mkIf cfg.enable {
assertions = [
{
assertion = !cfg.http.tls.enable || (cfg.http.tls.certFile != null && cfg.http.tls.keyFile != null);
message = "services.git-explorer.http.tls: both certFile and keyFile must be set when TLS is enabled";
}
];
systemd.services.git-explorer = {
description = "Git Explorer MCP Server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = {
GIT_REPO_PATH = cfg.repoPath;
GIT_DEFAULT_REMOTE = cfg.defaultRemote;
};
script = let
httpFlags = mkHttpFlags cfg.http;
in ''
exec ${cfg.package}/bin/git-explorer serve ${httpFlags}
'';
serviceConfig = {
Type = "simple";
DynamicUser = true;
Restart = "on-failure";
RestartSec = "5s";
# Hardening
NoNewPrivileges = true;
ProtectSystem = "strict";
ProtectHome = "read-only";
PrivateTmp = true;
PrivateDevices = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectControlGroups = true;
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
MemoryDenyWriteExecute = true;
LockPersonality = true;
# Read-only access to repo path
ReadOnlyPaths = [ cfg.repoPath ];
};
};
networking.firewall = lib.mkIf cfg.openFirewall (let
addressParts = lib.splitString ":" cfg.http.address;
port = lib.toInt (lib.last addressParts);
in {
allowedTCPPorts = [ port ];
});
};
}
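For reference, a minimal host configuration enabling the module might look like the following; the repository path and allowed origin are placeholders, not values from this repository:

```nix
{
  services.git-explorer = {
    enable = true;
    # Served read-only via systemd ReadOnlyPaths.
    repoPath = "/srv/git/labmcp";
    defaultRemote = "origin";
    http = {
      address = "127.0.0.1:8085";
      allowedOrigins = [ "https://chat.example.com" ];
    };
  };
}
```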


@@ -0,0 +1,173 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.lab-monitoring;
mkHttpFlags = httpCfg: lib.concatStringsSep " " ([
"--transport http"
"--http-address '${httpCfg.address}'"
"--http-endpoint '${httpCfg.endpoint}'"
"--session-ttl '${httpCfg.sessionTTL}'"
] ++ lib.optionals (httpCfg.allowedOrigins != []) (
map (origin: "--allowed-origins '${origin}'") httpCfg.allowedOrigins
) ++ lib.optionals httpCfg.tls.enable [
"--tls-cert '${httpCfg.tls.certFile}'"
"--tls-key '${httpCfg.tls.keyFile}'"
]);
in
{
options.services.lab-monitoring = {
enable = lib.mkEnableOption "Lab Monitoring MCP server";
package = lib.mkPackageOption pkgs "lab-monitoring" { };
prometheusUrl = lib.mkOption {
type = lib.types.str;
default = "http://localhost:9090";
description = "Prometheus base URL.";
};
alertmanagerUrl = lib.mkOption {
type = lib.types.str;
default = "http://localhost:9093";
description = "Alertmanager base URL.";
};
lokiUrl = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Loki base URL. When set, enables log query tools (query_logs, list_labels, list_label_values).";
};
lokiUsername = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Username for Loki basic authentication.";
};
lokiPasswordFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = "Path to a file containing the password for Loki basic authentication. Recommended over storing secrets in the Nix store.";
};
enableSilences = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Enable the create_silence tool (write operation, disabled by default).";
};
http = {
address = lib.mkOption {
type = lib.types.str;
default = "127.0.0.1:8084";
description = "HTTP listen address for the MCP server.";
};
endpoint = lib.mkOption {
type = lib.types.str;
default = "/mcp";
description = "HTTP endpoint path for MCP requests.";
};
allowedOrigins = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = "Allowed Origin headers for CORS.";
};
sessionTTL = lib.mkOption {
type = lib.types.str;
default = "30m";
description = "Session TTL for HTTP transport.";
};
tls = {
enable = lib.mkEnableOption "TLS for HTTP transport";
certFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = "Path to TLS certificate file.";
};
keyFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = "Path to TLS private key file.";
};
};
};
openFirewall = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to open the firewall for the MCP HTTP server.";
};
};
config = lib.mkIf cfg.enable {
assertions = [
{
assertion = !cfg.http.tls.enable || (cfg.http.tls.certFile != null && cfg.http.tls.keyFile != null);
message = "services.lab-monitoring.http.tls: both certFile and keyFile must be set when TLS is enabled";
}
];
systemd.services.lab-monitoring = {
description = "Lab Monitoring MCP Server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = {
PROMETHEUS_URL = cfg.prometheusUrl;
ALERTMANAGER_URL = cfg.alertmanagerUrl;
} // lib.optionalAttrs (cfg.lokiUrl != null) {
LOKI_URL = cfg.lokiUrl;
} // lib.optionalAttrs (cfg.lokiUsername != null) {
LOKI_USERNAME = cfg.lokiUsername;
};
script = let
httpFlags = mkHttpFlags cfg.http;
silenceFlag = lib.optionalString cfg.enableSilences "--enable-silences";
in ''
${lib.optionalString (cfg.lokiPasswordFile != null) ''
export LOKI_PASSWORD="$(< "$CREDENTIALS_DIRECTORY/loki-password")"
''}
exec ${cfg.package}/bin/lab-monitoring serve ${httpFlags} ${silenceFlag}
'';
serviceConfig = {
Type = "simple";
DynamicUser = true;
Restart = "on-failure";
RestartSec = "5s";
} // lib.optionalAttrs (cfg.lokiPasswordFile != null) {
LoadCredential = [ "loki-password:${cfg.lokiPasswordFile}" ];
} // {
# Hardening
NoNewPrivileges = true;
ProtectSystem = "strict";
ProtectHome = true;
PrivateTmp = true;
PrivateDevices = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectControlGroups = true;
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
MemoryDenyWriteExecute = true;
LockPersonality = true;
};
};
networking.firewall = lib.mkIf cfg.openFirewall (let
addressParts = lib.splitString ":" cfg.http.address;
port = lib.toInt (lib.last addressParts);
in {
allowedTCPPorts = [ port ];
});
};
}
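Putting the Loki credential options together, a minimal configuration might look like this sketch (URLs and the secret path are placeholders): the password file is handed to the service via systemd LoadCredential and never enters the world-readable Nix store.

```nix
{
  services.lab-monitoring = {
    enable = true;
    prometheusUrl = "http://localhost:9090";
    alertmanagerUrl = "http://localhost:9093";
    # Setting lokiUrl enables query_logs, list_labels, list_label_values.
    lokiUrl = "https://loki.example.com";
    lokiUsername = "monitoring";
    # Read at service start from $CREDENTIALS_DIRECTORY/loki-password.
    lokiPasswordFile = "/run/secrets/loki-password";
    http.address = "127.0.0.1:8084";
  };
}
```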


@@ -0,0 +1,383 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.nixpkgs-search;
# Determine database URL based on configuration
# For postgres with connectionStringFile, the URL is set at runtime via script
useConnectionStringFile = cfg.database.type == "postgres" && cfg.database.connectionStringFile != null;
databaseUrl = if cfg.database.type == "sqlite"
then "sqlite://${cfg.dataDir}/${cfg.database.name}"
else if useConnectionStringFile
then "" # Will be set at runtime from file
else cfg.database.connectionString;
# Build HTTP transport flags for a service
mkHttpFlags = httpCfg: lib.concatStringsSep " " ([
"--transport http"
"--http-address '${httpCfg.address}'"
"--http-endpoint '${httpCfg.endpoint}'"
"--session-ttl '${httpCfg.sessionTTL}'"
] ++ lib.optionals (httpCfg.allowedOrigins != []) (
map (origin: "--allowed-origins '${origin}'") httpCfg.allowedOrigins
) ++ lib.optionals httpCfg.tls.enable [
"--tls-cert '${httpCfg.tls.certFile}'"
"--tls-key '${httpCfg.tls.keyFile}'"
]);
# Common HTTP options
mkHttpOptions = defaultPort: {
address = lib.mkOption {
type = lib.types.str;
default = "127.0.0.1:${toString defaultPort}";
description = "HTTP listen address for the MCP server.";
};
endpoint = lib.mkOption {
type = lib.types.str;
default = "/mcp";
description = "HTTP endpoint path for MCP requests.";
};
allowedOrigins = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
example = [ "http://localhost:3000" "https://example.com" ];
description = ''
Allowed Origin headers for CORS.
Empty list means only localhost origins are allowed.
'';
};
sessionTTL = lib.mkOption {
type = lib.types.str;
default = "30m";
description = "Session TTL for HTTP transport (Go duration format).";
};
tls = {
enable = lib.mkEnableOption "TLS for HTTP transport";
certFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = "Path to TLS certificate file.";
};
keyFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = "Path to TLS private key file.";
};
};
};
# Service configuration factory
mkServiceConfig = serviceName: subcommand: httpCfg: {
description = "Nixpkgs Search ${serviceName} MCP Server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ]
++ lib.optional (cfg.database.type == "postgres") "postgresql.service";
environment = lib.mkIf (!useConnectionStringFile) {
NIXPKGS_SEARCH_DATABASE = databaseUrl;
};
path = [ cfg.package ];
script = let
httpFlags = mkHttpFlags httpCfg;
in
if useConnectionStringFile then ''
# Read database connection string from file
if [ ! -f "${cfg.database.connectionStringFile}" ]; then
echo "Error: connectionStringFile not found: ${cfg.database.connectionStringFile}" >&2
exit 1
fi
export NIXPKGS_SEARCH_DATABASE="$(cat "${cfg.database.connectionStringFile}")"
exec nixpkgs-search ${subcommand} serve ${httpFlags}
'' else ''
exec nixpkgs-search ${subcommand} serve ${httpFlags}
'';
serviceConfig = {
Type = "simple";
User = cfg.user;
Group = cfg.group;
Restart = "on-failure";
RestartSec = "5s";
# Hardening
NoNewPrivileges = true;
ProtectSystem = "strict";
ProtectHome = true;
PrivateTmp = true;
PrivateDevices = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectControlGroups = true;
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
MemoryDenyWriteExecute = true;
LockPersonality = true;
ReadWritePaths = [ cfg.dataDir ];
WorkingDirectory = cfg.dataDir;
StateDirectory = lib.mkIf (cfg.dataDir == "/var/lib/nixpkgs-search") "nixpkgs-search";
};
};
in
{
options.services.nixpkgs-search = {
enable = lib.mkEnableOption "Nixpkgs Search MCP server(s)";
package = lib.mkPackageOption pkgs "nixpkgs-search" { };
user = lib.mkOption {
type = lib.types.str;
default = "nixpkgs-search";
description = "User account under which the service runs.";
};
group = lib.mkOption {
type = lib.types.str;
default = "nixpkgs-search";
description = "Group under which the service runs.";
};
dataDir = lib.mkOption {
type = lib.types.path;
default = "/var/lib/nixpkgs-search";
description = "Directory to store data files.";
};
database = {
type = lib.mkOption {
type = lib.types.enum [ "sqlite" "postgres" ];
default = "sqlite";
description = "Database backend to use.";
};
name = lib.mkOption {
type = lib.types.str;
default = "nixpkgs-search.db";
description = "SQLite database filename (when using sqlite backend).";
};
connectionString = lib.mkOption {
type = lib.types.str;
default = "";
description = ''
PostgreSQL connection string (when using postgres backend).
Example: "postgres://user:password@localhost/nixpkgs_search?sslmode=disable"
WARNING: This value will be stored in the Nix store, which is world-readable.
For production use with sensitive credentials, use connectionStringFile instead.
'';
};
connectionStringFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = ''
Path to a file containing the PostgreSQL connection string.
The file should contain just the connection string, e.g.:
postgres://user:password@localhost/nixpkgs_search?sslmode=disable
This is the recommended way to configure PostgreSQL credentials
as the file is not stored in the world-readable Nix store.
The file must be readable by the service user.
'';
example = "/run/secrets/nixpkgs-search-db";
};
};
indexOnStart = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
example = [ "nixos-unstable" "nixos-24.11" ];
description = ''
List of nixpkgs revisions to index on service start.
Can be channel names (nixos-unstable) or git hashes.
Indexing is skipped if the revision is already indexed.
'';
};
indexFlags = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
example = [ "--no-packages" "--no-files" ];
description = ''
Additional flags to pass to the index command.
Useful for skipping packages (--no-packages), options (--no-options),
or files (--no-files) during indexing.
'';
};
options = {
enable = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Enable the NixOS options MCP server.";
};
http = mkHttpOptions 8082;
openFirewall = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to open the firewall for the options MCP HTTP server.";
};
};
packages = {
enable = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Enable the Nix packages MCP server.";
};
http = mkHttpOptions 8083;
openFirewall = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to open the firewall for the packages MCP HTTP server.";
};
};
};
config = lib.mkIf cfg.enable {
assertions = [
{
assertion = cfg.database.type == "sqlite"
|| cfg.database.connectionString != ""
|| cfg.database.connectionStringFile != null;
message = "services.nixpkgs-search.database: when using postgres backend, either connectionString or connectionStringFile must be set";
}
{
assertion = cfg.database.connectionString == "" || cfg.database.connectionStringFile == null;
message = "services.nixpkgs-search.database: connectionString and connectionStringFile are mutually exclusive";
}
{
assertion = !cfg.options.http.tls.enable || (cfg.options.http.tls.certFile != null && cfg.options.http.tls.keyFile != null);
message = "services.nixpkgs-search.options.http.tls: both certFile and keyFile must be set when TLS is enabled";
}
{
assertion = !cfg.packages.http.tls.enable || (cfg.packages.http.tls.certFile != null && cfg.packages.http.tls.keyFile != null);
message = "services.nixpkgs-search.packages.http.tls: both certFile and keyFile must be set when TLS is enabled";
}
{
assertion = cfg.options.enable || cfg.packages.enable;
message = "services.nixpkgs-search: at least one of options.enable or packages.enable must be true";
}
];
users.users.${cfg.user} = lib.mkIf (cfg.user == "nixpkgs-search") {
isSystemUser = true;
group = cfg.group;
home = cfg.dataDir;
description = "Nixpkgs Search MCP server user";
};
users.groups.${cfg.group} = lib.mkIf (cfg.group == "nixpkgs-search") { };
systemd.tmpfiles.rules = [
"d ${cfg.dataDir} 0750 ${cfg.user} ${cfg.group} -"
];
# Indexing service (runs once on startup if indexOnStart is set)
systemd.services.nixpkgs-search-index = lib.mkIf (cfg.indexOnStart != []) {
description = "Nixpkgs Search Indexer";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ]
++ lib.optional (cfg.database.type == "postgres") "postgresql.service";
before = lib.optionals cfg.options.enable [ "nixpkgs-search-options.service" ]
++ lib.optionals cfg.packages.enable [ "nixpkgs-search-packages.service" ];
environment = lib.mkIf (!useConnectionStringFile) {
NIXPKGS_SEARCH_DATABASE = databaseUrl;
};
path = [ cfg.package ];
script = let
indexFlags = lib.concatStringsSep " " cfg.indexFlags;
indexCommands = lib.concatMapStringsSep "\n" (rev: ''
echo "Indexing revision: ${rev}"
nixpkgs-search index ${indexFlags} "${rev}" || true
'') cfg.indexOnStart;
in
if useConnectionStringFile then ''
# Read database connection string from file
if [ ! -f "${cfg.database.connectionStringFile}" ]; then
echo "Error: connectionStringFile not found: ${cfg.database.connectionStringFile}" >&2
exit 1
fi
export NIXPKGS_SEARCH_DATABASE="$(cat "${cfg.database.connectionStringFile}")"
${indexCommands}
'' else ''
${indexCommands}
'';
serviceConfig = {
Type = "oneshot";
User = cfg.user;
Group = cfg.group;
RemainAfterExit = true;
# Hardening
NoNewPrivileges = true;
ProtectSystem = "strict";
ProtectHome = true;
PrivateTmp = true;
PrivateDevices = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectControlGroups = true;
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
LockPersonality = true;
ReadWritePaths = [ cfg.dataDir ];
WorkingDirectory = cfg.dataDir;
StateDirectory = lib.mkIf (cfg.dataDir == "/var/lib/nixpkgs-search") "nixpkgs-search";
};
};
# Options MCP server
systemd.services.nixpkgs-search-options = lib.mkIf cfg.options.enable
(mkServiceConfig "Options" "options" cfg.options.http // {
after = (mkServiceConfig "Options" "options" cfg.options.http).after
++ lib.optionals (cfg.indexOnStart != []) [ "nixpkgs-search-index.service" ];
});
# Packages MCP server
systemd.services.nixpkgs-search-packages = lib.mkIf cfg.packages.enable
(mkServiceConfig "Packages" "packages" cfg.packages.http // {
after = (mkServiceConfig "Packages" "packages" cfg.packages.http).after
++ lib.optionals (cfg.indexOnStart != []) [ "nixpkgs-search-index.service" ];
});
# Open firewall ports if configured
networking.firewall = lib.mkMerge [
(lib.mkIf cfg.options.openFirewall (let
addressParts = lib.splitString ":" cfg.options.http.address;
port = lib.toInt (lib.last addressParts);
in {
allowedTCPPorts = [ port ];
}))
(lib.mkIf cfg.packages.openFirewall (let
addressParts = lib.splitString ":" cfg.packages.http.address;
port = lib.toInt (lib.last addressParts);
in {
allowedTCPPorts = [ port ];
}))
];
};
}
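As a sketch of how these options fit together (the credentials file path and channel list are placeholders), a PostgreSQL-backed deployment that pre-indexes two channels could look like:

```nix
{
  services.nixpkgs-search = {
    enable = true;
    database = {
      type = "postgres";
      # File contains just the connection string, kept out of the Nix store.
      connectionStringFile = "/run/secrets/nixpkgs-search-db";
    };
    # Indexed once by nixpkgs-search-index before both MCP servers start.
    indexOnStart = [ "nixos-unstable" "nixos-24.11" ];
    indexFlags = [ "--no-files" ];
    options.http.address = "127.0.0.1:8082";
    packages.http.address = "127.0.0.1:8083";
  };
}
```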


@@ -7,9 +7,9 @@
buildGoModule {
inherit pname src;
-version = "0.1.1";
+version = "0.4.0";
-vendorHash = "sha256-D0KIxQC9ctIAaHBFTvkhBE06uOZwDUcIw8471Ug2doY=";
+vendorHash = "sha256-XrTtiaQT5br+0ZXz8//rc04GZn/HlQk7l8Nx/+Uil/I=";
subPackages = [ subPackage ];
@@ -22,7 +22,7 @@ buildGoModule {
meta = with lib; {
inherit description mainProgram;
-homepage = "https://git.t-juice.club/torjus/labmcp";
+homepage = "https://code.t-juice.club/torjus/labmcp";
license = licenses.mit;
maintainers = [ ];
};