# Logging & Observability
Wilson uses structured logging powered by Winston. Logs are always available in the dashboard and TUI debug panel. File logging and OpenTelemetry export are opt-in.
## Log Levels

| Level | Description |
|---|---|
| `debug` | Verbose agent internals (tool calls, LLM requests) |
| `info` | Normal operations (imports, queries, session events) |
| `warn` | Recoverable issues (parse warnings, fallback behavior) |
| `error` | Failures (API errors, DB errors) |
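Levels like these are conventionally ordered by severity, so a single numeric threshold decides what gets logged. A minimal sketch of that idea (the names `SEVERITY` and `shouldLog` are illustrative, not Wilson's actual internals):

```typescript
type LogLevel = "debug" | "info" | "warn" | "error";

// Numeric severities: higher means more important.
const SEVERITY: Record<LogLevel, number> = { debug: 0, info: 1, warn: 2, error: 3 };

// An entry is emitted only if it meets the configured threshold.
function shouldLog(entry: LogLevel, threshold: LogLevel): boolean {
  return SEVERITY[entry] >= SEVERITY[threshold];
}

console.log(shouldLog("debug", "info")); // false: debug is below the info threshold
console.log(shouldLog("error", "info")); // true
```

This is why enabling debug mode surfaces everything: lowering the threshold to `debug` admits all four levels.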
## In-Memory Buffer (Default)

By default, the last 50 log entries are kept in memory. View them in:

- **Dashboard** — Logs tab at http://localhost:3141
- **TUI** — Debug panel (`Ctrl+D` in interactive mode)

No configuration required.
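A "last N entries" buffer like this is typically a ring buffer: new entries push out the oldest once capacity is reached. A minimal sketch (the class is illustrative, not Wilson's actual code):

```typescript
// Fixed-capacity buffer that keeps only the most recent entries.
class RingBuffer<T> {
  private entries: T[] = [];
  constructor(private capacity: number) {}

  push(entry: T): void {
    this.entries.push(entry);
    if (this.entries.length > this.capacity) this.entries.shift(); // drop oldest
  }

  toArray(): T[] {
    return [...this.entries];
  }
}

const logs = new RingBuffer<string>(50);
for (let i = 1; i <= 60; i++) logs.push(`entry ${i}`);
console.log(logs.toArray().length); // 50; only the most recent entries remain
console.log(logs.toArray()[0]); // "entry 11"
```

Memory use stays bounded no matter how long the session runs, which is why this mode needs no configuration.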
## File Logging

Enable JSON Lines file logging for persistent logs:

```sh
wilson --debug       # Enable via CLI flag
OA_DEBUG=1 wilson    # Enable via environment variable
```

Logs are written to `~/.openaccountant/logs/agent.log` as newline-delimited JSON:

```json
{"ts":"2026-03-01T12:00:00.000Z","level":"info","msg":"Imported 42 transactions","data":{"file":"chase.csv"}}
```

File rotation: 5 MB max size, 1 backup file. `agent.log` is always the current file.
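Because each line is a standalone JSON object, the file is easy to post-process with a few lines of code. A sketch that pulls out error entries (the `LogEntry` shape mirrors the example line above; the helper name is illustrative):

```typescript
interface LogEntry {
  ts: string;
  level: string;
  msg: string;
  data?: Record<string, unknown>;
}

// Parse newline-delimited JSON and keep only error-level entries.
function errorsFrom(ndjson: string): LogEntry[] {
  return ndjson
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line) as LogEntry)
    .filter((entry) => entry.level === "error");
}

// Usage against the documented default path:
// import { readFileSync } from "node:fs";
// const log = readFileSync(`${process.env.HOME}/.openaccountant/logs/agent.log`, "utf8");
// console.log(errorsFrom(log));
```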
## LLM Traces (Dashboard)

Every LLM API call is automatically traced and visible in the dashboard Traces tab at http://localhost:3141. No configuration needed.
Each trace records:
- **Model and provider** (e.g., `gpt-5.2` via `openai`)
- **Duration** — wall-clock time for the API call
- **Token usage** — input tokens, output tokens
- **Prompt/response lengths** — character counts
- **Status** — success or error (with error message)
The Traces tab also shows session-level stats: total calls, error count, total tokens consumed, and average latency. Stats are grouped by model so you can compare performance across providers.
Traces are kept in memory (last 200 calls) and reset when Wilson exits.
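The per-model roll-up the Traces tab shows can be sketched as a single pass over the in-memory trace list. Field names here are assumptions for illustration, not Wilson's actual trace schema:

```typescript
interface Trace {
  model: string;
  durationMs: number;
  inputTokens: number;
  outputTokens: number;
  ok: boolean;
}

interface ModelStats {
  calls: number;
  errors: number;
  totalTokens: number;
  avgLatencyMs: number;
}

// Group traces by model and accumulate the session-level stats.
function statsByModel(traces: Trace[]): Map<string, ModelStats> {
  const stats = new Map<string, ModelStats>();
  for (const t of traces) {
    const s = stats.get(t.model) ?? { calls: 0, errors: 0, totalTokens: 0, avgLatencyMs: 0 };
    // Incremental mean keeps avgLatencyMs correct without a second pass.
    s.avgLatencyMs = (s.avgLatencyMs * s.calls + t.durationMs) / (s.calls + 1);
    s.calls += 1;
    if (!t.ok) s.errors += 1;
    s.totalTokens += t.inputTokens + t.outputTokens;
    stats.set(t.model, s);
  }
  return stats;
}
```

Grouping by model is what makes cross-provider comparison cheap: each entry in the map is one row of the stats view.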
## Full Interaction Capture

Beyond lightweight traces, Wilson also captures the full content of every LLM call — system prompts, user prompts, responses, tool calls, and tool results — to SQLite. This data powers the Training Data Pipeline for fine-tuning custom models. Interaction capture is always on and adds negligible overhead.
## OpenTelemetry Export

Send logs to any OpenTelemetry-compatible collector (Grafana, Jaeger, Datadog, etc.):

```sh
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 wilson
```

Wilson uses:

- HTTP OTLP exporter (not gRPC) for Bun compatibility
- `SimpleLogRecordProcessor` for immediate export (CLI sessions are short-lived)
- Resource attributes: `service.name=wilson`, `service.version=0.1.0`
### Example: Local Grafana Stack

Run a local OTel collector with Grafana:

```sh
# Start Grafana + Loki + OTel collector (via docker-compose)
docker compose up -d

# Launch Wilson with OTel export
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 wilson
```

If no collector is reachable, Wilson silently continues without export — it never blocks on telemetry.