MCP vs CLI vs web console

The MCP server is one of three ways to interact with Loguro. They share auth and read the same data — pick whichever has the lowest friction in the moment.

Reach for MCP when…

  • Debugging in the IDE: ask Claude/Cursor about a failure and let it query logs iteratively
  • Natural-language questions: “show me errors after the deploy at 14:00”
  • Multi-step investigation where Claude calls get_log_timeline, then group_logs, then get_distinct_values autonomously
  • Sharing context with the AI without copy-pasting log dumps

Reach for the CLI when…

  • Reproducible scripts: cron jobs, CI alerts, loguro logs --json | jq
  • Direct queries you already know: loguro logs -l error --from 1h
  • One-shot ad-hoc queries from the terminal, piping into other tools
  • Bulk operations: project + key + alert in one shell session

Reach for the Web Console when…

  • Visual exploration: charts, timeline view, replay, saved-view dashboards
  • Setting up integrations interactively (OAuth flows, Slack app install)
  • First-time onboarding for someone new to your project
  • Issue creation with rich UI, log shares, embed widgets
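As a sketch of the “reproducible scripts” case: a CI alert might combine the two commands above. The flags (-l error, --from 1h, --json) appear on this page, but the output shape is an assumption, so the payload below is mocked:

```shell
# Hypothetical CI alert built from the flags shown above. In a real
# pipeline you would capture live output:
#   logs=$(loguro logs -l error --from 1h --json)
# Here the payload is mocked, and the shape (a JSON array of log
# objects) is an assumption, not something this page documents.
logs='[{"level":"error","msg":"db timeout"},{"level":"error","msg":"5xx spike"}]'
count=$(printf '%s' "$logs" | jq 'length')
if [ "$count" -gt 0 ]; then
  echo "Found $count error logs in the last hour"
  # exit 1   # uncomment in CI to fail the job on errors
fi
```

Because the check is just a shell pipeline, it drops into cron or any CI step unchanged.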

Token footprint

The MCP server exposes 6 focused tools (query_logs, get_log_timeline, get_distinct_values, group_logs, get_slow_logs, sample_logs), roughly 1k tokens of system-prompt overhead versus the 5–10k tokens of MCP servers that ship 30+ generic tools. That overhead matters when you run smaller models or have a long debugging conversation.
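For a sense of why the surface stays small, here is a hedged sketch of how one of those tools might be declared in the standard MCP tool-definition shape (name, description, inputSchema). The tool name comes from the list above; the description and parameters are illustrative assumptions, not the server's actual schema:

```json
{
  "name": "get_slow_logs",
  "description": "Return the slowest requests in a time window",
  "inputSchema": {
    "type": "object",
    "properties": {
      "from": { "type": "string", "description": "relative window, e.g. \"1h\" (assumed)" },
      "limit": { "type": "integer", "default": 20 }
    }
  }
}
```

A handful of tight schemas like this is what keeps the prompt overhead near 1k tokens.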

Latency

Each tool call is a single fetch — no SQL round-tripping through an interpreter. For interactive AI workflows this is fast enough that Claude can chain 3–5 calls in a single turn without lag.

For bulk fetches (10k+ rows, exports, scripted pipelines) the CLI streams JSON faster — use loguro logs --json | jq instead.
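To make the bulk path concrete, here is the jq side of such a pipeline, e.g. loguro logs --json | jq '…'. The payload below is mocked and the "service" field name is an assumption; substitute whatever fields your logs actually carry:

```shell
# jq filter for a bulk export: count rows per service. The input is
# mocked here; in practice it would come from loguro logs --json.
# The "service" field name is an assumption for illustration.
export_json='[{"service":"api"},{"service":"api"},{"service":"worker"}]'
by_service=$(printf '%s' "$export_json" \
  | jq -c 'group_by(.service) | map({service: .[0].service, count: length})')
echo "$by_service"
```

jq's group_by sorts by the grouping key, so the summary comes out in stable service order, ready for diffing between runs.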

Combine them

  • CLI for setup and scripted workflows
  • MCP for IDE-driven debugging and natural-language exploration
  • Web for visual analysis and saved-view dashboards

