
Bring your own AI key

QueryDen calls OpenAI, Anthropic, Gemini, or Ollama with a key you provide. No server-side AI.

Last updated 2026-05-13

QueryDen does not run an inference server. When you use the AI features, the call goes directly from your machine to the provider you configured, with the API key you provided.

Configuring a provider

Settings → AI → Add provider. Pick one of:

  • OpenAI — paste an sk-... key.
  • Anthropic — paste an sk-ant-... key.
  • Google (Gemini) — paste an AIza... key.
  • Ollama — point at a local Ollama instance (http://localhost:11434 by default). No key needed.
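Each provider's key format has a recognizable prefix, which makes a quick sanity check possible before saving. A minimal sketch (this helper is illustrative, not part of QueryDen):

```python
# Expected key prefixes per provider, as listed above.
KEY_PREFIXES = {
    "openai": "sk-",
    "anthropic": "sk-ant-",
    "google": "AIza",
}

def looks_valid(provider: str, key: str) -> bool:
    """Return True if the pasted key starts with the provider's prefix.

    Ollama takes no key, so any value (including empty) passes.
    """
    prefix = KEY_PREFIXES.get(provider)
    if prefix is None:  # e.g. "ollama"
        return True
    return key.startswith(prefix)
```

Note that sk-ant-... keys also start with sk-, so a prefix check can confirm a key matches the provider you picked, but cannot reliably guess the provider from the key alone.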

The key is written to the encrypted vault, alongside your connections. It never leaves your machine except in the authentication header of the request to the provider you configured.
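The exact header depends on the provider's public API: OpenAI expects Authorization: Bearer, Anthropic x-api-key, and Gemini x-goog-api-key. A sketch of that mapping (header names follow the providers' published API docs; QueryDen's internal code may differ):

```python
def auth_headers(provider: str, key: str) -> dict:
    """Build the auth header each provider's HTTP API documents."""
    if provider == "openai":
        return {"Authorization": f"Bearer {key}"}
    if provider == "anthropic":
        return {"x-api-key": key}
    if provider == "google":
        return {"x-goog-api-key": key}
    return {}  # ollama: local instance, no key
```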

Where the AI is actually used

  • EXPLAIN visualizer — given a plan tree, the AI is asked for a one-paragraph diagnosis and a suggested CREATE INDEX or ALTER TABLE statement. The suggestion is read-only until you click Apply.

That is the only wired-up surface today. The standalone “AI Assistant” toolbar dialog is a stub and does not call any provider — issue #10.
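The request the visualizer sends boils down to a plan tree plus a fixed instruction. A hypothetical sketch of that shape (the actual prompt text is internal to QueryDen):

```python
def build_plan_prompt(plan_json: str) -> str:
    """Illustrative only: wrap an EXPLAIN plan in the kind of
    instruction described above (one-paragraph diagnosis plus one
    suggested DDL statement)."""
    return (
        "You are a database performance assistant. Given this EXPLAIN "
        "plan, reply with a one-paragraph diagnosis followed by a single "
        "suggested CREATE INDEX or ALTER TABLE statement.\n\n"
        + plan_json
    )
```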

Model selection

Default models per provider:

| Provider  | Default                  | Why                                             |
|-----------|--------------------------|-------------------------------------------------|
| OpenAI    | gpt-4o-mini              | Cheap, fast, good enough for plan-tree analysis |
| Anthropic | claude-3-5-haiku-latest  | Cheapest tier; switch to sonnet for harder plans |
| Google    | gemini-1.5-flash         | Free tier covers most use                       |
| Ollama    | llama3.1:8b              | Local; no network call                          |

Override per-provider in settings.

Privacy posture

  • No telemetry. The app does not record AI usage or send it anywhere except the provider.
  • No prompts are stored unless you save the resulting suggestion to local history.
  • No background pings. If you don’t use the AI surface, no AI requests are made.

Coming soon

  • “Ask the schema” inline assistant in the editor — wired to the same provider chain. Currently blocked on the stub linked above.
  • Cost guardrails (max tokens per request, monthly spend ceiling).