## Installation

The `qwed` CLI is available automatically after installing the package.
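A minimal install sketch, assuming the package is published on PyPI under the name `qwed` (check the project's own install docs for the authoritative package name):

```shell
# Install the package; the qwed CLI entry point is registered automatically
pip install qwed

# Confirm the CLI is on your PATH
qwed --help
```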
## Commands

### `qwed verify` - One-Shot Verification

Verify a query and exit.
- `--provider`, `-p`: LLM provider (`openai`, `anthropic`, `gemini`)
- `--model`, `-m`: Model name (e.g., `gpt-4o-mini`, `llama3`)
- `--base-url`: Custom API endpoint (for Ollama: `http://localhost:11434/v1`)
- `--api-key`: API key (or use the `QWED_API_KEY` env var)
- `--no-cache`: Disable caching
- `--quiet`, `-q`: Minimal output (for scripts)
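A sketch of a one-shot run using the flags above. The query string and the API key value are placeholders, not real examples from the project:

```shell
# Verify a single claim against OpenAI, skipping the cache
qwed verify "2 + 2 = 4" --provider openai --model gpt-4o-mini --no-cache

# Same call with short flags and minimal output, suitable for scripts
qwed verify "2 + 2 = 4" -p openai -m gpt-4o-mini -q --api-key "your-key-here"
```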
### `qwed interactive` - Interactive Mode

Start an interactive REPL session.
- `stats`: Show cache statistics
- `exit`, `quit`, `q`: Exit interactive mode
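A sketch of a session using the in-REPL commands above (the exact prompt text may differ; responses are omitted rather than guessed):

```shell
qwed interactive
# At the REPL prompt, type queries to verify them, or:
#   stats   -> show cache statistics
#   exit    -> leave interactive mode
```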
### `qwed cache` - Cache Management

Manage the verification result cache.
#### `qwed cache stats`

Show cache statistics.

#### `qwed cache clear`

Clear all cached results.
## Environment Variables

### `QWED_API_KEY`

Set a default API key (useful for scripts).

### `QWED_QUIET`

Disable colorful branding output.
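Both variables can be exported once instead of passing flags on every call. The key value is a placeholder, and the truthy value accepted by `QWED_QUIET` is an assumption (verify against your version's docs):

```shell
# Set a default API key for all subsequent qwed calls
export QWED_API_KEY="your-api-key-here"

# Suppress the colorful branding output (assumed truthy value)
export QWED_QUIET=1
```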
## Configuration

### Provider Priority

QWED auto-detects providers in this order:

1. Command-line flags (`--provider`, `--base-url`)
2. Environment variables (`QWED_API_KEY`)
3. Ollama default (tries `http://localhost:11434/v1`)
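The priority order above means flags always win over the environment, and Ollama is only the fallback. A sketch (query strings and key values are placeholders):

```shell
# Flags override the environment: this call uses Anthropic,
# even though QWED_API_KEY is set for another provider
export QWED_API_KEY="default-key-here"
qwed verify "your query" --provider anthropic --api-key "anthropic-key-here"

# With no flags and no QWED_API_KEY, QWED falls back to a
# local Ollama server at http://localhost:11434/v1
qwed verify "your query"
```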
### Default Models

| Provider | Default Model |
|---|---|
| Ollama | llama3 |
| OpenAI | gpt-3.5-turbo |
| Anthropic | claude-3-haiku |
| Gemini | gemini-pro |
## Output Formats

### Colorful Output (Default)

### Quiet Output (`--quiet`)

### Error Output
## Use Cases

### 1. Quick Verification
### 2. Scripting
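A scripting sketch built on the documented `--quiet` flag. It assumes `qwed verify` exits nonzero when verification fails, which is the usual CLI convention but should be confirmed for your version:

```shell
#!/bin/sh
# Gate a pipeline step on a verification result (query is a placeholder)
if qwed verify "2 + 2 = 4" --quiet; then
  echo "verified"
else
  echo "verification failed" >&2
  exit 1
fi
```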
### 3. Local Development
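For local development, the documented `--base-url` flag and the Ollama defaults from the table above avoid any API key. A sketch (query is a placeholder):

```shell
# Verify against a local Ollama server; no API key needed
qwed verify "your query" --base-url http://localhost:11434/v1 --model llama3
```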
### 4. Cache Performance Testing
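Caching is on by default with a 24h TTL, so repeating an identical query should be served from the cache. A sketch for eyeballing the difference:

```shell
# First run populates the cache; the repeat should be near-instant
time qwed verify "2 + 2 = 4"
time qwed verify "2 + 2 = 4"

# Inspect hit counts afterwards
qwed cache stats
```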
## Troubleshooting

### "Ollama not running"

### "API key required"

### "Module not found"
## Advanced Features

### Caching Behavior

- Default: Enabled (24h TTL)
- Disable: `--no-cache` flag
- Clear: `qwed cache clear`
- Stats: `qwed cache stats`
### Multiple Providers
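Because the provider is chosen per invocation, one script can fan a query out to every documented provider with its default model. The `*_KEY` shell variables are placeholders for your own keys:

```shell
# Compare providers on the same query (query and keys are placeholders)
qwed verify "your query" -p openai    -m gpt-3.5-turbo  --api-key "$OPENAI_KEY"
qwed verify "your query" -p anthropic -m claude-3-haiku --api-key "$ANTHROPIC_KEY"
qwed verify "your query" -p gemini    -m gemini-pro     --api-key "$GEMINI_KEY"
```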
## Related Docs
- QWEDLocal Guide - Python API
- Ollama Integration - FREE local LLMs
- Full Documentation - Complete docs
Made with ❤️ by the QWED team