Understanding QWED's Architecture
QWED uses LLMs as untrusted translators, not as answer generators.

Key Insight: The LLM translates natural language to structured form. QWED then verifies the structured form using deterministic engines. The LLM can be wrong; QWED catches and corrects errors.
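A minimal sketch of this division of labor, using plain Python with hypothetical stand-in functions (not QWED's actual internals): the "LLM" mistranslates an arithmetic question, and a deterministic re-derivation catches the mismatch.

```python
# Sketch: the "LLM" returns a structured claim; a deterministic
# engine re-derives the value and flags any disagreement.
# llm_translate and deterministic_verify are illustrative stand-ins.

def llm_translate(question: str) -> dict:
    # Pretend the LLM mistranslated "What is 17 * 24?" and claimed 418.
    return {"expression": "17 * 24", "claimed_value": 418}

def deterministic_verify(claim: dict) -> dict:
    a, _, b = claim["expression"].split()
    actual = int(a) * int(b)  # exact integer arithmetic, no sampling
    return {
        "verified": actual == claim["claimed_value"],
        "actual_value": actual,
    }

result = deterministic_verify(llm_translate("What is 17 * 24?"))
print(result)  # {'verified': False, 'actual_value': 408}
```

The LLM's claim of 418 never reaches the caller unflagged: verification recomputes 408 and reports the discrepancy.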
Supported LLM Providers
| Provider | Env Variable | Models |
|---|---|---|
| OpenAI | OPENAI_API_KEY | gpt-4o, gpt-4-turbo, gpt-3.5-turbo |
| Anthropic | ANTHROPIC_API_KEY | claude-3-opus, claude-3-sonnet |
| Google | GOOGLE_API_KEY | gemini-pro, gemini-ultra |
| Azure OpenAI | AZURE_OPENAI_* | Any Azure-hosted model |
| Local/Ollama | OLLAMA_BASE_URL | llama2, mistral, codellama |
| Custom | QWED_LLM_BASE_URL | Any OpenAI-compatible API |
Configuration Options
Option 1: Use QWED's Built-in Translation (Recommended)
QWED can handle LLM translation internally.

Option 2: Bring Your Own LLM
Use QWED purely as a verification layer.

Option 3: Self-Hosted with Custom LLM
For self-hosted deployments, configure your LLM.

Provider-Specific Setup
OpenAI
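A typical environment setup, assuming standard shell export (the variable name comes from the provider table above; the key value is a placeholder):

```shell
export OPENAI_API_KEY="sk-..."   # your OpenAI API key
```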
Anthropic (Claude)
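The equivalent export for Anthropic (variable name from the provider table; placeholder value):

```shell
export ANTHROPIC_API_KEY="sk-ant-..."   # your Anthropic API key
```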
Google (Gemini)
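The equivalent export for Google (variable name from the provider table; placeholder value):

```shell
export GOOGLE_API_KEY="..."   # your Google AI API key
```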
Azure OpenAI
Local LLMs (Ollama)
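For a local Ollama instance, point QWED at the Ollama server (variable name from the provider table; Ollama listens on port 11434 by default):

```shell
# Ollama serves its API on port 11434 by default.
export OLLAMA_BASE_URL="http://localhost:11434"
```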
Custom OpenAI-Compatible API
For any API that's OpenAI-compatible (vLLM, LiteLLM, etc.), set QWED_LLM_BASE_URL to its endpoint.

Programmatic Configuration
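Environment variables can also be set from code before the client is constructed. A hedged sketch (the variable name comes from the provider table above; the URL is a placeholder for your deployment, and client construction itself is not shown):

```python
import os

# Point QWED's translation layer at a local OpenAI-compatible server
# (e.g. vLLM or LiteLLM). QWED_LLM_BASE_URL is taken from the provider
# table; the URL below is a placeholder.
os.environ["QWED_LLM_BASE_URL"] = "http://localhost:8000/v1"

print(os.environ["QWED_LLM_BASE_URL"])
```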
Translation vs Verification
Understanding the two phases:

| Phase | What Happens | Who Does It | Required? |
|---|---|---|---|
| Translation | Natural language → Structured form | LLM (any) | Optional |
| Verification | Structured form → Proof | QWED Engines | Required |
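The two phases can be sketched end to end. This is an illustrative stand-in, not QWED's code: `translate` hard-codes what an LLM call would return (here a compound-interest expression, with a hypothetical 2-year term), and `verify` evaluates the structured form deterministically with a restricted AST walker.

```python
import ast
import operator

def translate(natural_language: str) -> str:
    # Phase 1 (optional, LLM): natural language -> structured form.
    # Hard-coded stand-in for an LLM call; the 2-year term is assumed.
    return "1000 * (1 + 0.05) ** 2"

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow}

def verify(structured: str) -> float:
    # Phase 2 (required, deterministic): evaluate the structured form
    # with a restricted AST walker -- no LLM, no sampling.
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported construct")
    return walk(ast.parse(structured, mode="eval"))

value = verify(translate("Compound interest on $1000 at 5% for 2 years"))
print(value)  # approximately 1102.5
```

Only Phase 1 involves an LLM; if you already have the structured expression, you can call `verify`-style engines directly.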
When You Need an LLM
- `client.verify("Is the derivative of x² equal to 2x?")` → Needs LLM to parse
- `client.verify("Calculate compound interest on $1000 at 5%")` → Needs LLM
When You Don't Need an LLM
- `client.verify_math("diff(x**2, x) == 2*x")` → Already structured
- `client.verify_logic("(AND (GT x 5) (LT x 10))")` → Already in DSL
- `client.verify_sql("SELECT * FROM users")` → Already structured
- `client.verify_code("import os; os.system('rm -rf /')")` → Code, not NL
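To make "already structured" concrete, here is a toy evaluator for the logic-DSL example above. It is an illustrative sketch, not QWED's engine; the grammar is inferred from the single `(AND (GT x 5) (LT x 10))` sample. No LLM is involved: the same input and variable bindings always produce the same answer.

```python
# Toy evaluator for s-expressions like "(AND (GT x 5) (LT x 10))".

def tokenize(src: str):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return node
    return tok

def evaluate(node, env):
    if isinstance(node, str):
        # A leaf is either a bound variable or an integer literal.
        return env[node] if node in env else int(node)
    op, *args = node
    vals = [evaluate(a, env) for a in args]
    if op == "AND":
        return all(vals)
    if op == "OR":
        return any(vals)
    if op == "GT":
        return vals[0] > vals[1]
    if op == "LT":
        return vals[0] < vals[1]
    raise ValueError(f"unknown operator {op}")

expr = parse(tokenize("(AND (GT x 5) (LT x 10))"))
print(evaluate(expr, {"x": 7}))   # True
print(evaluate(expr, {"x": 12}))  # False
```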
FAQ
Do I need an LLM to use QWED?
No. If you're sending structured queries (math expressions, SQL, code, QWED-Logic DSL), you don't need an LLM. QWED engines work directly on structured input.

Can I use my own LLM and just use QWED for verification?
Yes. This is the "Bring Your Own LLM" pattern. Call your LLM, then pass its output to QWED for verification.

Which LLM is best for QWED translation?
For translation accuracy, we recommend:

- GPT-4o (best)
- Claude 3 Opus
- Gemini Pro
- GPT-3.5-turbo (good for simple queries)
Is the LLM translation deterministic?
We set `temperature=0` for reproducibility, but LLMs are inherently probabilistic. That's why QWED verification is essential: it provides the determinism guarantee.