## 🌟 Why QWEDLocal?

### No Backend Server Needed
- Run verification directly in your application
- No infrastructure to manage
- Perfect for prototyping, scripts, and small projects
### 100% Privacy
- Your API keys stay on your machine
- Your data never touches QWED servers
- Suited to HIPAA, GDPR, and other sensitive-data requirements
### Model Agnostic
- Works with ANY LLM - OpenAI, Anthropic, Gemini
- Works with local models via Ollama (FREE!)
- Works with any OpenAI-compatible API
### Smart Caching
- Automatic result caching saves API costs
- 50-80% cost reduction on repeated queries
- 10x faster for cache hits
## 📦 Installation
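A typical install from PyPI; the package name `qwedlocal` is an assumption, so check the project page for the published name:

```bash
pip install qwedlocal          # package name assumed; see the project page

# Optional backends used by the verification engines below:
pip install sympy z3-solver    # SymPy for math, Z3 for logic
```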
## 🚀 Quick Start
### Option 1: Ollama (FREE! $0/month)
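A minimal sketch, assuming a `QWEDLocal` client class; the class name, import path, and parameter names are all assumptions, while the endpoint shown is Ollama's real default:

```python
from qwedlocal import QWEDLocal  # hypothetical import path

verifier = QWEDLocal(
    provider="ollama",                    # hypothetical parameter names
    model="llama3",                       # any model pulled via `ollama pull`
    base_url="http://localhost:11434",    # Ollama's default local endpoint
)
print(verifier.verify("2 + 2 = 4"))       # runs against the local model, $0 per call
```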
### Option 2: OpenAI
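The same hypothetical client, pointed at OpenAI; the key is read from your environment and, per the privacy notes above, never leaves your machine:

```python
import os
from qwedlocal import QWEDLocal  # hypothetical import path

verifier = QWEDLocal(
    provider="openai",                       # hypothetical parameter names
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],    # stays on your machine
)
```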
### Option 3: Anthropic Claude
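The same hypothetical client, pointed at Anthropic:

```python
import os
from qwedlocal import QWEDLocal  # hypothetical import path

verifier = QWEDLocal(
    provider="anthropic",                      # hypothetical parameter names
    model="claude-3-5-sonnet-latest",
    api_key=os.environ["ANTHROPIC_API_KEY"],   # stays on your machine
)
```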
## 🔬 Verification Engines
### 1. Math Verification (SymPy)
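To show the kind of check this engine performs, here is SymPy used directly; this illustrates the approach, not necessarily QWEDLocal's exact internals:

```python
from sympy import simplify, sympify

# Claim: 2x + 3x = 5x. The claim holds iff the difference simplifies to zero.
lhs, rhs = sympify("2*x + 3*x"), sympify("5*x")
print(simplify(lhs - rhs) == 0)  # True -> verified
```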
### 2. Logic Verification (Z3)
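Likewise for the logic engine, here is Z3 checking an entailment directly (illustrative, not QWEDLocal's exact internals):

```python
from z3 import Bools, Implies, Not, Solver, unsat

# Claim: from (p -> q) and p, conclude q. The entailment is valid iff
# the premises plus the negated conclusion are unsatisfiable.
p, q = Bools("p q")
s = Solver()
s.add(Implies(p, q), p, Not(q))
print("valid" if s.check() == unsat else "counterexample exists")  # valid
```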
### 3. Code Security (AST)
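And for the security engine, the standard-library `ast` module can flag dangerous calls without executing the code (again, illustrative of the technique):

```python
import ast

DANGEROUS = {"eval", "exec", "compile", "__import__"}

def flag_dangerous_calls(source: str) -> list[str]:
    """Return the names of risky built-in calls found in the source."""
    return [
        node.func.id
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id in DANGEROUS
    ]

print(flag_dangerous_calls("eval(input())"))  # ['eval']
```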
## ⚡ Smart Caching

Automatic caching saves API costs! Identical repeated queries are served from the local cache instead of triggering a new LLM call, as sketched below.
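A minimal sketch, using the hypothetical client from Quick Start; nothing needs to be configured to enable the cache:

```python
from qwedlocal import QWEDLocal  # hypothetical import path, as in Quick Start

verifier = QWEDLocal(provider="ollama", model="llama3")

verifier.verify("Is 17 prime?")  # first call: goes to the LLM
verifier.verify("Is 17 prime?")  # identical query: answered from the local cache
```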
## 🎨 CLI Tool
### One-Shot Verification
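The `qwed` entry point and subcommands in this section are assumptions; run the installed tool's built-in help for the real names.

```bash
# Hypothetical entry point and subcommand:
qwed verify "The square root of 144 is 12"
```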
### Interactive Mode
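Again hypothetical: an assumed subcommand that opens a prompt loop for repeated checks.

```bash
qwed interactive   # assumed subcommand
```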
### Cache Management
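Assumed subcommands for inspecting and clearing the local cache:

```bash
qwed cache stats   # assumed subcommand
qwed cache clear   # assumed subcommand
```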
### Help
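Assuming the same hypothetical entry point:

```bash
qwed --help
```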
## 🎯 Cost Comparison
| Tier | Monthly Cost | LLM Options | Best For |
|---|---|---|---|
| Local | $0 | Ollama (Llama 3, Mistral, Phi) | Students, Privacy, Development |
| Budget | ~$5-10 | GPT-4o-mini, Gemini Flash | Startups, Prototypes |
| Premium | ~$50-100 | GPT-4, Claude Opus | Enterprises, Production |
## 🔒 Privacy & Security

### Your Data Never Leaves Your Machine
QWEDLocal's architecture keeps everything local, so your data never touches QWED servers. That makes it a fit for:

- Healthcare (HIPAA compliance)
- Finance (PCI-DSS compliance)
- Government (classified data)
- Privacy-focused applications
## 🔧 Advanced Configuration
### Custom Cache Settings
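A sketch of cache tuning; every parameter name below is an assumption, so check the project docs:

```python
from qwedlocal import QWEDLocal  # hypothetical import path

verifier = QWEDLocal(
    provider="ollama",
    model="llama3",
    cache_dir="~/.qwed_cache",   # hypothetical: where cached results live
    cache_ttl=86400,             # hypothetical: seconds before entries expire
)
```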
### Environment Variables
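The provider keys below use each vendor's conventional variable names; any `QWED_*` variables are assumptions:

```bash
export OPENAI_API_KEY="sk-..."             # standard OpenAI convention
export ANTHROPIC_API_KEY="sk-ant-..."      # standard Anthropic convention
export QWED_CACHE_DIR="$HOME/.qwed_cache"  # hypothetical project variable
```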
### Quiet Mode (No Branding)
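A sketch assuming a `quiet` constructor flag (hypothetical) that suppresses banner output:

```python
from qwedlocal import QWEDLocal  # hypothetical import path

verifier = QWEDLocal(provider="ollama", model="llama3", quiet=True)  # assumed flag
```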
## 📊 Examples
### Example 1: Fact Checking Pipeline
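A sketch of a fact-checking loop over the hypothetical API from Quick Start:

```python
from qwedlocal import QWEDLocal  # hypothetical import path

verifier = QWEDLocal(provider="ollama", model="llama3")

claims = [
    "The derivative of x**2 is 2*x",
    "All prime numbers are odd",
]
for claim in claims:
    print(claim, "->", verifier.verify(claim))
```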
### Example 2: Code Review Automation
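A sketch that scans a source tree with the security engine; `verify_code` is an assumed method name:

```python
from pathlib import Path
from qwedlocal import QWEDLocal  # hypothetical import path

verifier = QWEDLocal(provider="ollama", model="llama3")

for path in Path("src").rglob("*.py"):
    print(path, verifier.verify_code(path.read_text()))  # assumed method
```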
### Example 3: Batch Processing with Cache
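A sketch showing the cache at work in a batch; duplicate claims cost only one LLM call (hypothetical API):

```python
from qwedlocal import QWEDLocal  # hypothetical import path

verifier = QWEDLocal(provider="ollama", model="llama3")

batch = ["2 + 2 = 4", "3 * 3 = 9", "2 + 2 = 4"]  # note the duplicate
results = [verifier.verify(c) for c in batch]    # third call is a cache hit
```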
## 🐛 Troubleshooting
### LLM Not Available
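If you are running against Ollama, confirm the server is up and the model has been pulled; these are standard Ollama commands:

```bash
ollama serve         # start the local server (default port 11434)
ollama pull llama3   # download the model used in the examples above
```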
### Missing Dependencies
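The verification engines rely on real, separately installable packages:

```bash
pip install sympy z3-solver   # math (SymPy) and logic (Z3) backends
```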
### Cache Issues
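A possible fix, assuming the hypothetical CLI and cache path from earlier sections:

```bash
qwed cache clear       # assumed subcommand
rm -rf ~/.qwed_cache   # path assumed; see Custom Cache Settings
```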
## 🎓 Learn More
- CLI Guide - Complete CLI reference
- Ollama Integration - FREE local LLMs
- LLM Configuration - All provider setups
- Full Documentation - Complete docs