# QWED Open Responses

Verify AI agent outputs before execution.
## What is QWED Open Responses?
QWED Open Responses provides deterministic verification guards for AI agent outputs. It works with:
- OpenAI Responses API
- LangChain agents
- LlamaIndex
- Any AI framework
## The Problem

When AI agents execute tools or generate structured outputs, they can:

- **Call dangerous functions** - `rm -rf /`, `DROP TABLE`
- **Produce incorrect calculations** - financial errors, wrong totals
- **Violate business rules** - invalid state transitions
- **Leak sensitive data** - PII, API keys in responses
- **Exceed budgets** - unlimited API calls
## The Solution

QWED Open Responses intercepts and verifies every agent output before execution:

```text
AI Agent Output → Guards → Verified? ──┬─ YES → Execute
                                       └─ NO  → Block + Error
```
## How It Works

```text
┌───────────────────────────────────────────────────────────────────┐
│  AI Agent (GPT, Claude, etc.)                                     │
│                                                                   │
│  "Call calculator with x=150, y=10, result=1600"                  │
└─────────────────────────────┬─────────────────────────────────────┘
                              │
                              │ Tool Call / Structured Output
                              ▼
┌───────────────────────────────────────────────────────────────────┐
│  QWED Open Responses Verifier                                     │
├───────────────────────────────────────────────────────────────────┤
│                                                                   │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────────┐  │
│  │ SchemaGuard │ │  ToolGuard  │ │ MathGuard                   │  │
│  │ JSON Valid  │ │  Blocklist  │ │ 150 × 10 ≠ 1600  ✗          │  │
│  └─────────────┘ └─────────────┘ └─────────────────────────────┘  │
│                                                                   │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────────┐  │
│  │ StateGuard  │ │ArgumentGuard│ │ SafetyGuard                 │  │
│  │ Transitions │ │ Type Check  │ │ PII, Injection, Budget      │  │
│  └─────────────┘ └─────────────┘ └─────────────────────────────┘  │
│                                                                   │
│  MathGuard Failed: 150 × 10 = 1500, not 1600                      │
│                                                                   │
└─────────────────────────────┬─────────────────────────────────────┘
                              │
                              ▼
                     ┌─────────────────┐
                     │   ✗ BLOCKED     │
                     │   Return error  │
                     └─────────────────┘
```
## The 6 Guards

| Guard | What It Verifies | Example Catch |
|---|---|---|
| SchemaGuard | JSON Schema compliance | Missing required field |
| ToolGuard | Block dangerous tool calls | `execute_shell` blocked |
| MathGuard | Verify calculations | 150 × 10 ≠ 1600 |
| StateGuard | Valid state transitions | `completed → pending` invalid |
| ArgumentGuard | Tool argument validation | `amount: "abc"` not a number |
| SafetyGuard | PII, injection, budget | SSN detected in output |
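Each guard is a deterministic check, not another LLM call. As a rough illustration of what a MathGuard-style check does, here is a self-contained plain-Python sketch (not the library's actual implementation): recompute the operation and compare it to the result the agent claimed.

```python
# Illustrative sketch only: recompute a claimed arithmetic result and
# compare it to what the agent reported (the kind of check MathGuard does).
def check_math(operation: str, x: float, y: float, claimed: float) -> bool:
    ops = {
        "add": lambda a, b: a + b,
        "subtract": lambda a, b: a - b,
        "multiply": lambda a, b: a * b,
        "divide": lambda a, b: a / b,
    }
    return ops[operation](x, y) == claimed

print(check_math("multiply", 150, 10, 1600))  # False: 150 × 10 = 1500
print(check_math("multiply", 150, 10, 1500))  # True
```

Because the check is a recomputation rather than a model judgment, the same input always produces the same verdict.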
## Installation

### Basic

```bash
pip install qwed-open-responses
```

### With Framework Integrations

```bash
# OpenAI
pip install qwed-open-responses[openai]

# LangChain
pip install qwed-open-responses[langchain]

# All integrations
pip install qwed-open-responses[all]
```
## Quick Start

### Basic Verification

```python
from qwed_open_responses import ResponseVerifier
from qwed_open_responses.guards import ToolGuard, MathGuard, SafetyGuard

verifier = ResponseVerifier()

# Verify a tool call before execution
result = verifier.verify_tool_call(
    tool_name="calculator",
    arguments={
        "operation": "multiply",
        "x": 150,
        "y": 10,
        "result": 1500,  # Correct!
    },
    guards=[ToolGuard(), MathGuard(), SafetyGuard()],
)

if result.verified:
    print("✅ Safe to execute")
    execute_tool(result.tool_name, result.arguments)
else:
    print(f"❌ Blocked: {result.block_reason}")
    print(f"   Failed guard: {result.failed_guard}")
```
### Verify Structured Output

```python
from qwed_open_responses.guards import SchemaGuard

# Define expected schema
order_schema = {
    "type": "object",
    "required": ["order_id", "total", "items"],
    "properties": {
        "order_id": {"type": "string"},
        "total": {"type": "number", "minimum": 0},
        "items": {"type": "array", "minItems": 1},
    },
}

result = verifier.verify_structured_output(
    output={
        "order_id": "ORD-123",
        "total": 99.99,
        "items": [{"name": "Widget", "price": 99.99}],
    },
    guards=[SchemaGuard(schema=order_schema)],
)
```
## Framework Integration

### LangChain

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from qwed_open_responses.guards import ToolGuard, SafetyGuard
from qwed_open_responses.middleware.langchain import QWEDCallbackHandler

# Create callback with guards
callback = QWEDCallbackHandler(
    guards=[ToolGuard(), SafetyGuard()],
    block_on_failure=True,  # Stop execution if a guard fails
)

# Add to agent
llm = ChatOpenAI(model="gpt-4")
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    callbacks=[callback],
)

# Every tool call is now verified!
result = executor.invoke({"input": "Calculate 25% of 500"})
```
### OpenAI SDK

```python
from qwed_open_responses.middleware.openai_sdk import VerifiedOpenAI
from qwed_open_responses.guards import SchemaGuard, SafetyGuard

# Create verified client
client = VerifiedOpenAI(
    api_key="sk-...",
    guards=[
        SchemaGuard(schema=my_schema),
        SafetyGuard(block_pii=True),
    ],
)

# Use normally - verification is automatic
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Generate an order"}],
    tools=my_tools,
)

# Tool calls are verified before returning
for tool_call in response.choices[0].message.tool_calls:
    print(f"Verified tool call: {tool_call.function.name}")
```
## Why QWED Open Responses?

### Security Comparison

| Threat | Without Verification | With QWED |
|---|---|---|
| Agent calls `rm -rf /` | System destroyed | ✅ BLOCKED |
| SQL injection in query | Data breach | ✅ BLOCKED |
| Wrong calculation | Financial loss | ✅ CAUGHT |
| PII in API response | Compliance violation | ✅ DETECTED |
| Infinite tool loop | $10,000 API bill | ✅ BUDGET GUARD |
### Real-World Impact
- Finance: Prevent wrong calculations in trading bots
- Healthcare: Block PII leaks in patient summaries
- E-commerce: Verify order totals before payment
- DevOps: Prevent dangerous shell commands
## Configuration

### Environment Variables

| Variable | Description | Default |
|---|---|---|
| `QWED_OR_LOG_LEVEL` | Logging level | `INFO` |
| `QWED_OR_STRICT` | Fail on any guard failure | `true` |
| `QWED_OR_MAX_BUDGET` | Maximum API cost allowed | `100.0` |
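These variables can be read in the usual way. A minimal sketch of consuming them with the documented defaults (illustrative only, not the library's own configuration loader):

```python
import os

# Illustrative sketch: read the documented variables, falling back
# to the defaults from the table above when they are unset.
log_level = os.environ.get("QWED_OR_LOG_LEVEL", "INFO")
strict = os.environ.get("QWED_OR_STRICT", "true").lower() == "true"
max_budget = float(os.environ.get("QWED_OR_MAX_BUDGET", "100.0"))

print(log_level, strict, max_budget)  # "INFO True 100.0" when none are set
```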
### Custom Guard Configuration

```python
from qwed_open_responses import ResponseVerifier
from qwed_open_responses.guards import ToolGuard, SafetyGuard

# Custom tool blocklist
tool_guard = ToolGuard(
    blocklist=["execute_shell", "delete_database", "send_email"],
    allow_unknown=False,  # Block tools not in the allowlist
)

# Custom safety settings
safety_guard = SafetyGuard(
    block_pii=True,
    block_injection=True,
    max_budget=50.0,  # $50 limit
    harmful_patterns=["password", "secret", "token"],
)

verifier = ResponseVerifier(guards=[tool_guard, safety_guard])
```
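The decision logic that a blocklist plus `allow_unknown=False` implies can be sketched in plain Python (illustrative only; `KNOWN_TOOLS` is a hypothetical allowlist introduced here, and the real checks live inside ToolGuard):

```python
# Illustrative sketch: blocklist semantics with allow_unknown=False,
# i.e. a tool must be explicitly known AND not on the blocklist.
BLOCKLIST = {"execute_shell", "delete_database", "send_email"}
KNOWN_TOOLS = {"calculator", "search", "send_email"}  # hypothetical allowlist

def is_allowed(tool_name: str, allow_unknown: bool = False) -> bool:
    if tool_name in BLOCKLIST:
        return False              # explicitly blocked, always
    if tool_name not in KNOWN_TOOLS:
        return allow_unknown      # unknown tools need explicit opt-in
    return True

print(is_allowed("calculator"))     # True
print(is_allowed("execute_shell"))  # False: on the blocklist
print(is_allowed("mystery_tool"))   # False: unknown, allow_unknown=False
```

Note that the blocklist wins over the allowlist: `send_email` is known but still blocked.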
## Next Steps
- Guards Reference - Deep dive into each guard
- Examples - Real-world use cases
- LangChain Integration - Agent verification
- OpenAI Integration - Responses API
- Troubleshooting - Common issues
## Links
- GitHub: QWED-AI/qwed-open-responses
- PyPI: qwed-open-responses
- npm: qwed-open-responses