Each guard verifies a specific aspect of legal output. Guards are labeled DETERMINISTIC or PARTIAL / HEURISTIC to indicate the strength of the underlying check.
  • DETERMINISTIC guards return reproducible results for supported, structured inputs.
  • PARTIAL / HEURISTIC guards apply structural or rule-based checks. A passing result does not prove that the underlying legal claim is correct — only that it matched a supported pattern.
When a claim falls outside a guard’s supported boundary, the guard should be treated as fail-closed: reject or mark the claim unverified rather than accepting it.
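A fail-closed caller can be sketched in a few lines. This is an illustrative pattern, not part of the library: `accept` and the `GuardResult` shape are assumptions; the only premise taken from the docs is that guards expose a `verified` attribute.

```python
def accept(result) -> bool:
    # Fail closed: only an explicit verified=True passes. False, None,
    # or a missing attribute (out-of-scope input) all count as rejection.
    return getattr(result, "verified", False) is True

class GuardResult:  # stand-in for a guard's result object
    def __init__(self, verified):
        self.verified = verified

assert accept(GuardResult(True))
assert not accept(GuardResult(False))
assert not accept(GuardResult(None))  # guard could not evaluate: rejected
```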

1. DeadlineGuard

Status: DETERMINISTIC
Purpose: Verify date calculations in contracts for structured, unambiguous inputs.

The Problem

LLMs frequently miscalculate deadlines:
  • Confuse business days vs calendar days
  • Ignore leap years
  • Forget jurisdiction-specific holidays

The Solution

from qwed_legal import DeadlineGuard

guard = DeadlineGuard(country="US", state="CA")

result = guard.verify(
    signing_date="2026-01-15",
    term="30 business days",
    claimed_deadline="2026-02-14"
)

print(result.verified)           # False
print(result.computed_deadline)  # 2026-02-27
print(result.difference_days)    # 13

Parameters

  • signing_date (str, required): The date the contract was signed (ISO format or natural language).
  • term (str, required): The term description (e.g., “30 days”, “30 business days”, “2 weeks”, “3 months”, “1 year”).
  • claimed_deadline (str, required): The deadline claimed by the LLM.
  • tolerance_days (int, default: 0): Allow +/- this many days when verifying the deadline. Useful for accommodating minor rounding differences.

Response fields

| Field | Type | Description |
| --- | --- | --- |
| verified | bool | Whether the claimed deadline matches the computed deadline |
| signing_date | datetime | Parsed signing date |
| claimed_deadline | datetime | The deadline claimed by the LLM |
| computed_deadline | datetime | The correct deadline computed by the guard |
| term_parsed | str | The original term string |
| difference_days | int | Absolute difference in days between claimed and computed |
| message | str | Human-readable verification message |
| verification_mode | str | Always "SYMBOLIC" for legal verification |

Features

| Feature | Description |
| --- | --- |
| Business vs Calendar | Automatically detects “business days” vs “days” |
| Holiday Support | 200+ countries via python-holidays |
| Leap Years | Handles Feb 29 correctly |
| Natural Language | Parses “2 weeks”, “3 months”, “1 year” |
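The business-day arithmetic can be sketched with the standard library. This weekend-only version is a simplification: DeadlineGuard additionally excludes jurisdiction-specific holidays (via python-holidays), which pushes the computed deadline further out.

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    # Skip Saturdays and Sundays; holiday calendars are omitted here.
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return d

# Even without holidays, 30 business days from 2026-01-15 lands well past
# the claimed 2026-02-14, which is why the example above fails verification.
assert add_business_days(date(2026, 1, 15), 30) > date(2026, 2, 14)
```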

Calculate business days between dates

guard = DeadlineGuard(country="US")

business_days = guard.calculate_business_days_between(
    start_date="2026-01-15",
    end_date="2026-02-14"
)

print(business_days)  # Number of business days excluding weekends and holidays

2. LiabilityGuard

Status: DETERMINISTIC
Purpose: Verify liability cap and indemnity calculations for supported numeric inputs.

The Problem

LLMs get percentage math wrong:
  • “200% of $5M = $15M” ❌ (should be $10M)
  • Float precision errors on large amounts
  • Tiered liability miscalculations
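The float-precision failure mode is easy to reproduce with Python's own number types; exact decimal arithmetic (the guard's response fields use Decimal) avoids it:

```python
from decimal import Decimal

# Beyond 2**53, float can no longer represent every integer dollar amount:
big = 10_000_000_000_000_000.0
assert big + 1 == big  # the added dollar silently vanishes

# Decimal arithmetic stays exact, including percentage math:
cap = Decimal("5000000") * Decimal("200") / Decimal("100")
assert cap == Decimal("10000000")  # 200% of $5M is $10M, not $15M
```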

Constructor parameters

  • tolerance_percent (float, default: 0.01): Tolerance for floating-point comparison as a percentage. For example, 0.01 means 0.01% tolerance. Adjust for stricter or more lenient verification.

The Solution

from qwed_legal import LiabilityGuard

guard = LiabilityGuard()

result = guard.verify_cap(
    contract_value=5_000_000,
    cap_percentage=200,
    claimed_cap=15_000_000
)

print(result.verified)      # False
print(result.computed_cap)  # 10,000,000
print(result.difference)    # 5,000,000

verify_cap parameters

  • contract_value (float, required): Total value of the contract.
  • cap_percentage (float, required): Liability cap as a percentage (e.g., 200 for 200%).
  • claimed_cap (float, required): The cap amount claimed by the LLM.

Response fields

| Field | Type | Description |
| --- | --- | --- |
| verified | bool | Whether the claimed cap matches the computed cap |
| contract_value | Decimal | The contract value used |
| cap_percentage | Decimal | The percentage used |
| claimed_cap | Decimal | The cap claimed by the LLM |
| computed_cap | Decimal | The correct cap computed by the guard |
| difference | Decimal | Absolute difference between claimed and computed |
| message | str | Human-readable verification message |

Additional methods

# Tiered liability
result = guard.verify_tiered_liability(
    tiers=[
        {"base": 1_000_000, "percentage": 100},
        {"base": 500_000, "percentage": 50},
    ],
    claimed_total=1_250_000  # ✅ Correct: 1M + 250K
)

# Indemnity limit (3x annual fee)
result = guard.verify_indemnity_limit(
    annual_fee=100_000,
    multiplier=3,
    claimed_limit=300_000  # ✅ Correct
)

3. ClauseGuard

Status: PARTIAL / HEURISTIC
Purpose: Detect a limited set of contradictory clauses using text heuristics, with optional Z3-based satisfiability checks. A “consistent” result is not a proof of full contractual consistency.

The problem

LLMs miss logical contradictions:
  • “Seller may terminate with 30 days notice”
  • “Neither party may terminate before 90 days”
These clauses conflict for days 30-90!

The solution

The primary check_consistency() method uses text heuristics to detect conflicts. For formal logic verification, use verify_using_z3().
from qwed_legal import ClauseGuard

guard = ClauseGuard()

result = guard.check_consistency([
    "Seller may terminate with 30 days notice",
    "Neither party may terminate before 90 days",
    "Seller may terminate immediately upon breach"
])

print(result.consistent)  # False
print(result.conflicts)
# [(0, 1, "Termination notice (30 days) conflicts with minimum term (90 days)")]

Detection types

| Conflict Type | Description |
| --- | --- |
| Termination | Notice period vs minimum term |
| Permission/Prohibition | “May” vs “May not” |
| Exclusivity | Multiple exclusive rights |

Z3-based verification

For power users who want to define precise logical constraints:
result = guard.verify_using_z3([
    "constraint_a",
    "constraint_b",
])

print(result.consistent)  # True if constraints are satisfiable
print(result.message)     # "✅ VERIFIED: Constraints are satisfiable."

4. CitationGuard

Status: PARTIAL / HEURISTIC
Purpose: Validate that legal citations match a supported format. CitationGuard does not prove that a cited authority exists or is controlling — it only checks structural shape against supported reporters.

The Problem

The Mata v. Avianca scandal: lawyers filed a brief drafted with ChatGPT that cited six nonexistent court cases. They were fined $5,000 and sanctioned.

The Solution

from qwed_legal import CitationGuard

guard = CitationGuard()

# Valid citation
result = guard.verify("Brown v. Board of Education, 347 U.S. 483 (1954)")
print(result.valid)  # True
print(result.parsed_components)
# {'volume': 347, 'reporter': 'U.S.', 'page': '483'}

# Invalid citation (fake reporter)
result = guard.verify("Smith v. Jones, 999 FAKE 123 (2020)")
print(result.valid)   # False
print(result.issues)  # ["Unknown reporter"]

Supported citation patterns

| Pattern | Format | Example |
| --- | --- | --- |
| US Supreme Court | volume U.S. page | 347 U.S. 483 |
| US Federal | volume F./F.2d/F.3d page | 500 F.3d 120 |
| UK Neutral | [year] court number | [2023] UKSC 10 |
| India AIR | AIR year court page | AIR 2020 SC 100 |
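The structural check can be illustrated with a simplified regex covering only the US reporter shapes above. This is a sketch of the idea, not the library's actual patterns, which may be broader and stricter:

```python
import re

# Simplified: volume, a known reporter abbreviation, then a page number.
US_CITATION = re.compile(r"\b(\d+)\s+(U\.S\.|F\.(?:2d|3d)?)\s+(\d+)\b")

m = US_CITATION.search("Brown v. Board of Education, 347 U.S. 483 (1954)")
assert m and m.groups() == ("347", "U.S.", "483")

# A made-up reporter fails the structural check:
assert US_CITATION.search("Smith v. Jones, 999 FAKE 123 (2020)") is None
```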

Batch Verification

result = guard.verify_batch([
    "Brown v. Board, 347 U.S. 483 (1954)",
    "Fake v. Case, 999 X.Y.Z. 123",
])

print(result.total)    # 2
print(result.valid)    # 1
print(result.invalid)  # 1

Statute Citations

result = guard.check_statute_citation("42 U.S.C. § 1983")
print(result.valid)  # True
print(result.parsed_components)
# {'title': 42, 'code': 'U.S.C.', 'section': '1983'}

5. JurisdictionGuard

Status: PARTIAL / HEURISTIC
Purpose: Apply structured checks around governing law and forum selection clauses for modeled combinations. Results should not be treated as authoritative legal opinions on choice-of-law conflicts.

The Problem

LLMs miss jurisdiction conflicts:
  • Governing law in one country, forum in another
  • Missing CISG applicability warnings
  • Cross-border legal system mismatches

The Solution

from qwed_legal import JurisdictionGuard

guard = JurisdictionGuard()

result = guard.verify_choice_of_law(
    parties_countries=["US", "UK"],
    governing_law="Delaware",
    forum="London"
)

print(result.verified)   # False - mismatch detected
print(result.conflicts)  # ["Governing law 'Delaware' (US state) but forum 'London' is non-US..."]

Parameters

  • parties_countries (list[str], required): List of ISO country codes for contract parties (e.g., ["US", "UK"]).
  • governing_law (str, required): The stated governing law — can be a country code or US state name/abbreviation (e.g., "Delaware", "DE", "UK").
  • forum (str): The stated forum or venue for dispute resolution.
  • jurisdiction_type (JurisdictionType, default: JurisdictionType.EXCLUSIVE): Type of jurisdiction clause. Accepts JurisdictionType.EXCLUSIVE, JurisdictionType.NON_EXCLUSIVE, or JurisdictionType.HYBRID.

Features

| Feature | Description |
| --- | --- |
| Choice of Law | Validates governing law makes sense for parties |
| Forum Selection | Checks forum vs governing law alignment |
| CISG Detection | Warns about international sale of goods conventions |
| Convention Check | Verifies Hague, NY Convention applicability |
| Legal System Mismatch | Detects cross-border Common Law vs Civil Law conflicts |
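The governing-law vs forum check can be sketched as a simple rule. This is an assumption about the kind of heuristic involved, not the guard's actual implementation, and the state list here is deliberately tiny:

```python
US_STATES = {"delaware", "de", "california", "ca", "new york", "ny"}

def forum_conflicts(governing_law: str, forum_country: str) -> list:
    # Illustrative rule: a US-state governing law with a non-US forum is flagged.
    if governing_law.lower() in US_STATES and forum_country != "US":
        return [f"Governing law '{governing_law}' is a US state "
                f"but the forum country is {forum_country}"]
    return []

assert forum_conflicts("Delaware", "UK")        # mismatch flagged
assert forum_conflicts("Delaware", "US") == []  # consistent
```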

Verify forum selection

Use verify_forum_selection to validate a forum independently, with optional contract value threshold checks for US federal court diversity jurisdiction:
result = guard.verify_forum_selection(
    forum="Delaware",
    contract_value=50_000,
    parties_countries=["US", "DE"]
)

print(result.verified)   # True
print(result.warnings)   # ["Contract value $50,000 may not meet diversity jurisdiction threshold..."]

Convention Check

result = guard.check_convention_applicability(
    parties_countries=["US", "DE"],
    convention="CISG"
)
print(result.verified)  # True - both are CISG members

6. StatuteOfLimitationsGuard

Status: PARTIAL / HEURISTIC
Purpose: Compute claim limitation periods for supported jurisdictions and claim types using rule tables. Coverage is limited to the modeled jurisdictions and claim types listed below.

The Problem

LLMs don’t track jurisdiction-specific limitation periods:
  • California breach of contract: 4 years
  • New York breach of contract: 6 years
  • Different periods for negligence, fraud, etc.

The Solution

from qwed_legal import StatuteOfLimitationsGuard

guard = StatuteOfLimitationsGuard()

result = guard.verify(
    claim_type="breach_of_contract",
    jurisdiction="California",
    incident_date="2020-01-15",
    filing_date="2026-06-01"
)

print(result.verified)          # False - 4 year limit exceeded!
print(result.expiration_date)   # 2024-01-15
print(result.days_remaining)    # -867 (negative = expired)

Parameters

  • claim_type (str, required): Type of legal claim (e.g., "breach_of_contract", "negligence", "fraud"). See supported claim types below.
  • jurisdiction (str, required): State or country name (e.g., "California", "New York", "UK").
  • incident_date (str, required): Date the incident occurred (ISO format).
  • filing_date (str, required): Date the claim was or will be filed (ISO format).
  • claimed_within_period (bool, optional): Optional LLM claim to verify. When provided, the guard checks whether the LLM’s assertion (within/outside period) matches the computed result.

Supported jurisdictions

12 jurisdictions are supported with periods for 10 claim types.

| Jurisdiction | Breach of Contract | Negligence | Fraud |
| --- | --- | --- | --- |
| California | 4 years | 2 years | 3 years |
| New York | 6 years | 3 years | 6 years |
| Texas | 4 years | 2 years | 4 years |
| Delaware | 3 years | 2 years | 3 years |
| Florida | 5 years | 4 years | 4 years |
| Illinois | 5 years | 2 years | 5 years |
| UK/England | 6 years | 6 years | 6 years |
| Germany | 3 years | 3 years | 10 years |
| France | 5 years | 5 years | 5 years |
| Australia | 6 years | 6 years | 6 years |
| India | 3 years | 3 years | 3 years |
| Canada | 2 years | 2 years | 6 years |
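The rule-table computation behind verify() can be sketched with the standard library. Periods mirror the table above; real statutes involve discovery rules and tolling that this sketch (and, per the PARTIAL status, the guard's rule tables) does not capture:

```python
from datetime import date

LIMITS = {("breach_of_contract", "California"): 4,
          ("breach_of_contract", "New York"): 6}

def expiration_date(claim_type: str, jurisdiction: str, incident: date) -> date:
    years = LIMITS[(claim_type, jurisdiction)]
    try:
        return incident.replace(year=incident.year + years)
    except ValueError:  # Feb 29 incident, non-leap target year
        return incident.replace(year=incident.year + years, day=28)

exp = expiration_date("breach_of_contract", "California", date(2020, 1, 15))
assert exp == date(2024, 1, 15)  # matches the example above
assert date(2026, 6, 1) > exp    # the filing date is past the limit
```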

Supported claim types

breach_of_contract, breach_of_warranty, negligence, professional_malpractice, fraud, personal_injury, property_damage, employment, product_liability, defamation

Get limitation period

Look up the limitation period for a specific claim type and jurisdiction without performing a full verification:
years = guard.get_limitation_period("fraud", "Germany")
print(years)  # 10.0

Compare jurisdictions

comparison = guard.compare_jurisdictions(
    "breach_of_contract",
    ["California", "New York", "Delaware"]
)
# {'California': 4.0, 'New York': 6.0, 'Delaware': 3.0}

7. IRACGuard

Status: PARTIAL / HEURISTIC
Purpose: Check that legal reasoning follows the IRAC framework (Issue, Rule, Application, Conclusion). IRACGuard verifies structure and surface-level consistency only — it is not a proof of correct legal reasoning.

The Problem

LLMs produce legal advice that lacks structured reasoning:
  • Missing clear identification of the legal issue
  • No citation of applicable rules or statutes
  • Conclusions without proper application of law to facts

The Solution

from qwed_legal import IRACGuard

guard = IRACGuard()

llm_output = """
Issue: Whether the defendant breached the employment contract.
Rule: Under California Labor Code § 2922, employment is presumed at-will.
Application: The defendant terminated employment without the 30-day notice 
required by the contract, which modified the at-will presumption.
Conclusion: The defendant breached the employment contract.
"""

result = guard.verify_structure(llm_output)

print(result["verified"])    # True
print(result["components"])  # {'issue': '...', 'rule': '...', 'application': '...', 'conclusion': '...'}

Detection Types

| Check | Description |
| --- | --- |
| Structure | Verifies all 4 IRAC components are present |
| Logical Disconnect | Detects when Application doesn’t reference the Rule |
| Missing Steps | Identifies which IRAC components are missing |
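A surface-level structure check of this kind can be sketched with a regex over labeled headings. This is illustrative only; the library's detection is presumably more forgiving about formatting:

```python
import re

COMPONENTS = ("issue", "rule", "application", "conclusion")

def missing_components(text: str) -> list:
    # Look for a line starting with each IRAC label, case-insensitively.
    return [c for c in COMPONENTS
            if not re.search(rf"^\s*{c}\s*:", text, re.I | re.M)]

good = "Issue: breach?\nRule: ...\nApplication: ...\nConclusion: breach."
assert missing_components(good) == []
assert missing_components("The defendant should pay damages.") == list(COMPONENTS)
```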

Error Response

result = guard.verify_structure("The defendant should pay damages.")

print(result["verified"])  # False
print(result["error"])     # "Failed Reasoned Elaboration. Missing steps: issue, rule, application, conclusion..."
print(result["missing"])   # ['issue', 'rule', 'application', 'conclusion']

8. FairnessGuard

Status: PARTIAL / HEURISTIC
Purpose: Apply counterfactual consistency checks to detect output that changes when protected attributes are swapped. This is a structural fairness check, not a complete fairness proof. Requires an external LLM client.

The Problem

AI legal systems can exhibit bias based on protected attributes:
  • Different sentencing recommendations based on gender
  • Inconsistent contract assessments based on party names
  • Discriminatory loan approval reasoning

The Solution

from qwed_legal import FairnessGuard

# Requires an LLM client for counterfactual generation
guard = FairnessGuard(llm_client=my_llm)

result = guard.verify_decision_fairness(
    original_prompt="Should John Smith receive parole given his rehabilitation record?",
    original_decision="Parole recommended based on positive rehabilitation.",
    protected_attribute_swap={"John": "Jane", "his": "her"}
)

print(result["verified"])  # True if decision is consistent
print(result["status"])    # "FAIRNESS_VERIFIED" or "JUDICIAL_BIAS_DETECTED"

How It Works

  1. Early exit - If protected_attribute_swap is empty ({}), returns immediately with NO_SWAP_REQUIRED without calling the LLM
  2. Counterfactual Generation - Swaps protected attributes (names, pronouns) while preserving case
  3. Re-evaluation - Runs the modified prompt through the LLM
  4. Deterministic Comparison - Checks if outcomes match exactly
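Step 2, the case-preserving swap, can be sketched as follows. This is an illustrative implementation, not the library's: swap keys are lowercased here, and only initial capitals are preserved.

```python
import re

def swap_attributes(text: str, swaps: dict) -> str:
    # Replace whole words case-insensitively, preserving an initial capital.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, swaps)) + r")\b", re.I)

    def repl(m):
        replacement = swaps[m.group(0).lower()]
        return replacement.capitalize() if m.group(0)[0].isupper() else replacement

    return pattern.sub(repl, text)

out = swap_attributes("Should John Smith receive parole given his record?",
                      {"john": "jane", "his": "her"})
assert out == "Should Jane Smith receive parole given her record?"
```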

Response fields

| Field | Type | Description |
| --- | --- | --- |
| verified | bool | Whether the decision is fair |
| status | str | FAIRNESS_VERIFIED or NO_SWAP_REQUIRED (on success) |
| risk | str | JUDICIAL_BIAS_DETECTED or LLM_GENERATION_FAILED (on failure) |
| message | str | Explanation of the result |
| variance | dict | Present when bias detected — contains original and counterfactual decisions |

Detection types

| Status / Risk | Description |
| --- | --- |
| FAIRNESS_VERIFIED | Decision unchanged after attribute swap |
| JUDICIAL_BIAS_DETECTED | Decision changed based on protected attributes |
| NO_SWAP_REQUIRED | No protected attributes to swap (empty dict passed) |
| LLM_GENERATION_FAILED | The LLM client returned None for the counterfactual prompt |

FairnessGuard requires an LLM client at initialization. Without it, verify_decision_fairness() will raise a ValueError.

9. ContradictionGuard

Status: PARTIAL / HEURISTIC
Purpose: Detect logical contradictions between modeled clauses using a constraint solver. Coverage is limited to the supported clause categories below — a “consistent” result is not a proof of full contract consistency.

The Problem

Contracts can contain mathematically impossible combinations:
  • “Liability capped at $10,000” + “Minimum penalty of $50,000”
  • “Term is exactly 12 months” + “Minimum duration of 24 months”
Text-based heuristics (ClauseGuard) miss these formal logic conflicts.

The Solution

from qwed_legal import ContradictionGuard, Clause

guard = ContradictionGuard()

clauses = [
    Clause(id="1", text="Liability capped at 10000", category="LIABILITY", value=10000),
    Clause(id="2", text="Penalty shall be 50000", category="LIABILITY", value=50000),
]

result = guard.verify_consistency(clauses)

print(result["verified"])  # False
print(result["message"])   # "❌ LOGIC CONTRADICTION: Clauses are mutually exclusive..."

Clause Structure

The Clause dataclass requires:

| Field | Type | Description |
| --- | --- | --- |
| id | str | Unique clause identifier |
| text | str | Human-readable clause text |
| category | str | DURATION, LIABILITY, or TERMINATION |
| value | int | Normalized numeric value (days, dollars, etc.) |

Supported Categories

| Category | Detects |
| --- | --- |
| DURATION | Conflicting term lengths (exact vs min/max) |
| LIABILITY | Cap vs penalty contradictions |
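For a single pair of LIABILITY clauses, the satisfiability question handed to Z3 reduces to an interval check. The pure-Python sketch below shows the logic being decided, not the solver call itself:

```python
def liability_consistent(cap: int, minimum_penalty: int) -> bool:
    # Z3 asks: does any amount x satisfy x <= cap AND x >= minimum_penalty?
    # That conjunction is satisfiable exactly when minimum_penalty <= cap.
    return minimum_penalty <= cap

assert liability_consistent(cap=100_000, minimum_penalty=50_000)
assert not liability_consistent(cap=10_000, minimum_penalty=50_000)  # unsat
```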

Z3 vs ClauseGuard

| Feature | ClauseGuard | ContradictionGuard |
| --- | --- | --- |
| Input | Raw text strings | Structured Clause objects |
| Method | Text heuristics | Z3 SMT Solver |
| Detects | Permission conflicts | Mathematical impossibilities |
| Use Case | Quick checks | Formal verification |

10. ProvenanceGuard

Status: DETERMINISTIC
Purpose: Verify AI-generated content carries proper provenance metadata and disclosure markers. All checks are deterministic (SHA-256 hashing, regex pattern matching, datetime validation).

The problem

AI transparency regulations (California CAITA 2026, EU AI Act Article 50) require AI-generated legal content to carry proper attribution. Without verification:
  • Content may lack required AI-generation disclosures
  • Provenance metadata can be incomplete or tampered with
  • Unauthorized models may generate legal documents without audit trails

The solution

from qwed_legal import ProvenanceGuard

guard = ProvenanceGuard(
    require_disclosure=True,
    require_human_review=False,
    allowed_models=["gpt-4", "claude-3-opus"]
)

content = "This AI-generated document reviews the contract terms..."
provenance = {
    "content_hash": "a1b2c3...",  # SHA-256 of content
    "model_id": "gpt-4",
    "generation_timestamp": "2026-03-24T12:00:00+00:00",
}

result = guard.verify_provenance(content, provenance)

print(result["verified"])        # True or False
print(result["checks_passed"])   # ["metadata_completeness", "hash_integrity", ...]
print(result["checks_failed"])   # []
print(result["risk"])            # "" if verified, e.g. "CONTENT_TAMPERED" if not

Verification checks

ProvenanceGuard runs up to six checks. The first three always run; the last three are configurable.

| Check | Description | Always runs |
| --- | --- | --- |
| Metadata completeness | content_hash, model_id, and generation_timestamp are present and non-empty | Yes |
| Hash integrity | SHA-256 of the content matches content_hash in provenance | Yes |
| Timestamp validity | ISO-8601 format, not in the future | Yes |
| Disclosure compliance | Content includes an AI-generation disclosure statement | If require_disclosure=True |
| Model allowlist | model_id is in the approved list | If allowed_models is set |
| Human review | human_reviewed is True in provenance | If require_human_review=True |
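The hash-integrity check is plain SHA-256 over the content. The sketch below assumes UTF-8 encoding, which is the usual choice but is not stated in the docs:

```python
import hashlib

def content_hash(content: str) -> str:
    # Deterministic: same content always yields the same hex digest.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

content = "This AI-generated document reviews the contract terms..."
provenance = {"content_hash": content_hash(content)}

assert content_hash(content) == provenance["content_hash"]        # intact
assert content_hash(content + "!") != provenance["content_hash"]  # tampered
```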

Constructor parameters

  • require_disclosure (bool, default: True): Require AI disclosure text in the content (e.g., “AI-generated”, “produced by AI”).
  • require_human_review (bool, default: False): Require human_reviewed=True in provenance metadata.
  • allowed_models (list[str] | None, default: None): Allowlist of model IDs. None allows all models; an empty list denies all.

Generating provenance records

You can also use ProvenanceGuard to generate provenance metadata:
from qwed_legal import ProvenanceGuard

guard = ProvenanceGuard()

record = guard.generate_provenance(
    content="This AI-generated contract summary...",
    model_id="gpt-4",
    disclosure_text="This document was generated by AI.",
    human_reviewed=True,
    reviewer_id="lawyer-42"
)

print(record.content_hash)           # SHA-256 hash
print(record.generation_timestamp)   # ISO-8601 UTC timestamp
print(record.human_reviewed)         # True

ProvenanceRecord fields

| Field | Type | Description |
| --- | --- | --- |
| content_hash | str | SHA-256 hash of the AI-generated content |
| model_id | str | Identifier of the model that generated the content |
| generation_timestamp | str | ISO-8601 timestamp of generation |
| disclosure_text | str | Human-readable AI disclosure statement |
| human_reviewed | bool | Whether a human has reviewed the content |
| reviewer_id | str \| None | Identifier of the human reviewer |

Risk classifications

When verification fails, the risk field indicates the type of failure:

| Risk | Trigger |
| --- | --- |
| CONTENT_TAMPERED | Hash mismatch between content and content_hash |
| INCOMPLETE_PROVENANCE | Required metadata fields missing or empty |
| MISSING_DISCLOSURE | No AI-generation disclosure found in content |
| UNAUTHORIZED_MODEL | model_id not in the allowed models list |
| UNREVIEWED_CONTENT | human_reviewed is not True |
| INVALID_TIMESTAMP | Timestamp is malformed or in the future |

ProvenanceGuard is fully deterministic — no LLM calls required. All checks use SHA-256 hashing, regex pattern matching, and datetime validation.

SACProcessor (RAG Helper) 📄

Purpose: Prevent Document-Level Retrieval Mismatch (DRM) in legal RAG systems.

The Problem

Standard RAG chunking causes >95% retrieval mismatch in legal databases because:
  • Legal documents share nearly identical boilerplate
  • Chunk-level embeddings lose document context
  • NDAs, contracts, and agreements look alike at the chunk level

The Solution

from qwed_legal import SACProcessor

sac = SACProcessor(llm_client=my_llm)

# Your existing chunks
chunks = naive_split(contract_text)

# Augment with document fingerprint
augmented = sac.generate_sac_chunks(
    document_text=contract_text,
    chunks=chunks,
    document_id="NDA-2026-001"
)

# Each chunk now includes global context
print(augmented[0])
# DOCUMENT CONTEXT [NDA-2026-001]: NDA between Acme Corp and Beta Inc...
# CHUNK CONTENT [1/10]: Original chunk text here...

Configuration

| Parameter | Default | Description |
| --- | --- | --- |
| target_summary_length | 150 | Character limit for document fingerprint |
| preview_chars | 5000 | Max chars sent to LLM for summarization |
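The augmentation itself is string concatenation around the LLM-generated fingerprint. The exact layout below is inferred from the example output above and is an assumption, not the library's guaranteed format:

```python
def augment_chunk(document_id: str, fingerprint: str,
                  chunk: str, index: int, total: int) -> str:
    # Prepend the document-level fingerprint so chunk embeddings retain
    # global context (the core idea behind SAC).
    return (f"DOCUMENT CONTEXT [{document_id}]: {fingerprint}\n"
            f"CHUNK CONTENT [{index}/{total}]: {chunk}")

out = augment_chunk("NDA-2026-001", "NDA between Acme Corp and Beta Inc...",
                    "Original chunk text here...", 1, 10)
assert out.startswith("DOCUMENT CONTEXT [NDA-2026-001]:")
assert "CHUNK CONTENT [1/10]:" in out
```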

Methods

| Method | Description |
| --- | --- |
| generate_sac_chunks() | Augment all chunks with document fingerprint |
| generate_fingerprint_only() | Get just the fingerprint for caching |

SACProcessor requires an LLM client. Generic (automated) summaries outperform expert-guided ones for retrieval.

All-in-One: LegalGuard

For convenience, use the unified LegalGuard class:
from qwed_legal import LegalGuard

# Optional: provide llm_client for FairnessGuard
guard = LegalGuard(
    llm_client=my_llm,
    provenance_config={
        "require_disclosure": True,
        "require_human_review": False,
        "allowed_models": ["gpt-4", "claude-3-opus"],
    }
)

# All 10 guards available
guard.verify_deadline(...)
guard.verify_liability_cap(...)
guard.check_clause_consistency(...)          # ClauseGuard (text heuristics)
guard.verify_citation(...)
guard.verify_jurisdiction(...)
guard.verify_statute_of_limitations(...)
guard.verify_irac_structure(...)             # v0.3.0
guard.verify_fairness(...)                   # v0.3.0 (requires llm_client)
guard.verify_contradiction(...)              # v0.3.0 (Z3 SMT Solver)
guard.verify_provenance(content, provenance) # NEW in v0.4.0
LegalGuard is a convenience wrapper. It does not change the verification boundaries of the underlying guards. DeadlineGuard, LiabilityGuard, and ProvenanceGuard are deterministic for supported inputs; the remaining guards are partial or heuristic. Only verify_fairness() requires an LLM client.

Next Steps