The challenge is structural: policy is still written for people, not machines. Financial institutions interpret thousands of pages of guidance in prose—UCP 600 clauses, AML frameworks, ESG principles—then translate them by hand into controls and checklists. The result is slow, inconsistent, and opaque. The next step is inevitable: policy becomes code.
From Interpretation to Execution
The core question facing regulators and banks is simple: how do we prove an AI system actually follows the rules? The answer begins with policy-aware infrastructure—systems that express obligations and thresholds as executable, versioned logic with citations.
In this model, a rule isn’t a PDF line item; it’s a testable object with provenance (e.g., “EU AI Act, Article 13”), lifecycle state, and a clear evaluation function. When guidance changes, the logic updates; decisions remain traceable to their source.
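A minimal sketch of what such a rule object might look like (the names and schema here are illustrative, not TradeQu's actual implementation):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    """A policy obligation as a versioned, citable, testable object."""
    rule_id: str    # stable identifier, e.g. "eu-ai-act-13"
    citation: str   # provenance back to the source text
    version: str    # bumped whenever the guidance changes
    status: str     # lifecycle state: "active", "deprecated", ...
    evaluate: Callable[[dict], bool]  # deterministic evaluation function

# Hypothetical transparency obligation, cited to its source clause
transparency_rule = Rule(
    rule_id="eu-ai-act-13",
    citation="EU AI Act, Article 13",
    version="2024.1",
    status="active",
    evaluate=lambda ctx: ctx.get("instructions_for_use") is not None,
)
```

Because the evaluation function is plain code, the rule can be unit-tested, diffed between versions, and cited in every decision it produces.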
This transition is underway but uneven. Securities and market-conduct regulators have advanced machine-readable rulebooks (such as FINRA’s taxonomy-based rulebook initiative). Trade finance, by contrast, is still focused on data harmonisation—making documents digital, but not yet making rules executable.
The Case for Policy-Aware AI
Traditional automation stops at implementation; policy-aware AI starts at interpretation.
Consistency — a sanctions screening rule evaluates the same way across all transactions, not differently at each branch.
Explainability — every outcome can cite the clause, threshold, and data used.
Auditability — rule versions and execution traces are logged for review.
Adaptability — when a sanctions list updates, the policy propagates instantly across all active checks.
The result isn’t faster paperwork—it’s computational governance.
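The adaptability property is the least intuitive, so here is a toy sketch of how it can fall out of the architecture: if active checks consult a shared list by reference, one update propagates to every check, and each outcome still cites the rule and the list version it used (all names are hypothetical):

```python
# Hypothetical sanctions list shared by reference: one update is seen by
# every check that consults it, and every outcome records its provenance.
class SanctionsList:
    def __init__(self, entries):
        self.entries = set(entries)
        self.version = 1

    def update(self, entries):
        self.entries = set(entries)
        self.version += 1

def screen(party: str, sl: SanctionsList) -> dict:
    """Outcome cites the rule, the list version, and the data used."""
    return {"party": party, "hit": party in sl.entries,
            "rule": "sanctions-screen-v1", "list_version": sl.version}

sl = SanctionsList({"ACME TRADING LLC"})
r1 = screen("ACME TRADING LLC", sl)   # evaluated against list v1
sl.update({"OTHERCO GMBH"})           # the update propagates instantly
r2 = screen("ACME TRADING LLC", sl)   # same check, now against list v2
```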
Trade Finance: A High-Value Testbed
A single trade transaction can invoke dozens of overlapping frameworks: UCP 600 for documentary credit, sanctions and AML regimes across jurisdictions, plus emerging ESG criteria. Digitisation made documents searchable; it didn’t make them understandable. This gap shows up in the data: first-presentation discrepancy rates for letters of credit still routinely fall in the ~60–75% range, driving delays and costs.[2]
A policy-aware reasoning system changes that dynamic:
A sanctions rule becomes code with explicit jurisdiction tags and evidence paths.
Parts of UCP 600 (e.g., timelines and determinable conditions) are represented as machine-checkable constraints, with ambiguity preserved where the rule requires judgement.
ESG eligibility can be structured as parameterised checks that reference recognised labels or frameworks (e.g., energy-efficiency classes, SDG alignments)—noting that truly standardised, executable ESG thresholds for trade are still evolving.
Each rule is versioned, cited, and linked to the specific document or entity it governs. The system knows what logic applied, why, and when.
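To make the UCP 600 point concrete: Article 14(c)'s requirement that documents be presented no later than 21 calendar days after the date of shipment (and within the credit's expiry) is the kind of determinable condition that translates directly into code, while judgement-based articles would instead flag for review. A sketch, with hypothetical function and field names:

```python
from datetime import date, timedelta

# UCP 600 Art. 14(c): presentation no later than 21 calendar days after
# the date of shipment, and in any event within the credit's expiry.
def check_presentation_window(shipment: date, presentation: date,
                              expiry: date) -> dict:
    deadline = shipment + timedelta(days=21)
    compliant = presentation <= min(deadline, expiry)
    return {"rule": "UCP600-14c", "deadline": deadline,
            "compliant": compliant}

res = check_presentation_window(shipment=date(2025, 3, 1),
                                presentation=date(2025, 3, 20),
                                expiry=date(2025, 4, 30))
```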
The Policy Store (and How It Works with the Trade Graph)
At TradeQu, policy logic lives inside a Policy Store that sits alongside the Trade Graph—our knowledge representation of entities (companies, instruments, jurisdictions), documents (LCs, invoices, guarantees), and relationships (obligations, exposures, timelines).
Versioned — each rule carries an identifier, creation date, lineage, and deprecation status.
Contextual — rules reference the entities, documents, and jurisdictions they govern.
Executable — the store exposes an API for deterministic evaluation.
Auditable — every decision can return the rule reference, logic path, and timestamp.
When the reasoning layer processes a trade event, it queries the Trade Graph for context and the Policy Store for applicable constraints. Today, that yields deterministic checks where rules are crisp, and flagged human-in-the-loop reviews where rules require judgement. As standards mature, more of the policy surface area becomes executable.
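The flow above can be sketched in a few lines; the stub classes stand in for the Trade Graph and Policy Store APIs (which are not public, so every name here is an assumption):

```python
from types import SimpleNamespace

# Illustrative stubs: the Trade Graph supplies context for an event,
# the Policy Store supplies the versioned rules that govern it.
class TradeGraph:
    def context_for(self, event):
        return {"amount": event["amount"]}

class PolicyStore:
    def __init__(self, rules):
        self.rules = rules
    def applicable(self, event, context):
        return self.rules

def process_event(event, graph, store):
    """Run crisp rules deterministically; escalate judgement calls."""
    context = graph.context_for(event)
    decisions, escalations = [], []
    for rule in store.applicable(event, context):
        if rule.deterministic:
            decisions.append({"rule": rule.rule_id, "version": rule.version,
                              "result": rule.evaluate(context)})
        else:
            escalations.append(rule.rule_id)  # human-in-the-loop review
    return decisions, escalations

crisp = SimpleNamespace(rule_id="aml-threshold", version="3",
                        deterministic=True,
                        evaluate=lambda ctx: ctx["amount"] < 10_000)
fuzzy = SimpleNamespace(rule_id="isbp-judgement", version="1",
                        deterministic=False, evaluate=None)

decisions, escalations = process_event({"amount": 5_000}, TradeGraph(),
                                       PolicyStore([crisp, fuzzy]))
```

Note that every decision record carries the rule identifier and version, which is what makes the audit trail reconstructable after the fact.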
“Regulation stops being a document. It becomes an interface.”
Intelligence Without Centralisation
Here’s where TradeQu differs from traditional platforms: reasoning doesn’t require a shared database. Our zero-copy architecture keeps data where it resides—bank systems, cloud vaults, or on-prem—and reasons through metadata and proofs, not file transfer.
No data migration required
Full sovereignty retained
Shared intelligence without shared data
Compliance by architecture, not policy
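A minimal illustration of the metadata-and-proofs idea, assuming a simple hash commitment scheme (TradeQu's actual proof mechanism may differ): the document never leaves the bank's perimeter; only a digest does, and later claims are verified against it.

```python
import hashlib

# Zero-copy sketch: the bank computes a digest locally and shares only
# that digest; verification happens without transferring the file.
def local_digest(document_bytes: bytes) -> str:
    return hashlib.sha256(document_bytes).hexdigest()

def verify_claim(claimed_digest: str, recomputed_digest: str) -> bool:
    return claimed_digest == recomputed_digest

doc = b"LC-4711: irrevocable documentary credit ..."
shared = local_digest(doc)  # the only artefact that leaves the bank
```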
“Shared intelligence doesn’t require shared data.”
Built for AI’s Future
The Trade Graph and Policy Store are model-agnostic by design. As language and logic models improve, they plug into a stable representation (entities, relationships, policies). That lets institutions adopt new models safely—without vendor lock-in—while preserving explainability and audit trails.
Current Limits (And Why That’s OK)
Three realities to acknowledge:
UCP 600 codification is selective. Some timelines and determinable checks can be encoded; other areas intentionally rely on “international standard banking practice” and require judgement. That’s why the system mixes deterministic checks with escalations and explanations, not blanket automation.
ESG criteria aren’t yet a single executable standard. We model ESG as parameterised rules referencing recognised frameworks and labels; industry-wide computational standards for trade are in development, not universally adopted.
General-purpose language models can hallucinate. Deterministic policy checks run first; LLMs provide contextual interpretation only where rules require judgement—and high-stakes outcomes always route through human review.
This is precisely why policy-aware infrastructure matters: it provides a safe bridge from prose to code, with explicit boundaries and auditability.
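The ordering described above can be made explicit in code. In this sketch, `interpret_fn` is a hypothetical hook standing in for an LLM call; the point is the control flow, not the model: deterministic checks run first, the model only sees residual judgement calls, and high-stakes outcomes always terminate in human review.

```python
# Deterministic checks first; LLM interpretation (stubbed here) only for
# judgement calls; high-stakes outcomes always route to a human.
def decide(context, rules, interpret_fn, high_stakes):
    for rule in rules:
        if rule["deterministic"] and not rule["check"](context):
            return {"outcome": "fail", "by": rule["id"]}
    judgement = [r["id"] for r in rules if not r["deterministic"]]
    notes = interpret_fn(context, judgement) if judgement else None
    if high_stakes:
        return {"outcome": "human_review", "notes": notes}
    return {"outcome": "pass", "notes": notes}

rules = [
    {"id": "sanctions", "deterministic": True,
     "check": lambda ctx: ctx["party"] not in {"BLOCKED CO"}},
    {"id": "isbp-wording", "deterministic": False},
]
stub_llm = lambda ctx, ids: "reviewed: " + ", ".join(ids)

low = decide({"party": "ACME"}, rules, stub_llm, high_stakes=False)
high = decide({"party": "ACME"}, rules, stub_llm, high_stakes=True)
```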
The Path Ahead (Realistic Roadmap)
2025 — Applied internal research.
We’re formalising rule objects (with citations, versions, and tests), evaluating multi-jurisdictional reasoning (EU AI Act + UCP 600 + AML), and measuring deterministic coverage and escalation rates on real-world document sets under confidentiality agreements.
2026 — Controlled pilots (non-production).
Partner institutions will trial the Policy Store + Trade Graph on constrained workflows (e.g., LC discrepancy triage, sanctions policy checks), with human-in-the-loop reviews and full audit capture.
Beyond 2026 — broader policy surfaces (documentary collections, guarantees), richer standards integration, and progressive automation where rules allow it. Vision: a global trade network where compliance is computational, trust is verifiable, and intelligence compounds across every transaction.
Conclusion
Policy-aware AI turns governance from documentation into computation. In regulated finance, that’s the difference between experimental AI and trusted systems. When regulations can be executed, traced, and verified, compliance ceases to be a cost—it becomes infrastructure.
TradeQu Labs is building that layer—where policy, reasoning, and trust converge.
“When regulation becomes executable, compliance stops being a cost — it becomes infrastructure.”
Further Reading
EU AI Act — transparency, traceability, phased application 2025–2027
BIS FSI Insights No. 63 (Dec 2024) — AI governance expectations for the financial sector
Machine-readable rulebooks — taxonomy-based initiatives enabling automated compliance
ICC DSI KTDDE — data harmonisation for key trade documents; current focus vs. executable legal rules
UCP 600 discrepancy studies — first-presentation refusal rates and document examination practice
Footnotes
1. EU AI Act (2024) and UK BoE/FCA surveys identify explainability and regulatory clarity as primary barriers for high-risk AI in finance.
2. Multiple studies report LC first-presentation discrepancy rates typically ~60–75%, with regional variance; harmonisation work to date targets data formats, not fully executable legal rules.
Authorship Declaration
Written by Sam Carter — TradeQu Labs.
Research and drafting assisted by ChatGPT (GPT-5), Perplexity Research, and Claude 3 Opus. All sources verified through human review. This article adheres to TradeQu’s principle of transparent AI-assisted research and publication.
We’re always looking for collaborators exploring how intelligence can become verifiable.
Let’s build the future of compliant AI together.
If your institution is exploring AI governance, policy-as-code, or explainable infrastructure, we’d like to collaborate.