1. Introduction
At TradeQu, our mission is to build AI-native infrastructure that is safe, explainable, and compliant by design.
This AI Policy outlines how we develop, deploy, and evaluate artificial intelligence systems within our products, research, and platform experiments.
Our principles are guided by the EU AI Act, ISO 42001, and the NIST AI Risk Management Framework.
2. Our Core Principles
We design every AI-enabled system to meet five non-negotiable principles:
Transparency — All AI outputs must be traceable to their data sources, rules, and reasoning process.
Accountability — Human oversight is mandatory for every decision with regulatory or financial impact.
Fairness & Non-Discrimination — We actively monitor for and mitigate bias in our models, data, and downstream effects.
Data Integrity — Only verified, legally obtained data is used in our systems.
Explainability by Design — Every AI action must be explainable in human-readable form and verifiable through audit logs.
3. Development Standards
TradeQu’s AI systems are built following strict internal guidelines:
Policy-as-Code Integration: All decision logic is mapped to codified policies, ensuring regulatory traceability.
Provenance-First Logging: Every AI decision emits structured metadata (policy version, model ID, data lineage, human approval).
Testing & Evaluation: Each release undergoes bias, performance, and reliability testing before production.
Model Monitoring: Continuous monitoring detects drift, anomalies, and compliance deviations.
Human-in-the-Loop: Final approval for all financial or compliance decisions remains with authorised personnel.
4. Data Governance
We apply the same controls to training, inference, and user data:
Zero-Copy Posture: Data used in AI processing never leaves our secure environment.
Tenant Isolation: Each client’s data is logically and cryptographically isolated.
Minimal Data Retention: Data is kept only as long as necessary for its stated purpose.
PII Protection: Personally identifiable information is automatically redacted or pseudonymised before model ingestion.
Third-Party Models: When using external APIs or models, TradeQu applies strict evaluation and redaction before transmission.
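As a minimal sketch of the pseudonymisation step described above: values matching PII patterns are replaced with stable keyed tokens before any text reaches a model. The regex patterns and the HMAC-based token scheme here are illustrative assumptions, not our production pipeline, which uses vetted detectors.

```python
import hashlib
import hmac
import re

# Illustrative patterns only; a production system would use a vetted PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}


def pseudonymise(text: str, secret_key: bytes) -> str:
    """Replace detected PII with stable keyed tokens before model ingestion.

    The same value always maps to the same token (so records can still be
    joined), but the original cannot be recovered without the key.
    """
    def token(kind: str, value: str) -> str:
        digest = hmac.new(secret_key, value.encode(), hashlib.sha256).hexdigest()
        return f"<{kind}:{digest[:12]}>"

    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(lambda m, k=kind: token(k, m.group()), text)
    return text


redacted = pseudonymise(
    "Contact jane@example.com re: DE44500105175407324931",
    secret_key=b"demo-key",
)
print(redacted)
```

Keyed (HMAC) tokens rather than plain hashes mean an attacker who sees the redacted text cannot confirm a guessed email or IBAN without also holding the key.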
5. Responsible Use of AI Tools
TradeQu occasionally uses generative AI tools (e.g. ChatGPT, Claude, Perplexity) for research and drafting.
All content published under TradeQu Labs undergoes human verification and fact-checking prior to release.
We disclose AI assistance in authorship statements where relevant.
We use synthetic data for testing and research where appropriate, ensuring that no real customer or transaction data is exposed. However, we never use AI-generated documents, simulated counterparties, or synthetic data as substitutes for real regulatory due diligence or production workflows.
6. Compliance Alignment
TradeQu aligns with global standards for trustworthy AI, including:
EU AI Act (2024) — Risk classification, transparency, and traceability
ISO/IEC 42001 (2023) — AI Management Systems
NIST AI RMF (2023) — Governance, accountability, and risk mitigation
BIS FSI Guidance (2024) — Use of AI in financial supervision
Our governance framework and Policy Store are designed so that alignment with these standards can be demonstrated from our codified controls and audit records, rather than asserted after the fact.
7. Continuous Improvement
AI safety and compliance are evolving fields.
We review this policy quarterly and update it to reflect new regulations, research findings, and feedback from our partners and community.
TradeQu encourages responsible innovation, open dialogue, and external audit participation.
8. Contact
Questions or concerns about this policy can be directed to:
📧 compliance@tradequ.ai
📍 TradeQu Ltd
Let’s build the future of compliant AI together.
If your institution is exploring AI governance, policy-as-code, or explainable infrastructure, we’d like to collaborate.