
Frequently asked questions

Everything you need to know about Arcus, our products, and how we help regulated industries govern AI reasoning.

What is Arcus?

Arcus is governance infrastructure for AI that needs to explain itself. Rather than monitoring models after the fact, Arcus structures every decision into traceable, auditable reasoning steps, so you can see exactly how an AI system arrived at its output.

Who is Arcus for?

Arcus is built for regulated industries such as financial services, insurance, healthcare, and legal: anywhere AI decisions carry regulatory or fiduciary weight and need to be explained, audited, or defended.

How is Arcus different from other AI governance platforms?

Most governance platforms log which model was called and when. Arcus traces how the system arrived at its decision: the reasoning structure, not just the call log. You get auditable proof of the decision process itself, not metadata about it.

What are Iterative Reasoning Graphs?

Iterative Reasoning Graphs (IRG) break AI decisions into typed, traceable steps: generation, retrieval, verification, evaluation, and synthesis. The system iterates until a defined standard is met, producing a complete, auditable reasoning trace for every decision.
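As a rough illustration of the idea (not Arcus's actual API — the class names, fields, and threshold semantics below are assumptions), a trace of typed steps that iterates until every step clears a defined standard might be sketched like this:

```python
from dataclasses import dataclass, field
from enum import Enum

class StepType(Enum):
    # The five typed steps named in the description above.
    GENERATION = "generation"
    RETRIEVAL = "retrieval"
    VERIFICATION = "verification"
    EVALUATION = "evaluation"
    SYNTHESIS = "synthesis"

@dataclass
class ReasoningStep:
    step_type: StepType   # which kind of reasoning step this is
    summary: str          # human-readable record of what the step did
    score: float          # per-step integrity score in [0.0, 1.0] (assumed scale)

@dataclass
class ReasoningTrace:
    steps: list[ReasoningStep] = field(default_factory=list)

    def record(self, step: ReasoningStep) -> None:
        self.steps.append(step)

    def meets_standard(self, threshold: float) -> bool:
        # "Iterates until a defined standard is met": here, modeled as
        # every recorded step scoring at or above the threshold.
        return bool(self.steps) and all(s.score >= threshold for s in self.steps)

trace = ReasoningTrace()
trace.record(ReasoningStep(StepType.RETRIEVAL, "fetched policy documents", 0.92))
trace.record(ReasoningStep(StepType.VERIFICATION, "checked claims against sources", 0.88))
```

In this sketch the trace doubles as the audit artifact: each entry records what happened and how well it scored, so the full decision path can be replayed step by step.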

What is the Epistemic Integrity Engine?

The Epistemic Integrity Engine scores every reasoning output across four dimensions: factual accuracy, logical coherence, source fidelity, and claim support. It provides a quantitative integrity measure for each step in the reasoning graph.
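A minimal sketch of four-dimension scoring, assuming each dimension is rated on a 0.0–1.0 scale and combined with an unweighted mean (the actual aggregation used by the engine is not described here):

```python
# The four dimensions named above.
DIMENSIONS = ("factual_accuracy", "logical_coherence", "source_fidelity", "claim_support")

def integrity_score(scores: dict[str, float]) -> float:
    """Aggregate per-dimension scores (each assumed 0.0-1.0) into one measure."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        # Every step must be scored on all four dimensions.
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    # Unweighted mean is an illustrative assumption, not the documented formula.
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
```

Scoring each step this way yields the per-step quantitative measure the paragraph describes, which a reasoning trace can then compare against a governance threshold.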

What is the Graph Definition Language?

The Graph Definition Language gives domain experts a way to define how AI thinks using ten composable primitives, with no engineering or graph theory knowledge required. It bridges the gap between domain expertise and AI system design.

Does IRG work with my existing model provider?

Yes. IRG works with any underlying model provider. The governance layer sits above the model layer, which means you can switch or combine providers without rebuilding your compliance infrastructure.

Which regulations does Arcus address?

Arcus addresses the EU AI Act (Articles 9–15), the Federal Reserve's SR 11-7 guidance on model risk management, the Colorado AI Act, and cross-jurisdictional frameworks. One protocol covers all compliance surfaces.

How does IRG support EU AI Act compliance?

IRG produces the technical documentation, event recording, risk management artifacts, and human oversight trails required by Articles 9–15 for high-risk AI systems. The reasoning trace itself becomes your compliance documentation.

Does IRG integrate with existing GRC platforms?

Yes. IRG traces feed directly into platforms like OneTrust, ServiceNow, and IBM OpenPages, turning compliance documentation into something auditors can actually use.

How do I get started?

Schedule a 30-minute introductory conversation. We'll discuss your governance challenges and walk through a live reasoning trace so you can see exactly how IRG works with your use case.

How long does implementation take?

Implementation timelines vary by use case and integration complexity. Schedule a conversation to discuss your specific requirements, and we'll provide a tailored estimate.