Governed Reasoning Infrastructure

AI systems make decisions.
We make those decisions auditable.

Arcus builds protocol-level infrastructure for structured AI reasoning. Our protocol decomposes AI decisions into traceable graphs — every step named, typed, verified, and scored. For regulated industries where “the model said so” is not an acceptable answer.

IRG · Iterative Reasoning Graphs
EIE · Epistemic Integrity Engine
Reason · Graph Definition Language

The difference between logging and governance

Current AI governance platforms track which model was called, when, by whom. They govern AI as an asset in an inventory. But regulators don’t ask “which model did you use?” They ask “how did the system arrive at this decision?”

That question requires a different kind of trace — not a call log, but a reasoning structure.

What platforms log today

14:32:07  model: mistral-large
14:32:07  tokens_in: 2,847
14:32:11  tokens_out: 1,203
14:32:11  latency: 3.8s
14:32:11  guardrails: passed
14:32:11  status: complete
What happened inside? ▒▒▒▒▒▒

What IRG traces

Clarify → resolved income ambiguity
Strategy → dual-factor risk assessment
Draft → preliminary: approve, moderate risk
FactCheck → ✗ employment claim unsupported
Evaluate → factual gap, revise strategy
Strategy₂ → request employment verification
Draft₂ → conditional: pending verification
FactCheck₂ → ✓ all claims supported
Converge → EIE: 0.87 | iterations: 2

Three layers of reasoning infrastructure

IRG is not a product bolted onto existing AI. It’s a protocol — a way of structuring how AI systems think, with auditability as a first-class property rather than an afterthought.

IRG

Iterative Reasoning Graphs

Structures AI reasoning into directed acyclic graphs with typed nodes: generation, retrieval, verification, evaluation, synthesis. Each decision decomposes into a traceable topology. The graph iterates until its convergence criteria are met, discarding intermediate drafts while carrying what each pass learned into the next.
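A minimal sketch in Python may help fix the idea. The class names, fields, and the run loop below are our own illustrative assumptions, not the IRG protocol's actual types or API.

# Illustrative sketch only: names, fields, and the run loop are ours, not the IRG spec.
from dataclasses import dataclass, field

# The typed node kinds named above.
NODE_KINDS = {"generation", "retrieval", "verification", "evaluation", "synthesis"}

@dataclass
class Node:
    name: str                                         # every step is named
    kind: str                                         # ...and typed
    inputs: list[str] = field(default_factory=list)   # edges from parent steps (DAG)
    output: str = ""                                  # the step's recorded result
    score: float | None = None                        # set by verification/evaluation steps

    def __post_init__(self) -> None:
        if self.kind not in NODE_KINDS:
            raise ValueError(f"unknown node kind: {self.kind}")

@dataclass
class Iteration:
    nodes: list[Node]          # one pass over the graph, in topological order
    converged: bool            # did this pass meet the convergence criteria?

def run(build_iteration, max_iterations: int = 3) -> list[Iteration]:
    # Non-converged drafts are discarded as outputs, but every pass is kept in
    # the trace, so later passes can learn from it and auditors can follow it.
    trace: list[Iteration] = []
    for _ in range(max_iterations):
        iteration = build_iteration(trace)   # earlier passes inform the next one
        trace.append(iteration)
        if iteration.converged:
            break
    return trace                             # the auditable artifact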

EIE

Epistemic Integrity Engine

Scores every reasoning output on factual accuracy, logical coherence, source fidelity, and claim support. Produces a continuous integrity metric that regulators, auditors, and insurers can reference. Not a binary pass/fail — a calibrated measure of reasoning quality.
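As a sketch of what a continuous metric over those four dimensions could look like, consider the Python below. The equal weights and the example values are assumptions made for illustration; the actual EIE calibration is not described here.

from dataclasses import dataclass

@dataclass
class IntegrityDimensions:
    factual_accuracy: float     # each dimension scored in [0, 1]
    logical_coherence: float
    source_fidelity: float
    claim_support: float

# Assumed equal weighting; a calibrated engine would tune these.
WEIGHTS = {
    "factual_accuracy": 0.25,
    "logical_coherence": 0.25,
    "source_fidelity": 0.25,
    "claim_support": 0.25,
}

def integrity_score(d: IntegrityDimensions) -> float:
    # A continuous measure in [0, 1], not a binary pass/fail.
    return sum(w * getattr(d, name) for name, w in WEIGHTS.items())

# Hypothetical dimension values chosen so the example lands on the 0.87
# shown in the converged trace above.
score = integrity_score(IntegrityDimensions(0.90, 0.88, 0.85, 0.85))
assert abs(score - 0.87) < 1e-6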

Reason

Graph Definition Language

A domain-specific language for defining reasoning architectures. Ten primitives (generate, retrieve, verify, evaluate, revise, synthesize, branch, loop, simulate, observe) compose into arbitrary topologies. Domain experts contribute prompt configurations without needing to understand graph theory.

graph credit_assessment {

  // Clarify inputs before reasoning begins
  clarify = Clarification(input: application)

  loop iterate(max: 3) {
    strategy = ResponseStrategy(
      input:   clarify,
      context: "EU AI Act Article 11 compliant assessment"
    )
    draft    = Draft(input: strategy)
    facts    = FactCheck(input: draft, sources: applicant_docs)
    impact   = ImpactAnalysis(input: draft, policy: fair_lending)
    eval     = StrategyEvaluation(input: [facts, impact])
    check    = ConvergenceCheck(input: eval, threshold: 0.85)
    break if check.converged
  }

  return draft // with full trace + EIE score
}

Above the model layer. Beside the GRC platform.

IRG doesn’t replace your governance platform or your model provider. It operates between them — orchestrating model calls within governed reasoning topologies, producing the traces and scores that make compliance documentation meaningful.

Layer 3
IRG / EIE
REASONING GOVERNANCE
Layer 2
Mistral · Anthropic · OpenAI
MODEL LAYER
Layer 1
Cloud · On-premise · Sovereign
INFRASTRUCTURE
OneTrust · ServiceNow · IBM OpenPages feed from IRG traces
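As an illustration of what sits at that seam: each run could emit a structured trace record that the platforms above ingest as decision evidence. The field names below are our assumptions, not a published export schema.

import json
from datetime import datetime, timezone

# Hypothetical export record; the shape is ours, not a published IRG schema.
trace_record = {
    "graph": "credit_assessment",
    "model_calls": [
        {"provider": "mistral", "purpose": "draft"},
        {"provider": "mistral", "purpose": "fact_check"},
    ],
    "iterations": 2,
    "eie_score": 0.87,
    "converged": True,
    "exported_at": datetime.now(timezone.utc).isoformat(),
}

# A GRC platform would ingest this as the evidence behind a decision,
# rather than the bare call log shown earlier.
print(json.dumps(trace_record, indent=2))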

One protocol. Every compliance surface.

Every jurisdiction independently requires the same capabilities: risk documentation, reasoning transparency, audit trails, quality management. IRG produces them all from the same underlying trace. Different prompt sets, different regulatory templates, same protocol.

EU · August 2026

EU AI Act — Articles 9–15

High-risk AI systems require technical documentation, automatic event recording, risk management, and human oversight. Credit scoring, employment, healthcare, insurance.

US · Banking

SR 11-7 — Model Risk Management

Federal Reserve guidance requiring banks to validate, document, and govern models used in decision-making. The standard MRM framework for US financial institutions.

US · Colorado · Feb 2026

Colorado AI Act

Impact assessments, documentation of decision-making, transparency obligations for high-risk AI in employment, lending, insurance, and housing.

Global · Convergent

Cross-jurisdictional

Institutions operating across borders adopt the strictest standard. EU compliance subsumes most frameworks. One EIE deployment covers all surfaces.
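In that spirit, each surface above could be rendered from the same trace record. The template names and field mappings below are purely illustrative assumptions, not product features.

# Illustrative renderers over the hypothetical trace_record sketched earlier.
def render_eu_ai_act(trace: dict) -> dict:
    # An Article 12-style automatic event record derived from the reasoning trace.
    return {
        "record_type": "eu_ai_act_event_log",
        "graph": trace["graph"],
        "iterations": trace["iterations"],
        "integrity_score": trace["eie_score"],
    }

def render_sr_11_7(trace: dict) -> dict:
    # Model-risk documentation evidence in the spirit of SR 11-7 validation.
    return {
        "record_type": "sr_11_7_model_evidence",
        "model_calls": trace["model_calls"],
        "outcome_supported": trace["converged"],
    }

example = {"graph": "credit_assessment", "iterations": 2, "eie_score": 0.87,
           "model_calls": [], "converged": True}

# Different templates, same underlying trace.
documents = [render_eu_ai_act(example), render_sr_11_7(example)]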

“Scaling models improves the constants. Graph design shifts the Pareto frontier. The former is expensive and subject to diminishing returns. The latter is a design choice with compounding returns as the discipline matures.”
— Cognitive Engineering: A Formal Framework, Arcus Research

Grounded in theory, built for production

IRG isn’t prompt engineering. It’s an application of cognitive architecture theory — grounded in the Cattell-Horn-Carroll framework of intelligence, formalized through circuit complexity analysis, and expressed as an engineering discipline we call cognitive engineering.

The framework defines output quality as a function of three independent variables: reasoning capability (R), knowledge (K), and graph design quality (G). Current AI scaling improves R and K. Cognitive engineering optimizes G — a previously unmeasured axis.
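Written out (the notation below is ours, not quoted from the paper), the claim is that quality takes graph design as a separate argument that model scaling leaves untouched:

\[ Q = f(R,\, K,\, G) \]

Scaling the model raises R and K with G held fixed; cognitive engineering raises G with the model held fixed.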

Cognitive Engineering v0.2

Formal framework including CHC-grounded R/K/G theory, four laws of cognitive engineering, G₁–G₅ topology classification, circuit complexity analysis, and cost models.

Read the paper →

Let’s talk about governed reasoning

If your AI makes decisions that regulators, auditors, or customers need to trust — we should talk.

Schedule a conversation or reach us at contact@arcusx.ai