Arcus builds protocol-level infrastructure for structured AI reasoning. Our protocol decomposes AI decisions into traceable graphs — every step named, typed, verified, and scored. For regulated industries where “the model said so” is not an acceptable answer.
Current AI governance platforms track which model was called, when, and by whom. They govern AI as an asset in an inventory. But regulators don’t ask “which model did you use?” They ask “how did the system arrive at this decision?”
That question requires a different kind of trace — not a call log, but a reasoning structure.
IRG is not a product bolted onto existing AI. It’s a protocol — a way of structuring how AI systems think, with auditability as a first-class property rather than an afterthought.
Structures AI reasoning into directed acyclic graphs with typed nodes — generation, retrieval, verification, evaluation, synthesis. Each decision decomposes into a traceable topology. The graph iterates until convergence criteria are met, discarding drafts but preserving learning between iterations.
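As a rough sketch of what that structure implies, here is a minimal typed-node graph and convergence loop in Python. The class names, fields, and loop are illustrative assumptions, not the IRG implementation.

from dataclasses import dataclass, field
from enum import Enum
from typing import Callable

class NodeType(Enum):
    GENERATION = "generation"
    RETRIEVAL = "retrieval"
    VERIFICATION = "verification"
    EVALUATION = "evaluation"
    SYNTHESIS = "synthesis"

@dataclass
class Node:
    name: str
    type: NodeType
    depends_on: list[str] = field(default_factory=list)

def run_graph(nodes: list[Node],
              execute: Callable[[Node, dict], object],
              converged: Callable[[dict], bool],
              max_iterations: int = 3):
    # Nodes are assumed to be listed in dependency (topological) order.
    trace = []                           # kept across iterations: the audit record
    outputs: dict[str, object] = {}
    for _ in range(max_iterations):
        outputs = {}
        for node in nodes:
            upstream = {d: outputs[d] for d in node.depends_on}
            outputs[node.name] = execute(node, upstream)
        trace.append(outputs)            # each draft is superseded, its trace is preserved
        if converged(outputs):
            break
    return outputs, trace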
Scores every reasoning output on factual accuracy, logical coherence, source fidelity, and claim support. Produces a continuous integrity metric that regulators, auditors, and insurers can reference. Not a binary pass/fail — a calibrated measure of reasoning quality.
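What a composite metric of that kind could look like, purely as an assumption about shape; the real EIE calibration and weights are defined by the protocol, not by this sketch.

def integrity_score(factual_accuracy: float,
                    logical_coherence: float,
                    source_fidelity: float,
                    claim_support: float,
                    weights: tuple = (0.3, 0.25, 0.25, 0.2)) -> float:
    # Illustrative composite: a weighted mean of four component scores, each in [0, 1].
    components = (factual_accuracy, logical_coherence, source_fidelity, claim_support)
    if not all(0.0 <= c <= 1.0 for c in components):
        raise ValueError("component scores must be in [0, 1]")
    return sum(w * c for w, c in zip(weights, components))

# integrity_score(0.92, 0.88, 0.95, 0.81) yields one continuous value, not a pass/fail.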
A domain-specific language for defining reasoning architectures. Ten primitives — generate, retrieve, verify, evaluate, revise, synthesize, branch, loop, simulate, observe — compose into arbitrary topologies. Domain experts contribute prompt configurations without understanding graph theory.
graph credit_assessment {
  // Clarify inputs before reasoning begins
  clarify = Clarification(input: application)

  loop iterate(max: 3) {
    strategy = ResponseStrategy(
      input: clarify,
      context: "EU AI Act Article 11 compliant assessment"
    )
    draft = Draft(input: strategy)
    facts = FactCheck(input: draft, sources: applicant_docs)
    impact = ImpactAnalysis(input: draft, policy: fair_lending)
    eval = StrategyEvaluation(input: [facts, impact])
    check = ConvergenceCheck(input: eval, threshold: 0.85)
    break if check.converged
  }

  return draft  // with full trace + EIE score
}
IRG doesn’t replace your governance platform or your model provider. It operates between them — orchestrating model calls within governed reasoning topologies, producing the traces and scores that make compliance documentation meaningful.
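A sketch of that middle layer, with call_model and emit_trace standing in for whichever model provider client and governance sink an integration actually uses; both names are hypothetical.

import time
import uuid

def execute_node(node_name: str, node_type: str, prompt: str, call_model, emit_trace):
    # Call the model provider, then hand a structured trace event to the governance layer.
    started = time.time()
    output = call_model(prompt)
    emit_trace({
        "trace_id": str(uuid.uuid4()),
        "node": node_name,
        "type": node_type,
        "prompt": prompt,
        "output": output,
        "latency_s": round(time.time() - started, 3),
    })
    return output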
Every jurisdiction independently requires the same capabilities: risk documentation, reasoning transparency, audit trails, quality management. IRG produces them all from the same underlying trace. Different prompt sets, different regulatory templates, same protocol.
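One way to picture the single-trace, many-templates idea; the framework keys, section names, and tagging scheme below are hypothetical illustrations.

# Hypothetical registry: which documentation sections each framework asks for.
TEMPLATES = {
    "eu_ai_act": ["risk_documentation", "event_log", "human_oversight", "quality_management"],
    "us_mrm":    ["model_validation", "event_log", "risk_documentation"],
    "us_state":  ["impact_assessment", "transparency_notice"],
}

def render_documentation(trace: list, framework: str) -> dict:
    # Project one reasoning trace onto the sections a given framework requires.
    # Each trace event is assumed to tag which sections it supports.
    return {
        section: [event for event in trace if section in event.get("supports", [])]
        for section in TEMPLATES[framework]
    }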
High-risk AI systems require technical documentation, automatic event recording, risk management, and human oversight. Credit scoring, employment, healthcare, insurance.
Federal Reserve guidance requiring banks to validate, document, and govern models used in decision-making. The standard MRM framework for US financial institutions.
Impact assessments, documentation of decision-making, and transparency obligations for high-risk AI in employment, lending, insurance, and housing.
Institutions operating across borders adopt the strictest standard. EU compliance subsumes most frameworks. One EIE deployment covers all surfaces.
“Scaling models improves the constants. Graph design shifts the Pareto frontier. The former is expensive and subject to diminishing returns. The latter is a design choice with compounding returns as the discipline matures.”
— Cognitive Engineering: A Formal Framework, Arcus Research
IRG isn’t prompt engineering. It’s an application of cognitive architecture theory — grounded in the Cattell-Horn-Carroll framework of intelligence, formalized through circuit complexity analysis, and expressed as an engineering discipline we call cognitive engineering.
The framework defines output quality as a function of three independent variables: reasoning capability (R), knowledge (K), and graph design quality (G). Current AI scaling improves R and K. Cognitive engineering optimizes G — a previously unmeasured axis.
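As a toy illustration only (the functional form below is invented for this sketch and is not the paper's model), the point is that G is a lever you can move without touching R or K.

def output_quality(R: float, K: float, G: float) -> float:
    # Invented functional form, for illustration only: quality depends on all three axes.
    return G * (R + K)

bigger_model_same_graph = output_quality(R=0.7, K=0.7, G=0.5)   # scale the model: improve R and K
same_model_better_graph = output_quality(R=0.6, K=0.6, G=0.8)   # redesign the graph: improve G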
Formal framework including CHC-grounded R/K/G theory, four laws of cognitive engineering, G₁–G₅ topology classification, circuit complexity analysis, and cost models.
If your AI makes decisions that regulators, auditors, or customers need to trust — we should talk.
Schedule a conversation or reach us at contact@arcusx.ai