Research & announcements

Protocol releases, technical thinking, and updates from Arcus Labs

What SR 11-7 Means for AI-Driven Decision Making

SR 11-7, the Federal Reserve's model risk management guidance, was written for statistical models with inspectable coefficients. LLMs break every assumption the framework rests on. When an examiner asks how the model arrived at a specific decision, the answer "we trust the output" is not an answer.

The Difference Between Logging and Governance in AI Systems

There is a difference between knowing what your AI did and knowing how it got there. Most governance platforms answer the first question. They log the model, the timestamp, the guardrail result. The second question requires a reasoning trace.
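The distinction can be made concrete in code. This is an illustrative sketch, not an Arcus Labs API: the class and field names below are hypothetical, chosen only to contrast what a log record captures with what a reasoning trace adds.

```python
from dataclasses import dataclass, field

@dataclass
class LogRecord:
    """What the AI did: enough to prove an event occurred."""
    model: str
    timestamp: str
    guardrail_passed: bool

@dataclass
class ReasoningStep:
    """One step of how it got there: a claim, its evidence, its confidence."""
    claim: str
    evidence: list[str]
    confidence: float

@dataclass
class ReasoningTrace:
    """The chain behind the final output, attached to the event record."""
    record: LogRecord
    steps: list[ReasoningStep] = field(default_factory=list)

# A log answers: what ran, when, and did the guardrail pass?
log = LogRecord(model="model-x", timestamp="2024-05-01T12:00:00Z",
                guardrail_passed=True)

# A trace additionally answers: which claims, on what evidence, how confident?
trace = ReasoningTrace(record=log, steps=[
    ReasoningStep(claim="Applicant income verified",
                  evidence=["doc:paystub-17"], confidence=0.92),
])
```

The point of the sketch: most platforms stop at `LogRecord`; governance requires something shaped like `ReasoningTrace`.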

Introducing EIE: A Protocol for Measuring Epistemic Integrity in AI Systems

We are releasing the Epistemic Integrity Evaluation specification. EIE is an open protocol for measuring whether AI systems handle uncertainty honestly, consistently, and proportionally. Most evaluation frameworks measure whether the model got the answer right. EIE measures whether it behaved well when it did not know.
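One way to make "behaved well when it did not know" measurable is a calibration-style check: compare the confidence a system states with whether it was actually right. This is a toy sketch of the idea, not the EIE specification itself; the function name and scoring rule are illustrative assumptions.

```python
def calibration_gap(predictions):
    """Toy epistemic-integrity check: mean |stated confidence - correctness|.

    A system that handles uncertainty honestly reports low confidence
    when it is wrong, so its gap stays small. The predictions argument
    is a list of (stated_confidence, was_correct) pairs.
    """
    gaps = [abs(conf - (1.0 if correct else 0.0))
            for conf, correct in predictions]
    return sum(gaps) / len(gaps)

# (stated confidence, was the answer correct?)
honest = [(0.9, True), (0.2, False), (0.8, True)]
overconfident = [(0.9, False), (0.95, False), (0.9, True)]

assert calibration_gap(honest) < calibration_gap(overconfident)
```

The contrast is the point: both systems got one answer wrong in `honest`'s case and two in `overconfident`'s, but the overconfident system is penalized for claiming certainty it did not have.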

Introducing IRG: A Protocol for Persistent, Structured AI Reasoning

We are releasing the Iterative Reasoning Graph specification. IRG is an open protocol for building AI systems that reason in explicit, persistent, revisable structures rather than ephemeral token streams. The reasoning persists in a graph your team can inspect, replay, and audit.
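A minimal sketch of the shape such a graph might take, assuming nothing about the IRG wire format: nodes hold claims and their dependencies, revisions are explicit edges rather than overwrites, and the superseded node persists for audit. All names here are hypothetical, not the published specification.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    node_id: str
    claim: str
    supports: list[str] = field(default_factory=list)  # ids this node builds on
    revised_by: Optional[str] = None  # set when a later node supersedes this one

class ReasoningGraph:
    """A persistent, revisable reasoning structure (illustrative sketch)."""

    def __init__(self):
        self.nodes: dict[str, Node] = {}

    def add(self, node_id: str, claim: str, supports=()) -> str:
        self.nodes[node_id] = Node(node_id, claim, list(supports))
        return node_id

    def revise(self, old_id: str, new_id: str, claim: str) -> str:
        # Revision never deletes: the old node stays in the graph,
        # pointing at its successor, so an auditor can replay the change.
        self.nodes[old_id].revised_by = new_id
        return self.add(new_id, claim, supports=[old_id])

    def replay(self) -> list[str]:
        """Return only the currently-active claims, in insertion order."""
        return [n.claim for n in self.nodes.values() if n.revised_by is None]

g = ReasoningGraph()
g.add("n1", "Revenue grew 12% YoY")
g.add("n2", "Growth driven by new market", supports=["n1"])
g.revise("n2", "n3", "Growth driven by pricing change")
```

After the revision, `g.replay()` returns only the active claims, while `n2` remains inspectable in `g.nodes` with its `revised_by` pointer, which is the contrast with an ephemeral token stream.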
