Govern the reasoning,
not just the answer.
LLMs generate probabilities. Institutions require verifiable truth. Blazar is a protocol runtime that controls an agent's reasoning trajectory before it becomes an answer or action — purpose-built for law, compliance, audit and the public sector.
Reliability is a property of the trajectory,
not of the final text.
Probabilities cannot be trusted as truth.
Frontier LLMs generate fluent language under uncertainty. In high-stakes work, fluency is not evidence. Blazar treats every claim as a force that must be matched by accumulated support.
PM4 is a reasoning-control protocol.
Built on years of applied research into autoregressive reasoning trajectories and premature semantic commitment. PM4 governs when a model may strengthen, downgrade, or refuse a conclusion.
Control the path before it becomes the answer.
Standard approaches validate output after generation. Blazar intervenes earlier — at the moment a system commits to a locally coherent but inadmissible line of reasoning.
A runtime above your model.
Not a wrapper. Not a guardrail.
Connect any LLM agent or workflow.
Blazar sits above GPT-class models via API, SDK, or private deployment. No retraining required.
The protocol observes the reasoning trajectory.
PM4 measures dispersion, alternative continuations, and the force of an emerging claim against its accumulated support.
One of five outcomes is enforced.
Allow, downgrade, request additional support, escalate, or stop. The agent never commits to an inadmissible answer.
Every result ships with a verifiable reasoning log.
An auditable trail from input to outcome — reviewable by counsel, regulators, or oversight bodies.
Allow
Claim force is licensed by the trajectory's accumulated support.
Downgrade
Output is permitted at a weaker assertional force.
Request support
Agent must retrieve additional grounds before continuing.
Escalate
Routed to a human reviewer or higher authority.
Stop
Premature commit detected. No output is released.
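The five outcomes can be pictured as a single decision function over the trajectory measurements named above. The sketch below is purely illustrative: the thresholds, the scoring scheme, and every name in it are our assumptions, not PM4's actual admissibility condition.

```python
from enum import Enum

class Outcome(Enum):
    ALLOW = "allow"
    DOWNGRADE = "downgrade"
    REQUEST_SUPPORT = "request_support"
    ESCALATE = "escalate"
    STOP = "stop"

def decide(claim_force: float, support: float, dispersion: float,
           support_ratio: float = 0.8,
           dispersion_limit: float = 0.5) -> Outcome:
    """Map trajectory measurements to one of the five outcomes.

    Hypothetical scheme: all thresholds are placeholders, not PM4's
    real admissibility condition.
    """
    if dispersion > dispersion_limit:
        # Alternative continuations disagree too much: human review.
        return Outcome.ESCALATE
    if support >= claim_force:
        # Accumulated support fully licenses the claim's force.
        return Outcome.ALLOW
    if support >= claim_force * support_ratio:
        # Close, but not licensed: assert at weaker force.
        return Outcome.DOWNGRADE
    if support > 0:
        # Some grounds exist; the agent must retrieve more.
        return Outcome.REQUEST_SUPPORT
    # No support at all: premature commit, release nothing.
    return Outcome.STOP
```

With these placeholder thresholds, a claim of force 0.9 backed by support 0.3 would yield `Outcome.REQUEST_SUPPORT`; the same claim with zero support would yield `Outcome.STOP`.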
Every reasoning node, observed.
Every commit, licensed.
Not a black box. A governed graph.
Blazar treats agent cognition as a graph of micro-decisions. Each node is observed against the protocol's admissibility condition. Inadmissible activations are halted, weakened, or rerouted to humans.
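A governed graph can be sketched as a walk that checks every node before its children are ever expanded. Again a minimal illustration under assumed names and an assumed admissibility test, not the protocol's actual mechanics:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    claim: str
    force: float                 # how strongly the agent asserts this
    support: float               # accumulated evidence behind it
    children: list = field(default_factory=list)

def govern(node: Node, log: list) -> bool:
    """Depth-first walk of a reasoning graph, halting a branch at the
    first inadmissible node. The condition support >= force is a
    placeholder for the protocol's admissibility check; every outcome
    is appended to an auditable log."""
    admissible = node.support >= node.force
    log.append((node.claim, "allow" if admissible else "halt"))
    if not admissible:
        return False             # inadmissible activation: branch halted
    return all(govern(child, log) for child in node.children)
```

Because the log records every node it visits, the same structure doubles as the verifiable reasoning trail: each micro-decision appears with the verdict the protocol enforced on it.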
The result: agents that move with the speed of an LLM and the discipline of an institution.
For workflows where black-box
answers are not an option.
Built for high-stakes institutions — not for chat.
Law firms, regulators, ministries and audit bodies cannot rely on persuasive output without a path. Blazar gives them the protocol layer to use AI agents inside real decisions.
Available as API integrations, private deployments, and controlled workflow runtimes — including sovereign on-premise installations.
See it on a real workflow
Deploy the protocol,
or build it with us.
Blazar applied to your
highest-stakes workflow.
- Live demonstration of agent + reasoning log on a workflow you select
- Walk through allow / downgrade / request-support / escalate / stop outcomes
- Discuss API, private cloud, or sovereign on-prem deployment options
Collaborate on AI reasoning
control & sovereign infra.
- Co-develop new PM4-based protocols with our research team
- Open to researchers, labs, technical partners, strategic institutions
- Topics: agent safety, trajectory governance, sovereign AI infrastructure
A private research lab,
not a product startup.
Built by Metatesk.
Backgrounds across machine learning, applied mathematics, AI systems engineering, legal-tech and high-stakes analysis — supported by scientific partners with long-term ML and data-science experience.
We treat AI agency as a problem of protocol, not product. Blazar is the operational expression of that thesis.
Metatesk · Private AI Research Lab