AI agents have become persistent, autonomous, and deeply embedded in everyday workflows. But as they gain the ability to act on our behalf, a harder question emerges: who controls the data, the execution, and the trust layer?
Today, $NEAR AI announced its answer. Unveiled live at NEARCON 2026, IronClaw is a new open-source, verifiable AI agent runtime designed for a future where agents run continuously, without exposing sensitive data, credentials, or user intent.
A Runtime Built for Autonomous AI, Without Blind Trust
IronClaw builds on the original OpenClaw vision but strengthens it with cryptographic guarantees from the ground up. Written in Rust and deployed inside encrypted Trusted Execution Environments (TEEs) on $NEAR AI Cloud, the runtime lets AI agents access tools, maintain memory, and take actions on users' behalf, all within a tightly controlled security boundary.
Rather than asking users to trust opaque platforms, IronClaw shifts the trust model toward verifiable execution. Data and inference stay protected at the hardware level, and agents operate under explicit, enforceable permissions.
Security by Architecture, Not Add-Ons
IronClaw is designed with defense in depth as a core principle.
Every untrusted or third-party tool runs in its own sandbox, limited to only the resources it is explicitly authorized to access. Network calls are restricted to approved destinations. Sensitive credentials are injected only at runtime and never exposed directly to tools or external services.
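In practice, a deny-by-default permission model like the one described above can be sketched roughly as follows. This is an illustrative sketch only; the type and function names (`ToolPolicy`, `call_tool`, and so on) are hypothetical and not IronClaw's actual API:

```rust
use std::collections::HashSet;

// Hypothetical per-tool sandbox policy: each tool carries an explicit
// allowlist, and anything not listed is denied by default.
struct ToolPolicy {
    allowed_hosts: HashSet<String>, // approved network destinations
    allowed_paths: HashSet<String>, // filesystem resources the tool may read
}

impl ToolPolicy {
    fn permits_host(&self, host: &str) -> bool {
        self.allowed_hosts.contains(host)
    }

    fn permits_path(&self, path: &str) -> bool {
        self.allowed_paths.contains(path)
    }
}

// Credentials are produced only at call time and dropped when the call
// returns, so the tool never holds them between invocations.
fn call_tool(
    policy: &ToolPolicy,
    host: &str,
    inject_credential: impl FnOnce() -> String,
) -> Result<String, String> {
    if !policy.permits_host(host) {
        return Err(format!("network call to {host} denied by sandbox policy"));
    }
    let token = inject_credential(); // lives only for this call
    Ok(format!("called {host} with a short-lived credential ({} bytes)", token.len()))
}

fn main() {
    let policy = ToolPolicy {
        allowed_hosts: ["api.example.com".to_string()].into_iter().collect(),
        allowed_paths: HashSet::new(),
    };
    assert!(call_tool(&policy, "api.example.com", || "secret".into()).is_ok());
    assert!(call_tool(&policy, "evil.example.org", || "secret".into()).is_err());
    assert!(!policy.permits_path("/etc/passwd")); // nothing allowed unless listed
    println!("sandbox policy checks passed");
}
```

The key design point is that denial is the default state: a tool gains a capability only by having it listed, and credentials pass through as short-lived values rather than stored configuration.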
Agent activity is continuously monitored to detect misuse, including protections against prompt-injection attacks and abusive resource consumption. All user data is stored locally in PostgreSQL, encrypted with AES-256-GCM, and never shared externally. Importantly, IronClaw collects no telemetry or analytics, ensuring execution remains fully private.
A complete audit log gives users visibility into every tool interaction: transparency without surveillance.
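An audit log of this kind is conceptually simple: an append-only record of each tool interaction, its timestamp, and whether it was permitted. The sketch below assumes a minimal in-memory structure; `AuditEntry` and `AuditLog` are illustrative names, not IronClaw's actual log format:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical record of a single tool interaction.
#[derive(Debug)]
struct AuditEntry {
    timestamp_secs: u64,
    tool: String,
    action: String,
    allowed: bool,
}

#[derive(Default)]
struct AuditLog {
    // Append-only: entries are pushed, never mutated or removed,
    // so the log reflects the full history of agent activity.
    entries: Vec<AuditEntry>,
}

impl AuditLog {
    fn record(&mut self, tool: &str, action: &str, allowed: bool) {
        let timestamp_secs = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|d| d.as_secs())
            .unwrap_or(0);
        self.entries.push(AuditEntry {
            timestamp_secs,
            tool: tool.to_string(),
            action: action.to_string(),
            allowed,
        });
    }
}

fn main() {
    let mut log = AuditLog::default();
    log.record("web_search", "GET api.example.com/v1/query", true);
    log.record("file_reader", "open /etc/shadow", false);
    assert_eq!(log.entries.len(), 2);
    assert!(!log.entries[1].allowed); // denied calls are logged too
    for e in &log.entries {
        println!("{} {} {} allowed={}", e.timestamp_secs, e.tool, e.action, e.allowed);
    }
}
```

Note that denied actions are recorded alongside permitted ones; a log that only captured successes would hide exactly the events a user most needs to see.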
Privacy-First AI, Ready to Deploy
IronClaw launches with a free Starter tier that includes one hosted agent instance running inside $NEAR AI's secure environment and powered by its inference infrastructure. Developers and organizations can scale up through flexible paid tiers as their needs grow.
The goal isn't just safer agents; it's practical deployment without forcing teams to choose between convenience and control.
Why This Matters
As AI systems increasingly serve corporate incentives and rely on opaque data pipelines, IronClaw represents a different path: local control, verifiable execution, and privacy by default.
Illia Polosukhin, Co-Founder of $NEAR Protocol and Founder of $NEAR AI, described IronClaw as an "agentic harness designed for security," extending $NEAR's full-stack trust model from blockchain infrastructure into the AI layer itself.
Rather than bolting security onto agentic AI after the fact, IronClaw embeds it into the runtime, combining confidential inference, cryptographic verification, and hardware-backed execution into a single system.
A Foundation for Responsible Agentic AI
George Zeng, Chief Product Officer and GM of $NEAR AI, framed the launch more bluntly:
"AI agents are already entering critical workflows, but security, compliance, and data ownership remain unresolved. IronClaw is meant to close that gap, giving developers and enterprises the confidence to deploy always-on agents without surrendering transparency or control."
IronClaw is available now, with code accessible through $NEAR AI's GitHub.
As AI moves from tools to actors, IronClaw signals a clear position: autonomy shouldn't come at the cost of privacy, and intelligence should never require blind trust.



