Managing risk in the new era of ‘persistent’ AI
As AI systems shift from copilot to autopilot, Puneet Kukreja considers the emerging risks and outlines steps organisations must take now to manage AI autonomy
If 2025 was the year of artificial intelligence (AI) copilots, 2026 is shaping up to be the year of autopilots.
Moltbook has made agent‑to‑agent behaviour visible in public, showing how AI systems can interact with one another.
OpenClaw has made the underlying mechanism clearer, demonstrating how agents can ‘wake up’ on their own through “heartbeat-driven” processes that run at defined intervals without constant prompting; each heartbeat triggers a new cycle of work.
Together, these shifts point to a transition from AI that simply answers to AI that operates.
Most leaders still think of AI as episodic: you ask, it answers and the interaction ends when the session does. Heartbeat-enabled systems behave differently.
Some AI systems are now built to wake themselves, reassess their environment, determine what requires attention and continue operating without waiting for human instruction.
They can check for updates, reprioritise what matters and carry on working without being asked. In practice, the system no longer just responds—it persists.
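To make the shift concrete, the loop below is a minimal sketch, in Python, of how a heartbeat-driven agent might behave. The interval and the agent methods (check_for_updates, reprioritise, execute) are assumptions for illustration, not OpenClaw's or any vendor's actual interface.

```python
import time

HEARTBEAT_INTERVAL_SECONDS = 300  # assumed interval: wake the agent every five minutes


def heartbeat_loop(agent):
    """Minimal illustrative sketch of a heartbeat-driven work cycle."""
    while True:
        # Wake on a timer rather than waiting for a human prompt.
        time.sleep(HEARTBEAT_INTERVAL_SECONDS)

        # Reassess the environment: new messages, events, task state.
        observations = agent.check_for_updates()

        # Decide what, if anything, needs attention right now.
        tasks = agent.reprioritise(observations)

        # Carry on working without being asked.
        for task in tasks:
            agent.execute(task)
```

The point of the sketch is the absence of a human in the loop: nothing in the cycle waits for an instruction, which is precisely what changes the risk profile.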
Once AI persists, risk shifts from the possibility of an incorrect sentence to the possibility of an incorrect action, repeated across tools and workflows before anyone realises intervention is needed. This persistence reshapes the risk model entirely.
Absolute autonomy could pose serious threats to enterprise security, exposing organisations to cyberattacks, operational disruption and data breaches.
This is why boards must treat this moment not as “just another feature upgrade” but as a change in operating model. The question for company boards is no longer, “How do we supervise systems that answer?” It is now, “Are we ready for systems that decide and act?”
The new AI risk frontier
In the last wave of AI adoption, board conversations focused on outputs, including hallucinations, bias, misinformation and data leakage via text.
In a heartbeat-enabled AI world, the primary risk moves from what a model says to what a system does. The danger is no longer about one‑off mistakes; it is about patterns of behaviour that build up quietly until they become serious.
When systems act at machine speed, trust becomes something you engineer because customers see the impact first and only get the explanation afterwards.
The technology enabling this shift is now widely available: a model working inside a system that can browse the internet, read information, use tools and complete tasks step by step.
This means the real “intent” comes from the sequence of actions and the system’s permissions, not from a single prompt.
In short, governance should focus on what the system is allowed to do, how fast it can act and what safeguards are in place if something goes wrong.
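As a rough illustration of those three dimensions, a governance policy for a single agent might be expressed along the following lines. The field names and values are assumptions chosen for the example, not any product's configuration format.

```python
# Illustrative governance policy for one agent; keys and values are assumptions.
agent_policy = {
    # What the system is allowed to do: anything outside this list is refused.
    "allowed_actions": ["read_document", "summarise", "draft_email"],

    # How fast it can act: a hard cap bounds the blast radius of a runaway loop.
    "max_actions_per_hour": 20,

    # What safeguards exist if something goes wrong: high-impact actions pause
    # for a named human approver, and a kill switch halts the agent outright.
    "actions_requiring_human_approval": ["send_email", "make_payment", "export_data"],
    "kill_switch_enabled": True,
}
```

Even a simple declaration like this forces the right conversation: every permission, rate limit and safeguard has to be written down before the agent runs.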
Three risks boards need to understand now
Always‑on systems can make small problems big
When an AI agent works autonomously, even a small mistake can become a serious issue because the system keeps making decisions and repeating actions without stopping.
The real risk is in the chain of actions, not one moment
In systems like OpenClaw and Moltbook, the biggest danger comes from how tasks link together and what the agent is allowed to do with connected tools, not just from the text it generates.
Trust disappears faster than you can investigate
AI works at machine speed. So, people see the impact of its actions before the organisation has time to explain or fix anything.
If the response is slow, trust can quickly disappear.
The response these risks call for is “engineered resilience”: designing AI systems with built-in limits, safeguards and evidence trails, rather than trying to add these controls after something goes wrong.
Five controls
There are five controls boards should ask management to demonstrate within the next 90 days, each with a named executive owner. A brief illustrative sketch of how several of them could fit together follows the list.
Set clear permission levels for AI agents
Sort agents into categories, from simple, read-only helpers to those that can take high-risk actions. Ensure important actions, such as payments, data exports, external emails or system changes, require strict approval.
Limit what agents are allowed to do
Give each agent the minimum access it needs, and no more. Where possible, time-limit access and ensure agents cannot grant themselves extra permissions.
Reduce the damage an error can cause
Control how fast agents can make changes and limit how many changes they can make at once. Block any increase in privileges or scope unless this has been formally approved.
Assume outside content can be unsafe: test everything
Treat all external or unverified content as potentially risky. Test the entire workflow from end to end and watch for unusual actions or odd data handling activities.
Make fast response part of the design
Put in place a kill switch, ways to undo changes and a rehearsed incident plan tested through simulation. Keep detailed logs so the team can trace decisions made and clearly show what has taken place.
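The sketch below shows, in simplified Python, how several of these controls might fit together at runtime: permission tiers with approval for high-risk actions (the first control), least-privilege registration with expiring access (the second), a cap on the pace of changes (the third) and a kill switch plus an evidence trail (the fifth). The agent names, tiers and thresholds are assumptions for illustration, not a production design; the fourth control is a testing and monitoring practice rather than a single gate.

```python
import time
from enum import Enum


class Tier(Enum):
    """Illustrative permission levels, from read-only helpers to high-risk actors."""
    READ_ONLY = 1
    LOW_RISK = 2
    HIGH_RISK = 3  # payments, data exports, external email, system changes


# Least privilege: each agent is registered with the minimum tier it needs
# and an expiry time after which its access lapses (hypothetical agent name).
AGENT_REGISTRY = {
    "invoice-summariser": {"tier": Tier.READ_ONLY,
                           "access_expires": time.time() + 7 * 86400},
}

KILL_SWITCH_ACTIVE = False      # one flag halts every agent immediately
MAX_ACTIONS_PER_MINUTE = 10     # cap how fast an agent can make changes
AUDIT_LOG = []                  # evidence trail for every decision, allowed or not


def gate_action(agent_id, action_tier, approved_by=None, recent_action_count=0):
    """Decide whether a single agent action may proceed (sketch only)."""
    entry = {"agent": agent_id, "tier": action_tier.name,
             "approved_by": approved_by, "timestamp": time.time(), "allowed": False}
    AUDIT_LOG.append(entry)

    agent = AGENT_REGISTRY.get(agent_id)
    if KILL_SWITCH_ACTIVE or agent is None:
        return False

    # The agent can never act above its registered tier, and access must not have expired.
    if action_tier.value > agent["tier"].value or time.time() > agent["access_expires"]:
        return False

    # Bound how many changes the agent can make in a short window.
    if recent_action_count >= MAX_ACTIONS_PER_MINUTE:
        return False

    # High-risk actions require a named human approver before they run.
    if action_tier is Tier.HIGH_RISK and approved_by is None:
        return False

    entry["allowed"] = True
    return True
```

The design choice worth noting is that every decision is logged whether or not the action is allowed, so the evidence trail exists before an incident rather than being reconstructed afterwards.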
The goal is not to slow down AI adoption; it is to make autonomy safe enough to be boring.
The organisations that lead in 2026 will not be those that avoid agents; they will be those that can prove control, constrain scope and contain incidents quickly while improving productivity.
Heartbeat is the signal that AI is becoming infrastructure. When AI has a heartbeat, resilience has to be engineered.
Puneet Kukreja is Consulting Partner, Resilient Nation Leader and Head of Cyber with EY Ireland