How will AI impact my boardroom in 2026?
As the uptake of artificial intelligence (AI) technologies in business continues to accelerate, boards must grapple with the associated risks and regulatory pressures, writes Dan Byrne
The modern boardroom has never had an “easy” time of it, but rapid innovations like artificial intelligence (AI) can often make the past feel that way.
As we approach 2026, the pace of AI development, adoption and integration is only gaining strength. Many boards are now at a crossroads. By now, directors have probably heard repeatedly how their businesses need to adapt or fade away into irrelevance.
It can seem daunting, especially when the warning isn’t followed up with cold, hard facts—actionable points directors can plug into their boardroom agendas and get started on.
Here is a run-down of the biggest AI challenges facing boards today, from market sentiments and expert advice to increasing regulation:
1. Agentic governance
Agentic AI is next-level material for many directors, who will start exploring its potential in 2026.
When we talk about agentic AI, we are referring to technology that doesn’t just chat back to you; it can independently keep records, plan meetings, feed data to board packs and carry out other tasks based on the goals humans give it.
We are already hearing stories of boards using this kind of functionality as a minute-taker in meetings, just as often as we hear of boards that remain sceptical of AI as a whole and refuse, or delay, going anywhere near it.
Whatever stage your board is at, make no mistake: agentic AI will become more mainstream in 2026—and, at some stage, some stakeholder will wonder when and how you’re going to embrace it.
2. The regulatory cliff and mandatory literacy
We are crossing the chasm in AI literacy. A few years ago, any training in AI was a “nice-to-have” addition to your governance CV. Now, it’s so central that many boardroom recruiters ask about it as standard.
The remarkable thing is how quickly we have moved from one scenario to the other. That’s how readily business has embraced AI.
There are now regulations in place for many companies. The European Union’s AI Act is a notable example, becoming fully operational from August 2026 and affecting one of the biggest economies in the world.
Article 4 specifically addresses literacy requirements, and while it doesn’t explicitly mention boards, the result is that boards will be involved.
Directors cannot and should not leave AI strategy to the technology team and think that any “literacy” requirement stops there.
AI is too big and too ingrained for that to work. Instead, boards need dedicated training so they can contribute to strategy and, crucially, raise questions when needed.
3. The risk
The pivotal thing about AI risk is that it’s growing faster than many of us are prepared for, which means it could impact an organisation in ways it wasn’t expecting.
It could cause trouble for finance, governance, ethics, public relations, marketing, data security—you name it. We can be certain that, throughout 2026, we will see stories of companies misusing AI to such an extent that they land in very public hot water.
These scenarios will always come back to boards, even those that believe they are apart from AI management and strategising.
4. The institutionalisation of AI oversight
AI oversight will become a longer and more complex process in 2026.
At the board level, the structure of scrutiny will likely expand, using a “technology and governance” committee or similar initiative to allocate responsibilities and provide feedback to directors.
This is natural and, in most cases, ideal for good governance, but it’s important to remember that the buck always stops with the board when it comes to making crucial calls.
No matter who is feeding information to directors, those directors must make sure they understand the key points and have asked any outstanding questions before those calls are made.
5. The economy
Although quite separate from the concerns outlined above, it’s worth noting that market analysts and mainstream media continue to talk about an “AI bubble” that may burst at some future point.
Comparisons to the dot-com bubble of the late 1990s and early 2000s abound, and there are lingering fears about how little profit AI is generating compared with the tens of billions invested.
As of December 2025, there remains only talk and scrutiny, but it’s worth noting that any bursting bubble would likely come back to many boards from a different angle—in the form of economic woes they will have to strategise around.
Dan Byrne is a journalist with the Corporate Governance Institute