Who really regulates AI in Ireland?
Ireland’s AI oversight will span multiple regulators under the AI Act. David O’Sullivan explains how this complex framework shapes compliance, governance and accountability for organisations
When it comes to the regulation of artificial intelligence (AI), the EU Artificial Intelligence Act 2024 (AI Act) provides for several types of regulator, including market surveillance authorities, fundamental rights authorities and coordinating bodies. This mix of bodies may result in a complex regulatory ecosystem, with a single organisation subject to oversight and scrutiny from multiple regulators.
Regulators appointed in Ireland include the following.
Fundamental rights authorities:
- An Coimisiún Toghcháin
- Coimisiún na Meán
- Data Protection Commission
- Environmental Protection Agency
- Financial Services and Pensions Ombudsman
- Irish Human Rights and Equality Commission
- Ombudsman
- Ombudsman for Children’s Office
- Ombudsman for the Defence Forces
Market surveillance authorities:
- Central Bank of Ireland
- Commission for Communications Regulation
- Commission for Railway Regulation
- Competition and Consumer Protection Commission
- Data Protection Commission
- Health and Safety Authority
- Health Products Regulatory Authority
- Marine Survey Office
These are supported by a coordinating office, the AI Office, which will act as a central point of contact in Ireland.
Market surveillance authority v fundamental rights body
Market surveillance authorities (MSAs) act as post-market regulatory enforcers: they police the compliance of AI systems on the EU market, investigate suspected breaches, order fixes or withdrawals, and sanction non-compliance.
They address the technical aspects of complying with the Act and identify instances of AI system misuse.
Fundamental rights bodies (FRBs) are public authorities tasked with protecting people’s rights affected by high-risk AI, such as AI systems that pose a significant risk to the health, safety, or fundamental rights of natural persons.
FRBs can obtain documentation and ask MSAs to test systems where rights risks are suspected. They are the first port of call for people who believe they have been harmed by the use of AI, for example through discrimination.
Current status
Although the last of the AI Act’s major obligations will not apply until August 2027, some of the above-mentioned bodies are already regulating the use of AI under existing legislation.
For example, the Data Protection Commission has launched investigations into some of the major AI providers, including xAI (Grok) and OpenAI (ChatGPT), regarding how they process personal data.
Preparation for AI regulation
Organisations must understand where they sit under the AI Act by identifying their operator role and the risk classifications of the systems they intend to use.
They must establish a governance structure (standalone or built on top of existing fora) that enables the monitoring of AI use and the continual identification of the role and risk classification for all new AI systems.
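As a rough illustration, the kind of inventory such a governance structure might maintain can be sketched as a simple register. The role and risk categories below mirror the AI Act’s operator roles and risk tiers; the class names, fields and review rule are invented for this example, not prescribed by the Act:

```python
from dataclasses import dataclass
from enum import Enum

class OperatorRole(Enum):
    """Operator roles recognised under the AI Act."""
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

class RiskClass(Enum):
    """Risk tiers broadly following the AI Act's classification."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations only
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    role: OperatorRole   # the organisation's role for this system
    risk: RiskClass
    owner: str           # accountable business owner (illustrative field)

class AIRegister:
    """Illustrative register of AI systems for ongoing governance monitoring."""

    def __init__(self) -> None:
        self._records: list[AISystemRecord] = []

    def add(self, record: AISystemRecord) -> None:
        self._records.append(record)

    def needing_review(self) -> list[AISystemRecord]:
        # Flag prohibited and high-risk systems for immediate attention
        return [r for r in self._records
                if r.risk in (RiskClass.PROHIBITED, RiskClass.HIGH)]
```

In practice, registering each new system on intake (for example, a CV-screening tool deployed by HR would be recorded as a deployer of a high-risk system) gives the governance forum a single view from which to prioritise oversight.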
However, compliance is only part of the AI story. The organisation may be in full compliance with all applicable laws but still carry a high risk through the use of AI. Organisations must be properly governed to protect the business from risk and to ensure the proper identification of opportunities and the allocation of resources to worthy AI projects aligned with the organisation’s mission and values.
So who is going to regulate AI in Ireland?
While Ireland will have a broad network of regulators overseeing AI, every organisation remains responsible for how it develops, deploys and governs AI systems.
Businesses must understand their operator roles, assess the risk level of the systems they use and maintain strong internal governance. Regulation provides the framework, but it is organisations themselves that must ensure AI is used safely, responsibly and in line with their mission and values.
AI regulation in Ireland will be shared across a network of market surveillance authorities, fundamental rights bodies and a central coordinating office. Each plays a different role, from enforcing technical compliance to protecting people’s rights. Together, they form Ireland’s oversight framework for the safe and responsible use of AI under the AI Act.
David O’Sullivan is Director of Consulting at Forvis Mazars