Building trust through responsible AI

AI adoption is accelerating across Irish business, but without strong governance, risks rise. Keith Power explains why internal audit is uniquely placed to lead on responsible AI

AI is no longer on the horizon; it’s already reshaping how Irish businesses operate. From predictive analytics and GenAI-powered customer support to automated underwriting and risk modelling, AI is embedded across enterprises.

But with rapid adoption comes rising complexity and, importantly, risk.

Without robust governance, AI can introduce bias, privacy violations and compliance breaches. That’s why internal audit teams in Ireland have a unique opportunity: to lead the charge on responsible AI or risk being left behind in the wake of a model failure or regulatory misstep.

Internal audit: positioned to lead on AI governance

Internal audit functions are evolving. No longer just compliance watchdogs, they’re becoming strategic enablers of trust and transformation.

With enterprise-wide visibility and a mandate for independent assurance, internal audit is ideally placed to:

  • evaluate AI governance structures;
  • advise on responsible AI deployment; and
  • assess control design and effectiveness.

By embedding themselves early in the AI lifecycle, audit teams can help shape innovation that’s not only bold, but responsible.

Irish organisations are navigating a landscape of accelerated transformation and tightening regulation. According to our latest CEO Survey, 29 percent of Irish CEOs believe their organisation might not be viable in ten years without reinvention.

Meanwhile, our GenAI Business Leaders Survey shows that 98 percent of Irish businesses have started their AI journey. However, only six percent have deployed AI at scale and 79 percent have yet to fully implement AI governance frameworks.

With the EU AI Act now in effect, Irish organisations face new obligations, including AI literacy, risk categorisation and human oversight for high-risk systems. Internal audit can play a pivotal role in helping organisations meet these requirements while unlocking value from AI.

AI is reshaping internal audit

Just as cybersecurity became a board-level priority a decade ago, AI risk is now front and centre in audit committee discussions.

AI is transforming core business processes, from decision-making and forecasting to customer engagement, often faster than governance frameworks can keep pace. Organisations without a clear strategy or inventory of AI use cases are especially exposed.

Left unchecked, AI systems can:

  • make decisions that breach privacy, fairness or ethical standards;
  • obscure explainability, complicating audit and regulatory compliance; or
  • introduce security and IP risks through third-party models or data leakage.

Boards, regulators and customers are asking tough questions. With the right mandate, capabilities and frameworks, internal audit is well-positioned to answer them.

This is a moment of opportunity. Internal audit can step up as a strategic enabler of responsible AI, helping organisations innovate with confidence while keeping trust and transparency at the core.

How internal audit can drive responsible AI

Here are five key actions internal audit teams can take to drive responsible AI in their organisations:

  1. Map the AI landscape
    Create a living inventory of AI systems, including third-party tools, to assess risk exposure, regulatory compliance and audit readiness. This forms the foundation for strategic oversight and future assurance planning.
  2. Evaluate governance structures
    AI governance is often fragmented. Internal audit can bring clarity by reviewing decision rights, escalation paths and oversight responsibilities. Ensure governance is documented, resourced and operational.
  3. Assess risk and control frameworks
    AI introduces risks that traditional controls may not cover, such as bias, lack of explainability and data drift. Audit teams should assess whether frameworks like the NIST AI Risk Management Framework or ISO 42001 have been adapted to address these risks effectively.
  4. Embed audit into AI design
    Post-deployment reviews are no longer sufficient. Internal audit should be involved from the design phase, especially for high-risk or customer-facing models, to ensure transparency, fallback controls and intended use are clearly defined.
  5. Use AI to audit AI
    Internal audit can harness AI tools for control testing, anomaly detection and document summarisation. This not only improves coverage and efficiency but also builds credibility in advising on AI adoption.

Keith Power is Partner at PwC Ireland