The Governance Gap: Why Your Analytics Team Needs More Than Technical Talent

By Luli Adeyemo, Executive Director, Tech Diversity Academy

Recently, I wrote an article for Pulse IT called "I Have Skin in the AI Game. The Algorithm Doesn't Have Mine."

The response surprised me. Not because people disagreed, but because of who reached out. Clinicians. Risk managers. Board directors. People who'd been sitting in rooms where AI was discussed, nodding along, and quietly wondering whether someone, somewhere, had thought this through. The article described how dermatology AI tools are misdiagnosing skin cancer on people who look like me.

One clinician's comment stayed with me: "As a clinician you believe AI has been subjected to testing, questioning and factual analysis before being sold to you as a tool. This is clearly not the case."

Another commented, “Most tools (questionnaires, medical tests, etc.) go through rigorous testing with peer review prior to being used as a standard. It’s a very curious situation that something as important as skin checks didn’t take into account the full range of skin types.”

That's the gap we're not talking about enough. Not a skills gap. A governance gap.

The data that isn't neutral

The study I reference in that article, published in The Lancet Digital Health, examined 21 skin lesion datasets used globally to train diagnostic AI. Among more than 100,000 images, only 2,436 had skin colour recorded. Of those, just 10 were of brown skin. Only one was of dark brown or Black skin.

This isn't a coding failure. It's a governance failure. Someone chose that data. Someone decided it was representative. Someone signed off.

And the people most likely to be harmed by that decision? They weren't in the room when it was made.

This is the pattern I keep seeing: technical capability outpacing governance capability. Organisations investing heavily in what AI can do, while underinvesting in who's asking the hard questions about whether it should.

Skills and talent: The capability we're not building

When we talk about the analytics talent gap, we typically mean technical skills. Data engineering. Machine learning. Statistical modelling. These matter, and they're in short supply.

But there's another capability gap that's harder to see: the ability to translate technical decisions into ethical, commercial, and regulatory risk. The ability to sit across the table from a data science team and ask the questions that don't appear in the model documentation.

Whose data is in this training set? Who isn't represented? What happens when the model fails? Who's accountable?

These aren't technical questions. They're governance questions. And they require a different kind of skill: the ability to read the room, probe assumptions, and hold complexity without defaulting to "the engineers will sort it out."

We need analytics professionals who can do both. Technical depth and governance fluency. The sector is starting to recognise this, but capability development hasn't caught up.

Ethics and governance: From afterthought to core competency

For too long, ethics and governance have been treated as compliance exercises. Tick the box, add a disclaimer, move on.

That's changing, and quickly.

Australia's AI Safety Institute is now operational. Mandatory AI requirements for government take effect in June 2026, with broader requirements landing in December. ISO/IEC 42001 is becoming the standard that serious organisations align to. APRA's CPS 230 is reshaping how financial services think about operational risk, including algorithmic decision-making.

The regulatory landscape is shifting from "nice to have" to "need to have." But regulation alone won't solve the problem. Compliance is not the same as capability.

What we need are practitioners who can sit at the intersection of technical, ethical, and commercial risk. People who understand that governance isn't about slowing things down; it's about building systems that hold up under scrutiny.

Different professional lenses catch different risks. The person who's spent 15 years in human rights will see harms the machine learning team won't. The product designer will spot user impact that never made the roadmap. The healthcare governance lead will ask accountability questions that weren't in the planning.

That diversity of perspective isn't a nice-to-have. It's how governance actually works.

Value of analytics: Data for good requires governance for good

The potential of analytics and AI to drive positive societal impact is enormous. Healthcare diagnostics, environmental monitoring, social service delivery, economic opportunity. The "data for good" agenda is real, and it matters.

But good intentions aren't enough.

The same tools that could improve cancer detection can entrench diagnostic bias. The same algorithms that could expand financial inclusion can automate discrimination. The same systems that could optimise public services can erode privacy and trust.

The difference between data for good and data that harms isn't the technology. It's the governance. It's who's asking the questions, whose perspectives are represented, and whether the people most affected by these systems have any voice in how they're designed.

This is where diversity becomes a patient safety issue, a consumer protection issue, a business risk issue. Not diversity as a values statement, but diversity as a governance strategy.

Building the capability

This is why we built Australia's first AI Governance Practitioners Programme.

Not another "What is AI?" course. Not technical training for non-technical people. A programme designed to build the cross-functional practitioners who can translate AI risk into decisions that stick at board level.

Our April cohort includes a human rights practice lead from the energy sector, a senior product designer, a head of governance in healthcare, and 12 women sponsored by a major tech company to be in the room when AI decisions get made.

Not one of them writes code. Every one of them will govern it.

The programme was developed in collaboration with Dr Kobi Leins, who contributes to the development of international AI standards including ISO/IEC 42005. It's grounded in the regulatory reality that's coming, but focused on building practical capability, not just awareness.

Because the gap isn't knowledge. The gap is confidence. The confidence to ask the probing question. To push back when something's being glossed over. To stop nodding along and start shaping the conversation.

The room is changing

I've spent more than 30 years in technology. I've been the only woman in the room. The only Black person in the room. The person expected to nod along rather than push back.

That's changing. Slowly, but it's changing.

The organisations that will navigate this next phase well aren't the ones with the most advanced AI. They're the ones building governance capability alongside technical capability. The ones investing in diverse perspectives not because it looks good, but because it catches risks they'd otherwise miss.

The analytics profession has always been about finding signal in noise. The signal right now is clear: governance capability is the next frontier.

And the people who build it first will be the ones who shape what comes next.


About the Author

Luli Adeyemo is Executive Director of TechDiversity Foundation, Co-founder of Tendertrace, and one of Australia's most compelling voices on inclusive innovation and AI governance. With 30 years across technology, enterprise, and government, she brings rare depth and authenticity to complex conversations. A 1986 BMX World Champion, featured in Forbes and BBC News, and recognised as Emerging Leader in Tech at the Women's Agenda Leadership Awards, Luli is known for combining strategic rigour with warmth, humour, and the kind of storytelling that actually moves people.

Luli is the creator of Australia’s first AI Governance Practitioners Programme through the Tech Diversity Academy. Module 1 begins April 1st, 2026.

For more information, visit https://tech-diversity.com.au/