90% of AI projects are failing to deliver ROI (return on investment).
But AI’s still all that anyone can talk about.
As organisations rush to adopt AI tools within their businesses, we brought together senior data and transformation professionals from across UK industry for a dynamic panel discussion on all things AI. From identifying use cases and ethical AI to building internal AI skills, getting data AI-ready and preparing for a future defined by human-AI collaboration, the panellists explored some of the most compelling topics around AI adoption and shared valuable insights based on their experience.
The panel
Garth Newboult, Delivery Director, FDM Consulting Services, moderated two discussions on the topic: Enhance your AI-bility with AI.
Panellists included Becky Fitzgerald, Director of Data and AI, Yorkshire Building Society (YBS); Alex Gore, Group Chief Operations Officer, UBDS; Elliot Morris, Cofounder and CEO of drovi; Andy Bell, Head of Data Problem Management, Precisely Consulting; Adam Cockburn, CTO, Axiologik; Carole Roberts, Director of Tech and Data at LBS; and Gangotri Bhatt, Head of FDM Skills Lab, UK & EMEA.
The first panel focused on: ‘Creating an AI enabled workforce’ and how businesses can build real-world AI capabilities into their teams.
Here are the key takeaways from the session:
First steps
What are some of the prevalent use cases for AI?
The answer depends on who you’re asking.
Adam Cockburn believes use cases vary greatly from organisation to organisation, but the onus is on leadership to identify the things their business struggles with and work out how AI, as a capability, can address them.
90% of AI projects are failing to deliver ROI. That's not because people are fundamentally failing to embrace it; it's more about expectations around the use cases.
It's important to distinguish between AI as a standalone capability that can deliver new and innovative services, and AI that underpins an existing capability. Adam mentioned the use of AI in recruitment as an example: tasks like data sifting, validation and CV screening, which were previously time-consuming manual jobs, can now be automated, improving efficiency. This is a fairly simple case of AI underpinning an existing capability.
Cybersecurity is another area that involves vast volumes of data. The use of AI in modern intrusion detection allows organisations to manage their data effectively and identify patterns within it.
Human-AI collaboration
AI on its own is not a solution, but combined with human oversight it can significantly boost business productivity.
Alex Gore mentioned that his company has been investing in AI tools to enhance the skills and capabilities of its human consultants. One example is an AI Business Analyst: it attends a meeting or workshop, facilitating, prompting questions and collecting information to summarise into user stories. It can also adapt its final line of questioning based on feedback from different stakeholders.
However, AI tools can and do regularly hallucinate, producing inaccurate output. This makes human oversight imperative and calls for an increase in data literacy and critical thinking skills. We have to verify the output of whatever AI tool we're using. Whilst software-as-a-service (SaaS) tools can create designs from natural language, we need human expertise to validate that they are correct, secure and cover all requirements.
Adam Cockburn cited another example where human oversight has been critical. Adam's company works with multiple healthcare clients at national scale, and its task was to help one client make better use of extensive public health data. The challenge was to process these vast volumes of data and build a proof of concept that didn't fall foul of ethics or generate inaccurate diagnoses. They did this by bringing health policy leads into the PoC to validate not just technical outcomes, but also policy and overall business outcomes.
Skills for the future
Alex Gore mentioned an increasing demand for contextual understanding among his consultants. He underscored the importance of understanding business needs, the constraints of any data being queried, and the relevant regulatory compliance requirements when using AI. He believes having the right knowledge empowers his consultants to write better prompts.
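As a loose illustration of that point, the sketch below shows one way business context, data constraints and regulatory considerations might be folded into a prompt. The template, field names and regulatory reference are assumptions made for the example, not a description of any panellist's actual tooling.

```python
# Illustrative only: a simple prompt template showing how business context,
# data constraints and regulatory rules might be folded into a prompt.
# All field names and values below are assumptions made for this example.
from string import Template

PROMPT = Template(
    "You are assisting a $role at a $sector organisation.\n"
    "Business need: $business_need\n"
    "Data constraints: $data_constraints\n"
    "Regulatory context: $regulation\n"
    "Task: $task\n"
    "If the task conflicts with the constraints above, say so instead of answering."
)

print(PROMPT.substitute(
    role="business analyst",
    sector="financial services",
    business_need="reduce mortgage application processing time",
    data_constraints="only anonymised application data may be used",
    regulation="customer-facing outputs must meet FCA Consumer Duty expectations",
    task="summarise the main bottlenecks in the current process",
))
```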
For Andy Bell, data literacy tops the list of essential skills. It empowers people to understand the content, attribution and associated metadata that determine the output you'll get from the data.
Another valuable skill is knowing how to manage this data and filter only trustworthy, 'good' data into the system. Without this oversight, Andy believes any model is bound to fail.
How are organisations building AI skills?
Alex Gore emphasised the importance of enabling a culture of experimentation within organisations so people can see what works and what doesn't in a safe space. He mentioned how his company has created sandbox environments to build proofs of concept quickly.
This highlights where the gaps in your data quality are and where you're likely to generate hallucinations, and lets you fix them very quickly. It also allows senior leaders to get excited about the possibilities.
Alex's point was reiterated by FDM's Gangotri Bhatt, who provided an overview of how, at FDM, we're building AI capabilities into our coaching and our Pods, where consultants get practical, hands-on experience of implementing and testing AI solutions.
How do FDM Pods build AI capabilities?
FDM Pods are agile teams where our consultants work on real internal projects whilst focusing on their professional growth using Scrum. Consultants work in a collaborative AI environment, applying the theoretical knowledge they have gained to hands-on projects. Pods are the best place to introduce new AI technologies because they are secure sandbox environments for consultants to experiment in before taking those learnings onto real client projects.
We currently have 10 AI Pods in place, ranging from an AI chatbot and a CV-matching tool to bespoke projects for finance, government and insurance clients. Within these Pods, consultants are learning AI skills such as prompt engineering, working with AI APIs and machine learning, and applying them to real projects. The Pods span three levels: no-code (Azure drag-and-drop tools), low-code (data manipulation) and API (development and integration of systems).
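As a concrete flavour of the kind of exercise a Pod might run, here is a minimal, hypothetical CV-matching sketch using simple TF-IDF similarity. The data and approach are illustrative assumptions, not FDM's actual tooling.

```python
# Hypothetical sketch of a CV-matching exercise using TF-IDF similarity.
# The real Pod projects aren't detailed here; names and data are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data engineer with Python, SQL and Azure pipeline experience."
cvs = {
    "candidate_a": "Five years of Python and SQL development building Azure data pipelines.",
    "candidate_b": "Marketing graduate with social media and copywriting experience.",
}

# Vectorise the job description together with the CVs so they share one vocabulary
vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform([job_description, *cvs.values()])

# Score each CV against the job description and rank by similarity
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
for name, score in sorted(zip(cvs, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```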
We encourage problem statements from clients and arrange discovery sessions to understand their requirements, as well as the potential solution they're looking at. Clients can be part of the Pod, getting demos and seeing the consultants' progress as they upskill.
The changing roles of tech professionals
Elliot Morris believes project managers will become increasingly technical. Tech PMs will do more prompt engineering and manage agentic work rather than coding.
AI capability is no longer confined to technical roles; there's an expectation that business professionals across all disciplines bring a level of AI awareness. Business users, compliance professionals and even operational leads are now expected to have a working understanding of data flow, usage rights and regulatory expectations, highlighting a shift towards cross-functional data responsibility.
For data architects too, there will be a shift towards critical thinking, domain fluency and cross-functional collaboration. Data lineage will be a top skill for understanding where an AI tool gets its data from.
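As a rough illustration of what tracking lineage can look like in practice, a team might attach a small lineage record to each dataset feeding an AI tool. The structure and field names below are assumptions made for this example, not a specific panellist's approach.

```python
# Illustrative sketch only: a minimal lineage record a team might attach to each
# dataset feeding an AI tool, so reviewers can trace where its data came from.
# The structure and field names are assumptions made for this example.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    dataset: str                 # logical name of the dataset the AI tool consumes
    source_system: str           # where the data originated
    transformations: list[str] = field(default_factory=list)  # steps applied on the way
    loaded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def describe(self) -> str:
        steps = " -> ".join([self.source_system, *self.transformations, self.dataset])
        return f"{steps} (loaded {self.loaded_at:%Y-%m-%d})"

record = LineageRecord(
    dataset="customer_features",
    source_system="crm_export",
    transformations=["deduplicate", "anonymise_pii"],
)
print(record.describe())
```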
Ethical AI
According to Andy Bell, ethical use of AI involves understanding the legal framework you’re working in. This includes how we interact with data, govern access to that data, and respond to changes in legislation.
It's equally important to assess whether it's right to use certain types of data in certain situations. At FDM we ensure that our consultants are using AI tools approved by their clients.
Becky Fitzgerald believes it's important not to be scared of AI, but to manage it properly with the right governance and oversight.
Getting your data ready for AI
The topic for our second panel was ‘Getting your data ready for AI’. For this panel, the speakers were joined by Carole Roberts, Director of Tech and Data at LBS.
One theme that stood out was the importance of early involvement from data owners and governance leads. Rather than being brought in at the tail-end of a project, these stakeholders should be embedded from the outset to ensure data is usable, secure, and ethically managed.
Data readiness
One of the key insights that emerged from this session was that data readiness is no longer just a technology issue, but a strategic one. AI readiness starts with data readiness. Boards and senior leaders are starting to realise that poor data can derail their AI ambitions. Regulatory pressure is growing, and organisations are having to rethink how they embed trust, explainability, and auditability into their data pipelines from day one.
Carole Roberts talked about how AI is exposing data indiscriminately. That's where data stewards need to continue protecting the data and ensure the right standards are maintained, acting as the bridge between the data and what the business needs to use it for.
Data quality
A common misconception is that data quality has to be good before AI can be used. What's actually critical is understanding the data. The right understanding allows us to design systems that cope with variances, and to build tolerance thresholds so that LLMs and AI models can flag and cope with failures in a way that enables humans to intervene at the right point.
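As a hedged sketch of what such a tolerance threshold might look like, the example below routes incomplete records to human review rather than straight into a model. The field names and the 90% threshold are illustrative assumptions.

```python
# Hedged sketch: one way to express a tolerance threshold so that records falling
# below it are routed to a human reviewer rather than straight into a model.
# The field names and the 90% threshold are illustrative assumptions.
REQUIRED_FIELDS = ("customer_id", "postcode", "account_type")
COMPLETENESS_THRESHOLD = 0.9

def completeness(record: dict) -> float:
    """Fraction of required fields that are present and non-empty."""
    present = sum(1 for f in REQUIRED_FIELDS if record.get(f))
    return present / len(REQUIRED_FIELDS)

def route(record: dict) -> str:
    """Send sufficiently complete records to the model; flag the rest for review."""
    return "model" if completeness(record) >= COMPLETENESS_THRESHOLD else "human_review"

batch = [
    {"customer_id": "123", "postcode": "LS1 4AP", "account_type": "savings"},
    {"customer_id": "456", "postcode": "", "account_type": None},
]
for rec in batch:
    print(rec["customer_id"], "->", route(rec))
```

In practice the threshold and the required fields would come from the data owners and governance leads, in line with the points above.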
Bias in AI
The most effective way to mitigate bias is to bring in as many diverse perspectives and voices as possible, early on. Every large language model is trained on different source datasets, with different parameters and different perspectives.
Garth summed up the discussion by suggesting that AI proofs of concept (PoCs) are valuable not just for testing feasibility, but for shining a light on weaknesses in your data. When a PoC underperforms, it's often your data, not the algorithm, that's the issue.
Framing PoCs as a diagnostic tool helps teams strengthen their data foundations, making future AI efforts more robust and scalable.
Is your business getting ready for AI? Find out how our consultants can make your journey a seamless one. Book a discovery session today.