FDM Consultant Jonathan van Kuijk works in the Workplace Technology department for a retail client. His company is considering rolling out Microsoft Copilot for M365, a well-known generative AI workplace productivity tool. If adopted, the software would be made available to eligible M365 E5 license holders, predominantly Head Office knowledge workers.
‘I contribute to functional testing along with crafting data security posture management (DSPM) and information lifecycle management (ILM) policy in light of new considerations prompted by Copilot for M365. This work ensures data compliance and security policies are AI-ready and facilitates accurate, scalable, and usable capabilities assessments.’
Across all sectors, organisations are looking to leverage AI to improve efficiency and gain deeper insights. This trend has sparked numerous discussions about how to achieve AI readiness.
Alexander Elliot, Head of Data, AI and Innovation at FDM Group, outlines the importance of data cleaning in laying the groundwork for robust AI use.
‘Data cleaning is the process of detecting and rectifying errors and inconsistencies within data sets to enhance their quality. It is critical that data used within AI has been evaluated to ensure that it’s fit for purpose.
Accurate data underpins the reliability of AI models, allowing them to make precise predictions and informed decisions. Moreover, clean data contributes to the efficiency of AI algorithms, enabling them to learn more swiftly and perform optimally, thus conserving time and computational resources. The reliability of AI outcomes also depends on the consistency of the underlying data, which is critical for building trust in AI systems.’
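To make the idea concrete, here is a minimal sketch of what data cleaning might look like in practice, using Python and pandas. The file name, column names, and thresholds are illustrative assumptions rather than a prescribed FDM approach.

```python
import pandas as pd

# Hypothetical customer dataset; the file and column names are assumptions.
df = pd.read_csv("customers.csv")

# Detect and remove exact duplicate records.
df = df.drop_duplicates()

# Rectify inconsistent formatting (stray whitespace, mixed case).
df["email"] = df["email"].str.strip().str.lower()

# Resolve missing values: fill where a sensible default exists, drop otherwise.
df["country"] = df["country"].fillna("Unknown")
df = df.dropna(subset=["customer_id", "email"])

# Flag implausible outliers for review rather than silently deleting them.
outliers = df[(df["age"] < 0) | (df["age"] > 120)]
print(f"{len(outliers)} records flagged for manual review")
```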
Data should be evaluated to ensure that it is:
- Secure: Data must be protected from unauthorised access, both to prevent sensitive data from being surfaced to AI users and to guard against external breaches and cyberattacks. Implementing robust security measures is essential to maintain the integrity and confidentiality of data.
- High quality: High-quality data is fundamental to producing accurate and consistent AI results; missing values, errors, and outliers should be resolved before use.
- Ethical: Ethical and privacy implications should be weighed for any data utilised. Any data used to train AI should be representative and free from bias.
‘One of the most important things to remember with AI is that it assumes it is permitted to look at any data available in order to ‘learn’. Unless we have classified and protected any data that we don’t want used (personal letters, end-of-year self-assessment forms, etc.), the AI will use it. Therefore, thinking about what data you don’t want to use can be just as important as thinking about the data you do wish to use.’
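One way to act on this advice, sketched below under assumed conventions, is to filter labelled material out of a corpus before it ever reaches an AI pipeline. The `sensitivity` field and the label names here are hypothetical; real deployments would typically rely on classification labels from a DSPM or information protection tool.

```python
# Hypothetical corpus filter: exclude documents whose sensitivity labels
# mark them as off-limits for AI use. The label names are assumptions.
EXCLUDED_LABELS = {"confidential", "personal", "hr-assessment"}

def is_ai_permitted(document: dict) -> bool:
    """Return True only if the document carries no excluded sensitivity label."""
    labels = {label.lower() for label in document.get("sensitivity", [])}
    return labels.isdisjoint(EXCLUDED_LABELS)

corpus = [
    {"id": 1, "text": "Quarterly sales figures", "sensitivity": []},
    {"id": 2, "text": "End-of-year self-assessment", "sensitivity": ["hr-assessment"]},
]

# Only document 1 survives; the self-assessment never reaches the model.
training_corpus = [doc for doc in corpus if is_ai_permitted(doc)]
```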
Risks of training AI on poor-quality data
Bias
Popular large language models (LLMs) like ChatGPT require massive amounts of data to train, and the most cost-effective way to gather it is by scraping the internet. However, when we accept large amounts of web text as ‘representative’, we risk amplifying dominant viewpoints and perpetuating the biases they carry.
Data is primarily created by humans, so it carries inherent biases. It is therefore important to evaluate data both for accuracy and for discriminatory biases that may affect the AI’s perspective. Conduct parity tests during training to identify and address any biases.
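As a hedged illustration of one such parity test, the sketch below compares a model’s positive-prediction rates across groups. The group labels are hypothetical, and the 0.8 threshold borrows the ‘four-fifths rule’ of thumb from fairness auditing; it is a starting point for investigation, not a guarantee.

```python
import pandas as pd

# Hypothetical evaluation set: model predictions alongside a protected attribute.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0],
})

# Demographic parity check: compare positive-prediction rates per group.
rates = results.groupby("group")["prediction"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Parity ratio: {ratio:.2f}")

if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential disparate impact: investigate before deploying.")
```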
Inaccurate representation
Representation errors often result from subjective training data. Accurate data labelling is also crucial to avoid measurement errors; without quality control, human-labelled data can introduce bias.
Outdated data
AI programmes may struggle with data quality aspects like timeliness and consistency. A model trained only on historical data cannot account for changes since that data was collected, resulting in outputs that are incomplete or out of date.
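A simple timeliness check can surface stale records before training. In the hypothetical sketch below, the `last_updated` column and the two-year cutoff are assumptions; the right threshold depends on how quickly the domain changes.

```python
import pandas as pd

# Hypothetical dataset with a last-updated timestamp per record.
df = pd.DataFrame({
    "record_id":    [1, 2, 3],
    "last_updated": pd.to_datetime(["2018-06-01", "2023-03-15", "2024-11-20"]),
})

# Flag records older than a two-year cutoff (an illustrative threshold).
cutoff = pd.Timestamp.now() - pd.DateOffset(years=2)
stale = df[df["last_updated"] < cutoff]

print(f"{len(stale)} of {len(df)} records are older than the cutoff")
# Stale records can then be refreshed, re-verified, or excluded from training.
```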
Duplication errors
Using unchecked and duplicate data from multiple sources can cause errors. Moreover, unstructured data without metadata can create confusion, complicating analysis for the AI programme.
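Deduplication across sources can be as simple as hashing normalised content and keeping the first occurrence, as in the minimal sketch below. What counts as a ‘duplicate’ here (lowercasing, collapsing whitespace) is an assumption; near-duplicate detection in production usually needs fuzzier matching.

```python
import hashlib

def content_key(text: str) -> str:
    """Hash normalised text so trivially different copies collapse together."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

documents = [
    "The plant passed its annual inspection.",
    "The plant passed  its annual inspection.",  # duplicate with extra whitespace
    "A new inspection is scheduled for March.",
]

seen: set[str] = set()
unique_docs = []
for doc in documents:
    key = content_key(doc)
    if key not in seen:  # keep only the first occurrence of each content hash
        seen.add(key)
        unique_docs.append(doc)

print(f"Kept {len(unique_docs)} of {len(documents)} documents")
```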
Alexander Elliot believes, ‘Areas such as strategic alignment, available skills & expertise, and organisational culture should be assessed to help in systematically evaluating your organisation’s overall readiness to adopt AI.’
Our FDM Consultants are actively helping clients become AI-ready, as well as supporting the identification of use cases and promoting the adoption of AI across user groups.
Facilitating AI readiness
FDM Consultant Christopher Shaw works in the AI Ethics, Risk and Compliance team for an energy client, focusing primarily on their preparation for the upcoming EU AI Act, which will be implemented across Europe (and beyond) in the next few months.
‘Being able to work on the first legally binding AI regulations in human history in such a global company has given me a great opportunity to learn about AI, its uses, and everything in between.’
FDM Consultant Norbert Csecs works as an AI Integration Specialist for a global private healthcare insurance company. His role is to help the client with their Copilot for M365 rollout, mainly in administration, risk management, training, and ROI measurement, providing daily support.
‘We’re still at the beginning for the most part, but we’re already going from fundamentals and onboarding to deep dives and advanced prompting.’
‘I love listening to people’s questions and worries, particularly enjoying explaining how GenAI decides what to say next and why it’s not anywhere near even slightly taking over the world.’
Use cases for AI
The first step in any organisation’s AI journey is to identify the right use cases that align with their business objectives.
Amin Manzari is another FDM Consultant working as a BA/Project Manager for an energy company.
‘In my present capacity as a Business Analyst/Project Manager, I am part of a team that delivers Artificial Intelligence (AI) solutions across the business spectrum. My responsibilities encompass the identification, scoping, and management of these projects, thereby empowering my team to harness cutting-edge technology to generate substantial business value.’
FDM Consultant Aaron Morgan and his team recently created an AI Chatbot for a public sector client which can parse through their Annual Assurance Reports to return unique and insightful responses to user queries.
‘In my most recent project for the client, the AI chatbot has been very interesting, as I have had the opportunity to utilise the power of AI to fulfil unique client requirements.’
Bowei Wu works as a Machine Vision Lead for Generative AI for a global energy client.
‘I manage a portfolio of Generative AI projects to do with visual data, mainly driving the team and the projects. I also look to build the funnel of projects by exploring new opportunities. I’m currently working on a number of Image Generation projects, which are very exciting as they mean working with the latest technologies.’
The challenge
A key challenge is that organisations implementing AI often have to stop because they lack a data strategy. Even where an established data strategy exists, there may be no data governance measures to ensure that data is not mishandled or overshared and that it is of good quality.
FDM enabling data governance transformation
A government safety regulator approached FDM with an urgent need to establish a data strategy and underpinning data governance frameworks and artefacts which would enable alignment and benchmarking against the National Data Strategy.
We conducted an initial round of discovery sessions across the different in-flight programmes and departments and identified two key challenges:
- An absence of a central data strategy, leading to inconsistent outcomes
- The lack of a corporate data catalogue and dictionary, meaning information was not readily available to users
Our solution
Our FDM team developed a series of foundational artefacts that enable the Regulator to track their data governance against a newly formed data strategy, linked to the National Data Strategy and the Government Digital Standards (GDS).
The creation of a common data model blueprint and Data Governance Board artefacts is a prime example of how we have introduced frameworks and approaches that can drive standardisation across the organisation. The client can now better understand the systems they have and the data contained within them.
FDM’s AI offering
At FDM, each of our five Practices prepares Consultants for unique roles within the ever-evolving AI landscape.
All our consultants are introduced to AI as a core capability. Our AI Engineers can support with everything from Prompt Engineering to building custom AI solutions for your business. We are also working with partners to encourage the ethical use of AI and to govern it through GRC (Governance, Risk and Compliance) tools.
Is your business getting ready for AI? We can help you get there.
Book a discovery session with us today.