Unlocking Success: The Ultimate Guide to Implementing AI in Your Business

Paul Brown
Published: 01.11.2023

Is your company looking to adopt AI automation? Discover the opportunities and ethical challenges of responsible AI adoption, the persistent AI skills gap, and how this cutting-edge tech is revolutionising the way we work.

Executive Summary

Industry experts from IT, media and telecoms joined our Industry Insight event, in collaboration with Microsoft, for an in-depth exposé on all things AI. Through panel discussions, fireside chats and interactive Q&A sessions, we uncovered some interesting perspectives on the most critical issues concerning AI at this nascent stage of its integration into business, including ideas around regulation and corporate social responsibility for safe AI use.


Adoption of AI

First things first. AI isn’t new. In fact, it’s been around since the 1950s as the field of computer science that seeks to create machines that replicate human intelligence. However, we’ve seen rapid development in recent years.

Case in point: ChatGPT scooped up an unprecedented 100 million users in the first two months of its rollout. For context, the same milestone took TikTok nine months and Instagram two and a half years.

What is different about Generative AI?

From its inception, AI has undergone significant evolution. There’s been a lot of work in this space since 2018, but 2021 was when generative AI really hit the market.

Generative AI creates new written, visual and auditory content from prompts or existing briefs.

Are these apps good or bad?

Generative AI applications let us create new content and boost productivity. However, as with most new tech, they can pose a threat in the wrong hands.


What do employees want from AI?

  1. 62% of people say they spend too much time looking for information, whether in paper or digital format. So one of the top uses of AI is a better way of searching across content to retrieve relevant information.
  2. Meetings and action items – employees want better ways to plan their days, a need reflected in the growing popularity of digital assistants like Alexa, Cortana and Siri.

What do employers want from AI?

60% of business leaders see innovation as both a challenge and a focus.

There is paranoia around AI replacing people and reducing headcount, but in fact most employers would far rather keep headcount steady, increase employee productivity and ensure people focus on high-value tasks if parts of their work become automated.

Top Use Cases for AI

  1. Contact Centre transformation

So many organisations, particularly in financial services, are thinking about digital contact centres and how they can use AI to create a better customer experience.

In a world where consumer spending is changing and it’s getting increasingly difficult to retain customers, AI-backed hyper-personalisation is a top trend in the market. Microsoft has infused generative AI capabilities right across its apps with Copilot.

AI will be there to support people, not replace them. Microsoft Copilot works alongside you in whatever application you’re using to help you be more productive and improve your job satisfaction and experience.

Barriers to adoption

What are the main barriers to safe and responsible AI adoption?

Jonathan Young, Chief Information Officer at FDM Group, believes the first step is to identify your business problem and what you’re trying to achieve with AI. Adoption for adoption’s sake shouldn’t be the approach. At FDM, we are using AI in our HR department to evaluate staff performance.

As we start to embed automation into business, we know that some jobs will change as a result and some new jobs will be created, as with any new tech. With a high proportion (86%) of people wanting to use AI to find the information they need, there is a workforce ready to embrace some of the benefits AI can bring.

Adoption is a cultural challenge. Organisations need to create an environment where people feel comfortable using AI. Businesses that are open to transformation need to give people processes, guidelines and space for experimentation. Employees need to understand what the business is trying to do so they feel bought in.

Learning is a permanent requirement of work. The average life expectancy of a skill is reducing to 4.5–5 years, and that will only accelerate. Creating the right environment where people can go on their own learning journeys, develop new skills and understand what the organisation is doing is critical to any business transformation involving AI.

AI and security

One of the main themes that comes up in any discussion around AI is the issue of privacy and security and how vulnerable it makes us to cyber threats.

The risk of data loss with AI has been seen at organisations like Samsung, where members of staff inadvertently leaked meeting notes and source code while using the ChatGPT platform.

Claire Dickson, CIO at DS Smith, mentioned that her company had banned the public use of generative AI to avoid the risk of data leaks.

Simon Lambert, Chief Learning Officer at Microsoft UK spoke of creating guidelines for AI use and coming together as an industry to agree best practices.

Karl Hoods, CDIO at the Department for Science, Innovation and Technology, echoed Simon’s thoughts about the need for guidelines and responsible AI standards around privacy, reliability, transparency, security and inclusiveness.

He mentioned how his department created an AI blueprint with recommendations for the government on how we should think about AI regulation.

Bias in AI

All AI-generated content reflects inherent societal biases. How can organisations mitigate them?

Jonathan Young stressed that AI decisions rely on input data, which can carry the biases of its contributors. It’s crucial to be aware of your own biases, as data may skew towards specific nationalities or faiths, necessitating vigilance when using such technology.

If we’re making decisions that affect people’s lives and employment, we have to understand what these engines are doing.

However, Jonathan believes censoring and banning is a knee-jerk reaction and doesn’t actually address the issue at its root. He stressed the importance of embracing the lessons the data throws up to get to the big picture.

Another pertinent issue is the need for human review of data captured through generative AI. This requires significant investment in education – in ethics, decision-making and high-level skills. Increasingly, the importance of what AI does will be decided by humans rather than by AI itself.

Hidden biases

Most biases are hidden, and if AI makes decisions and recommendations based on those biases, there will be an issue.

Algorithms indiscriminately impose patterns on data, so there is a need for transparency in how they work.

Jane Pitt, Training Programme Manager at Microsoft, mentioned the case of Northpointe, a company in the United States that used an automated system to determine whether a criminal should be granted parole. After the system had been in use for a number of years, it was discovered to be discriminating against black people.

Northpointe claimed there was no bias as they didn’t record the ethnicity of prisoners. However, it was eventually found that the system was working on the basis of people’s addresses – with certain postcodes known for having larger minority populations.

This example corroborates the point that biases aren’t obvious.
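To make the postcode example concrete, here is a minimal, invented sketch of how a system that never records ethnicity can still discriminate through a correlated proxy feature. All of the data, postcodes and group labels below are hypothetical, chosen purely to illustrate the mechanism described above.

```python
# Hypothetical illustration: a model that never sees group membership can
# still discriminate through a correlated proxy feature such as postcode.
# All records below are invented for demonstration purposes only.

from collections import defaultdict

# Toy records: (postcode, group, reoffended). The "model" only ever sees
# the postcode; 'group' exists purely so we can audit the outcome.
records = [
    ("AB1", "group_x", True),  ("AB1", "group_x", True),
    ("AB1", "group_x", False), ("AB1", "group_y", True),
    ("CD2", "group_y", False), ("CD2", "group_y", False),
    ("CD2", "group_y", True),  ("CD2", "group_x", False),
]

# "Model": score each postcode by its historical reoffence rate.
by_postcode = defaultdict(list)
for postcode, _, reoffended in records:
    by_postcode[postcode].append(reoffended)
risk = {pc: sum(v) / len(v) for pc, v in by_postcode.items()}

# Audit: denial rate per group, even though group was never an input.
denied = defaultdict(lambda: [0, 0])
for postcode, group, _ in records:
    denied[group][0] += risk[postcode] >= 0.5  # denied if postcode is "high risk"
    denied[group][1] += 1

for group, (n_denied, n_total) in sorted(denied.items()):
    print(group, n_denied / n_total)
```

Because one group happens to live mostly in the higher-scoring postcode, its members are denied three times as often, even though the model was never told who belongs to which group. This is exactly why auditing outcomes, not just inputs, matters.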

We need diverse voices and better representation across development stages to mitigate these biases.

Digital-first, not Digital-only

In an increasingly tech-dependent world, it’s important to adopt a digital-first instead of a digital-only approach to ensure we aren’t isolating certain groups.

While there are business advantages to being tech-first, our tech has to represent the people it’s meant to serve. For example, many supermarket tills are now self-service, which can be a challenge for the elderly and those with mobility issues.

The tech industry lacks diversity, and not enough thought goes into the impact that innovations and advancements can have on users.

If it’s not diverse, it’s not ethical

Now more than ever we need greater representation and diverse voices right from the development and testing phases of any new tech. This would need a radical overhaul of training and recruitment policies.

Is a degree in computer science a pre-requisite to work in tech?

Tech is about lifelong learning because it moves so fast. And returning to the point about the life expectancy of skills being just four to five years: even if a candidate learnt computer science at university, those skills would likely become obsolete within a few years.

Jacqueline de Rojas CBE, President of Digital Leaders and board member of techUK, believes we need micro-injections of learning through short, accessible courses.

‘In a world that is increasingly technology dependent and moving at a much faster pace, we have to think of ways in which we educate differently and that means micro-injections of learning and thinking about how we meet the needs of industry as we prepare for what comes next.’

Jacqueline de Rojas CBE, President of Digital Leaders and Board member of techUK

At FDM, all our training courses are between eight and fourteen weeks and are designed to equip consultants with the technical skills they need to succeed with our clients. Consultants gain hands-on experience in a development environment using agile methodologies, working on real-world applications, from web-based projects to machine learning and data science. Additionally, our training includes modules on professional skills to ensure our consultants are ready to make a difference for our clients from day one.

Importance of explainable AI 

Dr. Janet Pitt, Chief Data Scientist at Napier, spoke about explainable AI and the importance of understanding the explanation or logic behind AI decisions.

As generative AI grows, why is explainability so important?

Generative AI was designed to produce convincing text – greetings cards, fan fiction, poems, song lyrics and so on. When you give it challenges that require precision, it will give you something that looks feasible but doesn’t necessarily relate back to precise data.

If we use AI to review and condense large amounts of text, how can we trust AI has understood the text without it explaining its reasoning?

AI is bad at saying ‘I don’t know’ because we’ve trained it to have an answer. As a result, it always gives its best guess.
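One common remedy for the “always gives its best guess” problem is to add an abstention rule on top of the model’s raw output. The sketch below is a hypothetical, simplified illustration of that idea – the `answer` function, labels and scores are all invented, and real systems would use properly calibrated probabilities rather than raw scores.

```python
# Hypothetical sketch: forcing a system to say "I don't know" by adding a
# confidence threshold on top of its raw scores. Scores here are invented;
# real systems would use calibrated probabilities.

def answer(scores: dict[str, float], threshold: float = 0.7) -> str:
    """Return the top-scoring label, or abstain when confidence is low."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "I don't know"
    return label

print(answer({"Paris": 0.95, "Lyon": 0.05}))                # confident answer
print(answer({"Paris": 0.40, "Lyon": 0.35, "Nice": 0.25}))  # abstains
```

The point is that saying “I don’t know” is a design choice: unless a threshold like this is built in, the system will always return its best guess, however weak.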

In future there will be text and video that look very convincing but aren’t real – including manufactured CCTV footage. Legislation is being created to address these concerns: in addition to GDPR, the EU AI Act is coming into effect to enforce explainability and a human in the loop for anything high-risk.

Certain decisions can’t be made by computers alone – mortgage approvals, medical decisions, stopping someone’s income. Keeping a human in the loop ensures these decisions are never 100% automated.
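In code, a human-in-the-loop policy is often just a hard gate in front of the model. The sketch below is a hypothetical illustration of that pattern – the decision categories, function name and return values are all invented for this example, not part of any real system mentioned above.

```python
# Hypothetical sketch of a human-in-the-loop gate: high-risk decision
# types are never returned automatically, regardless of model output.
# The categories and names here are invented for illustration.

HIGH_RISK = {"mortgage_approval", "medical_decision", "income_suspension"}

def decide(decision_type: str, model_says_approve: bool) -> str:
    """Route high-risk decisions to a human; automate the rest."""
    if decision_type in HIGH_RISK:
        return "escalate_to_human"  # never 100% automated
    return "approve" if model_says_approve else "reject"

print(decide("mortgage_approval", True))   # always escalated, even on "approve"
print(decide("newsletter_signup", True))   # low-risk: safe to automate
```

The design point is that the escalation check comes before the model’s verdict is ever consulted, so no amount of model confidence can bypass the human review step.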

Jane also stressed the importance of education and ethics training, especially in schools, so young students understand the criminal liability attached to inappropriate use of AI.


AI is going to permeate every industry, and no one will be immune to its capabilities.

However, like any new technology, it can pose a threat in the wrong hands. Effective adoption will need the right guidelines and early adopters of AI tech need to invest in extensive skills training so that their businesses and workforces can utilise its benefits whilst mitigating risks.

The main challenges of AI, including inappropriate use and data and intellectual property theft, have led to legislation being developed around AI governance. However, education right from school level is imperative to reinforce the concepts of accountability and criminal liability.

Biases exist in society and are reflected in the data AI is trained on and the content it generates. Promoting diversity and better representation in tech through an overhaul of existing recruitment policies is the first step in addressing these biases.

The purpose of AI is to boost human productivity by automating repetitive tasks, not to replace human jobs. So whilst AI can be useful for scanning large volumes of data, critical decisions affecting people’s lives will have to be made by humans.

Finally, different people want different things from AI. Employees typically want to use AI to find information quickly and employers want to use it to improve workforce productivity. As AI continues to touch different aspects of our lives we need to be prepared to use and manage its capabilities.

How FDM can support your business

At FDM we have been connecting people with technology for over thirty years. We are committed to offering the highest standards of training to ensure our consultants are equipped with the skills and confidence to achieve success for your organisation.

Our FDM Academy training programmes, including our Ex-Forces, Returners and Graduate programmes, have achieved Tech Industry Gold accreditation from TechSkills.

We collaborate with partners like Microsoft, Amazon Web Services, Salesforce, ServiceNow, Appian and nCino to train our consultants in cutting-edge technology. They are fully certified and can add value to your business from day one of deployment.

Contact Us to find out how we can find the right talent for your organisation.