
Expert insights from Understanding AI: Beyond the Hype

Preeta Ghoshal
Published: 28.10.2025 | Modified: 03.11.2025

FDM Group, in partnership with Northeastern University London, was delighted to present Understanding AI: Beyond the Hype, a series of expert-led talks featuring key insights from academic speakers and industry professionals exploring the real-world impact of AI on businesses and the job market.

The four sessions, held throughout October, explored some of the most compelling discussions around AI adoption, fluency, and its role in our future.

‘AI Ethics in Action’ was the first event of the series. It featured Dr. David Freeborn, Assistant Professor in Philosophy at Northeastern University London, and Gillian Magee, Senior Director responsible for a large portfolio of IT change at AstraZeneca, who shared their insights on the role of ethics in AI and how ethical AI can be used to make better business decisions.


The second event, ‘Create More: The Power and Potential of Generative AI’, looked at some of the exciting and unexpected ways AI tools can transform how your business creates almost anything. Dr. Alice Helliwell of Northeastern University London and Amaritpal Singh Saini of News UK led the session, exploring the opportunities (and big questions) that come with creating alongside AI.


Our third event of the series was an interactive panel discussion on the power of prompts and using AI fluency to build a stronger workforce. Professor Ioannis Votsis of Northeastern University London kicked off the event, followed by a panel session featuring Lee O’Brien (ex-Investec), Naomi Bowman (Baringa), and Matt White (ex-Sainsbury’s).


‘Fake News, Real Risks: AI, Deepfakes, and the Battle for Truth’ concluded our Understanding AI series. Brian Ball (Professor of Philosophy at Northeastern University London) and Chris Paterson (Technology Risk Leader) explored how organisations can defend against deception, protect their reputation, and adapt to a new era of digital threats.


Here are some of the highlights of the series:

AI Ethics is a rapidly evolving, fragmented landscape

AI is now embedded in core products and services across the economy. But with this growth comes complexity. Ethical AI spans disciplines from law and philosophy to computer science and business, and touches on issues like bias, misinformation, sustainability, and existential risk.

Researchers at Northeastern University are trying to shape responsible AI governance using tools such as network analysis, natural language processing and graph processing that bridge philosophy, computer science and other fields.

Training “Philosopher Engineers” for ethical AI

To build resilient, ethical and effective AI systems and tools, we need to train philosopher engineers who think critically, reason responsibly and code competently. Embedding ethical reflection at every stage of AI development – from design to deployment – is essential for building trustworthy systems.

But AI is not a one-size-fits-all solution. To use AI effectively and ethically we need to ask these questions at the three key stages of its lifecycle:

  • Design stage: What are we building and should we build it?
  • Development stage: How and why are we building it this way?
  • Deployment stage: Where, when, and for whom is it appropriate?

Assessing the right use cases should also be a key priority for businesses looking to adopt AI. Sometimes there are much cheaper, simpler solutions to a problem that are less environmentally costly than AI – a simple business process change could be the answer.

Data quality is key

Data sits at the heart of AI tools. Whether you’re building or buying AI tools, understanding the data they’re trained on is critical to identifying risks and ensuring fairness. AI doesn’t fix bad data; it amplifies it.
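
To make this concrete, here is a minimal sketch in Python, using made-up hiring numbers, of how an unconstrained, accuracy-maximising model amplifies the skew in its training data rather than correcting it:

    from collections import Counter

    # Hypothetical historical hiring outcomes per group (1 = hired).
    # The numbers are invented purely to illustrate the point.
    history = {
        "group_A": [1] * 60 + [0] * 40,  # hired 60% of the time
        "group_B": [1] * 40 + [0] * 60,  # hired 40% of the time
    }

    # With 'group' as the only feature, any unconstrained accuracy-maximising
    # classifier converges to the majority outcome for each group.
    model = {g: Counter(o).most_common(1)[0][0] for g, o in history.items()}

    for group, outcomes in history.items():
        rate_in_data = sum(outcomes) / len(outcomes)
        rate_predicted = model[group]  # a constant 0 or 1 for the whole group
        print(f"{group}: {rate_in_data:.0%} hired in the data -> "
              f"{rate_predicted:.0%} hired by the model")

A 60/40 skew in the historical data becomes a 100/0 split in the model’s decisions, which is why auditing the training data matters more than hoping the model will even things out.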

Building ethical AI competence

There are three key pillars to building competencies in ethical AI:

  • Safeguards: Robust research, mitigation plans, and upfront ethical reflection.
  • Awareness: Helping teams understand both the risks and opportunities of AI.
  • Competence: Knowing when AI is the right solution – and when it’s not.

Create More: The Power and Potential of Generative AI

The use cases for generative AI stretch beyond text, images, and creative roles. These tools now have practical applications across industries and teams. But this proliferation of AI poses a threat to musicians, artists, writers – anyone who ‘creates’.

The question is: who owns the intellectual property rights to a creation? Is it the person who writes the prompt for an AI tool to produce an output, or the artists whose work the AI simply collates? This leads to a deeper dive into the creative capabilities and limitations of AI tools.

What can we do with AI that we couldn’t do before? What’s been enabled by this new medium?

What drives creativity? 

Creative flair and human motivation are at the core of any creative output. AI doesn’t actually ‘create’ anything new. It remixes or collates existing content.

But in a future defined by human-AI collaboration, what role can AI play in enhancing human creativity?

  • AI can come up with iterations of our ideas
  • AI can act as a sounding board to evaluate ideas
  • AI can generate artificial images on prompt
  • AI can automate the production of rapidly changing images
  • AI can transcribe content into multiple languages, making it accessible to wider audiences

However, despite the efficiencies AI brings, it’s important to retain knowledge and control of the systems, tools and processes we use, and to keep final sign-off on them.

AI is a technology, not a solution

The best way to leverage AI is to assess how it can aid the various stages of the creative process, from ideation to final execution, and then to use the right prompts to achieve the desired output – all whilst retaining human oversight.

This brings us to the topic of prompts and our third event of the series.


AI Fluency and Prompt Power: Building a Smarter Workforce

“Give AI the same consideration as you would when training a junior colleague.”

A good prompt is what drives AI tools to generate useful outputs. To create better AI models, we need quality data as well as human critical thinking to interrogate the information we’re given.

By challenging the results AI generates, we can improve the quality of chatbot outputs. AI hallucinates: if you get an answer that looks great, check that the sources behind it are real.
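
As a minimal illustration of that habit, the hypothetical helper below (our own sketch, not any product’s API) extracts the URLs an LLM answer cites and checks whether they at least resolve:

    import re
    import requests

    def extract_urls(text: str) -> list[str]:
        """Pull plain http(s) URLs out of an LLM answer."""
        return re.findall(r"https?://[^\s\"'<>)\]]+", text)

    def check_sources(llm_answer: str, timeout: float = 5.0) -> dict[str, bool]:
        """Return {url: reachable} for every URL the answer cites."""
        results = {}
        for url in extract_urls(llm_answer):
            try:
                resp = requests.head(url, timeout=timeout, allow_redirects=True)
                results[url] = resp.status_code < 400
            except requests.RequestException:
                results[url] = False
        return results

A reachable link doesn’t prove the claim it is attached to, but a dead or never-registered one is an immediate red flag for hallucination, and a prompt for a deeper manual check.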

On the other hand, we don’t just need humans prompting machines; we also need machines that learn to counter-prompt humans, teasing out intended meaning through apt follow-up questions. AI models that learn to counter-prompt can reduce miscommunication.

Leveraging prompts

Prompts are becoming increasingly valuable assets, and there is a need to retain them so that good results can be recreated.
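
What that could look like in practice is sketched below: a small, versioned prompt library in Python, where the file name and fields are illustrative rather than any standard:

    import json
    from datetime import datetime, timezone
    from pathlib import Path

    STORE = Path("prompt_library.json")  # illustrative location

    def save_prompt(name: str, template: str, model: str, notes: str = "") -> None:
        """Append a new version of a named prompt, with metadata to rerun it later."""
        library = json.loads(STORE.read_text()) if STORE.exists() else {}
        versions = library.setdefault(name, [])
        versions.append({
            "version": len(versions) + 1,
            "template": template,
            "model": model,  # the same prompt can behave differently per model
            "saved_at": datetime.now(timezone.utc).isoformat(),
            "notes": notes,
        })
        STORE.write_text(json.dumps(library, indent=2))

    save_prompt(
        name="meeting-summary",
        template="Summarise the transcript below in five bullet points:\n{transcript}",
        model="claude-sonnet",  # hypothetical model label
        notes="Ask for bullet points, not prose.",
    )

Recording the model alongside the template matters because the same prompt can produce quite different outputs across models and model versions.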

However, it’s important to understand that whilst we use AI tools to improve efficiencies, we can’t trust them implicitly and need to install proper guardrails around them.

For example, we have been writing software for over 50 years. Asking Claude or similar LLMs to write our code from a three-line prompt is taking a step back.

The other overarching question is, if we fully integrate AI tools in our day-to-day, what happens when they don’t work?

This further highlights the necessity of having human knowledge and oversight over processes.

After in-depth discussions around the varied possibilities of AI tools and the ways of leveraging this tech to improve efficiencies, the final session of the event series turned to the dark side of AI and considered some of the threats it exposes us to. 


Fake News, Real Risks: AI, Deepfakes, and the Battle for Truth

AI-generated and disseminated misinformation is more than a headline issue: it’s a growing risk to businesses, governments, and society.

Misinformation

Whenever we rely on information from others, we open ourselves to risk. Wrong information arises both through honest human error and through deliberate lies.

Three terms for wrong information are often used interchangeably, but they describe different things:

  • Misinformation (false or misleading content, shared without intent to harm)
  • Disinformation (manipulated or fabricated content, designed to mislead)
  • Mal-information (genuine information used to cause harm)

Our personal data is often used to manufacture misinformation: algorithms use our information to show us content that reinforces our existing views, creating echo chambers.

The stakes are incredibly high: the consequences can include reputational damage, operational disruption, financial loss, cybersecurity threats, and strategic impact.

Disinformation spreads through:

  • Traditional media
  • Disinformation news sites
  • Social media
  • Public figures
  • Encrypted messaging services like WhatsApp
  • Offline channels

Common threat vectors:

  1. AI-generated deepfakes
  2. Social media amplification
  3. Foreign influence operations
  4. Internal leaks or misinformation
  5. Competitor-driven misinformation

How can we mitigate risks?

There are a number of steps, ranging from low-cost to high-cost, that organisations can take to mitigate AI-related risks. From implementing monitoring and detection capabilities to delivering regular compliance training and awareness sessions, organisations need to plan ahead to stay on top of evolving threats.

Risk management is an organisation-wide priority, but it has to come from the top down. Business leaders need to understand their organisation’s vulnerabilities and create a resilience strategy to respond to threats. They must then test their resilience responses regularly, against a variety of scenarios.

Understanding AI: Beyond the Hype was an important series exploring different aspects of what is arguably the most transformative and disruptive phenomenon of the 21st century.

From the importance of ethics in AI to using it as a collaborator in the creative process, and from the power of prompt engineering and the ownership of creative credit to AI-generated risks, the series surfaced ideas that will shape how we navigate this new digital era.

One thing is clear: AI has become an inextricable part of our present, and its impact on our future will continue to grow. To maximise the value we get from it, we have to be on top of its capabilities as well as its limitations. At FDM, we’re preparing for a future defined by human-AI collaboration.

Find out how our AI-enabled talent can help your business unlock smarter, faster growth.

Work with us.
