Lawrence Tallon, chief executive at the Medicines and Healthcare products Regulatory Agency (Credit: MHRA)
The Medicines and Healthcare products Regulatory Agency (MHRA) has launched a call for evidence on how AI in healthcare should be regulated.
It is asking members of the public, clinicians, industry and healthcare providers to share their views to support the work of the National Commission into the Regulation of AI in Healthcare, which was formed in September to help speed up access to AI tools such as ambient voice technologies.
The commission, chaired by Professor Alastair Denniston, head of the Centre of Excellence in Regulatory Science in AI and Digital Health, brings together AI leaders, clinicians, regulators and patient advocates to advise the MHRA on the future of health AI regulation.
Lawrence Tallon, chief executive at the MHRA, said: “AI is already revolutionising our lives, both its possibilities and its capabilities are ever-expanding, and as we continue into this new world, we must ensure that its use in healthcare is safe, risk-proportionate and engenders public trust and confidence.
“The national commission brings together a host of experts including patients’ groups, clinicians, industry, academics and members from across government. Today we are asking the public to contribute by sharing their thoughts, experiences and opinions.
“We want everyone to have the chance to help shape the safest and most advanced AI-enabled healthcare system in the world at this truly pivotal moment.”
Data from the Nuffield Trust, published on 3 December 2025, show that 28% of GPs use AI tools in their clinical practice, but that a lack of regulatory oversight of AI is a major concern.
Key themes in the call for evidence include modernising the rules for AI in healthcare, keeping patients safe as AI evolves, and clarifying how responsibility is distributed between regulators, companies, healthcare organisations and individuals.
Professor Denniston said: “We are starting to see how AI health technologies could benefit patients, the wider NHS and the country as a whole.
“But we are also needing to rethink our safeguards. This is not just about the technology ‘in the box’, it is about how the technology works in the real world.
“It is about how AI is used by health professionals or directly by patients, and how it is regulated and used safely by a complex healthcare system such as the NHS.”
The commission will focus on system-wide implementation challenges rather than just technology approval, and aims to support the ambitions of the 10 Year Health Plan and the Life Sciences Sector Plan.
Deputy chair of the commission, Professor Henrietta Hughes, patient safety commissioner for England, said: “Patients bear the direct consequences of AI healthcare decisions, from diagnostic accuracy to privacy and treatment access.
“The lived experience and views of patients and the public are vital in identifying potential risks and opportunities that technologists and clinicians may miss.
“Your views matter and each of us has the opportunity to shape the role AI will play in our lifetime, and for the generations to come.”
The call for evidence runs from 18 December 2025 to 12pm on 2 February 2026.