One in five GPs already incorporate AI tools into clinical practice, reveals BMJ survey
A recent online survey conducted by the BMJ has found that 20% of UK general practitioners (GPs) are incorporating generative artificial intelligence (AI) tools, such as ChatGPT, into their clinical practice, despite a widespread lack of formal training in how to use these technologies properly.
The results, published on 17 September 2024, show that 205 of the 1,006 GP respondents reported using generative AI tools in their work. Of those practitioners, 29% said they had used AI to generate clinical documentation following patient consultations, while 28% had used it to suggest potential differential diagnoses.
According to the survey, ChatGPT is the most popular large language model (LLM)-based chatbot among UK GPs.
Dr Charlotte Blease, an associate professor at Uppsala University in Sweden and the study’s lead author, suggested that the tools’ appeal lies in easing workload. “GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning,” she explained.
However, Dr Blease also cautioned against over-reliance on AI in clinical decision-making. “These tools have limitations since they can embed subtle errors and biases,” she stated.
The study also raised concerns about the absence of official guidance and workplace policies on the use of AI in clinical practice. “Despite a lack of guidance about these tools and unclear work policies, GPs report using them to assist with their job,” Dr Blease observed.
She further argued that the medical community needs to strike a balance by both educating current and future practitioners about the benefits of AI—such as summarising complex information—and highlighting the risks, including hallucinations, biases, and the potential compromise of patient privacy.
One of the most pressing risks she emphasised is the potential breach of patient confidentiality. “Our doctors may unintentionally be gifting patients’ highly sensitive information to these tech companies,” Dr Blease warned, underscoring the uncertainty around how companies behind generative AI models may use the data they collect.
The BMJ survey was conducted in February 2024 and distributed via Doctors.net.uk, a clinician marketing service, to a non-probability sample of GPs. The monthly ‘omnibus survey’ collects responses to closed-ended questions from a quota of 1,000 participants.
GPs are not alone in their openness to the technology: a separate study commissioned by the Health Foundation, published in July 2024, found broad support for AI in healthcare among both NHS staff and the general public, with 76% of NHS staff and 54% of the public in favour of using AI for patient care.
Despite the enthusiasm, NHS England’s director of AI, imaging, and deployment, Dom Cushnan, has urged a cautious and evidence-based approach. Speaking at the Digital Health AI and Data event in October 2023, Cushnan underscored the need for health systems to ensure that AI tools are suitable and rigorously tested before being fully integrated into clinical settings.
While acknowledging the excitement surrounding AI’s potential, Cushnan emphasised that the technology must pass stringent review to establish its efficacy and safety for patient care. “AI tools must undergo a rigorous process before they can be integrated into clinical practice,” he said.
This growing integration of AI into the medical profession highlights a significant shift in healthcare, yet it also raises critical questions about training, patient safety, and ethical considerations as the technology continues to evolve.