Beyond the Hype: How Language Shapes Public Trust in Artificial Intelligence
The words we choose when talking about artificial intelligence are never neutral. They frame how we understand its purpose, its risks and its place in society. Recent rhetoric around AI’s role in healthcare has been particularly striking. The UK Government’s action plan to “unleash AI” promised major improvements, but “unleashing” is a word we rarely associate with something gentle or safe. When paired with talk of “harnessing” AI’s potential, the message becomes mixed.
Clear, thoughtful communication is essential. As AI becomes more embedded in healthcare, the language we use must help people understand what these tools can and cannot do.
AI’s Growing Role in Healthcare
AI is already making a difference across the UK. In hospitals, it is helping detect pain in non‑verbal patients, speeding up breast cancer diagnosis and supporting faster discharge processes. These tools are contributing to a more responsive, efficient NHS.
AI’s impact is also being felt behind the scenes. Recruitment, training, scheduling and roster development are all areas where AI‑powered tools can streamline processes and free up time for clinicians to focus on patient care. These efficiencies are welcome, but they must be introduced with care.
But this progress is uneven. Not every hospital has access to AI tools, and not every patient or clinician benefits from them. The vision of a future‑ready, AI‑enabled NHS sits alongside the reality of outdated technology, legacy systems and ageing infrastructure.
This gap highlights a growing tension: AI is advancing rapidly, but many healthcare organisations are still held back by significant technological debt. New AI tools must integrate with legacy systems that are often cumbersome and incompatible with modern innovation. Without addressing these deeper infrastructure issues, AI alone cannot deliver the transformation we hope for.
And crucially, AI cannot replace the essential services that keep the system running: staffing hospitals, building new facilities or providing compassionate care. It must complement, not substitute, the skilled professionals who are the backbone of health and social care.
That’s why investment in AI must go hand in hand with investment in people. Retraining and upskilling the workforce is essential so staff can work confidently and safely alongside new technologies.
Digital Inequality: A Barrier We Cannot Ignore
One of the most pressing challenges is digital inequality. Audit Wales reported in 2023 that 32% of people aged 75 and over in Wales had not been online in the previous three months, while 14% of social housing residents and 12% of people with long‑term illnesses were also digitally excluded.
These are often the very people who stand to benefit most from AI‑supported services, yet they may be unable or unwilling to access them. And digital exclusion is not limited to older people; younger people, those with disabilities and those with limited digital skills also face barriers. Even among digitally active groups, trust in AI is far from guaranteed.
The barriers are structural as well as personal. Sixty‑seven percent of rural premises in Wales lack access to full‑fibre broadband. Across the UK, half of digitally excluded people report difficulties accessing NHS services due to usability, language or accessibility issues.
Trust is another major factor. Concerns about security, privacy and understanding how AI works deter many from engaging with digital tools. As AI becomes more visible in healthcare, building public confidence becomes even more important.
That’s why the narratives we construct around AI matter. The potential is exciting, but we cannot prioritise speed over responsibility. Clear communication, transparency and careful deployment are essential. AI must work alongside human expertise, not replace it, and it must help reduce, not deepen, digital inequality.
As AI continues to shape the future of health and social care, we must ensure its benefits reach everyone, especially the most vulnerable. Only then can we truly unlock AI’s potential to improve lives.
By Prof. Genevieve Liveley, Professor of Classics and Turing Fellow at the University of Bristol