Health Scams, Trust Erosion and Grok Misuse: The Growing Dangers of AI-Generated Videos
by Rachel Brown
In an era where artificial intelligence can craft images and videos with disturbing realism, experts and authorities are raising the alarm about how easily deepfake content can mislead, manipulate, and harm the public - from medical endorsements to sexually explicit imagery shared online without consent.
Late last year, Guy’s and St Thomas’ NHS Foundation Trust in London issued a rare public warning after a series of AI-generated videos falsely showed its clinicians appearing to endorse products and treatments they had no connection with, prompting alarm within the UK health sector about the potential for mass misinformation and patient harm.
Medical professionals are far from the only targets. Deepfake technology – where AI synthesises lifelike video and audio – has been used to mimic public figures, influencers, and experts not just for satire, but to mislead viewers and influence behaviour.
Investigations have found hundreds of deepfake videos on platforms including TikTok, Instagram and Facebook featuring fabricated likenesses of doctors giving unverified health advice or promoting unproven supplements to large audiences.
These risks have spread into broader social and legal debates. Across the world, regulators, governments and citizens are grappling with how to control AI tools that make deepfake production deceptively easy yet remain hard to police.
One of the most prominent flashpoints in this debate is Grok, an AI chatbot developed by Elon Musk’s xAI and integrated into the social platform X (previously known as Twitter). Touted by its developers as a useful text and image generator, Grok has rapidly become embroiled in controversy over how users have exploited its capabilities to produce non-consensual, sexually suggestive and explicit deepfake images, including content involving minors.
The backlash has been swift and international. Malaysia and Indonesia temporarily blocked access to the tool after authorities cited repeated misuse for generating obscene and exploitative material. In the UK, media regulator Ofcom launched a formal investigation into whether X and Grok violated online safety laws by failing to prevent the spread of manipulated content that could constitute intimate image abuse or child sexual abuse material.
Grok’s controversies highlight a core tension at the heart of AI innovation today: how to balance openness, creativity and user freedom with meaningful protections against misuse. Analysts note that safeguards intended to prevent harmful outputs are often too weak, easily bypassed or inconsistently applied – making harmful content creation trivial for motivated users.
When widely shared videos appear to show impartial medical experts endorsing health products or therapies, the consequences can ripple outward: eroding trust in public health institutions, misleading vulnerable patients, and amplifying the spread of misinformation that can affect real-world health outcomes.
Governments and regulators are starting to act. In the UK, new laws aim to criminalise the creation of non-consensual intimate AI images and require platforms to limit their spread. However, many experts warn that legislation on its own will not be sufficient. Stronger technological safeguards, clearer industry standards, and more proactive platform moderation will be needed to keep up with the rapid development of generative AI.
At the same time, vigilance is crucial. Individuals are encouraged to question the authenticity of online content and seek verification from trusted sources. For policymakers and technologists, the challenge is to design systems and rules that allow society to benefit from AI while reducing its potential to harm.