# How Accurate Are AI-Generated Doctors' Claims on Social Media?

Key Highlights:

1. Social media is full of fake doctors generated by artificial intelligence.
2. These fake doctors often share false health and beauty information.
3. AI-generated doctors can be very misleading: typically a static image of a purported doctor with minimal facial expression that moves only its mouth.
4. AI poses risks in other ways as well, such as deepfakes in medical diagnostics.
5. Patients should be very careful about trusting information from fake doctors on social media.

     In recent years, videos of AI-generated doctors giving health and beauty tips on social media have become hugely popular. While these videos may appear trustworthy, offering helpful advice on natural remedies or teeth whitening, it is important to remember that not everything they say is true. AI-generated bots can spread false information and pose risks to users, so it is important to understand the accuracy of their claims and the dangers of using AI in the medical field.

     Videos of AI-generated doctors have been shared hundreds of thousands of times, with one video alone garnering over 40,000 likes, 18,000 shares, and 2.1 million clicks. That video claimed that chia seeds could cure diabetes, which is false. While chia seeds contain unsaturated fatty acids, dietary fiber, essential amino acids, and vitamins, and have been found to have a positive effect on health, there is no scientific evidence that they can cure diabetes or bring it fully under control.

     AI-generated doctors also share beauty tips involving household remedies that supposedly whiten teeth or stimulate beard growth. Many of these videos contain false information and are often in Hindi, even when the accounts carry English-language usernames. A 2021 Canadian study found that India had become a hotspot for false health information during the COVID-19 pandemic, potentially due to India's rising internet penetration, increasing social media consumption, and, in some cases, users' lower digital competence.

     AI-generated doctors usually appear trustworthy, wearing white coats with stethoscopes around their necks or dressed in scrubs. However, Stephen Gilbert, professor of Medical Device Regulatory Science at the Dresden University of Technology, warned of the dangers of AI impersonating a doctor. Such AI can be misleading because it conveys the authority of a physician, a figure who holds an authoritative role in almost every society: prescribing medication, making diagnoses, or even determining whether someone is alive or dead.

     AI-generated doctors can also pose risks in other ways, such as deepfakes in medical diagnostics. In 2019, researchers managed to produce CT scans containing false images, showing that tumors could be added to or removed from scan images. Similarly, chatbots can sound reasonable even when they are wrong, and may give users inaccurate advice.

     Despite the risks, AI has played an increasingly prominent role in medicine in recent years. It can help doctors analyze X-ray and ultrasound images and offer support in making diagnoses or planning treatment. However, users should verify the accuracy of the information provided and research which medical team is behind a particular website, app, or account. If this is not evident, it is best to remain skeptical and assume the source is untrustworthy.


Source: frontline_thehindu