CLARITI

Social Networks and the Real Danger of Pseudoscience, Fake News and Conspiracy Theories to Public Health

Ryan McConville, Dan Saattrup Nielsen – University of Bristol

A multimodal machine learning based study of medical misinformation on social networks.

Medical misinformation, popular science and alternative medicine thrive online in social networks, where communities can share their experiences and beliefs. This can be beneficial for self-management of conditions, but there is a dark side: dangerous medical misinformation and pseudoscience spread online, potentially sourced from, and amplified by, ‘influencers’ who lack the credentials to provide medical advice.

This misinformation thrives in online communities, such as those found on social networks, and poses a clear real-world danger to individual and public health. It can take the form of text (e.g., people sharing their own experiences or beliefs, news articles) or video (e.g., YouTube). Further, the rich structural connectivity between users online (e.g., friendships, interactions) provides a valuable source of information. These three modalities (text, video, connectivity) together form a rich source of information that we can use to develop sophisticated multi-modal machine learning models in order to:

  • Understand the key roles of those promoting medical misinformation (influencers).
  • Understand how (and why) this misinformation spreads, and how to detect it automatically.
  • Ultimately, prevent the spread of dangerous medical misinformation via an intelligent system that can detect and evaluate it.
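As a rough illustration of the multi-modal setup described above, one common approach is late fusion: embed each modality separately, concatenate the per-post embeddings, and score the result with a classifier. The sketch below uses NumPy with random placeholder embeddings and untrained weights; all dimensions and variable names are illustrative assumptions, not details of the project itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_posts = 4  # illustrative number of social-media posts

# Hypothetical per-post embeddings for each modality (dimensions are
# illustrative; in practice these would come from a text encoder, a
# video encoder, and a graph/node-embedding method respectively).
text_emb = rng.normal(size=(n_posts, 8))
video_emb = rng.normal(size=(n_posts, 6))
graph_emb = rng.normal(size=(n_posts, 5))

# Late fusion: concatenate the three modality embeddings per post.
fused = np.concatenate([text_emb, video_emb, graph_emb], axis=1)

# A single linear layer with a sigmoid yields a per-post
# "misinformation" probability; the weights here are random stand-ins
# for parameters that would be learned from labelled data.
w = rng.normal(size=fused.shape[1])
b = 0.0
probs = 1.0 / (1.0 + np.exp(-(fused @ w + b)))

print(fused.shape)  # one fused feature vector per post
print(probs.shape)  # one probability per post
```

More sophisticated variants would learn the fusion itself (e.g., with attention over modalities), but the concatenate-then-classify pattern is the simplest baseline for combining text, video, and connectivity signals.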