Combating misinformation on social media: from detection to mitigation
Links to Slides
Part 1 : Misinformation and human perception
Part 2 : Misinformation detection
Part 3 : Misinformation mitigation
The spread of misinformation — i.e. false or misleading information — has a profound impact on our society, and in the age of social media its effect is amplified by the ease of information sharing. The COVID-19 pandemic is perhaps the latest instance that showed us concrete ramifications of misinformation (e.g. vaccine hesitancy), and more broadly how misinformation can cause social division and undermine public trust in governments. The topical diversity, multi-modality and multi-linguality of misinformation, and its complex interaction with humans on social media platforms, present significant challenges and have attracted research from the Data Science and NLP communities. In this tutorial, we present machine learning, text mining and natural language processing techniques, as well as recommender system technologies, for the detection and mitigation of misinformation on social media. The tutorial comprises three parts: (1) misinformation and human perception; (2) misinformation detection; and (3) misinformation mitigation.
By Xiuzhen (Jenny) Zhang and Jey Han Lau
Xiuzhen (Jenny) Zhang Bio:
Xiuzhen (Jenny) Zhang is Professor of Data Science at the School of Computing Technologies, RMIT University. She specialises in text mining, machine learning and social media data analytics, and has published over 100 papers in these areas. Her research has been supported by the Australian Research Council, the Australian and Victorian governments, and industry partners. She is an associate editor of the journal Information Processing and Management. She obtained her PhD from The University of Melbourne.
Jey Han Lau Bio:
Jey Han Lau is a Senior Lecturer in the School of Computing and Information Systems at the University of Melbourne. His research is in Natural Language Processing, and a common theme of his work is building computational models in unsupervised or semi-supervised settings, i.e. learning scenarios where the supervision signal for model training is unavailable or scarce. His research spans a diverse range of applications, including topic models, lexical semantics, text generation and misinformation detection. Some of his work has generated broader community interest beyond academia; for example, his research on text generation and influence operations has been covered by science magazines (New Scientist) and mainstream news media (The Guardian and BBC).