 
Alberto Barron-Cedeno, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Nikolay Babulkov, Bayan Hamdan, Alex Nikolov, Shaden Shaar, Zien Sheikh Ali
 
We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two different languages: English and Arabic. The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification. The lab is completed with Task 5 on check-worthiness estimation in political debates and speeches. A total of 67 teams registered to participate in the lab (up from 47 at CLEF 2019), and 23 of them actually submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over the baselines on all tasks. Here we describe the setup of the tasks, the evaluation results, and a summary of the approaches used by the participants, and we discuss some lessons learned. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and automatic claim verification.
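
The abstract above notes that most participating teams relied on BERT-style neural models. As a purely illustrative, minimal sketch (not any particular team's system), the snippet below shows how a tweet check-worthiness classifier along the lines of Task 1 could be fine-tuned with the Hugging Face transformers library; the model name, the placeholder tweets, and the labels are assumptions for illustration only.

# Minimal sketch: fine-tuning a BERT-based check-worthiness classifier.
# Hypothetical setup: bert-base-uncased and two placeholder tweets; the real
# lab data and the participants' exact models are not reproduced here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tweets = [
    "This new remedy cures the disease in 24 hours.",  # placeholder, check-worthy
    "Good morning everyone, stay safe!",               # placeholder, not check-worthy
]
labels = torch.tensor([1, 0])  # 1 = check-worthy, 0 = not check-worthy

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# One forward/backward pass; a real system would train for several epochs
# on the full training split inside a proper training loop.
batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()

# Tweets can then be ranked by their predicted check-worthiness probability,
# which matches the ranking-style evaluation used for check-worthiness estimation.
probs = torch.softmax(outputs.logits.detach(), dim=-1)[:, 1]
print(probs)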

 
Firoj Alam, Shaden Shaar, Fahim Dalvi, Hassan Sajjad, Alex Nikolov, Hamdy Mubarak, Giovanni Da San Martino, Ahmed Abdelali, Nadir Durrani, Kareem Darwish, Preslav Nakov

 

With the emergence of the COVID-19 pandemic, the political and the medical aspects of disinformation merged as the problem got elevated to a whole new level to become the first global infodemic. Fighting this infodemic is ranked second in the list of the most important focus areas of the World Health Organization, with dangers ranging from promoting fake cures, rumors, and conspiracy theories to spreading xenophobia and panic. Addressing the issue requires solving a number of challenging problems such as identifying messages containing claims, determining their check-worthiness and factuality, and their potential to do harm as well as the nature of that harm, to mention just a few. Thus, here we design, annotate, and release to the research community a new dataset for fine-grained disinformation analysis that (i) focuses on COVID-19, (ii) combines the perspectives and the interests of journalists, fact-checkers, social media platforms, policy makers, and society as a whole, and (iii) covers both English and Arabic. Finally, we show strong evaluation results using state-of-the-art Transformers, thus confirming the practical utility of the annotation schema and of the dataset.
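
One simple way to operationalize such a fine-grained schema, sketched below under assumed question names, is to fine-tune one Transformer classifier per annotation question, using a multilingual encoder so that English and Arabic tweets can share a single model. The question names, label counts, and example tweet are hypothetical placeholders, not the released annotation schema.

# Hypothetical sketch: one classifier per (illustrative) annotation question,
# built on a multilingual encoder; the models below are untrained and would
# need fine-tuning on the released dataset to be useful.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

questions = {
    "contains_verifiable_claim": 2,  # illustrative binary question
    "potentially_harmful": 2,        # illustrative binary question
}

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
tweet = "A home remedy cures the virus overnight!"  # placeholder tweet

for question, num_labels in questions.items():
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-multilingual-cased", num_labels=num_labels
    )
    batch = tokenizer(tweet, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**batch).logits
    print(question, logits.argmax(dim=-1).item())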


Firoj Alam, Fahim Dalvi, Shaden Shaar, Nadir Durrani, Hamdy Mubarak, Alex Nikolov, Giovanni Da San Martino, Ahmed Abdelali, Hassan Sajjad, Kareem Darwish, Preslav Nakov
 
Keywords: COVID-19, Infodemic, Disinformation, Misinformation, Fake News, Call to Arms, Crowdsourcing Annotations
 
TL;DR: Fighting the COVID-19 infodemic beyond factuality; combining the perspectives of journalists, fact-checkers, policymakers, government entities, social media platforms, and society; call to arms
 
Abstract: With the outbreak of the COVID-19 pandemic, people turned to social media to read and to share timely information including statistics, warnings, advice, and inspirational stories. Unfortunately, alongside all this useful information, there was also a new blending of medical and political misinformation and disinformation, which gave rise to the first global infodemic. While fighting this infodemic is typically thought of in terms of factuality, the problem is much broader as malicious content includes not only fake news, rumors, and conspiracy theories, but also promotion of fake cures, panic, racism, xenophobia, and mistrust in the authorities, among others. This is a complex problem that needs a holistic approach combining the perspectives of journalists, fact-checkers, policymakers, government entities, social media platforms, and society as a whole. Taking them into account, we define an annotation schema and detailed annotation instructions, which reflect these perspectives. We performed initial annotations using this schema, and our initial experiments demonstrated sizable improvements over the baselines. Now, we issue a call to arms to the research community and beyond to join the fight by supporting our crowdsourcing annotation efforts.


Full Text 

The novel coronavirus (COVID-19) pandemic, which started at the end of 2019, has been causing great suffering in human society.

According to the World Health Organization, the total number of confirmed COVID-19 cases globally as of April 17, 2020, was 2,078,605, including 139,515 deaths, so many people's lives have been significantly affected by the pandemic (World Health Organization, 2020). Along with the rapid spread of the virus across human societies, a huge amount of information has been generated, much of it misinformation (such as claims about various effective drugs and remedies) or disinformation (such as conspiracy theories claiming that the virus originated as a biological weapon), which causes further harm and panic (Brennen, Simon, Howard, & Nielsen, 2020). Therefore, the global health crisis has also turned into a global information crisis (Xie et al., 2020).

