In a paper published this week on the preprint server Arxiv.org, researchers at Facebook, New York University, and the University of Waterloo detail a question-answering data set — CovidQA — that comprises submissions from the COVID-19 Open Research Dataset Challenge, a collection of tasks based on scientific questions developed with the World Health Organization and the National Academies of Sciences, Engineering, and Medicine. They say that CovidQA, which is a work in progress, could help gauge the accuracy of chatbots and search engines that answer questions about the novel coronavirus.
Countries, health systems, and nonprofits around the world are employing AI natural language tools to triage potential COVID-19 patients. But as our investigation in early April revealed, chatbots in particular rely on inconsistent medical data sources and privacy practices. Data sets like CovidQA could be used to empirically compare the accuracy of the answers supplied by COVID-19 chatbots, exposing gaps in their knowledge and giving users greater peace of mind.