Federico Germani is sounding an alarm. In a research paper he co-authored, the headline says it all: “AI model GPT-3 (dis)informs us better than humans.”
Germani is a researcher at the University of Zurich and the founder and director of Culturico, a non-profit storytelling platform.
In my latest episode of Mediated World, Germani shared that he switched from molecular biology to the social sciences because he felt the need to have a more immediate impact. He’s now exploring the emerging field of bioethics and disinformation with the goal of improving global health.
Based on the results of the study, it turns out we aren’t good at distinguishing AI-generated content from human-created content. In fact, GPT-3 (the model tested at the time) both informs and misinforms us really well.
When respondents looked at a series of tweets (if that’s what they’re still called), they couldn’t tell which were written by AI and which by humans. That’s both impressive and terrifying. Machines are now better at mimicking human communication than humans are at actually communicating.
Interestingly, one of the surprises from the study was that humans are better at identifying misinformation than AI is. For all the social media companies putting AI-based systems in place to reduce misinformation, this suggests they may have the wrong approach. Perhaps AI can serve as a first line of defense, but humans do a better job of spotting fake news.
Given the pace of change and advancement in this space, Germani argues we’re going to need much better media and information literacy moving forward. He suggests that, in some ways, we can set the AI itself aside and simply focus on teaching critical thinking skills.
Federico Germani is a researcher at the University of Zurich and the founder and director of Culturico, a non-profit with the mission of combating misinformation and disinformation through thought-provoking, fact-based storytelling. You can follow him on LinkedIn or on Twitter / X.