It feels like everyone is talking about ‘fake news’! Other related buzzwords include post-truth, post-fact, misinformation, disinformation, filter bubbles, and echo chambers. Despite the temptation, I must accept that I can’t research everything that fascinates me. At the same time, with the term popping up wherever I look, I can’t help but wonder about it. Why the fuss, and why now?
My tentative conclusion is that it is a new face of an age-old fear – the fear of being fooled. And the internet provides a perfect environment that brings out that fear and even heightens it. This post is an unofficial and entirely personal recollection of the history of various discussions and debates pertaining to that topic. Once again, for my own record.
I would probably start with the famous “On the Internet, nobody knows you’re a dog” cartoon, published in The New Yorker in 1993. The 1990s were also when the idea of connecting with somebody without knowing who they really are was played on numerous times in popular culture, generally in two opposite directions: it could be a seed for either ultimate love (e.g. The Contact, 1997; You’ve Got Mail, 1998) or mortal danger (e.g. The Net, 1995).
In the first decade of the 2000s, while writing my PhD thesis, I ran into the pursuit of ‘authenticity’ in the faceless online world again. Through the American jargon “astroturf campaigns”, to be precise. Come to think of it, it is quite a judgmentally loaded term, reminding me of Andrew Potter’s book The Authenticity Hoax (2010). (I liked the Korean front cover more, by the way.)
Information literacy and digital literacy were gaining more and more importance in the meantime. In 2007, funded by the Asia-Europe Foundation, Han at Yeungnam University and I conducted a small-scale study comparing online information search behaviours between South Korean and British university students. Exploratory in nature, the study provided us with very interesting pointers. British students tended to rely principally on the provenance of information sources (e.g. BBC), whereas Korean students were found to place a significant weight on peer users’ inputs (e.g. Naver’s real-time ranking of popular search terms). I bet the landscape is quite different now though.
Along the way I have also come across the CRAAP Test, the PROMPT mnemonic, an abundance of advice on how to stay critical in the era of social media (e.g. Pierre Lévy’s presentation on the topic in 2013), the “nutrition labels for the news” project at MIT, and various attempts to sort real photos from doctored ones in crisis reporting (e.g. one by The Atlantic during Hurricane Sandy in 2012; an ESRC-funded project led by Ella McPherson on “digital human rights reporting by civilian witnesses and the verification problem”). Considering all these efforts, it was quite discouraging to read that, according to a 2016 study from Stanford University, most middle-school students couldn’t tell ads labelled “sponsored content” from real news stories on a website. Have things gotten worse?
I don’t think ‘fake news sites’ were considered much of a concern until this year’s US election. There were not that many to begin with, and I recall that most of the earlier ones, like The Onion and DDanzi, were perceived as socially conscious satire. Merlyna Lim has lately made an interesting point that dirty campaigns go all the way back to the year 1800, when two founding fathers of the US, Thomas Jefferson and John Adams, competed for the presidency. The New York Times has made a similar point that the newness of fake news has been exaggerated. However, this time fake news farms created a powerful synergy with an army of pro-Trump chatbots, eventually overwhelming the election.
Not with fancy bots but with cheap human labour, manipulating online content during elections has a longer history in other parts of the world. At a Freedom on the Net regional meeting earlier this year, going around the table we compiled a list of such examples, ranging from China’s “50-cent party” to Malaysia’s “cyber troopers”. (And I have learnt of some more since, such as Ukraine’s “i-army” and Turkey’s “AK trolls”.) The 2012 presidential election of South Korea was one definite case. The extent of the manipulation has been evidenced by news reports and court hearings, but, incidentally, it was also captured in a paper I co-authored on the popular political podcast Nakkomsu, its explicit endorsement of the liberal opposition, and the systematic counterattacks it faced on Twitter in 2012 from members and supporters of the conservative party.
It looks like the ‘fake news’ discourse has expanded further. Now it’s about understanding the types of misleading information, how it spreads, what the bigger problems surrounding the phenomenon are (e.g. bias, propaganda, desensitisation to lies, and a serious lack of critical digital literacies), and what responsible citizens should and should not do (e.g. fact-checking and taking the time to correct misinformation) to counter those problems.