The misinformation crisis | The World Weekly
Last May in Jharkhand, India, a WhatsApp message warning of a “child-lifting gang” in the area stoked local tensions. Although the message would turn out to be baseless, it sparked clashes which left seven people dead.
Only two years earlier, in Uttar Pradesh, rumours on social media accused a local ironsmith of killing and eating a cow. The inflammatory allegations, which turned out to be false, led to a mob dragging the man from his home and beating him to death.
The man’s name was Mohammad Akhlaq, and his murder underscores just one of the many real-world consequences of the sprawling phenomenon commonly referred to as ‘fake news’.
In this special dossier, The World Weekly investigates how the proliferation of digital misinformation is reshaping our world: swaying elections, influencing policy, eroding the democratic process and even fuelling health epidemics.
Despite its many advantages, the Internet has become a potent tool for special interest groups and governments to manipulate public opinion.
The investigation by former FBI director Robert Mueller (a registered Republican) into allegations of Russian interference in the 2016 US presidential election has highlighted just how effective military-grade misinformation campaigns have become. By ‘creating the news’ (for example, by hacking and releasing the emails of the Democratic National Committee in the months before the vote), polarising the political debate and running anti-Clinton advertising campaigns, Russian actors, some argue, may have swayed the election in favour of Donald Trump.
One alleged Russian troll factory focused on swaying public opinion, the Internet Research Agency, is directly linked to a close ally of President Vladimir Putin, Yevgeny Prigozhin, who is also allegedly connected to covert Russian military operations in Syria and Ukraine.
There has been evidence of similar attacks and sustained polarisation campaigns across Europe too. Just before France voted in the second round of its presidential election last May, material from Emmanuel Macron’s campaign was leaked. The incident was attributed to Fancy Bear, the same Russian hacker group linked to the hacking of the Democratic National Committee a few months earlier.
Henri Verdier, the chief technology officer of the French government, told The World Weekly that whilst the government could not prove who was responsible, it had clear evidence that the recommendation algorithms of YouTube and other major platforms had been gamed to feature controversial content.
It is believed that Russian campaigns have attempted to systematically and repeatedly exacerbate existing tensions by using social media to antagonise movements like Black Lives Matter and the Texas Nationalist Movement in the US. In some cases, protesters have reportedly even been duped into attending local rallies organised by agents in Russia.
In the UK, efforts were allegedly made to influence the Brexit referendum by misrepresenting the EU. Researchers’ estimates of the number of Russian-run Twitter accounts disseminating messages about Brexit in the run-up to the referendum vary enormously, from around 50 to 150,000.
Although the real impact is not clear, Russian activity has been linked to the rise of anti-establishment and populist parties in Austria, Germany, the Netherlands, Hungary, Serbia, Macedonia, France, the UK and more recently Italy.
As with Lenin’s Comintern plan to promote Russian interests globally and foster dissent within the West in the early 20th century, analysts believe this strategy of disruption from within is designed to break up NATO and lift the more recent Western sanctions following Russia’s annexation of Crimea. This has the ultimate aim of aligning governments with the Kremlin’s interests and world view.
“Essentially the information conflict is a component of general conflict. Deriving from that, Russia has made an effort to form structures that are engaged in this matter.” - Sergei Shoigu, Russian defence minister, quoted by TASS news agency
Disinformation has other uses too. It can be used to deflect and reorient blame.
When, in April 2017, the White House accused Syrian government forces of carrying out a chemical attack in Idlib province that killed as many as 100 people, it condemned Moscow for trying to cover it up. Russia “spins out multiple, conflicting accounts in order to create confusion and sow doubts in the international community”, said a declassified intelligence assessment released by the White House.
In Europe, Russia used falsified flight path data and tampered video to blame Ukraine for the shooting down of the Malaysia Airlines flight MH17 in July 2014. The Kremlin maintains it had no involvement in the crash. It has also rejected the notion that it meddled in the American presidential election. “Why have you decided the Russian authorities, myself included, gave anybody permission to do this?” President Putin told NBC News’ Megyn Kelly in an interview earlier this month.
More than 10 years earlier, the administration of George W. Bush and pro-war media spared no effort to convince the international community that Saddam Hussein possessed weapons of mass destruction, a claim that was used to justify the invasion of Iraq, despite UN chief weapons inspector Hans Blix reporting that his inspectors had found no evidence of such weapons.
Islamic State (IS), the militant group which rose on the back of the Iraq invasion, has used social media to lure tens of thousands of people from their homes around the world to their deaths in its self-declared, war-ravaged caliphate, sold to them as an idyll.
The problem with ‘fake news’
Commentators on misinformation agree that the term ‘fake news’ is itself vulnerable to exploitation. Increasingly, rather than an objective denunciation of lies, it is used as accusatory hyperbole intended to discredit detractors. This has serious implications for democracy and free speech.
The phrase “fake news” — granted legitimacy by an American president — is being used by autocrats to silence reporters, undermine political opponents, stave off media scrutiny and mislead citizens. The CPJ [the Committee to Protect Journalists] documented 21 cases in 2017 in which journalists were jailed on 'fake news' charges. - John McCain, United States senator, in a 2018 editorial for The Washington Post
A recent Oxford University study accused the government of Philippine President Rodrigo Duterte of spending $200,000 on “keyboard trolls” tasked with spreading propaganda and bullying critics.
One fake news story purported that NASA, the US space agency, named Mr. Duterte “the best president in the solar system”.
The government has also been accused of clamping down on critical media outlets, such as Rappler whose operating licence was revoked in January. Mr. Duterte made disparaging comments of his own, calling Rappler a “fake news outlet”. One of Rappler’s reporters was banned from the presidential palace.
Governments in 30 out of 65 countries surveyed in Freedom on the Net 2017 — an Internet liberty report by Freedom House, which covers 87% of the world’s Internet user population — were found to have attempted to control online discussions, a new high. In the same report, disinformation was found to have heavily affected elections in 18 countries. The number of countries in which people faced physical reprisals for online speech also rose 50% on 2016, from 20 to 30, and in eight of them people were murdered.
In Myanmar, misinformation has played a central role in what the UN has called a “textbook example of ethnic cleansing”, where more than half a million Rohingya Muslims have fled persecution reportedly at the hands of the army into neighbouring Bangladesh.
Widespread reports of executions, infanticide, burnings, and gang rape have been corroborated by survivor accounts and humanitarian organisations.
Ethnic tensions between the Buddhist majority and Rohingya Muslim minority in Myanmar have been inflamed by misinformation in a country where Internet access has only recently become widespread. “Burma is experiencing an ugly renaissance of genocidal propaganda,” Matthew Smith, the co-founder of Fortify Rights, a human rights organisation working in Southeast Asia, told The Washington Post in December. “And it spreads like wildfire on Facebook.”
Much of the anti-Rohingya content shared on Facebook is misleading at the very least, intended to prey on low levels of digital literacy and fan nationalist sentiment. A common campaign narrative warns of an extremist Rohingya insurgency attempting to carve out a separatist Islamic state within the country.
Another recent fake news story posted on a popular Facebook page falsely claimed that the chief minister of Yangon wanted to destroy two shrines at Shwedagon Pagoda, one of the holiest sites in Buddhism.
Myanmar has been affected by an insurgency group calling itself the Arakan Rohingya Salvation Army (ARSA). In coordinated attacks in August 2017 the group attacked 30 security posts, killing 12 service personnel.
With the advent of the Internet it became easier than ever to seek out news that confirms and reinforces what we want to believe. “All of us can choose to live in an echo chamber,” Stephan Lewandowsky, chair in Cognitive Psychology at the University of Bristol and an expert in how misinformation persists and spreads in society, told The World Weekly. “That is a real problem.”
Hyperpartisan outlets pander to extreme political ideologies. Scientific conspiracy sites, for example, bathe climate change deniers in “alternative facts”.
All of these groups will find small corners of the Internet where they will hear what they want to hear. What is worse, adds Professor Lewandowsky, is that while they are in these bubbles they come to “think that their views are widely shared”.
This becomes problematic when peripheral ideas find their way into government and influence policy. US President Trump has been widely criticised for his contentious views on climate change and Washington’s recent withdrawal from the Paris climate agreement.
Healthcare has been particularly vulnerable to false news, putting public health at risk.
In April 2013, only two months after the vaccine was made routine, the Japanese government suspended its recommendation for the human papillomavirus (HPV) vaccine, an injection given to girls that protects against the strains of HPV which cause nearly all cases of cervical cancer.
The decision followed intense pressure brought about by unverified reports of adverse reactions to the drug. Videos of young girls experiencing seizures were posted online by concerned parents. National media, supported by anti-vaccine campaigners and some doctors, ran sensationalist stories despite a lack of credible evidence.
The vaccination rate fell from 70% to almost zero — where it remains — despite the World Health Organisation’s (WHO) Global Advisory Committee on Vaccine Safety reiterating last June that the vaccine is “extremely safe”, and that opposition to it is based on “spurious case reports and unsubstantiated allegations”.
Riko Muranaka, a journalist and lecturer at the Kyoto University School of Medicine, attempted to quantify the crisis. “In Japan, 3,000 lives and 10,000 wombs are lost to cervical cancer every year,” Dr. Muranaka said in her acceptance speech for the 2017 John Maddox Prize, an award recognising her for publicly defending the vaccine in the face of threats and lawsuits.
“If we have to wait 10 years for any hope of HPV vaccination starting again, how many more wombs will Japanese gynaecologists have to dig out? The answer is a hundred thousand.” - Riko Muranaka, winner of the 2017 John Maddox Prize
Health experts and campaigners have become increasingly frustrated with how misinformation allows archaic and preventable diseases to persist. In 2016, five people died in Malaysia from diphtheria. According to local media, a rumour that the vaccine contained pig DNA took hold among some Muslim families and was partly to blame.
Polio still lingers in Pakistan, Afghanistan and Nigeria, where some communities distrust the western organisations behind the vaccination campaigns. Conspiracy theories have had deadly effects. In January this year, two polio workers, a mother and her daughter, were killed in Pakistan. In 2013, gunmen believed to belong to the jihadi group Boko Haram shot dead nine polio workers in Kano, Nigeria.
Misinformation even had its part to play in the 2013-2016 West African Ebola epidemic. Rumours and half-truths posted on social media propagated misinformation and panic, and ultimately catalysed the disease’s unprecedented spread. In Guinea, conspiracy theories led to distrust of foreign health workers. “Families hid their sick, and, in certain villages, people resisted and prevented humanitarian organisations from doing their work,” Marc Poncin, a biologist with Médecins Sans Frontières, told SciDev.Net.
Asked why healthcare in particular is so at risk of misinformation, Dr. Muranaka pointed to the disconnect between science and society. “The ‘reasoning’ for both the effectiveness and dangers of drugs or vaccines is becoming very subtle,” she told TWW. “When they talk about genetic type, active oxygen or adjuvant-induced autoimmune disease, they look scientific. This trend affects more people who used to be neutral before.”
An uncertain future
For all the talk of a misinformation crisis, it is worth noting that the phenomenon is nothing new. To sell more copies, the New York Sun in 1835 famously ran a story about the discovery of man-bats and other fantastic creatures living on the moon. In 1983, a Soviet Cold War disinformation campaign asserted AIDS was a biological weapon created by the Pentagon.
What is new, however, is the Internet and its ability to propagate information at scale. Anyone can now become a publisher. Social networks and their popularity algorithms can turn a claim (true or not) into a global story reaching hundreds of millions of people in a matter of hours. Relevance algorithms, meanwhile, create dense echo chambers that entrench individual worldviews.
A study published on March 8, 2018, in Science analysed verified true and false news stories distributed on Twitter from 2006 to 2017 and found that not only do lies spread faster than truth, they also reach more people. “Falsehood diffused significantly farther, faster, deeper, and more broadly than truth in all categories of information.”
With platforms such as Google and Facebook now capturing most of the advertising revenue, traditional news publishers are struggling to monetise their content. Over the years this has taken a notable toll on the scope and quality of their reporting, forcing many, local publishers in particular, to close down. What is more, in this ever-growing contest for attention, many publishers have adapted their editorial output to the populist dynamics of the clickbait economy.
As the ‘free world’ struggles to respond to the ‘fake news’ crisis, autocracies such as China, Saudi Arabia, Iran and Venezuela are using it as a newfound justification for mass censorship.
The greatest threats posed by misinformation may be yet to come. Artificial intelligence, for example, will allow the creation of highly realistic fake videos. A rudimentary form of the technology was on display in the “deepfakes” scandal, in which the faces of celebrities were superimposed onto performers in adult films. More advanced tools could fabricate a declaration of war by a politician, or stage a geopolitically inflammatory event that never happened.
We are only beginning to unravel the true implications, intended or not, of this ever-growing, ever more complex phenomenon. In the eyes of many, an uncomfortable future looms: by polarising our societies, misinformation represents one of the greatest threats to the free world order.