I was invited to speak today about disinformation at the European Parliament’s AIDA and INGE Special Committees hearing on the future of democracy in the digital age. My notes below.
I have been asked to speak about foreign interference and disinformation, what research tells us about the challenges they represent and the context in which they occur, and how we might respond.
Foreign interference here includes information operations specifically, but it’s important to remember these are a subset of a wider range of soft power, public diplomacy, publicity, and communications operations.
Disinformation, in line with the EU Commission action plan, I take to mean “verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm” and that is almost always legal speech.
So where are we with foreign interference and disinformation? We need to understand the challenges we face and the context they exist in if we are to address them in credible and effective ways.
Let’s take the challenge first – foreign interference often aims to increase divisions in our societies, undermine trust in institutions, and on that basis influence individual and collective decision making.
Disinformation is only one way in which foreign governments try to achieve these aims – as said, soft power, public diplomacy, publicity, and communications operations are often deployed for the same purposes, and some of these in part work via domestic actors, wittingly or unwittingly.
And, crucially, many other factors are far, far more important in shaping divisions in our societies, trust in institutions, and individual and collective decision making than disinformation (let alone disinformation from foreign interference narrowly).
We face real and serious problems with disinformation, but as with any societal problem, we need to understand the scale and scope and the way the public thinks about it if we want to respond in effective and credible ways.
We don’t always have evidence and research on these issues that is up to date or that captures differences from country to country, but research from the United States (which has had severe problems with disinformation in recent years) can give a sense of the scale and scope, and our own research from the Reuters Institute a sense of how the public sees these problems—
First, on scale and scope, in the United States, one team of researchers found that across offline and online media use, “news consumption [comprises] 14.2% of Americans’ daily media diets” whereas “fake news comprises only 0.15% of Americans’ daily media diet.” – with time spent with news outweighing fake news, highly biased, and hyper-partisan sites by a factor of almost 100.
Looking specifically at Twitter, which may give at least an indication of dynamics on far larger platforms such as Facebook and YouTube where it is harder for researchers to access data, another team found that “fake news accounted for nearly 6% of all news consumption, but it was heavily concentrated—only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news.”
There are wider issues than “fake news” as narrowly defined in these studies, including dangerous narratives that aren’t necessarily tightly tied to discrete checkable claims or specific sites, networked propaganda from specific constellations of political actors and partisan media, and problematic information including various kinds of hyper-partisan material, harassment, and trolling – often targeted at women, ethnic minorities, and marginalized communities.
But this research suggests that, while very real and serious, the scale and scope of identified mis- and disinformation narrowly conceived and measured is more limited and more concentrated in highly partisan subcommunities than is sometimes imagined.
Second, on how the public sees these problems, in the annual Reuters Institute Digital News Report, we ask nationally representative samples of internet news users in 40 markets across the world a range of questions, including, last year, which source of false or misleading information online they are most concerned about. Looking at the 20 EU member states we cover in the report, 11% respond “foreign governments”. By comparison, 12% respond “journalists or news organizations”. And – I’m sorry if it is awkward to mention this in this setting – 38% say “The government, politicians or political parties in my country”.
This finding is important for two reasons.
First because, as with any social problem, public perception will influence the effectiveness and especially the credibility of any responses, and when it comes to disinformation, a large plurality of the public is more concerned about false or misleading information from domestic politicians or domestic news media than from foreign governments.
Second, and again – apologies if this is inconvenient and even rude – I’d say social science research largely suggests the public is often right to be more concerned about domestic sources of false or misleading information.
I think these research findings leave us in a place where we must recognize two things –
First, if the goal of foreign interference is, among other things, to undermine trust in institutions, there is a risk that our very own public conversation about disinformation helps outside actors achieve this goal if we exaggerate the (very real) challenges we face. (This, of course, is also why the Russian opposition has for years encouraged Western liberals not to exaggerate the effect of the Kremlin’s information operations.)
Second, if the goal of foreign interference is, among other things, to undermine trust in institutions, but much of the public sees problems of disinformation as being about domestic politicians and media spreading false and misleading information, then there is a risk in attempts to counter disinformation that are narrowly aimed at foreign interference and do nothing to address what much of the public sees as the main problems. Such attempts may come across to some of the public as self-serving efforts by governments and established elites to protect themselves by censoring outside sources of information and stifling criticism.
So, what can we do, other than taking problems seriously without exaggerating them in ways that spread the fear foreign actors seek to spread, and other than not pursuing responses that may seem so selectively partial as to be self-serving in ways that in themselves may contribute to undermining trust in institutions?
I will not talk about technology and technology companies, because Anna Bulakh and Alex Stamos will focus on this, other than to say that (a) there are clearly a range of tactical technical interventions that can help (labeling, context, introduction of friction, provision of authoritative information, in some cases reduction or removal of content, provision of data and tools to independent third parties) and (b) that our research documents that the public clearly – and in my view rightly – sees technology companies, especially Facebook and the Facebook-owned WhatsApp, but also to a lesser extent search, Twitter, and Google’s YouTube, as part of these problems, and will expect them to be part of the solutions.
All of these tactical and technical interventions can help make a difference. But at a more fundamental level, if what we want to prevent is disinformation from foreign interference increasing divisions and undermining trust in institutions, we should remember that much disinformation is legal speech protected by the fundamental right to receive and impart information, and that narrow direct interventions targeted exclusively at foreign actors may exacerbate these problems, because they might look self-serving to a public that sees domestic actors as central to disinformation problems (a view that is well supported by research). Indirect interventions focused on building resilience might therefore be the best response available.
Open societies with robust institutions will not be free of disinformation and pernicious forms of speech. But they will be better able to withstand the problems they create.
Building resilience would involve investing in strengthening the independent institutions that help people be informed, connect with others, and work together, including –
- Independent news media, whether private sector, public service, or non-profit (some policy options here)
- Independent fact-checkers
- Independent research, ideally with better access to data from both platforms and public authorities
- Independent media literacy programs (for all ages)
And indeed, the 2018 report of the European Commission High Level Group on Online Disinformation recommended, among other things, exactly such investments, calling for the European Union, with its annual budget of well over €160 billion, to commit at least €100 million in funding for independent initiatives (in a context where several foreign governments are estimated to be spending €1 billion or more a year on their own state media and influence operations).
That has not happened in the three years since.
But the recommendations still stand.
It is tempting to imagine that there are simple, cheap, and uncontroversial solutions to the very real and serious disinformation problems that we face. But there aren’t. There are only complicated, often expensive, and sometimes controversial options.
Research can help provide an understanding of the challenges we face and inform the decisions we make. But fundamentally, this is about choices and priorities.
You, and your counterparts at the member state level, are the ones who have a democratic mandate to make them.