
Foreign Interference Including Disinformation

Selected e-articles

Abstract: Foreign actors, particularly Russia and China, are using disinformation as a tool to sow doubts and counterfactuals within the U.S. population. This tactic is not new. From Nazi influence campaigns in the United States to the Soviets spreading lies about the origins of HIV, disinformation has been a powerful tool throughout history. The modern “information age” and the reach of the internet have only exacerbated the impact of these sophisticated campaigns. What, then, can be done to limit the future effectiveness of the dissemination of foreign states’ disinformation? Who bears the responsibility, and where does the First Amendment draw the boundaries of jurisdiction?

Abstract: Russia’s cyber-enabled influence operations (CEIO) have garnered significant public, academic and policy interest. A reported 126 million Americans were exposed to Russia’s efforts to influence the 2016 US election on Facebook. Indeed, to the extent that such efforts shape political outcomes, they may prove far more consequential than other, more flamboyant forms of cyber conflict. Importantly, CEIOs highlight the human dimension of cyber conflict. Focused on ‘hacking human minds’ and affecting the individuals behind keyboards, as opposed to hacking networked systems, CEIOs represent an emergent form of state cyber activity. Moreover, data for studying CEIOs are often publicly available. We employ semantic network analysis (SNA) to assess data seldom analyzed in cybersecurity research – the text of actual advertisements from a prominent CEIO. We examine the content, as well as the scope and scale, of the Russian-orchestrated social media campaign. While often described as ‘disinformation,’ our analysis shows that the information utilized in the Russian CEIO was generally factually correct. Further, it appears that African Americans, not white conservatives, were the target demographic that Russia sought to influence. We conclude with speculation, based on our findings, about the likely motives for the CEIO.
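As an illustration of the kind of semantic network analysis (SNA) the authors describe, one can build a word co-occurrence network from advertisement texts and rank the most central terms. The sketch below uses Python with networkx; the sample ads, tokenization rule and centrality measure are illustrative assumptions, not the study's actual data or method.

```python
# A minimal SNA sketch: build a word co-occurrence network from ad texts
# and rank terms by centrality. The ads below are invented placeholders,
# not the advertisements analyzed in the study.
from collections import Counter
from itertools import combinations

import networkx as nx

ads = [
    "join our community rally for justice",
    "community voices matter join the march",
    "rally for justice and community pride",
]

def tokenize(text):
    # Naive tokenizer: lowercase, split on whitespace, drop short words.
    return [w for w in text.lower().split() if len(w) > 3]

# Count how often each word pair co-occurs within the same ad.
cooccurrence = Counter()
for ad in ads:
    for a, b in combinations(sorted(set(tokenize(ad))), 2):
        cooccurrence[(a, b)] += 1

# Build a weighted co-occurrence graph and rank terms by degree centrality.
G = nx.Graph()
for (a, b), weight in cooccurrence.items():
    G.add_edge(a, b, weight=weight)

for word, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda kv: -kv[1])[:5]:
    print(f"{word}: {score:.2f}")
```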

Abstract: There is widespread concern that foreign actors are using social media to interfere in elections worldwide. Yet data have been unavailable to investigate links between exposure to foreign influence campaigns and political behavior. Using longitudinal survey data from US respondents linked to their Twitter feeds, we quantify the relationship between exposure to the Russian foreign influence campaign and attitudes and voting behavior in the 2016 US election. We demonstrate, first, that exposure to Russian disinformation accounts was heavily concentrated: only 1% of users accounted for 70% of exposures. Second, exposure was concentrated among users who strongly identified as Republicans. Third, exposure to the Russian influence campaign was eclipsed by content from domestic news media and politicians. Finally, we find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior. The results have implications for understanding the limits of election interference campaigns on social media.
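The concentration finding ("only 1% of users accounted for 70% of exposures") amounts to computing the share of total exposures attributable to the most-exposed 1% of users. A hedged sketch in Python with synthetic, heavy-tailed exposure counts (not the study's data):

```python
# Toy illustration of exposure concentration: what share of all exposures
# falls on the top 1% of users? Synthetic data, not the study's dataset.
import numpy as np

rng = np.random.default_rng(seed=0)
exposures = rng.pareto(a=1.2, size=10_000)  # heavy-tailed exposure counts

sorted_exposures = np.sort(exposures)[::-1]           # most exposed first
top_1_percent = sorted_exposures[: len(sorted_exposures) // 100]
share = top_1_percent.sum() / sorted_exposures.sum()
print(f"Top 1% of users account for {share:.0%} of all exposures")
```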

Abstract: Electoral disinformation is feared variously to undermine democratic trust by inflaming incorrect negative beliefs about the fairness of elections, or to shore up dictators by creating falsely positive ones. Recent studies of political misperceptions, however, suggest that disinformation has at best minimal effects on beliefs. In this article, we investigate the drivers of public perceptions and misperceptions of election fairness. We build on theories of rational belief updating and motivated reasoning, and link public opinion data from 82 national elections with expert survey data on disinformation and de facto electoral integrity. We show that, overall, people arrive at largely accurate perceptions, but that disinformation campaigns are indeed associated with less accurate and more polarized beliefs about election fairness. This contributes a cross-nationally comparative perspective to studies of (dis)information processing and belief updating, as well as to attitude formation and trust surrounding highly salient political institutions such as elections.

Abstract: The growing influence of social media platforms, and the disinformation that circulates on them, has transformed the public sphere. How to deal with disinformation is an open normative, empirical and political question in contemporary democracies. In this article, we outline an agenda on the institutional strategies pursued in the European Union (EU), the normative understandings of the public sphere that such strategies imply, and the analytical challenges of undertaking this line of inquiry. We argue that there is an emerging competition in the EU field of disinformation – constructed by actors coming from different pre-existing fields, such as journalism or foreign policy – not only to separate what is ‘true’ from what is ‘fake’, but also to determine the sort of public sphere and democracy we ought to strive for. This perspective allows us to anticipate which actors might be empowered (or disempowered) depending on how disinformation is addressed in regulatory terms.

Abstract: The lobbying of other countries' political and legal elites has emerged as a security risk globally. The securitization of foreign lobbying has prompted the adoption of specialized legal regimes, that is, foreign transparency laws, to enable the scrutiny of how foreign actors lobby. This article analyzes and compares such laws in the United States, Australia and the European Union (EU) with respect to three issues: (1) the definition of a foreign actor, (2) the definition of a type of foreign lobbying and (3) the definition of a protected normative good. While the impetus to legislative reform has often rested on a concern with authoritarian governments, the foreign transparency laws capture diverse kinds of foreign influence activities and actors. They may thus catch in their nets actors or types of influence never intended to be caught in the first place. This has particularly significant implications for the EU as a polity and foreign policy actor.

Abstract: Datafication and the use of algorithmic systems increasingly blur distinctions between policy fields. In the financial sector, for example, algorithms are used in credit scoring, money has become transactional data sought after by large data-driven companies, and financial technologies (FinTech) are emerging as a locus of information warfare. To grasp the context specificity of algorithmic governance and the assumptions on which its evaluation within different domains is based, we comparatively study the sociotechnical imaginaries of algorithmic governance in European Union (EU) policy on online disinformation and FinTech. We find that the sociotechnical imaginaries prevalent in EU policy documents on disinformation and FinTech are highly divergent. While the former can be characterized as an algorithm-facilitated attempt to return to a presupposed status quo (the absence of manipulation) without a defined future imaginary, the latter places technological innovation at the centre of realizing a globally competitive Digital Single Market.

Abstract: Background: Disinformation and historical revisionism have been acknowledged as tools for foreign interference that belong to the landscape of hybrid threats. Historical revisionism plays an essential role in Russian foreign policy towards the post-Soviet space and is closely related to the concepts of the Near Abroad and Russkii Mir (‘Russian World’), as well as to certain ideas contained in the neo-Eurasianist movement. This article examines Russian revisionist narratives disseminated in information and influence campaigns in Europe and against the West. Methods: This study uses a mixed methodology combining desk research, including a literature review, with analysis of the EUvsDisinfo database of cases identified before the February 2022 invasion of Ukraine. Results: The manipulation of historical events has been widely employed by the Kremlin as a tool for foreign interference to achieve strategic objectives. First World War treaties, mainly the Trianon Peace Treaty, as well as the Second World War and the communist and fascist historical experiences of countries within the post-Soviet space, are the pivotal topics from which hostile influence narratives are built. From the analysis of the EUvsDisinfo database, the article identifies seven themes. Conclusions: Our findings suggest that pre-emptively elaborated counter-narratives based on historical evidence and sound historiography can be an effective tool against hostile revisionist narratives that exploit vulnerabilities and specific target groups within European societies.

Abstract: This paper argues that the current disinformation studies literature lacks any sustained analysis of a crucial element in any communication campaign – its sources of funding. The paper argues that crowdfunding platforms are arguably better networked and ‘cross-platform enabled’ than most social media sites for spreading disinformation, and that disinformation actors have weaponized crowdfunding to amplify and sustain the spread of their grievances and forms of disinformation. The paper offers a rich qualitative study of a set of election fraud and 5G themed campaigns on the GoFundMe crowdfunding platform. The study examines how networked content and financial appeals in the crowdfunding pitch can contribute to the disinformation literature and point towards potential solutions.

Abstract: Against the backdrop of the deterioration of EU–Russia relations in recent years, there has been a shift in the awareness of hybrid threats all across the Union. At the same time, there is evidence of a growing political will to strengthen resilience to these threats. While hostile foreign actors have long deployed hybrid methods to target Europe, Russia’s intervention in Ukraine in 2014, interference in the 2016 US presidential election, and repeated cyber-attacks and disinformation campaigns aimed at EU member states have marked a turning point, exposing Western countries’ unpreparedness and vulnerability to these threats. This article analyses the EU’s resilience to hybrid warfare from institutional, regulatory and societal perspectives, with a particular focus on the information space. By drawing on case studies from member states historically at the forefront of resisting and countering Russian-backed disinformation campaigns, this article outlines the case for a whole-of-society approach to countering hybrid threats and underscores the need for EU leadership in a standard-setting capacity.

Abstract: Information pollution in a digitally connected and increasingly polarized world, the spread of disinformation campaigns aimed at shaping public opinion, trends of foreign electoral interference and manipulation, as well as abusive behaviour and the intensification of hate speech on the internet and social media are phenomena of concern to international public opinion. They all represent a challenge for democracy, and in particular for electoral processes, affecting the right to freedom of expression, including the right to receive information, and the right to free elections. There is a growing international effort to deal with these problems. Among the international organizations engaged in seeking solutions is the Council of Europe (CoE). The author analyses the CoE's instruments, both legally binding ones (such as the European Convention on Human Rights) and those of a 'soft law' character, especially Parliamentary Assembly Resolution 2326 (2020), 'Democracy hacked? How to respond?'. She highlights the need for better cooperation between international organizations and state authorities in this matter.

Abstract: Can residents of Ukraine discern between pro-Kremlin disinformation and true statements? Moreover, which pro-Kremlin disinformation claims are more likely to be believed, and by which audiences? We present the results from two surveys carried out in 2019—one online and the other face-to-face—that address these questions in Ukraine, where the Russian government and its supporters have heavily targeted disinformation campaigns. We find that, on average, respondents can distinguish between true stories and disinformation. However, many Ukrainians remain uncertain about a variety of disinformation claims’ truthfulness. We show that the topic of the disinformation claim matters. Disinformation about the economy is more likely to be believed than disinformation about politics, historical experience, or the military. Additionally, Ukrainians with partisan and ethnolinguistic ties to Russia are more likely to believe pro-Kremlin disinformation across topics. Our findings underscore the importance of evaluating multiple types of disinformation claims present in a country and examining these claims’ target audiences.

Abstract: Disinformation is endemic in the digital age, seriously harming the public interest in democracy, health care, and national security. Increasingly, disinformation is created and disseminated by social media algorithms. Algorithmic disinformation, a new phenomenon, thus looms large in contemporary society. Recommendation algorithms are driving the spread of disinformation on social media networks, and generative algorithms are creating deepfakes, both at unprecedented levels. The regulation of algorithmic disinformation is therefore one of today’s thorniest legal problems. Against this backdrop, this Article proposes a novel approach to regulating algorithmic disinformation effectively. It first explores why transparency, intelligibility, and accountability should be adopted as the three major principles of the legal regulation of algorithmic disinformation. Because of its market-based technology development and regulation policy, the United States has yet to adopt any laws regulating algorithmic disinformation, let alone these three principles. The Article then examines legislative reforms in France and China, where the three principles have been translated into legal rules requiring social media companies to disclose their disinformation-related algorithms, render them intelligible to users, and assume legal responsibility for curbing the spread of disinformation on their platforms. Based on a critical discussion of the major problems with these legal rules, the Article puts forward a multi-stakeholder approach to better implement the three principles. It argues that the United States should take the lead in creating and piloting an algorithmic disinformation review system (ADRS). This new system would empower the administrative oversight of algorithmic disinformation and promote the dynamic engagement of social media users and experts in policing algorithms that generate and disseminate disinformation. The ADRS would thus promote the transparency and intelligibility of algorithms and hold social media platforms accountable for curbing disinformation.

Abstract: The advent of social media changed the way we consume content, favoring disintermediated access to, and production of, information. This scenario has been a matter of critical discussion regarding its impact on society, magnified in the case of the Arab Spring and heavily criticized during Brexit and the 2016 U.S. elections. In this work we explore information consumption on Twitter during the 2019 European Parliament electoral campaign by analyzing the interaction patterns of official news outlets, disinformation outlets, politicians, show-business personalities and many others. We extensively explore interactions among different classes of accounts in the months preceding the elections, held between 23 and 26 May 2019. We collected almost 400,000 tweets posted by 863 accounts having different roles in public society. Through a thorough quantitative analysis we investigate the information flow among them, also exploiting geolocated information. Accounts show a tendency to confine their interactions within the same class, and the debate rarely crosses national borders. Moreover, we do not find evidence of an organized network of accounts aimed at spreading disinformation. Instead, disinformation outlets are largely ignored by the other actors and hence play a peripheral role in online political discussions.
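The finding that accounts confine their interactions within the same class can be quantified, for instance, with attribute assortativity over the interaction graph, where values near +1 indicate predominantly within-class interaction. The following Python/networkx sketch uses an invented toy graph, not the paper's Twitter data:

```python
# Sketch: do accounts interact mostly within their own class? Measured via
# attribute assortativity on a toy interaction graph (invented data).
import networkx as nx

G = nx.Graph()
G.add_nodes_from([
    ("outlet_a", {"cls": "news"}), ("outlet_b", {"cls": "news"}),
    ("mp_c", {"cls": "politician"}), ("mp_d", {"cls": "politician"}),
    ("site_e", {"cls": "disinfo"}),
])
# Edges represent observed interactions (e.g. retweets or mentions).
G.add_edges_from([("outlet_a", "outlet_b"), ("mp_c", "mp_d"),
                  ("outlet_a", "mp_c")])

# Values near +1 mean interactions stay within a class; negative values
# mean interactions mostly cross class boundaries.
r = nx.attribute_assortativity_coefficient(G, "cls")
print(f"class assortativity: {r:.2f}")
```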

Abstract: This article explores the transformative role of practices of countering digital disinformation in European Union diplomacy. It argues that an overlooked dimension of the change brought by the rise of digital disinformation is located in the emergence of everyday countering practices. Efforts to counter disinformation have led to the recruitment of new actors with different dispositions and skill sets than those of traditional diplomats and state officials in diplomatic organizations such as the European External Action Service. Focusing on the countering efforts by the East StratCom Task Force, a unit introduced in 2015, the article argues that the composition of actors, the task force's practices and the reorientation in audience perception it reflected, contributed significantly to institutional transformation. Drawing on 23 interviews with key actors and building on recent advancements in international practice theory, the article shows how change and transformation can be studied in practices that have resulted from digitalization in international politics. The article thus contributes to an increased understanding of the digitalization of diplomacy in which new practices can emerge from both deliberate reflection and experimentation.

Further sources

If you are unable to access the article you need, please contact us and we will get it for you as soon as possible.
