
8 Deterrence and Strategic Disinformation: An Overview of Canada’s Responses

Nicole J. Jackson

Over the past half decade, in Canada and globally, considerable public and policy attention has focused on the question of whether and how to respond to disinformation. Among Western governments, there is now a widespread understanding that strategic disinformation, especially through social media, may present serious challenges to national and societal security. A major concern is that some state and non-state actors, both foreign and domestic, are involved in organized strategic deception and/or are intentionally creating confusion by promoting divisive content that plays on pre-existing biases. There is an assumption that these actions may undermine credibility and trust in authorities, as well as in information itself, which makes it easier to manipulate societies and leaders. For the military, a particular concern is that disinformation, and other manipulations of information, may exacerbate or even create a chaotic environment in which to make decisions. Yet, despite the acknowledged need to respond urgently to disinformation, there is no academic or policy consensus about whether, when, and how to do so.

This lack of consensus is partly because the subject of disinformation is fraught with problems of definition and therefore ambiguity. A spectrum of disinformation has always existed, and specific cases have varied, but today actors and processes appear and evolve at an unprecedented rate, some almost instantaneously and with global reach. Within this context, academics and practitioners analyze and advocate for different responses to a range of “disinformation.” Even when the focus is narrowed, for example, to “strategic disinformation” or “foreign strategic digital disinformation,” there is little or no consensus as to which measures, or combination of measures, are needed, are most effective, or are ethical in particular circumstances.

Currently, some experts are questioning whether the theory and practice of “deterrence” may provide some insights into possible state responses in the information environment. The traditional military understanding of deterrence is based on the idea that a potential aggressor’s cost-benefit calculation might be influenced, for example, by the threat of a punitive response (deterrence by punishment) or by the realization that the defender’s preparations are so advanced or effective (deterrence by denial) that the costs of carrying out the aggression would be too great (Snyder, 1961). Of course, when applied to “disinformation” or “strategic disinformation,” as examined in this chapter, the traditional logic of deterrence, which is already controversial, is further complicated. In fact, many would argue that deterrence has little or no place in a discussion about (dis)information. Yet the concept has evolved, and deterrence remains at the core of US and NATO—and hence Canadian—strategy.

This chapter therefore examines whether and how both the concept and theory of deterrence have evolved in ways that make them useful for understanding and assessing responses to “strategic disinformation,” and for uncovering their limits. It also provides a case study of Canadian security and foreign policy responses to ask specifically whether the “widening of deterrence” helps to explain Canada’s burgeoning approach. In doing so, we see how Canada’s fragmented actions may be illuminated by these wider understandings, which also reveal certain actions that are missing.

The chapter first examines the contested definitions of “disinformation” and “deterrence.” Both concepts have been conceptually stretched within scholarly works and in practice, creating uncertainty about “what to deter” and “how to deter.” It then examines recent literature on the theory and practice of deterrence to highlight where “new” understandings have relevance and limits for considering whether and how to respond to disinformation. Next, the chapter examines whether and how Canada is attempting to “deter,” that is, how it is trying to foster restraint and prevent (some) disinformation and its negative effects. It categorizes the government’s major foreign and security actions since 2014 according to whether they fit within deterrence by denial (including technical and strategic denial through resilience) or “deterrence by punishment or imposing costs” (including social and psychological costs). By framing Canada’s major actions through the broadened lens of deterrence, this review also highlights what is missing from Canada’s fragmentary approach. The chapter concludes by suggesting that, despite the contributions of recent literature on deterrence, “democratic suasion” may be a more appropriate and holistic concept to describe and guide Canada’s approach.

Of course, context matters, and the context for Canadian (and other governments’) responses to disinformation is a world in which many state and non-state actors have become more proactive and increasingly global—by co-opting traditional and social media around the world for their own purposes and making more aggressive moves to shape and suppress online and offline discourse at home and abroad. Adding further complexity, but also opening new opportunities for reform, this is increasingly happening through the use of new technologies at a time when Western liberal democracies are confronted with political polarization, media echo chambers, and widespread questioning of the resilience of Western democratic institutions and quality of governance.

The chapter begins by examining the definitional ambiguity and conceptual stretching of “disinformation,” which is necessary to answer a question of key interest here: What is to be deterred? It then examines the conceptual stretching of “deterrence” to discover insights from recent literature that may be applied to the questions of whether and how disinformation can be deterred, and then to the case of Canada.

The Ambiguity of What to Deter: The Conceptual Stretching and “Hybridization” of “Disinformation”

Perhaps the most significant challenge in addressing disinformation is how to define it. Definitional and practical scope ambiguity have direct implications for thinking about whether we can “deter” the challenge, and how. Whether we are, or believe we are, facing “misinformation,” “digital disinformation,” “disinformation,” “strategic narratives,” “information warfare,” or “strategic information campaigns,” the language we use can imply different sets of challenges and strategies that need to be addressed, and thus different responses.

Disinformation is not new, and neither is the study of disinformation. Yet, despite a recent proliferation of studies, the scholarly definition (and identification) of disinformation remains controversial. In practice, disinformation is often very loosely defined, reflecting its ambiguity and complexity, including the blurred line between a variety of often related activities such as cyber-attacks, leaks, and corruption. Today, there is some academic consensus that “disinformation” is best defined as the deliberate dissemination of intentionally false or inaccurate information, as opposed to “misinformation,” which is the act of spreading false information unintentionally, including when intent cannot be determined (Jack, 2019; Jayakumar et al., 2021; Lanoszka, 2019; Tucker, 2018). Disinformation, it has been shown, can be disseminated through the written word and visually, by traditional means (e.g., newspapers, radio, and TV), and by newer digital technologies of social media (e.g., Facebook, Twitter, YouTube). Online and offline there are many legitimate and illegitimate actors (individuals, groups, states, and non-state entities), all manipulating and shaping, or trying to shape, public discourse in a constantly evolving process. Recent studies, for example, highlight the fact that relatively simple automated bots have been replaced by more sophisticated or blended disinformation agents that include domestic and foreign actors (Bradshaw & Howard, 2018).

In the Western context, the term “information warfare” tends to refer to disinformation that is deliberate or coordinated in a military context. It usually describes limited, tactical information operations carried out during hostilities by either state or non-state actors using and exploiting an open system with the intent to do harm (some definitions say that harm actually has to be done), through illegitimate if not illegal ways (Lucas & Pomerantsev, 2016). “Information operations” was originally a military term that referred to the strategic use of technological, operational, and psychological resources to disrupt the enemy’s informational capacities and protect “friendly forces.” Today, many analysts (and social networking services, most notably Facebook) have adopted the terms “information operations” or “information campaigns” to refer to a variety of actors’ “deliberate and systematic attempts to steer public opinion using inauthentic accounts and/or inaccurate information” (Jack, 2019, p. 8).

Questions of how to identify, and whether and how to respond to, such a range of amorphous or “soft” phenomena are obviously exceedingly controversial and challenging. Information operations can involve accurate information, misinformation, disinformation, or a mix of all three. What is false or misleading information can be contested. Even scientific data evolves with new information, and analyses and commentaries are shaped by cognitive biases and preferences that may stem from cultural identities and other complex factors. Whether an information campaign edges over from “persuasion” to “deliberately manipulative” or “deceptive” can sometimes be a matter of perspective. Significantly, information is often “laundered,” sources can be, and increasingly are, blurred or hidden, and the distinction between “domestic” and “foreign” appears less relevant than it once was.

To further complicate matters, it is not just the “disinformation” itself that needs to be responded to. There are often other related activities (corruption, cyber-attacks, etc.), as well as the broader strategy itself. Many experts are now convinced that (some) deliberate and coordinated disinformation is a key part of a broader strategy by various actors to undermine democracy and social cohesion (Lin & Kerr, 2021; Wigell, 2019). Some highlight a broad global shift since 2014 from “outright falsification to a greater emphasis on subtle and strategic manipulation and amplification of divisive narratives”—for example, on immigration/migration, anti-religious sentiment, nationalist identity, women’s health, gender-based harassment, climate change, and now COVID-19 (Jackson, 2017). Key think tanks, experts, and government agencies warn that issues and identities (anti-Semitism, anti-Muslim hate speech, misogyny, etc.) are being “weaponized” in both targeted and coordinated ways, and not only during elections. The strategies, they speculate, include undermining arguments for multilateralism, spurring polarization along “culture war” lines, eroding trust in democratic institutions, and/or sowing confusion (Institute for Strategic Dialogue, 2019). For the military, a key concern is that malicious actors may use strategic disinformation and other forms of manipulation to gain advantage in the information environment in order to create chaos for adversaries’ “command and control.”

In short, given the complex and ambiguous nature of disinformation, it is unsurprising that we encounter loose definitions and “hybridization” of the challenge (its rhetorical linkage with other issues) as well as ambiguity over whether and what to deter. To understand specific motives requires deep knowledge of the context (the different theatres of disinformation), as well as the actor(s) and their intentions (if any) over time. A range of disinformation exists, and it can often be debated what the underlying strategy is, and whether and when a range of responses may be needed in both the cyber/information and the psychological “domains.”

Can We Deter (Strategic) Disinformation, and How? Insights and Limits from the Literature on Deterrence

Beyond the above-mentioned definitional difficulties of “what it means to deter,” is it possible or even desirable to have “deterrence of disinformation”? Do deterrence theory and practice have any relevance to countering “disinformation,” or more specifically to thinking through responses to state-sponsored (and non-state-sponsored) “strategic disinformation”? Scholars point to multiple limits and problems with governmental attempts to respond, or not respond, to disinformation (Bjola & Papadakis, 2020; Gregor & Mlejnková, 2021). For example, as mentioned above, since disinformation can be “laundered,” it can be difficult to attribute sources or to have accountability. Others say that this is overstated or that this may also grant flexibility when considering various responses. Still others could argue that almost any government involvement in this area would be, or already is, detrimental, especially if it is perceived as a state intrusion on freedom of speech or privacy. In other words, for some scholars and practitioners, disinformation (or some disinformation) may be better understood and dealt with as a social and cultural issue than as a security “threat” (Rampersad & Althiyabi, 2020; Sample et al., 2018).

Nevertheless, the question of how to deter or dissuade efforts at disinformation has some commonalities with questions about whether and how to respond to a range of asymmetrical aggressions from state and non-state actors. The latter are considered by many scholars and practitioners to be some of today’s most urgent and challenging issues. To address them, a growing academic and policy literature has sought to examine the role of “deterrence” in responding to “cross-domain” (cyber, space, economic, etc.) (Adamsky, 2015; Brantly, 2018a, 2018b; Lindsay & Gartzke, 2019a, 2019b; Sweijs & Zilincik, 2020) as well as “hybrid”1 (ambiguous and blended) “threats” (Cullen & Wegge, 2019; Jackson, 2019; Stoker & Whiteside, 2020; Sweijs & Zilincik, 2019). Authors writing on these topics provide insights into how deterrence, and understandings about it, are evolving. These in turn have relevance for considering responses to disinformation.

How to Deter Disinformation? Technical and Strategic “Deterrence by Denial” and Resilience

Both the theory and practice of deterrence have evolved considerably over time. The so-called fourth wave of deterrence began at the end of the Cold War, when threats came to be perceived as more uncertain and less predictable. Many scholars have since written about a new, more complex and less state-centric environment defined by asymmetric changes. Today, some scholars argue that we are in a “fifth wave” of deterrence, defined by the need for “resilience” to address vulnerabilities through a long-term approach—for example, to build strong and adaptive infrastructure, to ensure social cohesion, and to sustain trust in government (Prior, 2018). In practice, it seems that the diffuse nature of threats is leading to more distributed responses through new or non-traditional networks and approaches.

Within this recent literature on “deterrence,” “resilience” is conceptualized as an important part of both technical and strategic “deterrence by denial.” The logic is that increasing resilience not only mitigates the harmful effects of hostile influence but also changes adversaries’ cost-benefit analyses by denying them (technical or strategic/political) benefits. In strategic “deterrence by denial,” the strategic or political impact is absorbed with no long-lasting result (Hartmann, 2017; Hellman, 2019), as opposed to technical “deterrence by denial,” which denies direct impact. Applied to the case of disinformation, technical denial could then occur, for example, through the bolstering of cyber defences and technical capabilities (or shutting down/denying access to a news outlet). Strategic denial could include credible actions to deny objectives, for example, by protecting the psychological realm (e.g., through education to increase critical thinking, or in media to increase fact-checking) or by strengthening democratic institutions. If an adversary’s strategy is to gain “information dominance,” then showing that society can “keep going” physically and psychologically (and leaders can keep making sound decisions)—despite the disinformation and related confusion—may help to “maintain deterrence.”

The literature on deterrence in the so-called grey zone between peace and war applies this logic of “deterrence by resilience” to “hybrid threats.” It argues that greater resilience might be accomplished through a coordinated approach, with governments, private actors, and civilians working together at the domestic and/or global levels (Lorenz, 2017; Wilner, 2017). By extension, actions to increase technical and strategic denial to “strategic disinformation” would also include a range of efforts in different areas, including political/institutional (e.g., efforts to secure elections, or to increase trust in democratic institutions), military (e.g., efforts to improve strategic communications), infrastructure (physical or digital), social (e.g., to increase awareness), and information (e.g., to govern platforms or regulate media) at home and abroad (Monaghan, 2019). Strengthening resilience to disinformation in this way would be a “cross-domain” effort to prepare societies in a “cross-sectoral” approach in order to convince actors/adversaries of the futility of their efforts to engage in strategic disinformation (Sweijs & Zilincik, 2020).

In other words, widening the theory and concept of “deterrence” to include strengthening resilience to hybrid threats helpfully points to a range of non-traditional responses to a variety of hybrid challenges, including disinformation. Some scholars argue that this widening alters the traditional logic and practice of deterrence too much, while others argue that it adds little in the way of new benefits (Lindsay & Gartzke, 2019a). Similarly, the concept of “resilience” significantly alters the focus of traditional deterrence responses. This can be criticized for encompassing too many activities to be analytically or practically helpful. Resilience can also be a confusing term in that it aims to represent processes that simultaneously seek to maintain the status quo in the face of shocks and those that allow for transformation (Bourbeau & Ryan, 2018).

Nevertheless, in practice, and facing the rapidly evolving and increasingly global reach of some “hybrid threats” such as strategic disinformation, Western governments have called for strengthening domestic (and global) resilience as a means to “deter” activities in the “grey zone.” The United Kingdom, for example, now interprets the term “deterrence” very widely to include defensive resilience measures (reasoning that capable and resilient governance raises the price of hybrid aggression and reduces its chances for success) (UK Ministry of Defence, 2019). At the same time, and seemingly paradoxically, recent calls by NATO and the European Union for “more resilience” have been critiqued both for trying to legitimize these organizations’ roles in countering “hybrid threats” (including disinformation), and for abdicating or transferring responsibility to domestic actors. Below, we will see how resilience has become a cornerstone of Canada’s approach to disinformation.

How to Deter? The Broadening Conceptualization of “Costs” and “Punishments”

The more traditional concepts of “deterrence by punishment” and “deterrence by increasing costs” (as well as the more proactive “compellence”) (Schelling, 1966) have also been applied in the so-called cross-domain literature (Sweijs & Zilincik, 2020). In other words, just as the logic of “deterrence by denial” has been applied to other domains (e.g., cyber, economic, and outer space), so has the logic of ensuring that punishments or costs “outweigh benefits” been brought to bear in a variety of areas outside traditional military concerns. Furthermore, the traditional understanding of “costs” and “punishments” has been expanded to include, for example, the relevance of identity and belief systems to the cost-benefit analysis. For example, recent research examines the benefits of increasing the social costs of violating norms (the “calling out” of bad behaviour), whether through “deterrence by de-legitimization” (raising the reputational cost to motivate restraint) (Wilner, 2014, p. 449) or through “deterrence by counter-narrative” (Knopf, 2010). At the same time, scholars have examined how positive incentives can play a role in dissuading attacks—for example, by fostering interdependence through “deterrence by entanglement” (Brantly, 2018b).

Applied to disinformation, these new understandings provide a wider range of options for how to “deter.” They include not only military means (kinetic and non-kinetic), but also, for example, political (travel restrictions, expulsions of diplomats), economic (sanctions, financial penalties), civil (public blaming), information (legislation), and international law. In keeping with traditional deterrence theory, they also suggest a reactive approach. However, a more extreme version of “deterrence by punishment” also includes offensive actions aimed at disrupting or degrading an adversary’s capacity for action. An example here is the US strategy of “persistent engagement” to shape the parameters of acceptable behaviour in cyberspace, including, if necessary, aggressive cyber operations (Healey, 2019). Unsurprisingly, when applied to strategic disinformation, attempts to prevent an adversary from taking further action (offensive pre-emption) are also the most controversial since they raise concerns about intervention and sovereignty and the need for secrecy versus transparency.

The Illusion and Limits of “Deterrence of Disinformation”

Recent literature on deterrence also suggests possible limits to the logic of “deterrence of disinformation,” showing that creating an “illusion of deterrence” may be especially relevant in response to an ambiguous threat.

First, much of the recent literature stresses that deterrence is fundamentally the outcome of a psychological relationship (Kroenig & Pavel, 2012), meaning that capabilities are less relevant than our perception about them (Jervis, 2016). The implication is that even though strategic disinformation can never be completely countered, and our capabilities will always be limited or in need of “catching up,” the “illusion of capability” (to deter) is still possible, and indeed may matter most. Nevertheless, as others have commented, this is not a new revelation, and perception has long been acknowledged as central to traditional deterrence theory and international relations (Hudson, 2014).

Second, recent literature finds that deterrence is not about absolutes; it is about making “attacks” less likely or effective over time (“cumulative” or “punctuated” deterrence) (Kello, 2017; Tor, 2015). If (an adversary’s) individual activities can be rendered difficult, the greater process may be undermined. As mentioned above, the “cross-domain” literature also suggests that actions taken in adjacent areas may render the broader strategy ineffective. Thus, to deter disinformation, actions in other realms may be possible (e.g., sanctions in the economic realm), and even minor actions taken may affect an adversary’s perceptions and actions. Moreover, when disinformation is understood as part of “hybrid warfare,” or one of many hybrid threats, then “hybrid deterrence” (as opposed to “comprehensive deterrence”) may make sense as an approach to deter (some) strategic disinformation (Monaghan, 2019). Related to this, if success can be rendered tactically difficult (e.g., through regulation of social media platforms), it may be harder to maintain coordination, and the whole effort may be undermined (i.e., “tactical denial”) (Kroenig & Pavel, 2012).

Third, there is a developing consensus in the deterrence literature that more needs to be understood about actors, their motivations, aims, and limits. For example, work on deterrence and terrorism has shown the limits of deterrence while highlighting that terrorists may be deterred—we just need to find out and target what they really cherish (e.g., political motives) (Trager & Zagorcheva, 2006; Wilner, 2011, 2014). It can also be logically speculated that the same may be true in the case of disinformation; thus, it is not just the processes (e.g., media, bots, culture) that we need to examine, but also the actors’ key (political and other) motivations and other root causes of disinformation.

Deter How? A Case Study of Canada’s Responses to Disinformation

This section provides an overview of where the Canadian case fits with the above analysis of recent literature on deterrence and how it may be applicable to disinformation.

Deterrence by Denial: Building Technical and Strategic Resilience

Although strategic disinformation obviously cannot be completely shut out, denial of direct access aims to make it more difficult through technical solutions and the bolstering of infrastructure and institutions. In Canada, attempts to deter strategic disinformation have included accelerated efforts to strengthen cyber defence and resilience and to develop legislation and norms to hamper disinformation efforts, especially during elections. More generally, there have been efforts to increase co-operation and to share more information (about disinformation) to “deny” actors (further) access at the domestic and international levels.

To give some examples, the Canadian Departments of National Defence (DND) and Public Safety, CSIS, and the Communications Security Establishment (CSE), among others, have increased their work to develop greater internal IT capacity, discover data solutions, and strengthen institutional resilience in response to an array of recent misinformation and disinformation. The CSE and CSIS joined Elections Canada to track and analyze big data to share with other G7 members and conducted simulations to identify vulnerabilities (Government of Canada, 2019; Pinkerton, 2019). There has also been a significant increase in research concerning the creation, attribution, and dissemination of (especially digital) disinformation, including, for example, research into the development of algorithms to identify and block “fake news.” It is widely acknowledged that the next shift in disinformation is well underway with the artificial intelligence (AI) revolution, and this reality is informing research into how to leverage AI against AI, including how to detect coordinated activities by malicious actors. In this regard, Canada has greatly increased its co-operation with other governments and NGOs working in these fields. For example, it supports and shares technical and other research on fake online personas and images with the US Global Engagement Center. At the same time, Canada has actively supported research into the ethical implications of possible responses.

The Canadian government has also developed new legislation to counter disinformation. The 2019 Elections Modernization Act, for example, introduced new provisions aimed at deterring or preventing “foreign interference” (Reepschlager & Dubois, 2019). Other institutional initiatives have aimed to increase bureaucratic collaboration. For example, in advance of the October 2019 federal election, the government created a new RCMP-led task force, Security and Intelligence Threats to Elections (SITE). SITE, which included Global Affairs Canada, the CSE, and CSIS, was created to build awareness and prepare the government to prevent and respond to “covert, clandestine or criminal attempts to interfere with the electoral process.” It analyzed foreign social media and coordinated responses with the G7 Rapid Response Mechanism (see below) (Government of Canada, 2019). The government also initiated the Critical Elections Incident Public Protocol, under which five senior bureaucrats were to be informed of any potential interference during the 2019 federal election, in order to determine whether the incidents were serious enough to inform Canadians (none were) (House of Commons, 2019).

Canada’s attempts to “deny through information sharing” have also taken place at the international level. They include the efforts of Global Affairs Canada (GAC) to position Canada at the centre of collective cyber defence by sharing reports, coordinating roles, and sharing best practices as network coordinator of the Rapid Response Mechanism (Government of Canada, 2019). GAC also addresses disinformation and related “foreign interference” through many partnerships—for example, with NATO and the European Union, as well as with NGOs such as the US Alliance for Securing Democracy.

However, as mentioned above, “deterrence by denial” is not just about denying or making access more difficult; it is also about denying political and cognitive “wins.” Here the Canadian government has encouraged the development of individual/cognitive and societal resilience to misinformation and disinformation through programs designed to foster awareness of the challenges. Since 2014, Canadian government departments and security agencies have been quick to publicly explain why disinformation is a security challenge and to expose specific actors and their actions (Jackson, 2022). A series of bureaucratic and think-tank reports examine the roles allegedly played by Russia (and Russia-related actors) and China, as well as Iran, North Korea, former US president Donald Trump, right-wing extremists, etc. These reports, along with heightened political rhetoric about the dangers of disinformation from leaders such as then Canadian foreign minister Chrystia Freeland, have played an important role in raising awareness about the challenges and their (possible) negative effects (Bradshaw, 2018; Canadian Centre for Cyber Security, 2018, 2020; CSE, 2017; Greenspoon & Owen, 2018; Kolga, 2019; National Security and Intelligence Committee of Parliamentarians, 2020; Picard, 2019; Sukhankin, 2019; Tenove, 2018). Recent works on deterrence would suggest that such reports may further increase societal resilience and trust in government responses by signalling governments’ respect for truth and transparency (Doorn & Brinkel, 2020). The Canadian government has also articulated its intentions to respond to misinformation and disinformation in official cyber-strategy documents, in Canada’s defence policy, and in several other non-legal documents. These stated intentions refer to disinformation and the wider category of misinformation in relation to the challenges of “hybrid conflict” and “foreign interference,” and they propose “whole of government” and “whole of society” responses. Taken together, this official and rhetorical “securitization” of disinformation (using rhetoric to refer to it as an urgent security threat) may itself function as a deterrent by signalling recognition of a challenge and implying clear intentions to act. However, it may also be criticized for being vague and not showing enough political resolve.

Other efforts to develop strategic “deterrence by denial” include a range of attempts to identify specific targeted messages and audiences and to deploy credible narratives (or counter-narratives) through strategic communications (Pamment et al., 2018). Overall, Canada’s military and security agencies have increased their monitoring (in terms of aggregate data and community policing), research, and exposure of false or manipulative narratives. The latter are generally not responded to directly, because such efforts can backfire and have unintended consequences. However, as mentioned above, there are different kinds of disinformation “campaigns” that have targeted Canada, and there are also examples of Canada making specific and effective small-scale responses (Potter, 2019). For example, Canada’s Task Force Latvia, along with the local Canadian embassy, was adept at using various public outreach efforts to counter malicious narratives designed to impugn Canadian military personnel.

To give some other examples of broad Canadian efforts in this area, the Canadian military and the DND track trends in narratives and emerging technologies such as “deepfakes,” both of which are now recognized as potentially decisive factors in future conflicts. The RCMP examines how foreign actors intersect with domestic extremism, reflecting the current concern that disinformation may be part of a broader phenomenon of violent transnational social movements, based locally but inspired internationally (Kelshall & Dittmar, 2018). Similarly, Public Safety Canada is exploring links between communities, extremism, and disinformation.

Deterrence through Threats of Punishments and Imposing Costs

Beyond attempts to build technical and strategic resilience, Canada has pursued some threats and attempts to impose “costs” (narrowly and widely defined) on the perpetrators of disinformation. These include attempts to impose normative costs through “public blaming,” as well as legislation and international law. Such attempts are, however, limited, and it is highly questionable whether the costs imposed or threatened outweigh aggressors’ perceived benefit.

First, Canada has called out the “bad behaviour” of certain actors (as seen above in Canada’s bureaucratic reports and political rhetoric), and it can be argued that this “shaming and blaming” may also increase social and psychological costs. Theoretically, along with similar actions by allies, these measures may over time contribute to “deterrence through de-legitimization” (and bolster resilience, as seen above). On the other hand, they may also contribute to perceived grievances and hinder other diplomatic efforts.

Perhaps Canada’s greatest efforts thus far to impose normative costs and restrain behaviour have been in legislation and international law. These include Canada’s engagement with allies to develop norms in response to various activities in cyberspace, including disinformation. For example, Canada has been involved in intergovernmental negotiations at the United Nations to create a new global cyber-security architecture that would protect digital information and the infrastructure on which it resides. This effort faces many obstacles, but the point here is that it is an attempt to deter “by entanglement,” by increasing interdependence among states (if not among the non-state actors, or state-affiliated actors, that act independently).2 Canada’s initial efforts to regulate social media platforms are outside the scope of this chapter, but they are also examples of attempts to impose “harder” costs by regulating rules, content, and competition.

There is little public information about Canada’s consideration of more offensive cyber “punishments” of actors engaged in strategic (digital) disinformation. Certainly, some argue for Canada to take more offensive actions to disrupt or degrade actors’ capacity to spread strategic disinformation as part of a more effective “deterrence by punishment” response. (Such actions have also been framed as a pre-emptive measure to increase defensive resilience.) However, steps have been taken to revamp Canada’s national security infrastructure and to give the CSE the power to defend elections if they come under cyber-attack. In June 2019, the CSE was granted wide-ranging powers to engage in “defensive cyber operations” and “active cyber operations” to “degrade, disrupt, influence, respond to or interfere with the capabilities, intentions or activities of a foreign individual, state, organization or terrorist group as they relate to Canada’s defense, security or international affairs,” as per the wording of the National Security Act, 2017. In other words, for the first time, Canada could launch its own cyber-attacks. The threat of possible Canadian counterattacks (including against the digital information environment) is meant to deter attacks and “proactively shut down the source of a possible attack against Canada” (Kolga, 2019, p. 26).

Of course, the larger context of Canada’s responses to strategic disinformation includes efforts to “deter” aggressive actors and actions. For example, in the case of Canada’s responses to Russia, deterrence is practised through military means to increase “costs” (by stationing troops in the Baltics), political means (through the expulsion of diplomats and travel restrictions), and economic means (economic sanctions). None of these attempts to impose costs have been used specifically in response to disinformation “acts” or applied specifically to those who spread disinformation. However, there may be some “spillover” in the area of disinformation. As mentioned above, actions taken in adjacent areas may render the broader strategy ineffective. Yet Canada has not yet clearly communicated any exact costs in relation to specific cases of disinformation. Imposing targeted sanctions in the information realm would narrow the focus of deterrence, but it is not clear how effective such sanctions would be. Nevertheless, they are one example of how, despite heightened rhetoric, there is more that could be done to show capacity and the political resolve to follow through.

Conclusion and Discussion: Deterrence, Delusion, or Democratic Suasion?

This chapter attempted to show that, despite many challenges, “deterrence” has relevance when considering how to respond to strategic disinformation. Recent literature expands the scope of traditional deterrence and points to a wider range of non-traditional means “to deter.” Applied to strategic disinformation, these means include strengthening technical, individual, social, and institutional “resilience” and imposing a broad range of “costs” and incentives in a nuanced cost-benefit analysis. The literature further highlights the significance of the “illusion of deterrence,” which may be especially important in responding to ambiguous threats such as disinformation. Other insights include the importance of acting in areas outside the information domain in order to deter any larger strategy, as well as the need to learn more about actors’ perceptions, aims, and strategies (which can be extremely difficult to ascertain).

These contributions can be used to illuminate Canada’s fragmented foreign policy and security responses to disinformation. They show that Canada is attempting to deter disinformation by taking a broad yet ad hoc approach that reflects the real difficulties and uncertainty about how to respond to such a complex and ambiguous challenge. At first glance, Canada may seem to be taking a random “whack-a-mole” approach. However, overall its rhetoric and actions fit within the wider understandings of both technical and strategic “deterrence by denial.” Canada’s responses, including what might be termed “denial by information sharing,” are attempts to increase resilience, so that society can “take the first punch” and “negate the benefits.”

Examples of strategic disinformation have also been “called out” and normative costs “raised.” However, these actions are limited. Specific thresholds for specific actions are not clear, and there have been few, if any, positive inducements to change behaviour (e.g., the lifting of sanctions). More generally, viewing Canada’s actions through the lens of deterrence highlights the fact that assumptions, especially about actor motivations and strategies, could be further questioned, more effort could be made to shape the perceptions and thinking of adversaries, and more consideration could be given to the unintended consequences of actions.

In sum, while this overview shows that deterrence in its broad conceptualization may rightly point to a range of activities in response to disinformation, it remains questionable whether it is a sufficiently accurate concept with which to describe or analyze Canada’s (and other countries’) actual or possible responses. In a recent study, Sweijs and Zilincik (2020) proposed “dissuasion” to describe the broader efforts that are now often encompassed under the term “deterrence.” Similarly, Wigell (2021) coined the term “democratic deterrence,” which he conceives as a “whole of society approach,” coordinated by the government, to build resilience.

This chapter concludes with the suggestion that “democratic suasion” is another umbrella concept worth developing. “Democratic suasion” would encompass dissuasion and persuasion, since disinformation is often at its core a political problem, dependent upon the type of relationship between the disinforming actor and its target. It would also capture Wigell’s stress on building democratic institutional and ideational resilience as a key deterrent and means of compellence. In this conceptualization, “democratic suasion” may be a better and more inclusive term than “deterrence” to capture Canada’s stated policy aspirations to create a more holistic approach and the diverse actual and possible responses needed in response to this complex challenge. Democratic suasion would suggest strengthening technical and democratic resilience, building upon recent domestic and international collaborations, but also incentivizing “good behaviour” and “thinking ahead” to consider aggressors’ motivations and how to shape their thinking. In contrast to deterrence, this would be more along the lines of a context-specific “public health approach,” in which more preventative action would be taken, with a focus on reducing the harm from disinformation while accepting that some kinds of disinformation will always be with us. For the military, this would mean continuing to work with allies, other domestic actors, and civil society to address the strategy behind disinformation, and showing that it can function despite disinformation and the related manipulations that increase confusion.

This chapter provides an overview of Canada’s actions in light of new thinking about deterrence and its application to disinformation. Future studies could usefully examine the above ideas and their limits, analyze in more depth specific Canadian responses in specific theatres of disinformation and their effectiveness in different cases, and consider how Canada’s experiences may in turn contribute to the literature on deterrence.

Notes

  1. The “hybrid warfare” paradigm perceives (some) disinformation as part of an ambiguous or blended conflict or one of multiple instruments that may be used in a synchronized “attack” and tailored to specific vulnerabilities. Either way, it is understood as deliberate and strategic, but also including elements of uncertainty and deniability.
  2. The 2014–15 Group of Governmental Experts outlined voluntary, non-binding peacetime norms of state behaviour in cyberspace. Subsequently, the General Assembly unanimously adopted a resolution that states should be guided by these norms. See United Nations (2015).

References

  1. Adamsky, D. (2015). Cross-domain Coercion: The current Russian art of strategy. Security Studies Center.

  2. Bjola, C., & Papadakis, K. (2020). Digital propaganda, counterpublics and the disruption of the public sphere: The Finnish approach to building digital resilience. Cambridge Review of International Affairs, 33(5), 638–66.

  3. Bourbeau, P., & Ryan, C. (2018). Resilience, resistance, infrapolitics and enmeshment. European Journal of International Relations, 24(1), 221–39.

  4. Bradshaw, S. (2018). Securing Canadian elections: Disinformation, computational propaganda, targeted advertising and what to expect in 2019. Behind the Headlines: Research Paper Series, 66(3). https://thecic.org/research-publications/behind-the-headlines/securing-elections-2019/

  5. Bradshaw, S., & Howard, P. N. (2018). Challenging truth and trust: A global inventory of organized social manipulation. Computational Research Project, University of Oxford.

  6. Brantly, A. (2018a). Back to reality: Cross domain deterrence and cyberspace. Virginia Tech.

  7. Brantly, A. (2018b). Conceptualizing cyber deterrence by entanglement. Social Science Research Network.

  8. Canadian Centre for Cyber Security. (2018). National cyber threat assessment 2018. Retrieved 2 May 2020 from https://cyber.gc.ca/en/guidance/national-cyber-threat-assessment-2018

  9. Canadian Centre for Cyber Security. (2020). National cyber threat assessment 2020. Retrieved 3 February 2020 from https://cyber.gc.ca/sites/default/files/publications/ncta-2020-e-web.pdf

  10. CSE (Communications Security Establishment). (2017). Cyber threats to Canada’s democratic process. Retrieved 2 May 2020 from https://cyber.gc.ca/sites/default/files/publications/cse-cyber-threat-assessment-e.pdf

  11. CSE (Communications Security Establishment). (2019). Cyber threats to Canada’s democratic process, update 2019. Retrieved 2 May 2020 from https://cyber.gc.ca/sites/default/files/publications/tdp-2019-report_e.pdf

  12. Cullen, P., & Wegge, N. (2019). Countering hybrid warfare. Development, Concepts and Doctrine Centre, Shrivenham.

  13. Goodale, R. (2018). National cyber security strategy: Canada’s vision for security and prosperity in the digital age. Public Safety Canada. http://epe.lac-bac.gc.ca/100/201/301/weekly_acquisitions_list-ef/2018/18-27/publications.gc.ca/collections/collection_2018/sp-ps/PS4-239-2018-eng.pdf

  14. Government of Canada. (2019). Canada’s digital charter: Trust in a digital world—innovation for a better Canada. Innovation, Science and Economic Development Canada. https://www.ic.gc.ca/eic/site/062.nsf/eng/h_00108.html

  15. Greenspoon, E., & Owen, T. (2018). Democracy divided: Countering disinformation and hate in the digital public sphere. Public Policy Forum. https://ppforum.ca/publications/social-marketing-hate-speech-disinformation-democracy/

  16. Gregor, M., & Mlejnková, P. (2021). Challenging online propaganda and disinformation in the 21st century. Palgrave Macmillan.

  17. Hartmann, U. (2017). The evolution of the hybrid threat, and resilience as a countermeasure. Center for Security Studies.

  18. Healey, J. (2019). The implications of persistent (and permanent) engagement in cyberspace. Journal of Cybersecurity, 5, 1–15.

  19. Hellman, A. (2019). How has European geostrategic thinking towards Russia shifted since 2014? European Leadership Network. https://www.europeanleadershipnetwork.org/policy-brief/how-has-european-geostrategic-thinking-towards-russia-shifted-since-2014/

  20. Hudson, V. M. (2014). Foreign policy analysis: Classic and contemporary theory. Rowman and Littlefield.

  21. Institute for Strategic Dialogue. (2019). 2019 EU elections information operations analysis: Interim briefing paper. Institute for Strategic Dialogue.

  22. Jack, C. (2019). Lexicon of lies: Terms for problematic information. Data and Society Research Institute. https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf

  23. Jackson, D. (2017, 17 October). Issue brief: Distinguishing disinformation from propaganda, misinformation and “fake news.” National Endowment for Democracy. https://www.ned.org/issue-brief-distinguishing-disinformation-from-propaganda-misinformation-and-fake-news/

  24. Jackson, N. (2019). Deterrence, resilience and hybrid wars: The case of Canada and NATO. Journal of Military and Strategic Studies, 19(4), 104–25.

  25. Jackson, N. (2022). The Canadian government’s response to foreign disinformation: Rhetoric, stated policy intentions and practices. International Journal, 76(2), 544–63.

  26. Jayakumar, S., Ang, B., & Anwar, N. D. (Eds.). (2021). Disinformation and fake news. Palgrave Macmillan.

  27. Jervis, R. (2016). Some thoughts on deterrence in the cyber era. Journal of Information Warfare, 15, 66–73.

  28. Jordan, D., Kiras, J., Lonsdale, D., Speller, I., Tuck, C., & Walton, C. D. (2016). Understanding modern warfare. Cambridge University Press.

  29. Kello, L. (2017). The virtual weapon and international order. Yale University Press.

  30. Kelshall, C., & Dittmar, V. (2018). Accidental power: How non-state actors hijacked and reshaped the international system. SFU Library and CASIS.

  31. Knopf, J. W. (2010). The fourth wave in deterrence research. Contemporary Security Policy, 31(1), 1–33.

  32. Kolga, M. (2019). Stemming the virus: Understanding and responding to the threat of Russian disinformation. Macdonald Laurier Institute.

  33. Kolga, M., Janda, J., & Vogel, N. (2019). Russian proofing your elections. Macdonald Laurier Institute.

  34. Kroenig, M., & Pavel, B. (2012). How to deter terrorism. Washington Quarterly, 35, 21–36.

  35. Lanoszka, A. (2019). Disinformation in international politics. European Journal of International Security, 4(2), 227–48.

  36. Lindsay, J. R., & Gartzke, E. A. (2019a). Cross-domain deterrence: Strategy in an era of complexity. Oxford University Press.

  37. Lindsay, J. R., & Gartzke, E. A. (2019b). Conclusion: The analytic potential of cross-domain deterrence. In J. R. Lindsay & E. A. Gartzke (Eds.), Cross-domain deterrence: Strategy in an era of complexity (pp. 335–71). Oxford University Press.

  38. Lorenz, W. (2017). The evolution of deterrence: From Cold War to hybrid war. Polish Quarterly of International Affairs, (2), 22–37.

  39. Lucas, E., & Pomerantsev, P. (2016). Winning the information wars. Center for European Policy Analysis.

  40. Monaghan, S. (Ed.). (2019). MCDC Countering Hybrid Warfare Project: Countering hybrid warfare. Multinational Capability Development Campaign. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/784299/concepts_mcdc_countering_hybrid_warfare.pdf

  41. National Security and Intelligence Committee of Parliamentarians. (2020). National Security and Intelligence Committee of Parliamentarians annual report 2019. https://www.nsicop-cpsnr.ca/reports/rp-2020-03-12-ar/intro-en.html

  42. Osinga, F., & Sweijs, T. (Eds.). (2021). NL ARMS Netherlands annual review of military studies 2020: Deterrence in the 21st century—insights from theory and practice. Springer Nature.

  43. Pamment, J., Twetman, H., Nothhaft, H., & Fjällhed, A. (2018). The role of communicators in countering the malicious use of social media. NATO Strategic Communications Centre of Excellence.

  44. Picard, C. (2019). Online disinformation threats in the 2019 Canadian federal election: Who is behind them and why. In J. McQuade (Ed.), Disinformation and digital democracies in the 21st century (pp. 35–40). NATO Association of Canada.

  45. Pinkerton, C. (2019, 30 January). Government releases blueprint for protecting election from interference. iPolitics. https://www.ipolitics.ca/news/government-releases-blueprint-for-protecting-election-from-interference

  46. Potter, E. (2019). Russia’s strategy for perception management through public diplomacy and influence operations: The Canadian case. The Hague Journal of Diplomacy, 14, 402–25.

  47. Prior, T. (2018). Resilience: The “fifth wave” in the evolution of deterrence. Center for Security Studies.

  48. Public Safety Canada. (2019). National cyber security action plan 2019–2024: Budget 2018 investments. Public Safety Canada. http://epe.lac-bac.gc.ca/100/201/301/weekly_acquisitions_list-ef/2019/19-34/publications.gc.ca/collections/collection_2019/sp-ps/PS9-1-2019-eng.pdf

  49. Reepschlager, A., & Dubois, E. (2019, 2 January). New election laws are no match for the Internet. Policy Options. https://policyoptions.irpp.org/magazines/january-2019/new-election-laws-no-match-internet/

  50. Sample, C., McAlaney, J., Bakdash, J., & Thackray, H. (2018). A cultural exploration of social media manipulators. Journal of Information Warfare, 17(4), 56–71.

  51. Snyder, G. (1961). Deterrence and defense. Princeton University Press.

  52. Stoker, D., & Whiteside, C. (2020). Blurred lines: Grey-zone conflict and hybrid war—two failures of American strategic thinking. Naval War College Review, 73, 1–37.

  53. Sukhankin, S. (2019). The Western Alliance in the face of Russian (dis)information machine: Where does Canada stand? SPP Research Paper CGAI/School for Public Policy, 12(26). https://doi.org/10.11575/sppp.v12i0.61799

  54. Sweijs, T., & Zilincik, S. (2019). Cross domain deterrence and hybrid conflict. The Hague Centre for Strategic Studies.

  55. Sweijs, T., & Zilincik, S. (2020). The essence of cross-domain deterrence. In F. Osinga & T. Sweijs (Eds.), NL ARMS Netherlands annual review of military studies 2020. https://link.springer.com/content/pdf/10.1007%2F978-94-6265-419-8_8.pdf

  56. Tenove, C. (2018). Digital threats to democratic elections: How foreign actors use digital techniques to undermine democracy. Centre for the Study of Democratic Institutions.

  57. Tor, U. (2015). “Cumulative deterrence” as a new paradigm for cyber deterrence. Journal of Strategic Studies, 40, 92–117.

  58. Trager, R. F., & Zagorcheva, D. P. (2006). Deterring terrorism: It can be done. International Security, 30(3), 87–123.

  59. Tucker, J. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. Hewlett Foundation.

  60. United Nations. (2015). Report of the Group of Governmental Experts on developments in the field of information and telecommunications in the context of international security. A/70/174. https://documents-dds-ny.un.org/doc/UNDOC/GEN/N15/228/35/PDF/N1522835.pdf?OpenElement

  61. Wigell, M. (2019). Hybrid interference as a wedge strategy. International Affairs, 95(2), 255–75.

  62. Wilner, A. S. (2011). Deterring the undeterrable: Coercion, denial, and delegitimization in counterterrorism. Journal of Strategic Studies, 34(1), 3–37.

  63. Wilner, A. S. (2014). Contemporary deterrence theory and counterterrorism: A bridge too far. New York University Journal of International Law and Politics, 47, 439–62.

  64. Wilner, A. S. (2017). Cyber deterrence and critical infrastructure protection: Expectation, application and limitation. Comparative Strategy, 36(4), 309–18.
