Conclusion
Keith Stewart and Madeleine D’Agata
Eric Ouellet’s introduction to this volume set out a series of essential questions. The chapters that followed provided expert insights that offer a starting point for addressing the critical issues faced. However, we are far from having clear solutions at this point. In soliciting contributions, the net was cast wide, as befits a problem set as challenging as this. The aim was to examine the implications of the changing information environment (IE) for security at all levels, including national security and the security of individuals and organizations. The major theme of the book has been the harnessing of information to achieve strategic influence internationally by a range of actors, both state and non-state, most recently in the context of renewed and overt great power competition, but equally during the period since the fall of the Berlin Wall and particularly in the wake of September 2001. In modern times, the perennial problem of disinformation has resurfaced, promulgated widely using novel media, especially since the development of social media and Web 2.0, and this has been highlighted in the material presented by many of the authors. However, this is not the only challenge posed by the constantly changing nature of the IE, and other critical concerns have been discussed here—for example, the opportunities afforded malign actors to harness cyber means to threaten critical infrastructure and military capability.
Perhaps the most basic question we face is how to achieve security in the face of the challenges posed by adversary action that exploits the IE. This can be considered at a number of levels of analysis: for example, the personal security of individuals and their assets; operations security for military, police, and other security services that must guard essential information; and, ultimately, national security. The diversity of material in this book reflects this. At the national strategic level, our security has rested, since the end of the Second World War, on the achievement of mutual deterrence based on the threat of massive retaliation with nuclear weapons. Thus, paraphrasing the challenge laid down by Dr. Ouellet in the introductory chapter, it is important to ask whether a deterrence-based posture can maintain security in the face of information-based challenges and threats, and, if so, how deterrence theory and practice need to adapt to this new reality. This line of inquiry led Ouellet to a number of supplementary questions, including the following: Given the salience of the threat posed by adversarial disinformation, can it be deterred at all? If so, can disinformation and other information-based threats be deterred through the threat of punishment, or is a different approach required? If deterrence is found to be a viable approach, what do we need to understand about our adversaries' perceptions of costs and benefits in order to achieve a deterrent stance? How should Canada and its allies face up to these challenges, and are there ways in which the West might begin to fight back? Importantly, how can we achieve all of the foregoing while still conforming to our own legal and ethical standards and without being brought down to the level of our adversaries? This volume has provided a diverse set of insights from leading international experts that have a bearing on all of these problems and more.
This final chapter offers a series of concluding reflections on some of the above questions, drawing selectively on material from the preceding chapters combined with insights from other sources.
Perspectives on the Challenge of Deterrence in the IE
We have seen that the spread of misinformation and disinformation in the IE has increased dramatically around the world in the past few years, often severely impacting individuals and organizations and causing confusion, panic, and, on occasion, distrust in government (Bennett & Livingston, 2018; Liu & Huang, 2020). Geography offers little protection against this scourge, and Canada and Canadians, among other polities, have been increasingly targeted in recent years. Certain nations have been, and continue to be, at the forefront of the spread of disinformation, impacting elections in the United States as well as, more recently, propagating falsehoods surrounding COVID-19 (US Department of State, 2020). Not only does such disinformation lead to financial losses—for example, at the time of writing, $7.75 million has been lost to COVID-19 fraud in Canada according to the Canadian Anti-Fraud Centre (2021)—it also discourages susceptible individuals from following public health guidelines and promotes vaccine hesitancy, potentially contributing, in the end, to the further spread of COVID-19. Moreover, whether we are discussing cyberspace or the IE more broadly, it is recognized that defending against adversarial activity is extremely difficult. In their chapter, Leuprecht and Szeman identify several attributes of the IE that present significant challenges: its interconnectedness, which enables adversaries to generate effects without concern for geography or political borders; the relatively low costs of entry; and the possibility of engagement in continuous offensive operations. Adaptation of deterrence to the challenges of the IE must take account of these characteristics.
As noted in the chapter by Jackson, despite the importance being placed on deterring the spread of disinformation in Canada from a security and safety standpoint, there is actually little consensus among academics or policy-makers on how exactly Canada should defend itself. As that author points out, part of the problem is a lack of consensus on defining disinformation, which Jackson and others approach as a societal and cultural issue as much as one of security. Disinformation is typically understood to imply the intentional spreading of deliberately false information. This contrasts with misinformation, which implies the unintentional dissemination of similarly false or inaccurate information. Thus, by many definitions, disinformation is meant to intentionally and maliciously mislead others. And yet, it is not always possible to ascertain intention. Jackson stresses that government efforts aimed at attenuating the spread of disinformation need to proceed with caution to ensure they are not perceived as interfering with freedom of speech.
A consistent theme in this book has been the observation that the IE, and specifically the Internet and social media, have substantially increased the potential for adversaries to engage in information operations (IO) against competitor nations, effectively overcoming geographical and territorial boundaries to a variety of ends, including the spreading of false narratives and propaganda, enabling clandestine access to information and networks, and interference with control systems for civilian and military infrastructure. Chapters in this volume have examined the activities of specific competitor nations. For example, the chapters by Heide and by Seaboyer and Jolicoeur focus on Russia and China, respectively, while Bar-Gil examines information activities directed against Israel by Iran and its proxies, as well as examples of Russian IE tactics. This work demonstrates that, in addition to seeking to catch up with the West in terms of IE capability, the adversary powers considered here have taken the opportunity to adapt technologies to their own preferred methods. For example, Seaboyer and Jolicoeur describe China’s policy of “informationalization,” which has, in part, enabled the Chinese Communist Party (CCP) to exploit tools, originally conceived of as enabling the free exchange of information, to bound and manipulate the narratives to which Chinese citizens have access.
In a similar vein to Seaboyer and Jolicoeur's comments on China, Heide reminds us that Russia also engages extensively in the IE internally, as well as externally. Heide points out that this contrasts with democratic nations, which only conduct IO on operations (in almost all cases abroad). Domestically, both Russia and China focus on maintaining the mood and morale of their populations and armed forces by controlling the information and ideas that they can access, with a view to avoiding any threat to the authority of the ruling regimes through dissent or uprisings. After a degree of thawing in the late 1980s and '90s, Chinese and Russian authorities are again exerting a high degree of control over the IE of their citizens. While the technologies have changed, the intention is reminiscent of earlier attempts to block access to information from the outside world—for example, Soviet radio jamming operations against Radio Free Europe and Radio Liberty, which broadcast from Munich during the Cold War to provide domestic news to audiences behind the Iron Curtain.1 Today's equivalent is manifest in the complex system of technological control and enforced censorship that has been dubbed the "Great Firewall of China."2 Strittmatter (2018) describes the techniques of "intimidation, censorship, and propaganda" that enabled the CCP to take back control of the Internet after a period of relative freedom before 2012. Deletion of social media accounts, blocking of websites, restrictions on the number of persons with whom a social media user can share information, and the introduction of fake information, among others, are all cited as techniques through which the CCP was able to fulfil the leader's command to "win back the commanding heights of the internet" (Strittmatter, 2018, p. 70).
Similar to the point made by Leuprecht and Szeman regarding the possibilities provided by the IE for engagement in persistent operations, these authors observe that, in their external affairs, both Russia and China appear to adopt a posture of constant conflict, notably in the IE, where they are able, in Lindsay and Gartzke’s (2019) terms, to inflict some harm “through cyber exploitation, covert infiltration, and other ‘gray zone’ provocations that fall below clear thresholds of . . . retaliation” (p. 15).
Russian operations in the IE are constant and are aimed widely at all sections of the targeted nations, including the military, civil society, and policy-makers. Bar-Gil's chapter describes a struggle for "the global mindset." That author also observes that, compared to some of its adversaries, Israel is at a relative disadvantage owing to the breadth and sophistication of its information infrastructure and, by extension, its dependence on such technology-enabled systems, which leaves it exposed to information and cyber-attacks. This echoes Lindsay and Gartzke's (2019) observation that "It is possible and much feared in some circles, that weaker states and nonstate actors might exploit the technologies of globalization to undermine the conventional military advantages of great powers" (p. 3). In this regard, it is interesting that Bar-Gil notes that, owing to access to the IE, "what was formerly a gradual, professional psychological impact is now a high-speed action that even the least competent, remote, and disassembled forces may conduct due to technological improvements." As Leuprecht and Szeman observed, in the modern IE, the costs of entry are low.
Several authors in this volume point to the use of proxies as part of operations in the IE. Bar-Gil describes how Iran provides capability to its allies Hezbollah and Hamas to enable operations against Israel, and in some cases directs specific cyber operations, thus achieving the benefits of deniability while overcoming the disadvantages of physical dislocation from its target. Heide provides a very comprehensive description of the multitude of proxy channels adopted by Russia in its IO, noting the overt use of third-party organizations such as state media as well as a range of “grey” and “black” means that again confer plausible deniability. Seaboyer and Jolicoeur describe how the CCP exploits various levels of the Chinese and foreign media domestically and externally with a view to controlling its message. In addition, they outline how China is able to expand its technical capability for IO via manipulation of academic and industrial relationships, blurring the lines between civil and military research and development and industrial capacity.
As mentioned previously, the challenges addressed by this volume require consideration at several different levels of analysis. While the foregoing comments relate exclusively to the national strategic level, it would be wrong to ignore the fact that engagement with the IE occurs at the individual level, and thus effort must be expended in understanding the risks associated with individual actions and the contexts in which individual actors operate. Although the IE can serve as an environment that facilitates positive human interaction, as observed by Ducol et al. (2016), deviant behaviours, attitudes, and beliefs are of great concern and can lead to serious consequences for individuals, such as cyber-bullying, cyber-stalking (Hango, 2016), and fraud (Canadian Anti-Fraud Centre, 2021; Johnson, 2019), among others. Research suggests that certain types of individuals are particularly susceptible to being influenced in the IE. For instance, D'Agata and colleagues found links between lowered Honesty-Humility (one of the six factors of personality) and greater online disinhibition, engagement in risky online behaviours (D'Agata & Kwantes, 2020), and engagement with strangers online (D'Agata, Kwantes, & Holden, 2021). These are examples of behaviours that can increase not only one's exposure to adversaries and criminals, but also one's susceptibility to oversharing or behaving in unsafe ways online. Peter et al. (2021) found certain individuals, such as younger adults, to be more susceptible to belief in disinformation or conspiracy theories. Furthermore, psychological tendencies or needs seem to be influential in the IE; for instance, as noted in the chapters by Meharg and by Speckhard and Ellenberg, the need to belong, to connect with others, or to establish one's identity can promote engagement with strangers online. Moreover, for some, these needs may be met in the IE more so than in real-world settings.
For instance, research has found a link between heightened real-life social isolation, as well as social anxiety, and increased comfort with or reliance on online communication (e.g., Whaite et al., 2018; Prizant-Passal et al., 2016). Speckhard and Ellenberg found that, in extreme cases, such needs can result in individuals being radicalized, leading to even more serious outcomes such as engaging in illegal activity. More concerning still, these authors note that the sophistication that extremists and extremist organizations display in the IE makes their messaging particularly challenging to counter or dispel effectively.
How Should Deterrence Theory Change to Match the Challenges of the IE?
A number of the contributors to this volume have observed that classical models of deterrence require revision to address the realities of the early twenty-first century. As Jackson, as well as Leuprecht and Szeman, have pointed out, to deter effectively in the IE, Canada and its allies must update their deterrence theory and practice. The changes necessitating such a rethink are in large part bound up, as Cimbala and Lowther and Ankersen, for example, have pointed out, with changes within the IE itself. Ankersen's chapter includes the observation that what has changed are the "operant media through which and with which opponents communicate," while Cimbala and Lowther note in theirs that "the nuclear-cyber relationship . . . makes deterrence a much more complex task." Nevertheless, material presented in this volume provides some grounds for optimism that the fundamental aspects of deterrence, such as communication, credibility, and risk calculation, are broadly similar today to those of the immediate post-1945 period, and are likely to remain so; consequently, deterrence remains a possibility in the modern era. Stressing the importance of the non-physical elements of deterrence such as credibility and communication, Ankersen states that a "material bias" focused on, for example, weapons systems has diverted attention from the fact that "deterrence actually operates—has always operated—in the information environment." Thus, Ankersen sees contextual change in terms of the means, that is to say, the information technology that enables communication between the deterring parties.
Self-knowledge of vulnerability to threat is essential to building preparedness and resilience in anticipation of likely future attacks, as was stressed by Robinson in a paper that emphasized the requirement for “synchronised and systemic” (2019, p. 8) responses to adversary hybrid tactics. Similar to Ankersen, Robinson notes that while many of the threats facing NATO nations are not new, the means that an adversary might employ, such as cyber, are. Thus, Robinson emphasizes the need for deterrence theory and strategies to address such change, and notes that new approaches, including non-kinetic options, have developed with a view to deterring hybrid threats.
Lastly, Ankersen’s comments align well with many of the other authors in this volume with regard to the likely benefits of dissuasion through defence in a “deterrence by denial” approach. The framework of cyber threats presented by Ankersen provides a useful means for structuring an integrated defensive posture across all domains and environments, based on an understanding of the various threat categories. Many authors in this volume have stressed the importance of promoting resilience in order to be positioned to engage in deterrence by denial. Jackson’s chapter includes the observation that doing so “not only mitigates harmful effects of hostile influence, but also changes adversaries’ cost-benefit analyses by denying them (technical or strategic/political) benefits.” Jackson adds that such efforts may need to be carried out in coordination with governments, private actors, and civilians.
Deterrence is a form of influence operation in that it seeks to achieve psychological effects in a decision maker with a view to guiding that individual to behave in a certain way. Smith (2005) summed up the basis for all deterrence in noting that, "In short, the real target of someone wishing to deter is the mind of the opposing decision maker" (p. 190). Deterrence theory has seen regular revision in the light of real-world contextual changes, for example, the end of the Cold War. It has also adapted to take account of research that used observational studies to examine the fundamental assumptions of the theory. For example, Jervis (1985) lamented the fact that an examination of case studies demonstrated that "participants almost never have a good understanding of each other's perspective, goals or specific actions. Signals that seem clear to the sender are missed or misinterpreted by the receiver, actions meant to convey one impression often leave quite a different one" (p. 1). Jervis further stated that classical deterrence theory was flawed to the extent that it relied on deductive logic rather than an examination of real-world experience, and that it was "based on the premise that people are highly rational" (p. 1). The aim of Jervis and colleagues (1985) was to strengthen the theory and its application with an improved understanding of, among other things, how, in the real world, officials and institutions of state process information, how humans make decisions, and the cognitive and other biases that may undermine those processes. Thus, it is to be hoped that adaptation to the realities of the modern IE will represent a continuation of a process of evolution rather than a major transformation.
A very good example of an adaptation of deterrence theory that appears well-suited to the challenges of IE-mediated deterrence was described in Wilner’s chapter in the context of counterterrorism. Wilner describes the development of a novel theoretical approach based upon deterrence by de-legitimization that “weighs on an adversary’s normative or ideological perspective” with a view to undermining the logic upon which their use of terror tactics is based by “targeting and degrading the ideological motivation that guides support for and participation in terrorism.” This raises an important issue—namely, the development of a sound understanding of an adversary (as well as that adversary’s supporters and potential supporters) to see how justification for their actions is achieved, and consequently how it might be undermined. Wilner’s chapter extends the application of the notion of deterrence by de-legitimization by applying it to the issue of deterrence in the IE, advocating specifically for the establishment of international norms for actors’ behaviour within the IE, for more publicity for breaches of acceptable behaviour, and, lastly, for proactive efforts within society to strengthen shared basic principles with a view to achieving collective resilience. Citing Doorn and Brinkel, Wilner stresses the importance of building and enabling trust and credibility within our societies in order to establish “societal counterweights to malicious propaganda and disinformation campaigns.” As noted in Jackson’s chapter, Canada’s responses to disinformation are broad, and future research is needed to better understand how these responses could be better refined as well as tailored to different situations.
Understanding Adversaries in Order to Deter Them
In their chapter, Cimbala and Lowther note that part of the process of adapting deterrence to the modern strategic environment is a recognition that there is a requirement for “tailored” approaches, based on an in-depth understanding of the specific adversary to be deterred. Ankersen likewise stresses the importance of the development of an improved appreciation of the adversary and, citing Jervis, notes the importance of understanding how potential adversaries view the world in order to understand their behaviour and, ultimately, their intentions. Moreover, Ankersen emphasizes the fundamental psychological nature of deterrence, quoting Filipidou (2020) and Jervis et al. (1985), who refer to it, respectively, as “a state of mind” and “a psychological relationship.” Perhaps the essential point in Ankersen’s chapter is that, by focusing on the intended effects of adversary action, it should be possible to discern these actors’ goals and therefore how they would perceive the likely costs and benefits of their actions. The contention is that the apparent “uniqueness” of cyber, which Ankersen argues is “overstated” and is based on a focus on means and capability, can be bypassed, thereby allowing a more integrated perspective of the threat and enabling a comprehensive view of deterrence that includes cyber. This, according to Ankersen, is essential to deterrence via threat of reprisal, since “without an appreciation for what the intended effects or benefits of an attack are, it is difficult to calibrate the costs necessary to dissuade an opponent from carrying it out.”
Cimbala and Lowther point out that nuclear crisis management is “both a competitive and a co-operative endeavour” and emphasize that communication is essential to enable each party to demonstrate its appreciation of a situation to the other. Seen in this way, deterrence is reliant on the development and maintenance of an effective relationship between the parties based on clear communication. In addition, they underline the importance of each side developing a clear and accurate understanding of their adversary’s intentions and capabilities upon which to base risk assessment and course-of-action decision making. These observations align with early iterations of deterrence theory. For example, Schelling (1966) pointed out that “a hot line can help to improvise arms control in a crisis: but there is a more pervasive dialogue about arms control all the time between the US and the Soviet Union. . . . I have in mind . . . the continuous process by which the USSR and the US interpret each other’s intentions and convey their own” (p. 264). Cimbala and Lowther’s focus is on nuclear crisis management, but these elements are central to all deterrence relationships, whether in a crisis or in a steady state.
In their chapter, Schleifer and Ansbacher provide their perspective on the deterrent relationship between Israel and Hamas. They judge that Hamas has achieved an appropriate appreciation of Israeli decision makers’ perception of risk and is therefore managing to deter them by shaping public opinion with respect to the acceptability or otherwise of the probable costs of specific military action. Their chapter provides a series of examples of how, in their opinion, a combination of terror tactics, disinformation, and influencing international opinion has enabled Hamas to achieve this deterrence despite Israel’s military advantages.
Importantly, the chapters in section 1 of this volume emphasize the critical element of credibility in deterrence communication. This comprises, at least, the extent to which the party receiving the deterrent message believes that their adversary has both the capability claimed and the intention and will to use that capability in the circumstances specified. This is, in turn, dependent on issues such as the credibility of the source of the deterrent message and the effectiveness of the transmission of that message, neither of which can be assumed. Even heads of state can fall foul of this basic requirement. For example, as Keegan (2005) reminds us, by 2002 Saddam Hussein was “a victim of his own fictions and evasions. Because of his systematic mendacity, he had lost the capacity to persuade anyone that he was telling the truth” (p. 113).
Understanding Situations
More than one author in this volume touched on the critical issue of protagonists' ability to achieve and maintain what Endsley (e.g., 1995) and others have called "situation awareness" and, particularly in the case of Cimbala and Lowther's chapter, the dangers of protagonists failing to maintain such an appreciation. The implication is that the increasing speed and complexity of situations mediated in the IE render the achievement and maintenance of situation awareness extremely difficult and thus increase the risk of misdiagnosis, miscalculation, and human error. In particular, they provide several examples of how cyber operations have the potential, deliberately or inadvertently, to skew or undermine an opponent's understanding, as may be the case, for example, through the manipulation of information within an adversary's C4ISR systems, through disruption of their internal communications, or through direct interference with the systems controlling the weapons themselves. A critical element of Cimbala and Lowther's argument is that, having lost situational awareness, participants could feel increased pressure to take pre-emptive action.
Many of the situational characteristics described by the authors in this volume, and in particular the crisis-management situations discussed by Cimbala and Lowther, such as limited time, situational ambiguity, and changing conditions, are in line with the applied settings studied by psychologists interested in "naturalistic decision making" (NDM), notably Klein (e.g., 2008). NDM studies of fire commanders, process control operators, surgical teams, and military commanders, to name a few, demonstrated, much as Jervis observed, that people placed in such situations tend not to conform to the best practices predicted by rational decision theory. Rather, in time-compressed emergency environments, experts reported rapidly drawing on prior experience and knowledge to categorize the situation and generate as adequate a course of action as possible. Cimbala and Lowther make a similar observation, citing the work of March and Simon. Indeed, Simon (e.g., 1978) had, as part of the development of a theory of bounded rationality in the 1950s, dubbed such decision making "satisficing," that is, finding a solution that is satisfactory and sufficient relative to the decision maker's level of aspiration. Cimbala and Lowther quite rightly make the chilling observation that in the context of nuclear crisis management, there is simply no margin for error. In view of the foregoing, there is no suggestion that what has been described is the "best" way to make decisions; rather, the implication is that under extreme time pressure, with a need to respond to stay ahead of a dynamic situation, it may be the only possible way to respond within the capacity of human decision makers. One useful conclusion of the NDM work is that, in order to promote good decision making, we should focus on optimizing, as much as possible, the conditions under which decisions are made.
Their work strongly suggests that a focus on achievement and maintenance of situational awareness, a high-functioning command team and organization to support the decision maker, and efficient communications and coordination are key. Cimbala and Lowther’s work shows us a variety of ways in which cyber means might be used by an adversary to undermine these critical structures and processes. The implication of the NDM research is that, as well as hardening against cyber intrusion, organizations should seek to optimize the decision-making context, for example, through training, improved organizational design, and, if available, decision-support systems.
Cimbala and Lowther point out that currently we can only “speculate about the impact of cyber-attacks and efforts to inject technical disinformation into systems responsible for nuclear crisis management.” Nevertheless, their chapter provides a range of scenarios that could be used in modelling, experimentation, and simulation with a view to achieving an improved appreciation of the demands of such situations. Such work could provide the basis for improved preparation and potentially training and education for decision makers and their teams. In addition, such an approach offers some hope that we might achieve some degree of deterrence by denial, hardening our critical systems and augmenting the resilience of our people and organizations with a view to avoiding crisis escalation.
How Can Canada and Its Allies Achieve Increased Resilience?
A number of the chapters in this volume have implications for how states might achieve increased resilience. The IE has been leveraged by criminals and adversaries now for many years in an effort to influence, intimidate, manipulate, and radicalize individuals. Multiple streams of research exist in this domain to better understand what makes individuals vulnerable to others’ manipulations in the IE, as well as strategies or techniques that can be employed to reduce the effects of such efforts. Furthermore, understanding the motivations and techniques employed by our adversaries can help in the development of methods to deter such actions in the IE. In addition, as discussed in the chapter by Porter, an examination of the online influence campaigns employed by our adversaries is needed in order to better understand how to build resilience in our own personnel and citizens and to engage in deterrence by denial.
The authors in this volume provide several recommendations for specific interventions to promote resilience, including technological developments to aid identification of adversary IO and the hardening of critical civilian and military systems. Bar-Gil and Heide both favour augmenting such tools with a range of non-technical interventions, for example, training and education. Heide proposes that both the general public and the media would benefit from the ability to identify malicious IO more effectively, and Bar-Gil advocates for training military and civilian audiences alike in critical thinking about information, especially that which is presented in social media. In fact, there is a great deal of research, particularly in the field of psychology, that highlights the benefits of critical thinking, showing, for example, that analytical thinking is associated with lowered belief in disinformation (e.g., Bronstein et al., 2019; D'Agata, Kwantes, Peter, & Vallikanthan, 2021). Heide points out that adversaries benefit from ordinary persons unintentionally spreading their falsehoods as misinformation, and consequently invest time and energy in its creation and dissemination through a broad range of media, both state-sponsored and commercial, for example, TV, radio, and fake accounts on social media platforms. Bar-Gil stresses the potential for limiting the success of such tactics through the promotion of "digital literacy," efforts that have been shown to be successful in limiting the spread of false messages. In addition, both authors address the controversial topic of governments restricting access to specific media within their own nations, with Bar-Gil discussing the potential use of specific instruments under Israeli law, and Heide advocating the blocking of Western audiences' access to news outlets spreading propaganda and disinformation and the cutting of funding sources for organizations involved in malicious IO.
With respect to developing strong counter-narratives to adversary influence operations and disinformation, we need to address the question of when our own strategic communications might be considered equivalent to an adversary’s propaganda. Some authors have even attempted to rehabilitate the term “propaganda.” Cull (2015) argues that most propaganda is, at base, an attempt to hinder the advance of an opposing idea, and as such could conceivably be considered defensive “counter propaganda.” Employing the same term, Taylor (2002) expressed the view that “propaganda”3 is required “on behalf of . . . peace” (p. 439).
At the tactical level, Cull describes actions to counter a specific message, citing the work of the US Information Agency in identifying and debunking Soviet disinformation rumours in the 1980s. At the strategic level, Cull sees “a communications policy” (2015, p. 3) aimed at adversary propaganda, for example, the US information campaign during the Cold War and the British foreign-language broadcasts of the 1930s intended to counter totalitarian propaganda. Interestingly, Cull also notes that, “In our own time China’s large scale spending on cultural outreach and international broadcasting is seen by Beijing as a corrective to the western bias of global media outlets” (p. 3), and as such it is, in Beijing’s eyes, essentially a counter-propaganda exercise. To this we could doubtless add China’s construction of a “golden shield” containing and protecting “an internet with Chinese characteristics” (Strittmatter, 2018, p. 79) and enabling near total control of the information that Chinese citizens can access.
How Might Canada and Its Allies Respond?
The chapters by Bar-Gil and Heide present proposals for achieving deterrence in the face of the threats they describe. As a general point, both authors advocate an approach that can be characterized as “deterrence by denial,” based on the achievement of high levels of resilience in the states, institutions, and systems discussed. Moreover, in advocating an approach based on proactive strategic communications, Heide is, in parallel, proposing a form of pre-emption in the IE. This, it is suggested, is important to ensure that audiences are presented with “truthful accounts” before being exposed to an adversary’s disinformation, which Heide notes may be harder for individuals to discount once internalized.
To begin to achieve the necessary resilience, Heide stresses that Canada needs to develop strong narratives, tailored to specific audiences, that explain “what defines Canada, its beliefs, and its actions.” To this end, Heide proposes that Canada establish an always-active strategic communications capability that deters adversary IO in a pre-emptive fashion, combining monitoring and analysis of adversary messaging with the development and dissemination of Canadian narratives.
The proposed developments outlined above, as well as others described in detail in the individual chapters, have the potential both to bolster resilience and to harden Western societies against the malign information activities of adversary powers. Nevertheless, in formulating policy and doctrine for such a capability, many questions would need to be addressed, not least those in the moral and ethical spheres. Indeed, it will be essential to be prepared to address the suggestion that, in responding within the IE, Western nations risk constructing a mirror image of the structures and tactics they seek to counter. Certainly, the development of information-related capability by governments and militaries in the West is sometimes treated with suspicion by domestic audiences. For example, Galeotti (2017) suggests that strategic communications “could perhaps be glibly described as ‘propaganda we like’” (p. 1). Taylor (2002) similarly points out that there is “an entire range of euphemisms” (p. 437) within which we can assume “strategic communications” would figure. Taylor expressed the view that democracies “tend to delude themselves that they are not in the business of propaganda” (p. 437), assuming propaganda to consist of untruths and to be conducted only by undemocratic parties. The crux of Taylor’s paper was that at that time, as now, “when certain value systems are under attack . . . they . . . need to be defended . . . by a reaffirmation of the values that were being challenged” (pp. 440–1). Moreover, Taylor argued that this should be a job for governments, owing to a concern that “the free, democratic media of any country have become an unreliable mirror of the true nature of that society by virtue of the increasingly commercialised environment in which they now operate” (p. 439).
Both Heide and Bar-Gil recommend the development of analytic capability aimed at understanding adversary IO aims and approaches, with a view to identifying domestic capability gaps and developing countermeasures. For example, Bar-Gil notes that the Internet and social media present opportunities for the collection of relevant open-source intelligence (OSINT), which has the potential to underpin proactive responses. Bar-Gil provides the example of the Bellingcat investigations into the shooting down of Malaysia Airlines Flight 17 over Ukraine. Moreover, Bar-Gil describes how OSINT based on an adversary’s social media presence has provided the foundation for responses both within the IE and, in a cross-domain response, through physical action.
One area that perhaps received less attention is the notion of cross-domain operations, or cross-domain deterrence, as a means to respond to, or get ahead of, hostile information activities. Cull, for example, emphasizes that “not all propaganda is best countered in the communications sphere . . . [and that] addressing the source of the propaganda can prove an effective strategy for counter propaganda” (2015, p. 14). Illustrating that, when conducted by unscrupulous state actors, this can involve drastic and illegal measures, Cull provides the example of the assassination of a Bulgarian émigré journalist by Bulgarian operatives. Bar-Gil provides examples of the use of physical attack in response to cyber activities, noting that these attacks were intended to degrade the adversary’s IE capability and simultaneously deliver a deterrent message. Such examples highlight the need for governments to examine the ethics and proportionality of adopting cross-domain tactics.
With this in mind, democratic nations might do well to ask to what extent the proposals put forward by Bar-Gil and Heide require the establishment of completely new capability, or whether what is needed is, in part, the re-establishment of capability that has seen under-investment in recent times. Taylor noted that reductions in US public diplomacy in the 1990s, such as cuts to Voice of America broadcasts to the Middle East, had led to “an information vacuum which was then [filled] by the morass of lies, rumours and disinformation generated by its adversaries” (p. 439). In a similar vein, in a 2005 article published on the BBC website announcing cuts to World Service broadcasts in eight languages, including Polish and Hungarian, the head of its Polish-language service was quoted as saying that, while they found the BBC’s position on Europe “somewhat optimistic,” they acknowledged that central Europe “is not the greatest geopolitical need at the moment” (“BBC East Europe voices silenced,” 2005). Clearly, we have the benefit of hindsight, having seen the increasing tensions in central and eastern Europe in recent years and the rise of quasi-authoritarianism in some quarters. The conclusion must be that the specific focus of counter-adversary IO will shift over time, and that the capability we build to support such operations must possess the flexibility to address new requirements as they arise. The chapters in this book have demonstrated enough basic similarities in the techniques employed by a range of potential adversaries to suggest that such a capability could be created, although this does not resolve the problem that area expertise cannot be created in short order.
The chapters in this book have provided a range of useful recommendations for enhancing democratic nations’ capacity to operate in the IE, which might broadly be characterized as developments in analytic capability and in proactive information capabilities. Such efforts should benefit from the fact that similar capability has existed in the past and that lessons learned from the experience of the twentieth century are available. The exception may well be, as Ankersen notes, the substantial changes in the media, systems, and organizations that constitute the modern IE. The arms race in communications and information technology is unlikely to slow soon, and it is the nations that can adapt to the new environment and harness the opportunities it presents that will achieve their strategic goals and come out on top in the information battle.
Leuprecht and Szeman suggest that Canada may not have sufficient resources to carry out “persistent engagement” and should instead partner with the United States. Jackson notes that Canada’s “attempts to deter strategic disinformation have included accelerated efforts to strengthen cyber defence and resilience and to develop legislation and norms to hamper disinformation efforts, especially during elections. More generally, there have been efforts to increase co-operation and to share more information (about disinformation) to ‘deny’ actors (further) access at the domestic and international levels.”
Final Thoughts
This volume has covered a very wide range of topics in a preliminary examination of the risk presented by adversary activities in the IE and of the methods through which democratic nations might respond. We have seen a general consensus that the IE has rendered geographic boundaries less relevant to malign actors, who are able to exploit connectivity to conduct operations against the West. Our networked environment also affords these adversaries the opportunity to achieve their strategic intentions incrementally, without crossing the threshold that would trigger a more robust response. There is also an asymmetry in the acceptability of methods: the West is rightly much less ready to use methods that would be considered illegal and unethical to achieve its aims. Thus, we are assailed by a constant barrage of disinformation that has the potential to erode the credibility of, and citizens’ trust in, the essential institutions of state and society. Meanwhile, the same capabilities are targeted internally at the populations of nations like China and North Korea by governments that simultaneously exert near total control over the information their people can access.
A challenge facing defence departments in the IE is ensuring that operations are targeted toward our adversaries while also ensuring that no harm comes to domestic populations in the process. The IE allows individuals and groups to disguise their true identities when operating online, making it more difficult to identify them and to prevent them from continuing to engage in nefarious activities against our armed forces and citizens. In addition, it is extremely difficult to fully measure the scope and depth of targeted online campaigns. As discussed by Porter and colleagues, techniques aimed at assessing attitudes indirectly offer one approach to quantifying that scope and depth; however, more sophisticated techniques, perhaps based on cutting-edge technologies such as machine learning, may be needed. A challenge facing defence analysts and researchers in particular is the inability to directly study and understand our adversaries. As discussed in the chapter by Speckhard and Ellenberg, work on defectors can be enlightening, but it is not sufficient in its own right. Continued work in this area, focused on creative ways to assess and understand adversaries and their campaigns, is needed.
Research aimed at identifying what makes individuals vulnerable to being influenced or radicalized online can be key to the development of strategies and techniques to reduce such vulnerabilities. Moreover, such work can help promote resilience in our own personnel and citizens by identifying approaches that help individuals more thoroughly consider and examine information online before acting on it hastily. In addition, research in this domain has the potential to inform areas such as public affairs as to the types of messaging that could be effective in promoting resilience against influence and disinformation in the IE. Finally, as mentioned, evaluating the effectiveness of adversary online campaigns might help identify means to deter similar campaigns in the future. More research on deterrence in the IE is needed, and a move toward a more integrated approach with other areas of government may be required to better capture the effects, scope, and depth of our adversaries’ actions in the IE and thereby deter future attacks.
A repeated theme in this volume has been the recognition that if potential adversaries are able to sidestep our attempts to deter their activities through threat of reprisal, then we need to expand our repertoire of deterrence methods to achieve deterrence by denial. A variety of proposals have been made throughout the book that, taken in aggregate, amount to the beginnings of a recipe for how Canada and its allies can begin to reinforce the essential resilience of our societies and state institutions, built up over hundreds of years, and in so doing face up to the new authoritarian regimes that seek to undermine us. For example, training for military personnel and education for the general population are required to enable them to navigate the IE safely and securely; training and simulation will help our civil and military crisis responders and decision makers respond in the face of adversary escalation; increased understanding of adversaries will offer the capacity to anticipate their stratagems, achieve early warning, and counter their propaganda; and an improved understanding of the structure of their ideology will enable its de-legitimization in the eyes of their own populations and the wider world. Perhaps most importantly, there is an undercurrent of confidence that the West has prevailed in the face of opposing narratives in the past and that it can do so again by building an information infrastructure to counter adversary narratives and present a strong alternative.
Notes
- 1 These stations were particularly threatening to the Soviet authorities since they attempted to provide news about events in the targeted nations based on local sources (e.g., Kind-Kovács, 2013).
- 2 For example, Strittmatter (2018) observes that “China’s attempt to censor the web, as the former US president Bill Clinton joked, was like ‘trying to nail Jell-O to the wall.’ That was in the year 2000. The Chinese listened to the prophecy, and swiftly built a new great wall: the Great Firewall” (p. 61).
- 3 It might be argued that, in part, Taylor’s paper is an attempt to rehabilitate the term “propaganda,” which, it is argued, is essential in defence of democratic values—in Taylor’s terms, “democratic propaganda.”
References
BBC East Europe voices silenced. (2005, 21 December). BBC News. http://news.bbc.co.uk/2/hi/europe/4550102.stm
Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–39. https://doi.org/10.1177/0267323118760317
Bronstein, M. V., Pennycook, G., Bear, A., Rand, D. G., & Cannon, T. D. (2019). Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking. Journal of Applied Research in Memory and Cognition, 8(1), 108–17.
Canadian Anti-Fraud Centre. (2021). Homepage. https://www.antifraudcentre-centreantifraude.ca/index-eng.htm
Cull, N. J. (2015). Counter propaganda: Cases from US public diplomacy and beyond. Legatum Institute.
D’Agata, M. T., & Kwantes, P. J. (2020). Personality factors predicting disinhibited and risky online behaviors. Journal of Individual Differences, 41(4), 199–206. https://doi.org/10.1027/1614-0001/a000321
D’Agata, M. T., Kwantes, P. J., & Holden, R. R. (2021). Psychological factors related to self-disclosure and relationship formation in the online environment. Personal Relationships, 28(2), 230–50. https://doi.org/10.1111/pere.12361
D’Agata, M., Kwantes, P., Peter, E., & Vallikanthan, J. (2021). Testing tactics to reduce belief in fake news in a North American sample. Defence Research and Development Canada, Scientific Letter, DRDC-RDDC-2021-L338.
Ducol, B., Bouchard, M., Davies, G., Ouellet, M., & Neudecker, C. (2016). Assessment of the state of knowledge: Connections between research on the social psychology of the Internet and violent extremism. TSAS: The Canadian Network for Research on Terrorism, Security, and Society.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32–64. https://doi.org/10.1518/001872095779049543
Galeotti, M. (2017, 22 February). “Propaganda needs to be clever, smart and efficient,” but Russian army’s “information troops” are not just propagandists. In Moscow’s Shadows. https://inmoscowsshadows.wordpress.com/2017/02/22/propaganda-needs-to-be-clever-smart-and-efficient-but-russian-armys-information-troops-are-not-just-propagandists/
Hango, D. W. (2016). Cyberbullying and cyberstalking among Internet users aged 15 to 29 in Canada. Insights on Canadian Society. Statistics Canada. https://www150.statcan.gc.ca/n1/pub/75-006-x/2016001/article/14693-eng.htm
Jervis, R. (1985). Introduction: Approach and assumptions. In Jervis, R., Lebow, R., & Stein, J. (Eds.), Psychology and deterrence (pp. 1–12). Johns Hopkins University Press.
Jervis, R., Lebow, R., & Stein, J. (1985). Psychology and deterrence. Johns Hopkins University Press.
Johnson, E. (2019, 20 January). TD Bank should have seen “red flags” as senior lost $732 K in romance scam, son says. CBC News. https://www.cbc.ca/news/canada/toronto/senior-wires-life-savings-through-td-bank-in-romance-scam-1.4980649
Keegan, J. (2005). The Iraq War. Vintage.
Kind-Kovács, F. (2013). Voices, letters, and literature through the Iron Curtain: Exiles and the (trans)mission of radio in the Cold War. Cold War History, 13(2), 193–219. https://doi.org/10.1080/14682745.2012.746666
Klein, G. (2008). Naturalistic decision making. Human Factors, 50(3), 456–60. https://doi.org/10.1518/001872008X288385
Lindsay, J. R., & Gartzke, E. (2019). Introduction: Cross-domain deterrence, from practice to theory. In E. Gartzke & J. R. Lindsay (Eds.), Cross-domain deterrence: Strategy in an era of complexity (pp. 1–25). Oxford University Press.
Liu, P. L., & Huang, L. V. (2020). Digital disinformation about COVID-19 and the third-person effect: Examining the channel differences and negative emotional outcomes. Cyberpsychology, Behavior, and Social Networking, 23(11), 789–93. https://doi.org/10.1089/cyber.2020.0363
March, J. G., & Simon, H. A. (1958). Organizations. John Wiley and Sons.
Peter, E., D’Agata, M., Kwantes, P., & Vallikanthan, J. (2021). Individual differences in susceptibility to disinformation. Scientific Report, DRDC-RDDC-2021-R114. Defence Research and Development Canada.
Prizant-Passal, S., Shechner, T., & Aderka, I. M. (2016). Social anxiety and Internet use—a meta-analysis: What do we know? What are we missing? Computers in Human Behavior, 62, 221–9. https://doi.org/10.1016/j.chb.2016.04.003
Robinson, E. (2019). Hybrid warfare and modern deterrence theory. Scientific Letter, DRDC-RDDC-2019-L184. Defence Research and Development Canada.
Schelling, T. C. (1966). Arms and influence. Yale University Press.
Simon, H. A. (1978, 8 December). Rational decision-making in business organizations. Nobel Memorial Lecture. http://www.nobelprize.org/uploads/2018/06/simon-lecture
Smith, R. (2005). The utility of force: The art of war in the modern world. Allen Lane.
Strittmatter, K. (2018). We have been harmonized: Life in China’s surveillance state. Custom House.
Taylor, P. M. (2002). Strategic communications or democratic propaganda? Journalism Studies, 3(3), 437–41. https://doi.org/10.1080/14616700220145641
US Department of State. (2020). GEC special report: Pillars of Russia’s disinformation and propaganda ecosystem. https://www.state.gov/wp-content/uploads/2020/08/Pillars-of-Russia%E2%80%99s-Disinformation-and-Propaganda-Ecosystem_08-04-20.pdf
Whaite, E. O., Shensa, A., Sidani, J. E., Colditz, J. B., & Primack, B. A. (2018). Social media use, personality characteristics, and social isolation among young adults in the United States. Personality and Individual Differences, 124, 45–50. https://doi.org/10.1016/j.paid.2017.10.030