How Institutional Cooperation Enables Intelligence Competition

This paper draws on material published in my new book, Age of Deception: Cybersecurity as Secret Statecraft (Cornell, 2025), used with permission of the publisher.

Jon R. Lindsay
Associate Professor, School of Cybersecurity and Privacy and the Sam Nunn School of International Affairs, Georgia Institute of Technology.
Cybersecurity analysis of network traffic. Intelligence operations in cyberspace rely on detecting, exploiting, and managing vulnerabilities within complex digital systems. Photo by Kristina Alexanderson (CC BY).

Explaining Success and Failure in Intelligence Operations

A century of technological progress and economic globalization has expanded the scope and scale of intelligence competition in world politics.[1] Accordingly, there is a burgeoning of international relations scholarship on political secrecy, intelligence, and covert action.[2]

This new focus on the strategic dynamics of secret statecraft complements and expands the traditional emphasis on the behavior of intelligence organizations and intelligence-policy relations in the intelligence studies field.[3] Yet there is still little work on the operational conditions that promote success (or failure) in intelligence campaigns, and how this may enable (or undermine) strategic advantage in political competition.[4] This gap contrasts starkly with the extensive literature on military effectiveness on the battlefield.[5]

Why Intelligence Depends on Institutions, Not Force

While traditional warfare is an extreme form of security competition, intelligence contests depend essentially on institutional cooperation. Intelligence works by subverting institutions that improve information and collective action, such as governments, organizations, cultural practices, and software systems.

Spies sneak into foreign societies or hostile organizations by pretending to be legitimate citizens or trusted colleagues. Spyware likewise infiltrates shared information systems by masquerading as legitimate software.

This is one reason why intelligence is not only prevalent in peacetime but also more challenging in war: Institutions thrive in peace but suffer in war.

War is a contest of strength and resolve in political anarchy, which tends to undermine the institutional conditions for intelligence. Intelligence, by contrast, is a contest of wits between deceivers who must cooperate to compete.[6]

Vulnerable Institutions and Clandestine Organization

Cover of the book Age of Deception: Cybersecurity as Secret Statecraft

My recent book, Age of Deception: Cybersecurity as Secret Statecraft (Cornell 2025), presents an institutional theory of intelligence performance. It argues that the successful operational implementation of secret statecraft (espionage, sabotage, subversion, counterintelligence), in any historical era regardless of the technological state of the art, depends on a combination of vulnerable institutions and clandestine organization.

The former refers to collective institutional resources that are shared between a deception specialist (i.e., intelligence attacker) and its target, while the latter refers to institutional resources that the deceiver can control autonomously. Vulnerable institutions provide the operating environment that a clandestine organization exploits.

When the monitoring and enforcement vulnerabilities of the institutional environment are exposed and a sophisticated organization can discreetly employ its clandestine capabilities, persistent intelligence access can be maintained. When institutions are less exposed, however, operations must become more complex to overcome access barriers; and when organizations are less sophisticated, there is greater risk of losing control of operations.

Technology Does Not Guarantee Intelligence Advantage

This view challenges a conventional wisdom prevalent in the policy discourse of cybersecurity, namely that technology gives attackers the advantage over defenders (i.e., cyberspace is offense dominant).[7]


Consider, for example, two Russian intelligence campaigns conducted nearly a century apart. In the early Cold War, the Soviet Committee for State Security (KGB) recruited a spy ring known as the “Cambridge Five,” the most famous of whom was Kim Philby.

Philby rose to the senior ranks of British intelligence, and he relied on good old-fashioned espionage tradecraft to betray numerous Western intelligence operations to the Soviets. After the Cold War, one of the KGB’s descendants was the Russian Foreign Intelligence Service (SVR).

The SVR hacked the SolarWinds company in 2019, using sophisticated technical methods to gain access to thousands of organizations and infiltrate more than a hundred of them. Yet while SolarWinds was hacked by an advanced persistent threat (APT), the human spy Philby was ultimately less advanced but far more persistent. We need an institutional theory of intelligence performance to explain why technology does not always ensure success.

Institutional Conditions of Intelligence Performance

If technology determines intelligence performance, then the more sophisticated SolarWinds campaign should have fared better. However, Philby maintained persistent access to his targets for decades, rather than mere months, and inflicted serious damage to Western intelligence operations, rather than simply raising awareness about supply-chain cybersecurity. Institutional conditions, rather than technological sophistication, determine the quality of intelligence performance.


The conditions of intelligence performance (summarized in the table below) were relevant long before the internet (e.g., for Kim Philby), and they will be relevant long after the quantum internet (should that ever be built). Philby exploited an overly trusting and thus highly exposed organizational culture in the early years of the Cold War, but he was also vulnerable to compromise through the antics of other members of the Cambridge Five.

The SVR had a more sophisticated clandestine organization for running collection operations than the young KGB, but it was also up against a much more robust cybersecurity and counterintelligence ecosystem. The KGB and SVR were able to conduct successful intelligence operations only as long as both conditions were met. Even then, intelligence performance had only an indirect impact on politics. When conditions changed, however, intelligence performance degraded.

Espionage Campaign   Operational Conditions                    Intelligence Performance
                     Technology  Institution  Organization     Persistence  Impact
Kim Philby           Low         Exposed      Dependent        Decades      Serious
SolarWinds           High        Secure       Sophisticated    Months       Moderate

Comparing intelligence performance in cases of Russian espionage

Operational Constraints in Cyber-Enabled Intelligence

This comparison of Kim Philby and the SolarWinds hack illustrates why a greater quantity of technical operations does not simply translate into a greater quality of intelligence. Public and private cyber defenses create access barriers and counterintelligence capacity that complicate offensive cyber operations.

Threat actors must deal with many challenges to conduct controlled infiltrations into hostile organizations to achieve specific strategic objectives, all while protecting costly investments in sensitive sources and methods from professional deception detectors. The organizational requirements for managing complex operations with complex risks, moreover, create barriers to entry for sophisticated intelligence operations against complex targets. Cyber-enabled intelligence operations require a lot more than a hacker in a hoodie in the basement. This implies that assumptions about asymmetric advantage in cyberspace are backwards: Nation-state spy agencies like the SVR and NSA have important advantages over relatively less capable targets.

In short, technology does not determine advantage by itself. The institutional conditions of secret statecraft are both socially and technically constructed. They have a path-dependent history, and they change as attackers and defenders make different choices. Institutional contingency is especially salient in cybersecurity because its enabling institutions are incredibly complex. Precisely because offensive advantages in cyberspace are so contingent, threat actors must be more sophisticated to manage contingencies. Because politics in the twenty-first century is constrained and enabled by an increasingly complex institutional environment, of which collective global cyberspace is the foremost symptom, there is more opportunity for deception than ever before.


Notes

[1] David V. Gioe et al., “Intelligence in the Cyber Era: Evolution or Revolution?,” Political Science Quarterly 135, no. 2 (2020): 191–224, https://doi.org/10.1002/polq.13031; Thomas Rid, “A Revolution in Intelligence,” in The New Makers of Modern Strategy: From the Ancient World to the Digital Age, ed. Hal Brands (Princeton University Press, 2023); Jon R. Lindsay, Age of Deception: Cybersecurity as Secret Statecraft (Cornell University Press, 2025).

[2] For a review see Allison Carnegie, “Secrecy in International Relations and Foreign Policy,” Annual Review of Political Science 24, no. 1 (2021): 213–33, https://doi.org/10.1146/annurev-polisci-041719-102430. See also Austin Carson, Secret Wars: Covert Conflict in International Politics (Princeton University Press, 2018); Lindsey A. O’Rourke, Covert Regime Change: America’s Secret Cold War (Cornell University Press, 2018); Michael Poznansky, In the Shadow of International Law: Covert Intervention in the Postwar World (Oxford University Press, 2020); Melissa M. Lee, Crippling Leviathan: How Foreign Subversion Weakens the State (Cornell University Press, 2020); Lennart Maschmeyer, Subversion: From Covert Operations to Cyber Conflict (Oxford University Press, 2024).

[3] Most of the IR literature on intelligence focuses on institutional interactions within states rather than strategic interactions between them, e.g., Richard K. Betts, Enemies of Intelligence: Knowledge and Power in American National Security (Columbia University Press, 2007); Joshua Rovner, Fixing the Facts: National Security and the Politics of Intelligence (Cornell University Press, 2011); Robert Jervis, Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War (Cornell University Press, 2011); Erik J. Dahl, Intelligence and Surprise Attack: Failure and Success from Pearl Harbor to 9/11 and Beyond (Georgetown University Press, 2013); Keren Yarhi-Milo, Knowing the Adversary: Leaders, Intelligence, and Assessment of Intentions in International Relations (Princeton University Press, 2014); Amy B. Zegart, Spies, Lies, and Algorithms: The History and Future of American Intelligence (Princeton University Press, 2022).

[4] Important exceptions include Lennart Maschmeyer, “The Subversive Trilemma: Why Cyber Operations Fall Short of Expectations,” International Security 46, no. 2 (2021): 51–90; Jennifer E. Sims, Decision Advantage: Intelligence in International Politics from the Spanish Armada to Cyberwar (Oxford University Press, 2022); Joshua Rovner, “Theory of Sabotage,” Études françaises de renseignement et de cyber (Paris cedex 14) 1, no. 1 (2023): 139–53.

[5] Martin Van Creveld, Fighting Power: German and US Army Performance, 1939-1945 (Greenwood Press, 1982); Allan R. Millett et al., “The Effectiveness of Military Organizations,” International Security 11, no. 1 (1986): 37–71; Stephen D. Biddle, Military Power: Explaining Victory and Defeat in Modern Battle (Princeton University Press, 2004); Risa A. Brooks and Elizabeth A. Stanley, eds., Creating Military Power: The Sources of Military Effectiveness (Stanford University Press, 2007); Caitlin Talmadge, The Dictator’s Army: Battlefield Effectiveness in Authoritarian Regimes (Cornell University Press, 2015); Ryan Grauer, Commanding Military Power: Organizing for Victory and Defeat on the Battlefield (Cambridge University Press, 2016); Jon R. Lindsay, Information Technology and Military Power (Cornell University Press, 2020); Kendrick Kuo, “Dangerous Changes: When Military Innovation Harms Combat Effectiveness,” International Security 47, no. 2 (2022): 48–87, https://doi.org/10.1162/isec_a_00446.

[6] On the general logic of deception as cooperation subversion see Erving Goffman, Strategic Interaction (University of Pennsylvania Press, 1969); David Shulman, From Hire to Liar: The Role of Deception in the Workplace (Cornell University Press, 2006); Jon R. Lindsay, “Restrained by Design: The Political Economy of Cybersecurity,” Digital Policy, Regulation and Governance 19, no. 6 (2017): 493–514; Lennart Maschmeyer, “Subversion, Cyber Operations and Reverse Structural Power in World Politics,” European Journal of International Relations 29, no. 1 (2022): 79–103; Nadiya Kostyuk and Susan Landau, “Dueling Over DUAL_EC_DRBG: The Consequences of Corrupting a Cryptographic Standardization Process,” Harvard National Security Journal 13, no. 2 (2022): 224–84.

[7] Keir Lieber, “The Offense-Defense Balance and Cyber Warfare,” in Cyber Analogies, ed. Emily O. Goldman and John Arquilla (Naval Postgraduate School, 2014); Rebecca Slayton, “What Is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment,” International Security 41, no. 3 (2017): 72–109; Jason Healey, “Understanding the Offense’s Systemwide Advantage in Cyberspace,” Lawfare, December 22, 2021, https://www.lawfaremedia.org/article/understanding-offenses-systemwide-advantage-cyberspace.
