Over the past year, details have emerged about the spectacular and deadly implosion, from 2009 to 2013, of the covert communications system the CIA used to stay in touch with intelligence assets around the world. According to public reporting, the agency had contractors build a series of innocuous-seeming websites through which its in-country assets could communicate with their handlers. The sites, however, shared similar coding and technical elements, which enabled Iranian intelligence to simply Google for the complete list of sites and rapidly uncover their interconnections and their links back to the CIA itself. What can we learn from this incident about our online security and privacy, and what lessons might it teach us about the need to return to the previous era’s view that there is no such thing as truly secure communication? And what might those lessons offer the rest of society?
The collapse of the CIA’s web-based communications platform offers a textbook example of overreliance on, and naïve trust in, the protections and security of the online world. Each day we blindly throw ourselves into the digital ether, naively trusting that encryption, hardened servers, secured websites and databases, good cybersecurity practices, and good intentions from website owners and infrastructure vendors globally will keep us and our private information safe. We see only calm waters and blue skies, not the icebergs lurking beneath the surface.
The online world is so easy to use that we often forget both how fragile it is and how dangerous. We happily type our credit card details into some website somewhere in the world without stopping to ask how well coded that site is or whether its operators might have left a database exposed along the way. Our doctors upload our medical records to “secure” websites run by third-party contractors that look like a third grader wrote them. Our city governments store their records in software systems that were obsolete 15 years ago and haven’t had a security patch in 20.
All of this is hidden from the casual observer. We just see a friendly website beckoning us inside, not the frightening security nightmare behind it.
Few of the world’s netizens understand basic digital security, and so beyond looking for the little lock icon in the browser’s URL bar, they are largely unaware of the digital dangers awaiting them. Even the programmer class who write those websites are all too often unfamiliar with even the most basic security practices.
Most universities and coding bootcamps treat security as a topic to be taught, rather than the most fundamental of principles to be infused throughout every lesson.
Making matters worse, Silicon Valley works tirelessly to retrain society at large to believe that sharing is good. Whether it’s a photograph of your breakfast or the address and security procedures of a top secret SCIF, we are taught to just put it all out there for the world to see. Privacy, they tell us, is a thing of the past.
The result is that the young born-digital generation that is entering the intelligence community and its contractors, along with the Silicon Valley technical experts they hire to assist them, lack even the most basic semblance of an understanding of how to navigate the security risks of the digital world. Captivated by the “shiny new object” that is technology, they tout technological solutions as the answer to every problem. Led astray by Silicon Valley’s siren song of technological utopia, they are blind to the dangerous realities underlying the tools and techniques they tout.
Most naively, they see technology as something fundamentally new and unlike anything that has come to pass in our history, rather than understanding technology as merely the current incarnation of human processes that have existed since the dawn of our species and which will continue to be reinvented for each generation to come.
Seen through this lens, email accessed over an HTTPS connection is not some fundamentally new form of securely connecting individuals, but rather merely a new modality that offers new affordances such as reduced cost and increased speed and greater ease of protection. Yet, as a modality, each of those affordances brings with it costs. The changes in cost and speed are also reflected in the changes to cost and speed with which adversaries can intercept and monitor those communications. The greater ease of protection compared with traditional cryptography and covert communications comes at the cost of centralizing the attack surface for adversaries and making it easier for them to decode those communications through vulnerabilities and compromises.
To non-security experts, encryption can often appear to be an impenetrable and absolute shield protecting communications from any adversary. In reality, even if the encryption algorithm itself cannot yet be compromised, the endpoints most certainly can. An encrypted mobile chat app is of little use if the phone has been compromised with keystroke-tracing and screen-capturing malware that intercepts those communications before and after the encryption layer. Moreover, encryption protects only the content exchanged with a given IP address, not the fact that you connected to it. In many cases all an adversary needs to know is who is connecting to a given set of CIA-owned IP addresses, even without knowing what was actually exchanged with those addresses. And a single mistake or sloppy registration of those IP addresses and domains, or of their shell company owners, is all it takes to connect them back to their real CIA ownership.
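The metadata point above can be made concrete with a minimal sketch. The IP addresses, record format, and watchlist below are all invented for illustration: an adversary who never decrypts a single byte can still identify everyone who contacted a set of suspect servers, purely from connection logs.

```python
# Sketch: an adversary who cannot decrypt traffic can still flag users by
# matching connection metadata against a watchlist of suspect server IPs.
# All addresses here are hypothetical documentation/example IPs.

# Watchlist of IP addresses believed to belong to covert-comms servers.
SUSPECT_IPS = {"203.0.113.10", "203.0.113.24"}

# Netflow-style records: (source_ip, destination_ip, bytes_transferred).
# The payloads are encrypted; only this metadata is visible on the wire.
flows = [
    ("198.51.100.7",  "93.184.216.34", 48213),  # ordinary browsing
    ("198.51.100.42", "203.0.113.10",  1932),   # contact with a suspect site
    ("198.51.100.7",  "203.0.113.24",  880),
]

def flag_sources(flows, watchlist):
    """Return the set of source IPs that ever contacted a watchlisted server."""
    return {src for src, dst, _ in flows if dst in watchlist}

# The content was never decrypted, yet both users are now identified.
print(sorted(flag_sources(flows, SUSPECT_IPS)))
```

This is, in essence, what made the googleable website network so dangerous: the traffic analysis requires no cryptanalysis at all.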
In the intelligence world it is also important to remember that sources must be protected for life. This poses unique challenges not confronted by almost any other sector, especially when using technologies like encryption, which have very finite shelf lives. Any given encryption algorithm can be assumed to have a fixed lifespan in which forcibly decrypting it is intractable with current computing power. Even the most secure encryption algorithms of today will likely be readily decrypted at some point in the future. Vulnerabilities and weaknesses and major advances in hardware and algorithmic attacks can dramatically shorten these protection lifespans.
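The shelf-life argument can be illustrated with a deliberately toy cipher. The 16-bit XOR "encryption" below is not any real algorithm; it simply shows that a cipher's protection is the cost of exhausting its key space, and that when compute catches up to key size, recovery becomes instant. Real keys are vastly larger, but the economics follow the same curve, just shifted in time.

```python
# Toy illustration of encryption's finite shelf life: a deliberately tiny
# 16-bit XOR "cipher" is brute forced in a fraction of a second. This is a
# pedagogical stand-in, not a real algorithm.
import itertools

def xor_encrypt(plaintext, key):
    """Repeating-key XOR; encryption and decryption are the same operation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def brute_force(ciphertext, known_prefix):
    """Try every 2-byte key; accept the one whose plaintext starts as expected."""
    for key in itertools.product(range(256), repeat=2):
        candidate = xor_encrypt(ciphertext, bytes(key))
        if candidate.startswith(known_prefix):
            return candidate
    return None

secret = b"AGENT LIST FOLLOWS"
ciphertext = xor_encrypt(secret, b"\x5a\xc3")

# All 65,536 keys are searched almost instantly.
recovered = brute_force(ciphertext, b"AGENT")
print(recovered)
```

A 256-bit key pushes the same search far beyond present computing power, but "far beyond present computing power" is a statement about today, which is precisely the problem for a secret that must survive a source's lifetime.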
Why does this matter? If a Fortune 50 company’s encrypted business plan from today is finally decrypted 20 years from now, it will likely be little more than a historical footnote of little interest. In contrast, imagine a repressive regime that in 2018 obtains an encrypted list of all of the CIA’s double agents inside its government. Twenty years from now, a new algorithmic technique, vulnerability, or hardware advance may finally allow it to brute-force decrypt that file. Even 20 years later, that government will likely round up and execute all of those double agents, and potentially even their families.
At present there is no readily available cryptographic technology that can guarantee to keep a communication unreadable in perpetuity, and there is a very real risk that even today’s best algorithms will not last the natural lifespan of today’s intelligence sources. Given the routineness with which governments around the world hoover up digital communications and store them until they can be decrypted in the future, it should be assumed that all encrypted communications channels offer only limited-term obfuscation at best, and that any secrets, including identities, revealed within them will be compromised after some period of time.
Seen in this light, encryption should be viewed not as an absolute protection for covert communications, but rather as a characteristic that helps secure communications blend into the background. Most consumer-facing websites today use HTTPS. Cloaking a covert communications site in HTTPS should be viewed not as absolutely protecting that site’s communications, but as ensuring that its traffic blends into the background of all HTTPS website traffic in that country.
Such distinctions are lost on a generation that has been taught that encryption equals safety and that what you can’t see online can’t hurt you.
There is also certainly a degree of “Hollywood effect” that taints the younger generation’s understanding of security. Young country specialists brought in to advise the intelligence community on the state of digital surveillance tradecraft in hostile regimes become too enamored with the technological aspects of those systems rather than their societal implications. Having seen too many James Bond movies, they believe surveillance systems are comically obvious to the untrained eye. Every LED in their hotel room must be evidence of a hidden camera; in reality, any cameras and microphones in the room are unlikely to have large, oversized, bright blinking lights announcing their presence. Those large, stereotypical closed-circuit cameras watching an intersection, marked by equally large warning signs, are in actuality merely one element of a layered series of observation devices, including countless concealed and covert monitoring systems. The limitations of facial recognition software may render AI systems useless, but they do nothing to impede the army of human monitors watching those same cameras. Most importantly, government surveillance occurs alongside that of the myriad private companies surveilling them.
Put another way, we think of the digital world as a safe and secure place to communicate in the shadows. We believe communicating with a contact on an encrypted website is somehow safer than the age-old tradecraft of covert physical meetings and dead drops. In reality, the digital world is no safer and in fact is in many ways far less safe than the physical forms of communication it replaces.
The digital world makes communication faster, cheaper and far more convenient, but not safer. For consumer websites, faster, cheaper, easier are what have made the web so successful. For intelligence agencies communicating with sources in countries where lives are at stake, safety is more important than ease.
How then is a spy to communicate securely in this modern digital world? The answer, as noted earlier, is to step back and recognize that the digital world is merely the channel through which we communicate: a set of digital reincarnations of the physical world. Rather than abandoning millennia of physical-world tradecraft, we should recognize that all of those traditional practices have their equivalents in the digital world.
Take the classic example of a signaling scenario in which an intelligence asset must send a signal to their handler at the right moment, either to indicate they need evacuation or to signal that a particular event they were instructed to watch for has occurred, such as a decision by their government to invade a neighboring country. How can that asset communicate that signal safely in a digital world under constant surveillance?
Logging onto an encrypted website might seem the obvious choice, but as the CIA discovered, that can have disastrous consequences in countries with cyber-savvy intelligence services.
Instead, if we look back to the pre-digital era of walking down the street with a prearranged shopping bag or wearing a particular color scarf or a ring on a certain finger or having a certain item for lunch, we see that all of these myriad and creative signals were designed to be executed in the full view of adversarial counterintelligence officers. In the physical world you assumed everything was being observed and you developed mechanisms for communicating in the open in ways that your enemies would be fully aware of but would never suspect.
What might a digital equivalent be? In countries where search engines like Google are popular, imagine prearranging a set of specific Google searches to serve as your signal. These searches would be unique phrases that no one else would search for, yet sufficiently innocuous that they would never catch the attention of a counterintelligence officer. Perhaps a search for tomorrow’s weather with a unique but innocent-looking typo in it. The CIA handler, in turn, would set up a series of paid search advertisements on those terms. As an advertiser, the handler could simply sit back and wait to be notified that one of their ads had just run, indicating that someone performed a Google search for that term. Using combinations of ordinary searches in specific sequences to eliminate false positives, one could readily devise robust signaling processes occurring entirely in the open.
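The false-positive-elimination step above amounts to an ordered-subsequence check, which can be sketched in a few lines. The query phrases below are invented placeholders, and the sketch assumes the handler can observe, via ad impressions, when each exact term was searched.

```python
# Sketch of the search-signal idea: the asset and handler prearrange an
# ordered sequence of innocuous-looking queries; the signal fires only when
# the full sequence appears in order, filtering out coincidental searches.
# All phrases below are invented placeholders.

PREARRANGED_SIGNAL = [
    "weather tommorow riverside",   # deliberate, unique misspelling
    "bus schedule line 4 holiday",
    "bakery hours sunday central",
]

def signal_fired(observed_queries, signal_sequence):
    """True if the prearranged queries occur, in order, among all observed queries."""
    it = iter(observed_queries)
    return all(any(q == step for q in it) for step in signal_sequence)

# Ordinary traffic with the signal embedded among unrelated searches:
observed = [
    "cheap flights", "weather tommorow riverside",
    "football scores", "bus schedule line 4 holiday",
    "bakery hours sunday central",
]
print(signal_fired(observed, PREARRANGED_SIGNAL))   # True

# A single coincidental hit is not enough:
print(signal_fired(["weather tommorow riverside"], PREARRANGED_SIGNAL))  # False
```

Requiring the full ordered sequence is what keeps a stranger's stray matching search from triggering an evacuation.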
Similarly, in countries where social media is widely used, a doting parent posting regular updates of their young children could signal an array of information through the arrangement of books and ordinary household items in the background of the photographs they post to Facebook to friends and family (and handlers posing as family).
Steganographically encoded images could be posted to Twitter on a range of topics from myriad one-time accounts, blending into the background of hundreds of millions of daily tweets, with only the handlers knowing which hashtags to look for, or even just bulk scanning every daily image Twitter-wide for steganographic signatures.
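The core mechanic of the simplest image steganography, least-significant-bit embedding, fits in a short sketch. The byte buffer below stands in for raw pixel data; real image steganography would read and write pixel channels through an image library, but the bit-level idea is the same: each carrier byte changes by at most one, invisibly to the eye.

```python
# Minimal least-significant-bit (LSB) steganography sketch: hide a short
# message in the lowest bit of each byte of a carrier buffer (a stand-in
# for image pixel data).

def embed(carrier, message):
    """Overwrite the low bit of successive carrier bytes with message bits."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # change only the lowest bit
    return out

def extract(carrier, length):
    """Read `length` bytes back out of the carrier's low bits."""
    bits = [b & 1 for b in carrier[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

pixels = bytearray(range(200))      # fake "image" data
stego = embed(pixels, b"MEET 0600")
print(extract(stego, 9))            # b'MEET 0600'
```

Since every byte differs from the original by at most one, the altered "image" is statistically close to the original, which is exactly why detection requires scanning for steganographic signatures rather than eyeballing pictures.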
More broadly, the sheer number of touchpoints through which companies surveil us each day could even be used to offer more secure “mosaicked” signaling and communication, in which choreographed combinations of dozens of distinct actions form a complex communicative code: random tweets from different accounts about different topics that together form a message, or a specific set of items purchased in a specific order from a specific set of merchants, recorded in a credit card statement monitored by a handler.
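A mosaicked code of this kind is, structurally, a prearranged codebook read across channels in time order. The channels, actions, and symbol assignments below are entirely invented; the point is that no single observed action means anything, while the combination decodes to a message only the handler can read.

```python
# Sketch of "mosaicked" signaling: no single action carries the message;
# only the prearranged combination, read in time order across unrelated
# channels, decodes to anything. Codebook entries are invented examples.

# Codebook mapping (channel, action) pairs to symbols.
CODEBOOK = {
    ("twitter",  "post about gardening"):        "A",
    ("purchase", "coffee at kiosk 12"):          "T",
    ("twitter",  "post about football"):         "S",
    ("purchase", "newspaper at corner store"):   "E",
}

def decode(observed_actions, codebook):
    """Concatenate codebook symbols in the order observed; anything
    outside the codebook is ignored as background noise."""
    return "".join(codebook[a] for a in observed_actions if a in codebook)

# One day's observable activity, mostly noise:
day = [
    ("purchase", "groceries"),                   # noise
    ("purchase", "coffee at kiosk 12"),
    ("twitter",  "post about gardening"),
    ("purchase", "newspaper at corner store"),
]
print(decode(day, CODEBOOK))   # "TAE" — meaningful only to the handler
```

To an observer, each action is indistinguishable from ordinary life; the code exists only in the shared codebook and the ordering.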
In other words, instead of treating the digital world as one of safety and security, treat it as a fully unsecure and surveilled medium. Assume digital communications are no different from communicating with a source in a public square with adversarial personnel studying your every movement.
Indeed, when you talk with older, more experienced diplomatic and intelligence personnel who have been stationed in repressive countries for years, one of the first pieces of advice you will be given is to assume that absolutely every word you speak or action you take, even in the sanctity of your hotel room or apartment, will be monitored. That even the most secure areas of your embassy or corporate boardroom are somehow bugged, and that all digital communications are assumed to be monitored, even those occurring over military-grade communications systems. Their younger born-digital staff, in turn, will brag about the invincibility of technology, much as some of the social media companies once bragged openly that their technologies could never allow misinformation or foreign influence campaigns to touch their platforms.
That’s not to say there are not brilliant and incredibly capable experts in the intelligence community who truly understand these issues. The problem is that their very measured and realistic voices are all too often drowned out by the boisterous shouts of a younger generation that have no understanding of the very real limits of the technology they believe solves every problem.
Putting this all together, one of the great failures of the intelligence community in the digital era has been its overreliance on and overconfidence in the power of technology to solve all problems, and its overeagerness to discard the lessons and tradecraft that have served it well for generations, rather than stepping back and recognizing that technology is merely a tool that can be used well or poorly. In the end, perhaps if Silicon Valley stepped back and thought a bit more about these global issues, it would take the problems of misuse and misinformation more seriously. Put another way, if we all stopped thinking of technology as magic rather than merely a tool, we might have fewer of the failures of imagination that lead to devastating consequences for society. Perhaps the future of spying in the digital era is to embrace yesterday’s tradecraft with today’s technology.