The truth, though, is that deletion has never existed technologically in the way that we conceive of it. Deletion is a figment, a public fiction, a lie that computing tells you to reassure you and give you comfort. Although the deleted file disappears from view, it is rarely gone. In technical terms, deletion is really just a form of the middle permission, a kind of write. Normally, when you press delete for one of your files, its data—which has been stashed deep down on a disk somewhere—is not actually touched. Instead, only the computer’s map of where each file is stored is rewritten to say I’m no longer using this space for anything important. The supposedly erased file can still be read by anyone who looks hard enough for it.
This can be confirmed through experience, actually. Next time you copy a file, ask yourself why it takes so long compared with the instantaneous act of deletion. The answer is that deletion doesn’t really do anything to a file besides conceal it. Put simply, computers were not designed to correct mistakes, but to hide them—and to hide them only from those parties who don’t know where to look.
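To make the idea concrete, here is a toy sketch in Python of a "disk" and the map that points into it. The names and the one-block allocator are invented for illustration, not any real filesystem's design; the point is only that deletion rewrites the map while the bytes survive.

```python
# Toy model of file deletion: only the "map" (file table) is rewritten;
# the underlying disk blocks keep the file's bytes until something
# overwrites them. These names are illustrative, not a real filesystem API.

disk = bytearray(64)            # pretend disk: 64 bytes of raw storage
file_table = {}                 # the "map": filename -> (offset, length)

def write_file(name, data):
    offset = 0                  # naive allocator: always use block 0
    disk[offset:offset + len(data)] = data
    file_table[name] = (offset, len(data))

def delete_file(name):
    # "Deletion": forget where the file lives. The bytes are untouched.
    del file_table[name]

write_file("diary.txt", b"meet at noon")
delete_file("diary.txt")

print("diary.txt" in file_table)   # False: the file has "disappeared"
print(bytes(disk[:12]))            # b'meet at noon': the data is still there
```

Copying, by contrast, has to move every one of those bytes, which is why it takes so much longer than the near-instant bookkeeping change above.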
* * *
The waning days of 2012 brought grim news: The governments of both Australia and the UK were proposing legislation for the mandatory recording of telephone and internet metadata. This was the first time that democratic governments publicly confirmed the ambition to establish a sort of surveillance time machine. Though these laws were justified as public safety measures, they represented a breathtaking intrusion into the daily lives of the innocent.
These public initiatives of mass surveillance proved, once and for all, that there could be no natural alliance between technology and government. The rift between my two strangely interrelated communities, the American IC and the global online tribe of technologists, became pretty much definitive. For years I had been able to fool myself that we were all, ultimately, on the same side of history: We were all trying to protect the internet, to keep it free for speech and free of fear. But now the government, my employer, was definitively the adversary. What my technologist peers had always suspected, I’d only recently confirmed, and I couldn’t tell them. Or I couldn’t tell them yet.
What I could do, however, was help them out. This was how I found myself in Honolulu as one of the hosts and teachers of a CryptoParty. This was a new type of gathering where technologists volunteered their time to teach free classes to the public on the topic of digital self-defense—essentially, showing anyone who was interested how to protect the security of their communications. I jumped at the chance to participate.
Though this might strike you as a dangerous thing for me to have done, given the other activities I was involved with at the time, it should instead just reaffirm how much faith I had in the encryption methods I taught. These were the very methods that protected that drive full of IC abuses sitting back at my house, with locks that couldn’t be cracked even by the NSA. I knew that no number of documents, and no amount of journalism, would ever be enough to address the threat the world was facing. People needed tools to protect themselves, and they needed to know how to use them. Given that I was also trying to provide these tools to journalists, I was worried that my approach had become too technical. After so many sessions spent lecturing colleagues, this opportunity to simplify my subject for a general audience would benefit me as much as anyone. Also, I honestly missed teaching, which I had done often in years prior: It had been a year since I’d stood at the front of a class, and the moment I was back in that position, I realized I’d been teaching the right things to the wrong people all along.
The CryptoParty was held in a one-room art gallery behind a furniture store and coworking space. While I was setting up the projector so I could share slides showing how easy it was to run a Tor server, my students drifted in, a diverse crew of strangers and a few new friends I’d only met online. All in all, I’d say about twenty people showed up that December night to learn from me and my co-lecturer, Runa Sandvik, a bright young Norwegian woman from the Tor Project. (Runa would go on to work as the senior director of information security for the New York Times, which would sponsor her later CryptoParties.) Our audience wanted to re-establish a sense of control over the private spaces in their lives.
I began my presentation by discussing deletion and the fact that total erasure could never be accomplished. The crowd understood this instantly. I went on to explain that, at best, the data they wanted no one to see couldn’t be unwritten so much as overwritten: scribbled over, in a sense, until the original was rendered unreadable. But, I cautioned, even this approach had its drawbacks. There was always a chance that their operating system had silently hidden away a copy of the file they were hoping to delete in some temporary storage nook they weren’t privy to.
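The overwrite-then-delete approach can be sketched as a short Python routine. This is a simplified illustration, not a guarantee: as the caveat above notes, journaling filesystems, flash wear leveling, and hidden temporary copies can all preserve data this code never touches.

```python
# Sketch of "secure" deletion by overwriting: scribble random bytes over
# the file's contents before removing its directory entry. On real systems,
# copies the OS has squirreled away elsewhere may still survive this.

import os

def overwrite_and_delete(path, passes=3):
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))   # scribble over the original
            f.flush()
            os.fsync(f.fileno())          # push the scribbles to disk
    os.remove(path)                       # only now rewrite the map
```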
That’s when I pivoted to encryption.
Encryption is the only true protection against surveillance. If the whole of your storage drive is encrypted to begin with, your adversaries can’t rummage through it for deleted files—or for anything else—unless they have the encryption key. If all the emails in your inbox are encrypted, Google can’t read them to profile you—unless they have the encryption key. If all your communications that pass through hostile networks are encrypted, spies can’t read them—unless they have the encryption key.
Encryption works, I explained, by way of algorithms. It’s a mathematical method of transforming information—such as your emails, phone calls, photos, videos, and files—in such a way that it becomes incomprehensible to anyone who doesn’t have a copy of the encryption key. And it’s reversible. You can think of a modern encryption algorithm as a magic wand that you can wave over a document to change each letter into a language that only you and those you trust can read. The encryption key is the magic spell that puts the wand to work. It doesn’t matter how many people know that you used the wand, so long as you can keep your personal magic spell from anyone you don’t trust.
Encryption algorithms are basically just sets of math problems designed to be incredibly difficult even for computers to solve. If all of our data, including our communications, were encrypted, then no government would be able to understand them. It could still intercept and collect the signals, but it would be intercepting and collecting pure noise. Encrypting our communications would essentially delete them from the memories of every entity we deal with. It would effectively withdraw permission from those to whom it was never granted to begin with.
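As a toy illustration of a reversible, key-based transformation, here is a one-time pad in Python: XOR each byte of the message with a random key of the same length. The names are mine and a real system would use a vetted modern cipher, but the wand-and-spell principle is the same: without the key the ciphertext is pure noise, and with it the identical operation recovers the original exactly.

```python
# One-time pad: XOR the message with a random key of equal length.
# The key is the "magic spell"; anyone without it sees only noise.

import os

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at the gallery"
key = os.urandom(len(message))         # keep this secret

ciphertext = xor_bytes(message, key)   # unreadable without the key
recovered = xor_bytes(ciphertext, key)

print(recovered == message)            # True: encryption is reversible
```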
Any government hoping to access encrypted communications has only two options: It can either go after the keymasters or go after the keys. The best means we have for keeping our keys safe is called “zero knowledge,” a method that ensures that any data you try to store externally—say, for instance, on a company’s cloud platform—is encrypted by an algorithm running on your device before it is uploaded. With zero knowledge, the key is never shared and remains in the user’s hands—and only in the user’s hands. No company, no agency, no enemy can touch it.
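That flow can be sketched as follows, with a plain dictionary standing in for the provider's servers and a toy XOR cipher standing in for a real algorithm; every name here is hypothetical. The key is generated and used only on the user's side, and the "cloud" only ever holds ciphertext.

```python
# Zero-knowledge sketch: encrypt on your own device, upload only the
# ciphertext, never transmit the key. The "cloud" dict is a stand-in
# for a storage provider; the XOR cipher is a toy, not production crypto.

import os

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

cloud = {}                                   # the provider's servers

def upload(name, plaintext, key):
    cloud[name] = xor_bytes(plaintext, key)  # only ciphertext leaves the device

def download(name, key):
    return xor_bytes(cloud[name], key)

key = os.urandom(16)                         # never leaves the user's hands
upload("notes", b"private thoughts", key)    # the provider stores only noise

print(download("notes", key) == b"private thoughts")   # True
```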
My key to the NSA’s secrets went beyond zero knowledge: It was a zero-knowledge key consisting of multiple zero-knowledge keys.
My keys to the drive were hidden everywhere. But I retained one for myself. And if I destroyed that one piece I kept on my person, I would destroy all access to the NSA’s secrets forever.
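One way such a multi-part key can work, sketched here under assumptions of mine rather than as a description of the actual scheme, is n-of-n secret splitting with XOR: every share is required to reconstruct the key, so destroying any single share destroys access to everything the key protects.

```python
# n-of-n secret splitting with XOR: generate n-1 random shares, then make
# the last share the XOR of the key with all of them. All n shares XOR
# back to the key; any subset short of n yields only random-looking bytes.

import os
from functools import reduce

def split_key(key, n):
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, bs) for bs in zip(key, *shares))
    return shares + [last]

def combine(shares):
    return bytes(reduce(lambda a, b: a ^ b, bs) for bs in zip(*shares))

key = os.urandom(32)
shares = split_key(key, 5)       # hide four shares, keep one on your person

assert combine(shares) == key    # all five together recover the key
# Destroy any one share and the key is unrecoverable.
```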