You are being watched
Don’t want the government, big industry and some 15-year-old to know your secrets? Guess you’re out of luck.
Your Facebook friends are watching you.
So are their Facebook friends, and a bunch of total strangers. The guys who run Facebook are watching you, too. Your keystrokes are being logged. Your mouse clicks are being monitored and digested. Your behavioral patterns are being analyzed, monetized: what you buy on Amazon, who you follow on Twitter, where you say you eat on Yelp, your most shameful Google searches.
The photos you post on Flickr are encoded with little bits of geospatial metadata that pinpoint where they were taken and can reveal where you live. Your smart phone—jampacked with apps coded by who knows who and potentially loaded with spyware—is a pocket homing beacon, trackable by satellite. There are trucks with cameras on their roofs, trundling past your apartment, duly noting your unsecured Wi-Fi signal.
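That geospatial metadata is stored as EXIF GPS tags, which record latitude and longitude as degree/minute/second fractions plus a hemisphere reference. A minimal sketch of how those tags resolve to a map coordinate (the tag names follow the EXIF convention; the sample values are invented):

```python
# Hypothetical illustration: converting the GPS tags a camera embeds in a
# photo's EXIF metadata into a decimal map coordinate. The sample values
# below are invented, not taken from a real photo.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to a signed decimal."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # South latitudes and west longitudes are negative.
    return -value if ref in ("S", "W") else value

# GPSLatitude / GPSLongitude tags roughly as a camera might record them.
exif_gps = {
    "GPSLatitude": (42, 21, 29.0), "GPSLatitudeRef": "N",
    "GPSLongitude": (71, 3, 49.0), "GPSLongitudeRef": "W",
}

lat = dms_to_decimal(*exif_gps["GPSLatitude"], exif_gps["GPSLatitudeRef"])
lon = dms_to_decimal(*exif_gps["GPSLongitude"], exif_gps["GPSLongitudeRef"])
print(f"Photo taken near ({lat:.4f}, {lon:.4f})")
```

Anyone with the file and a few lines like these can recover where the shutter clicked.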
Walmart is putting radio frequency identification tags in your underwear.
You can barely remember all the different passwords to the ever-proliferating number of websites to which you’ve entrusted personal photos and videos, likes and dislikes, credit-card info and your Social Security number. Then there are the photos of you that other people have posted without your knowledge, or the things they may have written about you on blogs or message boards—things that have a good chance of remaining online and searchable for perpetuity.
And that’s to say nothing of the vast and classified surveillance apparatus—“so large, so unwieldy, and so secretive that no one knows how much money it costs, how many people it employs, how many programs exist within it, or exactly how many agencies do the same work,” according to The Washington Post—that could (who knows?!) be silently taking note of the e-mails you’ve sent and the phone calls you’ve made.
Public is the new private
Over just the past decade or so, the Web has turned things upside down. As Danah Boyd said, speaking this past spring at South by Southwest in Austin, we’ve seen “an inversion of defaults when it comes to what’s public and what’s private.”
Time was, what you said and did was “private by default, public through effort,” said Boyd, a fellow at Harvard University’s Berkman Center for Internet and Society. That’s all changed: “Conversation is public by default, private through effort. You actually have to think about making something private because, by default, it is going to be accessible to a much broader audience. … And, needless to say, people make a lot of mistakes learning this.”
To a degree unheard of even five years ago, we live our lives mediated by Firefox browsers and Droid screens. And that means—whether it’s ostensibly protected sensitive data (financial and medical records), ostensibly inconsequential personal data (Flickr photos, YouTube channels, Twitter feeds) or ostensibly depersonalized behavioral data (browsing patterns, search queries, HTTP cookies)—our lives are nowhere near as private as we might presume them to be.
“Precisely because the tech advances have come in so many places, it’s really quite hard to pick any one particular spot that’s the biggest problem,” said Lee Tien, senior staff attorney at the Electronic Frontier Foundation. “They all converge. Because we have a giant personal information superhighway, where all of our information travels around both the government and the business sector, what gets picked up in one place is being transferred to another place. So it all ends up, not necessarily in a central basket, but in a lot of different baskets—where it can always be accessed.”
“Data collection is becoming ubiquitous,” said Jules Polonetsky, co-chair and director of the Future of Privacy Forum, and former chief privacy officer at AOL. “It’s not science fiction anymore to think there are lots of databases that have everything we’ve done: every search we’ve done, every website we’ve visited.”
It might be comforting to think that our online identities are just anonymous strings of ones and zeros, but that’s just not true anymore. So what we used to loosely define as “privacy”—an admittedly amorphous concept—is changing fast. And only recently do consumers, voters, politicians and the media seem to be grasping that fact.
Before, “We had privacy from obscurity,” said David Ardia, another fellow at the Berkman Center, and the director and founder of the Citizen Media Law Project. Now, almost everything worth knowing about almost anyone is online.
“That means it’s searchable, and it’s available forever. And I don’t think we’ve caught up to that change in the way we structure our lives and the way we understand privacy.”
‘They want to know more about us’
To begin with, privacy is a problematic notion.
“It’s a very misunderstood concept from a constitutional point of view,” said famed civil-liberties attorney and author Harvey Silverglate, of Boston. “There are some parts of the Constitution, and of the Bill of Rights in particular, that are quite specific about it. And there are others that are quite general and amorphous.”
While the First Amendment is very explicit, for instance (“Congress shall make no law …”), the Fourth Amendment (“unreasonable searches and seizures” … “probable cause”) leaves a lot more wiggle room. It’s “seemingly intentionally vague,” said Silverglate. The result is a wording that suggests people are entitled to a reasonable degree of privacy—but just what that degree is differs from one context to another.
Obviously, the framers “didn’t envision the Internet or telephones, but they obviously understood that this was an area that was going to be evolving, and they couldn’t define it.”
And so we find ourselves, at the beginning of the second decade of the 21st century, still trying to figure all this out.
The problem, says Silverglate, “is that the pace of technological change is proceeding so quickly that the courts, which were always a little bit behind in the development of technology, are now being left in the dust.”
Indeed, said Tien, “Technology has advanced and the law has not.” Moreover, “Privacy is not easy to define. It means different things to different people.” But above all else, he said, the most acute threat nowadays is that both the government and the private sector have such vested interests in chipping away at whatever privacy actually is.
“You and I might view the information that we give off online, that we don’t want others to capture, as a negative thing, like pollution in the air,” said Tien. “For government and industry, it’s a nutrient. It’s something they can feed on. They want to know more about us.”
No such agency
In a Washington Post special report, “Top Secret America,” Dana Priest and William Arkin described “a hidden world, growing beyond control,” comprising “some 1,271 government organizations and 1,931 private companies [working] on programs related to counterterrorism, homeland security, and intelligence in about 10,000 locations across the United States.”
“An estimated 854,000 people, nearly 1.5 times as many people as live in Washington, D.C., hold top-secret security clearances,” the report continued.
If you don’t think a goodly number of those folks are listening in to the occasional Skype conversation, you haven’t been paying attention these past nine months.
“I’m worried about a number of phenomena,” said Silverglate. “First, because of the increasing number of searches being done by the terror warriors—the CIA, the [National Security Agency], the FBI and God knows who else—the chaos in the federal investigative establishment is unbelievable. If you think they can’t get the mail delivered on time, just think about the wiretaps and the electronic surveillance.”
It’s enough to make the most intrusive data-mining operation seem tame by comparison. After all, said Silverglate, a corporation “can spy on you, but they can’t arrest you.” And when they do spy on you, it’s “because they want to sell you something, not kill you.”
Don’t (just) worry about the government
The problem comes along when governments start strong-arming those companies into doing their bidding. Consider the controversy surrounding AT&T’s cooperation with the NSA, without the knowledge of its customers, on a “massive program of illegal dragnet surveillance of domestic communications” (as the Electronic Frontier Foundation charged) back in 2006. “AT&T just allowed them access to the control room,” marveled Silverglate.
The feds, in other words, “enlist the brilliance and expertise of companies like Google for the purposes of snooping on its citizens.”
It’s a job at which Google has allegedly acquitted itself quite well in recent months.
In May, news broke that the omnipresent (and sometimes seemingly omnipotent) corporation had been vacuuming up data about citizens’ Wi-Fi networks and what sorts of content was being accessed on them. As in a B-movie stakeout, it was all monitored from inside a van—those camera-equipped Street View trucks that patrol the world’s cul-de-sacs and capture images of sword-and-sorcery LARPers, “horse boy” and, well, your front door.
Google insists that the data sweeps were “unintentional” and that the captured data was viewed only a very limited number of times. You’re not the only one who’s dubious. Massachusetts congressman Ed Markey has asked the Federal Trade Commission to determine whether Google’s privacy breach broke the law. Galaxy Internet Services, an Internet service provider based in Newton, has brought suit. And Connecticut Attorney General Richard Blumenthal is heading a multistate investigation.
In June, U.S. Representative John Conyers of Michigan requested that Google CEO Eric Schmidt enlighten him as to just how those cars came to intercept that Wi-Fi info. In his letter, Conyers got out the virtual police tape, asking that Google “retain the data collected by its Street View cars, as well as any records related to the collection of such data, until such time as review of this matter is complete.”
It was about this time that Conyers sent a letter to Mark Zuckerberg, Facebook’s twerpy bazillionaire of a CEO, inquiring whether the site shared user data “without the knowledge of the account holders.”
But however much kerfuffle there was about Facebook’s Orwellian Beacon program or its labyrinthine privacy settings, no matter how sinister David Fincher’s new The Social Network makes Zuckerberg’s enterprise seem (never mind the leaked instant messages reported in the most recent New Yorker, where he describes his gathering of user info with haiku-like sangfroid—“people just submitted it / i don’t know why / they ‘trust me’/ dumb fucks”)—when it comes to privacy, Facebook is probably the least of your problems.
Sure, it’s bad. “The interplay between the multiple options is so complex” on Facebook, said Polonetsky. “Your location. What apps you use. Your friends’ apps. Different segments of your profile. Your contact information. It’s this incredibly complicated maze. Even I gotta sit sometimes and think before I answer a question.”
But too few people realize that this stuff is everywhere these days.
“You go to a site and there’s a lot going on!” said Polonetsky. “A lot of different data being collected. Regular cookies. Flash cookies. Behavioral retargeting. Analytics. There’s data being sent to an ad exchange. There might be an affiliate program because they’re selling ads not on a click basis, but on a commission basis. There’s 20 or 30 places your browser may go when you visit a site, and then [there’s] all the different things you have to do if you want to turn that off. Your cookie settings. Your Adobe Flash player settings. You could spend hours just disabling the data transmission that happens.”
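Polonetsky’s “20 or 30 places your browser may go” can be made concrete: every third-party script, tracking pixel and embedded frame on a page triggers a request to another host. A toy sketch of counting them—the page and hostnames below are invented stand-ins, and a real page would be fetched over the network and reference far more trackers:

```python
# A toy sketch of tracker fan-out: counting the distinct third-party hosts
# that a page's embedded resources would be fetched from. SAMPLE_HTML and
# all hostnames here are invented for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse

PAGE_HOST = "news.example.com"  # the first-party site being visited

SAMPLE_HTML = """
<html><body>
<script src="https://analytics.tracker-one.com/a.js"></script>
<script src="https://cdn.ad-exchange.net/bid.js"></script>
<img src="https://pixel.retarget.io/1x1.gif">
<img src="/images/logo.png">
<iframe src="https://widgets.social-plugin.org/like"></iframe>
</body></html>
"""

class ThirdPartyFinder(HTMLParser):
    """Collects the hosts that embedded resources point at."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            for name, value in attrs:
                if name == "src" and value:
                    host = urlparse(value).netloc
                    # Relative URLs stay on the first-party host.
                    if host and host != PAGE_HOST:
                        self.hosts.add(host)

finder = ThirdPartyFinder()
finder.feed(SAMPLE_HTML)
print(f"{len(finder.hosts)} third-party hosts contacted:")
for host in sorted(finder.hosts):
    print(" ", host)
```

Even this five-resource toy page fans out to four outside hosts; each one can set its own cookies and log the visit.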
He added, “I think we’re at this really interesting time.”
The omniscient eye of corporate-abetted Big Brother may get the blockbuster treatment in the Post. But oftentimes privacy intrusions hit much closer to home—and are much more damaging.
“We used to think of the threat as ‘us against them,’” said Tien. “Now, because of the Internet and ubiquitous portable devices, there’s a much more lateral threat as well. Kids can ruin each other’s privacy without really even trying. They think they’re just in a Facebook squabble, but there are a lot of other people who have access to that data. So there’s both a Big Brother problem and a Little Brother problem. And that Little Brother problem has gotten worse.”
Who is Little Brother? He’s all those people you know, sort of know or wish you didn’t know: creepy, barely remembered high-school classmates; Machiavellian co-workers; your angry ex. But mostly you really don’t know who Little Brother is, because Little Brother is anonymous. He or she is part of a sea of nameless faces: the anonymity-emboldened tough guy on a message board, or an auteur posting a sadistic video on YouTube, or an obsessive Twitter stalker, or, sometimes, a malicious suburban mom hiding behind a hoax identity while taunting a teenager to suicide.
Inexorably, we seem to be drawn to a battle between two conflicting notions—and the winner of that battle may determine what kind of Internet we end up with. The voices advocating for increased privacy protections argue that our actions online should remain invisible—unless we give our express consent to be watched and tracked. But some of the most powerful voices on the Web are beginning to suggest that you should be held responsible for your online actions: that your anonymity on the Web is dangerous.
Speaking at the Techonomy conference in Lake Tahoe this past month, Google’s Schmidt opined that the rise of user-driven technology—and the dangers posed by those who would misuse it—required a new approach. “The only way to manage this is true transparency and no anonymity,” he said. “In a world of asynchronous threats, it is too dangerous for there not to be some way to identify you. We need a [verified] name service for people. Governments will demand it.”
And Schmidt is right—the same governments that are investigating Google’s breaches of their citizens’ privacy are also demanding that their citizens be accountable for their online identities in ways that must make the world’s totalitarian regimes smile. That’s the paradox: Any measure that would allow Google to track the sources of a Chinese hacker attack would also enable the Chinese government to track its own dissidents.
Even on our shores, a look at recent government action on privacy shows how confused the issue has become.
On the one hand, U.S. lawmakers and the nation’s top consumer-protection agency are so spooked by online marketing practices that they are threatening legislation if the industry doesn’t begin to self-regulate. By doing so, they’re affirming the public’s right to retain its anonymity.
Earlier this year, the FTC began floating the idea of a no-track list, which would prevent advertisers from gathering information from a user’s online behavior—much as the federal Do Not Call list restricts the practices of telemarketers. The ability of marketers to track you has grown so quickly, and the information they can glean is so frighteningly accurate, that in July, Congress hauled a who’s who of the Interwebs—including representatives from Google, Facebook, Apple and AT&T—in front of the Senate Committee on Commerce, Science and Transportation, threatening to push bills through both the House and the Senate if the industry didn’t start explaining to consumers what information is being collected and how it’s being used.
After the Senate hearings, Massachusetts Sen. John Kerry announced that he would draft legislation—to complement bills already introduced in the House—that would give people more control over how their information is collected and distributed online.
“Take the single example of a cancer survivor who uses a social network to connect with other cancer survivors and share her story,” said Kerry in a statement. “That story is not meant for her employer or everyone she e-mails, or marketers of medical products promising herbal cures. Misapplied and poorly distributed, this information could lead to a lost job opportunity or higher insurance rates. Even distributed without malice this information could pigeonhole her identity as a cancer survivor—which may not be how she wants to face the world.”
Deciding who gets that information “should be her right,” Kerry continued. “Whether or not she acts to protect its distribution, private firms should start with the premise that they should treat her and her information with respect. The fact that no law limits the collection of this information or its distribution is a problem that threatens an individual’s sense of self.”
That very month, however, the Obama administration tried to make it easier for the FBI to obtain records of “online transactions”—including a list of who you’ve e-mailed and what websites you’ve visited—without a warrant. Around the same time, the Electronic Frontier Foundation reported that the White House had circulated a draft of its plan for securing identity online—which calls for individuals to “voluntarily request a smart identity card from her home state” to “authenticate herself for a variety of online services” including “securely accessing her personal laptop computer, anonymously posting blog entries, and logging onto Internet e-mail services using a pseudonym.”
The proposal, called the National Strategy for Trusted Identities in Cyberspace, sounded alarming to some critics.
“If I’m posting on a blog, reading, browsing, who needs to know who I am? Why is it so important that my identity be verified and authenticated?” said Tien. “We have a tendency to say, ‘Well, gee, there are all these problems, so we need to know people’s identity.’ But identity isn’t security. You don’t automatically know what to do about someone just because you know who they are.”
Even a raft of new laws and legal precedents can’t be the only answer. Beyond legal remedies, there has to be a cultural component.
“Much of our sense of privacy in the world isn’t guaranteed by law,” said Tien. “It’s guaranteed by people acting within traditional bounds.” Unfortunately, he said, “Technology screws this up. It accelerates social change in ways where people aren’t sure what the norms are.”
Justin Silverman, a law student at Suffolk University who blogs for Suffolk Media Law and the Citizen Media Law Project, said he suspects that ultimately people’s sensibilities will adapt as folks get “more comfortable with information online” and a lot of these issues will “solve themselves.” In the meantime, he said, “[The] Market will take care of some things.”
Indeed, even as they’ve helped create some of these issues, technology and the private sector have huge roles to play. People are starting to demand it. The Wall Street Journal reported recently that “companies with ideas on how to protect personal information”—firms such as Abine and TRUSTe—“are a new favorite of venture capitalists.”
A lot of Internet companies, according to Polonetsky, are simply saying: “I’ve had enough of this. I have some pretty big plans to do some pretty good things with technology, and I don’t want to be called a bad guy. I’m ready to have the practices that seem to be of grave concern taken off the table so I can roll things out.”
Even as the technology evolves, and legislators and courts and corporations slowly smarten up, and society gets more Web-savvy, some of this stuff will always be with us.
Tien mentioned a phrase he likes from philosophy: essentially contested concept. That’s an idea that pretty much everybody recognizes and agrees exists in theory—justice, say—but on which there’s little concurrence about just what it is and how to achieve it.
“Privacy is essentially contested,” said Tien. “We want to protect our privacy, but there are grand incentives to know more about us. Combine this problem of competing incentives with the problem of how hard a problem it is to solve and how every era changes the technology: Even if the problem gets solved for the telephone, it didn’t get solved for e-mail, and it didn’t get solved for social networking. It’s always going to be work.”