by Jonathan Weinberg
Surveillance used to be expensive. Even just a few years ago, tailing a person’s movements around the clock required rotating shifts of personnel devoted full-time to the task. Not anymore.
Governments can track the movements of massive numbers of people by positioning cameras to read license plates, or by setting up facial recognition systems. Those systems need few people to operate them, automating the collection of information about people’s lives and adding that data to searchable databases. Surveillance has become cheap.
I study the law of identification and privacy, so I pay attention to that trend, and it’s worrying. The data maintained in our individual profiles can be used in making decisions about credit, employment, government benefits and more. What governments and companies think they know about us – whether or not it’s accurate – has real power over our actual lives.
When surveillance was expensive, it mattered less that the Supreme Court ruled that government agents don’t need a warrant to follow a person in public, to sift through her trash or to fly over her property and observe it from the air.
The effort needed to collect that sort of data meant that governments would engage in surveillance only rarely, and only for compelling reasons. For most Americans, little about their everyday comings and goings, likes and dislikes, hopes and dreams was tabulated and collected in any central source. But that’s now changed.
Because information collection is now so easy and storage is cheap, it makes sense for government to collect much more information. As a result, after 9/11, rather than the U.S. government first trying to figure out who the bad guys might be and then collecting records of who they spoke to on the phone, federal officials simply compiled a database of who every person in the U.S. was speaking to on the phone, updated in real time.
Private companies’ tracking of our lives has also become easy and cheap. Advertising network systems let data brokers track nearly every page you visit on the web, and associate it with an individual profile. Facebook can follow much of its users’ web browsing, even if they’re not logged in.
Google’s tracking presence is even broader. According to one recent study, Google Analytics tracks users on nearly 70 percent of the top one million websites, and Google subsidiary Doubleclick separately tracks users on almost half of the top million sites. That gives Google – or a subsidiary – access to an extensive list of who visits which websites and when. And the company can combine that information with data derived from people’s use of Google Maps, Gmail and other Google services.
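The cross-site tracking described above rests on a simple mechanism: many unrelated websites embed the same third-party script, which reports each page view back to one server, tagged with the same persistent visitor identifier. Here is a minimal sketch of that idea (the class, site names and cookie value are invented for illustration; real trackers are far more elaborate):

```python
# Minimal sketch of third-party tracking: many unrelated sites embed the
# same tracker, which logs every visit under one persistent visitor ID.
from collections import defaultdict

class Tracker:
    """Stands in for an analytics or ad server that many sites embed."""
    def __init__(self):
        self.profiles = defaultdict(list)  # visitor_id -> list of visits

    def log_visit(self, visitor_id, site, page):
        # In practice the ID arrives via a third-party cookie or similar.
        self.profiles[visitor_id].append((site, page))

tracker = Tracker()
# The same cookie value accompanies requests from every embedding site.
tracker.log_visit("cookie-4f2a", "news.example", "/politics")
tracker.log_visit("cookie-4f2a", "shop.example", "/minivans")
tracker.log_visit("cookie-4f2a", "health.example", "/allergy-treatments")

# One party now holds a cross-site browsing history for that visitor.
print(tracker.profiles["cookie-4f2a"])
```

No single site sees more than its own traffic; the power comes from the one party that sees them all.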
Online tracking is even more powerful when it’s merged with real-world information tied to real names and identities. Facebook, for example, combines its data with information from data brokers such as Experian and Acxiom, which compile information from government records, retailers, financial institutions, social media and other sources.
Acxiom claims to have information about 700 million consumers around the world, subdividing its information on U.S. residents into more than 3,000 categories. (That figure may be overstated, but even with a decent discount for skepticism, that’s a lot of information.)
Another company, The Work Number, a subsidiary of credit bureau Equifax, maintains detailed salary and other payroll-related information for more than one-third of working Americans. Retailer loyalty cards are another source of data – Datalogix, a subsidiary of database giant Oracle, aggregates data on consumer purchases, including sales that suggest medical conditions or other personal concerns, such as weight loss pills, allergy treatments and hair removal products.
By combining online and offline data, Facebook can charge premium rates to an advertiser who wants to target, say, people in Idaho who are in long-distance relationships and are thinking about buying a minivan. (There are 3,100 of them in Facebook’s database.) If you want to reach users with an interest in Ramadan who have recently returned from overseas trips, Facebook can find them too.
Today, credit bureaus evaluate financial data – income and employment history, debt repayment records and public information like bankruptcy filings and foreclosures – to decide a person’s creditworthiness. But companies and government agencies can crunch through all these data to find correlations they hadn’t recognized before – and then take action based on those findings, sometimes in discriminatory and socially undesirable ways.
For example, online sellers may charge higher prices to customers from poorer ZIP codes, where there is less competition from brick-and-mortar stores. A credit card company downgraded consumers’ creditworthiness if they had used their cards to pay for marriage counseling or tire repair services. A major cable TV company developed procedures to discourage prospective customers with low credit scores from signing up, because data analytics revealed that those customers were less lucrative than others.
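The dynamic running through these examples is that decisions are keyed to variables that merely correlate with a sensitive trait, such as ZIP code or purchase history. A few lines of code can sketch how such proxy-based scoring works; every weight, threshold and category below is invented for illustration, not drawn from any real scoring system:

```python
# Sketch of proxy-based scoring: the model never sees income or personal
# circumstances directly, but purchase categories and ZIP code stand in
# for them. All weights and categories here are invented for illustration.
PENALIZED_PURCHASES = {"marriage counseling", "tire repair"}
LOW_COMPETITION_ZIPS = {"12345"}  # hypothetical ZIPs with few local stores

def score(profile):
    s = 700  # hypothetical baseline score
    if profile["zip"] in LOW_COMPETITION_ZIPS:
        s -= 40  # penalty tied to where the person lives
    for item in profile["purchases"]:
        if item in PENALIZED_PURCHASES:
            s -= 25  # correlated with risk in (hypothetical) training data
    return s

alice = {"zip": "12345", "purchases": ["tire repair"]}
bob = {"zip": "99999", "purchases": ["garden tools"]}
print(score(alice), score(bob))  # -> 635 700
```

Alice’s lower score reflects nothing she did wrong, only what her neighborhood and purchases are assumed to signal.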
United States law – unlike the law in Europe – gives ordinary people no general right to see their own digital profiles, so we have little opportunity to correct inaccuracies. But even if everything in a profile is accurate, there’s still a big problem: Proprietors’ use of our information in this way encodes discrimination in automated decisions. It means that people who have had marriage counseling, say, or who live in poor neighborhoods are treated as second-class citizens in a wide range of everyday transactions and interactions. That’s not a recipe for a healthy society.
The rise of social credit?
All this could reach much deeper into our lives, raising serious privacy concerns. What if credit bureau ratings incorporated the creditworthiness of an applicant’s friends? Or her educational background, the make of her car or whether she uses all capital letters in her text messages? The U.S. Consumer Financial Protection Bureau has opened an inquiry into the dangers such practices might pose.
The People’s Republic of China has begun to construct a souped-up version of the financial credit bureau that, according to some reports, would look even more broadly at a person’s life. In that system, every citizen would have a score incorporating not only financial data, but also “anything from defaulting on a loan to criticizing the ruling party, from running a red light to failing to care for your parents properly.” The score would affect what jobs an individual could get, what schools her children could attend, even whether she could get a reservation at a fancy restaurant.
Those features haven’t been implemented yet; so far, the system is more limited. Western news reports have decried this plan as totalitarian. It’s worth asking, though, what direction we in the United States are headed in.
Indeed, it’s worth thinking about all of this more deeply. U.S. firms – unless they’re managed or regulated in socially beneficial ways – have both the incentive and the opportunity to use information about us in undesirable ways. We need to debate whether government should enact rules constraining that activity. After all, leaving those decisions to the people who make money selling our data is unlikely to result in our getting the rules we want.