We're all being mined for data – but who are the real winners …

It’s exactly a year since Edward Snowden’s revelations about NSA and GCHQ surveillance began to reach the public domain. As a result, we now know quite a lot about the ways the NSA and its overseas franchises have mastered “big data” technology to hoover up our metadata and monitor our clickstreams (which means, by the way, that there’s no such thing as private reading any more – at least for anyone who reads online). But it turns out that all that was just for starters. One of the more recent disclosures is a PowerPoint deck about a GCHQ surveillance programme called Squeaky Dolphin, which suggests that the agencies’ appetite for personal data is even more voracious and bizarre than most of us imagined.

Squeaky Dolphin (the logo shows a cartoon dolphin armed with a can of WD-40) is aimed at “broad real-time monitoring of online activity” – which essentially means logs of the YouTube videos you watched, the URLs you “liked” on Facebook and visits you made to blogs. One of the slides in the deck shows a graph of Facebook “likes” for web pages containing the name of Liam Fox, the former Tory defence minister who resigned under a cloud.

Why on earth do the security agencies think they need to try to get inside our minds like this? Are Squeaky Dolphin and programmes like it examples of pathological “mission creep”? Are the NSA and GCHQ doing this simply because they can – because it’s technically feasible? Has big data’s appetite for data become insatiable?

Big data is the Rorschach blot of our times – an incomprehensible shape on to which we project our dreams and nightmares, hopes and fears. It’s already shaping our lives and looks like determining our networked future. Huge corporations – who see it as “the new oil” – are already exploiting this 21st-century resource, or tooling up to grab some of the action. And governments have been exploiting it since the turn of the millennium.

Big data is an ugly term that covers a multitude of activities. Technically, it refers to datasets that are too large and complex to manipulate or interrogate with standard methods or tools. In practice, the term encompasses a whole spectrum of interpretations: a mindset that is obsessed with quantitative information; an approach to decision-making; a new way – sometimes the only way – of doing some kinds of modern science; a computing technology that combines massive processing power with formidable storage capacity and parallel-processing algorithms; a way of detecting terrorist and criminal networks; a way of building detailed profiles of customers in order to increase the chances of selling stuff to them; and much more. For simplicity, let’s call it the Next Big Thing.

How did we get here?

In the beginning there was no data. Well, nothing we would regard as data anyway. And then in the mid-17th century the word emerged as a philosophical term, as the plural of the Latin word “datum”, which means “something given” – a starting point, something on which you can build an argument. It came into its own only when we began to collect information in quantitative form and developed technologies and procedures for processing that information. Computing started, after all, as “data processing”.

For most of human history, data was scarce, hard to collect and difficult to analyse. As industrialisation gathered pace and the nation-state evolved, vast efforts went into collecting the quantitative information needed to administer a modern democracy. It started with censuses and grew from there. But because we had no technologies for analysing the resulting data harvests in their entirety, we used mathematical tools for summarising their essential features. We called the academic discipline “statistics” and used its fundamental concepts – means, medians, probability distributions, sampling, significance tests, bias etc – as ways of extracting meaning from data. These tools were useful in many ways, but they dealt only in aggregates and told us little about individuals, any one of whom could – as the old classroom adage put it – “drown in a river that was on average only six inches deep”.
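To make the classroom adage concrete, here is a minimal sketch with invented depth readings: the aggregate figures look reassuring, while the individual data points tell a very different story.

```python
from statistics import mean, median

# Thirty readings from a shallow three-inch stretch, plus one eight-foot hole
# (figures invented purely to illustrate the adage).
depths_in_inches = [3] * 30 + [96]

print(f"mean depth:   {mean(depths_in_inches):.1f} inches")   # 6.0
print(f"median depth: {median(depths_in_inches):.1f} inches") # 3.0
print(f"deepest spot: {max(depths_in_inches)} inches")        # 96
```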

And that was how things stood for the best part of a century. What triggered the seismic shift in our information environment was the advent of the internet in the 1980s, and more particularly, the arrival of the web in the 1990s and the mobile phone revolution shortly afterwards. From then on, data went from being scarce and expensive to collect to being bountiful and cheap to harvest (and store) – provided you had the necessary equipment and were in the right place in the ecosystem. The internet provided a surveillance… er, data-collection engine of Orwellian dimensions, because everything one does on the network is automatically logged. The web enabled online commerce, which meant that oodles of personal data – name and address, age, gender, credit-card details, purchases and transactions, consumer preferences, interests, hobbies, etc – could be hoovered up by corporations, while software for tracking web browsing provided all kinds of equally interesting data for advertisers. The first generations of mobile phones logged their locations on a real-time basis, but that data was usually accessible only by telecommunications companies, or law-enforcement and security agencies, so we had to wait for smartphones with onboard GPS and consumer apps that revealed location data to complete the surveillance jigsaw.

The result is that we have moved with astonishing rapidity from a world in which data was scarce to one in which it is super-abundant. According to the Economist’s Kenneth Cukier, for example, Google processes more than 24 petabytes a day. (A petabyte is 1,048,576 gigabytes.) The question – for both corporations and governments – was how to extract usable information from the torrent of data that was emerging from our networked existence as a kind of exhaust. Industry wanted to know how to “personalise” its offerings to consumers; advertisers wanted to be able to target ads more accurately; security agencies wanted to mine the exhaust to detect terrorists and prevent bad things from happening.
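For a sense of scale, here is the back-of-the-envelope arithmetic behind those figures, using the binary convention quoted above; the 24-petabytes-a-day number is Cukier’s, the rest is simple division.

```python
# Back-of-the-envelope arithmetic for the figures quoted above, using the
# binary convention from the text (1 petabyte = 1,048,576 gigabytes).
GB_PER_PB = 1024 ** 2            # 1,048,576 GB in a petabyte
GOOGLE_PB_PER_DAY = 24           # figure attributed to Kenneth Cukier

gigabytes_per_day = GOOGLE_PB_PER_DAY * GB_PER_PB
gigabytes_per_second = gigabytes_per_day / (24 * 60 * 60)

print(f"{gigabytes_per_day:,} GB per day")                    # 25,165,824 GB
print(f"about {gigabytes_per_second:,.0f} GB every second")   # ~291 GB
```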

And while all this was going on, science became increasingly data-intensive as new instruments spewed out unimaginable amounts of information. The Large Hadron Collider at Cern, for example, produces 15 petabytes of data a year. Telescopes and instruments used by astronomers and astrophysicists are similarly prolific generators of data torrents. The Square Kilometre Array telescope, for instance, is expected to generate 915 petabytes of data a day. Other sciences are headed in the same direction. Biology has become increasingly computational: a single genome sequencing instrument can generate terabytes of data a week. And so it goes on.

Given that there is no way that humans can handle these volumes of scientific, commercial and personal data, machine intelligence is the only way to go. And so technologies for dealing with the challenge have evolved – in the form of clusters of up to 50,000 computers, running specialised parallel-processing software and algorithms for machine-learning, pattern-recognition and other functions. Putting this stuff together and making it work is formidably difficult, which is why big data is, by definition, the province of large, well-financed corporations, research labs and government agencies. And it’s also why mastery of the technology confers extraordinary advantages on those who possess it.
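The pattern those clusters implement is easier to grasp at toy scale. The sketch below is illustrative rather than any particular vendor’s software: it shrinks the split-apply-combine idea behind MapReduce-style frameworks down to a handful of worker processes on a single machine.

```python
# A toy sketch of the split-apply-combine pattern that big-data clusters use,
# scaled down to one machine. Real systems distribute the same idea across
# thousands of nodes; this shows only the shape of the computation.
from collections import Counter
from multiprocessing import Pool

def count_words(chunk_of_lines):
    """'Map' step: count the words in one chunk of the input."""
    counts = Counter()
    for line in chunk_of_lines:
        counts.update(line.lower().split())
    return counts

def word_count(lines, workers=4):
    # Split the input into roughly equal chunks, one per worker.
    chunks = [lines[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partial_counts = pool.map(count_words, chunks)
    # 'Reduce' step: merge the partial results into one tally.
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total

if __name__ == "__main__":
    sample = ["big data is the new oil", "data about data is metadata"] * 1000
    print(word_count(sample).most_common(3))
```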

The dream

Virtually every discussion about big data is suffused with a strange mixture of excitement, evangelism, hype, greed and fear. The excitement is easy to understand because it’s clear that, for some, big data really is the new oil. The prosperity of the Googles, Facebooks and Amazons of this world is, after all, largely built on their mastery of the technology. And, at a less stratospheric level, non-technology companies are beginning to realise that failure to analyse the data flowing from their interactions with customers may put them at a competitive disadvantage. In that sense, they are at the same stage as companies were in the early 1990s when it began to dawn on them that perhaps this internet thingy might have repercussions for them.

Big data has implications for the way companies make decisions. As Erik Brynjolfsson of MIT puts it: “Instead of relying on a leader’s gut instincts, an increasing number of companies are embracing a new method that involves data-based analytics.” (Think Moneyball, Michael Lewis’s account of how a failing baseball team was rescued by relying not on the intuition of the team’s talent scouts, but on the data crunched by a Harvard maths whiz.)

[Photo caption: During the first day of a baby’s life, the amount of data generated by humanity is equivalent to 70 times the information contained in the Library of Congress. Photograph: Catherine Balet, from the series Strangers in the Light]

In his research, Brynjolfsson found that companies that use “data-driven decision-making” show higher performance: a study of 179 large publicly traded firms, for example, found that the ones that adopted this method are about 5% more productive and profitable than their competitors. “There is,” he says, “a lot of low-hanging fruit for companies that are able to use big data to their advantage.”

The commercial imperative to take the big data revolution seriously is obvious, though many companies are going to discover that doing so will involve some painful decisions because their existing IT systems are too fragmented for meaningful data-analytics to be possible. What’s more intriguing is the evangelical zeal with which the non-commercial world views the technology. It’s seen as a way not just of discovering new knowledge, but as a way of improving healthcare, tackling poverty, improving governance, revitalising democracy, combating global warming and generally making life better.

In this context, a terrific boost came from Google some years ago in a paper published in the scientific journal Nature. In it the authors (who were Google employees) explained that Google could “predict” the spread of winter flu in the United States – not just nationally, but down to specific regions – and could do so well ahead of the Centers for Disease Control and Prevention in Atlanta, which is the official agency for tracking disease outbreaks in the US. (The researchers deduced the spread of flu by tracking and analysing the geographical distribution of search queries such as “winter flu”.) The results caused quite a furore among healthcare professionals, who wondered if the search giant had just handed them a powerful tool for epidemiological research.
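The underlying idea is straightforward, even if Google’s actual model was far more elaborate. A minimal sketch, with invented weekly figures, might look like this: fit a line mapping search volume on to reported cases, then use it to estimate the weeks the official statistics have not yet caught up with.

```python
# A minimal sketch of the idea behind the flu paper: fit a simple model that
# maps weekly flu-related search volume onto official case counts, then use
# it to "nowcast" a week for which no official figure exists yet. The numbers
# are invented; the real model used many query terms and far more data.
import numpy as np

searches = np.array([120, 150, 200, 340, 500, 610, 580, 430])  # search volume
cases    = np.array([ 90, 110, 160, 270, 410, 500, 470, 350])  # reported cases

# Ordinary least-squares fit: cases ≈ a * searches + b
a, b = np.polyfit(searches, cases, deg=1)

new_week_searches = 700
predicted_cases = a * new_week_searches + b
print(f"estimated cases for the coming week: {predicted_cases:.0f}")
```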

Metaphors that often surface in discussions about the social uses of big data are the microscope and the telescope – both instruments that enabled the rise of new sciences and greatly empowered established ones. Brynjolfsson favours the former. The microscope, invented four centuries ago, allowed people to see and measure things as never before – at the cellular level. It was a revolution in measurement. Data analytics, Brynjolfsson argues, is the modern equivalent of the microscope. Google searches, Facebook posts and Twitter messages make it possible to measure behaviour and sentiment in fine detail and in real time.

Alex Pentland, another MIT scientist and author of a new book on what he calls “social physics”, is similarly enthused. “The power of big data,” he says, “is that it is information about people’s behaviour instead of information about their beliefs. It’s about the behaviour of customers, employees and prospects for your new business. It’s not about the things you post on Facebook, and it’s not about your searches on Google, which is what most people think about, and it’s not data from internal company processes and RFIDs [radio-frequency identifications – a means of tracking items]. This sort of big data comes from things like location data off of your cell phone or credit card, it’s the little data breadcrumbs that you leave behind you as you move around in the world.”

“What those breadcrumbs tell,” he continues, “is the story of your life. It tells what you’ve chosen to do. That’s very different from what you put on Facebook. What you put on Facebook is what you would like to tell people, edited according to the standards of the day. Who you actually are is determined by where you spend time, and which things you buy. Big data is increasingly about real behaviour, and by analysing this sort of data, scientists can tell an enormous amount about you. They can tell whether you are the sort of person who will pay back loans. They can tell you if you’re likely to get diabetes.

“They can do this because the sort of person you are is largely determined by your social context, so if I can see some of your behaviours, I can infer the rest, just by comparing you to the people in your crowd. You can tell all sorts of things about a person, even though it’s not explicitly in the data, because people are so enmeshed in the surrounding social fabric that it determines the sorts of things that they think are normal, and what behaviours they will learn from one another.”
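The kind of inference Pentland describes can be sketched in a few lines. The example below uses an invented toy social graph rather than his methods or data: a missing label is simply guessed from the most common label among a person’s contacts.

```python
# A minimal sketch of inference from social context: if an attribute is
# unknown for one person, guess it from the people they spend time with.
# Names, labels and the graph are invented for illustration.
from collections import Counter

contacts = {
    "alice": ["bob", "carol", "dave"],
    "bob":   ["alice", "carol"],
    "carol": ["alice", "bob", "dave"],
    "dave":  ["alice", "carol"],
}

# A behavioural label we happen to know for some people only.
known_labels = {"bob": "cyclist", "carol": "cyclist", "dave": "driver"}

def infer_label(person):
    """Guess a missing label as the most common label among a person's contacts."""
    neighbour_labels = [known_labels[c] for c in contacts[person] if c in known_labels]
    if not neighbour_labels:
        return None
    return Counter(neighbour_labels).most_common(1)[0][0]

print(infer_label("alice"))  # 'cyclist' – inferred purely from her crowd
```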

The nightmare

What’s interesting about this little riff is that Professor Pentland doesn’t seem to be unduly troubled about the implications of all this – though, to be fair, since the quotes are taken from an interview he gave, it’s possible that he gives a more rounded picture in his book. But the general tone of his remarks is pretty typical of big data enthusiasts. Sure, they say, there could be problems, but just look at the upsides – all the cool things we could do if we make intelligent use of all this data. We could have better epidemiology, track infectious diseases in real time, have more effective and responsive neighbourhood policing, provide online tutoring that is sensitive to the needs of each individual student, and so on. What’s not to like?

Well, of course it’s true that we could conceivably have all of those good things and many more besides. The only problem is that they come with a price tag attached: the systematic elimination of personal privacy, which in turn implies the emergence of a society in which surveillance is comprehensive and pervasive. We may be headed in that direction anyway, courtesy of the intelligence agencies and the internet companies, but it’s strange to hear sensible, public-spirited evangelists encouraging us down that road too.

Then there’s the issue of inequality. Technology, as Melvin Kranzberg famously observed, is neither good nor bad. But neither is it neutral. Big data is a technology for the big battalions, not for the rest of us. It will further increase the power of large corporations and governments, and further disempower the poor and the socially excluded. “While massive datasets may feel very abstract,” writes Kate Crawford, one of the most perceptive commentators on this stuff, “they are intricately linked to physical place and human culture. And places, like people, have their own individual character and grain. For example, Boston has a problem with potholes, patching approximately 20,000 every year. To help allocate its resources efficiently, the City of Boston released the excellent StreetBump smartphone app, which draws on accelerometer and GPS data to help passively detect potholes, instantly reporting them to the city. While certainly a clever approach, StreetBump has a signal problem. People in lower-income groups in the US are less likely to have smartphones, and this is particularly true of older residents, where smartphone penetration can be as low as 16%. For cities like Boston, this means that smartphone data sets are missing inputs from significant parts of the population – often those who have the fewest resources.”
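The nuts and bolts of an app like StreetBump are simple enough to sketch; the signal problem is not. The toy detector below (invented readings and threshold, not the city’s actual code) flags sudden jolts in a phone’s vertical acceleration and tags them with a GPS fix – but it can only ever report the streets that smartphone owners drive down.

```python
# A toy version of what a pothole app has to do: watch vertical acceleration
# while driving and flag sudden spikes, tagging each with the current GPS fix.
# The threshold, readings and logic are invented for illustration.
SPIKE_THRESHOLD = 4.0  # m/s^2 above the running baseline (illustrative value)

def detect_bumps(samples):
    """samples: list of (vertical_accel_m_s2, (lat, lon)) tuples."""
    baseline = sum(a for a, _ in samples) / len(samples)
    return [(a, pos) for a, pos in samples if a - baseline > SPIKE_THRESHOLD]

readings = [
    (0.2, (42.3601, -71.0589)),
    (0.1, (42.3602, -71.0590)),
    (6.5, (42.3603, -71.0591)),  # sharp jolt – candidate pothole
    (0.3, (42.3604, -71.0592)),
]

for accel, (lat, lon) in detect_bumps(readings):
    print(f"possible pothole at ({lat}, {lon}), jolt of {accel} m/s^2")
```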

Another disturbing thing about the big data bandwagon is its implicit epistemology, which could be crudely summarised by modifying the old Klondike slogan: “There’s truth in them thar data”. What it boils down to is a naive conviction that the more data you have, the closer you will get to the truth. No more relying on small, potentially unrepresentative samples and misleading averages. Instead, the plain, unvarnished truth. This “truth”, however, comes in the form of correlations: the discovery, for example, that influenza outbreaks go hand in hand with certain kinds of Google searches. Never mind that Google doesn’t know anything about what causes flu. So the knowledge that comes from big data is generally an inference that two things are related, not knowledge of why they might be related. (Or not, as the case may be: it turns out that Google’s brief foray into epidemiology came unstuck. A new outbreak of the disease had the search engine completely foxed.)
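The point is easy to demonstrate. In the invented example below, flu-related searches and hot-chocolate sales rise and fall together every winter, and the correlation coefficient comes out close to 1 – yet neither causes the other, and the data alone cannot tell you that.

```python
# Correlation is not explanation: two invented series that rise together each
# winter – flu-related searches and hot-chocolate sales – correlate strongly
# even though neither causes the other. The data says *that* they move
# together, not *why*.
import numpy as np

flu_searches       = np.array([100, 140, 300, 520, 480, 260, 120, 90])
hot_chocolate_sold = np.array([200, 260, 610, 990, 940, 500, 230, 180])

r = np.corrcoef(flu_searches, hot_chocolate_sold)[0, 1]
print(f"correlation coefficient: {r:.2f}")  # close to 1.0, yet no causal link
```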

This might not be a problem in the commercial world. For example, it doesn’t matter that Amazon’s recommendation engine thinks that because I bought Thomas Piketty’s Capital in the Twenty-First Century, I am also likely to be interested in a boxed set of Downton Abbey (perhaps because I had earlier been searching for books on “inequality”). But it would matter if a collector’s interest in, say, French hunting knives led to them being targeted for stop-and-search by the local police. In fact, this kind of “predictive analytics” is already being deployed. Shortly after the Boston marathon bombing, for example, a New York writer’s Google searches for “pressure cookers” and “backpacks” resulted in armed cops hammering on her door.
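The logic behind such recommendations is not mysterious. A minimal co-occurrence sketch – invented baskets, and certainly not Amazon’s actual system – simply counts which items turn up together and recommends the most frequent companions.

```python
# A minimal sketch of co-occurrence recommendation – the rough logic behind
# "people who bought X also bought Y". Baskets are invented for illustration.
from collections import defaultdict
from itertools import permutations

baskets = [
    {"capital_21st_century", "downton_abbey_boxset"},
    {"capital_21st_century", "inequality_reader"},
    {"capital_21st_century", "downton_abbey_boxset", "inequality_reader"},
    {"capital_21st_century", "downton_abbey_boxset", "teapot"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in permutations(basket, 2):
        co_counts[a][b] += 1

def recommend(item, top_n=2):
    """Return the items most often bought alongside the given one."""
    companions = co_counts[item]
    return sorted(companions, key=companions.get, reverse=True)[:top_n]

print(recommend("capital_21st_century"))
# ['downton_abbey_boxset', 'inequality_reader']
```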

In the end, the crippled epistemology of the big data movement may prove to be our biggest problem. Remember that the underlying assumption is that “more is better” – the more data you have, the better your knowledge. But since the world is infinitely complex, that means that the search for more and more data, in ever-finer granularities, is effectively infinite. You can never be too rich or too thin – or have too much data. And this applies in particular to the intelligence agencies, as Kate Crawford points out in a brilliant essay, “The Anxieties of Big Data”.

Like me, Crawford was struck by the Squeaky Dolphin project. “The PowerPoint deck,” she writes, “reveals something more specific. It outlines an expansionist programme to bring big data together with the more traditional approaches of the social and humanistic sciences: the worlds of small data. GCHQ calls it the Human Science Operations Cell, and it is all about supplementing data analysis with broader sociocultural tools from anthropology, sociology, political science, biology, history, psychology, and economics.”

It’s tempting to ridicule this project – which demonstrates how the security agency’s worries that it might be missing something significant lead it to extend its data-gathering net beyond the boundaries of the absurd. But, in a way, the NSA and GCHQ are simply the creatures of the crippled epistemologies of their political masters – who ordered them after 9/11 to ensure that nothing bad ever happened again. Which leaves us with the question: when, despite all this surveillance, the next terrible thing happens, what will the politicians do then? How much more surveillance will they demand? And how much more can society stand?
