Has the sheen of data science begun to fade? Statistical models now shape decisions about everything from drone strikes to lines of credit, and yet the reputation of experts who deal in statistics is eroding. Even as our lives are converted into reams of information to be parsed by software, confidence in institutions that claim the mantle of objectivity, whether they issue news reports or policy prescriptions, is in decline. Theodore Porter, a historian of science at UCLA, describes the widespread adoption of quantification—the foundation of today’s algorithmic number-crunching—in Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (1995). He tells of nineteenth-century accountants and actuaries who distinguished themselves through their pursuit of objectivity, and the scores of professions that subsequently sought the same credibility and authority via tabulation. He also observes the ramifications of filtering all kinds of natural and social phenomena through numeric measurements.
As the world has come to seem more and more like a statistics laboratory, Porter has continued to study the pursuit of universal truths by collecting measurements and applying conventional mathematical processes. Beginning in early 2017, he spoke several times with Triple Canopy’s Adam Florin, who was working as a developer in data visualization and exploring how information design came to be ubiquitous in visual culture. Porter and Florin discussed the uses and abuses of big data, the fantasy of statisticians as a beneficent elite, and the balance of industrial concerns and scientific ideals.
Adam Florin In Trust in Numbers, you chronicle the adoption of quantitative methodologies across a wide range of scientific and political domains, from British accountants and actuaries and French civil engineers in the nineteenth century to American ones in the twentieth. In each case, those who claimed the mantle of objectivity accrued power. Were these professions borrowing from the natural sciences? Or did these forms of quantification emerge from the burgeoning social sciences? And how did the spread of quantification legitimize the work of early social scientists?
Theodore Porter These quantitative approaches actually emerged within a variety of institutions, and never simply by imitating science or claiming the prestige of science. The methods of accountants, bookkeepers, and economists have their histories, worked out as solutions to their own problems. They aren’t alien impositions, imported wholesale from academic science; yet it was important to these professionals to achieve the dignity of science, which they construed in terms of uniform and rigorous calculation. Whenever you perform a count—whether of patient diagnoses or finances—you presume some kind of standardization, which implies consistency and perhaps rigor. But standardization presents many difficulties, and is not always welcome: You get resistance from people who prefer not to classify or be classified entirely through data.
The experts themselves were not always happy to be forced into a straitjacket of inflexible rules, yet they often felt they had no choice. In Trust in Numbers, I devote a lot of attention to professionals who are reluctant to engage in the kind of subtle analysis that they would have performed with a clear conscience had they not been subject to criticism by skeptical outsiders. Left to their own devices, they would deploy their expertise flexibly, making adjustments and even exceptions when it seemed necessary. We’ve associated this kind of professional behavior with doctors (at least until recently). Curiously, quantitative professionals set a standard of inflexible rigor that often goes beyond what is demanded by natural scientists.
Florin Right. On the one hand, there is the impetus to mechanize judgment, to formalize systems of reasoning to the point that decisions can be made without human intervention—either for the sake of efficiency or impartiality. At the same time, technical experts and politicians are trained to make considered judgments. This seems like a double-edged sword: By upholding systems of standardization and mechanization, professionals risk diminishing their own authority because they’ll ultimately be bound by the rules they’ve created.
Porter There’s certainly power in having the authority to deviate from the rules. Accountants and public engineers often lacked the credibility to act this way. If you’re performing a cost-benefit analysis, you are perhaps trying to demonstrate that a construction project is worth pursuing. The number that you get is supposed to be a real measure, insulated from politics and free from any dependence on who is making the measurement.
In reality, of course, much is always going on behind the scenes, but the hope and expectation is that the measurement is objective—meaning, for example, that any two experts would have come to the same result. Experimental sciences pursue such aims as well, though usually while retaining the right to alter the means. I write about engineers and accountants who felt obliged to use objective measurements to render decision-making as automatic as possible. Following the rules in this way means applying a neutral, impersonal analysis. It reflects a particular form of democratic political order: Nobody can make selfish choices or get special treatment. The ideal of rule-bound rigor enforces a kind of equality.
Florin Part of the danger of automating decision-making processes and downplaying human intuition has to do with what you call, in Trust in Numbers, the “moral distance encouraged by a quantitative method.” An analyst, trained in dealing with aggregates, might feel disconnected from the subjects being studied. You write that, in the early twentieth century in the United States, “middle-class philanthropists and social workers used statistics to learn about kinds of people whom they did not know, and often did not care to know, as persons.” How can the benefits of quantification be weighed against the diminution of empathy for—or a true understanding of the conditions of—the people being analyzed?
Porter Perhaps there is some space to seek a balance between impartiality and empathy, but this is hard for a public authority. On an individual level, many of us have had frustrating experiences of being blocked from doing something that seems altogether reasonable on account of rules made up by people who couldn’t foresee such a situation. The rules, we might say, should be adapted to the persons and the situation in question. But you don’t want to idealize, for instance, a welfare system in which a few individuals are authorized to make decisions about who receives benefits based on their sense of need. They may very well make decisions based on the suffering of people near to them, rather than rationally distributing resources. This kind of system is especially bad for people who recognize themselves—and are likely to be recognized by others—as different. And problems like this may arise even without ill will or self-dealing. Still, it is quite possible to allow rules to be tempered by professional discretion, to give the spirit of the laws some rights against their letter.
Florin You write in Trust in Numbers about British statistician Karl Pearson’s quasi-religious belief in the value of quantification as not just practical, but representing a “spirit of renunciation,” an “ethic of puritanical self-denial.” Today, given the degree to which the techniques of quantification are associated with corporate and political power, that ethos seems quite distant. Why did Pearson see inherent value in quantification?
Porter Pearson has somehow gotten into everything I’ve done. After Trust in Numbers I wrote Karl Pearson: The Scientific Life in a Statistical Age (2004). Before writing that book, I hadn’t realized that Pearson insisted on the individuality of everything that he did—both in the sense of each thing being distinctive and the character of each individual having a role in everything, even number tables. He was trying to hold together the project of rigorous calculation with the expertise of individuals and the wisdom that comes from education, family background, and experience. He absolutely believed in numbers, but understood even calculation as a professional art.
Pearson sometimes complained that too many young scientists were just following textbooks, that mathematics and engineering were decaying; he insisted on the training and depth of understanding that would allow someone to consider the situation and make the appropriate statistical calculations. He wanted to shape a statistical elite. He lived in England at the turn of the twentieth century, when those who were trained in statistics had at least some hope of access to power. Still, political leaders in those days were mostly educated in literature and the arts, and he spoke of the role of mathematics as potentially similar to that of the classics and great literature in developing the individual—in the manner of a bildungsroman. He did not confuse statistics with literature, but he looked to quantitative science to exhibit the wisdom that, in a previous era, was particularly associated with literature and the classics. To him there was nothing automatic about statistics; the supple intelligence acquired in one’s life should be applied even to calculation, even to tables and numbers.
Florin Did he envision a small elite of experts or did he want to spread the methods and forms of analysis that he had learned to a broader swath of society? Did he feel any impetus to democratize quantification?
Porter He sometimes talked about statistics as a basis for making decisions grounded in evidence rather than opinion. He trained statisticians to work for census, health, and economic agencies. He wanted a large number of people to have this specialized knowledge, but he wanted them to have the characteristics of a proper elite, not just a bunch of number-crunchers. He looked to a healthy democracy to recognize and respect real expertise.
Florin Statisticians have played a crucial role in today’s major technology companies, which have rebranded them as “data scientists.” (Whether that new term is intended to reflect something about the nature of the work, or attract mathematics PhDs, I don’t know.) The effectiveness of data science as a tool for gaining insight into human behavior (and generating revenue) is often held up as “disruptive,” if not “revolutionary.” This language makes me think of the promise of statistics around the time of the French Revolution and the rise of modern democracies, when the first census offices aggregated data from disparate regions to produce a new kind of picture of human populations. Governments gained the ability to make informed decisions, but also to centralize power and exert a new kind of control over citizens. Does access to large repositories of social data typically engender novel forms of control and concentrations of power?
Porter Well, this kind of ambition—to use statistics to make decisions and predict the future—is typically associated with the state, which until recently was the only actor that could collect data at that scale.
Florin In your article “Thin Description: Surface and Depth in Science and Science Studies” (2012), you note, “Statistics in the human domain retains an element of its primal meaning, state-istics, the descriptive science of the state.”
Porter Yes, but today private businesses are also able to collect massive amounts of data. Instead of the centralized, planned counts of government censuses and surveys, they prefer chaotic counts of data drawn from transactions as they happen. Instead of a scientific approach to research, they rely on the outputs that result from social-media interactions, purchases, clicks on online ads, time spent on websites, and so on. People in the technology industry are extremely proud of the disruption represented in this move away from the centralized planned count, and yet they’ve provided no real replacement; these approaches probably shouldn’t be in competition with each other.
Florin I was asking if the collection of massive amounts of data grants one power, but perhaps I’ve got it backwards: You need to have quite a lot of power in order to collect data at such a large scale in the first place. We tend to think of data being gathered, but in fact they’re produced, which often requires substantial resources. The historian Daniel Rosenberg points out that “data” descends from the Latin for “given,” but the scholar Johanna Drucker argues that the word has always been a misnomer. Noting the labor involved in measurement, she proposes “capta” as an alternative. “Statisticians know very well that no ‘data’ preexist their parameterization,” she writes in “Humanities Approaches to Graphical Display” (2011). “Data are capta, taken not given, constructed as an interpretation of the phenomenal world, not inherent in it.”
Porter Before “data” referred to collections of information, the word typically referred to assumptions that went into a mathematical proof. We now understand that “data” is not necessarily “given,” but we often use the word to imply that the numbers are facts that have been found. Rosenberg also writes about the etymology of “fact,” which comes from the Latin for “to do.” Perhaps it is telling that the phrase “untrue facts” sounds contradictory, and implies lying, but “untrue data” just suggests data that aren’t correct. Certainly, you’re right that power is required to gather data, which are never simply “given.”
Florin In the era of big data, it seems that much of what is collected is untrue or unusable, but perhaps only temporarily, until those who do the collecting figure out what the data might prove. With analysis software, it’s quite simple to prune, wrangle, or munge data into whichever format a data scientist chooses. So technology companies spend fortunes storing and retrieving troves of behavioral data without any sense of how they’ll be analyzed or to what end.
Porter As a so-called zero-percent member of the statistics department, I sometimes get to eavesdrop on their email discussions. One thing I’ve learned is that they don’t like the idea of statistics as pure data-mining, without hypotheses. They insist on appropriate designs for testing hypotheses, rather than drawing from preexisting datasets and extracting some conclusion from algorithms. That said, while statisticians are surely right to emphasize the experimental approach, they also have a kind of false faith. For a long time, statistics didn’t have much to say about exploratory analysis; but you need to do some data mining even to come up with interesting hypotheses. The techniques associated with big data are more supple than the formal process of experimental design—maybe “disruptive” is not a bad word—and can provide the basis for a hypothesis that can later be rigorously tested.
Florin On the subject of massive data-collection efforts, could you speak about your latest manuscript?
Porter The book is about insane asylums as sites for gathering data on human heredity. Much of the history still is written as if everything changed in 1900 when Mendel’s experiments on peas became known to science. Mendelian genetics provided a basic theory: Heredity is a result of the transmission of key factors, which we call genes. In this version, genetics is essentially the study of genes and of the elemental processes involved. In the study of human heredity, which had long been obsessed with insanity and so-called feeblemindedness, Mendelian genetics usually implied that heredity was about genes, and that there were genes for feeblemindedness, insanity, or perhaps schizophrenia. These would be combined, like the peas, in characteristic Mendelian ratios—for example, three normals to one feebleminded when both parents carry the (recessive) gene for the defect but neither manifested it.
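To make the arithmetic behind that ratio concrete, here is a minimal sketch—an editorial illustration, not part of Porter’s remarks—that enumerates the four equally likely allele combinations when two unaffected carriers (Aa × Aa) have offspring: three of the four genotypes include the dominant allele and present as “normal,” while one in four is homozygous recessive.

```python
from itertools import product

# Each parent carries one dominant allele ("A") and one recessive allele ("a")
# but does not manifest the trait: an Aa x Aa cross.
parent_1 = ["A", "a"]
parent_2 = ["A", "a"]

# The four equally likely combinations of inherited alleles.
offspring = ["".join(pair) for pair in product(parent_1, parent_2)]

# Only offspring that inherit "a" from both parents manifest the recessive trait.
recessive = [genotype for genotype in offspring if genotype == "aa"]
dominant = [genotype for genotype in offspring if genotype != "aa"]

print(offspring)                            # ['AA', 'Aa', 'aA', 'aa']
print(len(dominant), ":", len(recessive))   # 3 : 1, the characteristic Mendelian ratio
```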
The old asylum heredity supposed, instead, that an insane or “idiotic” relative raised the probability of a similar condition in the offspring. The transmission of mental defect was treated as a purely statistical problem. The Mendelian version soon came to seem ridiculous and even fraudulent, while the statistical tradition could at least try to calculate transmission patterns based on an abundance of data gathered over decades at asylums and related institutions. In the book, I look at how asylum doctors identified similarities between the offspring of parents, and tracked down families with multiple members who’d been institutionalized or diagnosed with mental weakness or insanity. The analysis of human heredity began as a data-gathering enterprise, and in many ways remains one.
Florin And that enterprise took the form of reports on the mental health of patients as well as the collection of information about genetics?
Porter Yes. Of course, Mendelian genetics, which took off after 1900, was also highly data-oriented. Compared to efforts to trace gene transmission, the work being done in insane asylums was more humble, more about social and medical data than about fundamental genetic models or theories. Asylum doctors kept big books filled with forms in which they inscribed, along the horizontal axis at the top, the patient’s name, occupation, spouse, and religion, followed by when the illness began, the diagnosis, and the apparent cause. Heredity was one cause, identified by some record or recollection of an insane family member. Further to the right there was a space for outcome of treatment, which doctors filled out when the patient was discharged or died.
These institutions held hundreds, and eventually thousands, of patients. The case histories, recorded in standardized tabular form, are usually not very exciting. Of course, achieving standardization was not so easy, because it involved not just statistics but the nature of the institutions being described by the numbers. Checking in patients can be routinized, but the consideration of heredity adds a significant variable, which became increasingly important to institutions in their efforts—not so much to cure (which didn’t work out well) but to prevent illness. As the institutions got bigger and bigger, they turned their attention to the causes of disorders; if they couldn’t cure patients, they might be able to address the causes. Heredity became a serious factor—indeed, the most important consideration—in the effort to reduce the prevalence of insanity. This was the real source of eugenics.
Florin The process is essentially aggregation. Each institution creates categories for medical conditions, then places patients within those categories, right? The amount of information must have been huge.
Porter For the time, the scale of the data was extraordinary. In 1838, France passed a law regulating asylum services and expanding public access to them. Soon, the country had dozens of mental hospitals gathering heaps of data. Previously, the medical profession, and the knowledge produced by doctors, hinged on the relationship between a doctor and an individual patient. But with the growth of these institutions the profession had to change, as a small number of doctors were now treating hundreds or thousands of patients.
Locally, there was some resistance to the use of statistics among asylum doctors, but by the 1850s, asylums were working to merge the data from all institutions. A few asylum doctors passionately defended the uniqueness of their institutions; they argued that large pools of information should be developed through accumulation over time rather than merging the data sets of disparate institutions. In 1867, the inspector-general of asylum services in France published a series of tables designed to standardize medical statistics. This provoked a very interesting debate. Asylum doctors might have insisted on the individuality of the patient, but they were more likely to insist on the distinctiveness of the institution and its categories for data. Meanwhile, these institutions needed to work with census offices, which were less sympathetic to the view that institutional differences couldn’t be reconciled. At any rate, the Franco-Prussian War put a quick end to this standardizing initiative in 1870.
Florin Since then, and especially after World War II, the International Organization for Standardization (ISO) and other proponents of standardization have made enormous progress in harmonizing measurements, forms of communication, construction materials, shipping containers, manufacturing processes, clothing sizes, “cooking quality of alimentary pasta by sensory analysis”—all manner of human activity, major and minor. How was this made possible?
Porter There was a series of statistical congresses in the mid-to-late nineteenth century, working to standardize data on crime, schooling, agriculture, trade, and so on. Most failed for the same reasons that the 1867 initiative to standardize insanity statistics did: Standardization is impossible when laws vary so much from state to state. An important predecessor and ally of the ISO, the International Statistical Institute, was created in 1885 as more of a scientific institution, devoted to professional cooperation, with mostly European members. Technical committees facilitated the exchange of information between countries, but the organization didn’t issue standards, much less push members to reconcile their use of statistics.
After World War II, the European Commission created an organization called Eurostat to provide high-quality statistical information to member nations. Eurostat isn’t a state, but it has some state-like characteristics, which made it possible in some measure to get all the European statistical agencies to cooperate and standardize their methods. Of course, this was extremely difficult, because it involved negotiating not just about statistics but about laws and institutions.
Florin It’s interesting that you underscore the commitment to professional cooperation. The stories you tell in Trust in Numbers have to do with agencies, such as the corps of engineers, that use quantification to establish common ground and define the boundaries of professions. The same groups of professionals created international bodies like the ISO, which assumed responsibility for the standardization of measurements and communication worldwide. Were they motivated by a feeling of insularity, or a need to protect against powerful outsiders, like politicians who would be inclined to pursue the narrow self-interest of a single nation?
Porter Well, that sense of insularity exists, but most scientific organizations simply wanted to make their results communicable to all scientists. While they recognized the political stakes, for professionals and nations, of debates about which standards would prevail, they tacitly assumed that somebody’s standards would—or that a compromise could be found. Elite scientists in the late nineteenth century vehemently debated the standard unit of electricity, focusing on the ohm (Ω), the unit of electrical resistance. In the end, the British effectively won this debate, thanks in part to the international networks of British science and industry; the result not only provided advantages to British electrical manufacturers, but boosted the power of British electrical technologies. Of course, British research scientists did not describe this outcome as a triumph of their own parochial standard.
Florin There can be real consequences for the winners and losers in debates about standards, whether they have to do with international shipping routes and ports or Internet infrastructure. To what extent are standards designed to support industrial power? And to what extent are they created to support the scientific ideal of communicable knowledge and commensurable data?
Porter Surely the answer is that both are always factors, and not always in the same proportions. The importance of scientific standards often extends beyond the sciences, and industrial actors are very powerful, perhaps more powerful than the scientists. Sometimes it is difficult to distinguish scientific and industrial interests. The scientists might be enlisted to support domestic industries as well as their own academic interests. Famous physicists like Lord Kelvin and James Clerk Maxwell were involved in the debate about the ohm, but a lot of the energy behind these discussions involved industrial interests, too. When you study these debates, you realize how often individuals who are remembered as theoretical scientists were involved in questions related to manufacturing and industry.
Often, scientists are representing an industry that supports their findings. For example, the telegraph was extremely interesting to physicists in the second half of the nineteenth century; the challenge of running a submarine cable under the Atlantic provided the stimulus for the creation of electromagnetic field theory.
Florin So there’s a symbiotic relationship between scientific and industrial interests: As industrial products are released to the market, scientists get to use and reflect upon them, which furthers their research.
Porter Right. And, of course, plenty of similar arguments are made regarding the communities around today’s technology industry, which you probably know better than I do.
Florin Yes, and aside from fascinating scientists, new devices beckon the general public to reflect on the data that they produce themselves. I’m thinking of the quantified self movement, and the obsessive self-awareness that activity trackers like Fitbit can engender. What opportunities are there to reflect on—and not just refine—those products, which seem to turn our minds and bodies into vessels of quantification? The critic Moira Weigel connects self-monitoring to Catholicism: “Like confession and therapy, activity trackers promise to improve us by confronting us with who we are when we are not paying attention. The difference is that they produce clarity constantly, in real time. And they tell us exactly what to do.”
Porter The analogy to confession is nice, although confession doesn’t exactly offer an independent source of information, as activity trackers do. I haven’t quite quantified myself, but I have observed, driving my Prius, what is sometimes called the Prius Effect: Some significant portion of the improved mileage attained by Priuses results from behavioral changes of drivers, who are motivated by a large dashboard display that offers continuous feedback on fuel efficiency as they drive. I didn’t even want this monitor, but now I can’t not look at it. I drive more slowly and brake earlier. The sense of having standards against which to assess yourself is very powerful.
Florin Is this an indication of how easily and thoroughly we internalize quantification, whether that means wanting to know which demographic we belong to, counting steps with an activity tracker, fixating on grades at school, or striving as a company (or nation) to meet certain performance indicators?
Porter Very much so. Being subjected to these routines means, for instance, accepting the legitimacy of the SAT score and valuing oneself accordingly: “I thought I was pretty smart, but I got an 1100 so I guess I’m not.” The power of these forms of classification, especially as we internalize them, is extraordinary. And they’re taking hold on a massive scale.
Take the Program for International Student Assessment (PISA), a global study of the performance of fifteen-year-old students that measures “problem-solving competencies” and “cognitive abilities.” Sponsored by the Organization for Economic Cooperation and Development, the project is meant to provide data that can be used to shape education approaches and policies. I’ve gone to PISA conferences and found that one serious problem is how to create uniformity in measurement, given that schools around the world aren’t set up to produce the same outcomes. And why would they, unless their purpose is to create uniformity? Given the divergence of populations, political systems, economies, histories, and so on, how does it make sense to have every fifteen-year-old measured by the same math test?
Florin This kind of attempt to devise standard units of measurement in order to administer diverse and remote locales is akin to what the scholars Nikolas Rose and Peter Miller call “government at a distance.”1 Data about one place or people are collected and evaluated by others, who occupy what Bruno Latour calls “centers of calculation,” in an evocation of colonial capitals. They are unlikely to have first-hand knowledge of that place or those people—which, consequently, must be represented as comprehensible data. Through this production of knowledge about others, power can operate indirectly and migrate beyond the state, largely through forms of behavior and regulation that are assumed by individuals.
Porter The ambition of neoliberal governance is decentralization. But it is hard to give responsibility to the people who reside in a community, whose knowledge of that community can’t easily be transmitted to some central office—much less incorporated into a database that will enable decisions to be made via calculation. And this challenge (or the lack of desire to meet this challenge) is reflected in neoliberalism’s emphasis on entrepreneurship as the basis for organizing the economy. Neoliberalism favors a system in which as many actors as possible, as far down the hierarchy as possible, are animated by the prospect of profits. Ideally, the incentives for workers are identified and aligned with what’s good for the firm, which makes them into entrepreneurs: They succeed when they do something that makes the firm more profitable.
As local offices are given more autonomy from the central office, they are judged by some economic or accounting measure—for corporations, typically by profits—which applies up and down the hierarchy. But what seems to be a uniform standard of comparison can turn out not to be one. Circumstances change; the locals figure out the measure that’s being used to evaluate their performance and tweak their behavior accordingly, often without anyone finding out or discouraging them until it’s too late. These are what I call spaces of exploitable ambiguity. An example is General Electric under Jack Welch’s leadership, when the company posted impressive earnings year after year, until one year—collapse. The company’s accountants had found a way to move money from one category to another in order to exceed the expectations of analysts and shareholders. Then, in 2007, the company reached a point where the manipulation was no longer possible and had to come clean, correct the previously reported earnings, and pay a $50 million fine to the SEC.
Florin In addition to such deliberate malfeasance, statistical models often do harm simply because of the carelessness of practitioners. The most astonishing example I’ve come across is the NSA’s SKYNET program (named, without irony, after the diabolical computer system in Terminator), which uses a machine-learning algorithm to select targets for drone strikes in Pakistan. After documentation outlining the specific methodology was made public, data scientists condemned SKYNET as being scientifically unsound—and potentially leading to the deaths of thousands of innocent civilians.
Cathy O’Neil, a data scientist and mathematician, accounts for the influence of statistical models on our thinking and behavior in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016). O’Neil believes in the administrative power of statistical models, but asserts that they are too seldom applied ethically and humanely. She points to an automated “teacher assessment tool” called IMPACT, adopted by Washington, DC, in 2007 and used to identify underperforming public school teachers, who were then recommended for firing. O’Neil details how the statistical model produces and then enforces biases because it relies too heavily on test scores, which offer too small a sample, collected too infrequently, to be an ethically sound basis for such decisions. The absence of any mechanism to gauge the effects of IMPACT leads to a feedback loop. With regard to education, employment, personal finances, incarceration, and so on, such tools can thrust individuals into a downward spiral, with no means of appeal.
Porter While algorithms like IMPACT may constrain the administrators tasked with evaluating teachers, the more extreme effect is on the teachers and their independence—which is why they recognize it as an attack on their professional standing. Even if you have faith in the calculations and the rigor of a mathematical system, there is always a gap between the measured numbers and the numeric “indicator” used for decision-making. If the predictions of the model end up being erroneous, you might have a numerical error, or else people may be taking advantage of these gaps. If the gap becomes wide enough, then teachers might be able to satisfy the criteria of the model without seriously educating kids.
This points to one of the most terrible dangers of the systems of quantification of our time: Especially—but not only—when the model comes to the attention of those being measured, the effects on their behavior can be profound and destructive. A decent but imperfect indicator, providing helpful feedback, will often perform terribly when actors are rewarded for conforming to it.
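As an editorial aside—not part of the conversation—the following minimal sketch simulates that dynamic with purely synthetic numbers. A hypothetical teacher “indicator,” built from a handful of noisy test scores, tracks underlying quality tolerably well until people are rewarded for conforming to it; once effort unrelated to quality (teaching to the test, massaging scores) enters the picture, the number rises while its connection to what it was meant to measure weakens.

```python
import random
import statistics

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation, implemented inline to keep the sketch dependency-free."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

teachers = 200
quality = [random.gauss(0, 1) for _ in range(teachers)]  # unobservable "true" quality

def indicator(q, n_scores=3, noise=1.0, gaming=0.0):
    """A hypothetical value-added-style score: the mean of a few noisy test results."""
    scores = [q + random.gauss(0, noise) + gaming for _ in range(n_scores)]
    return statistics.mean(scores)

# Phase 1: nobody is optimizing for the number yet; it is noisy but informative.
honest = [indicator(q) for q in quality]
print("correlation with true quality, before gaming:", round(correlation(quality, honest), 2))

# Phase 2: firings and bonuses hinge on the number, so effort unrelated to quality
# now inflates it and dilutes the signal.
gamed = [indicator(q, gaming=random.uniform(0, 5)) for q in quality]
print("correlation with true quality, after gaming: ", round(correlation(quality, gamed), 2))
```

The particular numbers are invented; the point is only the shape of the problem Porter describes, in which a decent indicator degrades precisely because it has been made consequential.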
1 In “Governing Economic Life” (1990), Rose and Miller analyze “indirect mechanisms of rule,” which they see as enabling “government at a distance.” To do so, they adopt Bruno Latour’s notion of “action at a distance,” which indicates how control is exerted on places and people that are alien and distant. “Eighteenth-century French navigators could only travel to unfamiliar regions of the East Pacific, colonize, domesticate, and dominate the inhabitants from their European metropolitan bases because, in various technical ways, these distant places were ‘mobilized,’ brought home to ‘centers of calculation’ in the form of maps, drawings, readings of the movements of the tides and the stars,” Rose and Miller observe. “Mobile traces that were stable enough to be moved back and forward without distortion, corruption or decay, and combinable so that they could be accumulated and calculated upon, enabled the ships to be sent out and to return, enabled a ‘center’ to be formed that could ‘dominate’ a realm of persons and processes distant from it.” In Science in Action (1987), Latour suggests that this process “is similar whether it is a question of dominating the sky, the earth, or the economy: Domination involves the exercise of a form of intellectual mastery made possible by those at a center having information about persons and events distant from them.”