Earth to Algorithm: The Materiality of Algorithmic Surveillance


Written by Tyne Daile Sumner

As dawn breaks around Google’s Douglas County Data Center, just outside of Atlanta, Georgia, a daily spectacle unfolds. Under slowly rising cloud cover, vehicles pull up to decant early-morning workers, dwarfed beside the gargantuan, dimly lit facility that hums above the landscape like an outlandish spaceship. Inside, the mysterious intricacy of algorithms takes startlingly somatic form. A bright, hyper-coloured playground—striking contours of red, green, blue, and yellow—bends and arches throughout the immense space, folding across and back upon itself in loops that disappear and reemerge in the style of an Escher tessellation. This is the fundamental materiality of data: pipes that send and receive water which, in turn, cools the servers that drive Search, statistical modelling, and the growing infrastructure behind algorithmic surveillance.

Perhaps even more surprising than the network’s immense physical form is its aesthetics. Shiny and minimalist, the data center appears more like a contemporary art installation than the cluttered mess of cords and hardware that springs to mind when most people think of large-scale computation. For the thousands of feet of pipeline inside the Douglas County center, Google adopts a light-hearted rationale that mirrors the company’s happy-go-lucky approach to corporate culture: "We paint them bright colours not only because it’s fun, but also to designate which one is which. The bright pink pipe transfers water from the row of chillers to an outside cooling tower. The blue pipes supply cold water and the red pipes return the warm water back to be cooled." [1] Herein we encounter a blasé fusion of utility and appearance, engineering and slick public relations—the opacity and mystification of the algorithm at the heart of what Frank Pasquale calls the "black box society." [2]

Pipes for sending and receiving water at Google's Douglas County, Georgia data center. Image: Google.

In recent years there has been growing attention to the operational and environmental impact of data centers and the Artificial Intelligence (AI) produced by them—a need to peer inside the box, as it were, to better understand the structures that drive the algorithmic systems we now encounter everywhere. From articles revealing ChatGPT’s ‘thirsty’ water usage [3], to research exposing "AI computing’s carbon footprint," [4] tracing the materiality of algorithms is now a widespread scholarly and public undertaking. At a fundamental level we need to understand how algorithms work because they are now an inextricable part of our everyday lives. More urgently, there is a need to interrogate the environmental impacts of AI: how it is "made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications." [5] This earthy, physical manifestation is often at odds with the abstract, disembodied ‘cloud’ usually conjured in the public consciousness when AI is mentioned.

In his book To the Cloud: Big Data in a Turbulent World, the sociologist Vincent Mosco explains how the seductive metaphor of the cloud functions to convince people that offsite data processing is detached from the corporeal, mineral and water-based reality of the Earth’s landscape. "Moving to the cloud," Mosco writes, "is far from entering the ethereal, weightless, and green environment that the image of the physical cloud and the mythology of cloud computing suggest." [6] In this warped imaginary, AI is envisaged in dreamy, fantastic terms that have more in common with Romantic poetry than with the resource-intensive warehouses that dot the horizons of rural towns across contemporary North America. Like Wordsworth’s pensive speaker who wanders "lonely as a cloud / That floats on high o’er vales and hills," the popular understanding of AI remains sprightly, undemanding, and dispersed across immaterial time and space as if somehow perpetually floating.


Bound up with the necessity to see past AI’s deceptive otherworldly connotations, many have drawn attention to the dangerous biases and discrimination built into algorithmic systems. As John Cheney-Lippold reminds us, for instance, we are inextricably connected to the data that generate each "freshly minted algorithmic truth." [7] This connection has been shown to enable systemic data discrimination under the guise of algorithmic neutrality. Work by Safiya Umoja Noble, for example, reveals the ways in which AI is perniciously employed to surveil user searches of subjects deemed to fit specific racial or other demographic categories. [8] In many cases, datafied surveillance is "integrated into public housing and benefit programs" in ways that strategically ignore and elide "the needs and insights of poor and working people." [9]

In the Australian context, algorithmic surveillance and data discrimination were at the core of the 2016 Robodebt scandal. Initiated by the Australian Government’s Department of Human Services, Robodebt used an automated data-matching process that averaged annual income reported to the Australian Taxation Office (ATO) evenly across fortnights and compared the result against the fortnightly income individuals had reported for welfare payments, identifying apparent discrepancies that were assumed—often incorrectly—to point to overpayments. Posited as a routine data analysis exercise, the destructive system caused unwarranted financial stress and serious emotional anguish for numerous individuals. [10] The bureaucratic strategy enacted in this instance disregarded the necessity of human oversight as a guardrail against aberrant or incorrect data analytics. It also, however, reveals a greater harm at the core of many forms of algorithmic surveillance: the deliberate orchestration of data colonialism and corruption by human actors hiding beneath a disguise of algorithmic neutrality.
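To see why this matching logic produced false debts, consider its central flaw: income averaging. The sketch below is a simplified, hypothetical reconstruction in Python, not the actual departmental system; the figures and function names are invented for illustration, but the averaging step mirrors the mechanism documented by the Royal Commission. Annual income is spread evenly across 26 fortnights, so anyone whose earnings were intermittent appears to have under-reported in every fortnight they did not work.

```python
# Illustrative sketch of Robodebt-style income averaging.
# Hypothetical reconstruction for explanation only; not the actual system.

FORTNIGHTS_PER_YEAR = 26

def flag_discrepancies(annual_ato_income: float,
                       reported_fortnights: list[float]) -> list[int]:
    """Return the fortnights flagged as 'under-reported': every fortnight
    where reported income falls below the flat yearly average."""
    average = annual_ato_income / FORTNIGHTS_PER_YEAR
    return [i for i, reported in enumerate(reported_fortnights)
            if reported < average]

# A casual worker: 13 fortnights earning $3,000, then 13 with no income.
reported = [3000.0] * 13 + [0.0] * 13
flags = flag_discrepancies(sum(reported), reported)
print(f"Fortnights wrongly flagged: {len(flags)}")  # 13

# Every fortnight of genuine unemployment is flagged as a discrepancy,
# even though the worker reported all income accurately.
```

In the real scheme, flags of this kind were converted into debt notices, with the onus placed on recipients to disprove them.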


In her book Cloud Ethics, the British geographer Louise Amoore calls for a different kind of ethical practice in relation to algorithms, one that "begins from the algorithm as always already an ethicopolitical entity by virtue of being immanently formed through the relational attributes of selves and others." [11] This principle underpins several recent efforts to expose the underlying machinations of large-scale computational systems, in particular works curated by artists and writers for wider public education. One such example is Kate Crawford and Vladan Joler’s Anatomy of an AI System, which presents an ontology and cartography of the Amazon Echo as an "anatomical map of human labor, data and planetary resources." [12]

Kate Crawford and Vladan Joler's Anatomy of an AI System pictured in the exhibition Instruments of Surveillance. Image: Casey Horsfield. NCM, 2024.

Both artwork and pedagogical exercise, the installation is an eerie demonstration of AI’s internal machineries—assemblers, component manufacturers, smelters and refiners, mines, internet infrastructure, data preparation and so on. Invited to peer into the enigmatic void of automated quantification, we are instantly alert to the ultimate invisibility of AI in a world in which the term itself—Artificial Intelligence—is seemingly everywhere. By visualising AI’s underlying processes in intricate detail, Crawford and Joler turn the tables on a lineage of strategic obfuscation in which algorithmic tools have been presented to the public as a simple relation between "an individual, their data, and any single technology company" that trades in commercial data and surveillance capitalism. [13]

Gazing into Anatomy of an AI System, with its striking white graphic against the vast blackness of digital space, we are confronted with the highly connected network of tools, people, resources, institutions, companies, logics, laws, and labour that comprise every algorithmic gesture in our wider computational milieu. Reading the work left to right, we begin with the Earth and the geological processes that yield the minerals and reserves needed to manufacture AI hardware and infrastructure. At the bottom of the map is a different resource: "the history of human knowledge and capacity, which is also used to train and optimise Artificial Intelligence systems." [14] And so returns the human, at once seemingly disjunct from the gargantuan processing power that drives everyday surveillance algorithms and yet inseparable from the images, text, video, and voice recordings on which AI’s insatiable appetite feeds.


The story of algorithmic surveillance begins, and ends, with human ingenuity—from the welded pipes in Google’s aesthetically chic silos, to the opaque and unregulated models used to discriminate and erode individual freedoms. What Cathy O’Neil astutely calls a "weapon of math destruction," the algorithm is at once pervasive and contestable, constituted by planetary ingredients yet imperceptible and tactically elusive. [15] As we turn towards a future in which the possibility of Artificial General Intelligence (AGI) presents even greater challenges for democratic justice, data governance, and communication, a more incisive critique of AI requires unrelenting environmental transparency. The materiality of algorithms—in earth, art, database, and everyday speech—is an indispensable 21st-century deliberation.

References

[1] Google Data Centers, Gallery: https://www.google.com/about/datacenters/gallery/

[2] Frank Pasquale. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press (2015)

[3] Mark Sellman and Adam Vaughan. ‘"Thirsty" ChatGPT uses four times more water than previously thought.’ The Times (2024)

[4] Carole-Jean Wu et al. ‘Sustainable AI: Environmental Implications, Challenges and Opportunities.’ Proceedings of the 5th MLSys Conference (2022): 1-19

[5] Kate Crawford. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press (2021), p. 8

[6] Vincent Mosco. To the Cloud: Big Data in a Turbulent World. New York: Routledge (2015), p. 9

[7] John Cheney-Lippold. We Are Data: Algorithms and the Making of Our Digital Selves. New York: New York University Press (2019), p. 9

[8] Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press (2018)

[9] Virginia Eubanks. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press (2017), p. 8

[10] ‘Report.’ Royal Commission into the Robodebt Scheme (2023). Online

[11] Louise Amoore. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press (2020), p. 7

[12] Kate Crawford and Vladan Joler. Anatomy of an AI System (2018): https://anatomyof.ai/

[13] Kate Crawford and Vladan Joler. Anatomy of an AI System (2018), p. IV

[14] Kate Crawford and Vladan Joler. Anatomy of an AI System (2018), p. V

[15] Cathy O’Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown (2016)

About the Author

Tyne Daile Sumner

Dr Tyne Daile Sumner is an Australian Research Council DECRA Fellow at the Australian National University. Her research examines cultural representations of surveillance, modern and contemporary literature, and interdisciplinary approaches to Artificial Intelligence. She has published widely on topics ranging from cultural data and mid-century poetry, to digital ethics and facial recognition technologies. She is the lead researcher and co-curator of NCM's Instruments of Surveillance.