Get Your Smart On
The public conversation about Large Language Models (LLMs, popularly referred to simply as AI) has revolved around the fears and fantasies depicted in the stories we have been telling ourselves about technology since at least the 1950s. Most recognizable is HAL 9000 from "2001: A Space Odyssey" by Arthur C. Clarke. The novel, developed concurrently with Stanley Kubrick’s film of the same name, has had a profound impact on how AI is perceived in popular culture. HAL, the computer onboard the spaceship Discovery One, is presented as a highly intelligent and capable entity, responsible for maintaining the spacecraft and the safety of its crew. When HAL malfunctions and begins to act against the human crew, a techno-dystopia unfolds.
On the other hand, we imagine AI can help us organize and run complex systems, including human societies. We are hopeful that AI can transform human experience for the better, bringing awesome enhancements to our lives. To illustrate, take "The Ship Who Sang" by Anne McCaffrey, a pioneering portrayal of a symbiotic relationship between human and machine. First published as a short story in 1961, it tells of the Ship, a spacecraft controlled by the brain of a severely disabled woman named Helva. Helva’s brain is implanted into the spaceship, allowing her to become its guiding consciousness. The Ship experiences and processes emotions, and forms deep bonds with her human partners. Such a successful relationship between human and machine represents the advancement of both.
So far, our public conversations about AI have been in these two modes of fantasy and fear: Could AI become sentient, or achieve General Intelligence (GI), and surpass humans? Does GI pose a danger to humankind? When will the singularity arrive, the point at which technology outstrips humans and escapes human control? We are all anticipation wrapped in trepidation, wrapped in anticipation.
Arguably, algorithms like LLMs alone are not AI, since AI today refers to an array of technologies like natural language processing, machine vision, and neural networks. LLMs are essentially word calculators, using math and probabilities to come up with the next word in a sequence of words; still, LLMs like ChatGPT, with their human-like chat interfaces, are becoming the public face of AI.1 But whether or not LLMs are AI is kind of irrelevant. The point is that the use of the term invokes a dialectic of fear and fantasy that displaces discussions about the material reality of these AI technologies, and of the on-the-ground practices taking shape around the use of AI.
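To make the "word calculator" point concrete, here is a minimal, hypothetical sketch in Python: a toy bigram model that counts which word follows which in a tiny made-up corpus, then samples the next word in proportion to those counts. Real LLMs learn their probabilities with neural networks over billions of parameters and sub-word tokens rather than by counting, but the basic move at the output, scoring every candidate next token and picking one, has the same shape.

```python
import random
from collections import Counter, defaultdict

# Toy "word calculator" (illustrative only): count which word follows
# which in a tiny corpus, then sample the next word from those counts.
corpus = "the cat sat on the mat and the cat ate the fish".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    counts = followers[word]
    candidates = list(counts)
    weights = [counts[w] for w in candidates]
    return random.choices(candidates, weights=weights)[0]

# In this corpus "the" is followed by cat (2x), mat (1x), fish (1x),
# so this prints "cat" about half the time.
print(next_word("the"))
```

The scale and the learning method differ enormously, but the output step is the same: probabilities over sequences of tokens, no understanding required.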
This piece hopes to open that discussion, beginning with the voices of experts in digital integrity and privacy rights: people like Meredith Whittaker, Lucy Suchman, and Maria Ressa, who have been warning us that LLMs/AI are but an expansion and intensification of what Shoshana Zuboff named “surveillance capitalism.” As you probably know, surveillance capitalism is a business model where companies accumulate vast amounts of data on individuals to predict and influence their behavior for profit. As Whittaker, president of Signal, the privacy-first messaging app, says in a recent conference talk entitled “AI, Encryption, and the Sins of the 90s”:
“The AI we're talking about today needs to be seen as an extension of this surveillance business model – a way to expand the reach and profitability of the massive amounts of data and infrastructural monopolies that these large companies possess…”
Surveillance capitalism has emerged as the business model of big tech. According to Whittaker, these AI projects are an answer to the question of what more these companies can do with their already outsized competitive advantage — the troves of data they have already collected and the infrastructure they control — so that they can continue to grow their influence well beyond the technology sector. In this way, AI extends and intensifies the political and economic control that a small handful of companies exert over our everyday lives.
You didn’t really think that a gazillion dollars are being poured into AI development so that you can play with synthetic text and images, produce content in the new creator economy, and make your “passive income” dreams come true? And by a gazillion dollars I mean what Adam Conover, host of Factually, says: “tens of billions of dollars this year alone,” and “one of the largest infusions of cash into a specific technology in the history of Silicon Valley.”2
Here is the beginning of the reality check needed in the age of AI hype:
It has been curious how a potentially liberatory technology like blockchain received ardent and public critique for its use of electric power, while very little attention has been paid to the vast amounts of resources that these LLMs consume. Both technologies are energy-intensive, but new blockchains are being built that are carbon-neutral by design, while AI's rapid expansion is driving its power usage to potentially surpass that of blockchain tech.3
Despite this, interest and resources flooded into AI development shortly after OpenAI’s release of ChatGPT in November 2022. I have been laid off twice this year alone, and I can say that it is nuts how many new jobs listed on job boards are now AI-related (see TechJury and CompTIA below). Prompt engineers, AI ethicists, and machine learning programmers emerged fully-formed from the hype-stream, and now we are neck deep in AI experts.4 Nvidia, the company known for designing the graphics processing units (GPUs) used in AI applications, saw its stock soar. Python, the dominant language of machine learning, has become all the rage. Clearly, all that sweet, sweet venture capital funding, and the interests it represents, is putting its thumb on the scale in favour of a fast and furious development of AI. It is a race, it is a “war,” it is the inevitable future.
This is in spite of repeated warnings from software engineers, who mostly remain unimpressed by what AI can do, that AI is nowhere near where the hype puts it. Consumer applications built on AI barely work, don’t live up to expectations, and some of their public demos have been found to be marketing stunts that misrepresent the AI’s real outputs. For example, Devin was billed as a fully autonomous AI software engineer created by a startup called Cognition, but clever Devin was creating its own bugs so it could fix them, and failed at basic comprehension tasks. Amazon’s “Just Walk Out” checkout, another product marketed as pure AI, reportedly relied on workers in India reviewing transactions. There are enough examples to make us suspicious of any new claims about what AI can do. In any other product, bad performance and “hallucinations” would be a cause for concern, but not when it comes to AI. This means that what AI can actually do (for us) is not the point: so what else is animating this seismic shift in the tech landscape?
Who is selling the AI dream? And who is buying?
To get at the reality beyond the hype, you follow the money. Whittaker rightly argues for a “political economic view” that takes the distribution of resources and power as the lens through which to interpret the narratives animating AI development.5 I’m not a political economist, but here is what is already clear: the development of LLMs leverages vast quantities of pre-existing data and fuels the demand for even more data collection. We know our data is used to refine algorithms for targeted advertising, content recommendation, and behavioral prediction. AI offers the rationale, as well as a new mechanism, for the collection of even more data.
The ultimate goal of this data collection is a system where consumer behavior is not merely anticipated but actively molded to align with predetermined outcomes. For such a project to succeed, human nature itself must become more predictable. As Lucy Suchman observes, these algorithms and machines only work well within enclosed worlds where the environment is fully controlled. Industrial robots work well within the closed system of an automated factory. The Amazon warehouse attempts to create a controlled environment, but humans are still needed to fill the gaps of what the robots cannot yet do, and they are shoehorned into this controlled world in dehumanizing and even dangerous ways, as we have learned from workers on the ground. And when you put self-driving cars into the open human world, bad things are bound to happen.6 Either these machines will need to improve significantly so that they can operate in human worlds, or human worlds will need to become more controlled and predictable, or both. We’ll come back to this when we discuss lifeworlds and phenomenology...
The push for "bigger is better" AI models directly incentivizes more intrusive and expansive data collection practices, allowing companies at the forefront of surveillance advertising, such as Google and Meta, to extend their surveillance capitalism model into industries previously inaccessible to them in any direct way. As Whittaker argues, these mega-tech companies realized that “…we can use these same resources to train AI models to infiltrate new markets to make claims about intelligence and computational sophistication that can give us more power over your lives and institutions.”7 The big AI push doesn’t represent a new turn, a new invention, or a break with the status quo, but rather an extension and intensification of the surveillance business model that powers big tech. Given recent critiques of this model, these interests needed a way to change their image, and AI gives them a way to wrest back control of our imagined futures.
AI-driven systems themselves operate as surveillance tools. They are used to generate inferences and narratives about individuals that impact their access to resources and shape their experiences. AI-powered facial recognition, emotion recognition, and worker-productivity monitoring are being incorporated into existing workplace-monitoring systems. If you have been subject to any of these technologies, you may have had a strong emotional reaction to being monitored in this way. It is not pleasant, even when an attempt is made to gamify it.
One example from my experience working in a gamified workspace: each week, we were asked to fill out surveys about our colleagues, and this, taken together with our own scores in what was literally called “The Game,” would determine who we were set up to work with next, and what projects we got to work on. The problem is pesky human ingenuity. When you introduce a measure of performance, humans will turn away from the job they are doing and work to play the real game, which is gaming the performance metrics (Goodhart’s law in action). As someone close to me says, data is for system management, not people management. Anyways, “The Game” was sold as a more objective way to evaluate performance, get important feedback from our peers, and as the smarter way to gain productivity. But it felt like we were subjects in a human experiment, and I came to suspect the development of that algorithm was the real reason for our work, not the projects we were advancing. But in comes pesky human ingenuity again. Without explicit coordination, we broke the algorithm. Given the nonsensical results, they discontinued its use — one for the “Team Human”8 column — but it was still harrowing to labour under those conditions.
We actually know little about how these AI surveillance technologies are being implemented in businesses and governments, but we know AI is being rolled out rapidly in a permissionless way. What we do know comes from workplaces that have seen some level of organizing, two examples being Uber and Amazon. We don’t know but suspect, based on the experience reported by Uber drivers, that algorithms are being used not only to set surge pricing (well known), but also to set wages for workers and to keep driving patterns unpredictable (less well known).9 Reportedly, when you first begin driving for Uber, you make pretty good money, as a way to entice you in. With time and experience, however, you will earn less, not more. You see, it becomes harder to leave an occupation the longer you have been doing it, even as your wages go down. (They are learning how to slowly turn up the temperature of the water until we are well and cooked.) Also, just when you figure out a pattern of driving that will earn you a predictable day rate, reportedly the pattern changes out from under you. This means that you can make wildly different amounts from day to day driving the same routes you previously established as profitable. The algorithm you are gaming is gaming you right back.
In the dream scenario, these algorithms will allow us to become a lot more efficient and productive, and could even lead to our liberation from wage labour — never gonna happen. We are also being told on the daily that AI threatens many of our jobs, at a moment of heightened economic stress. Companies across the tech sector are laying off workers en masse, and the job market is dismal despite economic indicators telling us the economy is doing just great. We are being made redundant, squeezed, and forced to return to the office, despite historically high productivity numbers. The chipping away at workers’ rights is blamed on technological progress, and who can argue against progress? It is inevitable.
While AI is being used to review human work, these algorithms are truly beyond human review: we often don’t know why an AI renders the decisions it does. We do know that AI systems have been trained on some of the worst data on the internet, data no doubt reflecting existing biases and inequalities. As a result, AI-driven decision-making in various domains, including law enforcement and employment, risks perpetuating and even exacerbating these biases. This is sure to disproportionately impact minority communities.
The framing of AI as neutral, objective, and inevitable obscures the values and interests driving its development, and makes it hard to hold anyone responsible for harmful outcomes. This is a familiar ploy, the one Donna Haraway named “the god trick”: the position outside and above from which you can see without being seen, affect without being affected, called objectivity. Often, those who are being surveilled do not know they are being surveilled. But even when we do know we are being watched, the mere possibility of having our movements captured at any moment is enough to change how we behave. You may experience ongoing, low-grade paranoia or anxiety, never knowing when and how you are being surveilled. This is taking place in the background of our lives, generally without our consent or knowledge. I left the US several years ago, and I can tell you it takes months if not years to come down from the generalized anxiety of living in the US. On the flip side, living in other places is pretty boring because all that adrenaline is addictive.
To be continued…
CompTIA. "Top AI Statistics and Facts for 2024." CompTIA Blog, CompTIA, 2024, https://connect.comptia.org/blog/artificial-intelligence-statistics-facts. Accessed 29 Sept. 2024.
Haraway, Donna. "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century." Simians, Cyborgs, and Women: The Reinvention of Nature, Routledge, 1991, pp. 149–181.
Hare, Jonathan. "What Uses More Energy—AI or Bitcoin Mining?" Wired, 17 Sept. 2023, https://wired.me/science/energy/ai-vs-bitcoin-mining-energy/. Accessed 29 Sept. 2024.
Ressa, Maria, host. "The AI series: AI and Surveillance Capitalism | Studio B: Unscripted." Studio B: Unscripted, 28 Nov. 2023.
Suchman, Lucy. "(471) What is AI? Part 2, with Lucy Suchman | AI Now Salons - YouTube", commentary by Sarah Myers West, 8 Feb. 2023.
Suchman, Lucy. "Restoring Information's Body - Remediations at the human-machine interface" Remediations at the human-machine interface, 9 July 1993,
TechJury.net. "101 Artificial Intelligence Statistics [Updated for 2024]." TechJury, 2024, https://techjury.net/blog/ai-statistics/. Accessed 29 Sept. 2024.
Whittaker, Meredith. "AI, Encryption, and the Sins of the 90s." Keynote address, Network and Distributed System Security (NDSS) Symposium, 27 Feb. 2024, San Diego, California. YouTube, uploaded by NDSS Symposium, 27 Feb. 2024.
Whittaker, Meredith. "What is AI? Part 1, with Meredith Whittaker | AI Now Salons." Commentary by Amba Kak, 1 Feb. 2023. YouTube.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Public Affairs/Hachette Book Group, 2019.
Suchman, Lucy. "Restoring Information's Body - Remediations at the human-machine interface." Remediations at the human-machine interface, 9 July 1993.