Beyond the Hype: Exploring the Real-World Impacts of AI on Society, Democracy, and the Environment

March 13, 2024 by Sonja Johnston

Artificial intelligence is often described in anthropomorphic terms, yet contemplating human concepts like creativity, intelligence, or ethics in the absence of biological life raises fundamental questions. On April 9, 2024, the Jackman Humanities Institute is sponsoring "Intelligence" in the Absence of Life, a workshop organized and moderated by Teresa Heffernan, JHI's Visiting Public Humanities Faculty Fellow for 2023-24 and Professor of English Language and Literature at Saint Mary's University, Halifax. Speakers Paris Marx, Elke Schwarz, Lucy Suchman, and Ron Deibert will explore the real-world impacts of AI on society, democracy, and the environment, looking past the narratives perpetuated by the AI industry and the media that obscure those realities. Discussions range from the reshaping of employment dynamics to the development of AI weaponry and the complexities of societal responsibility. Recently, Teresa Heffernan answered some questions ahead of the workshop.

Can you provide a brief overview of the workshop and its goals?

The goal of this workshop is to explore what it means to speak of creativity or intelligence or ethics in the absence of life and to foreground the real-world impact of imposing corporate-owned AI, which erodes public space and narrows debate, on the world at large.

For instance, ask ChatGPT about its definition of AI and it predictably conflates, in keeping with the AI industry, human intelligence with machines: “Artificial intelligence (AI) is the ability of computers and other machines to perform tasks that would normally require human intelligence, such as understanding language, recognizing images, making decisions, and solving problems. There are many different types of AI, including narrow or weak AI, which is designed to perform a specific task, and general or strong AI, which is designed to be capable of performing any intellectual task that a human can.” Ask a chatbot, like Google’s Bard, about its carbon footprint and it responds: “My carbon footprint is zero.”

There are many problems with these responses, but to start, AI does not “understand” language and “strong AI” does not exist. Chatbots generate prose by predicting the next word in a sentence based on statistical probability (think of these large language models as an elaboration of the autocorrect feature on your phone). These responses have already undergone a process of sorting, editing, and weighting carried out by mostly poorly paid workers behind the scenes. Moreover, the technology itself requires a resource-intensive infrastructure, including enormous amounts of energy and water, that does not bode well given our environmental crisis.
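To make that mechanism concrete, here is a minimal sketch, in Python, of the statistical idea behind next-word prediction: count which words follow which in a corpus, then sample a continuation in proportion to those counts. The tiny corpus here is invented for illustration; real large language models use neural networks trained on vast datasets, but the underlying principle of predicting the next token from observed patterns, rather than understanding, is the same.

```python
import random
from collections import Counter, defaultdict

# A toy three-sentence corpus (invented for illustration).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short string one predicted word at a time, starting from "the".
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the dog sat on the mat ."
```

Run it a few times and it produces different, fluent-looking but meaningless strings, which is the point: fluency without comprehension.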

What do you believe each speaker brings to the conversation about AI?

All the speakers foreground the real-world impacts of AI technologies that are obscured by the hype generated by the AI industry. While ChatGPT and other robots are marketed as replacing humans, Paris Marx exposes not only the threat they pose to artists and writers, but also the exploitation of the workers behind the scenes who ensure that "autonomous" machines, from self-driving vehicles to chatbots, appear autonomous. As he writes: "The robots and chatbots aren't replacing humans, they're just keeping the people out of sight and out of mind."

Along with this devaluation of human labour, Silicon Valley also wants to restructure the military to make it faster and more agile, often selling its AI products with ethical claims that, as Elke Schwarz points out, do not hold up. Reminding us that war is "not a technological problem," she will discuss the longer history of reducing the human to a technological variable in a lucrative war industry.

Lucy Suchman further explains how the AI-enabled military machine works, what it maps, and what it leaves out. Data, she reminds us, is not objective but "the product of an extensively engineered chain of translation from machine readable signals to ad hoc systems of classification and interpretation." In her discussion, she will explain the US Defense Department's troubling Joint All Domain Command and Control (JADC2) project, which proposes, in collaboration with tech companies, to build an AI-powered network that combines sensor data from all warfighting domains (land, sea, air, and space), with the goal of training the shooters of the future to operate in this new global targeting system. The integration of AI systems into all aspects of society, which is happening with little public oversight, has enabled the rise of authoritarianism. From the military to law enforcement agencies to the for-profit AI industry, the mechanics of algorithms and how data is collected, classified, and interpreted are often hidden behind claims of security and proprietary control.

As the founder of the Citizen Lab, Ron Deibert is dedicated to exposing the spyware mercenaries and the hack-for-hire industry that are often employed by governments to target journalists, lawyers, and human rights activists. Deibert is a global leader in the effort to check an unregulated industry that refuses to render transparent the mechanics of its algorithms and data sets.

The workshop covers a range of perspectives from different disciplines. Are there specific perspectives or voices that you feel are essential to include in discussions about AI, but that always seem to be missing?

The AI industry’s push for “AI everywhere” and an “AI-first world” short-circuits public debates about what sort of future(s) we want as we are inundated with messages that AI is the future. The push to “techify” everything obscures the real impact of AI on society, democracy, and the environment, and these impacts need to be brought to light. This technology is not neutral: consider the automation and acceleration of bias; the proliferation of misinformation, deep fakes, and hate-spewing bots, globally disseminated on highly profitable social media platforms; invasive surveillance technology; the theft of data and the erosion of privacy; the uberization of work; the immense concentration of wealth and power in the top tech companies; and the massive environmental price tag of AI infrastructure.

Not only do these platforms depend on the internet and technologies that were built with public money, but Big Tech companies have been swooping into cities in an attempt to take over and privatize health care, transportation, and education, while lobbying for massive public expenditures to fund high-tech research. Discussions about AI have been largely shaped by computer science and engineering, steered by corporate and military interests, so we need many more voices at the table given its global impact.

You’re this year's JHI Visiting Public Humanities Faculty Fellow. What role do you see public humanities playing in discussions on AI?

The humanities offer a place of independent and open-ended research that is ideally accountable and accessible to the public and committed to advancing learning and knowledge for the good of society, which is very different from the end-driven corporate mandate of turning a profit. The AI industry commodifies and monetizes knowledge: it owns the technology required to store and process big data and controls the platforms that disseminate the information. The humanities, the centuries-old study of human society, are not profit-driven and offer a very different type of knowledge than that generated by algorithms, big data, and machines that do not traffic in facts or evidence and that strip data of context, culture, and history.

As the industrial revolution was transforming the English countryside, Thomas Love Peacock, in his “Four Ages of Poetry” (1820), argued that poetry was increasingly useless and retrograde in the age of scientific invention. Percy Bysshe Shelley, in his spirited “A Defence of Poetry” (1821), countered that in the periods of history when calculation trumps imagination, social inequality and tyranny prevail: the rich get richer, the poor get poorer, and societies are torn between “anarchy and despotism,” which is a pretty accurate diagnosis of our tech-obsessed age.

How do you think public perceptions of AI have been shaped by the media and industry narratives?

Often the media uncritically disseminates the messages of the corporate heads of tech giants like Google or OpenAI, or consults those working for or funded by the AI industry, essentially giving them free advertising space. Like any industry selling a product, these companies are invested in marketing their technology, but what gets lost is the real-world impact of this technology.

There is also the problematic conflation of science and fiction that drives the AI industry and that the media often repeats, reducing fiction to tech propaganda and science to "scientism," something that merely has the veneer of science. So, for instance, images of the Terminator often accompany media articles about AI, but James Cameron was not referring to a literal machine but to the military-industrial complex and its dehumanizing thrust. Thus, Sarah Connor famously tells the scientist working on the destructive neural-net processor that will become Skynet: “You don't know what it's like to really create something; to create a life; to feel it growing inside you. All you know how to create is death and destruction.”

Are there specific outcomes or conversations you'd like to inspire among the workshop participants? 

I would love the workshop participants to engage in these conversations about the impact of AI on labour, the environment, and democracy and to really interrogate the future marketed by the AI industry. 

Can you suggest some resources for anyone interested in doing some reading before the workshop?

In addition to the great books, podcasts, and blogs by the speakers, I would recommend Kate Crawford’s Atlas of AI; Ruha Benjamin’s Race After Technology: Abolitionist Tools for the New Jim Code; Shoshana Zuboff’s The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; Benedetta Brevini’s Is AI Good for the Planet?; Cathy O’Neil’s Weapons of Math Destruction; Mary L. Gray and Siddharth Suri’s Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass; Ursula Franklin’s The Real World of Technology; and Jill Lepore’s If Then: How the Simulmatics Corporation Invented the Future.

Two of the best fiction works I have read are Sydney Padua’s graphic novel The Thrilling Adventures of Lovelace and Babbage: The (Mostly) True History of the First Computer and Will Eaves’s novel Murmur, based on the period when Alan Turing was forced to undergo chemical castration in lieu of prison; in this time of great mental and physical anguish, of humiliation and outrage, Eaves imagines how Turing might have rethought the brain-as-machine model that had structured his research as he was forced to meet his mind. I am also currently reading Benjamin Labatut’s brilliant The Maniac, about the mathematicians and physicists who gave rise to quantum mechanics, the atomic bomb, and AI.

The Jackman Humanities Institute is sponsoring “Intelligence” in the Absence of Life on Tuesday, April 9, 2024 from 9:30am to 5:30pm with a reception from 5:30pm to 7:00pm. The workshop takes place in Room 100, Jackman Humanities Building, 170 St. George St., Toronto, ON, M5R 2M8. This event is free and open to all!

See the full event listing for more information.
