Thursday, December 7, 2023

Algorithmic capitalism

Review of

Simon Lindgren

Critical Theory of AI

ISBN: 978-1-5095-5577-2

209 pages

Polity Press


If you ask what artificial intelligence or language models like ChatGPT are, you will typically get either an answer about the technology behind them or an answer about what they can be used for. The first answer will be about programming, machine learning, algorithms, and that sort of thing. You may also get the history of artificial intelligence as a story about brilliant scientists and engineers who have constructed thinking machines. The second answer will be about the models' superhuman powers and abilities, increased efficiency at work, clever solutions to complex problems, or how many jobs will be eliminated, leaving people unemployed.

Both types of answers are important and can raise good questions, but they may still miss the point when we talk about artificial intelligence, or any other form of technology, for that matter.

At least that's what Simon Lindgren argues in his new book Critical Theory of AI.

The problem with narrow, technical questions about AI is that they treat technology as something isolated. This is in fact a fundamental problem with many discussions of technology: it is seen as something that is simply in the world, something one can then choose to use or not. A very common view of technologies is that once they are produced, they are simply present and available for use. Technology is perceived as something specific with a given use.

Lindgren draws on critical theory and post-structuralist theories of power in his definition of artificial intelligence. With Michel Foucault, he looks at how technology is always part of knowledge-power relations, in which particular privileged understandings are established as true and correct, with effects on how individuals can live their lives. In the contexts in which technologies are involved, the individual is positioned or addressed in a certain way, which is part of how the individual becomes a subject. It is Althusser's conception of ideologically determined and determining interpellation that is at stake here. Technology becomes part of a discourse in which the 'truth' is fought over and negotiated. “Discourse is power and knowledge embedded in language and social practice” (p. 130).

To understand artificial intelligence as something other than specific tools, Lindgren suggests understanding it as an assemblage. Technologies will always be part of a wide array of relationships with other things and contexts. There are people who use these technologies in specific ways, and these technologies carry significance as economic instruments. They also acquire ideological meanings that become part of the relationship between humans and machines, situated in a context involving a web of power and meaning.

According to this understanding, a technology (such as a coffee machine, a fountain pen, or a language model) is never simply an addition to humans. As Latour (another of Lindgren's inspirations) put it, a gunman is more than just a gun + a man. And when the gunman is considered in a context where other forms of power and knowledge are also at work, things become more complex still. Yet it is precisely this complexity that is crucial for understanding AI in a critical theoretical light.

Lindgren employs various terms for AI as an assemblage. He draws, among other things, on Foucault's concept of the social as an apparatus or dispositif, indicating how all things and relations are interconnected through knowledge, power, and organization. The assemblage or apparatus is a heterogeneous totality in which laws, power relations, institutions, buildings, organizations, morality, knowledge, science, and so on are interconnected. "[We] can conceive of AI and humans as entities that are interconnected in a more broadly encompassing 'machinery,' that is 'the social'" (p. 86).

Artificial intelligence, like all other technologies, must be understood in relation to the way it is integrated into established economic and political structures. Technology is therefore not something separate from, say, economic conditions, something that can be added or left out at will. Technologies themselves have agency (a capacity for action) that influences the contexts they are part of. However, technologies do not determine societal conditions; the claim that they do is another ideological understanding of technology, one that argues that as a society we must incorporate technology now that it is here, casting technology as an urgent driving force for societal change. But critical theory continually points out that things could be different: there is room for political action, and thus for changing techno-economic conditions and circumstances.

When it is claimed that technological development simply has consequences, this is itself a specific ideological use of technology that is part of a power play. Invoking technological development empowers some at the expense of others, Lindgren asserts with the weight of critical theory behind him. "[Tech] must be understood in relation to political economy, and that it is not an autonomous force. [...] This means that AI must be seen from the theoretical perspective of the social shaping of technology" (pp. 9 and 14).

Currently, AI and language models are celebrated as disruptive technologies that will change the future for many. This is probably true. Not least, they will contribute to the ongoing climate catastrophe, because training, maintaining, and using digital technologies consumes energy on a scale that emits unimaginable amounts of CO2. Another consequence is the underpaid workers in the global South who provide human feedback for image-recognition systems. And the entire platform economy, which is also linked to the development of AI, means a setback for the rights and conditions that labor movements worldwide have fought for, because the platform economy turns work into detached "tasks" that private actors can perform: food couriers, taxi drivers, programmers, and so on. AI also means precarity: casual laborers and day laborers with minimal rights. It is "algorithmically driven digital capitalism," as Lindgren states (p. 23).

But this kind of disruption is not highlighted in the ideologically determined perception of technology as progress. Here, technology is part of a discourse that draws on an Enlightenment-philosophical understanding of history as moving forward and upward through people's development of new technologies and organizational forms. In reality, Lindgren claims, technologies like AI are remarkably conservative when you look at how they actually function. "For all its celebration of disruption and innovation, the tech industry has always tended to serve existing power relations" (p. 65). The ideologies programmed into AI turn out to be thoroughly traditionalist, or perhaps something even worse: Lindgren goes so far as to include Yarden Katz's Artificial Whiteness, which argues that AI must be understood as a 'technology of whiteness' because it mimics and serves a logic of white supremacy (p. 138).

The point is that demanding ethical AI isn't enough, because the problem isn't how we as humans can relate to the technology. AI is systematically connected to a specific form of socialization, contributing to algorithmic oppression.

Interestingly, Lindgren points to Habermas' discourse ethics as a possible way to see and understand technology differently. The ideal becomes deliberative democracy, in which development, decisions, and determinations are grounded in the people affected by these processes. It is a fundamental understanding of direct democracy translated into what Habermas calls discourse ethics: "Only those norms can claim to be valid that meet (or could meet) with the approval of all affected in their capacity as participants in a practical discourse. [...] Translated into the AI domain, this means that the moral architecture that underlies algorithmic systems must be grounded in the practical participation of those that are affected by the systems" (p. 159).

When I call this reference to discourse ethics interesting, it is because it fundamentally breaks with the analytical framework Lindgren has set up using original critical theory and post-structuralism. It is no coincidence that Habermas is highly critical of post-structuralist thinking, since it breaks with the Enlightenment-philosophical, Kantian, optimistic idea of reason as the saving factor. Habermas adheres to Kant's idea that reason will ultimately find the truth, and his moral philosophy likewise pointed to the possibility of morality. Habermas' concept of communicative action relies on the notion that reasonable conversation leads to a shared truth: the common good.

However, the post-structuralist thinkers Lindgren draws on reject the existence of any such reason, and discourse therefore means something entirely different to them. This contradiction between the analytical framework and the solution Lindgren points to could have been addressed by the author himself. There is an unresolved duality in Lindgren, which is also evident in his use of the concept of ideology. On the one hand, ideology means the values circulating in conjunction with knowledge and power (Foucault). On the other hand, ideology for Lindgren means false consciousness, drawing on the Marxist notion that citizens are deceived through bourgeois ideology. These two concepts of ideology, as far as I can judge, stand side by side in the book without being acknowledged as conflicting.


In my opinion, these are blemishes in a very good book that offers a different and critical view of technological development. Lindgren examines AI as an assemblage along various dimensions, providing a critical theoretical perspective on a revered technology. There are insightful analyses and discussions throughout the book. Lindgren writes very well, making the book accessible to anyone interested in a comprehensive view of AI: a perspective that does not limit itself to judging technology as either good or bad, but delves into the various contexts in which technology operates and influences people's social relations and individuals' relationships with themselves.
