Identity Politics and Artificial Intelligence

Roberto Simanowski
4 min readFeb 12, 2022

Since the rise of identity politics, the issue of who does the talking has taken center stage. Since the rise of Amanda Gorman, the same could be said of who does the translating. There’s no way a white man should translate the words of a black woman, many people believe. Fair enough, but what about artificial intelligence, which is playing an ever greater role in the nuts and bolts of human communication? Who’s actually talking when AI is the one generating the words?

In September 2020, an article in the Guardian newspaper about the dangers of artificial intelligence for humankind caused a stir — less for its content, which sought to defuse worries, than for its author, which was an AI program itself, hence the title: “A robot wrote this entire article. Are you scared yet, human?”

The piece immediately made skeptics ask whether the code truly meant what it had written about the relationship between AI and humans, and whether it would have warned us had it reached different conclusions.

Those who understand AI a little more deeply know that it only told us what we human beings ourselves think about our relationship to it. AI doesn’t think when it writes. It computes. As an artificial neural network, it can only process information statistically. From gigabytes upon gigabytes of text, it knows which word most frequently follows any given other one, just as Google Translate knows which German word is substituted most often for any given English one.
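The statistical principle described above can be sketched in a few lines. This is a deliberately toy illustration (a simple bigram frequency model, not how GPT-3 or Google Translate is actually built): count which word most often follows each word in a corpus, then always predict the most frequent successor. The corpus and words here are invented for the example.

```python
from collections import Counter, defaultdict

# Invented toy corpus; each token is a "word" in the statistics below.
corpus = "humans fear ai . humans trust ai . humans fear change".split()

# For every word, count how often each other word follows it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict(word):
    """Return the statistically most frequent next word."""
    return successors[word].most_common(1)[0][0]

# "fear" follows "humans" twice, "trust" only once, so "fear" wins.
print(predict("humans"))  # -> fear
```

The point the sketch makes is the one in the text: the model has no opinion about humans and fear; it simply reproduces whichever continuation its input data contained most often.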

AI operates not by sense but by math. It’s an opportunist profiting from probability, which always says whatever has a majority on its side. And that’s precisely the problem when identity politics is factored in.

Is artificial intelligence feminine, like the gender of the corresponding terms in German and many other languages? Or is it masculine, as well as white and heterosexual, because straight white males are the group that has left behind the largest linguistic footprint on the Internet? Many AI researchers fear that the answer is the latter and have started demanding that the data-set from which AI learns to compute, if not actually think, be curated so that minorities are equitably represented. There are also calls for “algorithmic reparation”: algorithms that prioritize protecting groups that have historically experienced discrimination — a claim informed by critical race theory and reminiscent of affirmative action in hiring.

The question then arises of who should be allowed to make what adjustments and with what sort of a mandate. Will software engineers in Silicon Valley be allowed to correct the data-set as they please? Or will that be the task of international monitoring committees similar to Facebook’s Oversight Board and Twitter’s Trust & Safety Council? Who selects the members, and what will the criteria be for making decisions?

Whatever the answers may be, any attempt to correct the AI data-set introduces politics into statistics. And leaving aside identity politics for a moment, this is problematic. No matter how curating is concretely organized, it means the return of gatekeepers, whose elimination by the Internet was welcomed so frenetically. The history of technology, it seems, contains a surprising amount of irony.

None of that takes away the urgency of the issue within identity politics. The problem of underrepresentation and how to fix it remains a hot-button topic in both the on- and offline worlds. The discussions about who gets to speak for what, or who is entitled to translate Amanda Gorman, are public ones. The AI training set, however, exists away from the public eye. Ultimately, likely not even the programmers know which people of which identities determine how much of AI’s calculations. It’s completely unclear who’s talking when AI tells us humanity has no reason to be afraid of it.

As if that weren’t thorny enough, AI remains inequitable even when the data-set is repaired. If it does indeed operate according to strict rules of probability and always picks whatever represents the majority, every minority, no matter how prioritized and affirmed, will ultimately be silenced. Minority voices will no longer come together in a more or less vigorous opposition pressuring the hegemon to reach some sort of agreement. If the greatest number always triumphs, there is no proportional representation and no compromise. As in US elections, the winner takes all.
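The winner-takes-all dynamic can also be made concrete in code. In this hypothetical sketch (the 60/40 split and the labels are invented), greedy selection — always emitting the single most common option — erases a 40% minority entirely, whereas sampling in proportion to frequency would preserve it:

```python
from collections import Counter
import random

# Hypothetical training data: 60% majority phrasing, 40% minority phrasing.
voices = ["majority"] * 60 + ["minority"] * 40

# Greedy, winner-takes-all generation: pick the most common option every time.
greedy_outputs = [Counter(voices).most_common(1)[0][0] for _ in range(1000)]
# The minority voice never appears, despite making up 40% of the data.
print(Counter(greedy_outputs))

# Proportional sampling, by contrast, keeps the minority at roughly 40%.
random.seed(0)
sampled_outputs = [random.choice(voices) for _ in range(1000)]
print(sampled_outputs.count("minority"))
```

Which of the two behaviors a deployed system exhibits is a design decision, not a law of statistics — which is exactly why the question of who makes that decision matters.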

This sort of AI is a mouthpiece of the mainstream and by no means a tool for promoting equality and inclusion. Does that make it white and male? Not necessarily — at least not if AI is globally oriented and takes as its basis all the data on the Internet. In that case, the mainstream will be whatever is most heavily represented on the Internet. Strictly statistically, like-minded groups will have the advantage.

When AI takes over their voice, it becomes an advocate of conformist thought, which is worse than a melting pot in which all minorities are submerged — and drowned out. This process shows that technological progress doesn’t automatically equal social progress. It’s also reason to believe that AI should definitely not translate Amanda Gorman and that humanity has more to fear from AI than AI itself may think.


German scholar of media & cultural studies; founder of Dichtung-Digital, author of “Data Love”, “Facebook-Gesellschaft” & “Death Algorithm”.