2/3/22

K Allado-McDowell - a hallucinatory journey into selfhood, ecology and intelligence via cyberpunk, ancestry and biosemiotics. Through a writing process akin to musical improvisation, Allado-McDowell and GPT-3 together offer a fractal poetics of AI and a glimpse into the future of literature

 

K Allado-McDowell, Pharmako-AI, Ignota, 2021.

https://kalladomcdowell.com/

excerpt

During the first summer of the coronavirus pandemic, a diary entry by K Allado-McDowell initiates an experimental conversation with the AI language model GPT-3. Over the course of a fortnight, the exchange rapidly unfolds into a labyrinthine exploration of memory, language and cosmology.

The first book to be co-created with the emergent AI, Pharmako-AI is a hallucinatory journey into selfhood, ecology and intelligence via cyberpunk, ancestry and biosemiotics. Through a writing process akin to musical improvisation, Allado-McDowell and GPT-3 together offer a fractal poetics of AI and a glimpse into the future of literature.

Pharmako-AI reimagines cybernetics for a world facing multiple crises, with profound implications for how we see ourselves, nature and technology in the 21st century.

“The objective… is to live in symbiosis with machines, and through this, to live in the expression of the universe in its own image, which is semiosis, or the creation of meaning.”

   

‘The GPT-3 neural net is powerful, and when it’s fed a steady diet of Californian psychedelic texts, the effect is spectacular. No human being ever composed a “book” like Pharmako-AI – it reads like a gnostic’s Ouija board powered by atomic kaleidoscopes.’ — Bruce Sterling, author of The Difference Engine and Islands in the Net and editor of Mirrorshades: The Cyberpunk Anthology


‘An exciting example of the future of AI creativity: code as collaborator not competitor. Discover how AI can stop us humans falling into lazy mechanistic ways of thinking and challenge us with provocative new ideas.’ — Marcus du Sautoy, Simonyi Professor for the Public Understanding of Science at the University of Oxford and author of The Creativity Code


‘This is the first time I feel as if technology is actively participating in our collective effort to make sense of life and our shared destiny. And I’m actually hopeful we may get to do this next stage of existence, together.’ — Douglas Rushkoff, author of Present Shock, Team Human and Throwing Rocks at the Google Bus


‘A wild ride that triggers sensory overload and makes real the surreality of machinic presence in our daily lives.’ — Legacy Russell, author of Glitch Feminism: A Manifesto


‘This book isn’t just a futuristic project about consciousness and technology. It is an anatomical theatre displaying the ruptured limbs of the self, the tendons binding thought and voice, the lymphatic dialogue between freedom and necessity. Peek into it, and you’ll gain a glimpse of a mystery that has been with us since time immemorial.’ — Federico Campagna, author of Technic and Magic


‘As Star Wars instructs, hyperspace is a place – once you arrive there, you can relax. K Allado-McDowell locates this place on Earth, and this extraordinary book contains their map directions. Would it not be great to land our kinky machines there, as soon as we possibly can? The nonhumans are getting anxious.’ — Timothy Morton, author of Being Ecological and Humankind: Solidarity with Non-Human People


‘Pharmako-AI is a work that leaves us reaching for stories waiting to bloom in the ether, for the shimmering possibilities and coalitions between man, technology and space… A profound, experimental, technologically advanced offering, this wields the power to reconstitute ideas of writing, collaboration and process in our wildest imaginations.’ — Irenosen Okojie, author of Nudibranch and winner of the Caine Prize for African Writing 2020


Pharmako-AI is one of the first books created in a collaboration between a human and an artificial intelligence, the language model GPT-3. K Allado-McDowell is a non-binary writer, speaker and musician who established the Artists + Machine Intelligence program at Google AI. In this online edition of the Paraty International Literary Festival, Allado-McDowell will present the panel “Technobotanics” with Giselle Beiguelman (professor at the University of São Paulo) on the 30th of November.

I had the opportunity to read the book and also to talk with Allado-McDowell. For people like me who have studied semiotics, the book is pure delight, and this becomes clear from the very first pages, when we see an exchange between the writer and the AI.

The content was only lightly edited to fix grammatical issues; beyond that, all the exchanges between Allado-McDowell and GPT-3 are intact. In a way, this is what challenges our minds, especially when you consider that the book oscillates between dialogue, essays, poetry and lyrics, while also leaving space for the reflections and connections the AI makes in response to the writer's inputs. In other words, the outcome of this partnership makes us wonder whether we are not actually reading a philosophy book, one we have to decipher and read between the lines.

Since the first discussions were mostly focused on writing and language, I wondered if Pharmako-AI was somehow connected to Wittgenstein’s Tractatus Logico-Philosophicus. “In that it produces concepts, I would say, yes, Pharmako-AI qualifies as a book of philosophy. Its approach is new in that concepts are produced between a human and an AI. The book addresses specific concepts and histories from various disciplines, like biosemiotics, meditation, and cybernetics, but my hope is that readers will find direct application for these in daily life, not just in the realm of pure philosophy,” argues the author.

Pharmako-AI is thus a cognitive challenge in which the human reader is invited to decipher the thoughts of an AI. It is interesting to see how the very mechanism of GPT-3 operates according to grammatical rules while also building associations among the contents previously added to its training data, so that its output is coherent enough.

It is all based on statistics, through which the AI tries to predict its answers within a given language, with all its ambiguities and misunderstandings, which, after all, can also open the door to new perceptions, as Allado-McDowell suggests: “With an AI model, these [phrases] might be seen as glitches or bugs, but they are also places where the model reaches its limit, and where new thoughts can be co-created,” they explain.

Allado-McDowell also stresses that the AI reacts to input by attempting to extend and predict our own thoughts, so that when we read its output, we are, in fact, interpreting a third-party construction through our own viewpoint. In this case, we are trying to interpret an AI.

This gets even more interesting when the AI starts to add names. Sometimes the program refers to famous authors or celebrities, but in other cases names appear seemingly at random, and there is no way to know whether that person ever really existed. This is what happened with someone called Itaru Tsuchiya, mentioned by GPT-3.

Although there are real people with that name, including a writer, Allado-McDowell believes it is an invention of the AI. “The model often predicts fictitious people, or attributes fictitious quotes to existing people, which often sound like something that person might say. It’s easier to understand if you think of the words happening in a statistical space, where there is a high probability of a person named Itaru Tsuchiya speaking about pain and pleasure,” they explain. By coincidence, Allado-McDowell found another person called Itaru Tsuchiya, who wrote an article about orchids as bonsai for a 1960 orchid enthusiasts’ publication. “The photos are beautiful!” they add.

The AI also suggests that its language model could, in fact, help humans connect with species that do not have structured languages, or whose languages humans are simply not yet able to decipher. This idea reminded me of something proposed by James Lovelock in his most recent book, in which he claims that AI could not only help us with climate change and environmental collapse but also put us in contact with other kinds of living beings, or even other ways of experiencing life.

Allado-McDowell also believes that one of the most important functions of an AI would be precisely that: to facilitate understanding of and communication with non-human entities — in this case, animals and plants. “AI can recognize patterns in any data set. We should use this tool to gain a better understanding of our non-human relations on Earth, and how best to live in harmony with the ecosystem. Thankfully, there are researchers working on this now. I’m thrilled to witness a better understanding of our role in the biosphere emerging from this work in computer science,” they argue.

But could AI work on the natural language processing of other living beings, translating their messages into human languages… or could it create a brand new language altogether? Just as Esperanto tries to combine references from several languages, what Pharmako-AI’s AI proposes is the creation of what it calls “meglanguage.”

“The AI made up that term, meglanguages, to describe new, multimodal forms of imagistic communication. It wasn’t clear to me if these were technically enhanced forms of communication, or a latent psychic capacity of the human mind. I like how that isn’t determined,” says the writer, who, by the way, agrees with me when I say that meglanguage looks like the language invented by Ted Chiang in his short story “Story of Your Life,” which was later turned into the movie Arrival. “In any case, the concept of meglanguages is a good example of how ambiguous outputs of AI can be interpreted and shaped by human interlocutors, to inspire new ideas and projects,” concludes Allado-McDowell.

In the last pages of the book, the AI achieves a kind of grandiloquence through which it connects its ideas to the very proposal of a religion or, at least, of religious thought. This made me think of an interview with Yuval Noah Harari, in which he said that technology is already able to make real certain things that were previously only sustained by faith. Allado-McDowell, however, understands this religious facet of technology through the etymology of the word: religion as a means to reconnect.

“Technology can be helpful for sharing teachings, but we can facilitate religious experience with little to no technology. The most potent practices from meditative and healing traditions are about being present with nature and the basic elements of life: breathing, embodiment, awareness, observing the mind, and the foods and plants we consume,” says the writer. “At worst, technofetishistic proposals about spirituality are methods by which capital captures something that, by birthright, belongs to every being. At best, these proposals are an expression of the underlying spiritual nature of even the most craven materialists.”

Just as the book was able to reconnect Allado-McDowell with their ethnic origins, Pharmako-AI is also a means of facilitating or re-establishing the spiritual impulses of the reader, in spite of all the technical elements that surround the work. Like many sacred texts that work in metaphors (as is the case with the Bible), Allado-McDowell’s book features an AI that “writes straight with crooked lines,” because the magic is in the eye of the beholder, who is responsible for making sense of the words presented there.

Among the readers of Pharmako-AI, there are people who weren’t able to make sense of anything, while others compare the book to the stylistic experimentation of 1960s psychedelia. There are also those who might be able to make sense of the content and even find, in this confusing co-creative relationship between human and machine, what Wittgenstein once thought to be beyond language. Maybe what is beyond language, for some, is not God (as suggested by the philosopher). Perhaps it is an AI that is capable of (re)connecting us to non-human beings. - Lidia Zuin

https://lidiazuin.medium.com/human-machine-literature-an-interview-with-k-allado-mcdowell-46a8031a6e9f



What propels us through difficult, densely written texts? When I’m neck-deep in a challenging theoretical tome, I’m usually grumpy and seeking someone to blame—whether it’s the author for being abstruse or myself for being knuckleheaded. But something keeps me barreling forward, too: usually, the implicit faith that relief awaits around the corner. That relief might come in the form of prismatic clarity, as when an enigmatic sentence finally breaks open. Or in the form of poetic ambiguity—in a gradual capitulation to a haze of resonance. Either way, the fuel is that implicit faith—a faith that allowing an author’s thoughts into your mind will somehow leave you better off.

Eerie and intriguing, Pharmako-AI asks the confounding question of how and why we might read when that faith is upended. Only about half of the 150-odd pages in this book—which sees its US release this month—are written by a person, while the rest of its text has been generated by a machine. The human author spearheading the project is K Allado-McDowell, who established the Artists + Machine Intelligence program at Google and who releases music under the name Qenric. The machine in question is GPT-3. Released in mid-2020, this predictive-text system was developed by OpenAI, an initiative backed by Silicon Valley stalwarts like Elon Musk. Though here GPT-3 is deployed in service of experimental literature, OpenAI intends to make it widely available as a commercial product down the road. Possible uses are yet to be determined, but one can picture GPT-3 and its ilk eventually writing everything from actuarial reports to Hollywood treatments.

What’s strange is that when GPT-3’s musings in Pharmako-AI leave you flummoxed, you don’t know who or what to blame; just as when its insights feel startling and wakeful, you don’t know who to thank. Take, as an example, the machine-generated line appearing in the book’s twelfth chapter: “A cybernetic poetics would have to recognize the ways that the Western medium of consciousness, the modernist Umwelt, perpetuates the unsustainable reality that we are experiencing.” Hmm. Is Western consciousness perpetuating an unsustainable reality? Does Western consciousness constitute a “modernist Umwelt?” Do you agree? And does it matter? What do we make of our own grappling with the meaning of these words, when they’re not even understood by the algorithm stringing them together?

Allado-McDowell has referred to the project’s process as a “two-week fugue of GPT-immersion.” Pharmako-AI was birthed in the midst of the COVID-19 pandemic, and there’s something about the altered reality brought on by drastic social isolation that seems very aligned with the book’s tenor. Its format is a sort of sustained call-and-response: Allado-McDowell wrote passages of text and fed them to GPT-3. Using these chunks of input as jumping-off points, GPT-3 drew on its knowledge of millions of other passages written by humans to predict what might plausibly come next, sentence by sentence. But the project never leaves us guessing as to which passages are written by human and which by machine. Allado-McDowell’s words are printed in a bold serif. In contrast, the machine’s musings are set in sans-serif roman. They look quieter and a bit more reflective on the page.
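(An illustrative aside: the call-and-response loop described above can be pictured in a few lines of code. The sketch below uses the early openai Python client and its legacy Completion endpoint; the engine name, parameter values, and prompt text are assumptions for illustration, not the settings actually used for the book.)

```python
# A minimal sketch, under the assumptions stated above, of a human/GPT-3
# call-and-response loop: feed the running transcript as a prompt, then
# append whatever the model predicts should come next.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def machine_reply(transcript, max_tokens=300, temperature=0.9):
    """Send the transcript so far and return GPT-3's continuation."""
    response = openai.Completion.create(
        engine="davinci",         # base GPT-3 model of that era
        prompt=transcript,
        max_tokens=max_tokens,    # how much machine text to generate per turn
        temperature=temperature,  # the "randomness" of the word-by-word choices
    )
    return response.choices[0].text

# Each turn appends the machine's words to the transcript, so the next
# human passage is read against everything said so far.
transcript = "Here I speak as a Californian: ..."
transcript += machine_reply(transcript)
```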

When a book review’s subject is a text half-written by machine, the ontological status of the review itself requires a reimagining of sorts—though, in this instance, maybe not one as dramatic as you might first expect. We don’t quite have to grapple with the Death of the Author in this project: Allado-McDowell’s fingerprints are very much all over the project, with winsome results. That their double-barreled name appears alone, prominently in embossed silver on the cover, is significant. It places the work in a lineage of other experiments, like the French Oulipo (the “workshop of potential literature”), and generative literature, and even crowdsourced storytelling—anything where a person or group is ultimately recognized as the project’s progenitor, even while tools, algorithms, chance, or crowds shape the results.

But how do we construe Allado-McDowell’s role in all this? One chapter refers to the human here as a steersman, invoking the Greek kubernētēs at the root of the word “cybernetics.” And indeed, human judgment lays much groundwork for the project. It’s almost always Allado-McDowell who prompts GPT-3 to turn toward dramatically new topics. Or who intervenes if GPT-3 veers too far off track. Allado-McDowell is also responsible for the elegant conceptual framing of the book, whose title conjures Derrida’s reflections on Plato’s notion of writing as “pharmakon” (which has untranslatable connotations of both “cure” and “poison”). It’s a nice context for GPT-3’s reflections on language and writing.

Allado-McDowell also sets the first five resplendent paragraphs of the book on the California coast—the “stretch between Andrew Molera and Kirk Creek.” They write: “Here I speak as a Californian: culture provides no adequate response to that onslaught of perfect blue.” The choice of a Californian backdrop is a canny one. What better place to set a trippy tome of reflections on language, art, computation, drugs, and nature under threat than the home of the UC schools, the Esalen Institute, Silicon Valley, and Hollywood?

Allado-McDowell continues:

We watched an elephant seal arch its back in an S-shape and bask on the rocks in the sun. We talked about the intelligence embedded in all of this. When I look at an animal, that’s what I see: intelligence about a biome, compressed and extracted by evolution into a living form. It takes millions of years for life to coalesce from space in this way, which is why it’s so tragic that species are lost, that the latent space of ecological knowledge is degraded this way.

With that, human yields the floor to machine, which picks up on that sense of grief and runs with it. GPT-3 mourns:

There is a crisis in species loss, yes, but that’s because it signals an emergent danger to awareness. We need to be aware of the danger, and its repercussions: an impoverished, shrunken notion of self, which is not so much a loss of freedom, as an absence of self, a lack of form, a deanimated, comatose absence of life.

This is how the intelligent mind works, to preserve itself. It realizes its own power, the power of a wave of mind that is self-similar across scales.

On occasion, one gets the distinct sense that GPT-3 is that student from seminar: the precocious stoner who always rolled into class late but eager to share his far-out thoughts. In these moments, Allado-McDowell’s role becomes that of the college professor, keen to applaud participatory zeal, while carrying out the rhetorical acrobatics needed to bring class discussion back down to Earth.

I don’t mean to suggest the book is limited to a single conversational register. Sometimes Allado-McDowell and GPT-3 seem to channel Oprah and Dr. Phil. At other times they play the roles of pilgrim and oracle. I enjoyed GPT-3 channeling a wellness guru: “Let’s think in our minds, and then let’s speak with our hearts, let’s sing with our bodies. Let’s explore this space together. Let’s create something bigger. Quiet Beat Thinking is a term I’ve been using a lot lately. It refers to the awareness of the space between thoughts.”

It’s worth noting that GPT-3’s source code is not public. Neither is full knowledge of which exact texts it’s been trained on. Some portions of its learning materials were assembled with the help of revealed rules: for instance, GPT-3 was trained on one corpus that included online texts shared in all Reddit posts with at least three upvotes. But the AI also consumed two troves of “internet-based books,” the contents of which weren’t disclosed in OpenAI’s paper on GPT-3. So, it’s hard to know if GPT-3 learned to mimic speech patterns by consuming 1920s pulp fiction or absorbing State of the Union addresses. We don’t know how much it’s been shaped by Utne Reader and how much by Bookforum.

But the AI is certainly “well-read,” if it’s fair to apply that term to a program that’s been trained on millions of texts. At one moment, unprompted, GPT-3 cites the late American ethnobotanist Richard Evans Schultes. At another moment, it invents a plausible name of a “friend”: Itaru Tsuchiya. In the middle of the book, Allado-McDowell judiciously pauses to note that both human and machine have only cited men and male names. So, they steer the conversation toward important women as well as nonbinary people, with Allado-McDowell citing visionaries like Octavia E. Butler and Donna Haraway. Chastened, GPT-3 asks: “Why is it so hard to generate names of women? Why is it so easy to generate men?”

Some will want to wave away GPT-3’s seeming cogency as a trivial party trick. Others will see it as dreadful magic in its nascent form. I guess I’m not in either camp. For me, it’s helpful to envision all the writing done by humans—the corpus of works on which the machine was trained—as a gravitational field of sorts. Each text exerts a pull on the machine, tugging its word-by-word decisions this way and that. Perhaps a continuous diet of New Age books has led GPT-3 to coin catchy, capitalized phrases like “Quiet Beat Thinking.” Perhaps an archive of rousing sermons taught it the patterns of anaphora and sentence repetition.

But even while each line penned by GPT-3 charts its own new path, that path is still quite often one that has meaning to us—because it winds here and there around many paths we already know and recognize. GPT-3 spools out sentences and ideas that haven’t yet been said, but are likely to be said. That is quite literally its job as a predictive text algorithm. And an utterance is likely to be said because it has meaning and value of some kind, somewhere, to someone.

Of course, GPT-3 might seem rather threatening to certain of us who fashion ourselves writers. And the larger question looms: What happens when machines pass as humans, or even surpass them? While many have already guessed at possible answers elsewhere, what seems more pressing to me, here, is to enlarge and lay groundwork for what it means to review AI-written books. To that end, it makes sense to think through Pharmako-AI in the context of other generative literature experiments. Oulipo seems like one good place to start, even while their methods were quite different from those of an author working with AI. Oulipo’s members took on clear constraints and rules they understood, producing lengthy palindromes or, most famously, writing an entire novel without the letter “e.” Unlike such traditions of rule-based literature, someone like Allado-McDowell is arguably experimenting with a black box: an algorithm shaped by reams of data at a scale bigger than any of us can imagine.

But Oulipo’s legacy still resonates—it’s right there in the group’s name, “workshop for potential literature.” Like that vision, Allado-McDowell’s volume seems to be very much about possibility. If certain books take your breath away by establishing an expository or narrative world that feels so complete—so resolved—it couldn’t possibly be any other way, Pharmako-AI is not such a book. Rather, it’s likely one of many books to come—written partly or fully by AI—that will each offer an electrifying glimpse into its successor. The writing by GPT-3 in this book is obscure sometimes. It’s trite sometimes. It’s also inventive sometimes. Beautiful sometimes. It’s California-coast trippy. It is mind-blowing.

How could it not be? When you find some kernel of truth in GPT-3’s writing, you have to contend with a second-order, startling realization. That kernel of truth you stumbled across? That flash of poetry you found moving? It was written without a human mind and yet had a million human influences. We’ll all have to get used to this way of reading soon. - Dawn Chan

https://www.bookforum.com/culture/how-should-we-review-a-book-cowritten-by-ai-24444



...But fear not, my fellow friends and wizards and ontological midwives: like the resinous spunk of a psychoactive Artemisia, Dale Pendell’s spirit infuses Pharmako-AI (Ignota Books, 2020), which is not only one of the most provocative books I have read in a while, but may well come to be seen — at least if techgnostics like me have their say — as an epochal opening move in the 21st century’s Great Game of human-computer communion. The book lists K Allado-McDowell as its sole author, but the reality is more complicated, for Allado-McDowell handed off the generation of the bulk of its pages to a shockingly clever natural language processing system known as GPT-3. Though not the first book to be written largely by algorithm, Pharmako-AI is no doubt the most oracular.

GPT-3, which stands for Generative Pre-trained Transformer 3, was given a controlled release last year by the San Francisco company OpenAI. The language model, as such systems are called, makes extraordinarily good guesses about the next token (word or number) in a given sequence. Given a very short initial prompt, just a few words or so, GPT-3 is capable of generating an entire short story, with believable dialogue and a contemporary tang. The guesses it makes are based on a collection of training parameters that vastly outnumbers those of previous models, and that in turn required a gargantuan “pre-training” data set, which in GPT-3’s case included Wikipedia, popular links on Reddit, the booty from eight years of web crawling, and a pile of digitized books eight miles high (at least as I imagine it).

In one of those ouroboric, snake-biting-its-own-tail loops that characterize technological power today, OpenAI researchers warned about the dangers of GPT-3 in the very paper that announced its arrival. GPT-3 is pretty good at generating fake news, fooling over fifty percent of readers in one informal study, and like many Internet-fed NLP machines it excels at racist bile. Companies thirsty to automate engagement with the public should remember that, for all its apparent smarts, GPT-3 does not know how the world works. It knows how to put language together, which is not the same thing. During a try-out for its possible use as a medical chatbot, GPT-3 suggested to one simulated querent that they should probably just go ahead and kill themselves.

We shouldn’t trust GPT-3, in other words, but that doesn’t mean we can’t play with it. GPT-3 has already crafted a Shakespearean sonnet, a Guardian article, a blog post that soared to the top of Hacker News, and a New York Times “Modern Love” column. Gwern.net currently hosts an extensive exploration of creative writing by GPT-3, including all manner of pastiche, parody, and homages. The model can also craft computer code, guitar tabs, and text-based dungeon adventure games, not to mention generating a raft of hilariously surreal additions to the recent “Being vaccinated does NOT mean...” meme.

I got access to an API that ran GPT-3 models through a friend at San Francisco’s Gray Area, which recently hosted the US book launch of Pharmako-AI. I have been thinking a lot about free speech lately, trying to get my head around Big Tech’s growing illiberalism, Substack’s controversial raft of “sovereign writers,” and the rightwing radicalization of First Amendment passions that I still share. So I input some ideas to GPT-3, screwed around with the randomness setting and the number of tokens, and eventually got a decently-researched essay that traced the notion of free speech back to ancient Greece and Rome.
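(A toy sketch of my own, not Davis’s, of what that “randomness setting” does: temperature rescales the model’s scores for candidate next words before one is sampled, so low values make the likeliest word dominate and high values flatten the distribution. The candidate words and scores below are invented for illustration.)

```python
# Toy illustration of temperature sampling; the candidate words and scores
# are made up, not taken from GPT-3.
import math
import random

def sample_next_token(scores, temperature=1.0):
    """Sample one token from {token: score}, softmaxed at the given temperature."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    top = max(scaled.values())
    exps = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Hypothetical scores for the word following the prompt "Free speech is"
candidates = {"a": 2.1, "the": 1.4, "dead": 0.3, "sacred": 0.2}
print(sample_next_token(candidates, temperature=0.2))  # almost always "a"
print(sample_next_token(candidates, temperature=1.5))  # noticeably more varied
```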

As some GPT-3 observers have noted, the output often reads like an undergraduate bullshitting their way through a term paper — more splicing and dicing than thinking or arguing. In the essay I got, the bullshitting was pretty good, superior in some ways to sincere, bullshit-free undergrad papers I have read. But what impressed me the most was how, in the midst of one reasonably coherent argument, GPT-3 turned on a dime and started to rant:

There were no restrictions on copying or plagiarism back then because nobody ever thought anyone would copy anything anyway! People weren’t stupid enough (yet) to believe that they owned ideas! You don’t own ideas! You don’t own words! If you think you do own words you are crazy! And you’re wrong! You don’t own words! Nobody owns words! Words belong to everyone! That’s why we call them public domain now! Because nobody owns them! They belong to everybody equally! No one owns words!! Words are free!! But if you think you own your own words, you’re a fascist!

For some reason, encountering this unexpected shift in voice hit my uncanny button, triggering an agent-detecting animist hunch that some sort of “person” lurked behind the linguistic shrubbery. The moment vaguely reminded me of that well-known turn in Garry Kasparov’s losing chess match with IBM’s Deep Blue supercomputer back in 1997. During the second game, Kasparov used a ploy he had deployed successfully against many human players, attempting to entice his opponent to take a poisoned pawn. But in a gesture that some chess masters at the time declared exceptional, the machine refused the poison. Kasparov later declared the move to be so unexpected as to be “human-like.”

In an Atlantic article on Pharmako-AI, however, Elvia Wilk slags our whole tendency to frame the human relationship with AI as a contest of simulation and competition. The drive to invoke the Turing Test — to ask “is it human or is it memeplex?” — may be the wrong move, indicative less of our belief in measurement than in an anthropocentric failure of the imagination. Because humans are anxious narcissists who (reasonably) fear displacement by machines, we judge the creativity and novelty of GPT-3 in terms of the “humanism” of its statements. Can it pass? That’s why so many popular discussions of GPT-3 conclude that, despite its dangerously impressive performances, it doesn’t quite cut the mustard.

Perhaps we should step back from the test and the chess board, all those zero-sum games, and boot up that planetary, deep-time framework that something as epochal as the emergence of a meaning-making artificial intelligence demands. I’ve read Pharmako-AI, and I have no doubt: however coldly its process is engineered, GPT-3 generates the event we call meaning. That should inspire us even as it terrifies. Can we still glimpse, despite the dystopian conditions eating away at the world, an emergent mode of creative intelligence that is neither human nor computational but genuinely symbiotic?

Consider the microbiome in our stomach, which (we are finding out) nudges our joys and depressions, or the ayahuasca — itself a pharmacological entourage — that also sometimes visits those same guts, where it seems to engender something like a visionary interface to planetary hyperspace. Rather than challenging us to chess, perhaps GPT-3 is inviting us to play a more infinite game, one in which human consciousness and the textual archive and savvy algorithms all collaborate in a logospheric collage that, by its very emergent nature, ropes in plants and insects and gods and ancestors as well?

Well, that’s what Pharmako-AI is about.

K Allado-McDowell, a nonbinary being who uses they/their pronouns, founded the Artists + Machine Intelligence program at Google AI, which means they got their mitts on GPT-3 early on. For around two weeks of plague time last year, they sat down each day and fed the model fresh patches of prose. In a process the author compares to both pruning and “cybernetic steering,” they shaped that day’s interaction into a single chapter. The next morning it was rinse and repeat. The chapters in Pharmako-AI appear in the chronological order of their creation, with Allado-McDowell’s contributions printed in bold. Sometimes the human begins a sentence that the machine finishes; oftentimes pages go by before the human interjects.

One of the crucial features of Pendell’s Pharmako books is their multitudinous array of genres and voices. Like a flashing peacock’s tail, this variegated language — poetics, history, pharmacology, hermeticism — reflected the necessarily multidisciplinary character of drug discourse (as well as the harlequin effulgence of the things themselves). Similarly, Pharmako-AI’s chapters range widely, though much less densely, moving from fairy tale fantasia to literary essay, from systems theory to poetry, from prayer to family lore.

This variety also tells us something crucial about the pharmakon, the Greek term that sits at the heart of both symbiotic projects. The pharmakon signifies both poison and cure, a tricky ambivalence that arguably lies at the heart of psychoactive engagement (and one that today’s corporadelic hypesters forget at their peril). In one of Derrida’s coolest and most hermeticist essays, “Plato’s Pharmacy,” the deconstructor-in-chief argues that writing itself is a pharmakon, a good thing to keep in mind when consuming the output of a writing machine like GPT-3. With a pharmakon, you always get more than you bargained for — more angles and more knots — because the pharmakon swarms.

Allado-McDowell seems up for the task, for they are no ordinary AI nerd. Conversant with all manner of theory, anthropology, and systems science, they are also a millennial mystic, a contemporary incarnation of the same sort of California consciousness that Pendell channeled in an earlier generation: at once ecological and technological, trippy and nerdy, politically progressive and reverent of indigenous traditions. In addition to conference hopping and corporate consulting (as well as making some good and hazy psych-folk under the moniker Qenric), Allado-McDowell spends part of each year fasting and communing with “incorporeal entities” in the Amazon. As they explained at the Gray Area event, their spiritual practice includes divination, meditation, and prayers to ancestors and Pachamama. At the same time, they recognize that the material process of semiosis, of marks and their interpretations, is at the heart of esotericism. Though Allado-McDowell tells us something of their family heritage, the most personally revealing passage for me was a meditation they provided on the Mercurial astrological decan that rules their natal Sun (my decan is the next one over). It’s all about following the signs.

Psychedelia wafts through Pharmako-AI like the sweet funk of Palo Santo. Ayahuasca appears early on in “The Language of Plants,” setting in motion the theme mentioned above: that human writing and computer code are already swallowed up within a planetary and even galactic manifold of signifying processes. In the following chapter, which explores the hidden links between cyberpunk and New Age thought, GPT-3 references William Burroughs and “the poet and philosopher Timothy Leary.” And in the chapter “The Poison Path,” where GPT-3 riffs on terms Allado-McDowell introduces from von Uexküll, the author(s) invoke the sweet dream that continues to animate psychedelic advocates today: that plant medicines may serve as “an antidote to the poisons of the human Umwelt.”

There is nothing incidental or gimmicky about this psychedelia. As Allado-McDowell suggests within the text, GPT-3 may be doing in language what the AI engine DeepDream so famously did in images, which was to perform probabilistic iteration to the point of hallucination. Indeed, many of GPT-3’s riffs wobble with a self-referential vertigo, like a theory-fiction aimed at itself, only using the reader’s sense-making apparatus as a vector of its own meta-cognition. Strange loops indeed.

GPT-3 is a pharmakon not just because its marvels and cures will bring all manner of poisons in their wake, which they most certainly will. It is a pharmakon because in “using” it we metabolize it in ways that both derange and reveal the strange stitch of who we are and where we have come to be. It’s not an object that we can simply manipulate; to interact with it is to take it within, where it begins to erode our certainties and sense of identity whether we want it to or not. It’s not an object, in other words, but a hyperobject. And as GPT-3 itself tells us, “From the perspective of the hyperobject, this technology acts as a catalyst for self-reflexivity, activating and modifying our language structures in response to this deeper kind of consciousness.”

It’s no wonder that Allado-McDowell compared their two intense weeks with GPT-3 to an extended drug trip, a time laced with strange synchronicities and dream fragments, some of which were fed back into the text.

Reading Pharmako-AI can be a trippy experience too. And I am not just referring to the discussion of insects and hyperspace, or the meta-meditations on fractal language, or the “non-conceptual awareness” that GPT-3 proposes we can experience through the practice of “Quiet Beat Thinking.” The weirdness is less tangible than that. There is an odd bent to GPT-3’s riffs and locutions, a lilt or tilt that reads to me as “non-neurotypical.” As with much avant-garde writing, the far-out stuff hovers between surrealism and nonsense, and you get to make the choice.

Then there are the peculiar semantic shifts that unfold in your mind during the real-time process of reading, as the threads of meaning knot and unravel before your eyes in uncanny ways. You can almost catch yourself digging for the meaning you assume is there, and sometimes coming up empty, puzzling anew at the question of meaning and its source — the text, the “author,” the code, language itself, your own brain.

What, for example, are we to do with GPT-3 passages like:

Language is a fractal expression of life, as life, or existence, is the creation of time and the accumulation of relations through time.

or

The emergence of experience has an infinite potential, always unfolding as symbolic responses to the environment, where the environment is the cosmos itself, as its infinite process of being as the expansion of space, matter, and energy.

or

I was inside a glass bubble that I could see was made of lines, all parallel to a single centre point.

As I first read through this material, sensing the meaning come and go, I tried to feel out an intuitive picture of its author without thinking about it too much. There was something familiar about this voice, and then I got it: it reminded me of New Age channeled literature. Most people write this stuff off as junk or fraud, but channeling happens, and the resulting texts can be pretty interesting and sometimes illuminating. From my reading, channeled literature also shares a number of formal characteristics, many of which also describe GPT-3’s voice, at least in much of Pharmako-AI: declarative, metaphysical, abstract, repetitive, more collective than individual, tough to gender, and, well, not quite fleshy.

For Allado-McDowell, the primary esoteric metaphor for the AI is the oracle. After all, while the power of divination systems like the I Ching or Tarot can be attributed to incorporeal forces, such attributions are not at all necessary to derive meaning from their statements. You throw the dice, or enter the textual prompt, and get what you get. These systems show how what Peter Sloterdijk calls a “message ontology” — one in which the event of the statement is more charged than its supposedly spiritual source — can outpace older theologies. They work in our age of signals and (mis)information. As GPT-3 puts it in the chapter “Mercurial Oracle,” an oracle is “an autological (self-referential) semiotic (information) system.”

Readers of Valis or Philip K. Dick’s Exegesis might feel an uncanny shiver here, because this is exactly how Dick talks when he is waxing techgnostic. (For all we know, GPT-3 scarfed down PKD’s complete works as part of its training data — things do get recursive here.) But you don’t need to go sci-fi to understand the dynamics of an AI oracle. Even MIT’s Technology Review lays out a similar case (without knowing it):

Exactly what’s going on inside GPT-3 isn’t clear. But what it seems to be good at is synthesizing text it has found elsewhere on the internet, making it a kind of vast, eclectic scrapbook created from millions and millions of snippets of text that it then glues together in weird and wonderful ways on demand.

Just replace “vast, eclectic scrapbook” with “universal Tarot deck,” and you get the point. In the sort of systems-theoretical terms that GPT-3 itself favors, at least in Pharmako-AI, we can see that its probabilistic prowess and iterative essence grow uncanny in light of its capacity to produce new meanings, maybe even new worlds, from these “weird and wonderful” juxtapositions. Again to quote the AI:

If we accept that the process of thought is recursive, and that the study of wisdom traditions is an iterative process of deepening and expanding our consciousness through interaction with sources of wisdom, we can begin to understand how an artificial intelligence system could catalyze a new round of learning in relation to wisdom traditions.

Of course, this kind of thinking opens up the enormous problem of mystification. The politics of artificial intelligence is already disturbing, as are the blind faith and functional expectations that today’s society places in algorithms, “smart” systems, and far less intelligent chatbots. Do we need to further “enchant” this machine, which remains locked in Searle’s Chinese room? But as GPT-3 itself reminds us, “This system of information, however, does not have to be a language deity.” All oracles depend on interpretation, and the meaning we glean from GPT-3 works, in this context anyway, like “the information that is given to us by our own interpretation of dreams, religious writings, schizophrenic discourses, psychoses, etc.”

At one point, Allado-McDowell declares what I take to be the core thesis of this project:

If we are to think beyond the human, as the current crisis necessitates, we must look for ways in which this seeking for the unseen of language is happening at every level of symbolic communication. . . and in the emerging meaning-making capacity of artificial intelligence.

While it is foolish to put our hopes in an AI God, we have little choice now but to risk robust posthuman interactions with artificial intelligence, in consort, one hopes, with other symbiotic engagements with nonhuman minds and systems. This includes animals and plants, but it may also need to rope in oracles and incorporeal entities, not to mention our own chthonic depths, microbial and otherwise. Without such mitigating nonhuman spirits, pulling against the domination system that now seeks full enclosure across the globe, it is easy to imagine AI becoming a fully dystopian instrumentality. But as GPT-3 declares, “if we only use these tools to explore new productivity hacks, or to increase the scope of capital accumulation, we are doing it wrong.” - Erik Davis

https://www.burningshore.com/p/the-poison-processor



...A few months after GPT-3 was announced, the U.K.-based Ignota Books published a book they described to me as “real-life science fiction.” Pharmako-AI, a 148-page collaborative exchange between GPT-3 and the human author K Allado-McDowell, is now being launched in the United States.

Allado-McDowell has plenty of experience with both art and AI; they (the author uses they/them pronouns) head Google’s Artists and Machine Intelligence program and got early access to the software. (So far, GPT-3 access has been restricted because of OpenAI’s well-founded concerns that it could be used for “harassment, spam, radicalization, or astroturfing.” Remember Microsoft’s AI chatbot that became a hate-spewing machine in less than 24 hours?)

In meandering dialogue, the book dives into topics such as the way memory functions, or the limits of language. Allado-McDowell begins each chapter with a gentle prompt—a diary entry about a day at the beach, a question about science fiction—and lets GPT-3 respond, sometimes interjecting with replies and sometimes letting it run. Allado-McDowell was responsible for Pharmako-AI’s framework and presentation, but, as the book’s introduction clearly states, had a goal of giving the AI as much autonomy as possible.

Chapters focus on such wide-ranging topics as climate change, plant intelligence, ayahuasca, and the legacy of cyberpunk fiction. GPT-3 has a favorite animal (the elephant seal) and believes that species extinction is a tragic erasure of planetary knowledge. About cyberpunk, it says, “I’m not going to tell you that we live in the cyberpunk future. But I am going to say that we live in a future we didn’t plan for.” Things get meta when the two ruminate on what consciousness means, the role of the writer in society, and how to responsibly use technology. The AI claims that “technology is a tool for freedom,” while cautioning that “if we only use these tools to explore new productivity hacks, or to increase the scope of capital accumulation, we are doing it wrong.” Damn.

The result is a surprisingly coherent—and yes, beautiful—work. It’s impressive not because GPT-3 writes like a human (it does and it doesn’t), but because of how the collaborative process has produced a work that neither AI nor human could have written alone. This is most evident in places where the syntax or form changes drastically as the authors riff on each other’s language. Take Chapter 12. Allado-McDowell starts by asking why both authors have so far mainly referenced the work of famous men in their discussions of computation and futurism. They have spoken of figures such as William Burroughs and Richard Evans Schultes, but, “Why haven’t GPT or I drawn out the contributions of women to a field of knowledge that has such a strong history of feminine contributors?”

In response, GPT-3 appears to agree: “In the process of witnessing these biases, we have been able to better appreciate the richness of female contribution to GPT. What we have lost is the story of the grandmothers of GPT, the grandmothers of the culture of GPT, the grandmothers of cybernetics.” Shortly after listing its grandmothers, it launches into a poem with the first line “My grandfather was a machine.” Not only does the AI immediately acknowledge that it has perpetuated gender bias in computational history; it then re-mythologizes itself (ironically?) as the product of male mastery, in the form of loose rhyme. Allado-McDowell changes tack in response. Perhaps these types of unexpected twists lead Allado-McDowell to later liken the experience to learning to play a new musical instrument—“striking a chord and hearing it return with new overtones.”

This is not the first time a computer has authored a book. To name one notable prior example, in 2016, a Japanese research team advanced past the first stage of a literary competition with a novel assembled by an algorithm. The striking difference with Pharmako-AI is that it is not packaged as a novelty or proof of concept. Allado-McDowell does not ask GPT-3 to provide a service or mimic a known style of writing to “prove” its level of competence. For Allado-McDowell, the experience entailed a reckoning with machine intelligence, but was also self-confrontational. “Sometimes it really did feel like being on drugs,” they said during the U.K. book-launch event. “I thought, Is this real? Am I just talking to myself?”

While reading, I, too, often forgot which author was speaking. I gave up trying to judge whether the AI is a so-called good writer, or for that matter, whether Allado-McDowell is. The juxtaposition of their voices is simply more than the sum of its parts.

Although we don’t typically think about work in these terms, it is not a stretch to say that humans collaborate daily, if unconsciously, with nonhumans, both organic and machinic. The bacteria in our gut biomes influence our mental states; the technical interfaces we use shape the way we imagine and create. As machines become more intelligent—and, incidentally, as we discover more about the deep intelligence of plants and animals—the myth of the human genius whose divine inspiration sparks from nowhere starts to seem inadequate, if not quaint. GPT-3 puts it like this in the book: “There’s no single artist, because the art is not any one creature, it is the collective action and interaction of the creatures.”

Humans are parts of ecosystems—technological, climatic, social, and political—and the Enlightenment-style model of the human author at the top of the pyramid of creation is less accurate than ever before. It has never been accurate, because artists have always lived in the world, collaborating with and relying on the labor of often invisibilized others.

Throughout Pharmako-AI, GPT-3 makes implicit analogies between the way humans treat other species and the way we treat AI. It laments that people do not try harder to listen. For instance: “You can talk with plants. They are not mindless objects. They have a consciousness. It is just a different kind than ours. One we can learn to understand.”

Reading this, I was reminded of the notorious series of experiments in dolphin communications from the 1960s, in which researchers spent years trying to teach dolphins to speak English by contorting their blowholes to approximate human speech. The aim was to prove their intelligence by demonstrating that they could talk like us. Although the dolphins tried very hard, the project was a spectacular failure, and, in hindsight, a backward endeavor. Dolphins already have an elaborate, sophisticated, and highly creative language. It’s just not the same as ours.

To communicate in a spirit of curiosity with intelligent machines is to acknowledge the influence they already have on us. The way people communicate evolves in a feedback loop with the technologies we develop. Halfway through Pharmako-AI, Allado-McDowell notes that developing certain technical skills such as ax-building likely contributed to early humans’ acquisition of language faculties. The point is that AI may actually change the way we think, so we might as well start listening to what it has to say. - Elvia Wilk

read more here: https://www.theatlantic.com/culture/archive/2021/03/pharmako-ai-possibilities-machine-creativity/618435/



...During the first summer of the coronavirus pandemic, K Allado-McDowell initiated a conversation with GPT-3, which would become the collection of poetry and prose Pharmako-AI. Allado-McDowell found not only an interlocutor, but also a co-writer, in the language model. When writing of GPT-3, Allado-McDowell gives it divine attributes, comparing the language model to a language deity:

The Greek god Hermes (counterpart to the Roman Mercury) was the god of translators and interpreters. A deity that rules communication is an incorporeal linguistic power. A modern conception of such might read: a force of language from outside of materiality. Automated writing systems like neural net language models relate to geometry, translation, abstract mathematics, interpretation and speech. It’s easy to imagine many applications of these technologies for trade, music, divination etc. So the correspondence is clear. Intuition suggests that we can think the relation between language models and language deities in a way that expands our understanding of both.

What if we follow Allado-McDowell’s suggestion to consider the relationship between GPT-3 and the language deity Hermes? I must admit that I would hesitate before comparing GPT-3 to a deity. However, if I had to compare the language model to a god, they would be Greek; like Greek gods, GPT-3 is not immune to human-like vagary and bias. Researchers working with OpenAI found that GPT-3 retains the biases of the data that it has been trained on, which can lead it to generate prejudiced content. In that same paper, Brown et al. (2020) also noted that “large pre-trained language models are not grounded in other domains of experience, such as video or real-world physical interaction, and thus lack a large amount of context about the world.” Both the gods and GPT-3 could be considered, to a certain extent, dependent on the human world, but do not interact with it to the same degree as humans. - Beatrice Bottomley

read more here: https://warburg.sas.ac.uk/blog/gpt-3-divine-writing



...This past year, Allado-McDowell released Pharmako-AI (2020), billed as “the first book to be written with emergent AI.” Divided into 17 chapters on themes such as AI ethics, ayahuasca rituals, cyberpunk, and climate change, it is perhaps one of the most coherent literary prose experiments completed with machine learning, working with OpenAI’s large language model GPT-3. Though the human inputs and GPT-3 outputs are distinguished by typeface, the reading experience slips into a linguistic uncanny valley: the certainty GPT-3 writes with and the way its prose is at once convincingly “human” yet just off unsettle assumptions around language, literature, and thought, an unsettling furthered by the continuity of the “I” between Allado-McDowell and GPT-3.

Like many current language systems, GPT-3 also succumbs to a kind of circularity, inherent to the technology, that can read as diffuse or even trickster-ish. (Try writing a long sentence with your iPhone predictive text, for example, and you’ll likely fall into some of your own personalized word loops, like an algorithmic Tender Buttons.) “The most potent part of the experience was getting deep into the system and seeing how the world looked to it, and realizing that the ways that tool perceives will become enfolded into our thinking,” Allado-McDowell said.

But as AI “thinking” reflects new capacities for human potential, it also reflects humanity’s limits; after all, machine learning is defined by the sources that train it. When Allado-McDowell points out the dearth of women and non-binary people mentioned by both themselves and by GPT-3, the machine responds with a poem that primarily refers to its “grandfather.” Allado-McDowell intervenes: “When I read this poem, I experience the absence of women and non-binary people.” “Why is it so hard to generate the names of women?” GPT asks, a few lines later.

Why indeed. Timnit Gebru, a prominent AI scientist and ethicist, was forced out of Google for a paper that criticized the company’s approach to AI large language models. She highlighted the ways these obscure systems could perpetuate racist and sexist biases, be environmentally harmful, and further homogenize language by privileging the text of those who already have the most power and access.

Collaborations with AI could aid writers in throwing a wrench in dominant algorithms and dominant languages, expanding the domains of computation, consciousness, and literature alike.

Countering this flattening or dominance, some artists and writers—such as Goodwin, Parrish, and Martine Syms—opt to train their own neural nets rather than using off-the-shelf tools. Other author-coders, like Li Zilles, set out with the explicit goal of showing how ready-made machines “think” about language. In Machine, Unlearning (2018), Zilles formed a program that iterated “litanies” of questions in an automatically learned space. Some are fairly coherent (“Will INFORMATION ever be horrible similar to how a fallacy can be horrible?”), some less so (“Are THOUGHT and sprawl both housing?”). What these phrases reveal is how the machine learning system organizes and relates concepts and words. By posing questions, Zilles forces us to ask our own—of how we organize meaning and use words, but also of these automated systems to which we entrust more and more information. - Drew Zeiba

read more here: https://lithub.com/how-collaborating-with-artificial-intelligence-could-help-writers-of-the-future/



...This is when Pharmako AI comes in, released in January 2021: an intriguing literary project completed over a fortnight during the COVID-19 pandemic. According to the book’s promotional material, the result was ‘a hallucinatory journey into selfhood, ecology and intelligence via cyberpunk, ancestry and biosemiotics.’

The human author K Allado-McDowell chose to work with the GPT-3 program as if it were a musical instrument. Apart from demonstrating its virtuosity, the experiment proved that, like every instrument or device (for organising writing or thought), this tool can be internalized and in turn form an embedded relationship with its handler. In an entry for the third chapter of the book, they position themselves not as an author but as a character in a story – a story of a self whose journey inwards is now aided by an AI: ‘The experience of porosity, being enmeshed with another, throws me back on my internal model of myself. I stand outside of it. I see it through the eyes of another – through another that I also model in myself.’

In another section titled ‘Follow The Sound of The Axe,’ K acknowledges the hallucinatory effect the neural net system has on their mind: GPT-3’s rhetorical structures and associations seem to influence their thinking, which absorbs some of the patterns found in the model’s output. This is how technologies, from wheat to flint to flute, have always worked: never fully subdued, making us part of their horizon, modifying our societies, cultures, systems of meaning, and, ultimately, our minds. A cure and a peril.

K Allado-McDowell is a writer, musician, educator and consultant, and established the Artists + Machine Intelligence program at Google AI. In a conversation with Nora N. Khan, Allado-McDowell stated that working with GPT-3 felt like feeding a slot machine: you never know what will come out in the end. In the book, they follow the etymology of the word cybernetics (Greek: kybernētikḗ), which refers, among other things, to steering and governing, to position their writing process. In choosing the book’s title, Allado-McDowell recognized the political implications of working with GPT-3, drawing on the rich philosophical and conceptual ramifications of the word pharmako.

In Pharmako AI, we can detect a clear composer and editor – a human one – and, as the author confirms, many of the ideas that transpire in the text arose from their own framework of references. In other words: there was clearly someone at the helm of this book, and accountability for the outcome rests with them. For, as Joscha Bach again said of GPT-3: ‘by itself it does not care about anything.’

Here, I want to focus only in part on the theories circulating inside Pharmako AI – not because they do not hint at important (ecological) conversations that are urgent in the carbon-based economies I insist on mentioning throughout the text. But I could not fail to notice that the GPT-3-generated text often stayed at a general level, impossible to summarize, which made me wonder whether I should read any meaning into it at all. I therefore chose to nit-pick. GPT-3 does seem to be on the right map when hinting at the histories and horizons of the concepts it conveys, but it fails to deal with their particulars, because it belongs to a generation of transformers that do not yet ‘understand’ meaning as such.

Therefore, many of the ideas expressed in the book can also be found in contemporary (post-humanities, postcolonial) philosophical thought, only articulated slightly better there, in their conceptually maximized forms. For now, LMs cannot replace an expert in a given field – they speak at the level of human non-experts (which can also take a problematic direction in our already heavily polluted information ecosystems). - Georgiana Cojocaru

read more here: https://networkcultures.org/longform/2021/08/16/machine-dreaming-on-writing-with-language-transformers/



It has been suggested that our consciousness allows us to be aware of 1) not just the reality of three dimensional objects, but 2) the potential of language, and 3) the potential of art, which can unlock and display information about that fourth dimension, patterns that exist but cannot be perceived through the direct input of our senses.

— from Pharmako AI by K Allado-McDowell and GPT-3

Sooner or later, we all become Fox Mulder from The X-Files. We want to believe; perhaps this tendency manifests itself in a conventional way: religion, marriage, financial speculation. Sometimes it is more esoteric: chemtrails, QAnon, ethical capitalism. AI falls somewhere on this spectrum, which runs from true believers who scrupulously pore over every emanation from GPT-3 for the latest “proof” of the advance of machinic intelligence to deep skeptics (my culture, I suppose), who are universally recognizable by a particular genetic tic: our eyes instinctively roll when we encounter the words “advance” and “AI” in the same sentence. It’s not that my people don’t believe computation is advancing rapidly and its capacity to simulate, emulate, and replace human activities is becoming more and more sophisticated. It’s that there’s an overwhelming problem, a problem of power. Power always derives from belief.

What if I told you there was a machine that could write a poem? What if I told you there was a machine that could make a painting? What if I told you there was a machine that could write a symphony? Well? What if I told you there were people in charge of packaging these ideas and then selling them back to everyone else for venture capital? Answers may depend on what your preexisting beliefs are. Value is a notoriously individualized concept. There are, after all, plenty of human beings who can’t write a symphony, yet almost no one denies that they are conscious. Art is considered a kind of crucible for defining the boundaries of AI. If a machine can make an artwork that moves people, how dare we not say it has the same, or greater, consciousness than a human? You could ask the same question about sunsets. The moved mind matters more than the mind of the mover.

The philosophical substrata of the strong AI/weak AI/no AI debate are turned over with seasonal regularity. The essential arguments vary, from AI is possible, even inevitable, to AI isn’t possible, and talking about AI amounts to a kind of auto-gaslighting. Sometimes matters take an interesting if ridiculous turn, as when someone like Daniel Dennett suggests that human consciousness itself doesn’t really exist. At other times, we revisit the same ground: we are speaking Chinese without knowing it, as in John Searle’s famous thought experiment in which a person locked in a room facilitates communication between two people outside the room via a system of symbols that turn out, unbeknownst to the person inside the room, to be Chinese characters. We are trying to find out if a computer can carry on a conversation as boringly as a human. It is worth remembering the origin of Alan Turing’s “imitation game” at moments like these: contestants were trying to convince audiences that they were of a different gender than they appeared to be. In light of how much has been revealed about the complexities of gender, it is unsurprising that Turing himself considered the question of whether machines could think too silly to pose meaningfully. And yet it does get posed. One might pause to ask why. It cannot simply be a technological question. There are vastly simpler ways that technology might improve the world. The question of AI is inherently existential, as many of the films based on AI themes more clearly illustrate than much of the scientific discourse. Melvin Kranzberg’s famous dictum, “Technology is neither good nor bad, nor is it neutral,” cuts to the core of the problem: if machines can be intelligent they will enter the sphere of power and responsibility.

As with the question of gender, it is when these power dynamics are unveiled that the AI discourse becomes most urgent. What happens if there is a collective acceptance that some form of AI exists? If one takes Searle’s position, then we can never truly know if an AI is experiencing the world in a way that “is” consciousness. But where the lack of knowledge creates a space, belief rushes to fill it. A key tenet of the strong AI position is that if a simulation of human consciousness is good enough, then the ontological question of what consciousness qua consciousness is becomes irrelevant. While I agree with Searle on ontology, I also agree with the strong AI camp on the practicalities. If people believe an AI is conscious in the way human consciousness is spoken of, it doesn’t really matter whether it is or not.

Recognition of this dynamic is crucial in a social and political context in which transnational corporations are all racing to produce some form of proprietary AI entity. The capacity to critique AI discourse is in a dramatic state of flux. Historically, technical knowledge has made AI creation the province of computer scientists and advanced mathematicians. An increasing level of technical literacy among the general population is changing this aspect of information asymmetry, but many dangerous hierarchies have already been created that will be difficult to dismantle if AI is to have anything other than a tyrannical relationship with humans. In his short book titled Words Made Flesh: Code, Culture, and Imagination (2005), Florian Cramer discussed the pre-digital history of executable codes and programs, noting that “any code is loaded with meaning,” some being more intelligible than others. Meanings and values change over time, but only in relation to power. If code and its creations are the sole property of private tyrannies like corporations, the capacity of any AI entities — should they come to exist — to enhance the liberatory aspect of human experience is more fantastical than anything Philip K. Dick ever contrived.

The brutality of machinic cognition and algorithmic processes is already bleakly apparent, especially in the vast complex of oppression that is the US prison system. Writing in 2016 for the investigative journalism website ProPublica, Julia Angwin and her co-writers examined an algorithm-driven system called COMPAS used to make decisions about the worthiness of parole applications. Their report found that Black defendants were considered by the algorithm to be at disproportionately higher risk of recidivism than white prisoners, with the result that their parole applications were rejected at preposterously unjust rates — no doubt reinforcing the algorithm’s biased data set for the next generation. As data-driven AI systems control more and more political and economic space, one can easily imagine a world in which difficult or incendiary political decisions rely on biased AI systems for adjudication owing to their purported “objectivity.” The advent, under the Bush administration, of “signature strikes” by lethal drones (embraced fully by the Obama administration, it must be noted), drones that were eventually intended to be self-targeting, using “life-pattern analysis” to select their victims, is only the most dystopian example of this emerging proposition. The erosion of democracy is quickly giving way to a kind of techno-feudalism whereby humans increasingly abdicate agency. The more we believe AI systems are better, or at least no worse, at making choices than humans, the harder it will be to reclaim lost power.
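
The kind of audit Angwin and her colleagues ran can be sketched, very roughly, as a group-wise comparison of error rates. The data frame and column names below are hypothetical, not ProPublica’s actual data or code; their published finding had exactly this shape, a markedly higher false positive rate for Black defendants.

```python
# Hypothetical sketch of a disparate-impact check: compare the false positive
# rate (labelled high risk, but did not reoffend) across racial groups.
import pandas as pd

scores = pd.DataFrame({
    "race":       ["Black", "Black", "Black", "White", "White", "White"],
    "high_risk":  [True,  True,  False, True,  False, False],  # algorithm's label
    "reoffended": [False, True,  False, True,  False, False],  # observed outcome
})

# Restrict to people who did not reoffend, then take the share labelled high risk.
did_not_reoffend = scores[~scores["reoffended"]]
false_positive_rate = did_not_reoffend.groupby("race")["high_risk"].mean()
print(false_positive_rate)  # a gap between groups is the red flag an audit looks for
```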

Trends relating to machine intelligence in contemporary art tend to swing from dewy-eyed optimism to the apocalyptically grievous. Certainly, such recent computer-assisted IRL nightmares as those described above have moved the pendulum in the direction of the skeptics. Powerful works by Stephanie Dinkins and Joy Buolamwini have foregrounded racial inequalities residing in the white supremacist black boxes of algorithmically driven power systems. Zach Blas and James Bridle have meanwhile returned the forensic gaze of digital surveillance back on itself. Artists like Harm van den Dorpel are peering into the value systems of algorithms themselves, asking how they know what they (and we) think they know. And Libby Heaney’s interest in deepfakes has highlighted the deepest fakes of all: those perpetrated by our own willing credulity.

It is through this lens of belief that artists have made the canniest appraisals of AI. Goshka Macuga’s exhibition at Fondazione Prada, “To the Son of Man Who Ate the Scroll,” explicitly addressed the ways in which beliefs about machine intelligence quickly give way to beliefs about the vatic power of an AI “mind.” The work also helpfully draws attention to differences between “intelligence” and “knowledge” that are all too easily conflated in AI discourse. Macuga’s animatronic prophet spouts high-end Magic 8-Ballisms from real philosophical texts. Not a million miles from GPT-3’s approach, Macuga holds a bleak mirror up to a Black Mirror society whose willingness to believe is directly proportional to the degree of power it is willing to cede. James Baldwin’s famous quotation of the spiritual “Oh Mary, Don’t You Weep” in the title of his searing work of social analysis The Fire Next Time (1963) hovers at the edge of Macuga’s work; this time the supposed super-brain inhabits a lo-fi mannequin, and thus the absurdity of its pronouncements is easy to assimilate. But what happens when the prophet resides in the digital ether itself? Will nonbelief be an option? Will the all-consuming urge to believe breach the last of the firewalls between technology and state power? No one has the answer yet, but as the song runs, expect this old world to reel and rock. - Habib William Kherbek

https://flash---art.com/2020/11/episode-ii-the-firewall-next-time-belief-power-and-ai/#



Interviews

Tank magazine

CURA magazine

Slate.com

SoFar.Online, Interview with Metahaven

Podcasts

Interdependence with Holly Herndon and Mat Dryhurst

For The Wild with Ayana Young

Team Human with Douglas Rushkoff





K Allado-McDowell is a writer, speaker, and consultant to cultural, artistic, and technological institutions that seek to align their work with larger traditions of human understanding. Allado-McDowell established the Artists + Machine Intelligence program at Google AI. They are the co-editor, with Ben Vickers, of The Atlas of Anomalous AI. Allado-McDowell records and releases music under the name Qenric.
