AI, Anthropology, and Theology: God in the Machine
One of my headline-news discoveries this autumn has been that Lutherans are a pretty exciting bunch of people to hang out with. In late September I flew to Chicago, Bookslut’s center of gravity, to attend a conference at the Lutheran School of Theology, more specifically at its Zygon Center for Religion and Science. (There I gave a pre-conference public lecture, and with post-talk glow firmly in place, enjoyed meeting a few people who knew me not as anthropologist Barbara but as Bookslut author Barbara.)
The next two days were filled with talks by wondrously creative scientists (ranging from physicists to ecologists) and theologians. Yet as anyone knows who’s been hyped on a conference high, the real rush comes not during formal presentations but at meals or during walks when unexpected connections are forged. In this way I met Anne Foerst, who brings together theology and science in one person, and who rivets her conversational partners by offering verbal gems on topics as disparate as the nature of Jesus’s bodily processes and how an understanding of AI (artificial intelligence) might fight racism.
Years ago, Foerst came to the US to carry out archival research at Harvard Divinity School. Yet having studied computer science as well as theology in Germany, she felt a pull towards MIT too. Amazingly enough, she not only landed a position at Harvard but was also invited to join the MIT AI Lab. As Foerst recounts in God in the Machine, “It worked beautifully. I spent my days happily at MIT and often trotted over to HDS… at both schools I was initially perceived as an outsider. People at HDS found my fascination with high-tech somewhat strange. Most of them were slightly antitech and thought my quest to bring theology and AI together somewhat superfluous. At MIT, on the other hand, people were suspicious of the theologian in their midst. It probably took a year and many, many conversations until heads stopped turning toward me whenever the term evolution was mentioned.”
Foerst believes that people transform when they interact with AI creatures. During her talk at Zygon, she showed video clips of this process unfolding as people met Kismet, the robot star of God in the Machine. Created in the AI lab, Kismet was given cute features to attract our attention, and programmed with abilities to track movement, vary facial expressions in response to a speaker’s voice tone, and attend to objects jointly with a partner. As we watched the clips, we responded visibly and audibly when Kismet’s “baby face” would first look sad then perk up when a person stopped by the lab to chat pleasantly. Never mind that Foerst had hammered home a fundamental point to us: Kismet could not learn. Its every interaction started at zero. Kismet had no capacity for individual recognition, no memory, and thus, no history.
And yet, we still ooh’ed, aah’ed, and laughed as Kismet behaved with humans. In response to our response, Foerst talked about how humans co-create interactions, and how we change whenever we relate with a social partner -- human or not.
Only connect! I couldn’t listen fast enough. My work with African ape communication (recounted in The Dynamic Dance) had already led me to think hard about the boundary between humans and apes. I confessed to Anne my viscerally negative response when a philosopher wanted to refer to the supersmart bonobo Kanzi as “a person.” The philosopher’s reasoning hinged on Kanzi’s language abilities -- and have no doubt, these abilities are quite astonishing, expressed without any history of formal reward training or other conditioning (see www.greatapetrust.org). Anne bluntly told me I’d made a classic mistake: the conflation of humanness and personhood.
Anyone who tells me, in front of other scientists and in a thoroughly pleasant way, that I’m all wrong (and is right about it), makes academic life worthwhile. We gabbled through lunch about AI, apes, and personhood, and once home again, I rushed to buy God in the Machine.
One of life’s true pleasures is to be able to read a book with the timbre and cadence of the author’s voice still fresh in your ear. I read about Kismet and its implications for personhood and theology with a slightly Germanic accent and with much verve and authority. And at first, I forced my husband to endure a cascade of questions powered by Foerst’s experiences and musings.
§ You’re a Frankenstein fan; in the book and in every movie version, the monster turns mean. Why, do you think? What caused this violence: was it built in or did it develop from the monster’s experiences?
§ I know you love Jenna [the cat who adores my husband and shadows his every step as he moves around the house]. But why? Can you articulate what it is about this nonhuman creature that moves you to feel love for her?
§ Remember Deep Blue, the computer that beat Garry Kasparov, the human, at chess back in the ‘90s? Did you know that during the same period, no robot could carry out a simple human task like buttering a piece of bread?
Random as it may sound, this last point is crucial because it is rooted in AI’s history. Foerst explains: “AI was founded in the 1950s. The men who founded the field were sitting in the 8th-floor lounge at the MIT AI Lab. They were all mathematicians and physicists, all brilliant in their field. In short: they were nerds. All were male, all were white, all came either from Jewish or Christian traditions and most came from privileged families. This select group of people thought about what defines intelligence and what a machine should be able to do in order to be called intelligent… [It] should be able to deal with natural language and it should be able to play chess. It should be able to solve abstract problems and be capable of proving mathematical theorems.”
Kismet was different, crucially so; it was made to relate instead of to solve mental problems. As I read on, I stopped asking questions aloud; I was too busy examining my own assumptions and beliefs. Foerst feels strongly that human-robot interactions raise key questions for theology: “I personally think it is spiritual when I interact with Kismet and have emotional reactions. It is a moment where we can celebrate our capability to bond -- humans can be so wonderful in interactions.”
I became quite caught up in Foerst’s vision of personhood, which she says “can be understood as participating in the narrative processes of mutual storytelling about who each of us is. This participation can come in many forms.” Agreed so far. But then: “A baby, for instance, cannot actively participate, as she has no sense of self and therefore no narrative about who she is. But she participates passively because her parents interact with her constantly and create a story of her...” Here, my work with the dynamics of infant developmental processes tells me that infants (humans and apes too) are always active participants, able from the moment of birth to co-create emotion and meaning with their parents.
And then: “Is Kismet part of the community of persons? For those among us who delight in its resemblance to us and who understand some interactions with it as spiritual, it can become a person. Others who are afraid of Kismet-like technology would deny it.”
Here’s the crux of the whole book. Immediately it flashed into my head that Kismet is to AI as Kanzi is to ape studies. SAT-think aside, it’s an analogy worth exploring. After all, Kanzi’s language abilities led people to rethink what it means to be human. His language is embodied and co-created with his human partners; Kismet was wired to be embodied and to respond to human partners.
So if Kanzi makes us rethink personhood, does Kismet too? In the end, I cannot agree with Foerst that Kismet -- even when people interact with it, and tell narratives that include it -- could become a person.
Why do I feel this way? Is it because of fear? Foerst writes a lot about robot anxiety, an extreme worry that AI may someday take over our very world. Yet for an anthropologist, that’s not the primary reason to question inclusion of Kismet in a community of persons.
Simply put, Kismet does not “participate” in any sense of the word that has meaning for me. If Foerst’s “participation” stretches far enough to enfold a creature who has no individual recognition or memory, no learning, no response or emotion that isn’t wired in -- and who never had these things -- it stretches too far for me. Freeing as it is to extend the concept of personhood beyond our species, the individual “person” in question must be able to join in co-creating from within, not because of wired circuits; that is, she must have in the present, or have had at some point in the past, a way to engage in a genuinely mutual partnership. This requirement (Foerst’s concerns aside) excludes neither a newborn baby nor an adult who is in a deep coma or otherwise barred from full linguistic and cognitive participation by severe disability. Importantly, each and every human being meets this criterion for genuine partnership -- even if just by turning the head slightly toward the sound of a loved voice -- or did so at some point in the past.
I could go on for pages more. The role of innate biology in this book, and Foerst’s discussion of the limitations on our ability to accept as persons all humans in the world, can be debated with vigor. But instead I’ll exhort you to read the book. Often arguable, God in the Machine is just as often brilliant; at times it is sobering, and in its call for the practice of love in the world, extended to all those who look and act differently from us, it is exhilarating.
At bottom Foerst’s topic is humanity’s “search for partnership” in the universe. It is a deep human pull and an ancient one, with evolutionary roots. I’ll talk more about that when I return to Chicago in January.
-- If someone could please arrange for Barbara J. King to have a conference high in Lucca, Italy, she’d appreciate it.