OpenAI’s GPT Store opened for business last week, offering specialized chatbots built by third parties to paying subscribers. The featured releases include a code tutor from Khan Academy and a math/science tutor, as well as an Automated Blog Post Writer and a hiking GPT from AllTrails. As a blogger and a hiker, I’m intrigued by the latter two, but I haven’t seen news stories about how blogging and hiking are about to be transformed by AI. The big news seems to be that the GPT Store has girlfriends available.
Quartz began the stampede with a headline announcing that AI girlfriends (eight of them!) are “flooding” the GPT store. Every story since echoes the “can you believe it!” tone and points out that OpenAI’s terms of service prohibit “GPTs dedicated to fostering romantic companionship.” So far, OpenAI is not interested in clarifying what “romantic” means. I guess they’ll know it when they see it. It makes sense that they leave the porn and near-porn market to others, lucrative as it might be. Their aim is higher.
Companionship of the non-romantic kind is what OpenAI wants to sell. Your co-pilot, hiking guide, or tutor will be more effective the more invested you are in your relationship with it. And you’ll be more likely to keep paying the monthly fee. To what extent people will invest their emotions, and whether that investment matters to their effectiveness or to OpenAI’s bottom line, is an open question. We’ve known since the days of Eliza that people can form strong attachments to chatbots. Tamagotchi created a moral panic in the 1990s because kids became emotionally attached to their virtual pets. But what are these attachments? Will your co-pilot be more effective if you love it? Could your relationship with a GPT be as meaningful as your relationship with your work colleague, your teacher, or your romantic partner?
William James, writing more than 100 years ago, anticipated this question. James was always working through the implications of big questions. He starts by arguing that belief in God does not make a difference in our investigations of the world. Does it matter, he asks, “if the facts of experience up to date are purposeless configurations of blind atoms moving according to eternal laws, or that on the other hand they are due to the providence of God? As far as the past facts go, indeed there is no difference.” In other words, we don’t have to answer questions about God to answer questions about the material world.
There are plenty of scientists who believe in God and reconcile their beliefs in this way. As James suggests, rather than argue over metaphysics and theology, we should focus on explaining how the universe works (God or no-God) and improving human lives. Writing a few years later in The Meaning of Truth, James returns with an update, having realized that for many people it matters whether God exists. The example he uses is an “automatic sweetheart” very much like the ones now on offer in the GPT Store.
The flaw was evident when, as a case analogous to that of a godless universe, I thought of what I called an 'automatic sweetheart,' meaning a soulless body which should be absolutely indistinguishable from a spiritually animated maiden, laughing, talking, blushing, nursing us, and performing all feminine offices as tactfully and sweetly as if a soul were in her. Would any one regard her as a full equivalent? Certainly not, and why? Because, framed as we are, our egoism craves above all things inward sympathy and recognition, love and admiration. The outward treatment is valued mainly as an expression, as a manifestation of the accompanying consciousness believed in. Pragmatically, then, belief in the automatic sweetheart would not work, and [in] point of fact no one treats it as a serious hypothesis.
What James recognized was that for some people, belief in God mattered, not in answering questions of physics, but in providing a sense of meaning for the questions they asked. He changed his view on the importance of the question. For James, the example was useful because the prospect of an automatic sweetheart was as remote as an example of God acting in the universe. Today, however, the hypothesis of automatic companions is quite serious. We may excuse (or not) the sexism of James’s description as the Victorian language of his time, but here we are, more than 100 years later, and AI girlfriends are being marketed in essentially the same terms. And it is not only companions who get the blushing maiden treatment. Most chatbots are gendered female to take advantage of the same comforting stereotypes.
Some instructional chatbots, Khanmigo and Walter for example, avoid this specific problem, but they cannot avoid the larger point James is making. To quote James’s student Gertrude Stein, “there is no there there.” There is no understanding on the other side of an interaction with a large language model to make it meaningful. GPTs have no capacity for “sympathy and recognition.” Falling in love with an AI is like falling in love with a movie star. It takes an imaginative leap or a suspension of disbelief to feel any sense of reciprocity. I believe this lack of meaningful reciprocity will limit how effective Khanmigo can be at reaching students. No AI tutor will motivate a student to persist or provide a sense of community.
Put the question in a different context, medicine, and I start to doubt myself. Google put out a research note last week saying that their “research AI system based on a LLM and optimized for diagnostic reasoning and conversations” (named AMIE, because of course she is) could outperform human doctors. Large caveats apply here. The study was limited to text chat and in no way measured anything like a typical medical examination or diagnosis. But I cheer inwardly when I read this and other findings suggesting that LM-powered medical chatbots are in my future. That’s because I strongly prefer getting my medical services virtually and would rather avoid the hassle of interacting with a human doctor. The fact that I have a version of white coat syndrome where I pass out during medical examinations may be a factor here.
Doctors feel about their profession as I feel about mine. Medicine is a human service, and even if there are aspects of medicine that could be automated, people need to be seen and heard. The practice of medicine, like that of teaching, requires James’s “sympathy and recognition.” Even a skeptic of modern medicine like me can see this. If I had a serious diagnosis, I would want to talk to a human about next steps and options. This distinction, between routine help with a minor complaint and a serious conversation about difficult health news, reframes the issue of automation for education. If we’re talking about educational tools that augment the classroom rather than replace the essential human connections at the core of education, then I can see potential, maybe a lot of potential.
Teachers rarely provide one-on-one attention. They don’t have time. As this overview of recent research on tutoring from The Hechinger Report shows, access to tutoring services is a real and seemingly growing problem. I don’t know of any school system or institution of higher education that has tutoring figured out. So maybe we don’t have to settle James’s question when it comes to tutors. Maybe the answer, as the article suggests, is that “humans and machines could take turns.” If the question is not how to replace teachers with AI, but how to better educate our students, especially those who are hard to reach, by extending the help available, then perhaps the answer is, It’s AuToMaTiC.
AI Log © 2024 by Rob Nelson is licensed under CC BY-SA 4.0.