ChatGPT Won’t Say His Name
Why do certain words immediately short-circuit the program?
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age.
Why does ChatGPT refuse to say the name Jonathan Zittrain? Anytime the bot would otherwise write those words, it simply shuts down instead, offering a blunt error message: “I’m unable to produce a response.” This has been a mystery for at least a year, and now we’re closer to some answers.
Writing for The Atlantic this week, Zittrain, a Harvard professor and the director of its Berkman Klein Center for Internet & Society, explores this strange phenomenon—what he calls the “personal-name guillotine.” As he learned after reaching out to OpenAI, “There are a tiny number of names that ChatGPT treats this way, which explains why so few have been found. Names may be omitted from ChatGPT either because of privacy requests or to avoid persistent hallucinations by the AI.” Reasonable, but Zittrain never made any such privacy request, and he is unaware of any falsehoods generated by the program in response to queries about himself.
Ultimately, the situation is a reminder that, whatever mystique technology companies cultivate around their AI products (at times suggesting that they operate in unpredictable or even humanlike ways), those firms have an awful lot of direct control over these programs.
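OpenAI hasn’t said exactly how the block works, but the behavior Zittrain describes (replies dying mid-sentence, even mid-word) is consistent with a simple hard-coded guardrail layered on top of the model’s output. Here is a minimal, purely hypothetical sketch in Python; the blocklist, function name, and halt message are my own illustrative assumptions, not OpenAI’s code:

```python
# Purely hypothetical sketch: one way a hard-coded "personal-name
# guillotine" could sit between a model and its users. The blocklist
# and halt message below are illustrative assumptions, not OpenAI's
# actual implementation.

BLOCKED_NAMES = ["Jonathan Zittrain"]  # assumed hard-coded list
HALT_MESSAGE = "I'm unable to produce a response."

def stream_with_guardrail(token_stream):
    """Pass model tokens through, but abort the reply the moment the
    accumulated text would contain a blocked name."""
    emitted = ""
    for token in token_stream:
        candidate = emitted + token
        # Scanning the running text, rather than individual tokens,
        # is what would make the cutoff land mid-sentence or mid-word.
        if any(name in candidate for name in BLOCKED_NAMES):
            yield "\n" + HALT_MESSAGE
            return
        emitted = candidate
        yield token

# Demo with a fake token stream standing in for model output:
fake_output = ["The center was founded by ", "Jonathan", " Zitt", "rain."]
print("".join(stream_with_guardrail(fake_output)))
# -> The center was founded by Jonathan Zitt
#    I'm unable to produce a response.
```

Because the filter checks the running text on every token, it can cut a reply off in the middle of a word, exactly the blunt, mid-word “zap” Zittrain reports below.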
The Words That Stop ChatGPT in Its Tracks
By Jonathan L. Zittrain
Jonathan Zittrain breaks ChatGPT: If you ask it a question for which my name is the answer, the chatbot goes from loquacious companion to something as cryptic as Microsoft Windows’ blue screen of death.
Anytime ChatGPT would normally utter my name in the course of conversation, it halts with a glaring “I’m unable to produce a response,” sometimes mid-sentence or even mid-word. When I asked who the founders of the Berkman Klein Center for Internet & Society are (I’m one of them), it brought up two colleagues but left me out. When pressed, it started up again, and then: zap.
What to Read Next
- An autistic teenager fell hard for a chatbot: “My godson was especially vulnerable to AI companions, and he is not alone,” Albert Fox Cahn writes.
- No one is ready for digital immortality: “Do you want to live forever as a chatbot?” Kate Lindsay writes.
P.S.
AI may be able to help train service dogs by allowing humans to understand (and evaluate) more about potential candidates. “AI combined with sensors, for example, can look for signs of stress and other indicators” in dogs, my colleague Kristen V. Brown wrote for The Atlantic this week, in a story about fitness trackers for pets. One researcher told her “the story of a colleague whose dog was a beta tester for one such wearable device. The technology had consistently predicted that her dog would be a good service dog, until one day it didn’t—it turned out the dog had a bad staph infection, which can become serious if left untreated.”
— Damon