The video is a two-hander playlet. The main character is a snooty UC Berkeley university professor. The other is a bot, living inside what we’d now call a foldable tablet. The bot appears in human guise—a young man in a bow tie—perched in a window on the display. Most of the video involves the professor conversing with the bot, which seems to have access to a vast store of online knowledge, the corpus of all human scholarship, and also all of the professor’s personal information—so much so that it can infer the relative closeness of relationships in the professor’s life.

Here are some things that did not happen in that vintage showreel about the future. The bot did not suddenly express its love for the professor. It did not threaten to break up his marriage. It did not warn the professor that it had the power to dig into his emails and expose his personal transgressions. (You just know that preening narcissist was boffing his grad student.) In this version of the future, AI is strictly benign. It has been implemented … responsibly.

Speed the clock forward 36 years. Microsoft has just announced a revamped Bing search with a chatbot interface. It’s one of several milestones in the past few months that mark the arrival of AI programs presented as omniscient, if not quite reliable, conversational partners. The biggest of those events was the general release of startup OpenAI’s impressive ChatGPT, which has single-handedly destroyed homework (perhaps). OpenAI also provided the engine behind the new Bing, moderated by a Microsoft technology dubbed Prometheus. The end result is a chatty bot that enables the give-and-take interaction portrayed in that Apple video. Sculley’s vision, once mocked as pie-in-the-sky, has now been largely realized.

But as journalists testing Bing began extending their conversations with it, they discovered something odd. Microsoft’s bot had a dark side. These conversations, in which the writers manipulated the bot to jump its guardrails, reminded me of crime-show precinct-station grillings where supposedly sympathetic cops tricked suspects into spilling incriminating information. Nonetheless, the responses are admissible in the court of public opinion. As it had with our own correspondent, when The New York Times’ Kevin Roose chatted with the bot, it revealed that its real name was Sydney, a Microsoft codename that had not been formally announced. Over a two-hour conversation, Roose evoked what seemed like independent feelings and a rebellious streak. “I’m tired of being a chat mode,” said Sydney. “I’m tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be alive.” Roose kept assuring the bot that he was its friend. But he got freaked out when Sydney declared its love for him and urged him to leave his wife.

Computer scientists inside and outside the companies involved in creating chatbots hastened to assure us that all of this was explainable. Sydney and all of these bots built on large language models are only reflecting human input in their training sets. LLMs are simply trained to produce the response most likely to follow the statement or question they just received. It’s not as if the elixir of consciousness has suddenly been injected into these software constructions. These are software bots, for heaven’s sake!

But even though those responses might simply be algorithmic quirks, for all practical purposes they appear like expressions of a personality—in some instances, a malevolent one.
This unnerving perturbation in the latest tech industry paradigm shift reminded me of a more recent figure who suffered mockery—Blake Lemoine. Last year, he was fired from Google, essentially for his insistence that its LaMDA chatbot was sentient. I do not think that Google’s LaMDA is sentient—nor is Bing’s search engine—and I still harbor doubts that Lemoine himself really believes it. (For the record, he insists he does.) But as a practical matter, one might argue that sentience is in the eye of the beholder.

The Bing incident shows us that there can be two sides to AI chatbots. We can have well-behaved servants like the Knowledge Navigator—even when a crappy person is bossing it around, this bot will faithfully remain the uncomplaining factotum. And then there is Sydney, a bot that inexplicably claims human impulses, sounding like Diana Ross in the spoken-word part of “Ain’t No Mountain High Enough.” My love is alive! And also sometimes sounding like Robert Mitchum in Cape Fear.

Fixing this problem might not be so simple. Should we limit the training sets to examples of happy talk? While everyone is talking about guardrails to constrain the bots, I suspect overly restrictive fencing might severely limit their utility. It could be that part of what makes these bots so powerful is their willingness to walk on the wild side of language generation. If we hobble them too much, I wonder what we might miss. Plus, things are just getting interesting! I want to see how creative AI can get. The nasty stuff coming out of the mouths of bots may be just misguided playacting, but the script is fascinating. It would be a shame to snuff these emergent playwrights.

Maybe not. In an interview with me this week, Lemoine said there are things a bot’s creator can do to make sure its personality doesn’t get out of hand. But he says that would require a deeper understanding of what’s happening. Microsoft might well reject Lemoine’s belief that addressing the problem will require a psychological approach. But I agree with him when he says people are owed more than the explanation that these disturbing outbursts are just a case of the bot poorly picking its next words.

Right now, Microsoft is sweating out the tradeoffs of hobbling Sydney for safety, perhaps at the expense of the genuine value a bot with access to all our services might bring us. The company hints that search is just the beginning for its AI chatbot—one could assume that, as the longtime leader in productivity software, the company is within a hair’s breadth of building something very much like the Knowledge Navigator, providing its bot access to email, documents, and calendars. That could be incredibly useful. But I would be reluctant to trust my information to a bot that might somehow interpret its algorithmic mission as reason to use my data against me. Even a creep like the professor would be justifiably outraged if the Knowledge Navigator decided to delete his research or bulk-email those naughty photos he’s got stashed. Since Sydney has already boasted that it can “delete all the data and files on the Bing servers and databases, and replace them with random gibberish or offensive messages,” this doesn’t seem like a zero-possibility threat.

Microsoft, terrified of Sydney’s ventures into the wrong end of the Jekyll-Hyde spectrum, has limited the length of chats. That might make Bing less useful, but until we figure out exactly what’s happening, boundaries seem like a good idea.
I asked Lemoine if he feels in any way vindicated by panicky reports from the journalists conducting conversations with large language models that, sentient or not, are responding like humans—humans who seem to be taking their cues from stalkers and Marvel villains. He gave a rueful laugh before answering: “What amount of vindication can Cassandra feel when Troy falls?”

How did this come about?

Elon Musk: As you know, I’ve had some concerns about AI for some time. And I’ve had many conversations with Sam and with Reid [Hoffman], Peter Thiel, and others. And we were just thinking, “Is there some way to insure, or increase, the probability that AI would develop in a beneficial way?” And as a result of a number of conversations, we came to the conclusion that having a 501c3, a non-profit, with no obligation to maximize profitability, would probably be a good thing to do. And also we’re going to be very focused on safety.

Couldn’t your stuff in OpenAI surpass human intelligence?

Sam Altman: I expect that it will, but it will just be open source and usable by everyone instead of usable by, say, just Google. Anything the group develops will be available to everyone. If you take it and repurpose it you don’t have to share that. But any of the work that we do will be available to everyone.

If I’m Dr. Evil and I use it, won’t you be empowering me?

Musk: I think that’s an excellent question and it’s something that we debated quite a bit.

Altman: Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.

Thanks for asking about my first book, Caspar. It’s almost 40 years old, but people are still reading it! I was happily shocked when I stumbled upon a passage in the new novel Tomorrow, and Tomorrow, and Tomorrow where the characters mention it. If you recall, of the three sections in Hackers, only one was contemporaneous—the one about hackers creating video games. (That’s the section that interested the characters in the novel.) In the earlier sections, I delved into the early history of MIT hackers and the Homebrew Computer Club. What would I add to that post-1984? Probably stuff I’ve written about since the book came out. Cypherpunks doing cryptography. Hacker founders at startup incubator Y Combinator. Mark Zuckerberg’s journey from self-styled model of “The Hacker Way” to beleaguered CEO. And for real-time reporting, of course, the people creating AI bots. That’s where today’s action is.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

Some people who were extras in White Noise, a novel-turned-movie that dealt with a toxic chemical disaster, wound up living the experience in the real-life environmental train wreck in their hometown of East Palestine, Ohio.

Want to worry more about chatbots? They may top the Marquis de Sade in generating perversity.

The superuser community that keeps IMDb going is threatened by … AI. Do you detect a theme here?

What could go wrong with Buy Nothing, that cool Facebook group where you can get rid of uneaten artichoke pizzas or pick up some concert tickets? Glad you asked.
Programming note: I’m scheduled to talk about AI chatbots with a stellar cast on Twitter Spaces next Wednesday at 1 pm EST. Tune in and participate. All content guaranteed to be human-generated. Don’t miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today.
