On February 20, 2024, OpenAI’s renowned AI-powered chatbot, ChatGPT, exhibited bizarre behavior that left users and experts alike puzzled. The incident, reported by Natural News, highlighted the unpredictability of artificial intelligence.
Users on social media platforms such as Reddit and X reported that the chatbot was returning lengthy, seemingly nonsensical answers to simple questions, with its output ranging from incoherent rambling to Shakespearean-style prose.
One user on the ChatGPT subreddit asked, “Has ChatGPT gone temporarily insane? I was talking to it about Groq, and it started doing Shakespearean-style rants.” Another user noted its loss of coherence, saying, “I asked it for a concise, one-sentence summary of a paragraph and it gave me a [Victorian]-era epic to rival Beowulf.”
The peculiar malfunction wasn’t limited to verbose responses. In one instance, after a user started a conversation about coding, ChatGPT replied with the eerie statement, “Let’s keep the line as if AI is in the room,” leaving the user unsettled.
ChatGPT wasn’t alone in its odd behavior. Around the same time, other AI chatbots, including Gab AI and Google’s Gemini, also reportedly malfunctioned. Gemini users found that when they asked the bot to generate images featuring white people, it refused; instead, it rendered historically white figures as multiracial characters. The resulting outrage forced Google to temporarily suspend Gemini’s ability to generate images of people.
Meanwhile, Gab AI, a chatbot service from the conservative-leaning social media platform Gab, introduced personas of Adolf Hitler and Osama bin Laden, including a Hitler persona that denied the Holocaust. Despite the controversy surrounding these chatbots, Gab CEO Andrew Torba claimed the platform’s user base was growing by 20,000 people a day.
The unreliability of these AI tools has raised concerns about the rapid pace of AI development. OpenAI co-founder John Schulman acknowledged the issue, stating on Twitter: “Alignment – controlling a model’s behavior and values – is still a pretty young discipline.”