ChatGPT has quickly become one of the most popular AI tools in the world, but a recent bug has caused the chatbot to generate unexpected responses.
ChatGPT is one of creator OpenAI’s most popular AI applications, though a far cry from the company’s new “Sora” AI video generation software. Even after refinements and an upgrade to the more advanced GPT-4 model, the chatbot can still generate some questionable responses, without the need for jailbreaks of any kind.
The bug was first spotted on the ChatGPT subreddit, which is full of posts reporting “nonsense” responses. One post, titled “This felt like watching someone slowly go insane”, shows a response that starts off strong but slowly devolves into absolute nonsense. These hallucinations appear to cause the large language model to string together phrases without any context.
AI “hallucinations” are fairly commonplace among large language models, and even though ChatGPT may be one of the most advanced out there, these responses prove that there’s still more work to be done.
A viral piece from tech columnist Ed Zitron further argues that some companies might be reaching the “upper limits” of what is possible with AI, and of how accurate its outputs can really be.
OpenAI is aware of the widespread issues
OpenAI has acknowledged the errors with ChatGPT on its service status page. The incident note reads: “We are investigating reports of unexpected responses from ChatGPT,” adding that the company has identified the issue and is working on a fix.
At the time of writing, the issue has not yet been resolved. However, OpenAI has a solid track record of fixing errors and closing loopholes used to create unexpected responses, such as the now-infamous “Grandma” exploit.