A recent study published in the scientific journal Nature revealed that AI chatbots are making more mistakes over time as newer models emerge. One key reason may be that AI models are optimized to always provide believable answers, so seemingly correct responses are prioritized over accurate ones. The model does not recheck whether the information it gives is accurate. AI chatbots are, at their core, brilliantly coded search algorithms that answer based on what is written most often across the internet.
These AI hallucinations are self-reinforcing: newer large language models are trained on the output of older ones, so errors compound from one generation to the next. This approach has led to model collapse, and the resulting improvements have been minimal. Editor and writer Mathieu Roy cautioned users not to rely too heavily on these tools and to always check AI-generated search results for inconsistencies.
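To see why training on model-generated output can degrade quality, consider the minimal toy simulation below. It assumes a one-dimensional Gaussian stands in for the data distribution and is only an illustration of the general idea, not the methodology of the Nature study or of any real LLM training pipeline.

```python
# Toy sketch of "model collapse": each generation of a simple Gaussian model
# is fit only to synthetic samples drawn from the previous generation.
# Over many generations the estimated spread tends to drift and shrink,
# a simplified analogue of models degrading when trained on model output.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=1_000)

for generation in range(1, 11):
    # "Train" the model: estimate mean and standard deviation from current data.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only synthetic samples from this fitted model.
    data = rng.normal(loc=mu, scale=sigma, size=1_000)
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Running the loop shows the fitted statistics wandering away from the original distribution, which is the intuition behind warnings about training new models on AI-generated data.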
There have been numerous examples of Google's finest artificial intelligence producing blatantly inaccurate images, including portraying Nazi-era soldiers as Black people and generating inaccurate depictions of well-known historical figures. Incidents like these are far too common and appear to be getting worse with new updates. One proposed fix is to curb these hallucinations by forcing AI models to conduct thorough research and cite sources for every answer they give a user. However, such measures have already been tried, and the problems persist. Recently, HyperWrite AI CEO Matt Shumer announced a new 70B model that uses "Reflection-Tuning," meaning it analyzes its own mistakes and adjusts its responses over time.
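Reflection-Tuning as Shumer described it is a training-time technique; the short sketch below only illustrates the general self-correction idea at inference time, using a hypothetical `ask_model` helper that is not part of any real product or API.

```python
# Minimal sketch of a self-reflection loop, assuming a hypothetical
# ask_model(prompt) helper that returns a chatbot's text response.
# This illustrates the general "critique then revise" idea, not the
# actual Reflection-Tuning training procedure announced by HyperWrite.

def ask_model(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM API.
    raise NotImplementedError

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    answer = ask_model(f"Question: {question}\nAnswer:")
    for _ in range(rounds):
        # Ask the model to critique its own draft answer.
        critique = ask_model(
            f"Question: {question}\nDraft answer: {answer}\n"
            "List any factual errors or unsupported claims in the draft."
        )
        # Ask the model to rewrite the answer using that critique.
        answer = ask_model(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nRewrite the answer, fixing the issues."
        )
    return answer
```

Whether self-critique of this kind meaningfully reduces hallucinations in practice remains an open question.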
With such stubborn and somewhat flawed underlying algorithms, AI chatbots continue to provide inaccurate information, suggesting they cannot replace humans anytime soon.