(Image credit: Shutterstock)

- OpenAI’s latest AI models, o3 and o4-mini, hallucinate significantly more often than their predecessors
- The models’ increased complexity may be producing more confident inaccuracies
- The high error rates raise concerns about AI reliability in real-world applications

Brilliant but untrustworthy people are a staple of fiction (and history). The same correlation may apply to AI as well, based on an investigation conducted by OpenAI and reported by The New York Times. Hallucinations, imaginary facts, and straight-up lies have been part of AI chatbots since they were created, and improvements to the models should, in theory, make them appear less often.

OpenAI’s latest flagship models, o3 and o4-mini, are meant to mimic human logic. Unlike their predecessors, which mainly focused on fluent text generation, OpenAI built GP…

Read the full story at TechRadar.