{"id":2826,"date":"2025-09-08T12:14:33","date_gmt":"2025-09-08T12:14:33","guid":{"rendered":"https:\/\/menamagazine.com\/?p=2826"},"modified":"2025-09-08T12:14:33","modified_gmt":"2025-09-08T12:14:33","slug":"openai-paper-warns-of-ongoing-ai-hallucination-issues","status":"publish","type":"post","link":"https:\/\/themena.com\/index.php\/2025\/09\/08\/openai-paper-warns-of-ongoing-ai-hallucination-issues\/","title":{"rendered":"OpenAI paper warns of ongoing AI hallucination issues"},"content":{"rendered":"\n<p><strong>OpenAI has released a new research paper examining why large language models, including GPT-5 and chatbots such as ChatGPT, continue to produce hallucinations\u2014false but plausible statements\u2014and whether this issue can be reduced.<\/strong><\/p>\n\n\n\n<p>According to&nbsp;<a href=\"https:\/\/techcrunch.com\/2025\/09\/07\/are-bad-incentives-to-blame-for-ai-hallucinations\/\" target=\"_blank\" rel=\"noopener\"><em><strong>TechCrunch<\/strong>,<\/em><\/a>&nbsp;in a blog post summarising the findings, OpenAI describes hallucinations as \u201cplausible but false statements generated by language models\u201d and acknowledges that, despite improvements, they \u201cremain a fundamental challenge for all large language models\u201d\u2014a challenge that is unlikely to be fully resolved.<\/p>\n\n\n\n<p>To illustrate the problem, the researchers tested a popular chatbot by asking for the title of Adam Tauman Kalai\u2019s Ph.D. dissertation. The chatbot provided three different responses, all incorrect. When asked for his date of birth, it again gave three differing answers, none accurate. Kalai is one of the authors of the paper.<\/p>\n\n\n\n<p>According to the researchers, part of the issue stems from the pretraining process. During this phase, models learn to predict the next word in a sequence, without being shown whether statements are true or false. 
As the paper explains: \u201cThe model sees only positive examples of fluent language and must approximate the overall distribution.\u201d<\/p>\n\n\n\n<p>It adds: \u201cSpelling and parentheses follow consistent patterns, so errors there disappear with scale. But arbitrary low-frequency facts, like a pet\u2019s birthday, cannot be predicted from patterns alone and hence lead to hallucinations.\u201d<\/p>\n\n\n\n<p>Rather than focusing on pretraining, the paper directs attention to how these models are evaluated. It argues that the evaluations themselves do not cause hallucinations but create misleading incentives.<\/p>\n\n\n\n<p>The researchers liken this to multiple-choice tests, where guessing may yield a correct answer by chance, whereas leaving a question blank guarantees no credit. \u201cIn the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say \u2018I don\u2019t know\u2019,\u201d the paper states.<\/p>\n\n\n\n<p>To address this, the authors propose an evaluation method similar to certain standardised tests, where wrong answers are penalised and uncertainty is treated more favourably. According to the paper, evaluations should \u201cpenalise confident errors more than [they] penalise uncertainty, and give partial credit for appropriate expressions of uncertainty.\u201d<\/p>\n\n\n\n<p>They stress that minor adjustments are insufficient. 
Rather than introducing \u201ca few new uncertainty-aware tests on the side,\u201d the researchers argue that \u201cthe widely used, accuracy-based evals need to be updated so that their scoring discourages guessing.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI has released a new research paper examining why large language models, including GPT-5 and chatbots such as ChatGPT, continue to produce hallucinations\u2014false but plausible&hellip;<\/p>\n","protected":false},"author":2,"featured_media":2827,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[138,139,140,37],"class_list":["post-2826","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technology","tag-chatgpt","tag-hallucinations","tag-openai","tag-technology"],
"_links":{"self":[{"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/posts\/2826","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/comments?post=2826"}],"version-history":[{"count":0,"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/posts\/2826\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/media\/2827"}],"wp:attachment":[{"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/media?parent=2826"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/categories?post=2826"},{"taxonomy":"post_tag",
"embeddable":true,"href":"https:\/\/themena.com\/index.php\/wp-json\/wp\/v2\/tags?post=2826"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}