{"version":"1.0","provider_name":"AG CommTech","provider_url":"https:\/\/agcommtech.de\/en\/","author_name":"Die Redaktion","author_url":"https:\/\/agcommtech.de\/en\/author\/die-redaktion\/","title":"Survey and analysis of hallucinations in large language models - AG CommTech","type":"rich","width":600,"height":338,"html":"<blockquote class=\"wp-embedded-content\" data-secret=\"udQKCObNo1\"><a href=\"https:\/\/agcommtech.de\/en\/2026\/01\/07\/survey-and-analysis-of-hallucinations-in-large-language-models\/\">Survey and analysis of hallucinations in large language models<\/a><\/blockquote><iframe sandbox=\"allow-scripts\" security=\"restricted\" src=\"https:\/\/agcommtech.de\/en\/2026\/01\/07\/survey-and-analysis-of-hallucinations-in-large-language-models\/embed\/#?secret=udQKCObNo1\" width=\"600\" height=\"338\" title=\"&#8220;Survey and analysis of hallucinations in large language models&#8221; &#8212; AG CommTech\" data-secret=\"udQKCObNo1\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" class=\"wp-embedded-content\"><\/iframe><script type=\"text\/javascript\">\n\/* <![CDATA[ *\/\n\/*! This file is auto-generated *\/\n!function(d,l){\"use strict\";l.querySelector&&d.addEventListener&&\"undefined\"!=typeof URL&&(d.wp=d.wp||{},d.wp.receiveEmbedMessage||(d.wp.receiveEmbedMessage=function(e){var t=e.data;if((t||t.secret||t.message||t.value)&&!\/[^a-zA-Z0-9]\/.test(t.secret)){for(var s,r,n,a=l.querySelectorAll('iframe[data-secret=\"'+t.secret+'\"]'),o=l.querySelectorAll('blockquote[data-secret=\"'+t.secret+'\"]'),c=new RegExp(\"^https?:$\",\"i\"),i=0;i<o.length;i++)o[i].style.display=\"none\";for(i=0;i<a.length;i++)s=a[i],e.source===s.contentWindow&&(s.removeAttribute(\"style\"),\"height\"===t.message?(1e3<(r=parseInt(t.value,10))?r=1e3:~~r<200&&(r=200),s.height=r):\"link\"===t.message&&(r=new URL(s.getAttribute(\"src\")),n=new URL(t.value),c.test(n.protocol))&&n.host===r.host&&l.activeElement===s&&(d.top.location.href=t.value))}},d.addEventListener(\"message\",d.wp.receiveEmbedMessage,!1),l.addEventListener(\"DOMContentLoaded\",function(){for(var e,t,s=l.querySelectorAll(\"iframe.wp-embedded-content\"),r=0;r<s.length;r++)(t=(e=s[r]).getAttribute(\"data-secret\"))||(t=Math.random().toString(36).substring(2,12),e.src+=\"#?secret=\"+t,e.setAttribute(\"data-secret\",t)),e.contentWindow.postMessage({message:\"ready\",secret:t},\"*\")},!1)))}(window,document);\n\/\/# sourceURL=https:\/\/agcommtech.de\/wp-includes\/js\/wp-embed.min.js\n\/* ]]> *\/\n<\/script>\n","thumbnail_url":"https:\/\/agcommtech.de\/wp-content\/uploads\/2026\/01\/lesetipp-2-januar-2025.png","thumbnail_width":1626,"thumbnail_height":994,"description":"The study by Anh-Hoang, Tran and Nguyen, published in Frontiers in Artificial Intelligence in 2025, analyzes the problem of hallucinations in large language models (LLMs), i.e. false or unfounded statements presented as fact by AI systems. The aim of the work is to understand the extent to which such errors are influenced by the design of prompts and where the limits of prompting lie."}