{"id":2320,"date":"2025-05-15T15:51:06","date_gmt":"2025-05-15T13:51:06","guid":{"rendered":"https:\/\/wp.unil.ch\/aail\/?p=2320"},"modified":"2025-07-18T09:21:49","modified_gmt":"2025-07-18T07:21:49","slug":"miccai2025","status":"publish","type":"post","link":"https:\/\/wp.unil.ch\/aail\/miccai2025\/","title":{"rendered":"Paper Accepted: Hallucination-Aware Multimodal Benchmark for Gastrointestinal Image Analysis with Large Vision-Language Models"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"590\" src=\"https:\/\/wp.unil.ch\/aail\/files\/2025\/05\/image-2-1024x590.png\" alt=\"image\" class=\"wp-image-2321\" srcset=\"https:\/\/wp.unil.ch\/aail\/files\/2025\/05\/image-2-1024x590.png 1024w, https:\/\/wp.unil.ch\/aail\/files\/2025\/05\/image-2-300x173.png 300w, https:\/\/wp.unil.ch\/aail\/files\/2025\/05\/image-2-768x442.png 768w, https:\/\/wp.unil.ch\/aail\/files\/2025\/05\/image-2-1536x885.png 1536w, https:\/\/wp.unil.ch\/aail\/files\/2025\/05\/image-2.png 1660w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>The paper <em>&#8220;Hallucination-Aware Multimodal Benchmark for Gastrointestinal Image Analysis with Large Vision-Language Models&#8221;<\/em> has been accepted at <a href=\"https:\/\/conferences.miccai.org\/2025\/en\/\">MICCAI 2025<\/a>!<\/p>\n\n\n\n<p> It tackles the critical issue of hallucination in medical vision-language models (VLMs), where the generated descriptions are inconsistent with the visual content, posing serious risks in clinical settings. To address this, the authors introduce <strong>Gut-VLM<\/strong>, a novel multimodal dataset focused on gastrointestinal imaging. 
In addition, the paper establishes a new benchmark by evaluating state-of-the-art VLMs across multiple metrics, offering a valuable resource for advancing safe and accurate medical AI.<\/p>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2505.07001v1\">Read the paper here<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The paper &#8220;Hallucination-Aware Multimodal Benchmark for Gastrointestinal Image Analysis with Large Vision-Language Models&#8221; has been accepted at MICCAI 2025! It &hellip; <\/p>\n","protected":false},"author":1002911,"featured_media":2321,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","footnotes":""},"categories":[25],"tags":[24,21],"class_list":{"0":"post-2320","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-news-out-campus","8":"tag-ai-news","9":"tag-paper-presentation"},"_links":{"self":[{"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/posts\/2320","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/users\/1002911"}],"replies":[{"embeddable":true,"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/comments?post=2320"}],"version-history":[{"count":3,"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/posts\/2320\/revisions"}],"predecessor-version":[{"id":2327,"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/posts\/2320\/revisions\/2327"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/media\/2321"}],"wp:attachment":[{"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/
media?parent=2320"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/categories?post=2320"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wp.unil.ch\/aail\/wp-json\/wp\/v2\/tags?post=2320"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}