Decoding LLM Hallucinations: An In-Depth Survey Summary
Paper Review · The rapid advancement of Large Language Models (LLMs) has brought transformative capabilities, yet their tendency to “hallucinate” (generating outputs that are nonsensical, factually incorrect, or unfaithful to the provided context) poses significant risks to their reliability, especially in information-critical applications. A comprehensive survey (Huang et al., 2025) systematically explores this phenomenon, offering a detailed taxonomy, analyzing root causes,...
Topology of Out-of-Distribution Examples in Deep Neural Networks
Paper Review · As deep neural networks (DNNs) become more common, concerns about their robustness, particularly when facing unfamiliar inputs, are growing. These models often exhibit overconfidence when making incorrect predictions on out-of-distribution (OOD) examples. This...
Out-of-Distribution Detection: Ensuring AI Robustness
Paper Review · Deep neural networks can solve various complex tasks and achieve state-of-the-art results in multiple domains such as image classification, speech recognition, machine translation, robotics, and control. However, due to the distributional shift between the collected training data and the actual test data, a trained network's performance often differs between the training data and unseen real...