Get Into My Mind

  • Decoding LLM Hallucinations: An In-Depth Survey Summary

    The rapid advancement of Large Language Models (LLMs) has brought transformative capabilities, yet their tendency to “hallucinate”—generating outputs that are nonsensical, factually incorrect, or unfaithful to provided context—poses significant risks to their reliability, especially in information-critical applications. A comprehensive survey by Huang et al. (2025) systematically explores this phenomenon, offering a detailed taxonomy, analyzing root causes,...

  • Out-of-Distribution Detection: Ensuring AI Robustness

    Deep neural networks can solve a wide range of complex tasks and achieve state-of-the-art results in domains such as image classification, speech recognition, machine translation, robotics, and control. However, because of the distributional shift between the collected training data and the actual test data, a trained network's performance on the training distribution can differ sharply from its performance on unseen real...
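
    As a quick illustration of the idea behind out-of-distribution detection (not taken from the post itself), the sketch below shows one common baseline: scoring inputs by their maximum softmax probability (MSP) and flagging low-confidence inputs as possibly out-of-distribution. The threshold and example logits are purely illustrative assumptions.

    ```python
    import numpy as np

    def msp_ood_score(logits: np.ndarray) -> np.ndarray:
        """Maximum softmax probability (MSP) baseline: lower scores suggest
        the input is more likely out-of-distribution."""
        # Numerically stable softmax over the class dimension.
        shifted = logits - logits.max(axis=-1, keepdims=True)
        probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
        return probs.max(axis=-1)

    # Hypothetical usage: flag inputs whose confidence falls below a threshold
    # tuned on held-out in-distribution data (0.7 here is just an assumption).
    logits = np.array([[4.2, 0.1, -1.3],   # peaked logits: likely in-distribution
                       [0.4, 0.3,  0.2]])  # flat logits: possibly out-of-distribution
    threshold = 0.7
    for s in msp_ood_score(logits):
        print("in-distribution" if s >= threshold else "flag as OOD")
    ```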