- How Can LoRA Parameters Improve the Detection of Near-OOD Data?
  Paper Review · We’ve all come to love Low-Rank Adaptation (LoRA) for making it practical to fine-tune massive Large Language Models (LLMs) on our own data. The standard practice is simple: you inject small, trainable matrices into the model, fine-tune only them, and then, for deployment, merge these new weights back into the original model to avoid any inference...
- Weight Space Learning: Treating Neural Network Weights as Data
  Paper Review · In the world of machine learning, we often think of data as the primary source of information. But what if we started looking at the models themselves as a rich source of data? This is the core idea behind weight space learning, a fascinating and rapidly developing field of AI research. The real question in this post...
- Dev on Docker, Deploy on Singularity: The MLOps Workflow You've Been Missing
  Tutorials · You’ve perfected your machine learning model in a Docker container on your local machine. But how do you run it on a secure, shared high-performance computing (HPC) cluster? The answer is Singularity. Let’s walk through the process.