Unstable Power: How Sharpness Drives Deep Network Learning

The paper “Understanding the Evolution of the Neural Tangent Kernel at the Edge of Stability” by Kaiqi Jiang, Jeremy Cohen, and Yuanzhi Li explores how the Neural Tangent Kernel (NTK) evolves during deep network training, especially under the Edge of Stability (EoS) regime.

What is the NTK? The NTK is a matrix that captures how tiny weight changes affect the network’s outputs on each training example. It lets us analyze neural networks with tools from kernel methods, offering theoretical insight into learning dynamics.

What is the Edge of Stability? When training with a large learning rate $\eta$, the largest eigenvalue of the NTK (or of the loss Hessian) rises above the stability threshold $2/\eta$ and then oscillates around it. This phenomenon, called the Edge of Stability, combines instability with phases of rapid learning.

Key Findings

Alignment Shift: a higher $\eta$ leads to stronger final Kernel Target Alignment (KTA) between the NTK and the label vector $y$. ...
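To make the two central quantities concrete, here is a minimal NumPy sketch that computes the empirical NTK of a toy one-hidden-layer network and the KTA between that kernel and a label vector. The network, data, and sizes are illustrative stand-ins, not the paper’s experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a one-hidden-layer tanh network with scalar output.
n, d, h = 8, 3, 16
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
w2 = rng.normal(size=h) / np.sqrt(h)

def per_example_grad(x):
    """Gradient of the scalar output f(x) = w2 . tanh(W1 x) w.r.t. all parameters."""
    a = np.tanh(W1 @ x)
    dW1 = np.outer(w2 * (1 - a**2), x)  # df/dW1
    dw2 = a                             # df/dw2
    return np.concatenate([dW1.ravel(), dw2])

# Empirical NTK: K_ij = <grad f(x_i), grad f(x_j)>.
J = np.stack([per_example_grad(x) for x in X])
K = J @ J.T

# Kernel Target Alignment between K and the label vector y:
# KTA = (y^T K y) / (||K||_F * ||y||^2).
kta = (y @ K @ y) / (np.linalg.norm(K, "fro") * (y @ y))
print(f"top NTK eigenvalue: {np.linalg.eigvalsh(K)[-1]:.3f}")
print(f"KTA: {kta:.3f}")
```

The printed top eigenvalue is the quantity that, in the EoS regime described above, exceeds $2/\eta$ and then oscillates around it.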

July 18, 2025

RiemannLoRA: A Unified Riemannian Framework for Ambiguity-Free LoRA Optimization

In recent years, Low-Rank Adaptation (LoRA) has become a cornerstone technique for parameter-efficient fine-tuning of large language models (LLMs) and diffusion models. By injecting low-rank matrices into pre-trained weights, LoRA drastically reduces memory and compute requirements, enabling rapid experimentation and deployment. However, practitioners face two persistent challenges:

- Initialization ambiguity: different low-rank factor pairs $(A, B)$ can represent the same adapted weight update $AB^\top$, leading to unstable or suboptimal starts.
- Redundant parameterization: without a canonical representation, gradient updates can wander through equivalent parameter configurations.

The RiemannLoRA framework, introduced by Bogachev et al., offers a unifying geometric viewpoint that removes these ambiguities and yields faster, more stable fine-tuning. ...
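To see the ambiguity concretely, here is a small NumPy sketch (illustrative only, not RiemannLoRA’s algorithm) showing that infinitely many factor pairs produce exactly the same weight update $AB^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 6, 5, 2

# A LoRA update is the product of two low-rank factors: dW = A @ B.T
A = rng.normal(size=(d, r))
B = rng.normal(size=(k, r))

# Any invertible r x r matrix G yields a different factor pair
# with the identical weight update -- the ambiguity described above.
G = rng.normal(size=(r, r))
A2 = A @ G
B2 = B @ np.linalg.inv(G).T

print(np.allclose(A @ B.T, A2 @ B2.T))  # True: equivalent parameterizations
```

Because gradient descent sees $(A, B)$ and $(A_2, B_2)$ as different points while they encode the same update, a geometric treatment that quotients out this redundancy is the natural fix.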

July 17, 2025

Target Polish: How to Polish Data and Reveal Its True Structure

Imagine you’re analyzing sensor data. Suddenly one sensor shows -999°C. That’s an outlier: a single data point that can completely ruin your analysis.

🧩 What is factorization?

Non-negative matrix factorization decomposes the data $X$ into two non-negative components:

$$ X \approx WH $$

where $W$ contains “features” and $H$ shows how much of each is needed.

💡 The problem

Classical methods like NMF are sensitive to noise and outliers. When the data is messy, the analysis breaks down. ...
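As a concrete illustration of that sensitivity, the sketch below fits scikit-learn’s standard NMF to synthetic low-rank data, with and without a single planted outlier, and measures reconstruction quality against the clean data. This demonstrates the failure mode only; it is not the Target Polish method itself.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Low-rank non-negative data: X ~ W H with rank 2.
W_true = rng.uniform(size=(100, 2))
H_true = rng.uniform(size=(2, 20))
X = W_true @ H_true

# Plant one extreme outlier (the "-999 sensor", kept non-negative for NMF).
X_dirty = X.copy()
X_dirty[0, 0] = 999.0

model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
for name, data in [("clean", X), ("one outlier", X_dirty)]:
    W = model.fit_transform(data)
    H = model.components_
    # Compare the recovered structure against the *clean* data:
    # a single outlier drags the factors far away from the truth.
    err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
    print(f"{name}: relative error vs. clean data = {err:.3f}")
```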

July 15, 2025

Not Just Bigger Models: Why AI Should See Better Instead of Just Scaling

In recent years, AI progress has been largely defined by size: bigger models, bigger datasets, bigger compute budgets. GPT-4, Claude, Gemini – each new model pushes the limits further. But is bigger always better? A group of researchers (Baek, Park, Ko, Oh, Gong, Kim) argue in their recent paper "AI Should Sense Better, Not Just Scale Bigger" (arXiv:2507.07820) that we’ve hit diminishing returns. Instead of growing endlessly, they propose a new focus: adaptive sensing. ...

July 13, 2025

HeLo – A New Path for Multimodal Emotion Recognition

Modern emotion-recognition systems increasingly leverage data from multiple sources, ranging from physiological signals (e.g., heart rate, skin conductance) to facial video. The goal is to capture the richness of human feelings, where multiple emotions often co-occur. Traditional approaches, however, have focused on single-label classification (e.g., “happy” or “sad”). The paper “HeLo: Heterogeneous Multi-Modal Fusion with Label Correlation for Emotion Distribution Learning” shifts to a different paradigm: emotion distribution learning, where the model predicts the probability of each basic emotion being present. ...
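As a rough illustration of what distribution learning changes at the loss level, here is a minimal PyTorch sketch that fits an output head against full emotion distributions with a KL-divergence objective. The features, targets, and loss choice are illustrative assumptions, not necessarily HeLo’s exact architecture or objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: instead of one hard label, each sample carries a full
# distribution over basic emotions. Features and targets are synthetic.
n_emotions, feat_dim = 6, 32
x = torch.randn(16, feat_dim)                                # fused multimodal features
target = torch.softmax(torch.randn(16, n_emotions), dim=-1)  # ground-truth distributions

head = nn.Linear(feat_dim, n_emotions)
log_probs = F.log_softmax(head(x), dim=-1)

# Distribution learning: match the whole distribution, not one class index.
loss = F.kl_div(log_probs, target, reduction="batchmean")
loss.backward()
print(f"KL loss: {loss.item():.3f}")
```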

July 10, 2025

QuEst: Blending Data and Predictions for Robust Quantile Estimation

Imagine you track your morning commute times by recording 50 real-world trips with your GPS-enabled phone. You also run a traffic simulator to generate 5,000 possible commute scenarios. You want a reliable estimate of the 95th percentile of commute time, the duration you won’t exceed on 95% of days. Using only your 50 recorded trips yields a wide confidence interval; using only the simulator risks systematic biases, since it might ignore sudden road closures or special events. ...
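The sketch below reproduces this tension on synthetic numbers and shows one simple heuristic way to blend the two sources. It is not the QuEst estimator, just a stand-in illustrating why combining a small unbiased sample with a large biased one can help.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 50 real trips, 5000 simulated trips (simulator biased).
real = rng.gamma(shape=9.0, scale=3.0, size=50)   # ~ true commute times (min)
sim = rng.gamma(shape=9.0, scale=2.6, size=5000)  # systematically too optimistic

q = 0.95
q_real = np.quantile(real, q)   # unbiased but high-variance (only 50 points)
q_sim = np.quantile(sim, q)     # precise but biased

# Heuristic blend: debias the simulator quantile by the gap observed at the
# median (which the small sample estimates well), then average the estimates.
shift = np.quantile(real, 0.5) - np.quantile(sim, 0.5)
q_blend = 0.5 * q_real + 0.5 * (q_sim + shift)

print(f"real-only 95th pct: {q_real:.1f} min")
print(f"sim-only 95th pct:  {q_sim:.1f} min")
print(f"blended estimate:   {q_blend:.1f} min")
```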

July 8, 2025

RetrySQL: Self-Correcting Query Generation

The text-to-SQL task involves converting natural-language questions into executable SQL queries over a relational database. While modern large language models (LLMs) excel at many generative tasks, generating correct, complex SQL queries remains challenging. In the paper “RetrySQL: text-to-SQL training with retry data for self-correcting query generation”, the authors introduce a training paradigm that teaches the model to monitor and correct its own reasoning steps during generation, rather than relying solely on post-processing modules. ...
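To make the idea of “retry data” concrete, here is a sketch of what a self-correction training example could look like. The `[BACK]` marker and the exact formatting are assumptions for illustration, not the paper’s verbatim format.

```python
# Illustrative construction of a "retry" training example: an incorrect
# generation step, a correction marker, and the fixed step, all in one
# training sequence so the model learns to self-correct mid-generation.

question = "How many orders did each customer place in 2024?"

wrong_step = "SELECT name, COUNT(*) FROM orders GROUP BY name"
fixed_step = (
    "SELECT c.name, COUNT(o.id) FROM customers c "
    "JOIN orders o ON o.customer_id = c.id "
    "WHERE strftime('%Y', o.created_at) = '2024' "
    "GROUP BY c.name"
)

# The hypothetical [BACK] token signals "discard the previous step and retry".
retry_example = f"Question: {question}\n{wrong_step} [BACK] {fixed_step}"
print(retry_example)
```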

July 7, 2025

How to Predict Scooter Demand? XGBoost and Urban Micromobility

Can we predict when and where people will rent electric scooters? Yes, and with impressive accuracy. A recent publication shows how advanced algorithms like XGBoost can revolutionize the management of micromobility in cities.

🌍 Context: Micromobility and Demand

In many cities, dockless electric scooters have become a daily transport option. But for operators, a crucial question remains: where and when will people want to rent a scooter? Too many vehicles in one location is wasteful; too few means lost revenue and frustrated users. That’s why accurately predicting demand is so important. ...
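A minimal sketch of this kind of demand model, using the xgboost library on synthetic hourly features; the real study would draw on historical trips, weather, and calendar data, so the features and data-generating process here are illustrative only.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)

# Synthetic stand-in for hourly rental counts per city zone.
n = 2000
hour = rng.integers(0, 24, size=n)
weekday = rng.integers(0, 7, size=n)
zone = rng.integers(0, 10, size=n)
temp = rng.normal(18, 6, size=n)

# Toy ground truth: demand peaks at commute hours on weekdays, rises with warmth.
rush = np.isin(hour, [8, 9, 17, 18]) & (weekday < 5)
y = rng.poisson(3 + 10 * rush + 0.2 * np.clip(temp, 0, None))

X = np.column_stack([hour, weekday, zone, temp])
model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# Predicted demand for zone 3 at 8 a.m. on a Monday, 20 °C.
print(model.predict(np.array([[8, 0, 3, 20.0]])))
```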

July 4, 2025

Ghost Nodes: A Trick That Makes Neural Networks Learn Smarter

When we train deep neural networks, they often get stuck, not at a bad result but in a “flat region” of the loss landscape. The authors of this paper introduce ghost nodes: extra, fake output nodes that aren’t real classes but help the model explore better paths during training. Imagine you’re rolling a ball into a valley. Sometimes the valley floor is flat and the ball slows down. Ghost nodes are like adding new dimensions to the terrain, giving the ball more freedom to move and find a better path. ...
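One plausible minimal implementation, in PyTorch: widen the classifier head with extra logits that participate in the softmax but never appear as labels. Whether this matches the paper’s exact construction is an assumption; the sketch just illustrates the mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed "ghost node" setup: real classes plus extra fake output nodes.
n_classes, n_ghost, feat_dim = 10, 4, 32

head = nn.Linear(feat_dim, n_classes + n_ghost)  # real + ghost logits
x = torch.randn(8, feat_dim)
targets = torch.randint(0, n_classes, (8,))      # labels only hit real classes

logits = head(x)
# Ghost logits take part in the softmax (extra dimensions in the landscape)
# but are never the target, so their probability mass is pushed toward zero.
loss = F.cross_entropy(logits, targets)
loss.backward()

probs = logits.softmax(dim=-1)
print("mass on ghost nodes:", probs[:, n_classes:].sum(dim=-1).mean().item())
```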

July 3, 2025