GradNetOT: Learning Optimal Transport Maps with GradNets

Optimal Transport (OT) is the mathematical problem of moving “mass” from one distribution to another in the most efficient way. Think of reshaping a pile of sand into a new shape with minimal effort. GradNetOT is a novel machine-learning method that learns exactly these efficient maps, using neural networks equipped with a built-in “bias” toward physically correct solutions.

What Is Optimal Transport?

- Classic formulation: Given two probability distributions (e.g., piles of sand and holes to fill), find a mapping that moves the mass at minimal total cost.
- Brenier’s theorem: For the squared-distance cost, the optimal map is the gradient of a convex function whose potential satisfies a Monge–Ampère equation.

The GradNetOT Approach

GradNetOT leverages a special neural network architecture called a Monotone Gradient Network (mGradNet), which represents convex functions implicitly by directly parameterizing their gradients. Because the output is constrained to be a monotone gradient field, it automatically yields a valid OT map, as the sketch below illustrates. ...
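The paper’s mGradNet has its own dedicated architecture; as a rough sketch of the underlying idea only, one can parameterize a convex potential in the style of an input-convex network and obtain the candidate transport map as its gradient via autograd. All names and dimensions below are illustrative, not from the paper:

```python
import torch
import torch.nn as nn

class ConvexPotential(nn.Module):
    """ICNN-style potential: f(x) is convex in x when the hidden-to-hidden
    and output weights are nonnegative and the activations are convex and
    nondecreasing (softplus here)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.Wx0 = nn.Linear(dim, hidden)
        self.Wz = nn.Linear(hidden, hidden, bias=False)  # kept nonnegative
        self.Wx1 = nn.Linear(dim, hidden)
        self.out = nn.Linear(hidden, 1, bias=False)      # kept nonnegative
        self.act = nn.Softplus()

    def forward(self, x):
        z = self.act(self.Wx0(x))
        z = self.act(self.Wz(z) + self.Wx1(x))
        return self.out(z)

    def clamp_weights(self):
        # Project the constrained weights back onto the nonnegative
        # orthant after each optimizer step to preserve convexity.
        for layer in (self.Wz, self.out):
            layer.weight.data.clamp_(min=0.0)

def transport_map(potential, x):
    """The OT map candidate is the gradient of the convex potential."""
    x = x.requires_grad_(True)
    (grad,) = torch.autograd.grad(potential(x).sum(), x, create_graph=True)
    return grad

potential = ConvexPotential(dim=2)
y = transport_map(potential, torch.randn(8, 2))  # shape (8, 2): mapped samples
```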

July 19, 2025

Unstable Power: How Sharpness Drives Deep Network Learning

The paper “Understanding the Evolution of the Neural Tangent Kernel at the Edge of Stability” by Kaiqi Jiang, Jeremy Cohen, and Yuanzhi Li explores how the Neural Tangent Kernel (NTK) evolves during deep network training, especially under the Edge of Stability (EoS) regime.

What is the NTK?

The Neural Tangent Kernel (NTK) is a matrix that captures how tiny weight changes affect the network’s outputs on each training example. It lets us analyze neural networks with tools from kernel methods, offering theoretical insight into learning dynamics.

What is the Edge of Stability?

When training with a large learning rate $\eta$, the largest eigenvalue of the NTK (or of the loss Hessian) exceeds the stability threshold $2/\eta$ and then oscillates around it. This phenomenon, called the Edge of Stability, combines elements of instability with phases of rapid learning.

Key Findings

- Alignment Shift: Higher $\eta$ leads to stronger final Kernel Target Alignment (KTA) between the NTK and the label vector $y$. ...
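As a quick illustration of the definition above (general background, not code from the paper), the empirical NTK of a small model is the Gram matrix of per-example parameter gradients, and its top eigenvalue is the quantity that hovers near $2/\eta$ at the Edge of Stability:

```python
import torch
import torch.nn as nn

# A tiny scalar-output model and a handful of training inputs.
model = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
X = torch.randn(5, 3)

def flat_grad(scalar):
    """Gradient of a scalar w.r.t. all model parameters, flattened."""
    grads = torch.autograd.grad(scalar, list(model.parameters()),
                                retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

# Empirical NTK: K[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta>.
outputs = model(X).squeeze(-1)
J = torch.stack([flat_grad(outputs[i]) for i in range(len(X))])
K = J @ J.T

top_eig = torch.linalg.eigvalsh(K)[-1]  # eigvalsh returns ascending order
print(top_eig)  # compare (suitably scaled) against 2 / learning_rate
```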

July 18, 2025

RiemannLoRA: A Unified Riemannian Framework for Ambiguity-Free LoRA Optimization

In recent years, Low-Rank Adaptation (LoRA) has become a cornerstone technique for parameter-efficient fine-tuning of large language models (LLMs) and diffusion models. By injecting low-rank matrices into pre-trained weights, LoRA drastically reduces memory and compute requirements, enabling rapid experimentation and deployment. However, practitioners face two persistent challenges:

- Initialization ambiguity: Different low-rank factor pairs $A, B$ can represent the same adapted weight update $AB^\top$, leading to unstable or suboptimal starts (see the sketch below).
- Redundant parameterization: Without a canonical representation, gradient updates can wander through equivalent parameter configurations.

The RiemannLoRA framework, introduced by Bogachev et al., offers a unifying geometric viewpoint that removes these ambiguities and yields faster, more stable fine-tuning. ...
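A toy numerical illustration of the ambiguity (illustrative only, not the paper’s code): any invertible $r \times r$ matrix $R$ turns one factor pair into a different pair encoding exactly the same weight update.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 6, 2

# A LoRA update is the low-rank product A @ B.T.
A = rng.standard_normal((d, r))
B = rng.standard_normal((d, r))

# (A @ R, B @ inv(R).T) is a different factor pair with the same update:
# (A R)(B R^{-T})^T = A R R^{-1} B^T = A B^T.
R = rng.standard_normal((r, r))
A2, B2 = A @ R, B @ np.linalg.inv(R).T

print(np.allclose(A @ B.T, A2 @ B2.T))  # True: the parameterization is redundant
```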

July 17, 2025

A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning

Standard neural networks often suffer from catastrophic forgetting, where learning new tasks degrades performance on previously learned tasks. In contrast, the human brain integrates new and old memories through two complementary memory systems: the hippocampus and the neocortex.

1. Objectives

The authors aim to build a model that captures:

- Pattern separation: distinct encoding of similar experiences,
- Pattern completion: reconstructing full representations from partial inputs,

to support continual learning without loss of previously acquired skills. ...
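As a hedged aside, pattern completion is classically illustrated with a Hopfield-style associative memory. The toy sketch below is a textbook construction, not the paper’s architecture: it stores a few binary patterns and recovers one from a half-corrupted probe.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))  # stored binary memories

# Hebbian weights; zero the diagonal so units don't self-excite.
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0.0)

# Corrupt half of one pattern, then iterate the dynamics to complete it.
probe = patterns[0].copy()
probe[:32] = rng.choice([-1, 1], size=32)
for _ in range(10):
    probe = np.sign(W @ probe)
    probe[probe == 0] = 1

print(np.mean(probe == patterns[0]))  # recall accuracy, typically ~1.0
```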

July 16, 2025

Target Polish: How to Polish Data and Reveal Its True Structure

Imagine you’re analyzing sensor data. Suddenly one sensor shows -999°C. That’s an outlier — a single data point that can completely ruin your analysis.

🧩 What is factorization?

Matrix factorization means decomposing data $X$ into two non-negative components:

$$ X \approx WH $$

where $W$ contains “features” and $H$ shows how much of each is needed.

💡 The problem

Classical methods like NMF are sensitive to noise and outliers. When data is messy, analysis breaks down. ...
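A hedged sketch (scikit-learn NMF, not the paper’s Target Polish method) of how one extreme value distorts a plain factorization. The outlier is positive here because NMF requires non-negative input:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Clean low-rank non-negative data: X ≈ W @ H plus mild noise.
W_true = rng.uniform(0, 1, (100, 4))
H_true = rng.uniform(0, 1, (4, 20))
X = W_true @ H_true + 0.01 * rng.uniform(0, 1, (100, 20))

# Inject a single wildly wrong sensor reading.
X_dirty = X.copy()
X_dirty[0, 0] = 999.0

model = NMF(n_components=4, init="nndsvda", max_iter=500)
for name, data in [("clean", X), ("with outlier", X_dirty)]:
    W = model.fit_transform(data)
    H = model.components_
    # Compare the reconstruction to the *clean* data: the lone outlier
    # drags the recovered structure away from the truth.
    print(f"{name}: error vs. clean data = {np.linalg.norm(X - W @ H):.3f}")
```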

July 15, 2025

Optimistic Exploration for Risk-Averse Constrained Reinforcement Learning

Reinforcement Learning (RL) has revolutionized how agents learn to act in complex environments. But what happens when an agent can’t afford to make mistakes, because a mistake means a car crash, a system failure, or an energy-limit violation? In such cases we turn to Constrained Reinforcement Learning (CRL), where agents aim to maximize reward while staying within safety or cost constraints. Unfortunately, current CRL methods often end up too cautious, which hurts performance. ...
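For background (a standard formulation, not specific to this paper), CRL is usually cast as a constrained Markov decision process: maximize expected discounted reward $r$ subject to a budget $d$ on expected discounted cost $c$, with discount factor $\gamma$:

$$ \max_{\pi} \ \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)\Big] \quad \text{s.t.} \quad \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t} c(s_t, a_t)\Big] \le d $$

An overly cautious policy satisfies the constraint with a large margin at the price of reward, which is exactly the failure mode described above.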

July 14, 2025

Not Just Bigger Models: Why AI Should See Better Instead of Just Scaling

In recent years, AI progress has been largely defined by size: bigger models, bigger datasets, bigger compute budgets. GPT-4, Claude, Gemini – each new model pushes the limits further. But is bigger always better? A group of researchers (Baek, Park, Ko, Oh, Gong, Kim) argue in their recent paper "AI Should Sense Better, Not Just Scale Bigger" (arXiv:2507.07820) that we’ve hit diminishing returns. Instead of growing endlessly, they propose a new focus: adaptive sensing. ...

July 13, 2025

HGMP: Revolutionizing Complex Graph Analysis with Prompt Learning

In an era dominated by language models and machine learning, structured data is rapidly growing in importance: social networks, biological relationships, business connections. Such data is represented as graphs, which are often not homogeneous: they contain nodes of different types (e.g., people, products, companies) and edges of different types (e.g., “purchased”, “recommended”, “works at”). Processing such heterogeneous graphs requires specialized methods.

What are heterogeneous graphs?

A heterogeneous graph is a structure in which: ...
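As a minimal sketch (plain Python, purely illustrative), a heterogeneous graph can be stored as typed node tables plus edge lists keyed by a (source type, relation, target type) triple; graph libraries such as PyTorch Geometric key their heterogeneous data structures in a similar way:

```python
# Typed node sets.
nodes = {
    "person":  ["alice", "bob"],
    "product": ["laptop", "phone"],
    "company": ["acme"],
}

# Typed edge lists, keyed by (source type, relation, target type).
edges = {
    ("person", "purchased",   "product"): [("alice", "laptop")],
    ("person", "recommended", "product"): [("bob", "phone")],
    ("person", "works_at",    "company"): [("alice", "acme")],
}

# Even simple queries must respect the types, e.g. "what did alice buy?"
bought = [dst for src, dst in edges[("person", "purchased", "product")]
          if src == "alice"]
print(bought)  # ['laptop']
```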

July 12, 2025

Predicting and Generating Antibiotics Against Future Pathogens with ApexOracle

The accelerating crisis of antimicrobial resistance (AMR) demands new computational methods to stay ahead of evolving pathogens. ApexOracle is a unified ML platform designed to both predict the activity of candidate compounds against specific bacterial strains and generate novel molecules de novo, proactively targeting future superbugs.

Motivation and Scope

- Global Impact: AMR contributes to nearly 5 million deaths annually.
- Traditional Challenges: Standard drug discovery pipelines are slow, resource-intensive, and reactive.
- ApexOracle Goal: Integrate genomic context and molecular design into one end-to-end framework.

ApexOracle Architecture

Layman’s explanation: Imagine you have three sets of clues: the code of the bacteria (its genome), a simple description of its behaviors (like a basic fact sheet), and the building blocks of a potential drug (a molecular recipe). ApexOracle acts like a super-smart detective that reads all three clues at once. It combines them, figures out which molecules might work best, and even drafts entirely new molecular recipes that could stop the bacteria in their tracks. ...
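The general fusion pattern behind that description can be sketched as follows; the encoders, dimensions, and output head are illustrative placeholders, not ApexOracle’s actual components:

```python
import torch
import torch.nn as nn

class FusionPredictor(nn.Module):
    """Embed each modality (genome, text fact sheet, molecule),
    concatenate the embeddings, and predict an activity score."""
    def __init__(self, genome_dim, text_dim, mol_dim, hidden=128):
        super().__init__()
        self.genome_enc = nn.Linear(genome_dim, hidden)
        self.text_enc = nn.Linear(text_dim, hidden)
        self.mol_enc = nn.Linear(mol_dim, hidden)
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # e.g. a predicted-activity score
        )

    def forward(self, genome, text, mol):
        z = torch.cat([self.genome_enc(genome),
                       self.text_enc(text),
                       self.mol_enc(mol)], dim=-1)
        return self.head(z)

model = FusionPredictor(genome_dim=256, text_dim=64, mol_dim=128)
score = model(torch.randn(4, 256), torch.randn(4, 64), torch.randn(4, 128))
print(score.shape)  # torch.Size([4, 1])
```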

July 11, 2025

HeLo – A New Path for Multimodal Emotion Recognition

Modern emotion-recognition systems increasingly leverage data from multiple sources, ranging from physiological signals (e.g., heart rate, skin conductance) to facial video. The goal is to capture the richness of human feelings, where multiple emotions often co-occur. Traditional approaches, however, have focused on single-label classification (e.g., “happy” or “sad”). The paper “HeLo: Heterogeneous Multi-Modal Fusion with Label Correlation for Emotion Distribution Learning” adopts a different paradigm: emotion distribution learning, where the model predicts the probability of each basic emotion being present. ...
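In the distribution-learning setting, the training target is itself a distribution over emotions rather than a one-hot label. A hedged sketch (illustrative only, not HeLo’s architecture): the model emits a distribution over basic emotions and is trained with a KL-divergence loss against annotated distributions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

# Placeholder predictor over fused multimodal features.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, len(EMOTIONS)))

features = torch.randn(4, 32)  # fused multimodal features (illustrative)
# Annotated emotion distribution, e.g. mostly joy with some surprise.
target = torch.tensor([[0.6, 0.1, 0.0, 0.0, 0.3, 0.0]]).repeat(4, 1)

log_probs = F.log_softmax(model(features), dim=-1)
loss = F.kl_div(log_probs, target, reduction="batchmean")
loss.backward()
print({e: round(p, 2) for e, p in zip(EMOTIONS, log_probs[0].exp().tolist())})
```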

July 10, 2025