M²FMoE: When Experts Learn to Predict Floods

Time series forecasting is one of the most important applications of machine learning, from demand prediction and infrastructure monitoring to flood forecasting. The problem? Standard models optimize for typical cases, yet it is precisely the atypical ones, the extreme events, that are often the most important to predict. M²FMoE is a model that learns to predict both. The Problem: Extreme Events Break Standard Models. Time series forecasting has made remarkable progress: Transformers, frequency-domain methods, and hybrid architectures achieve impressive results on benchmarks. But there’s a catch. ...

January 14, 2026

BALLAST: When a Bandit Teaches Your Database How Long to Wait

Imagine you’re a team leader. You send a message and wait for a response. How long do you wait before assuming your colleague has “disappeared”? Too short, and you panic for no reason. Too long, and the whole project stalls. BALLAST is a system that teaches databases to answer this question automatically, using machine learning techniques. The Problem: Raft’s Achilles Heel. Raft is a consensus protocol: the way distributed databases (like etcd, Consul, CockroachDB) agree on who the “leader” is and which data is current. It works like this: ...
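
The excerpt cuts off before the mechanics, so here is a minimal sketch of the general idea only, not BALLAST’s actual algorithm: an epsilon-greedy bandit that treats a few candidate election timeouts as arms and learns which one avoids both spurious elections and slow failure detection. The timeout values, the simulated heartbeat gaps, and the reward function are all illustrative assumptions.

```python
import random

# Illustrative only: candidate election timeouts (ms) treated as bandit arms.
TIMEOUTS_MS = [50, 100, 200, 400, 800]

class EpsilonGreedyTimeout:
    """Tiny epsilon-greedy bandit over a fixed set of timeout values."""
    def __init__(self, arms, epsilon=0.1):
        self.arms = arms
        self.epsilon = epsilon
        self.counts = [0] * len(arms)
        self.values = [0.0] * len(arms)   # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.arms))                      # explore
        return max(range(len(self.arms)), key=lambda i: self.values[i])  # exploit

    def update(self, i, reward):
        self.counts[i] += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]

def simulated_reward(timeout_ms, heartbeat_gap_ms):
    # Made-up reward: punish false suspicions (timeout shorter than a healthy
    # heartbeat gap) and mildly punish overly long, slow-to-detect timeouts.
    if timeout_ms < heartbeat_gap_ms:
        return -1.0                       # spurious leader election
    return 1.0 - timeout_ms / 1000.0      # prefer the shortest safe timeout

bandit = EpsilonGreedyTimeout(TIMEOUTS_MS)
for _ in range(2000):
    arm = bandit.select()
    gap = random.gauss(120, 30)           # pretend heartbeats arrive ~120 ms apart
    bandit.update(arm, simulated_reward(TIMEOUTS_MS[arm], gap))

best = max(range(len(TIMEOUTS_MS)), key=lambda i: bandit.values[i])
print("learned timeout:", TIMEOUTS_MS[best], "ms")
```

On this toy workload the bandit settles on the shortest timeout that stays above the typical heartbeat gap; the appeal of a learning approach is that it can keep adapting as network conditions shift.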

January 5, 2026

AI Co-Scientist: Teaching Models to Write Research Plans Better Than Humans

What if AI could not just answer questions, but actively plan scientific research? Not generating text, but creating coherent, novel experiment plans that experts rate as better than human-written ones. Sounds like science fiction? Researchers from Meta AI and partners just achieved this. The Problem: How Do You Grade Scientific Creativity? Training models for “closed” tasks (math, coding) is relatively straightforward: the answer is either correct or not. But how do you evaluate a research plan? ...

December 30, 2025

HyDRA: Teaching Your Phone to Understand Images Without Breaking the Bank

Imagine teaching your phone to recognize photos of dishes and suggest recipes. The catch? Models capable of this are massive and require the computational power of a Google data center. HyDRA is a clever method that adapts such models for mobile devices, without going bankrupt and without melting the planet. The Problem: An Elephant in Your Phone. Vision Language Models (VLMs) are AI models that understand both images and text simultaneously. You can show them a photo and ask “what do you see?” or “how do I fix this?”. Sounds great, but there’s a catch. ...

December 27, 2025

Comp-LLM: When an Army of Experts Beats a Giant – An Analysis of a Revolution in AI Architecture

Have you ever wondered why the latest artificial intelligence models, like GPT-4 or Claude 3 Opus, are so enormous? We’re talking hundreds of billions or even trillions of parameters. These are digital monsters requiring massive amounts of energy and data-center-level infrastructure. For years, AI followed a simple rule: “Bigger means better.” Want a smarter model? Add more layers, more data, more GPUs. But — what if this is a dead end? ...

December 1, 2025

Cost-Constrained LLM Cascades — Meet C3PO

Imagine you have an army of helpers — several different Large Language Models (LLMs), each capable of handling tasks from simple queries to complex reasoning. But each helper costs something: time, compute, or actual money if you’re using an API. So the question is: Can we orchestrate these models wisely — starting from the cheapest one that might do the job, escalating only when needed — without exceeding a cost budget? ...
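
The excerpt doesn’t spell out C3PO’s policy, so the sketch below only illustrates the general cascade idea under stated assumptions: models are tried from cheapest to most expensive, a query escalates when a (hypothetical) confidence score falls below a threshold, and a tier is skipped entirely if it would exceed the budget. The tier names, prices, and confidence values are placeholders, not numbers from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Tier:
    name: str
    cost: float                                  # cost per call, arbitrary units
    answer: Callable[[str], Tuple[str, float]]   # returns (answer, confidence)

def cascade(query: str, tiers: List[Tier], budget: float, conf_threshold: float = 0.8):
    """Try tiers from cheapest to priciest, escalating only on low confidence."""
    spent, best = 0.0, ("<no answer>", 0.0)
    for tier in sorted(tiers, key=lambda t: t.cost):
        if spent + tier.cost > budget:
            break                                # next tier would blow the budget
        ans, conf = tier.answer(query)
        spent += tier.cost
        if conf > best[1]:
            best = (ans, conf)
        if conf >= conf_threshold:
            break                                # cheap model was confident enough
    return best[0], spent

# Toy stand-ins for real LLM calls.
tiers = [
    Tier("small",  cost=0.1, answer=lambda q: ("draft answer", 0.55)),
    Tier("medium", cost=1.0, answer=lambda q: ("better answer", 0.85)),
    Tier("large",  cost=5.0, answer=lambda q: ("best answer", 0.97)),
]

print(cascade("Summarize this contract clause.", tiers, budget=2.0))
# -> ('better answer', 1.1): the medium tier cleared the confidence bar,
#    so the expensive model was never called.
```

Estimating that confidence signal and tuning the escalation policy so the cost budget actually holds are the hard parts; the sketch above simply assumes them away.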

November 14, 2025

Accurate Satellite Rain Forecasting with Physics-Conditioned Neural Networks

Imagine this: you’re driving, clouds are gathering, and your weather app says “heavy rain in 15 minutes”, but there is no local radar coverage and the forecast turns out wrong. Sound familiar? That’s exactly the kind of problem tackled by the new research paper Precipitation nowcasting of satellite data using physically conditioned neural networks (by Antônio Catão et al.). The authors present a model that forecasts precipitation using only satellite data, powered by a neural network conditioned on physics. In short: less “black box” magic, more scientific reasoning, and better forecasts where radar coverage is weak or nonexistent. ...

November 10, 2025

No Prior, No Leakage – can we really reconstruct data from a neural network?

In the era of artificial intelligence, privacy protection is one of the hottest topics. Neural networks often “memorize” pieces of training data. In extreme cases, an attacker could try to reconstruct the original examples just from the trained model’s parameters (so-called reconstruction attacks). Imagine a medical model that could reveal fragments of sensitive patient images — alarming, right? The new paper “No Prior, No Leakage: Revisiting Reconstruction Attacks in Trained Neural Networks” (arxiv.org) challenges this fear. It shows that without additional knowledge (priors), reconstruction is fundamentally undecidable. In other words: model parameters alone may not be enough to recover the training data. ...

September 26, 2025

How to Detect Credit Card Fraud?

Today, credit card transactions are everywhere: online shopping, bill payments, travel, and so on. Unfortunately, the number of fraud cases is also growing. The challenge is that fraudulent transactions are extremely rare compared to normal ones, which means that simple models trained on raw data often “ignore” these rare cases: statistically, it’s cheaper to be wrong on a few frauds than on thousands of normal payments. The paper “Credit Card Fraud Detection” (arXiv:2509.15044) analyzes how to improve fraud detection by applying data preprocessing techniques (class balancing) and comparing several models. This matters because the effectiveness of such systems has real-world consequences for banks, payment platforms, and user security. ...
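
The snippet below is not from the paper; it is just a minimal sketch of one common balancing trick, using scikit-learn’s class_weight="balanced" on a synthetic, heavily imbalanced dataset, to show why reweighting (or resampling) the rare class matters.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for card transactions: ~0.5% of samples are "fraud" (class 1).
X, y = make_classification(
    n_samples=50_000, n_features=20, weights=[0.995, 0.005], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0
)

# Plain model: the loss is dominated by the majority class.
plain = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# class_weight="balanced" reweights errors inversely to class frequency,
# a simple alternative to resampling techniques such as SMOTE.
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_train, y_train)

for name, model in [("plain", plain), ("class-weighted", weighted)]:
    print(f"== {name} ==")
    print(classification_report(y_test, model.predict(X_test), digits=3))
```

The usual trade-off shows up immediately: the weighted model typically catches more of the rare class at the cost of more false alarms, which is why work in this area reports precision, recall, and F1 rather than raw accuracy.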

September 21, 2025

JANUS – how to fool Graph Neural Networks and what it teaches us

Graph Neural Networks (GNNs) are among the most powerful tools in modern AI. They can analyze data structured as nodes and connections – like social networks, financial links, protein structures, or transportation systems. But success comes with risk: GNNs can be attacked. A new research paper introduces JANUS – a framework that learns to inject fake nodes into graphs in a way that is extremely hard to detect. While framed as an attack, the insights are equally valuable for building defenses. ...
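
JANUS itself is not reproduced here; the numpy sketch below only shows why node injection is such an effective attack surface: in a single GCN-style propagation step, a node’s embedding is a normalized sum over its neighbors (plus itself via a self-loop), so one injected neighbor with crafted features directly shifts that embedding. The toy graph, features, and weights are arbitrary.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN-style step: symmetrically normalized adjacency @ features @ weights."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)                 # 3-node toy graph
X = rng.normal(size=(3, 4))                            # node features
W = rng.normal(size=(4, 2))                            # layer weights

before = gcn_layer(A, X, W)

# Inject one fake node connected to node 0, with a crude "crafted" feature vector.
A_atk = np.zeros((4, 4))
A_atk[:3, :3] = A
A_atk[0, 3] = A_atk[3, 0] = 1.0
X_atk = np.vstack([X, 10.0 * np.ones((1, 4))])

after = gcn_layer(A_atk, X_atk, W)
print("shift in node 0's embedding:", np.linalg.norm(after[0] - before[0]))
```

A real attack, and JANUS in particular, has to make that injected node statistically inconspicuous while still steering predictions, which is what makes it hard to detect and what defenders need to watch for.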

September 17, 2025