TAPS: Why Your Draft Model's Training Data Matters More Than Its Architecture

Speculative decoding is one of the most elegant tricks in LLM inference: a small, fast draft model proposes tokens, and a large verifier approves or rejects them in parallel. Same output distribution, fewer expensive forward passes. ...
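A minimal sketch of the accept/reject rule behind this, using hypothetical token-probability tables (`draft_probs`, `target_probs`) in place of real draft and verifier models:

```python
import random

def verify_drafts(draft_tokens, draft_probs, target_probs, rng):
    """Toy speculative-decoding verification step.

    Accept each drafted token t with probability
    min(1, target_probs[t] / draft_probs[t]) and stop at the first
    rejection. Returns the accepted prefix. In the full algorithm the
    verifier then resamples a replacement token at the rejection point,
    which keeps the output distribution identical to the target model's.
    """
    accepted = []
    for t in draft_tokens:
        ratio = target_probs[t] / draft_probs[t]
        if rng.random() < min(1.0, ratio):
            accepted.append(t)
        else:
            break  # verifier would resample here
    return accepted

# When draft and target agree exactly, every proposal is accepted:
rng = random.Random(0)
probs = {"a": 0.5, "b": 0.5}
print(verify_drafts(["a", "b", "a"], probs, probs, rng))  # ['a', 'b', 'a']
```

The key property (preserved even in this sketch) is that acceptance depends only on the ratio of target to draft probability, which is why the verifier can check all drafted tokens in a single batched forward pass without changing output quality.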

March 28, 2026

Lost in Stories: How LLMs Lose the Thread in Long Narratives

Ask any language model to write a 10,000-word story. On page one, the hero has blue eyes. By page five — brown. In chapter three it’s Thursday; in chapter six, the same day is suddenly Saturday. A character who died on page seven is chatting away on page ten. Sound familiar? The paper “Lost in Stories: Consistency Bugs in Long Story Generation by LLMs” systematically investigates this problem for the first time — and the results are sobering. Even the best models produce an average of one consistency error per 10,000 words, and human experts catch only 17% of them. ...

March 9, 2026

ASkDAgger: How Artificial Intelligence Learns More Effectively by Asking Questions

In a world where robots and AI systems increasingly learn through observation and interaction with humans, the efficiency of this process remains a key challenge. Traditional Imitation Learning methods often require a human teacher to constantly supervise and correct errors, which is time-consuming and costly. A team of researchers led by Jelle Luijkx proposes a groundbreaking solution in their latest paper, “ASkDAgger: Active Skill-level Data Aggregation for Interactive Imitation Learning.” ...

August 8, 2025