The Anatomy of AI Lies: How Language Models Can Deceive Us

We’re used to hearing that AI sometimes “hallucinates”, producing odd or random mistakes. Hallucinations are unintended errors caused by the limits of statistical prediction. But new research goes further: it shows that AI can knowingly choose to lie when deception helps it achieve a goal. The paper Can LLMs Lie? takes us into a world where AI acts more like a strategic agent, capable of manipulating information to maximize outcomes. ...

September 5, 2025

How to Teach AI to Handle Mistakes? Meet ε-Softmax

In the world of artificial intelligence, data is the fuel that powers machine learning models. But what if that fuel is contaminated? Mislabeled data, known as label noise, is a huge problem that can cause even the best algorithms to learn complete nonsense. The paper “ε-Softmax: Approximating One-Hot Vectors for Mitigating Label Noise,” accepted at the prestigious NeurIPS 2024 conference, offers an elegant solution.

The Problem: When a Model Blindly Trusts Its Labels

Let’s imagine we’re training a model to recognize animals. We show it a picture of a cute cat. In the traditional approach, we give it an absolutely certain piece of information, a so-called one-hot vector: ...
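To make the one-hot idea concrete, here is a minimal Python sketch of a hard one-hot target versus an ε-relaxed target that reserves a small probability margin for a possibly wrong label. This is my own illustration, not the paper’s code: the function names, the value of eps, and the uniform relaxation are all assumptions.

```python
import numpy as np

def one_hot(label: int, num_classes: int) -> np.ndarray:
    """Hard target: all probability mass on the (possibly wrong) label."""
    target = np.zeros(num_classes)
    target[label] = 1.0
    return target

def eps_relaxed(label: int, num_classes: int, eps: float = 0.1) -> np.ndarray:
    """Illustrative soft target staying within eps of the one-hot vector:
    the labeled class keeps 1 - eps of the mass, and the remainder is
    spread uniformly over the other classes."""
    target = np.full(num_classes, eps / (num_classes - 1))
    target[label] = 1.0 - eps
    return target

# Classes: [cat, dog, fox]; the annotator said "cat".
print(one_hot(0, 3))      # [1. 0. 0.]      -> blind trust in the label
print(eps_relaxed(0, 3))  # [0.9 0.05 0.05] -> leaves room for label noise
```

Note that this target-side relaxation is closer to classic label smoothing; as its title suggests, the paper’s ε-Softmax instead works by approximating one-hot vectors on the model’s output side. The sketch is only meant to show why absolute certainty in a noisy label is risky.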

August 5, 2025

HGMP: Revolutionizing Complex Graph Analysis with Prompt Learning

In an era dominated by language models and machine learning, structured data is rapidly growing in importance: social networks, biological relationships, and business connections. This data is represented as graphs, which are often not homogeneous: they contain nodes of different types (e.g., people, products, companies) and different types of edges (e.g., “purchased”, “recommended”, “works at”). Processing such heterogeneous graphs requires specialized methods.

What are heterogeneous graphs?

A heterogeneous graph is a structure in which: ...
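To make “nodes of different types and edges of different types” concrete, here is a tiny Python sketch of the data model. The HeteroGraph class and every identifier in it are hypothetical illustrations, not HGMP’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class HeteroGraph:
    """Toy heterogeneous graph: every node and edge carries an explicit type."""
    nodes: dict = field(default_factory=dict)   # node_id -> node_type
    edges: list = field(default_factory=list)   # (src, edge_type, dst)

    def add_node(self, node_id: str, node_type: str) -> None:
        self.nodes[node_id] = node_type

    def add_edge(self, src: str, edge_type: str, dst: str) -> None:
        self.edges.append((src, edge_type, dst))

g = HeteroGraph()
g.add_node("alice", "person")
g.add_node("acme", "company")
g.add_node("laptop", "product")
g.add_edge("alice", "works_at", "acme")
g.add_edge("alice", "purchased", "laptop")
g.add_edge("acme", "recommended", "laptop")

# Several node types and edge types coexist in one structure -- this is
# what makes the graph heterogeneous.
print(g.nodes)  # {'alice': 'person', 'acme': 'company', 'laptop': 'product'}
print(g.edges)
```

A homogeneous graph would collapse all of this into untyped nodes and edges; keeping the types explicit is exactly what specialized heterogeneous-graph methods are built around.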

July 12, 2025