No Prior, No Leakage – can we really reconstruct data from a neural network?

In the era of artificial intelligence, privacy protection is one of the hottest topics. Neural networks often “memorize” pieces of their training data. In extreme cases, an attacker could try to reconstruct the original examples from nothing but the trained model’s parameters (so-called reconstruction attacks). Imagine a medical model that could reveal fragments of sensitive patient images: alarming, right? The new paper “No Prior, No Leakage: Revisiting Reconstruction Attacks in Trained Neural Networks” (arxiv.org) challenges this fear. It argues that without additional knowledge (priors), reconstruction is fundamentally ill-posed: many different training sets are equally consistent with the same parameters. In other words, model parameters alone may not be enough to recover the training data. ...
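To make the threat model concrete, here is a minimal sketch of one well-known attack style, gradient matching in the spirit of Haim et al. (2022). It is not the method or code of the paper above, and the network, data, and hyperparameters are all illustrative assumptions. The intuition: training points tend to sit near stationary points of the trained loss, so the attacker optimizes candidate inputs until the loss gradient with respect to the parameters (nearly) vanishes.

```python
# Hedged sketch of a parameter-only reconstruction attack (gradient matching).
# Everything here is an illustrative assumption, not the paper's method.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the "private" training run: a small binary classifier trained
# on synthetic data that the attacker never observes.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
X = torch.randn(64, 10)
Y = (X[:, 0] > 0).float().unsqueeze(1)
train_opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(300):
    train_opt.zero_grad()
    nn.functional.binary_cross_entropy_with_logits(model(X), Y).backward()
    train_opt.step()

def param_grad_norm(x, y):
    """Squared norm of d(loss)/d(parameters) at candidate points (x, y)."""
    loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    return sum(g.pow(2).sum() for g in grads)

# The attack: start from random candidates and push them toward points where
# the parameter gradient vanishes. Labels are assumed known (an assumption).
x_hat = torch.randn(8, 10, requires_grad=True)
y_hat = torch.ones(8, 1)
attack_opt = torch.optim.Adam([x_hat], lr=0.05)
for _ in range(500):
    attack_opt.zero_grad()
    param_grad_norm(x_hat, y_hat).backward()
    attack_opt.step()

# Nothing in this objective distinguishes real training points from the many
# other inputs with vanishing gradients; that gap is what a prior must fill.
print("final gradient-matching objective:", float(param_grad_norm(x_hat, y_hat)))
```

Note that the objective alone is underdetermined: many candidate inputs drive the parameter gradient to zero equally well, which is exactly the kind of ambiguity the paper formalizes when no prior is available.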

September 26, 2025

Systematization of Knowledge: Data Minimization in Machine Learning

Modern systems based on Machine Learning (ML) are ubiquitous, from credit scoring to fraud detection. The conventional wisdom is that more data leads to better models. However, this data-centric approach directly conflicts with a fundamental legal principle: data minimization (DM). The principle, enshrined in key regulations like the GDPR in Europe and the CPRA in California, mandates that personal data be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”. ...
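One way to see what “limited to what is necessary” can mean in practice is greedy feature elimination: keep dropping input features as long as validation performance stays within a tolerance of the full-data baseline. The sketch below is a hedged illustration under assumed choices (the scikit-learn breast-cancer dataset, a logistic-regression model, a one-point accuracy tolerance), not a procedure prescribed by the paper.

```python
# Hedged sketch: operationalizing data minimization as greedy backward
# feature elimination. Dataset, model, and tolerance are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def score(features):
    """Validation accuracy using only the given feature columns."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr[:, features], y_tr)
    return model.score(X_val[:, features], y_val)

kept = list(range(X.shape[1]))
baseline = score(kept)
tolerance = 0.01  # accept at most a 1-point accuracy drop (assumption)

improved = True
while improved and len(kept) > 1:
    improved = False
    for f in list(kept):
        trial = [k for k in kept if k != f]
        if score(trial) >= baseline - tolerance:
            kept = trial  # feature f was not "necessary": minimize it away
            improved = True
            break

print(f"kept {len(kept)}/{X.shape[1]} features at accuracy {score(kept):.3f}")
```

The design choice worth noting is that “necessary” is defined relative to the stated purpose (here, validation accuracy on the classification task), which mirrors how DM ties the scope of collected data to the purpose of processing.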

August 15, 2025