Denoising Diffusion Model with Adversarial Learning for Unsupervised Anomaly Detection on Brain MRI Images (2025)

Oct 2025

Abstract

This paper proposes the Adversarial Denoising Diffusion Model (ADDM). Diffusion models excel at generating high-quality samples, outperforming other generative models, and this strong sampling ability also yields outstanding results in medical image anomaly detection (AD). However, the performance of diffusion model-based methods varies strongly with the number of sampling steps, and the time cost of generating good-quality samples is significantly higher than that of other generative models. We propose ADDM, a diffusion model-based AD method trained with adversarial learning that maintains high-quality sample generation while significantly reducing the number of sampling steps. The proposed adversarial objective classifies model-denoised samples against samples to which scheduled Gaussian noise has been added at a specific sampling step. Unlike the standard diffusion loss, which is defined in noise space to minimise the distance between the predicted and scheduled noise, the adversarial objective is defined in sample space, so the diffusion model explicitly learns semantic information about the samples. Our experiments demonstrate that adversarial learning achieves data sampling performance similar to the DDPM with far fewer sampling steps. Experimental results show that the proposed ADDM outperforms existing unsupervised AD methods on brain MRI images. In particular, on 22 T1-weighted MRI scans provided by the Centre for Clinical Brain Sciences at the University of Edinburgh, ADDM matches the performance of other DDPM-based AD methods with 50% fewer sampling steps and achieves a 6.2% higher Dice score with the same number of sampling steps.
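The combined objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the beta schedule, the `lambda_adv` weighting term, and the non-saturating form of the adversarial loss are assumptions; `forward_noise` is the standard DDPM forward process q(x_t | x_0), and the adversarial term rewards the denoiser when the discriminator labels its denoised sample as a genuinely noised one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule as in standard DDPM (illustrative values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

def forward_noise(x0, t):
    """q(x_t | x_0): add scheduled Gaussian noise at step t."""
    noise = rng.standard_normal(x0.shape)
    a = alphas_cumprod[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise, noise

def ddpm_noise_loss(pred_noise, true_noise):
    """Standard diffusion objective: MSE in noise space."""
    return np.mean((pred_noise - true_noise) ** 2)

def adversarial_loss(disc_logits_on_denoised):
    """Non-saturating generator loss (assumed form): the denoiser is
    rewarded when the discriminator assigns high 'real' logits to its
    model-denoised samples in sample space."""
    return np.mean(np.log1p(np.exp(-disc_logits_on_denoised)))

def addm_loss(pred_noise, true_noise, disc_logits_on_denoised, lambda_adv=0.1):
    """Combined objective: noise-space MSE plus sample-space adversarial
    term. lambda_adv is a hypothetical weighting hyperparameter."""
    return (ddpm_noise_loss(pred_noise, true_noise)
            + lambda_adv * adversarial_loss(disc_logits_on_denoised))
```

In a full training loop the discriminator would be updated in alternation, classifying forward-noised samples as real and model-denoised samples as fake, while the denoiser minimises `addm_loss`.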


Keywords

Denoising Diffusion Model, Adversarial Learning, Anomaly Detection, Brain MRI, Unsupervised Learning


Key Contributions

  • Proposes a hybrid diffusion–adversarial framework to model the distribution of “normal” brain MRI without labels.

  • Detects anomalies by contrasting reconstructive behaviors between diffusion prediction and adversarial refinement.

  • Improves sensitivity to subtle structural irregularities common in medical anomaly detection.

  • Achieves SOTA unsupervised anomaly detection accuracy across multiple MRI benchmarks.
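At inference time, reconstruction-based AD methods of this kind typically score anomalies by comparing the input against its reconstruction and evaluate the resulting mask with the Dice metric reported in the abstract. A minimal sketch, assuming a simple absolute-error anomaly map and a fixed threshold (both illustrative choices, not the paper's exact pipeline):

```python
import numpy as np

def anomaly_map(x, x_recon):
    """Pixel-wise absolute reconstruction error as the anomaly score."""
    return np.abs(x - x_recon)

def dice(pred_mask, gt_mask, eps=1e-8):
    """Dice overlap between binary masks (the evaluation metric used)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + gt_mask.sum() + eps)

# Toy example: a bright "lesion" that the model reconstructs as healthy
# tissue shows up in the error map and is segmented by thresholding.
x = np.zeros((4, 4)); x[1:3, 1:3] = 1.0   # image with a bright anomaly
x_recon = np.zeros((4, 4))                # pseudo-healthy reconstruction
pred = anomaly_map(x, x_recon) > 0.5      # threshold is illustrative
gt = x > 0.5
print(round(dice(pred, gt), 3))           # prints 1.0 (perfect overlap)
```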

