Description
Alzheimer’s disease (AD) is an irreversible neurodegenerative disorder whose clinical management and research rely heavily on longitudinal neuroimaging, particularly magnetic resonance imaging (MRI) and positron emission tomography (PET). However, real-world clinical datasets are frequently affected by missing imaging time points due to patient dropout, irregular follow-ups, or logistical constraints, which limits the ability of predictive models to capture disease trajectories accurately. This work addresses the problem of missing longitudinal neuroimaging data by developing a generative model capable of synthesizing realistic, biologically plausible MRI and PET images conditioned on available patient information.
The proposed approach explores state-of-the-art generative techniques, with a focus on diffusion-based models. A baseline Denoising Diffusion Probabilistic Model (DDPM) will first be implemented to generate unconditional brain images. This will then be extended to a conditional Stable Diffusion framework that generates missing MRI or PET scans by conditioning on multimodal clinical information, including cognitive test scores, biomarkers, and prior imaging data. To further enforce anatomical consistency and preserve patient-specific structural characteristics, segmentation-guided diffusion will be incorporated, conditioning the generation process on brain segmentation masks.
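As a concrete illustration of the DDPM baseline, the sketch below implements the closed-form forward noising process and the standard noise-prediction training loss of Ho et al. (2020). The denoising network `eps_model` (e.g. a U-Net), the linear beta schedule, and the optional `cond` argument standing in for clinical conditioning are illustrative assumptions, not the project's final design.

```python
import torch
import torch.nn.functional as F

# Linear beta schedule over T diffusion steps (values as in Ho et al., 2020).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative product \bar{alpha}_t

def q_sample(x0, t, noise):
    """Forward process in closed form: x_t = sqrt(ab_t) * x_0 + sqrt(1 - ab_t) * eps."""
    ab = alpha_bars[t].view(-1, 1, 1, 1)  # broadcast over (B, C, H, W)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

def ddpm_loss(eps_model, x0, cond=None):
    """DDPM objective: train eps_model to predict the noise added at a random step t.

    `eps_model` is a placeholder denoiser; `cond` is a hypothetical stand-in
    for clinical conditioning (scores, biomarkers) in the conditional variant.
    """
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    pred = eps_model(x_t, t) if cond is None else eps_model(x_t, t, cond)
    return F.mse_loss(pred, noise)
```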
The models will be trained and evaluated on large-scale longitudinal datasets from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and OASIS-3. Performance will be assessed through a comprehensive evaluation framework combining image-quality metrics, such as mean squared error (MSE), the structural similarity index (SSIM), and the Fréchet Inception Distance (FID), with biologically informed criteria that examine anatomical fidelity and consistency with known patterns of Alzheimer’s disease progression.
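As a minimal sketch of the image-quality side of this framework, the snippet below computes MSE and SSIM for a pair of 2D slices with scikit-image; the random arrays are placeholders for a real and a synthesized scan. FID, by contrast, compares feature distributions over whole image sets rather than single pairs, and is typically computed with a dedicated package.

```python
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

# Placeholder 2D slices standing in for a real and a synthesized scan,
# both scaled to [0, 1].
real = np.random.rand(192, 192).astype(np.float32)
fake = np.random.rand(192, 192).astype(np.float32)

mse = mean_squared_error(real, fake)
ssim = structural_similarity(real, fake, data_range=1.0)
print(f"MSE: {mse:.4f}  SSIM: {ssim:.4f}")

# FID operates on Inception features of entire image sets; in practice it is
# computed with a package such as `pytorch-fid` or `torchmetrics`.
```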
| Field of Research/Work | Beyond Physics |
|---|---|