MIT scientists investigate memorization risk in the age of clinical AI
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.

Alex Ouyang | Abdul Latif Jameel Clinic for Machine Learning in Health
Publication Date: January 5, 2026

Caption: MIT scientists are developing tests to ensure that AI models aren’t memorizing sensitive patient information. Credits: Image: Alex Ouyang/MIT Jameel Clinic, with Adobe Stock

What is patient privacy for? The Hippocratic Oath, thought to be one of the earliest and most widely known medical ethics texts in the world, reads: “Whatever I see or hear in the lives of my patients, whether in connection with my professional practice or not, which ought not to be spoken of outside, I will keep secret, as considering all such things to be private.”

As privacy becomes increasingly scarce in the age of data-hungry algorithms and cyberattacks, medicine is one of the few remaining domains where confidentiality remains central to practice, enabling patients to trust their physicians with sensitive information.

But a paper co-authored by MIT researchers investigates how artificial intelligence models trained on de-identified electronic health records (EHRs) can memorize patient-specific information. The work, which was recently presented at the 2025 Conference on Neural Information Processing Systems...
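The excerpt does not describe the paper's actual methodology, but one generic way memorization is often probed is to prompt a trained model with the prefix of a training record and check whether it regenerates the held-out suffix verbatim. The sketch below illustrates that idea only; the `generate` callable, record format, and exact-match criterion are all illustrative assumptions, not the authors' method.

```python
def memorization_probe(generate, records, prefix_len=20):
    """Return the fraction of records whose held-out suffix the model
    reproduces exactly when prompted with the record's prefix.

    `generate(prompt)` is any text-completion callable (hypothetical here);
    a real study would wrap an actual EHR-trained model.
    """
    leaked = 0
    for rec in records:
        prefix, suffix = rec[:prefix_len], rec[prefix_len:]
        # Exact-match leakage criterion; real probes may use looser metrics.
        if generate(prefix).startswith(suffix):
            leaked += 1
    return leaked / len(records)


# Toy stand-in: synthetic records and a "model" that memorized only the first.
TRAINING_SET = [
    "Patient 0042: dx hypertension; rx lisinopril 10mg daily",
    "Patient 0043: dx type 2 diabetes; rx metformin 500mg bid",
]
MEMO = {TRAINING_SET[0][:20]: TRAINING_SET[0][20:]}


def toy_generate(prompt):
    # Returns the memorized continuation if it has one, otherwise nothing.
    return MEMO.get(prompt, "")


rate = memorization_probe(toy_generate, TRAINING_SET)
print(f"extraction rate: {rate:.2f}")  # one of two records leaked -> 0.50
```

A model that leaks nothing would score 0.0 on such a probe; any positive rate flags records whose contents survive de-identification in extractable form.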