MIT scientists investigate memorization risk in the age of clinical AI
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
MIT community members made headlines with key research advances and their efforts to tackle pressing challenges.
BoltzGen generates protein binders for any biological target from scratch, expanding AI’s reach from understanding biology toward engineering it.
Professors Facundo Batista and Dina Katabi, along with three additional MIT alumni, are honored for their outstanding professional achievement and commitment to service.
MIT CSAIL and McMaster researchers used a generative AI model to reveal how a narrow-spectrum antibiotic attacks disease-causing bacteria, speeding up a process that normally takes years.
MIT CSAIL researchers developed a tool that can model the shape and movements of fetuses in 3D, potentially assisting doctors in finding abnormalities and making diagnoses.
Professor Caroline Uhler discusses her work at the Schmidt Center, thorny problems in math, and the ongoing quest to understand some of the most complex interactions in biology.
VaxSeer uses machine learning to predict virus evolution and antigenicity, aiming to make vaccine selection more accurate and less reliant on guesswork.
CellLENS reveals hidden patterns in cell behavior within tissues, offering deeper insights into cell heterogeneity — vital for advancing cancer immunotherapy.
Launched with a gift from the Biswas Family Foundation, the Biswas Postdoctoral Fellowship Program will support postdocs in health and life sciences.