MIT scientists investigate memorization risk in the age of clinical AI
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
BoltzGen generates protein binders for any biological target from scratch, expanding AI’s reach from understanding biology toward engineering it.
MIT CSAIL and McMaster researchers used a generative AI model to reveal how a narrow-spectrum antibiotic attacks disease-causing bacteria, speeding up a process that normally takes years.
VaxSeer uses machine learning to predict virus evolution and antigenicity, aiming to make vaccine selection more accurate and less reliant on guesswork.
The framework helps clinicians choose phrases that more accurately reflect the likelihood that certain conditions are present in X-rays.
A deep neural network called CHAIS may soon replace invasive procedures like catheterization as the new gold standard for monitoring heart health.
Starting with a single frame in a simulation, a new system uses generative AI to emulate the dynamics of molecules, connecting static molecular structures and turning blurry snapshots into videos.
Using this model, researchers may be able to identify antibody drugs that can target a variety of infectious diseases.
With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage broader innovation.
Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.