MIT scientists investigate memorization risk in the age of clinical AI
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
CSAIL researchers find that even “untrainable” neural nets can learn effectively when their guidance method steers them with another network’s built-in biases.
MIT-IBM Watson AI Lab researchers developed an expressive architecture that provides better state tracking and sequential reasoning in LLMs over long texts.
The “self-steering” DisCIPL system directs small models to work together on tasks with constraints, like itinerary planning and budgeting.
The technique can help scientists in economics, public health, and other fields understand whether to trust the results of their experiments.
MIT CSAIL and LIDS researchers developed a mathematically grounded system that lets soft robots deform, adapt, and interact with people and objects without violating safety limits.
BoltzGen generates protein binders for any biological target from scratch, expanding AI’s reach from understanding biology toward engineering it.
Associate Professor Phillip Isola studies the ways in which intelligent machines “think,” in an effort to safely integrate AI into human society.
MIT PhD students who interned with the MIT-IBM Watson AI Lab Summer Program are pushing AI tools to be more flexible, efficient, and grounded in truth.
The coding framework uses modular concepts and simple synchronization rules to make software clearer, safer, and easier for LLMs to generate.