MIT scientists investigate memorization risk in the age of clinical AI
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
MIT community members made headlines with key research advances and their efforts to tackle pressing challenges.
CSAIL researchers find that even “untrainable” neural nets can learn effectively when their guidance method steers them with another network’s built-in biases.
MIT-IBM Watson AI Lab researchers developed an expressive architecture that provides better state tracking and sequential reasoning in LLMs over long texts.
Assistant Professor Yunha Hwang utilizes microbial genomes to examine the language of biology. Her appointment reflects MIT’s commitment to exploring the intersection of genetics research and AI.
Nuclear waste continues to be a bottleneck in the widespread use of nuclear energy, so doctoral student Dauren Sarsenbayev is developing models to address the problem.
The approach could apply to more complex tissues and organs, helping researchers to identify early signs of disease.
The “self-steering” DisCIPL system directs small models to work together on tasks with constraints, like itinerary planning and budgeting.
The new certificate program will equip naval officers with skills needed to solve the military’s hardest problems.
The technique can help scientists in economics, public health, and other fields understand whether to trust the results of their experiments.