MIT scientists investigate memorization risk in the age of clinical AI
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
MIT community members made headlines with key research advances and their efforts to tackle pressing challenges.
Using their guidance method, CSAIL researchers find that even “untrainable” neural networks can learn effectively when guided by another network’s built-in biases.
MIT-IBM Watson AI Lab researchers developed an expressive architecture that provides better state tracking and sequential reasoning in LLMs over long texts.
Assistant Professor Yunha Hwang uses microbial genomes to examine the language of biology. Her appointment reflects MIT’s commitment to exploring the intersection of genetics research and AI.
The “self-steering” DisCIPL system directs small models to work together on tasks with constraints, like itinerary planning and budgeting.
The new certificate program will equip naval officers with skills needed to solve the military’s hardest problems.
The technique can help scientists in economics, public health, and other fields understand whether to trust the results of their experiments.
By stacking multiple active components made from new materials on the back end of a computer chip, the new approach reduces the energy wasted during computation.
Postdoc Zongyi Li, Associate Professor Tess Smidt, and seven additional alumni will be supported as they develop AI to tackle difficult problems.