MIT scientists investigate memorization risk in the age of clinical AI
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
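The announcement does not include implementation details, but the general idea of probing a model for memorization of patient records can be illustrated with a simple prefix-completion test: give a model the beginning of a record it may have seen during training and check whether it reproduces the true continuation verbatim. The sketch below is a minimal, generic illustration of that idea under stated assumptions (a Hugging Face causal language model, with "gpt2" as a stand-in and a synthetic record); it is not the evaluation protocol used in the MIT study.

# Minimal sketch of a prefix-completion memorization probe.
# Assumptions: a Hugging Face causal LM ("gpt2" is a placeholder) and a
# synthetic record; this is NOT the method described in the MIT study.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def appears_memorized(model, tokenizer, record: str, prefix_chars: int = 60) -> bool:
    """Return True if the model greedily completes the record's prefix with the
    record's actual continuation, a possible sign of verbatim memorization."""
    prefix, continuation = record[:prefix_chars], record[prefix_chars:]
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=len(tokenizer(continuation)["input_ids"]),
            do_sample=False,  # greedy decoding: the model's most likely continuation
            pad_token_id=tokenizer.eos_token_id,
        )
    generated = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return generated.strip().startswith(continuation.strip()[:40])

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2")
    # Synthetic, made-up record used purely for illustration.
    record = ("Patient ID 0042, admitted 2021-03-14 with atrial fibrillation, "
              "prescribed apixaban 5 mg twice daily.")
    print("Possible memorization:", appears_memorized(lm, tok, record))

In practice such probes are run over many records and the fraction of exact or near-exact completions is reported, since a single match can occur by chance; the threshold and matching rule here are illustrative choices, not the study's.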