MIT scientists investigate memorization risk in the age of clinical AI
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
Using a new guidance method, CSAIL researchers find that even “untrainable” neural nets can learn effectively when guided by another network’s built-in biases.
The “self-steering” DisCIPL system directs small models to work together on tasks with constraints, like itinerary planning and budgeting.
The technique can help scientists in economics, public health, and other fields understand whether to trust the results of their experiments.
With insect-like speed and agility, the tiny robot could someday aid in search-and-rescue missions.
Large language models can learn to mistakenly link certain sentence patterns with specific topics, and may then repeat these patterns instead of reasoning.
A new approach developed at MIT could help a search-and-rescue robot navigate an unpredictable environment by rapidly generating an accurate map of its surroundings.
MIT PhD student and CSAIL researcher Justin Kay describes his work combining AI and computer vision systems to monitor the ecosystems that support our planet.
Economics doctoral student Whitney Zhang investigates how technologies and organizational decisions shape labor markets.
With SCIGEN, researchers can steer AI models to create materials with exotic properties for applications like quantum computing.