MIT scientists investigate memorization risk in the age of clinical AI
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
The MIT Ethics of Computing Research Symposium showcases projects at the intersection of technology, ethics, and social responsibility.
A new book from Professor Munther Dahleh details the creation of a unique kind of transdisciplinary center, uniting many specialties through a common need for data science.
The winning essay of the Envisioning the Future of Computing Prize puts health care disparities at the forefront.
As artificial intelligence develops, we must ask vital questions about ourselves and our society, Ben Vinson III contends in the 2025 Compton Lecture.
Felice Frankel discusses the implications of generative AI when communicating science visually.
In a new MIT course co-taught by EECS and philosophy professors, students tackle moral dilemmas of the digital age.
“We need to both ensure humans reap AI’s benefits and that we don’t lose control of the technology,” says senior Audrey Lorvo.
The consortium will bring researchers and industry together to focus on impact.
Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.