Researchers find that nonclinical information in patient messages — like typos, extra white space, and colorful language — reduces the accuracy of an AI model.
Bringing meaning into technology deployment
The MIT Ethics of Computing Research Symposium showcases projects at the intersection of technology, ethics, and social responsibility.
Envisioning a future where health care tech leaves some behind
The winning essay of the Envisioning the Future of Computing Prize puts health care disparities at the forefront.
3 Questions: How to help students recognize potential bias in their AI datasets
Courses on developing AI models for health care need to focus more on identifying and addressing bias, says Leo Anthony Celi.
With AI, researchers predict the location of virtually any protein within a human cell
Trained with a joint understanding of protein and cell behavior, the model could help with diagnosing disease and developing new drugs.
Study shows vision-language models can’t handle queries with negation words
Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.
Q&A: A roadmap for revolutionizing health care through data-driven innovation
A new book coauthored by MIT’s Dimitris Bertsimas explores how analytics is driving decisions and outcomes in health care.
Making AI models more trustworthy for high-stakes settings
A new method helps convey uncertainty more precisely, which could give researchers and medical clinicians better information to make decisions.
The framework helps clinicians choose phrases that more accurately reflect the likelihood that certain conditions are present in X-rays.
Can deep learning transform heart failure prevention?
A deep neural network called CHAIS may soon replace invasive procedures like catheterization as the new gold standard for monitoring heart health.