Why it’s critical to move beyond overly aggregated machine-learning metrics
New research detects hidden evidence of mistaken correlations — and provides a method to improve accuracy.
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.

Professors Facundo Batista and Dina Katabi, along with three additional MIT alumni, are honored for their outstanding professional achievement and commitment to service.

MIT CSAIL and McMaster researchers used a generative AI model to reveal how a narrow-spectrum antibiotic attacks disease-causing bacteria, speeding up a process that normally takes years.

By enabling rapid annotation of areas of interest in medical images, the tool can help scientists study new treatments or map disease progression.

MIT CSAIL researchers developed a tool that can model the shape and movements of fetuses in 3D, potentially assisting doctors in finding abnormalities and making diagnoses.

VaxSeer uses machine learning to predict virus evolution and antigenicity, aiming to make vaccine selection more accurate and less reliant on guesswork.

Tools build on years of research at Lincoln Laboratory to develop a rapid brain-health screening capability, and may also be applicable to civilian settings such as sporting events and medical offices.

The Language/AI Incubator, an MIT Human Insight Collaborative project, is investigating how AI can improve communication between patients and practitioners.

Launched with a gift from the Biswas Family Foundation, the Biswas Postdoctoral Fellowship Program will support postdocs in health and life sciences.