Why it’s critical to move beyond overly aggregated machine-learning metrics
New research detects hidden evidence of mistaken correlations — and provides a method to improve accuracy.
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
Large language models can learn to mistakenly link certain sentence patterns with specific topics — and may then repeat these patterns instead of reasoning.
MIT PhD students who interned with the MIT-IBM Watson AI Lab Summer Program are pushing AI tools to be more flexible, efficient, and grounded in truth.
The team used two different AI approaches to design novel antibiotics, including one that showed promise against MRSA.
CellLENS reveals hidden patterns in cell behavior within tissues, offering deeper insights into cell heterogeneity — vital for advancing cancer immunotherapy.
The Language/AI Incubator, an MIT Human Insight Collaborative project, is investigating how AI can improve communication between patients and practitioners.
The MIT-MGB Seed Program, launched with support from Analog Devices Inc., will fund joint research projects that advance technology and clinical research.
Researchers find that nonclinical information in patient messages — like typos, extra white space, and colorful language — reduces the accuracy of an AI model.
Courses on developing AI models for health care need to focus more on identifying and addressing bias, says Leo Anthony Celi.