Improving AI models’ ability to explain their predictions
A new approach could help users know whether to trust a model’s predictions in safety-critical applications like health care and autonomous driving.
The approach could help engineers tackle extremely complex design problems, from power grid optimization to vehicle design.
Lincoln Laboratory intern Ivy Mahncke developed and tested algorithms to help human divers and robots navigate underwater.
By leveraging idle computing time, researchers can double the speed of model training while preserving accuracy.
To help generative AI models create durable, real-world accessories and decor, the PhysiOpt system runs physics simulations and makes subtle tweaks to its 3D blueprints.
By providing holistic information on a cell, an AI-driven method could help scientists better understand disease mechanisms and plan experiments.
Strahinja Janjusevic brings an international perspective and US Naval Academy education to his graduate research in the MIT Technology and Policy Program.
Research from the MIT Center for Constructive Communication finds leading AI models perform worse for users with lower English proficiency, less formal education, and non-US origins.
A new method developed at MIT could root out vulnerabilities and improve LLM safety and performance.
By minimizing the need to drive around looking for a parking spot, this technique can save drivers up to 35 minutes and give them a realistic estimate of total travel time.