New technique makes AI models leaner and faster while they’re still learning
Researchers use control theory to shed unnecessary complexity from AI models during training, cutting compute costs without sacrificing performance.
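The article describes the method only at a high level, so the following is a rough, hypothetical illustration of the general idea of shedding unnecessary model complexity during training, not the researchers' actual control-theoretic technique. It trains a small linear model and periodically prunes the smallest-magnitude weights, so the model gets leaner while it is still learning. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: generic magnitude pruning during training,
# standing in for the (undisclosed) control-theoretic method.

rng = np.random.default_rng(0)

# Tiny regression problem: only 3 of the 20 input features actually matter.
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 0.5]
y = X @ w_true

w = rng.normal(scale=0.1, size=d)
lr = 0.05
mask = np.ones(d)  # 1 = weight still in the model, 0 = pruned away

for step in range(500):
    grad = X.T @ (X @ w - y) / n
    w = (w - lr * grad) * mask  # pruned weights stay zero
    # Every 100 steps, permanently drop some of the smallest-magnitude
    # weights that are still active -- the "leaner while learning" part.
    if step % 100 == 99:
        active = np.flatnonzero(mask)
        n_drop = min(4, active.size - 3)  # keep at least 3 weights
        if n_drop > 0:
            drop = active[np.argsort(np.abs(w[active]))[:n_drop]]
            mask[drop] = 0.0
            w[drop] = 0.0

print("active weights:", int(mask.sum()), "of", d)
print("final loss:", float(np.mean((X @ w - y) ** 2)))
```

Because the irrelevant weights converge toward zero early in training, each pruning pass tends to remove them rather than the useful ones, leaving a much smaller model with essentially no loss in accuracy.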