Exposing biases, moods, personalities, and abstract concepts hidden in large language models
A new method developed at MIT could root out vulnerabilities and improve LLM safety and performance.
Over the course of long conversations, an LLM can begin mirroring the user’s viewpoints, potentially reducing accuracy or creating a virtual echo chamber.