Exposing biases, moods, personalities, and abstract concepts hidden in large language models
A new method developed at MIT could root out vulnerabilities and improve LLM safety and performance.
Opening a new window on the brainstem, a new tool reliably and finely resolves distinct nerve bundles in live diffusion MRI scans, revealing signs of injury or disease.
EnCompass executes AI agent programs by backtracking and making multiple attempts, then selecting the best set of LLM-generated outputs. It could help coders work with AI agents more efficiently.
While the growing energy demands of AI are worrying, some AI techniques can also help make power grids cleaner and more efficient.
Using their guidance method, CSAIL researchers find that even “untrainable” neural nets can learn effectively when guided by another network’s built-in biases.
MIT-IBM Watson AI Lab researchers developed an expressive architecture that improves state tracking and sequential reasoning in LLMs over long texts.
Nuclear waste remains a bottleneck to the widespread use of nuclear energy, so doctoral student Dauren Sarsenbayev is developing models to address the problem.
The “self-steering” DisCIPL system directs small models to work together on constrained tasks, such as itinerary planning and budgeting.
Postdoc Zongyi Li, Associate Professor Tess Smidt, and seven additional alumni will be supported as they develop AI to tackle difficult problems.
A new technique enables LLMs to dynamically adjust how much computation they use for reasoning based on the difficulty of the question.