Guided learning lets “untrainable” neural networks realize their potential
CSAIL researchers find that even “untrainable” neural networks can learn effectively when their guidance method steers training with another network’s built-in biases.
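The summary above only names the idea, so what follows is a minimal, hypothetical sketch of one way such “guidance” could be set up in PyTorch, not the CSAIL team’s actual method: a plain MLP (standing in for a network that struggles to train on its own) gets an auxiliary loss that pulls its hidden features toward those of a small convolutional “guide” whose architecture carries useful spatial biases. The class names (GuideCNN, TargetMLP), the guided_step helper, the frozen guide, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical guidance sketch, NOT the published CSAIL method.
# A hard-to-train target network is nudged toward the internal
# representations of a guide network whose architecture (a CNN)
# carries the "built-in biases" mentioned above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuideCNN(nn.Module):
    """Small convolutional guide; its spatial inductive bias is what we transfer."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 16 * 4 * 4 = 256 features
        )
        self.head = nn.Linear(256, 10)

    def forward(self, x):
        h = self.features(x)          # guide's internal representation
        return h, self.head(h)

class TargetMLP(nn.Module):
    """Plain MLP that, by assumption, trains poorly from the task loss alone."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
        self.proj = nn.Linear(256, 256)   # maps hidden features to the guide's size
        self.head = nn.Linear(256, 10)

    def forward(self, x):
        h = self.body(x)
        return self.proj(h), self.head(h)

def guided_step(target, guide, x, y, optimizer, align_weight=1.0):
    """One training step: task loss plus alignment to the frozen guide's features."""
    with torch.no_grad():
        guide_h, _ = guide(x)
    target_h, logits = target(x)
    loss = F.cross_entropy(logits, y) + align_weight * F.mse_loss(target_h, guide_h)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch on random tensors standing in for MNIST-like inputs.
guide, target = GuideCNN(), TargetMLP()
opt = torch.optim.Adam(target.parameters(), lr=1e-3)
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(guided_step(target, guide, x, y, opt))
```

In this toy setup the guide is kept frozen and only the target is updated; whether and how the guide itself is trained in the actual work is not specified here and is left as an assumption.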