Researchers discover a shortcoming that makes LLMs less reliable
Large language models can learn to mistakenly link certain sentence patterns with specific topics — and may then repeat these patterns instead of reasoning.