Language
A new study shows that LLMs represent different data types based on their underlying meaning and that they reason about data in their dominant language.
LLMs develop their own understanding of reality as their language abilities improve
In controlled experiments, MIT CSAIL researchers discover simulations of reality developing deep within LLMs, indicating an understanding of language beyond simple mimicry.
MIT researchers advance automated interpretability in AI models
MAIA is a multimodal agent that can iteratively design experiments to better understand various components of AI systems.
Reasoning skills of large language models are often overestimated
New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, raising questions about whether they truly reason or merely rely on memorization.
Helping nonexperts build advanced generative AI models
MosaicML, co-founded by an MIT alumnus and a professor, made deep-learning models faster and more efficient. Its acquisition by Databricks broadened that mission.
Technique improves the reasoning capabilities of large language models
Combining natural language and programming, the method enables LLMs to solve numerical, analytical, and language-based tasks transparently.