Study shows vision-language models can’t handle queries with negation words
Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.