Unpacking the bias of large language models
In a new study, researchers discover the root cause of a type of bias in LLMs, paving the way for more accurate and reliable AI systems.
The MIT Ethics of Computing Research Symposium showcases projects at the intersection of technology, ethics, and social responsibility.
The winning essay of the Envisioning the Future of Computing Prize puts health care disparities at the forefront.
SketchAgent, a drawing system developed by MIT CSAIL researchers, sketches up concepts stroke-by-stroke, teaching language models to visually express concepts on their own and collaborate with humans.
PhD student Sarah Alnegheimish wants to make machine learning systems accessible.
Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.
MAD Fellow Alexander Htet Kyaw connects humans, machines, and the physical world using AI and augmented reality.
TactStyle, a system developed by CSAIL researchers, uses image prompts to replicate both the visual appearance and tactile properties of 3D models.
Professor of media technology honored for research in human-computer interaction that is considered both fundamental and influential.
New research could allow a person to correct a robot’s actions in real-time, using the kind of feedback they’d give another human.