Exposing biases, moods, personalities, and abstract concepts hidden in large language models
A new method developed at MIT could root out vulnerabilities and improve LLM safety and performance.