Privacy

Artificial Intelligence, Cybersecurity, Machine Learning

MIT scientists investigate memorization risk in the age of clinical AI

New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.

Artificial Intelligence, Cybersecurity, Machine Learning

The approach maintains an AI model’s accuracy while ensuring attackers can’t extract secret information.

Artificial Intelligence, Machine Learning

Bridging philosophy and AI to explore computing ethics

In a new MIT course co-taught by EECS and philosophy professors, students tackle moral dilemmas of the digital age.

Artificial Intelligence, Industry

Introducing the MIT Generative AI Impact Consortium

The consortium will bring researchers and industry together to focus on the real-world impact of generative AI.

Artificial Intelligence, Cybersecurity, Machine Learning

New security protocol shields data from attackers during cloud-based computation

The technique leverages quantum properties of light to guarantee security while preserving the accuracy of a deep-learning model.

Artificial Intelligence, Machine Learning

Researchers developed an easy-to-use tool that helps an AI practitioner find data suited to the purpose of their model, which could improve accuracy and reduce bias.

Artificial Intelligence, Cybersecurity, Machine Learning, Social Media

3 Questions: How to prove humanity online

AI agents could soon become indistinguishable from humans online. Could "personhood credentials" protect people against digital impostors?
