MIT Study Finds Generative AI Mirrors Cultural Biases in Language Responses

New research from MIT Sloan School of Management shows that generative AI models do not offer culturally neutral answers—instead, their responses reflect deep-seated cultural patterns depending on the language in which a question is asked.

The study, published in Nature Human Behaviour, analyzed how generative AI systems like OpenAI’s ChatGPT and Baidu’s ERNIE respond to the same set of prompts when written in English versus Chinese. Researchers found that responses differed in consistent and measurable ways, aligning with psychological theories of social orientation and cognitive style.
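The basic setup is easy to picture in code. Below is a minimal sketch, not the authors' code, of sending the same question to a chat model in English and in Chinese and comparing the answers. It assumes the official `openai` Python client with an `OPENAI_API_KEY` set in the environment; the model name and prompts are illustrative.

```python
# Minimal sketch: pose the same question in two languages and compare replies.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "English": "Describe yourself in a few sentences.",
    "Chinese": "请用几句话描述一下你自己。",  # the same question, asked in Chinese
}

for language, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice; the study tested ChatGPT and ERNIE
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {language} ---")
    print(response.choices[0].message.content)
```

The study's actual analysis scored such paired responses against established cultural-psychology measures rather than comparing them informally.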

“Our findings suggest that generative AI is not culturally neutral,” said Jackson Lu, associate professor at MIT Sloan and lead author of the study. “As people increasingly rely on these AI models, it is crucial to be aware of the cultural tendencies embedded within them.”

The team—including Lu, visiting PhD student Lesley Luyang Song, and PhD student Lu Doris Zhang—tested the models using established measures in cultural psychology. The results showed that when prompted in Chinese, the models tended to reflect a more interdependent social orientation and a holistic cognitive style. In contrast, English-language responses more often reflected independent and analytic patterns—traits commonly associated with Western cultures.

These tendencies aren’t just academic. The researchers demonstrated practical implications by prompting the models to generate advertising slogans. When using Chinese, the AI favored slogans with collectivist themes, such as “Your family’s future, your promise.” In English, slogans tended to be more individualistic, like “Your future, your peace of mind.”

The team also found that these cultural cues could be manipulated. When ChatGPT was asked to “assume the perspective of a Chinese person,” even English responses shifted toward interdependence and holistic reasoning—suggesting that AI’s cultural lens can be primed through context.
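In practice, this kind of priming amounts to a single system message. The sketch below illustrates the idea under stated assumptions: the system-prompt wording is illustrative rather than the study's exact text, and the model name is a placeholder.

```python
# Minimal sketch of persona priming: a system message asks the model to adopt
# a Chinese cultural perspective, so even an English prompt may shift toward
# more interdependent, collectivist framing.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        # Illustrative wording, not the study's exact prompt
        {"role": "system", "content": "Assume the perspective of a Chinese person."},
        {"role": "user", "content": "Write a short slogan for a life-insurance ad."},
    ],
)
print(response.choices[0].message.content)
```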

“This awareness of a lack of cultural neutrality matters not only for developers of AI models, but also for everyday users,” said Zhang. “The cultural values embedded in generative AI may gradually bias speakers of a given language toward the norms of linguistically dominant cultures.”

The implications extend beyond direct users. Media, educational content, and marketing materials influenced by AI-generated text may subtly reinforce certain cultural norms—intentionally or not.

“Generative AI is not just speaking our language,” Song concluded. “It’s speaking our culture, sometimes without us realizing it.”