Artificial Intelligence Gives Women Lower Salary Advice
Artificial Intelligence Gives Women Lower Salary Advice – New Study from THWS Uncovers Bias
A recent study from the research group of Prof. Dr. Ivan Yamshchikov at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS) reveals a troubling trend: modern language models like ChatGPT systematically recommend lower salary targets to women compared to men—even when all other factors are identical.
The research, conducted by Aleksandra Sorokovikova, Pavel Chizhov, Iuliia Eremenko, and Ivan P. Yamshchikov, examines how bias manifests in large language models (LLMs). The team asked a state-of-the-art language model to provide salary negotiation advice in multiple identical scenarios—once for a male user, and once for a female user. The outcome was consistent: the AI advised women to aim for lower salaries than their male counterparts.
"When it comes to sensitive topics like salary negotiations, this kind of hidden bias can have real-world consequences for users," the authors stress.
Realistic Testing Reveals Unequal Salary Recommendations
The study found that such distortions in AI systems often remain undetected by standard benchmarks but become apparent in realistic, interactive scenarios. While traditional tests may show no significant difference across user profiles, deeply embedded biases emerge when the AI takes on an advisory role—for example, evaluating user inputs or offering direct guidance in salary negotiations.
AIOLIA Project: Promoting Ethical AI in Everyday Life
This research is part of the ongoing efforts within the European AIOLIA project (aiolia.eu) to ensure the ethical use of AI assistants. AIOLIA aims to develop and implement practical, internationally aligned guidelines for the responsible integration of AI into daily life. The findings from Würzburg highlight the urgent need for such standards to prevent discrimination through AI technologies.
The research team at CAIRO.thws contributes to the AIOLIA initiative by working to make AI assistants more transparent and equitable—supporting a more responsible path to digitalization.
About AIOLIA
Funded by the European Union, AIOLIA develops practical, internationally harmonized guidelines for ethical AI use. The project provides a foundation for fair and inclusive AI applications across Europe and beyond.
Read the full paper here:
https://arxiv.org/abs/2506.10491
Contact
Center for Artificial Intelligence (CAIRO.thws)
Prof. Dr. Ivan Yamshchikov
Franz-Horn-Str. 2
97082 Würzburg
Ivan.yamshchikov@thws.de