When AI is fairer than humans: The role of egocentrism in moral and fairness judgments of AI and human decisions
Status: VoR (Version of Record)
Authors
Miazek, Katarzyna
Bocian, Konrad
Date
2025-06-20
Journal title
Computers in Human Behavior Reports
Volume
19
Pages
1-17
Access date
2025-06-20
Abstract EN
Algorithmic fairness is a core principle of trustworthy Artificial Intelligence (AI), yet how people perceive fairness in AI decision-making remains understudied. Prior research suggests that moral and fairness judgments are egocentrically biased, favoring self-interested outcomes. Drawing on the Computers Are Social Actors (CASA) framework and egocentric ethics theory, we examine whether this bias extends to AI decision-makers, comparing fairness and morality perceptions of AI and human agents. Across three experiments (two preregistered, N = 1,880, Prolific US samples), participants evaluated financial decisions made by AI or human agents. Self-interest was manipulated by assigning participants to conditions in which they either benefited from, were harmed by, or remained unaffected by the decision outcome. Results showed that self-interest significantly biased fairness judgments: decision-makers who made unfair but personally beneficial decisions were perceived as more moral and fairer than those whose decisions benefited others (Studies 1 & 2) or those who made fair but personally costly decisions (Study 3). However, this egocentric bias was weaker for AI than for humans, mediated by a lower perceived mind and reduced liking for AI (Studies 2 & 3). These findings suggest that fairness judgments of AI are not immune to egocentric biases, but are moderated by cognitive and social perceptions of AI versus humans. Our studies challenge the assumption that algorithmic fairness alone is sufficient for achieving fair outcomes. This provides novel insight for AI deployment in high-stakes decision-making domains, highlighting the need to consider both algorithmic fairness and human biases when evaluating AI decisions.
Keywords EN
Morality
Fairness
Artificial intelligence
Decision making
Self-interest bias
Grant/project name
SONATA 17 "Egocentrism of moral character judgments: mechanisms, individual differences, and reduction strategies"