When AI is fairer than humans: The role of egocentrism in moral and fairness judgments of AI and human decisions

Status: VoR
dc.abstract.en: Algorithmic fairness is a core principle of trustworthy Artificial Intelligence (AI), yet how people perceive fairness in AI decision-making remains understudied. Prior research suggests that moral and fairness judgments are egocentrically biased, favoring self-interested outcomes. Drawing on the Computers Are Social Actors (CASA) framework and egocentric ethics theory, we examine whether this bias extends to AI decision-makers, comparing fairness and morality perceptions of AI and human agents. Across three experiments (two preregistered, N = 1880, Prolific US samples), participants evaluated financial decisions made by AI or human agents. Self-interest was manipulated by assigning participants to conditions in which they either benefited from, were harmed by, or remained neutral to the decision outcome. Results showed that self-interest significantly biased fairness judgments: decision-makers who made unfair but personally beneficial decisions were perceived as more moral and fairer than those whose decisions benefited others (Studies 1 & 2) or those who made fair but personally costly decisions (Study 3). However, this egocentric bias was weaker for AI than for humans, an effect mediated by lower perceived mind and reduced liking for AI (Studies 2 & 3). These findings suggest that fairness judgments of AI are not immune to egocentric biases but are moderated by cognitive and social perceptions of AI versus humans. Our studies challenge the assumption that algorithmic fairness alone is sufficient for achieving fair outcomes. This provides novel insight for AI deployment in high-stakes decision-making domains, highlighting the need to consider both algorithmic fairness and human biases when evaluating AI decisions.
dc.affiliation: Instytut Psychologii
dc.contributor.author: Miazek, Katarzyna
dc.contributor.author: Bocian, Konrad
dc.date.access: 2025-06-20
dc.date.accessioned: 2025-06-23T12:43:11Z
dc.date.available: 2025-06-23T12:43:11Z
dc.date.created: 2025-05-30
dc.date.issued: 2025-06-20
dc.description.accesstime: at_publication
dc.description.grantnumber: 2021/43/D/HS6/02013
dc.description.grantnumber: 37/2023/FRBN/G
dc.description.granttitle: SONATA 17 – Egocentrism in moral character judgments: mechanisms, individual differences, and reduction strategies
dc.description.physical: 1-17
dc.description.version: final_published
dc.description.volume: 19
dc.identifier.doi: 10.1016/j.chbr.2025.100719
dc.identifier.eissn: 2451-9588
dc.identifier.uri: https://share.swps.edu.pl/handle/swps/1538
dc.identifier.weblink: https://www.sciencedirect.com/science/article/pii/S2451958825001344?via%3Dihub
dc.language: en
dc.pbn.affiliation: psychologia
dc.rights: CC-BY
dc.rights.question: Yes_rights
dc.share.article: OPEN_JOURNAL
dc.subject.en: Morality
dc.subject.en: Fairness
dc.subject.en: Artificial intelligence
dc.subject.en: Decision making
dc.subject.en: Self-interest bias
dc.swps.sciencecloud: send
dc.title: When AI is fairer than humans: The role of egocentrism in moral and fairness judgments of AI and human decisions
dc.title.journal: Computers in Human Behavior Reports
dc.type: JournalArticle
dspace.entity.type: Article