What happens when job candidates face AI instead of humans?

Think you can outsmart AI? When algorithms assess us, we change how we act, sometimes in ways that could affect our chances of getting hired or admitted.


In a recent article published in the Proceedings of the National Academy of Sciences (PNAS), researchers investigated whether people behave differently during assessments when they know an artificial intelligence (AI), rather than a human, is evaluating them. Across 12 studies with more than 13,000 participants, people tended to shift their self-presentation to emphasize analytical characteristics and downplay intuitive or emotional traits when they believed they were being assessed by AI.

Background

As artificial intelligence becomes increasingly embedded in decision-making processes, organizations are adopting AI-based tools to evaluate candidates for jobs or educational programs. These tools promise efficiency and objectivity, leading to a growing trend of replacing human assessors with algorithms. This shift raises a critical question: Does being assessed by AI change how people behave?

Drawing on psychological theory, the researchers hypothesized that people alter their self-presentation when they know an AI is evaluating them, a response they call the "AI assessment effect." Specifically, individuals would highlight analytical traits and suppress intuitive or emotional aspects, driven by a widespread belief that AI values data-driven, logical qualities over human-like emotional insight. This lay belief, which the paper terms the "analytical priority lay belief," is central to how individuals anticipate AI preferences.

Early evidence for this effect came from a survey of over 1,400 job applicants who completed a game-based assessment. Those who believed AI was involved reported more behavioral adjustments. This is significant because growing legal mandates, such as the European Union’s AI Act, require transparency about AI use, increasing public awareness. If people alter their behavior based on possibly incorrect beliefs about AI preferences, it could distort assessment outcomes, leading to poor job matches and misguided decisions.

About the Study

The researchers conducted 12 studies with a total of 13,342 participants to examine whether people alter their behavior when they know they are being assessed by AI rather than a human. Participants were recruited through multiple platforms and a real-life applicant pool from a recruitment company. Most experiments were run via online survey tools, and ethical protocols were followed throughout, including informed consent in all but the field study.

Study designs varied, including between-subjects, within-subjects, vignette-based, incentive-aligned, and real-world field designs, across contexts such as job recruitment and college admissions. Participants were randomly or quasi-randomly assigned to conditions where they were told they were being assessed by AI, a human, or both. The researchers measured participants' self-reported and behavioral emphasis on analytical versus intuitive traits.

Attention checks ensured data quality, and bootstrapped confidence intervals were used to mitigate non-normality. Sample sizes were determined based on effect size predictions and adjusted for potential exclusions. Exclusions were applied consistently for failed attention checks, incomplete responses, or suspicion about the study’s purpose.
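
The article does not reproduce the paper's analysis code, but the bootstrapping idea is simple enough to sketch. Below is a minimal, hypothetical Python illustration of a percentile bootstrap confidence interval for the difference in mean "analytical emphasis" between an AI-assessed and a human-assessed group. The function name, group sizes, and all data are invented for illustration and are not drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def bootstrap_ci_diff(ai_scores, human_scores, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for the difference in group means.

    Resampling the observed data avoids assuming the scores are
    normally distributed, which is why bootstrapped intervals help
    when responses are skewed.
    """
    ai = np.asarray(ai_scores, dtype=float)
    human = np.asarray(human_scores, dtype=float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each condition with replacement, preserving group sizes.
        ai_sample = rng.choice(ai, size=ai.size, replace=True)
        human_sample = rng.choice(human, size=human.size, replace=True)
        diffs[i] = ai_sample.mean() - human_sample.mean()
    lower, upper = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper

# Hypothetical data: higher scores = more analytical self-presentation.
ai_group = rng.normal(loc=5.4, scale=1.0, size=200)     # told "AI evaluates you"
human_group = rng.normal(loc=4.9, scale=1.0, size=200)  # told "a human evaluates you"
print(bootstrap_ci_diff(ai_group, human_group))
```

If the resulting interval excludes zero, that is consistent with a genuine shift in self-presentation between conditions; this sketch is a generic percentile bootstrap, not the authors' exact procedure.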

Key Findings

Participants consistently altered their behavior when they believed an AI, rather than a human, was evaluating them, presenting themselves as more analytical and less intuitive. This shift appears to stem from a belief that AI systems prioritize analytical traits over emotional or intuitive qualities.

The effect was observed across diverse participant samples, including a U.S.-representative group, and was stronger among younger individuals and those with particular personality traits. Experimental designs involving both between- and within-subject comparisons confirmed that the mere presence of AI as an evaluator changed how individuals approached self-presentation tasks.

When participants were encouraged to reconsider their assumptions about AI, such as reflecting on its potential to value emotional or intuitive qualities, the tendency to emphasize analytical traits was reduced or even reversed. However, when AI was involved in early evaluation stages and humans made final decisions, the effect was diminished but not eliminated.

In one study (Study 3), this behavioral shift had a notable real-world implication: 27% of candidates would have been selected for a position under AI assessment but not under human assessment. Across all settings, the belief that AI favors rational, data-driven attributes led people to strategically adjust how they described themselves.

The “suppression” of intuitive or emotional traits is a statistical shift in emphasis, not a complete omission. Exploratory analyses in the paper also suggest that AI assessment may prompt changes in other self-presentation dimensions, such as creativity, ethical considerations, risk-taking, and effort investment, although the primary measured dimension was analytical versus intuitive.

These findings suggest that AI assessment significantly influences behavior and self-presentation, with meaningful implications for hiring, admissions, and other high-stakes evaluation contexts where algorithmic decision-making is increasingly used.

Conclusions

Overall, researchers found that AI-based assessments can shape candidate behavior, revealing a consistent pattern, termed the “AI assessment effect,” where individuals emphasize analytical traits and suppress emotional or intuitive ones when evaluated by AI. This behavioral shift appears to stem from a lay belief that AI values analytical thinking. Importantly, challenging this belief can reduce the effect.

The findings have significant implications for the fairness and validity of AI assessments. If candidates adjust their behavior based on inaccurate beliefs about AI preferences, true qualities may be masked, potentially leading to suboptimal hiring or admissions decisions. This suggests organizations should critically evaluate their assessment procedures and address potential distortions introduced by AI transparency policies. For instance, informing candidates about an AI’s specific capabilities and limitations might influence behavior differently.

While the study focused on human resource management, future research could explore effects in other high-stakes domains, such as public service provision. Additionally, shifts in other traits, such as risk-taking, ethics, and creativity, warrant further exploration, along with the long-term consequences of AI-driven impression management. The authors also highlight that with the evolution of AI systems, candidates' beliefs—and their resulting behaviors—may change, warranting continued study.

Journal reference:

Written by

Priyanjana Pramanik

Priyanjana Pramanik is a writer based in Kolkata, India, with an academic background in Wildlife Biology and economics. She has experience in teaching, science writing, and mangrove ecology. Priyanjana holds Masters in Wildlife Biology and Conservation (National Centre of Biological Sciences, 2022) and Economics (Tufts University, 2018). In between master's degrees, she was a researcher in the field of public health policy, focusing on improving maternal and child health outcomes in South Asia. She is passionate about science communication and enabling biodiversity to thrive alongside people. The fieldwork for her second master's was in the mangrove forests of Eastern India, where she studied the complex relationships between humans, mangrove fauna, and seedling growth.

