Artificial intelligence is no longer merely a technical tool; it is increasingly becoming a part of everyday decision-making as an advisor. But what happens when this advisor encourages unethical behaviour?
In a study published in the Economic Journal, a team led by Professor Dr Bernd Irlenbusch examined how AI-generated advice influences dishonest behaviour, comparing it with equivalent advice given by humans. The team also investigated whether transparency about the source of the advice made any difference.
The result: advice that promotes dishonesty does indeed lead to more dishonest behaviour, whereas advice promoting honesty does not increase honest conduct. This holds true regardless of whether the advice comes from an AI or a human.
Another key finding concerns transparency about the source of the advice. Whether or not participants knew the advice came from an AI had no measurable effect on their behaviour. This challenges common policy proposals advocating algorithmic transparency, at least as a stand-alone measure against misconduct.
“We find that, when faced with the trade-off between honesty and money, people use AI advice as a justification to lie for profit. As algorithmic transparency is insufficient to curb the corruptive power of AI, we hope this work will highlight to both policymakers and researchers the importance of investing resources in exploring effective interventions to encourage honesty in the face of AI-generated advice,” state the researchers.
The research team emphasises that these findings are initial steps towards the responsible use of AI-based recommendations. As people increasingly rely on AI in everyday life, the question of how digital advice shapes ethical decision-making, and how potentially harmful effects can be mitigated, becomes ever more pressing.