BIT-ACT article published in the academic journal Minds & Machines examines potential sources of unfairness in artificial intelligence used in anti-corruption
Published on 08 July 2024
Fernanda Odilla, a research fellow at the BIT-ACT project, has published a new article in which she assesses the potential sources and consequences of unfairness in artificial intelligence (AI) predictive tools used in anti-corruption efforts.
Titled “Unfairness in AI Anti-Corruption Tools: Main Drivers and Consequences”, the article was published in the renowned academic journal Minds & Machines and sheds light on how AI can inadvertently perpetuate biases, noise and discrimination when used to combat corruption.
Odilla’s study focuses on three AI-based anti-corruption tools developed and employed in Brazil. These tools estimate the risk of corrupt behavior in public procurement, among public officials, and among female straw candidates in electoral contests, respectively. By drawing on interviews with law enforcement officials involved in the development of these tools, as well as a review of academic and grey literature, the article uncovers several layers at which unfairness can arise: infrastructural, individual, and institutional.
Key findings from the research indicate that the potential sources of unfairness include problematic data inputs, issues with statistical learning, the personal values and beliefs of both developers and users, and the governance structures within the organizations deploying these tools. Notably, the AI tools were trained on data from past anti-corruption procedures, which may carry inherent biases and embed assumptions about corruption that are not free from unfair disproportionality or discrimination.
One significant finding is that the developers of these AI tools did not sufficiently consider the risks of unfairness during the design phase, nor did they prioritize implementing technological solutions to identify and mitigate unfairness. Although these tools support human decision-making rather than making automated decisions, their algorithms remain closed to external scrutiny, raising concerns about transparency and accountability.
Odilla's work emphasizes the need for a critical approach to the development and deployment of AI in anti-corruption initiatives. The study calls for increased awareness and proactive measures to ensure that these powerful tools do not reinforce existing inequities.
The publication serves as a crucial reminder that while AI has the potential to enhance anti-corruption efforts, it must be handled with care to avoid perpetuating the very issues it aims to resolve.
For further details, the full article is available here.