by Pier Francesco Bresciani, Research Fellow in Constitutional Law, University of Bologna
Published on 23 October 2024
“Numerical constitutionalism” envisions a democracy governed by indicators that objectively quantify social phenomena relevant to regulation. With its promise of data-driven decision-making free from the biases and noise of human intelligence, artificial intelligence now stands as one of the most promising tools to achieve it.
In fact, many parliaments are already experimenting with integrating AI into their operations, with Italy’s Chamber of Deputies taking a leading role globally.
The core constitutional question this raises is whether embedding AI in public decision-making will deepen the trend of viewing democratic processes as constrained by quantitative data. While technical input in political decisions is nothing new, the concern is whether AI-generated inputs will further erode the democratic nature of governance, shifting the balance from politics toward science and technology even more.
To begin addressing this question, one needs to compare how human and artificial intelligence produce knowledge. In this regard, the main issues with AI systems—now critical concerns in discussions about their use by policymakers—are opacity (the black-box problem) and hidden biases.
However, both of these problems also arise in relation to the technical contributions made by human experts, which are already commonly used in governance processes.
First and foremost, at least in general terms, policymakers are not in a significantly better position regarding opacity when dealing with human experts or with their scientific output. Legislators, for instance, typically lack the information and expertise needed to fully understand or question the rationale behind experts' opinions. Nevertheless, parliaments routinely rely on experts when making decisions, and in some cases they are even constitutionally obligated to do so. In Italy, for example, the Constitutional Court, ruling on the legitimacy of mandatory COVID-19 vaccination in judgment no. 14 of 2023, stated that the constantly evolving findings of medical research must guide the legislator's choices on the matter.
The same goes for hidden biases. In philosophical discussions, one objection to the epistocratic ideal of entrusting political decisions to the most knowledgeable individuals in a given field is that the more expert segment of the population may exhibit, in statistically disproportionate ways compared to the population as a whole, epistemically harmful traits that could counterbalance the benefits of its greater knowledge. In other words, high levels of specialized training may be concentrated in certain groups (in terms of race, gender, etc.) or may be correlated with specific features in ways that are empirically unverifiable or simply unknown (for example, "the more knowledgeable segment of the population is disproportionately more progressive or conservative than the general population"), thus potentially influencing, even unconsciously, experts' "technical" judgment about the best solution to a given practical problem. This objection is fundamentally analogous to those raised about the possible unknowable biases inherent in the functioning of AI.
For these reasons, comparing policymakers' use of technical contributions from human intelligence with their use of contributions from artificial intelligence suggests that, rather than justifying pessimistic attitudes toward AI, there is a real need to develop a common critical approach to both. To this end, the philosophical and legal literature on AI already provides a toolkit for reflection (for example, on explainability and non-discrimination) that constitutional scholarship should apply to all forms of technical participation in democratic processes. This is essential because, even when human experts are involved, it is constitutionally necessary to ensure adequate levels of autonomy for the political decision-maker and epistemic justice.
Beyond this theoretical framework, AI could also serve as a practical tool to counter deterministic views of democratic processes, which treat them as predetermined by technical contributions. Indeed, if AI can democratize access to specialized knowledge, it would reduce policymakers' reliance on expert communities, thus eliminating or at least diminishing the epistemic distortions that arise because these communities are not statistically representative of the general population.
Whether AI will enhance the capacity for political discussion on highly technical issues or, conversely, strengthen the position of experts over political decision-makers will ultimately depend on the actual developments of AI technologies in the near future and, most importantly, on the ability of constitutional states to steer this epochal technological transition in favor of democratic principles.