The project is organised into three subprojects:
As shown in the figure (where each rectangle corresponds to 6 months), Subprojects 1 and 2, operating through synergetic exchanges, provide input to Subproject 3, which delivers the final comprehensive output of the project. Subprojects 1 and 2 will be completed during the first 4 years; Subproject 3, which builds upon the first two, will continue until the end of the 5th year.
Subproject 1: “Regulatory issues and frameworks” (Coordinator: Francesca Lagioia)
Subproject 1 addresses the governance of computations, focusing on intelligent systems. It critically considers (a) the ways in which computations affect legally relevant interests (both individual rights and social values) in different socio-technical contexts, (b) the influence chains and legal responsibilities linking computations to their deployers and designers, and (c) the extent to which harm is prevented and remedied, and to which computational justice and fairness are supported.
The subproject will focus on the in-depth analysis of two case studies: autonomous transportation and e-commerce. These broadly scoped case studies are particularly suitable for studying the legal impact of computations, since they involve transversal issues, such as human rights and data protection, as well as distinct areas of the law, such as liability for accidents/insurance/traffic law, and contracts/consumer protection/taxation. Connections with other domains will also be considered, such as autonomous weapons, where advanced technologies for legal compliance are already being developed.
Subproject 2: “Logical‐Computational methods and technologies” (Coordinator: Roberta Calegari)
Subproject 2 focuses on methods and technologies for the legal governance of computations. The focus will be on computable laws and on architectures for law-compliant ALAs. Besides the explicit formal representation of computable law, to be processed through logical inference and argumentation, we shall also consider how compliance can be learned through case-based reasoning and machine learning. Finally, we will address regulations and guidelines directed at the designers of computable laws and ALAs, to support the correct specification of computable laws and their effective implementation.
We will provide computable models of rules, principles, and cases. In particular, we will provide formal specifications covering: normative concepts (obligation, permission, powers, Hohfeldian positions); basic socio-cognitive notions (action, goal, intention, belief, influence); basic social relations (dependency, delegation, responsibility); rules (defeasibility, conditionality, constitution); values and goals (means-end reasoning, scalable goals, proportionality); defeasible argumentation (argument schemes, as well as relations of support, attack and defeat between arguments); dialectical interactions (strategic games, argumentation protocols); institutional structures (meta-rules and roles for adjudicating and enforcing norms); and norm-related cognitive processes (norm-awareness, motivation to comply and to violate). The architecture of ALAs will mainly be based on BDI (Belief-Desire-Intention) models, extended with norms, legitimacy and trust. Issues connected to the legal and ethical assessment of machine learning methods and their outcomes will also be addressed.
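To give a concrete (though deliberately simplified) flavour of the intended style of formalisation, the following minimal Python sketch shows how a defeasible deontic rule could be encoded and resolved by priority. The class, the rule names and the traffic example are hypothetical illustrations introduced here for clarity, not the formalism the subproject will adopt.

```python
# Illustrative sketch only (assumed names): defeasible deontic rules resolved
# by priority, in the spirit of the rule models described above.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    name: str
    antecedents: frozenset   # facts that must hold for the rule to apply
    modality: str            # e.g. "obligatory" or "exempt" (hypothetical labels)
    action: str
    priority: int = 0


def in_force(rules, facts):
    """Rules whose antecedents are all satisfied by the current facts."""
    return [r for r in rules if r.antecedents <= facts]


def resolve(rules, facts):
    """Defeasible resolution: for each action, keep only the applicable rule
    with the highest priority (stronger rules defeat weaker ones)."""
    winners = {}
    for r in in_force(rules, facts):
        if r.action not in winners or r.priority > winners[r.action].priority:
            winners[r.action] = r
    return winners


# Hypothetical traffic example: a general obligation to stop at a red light,
# defeated by a more specific exemption for vehicles on an emergency run.
rules = [
    Rule("stop_at_red", frozenset({"red_light"}), "obligatory", "stop", 1),
    Rule("emergency_exempt", frozenset({"red_light", "emergency_run"}),
         "exempt", "stop", 2),
]

facts = frozenset({"red_light", "emergency_run"})
for action, rule in resolve(rules, facts).items():
    print(f"{action}: {rule.modality} (by rule '{rule.name}')")
# -> stop: exempt (by rule 'emergency_exempt')
```

A full argumentation-based treatment would, of course, also represent attack and support relations between arguments rather than a flat priority order; the sketch only conveys the idea of defeasible, machine-readable norms.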
Subproject 3: “A techno‐legal approach to computable law” (Coordinator: Giuseppe Contissa)
Subproject 3 finalises a framework for the creation and implementation of computable laws, to be complied with by ALAs having different degrees of autonomy and cognitive capacity. The proposed framework includes methodologies, technologies and substantive suggestions for combining three kinds of norms: norms governing human behaviour; norms specifying functional requirements of computational systems, directed at designers and deployers; and norms directed at computational entities. For this purpose, we will specify methods to translate legal requirements into computable norms, and to check the correctness of the translation.
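As a purely illustrative example of what checking the correctness of such a translation could look like, the sketch below tests a hypothetical computable rendering of a data-protection-style requirement against a handful of expert-annotated scenarios. The function, field names and scenarios are assumptions made for illustration, not deliverables of the subproject.

```python
# Illustrative sketch only: validating a candidate computable norm against
# scenarios annotated with the outcome expected by legal experts.

def consent_norm(scenario: dict) -> bool:
    """Hypothetical computable rendering of a requirement such as
    'personal data may be processed only on a lawful basis'."""
    return (not scenario["processes_personal_data"]) or scenario["has_lawful_basis"]


# Each scenario pairs a fact pattern with the expert-expected outcome.
annotated_scenarios = [
    ({"processes_personal_data": True, "has_lawful_basis": True}, True),
    ({"processes_personal_data": True, "has_lawful_basis": False}, False),
    ({"processes_personal_data": False, "has_lawful_basis": False}, True),
]

mismatches = [
    (facts, expected)
    for facts, expected in annotated_scenarios
    if consent_norm(facts) != expected
]
print("translation consistent with annotations" if not mismatches else mismatches)
```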
We will put special emphasis on communication and argumentation, namely on the ways in which (intelligent) systems should explain their choices and demonstrate compliance with the applicable norms. Subproject 3 will deliver a methodological toolkit as well as a set of substantive guidelines.