We have many ongoing projects. Come talk to us if you want to hear more!
Legal design and access to justice
Legal technologies have the potential to enhance citizens' access to the law and to make the legal landscape more transparent and equitable. Automation of the law can benefit individuals in many ways, in particular by helping them understand which social benefits apply to them. Despite this potential, little research comprehensively examines the advantages of legal technologies for citizens. Our research in this field aims to bridge that gap by analyzing how legal technologies contribute to accessibility of the law and by engaging in a nuanced debate on whether the state bears a duty to enhance such accessibility.
Digital ready legislation
Policymakers worldwide – from Switzerland and Denmark to Singapore and Canada – are promoting initiatives to develop legislation, and a legislative process, that result in digital-ready rules. While definitions of digital-ready legislation vary, a central tenet of the concept is that legislation emerging from parliaments is written in a form that facilitates its digital transformation and can be translated into an automatically processable form. However, promoting digital-ready legislation is not easy, and a comprehensive, tested, and reproducible approach to ensuring it is still missing. In this project, we analyze what digital-ready legislation is, how it can be shaped further, and which approaches seem the most promising today.
AI in the judiciary
AI is progressively making its way into the legal system. AI systems can aid the work of courts in various ways, from automating routine tasks to providing decision support to judges. While proponents highlight the benefits of AI tools in the justice system, legal scholars argue that their use by courts raises legal, ethical, and social concerns, including impacts on fairness, accountability, and the moral and political legitimacy of public decision-making, which could have profound repercussions on fundamental rights. This criticism has become even more topical with the arrival of new regulations. Our research in this field aims to map the deployment of AI tools within the judiciary, understand the impact of such uses on citizens and on the judicial system as a whole, and analyze how current and proposed regulations affect the deployment of AI in the judiciary.
Data access and decentralized data governance
Academics and civil society have long argued that the public needs better access to the data held by digital service providers, especially big market players such as the GAFAM companies, to understand the impacts of their services and recommender systems on individuals and society. Within our research area on data access, we analyze how data access rights and data portability can help redress the information asymmetries that characterize the current data economy.
Personalization of the law
Scholars have argued that, in the age of AI, the law could be personalized more strategically to individuals' needs. Especially in the field of privacy, personalization aimed at better protecting individuals has gained traction, with personalized privacy assistants and personalized privacy policies being developed. Our research in this field seeks ways to operationalize the ideals of personalized law while examining its effects on individuals.
Trustworthy AI
In recent years, several initiatives have attempted to promote trust in AI through regulation. Such initiatives rest on the premise that the right regulatory strategy can influence and shape trust. The broader literature on trust in automation sheds light on the interplay between AI, regulation, and trust, and points to many factors influencing trust that are not straightforward to address through regulation. In this project, we tackle the complexity of promoting trust in AI through regulation by investigating the underlying conditions, the factors impacting trust, and the interaction between regulation and trust.
OpenJustice for Switzerland
The complexity of today's legal structures presents several challenges to accessing justice, substantial litigation expenses being a classic example. In this project, we collaborate with OpenJustice, a team from the Conflict Analytics Lab at Queen's University, to improve access to justice through automated legal advice. We aim to develop and test the OpenJustice Small Language Model, designed specifically for legal contexts and applications and based on Swiss legal data, and to assess whether it can enhance and support better access to justice for Swiss citizens.
Data dignity and inclusive AI
In the evolving landscape of AI, fostering a fair and inclusive data environment is crucial. Central to this vision are the inclusive datasets that feed into AI systems. Individuals should have agency over how they are represented in data and be able to assert control over algorithmically and AI-generated narratives about them. In our research, we develop new concepts of how data dignity can be achieved and explain why this movement and these reconceptualizations matter for good decision-making in the age of AI.