Legal issues in generative AI: navigating between innovation and regulation

The rise of generative AI raises unprecedented legal questions. In this interview, Professor Philippe Gilliéron, a specialist in technology law, provides an overview of the main issues at stake: the still unclear regulatory framework, the impact on legal professions, the redefinition of plagiarism, and tensions surrounding copyright and personal data. He advocates for a critical approach to teaching these tools, based on transparency and the acquisition of sufficient expertise to use them wisely, particularly in academic contexts.

A legal framework under development

According to Professor Gilliéron, the legal world is embracing these new technologies “with some delay”. While AI is not new, its democratisation through free tools such as ChatGPT has accelerated awareness of the issues at stake. To date, no AI-specific regulation is in force, but legislative efforts are underway, particularly at the European level with a proposed regulation on artificial intelligence.

The European Union appears to be a pioneer in this field, while the United States favours a more flexible approach that encourages innovation. China, for its part, has opted for stricter regulation for reasons of control.

Impact on the legal profession

These technologies hold the potential to significantly reshape legal professions. In the short term, they can serve as tools to assist with drafting and research. In the medium term, clients may demand their use to cut costs. Some major U.S. firms are aiming for savings of up to 40% through these tools, a shift that could lead to workforce reductions and a reorganisation of outsourced legal work.

Implications for legal education

In response to these developments, Professor Gilliéron advocates for a pragmatic approach to education. Rather than banning these tools, he recommends teaching students how to use them critically: “You need a certain level of expertise to ask the right questions — because it’s by asking the right questions that you can take a critical stance toward the answers generated.”

Academic integrity issues

The traditional definition of plagiarism — based on the reproduction of “someone else’s text” — does not directly apply to AI-generated content, as such output is not considered a “product of the mind.” Instead, the issue should be approached from the perspective of scientific integrity, which calls for transparency regarding the use of these tools and the methods employed.

Intellectual property issues

The question of copyright is particularly acute for image generators, which are trained on vast datasets containing billions of protected images. Lawsuits have been filed in the United States against platforms like Midjourney and DALL·E. The legality of such practices will depend on how copyright exceptions are construed, a point that differs significantly from one legal system to another.

Personal data protection

The professor highlights the risks associated with personal data protection, as illustrated by the temporary ban on ChatGPT in Italy. He advises against including personal data in prompts and notes that OpenAI is considering allowing users to opt out of having their prompts used for future system training.

In conclusion, Professor Gilliéron emphasises the importance of thorough education on the responsible use of these tools, covering both legal considerations and the protection of sensitive data. These issues are expected to evolve rapidly in the coming months, requiring ongoing attention from legal professionals and educators alike.

Philippe Gilliéron is a lawyer in Lausanne specialising in intellectual property and technology law, and an associate professor at the Faculty of Law, Criminal Justice and Public Administration. His areas of expertise include new technologies, unfair competition, and copyright.
