Scientific integrity and responsible use of generative AI in the academic context at UNIL

In a rapidly changing academic environment, the arrival of generative AI (large language models, diffusion models, etc.) calls for heightened vigilance: how can we preserve our values of scientific integrity while taking advantage of the potential of these new technologies? This article proposes a clear framework, tailored to UNIL, based on four fundamental principles and six key recommendations.

Fundamental principles of scientific integrity

  • Reliability: ensure the quality, traceability, and reproducibility of work.
  • Honesty: carry out, analyze, and report activities with transparency and impartiality.
  • Respect: show respect for people, society, ecosystems, and cultural heritage.
  • Responsibility: take charge of the entire process, from the initial idea to the dissemination of results and their impacts.

These principles are drawn from the Code of Scientific Integrity of the Swiss Academies of Arts and Sciences (2021) and form the foundation of research and teaching activities at the university.

Recommendations for the responsible use of generative AI

  • Human responsibility: the user remains solely responsible for the content produced; AI is neither author nor co-author.
  • Transparency: mention the tool used (including name, version, and date) and describe its role when it has a significant impact on the work; whenever feasible, share the prompts and outputs.
  • Data and intellectual property protection: refrain from uploading any sensitive or protected data unless explicit guarantees are provided; all activities must comply with the General Data Protection Regulation (GDPR) and respect copyright obligations.
  • Legal and ethical compliance: no plagiarism, fabrication, falsification, or unauthorized disclosure.
  • Continuing education and digital restraint: stay up to date with best practices, select the most suitable tool for the task, and strive to reduce environmental impact, including carbon emissions, biodiversity loss, and water usage.
  • Sensitive, non-delegable decisions: final responsibility for evaluations, such as examinations, recruitment decisions, or peer review, must not be entrusted to AI systems.

These recommendations are based on the Living Guidelines on the Responsible Use of Generative AI in Research (ERA Forum / European Commission, 2025), as well as the best practices outlined in our institutional FAQ.

Further reading