This text was co-written with Claude AI (revision of version 1 of the text, OCR analysis of the PRAX-AI model, and revision of version 2).
Faced with the rapid rise of generative AI, PRAX-AI offers a structured framework to support educators in adapting their teaching and assessment practices. The model is built around two key pillars: an analysis of the academic tasks most vulnerable to AI intervention, and a broader reflection on how artificial intelligence affects the pedagogical coherence of courses. By combining reference frameworks, templates, self-assessment tools, and practical resources, PRAX-AI serves as a genuine companion for rethinking assessment—without compromising academic integrity or pedagogical creativity.
A model that helps situate assessment practices within the broader context shaped by generative artificial intelligence.
Since November 2022, the rapid rise of generative artificial intelligence (GAI) has sparked widespread reflection and debate within higher education. These tools, now widely accessible, raise profound questions—particularly regarding the integrity of academic assessments and the influence of GAI on learning processes. In response to these shifts, the PRAX-AI model was developed as a structured framework to support critical reflection. Grounded in a literature review on pedagogical practices and AI applications, PRAX-AI aims to guide educators through their pedagogical choices by offering practical tools and a curated set of resources.
The model is built around two main pillars: a targeted approach through assessments identified in the literature [add references], and a broader reflection on the impact of AI across the entire course structure [ref.]. Each pillar is supported by a documented knowledge base and practical resources, all accessible through a dedicated platform.
Before detailing the two main pillars of the model, it is important to highlight that PRAX-AI is grounded in an ongoing literature review. This regular monitoring effort brings together research studies, theoretical frameworks, and examples of practice related to assessment, artificial intelligence, and pedagogy in the context of higher education. The resources provided in the “Resources” section of the model stem from this weekly review, which is continuously updated to incorporate the most recent and relevant contributions.
First Entry Point: Academic Assessments in the Age of Generative AI
Assessment is arguably the area most impacted by the emergence of generative AI. PRAX-AI begins with a thorough analysis of current assessment practices, with three key objectives: identifying vulnerabilities, defining appropriate strategies, and establishing coherent conditions for the use of AI in academic evaluations.
Identifying Vulnerable Types of Assessments
The first step involves identifying the types of assessment formats currently in use—such as exams, take-home assignments, and collaborative projects—in order to pinpoint those that may be particularly vulnerable to generative AI. This analysis helps determine which tasks require immediate adaptation. For instance, take-home essays or open-ended responses can be easily generated by AI tools and therefore need to be rethought.
Defining an Acceptable Level of AI Use
The next step is to determine the extent to which AI use is permitted or encouraged in assessments. To support this process, PRAX-AI introduces the AIAS scale (Perkins, Furze, Roe, & MacVaugh, 2024), a structured framework for evaluating different levels of AI integration—from strict prohibition to active incorporation as a learning tool. This model helps define a clear and consistent position that can then be communicated to students.
The “Resources” section includes explanatory documents on the AIAS scale, along with examples of its application across various disciplinary contexts. These resources also offer comparisons with other theoretical frameworks, providing a broader perspective.
Developing the Necessary AI Skills
Once an acceptable level of AI use has been defined, PRAX-AI encourages reflection on the AI-related competencies that should be developed—both for educators and students. The framework includes Anders’ competency model (Anders, 2023), which outlines the knowledge and skills needed to integrate AI effectively in academic contexts. This reference can be used to define specific learning objectives tailored to individual courses and disciplines.
To support this process, PRAX-AI provides an AI companion—an interactive tool designed to help formulate learning outcomes related to AI. The “Resources” section also includes international competency frameworks and practical examples, illustrating how these skills can be integrated into existing curricula.
Designing AI Usage Guidelines Aligned with Your Course
The reflection on assessment concludes with a strategic step: developing AI usage conditions tailored to the specific realities of each course. PRAX-AI structures these conditions around four key dimensions: transparency, regulation, pedagogical integration, and support. Once again, practical resources and examples of best practices are available to guide both educators and institutions.
Second Entry Point: A Broader Reflection on AI’s Impact on Course Design
While the first entry point focuses on assessment, the second encourages a broader reflection on the overall impact of artificial intelligence across entire courses. This approach is grounded in the idea that assessment cannot be separated from the broader pedagogical context: learning objectives, teaching activities, and assessments must form a coherent whole.
The pedagogical coherence template plays a central role in this phase. It supports educators in analyzing the alignment between the various components of their course, taking into account potential changes brought about by the integration of artificial intelligence. This framework offers a structured way to reflect on the adjustments needed to incorporate AI in a relevant and constructive manner.
The first question to consider in this process is the following:
Does artificial intelligence have an impact on my courses?
This reflection involves understanding how students are already using AI in their learning processes. What tools are they using? In what contexts? And how do these practices influence the way they engage with course content?
PRAX-AI provides simple tools designed to observe and analyze how students use AI. These tools help educators better understand the pedagogical implications of the technologies their students are engaging with, in order to identify potential adjustments to their teaching practices.
Should I incorporate AI into my teaching practices?
When artificial intelligence has a significant impact on a course, it becomes important to consider how best to integrate it into the teaching process. This integration can take various forms, depending on the discipline, students’ needs, learning objectives, and institutional guidelines.
Should I adjust my teaching objectives?
Integrating artificial intelligence often leads to a reassessment of learning objectives. These objectives should take into account the specific competencies students need to develop in relation to AI, such as the ability to use these tools critically and to understand their limitations.
Do I need to adapt my teaching activities?
Following the redefinition of learning goals, it is recommended to adjust or design learning activities that align with these new priorities. Such activities should enable students to engage with AI in a thoughtful and constructive way, while developing the skills necessary to use it effectively.
Closing the Loop with Assessment
Ultimately, this broader reflection inevitably raises the question of assessment. If learning activities evolve to include artificial intelligence, it becomes essential to adapt assessment practices to ensure pedagogical coherence. The pedagogical coherence template serves as a tool to structure this reflection, helping ensure that any adjustments made are aligned with the course’s objectives and activities.
Conclusion: A Tool Under Development to Support Collective Reflection
PRAX-AI was developed in response to the challenges brought about by the emergence of generative artificial intelligence in higher education. These technologies raise new questions around assessment, learning, and the pedagogical coherence of courses.
Designed as a reflective framework, PRAX-AI aims to support educators as they adapt to this rapidly evolving context, while providing concrete tools and regularly updated resources.
Its primary goal is to foster critical thinking—whether individually or collectively—without claiming to offer universal solutions.
PRAX-AI proposes two complementary lines of inquiry: adapting assessment practices and exploring the broader impact of AI on course design, allowing educators the flexibility to choose the entry point most relevant to their context.
