From Policy to Classroom Practice
A comparative guide turning COPE/UNESCO guidance into course-level disclosure templates, rubric changes, and viva/oral checks.

1. The New Academic Landscape: Integrity in the Age of Generative AI
Since 2023, European universities have been forced to confront a new frontier of academic integrity. ChatGPT, Claude, Mistral, and other generative AI tools are now woven into students’ and researchers’ daily workflows. What began as an ethical question has evolved into a policy, pedagogy, and infrastructure issue.
Across the EU and Switzerland, higher education institutions are now drafting AI-integrity playbooks—practical frameworks that translate global guidance from UNESCO, COPE (Committee on Publication Ethics), and the European Network for Academic Integrity (ENAI) into course-level implementation.
The question is no longer whether to allow AI use, but how to disclose, evaluate, and supervise it responsibly.
2. From Global Guidance to Institutional Policy
UNESCO: AI Ethics and Academic Fairness
UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence set the tone: it calls for human oversight, transparency, and accountability in all AI-mediated processes.
For universities, this means treating AI as a disclosed collaborator, not an invisible ghostwriter.
COPE: Publication Ethics Meets Coursework
COPE’s 2023 position statement clarifies that “tools such as ChatGPT cannot be authors,” yet their use must be acknowledged. In academia, this principle now translates to requiring AI-disclosure statements in theses, reports, and even term papers.
EU Commission & Swiss SERI (State Secretariat for Education, Research and Innovation)
Both bodies promote a balanced approach — integrating AI literacy and critical thinking into curricula while upholding data protection (GDPR) and plagiarism standards.
Swiss universities, through swissuniversities.ch, emphasize transparency and verification, recommending AI-use declarations similar to citation practices.
3. The Playbook in Action: Turning Policy into Practice
a. Course-Level Disclosure Templates
Many institutions (e.g., the University of Zurich, EPFL, KU Leuven) now include a mandatory AI disclosure section in student submissions.
Typical template:
> AI Use Declaration:
> This work was supported by [tool name(s)] for [purpose, e.g., grammar checking, idea structuring, data visualization].
> I confirm that the intellectual contribution, analysis, and interpretation are my own.
Such templates encourage honesty and awareness rather than punishment. They make invisible digital labor visible — a cornerstone of research integrity.
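Where portals collect these declarations digitally (as in the Münster example in Section 4), the template maps naturally onto a small structured record. The sketch below is a minimal, hypothetical Python schema; the field names and validation rules are assumptions for illustration, not drawn from any university's actual system:

```python
# Hypothetical schema for an AI-use declaration as a submission portal
# might store it; all field names and rules here are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIUseDeclaration:
    tools: list[str] = field(default_factory=list)     # e.g., ["ChatGPT", "DeepL"]
    purposes: list[str] = field(default_factory=list)  # e.g., ["grammar checking"]
    own_work_confirmed: bool = False                    # authorship affirmation

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the form is complete."""
        problems = []
        if not self.tools:
            problems.append("Name each tool used, or state 'none'.")
        if not self.purposes:
            problems.append("State the purpose of each tool.")
        if not self.own_work_confirmed:
            problems.append("Confirm that the intellectual contribution is your own.")
        return problems

# A portal could block submission until validate() returns an empty list.
decl = AIUseDeclaration(tools=["ChatGPT"], purposes=["idea structuring"])
print(decl.validate())  # ['Confirm that the intellectual contribution is your own.']
```

Encoding the declaration this way keeps it machine-checkable without changing its meaning: the same three commitments (tools, purposes, authorship) that appear in the prose template.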
b. Assessment Rubrics: Evaluating Human + AI Collaboration
Traditional rubrics focused on originality, analysis, and writing style.
Modern rubrics now add dimensions like:
| Criterion | Description | Example Practice |
|---|---|---|
| Transparency of AI Use | Declares when, how, and why AI tools were used | Marks awarded for a clear disclosure statement |
| Critical Oversight | Shows ability to verify or critique AI output | Students annotate AI-generated sections |
| Human Judgment | Demonstrates personal reasoning over automation | Reflection section or viva-based questioning |
This helps educators differentiate authentic understanding from fluent mimicry.
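To make such rubrics auditable, the criteria can be turned into explicit weights that are published alongside the rubric. The snippet below is a minimal sketch of that idea; the weights, the 0–5 mark scale, and the example marks are assumptions for illustration, not a published scoring scheme:

```python
# Illustrative weighted scoring over the three criteria in the table above.
# The weights and the 0-5 mark scale are assumptions, not an official rubric.
RUBRIC_WEIGHTS = {
    "transparency_of_ai_use": 0.3,
    "critical_oversight": 0.4,
    "human_judgment": 0.3,
}

def weighted_score(marks: dict[str, int], max_mark: int = 5) -> float:
    """Combine per-criterion marks (0..max_mark) into a 0-100 score."""
    return round(100 * sum(
        RUBRIC_WEIGHTS[criterion] * mark / max_mark
        for criterion, mark in marks.items()
    ), 1)

print(weighted_score({
    "transparency_of_ai_use": 5,  # clear disclosure statement
    "critical_oversight": 3,      # only some AI output was verified
    "human_judgment": 4,          # solid reflection section
}))  # -> 78.0
```

Keeping the weights in one place makes the grading policy easy to share with students in advance, which supports the transparency goal the rubric itself rewards.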
c. Viva and Oral Defense Checks
Oral examinations are re-emerging as a way to level the playing field.
Swiss and Dutch universities, in particular, use mini-viva models to confirm authorship and comprehension.
A 10-minute verbal discussion of a student’s AI-assisted essay often reveals more about integrity than any plagiarism detector.
Checklist for an AI-integrity viva:
- Ask how AI was used and what outputs were verified.
- Request examples of model error correction or data sourcing.
- Evaluate reflection on ethical implications, not just content accuracy.
This approach prioritizes AI literacy over policing, aligning with UNESCO’s human-centered education values.
4. Case Snapshots: EU and CH Implementation
| Country | Example Initiative | Key Takeaway |
|---|---|---|
| Switzerland | University of Geneva’s “AI in Assessment” guidelines (2024) | Standardized AI disclosure form and ethics module for all faculties |
| Germany | University of Münster’s “AI Transparency Statement” | Integration of AI-use field in digital submission portals |
| Belgium | KU Leuven’s AI-Integrity Rubric Project | Faculty-specific adaptation of ENAI guidance |
| Finland | University of Helsinki’s open Generative AI Literacy MOOC | Students learn to use AI critically and cite it |
| France | Sorbonne’s Oral Integrity Checks | Combines AI-detection reports with viva evaluation |
These efforts share a pragmatic ethos: AI isn’t banned; it’s contextualized.
5. The Next Step: Building a Culture, Not a Rulebook
True academic integrity with AI doesn’t come from stricter policing — it grows from shared norms.
Faculty need time, templates, and training. Students need trust and transparent expectations.
The future playbook will be less about “detecting AI” and more about mentoring its mindful use.
AI integrity ≠ AI prohibition.
It’s about reasserting the human role in thinking, judging, and creating knowledge.
6. Resources and References
- UNESCO (2021): Recommendation on the Ethics of Artificial Intelligence
- COPE (2023): Authorship and AI Tools (position statement)
- ENAI (2023): Recommendations on the Ethical Use of Artificial Intelligence in Education
- European Commission (2019): Ethics Guidelines for Trustworthy AI
Author’s Commentary
Universities in Europe and Switzerland are crafting not just policies but pedagogical blueprints. The shift is subtle but profound: from punishing AI misuse to cultivating AI fluency and reflection.
If academic integrity was once about preventing copying, it’s now about understanding collaboration — between humans and machines.