I have had the Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia from June 27 open for some time now. I thought I wanted to sign it, but got stuck on the first paragraphs multiple times:

With this letter we take a principled stand against the proliferation of so-called ‘AI’ technologies in universities. As an educational institution, we cannot condone the uncritical use of AI by students, faculty, or leadership. We also call for reconsidering any direct financial relationships between Dutch universities and AI companies. The unfettered introduction of AI technology leads to contravention of the spirit of the EU Al act. It undermines our basic pedagogical values and the principles of scientific integrity. It prevents us from maintaining our standards of independence and transparency. And most concerning, AI use has been shown to hinder learning and deskill critical thought.

For me, these few lines contain more than 25 years of research, and I know the complexities. Before I can co-sign this letter, I need to understand the details. There is no definition of ‘AI’ here, and it mentions the EU AI Act (or rather, the letter actually writes “Al” act, with a lowercase L as in “letter”, I notice now that I read the content in another font), but I have not read the EU AI Act yet (it is 144 pages of legal text).

Let me first say, I am not a lawyer (IANAL). I am not versed in the specific legal definitions of tightly defined and controlled words.

Reading the EU AI Act, I found a reassuring opening statement (repeated later with more context, links to other laws, etc.):

to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union

We clearly see how these protections are currently routinely violated. The Act also contains an important exemption:

This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.

In Dutch this is officially translated as “wetenschappelijk onderzoek”, so “scientific research” legally seems to include the humanities, etc., and is not limited to the natural sciences [citation needed].

The EU AI Act also outlines a definition of “AI”, leaning towards machine learning, but the border between deterministic, rule-based algorithms and machine-learned patterns for prediction remains a bit vague to me. But I can live with it.
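To show why that border feels blurry to me, here is a toy sketch (my own illustration, not from the Act or the Letter, assuming Python with scikit-learn and entirely made-up data): the same “more than two exclamation marks means spam” decision, implemented once as a hand-written rule and once as a pattern fitted from labelled examples.

```python
# Toy illustration: a hand-written rule versus a learned pattern.
# The data and the threshold are made up for this example.
from sklearn.linear_model import LogisticRegression

def rule_based(n_exclamations: int) -> bool:
    """Deterministic, rule-based: a human wrote the threshold down."""
    return n_exclamations > 2

# Machine-learned: the same threshold is inferred from labelled examples.
X = [[0], [1], [2], [3], [4], [5]]   # feature: number of '!' characters
y = [0, 0, 0, 1, 1, 1]               # label: 1 = spam, 0 = not spam
model = LogisticRegression().fit(X, y)

for n in range(6):
    print(n, rule_based(n), bool(model.predict([[n]])[0]))
```

Both give the same answers on this toy data, yet only the second one was inferred from data rather than written down by a person, which is roughly where a machine-learning-leaning definition would draw the line.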

The Open Letter’s “contravention of the spirit of the EU Al act” gets context here too. It has to be the spirit, because the law itself does not apply to academia. Good, clarified. The Letter continues with:

It undermines our basic pedagogical values and the principles of scientific integrity. It prevents us from maintaining our standards of independence and transparency. And most concerning, AI use has been shown to hinder learning and deskill critical thought.

Yes, that clearly links to the EU AI Act’s protection of rights. Not explicitly listed, maybe on purpose or maybe for legal reasons, are the international human rights, which include the right to benefit from science, but I think these are still in the spirit of the EU AI Act. And if AI fetters our ability to learn (yes, there is scientific evidence for that [citation needed]), then it violates the EU AI Act (IANAL).

What the Open Letter expects

The next part of the Open Letter lists what the signers expect from our universities. I will reflect on each of them.

Resist the introduction of AI in our own software systems, from Microsoft to OpenAI to Apple. It is not in our interests to let our processes be corrupted and give away our data to be used to train models that are not only useless to us, but also harmful.

The intrinsic problem, and why I think it is fair to call out these companies, is, as the letter explains, a clear conflict of interest. The goal of companies is to make profit (and in the Western world, as much as possible), not to serve human or scientific needs. In this respect, companies like Elsevier could just as well have been mentioned too (see e.g. this post by Prof. Van Rooij, actually the 2nd signature on the letter).

Ban AI use in the classroom for student assignments, in the same way we ban essay mills and other forms of plagiarism. Students must be protected from de-skilling and allowed space and time to perform their assignments themselves.

About a year ago, I was pleasantly surprised by the depth of the discussion at Maastricht University on how and when to use AI, and by default not to. This one is really complicated, and it matters when and how the AI is used. After all, the spirit of the EU AI Act expects us to use AI in research (to trigger innovation). So, I cannot agree with the literal statement, but I fully agree with the spirit, particularly combined with the clear “Stop the Uncritical Adoption of AI Technologies in Academia” in the title of the Open Letter.

I read this line like this: AI in the classroom must have a purpose that aligns with the EU AI Act. That means it must not be used for writing essays and reports. I am old enough to remember the academic discussions (at Radboud University) about writing and the clear hesitance among scholars about the use of written assignments: “I want to test their scientific knowledge and reasoning skills, not their ability to write narratives”. And LLMs, like ChatGPT but also the European, more open variants, write narratives, so the written report or essay is no longer a valid way to assess a student’s scientific learning progress.

So, alternatively, we should very carefully and scientifically evaluate which forms of assessment we perform; banning AI in the classroom may just be distracting from a more fundamental problem. Anyways… if you continue using writing assignments to test progress in learning, you must ban the use of AI in that process. As a teaching institute, you must be testing the student, not some piece of software.

Cease normalising the AI hype and the lies which are prevalent in the technology industry’s framing of these technologies. The technologies do not have the advertised capacities and their adoption puts students and academics at risk of violating ethical, legal, scholarly, and scientific standards of reliability, sustainability, and safety.

Sounds like a no-brainer. But I too find my own university uncritically promoting AI. Maybe they tested it well and just forgot to share that. But hey, scientific quality goes all ways.

Fortify our academic freedom as university staff to enforce these principles and standards in our classrooms and our research as well as on the computer systems we are obliged to use as part of our work. We as academics have the right to our own spaces.

Again, a no-brainer. But important to add. It must be said, as it is an intrinsic part of Recognition & Rewards. If you cannot guarantee academic freedom, then there is something seriously wrong with your R&R.

Sustain critical thinking on AI and promote critical engagement with technology on a firm academic footing. Scholarly discussion must be free from the conflicts of interest caused by industry funding, and reasoned resistance must always be an option.

Yeah, this is something that is underestimated. Part of our academic teaching is this critical thinking. It returns in academic reading (did you already read “What Little Red Riding Hood Can Teach Us about Reading Science”, doi:10.1515/9783110782844-010, by Monica Gonzalez-Marquez et al.?), scientific programming, and data analysis, and our teaching has been lacking here. Not just for the new AI forms, but also for the old algorithms. I have seen this, and the scientific literature is riddled with mistakes, just because our peer reviewers are not sufficiently skilled. This will take effort. I know, it was a major part of my PhD thesis.

Of course, this is exactly why I have been so active in Open Science. Without Open Science, we cannot work in the spirit of the EU AI Act. It’s nothing new. It’s just that big money has found in AI a way to profit at the expense of humans.

So, go read that Open Letter and sign too!