UT’s AI statement: ‘Appealing to common sense’

| Rense Kuipers

On Wednesday, UT released a statement on artificial intelligence. The university neither bans nor actively promotes the use of AI. Instead, it calls for a collective effort to explore responsible applications.


‘AI is not a passing trend, but a fundamental technology that will have a lasting impact on our university, education, and research,’ the statement from the Executive Board begins. UT does not intend to ban AI use, nor to encourage it indiscriminately: it’s about responsible implementation.

Awareness

As the statement reads: ‘We appeal to professional and academic common sense. We embrace the opportunities AI offers, but remain aware of its complexity and risks. AI is not an end in itself, but a powerful tool that is only effective and legitimate when used in an ethically, legally, and socially responsible manner.’

According to Maarten van Steen, scientific director of the Digital Society Institute and one of the authors of the document, the statement is a starting point. ‘As a university, you primarily want to convey a sense of awareness. Developments in AI are moving so fast that it’s nearly impossible to formulate policy. This statement should be the first step towards proactive and robust policymaking. We need to try to keep pace with these developments.’

AI literacy

UT students and staff will not notice any immediate changes. The university aims to ‘continue investing in guidelines, training, and awareness’, including efforts to strengthen basic knowledge of AI within the community – so-called AI literacy. In education, this is already being addressed through CELT’s ‘AI in Education hub’. Another plan is to develop an ‘AI compliance framework’, focusing on relevant legislation – such as the European AI Act – in relation to education, research, and university operations.

The statement also notes that UT does not yet hold a central licence for commercial generative AI providers such as ChatGPT, Gemini, or Claude, due to legal and privacy concerns. In the interest of privacy and security, UT is currently working on its own AI model. The university also participates in national and European collaborations ‘to develop safe and reliable alternatives’ and reduce dependence on commercial AI providers.

Dedicated ‘AI Office’

The use of artificial intelligence in educational institutions is not without criticism or concern. Last summer, an open letter signed by over a thousand academics called for a halt to the ‘uncritical adoption of AI technologies in higher education’. Generative AI has become a Gordian knot for examination boards in particular, as reports of fraud are difficult to verify. On the other hand, there are proponents who go so far as to say: ‘You’re almost stupid if you don’t use ChatGPT.’

Van Steen emphasises that it’s not a matter of good or bad. ‘Or pros and cons – we’re fairly familiar with those by now. And it’s widely used. AI is here, and we need to find a way to relate to it. That also applies to difficult cases for examination boards: it’s important that we establish clear boundaries.’

That’s one reason why UT recently launched an ‘AI Office’, involving staff from the Digital Society Institute, the LISA service department, and legal experts. ‘The AI Office is the first point of contact for questions or concerns about the topic. From there, we need to try to distil policy,’ says Van Steen, who points to the complexity of the topic. ‘AI developments are moving so fast, and long-term applications are unpredictable. That’s why we need to operate like a lean start-up and avoid falling behind.’
