As students, we are here to learn, grow and challenge ourselves. Therefore, fully outsourcing our labour to an AI is not only shortsighted but goes against the very purpose of being at university. In a recent opinion piece, Niels ter Meer rightly points this out, describing how students – the stereotypical lazy beings that we are – turn off their brains and outsource their labour.
In this op-ed, I don't want to elaborate on the dangers of this unethical, hallucinating stochastic parrot. For those who want to learn more, useful keywords are bias, hallucination, training data and the dark side of AI.
To get it out of the way, I first want to address Ter Meer's notion of students ‘ogling’ out of laziness. I am confident that plenty of students at our university can demonstrate integrity and maturity in this regard.
About the Author:
Fridtjof Otto is a third-year ATLAS student specialising in Science & Technology Policy and is currently writing his thesis on deliberating the impact of Large Language Models (LLMs) on university education. He is also a member of the University Council and part of the UT-wide working group on AI in Education. Unlike Niels ter Meer, he requested the support of his co-writer Alex, the AI.
While I acknowledge the concerns above and don't want to play them down, we should not overlook the opportunities LLMs offer for education. My main message is a call to action, encouraging the UT community to investigate the opportunities out there. Much like with the calculator before it, we should not focus our attention on whether to ban this disruptive technology, but rather on how we can responsibly implement it. Let me be clear: handing the work of writing your thesis entirely to an LLM would negate the point of the educational journey and is plainly academic misconduct.
An example of appropriate use is getting one's draft proofread by an LLM, which can be advantageous for students struggling with writing in academic English. The LLM recognises structural cohesiveness, the flow of arguments, idioms and more. With the proper prompts, e.g. ‘Give me suggestions to improve this motivation letter and explain my mistakes’, one can learn a great deal about one's writing skills, complementing the existing support of the UT writing centre. For exam preparation, asking ChatGPT to generate critical questions on the topic can reveal potential gaps in understanding, boosting one's personal learning process when used properly.
Finally, for those struggling with the preparatory readings, asking ChatGPT to explain the underlying concepts of a paper in advance can help students focus on the finer details and arguments made by the author while reading the paper. This allows you to get more out of the readings and beyond. All of these approaches are already being successfully used by students at the UT, showcasing the mature use of this emergent technology.
As said, it is important to carefully consider the use of LLMs. With great power comes great responsibility! Taking that responsibility is the first key step for the UT. As a technical university, the UT must now demonstrate how to use ChatGPT responsibly in our education and give this high tech our human touch.
Then, as a community, let us experiment with prompts and seeds, look for biases and find ways to implement this technology responsibly. We have many engaged students eager to get the most out of their time at the UT using these tools responsibly. They are great stakeholders to involve in the policy process of shaping the use of AI in education.
In conclusion, LLMs are a mighty tool that is here to stay. However, it is up to us, together as a community on campus, to use them responsibly, steer their use and implement them cleverly into curricula, while safeguarding educational quality and keeping in mind the purpose of being at university – to learn and grow as individuals.