As the semester ends, I reflect on how generative artificial intelligence tools have affected teaching and learning in my classroom at the FH Salzburg University of Applied Sciences in Austria. Although less than two years have passed since the introduction of ChatGPT in November 2022, a paradigm shift has taken place.
Today, students arrive at university adept at using a range of AI tools. No longer are they simply digital natives; they have become a generation of AI natives.
When reviewing the final papers of last year’s undergraduate students, I was amazed by the writing quality. It was good, almost too good for my non-native English speakers. What stood out was the archaic phrasing, reminiscent of 19th-century prose, used to explain 21st-century technical processes. Could it be that I was grading AI-generated text? Even though our university has established guidelines limiting the use of AI tools for coursework, I could not be certain whether my students were using ChatGPT. I needed a new strategy to deal with AI.
New strategy
In the following semester, I had my students discuss the limitations and impacts of AI tools, examining questions of accuracy, dependability, ethics, and bias. Their first written assignment was an essay on whether cybersecurity depends on technology alone. After doing their research, the students wrote their essays in a computer lab, without access to the internet or any AI tools. This gave me unadulterated samples of their writing, which I could use to verify the authenticity of future assignments.
One of my colleagues who teaches software development modified his testing methods to ensure that students could not access AI tools during exams. Instead of having students write code on computers with internet connections, he now has them design their programs with pen and paper and asks follow-up questions about why they made certain choices.
It is highly likely that our students are using AI tools to complete assignments. For the sake of expediency, some students automatically accept the generated responses without reviewing them for accuracy or relevance to the original assignment. Those who skip this review rarely retain the content. This is frustrating when teaching the fundamentals of writing and programming.
When students rely on code suggested by AI tools and it runs, they seldom question or review the code to see how it is constructed. The result is minimal learning. How can we guide students to develop critical views of AI-generated information?
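To make this concrete, consider a small, purely hypothetical sketch of the kind of code an AI assistant might suggest for a beginner’s exercise. The function and scenario are my own invention; the point is that the code runs without complaint while quietly making a decision a student who never reads it would not notice.

    # Hypothetical AI-suggested helper for averaging exam grades.
    def average_grade(grades):
        # Quietly decides that "no grades" means an average of zero,
        # hiding a missing-data case the student never sees.
        if not grades:
            return 0.0
        return sum(grades) / len(grades)

    print(average_grade([85, 92, 78]))  # 85.0 -- looks correct
    print(average_grade([]))            # 0.0 -- runs, but is it right?

A student who pastes this in, sees it run, and moves on never confronts the choice buried in the empty-list branch; that moment of questioning is exactly the learning that gets skipped.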
Digital literacy can prepare students to manage AI rather than be managed by it. We need to understand the processes driving AI machine learning, so that we know how to appropriately apply AI tools and are aware of the consequences.
Limited learning
As educators, we are confronted with the question of whether current teaching practices remain relevant in higher education. We are also concerned with how students’ learning is impeded when they employ generative AI in some foundational learning situations.
A student may lack the subject knowledge necessary to assess the accuracy and appropriateness of the generated content. The amount you learn when somebody else does your homework is limited, especially if that “somebody” is an AI bot. Without cognitive investment, any learning outcome is negligible.
Among the most popular generative AI tools employed by university students are ChatGPT and a range of writing applications, which are astoundingly proficient at replicating human language. Yet some of these tools also fabricate plausible-sounding but inaccurate results. Students need to develop a healthy skepticism toward AI-generated outputs. But without understanding the basics, students have minimal foundations for comparing and critically reviewing information. If AI is doing the foundational work, will students learn basic skills and understand complex content?
We need evidence of how AI use in education affects students’ cognitive and social development, and of whether these tools inhibit the development of critical-thinking skills.
How can educational institutions prepare for the disruptions that AI brings to teaching and learning? If we arrange learning situations so students are challenged to think critically about a subject and then reflectively explain how they would solve problems, justifying their reasoning and conclusions, can we avoid the negative influences of AI? Or do we need to shield students by creating controlled spaces with no access to AI, to support their agency?
What do educators want? We want students to think for themselves and not just use generative AI to think for them.