A recent preprint came across my mailbox through the POD Network Google group (https://groups.google.com/a/podnetwork.org/g/discussion/c/PX7pSvnuOJM): a comprehensive analysis of ChatGPT perception and usage that I found very insightful. You can already tell from the figure above that ChatGPT is mostly used for educational purposes. Furthermore, few individuals use it for lifelong learning, which is unfortunate in my opinion, since it would be a perfect tool for that … if you know the right way to question it! This is why I am personally not worried at all about the onset of AI usage in education: I have long said that the biggest value of a Ph.D. is the capacity to google any question efficiently and accurately (now we just have to replace “google” with “ChatGPT”, that’s all).
Two sentences in the article particularly stood out to me:
“Our findings suggest that users utilize ChatGPT for various reasons and perceive it as a valuable educational tool that can assist students and educators with various time-intensive tasks, such as research, problem-solving, customized feedback and language learning. However, early adopters also expressed concerns about the potential misuse, including cheating, overreliance on the AI tool for answers, indolence and superficial learning.”
I wonder why we hold these two thoughts together when we could simply avoid the second by embracing the change that the first implies. Indeed, we cannot simply add ChatGPT to our educational toolboxes without fundamentally rethinking:
What we ask our students to know in terms of writing as well as knowledge.
How we test our students with take-home essay/short answer exams.
The amount of now unnecessary paperwork we still complete.
etc.
Nonetheless, in the flowchart shared above, the authors provide (in my opinion) a great resource for assessing when one should use AIs like ChatGPT. I particularly like the paper’s emphasis on learning how to check an AI’s work, since these tools are not perfect, perpetuate biases, and only fool non-experts in a given field (try asking ChatGPT about something you are an expert in; you’ll understand what I mean).
Using Bloom’s taxonomy, one can easily adapt learning objectives and goals to integrate AI tools:
Acquiring Factual Knowledge;
Understanding Complex Ideas;
Applying Knowledge to Solve Problems;
Critically Analyzing AI-Generated Content;
Synthesizing Diverse Information; and,
Evaluating AI-Generated Content.
An example of a rule teachers can use to foster a collaborative usage of ChatGPT is provided by the paper’s authors:
“Students are encouraged to use ChatGPT as a supplementary resource to support their learning and research activities, complementing their engagement with primary sources, diverse perspectives and critical thinking skills, but not as a substitute for these essential components of learning. This approach ensures that students do not excessively rely on AI and, in turn, get the space to develop robust cognitive skills necessary for independent learning.”
I believe we need to embrace the AI era and continue the already established trend of focusing on critical thinking in higher education. AIs only reinforce the need to stop testing our students solely on whether they know things; instead, we need to focus more on whether they understand things. Personally, I have noticed learning objectives and course goals being written in that language; however, I am not convinced the associated exams are designed to achieve that goal. Too often, students tell me they are surprised by the toughness of my exams: they have to think hard about each question without being able to pinpoint straight away which concept applies to which question. They do not think of it as unfair; many say openly that they like, as well as understand, the challenge those questions pose to them. They even go as far as telling future students that one must study in an integrative way for my classes, not just cruise on a superficial knowledge of the course content.
Finally, the authors also provide great guidelines for incorporating reflection for students using ChatGPT:
“Educators can facilitate reflection by prompting students to consider the following questions:
How did ChatGPT contribute to my understanding of the topic?
What strategies did I employ to verify the accuracy and reliability of the AI-generated content?
How can I use AI tools responsibly and ethically to support my learning?
What challenges did I encounter while using ChatGPT, and how can I overcome them in the future?”
Educators must embrace the new wave of AIs and integrate these new tools into their courses, since students will need the digital literacy to live in a world where AI is going to be integral. If we focus solely on limiting, constricting, or controlling students’ use of AI in the classroom, we not only do them a disservice, but we do ourselves one as well.