A lot of university courses don’t have exams. Rather, the students are graded based on projects, papers, or presentations.
I’ve been teaching at a university for more than 20 years, and only a few times have I given students a final exam. I used to assign final papers, but I stopped doing that in the spring of 2023, when ChatGPT was becoming widely known and I couldn’t decide how to deal with its use by students.
The big issue, in my opinion, is that AI can be used productively and ethically in education; it can be used to cheat; and there’s a huge gray area between those two extremes, where there doesn’t seem to be any consensus yet.
For example, suppose a student uses Claude to brainstorm topics, then chooses one from among those topics and researches it in depth, then does some more brainstorming with ChatGPT based on what he or she found, and then jots down what he or she wants to say as bullet points. Finally, Gemini is used to write a paper that presents the information in those bullet points in a logical and well-formed manner, and the student checks the paper and makes revisions before turning it in. Is that okay?
When I’ve discussed this issue with humanities faculty, they’ve generally regarded that kind of AI-assisted writing to be cheating. Science faculty, however, have been more receptive to it, as they care more about the accuracy and originality of the ideas than about how the words are strung together. My sample was small, though, and I am sure that there are different opinions on both sides.
I retired from my full-time university post in March 2023. Now that I’m teaching only one class a semester, I have each student do a final one-on-one presentation and interview with me, which makes up a large part of their grade. I’m able to do that because I’m teaching only that one class. If I had a full teaching load with a hundred students or more total, there’s no way I could interview each one individually.
Outside of labs and a capstone project course, I don't think I had a single class across a dual-degree program that didn't have a midterm and a final. I remember only one class that had a take-home final, and only one that allowed open notes on the exams. For most humanities courses we had in-class essays. For foreign language classes there were written components and interviews with TAs. For engineering/math there were tons of math problems. Even a C programming class had a written exam (where we had to write correct programs by hand). This was ~10-15 years ago at a public university in the US.
I graduated before LLMs but after WolframAlpha and I essentially cheated my way through calculus I and II. Lenient grade weighting made 90s on the homework and 60s on the exams enough to slide through with a C. Funnily enough, now that it's over a decade later and I know more about myself and my neurodivergent patterns, I feel much more able and interested in actually learning calculus. I'm looking forward to seeing the pedagogical changes that result from LLMs enabling this sort of trickery for all subjects.
If you use AI exclusively, I assume you won't. But I feel like most people who use it do so to save some time: they have the general idea of what they want to say and just need a little help polishing it into a final form.
Like that joke where you write a summary, use AI to expand it into a full message, and then the recipient uses AI to condense it back into a summary. It's no longer a joke - it's really happening.
I've also seen students use AI to understand material: instead of reading the material themselves, they feed it through AI and have it explained to them. That points to either a mediocre student who'll have a hard time if they ever need to do original research, or a bad teacher. Either way, they'll do OK on the exams.