
When I talk to faculty these days about ChatGPT and how AI-generated text is showing up all over their assignments, I encounter a strange but growing apathy. I have watched with some horror as the work in my own courses has devolved over the term: assignments, discussion posts and quizzes increasingly contain language that sounds suspiciously AI generated. I would expect other faculty to be similarly concerned; as a college hearing officer adjudicating academic integrity disputes, I have found professors to be deeply troubled, even personally offended, when their students cheat. But when it comes to ChatGPT, I’ve started to hear something very different. “The students will ultimately be the ones who suffer,” they say. “If students don’t learn what I’m teaching, they’ll fail in life anyway. Nothing I can do about it.”

This type of apathy is unacceptable. Our students deserve better, and frankly, the entire world needs us to care.

Back in January, I, like many others, thought we could design our coursework to outwit students who would rely on AI to complete their assignments. I thought we could create personalized discussion questions, meaningful and engaging essay assignments, and quizzes tied closely enough to course materials to be AI-proof. It turns out I was wrong. Particularly with the arrival of GPT-4, there is very little I can assign to my undergraduates that the computer can’t at least take a stab at. Students may have to fill in a few details and remember to delete or add some phrases, but they can avoid most of the thinking—and save a lot of time. GPT-4 can write essays, compare and contrast options, answer multiple-choice questions and ace standardized tests, and its capacity to analyze data—even a lot of data—that is fed to it is growing. It can write code and make arguments. It tends to make things up, including citations and sources, but it’s right a lot of the time.

In pedagogy circles, there remains an effort to stay optimistic and forward-looking. It is quite easy to find creative assignments that integrate ChatGPT into coursework. These well-meaning assignments use a blend of human and AI thinking—just as the workplace of the future is likely to do. The AI provides content and basic information while the human supplies deeper analytical, ethical and critical thinking. For example, a student might use AI to outline a paper, brainstorm solutions to a tricky management problem or generate ideas for business start-ups. Then the student is expected to turn off the AI and take over the harder work of refining the paper, choosing an option and defending it, or assessing the long-term viability of innovations they didn’t actually create. ChatGPT writes the first draft of a paper, or provides feedback on that draft, but the human revises it.

Let that sink in for a moment. We’re expecting students to use ChatGPT to write a first draft of their paper but then not use it to revise the paper.

I don’t consider myself a pessimist about human nature, but in what world do we humans take a perfectly good tool that helped us get from point A to point B and then decline its offer to take us from point B to point C?

Intrinsic motivation is lovely, but we’re setting an answer key next to our students and then expecting them not to use it. Not to mention that those answers are for tests that require them to rely on rote memorization or engage with content they believe or know to be largely irrelevant to their lives. Many of our students are working full-time and caring for a family. They view a degree as an important stepping-stone to their career, but not as something they were particularly motivated to pursue in the first place. And you think these students will ignore the answer key—at a time when cheating has already been surging? It seems to me we are asking the impossible.

Is higher education truly going to look the other way as our students collectively stop engaging with our curricula?

The tidal wave of AI-generated assignments is coming. By the end of the spring term, I suspect more than half of my writing assignments—the ones I thought were so clever and engaging—will have been authored by ChatGPT, Bing or any of the other AI tools heading our way. Why aren’t we panicking about this? Why isn’t everyone treating this as a crisis? Is it because we have started using ChatGPT to do our work as well? (How incensed can we be about students using a tool that is writing our papers and creating our exams?) Or is it that we don’t see a way forward, so we are pretending not to know our students’ discussion posts are 90 percent AI generated? Or worse—are we patting ourselves on the back because we assume that the motivated students, the good ones, will continue to do the work on their own, and the other (bad) students will suffer some kind of well-deserved unemployment?

I see two possible scenarios that might follow from this collective apathy. In the first, students are just as successful at their jobs as they would have been if they’d done our coursework. In this scenario, I assume society will collectively realize we in higher education have been providing little value for the millions of dollars we have been charging and fire us all. Someone will realize that higher education is focused on the wrong things—the wrong outcomes, the wrong content—and make something better. A higher education for critical thinking, ethics, empathy, human dynamics and problem-solving, perhaps. Skills students really need.

The other scenario is that what we are doing really does matter, and we’ve just doomed our students—particularly the ones juggling work, life and family—to failure. I am particularly concerned about our students in online classes. In face-to-face classes, we can create “AI-free” zones where students engage with coursework, build relationships and tackle problems without an AI helpmate. How will we do this for online classes? The stakes are incredibly high—online degree programs are growing fast and appeal directly to students who have the greatest challenges in finding time to complete their assignments (those with families and full-time jobs), and who may have been skeptical of the value of higher education to begin with. What can we do to make sure these students are not relying exclusively on AI?

To be clear, we are going to need to fight to preserve AI-free thinking. Case in point: a few years ago, I got a backup camera for my car. Now I can barely drive without it. What happens when AI becomes so integrated into our daily decision-making that we become dependent on it?

Either we are a group of people resigned to letting our students suffer, or we are rearranging the proverbial deck chairs on the Titanic, not bothering to try to save ourselves from irrelevance. I don’t like these choices.

Luckily, there is an alternative: recognizing that we are in a crisis situation and throwing all our effort into solving it. It’s time for an all-hands-on-deck discussion about how we save ourselves and our students. Only big ideas need apply.

I’ll start here: we need to dramatically change the scope, structure and purpose of higher education.

In the short term, we need to decide how to address the fact that our assignments are no longer being completed by our students. And let’s be clear: we cannot proctor or browser-lockdown our way out of this situation. So what are the alternatives? One option: have students run their own work through an AI checker like ZeroGPT before submitting. If the text turns up more than 50 percent AI generated, they must revise it; any assignment that still scores above 50 percent will automatically be given a zero. Not because students are cheating, but because their assignments represent too little student-generated value.

Of course, this type of workaround won’t be successful for long. I have no doubt that in the near future, AI checkers will be unable to spot AI-generated text. To address this, I suggest creating AI-free zones—spaces on campus or in the community where students surrender technology and have an AI-free conversation, brainstorming session or group meet-up. Training ourselves and our students to work with AI doesn’t require inviting AI to every conversation we have. In fact, I believe it’s essential that we don’t.

But this won’t be enough, either. The innovation needs to go deeper. Right now, higher education is organized around content specialties—business, chemistry, economics, literature. In a world where content is at everyone’s fingertips all the time, this puts the important things—critical thinking, problem-solving, ethical decision-making, applying context, communicating and working with other humans—in the back seat. What if we rearranged our universities around departments of critical thinking rather than departments of chemistry? Created a school of applied ethics rather than a school of business? And maybe a degree only needs to be two years, not four. We could create certificates in innovation and creative thinking that challenge our students to think like humans, not computers.

We also need to ensure that part of higher education is the development of human relationships. And not just the asynchronous, discussion-board kind: the kind built through synchronous conversation, shared goals and the practice of two-way communication. Businesses have been clamoring for this for years, but higher education still treats soft skills as a condiment, not the main course. We need to reverse that thinking. Building relationships must be one of the essential elements of what we do and who we are.

ChatGPT is enrolled in your courses, and your students are checking out. We are in a crisis. It’s time to start acting like it.

Inara Scott, J.D., is the associate dean for teaching and learning in the College of Business at Oregon State University.
