It’s 10:37 p.m. on a Thursday, and you just remembered you have a discussion post due at midnight. You’re struggling to craft an acceptable response in time to catch the last Bill’s Bus downtown when you remember your friend raving about ChatGPT the week before. This easy solution is too tempting to ignore, and you quickly plug in the prompt, copying and pasting the answer ChatGPT spits out into the submission box. Does this one-time weakness become the new way you do all of your assignments? Your professors have been wondering the same thing.

If you talk to faculty members, you’ll quickly find there’s little consensus beyond the recognition that AI raises complex questions about how they teach and how students learn. I had the opportunity to learn about the situation by conducting interviews and focus groups at UC Santa Barbara. I found that professors are trying to adapt to the rise of large language models, like ChatGPT, without established guidelines or a clear picture of how their students are currently using them.

Some are banning the use of artificial intelligence (AI) and returning to in-person tests to avoid the potential for cheating. Others are putting effort into creating assignments that challenge students to use ChatGPT to produce answers and then compare or critique its responses against their own. A third group of professors reports encouraging students to use ChatGPT’s assistance with all their assignments, using the logic that it evens the playing field and provides opportunities for students who have fewer resources to keep up.

Despite this diversity in classroom policies, students overwhelmingly report feeling afraid of using ChatGPT because of how often they’ve been warned against it. The majority of students I met described feeling confused about faculty expectations, with one student bluntly stating that they “wish professors wouldn’t be so weird about it.” Students who do use it largely use it either as a learning resource to help them study, break apart difficult concepts, generate ideas and improve their writing skills, or to complete “busywork” when they’re feeling overwhelmed and burned out. A minority of students also raised concerns about the ethical issues surrounding large language models, including replicating bias and a lack of transparency around the sources these models draw from.

One of the more effective solutions to the issue of AI in the classroom appears to be integrating ChatGPT into coursework with an emphasis on critiquing its responses. 

The assignment overview of one such task, assigned in an upper division political science course, states, “An important part of a university education today is gaining systematic experience with AI tools and developing skills for using these tools intelligently and responsibly. In this assignment, you will take some steps in that direction.” 

To complete the project, students were given an essay prompt and asked to have ChatGPT produce an answer. Then, they were asked to evaluate some of the strengths and weaknesses of the response before editing and rewriting the essay to include their own perspectives. Students from that class who were interviewed largely reported feeling excited by the unique assignment and described learning a lot about not only what ChatGPT was capable of but also the areas in which it wasn’t helpful. 

Outside of these kinds of structured assignments, students lack a real understanding of how to navigate ChatGPT ethically and effectively, so they either use it without fully understanding its limitations or don’t use it at all. They feel caught between professors who encourage them to use it without explaining how and those who tell them not to use it at all without offering a way to overcome existing issues of burnout. 

The issue becomes more pressing when you factor in the importance of AI literacy for career readiness. Questions arise about the university’s responsibility to prepare its students to leave college as efficient workers and hireable job candidates. One student, considering this problem, asked why they should be expected not to use a tool that they will need to be familiar with when entering the workforce. Another student, adding to this, argued that if the university isn’t letting them use ChatGPT, it should provide better resources for those tasks where students most want to use AI.

The simplest solution might seem to be letting students do what they want, but that would ignore the complicated realities of AI. Some professors point out that it’s a potentially dangerous resource for students who are new to university learning; if students haven’t honed their foundational critical thinking skills, they won’t be able to engage with ChatGPT’s answers in a meaningful way. 

This is a concern echoed by students who noted that they feel like they’re just “turning their brain off” and aren’t doing any critical thinking when they use AI. Some students also pointed out that they weren’t impressed by ChatGPT-generated responses, such as one who noted, “it’ll give a really eloquent answer, but it doesn’t necessarily give the right answer or the most detailed answer.” Many students seem to consider ChatGPT a supplemental resource rather than a catch-all solution to the burdens of coursework. 

What I learned from talking with students is that some professors’ policies of restricting AI use appear to be ineffective. More importantly, restrictions limit some of the positive contributions students feel ChatGPT can make for them.

“If they taught us how to use it as a tool of learning, they wouldn’t have to restrict it so much,” one student argued. 

Some first-generation students emphasized that ChatGPT offers the opportunity to fill some of the gaps in learning they had from high school, provide information about resources they don’t normally have access to and make them more confident in their abilities because they know they have it as a resource.

My discussions with faculty and students convinced me that AI can, and should, be introduced as a tool of learning. Students need to build skills that can’t be replaced by AI while also developing literacy in how to use it for learning and efficiency. As with any other tool, students must be taught to wield it ethically and effectively, but the current environment provides little room for that.

Professors, responding to the pressures of increased instances of cheating in their courses, are doing their best to adapt their coursework with the little information they have. Students, feeling the pressures from faculty, are either shying away from ChatGPT entirely or using it without a full understanding of how the technology could help, rather than replace, their learning journey. 

At a major research university like UCSB, it’s difficult for professors to carve out time to draw up new lesson plans and attend AI trainings alongside the constraints that come with being a full-time researcher and educator. It becomes even more difficult when faculty have only a foggy picture of how students are currently using AI, so they operate on the assumption that all students will use it when given the chance.

Students and professors are not only unsure about the technology, but they are unsure about one another. This is making things worse than they have to be. The next step toward solutions is consistent dialogue among professors, graduate students and undergraduates. The university has been slow to react to these needs, meaning that current students aren’t being prepared for their careers and incoming students may be deterred from attending a school that does so little to prepare them for an AI-assisted future.

It’s crucial that UCSB administration recognizes the importance of centering student interests and supporting faculty in their attempts to develop standards for what an effective AI policy looks like. Only with this emphasis can we move away from the current state of confusion toward a sustainable and responsible technological future.


A version of this article appeared on pg. 10 of the April 11, 2024 print edition of the Daily Nexus.