Teaching Humanities in the AI Era: Is this the Bargaining or the Acceptance Stage?

“What, again, is education? The non-coercive rearranging of desire.”
That’s D. Graham Burnett, professor of history at Princeton University, in an article in The New Yorker that you should read.
In the article, Burnett, whose area is history of science and who has focused quite a bit on the topic of attention, documents some of his experiments using large language models (LLMs) as research and teaching tools. In this post I’ll focus on the teaching side.
In one of his courses, “Attention and Modernity: Mind, Media, and the Senses”, he gave the students an assignment that had them make use of one of the LLMs.
An assignment in my class asked students to engage one of the new A.I. tools in a conversation about the history of attention. The idea was to let them take a topic that they now understood in some depth and explore what these systems could do with it. It was also a chance to confront the attention economy’s “killer app”: totally algorithmic pseudo-persons who are sensitive, competent, and infinitely patient; know everything about everyone; and will, of course, be turned to the business of extracting money from us. These systems promise a new mode of attention capture—what some are calling the “intimacy economy” (“human fracking” comes closer to the truth). The assignment was simple: have a conversation with a chatbot about the history of attention, edit the text down to four pages, and turn it in.
How did the students do? Burnett says:
Reading the results, on my living-room couch, turned out to be the most profound experience of my teaching career. I’m not sure how to describe it. In a basic way, I felt I was watching a new kind of creature being born, and also watching a generation come face to face with that birth: an encounter with something part sibling, part rival, part careless child-god, part mechanomorphic shadow—an alien familiar.
He then proceeds to provide several examples of the truly fascinating work his students submitted. Seriously, read the article.
What’s his takeaway from this “for those who are responsible for the humanistic tradition—those of us who serve as custodians of historical consciousness”? Those whose job is “helping others hold those artifacts and insights in their hands, however briefly, and of considering what ought to be reserved from the ever-sucking vortex of oblivion—and why”?

Adopting Gayatri Chakravorty Spivak’s definition of education as the “non-coercive rearranging of desire,” he sees an opportunity, and seems to accept that humanities education from here on in has to be different:
You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearranging of desire.
Oh… just that? Just give them work they want to do, and help them want to do it?
Um, hey, philosophers—have any of you tried doing this?
What’s that? Only every semester since you started teaching?
So you’re good then?
No?
Huh.
Burnett’s sharing of his experiences teaching with AI is indeed interesting and helpful. But, right off the bat, we should be cautious about generalizing from them. After all, he is teaching Princeton students… at Princeton. In terms of intelligence, knowledge, motivation, and environment, they are not representative of college students in the United States. That they are especially receptive to help that gets them to want to do work their professor thinks is worthwhile—that there even is academic work they want to do—does not tell us much about the over 99% of US undergraduates not enrolled in Ivy League or otherwise elite institutions.
Burnett’s course seems to have had a rather small enrollment, which is necessary for the opportunities for (and expectations of) in-person discussion of the students’ projects at a level sufficient to help incentivize hard work and deter fakery. And a light teaching load no doubt helps, too. What about lower level courses with enrollments in the 40- to 240-person range? And what about instructors teaching two or three or more other courses simultaneously? How common is it for instructors to have the time inside or outside of class to run such an assignment well?
I think that many academics told to “give them work they want to do and help them want to do it” will be thinking, “yes, but…” What’s the but?
Still, maybe this is too much “easier said than done” and not enough creative brainstorming about the various ways to help various kinds of students at various kinds of institutions want to do some version of the work we think they should do. Your suggestions, as always, are welcome.
Burnett asks, about AI tools, “Do they herald the end of ‘the humanities’?” He answers: “In one sense, absolutely.” You know a “but” is coming, and indeed it does come, followed by the quote which began this post. Pessimistically, the meaning of “one sense” is “for most of you.” It would be nice to not be so pessimistic. Can you help?
I note that the third stage of grief, “bargaining,” according to Wikipedia, “involves the hope that the individual can avoid a cause of grief. Usually, the negotiation for an extended life is made in exchange for a reformed lifestyle.”

I think I might have a useful perspective here. I teach at a public system campus, have the equivalent of a 3:4 load, and when I teach ethics, it’s multiple sections of 35 students. In other words, I do not teach at Princeton.
There are two things that stick out to me:
First, Weinberg’s point about helping students do work they want to do. He frames it skeptically, which I think is understandable to a certain degree.
Oh… just that? Just give them work they want to do, and help them want to do it?

Um, hey, philosophers—have any of you tried doing this?

What’s that? Only every semester since you started teaching?
But—and I cannot overstate this—learning research suggests this is genuinely the best way to get students to learn and learn deeply. It’s called “situated skill development” and is basically getting students to apply their learning via a project they find meaningful. Yes, this is hard work, especially in a discipline students aren’t familiar with and probably don’t enter into with a sense that it will be useful beyond ticking some distribution box. My campus is entirely health science-focused, so this is literally every student I have. But if you can find a way to do this, the results are EXPLOSIVE.
Weinberg goes on:
And a light teaching load no doubt helps, too. What about lower level courses with enrollments in the 40- to 240-person range? And what about instructors teaching two or three or more other courses simultaneously? How common is it for instructors to have the time inside or outside of class to run such an assignment well?
I don’t know how common it is, but I feel like I’ve done it. My ethics class revolves around an Apology assignment where students synthesize what they’ve learned from class to create their own view of a flourishing life. In my other classes, I work with students to understand what they want to say about what they’ve learned in class and how they want to say it. (My class on the ethics and politics of abortion, for example, is currently putting together a gallery display for the university research symposium as a way of engaging with feminist philosophical insights into different eras of abortion in the US.) Yes, it creates a lot of work on my end, but the results are worth it. Doing things this way seems to radically disincentivize using tools like AI because students recognize they want to do the work themselves and say something genuine. (One other thing I do that seems to help in this regard is alternative grading like contract grading. It seems to get students to focus on genuine learning in a way that traditional grading doesn’t. I haven’t done traditional grading in almost five years and you’ll kill me before you get me to go back.)
You can see some examples of past work here: https://bsky.app/profile/bcnjake.bsky.social/post/3kaaadfi3gg2y
Again, I teach the equivalent of a 3:4 with two and sometimes as many as four preps. It isn’t the heaviest load in the world, but it also isn’t a 2:2 with one prep, TAs to do the grading and 1/3 of the teaching, and a ten-person graduate seminar at some point in the year. I’ll also note I work in a context where there’s no major or minor, so I don’t have to worry about professionalizing students. But this is doable. It’s more work and it requires rethinking what counts as “doing philosophy” at a pretty fundamental level, but it’s doable.
“Education” has two senses: one roughly akin to “instruction” and the other roughly akin to “cultivation”. Burnett seems to be exclusively talking about the latter. (And, honestly, that’s wonderful for him.)
It’s nice to teach cultivation courses, where everyone is there because they genuinely want to be. They wouldn’t just take this assignment and tell the AI to generate a dialogue with itself.
Your average gened course is instruction and often coerced. Students see this as transactional and will produce what they are told to produce by any means available. However clever you think you are about letting the students use AI, they can just tell the AI what you told them.
The response to AI here is the same one math teachers used in response to the calculator: in-person, graded work. Those students who take a liking to philosophy will take up cultivation courses later.
Burnett is however right about something important: many students come to us for cultivation and will relish the opportunity to do their own work even if they could cheat. We ought not forget this, and we must continue giving them the opportunity.
It would be nice if we could say that instruction is for school teachers and cultivation is for college professors, but that’s just plainly not true in our world. Perhaps it is true at Princeton, but I have my doubts.
The future of the humanities in its current shape thus rests on whether the value of *instruction* can be defended against its detractors. Amusingly, our math teachers were wrong when they told us we wouldn’t always have a calculator on us. Now that we do, we still need to be instructed in basic math. Hopefully, the same case can be made for basic philosophical thinking.
“Hopefully”, since it may well be that humanities instruction will go away. We’d be left with cultivation only, essentially returning such education to the elite pastime it used to be. That would be bad news for our profession. And also, at least according to John Dewey, bad news for democracy. (Like it needed more bad news.)
I’m aware it was used only as a rhetorical device, but I’d like to take this as an opportunity to say that the Kübler-Ross stage model of grief is a bit out of touch with present-day grief studies (and the philosophy of grief). Sorry for the digression.
I can’t imagine what kind of job a BA in the humanities might qualify one for. It would be foolish of anyone to hire an applicant simply because they have a college degree. Some of the stupidest people I have ever met had a bachelor’s. During my time in college, I didn’t learn as much from my professors as I taught myself. I went to them for confirmation and approval of my arguments. You will never be able to make your students read or study, but you can test their understanding and reasoning: in-class, closed-book tests without electronic aids. If a student isn’t motivated enough to do the reading, they’ll fail the test. Most of them will fail even if they did the reading; either they can’t articulate an argument or they can’t even write.
It is worth noting that Burnett could have easily been reading submissions that were completely chatbot-created. Many students are becoming increasingly adept at using chatbots to do their work in ways that make it look original. And we can easily fool ourselves into thinking that we have found the special way to make sure that students do their own work (for a couple years, I fooled myself into thinking this).
Among the reasons I haven’t yet incorporated AI into the assignments I give is that I think use of most LLMs is immoral and I don’t want to encourage (and definitely don’t want to require) my students to engage in this kind of immorality. I wonder if anyone has worked through this worry in a helpful way, and if so, what advice you have.
My advice is don’t use LLMs. They’re deeply immoral (their training data is built on theft), probably environmentally bad, aren’t actually designed to provide accurate information (i.e., they’re bullshit machines), and the emerging evidence is that students who use LLMs see a decrease in their academic skills.
It’s just wrong. Ethically, pedagogically, it’s just wrong and we as philosophers interested in promoting ethical behavior and good epistemic habits should have no part in it.
Go to https://cosmos-institute.org/. My advice to everyone is to use it, because by not doing so you’re promoting the AI divide and you’re gonna fall behind. Extrapolating your moral compass, we’d say: don’t be a part of the economy, don’t buy a car, don’t use electricity; it’s all immoral (their building blocks are built on theft).
Colleagues, you’re gonna love this: https://chatgpt.com/share/6810265f-6054-8001-bdf6-50364019fc73