
This essay is the second in a series about how I used an LLM (large language model) tool in a history class I taught in the fall term of 2024. Part 1 is available here.
Doing history in person
If I wasn’t looking to have JeepyTA serve as my teaching assistant, what did I want from an LLM? That was the question that confronted me as I redesigned the History of American Higher Education from scratch last spring.
I started teaching this course in 2020, online (because we had to) and in the evening (because, like some of my students, I had a day job). This fall, I was able to go back to my preferred mode of instruction, in-person and flipped. By “flipped,” I mean that instead of me telling my students about history, we “do history” together by reading, thinking, and writing through structured active in-class learning. SAIL is the acronym used by many who talk about this pedagogical approach, which is the latest name for the effort to remind everyone that students are not empty vessels to be filled with knowledge but active minds looking to create knowledge through shared experiences.
The problem with this approach is that many students come to class with habits of memorizing, summarizing, and regurgitating information instilled by our systems of schooling. They don’t like active learning because it frustrates their expectations. They want to be rewarded for individual achievement in the form of a grade, not asked to collaborate with peers in loosely structured activities. This frustration creates challenges for teachers who try out SAIL methods and for researchers trying to study how effective it is.
The student resistance I anticipated would be a relief compared to the trepidation I felt about teaching online again. The first time had been exciting. Changing everything during that first year of the pandemic forced me to rethink my prejudices against learning online and led me to try out some new learning tools. But each time I taught the course online, it became harder for me and my students. The excitement and energy were slowly replaced with the dread of yet another three hours on Zoom. I was happy to abandon online learning for a physical classroom.
ChatGPT: the uninvited guest
Throughout 2023, my excitement about returning to in-person instruction was tempered by everything I was reading and hearing about the presence of generative AI in my students’ lives.1 When I first read about what Ethan Mollick called the homework apocalypse, I wasn’t worried. My teaching practice is all about process, not product. I nodded along with John Warner’s ChatGPT Can’t Kill Anything Worth Preserving, smug in my conviction that generative AI could not threaten the beating heart of an active class of students engaged in peer review, close reading, and in-class group writing. But then I realized something.
I expected my students to put in a great deal of intellectual work before class. There was nothing stopping them from outsourcing their pre-class work, from having ChatGPT summarize the reading and generate the short essays I asked them to write as a kick-start to their long-form writing. Even if I forbade the use of AI in class, how could I ensure they would not bypass the crucial process of thinking for themselves before they walked through the door?
In an op-ed in the Washington Post in 2023, my colleague Jon Zimmerman offered an admirably simple solution:
Here’s my AI policy: I don’t have one.
Here’s what I’m going to tell my students instead.
Of course, you’ll have to notify me if you draw upon AI to write a paper, just as you are required to cite any other source. But whether to use AI or not is up to you.
Though, I hope you won’t….
He went on to say:
I want you to stare at a blank page or screen for hours, trying to decide how to start. I want you to write draft after draft and develop a stronger version of your own ideas. I want you to be proud of what you accomplished, not ashamed that you cut corners.
I wanted that for my students, too! I wanted them to consider why replacing their work with machine outputs was a bad idea. But I also wanted to learn from them why so many students were choosing to cheat themselves out of that experience. I wanted them to challenge me about my assumptions about the value of generative AI, and for me to challenge theirs. My classroom practice is built around students teaching me and each other. Yes, about the history of higher education, but why not expand that to include teaching each other about generative AI?
Teachers or enforcers
One approach I never considered was prohibition. Surveillance, threats, and extracted promises not to use AI run directly counter to my educational philosophy, which treats writing as inextricably wrapped up in democratic processes of shared and open inquiry. My class is a place to share ideas and practices, even those that make me and my students uncomfortable or confused. Rather than rule generative AI out of bounds, I wanted to invite students to bring their learning practices into the class discussion. Skeptical of its value, I wanted to explore how AI might be used to learn.2
The hardest thing I ask my students to do is go beyond summarizing experts. Instead, I ask them to write their answers to the questions they ask. This is hard because most of them have been taught that academic writing requires keeping their opinions to themselves. Summarizing scholarship and applying it is a necessary step, but in my classroom, this happens within the context of a question or argument that means something to my students as higher education administrators. To ask, as I do, that they analyze primary sources, not only through the lens of what professional historians have to say, but also from their position as education professionals, challenges their conception of what learning history means.
In some ways, their approach to ChatGPT’s outputs is similar to their approach to academic sources. I frequently give feedback along the lines of “Don’t simply tell me what another writer says; explain what it means to you in the context of your argument and your experience.” Could I just substitute “ChatGPT” for “another writer”? Is the summary of an LLM so different from a Wikipedia entry or a literature review? In fact, I think it is different, but I have come to understand that many people who use ChatGPT don’t see the distinction as important. This confuses and scares me.
So, I decided to ask my students to help me work through those feelings and figure out what widespread student use of generative AI means for my teaching. I added a new educational objective in the preliminary syllabus for the class, which I share with each student via email when they register: “Develop an understanding of the ways generative artificial intelligence is impacting higher education by using and discussing these tools.”
This joined the objectives I had for previous iterations of the course, including “appreciate the uncertainty and challenges facing higher education due to the rapidly transforming social and technological environment” and “consider the way historical and social change shapes your own professional development and the future of higher education administration.”
My approach was to invite ChatGPT into my class as an object of inquiry and discussion rather than treat it as an external threat. I also offered JeepyTA as an alternative to OpenAI’s product or any of the increasing number of chatbot helpers available.

Inviting JeepyTA and other AI models into the class
I had the great fortune to work with experts from the Penn Center for Learning Analytics, who configured JeepyTA for use as a learning tool tailored to the course. I also worked with three former students who volunteered to help me develop assignments and lead in-class activities. These colleagues comprised what I call the teaching team for the class, and, together, we decided how to approach students about using AI.3
Each class I teach is designed to be experimental and flexible. I start with a set of reading and writing assignments for the first month and work with my students to develop future in-class activities and workshops, many of them focused on completing the core assignment, a 5000-word essay about the history of an institution of higher learning in the United States.
We start by reading Craig Steven Wilder’s Ebony and Ivy: Race, Slavery, and the Troubled History of America's Universities (2013) and exploring the website Land Grab Universities. Early reading and writing assignments focus on how the Atlantic slave trade and the land dispossession of American Indian nations laid the economic and social foundations for colonial colleges. Although students have the freedom to switch to other frames, most end up writing their essays by asking questions about how this history continues to shape institutions today and how those institutions should respond to demands (mostly from students) to do something to address or at least acknowledge the past. This raises historical and professional questions, making it an ideal set of issues for students to write about.
Exploring how historians go about gathering and analyzing evidence is the focus of early class meetings, so consideration of new technology fits within that framework. At the first class meeting, I invited students to share how they have been using ChatGPT and other AI tools. I introduced them to using JeepyTA, and we talked about what it would be like if we agreed to use it and any other AI tools they wanted for the class with no restrictions and no judgment but with a general expectation that we would share our experiences.
The students were game. They warily acknowledged that using AI to complete class assignments was considered cheating by many teachers but were willing to take my invitation at face value. Nearly everyone said they had used Grammarly and ChatGPT, and many of them were regular users. Most indicated some awareness of generative AI’s lack of reliability and ethical problems related to environmental concerns and algorithmic bias. In a theme that continued to come up in our discussions of AI, the students in the class who were not native speakers of English explained how LLMs helped them feel more confident in expressing their ideas in American educational contexts.
Questions
Then, we turned to the first structured activity, a walking tour of campus using the Penn & Slavery Project’s augmented reality app. Before we started, I briefly reviewed the assignments due the following week: 1) Write a short essay (at least 700 words) discussing our shared experience of the Penn & Slavery Project in relation to Wilder’s introduction to Ebony and Ivy. 2) On the JeepyTA discussion board, post some thoughts meant to get them thinking about the long-form research paper.
That first day, after we had agreed as a class to explore the use of generative AI as a learning tool, I emailed each student to confirm they were comfortable with the aims of the course and offered them the chance to opt out of the use of AI, express concerns, or ask questions. I reminded them that they had a low-stakes writing assignment where they were free to use or not use AI and an invitation to try out JeepyTA.
The experiment was on!
In Part 3, I will write about the two educational questions that guided my design of the course and what I learned:
Would JeepyTA enliven the class’s asynchronous discussion board by providing an immediate and useful response to student posts?
Would JeepyTA provide feedback prior to in-class peer review activities in ways that students found helpful and that I believed would strengthen the learning process?
I read and blogged about AI and education throughout 2023 and early 2024, and all of it influenced how I redesigned my course, far too much to summarize or reference here.
One example of the wonderful work I found is this essay in the Journal of Applied Learning & Teaching (JALT) by Anna Mills, Maha Bali, and Lance Eaton: How do we respond to generative AI in education? Open educational practices give us a framework for an ongoing process. It illustrates an often-overlooked change in how educational reform movements share ideas and practices. As the authors say in the abstract: “Social media, listservs, groups, and public annotation can be spaces for educators to share early, rough ideas and practices and reflect on these as we explore emergent responses to AI.”
In 2023, it felt as though early, rough ideas and practices were all we had when it came to AI and education. In fact, it still feels that way. I think that is a good thing.
To the extent that I have a philosophy of teaching, it is grounded in what the women teachers at the Chicago Laboratory School taught John Dewey in the 1890s and that he turned into highly regarded books about schooling in a democratic society.
My approach to teaching writing was shaped by a two-year stint teaching in the Rutgers Writing Program during the early 2000s, which, at the time, was led by Kurt Spellmeyer and Richard E. Miller. During that period, I was reading a lot of Ralph Waldo Emerson and Richard Poirier, taking classes in the history and English departments at Rutgers University, teaching lecture courses and senior seminars in American Studies, and devouring every issue of Raritan Quarterly. All of these experiences shaped my practices as a writer and teacher such that I do not think I can do one without the other.