Is ASU's collaboration with OpenAI actual news?
AI Log, Jan 2024
Everyone in the pool! (Where's the pool?)
Just about everyone thinks that teachers, administrators, and senior executives should experiment with generative AI. We need to try out these new tools so we can make informed decisions about how and whether to use them. The next question is, of course, where do we get access? Right now, if you work at a college or university, the answer is to go get yourself a ChatGPT account or use a free service like Bing or Bard…oh, and please don't upload sensitive data. And be aware of bias. And give your students clear guidance. And be transparent about your own use of AI. And so on. Like an anxious parent, we want our kids to learn to swim, but we also want to make sure they have an approved flotation device, are slathered in sunscreen, and remember: Walk! Don't Run!
Some institutions have pilots or experiments underway, usually a specific research project or a handful of courses in a single department, that provide access to faculty and staff, but I know of few institutions that are providing access on anything more than a limited, experimental basis. One exception is the University of Michigan. I wrote about their surprisingly fast move last summer to give AI access to their entire community (including all students) through Microsoft Azure OpenAI Service. For the most part, the rest of higher ed is standing around watching Michigan swim, talking about the size and temperature of the pool, and assessing the risks of swimming.
This week Arizona State University grabbed headlines by dipping its toe in the shallow end, announcing a "collaboration" with OpenAI to use ChatGPT Enterprise. Unlike UM, which offers access to anyone who requests it, ASU is limiting participation to specific teams of faculty and staff (and maybe student employees who work with them) who propose projects in three key areas: student success, innovative research, and streamlining organizational processes. "Tell us how you want to use GPTs and we might give you access" is a lovely opportunity for a handful of lucky folks at ASU, but I am not sure this is actual news. Grant programs to experiment with AI have been available at many institutions since last spring. Technology companies sign contracts for limited pilot programs with universities daily. Well played, marketing teams at OpenAI and ASU.
I remain far more interested in the University of Michigan's experiment. But just because they built it doesn't mean teachers and students will come. I suspect their budget model aims for a sweet spot of getting enough traffic to justify the effort and expense of setting up the services, but not so much it breaks the bank. There is risk both ways. ASU's channeling of access through a review process manages this risk effectively, but they will only get a limited number of experiments based on what a selection committee decides is promising. Michigan's approach of providing seeds and land to everyone seems more likely to yield an interesting crop of AI applications and experiments.
I don't blame ASU for being cautious. Their tentative "explore the possibilities of ChatGPT Enterprise" highlights the challenges of institutional decision making when interest in AI is high and certainty is low. The risks of any partnership with the OpenAI/Microsoft AI behemoth are significant. What happens if OpenAI melts down again? The hit movie Sam Altman Gets Fired, released last November, was a comedy of remarriage (think Philadelphia Story or The Lady Eve) in which the couple splits up but reunites after some wacky hijinks. With the promised report to the new board about Altman's "lack of candor," a sequel seems likely. What happens if we get a divorce drama like Kramer vs. Kramer, where the kid is played by the universities that signed up with OpenAI? There are several other risks. Courts could rule that OpenAI has violated copyright. Regulation could change the way data can be collected for large models. Google or another competitor could produce a better model. Any of these developments could erase OpenAI/Microsoft's lead, and everyone standing around the pool will feel smarter than they do now.
Know someone interested in AI and higher education? Consider sharing this week's post directly with a friend or colleague.
Research Matters
Georgia State University's National Center for Student Success received a US Department of Education grant to study the use of instructional chatbots, specifically in first-year introductory courses at GSU and a few other universities. The grant includes funding for a research project led by Lindsay Page at Brown University. Peer-reviewed research from large-scale studies takes time, but by the time it arrives, the frenzied hype about chatbots I wrote about in October will have mostly faded. Empirical findings, combined with a better understanding of generative AI among teachers, will provide a stronger foundation for decisions about whether and how to use chatbots to teach.
We don't have to wait to read peer-reviewed research on the use of artificial intelligence in education. Studying the topic is almost as old as the use of AI in education, even if we didn't always call it AI. A recent article, available ahead of publication, by Ryan Baker, Jaclyn Ocumpaugh, Rod Roscoe, Stephen J. Hutt, and Stephen J. Aguilar reviews the past decade's research on AI tools in instruction and assessment. The article is worth reading just for the literature review and bibliography. It also makes a compelling argument for asset-based instruction instead of deficit-based approaches to using AI in the classroom.
We argue that future AIED [AI in education] research and development should shift to better incorporate, detect, leverage, build, and appreciate learners' assets. Implementing these principles has the potential to move AIED beyond "finding and fixing" and toward "appreciating and advancing" in ways that better support students' long term growth and success.
This argument may appear to bump up against recent controversies in educational research about standards and culture wars bullshit, but really this is a solid reminder of an insight that has been a feature of education research in the United States ever since the women teachers at the University of Chicago Laboratory School taught John Dewey everything he knew about education and democracy. Engage students as partners in learning. Treat them with respect. To use a phrase I hear frequently, meet students where they are. These principles are foundational to teaching in a democratic culture, and they need to be foundational in how we implement generative AI in educational environments.
The usual framing for Khanmigo's potential, and that of other AI-based tutoring, is mastery learning, which often assumes the tool's purpose is to make up for all the ways the student lacks the knowledge or skills to succeed. Such deficit framing risks alienating students, especially those who are not motivated by the experience of schooling. A chirpy chatbot urging them to persist in solving a problem with no meaningful context will not help. Asset-based approaches start with what the student knows (how they understand the world) and lead them to new knowledge. If the promises of personalized learning are to be realized for all students, this idea must guide our implementations. From the article:
One of the central ideas we are proposing is that systems should afford the same opportunities to all students, particularly those who have not typically been characterized as high-achieving, by building systems that are better at detecting and using assets that have historically been less recognized and leveraged. We acknowledge that there may be times when systems cannot do so; in fact, that currently appears to be the norm. However, when systems are unable to appropriately support students and build on their assets, this must be acknowledge[d] as a failure of the system rather than as a failure of the student.
After reading the paper, one writer asks the question Is Asset-Based AI Even Possible? His answer showcases a human teacher named Katrine Bryan demonstrating asset-based instruction and then suggests that maybe we should not be optimistic about AI tutors being able to pull it off. I continue to believe that games and simulations with AI-based non-player characters will be how AI impacts instruction. And I see greater potential for asset-based approaches to designing such experiences. In any case, adoption will happen at the speed of institutional change, and not at the urging of optimists excited about the capabilities of new tools.

Keeping up with the latest research is tough, but the Research Insights series will help you keep pace.

Fridays are for the history of techno-optimism
An essay this week reminds us that in 1998 many technology executives, futurists, journalists, and scientists were confident that human cloning would arrive in just a matter of months. Recommended for anyone reading the credulous coverage of AGI timelines.
AI Log provides links, analysis, and reflection on developments in AI and education each Friday morning. To receive each weekly log in your email inbox, visit my blog on Substack. To receive a notice each Friday in your LinkedIn feed, click here. All content is free and I never share your email address with anyone.
AI Log © 2024 by Rob Nelson is licensed under CC BY-SA 4.0.