Update: The Mercury News and EdSource have good articles on this news, including the fact that CSU is paying OpenAI.
Note to AI Log readers: The news out of CSU feels important because it sets a precedent for treating OpenAI as if it were an educational technology company. In reality, it is a tech start-up run by a CEO with questionable ethics, one who accidentally presided over the creation of a hit consumer product built on a genuine scientific breakthrough in chatbot functionality.
Sometimes, there are good reasons for a school or university to partner with a start-up. Given the circumstances, I can’t think of a good one to partner with OpenAI.
Instead, higher education institutions should put resources into understanding this new cultural technology in responsible ways and let savvy technology investors prop up Sam’s House of AGI Bubble Fun. This is true whether or not AI is as transformative as its boosters say it is. There are better ways to answer questions about the educational value of AI than spending money on an enterprise license for ChatGPT.
If you have information or ideas about the situation in the CSU system, especially if you are a student, faculty member, or staff member at CSU, please get in touch. I plan to write more as the situation develops.
I’ll be back tomorrow with one of AI Log’s more typical offerings: an essay on why confabulation is an essential word for understanding AI at this moment.
Let’s talk about AI alignment
One important issue I glossed over in Wednesday’s piece on the CSU/OpenAI press release is the pressure this deal puts on the entire CSU community to get up to speed on AI. On the one hand, this is good. Everyone should learn about AI! It is a technology that is shaping how we teach and learn. Asking all members of the community, busy as we all are, to learn enough to make an informed decision about whether and how we might use it is reasonable.
On the other hand, no amount of pressure will make it immediately clear how to use AI effectively and responsibly, especially when we consider that the context for this announcement is hundreds of academic disciplines and programs, each with its own methods and norms. With CSU, multiply that by 23 campuses. Implementing new educational technology on a college campus, especially technology with so much hype swirling around it, should be done openly and in consultation with the people who will use it. That does not appear to be what’s happening in California.
To borrow a favorite term from Silicon Valley, there is a lack of alignment here. OpenAI has been crystal clear about what its values are, and they do not include openness, honesty, objectivity, or accountability. Many people, especially investors, admire Sam Altman’s style and hubris. They believe moving fast and breaking things, even when that “thing” is human welfare, is how you get things done. They think OpenAI's disregard for intellectual property rights, its decision to transform itself into a for-profit corporation, and its tendency to bullshit about what its products can do are just the way the world works now. And, given what’s happening in DC, they have a point.
My point is that now is the time to say no to this worldview.
Given the values of the academy, why would CSU academic leaders think a partnership with OpenAI is a good decision? Even if CSU is somewhat blinded to OpenAI’s ethical problems by the Chancellor’s Office’s proximity to Silicon Valley, they must know what happens when an investment bubble bursts. And it doesn’t matter whether they believe this time is different. They just need to acknowledge that what’s happening today echoes pets.com in 2000, British railways in the 1840s, the South Sea Bubble in 1720, and tulip mania in 1637. Then they should think about what it means to bet their AI strategy on OpenAI being around next year.
I’ve been chatting with a few folks who know the CSU system, and there is a sense that academic governance has not been eroded as much as it has at other systems and universities. By all reports, CSU faculty have a culture of genuine solidarity and habits of successful collective action. I hope faculty governance is mobilizing to be a countervailing force. The press release, which seems driven more by CSU’s new AI Workforce Acceleration Board than by the needs of the CSU community, may turn out not to be as bad as it first appeared. Still, I’m not optimistic.
Accelerating AI adoption at the urging of technology companies that are all in on “fake it ’til you make it” is unwise. If CSU leadership made commitments without going through academic governance and a thorough review by university IT experts, they have set themselves up to fail. I look forward to hearing from faculty, students, and staff who get a chance to ask questions about the details.
A little poking around yielded the news that at least one CSU campus has already rolled out ChatGPT to its community. Last month, Cal State Fullerton started offering TitanGPT, which is based on GPT-4o, the free “flagship” OpenAI foundation model. Good for them! Giving people access to AI models so they can play around with them is a good thing, especially if there are agreements in place that protect student and institutional data and, crucially, if you trust the company providing the service to protect that data.

Like most college and university campuses, Cal State Fullerton does not have much to offer in the way of educational support for faculty or students using these tools for the first time. There is a video guide to logging in to TitanGPT, voiced by an AI that doesn’t seem quite up to the task of saying “Fullerton”; the video itself simply explains how to log in with your University credentials. The rest of the training materials on the site are LinkedIn Learning video tutorials.
If Cal State Fullerton is the leading edge of ChatGPT Edu at CSU, then an AI-empowered university system looks a lot like every other campus trying to figure out AI, except a little more embarrassing. I am not blaming CSU IT or middle managers for this. This is what happens when bosses do what bosses do: they meet with their boards, hear that our AI future is here and that universities had better get with the program, then come back to the office and start yelling at their subordinates to get on top of this AI thing. Now!
As I said in Wednesday’s piece, I hope I am wrong. Maybe CSU officials are taking advantage of OpenAI’s increasingly precarious situation to get access to AI tools at bargain prices. Maybe the LinkedIn Learning training program is a placeholder while educators prepare a thoughtful educational program that introduces AI as a complex set of tools that may or may not have educational or economic value. Maybe they have a clear exit strategy for when a lawsuit destroys OpenAI’s business model or people stop listening to Altman’s predictions about AGI being right around the corner.
Nothing in the announcement on Monday suggests caution or care. Instead, the language and theatrics of the press release made it appear that OpenAI is driving the bus. My concern is with those who may end up under it.
This was a real head-scratcher, particularly in light of budget cuts. What drove this decision? Was it in the name of equity, to make the more advanced model available to everyone for free, along the lines of the CSUCCESS program? And on the student side, has every CSU faculty member instituted clear guidelines for AI use in their courses?
Not 23 for long, by the way. It will be 22 campuses this summer when Cal Maritime merges into Cal Poly SLO.