10 Reasons I’m Horrified That My University Licensed ChatGPT Edu
Even aside from the costs of this licensing deal with OpenAI, which comes at a time when USC has fired 1,000 employees and frozen wages, I am horrified by the lack of faculty input and by the decision to partner in any way, shape, or form with OpenAI. Here are the top ten reasons for my horror:
1. All of the chatbot/LLM creators (hereafter Big Tech) are nefarious, but OpenAI is the worst of the worst, having set off the race to the bottom in which model weights and training data sets are now hidden, framed as intellectual property but in practice forming a black box with no means of verification.
2. Big Tech does not respect anyone else’s IP, having used massive amounts of copyrighted work without permission and then, outrageously, claimed fair use. As The Atlantic reported, internal Meta documents noted that licensing any creative work would damage the company’s ability to “lean into [their] fair use argument” (an argument they have all made, and one that, thankfully, is now being challenged in court).
3. OpenAI cannot be trusted: even though the USC administration tells us that our work and our students’ work will not be used to train its models, we cannot rely on this. Sam Altman, OpenAI’s CEO, was briefly fired in November 2023 for pitting employees against each other, lying about safety issues in order to skirt them and release models early, and being deceptive more generally (see Karen Hao’s Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, especially the Prologue).
4. When it was still a nonprofit, OpenAI created GPT-2 and initially said it would not release it because of the danger that it could end truth forever. It then used this as an excuse to hide its methods and restructured as a for-profit company, which spooked the rest of Big Tech; worried about being outdone, the other companies adopted the same bad practices in order to compete.
5. OpenAI creates data swamps. It set the pace by hoovering up all kinds of unstructured data and, rather than paying a decent wage to clean it, or building structured data sets in the first place (which would yield far more accurate outputs), it relies on gig workers in places like Kenya and Venezuela, further destabilizing communities by creating both precarious work and mental health harms. As soon as one country’s leaders try to regulate, the companies close shop and move on to the next economically needy country. Remember, Big Tech is run by billionaires!
6. Hallucination is a feature, not a bug. There are two broad types of AI system: the expert (symbolic) system and the connectionist system. The latter underlies contemporary LLMs (chatbots such as ChatGPT, Claude, Gemini, and DeepSeek). Connectionist systems (i.e., neural networks) are unreliable in the accuracy of their outputs, but *may*, their makers hope, lead to a more general, independent ‘intelligence’ via RLHF (Reinforcement Learning from Human Feedback). In essence, every time we use these systems we supply the feedback that improves the models, making the billionaires richer.
7. Energy usage: by common estimates, a single chatbot query uses as much energy as ten web searches, and its answer is typically less accurate, certainly far less accurate than Wikipedia. What do our students have to gain from playing with these tools while the tools destroy the environment?
8. Data centers worldwide are polluting the air, consuming clean water, and driving up electricity rates at an alarming pace. Elon Musk (one of OpenAI’s initial funders) built Colossus, the facility used to train Grok, in a historically Black neighborhood in Memphis; it runs at least 38 *unpermitted* methane gas turbines that are releasing hundreds of thousands of pounds of formaldehyde and nitrogen oxides into the neighborhood. Musk is not alone in this.
9. The contemporary models have no actual use that cannot be served by other means. Sam Altman has made outlandish claims about curing cancer and solving the climate crisis, but no amount of hoovering up creative work will cure cancer; and while there are AI models that can contribute to climate science, there is no political will to use them. Further, many engineers say we will never achieve AGI (artificial general intelligence) under the current LLM regime (see the audio series What Could Go Wrong? by Scott Z. Burns, writer of Contagion).
10. OpenAI and the rest of Big Tech lobby for zero regulation, or create smokescreens around what needs regulating, in order to keep doing what they like: mainly, releasing tools with no real use beyond amusement while sucking the lifeblood out of the creative industries and spawning misinformation.
There is much more to say about each of the above points, and there are further issues besides, such as the supposed race with China, an idea that is both flawed and inaccurate. But in the short term, USC, and indeed all universities, have no business licensing these chatbots, least of all from OpenAI, in the name of responsible usage. The only responsible usage of generative AI, in my opinion, would be an open-source LLM running on a meshnet with a solar-powered charger (see, for example, the work of CJ Trowbridge for more on this).
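For readers wondering what that alternative looks like in practice, here is a minimal sketch, assuming the open-source Hugging Face transformers library and a small open-weight model already downloaded to local disk (the model name below is illustrative, not an endorsement). Everything runs on your own machine; no query, and none of your students’ writing, is sent to OpenAI or anyone else.

```python
# A minimal sketch of local, vendor-free generative AI: an open-weight
# model running entirely on your own hardware. The model name is an
# illustrative assumption; any open-weight checkpoint on disk would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small enough for a laptop

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

prompt = "Briefly explain what an open-weight language model is."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation happens locally; no query leaves the machine.
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Pair a setup like this with a solar charger and a community meshnet, as Trowbridge describes, and you have generative AI without the vendor, the surveillance, or the data center.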
For further information, I recommend:
Karen Hao’s Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI (Penguin Press, 2025), or her two-part interview with Democracy Now.
AI Now Institute’s Landscape Report, Artificial Power.
The Last Invention podcast by Sam Harris.
Kate Crawford’s recent work for the New York Times (or any of her work).
I have also had many experiences of my own work being completely misconstrued; I am in the midst of assembling those examples and will update this post with them.