I decided to stop using generative AI to produce the images for 𝐀𝐈 𝐋𝐨𝐠 at the end of last year when I expanded to Substack. I would like to say this decision was reached after a considered analysis of the ethical problems raised by the technology. The energy used for each image generation may be equivalent to fully charging your smartphone, and asking an LLM a question may consume as much power as leaving a low-brightness LED lightbulb on for one hour. The diminishing job opportunities for freelance illustrators and writers seem real.
Those considerations played a role! However, the honest answer is that the biggest reason I stopped was that I got bored. Trying to coax DALL-E or Ideogram into giving me the image I wanted was frustrating. I get more satisfaction from searching Unsplash or Smithsonian Open Access for human-created images, and it takes less energy, from me and from the electrical grid.
I started using image generators for the same reason I started the blog: to better understand the tools that are changing our work as teachers and writers. Maybe Ethan Mollick convinced me with one of his posts last spring, or maybe it wasn’t based on reason at all. I saw Mollick, Laura Dumin, and Jason Gulya doing something cool on LinkedIn¹ and, like a teenager on TikTok, I copied them and posted the results. We are social animals, after all.

There has been intense coverage of the New York Times suing OpenAI over copyright infringement. A bit undercovered is that the lawsuit represents a breakdown in negotiations between the two over how much OpenAI should pay to use the content the NYT produces. So this is really an argument about price, not principle. For an interesting take on AI and copyright recorded before the big news, see the latest episode of Decoder with Nilay Patel featuring Adobe’s general counsel Dana Rao. Adobe’s approach to generative AI is similar to Getty Images’ in that they only use images they own or license instead of scraping the web for everything.
Is my boredom with DALL-E a sign of how things will go with generative AI this year? It seems entirely possible. Will the hype cycle end with a whimper, not a bang? Will we grow tired of our new toys? Will their social costs come into sharper focus? Will the moral panic over homework fade? Will the companies that stole much of the data used to train their models face judgment? Will GPT-5 and Gemini show that we have reached the upper limit of the transformer technology’s capabilities? All possible, even likely.
One writer predicts that generative AI will turn out to be “just a big upgrade to existing machine learning systems.” He says, “Think of it like replacing a Playstation 3 with a Playstation 5. That’s a huge improvement. But it’s still a gaming console.” On the other hand, we have a compelling post on One Useful Thing arguing that the coming year will see impacts on education, knowledge verification, and work that are, perhaps, more intense than last year’s. Plus, the usually sober reporters at the MIT Technology Review think we will see video generation take off and large models move out into the world via robots. So maybe it’s a PlayStation 5 with a great VR headset?

Impacts depend not just on a technology’s capabilities but also on the speed with which these new tools are adopted. And that is nearly impossible to know. The companies selling the products will only show us the numbers they want us to see. What happens in classrooms and offices is mostly a black box, and I think we really want to keep it that way (please, let’s not use this new technology to surveil teachers and workers). In the absence of reliable data, we are left to our own devices, literally peering into our screens at regurgitated press releases, the few academic studies trickling out, and our own interactions with large models. Then typing words into our devices about what we see there. And hopefully remembering to put down our devices, at least occasionally.
I have always admired Wendell Berry’s reminders that so many of the bad things happening around us are the result of collective decisions for which each of us has some small degree of responsibility. After all, we can say no.
Berry’s “Why I Am Not Going to Buy a Computer” appeared in Harper’s Magazine in 1988 and generated several letters in response, which were later published along with Berry’s replies. The essay describes the tools he uses to write and why he prefers them to a computer. The opening lines are:
Like almost everybody else, I am hooked to the energy corporations, which I do not admire. In my work, I try to be as little hooked to them as possible. As a farmer, I do almost all of my work with horses. As a writer, I work with a pencil or a pen and a piece of paper.
One letter accused him of hypocrisy, saying “I find it ironic that a writer who sees the underlying connectedness of things would allow his diatribe against computers to be published in a magazine that carries ads for the National Rural Electric Cooperative Association, Marlboro, Phillips Petroleum…”
Berry’s response to this letter has always struck me as a good description of how to live in a technological world we did not ourselves create but want to change for the better.
That I am a person of this century and am implicated in many practices that I regret is fully acknowledged at the beginning of my essay. I did not say that I proposed to end forthwith all my involvement in harmful technology, for I do not know how to do that. I said merely that I want to limit such involvement, and to a certain extent I do know how to do that. If some technology does damage to the world…then why is it not reasonable, and indeed moral, to try to limit one’s use of that technology?
I use a computer, and I will continue to use generative AI, but not models trained on stolen data that includes the absolute worst output of humanity. My point is not to urge you to draw the same line, merely that drawing lines is something we should all do.
The work of AI Log is to…well, log my ongoing engagement with generative AI. Here is an essay about my experiments with AI image generators back when I used them. I just republished it on Substack, but it was originally posted on LinkedIn.
¹ I am aware that “doing something cool” and “LinkedIn” are mutually exclusive. My kids keep me honest about such matters.
𝑨𝑰 𝑳𝒐𝒈 © 2024 by Rob Nelson is licensed under CC BY-SA 4.0.