Two things caught my notice recently.
The first is the use of algogen as a catch-all term for algorithmically-generated art and text.
> But a thing I find being discounted in a lot of these conversations is value. Things that are easy to make, things anyone can do, aren’t valuable. Value is a driver of worth. Algogen art will crash its own value just existing.
>
> Iron Spike over on Twitter
That this term was wrapped in an astute comment by a comics publisher on supply and demand was a bonus.
The other is the astute observation, by clarissa over on Mastodon, that because algogen text is all about generating text as it’s normally used—middling and utilitarian as an explicit ideal—it is the opposite of poetry. Where poetry de-familiarises, algogen is perfect familiarity. Where poetry casts the unexceptional as exceptional, draws out the unusual beauty in the usual, and reuses language in original ways, algogen aspires to peak mundane. Even when it’s ostensibly being creative, it only manages it through postmodern mash-ups: mixing up tropes, conventions, and references. It is perfectly mediocre.
Which goes a long way towards explaining why people in tech love it. It also explains why there is such a disconnect, in my experience, between how tech people describe algogen art and text and how it’s perceived by those outside of tech.
The flawed hands are just a symptom. They set you up for an easy joke, but fixing them won’t make algogen art as good as that made by an artist. Many of those working in AI just don’t see even a fraction of the quality, rendering, composition, colour, anatomy, and texture issues in the art their tools generate. The same applies to the text. I’ve lost count of how many people in tech (and marketing, natch) say that algogen text is just as good as that written by people. They genuinely don’t see the limited vocabulary, word repetition, incoherence, and simplistic sentence structure. They aspire only to perfect, non-threatening mediocrity, and algogen text delivers that. They don’t care about the role writing plays in forming your own thoughts and creativity. They don’t care about how writing improves memory and recall. They don’t value the role of creativity in the text itself.
For them, it’s all about the idea.
That algogen fans are predominantly idea people—the lot who think that 99% of the value delivered by any given form of media comes from the idea—isn’t a new observation, but it’s apt. If you don’t think the form or structure of the medium delivers any value, then it has to be a uniform commodity that can, and should, be generated algorithmically to save people from the tedious work of pointless creation.
This worldview is such an utter misunderstanding of so many things that I wouldn’t even know where to start.
But it ends with this:
They don’t know what writing is for, yet they claim to be revolutionising it.
They don’t know what art is for, yet they claim to be perfecting it.
The worst thing we can do is to believe their claims.
Clarissa’s post and the following thread are worth checking out.
For more of my writing on AI, check out my book The Intelligence Illusion: a practical guide to the business risks of Generative AI.
Software Development Links
- “The cloudy layers of modern-day programming”
- “Efficiency trades off against resiliency - Made of Bugs”
- “Everybody Lies. Your Users Too. - UX Collective”
- “Faster Horses”. Speed was never the issue with horses. People bought cars so they wouldn’t have to shovel horseshit. Then cars became more affordable and people bought them who could never have afforded a horse.
- "‘It’s called stealing’: new allegations of plagiarism against Roy Lichtenstein". Not a software dev link but, honestly, I just dislike these guys so much. Lichtenstein and Erró are just the absolute worst. Horrible artists; horrible people.
- “It takes a body to understand the world – why ChatGPT and other language AIs don’t know what they’re saying”
- “AI-Generated Images from AI-Generated Prompts — Adrian Roselli”. “Anyone suggesting ChatGPT, Bard, or other self-described AI tools can generate their alternative text for them is simply being lazy.”
- “Building LLM applications for production”. Given how fast things are moving, isn’t anybody who integrates an LLM into their product today extremely likely to be stuck with a massively obsolete system in the long term?
- “Prompt injection attack on ChatGPT steals chat data - System Weakness”
- “Poking around OpenAI. - Irrational Exuberance”
- “The Great Flowering: Why OpenAI is the new AWS and the New Kingmakers still matter”. This is why it honestly doesn’t matter whether better or more ethical alternatives to OpenAI appear. They’re going to be the default, so they need to be held to account.
- “I Think I Found a Privacy Exploit in ChatGPT - Development tutorials for modern web development”. Case in point. OpenAI’s privacy issues aren’t limited to AI. They’re just bad actors overall.
- “Italy’s new rules for ChatGPT could become a template for the rest of the EU”. For the tech dudes huffing in the crowd that Italy is trying to kill AI: what they’ve outlined is just basic GDPR compliance.
- “General Purpose AI Poses Serious Risks, Should Not Be Excluded From the EU’s AI Act - Policy Brief - AI Now Institute”. This. General-purpose AI is riskier than specialised AI, so any regulation that promotes the former over the latter is counterproductive.
- “Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI”. This 1997 paper by Philip E. Agre is a fascinating look into the history of AI research. It shows that the field’s issues have been there for a long while.
- “Federal privacy watchdog probing OpenAI, ChatGPT following complaint - CBC News”. I’d missed this one when it came out.
- “Linda Gottfredson - Southern Poverty Law Center”. This is the author of the definition of intelligence that Microsoft’s “Sparks of Artificial General Intelligence” paper is built on.
- "‘I didn’t give permission’: Do AI’s backers care about data law breaches? - Artificial intelligence (AI) - The Guardian"