Web dev at the end of the world, from Hveragerði, Iceland

The ‘AI’ chaos


These two links deserve highlighting beyond just being plonked into my normal list of links.

Both describe some of the chaos caused by the ongoing AI Bubble. One tackles what it’s doing to programmer education. The other is a behind-the-scenes view of what’s happening as tech companies lurch into “AI”.

On how the industry is effectively gaslighting people on “AI”

“How does AI impact my job as a programmer? – Chelsea Troy”

These students aren’t in tech yet. They’re still in the zeitgeist of tech. And that zeitgeist has deemed AI the current shiny. Journalists uncritically parrot the grandiose claims of self-styled AI tech executives. Popular narratives focus, not on what big-crunch machine learning models can do now, but what they theoretically could do in the future. Disastrous product demos receive little scrutiny, and people rely on YouTubers and bloggers as sentinels for the hoi polloi. As far as students can tell from the press, their futures depend on them learning to ride the wave of…whatever this is. So far, they’re seeing its supposedly awe-inspiring power neither in my lectures nor in their own experiments with it. So they’re assuming user error and imploring me—”What questions, exactly, are we meant to be asking this thing to pull down our success and riches?”


Large language model purveyors and enthusiasts purport to use the tools to help understand code. I’ve tested this claim pretty thoroughly at this point, and my conclusion on the matter is: much like perusing answers on StackOverflow, this approach saves you time relative to whether you’re already skilled enough to know when to be suspicious, because a large proportion of the answers you get are garbage.

The AI Bubble is an absolute shit show behind the scenes

“AI engineers report burnout and rushed rollouts as ‘rat race’ to stay competitive hits tech industry”

The thing about a bubble built on tech that is much less functional than promised and much more harmful than indicated, run by bare-bones staff because of regular mass layoffs, and borrowing concepts from an industry (“AI”) with a long history of fraudulent or near-fraudulent behaviour, is that it’s almost inevitable that everything behind the scenes will devolve into an utter shit show.

The implication of the following two quotes is that it’s highly likely that some of the resulting demos are either faked or manipulated so much they might as well be faked:

He said he often has to put together demos of AI products for the company’s board of directors on three-week timelines, even though the products are “a big pile of nonsense.”


He said the company’s investors have inaccurate views on the capabilities of AI, often asking him to build certain things that are “impossible for me to deliver.”

If you were wondering why many of these products end up worse after “AI” integration, this quote explains why: the only thing that matters is using “AI”, not serving the customer:

He described the irony of using an “inferior solution” just because it involved an AI model.


Regardless of the employer, AI workers said much of their jobs involve working on AI for the sake of AI, rather than to solve a business problem or to serve customers directly.

One of the things I noted in The Intelligence Illusion was that Microsoft specifically was on the record as saying that speed mattered more than ethics and safeguards. This is confirmed yet again here:

When it comes to ethics and safeguards, he said, Microsoft has cut corners in favor of speed, leading to rushed rollouts without sufficient concerns about what could follow.


The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with zero testing for mistakes.

Using products by big tech companies in this environment is inherently risky. They’ve all de-prioritised or outright laid off their product safety and security teams, and the mass layoffs themselves increase the odds of serious mistakes on top of that.

If this carries on, the only thing preventing a catastrophic error rooted in these systemic issues is sheer luck.

Betting on luck is not a business strategy.

And finally…

Just saying…

Anybody who thought that OpenAI and Sam Altman were trustworthy before the recent Scarlett Johansson drama has already demonstrated a pretty darn high tolerance for bullshit and shenanigans. A CEO positioning a voice chatbot as a celebrity soundalike without permission isn’t even going to blip on their ethical radar.

You can also find me on Mastodon and Bluesky