Web dev at the end of the world, from Hveragerði, Iceland

Waiting for the AI Godot (Links & Notes)

#AI

The revolution’s proof is in its arrival

Looking forward to an explosion in features, functionality, and quality apps in a few months.

Because apparently AI is a coding productivity multiplier, a revolution in how we make software, and absolutely, definitely does not cause any problems at all elsewhere in the programming process.



“New data: what consumers really think about generative AI”

People who have used ChatGPT find its results more credible than those who haven’t.

Not sure this problem can be solved without regulation.


“Why chatbots fail. The novelty of AI does not make up for… - by Skyler Schain - Mar, 2023 - UX Collective”

Chat is, and always has been, a bad UI for research and productivity.


“Is it time to hit the pause button on AI?”

To be honest, if the US Department of Justice or Attorney General said that AI generated output probably didn’t qualify for Section 230 safe harbour, most public AI would be withdrawn 5 minutes later.

Then, of course, you’d get lawsuits for a few years until it actually gets cleared up in the courts, but it would give you a de facto moratorium on publicly available Generative AI for a while.


“A Silicon Valley Love Triangle: Hiring Algorithms, Pseudo-Science, and the Quest for Auditability”

The title of this paper is just so good. It proposes a framework for auditing Automated Decision-Making Systems.


“Historical pragmatism and the internet - James W. Carey, 2005”

We are now living with the consequences of those hopes and beliefs, but the age of the internet has taught us again of the fragility of politics, the brittleness of the economy and the vulnerability of the new world order. The ‘new’ man and woman of the ‘new age’ strikes one as the same mixture of greed, pride, arrogance and hostility that we encounter in both history and experience.

This paper, from 2005, does give you the impression that the tech industry has been up to the same kind of bullshit for a longer time than most appreciate.


“On technological optimism and technological pragmatism”

Always post a link when somebody points out that Wired Magazine and its contributors are full of shit.

Also, this is where I found the link to James W. Carey’s paper.


I have no use for an AI that can write for me.

Yeah, tech is vastly overestimating the productivity benefits (and consequently, the financial value) of sophisticated text synthesis.

The economic benefits of AI text synthesis for office work are, let’s say, unproven. Writing prose isn’t the productivity overhead the software industry thinks it is.

The real overhead is what the writing represents: coordination, communication, and consensus-building. Automating that is trickier.

This tech will be very useful in many areas, but tech executives focus on automating email because that’s what they do all day, and they’re overestimating the value of a minor personal convenience.

Also, the technical term for that manager who automates all of their emails with ChatGPT is absolutely going to be “that asshole”.

If you really think “I can’t be bothered to write you a short email, so here’s something I got Outlook’s AI feature to regurgitate for me” is going to endear you to your coworkers and reports and help with your communications and collaboration, then you really haven’t been spending much time around people.

Maybe “AIhole” will be a thing?

As in:

“The sales manager on the third floor keeps sending me non-answers to my query about last month’s sales.”

“Oh, Tom the AIhole! Yeah, I don’t think he even reads his own replies.”


“The Great Replacement (Not That One, the Real One)”

This society, made of fleshy, squishy human people, made it clear long ago that the exact millisecond it could replace each and every one of us with a janky Python script and a post-it note with a smiley face on it, that was absolutely 100% going to happen, and you’re a Luddite who hates freedom if you so much as squeak about it.

Go read this. It’s good.


“Keep your AI claims in check - Federal Trade Commission”

Policy people are starting to notice that AI boosters are largely full of shit.


“The Fallacy of AI Functionality”

This paper, from last year, is part of a multi-pronged effort by scientists and academics to get policymakers and lawmakers to pay attention to the functionality and *outcomes* of AI tools, instead of their hypothetical benefit or threat. The FTC post above makes it sound like they have been paying attention to papers like this one.

From the paper:

More surprising is the prevalence of criti-hype in the scholarship and political narratives around automation and machine learning—even amidst discussion of valid concerns such as trustworthiness, democratization, fairness, interpretability, and safety. These fears, though legitimate, are often premature “wishful worries”—fears that can only be realized once the technology works, or works “too well”, rather than being grounded in a reality where these systems do not always function as expected.

And:

This fear of misspecified objectives, runaway feedback loops, and AI alignment presumes the existence of an industry that can get AI systems to execute on any clearly declared objectives, and that the main challenge is to choose and design an appropriate goal. Needless to say, if one thinks the danger of AI is that it will work too well, it is a necessary precondition that it works at all.

This is a great paper pointing out that the biggest risk from AI systems today isn’t that they work too well but that they generally don’t. Vendors are promising things their systems cannot deliver, for a variety of reasons.

It posits that functionality (or a lack thereof), not the hypothetical threat of a hyper-capable future AI, should be the focus of policy and regulation.


“Those Meddling Kids! The Reverse Scooby-Doo Theory of Tech Innovation Comes with the Excuses Baked In”

Always post links to writers who remember that Wired Magazine is responsible for so much bullshit and dishonest manipulation.


“Can publishing survive the oncoming AI storm?”

Publishing outlets are going to be drowned in garbage (LLM-generated text really is mediocre garbage) and the publishing industry is completely unprepared for it.



The rest of the rest


Obsessive listens

You can also find me on Mastodon and Twitter