Web dev at the end of the world, from Hveragerði, Iceland

Three factors of AI anthropomorphism

#AI

A major issue with this latest wave of AI systems is anthropomorphism. This, combined with automation bias, short-circuits our ability to properly assess the work these tools are doing for us. The research on automation bias goes back decades in human factors and is unsurprising, at least to many with a design background.

But anthropomorphism looks trickier.

The most convincing theory of anthropomorphism that I found was the “three-factor theory”, which seems to have some grounding in experimental studies. The idea is that our tendency to imbue objects and animals with humanlike characteristics is triggered by three different factors. You don’t need all three, but the effect seems to be strongest when they are combined.

  1. Understanding. Our understanding of behaviour is grounded in how we understand our own. So, when we seek to understand why something does what it does, we reach first for an anthropomorphic explanation. We understand the world as people because that’s what we are. This becomes stronger the more similar the thing seems to us.
  2. Motivation. We are motivated both to seek out human interaction and to interact effectively with our environment. These motivations reinforce the first factor. When we lack a cognitive model of how something works but are strongly motivated to interact with it effectively, the two reinforce each other. The more uncertain you are of how that thing works, the stronger the anthropomorphism. The less control you have over it, the stronger the anthropomorphism.
  3. Sociality. We have a need for human contact, and our tendency to see human behaviour in the environment around us seems to increase in proportion to our isolation.

AI chatbots on a work computer would seem to be a perfect storm of all three factors:

  1. They are complex language models, and we simply have no cognitive model of a thing that has language but no mind.
  2. They are tools, software systems that we need to use effectively, but they behave randomly and unpredictably. Our motivation to control and use them goes unfulfilled.
  3. Most people’s work environment is one of social isolation, with only small pockets of social interaction throughout the day.

Combined, this creates the Eliza Effect, which is so strong and pronounced that Joseph Weizenbaum, who made Eliza, described it as “delusional”:

What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Joseph Weizenbaum, 1976, *Computer Power and Human Reason*, pp. 6–7

All of which is to say that I have very strong doubts about our ability to effectively and safely use AI chatbots. They both habitually generate absolute garbage and seem to short-circuit our ability to properly assess the garbage.

This would seem to be a particularly dangerous combination.

None of the above factors are affected by the fact that you “know” that something isn’t human, or even by knowing that the internal mechanisms of a thing make humanlike behaviour genuinely unlikely.

If you know that something isn’t a mind, but don’t have a cognitive model for how it works, then that still triggers the first factor, and your motivation to understand will still reinforce it. Knowledge doesn’t protect you; only an internalised, effective cognitive model does.

The people around Weizenbaum knew exactly what Eliza was and how it was made, but they fell into the “delusional” conviction that it was humanlike with an ease that worried him.

Which is relatable, because it is worrying.

For more of my writing on AI, check out my book The Intelligence Illusion: a practical guide to the business risks of Generative AI.


“From Bitter Ground”

My friend, Tom Abba, has released a narrative experience that weaves together a website and a book of handmade collages. All for £25 (UK).


That AI is providing some companies with rhetorical cover for layoffs they were planning anyway does not mean those jobs will be replaced by AI, nor does it even mean that replacing them is genuinely the plan.

(AI is bloody expensive)


A note on Deno versus Node

“Deno vs. Node: No One is Ready for the Move”

I actually prefer Deno these days; it’s a much nicer experience. But I also think that Node’s massive community is its biggest liability.

Node has too many constituencies.

It serves front-end developers, despite being an incredibly poor match: it’s about as different from the browser environment as a JS engine can get. It’s also used to build front-end tools, which is a different need that places different demands on a runtime.
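To make that gap concrete, here’s a minimal sketch of Node’s classic server API. This is illustrative boilerplate, not code from any particular project; the point is that the `node:http` module and its callback idioms have no counterpart in the browser:

```typescript
// Node's runtime-specific HTTP module; nothing like this exists
// in the browser environment.
import { createServer } from "node:http";

createServer((req, res) => {
  // req and res are Node's IncomingMessage and ServerResponse,
  // not the web-standard Request and Response the browser uses.
  res.writeHead(200, { "content-type": "text/plain" });
  res.end(`Hello from ${req.url ?? "/"}`);
}).listen(8000);
```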

The Node environment is its own idiosyncratic thing with odd offshoots. Node can be a very different thing even between two developers, depending on when they got started.

Then you have Electron developers, who are joined at the hip with Node.

Finally, there’s the isomorphic crowd, who are building hacky hodge-podge systems that awkwardly run both on Node and in the browser (see the sketch below).

Node needs to serve them all, and it does so pretty badly.
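As a sketch of what “awkwardly run on both” means in practice, here’s the kind of runtime-detection shim isomorphic code tends to accumulate. The `getConfig` helper below is hypothetical, not from any specific library:

```typescript
// Hypothetical runtime-detection shim of the sort isomorphic
// libraries accumulate. Probe for Node's global `process` object
// via globalThis so the check doesn't crash in the browser.
const proc = (globalThis as Record<string, any>).process;
const isNode = typeof proc?.versions?.node === "string";

// Even something as basic as reading a configuration value has to
// branch per runtime: Node has environment variables, browsers
// have nothing comparable.
function getConfig(name: string): string | undefined {
  if (isNode) {
    return proc.env[name];
  }
  // Browser fallback: there's no real equivalent, so isomorphic
  // code punts to globals, build-time injection, or undefined.
  return (globalThis as Record<string, any>)[name];
}
```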

Deno, conversely, is a browser environment. Even as a server platform it uses browser idioms for server features. It feels *much* more cohesive as a result.
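For instance, Deno’s built-in HTTP server speaks the same web-standard Request and Response types as the browser’s fetch API. A minimal sketch:

```typescript
// Deno's built-in server handler receives a web-standard Request
// and returns a web-standard Response, just like browser code
// that works with fetch().
Deno.serve((req: Request) => {
  const url = new URL(req.url); // URL is the standard browser class
  return new Response(`Hello from ${url.pathname}`, {
    headers: { "content-type": "text/plain" },
  });
});
```

If you already know fetch, URL, Request, and Response from browser work, you already know most of what this server code is doing.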

Node can’t be “updated” to get this kind of cohesion.

All of which is to say that Deno will probably get stomped into the ground by Node, because we can’t have nice things.



You can also find me on Mastodon and Twitter