Foggy feeds: the decline in my feed reader subscriptions
I’ve noticed a decline in the thinking across the websites I follow. I have two hypotheses. I kind of hope neither is true and I’m just imagining things, but I worry that both are.
Brain fog #
I used to be a bit sceptical of claims that the cognitive effects of Covid infections – which are very real – were widespread throughout society, mainly because I was the only person I personally know who admitted to having brain fog.
My initial brain fog, when it was at its worst, was impossible to miss because it made my job extremely difficult.
The more worrying part of my “fog” experience was later on when I truly believed it had mostly lifted – that I was at or around 90% – but then I got my next vaccine booster and realised that, no, I had only been at around 50%. The booster shot lifted the fog I didn’t even know was there and I was now, for the first time in months, truly back to being close to normal. (I may be a fool, but at least now I was back to my regular old pre-pandemic fool self.)
Among those I know personally, I seemed to be the only one hit by this particular long Covid issue. The initial fog was impossible to miss, and those I know got hit by other long Covid issues, but not this one, which led me to assume I had just been unlucky. But over time I’ve had to adjust that assumption.
I now think it’s more likely that the people around me – that I know “IRL” – are benefiting from a combination of the full set of the vaccine booster shots that have been made available by Icelandic healthcare, not regularly going to mass infection events like conferences or conventions, and – y’know – plain old luck. Because there’s a trend I’ve noticed that I have a hard time explaining any other way than “Covid-induced brain fog”: the quality of the thinking among my long-term feed reader subscriptions has been declining almost across the board.
Foggy feeds #
For those of you who aren’t familiar with feed readers: they’re built on a set of web standards dating back to the late nineties that let a piece of software follow updates to a website as they happen. Not quite in real-time, but close enough for most purposes.
It was originally used for syndication and aggregation – letting centralised websites such as Netscape’s collect updates from a variety of sites onto a single page – but very quickly became primarily used to let individuals follow a large number of websites.
You know how most blogs, websites, and newsletters that aren’t published by professionals with sponsorship on a schedule tend to be a bit irregular?
This was the original solution to that problem.
Instead of cluttering up your email inbox (which has other, more important, tasks to handle) with newsletters, you can subscribe to a few dozen sporadically updated sites in your feed reader and get a couple of dozen updates to read every morning, even though most of the sites you follow update once a month at most.
Almost every blog and newsletter platform out there supports feeds (usually RSS), so you generally never have to subscribe to a newsletter manually over email, unless you’re in the business of intentional self-sabotage and are out to deliberately destroy your own email productivity.
In which case, have at it. You do you.
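To make the mechanism concrete, here’s a minimal sketch of the parsing a feed reader does under the hood. It uses only Python’s standard library, and the feed itself is a made-up example (the URLs and titles are invented for illustration) in the RSS 2.0 format that almost every blog platform serves:

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical RSS 2.0 document standing in for a real blog's feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <item>
      <title>A post from last month</title>
      <link>https://example.com/post-1</link>
      <pubDate>Mon, 03 Mar 2025 10:00:00 GMT</pubDate>
    </item>
    <item>
      <title>An older post</title>
      <link>https://example.com/post-2</link>
      <pubDate>Tue, 04 Feb 2025 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (site_title, [(item_title, item_link), ...]) for an RSS 2.0 feed."""
    channel = ET.fromstring(xml_text).find("channel")
    site_title = channel.findtext("title")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return site_title, items

title, items = parse_feed(SAMPLE_FEED)
print(title)       # Example Blog
print(len(items))  # 2
```

A real reader does the same thing at scale: periodically fetch each subscribed feed URL, parse out the items, and show you anything it hasn’t seen before – which is how a few hundred sporadically updated sites turn into a steady morning reading list.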
I mention this because an important part of this observation is that I follow a lot of feeds in my feed reader: most update monthly at most, some weekly, and only a tiny handful daily. About seven hundred of the feeds I follow are active, and I’ve been following some of them since the early 2000s.
That’s a decent sample size both across fields – tech, media, and academia – and time – I’ve been following most of these since before COVID – and the decline in people’s thinking across many of these feeds has been noticeable because it’s incredibly fucking annoying.
It’s not all of them. There are still a bunch of writers in my feed reader who are consistently thoughtful, at least at their usual level (😁), but so many of the rest have become painful to read.
Half-baked reasoning wrapped in raw dough nonsense #
This has been most noticeable among the writers I habitually disagree with over the conclusions they draw, but who nevertheless consistently made sharp observations that were thoughtfully laid out. Their charm was the intelligence with which they built their arguments. My disagreement was largely a matter of viewpoint and values, not reasoning.
And here’s where I run into trouble because if I single out an example, it would basically look like I’m picking on a random person on the web – or an internet celebrity – and saying that their opinions can only be explained by brain damage.
This would be a misinterpretation of what I’m saying – their opinions remain unchanged, it’s their reasoning that I think has declined – and would inevitably put a bunch of people with large platforms on the defensive.
That would be counterproductive. It would also be unfair because I know first hand that being foggy can be an intensely humiliating experience: having a confused moment in public, whether it’s in the store or among friends, or being unable to do something you used to, has the effect of collapsing you down into a broken body and is an experience that sits with you for a long time.
I’ve also known a few people who had vascular dementia, a cognitive disorder that is more extreme than COVID fog but seems to operate similarly on memory and reasoning. Cognitive “fog” generally doesn’t seem to change your values or opinions, just your ability to rationalise and argue for them – in some cases it even makes it hard to remember why you hold certain beliefs and ideas.
And that matches some of what I’m seeing.
Many of the writers I follow would often, in the past, make thoughtful arguments defending some aspect of the status quo, usually while covering some part of the US tech industry.
To use a hypothetical example (constructed from whole cloth because I don’t want to pick a fight by singling somebody out), they would do something like argue that, despite what you might think, US car culture was actually quite good, both for the economy and society in general.
The hypothetical argument, before fogginess, would go something like:
- Cars have enabled the decentralisation of transport to a much greater extent than any other transport innovation.
- The US economy would never have been able to grow to its current size if most transport at scale had to be coordinated across local authorities, state authorities, federal authorities, shipping and freight, train or bus companies, and the businesses that need traffic from both workers and customers.
- The existence of a transport system that’s fundamentally a network where every vehicle can operate individually improves the flexibility and dynamism of transportation across society.
- It’s also an example of successful regulation. Both driving and the manufacturing of the cars themselves are highly regulated in ways that haven’t hindered their use and violating those regulations has had real consequences for violators on both sides.
- It’s mostly safe, otherwise people wouldn’t dare to step into a car.
- The climate impact is being mitigated in real-time with a transition to electric cars, the energy transition overall, and will probably be mitigated further in the future through other innovations.
Now this argument is reasonable. I think it’s wrong, but the disagreement lies mostly in values and perspective, not in the argument.
- That the climate impact will be mitigated is an assumption that doesn’t take into account the emissions from manufacture and that growth mostly seems to more than offset the lower emissions from individual vehicles. It treats the idea of progress as an inevitability.
- Cars are extremely unsafe. Americans just don’t seem to care because they don’t seem to value human life, including their own. Other countries, such as Norway, discovered that the only way to lower traffic fatalities was by reducing car use.
- Cars are highly regulated, true, but most national car regulators have been captured by their domestic car industry. Most of the time we discover serious violations it’s because a regulator investigates shenanigans by a foreign car manufacturer that gives them an unfair advantage over domestic manufacturers, such as when US regulators discovered cheating by Volkswagen.
- Most road systems are networks, true, but they are centrally designed in ways that substantially compromise the value of their distributed and networked nature. Roads and bridges are frequently designed to exclude specific communities from participating in the economy, for example.
Now this is a hypothetical example, but the point is that many of these writers regularly argued convincingly for their viewpoint and you could come away from reading them with a new perspective on what they were arguing for, even while still disagreeing with them.
Whereas today, much of the argumentation I see in my feed reader from those same people runs more along these lines:
- I like cars. Cars are good.
- I also like AI. AI is good. We need to let it improve.
- I don’t like regulation. Regulation bad.
- We shouldn’t regulate AI because we need it to become as big a part of our lives and industries as cars are.
The line of reasoning is just pathetic nonsense and I’m seeing this kind of non-sequitur nonsense everywhere.
Some of it makes more sense if you look at it through the lens of “fogginess”. Centralisation and regulation are, arguably, adjacent concepts and are related in many people’s minds. The core “nonsense” of the (synthetic) argument above is a messy mixing up of adjacent concepts. By substituting some of the core ideas with related but different ideas, the argument devolves into a series of unrelated non-sequiturs leading up to a bullshit conclusion that’s comprehensively unearned.
Except that bullshit is usually presented in more polished, seemingly thoughtful, language because of the other thing that’s everywhere. That fucking bubble we’re in.
Fucking AI fog #
The plot twist in this story is that I wrote my first real book while my brain fog was in full force: Out of the Software Crisis.
What made it work was drafting when I was the most coherent (fogginess varies) and then doing the editing and typesetting drudgework when I was not.
I also pulled out all of the stops in terms of using whatever cognitive aids I could think of.
- I wrote a lot out by hand.
- I used a specific notebook as a replacement for short term memory. It’s a notebook full of nothing except short lists that have been crossed out.
- I constantly mapped things out on A3 sheets and plastic marker sheets I’d put up on the walls.
- I used every self-editing trick and aid I knew of.
I gave myself every advantage from every tool I thought was likely to work.
But the biggest impact was from the writing process itself.
The initial draft is a representation of your argument laid out in front of you. Rewriting and editing that text so that it flows requires constantly engaging with the thoughts, ideas, and arguments in it. Clarifying the language also clarifies the thoughts. Let the text then rest a bit and then start the process anew when you revisit it. Repeat this often enough and the mud you started with becomes a cohesive logical argument.
It took a lot longer than when I’m not foggy, but it mostly worked. A clearer argument is a by-product of having to hone the writing.
Or, at least it used to be.
Today, you can use a Large Language Model (LLM) to take your shitty incoherent sentences and instantly polish them into something that looks cohesive. The sentences get their proper structure, arguments are bridged from one to the next, qualifiers injected where they belong.
But it still doesn’t make it make sense.
The chatbot can’t make it make sense because it has no sense – it’s just a statistical model.
So, if people are suffering from fogginess without being aware of it – a distinct possibility, because I certainly thought I had recovered to almost 100% when I discovered that, no, I had only been at 50% – then the popularity of chatbots and other generative writing aids could be extremely effective at concealing that fogginess from readers.
But that’s not all. An observation that persistently crops up in research into the use of Large Language Models is that regular use seems to have a diminishing effect on your critical thinking ability, even in studies that outright set out to find justification for using LLMs.
“Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship.”
The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers
“The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.”
AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
“However, despite this reduction, these students demonstrated lower-quality reasoning and argumentation in their final recommendations compared to those who used traditional search engines.”
This is not even close to being an exhaustive overview of the studies finding either diminished critical thinking or metacognitive laziness among those who regularly use LLMs. It’s a finding that keeps cropping up even in studies that aren’t looking for it directly.
As I wrote in the afterword to The Intelligence Illusion:
The purpose of cognitive automation is to think less. That’s what it’s for.
The very point of “AI” tools is to think less about what you do, think less about the arguments you make, and generally put less thought into your work and your life as a whole.
That’s what it’s for.
This would be bad news on its own, but my worry is that in this case it’s exacerbating some of the societal harm being done by the Covid pandemic.
Namely that many of our brightest are all fogged up without realising it.