
SwiftUI, Privacy, macOS, and the Web

Pondering the future of software development. Why SwiftUI makes me optimistic about Apple’s platforms.

Obsessing about the state of computing

I’ve spent a decent portion of the past week digesting the announcements at WWDC, Apple’s developer conference. Not because I’m a macOS or iOS developer in any way but because I’m a little bit obsessed with having a clear picture in my mind of the computing landscape. I have a strong personal preference for Apple’s software aesthetics and conventions, even in this somewhat barren, modernist-themed, ‘flat UI’ era of software design.

I’ve tried the alternatives. This post is written on a Chromebook. I edit all my photos on a Windows machine. When I work from home I try to use Desktop Linux, dual-booting on that Windows machine. My Canadian phone is Android.

Over the past couple of years I’ve made a point of getting a sense of what other, non-Apple, platforms feel like these days. I can do my job on pretty much any of these platforms. I’m more concerned with what it feels like to use them.

ChromeOS is minimalistic to the point of annoyance and Google’s general software design approach seems to be: “just boring enough to prevent you from outright thinking it’s ugly”. A combination of Linux app support and proper desktop web apps has turned it into a pretty productive platform, especially if you work for a company that’s heavily into Google’s services. But there is nothing ‘great’ about it. It’s compromises all the way down. Even Google’s flagship apps, Gmail and Google Docs, are half-baked UX-wise and genuinely feel like they were designed by somebody stuck working at IBM in the 1990s, who was way too talented to be working there, but whose soul just died one morning when the office coffee-maker broke, and they started to phone it in.

Windows has always been a weird, confusing mess of conflicting aesthetics and behaviours. Every era of Windows design seems to still exist in the OS somewhere as a layer you can stumble into by clicking around for 5 minutes. It feels like every time you click on something, a gremlin smoking a cigar somewhere throws some dice to see if you end up in Metro land or a modernist pastiche of Windows 2000.

Desktop Linux has no overall aesthetic of its own but you can make it look pretty much any way you want. Although, like Windows, it’s very easy to stumble out of the surface layer and into something that looks like what a 1980s developer imagined the 1990s would look like. And in Linux’s case that under-layer is substantially more likely to be broken than Windows’ archaeological remnants. Less “oh my, I just turned the corner and this street looks just like something out of Jane Austen. How quaint.” More “oh my, I just turned the corner, why is it dark and why does that man have pins in his head? Are those meat-hooks? What do you mean ‘taste your pleasures’?”

Android is just a mess. The OS is serviceably boring in its own right, just like ChromeOS. Every single part of it is a bright idea wrapped in a dozen layers of “Is this inoffensive enough? I feel like it still has a little bit of an opinion. Let’s add another layer of blaaaaand.” Which is fine. I guess. It gets out of the way. Which actually makes the platform all the more unpleasant because most Android apps are just ghastly.

Using Android on a non-flagship phone is about as enjoyable as getting stuck in a long queue in the supermarket while constantly fending off a rando trying to talk to you. Android: it’s okay to be creepy because it’s cheap. It’s never actively counter-productive: just consistently hard to use and low-key creepy (surveillance is so obviously baked into the OS’s core). I’ve been trying to use Android for three years now so I feel confident saying this: compared to iOS and desktop web apps, and especially on low- to mid-range devices, Android apps just plain suck. And mobile web apps overall suck even harder.

The web platform deserves a call-out of its own. The web has a few genuinely great services. They’re great as long as you’re using a fast laptop or a high-end phone. I can easily lock up a mid-range Chromebook (like the Samsung Chromebook Plus) completely, just by using a few well-chosen, pretty shitty, but essential web apps. And the only major progressive web app that seems to work great on a mid-range phone is Twitter. Which is a bit like having a car that only works when the driver is smoking. “Great car, unfortunately it will only move when you’re actively poisoning yourself. Oh, and don’t mind the dude in the back seat. He’s just going to watch everything you do, ever.”

Because, you know, if you think Android has a problem with privacy (it has none), the web is a world where your bathroom has glass walls and strangers take notes when you shower, none of them positive. The web is capable of great experiences but it’s unbearable without ad blockers, and even then you need to stay away from low-end devices.

What all of these platforms have in common is that you find the apps that you can tolerate (some of which may even occasionally be great) and then you do your best to try and ignore the crap surrounding the stuff you use.

None of these platforms are fun and a lot of what’s on them is just plain broken.

Dysfunctional aesthetics and interactivity

The software economy overall, no matter the platform, is substantially dysfunctional. Apple doesn’t get a pass on that one. Their app stores are just as broken as everybody else’s.

App stores favour abusive in-app purchases. The web favours espionage-based business models. Subscription-based businesses seem to be the only workable compromise. But a subscription requires a commitment from the buyer that just isn’t feasible in many of the product segments that thrived under the old ‘buy, then upgrade’ model. This is one of the underlying causes of my ongoing dissatisfaction with the software in my life. The market is broken.

The aesthetics of software these days are also very messed up and, as I’ve written about recently, software aesthetics have a huge effect on my productivity and ability to concentrate.

As I wrote above, I’m a little bit obsessed with understanding what’s going on in the general-purpose computing landscape today. This is why I’m juggling six OSes on a regular basis. Some of that is because I am a little bit obsessive by nature. But I’m also doing it because there are over half a dozen general-purpose computing platforms in widespread use today. That’s a new thing.

A few years ago we only had two: Windows and Mac. The appearance of touch-oriented mobile OSes has resulted in an explosion of platforms: we now don’t just have traditional desktop OSes and the new touch OSes but also a cluster of hybrid OSes.

Software as a field is still digesting what the addition of new modes of interactivity like ‘touch’ changes in the computing landscape. Does adding a new mode obsolete the old one, ‘there can be only one’ Highlander-style? Can we figure out ways of making them coexist in a single platform? How can software development scale to handle these different modes? Do we just pick one mode and ignore the others?

For a long time it has felt like we are inevitably going to end up with three core platforms – each of which is a multi-paradigm OS that handles many different interaction modes on a variety of devices: Microsoft’s, Google’s, and Apple’s. Consolidation seems inevitable. Given how hard it is to maintain a software platform, this many different OSes doesn’t seem sustainable.

But to consolidate these OSes you first need to have clear answers to a few questions:

  • Do they handle the many different, possibly conflicting, modes of interaction in a single platform: touch, stylus, mouse, keyboard, voice, AR, VR?
  • How can they make it easy for developers to handle these different modes?
  • How can they help developers design UIs for a variety of different screens?

The Web is a meta-platform

The web is the only platform that has had answers to these questions. Some of the answers are messy. Some of them don’t quite work. But, generally speaking, if you need to make a web app that works with both touch and mice, or on both big and small screens, the web has long had fairly straightforward answers for you. Both Microsoft and Google have, mostly, followed the web’s example. Or, more correctly, the web’s approach is the result of a compromise between the philosophies of Microsoft, Google, Mozilla, and Apple.

Microsoft’s and Google’s path towards consolidation has been fairly clear for a while now: both will be relying on a mix of one native app platform (Windows and Android) with progressive web apps. The future seems fairly clear for them both.

The web has become the meta-platform that 90s Microsoft hoped it could prevent. When in doubt, you build for the web. Progressive Web Apps will become a big part of the desktop app story for both Windows and ChromeOS. Both Microsoft and Google are investing in Chromium as a core runtime for their OSes. Between Progressive Web Apps and Electron, the web seems to have a lock on desktop app development for the foreseeable future.

Which is also going to drive hardware sales because none of you seem able to make your shit work well on slow devices.

Progressive Web Apps and Electron are poised to replace Windows as the “it’s a mess but it works” default target for making productivity, education, and enterprise software. The web is going to win the desktop by virtue of the same kind of creative thinking that brought us mid-90s Microsoft Access. Joy. At least Visual Studio Code is good, so all is forgiven, I guess.

Whether they will start to encroach on Android apps is less certain at this point, largely because web development seems entirely disconnected from the reality of what most phones are like and the connection speeds they face.

We’ve known for a while which way Google and Microsoft are heading. Microsoft’s decision to switch to using Chromium as their web engine just brings them more in line with Google’s vision of the future of computing.

The big question, then, is: what is Apple going to do?

  • Are they going to support Progressive Web Apps?
  • Are they going to slowly phase out the Mac?
  • Will they give the iPad the features it needs to replace the Mac?
  • Do they have something else in mind?
  • How do they expect developers will handle these changes?

The future of the Mac

Apple has two general purpose computing platforms (iOS and Macs) but unlike ChromeOS or Windows, you can’t take an iPhone or iPad app and just run it on the desktop platform.

The simplest thing for them to do would have been to phase out macOS. But that would mean ditching a 35-year-old software development culture that has consistently made excellent software with often relatively few resources.

The lack of attention Apple has paid to both the hardware and software of the Mac platform had led many of us (me included) to conclude that the Mac’s demise was inevitable.

Last year’s Marzipan addition to macOS seemed to make this obvious. Marzipan is a method for developers to take their iOS apps and quickly adapt them to run on macOS. The writing seemed to be on the wall: the Mac was going to be put on life support, relying on cheaply ported iOS apps to survive, with Marzipan providing developers with a transition mechanism between the two.

Last week’s WWDC changed my mind. Apple’s plans are a bit more interesting than that, and we got our first glimpse of what those plans might be from Craig Hockenberry, who seems to have guessed what was going on even before it became official. His two blog posts on the future of interaction, one published before WWDC and one after, are highly recommended if you make software for any platform:

In my opinion, limiting this thinking to just views and layout is short-sighted.

That’s because it doesn’t address the interaction problem with an ever increasing set of platforms. But what if this new framework not only let you declare views, but also the behaviors they enable?

The developer would describe the interactions an app supports. There would be relationships between those declared interactions. All this immutable information would then be processed by user interface frameworks.

"The Future of Interaction" by Craig Hockenberry

But I think there’s something important to add to his note: the SwiftUI DSL (Domain Specific Language) describes the most capable environment. It’s the maximum interaction surface: platforms will render and react to a subset of what’s declared.

Some devices, like a watch, will be capable of handling a physical rotation from a dial. Your only concern on other platforms, like iOS, are alternate interactions. When you target Playdate with your app’s SwiftUI DSL, you won’t be surprised to see the crank do the right thing.

"The Future of Interaction, Part II" by Craig Hockenberry

It turns out that Marzipan (now Catalyst) very much is a transition mechanism, but the transition isn’t from the old desktop mode to a brave new touch-oriented future. The transition is from the old mode of UI development to a new declarative, dynamic, and reactive method of development that spans all of Apple’s platforms and makes it easier to deal with a variety of different interaction modes.

(And it might even ‘do’ the web as well in the future. The development model allows for it, but I have my doubts about Apple ever investing in that.)

SwiftUI is, clearly, the future of app development for Apple. It combines many of the good software development ideas to come out of the web, academia, and research labs over the past forty years. Something you can easily surmise from the sheer variety of ways people have found to dismiss it:

  1. ‘It’s just a React copy’ (unidirectional data flow, functional DSL for views).
  2. ‘We had interactive previews like this for web development since the 90s with Dreamweaver.’
  3. ‘This doesn’t do anything that Smalltalk [or insert academic programming language here] hasn’t done for ages.’
  4. ‘This is just Apple’s version of Google’s Flutter.’
  5. ‘The web has had declarative programming from the start.’

All of these are true. SwiftUI is a compilation of all of the best software development ideas Apple could find. The innovation is in how all of these pre-existing ideas are combined into a single, tightly integrated app development framework. And, yeah, it definitely wouldn’t exist if it weren’t for the web.
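
The React comparison, at least, is easy to make concrete. Here’s a minimal sketch of the loop the two share, using nothing but SwiftUI’s basic building blocks: state flows down into the view tree, events flow back up and mutate that state, and the framework re-renders whatever depended on it.

```swift
import SwiftUI

// Unidirectional data flow, SwiftUI-style.
struct Counter: View {
    @State private var count = 0        // the single source of truth

    var body: some View {               // the view is a function of state
        VStack {
            Text("Count: \(count)")
            Button("Increment") {
                count += 1              // the event mutates the state and
            }                           // SwiftUI re-renders what changed
        }
    }
}
```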

(You could write an entire monograph tracing the origins of the many ideas that SwiftUI is combining and how Apple has, in hindsight, been clearly working towards this for many years.)

This is where software development overall has been heading for a while. React is Facebook’s entry and has redefined web development. Flutter is Google’s exploration of these same ideas.

SwiftUI is obviously a good thing for software development on Apple’s platforms. It has built-in support for:

  • Cancellable animations
  • A full-featured UI widget library
  • A detailed and rich icon and symbol library
  • Internationalisation
  • State management
  • Flexible layout handling
  • And lots lots more…

All right out of the box with little to no additional effort on the developer’s part.
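
A quick sketch of a few of these in a single view, with hypothetical localisation keys standing in for entries in a strings file: built-in state management, the symbol library, internationalisation, and a one-line cancellable animation.

```swift
import SwiftUI

struct Panel: View {
    @State private var expanded = false     // built-in state management

    var body: some View {
        VStack {
            // String literals in Text are treated as localisation
            // keys, so internationalisation comes essentially for free.
            Text("panel.title")
            if expanded {
                Image(systemName: "info.circle")  // built-in SF Symbols library
            }
            Button("panel.toggle") {
                // Animations are interruptible and cancellable by default.
                withAnimation(.spring()) { expanded.toggle() }
            }
        }
    }
}
```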

It is, easily, the most exciting software development framework I’ve seen in years.

But…

Why does this change my mind about the future of the Mac?

Google and Microsoft are betting on convertible hardware:

  • Tablets that turn into laptops with the addition of a keyboard cover with a trackpad
  • 2-in-1 laptops that fold over to become tablets
  • Laptops and desktop machines with touchscreens
  • Phones that fold out to become tablets

Their bet is that the future lies in a single device that can switch between multiple interaction modes.

To understand why last week’s news changed my mind you first need to understand why I was worried about the Mac in the first place. It wasn’t just one thing but a series of issues that together gave the impression of a platform nearing its end:

  • The seeming lack of attention being paid to the OS.
  • Botched product updates (the last few MacBook Pros were a clear misfire, for example, as was the trash-can-looking Mac Pro).
  • Apple’s consistent positioning of touch as the primary mode of interactivity of the future.
  • The lack of updates on the higher, pro, end of the Mac line-up.

All the while, Apple’s answer to the question of how you deal with varied modalities of interaction has been that too many of them in one device makes for a bad device.

Which, to be fair, might be true.

Apple’s line has been that a MacBook with a touchscreen or a MacBook that converts into an iPad is a sub-par MacBook. And correspondingly, they’ve seemed hesitant to position iPads as devices that can be converted to laptops by adding a keyboard+trackpad cover.

This is diametrically opposed to what Google and Microsoft are betting on. Many have assumed that this is just Apple providing itself with enough rhetorical cover to eventually kill off the Mac and replace it with a multi-modal line of iPads.

That Apple is rolling out mouse support in the next version of iPadOS could be interpreted as support for this idea: that the Mac is not long for this world. I’m more inclined to believe the opposite. They’re adding mouse support as a long-needed accessibility feature now because doing so earlier would have caused a panic in the Mac developer community. And they’ve made sure that it’s clearly positioned as an accessibility feature largely because any other positioning would threaten the Mac’s standing.

(The circular, touch-oriented pointer is important too. It’s a visual signifier that you are still operating within a touch paradigm and are using the mouse as an accessibility device, not as a primary mode of interaction. Symbols matter.)

What has been missing from Apple is a vision for the future of macOS software development and a vision for how we are supposed to deal with multiple modes of interactivity. Until last week, everybody thought that vision was simply: touch rules, mice suck, everybody make iPad apps and, long term, everybody buy iPads. But now their software vision is clearer:

  1. Write your apps using SwiftUI, declaratively handling the various interactive modes.
  2. SwiftUI as a framework handles all of the drudge-work of dealing with each platform, leaving the developer with the resources needed to handle each platform properly (a small sketch of this follows below).
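
As a small illustration of that second point (with hypothetical view and action names), consider a context menu: the developer declares the menu once and the framework maps it to each platform’s idiom, a right-click on the Mac and a long-press on iOS and iPadOS.

```swift
import SwiftUI

struct DocumentRow: View {
    let name: String

    var body: some View {
        Text(name)
            .contextMenu {
                // Declared once; SwiftUI renders it as a right-click
                // menu on macOS and a long-press menu on iOS/iPadOS.
                Button("Duplicate") { /* hypothetical action */ }
                Button("Delete") { /* hypothetical action */ }
            }
    }
}
```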

In terms of hardware, my theory is that Apple is going to go in the opposite direction from Google and Microsoft: devices should be specialised, not convertible. Each primary mode of interactivity is suited to specific kinds of tasks and is best served by a device specially designed for those modes and tasks. This is also why I don’t think the Touch Bar is going away but is rather going to be iterated upon.

In this vision, the Mac is an essential part of the landscape: one specialised device in a range of specialised devices. A MacBook would largely be the same hardware as an iPad or iPad Pro, just with the addition of a keyboard and trackpad. macOS would continue to exist as a specialised environment for tasks that require a keyboard and pointer. iPadOS would become more and more specialised, in terms of UX and UI, to the tablet environment. Apple would end up dogfooding their own software development ideas in that all of their OSes would share more and more code, but each would still preserve its own unique je ne sais quoi because the UX would still be specific to how it is used.

SwiftUI excites me

It’s been a long time since I’ve seen a software development environment that looked so right. I’ve been watching the SwiftUI WWDC videos as they’ve been made available and they seem to tackle so many of the problems inherent in software development. Internationalisation is baked in. First-class animation and transition support is baked in. The preview is a live, dynamic, fully-fledged authoring environment. The view DSL seems intuitive and easy to use. It’s the development framework I’ve spent years wishing existed.
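
That preview is driven by ordinary code, which is part of what makes it feel like an authoring environment. A rough sketch, reusing the hypothetical Counter view from earlier: a PreviewProvider returns whatever view (and environment) you want rendered, and Xcode keeps it live as you edit.

```swift
import SwiftUI

struct Counter_Previews: PreviewProvider {
    static var previews: some View {
        // Rendered live in Xcode's canvas as the code changes;
        // here, previewing a hypothetical Icelandic localisation.
        Counter()
            .environment(\.locale, Locale(identifier: "is"))
    }
}
```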

It’s tightly integrated with iOS, iPadOS, macOS, and watchOS and therein lies the rub.

Apple’s platforms are the only ones that seem to offer both built-in privacy guarantees and a first-class, future-facing development framework. But their tight integration comes at a price: they are very closed and controlled.

I work for a company that is in the Open Education Resources and Free/Libre/Open Source Software space. One of the problems facing OER and FLOSS is that openness is being co-opted by large companies and well-funded start-ups. While openness and open licenses have helped disrupt oppressive incumbents (high-priced textbook publishers in education and anti-competitive closed-source companies on the software side), they have in turn enabled a new kind of monopolist: the aggregator, one that isn’t really covered by any existing anti-trust regulation.

Open licenses enabled broad collaboration and use, breaking down many an incumbent’s hold on the market. Then companies embraced open licenses and leveraged them to aggregate both supply and demand. They discovered that these licenses enabled a uniquely closed kind of company. One that didn’t have to give up anything of its own because the value isn’t in the software or the content but in aggregation, which lies outside the scope of open licenses. These companies could be as opaque and deceptive as their predecessors but still manage to turn any and all efforts at openness to their benefit. Open licenses have become weaponised by venture capital.

Open source is responsible for a lot of good. It’s also responsible for the toxic hold modern tech companies have over our society.

Without Open Source Software, we wouldn’t have Google, Facebook, Uber, or Amazon. Trying to disrupt them with more OSS is like trying to disrupt fire with petrol. The Open Education movement is very likely to go down the same road. Odds are that OER will do little more than replace existing incumbent textbook publishers with a new kind of aggregator. One who not only subsumes the entirety of OER into itself, Google-style, but one who will also be yet another platform for universal invasive surveillance.

(As bad as it would be for my job, me being the digital reading guy and all, concentrating on printed Open Textbooks may well be the safest bet for the community at this point.)

So we, as users, are caught between a rock and a hard place:

  • We have platforms that are open and productive but have surveillance baked into their core (Android; ChromeOS; Windows, to an extent; the Web, definitely, waves at Google). These platforms give us control, as long as that control doesn’t mean disabling surveillance, because the more we customise, the more we reveal about ourselves.
  • We have platforms that give us as users control but whose productivity is limited to a few very narrow domains (Desktop Linux. If you can make it work for you, congratulations!).
  • And we have platforms that are tightly controlled, whose use of OSS is limited and strategically controlled, and have built in counter-measures to combat the rising tide of surveillance. That’s Apple.

It’s hard to see how the web and open platforms can ‘open’ or standardise their way into privacy. And it’s even harder to see how a world dominated by web apps can give users the autonomy and freedom so craved by the Free Software crowd.

It’s possible that Apple’s approach is the least bad option we have: some software openness, mostly on the developer and web side, in a tightly integrated package that offers:

  • Privacy assurances when dealing with third party services.
  • A platform vendor who takes measures to minimise the data they have access to while still providing you with the services you’re likely to need.
  • An excellent development environment which in turn should lead to a healthy software ecosystem.
  • Decent support for modern web standards that have been implemented in ways that don’t compromise your privacy too much.
  • Native apps that can integrate with cloud services without the same privacy compromises routinely made by web apps.

These are all compromises, to be sure. As users we’re giving up the freedoms we have when using Desktop Linux and the flexibility we have when using Android, Windows, or ChromeOS.

But I’m starting to think that the balance struck by Apple’s approach is the safest of the bunch, both in terms of individual and societal well-being. At least until the tech companies fuck up so badly that governments are forced to regulate them. Even if that happens, the web still has a problem, because lack of privacy is baked into its client-server model.

The announcements at last week’s WWDC presented a vision of the future that I find compelling, and they have gone a long way towards dispelling my many qualms about where Apple has been heading.

I’m still a web developer, obviously. Not about to switch careers at the drop of a hat. But as a user, Apple’s platforms are an appealing value proposition.

Maybe it’s time for me to start saving up for the inevitable ARM-based MacBook?
