W3C and EME: it isn't about preventing DRM but saving the W3C

The debate surrounding the W3C’s involvement in EME has polarised. Unfortunately for the W3C, both sides are right.

On the one hand you have the crowd in the robustly anti-DRM corner, like Cory Doctorow, who has outlined this position very well on multiple occasions.

On the other hand, you don’t have the pro-DRM crowd. Those largely stay mum and lobby in private. Instead you have the ‘this is inevitable’ crowd who offer a mixture of ‘this is just how the world works’ and ‘what did you expect?’.

These here tweets are a handy taster:

They aren’t wrong. Once you brush past the condescending ‘slow of thinking’ crack, they are plainly describing reality as it is: the W3C is merely an instrument of the will of its biggest members.

This is a worse condemnation of the W3C than anything Cory Doctorow has said on this subject because at least he is taking the stated purpose, values, and principles of the W3C at face value. Doctorow’s criticism of the W3C’s complicity in DRM work is entirely consistent with the W3C’s mission.

No matter which side is right, the W3C faces an existential crisis.

Either:

  1. The W3C is a shepherd of the web for all, the web on everything, and a web of trust, but it is now fundamentally compromising its own principles in the name of maintaining industry relevance.
  2. The W3C is merely an industry body for browser vendors to collaborate in, and its mission statement is nothing more than PR to increase buy-in from the smaller, largely powerless, members.

Both can’t be true. Neither is good news for the organisation.

Either way, the viability of the W3C is undermined because both ‘truths’ take away many of the incentives for people to participate.

If the W3C’s core mission isn’t the public good and the benefit of the web as a whole, then the costs and difficulty of participating don’t make sense for those who aren’t browser vendors. Instead of going through the W3C’s archaic and lumbering processes, it would make more sense to lobby vendors directly in their preferred venues—whether those are the WHATWG or some other vendor-sponsored discussion community. They’re the ones who are going to implement it, so why pretend anybody else matters?

If the W3C’s purpose isn’t to safeguard the well-being of the web, then future specification work becomes largely a question of software interoperability—best managed using tried and tested open source community management practices, such as the RFC-based processes used by Rust, Ember, and Yarn.

As soon as you pull the ‘realpolitik’ card, the W3C has lost because its mission is explicitly a moral one.

Without that moral purpose, the W3C loses on every level. The RFC process is more transparent, more open, and more practical for software development than the W3C’s Recommendation track. The WHATWG’s process is better integrated with browser engine development. Ecma covers JavaScript standardisation. There are no practical reasons to do any standards work at the W3C in the long term, only moral ones.

The W3C’s existence depends on its mission being grounded in moral certainty. That certainty is the only substantial obstacle preventing it from being replaced with easier, more pragmatic standardisation efforts that focus exclusively on implementation.


Disclaimer: my employer is a W3C member and we participate as much as our limited resources allow. However, these opinions are my own and don’t reflect those of the Rebus Foundation in any way.