Been seeing a lot of tweets like this lately:
Facebook censored my column — and I still can’t find out why https://t.co/3AoTwdupzZ
— SalenaZito (@SalenaZito) August 23, 2018
Not surprised – and it is not nearly as nefarious as some might think. Facebook is trying to use algorithms to control what goes out over their platform. Ostensibly it is an effort to deal with “offensive” content. Well, algorithms are just rules, that’s all they are. Try and find a set of rules that define “offensive” – go ahead, I dare you. The Supreme Court tried and they could not do it. To attempt to do this is silly on its face – just PC nonsense. But if you work the problem deeper, things get even weirder.
All computers can do is evaluate something against the rules. So pick a simple rule as an example – “The use of the word ‘sparkfargle’ is offensive.” The computer will censor both the sentence, “You fricking sparkfargle,” and the sentence, “The use of the word ‘sparkfargle’ is offensive.” Now what is the computer going to do? – It cannot even read its own rules.
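The trap described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration, not Facebook's actual system: the banned-word list and the `is_offensive` helper are inventions, standing in for whatever rule set a real platform applies.

```python
# Hypothetical sketch of a naive keyword rule. The filter matches on
# the word itself, so it cannot distinguish *using* the word from
# *mentioning* it -- it even flags its own rule when written out.

BANNED_WORDS = {"sparkfargle"}

def is_offensive(sentence: str) -> bool:
    # Strip surrounding punctuation and lowercase each word so that
    # "'sparkfargle'" and "Sparkfargle," still match the rule.
    words = {w.strip("'\".,!?").lower() for w in sentence.split()}
    return not BANNED_WORDS.isdisjoint(words)

print(is_offensive("You fricking sparkfargle"))                         # True
print(is_offensive("The use of the word 'sparkfargle' is offensive."))  # True: the rule censors its own statement
print(is_offensive("Have a nice day."))                                 # False
```

Both sentences trip the rule, which is exactly the point: to a computer, the insult and the policy describing the insult are the same string of characters.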
From this we can learn that some of Mr. Foer’s worries are overblown. Despite the claims of “AI” and the other grandiose pronouncements of the Internet’s “visionaries,” computing has just gotten bigger, not smarter. Bigger allows it to do a better imitation of smart, but it does not make it smart. And so, at some point, human beings have to get involved.
The typical solution to the rule-application conundrum is to have the computer flag the potentially offensive material for human review. There are two problems with that approach. For one, it is cost prohibitive. Given the amount of material that flies through Facebook and the speed with which it is supposed to go up, Facebook would go broke hiring, training, and retaining the number of people it would need. But more importantly, while all reviewers would likely find “You fricking sparkfargle” offensive, some would likely also find the mere mention of the word in the sentence, “The use of the word ‘sparkfargle’ is offensive,” offensive, and decide that the euphemism “s*f*-word” should be used instead. And so now the rule will not be applied evenly. That’s even worse than having the rule to begin with.
And so welcome to the administrative state. I have been writing about this since the Kavanaugh nomination – first laying out some basics, then looking at what I called “the petty power play,” and most lately looking at the fact that the enforcement of regulation tends to assume guilt rather than innocence. The use of algorithms, because it is simply the application of rules, is exactly analogous to trying to set up an administrative state. And in the “sparkfargle” scenarios we see one of the biggest problems – no matter how carefully you write the rules, they always end up, in some fashion, arbitrary.
Arbitrary governance, whether evident in the whims of a monarch or the whims of a regulatory enforcement agent, is the antithesis of the rule of law upon which the United States was founded.
And so, as irritating as the recent spate of censorship on Twitter and Facebook is, I think it will be a good thing in the end. It demonstrates the overblown nature of the claims of the technologists and will thus prove self-defeating. But more importantly it illustrates the utter futility, and resulting tyranny, of the administrative state.
Facebook is beginning to wane. Calls to replace Twitter are increasing. Slowly people are learning that the world is about people, not computers. People are coming to terms with the fact that computers are a tool for them, not their masters. People are realizing that work is rewarding and endless entertainment is boring.
As goes this phase of technological development and implementation so will go this several decades of experimentation with the administrative state. Algorithms are stupid – people are smart.