Hello to my new subscribers! If you’re reading this hoping for more revelations about Tether, sadly this substack is not purely about crypto (although I do write about it frequently) - it’s general writings on tech and finance.
Now, to business…
Misinformation is everywhere. Biden says it’s killing people. It’s clear the government, and society at large, want social media companies to do more.
But what exactly is misinformation anyway? And who gets to define it?
Enragement is Engagement
Since 2016, the narrative we’ve heard on repeat is that “social media spreads disinformation,” particularly after Facebook’s cascade of scandals from Pizzagate to Russian hacking to Cambridge Analytica.
Social media’s defense at the time was that blaming them was akin to blaming your internet provider if someone sent you mean texts on WhatsApp. They simply provide the infrastructure.
But studies show that social media tends to perpetuate fake news, partly due to algorithms that reward engagement and partly due to flaws in our brains: according to one study, “false news stories are 70 percent more likely to be retweeted than true stories are.” We click on things that are novel or surprising, and often share without even clicking.
The expectation has now become that social media needs to police misinformation, because we cannot police ourselves.
But COVID exposed both the pervasiveness of misinformation and the difficulty of defining it. Wondering about the virus’s origins went from racist scaremongering to presidential pastime in less than a year, and Facebook has struggled to keep up with what counts as a “fact.”
How should we define what’s acceptable? Should it be the media, academics, experts? On controversial topics like COVID’s origins or vaccine side effects, it seems narratives are deemed acceptable by a vague notion of “consensus” among “experts.”
But consensus tends to move slowly - again, because people, including experts, are slow to change their minds even in the face of disconfirming evidence. For most of February 2020, we read articles about how travel bans don’t work or that studies didn’t support mask wearing.
Studies of academics show they are subject to the same biases as news readers: bad studies are cited more often than good ones. “[P]ublished papers in top psychology, economics, and general interest journals that fail to replicate are cited more than those that replicate.” Even academics spread fake news!
In most disciplines, only two-thirds of studies replicated, and in psychology, less than 40% did. Books by someone like Malcolm Gladwell, who defined an era of pop psychology with his interpretations of academic studies, can go from bestseller to quack in less than a decade.
In fact, Samuel Arbesman, in his book The Half-Life of Facts, found that all disciplines - even the hard sciences - are subject to a half-life after which a study is no longer cited.
Our knowledge of the world is growing exponentially, which means old knowledge becomes outdated ever faster.
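Arbesman’s half-life idea can be sketched in a few lines. This is just an illustration of the exponential-decay arithmetic - the 45-year half-life below is a made-up example, not a figure from the book:

```python
# Illustrative sketch of the "half-life of facts" idea: if a field's
# literature has a citation half-life of H years, the fraction of
# today's findings still cited after t years decays exponentially.
# The 45-year half-life is a hypothetical number for illustration.

def fraction_still_cited(t_years: float, half_life_years: float) -> float:
    """Fraction of a field's studies still cited t_years from now."""
    return 0.5 ** (t_years / half_life_years)

# With a 45-year half-life, half the literature survives 45 years,
# a quarter survives 90, and so on.
print(fraction_still_cited(45, 45))   # 0.5
print(fraction_still_cited(90, 45))   # 0.25
```

The faster a field moves (the shorter its half-life), the steeper this curve - which is exactly why last decade’s pop-science staples age so badly.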
So is Facebook capable of fact checking professors? And if accuracy is so arbitrary, how do they decide the boundaries?
On the other hand, the changing nature of knowledge has always been true. We went from a geocentric universe to a heliocentric one (though it took a while). Why is this a problem now?
The Incredible Shrinking Institution
A common explanatory narrative is that well-known falsehoods - COVID misinformation, or the claim that the election was stolen - are rooted in a growing distrust of institutions. And the data bears it out - Gallup polls show a common thread of declining confidence in the church, newspapers, criminal justice, big business, and even public schools.
According to Pew polling on the subject, “About two-thirds (69%) of Americans say the federal government intentionally withholds important information...61% say the news media intentionally ignores stories.”
Is it simply a matter of a suspicious public? Information and knowledge are moving faster than ever, and yet the institutions that are supposed to inform us - the ones that should be the strongest countervailing force to misinformation - are weaker.
COVID headlines were where the need for a quick, Google-able answer collided with the arbitrary nature of defining “the science.” The shifting answers on whether masks worked or vaccines had side effects hurt the credibility of public officials and health experts.
It’s easy to see why. Media had to meet the demands of a 24-hour news cycle with fewer fact checkers, shorter articles to fit on mobile screens, and shrinking advertising revenues. Meanwhile, the proliferation of sources online makes it easy to seek out the answers that confirm what you already wanted to believe, rather than read a long-form magazine piece laying out all of the background on a subject.
There are also fair reasons for the souring of public opinion on academia. The replication crisis I described above has thrown more cold water on how much trust to place in the ivory tower. Ivy League institutions like Columbia are under fire for cash-cow degree programs that charge $180k for two years while graduates can barely eke out $30k salaries. For-profit universities, often unaccredited and low quality, are taking in more students and leaving them to default on their loans. The typical American has grown increasingly suspicious of colleges and universities.
General research into “expert” knowledge by Philip Tetlock has shown that political forecasters are often no better than random chance, and that expertise has diminishing returns for forecasting ability.
Ultimately we come to the question: are democracy and our collective health in crisis because we can no longer discern “the truth,” as pundits will tell you? Or is it that our faster-moving world has made it painfully obvious that the “truth” has always been defined socially?
Think back to the first time someone told you as a kid that something you learned at school was “biased.” You felt smarter. You began to question authority.
Now, armed with Google, everyone feels that way, and decades of research show that experts make the same mistakes the rest of us do. That distrust is driving people away from the press and government toward new sources like social media and search engines - a vicious cycle that further weakens those institutions, both financially and in the algorithms.
We could of course make Facebook hire the fact checkers instead, but this wouldn’t do much to strengthen the old guard. What to do?
Past is Prologue
On September 25, 1690, the first newspaper in America, “Publick Occurrences Both Forreign and Domestick,” was published in Boston.
The paper was banned after one day. The government felt that it spread misinformation.
That initial poor experiment with the colonial newspaper didn’t mean the story ended there. Printing presses continued to get better and cheaper, free speech laws were enacted, editorial standards were created, and bureaucracies rose up to create textbooks, curricula, and give schools accreditation based on common standards. Knowledge became self-regulated. But these changes took centuries.
The Internet is a similar shock to the speed of publication. And today, India is using draconian media laws to blame “disinformation” as a cover for shutting down criticism of the government, an obfuscation much like China’s use of “terrorism” as a cover for putting Uighurs in camps.
In the US, officials like Sen. Klobuchar propose regulations for making social media companies liable. Would this solve anything?
Misinformation would likely then migrate to internet message boards, email lists, Telegram groups, and other uncontrolled channels. These have existed for decades, and every time a forum in the darker corners of the Internet, like 4chan, tried to regulate behavior, users migrated to an even worse one. In India, misinformation spreads by group chat, and the government has proposed even more invasive surveillance of users to try to root it out.
These efforts seem akin to banning the first newspaper - trying to close a Pandora’s box that is already open. Ultimately it is technology that always leads the way; society has to upgrade its institutions in response.
In the near term, social media companies will create - and have already created - their own standards. Algorithms and content moderation teams will need to treat a health article differently than a disruptive finding in physics, and to slow down the fastest-spreading stories on these topics for both manual and automatic review.
And as knowledge about these issues - and the supply of outdated studies - grows, editorial standards will likely rise as well. For example, the shorter a discipline’s half-life, the more caveats we will eventually see around a study. Media citations of anecdotal evidence, studies with small samples (low n), or non-replicated studies might need warnings. The expectation will shift from needing to see data to needing to see a meta-analysis.
These seem technologically and organizationally difficult, but Wikipedia has already built an incredibly complex and effective set of standards to do the same - without regulation.
But how will most readers know what technical terms like sample size even mean? Just as mass primary schooling was both the product of and the solution for the speed of the printing press, more data literacy will be the only long-term solution to Internet misinformation. Understanding randomness and probability - that a vaccine tested on millions of people will produce a few long-tail events - needs to become commonplace, as does the ability to see through tricks like weasel words (“some experts say”), which Wikipedia covers as part of its standards.
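The vaccine point is just back-of-the-envelope probability. With hypothetical numbers - a 1-in-a-million adverse event rate, which is not a claim about any real vaccine - rare events become a near-certainty at scale:

```python
# Why rare events are guaranteed at scale: with a hypothetical
# 1-in-a-million adverse event rate across 10 million people,
# you should *expect* about 10 cases even if the vaccine is safe.

p = 1e-6          # per-person chance of a rare event (hypothetical)
n = 10_000_000    # number of people vaccinated

expected_cases = n * p                 # about 10
p_at_least_one = 1 - (1 - p) ** n      # essentially certain (> 0.9999)

print(expected_cases, p_at_least_one)
```

A handful of scary anecdotes out of ten million doses is exactly what the math predicts, which is why headlines built on those anecdotes mislead readers who haven’t internalized this arithmetic.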
The supply side of information is likely not controllable, so we can best address misinformation by improving people’s understanding. It will probably take centuries to teach the whole populace what p-hacking is, or how to spot rhetorical devices...just like it took centuries to establish universal literacy and free schooling. But the cat’s out of the bag.
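P-hacking itself reduces to one line of probability. A sketch of why it works, assuming a researcher runs 20 independent tests on pure noise at the conventional 0.05 threshold:

```python
# Why p-hacking works: even when there is no real effect, each test
# has a 5% false-positive rate, so running many tests makes at least
# one "significant" finding likely.

alpha = 0.05   # conventional significance threshold
tests = 20     # hypothetical number of tests tried on noise

p_false_positive = 1 - (1 - alpha) ** tests   # about 0.64

print(p_false_positive)
```

Roughly two times out of three, the noise-mining researcher gets a publishable “result” - a fact that a data-literate reader should weigh whenever a single study makes headlines.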