Are Algorithms Incompatible With Free Speech?
Matthew Yglesias says social media algorithms pose a problem for free speech. Is he right?
My last Belvyland entry responded to a claim Matthew Yglesias made about Elon Musk. Welp, here’s more of the same—except this time the topic is free speech rather than Elon himself.
From the same article as before, Yglesias writes:
The concept of “free speech” on Twitter strikes me as inherently problematic due to the platform’s reliance on algorithmic amplification and suppression of certain tweets.
I don’t see why the presence of an algorithm would necessarily violate free speech. Twitter is more than capable of enshrining a robust commitment to free speech while still using an algorithm to amplify and suppress content. That’s because the algorithm fundamentally concerns a particular piece of content’s reach, not its existence.
When we talk about amplification, that just means greater reach; suppression means less reach. But this is simply a digital version of something already present in society: the unequal distribution of traits that correlate with greater speech-reach.
Consider the following traits:
having a loud voice
being charming
being socially personable
having charisma
being articulate
These traits aren’t equally distributed in society. People who possess these qualities tend to enjoy greater reach with their words than those who lack them. Someone with a loud voice will be heard in a room by more people than someone with a faint voice. Someone who is charismatic and articulate will probably get more opportunities to talk to people than someone who lacks those traits. Yet the fact that these qualities are unequally distributed doesn’t in any way complicate our ability to ensure free speech for all.
You could say in response, “Yes but those are content-neutral traits. Twitter’s problem is the algorithm selects for ‘politically favored’ speech.” But this response misses two things.
First, Yglesias located the problem in the algorithm’s amplificatory and suppressive functions—so my analogy to “life’s algorithms,” which favor being articulate and popular and the rest, was chosen to specifically address that point. Strictly speaking, the fact that any speech environment is conditioned—whether by design or just naturally—to favor some instances of speech for greater reach than others isn’t ipso facto a problem. A society can maintain a robust commitment to free speech even while its members’ words enjoy seriously unequal access to other people’s ears.
Second, even if the algorithm filtered for political views—blessing the favored views with greater reach and cursing the disfavored views with lesser reach—that wouldn’t by itself violate free speech. That’s because access to the algorithm isn’t some sort of public good every social media user is owed, as though the platform is obligated to give your speech a ride on its content-amplifying channels. So long as the platform allows your speech to exist on it, and so long as people who want to access your posts can access them, your speech is free. The platform doesn’t owe your speech a discovery boost.
With that said, in the case of a platform that filters for ideology, we would be correct to say that the platform is favoring some speech more than other speech (as when we say that a particular site favors left-leaning commentary over right-leaning commentary). But the point is we would be incorrect to suggest the platform is against free speech.
Yglesias’s error, I think, stems from his contention that “free speech is fundamentally about neutrality with regard to content.” This is true in some contexts but not in others: the government’s observance of free speech must take this content-neutral form, but a private academic institution’s need not. Consider the example of allowing people equal access to certain goods. If a local government allows a member of political group x to use a megaphone at a public park, it must also allow a member of political group y to do the same thing. If a university is going to honor free speech, it must do the same, with one important difference: while the university must allow groups with differing views equal access to its goods (all the student groups must be equally eligible to book its auditorium, for example), it doesn’t have to grant each group equal visibility in its promotional materials (pamphlets, advertisements, etc.). Twitter is more like the private university, and its algorithm is like the university’s promotional channels.
Certainly, for a platform to exhibit a genuine commitment to free speech, its rules must be applied fairly. But that’s a separate matter: it has to do with the company refraining from selective application of its rules (“If you’re a liberal, you can get away with it,” etc.). Yglesias’s point, rather, is that the presence of an algorithm, which favors some speech and disfavors other speech, is necessarily at odds with the very idea of free speech. That doesn’t follow, for the simple reason that free speech, for a private social media platform, is about allowing an instance of speech to exist more than it is about granting that speech the same algorithmic propulsion as any other. The fact that the algorithm picks out my tweets over yours doesn’t mean you are dispossessed of free speech; it means my tweets are better, or better received, or incorporate some favored element (like a picture or something else the algorithm might select for).
There’s an ambiguity in the word “suppression” that might trip some people up. Above, I’ve been using it as the flipside of amplification—just as Yglesias has. But the word is also suggestive of a piece of content being completely blocked or censored. If it’s the latter use that’s in view, then that obviously challenges the idea that the platform allows free speech. If a tweet is inaccessible to other users—i.e., when users go to your account and can’t see the tweet—that’s this thicker form of suppression, and, ToS violations aside, that’s the kind of suppression that would constitute a repudiation of free speech. But the thinner version of “suppression”—tweets stripped of eligibility for algorithmic amplification—doesn’t seem to be necessarily at odds with free speech. In other words, there’s a difference between tweet censorship and tweet throttling.
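To make the censorship/throttling distinction concrete, here is a minimal, purely hypothetical sketch in Python. None of these names (Post, profile_view, algorithmic_feed) correspond to any real platform’s code; they simply model the two views in question: a censored post disappears even from a direct visit to an account, while a throttled post remains fully accessible and merely loses its eligibility for the recommendation feed.

```python
# A toy model (not any real platform's code) of the two kinds of "suppression":
# censorship removes a post's accessibility; throttling only removes its
# eligibility for algorithmic amplification. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    censored: bool = False    # thick suppression: blocked outright
    throttled: bool = False   # thin suppression: ineligible for amplification
    engagement: float = 0.0   # a signal a ranking algorithm might use

def profile_view(posts: list[Post], author: str) -> list[Post]:
    """What you see by going directly to someone's account.
    Only censorship removes a post here; throttling does not."""
    return [p for p in posts if p.author == author and not p.censored]

def algorithmic_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """A recommendation feed: only amplification-eligible posts
    compete for reach, ranked by the engagement signal."""
    eligible = [p for p in posts if not p.censored and not p.throttled]
    return sorted(eligible, key=lambda p: p.engagement, reverse=True)[:limit]

posts = [
    Post("alice", "a viral take", engagement=9.5),
    Post("bob", "a throttled take", throttled=True, engagement=9.9),
    Post("carol", "a censored take", censored=True, engagement=8.0),
]

# Bob's post still exists and is reachable by anyone who seeks it out...
assert posts[1] in profile_view(posts, "bob")
# ...it just gets no ride on the amplification channel.
assert posts[1] not in algorithmic_feed(posts)
# Carol's post, by contrast, is gone from both views: the thicker suppression.
assert posts[2] not in profile_view(posts, "carol")
```

On this toy model, only the `censored` flag repudiates free speech in the sense argued above; the `throttled` flag affects reach while leaving the speech itself in place.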
Let’s say that in the interest of simplifying matters we went with “the right to say what I want without unjust interference” as a basic working definition of free speech. Under this simple definition, in order to conclude algorithms are incompatible with free speech we would have to believe that unless a social media post receives algorithmic amplification, it is being “unjustly interfered with.” But there is nothing necessarily unjust about a platform deciding not to privilege your views for greater reach, and it is only a genuine form of interference if you were owed algorithmic boosting.
What strikes me, Bernie, is that you're focusing on that one entry in Matt's article, when the entries below it flesh out the thesis a bit more, in ways that I think challenge your argument. Yglesias isn't just saying that algorithms induce an inegalitarian quality in our public discourse (which, as you correctly point out, has always existed in some form or another). His claim is that they create a distinct bias in favor of disinformation.
One could perhaps argue that this, as well, was true before social media algorithms. After all, we've always had tabloids and sensationalist news publications. It's hard to say just how much more or less well informed people are on average these days than before social media, but it isn't hard to imagine that we had similar problems with disinformation in the days when the gatekeepers of our journalistic institutions were more subject to manipulation by wealthy media tycoons.
However, my guess is that while we probably fare somewhat better at being well-informed on average, the variance among us has grown to dangerous proportions, with some of us knowing more about the world than would have been feasible thirty years ago, and others of us effectively knowing less than nothing, having been lured down rabbit holes of conspiracy theories so absurd and into information bubbles so friendly to our own biases that simple, old-fashioned, corn-fed ignorance would be a massive upgrade in awareness. This has led to levels of polarization in our country arguably not seen since the Civil War era. And one can make a very convincing (really, almost impenetrable) case that social media engagement algorithms are the principal culprit. Social media itself had already largely removed the barriers of time and space that once acted as wetland buffers against the wildfire of mob hysteria in our public discourse. Social media algorithms have acted like an accelerant of an already difficult dynamic, much like dousing the dry foliage in kerosene and lighting a match.
So it's not really that social media algorithms foster unfair amplification of speech. It's that they foster both unnatural and seemingly toxic amplifications of speech, at levels which the human brain is arguably ill-equipped to properly manage, and which weaponize against us the very social proclivities that have, for millennia, formed the unstable yet persistent foundation of human civilization. To draw a parallel with another newly controversial constitutional right: social media algorithms are to the First Amendment what AR-15s are to the Second Amendment, an artifact of modernity which threatens to render hopelessly naive our existing conceptions of the fundamental good those rights embody in the first place.
Which is a scary place to be. We should be just as concerned about repairing a broken culture of free speech as we are about healing an unhealthy gun culture. Social media algorithms, the invisible neural agents unleashed on an unsuspecting population, are a great place to start.