Discover more from Arc Digital
Are Algorithms Incompatible With Free Speech?
Matthew Yglesias says social media algorithms pose a problem for free speech. Is he right?
My last Belvyland entry responded to a claim Matthew Yglesias made about Elon Musk. Welp, here’s more of the same—except this time the topic is free speech rather than Elon himself.
From the same article as before, Yglesias writes:
The concept of “free speech” on Twitter strikes me as inherently problematic due to the platform’s reliance on algorithmic amplification and suppression of certain tweets.
I don’t see why the presence of an algorithm would necessarily violate free speech. Twitter is more than capable of enshrining a robust commitment to free speech while also using an algorithm to amplify and suppress content. That’s because the algorithm fundamentally concerns a particular piece of content’s reach, not its existence.
When we talk about amplification, that just means greater reach; suppression means less reach. But that’s just a digital version of something already present in society: the unequal distribution of traits that correlate with greater speech-reach.
Consider the following traits:
having a loud voice
being socially personable
These traits aren’t equally distributed in society. People who possess these qualities tend to enjoy greater reach with their words than those who lack them. Someone with a loud voice will be heard in a room by more people than someone with a faint voice. Someone who is charismatic and articulate will probably get more opportunities to talk to people than someone who lacks those traits. Yet the fact that these qualities are unequally distributed doesn’t in any way complicate our ability to ensure free speech for all.
You could say in response, “Yes, but those are content-neutral traits. Twitter’s problem is the algorithm selects for ‘politically favored’ speech.” But this response misses two things.
First, Yglesias located the problem in the algorithm’s amplificatory and suppressive functions—so my analogy to “life’s algorithms,” which favor being articulate and popular and the rest, was chosen to specifically address that point. Strictly speaking, the fact that any speech environment is conditioned—whether by design or just naturally—to favor some instances of speech for greater reach than others isn’t ipso facto a problem. A society can maintain a robust commitment to free speech even while its members’ words enjoy seriously unequal access to other people’s ears.
Second, even if the algorithm filtered for political views—blessing the favored views with greater reach and cursing the disfavored views with lesser reach—that wouldn’t by itself violate free speech. That’s because access to the algorithm isn’t some sort of public good every social media user is owed, as though the platform is obligated to give your speech a ride on its content-amplifying channels. So long as the platform allows your speech to exist on it, and so long as people who want to access your posts can access them, your speech is free. The platform doesn’t owe your speech a discovery boost.
With that said, in the case of a platform that filters for ideology, we would be correct to say that the platform is favoring some speech over other speech (as when we say that a particular site favors left-leaning commentary over right-leaning commentary). But the point is we would be incorrect to suggest the platform is against free speech.
Yglesias’s error, I think, stems from his contention that “free speech is fundamentally about neutrality with regard to content.” This is true in some contexts but not in others: the government’s observance of free speech must adopt this content-neutral understanding, but a private academic institution’s need not. Consider the example of allowing people equal access to certain goods. If a local government allows a member of political group x to use a megaphone at a public park, it must also allow a member of political group y to do the same thing. If a university is going to honor free speech, it must do the same, with one important difference: while the university must allow groups with differing views equal access to its goods (all the student groups must be equally eligible to book its auditorium, for example), it doesn’t have to grant each group equal visibility in its promotional materials (pamphlets, advertisements, etc.). Twitter is more like the private university, and its algorithm is like the university’s promotional channels.
Certainly, for a platform to exhibit a genuine commitment to free speech, its rules must be applied fairly. But that’s a separate matter. That has to do with the company refraining from selective application of its rules (“If you’re a liberal, you can get away with it,” etc.). Yglesias’s point, rather, is that the presence of an algorithm, which favors some speech and disfavors other speech, is necessarily at odds with the very idea of free speech. That doesn’t follow for the simple reason that free speech, for a private social media platform, is about allowing an instance of speech to exist more than it is about granting that instance of speech the same algorithmic propulsion as any other instance of speech. The fact that the algorithm picks out my tweets over yours doesn’t mean you are dispossessed of free speech; it means my tweets are better, or better received, or incorporate some favored element (like a picture or something else the algorithm might select for).
There’s an ambiguity in the word “suppression” that might trip some people up. Above, I’ve been using it as the flipside of amplification—just as Yglesias has. But the word is also suggestive of a piece of content being completely blocked or censored. If it’s the latter use that’s in view, then that obviously challenges the idea that the platform allows free speech. If a tweet is inaccessible to other users—i.e., when users go to your account and can’t see the tweet—that’s this thicker form of suppression, and, ToS violations aside, that’s the kind of suppression that would constitute a repudiation of free speech. But the thinner version of “suppression”—tweets stripped of eligibility for algorithmic amplification—doesn’t seem to be necessarily at odds with free speech. In other words, there’s a difference between tweet censorship and tweet throttling.
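To make the censorship/throttling distinction concrete, here’s a toy sketch in Python. It is entirely hypothetical (nothing here reflects how Twitter’s actual ranking system works): tweets live in a store (existence), a “profile view” gives direct access to them, and an “algorithmic feed” decides which ones get amplified reach. A throttled tweet drops out of the feed but stays on the author’s profile; a censored tweet disappears from both.

```python
# Hypothetical toy model of the essay's distinction between thin
# "suppression" (throttling: a tweet loses algorithmic reach) and thick
# "suppression" (censorship: a tweet is blocked from existing on the
# platform at all). Names and structure are invented for illustration.

from dataclasses import dataclass


@dataclass
class Tweet:
    tweet_id: int
    author: str
    text: str
    engagement: float          # stand-in for whatever signal a ranker scores
    throttled: bool = False    # thin suppression: ineligible for amplification
    censored: bool = False     # thick suppression: inaccessible to users


class Platform:
    def __init__(self):
        self.tweets = {}  # tweet_id -> Tweet

    def post(self, tweet):
        self.tweets[tweet.tweet_id] = tweet

    def profile_view(self, author):
        """Direct access: everything the author posted, minus censored
        tweets. Throttled tweets still show up here -- they exist and
        anyone who seeks them out can read them."""
        return [t for t in self.tweets.values()
                if t.author == author and not t.censored]

    def algorithmic_feed(self, k=10):
        """Amplification: only non-throttled, non-censored tweets compete
        for ranked reach in the feed."""
        eligible = [t for t in self.tweets.values()
                    if not t.throttled and not t.censored]
        return sorted(eligible, key=lambda t: t.engagement, reverse=True)[:k]


platform = Platform()
platform.post(Tweet(1, "alice", "boosted take", engagement=0.9))
platform.post(Tweet(2, "bob", "throttled take", engagement=0.8, throttled=True))
platform.post(Tweet(3, "carol", "censored take", engagement=0.7, censored=True))

feed_ids = [t.tweet_id for t in platform.algorithmic_feed()]
# Bob's tweet loses reach but still exists on his profile;
# Carol's tweet is gone from both views.
```

On this toy model, Bob’s situation is the thin suppression the essay argues is compatible with free speech, while Carol’s is the thick kind that isn’t.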
Let’s say that in the interest of simplifying matters we went with “the right to say what I want without unjust interference” as a basic working definition of free speech. Under this simple definition, in order to conclude algorithms are incompatible with free speech we would have to believe that unless a social media post receives algorithmic amplification, it is being “unjustly interfered with.” But there is nothing necessarily unjust about a platform deciding not to privilege your views for greater reach, and it would be a genuine form of interference only if you were owed algorithmic boosting in the first place.