Algorithms: The Siren Song of Objective Information

As more and more of our online public discourse takes place on a select set of private content platforms and communication networks, and as these platforms and networks turn to complex algorithms to manage, curate, and organize their massive collections, an important tension is emerging between what we expect these algorithms to be and what in fact they are.

Not only must we recognize that these algorithms are not neutral, that they encode political choices, and that they frame information in a particular way. We must also understand the consequences of coming to rely on the algorithms and of wanting them to be neutral, reliable, effective ways for us to come to know what is most important.

Every search engine, whether Google, Bing, or the search bar on your favorite content site (often the same engine under the hood), is an algorithm that promises to provide a logical set of results in response to a query.

But in fact the algorithm is designed to take a range of criteria into account so as to serve up results that satisfy not just the user, but also the provider, which has a vision of relevance or newsworthiness or public import, and a business model with particular demands.

When Amazon, YouTube, or Facebook offers to report algorithmically and in real time on what is “most popular” or “liked” or “most viewed” or “best selling” or “most commented” or “highest rated,” it is curating a list whose legitimacy is based on the presumption that it has not been curated. And we want to believe that presumption so much that we are unwilling to ask about the choices and implications of the algorithms we use every day.

Twitter Trends, for Instance

Twitter Trends is only the most visible example of such an algorithm in action, and it provides an excellent way to show how imperfect these algorithms are. This list, automatically calculated on the fly, is nonetheless also the result of careful curation.

Yes, on a casual visit to Twitter’s home page, Trends may appear as an unproblematic list of terms produced by a simple calculation. But a cursory look at Twitter’s explanation of how Trends works—in its policies and help pages, in its company blog, in tweets, in response to press queries, even in the comment threads of the censorship discussions—begins to reveal the variety of weighted factors Trends takes into account, and the occasional and unfortunate consequences of these algorithms.

In response to charges of censorship, Twitter has explained why the company believes Trends should privilege terms that spike, terms that exceed single clusters of interconnected users, new content over retweets, and new terms over already trending ones.

In other words, the algorithms it uses to define what is “trending” or what is “hot” or what is “most popular” are not simple measures; they are carefully designed to capture something the site providers want to capture, as well as to weed out the inevitable “mistakes” a simple calculation would make.
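
To make this concrete, here is a minimal sketch, in Python, of what such a weighted measure might look like. It is purely illustrative: the signals follow the factors Twitter has described, but every name, weight, and threshold is a hypothetical stand-in, not Twitter’s actual formula.

    # Illustrative only: a toy "trending" score built from the factors
    # Twitter has publicly described. All field names, weights, and
    # thresholds are hypothetical stand-ins.

    def trending_score(stats, already_trending):
        # stats: hypothetical per-term measurements, e.g.
        #   {"term": "#example", "mentions_this_hour": 900,
        #    "baseline_mentions_per_hour": 30,
        #    "retweet_fraction": 0.4, "distinct_user_clusters": 6}

        # 1. Privilege terms that spike: velocity relative to the term's
        #    own baseline, so heavy but steady chatter scores lower than
        #    a sudden surge.
        spike = stats["mentions_this_hour"] / max(stats["baseline_mentions_per_hour"], 1.0)

        # 2. Privilege new content over retweets: discount the retweeted share.
        originality = 1.0 - stats["retweet_fraction"]

        # 3. Privilege terms that exceed a single cluster of interconnected
        #    users: reward breadth across distinct user communities.
        breadth = min(stats["distinct_user_clusters"] / 10.0, 1.0)

        score = spike * originality * breadth

        # 4. Privilege new terms over ones already on the list.
        if stats["term"] in already_trending:
            score *= 0.5  # hypothetical penalty for past trending

        return score

Even this toy version shows how a term could be enormously popular and still never surface: a high steady baseline flattens the spike, heavy retweeting drags down originality, and a tightly interconnected community caps breadth.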

As users, we’re left guessing: for instance, why didn’t Wikileaks trend when many people expected it to? Because it had trended before? Because the discussion of #wikileaks grew too slowly and consistently over time to spike enough to draw the algorithm’s attention? Because the bulk of messages were retweets? Or because the users tweeting about Wikileaks were already densely interconnected?

Twitter curates its Trends lists in additional ways. It engages in traditional censorship. For example, a Twitter engineer has acknowledged that Trends excludes profanity (something that’s obvious from the relatively circuitous path that attempts to push dirty words onto the Trends list must take). Twitter has also said that it removes tweets that constitute specific threats of violence, copyright or trademark violations, impersonation of others, revelations of others’ private information, and spam.

Its softer forms of governance include designing the algorithm so as to privilege some users as well as some kinds of content. Twitter offers rules, guidelines, and suggestions for proper tweeting, in hopes of gently moving users toward the kinds of topics that suit its site and away from the kinds of content that, were it to trend, might reflect badly on the site. And the punishment imposed on violators of some Twitter rules for proper profile content, tweet content, and hashtag use is that the violators’ tweets will not factor into search or Trends. This, of course, shapes the Trends lists by narrowing the pool of content even in consideration for them.

Then there are the terms that Twitter includes, even though they are not otherwise spiking in popularity—terms from promotional partners.

Complications and Conflicts

Ironically, terms like #wikileaks and #occupywallstreet are exactly the kinds of terms that, from another perspective, Twitter should want showing up as Trends. If we take the reasonable position that Twitter is benefiting from its role in the democratic uprisings of recent years, that it is pitching itself as a vital tool for important political discussion, and that it wants to highlight terms that will support that vision and draw users to topics that strike them as relevant, #occupywallstreet seems to fit the bill.

But despite carefully redesigning its algorithm away from the perennials of Bieber and the weeds of common language, Twitter apparently still cannot always pluck out the vital public discussion it might want. Its failures tend to confirm the charges of its critics, who gather anecdotal evidence and conduct their own statistical analyses, using available online tools that track the raw popularity of words in a vastly more exhaustive and catholic way than Twitter does, or at least than Twitter is willing to make available to its users.

The Trends list can often look, in fact, like a study in insignificance. Not only are the interests of a few often largely irrelevant to the rest of us, but much of what we talk about on Twitter every day is in fact quite everyday, despite claims of political import. Still, many Twitter users take Trends to be not just a measure of visibility but a means of visibility—whether or not the appearance of a term or hashtag increases audience, which is not in fact clear. Trends offers to propel a topic toward greater attention, and it offers proof of the attention already being paid. Or seems to.

Of course, Twitter has in its hands the biggest resource for improving its tool: a massive and interested user base. One could imagine “crowdsourcing” this problem, asking users to rate the quality of the Trends lists and assessing their responses over time and across a huge number of data points.
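
Purely as a sketch, and with entirely hypothetical names and structures, such crowdsourcing might amount to collecting user ratings of each published list and aggregating them per version of the algorithm, so that tweaks could be compared over time:

    # Hypothetical sketch: aggregate user ratings of published Trends
    # lists so that changes to the algorithm can be compared over time.
    from collections import defaultdict

    # algorithm_version -> list of user ratings (say, 1 to 5)
    ratings_by_version = defaultdict(list)

    def record_rating(algorithm_version, user_rating):
        # One user's judgment of how well a published Trends list
        # matched what actually seemed important to them.
        ratings_by_version[algorithm_version].append(user_rating)

    def mean_rating(algorithm_version):
        scores = ratings_by_version[algorithm_version]
        return sum(scores) / len(scores) if scores else None

Over millions of such ratings, the provider could see whether a given change made the lists feel more or less “right” to its users.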

But Twitter faces a dilemma: Revealing the workings of its algorithm risks helping those who would game the system. This is so even if the revelations simply respond to charges of censorship and manipulation, and much more so if they involve sharing the task of improving it. So the mechanism underneath this tool, meant to present a (quasi-)democratic assessment of what the public finds important right now, cannot reveal its own “secret sauce.”

Which in some ways leaves us, and Twitter, in a quandary. When the results do not match what someone expects, the algorithmic gloss of our aggregate social data practices can always be read/misread as censorship. If #occupywallstreet is not trending, does that mean (a) it is being purposefully censored? (b) it is very popular but consistently so, not a spike? (c) it is actually less popular than one might think?

Longing for the Impartial Arbiter

Broad scrapes of huge data sets, like Twitter Trends, are in some ways meant to show us what we know to be true, and to show us what we are unable to perceive as true because of our limited scope. And we can never really tell which they are showing us, or failing to show us. We remain trapped in a kind of algorithmic regress.

But what is most important here is not what algorithms yield; it is our emerging and powerful faith in them. Twitter Trends measures “trends,” a phenomenon Twitter gets to define and build into its algorithm. But we are invited to treat Trends as a reasonable measure of popularity and importance, a “trend” in our understanding of the term.

And we want it to be so. We want Trends to be an impartial arbiter of what’s relevant . . . and we want our pet topic, the one it seems certain that “everyone” is (or should be) talking about, to be duly noted by this objective measure specifically designed for the purpose. We want Twitter to be “right” about what is important . . . and sometimes we kinda want it to be wrong, deliberately wrong—because that will also fit our worldview (when facts are misrepresented, it’s because someone deliberately misrepresented them).

We’re not good at comprehending the complexity required to make a tool like Trends. We don’t have the vocabulary for assessing the algorithmic intervention of such tools, or for the unexpected associations they make that are beyond the intention (or comprehension) of their designers.

We don’t even have a clear sense of how to talk about the politics of this algorithm.

Too often, maybe in nearly every instance in which we use these platforms, we equate the “hot” list with our understanding of what is popular, the “trends” list with what matters. Most important, we may be unwilling or unable to recognize our growing dependence on these algorithmic tools for navigating the huge corpuses of data that we must navigate, because we want so badly for these tools to perform a simple, neutral calculus, without blurry edges, without human intervention, without having to be tweaked to get it “right,” without being shaped by the interests of their providers.

If Trends, as designed, does leave #occupywallstreet off the list, even when its use is surging and even when some people think it should be there, is the algorithm correctly assessing what is happening? Is it looking for the wrong things? Has it been turned from its proper ends by interested parties? At the very least, we need to ask such questions.

Tarleton Gillespie is an associate professor in the Department of Communication and the Department of Information Science at Cornell University. He is the author of Wired Shut: Copyright and the Shape of Digital Culture and is currently working on a book on the politics of online media platforms. This article is derived from his essay “Can an Algorithm Be Wrong?” which originally appeared at Culture Digitally (culturedigitally.org).
