Social Media Systems and Democracy

There’s an article I’ve been meaning to write for 4 years. Every time I read another headline about fake news or social media and democracy, I once again say to myself “I’ve really got to write that thing I’ve been meaning to write!”

I hinted at it in a post about Google+ and the new YouTube comment system in 2013:

Now even discussion is curated by Google, rewarding those who talk often, and promoting hateful inflammatory comments because they provoke responses. Taking all the collected data and computational power of Google and using it to optimally encourage people to watch advertisements and argue with each other is, in this author’s opinion, brazenly unethical…

There’s a lot more to say about how this is part of a bigger picture involving various related companies and industries, but I think I’ll stick to the comments integration thing this time.

And then there’s this bit about YouTube comment curation from a post in 2015:

When I post a video, I create a space. And I choose what to fill it with. I could pretend to myself that I’m just letting it fill itself, letting people decide and express their own thoughts that I am not responsible for, but I know too much to fool myself like that.

When you let a space fill itself, it fills itself with whatever’s the fastest material. Ignorance is fast. Hate is fast. It takes a lot of practice to be fast at love and tolerance, so while there are those who are fast at it, there’s not enough of them. It also takes time and practice to get fast at cultural norms, but everyone gets that practice and becomes an expert, whether they realize it or not. People can judge what’s outside those norms really really quickly.

I created that comment section, and I’m the only one who can curate it. There’s no such thing as impartiality, only avoidance of responsibility and capitulation to those who are quickest to judge and spend the most time judging. The internet is not a 1-person-1-vote democracy. The only responsible choice is to either mute selectively, or mute everyone. Given the time it takes to curate comments, removing comment sections altogether is often the only choice.

Sometime I’ll make a longer deeper post about the systems involved in this sort of stuff.

Well, dear reader, I’d like to say that now I’ve finally found the time to write out my thoughts on the matter, but I really haven’t… all my time for the last three months has disappeared into trying to find funding for my work. But I wanted to at least get something out there, and so, knowing that I’ve at least ranted on this topic personally to people, I searched my email to see if I could pull some more quotes, and found a draft from April 2015 that I’d entirely forgotten about.

Some of it would be very different if I wrote it today, though enough of it is on the mark that I really wish I’d posted it back then. I’ll leave it as it was, both because I still don’t have the time, and also because I think it’s good to remember how I thought about these things years ago, before Trump’s campaign, before evidence of Russian interference and bots (though I like to think that maybe if I’d done the research and thinking required to finish this post, it would have occurred to me that the fastest users with the most scripted responses point to bots as an inevitability).

So with that preamble, here’s an email draft titled “Social media systems” from April 2015.


Today’s topic, in honor of my deciding to talk publicly on the internet about systems philosophy, is social media internet systems and how they relate to democracy. Comment systems, twitter, reddit, email, etc. We’ll talk about four types of easy responses and four types of feedback loops that work together to make sure your corner of the internet is addictive yet unsatisfying.

In the Vi Hart school of systems philosophy, it’s not about what’s good or bad, right or wrong. That is a job for moral philosophy. I care about what is inevitable. I care about understanding the consequences of the systems we create, so that we can’t claim ignorance when the inevitable happens.
So, first, to quote myself: “Rating, upvoting, sharing, commenting. A pale illusion of democracy where those who are quickest to judge get many more votes.”

There’s a sense in which the internet is quite democratic, possibly even more democratic than our current implementation of democracy in government (for viral content, money only goes so far). On the internet, content rises to the top if it wins the popular vote. But unlike modern implementations of democracy, you get as many votes as you have time to give, all day every day, and most of those votes are taken by web companies without asking. And unlike the popular vote in democracy, internet popularity votes do not imply endorsement.
Votes that come from gut reactions take less time than anything involving actual thought.

Gut reactions are very useful things and voting from gut reaction is not inherently good or bad. When we see a video of a cop killing an unarmed person I personally think it is good and right that we should, without hesitation, respond with a gut reaction of horror and injustice. Many people do, and so this kind of thing can spread on the internet quite virally, but because there are political connotations and you might want to fact check first or make sure the wording of your tweet is respectful and conveys the gravity of the situation, it takes longer to cast your internet vote for the importance of police killings than to internet vote on the color of a dress. If there were one election and one vote, the important issue would win. But on the internet, The Dress simply gets votes faster.

Meanwhile, complicated issues that are extremely important but subtle, issues that require hours of research before you even understand them well enough to want to vote on them, those things cannot compete. Economically speaking, shallow votes are significantly cheaper to make.

I think of fast gut-reaction votes as falling into one of four categories, the four being the combinations of two axes: controversial or uncontroversial, and common experience or criticism.

1. Common human experience: Votes that are easy because they’re completely uncontroversial and of little consequence. Examples include cats being adorable, puns being groan-worthy, and sunsets being pretty. Most people agree, and even if you disagree, you don’t really care, so anyone can feel safe participating on the internet in the delight of adorable kittens without fear of backlash.

2. Common mass media experience: controversial but unimportant arguments over media that many people are assumed to be familiar with. Which house is better, Gryffindor or Slytherin, Stark or Lannister? Is the dress blue and black or white and gold? The opinions are strong and discussion is fierce, with each camp having its stock set of answers. But everyone involved knows that it does not really matter; debate is easy and by-the-script, without the barrier of caring. It is understood that when someone argues against your camp, it is not a personal attack.

3. Common criticism: Examples include shallow visual judgements and stereotyping. There is a common cultural script for stereotypes and so these judgements can be expressed immediately and automatically. They are faster than less-shallow responses. Criticism is asymmetric and uncontroversial in that, rather than there being an opposing party with an opposing criticism, there are only those who criticize and those who find it irrelevant or in bad taste. Also in this uncontroversial-but-critical category, I put criticisms of inaction. If you want to criticize someone, it is always possible and always easy to think of something they didn’t do or didn’t say, and then criticize them for not doing that thing. It borrows the form of a thoughtful calling out of a glaring omission, but it takes no effort and contains no content.

The criticism itself is not controversial, but the making of it often is, which can spiral into easy gut reaction type four:
4. True controversy on party lines: votes that are easy because they follow party lines. Similar to the above, but symmetric. There’s a template, made by someone else, that you can follow, of how things should be and how to respond. There is more than one template, and so whenever two people with different templates collide, they each follow their template, instantly, and the debate carries on. This is differentiated from true intellectual debate because this is the case where responses and internet votes are easy and instant; whoever stops to think gets outvoted by those following the script written by their in-group. In order for thoughtful responses to win over instant reaction, thoughtful responders need to exist in numbers significantly greater than those following the scripts.

It is possible to think long and hard about how you personally want to react to certain types of things, and then, having made the decision, react that way very quickly. But the internet’s ability to amplify quick reactions works best when an entire group is all following the same template.

These four kinds of judgements are really easy to make and can drown out all other discussion if your system lets them.

type 1: Is the cat adorable? Yes. Is this other cat cute too? Yeah, or maybe you hate cats, and that’s easy too. Very few people need to agonize for hours in order to decide whether a cat is cute; the decision is visual, shallow, and of very little consequence. Whether you intend to vote on cute things or not, there are certain internet systems where every time you engage in a shallow visual act you are leaking votes, and thus trivial easy-to-judge pictures with no personal judgement are doomed to overrun any social media outlet that counts those votes.

Twitter likes common experience tweets. Common experience votes can be given without context; twitter’s lack of organizational capabilities, the sheer volume of disconnected snippets, means that in order for a tweet to be self-contained it must rely on common context. Twitter was originally text-based, with no automatically integrated video or pictures, so short text jokes became prevalent. Puns are perfect because they are short inside jokes common to the speakers of an entire language.


type 2: specialty sites with the context for the argument. Reddit, or a subreddit, will let you argue about Game of Thrones. The comments of an article about a subject will let people enjoy arguing about the trivialities of it.

type 3: Vine and stereotypes.
type 4: twitter politics. Evolution, abortion, vaccines, feminism.

If the goal of a web company is to maximize engagement, they must encourage engagement that is easy and cheap to produce. They must create and share content that is economical in that it solicits comments written with the ease of cultural scripts.

So where are these scripts created?

1. Reactions considered and tested on the internet

Reactions scripted by experience making angry, stereotypical, or hateful comments on the internet often miss their mark when attempted in their unamplified real-world form. There’s something to think about if you’re relying on internet systems to make your voice stronger than it is in the real world, if you feel safer expressing your opinions publicly on the internet than saying them to anyone you know in real life.

2. Reactions considered and tested in the real world, then brought to the internet

In contrast, reactions created out of slow real-world thoughts and personal experiences often do better in the real world than on the internet. In many cases they can be brought to the internet to in turn help amplify real world actions. Twitter has had a large role in organizing marches for social justice and climate change and organizing disaster relief. The reaction on twitter is fast because the script was decided by previous understanding gained in the real world, which can then be brought back to the real world.

Now the feedback loops come into play. Almost every action you take on the web is recorded and used to shape the actions future users are directed to take, which is a recipe for feedback loops.

1: the keep-voting feedback loop

Crowd-sourced popularity guarantees the promotion of content that generates user actions, not real life actions; the internet thing that inspires you to get up and do something means only that you are no longer internet voting.

Many internet comment systems are optimized for engagement. It’s nearly inevitable that any company that is winning at capitalism cares less about whether you like their product and more about whether you use it.

Imagine an image search engine where every time you enter a search term, it searches for images near those keywords, and then gives preference to images that previous users clicked on. Every time you click on a search result, that is a vote for what that search term should give to future searchers. You might imagine that this helps them give users relevant results, as crowdsourced by other users.
Say I do a search for a conference, to get a general feel for what it looks like. There are various logos and graphics for it, and a picture of someone in an amazing cosplay outfit. Which looks cool, so maybe I click on it. And then I go back and keep searching for just a regular photo of the conference. When I find that photo, I stop looking.
Of course, those who get distracted by every cosplayer and want to look at all the cosplay photos cast more votes. It’s up to the search engine to decide whether it’s good or bad that people searching for basically anything will see a page full of irrelevant sexy photos and clickbait; the point is not that it’s good or bad but that it’s not surprising when it happens and it’s not the fault of the users but of the system.
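That feedback loop is simple enough to sketch in code. Here’s a toy version of such a hypothetical search engine (the class, the scoring formula, and the weights are all my own invention, not any real search engine’s algorithm), where ranking blends keyword relevance with historical clicks, so every click compounds:

```python
from collections import defaultdict

class ClickBiasedSearch:
    """Toy search ranker: relevance score boosted by past clicks.

    Every click feeds back into future rankings, so clickable-but-
    irrelevant results drift toward the top over time.
    """

    def __init__(self, click_weight=0.5):
        self.click_weight = click_weight
        self.clicks = defaultdict(int)  # (query, doc_id) -> click count

    def rank(self, query, docs_with_relevance):
        # docs_with_relevance: list of (doc_id, relevance in [0, 1])
        def score(item):
            doc_id, relevance = item
            return relevance + self.click_weight * self.clicks[(query, doc_id)]
        return sorted(docs_with_relevance, key=score, reverse=True)

    def record_click(self, query, doc_id):
        self.clicks[(query, doc_id)] += 1


engine = ClickBiasedSearch()
docs = [("conference_photo", 0.9), ("cosplay_photo", 0.6)]

# The less relevant result attracts more clicks...
for _ in range(3):
    engine.record_click("conference", "cosplay_photo")

# ...and now outranks the more relevant one for every future searcher.
top = engine.rank("conference", docs)[0][0]
print(top)  # -> cosplay_photo
```

Nothing in the ranker distinguishes “clicked because it was what I wanted” from “clicked because it was distracting,” which is the whole problem.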

When I do an image search, don’t click on anything, and immediately leave the search site, that is usually a sign of a successful search. When a web search results in a user immediately clicking a link to another site and never coming back, that is the sign of a successful search. But for Google, for example, successful searches are not the product; the user searching is the product, sold to advertisers. It is important that you search, not that you find.

It’s a classic systems pitfall: you get what you optimize for, not what you pretend that thing represents. When you create a system to optimize for most user actions, it doesn’t matter whether you’re imagining user actions represent meaningful engagement. It’s hard to quantify and optimize for meaningful engagement, but a love for data-driven approaches doesn’t justify using a data-driven approach on the wrong data. It is better to use fallible human judgement and intuition on trying to solve the right problem than an algorithmic approach that is guaranteed to solve the wrong one. If you optimize for clicks, don’t be surprised if all you get is clicks.

Take, for example, YouTube’s shift in the meaning of subscriptions. At some point, some manager or board member decided that increasing subscription numbers was an important goal for YouTube. A YouTuber with 10 million subscribers would look good for the economic health of the company. Successful megacreators make it appear as if investing in creating YouTube content is a reasonable choice. And so it was no surprise when YouTube quickly reached the goal of having a YouTuber with 10 million subscribers after making some optimizations. All they had to do was change what it meant to subscribe to someone, as well as automatically subscribe all new users to a list of heavily-subscribed YouTubers in an opt-out system. The system prefers ten million people shallowly engaging with one shallow content creator to having those same ten million divided up among a thousand niche creators they feel a meaningful connection to. YouTube will reap all the short-term benefits and face all the long-term hazards of cultivating a monoculture.

In theory when you want to work but feel uninspired, browsing the web should lead you to a great many wonderful things that really make you want to create something. But if you are browsing in a part of the web that promotes things using internet votes, you are all but guaranteed to only find things that elicit a quick easy user action and then leave the user unsatisfied and looking for more. In practice, inspiring and satisfying pieces of content are dead ends for user actions. Thoughtful pieces of content that take twenty minutes to read get one vote in the time it takes for pretty pictures and amusing memes to get dozens.

Reddit is probably the epitome of this because it’s always been explicitly internet-vote-based. Facebook and YouTube used to show you everything from, and only from, those you subscribed to, but have since decided to filter your content based on “algorithms”; so they now have all the symptoms of systems run by internet votes and are fully optimized time-sucks.

For companies that run on advertising, this is the economical choice in the short term, though as internet ads become worth less it might not be enough. For a web company that provides a valuable service to its users, it is possible to turn those users into paying customers. But for web companies with enough popularity and momentum that they transitioned into providing a service to advertisers, where the users are the product being sold, well, if web advertising continues its current trend it will be interesting to see these companies trying to turn their own product into their own customers. Google and YouTube’s latest attempt at a subscription fee, for example, would have made sense a few years ago, but seems very strange indeed given their recent change in direction.

Twitter, in contrast, shows you only and every tweet from people you’re subscribed to (plus clearly-differentiated advertisement tweets). It is still an internet-vote system that disproportionately favors trivial content, but showing all tweets avoids the worst of this feedback loop. It is no wonder that many people find twitter to be a more functional news source than legacy news media.

Which brings us to:

2: the filter bubble feedback loop

You may have heard of this before [talked about here]; basically, the content you give positive internet votes to is content you agree with, which leads to your being surrounded by content you agree with. This can be pleasant for the user, but it can also sustain harmful communities by making their members feel normal. Anti-vaxxers, antifeminists, and climate change deniers are mostly not particularly terrible or stupid people; they are just bubbled into communities where reality is unwanted, and so social media algorithms make sure reality doesn’t have the chance to get in.

Again, we’re not doing moral philosophy, we’re understanding the systems that enable certain things.

The communities that survive best in their isolated bubbles are those that develop scripts for how to respond to conflicting opinions; these automated defense reactions mean you can out-respond anyone who considers their words in real time.

3: the voter influence feedback loop

Easy-to-make decisions happen faster, so people see those first. When voting is asynchronous and public, the fastest reaction gets disproportionate influence on future voter behavior. The fastest reaction is also not usually a very well-thought-out one. (Also see Stack Exchange’s fastest-gun-in-the-west problem.)

Pure statistics says occasionally things will quickly gain enough positive votes, out of pure chance, to rise to the top of promoted content areas, where they are more visible and can gain more momentum.
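That momentum effect is easy to simulate. In the toy model below (every parameter and function name is illustrative, chosen by me, not drawn from any real platform), each new voter’s chance of upvoting rises with the votes already visible, so an early lucky head start snowballs:

```python
import random

def simulate_votes(initial_votes, base_appeal=0.3, influence=0.05,
                   voters=200, seed=0):
    """Toy herd-voting model: each voter upvotes with probability
    base_appeal plus a bonus for votes already visible (capped at 0.95)."""
    rng = random.Random(seed)
    votes = initial_votes
    for _ in range(voters):
        p = min(0.95, base_appeal + influence * votes)
        if rng.random() < p:
            votes += 1
    return votes

# Two posts of identical underlying appeal; one got a lucky early burst.
late_bloomer = simulate_votes(initial_votes=0)
early_luck = simulate_votes(initial_votes=5)
print(early_luck > late_bloomer)  # -> True
```

Because both runs share the same random sequence, the only difference is the head start, and with visible votes feeding back into each voter’s decision, the head start never shrinks.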

4: the creator motivation feedback loop

If a creator is motivated by gaining ever higher numbers, then that creator must make large quantities of content that appeals to ever higher numbers of people. If a creator is motivated by positive reactions but demotivated by negative reactions, inoffensive culturally-normal content wins. Most creators can handle the occasional negative comment, but pretty much everyone has some threshold where, if they get that amount of negative or hateful feedback, they will not continue putting content in that space. It is easier to reach that threshold if the internet space you’re in harbors a culture that makes easy stereotypical judgments about your race, gender, sexual orientation, religion, etc. And if you’re a creator who finds a barrage of trivial comments demotivating, whether critical or not, it’s difficult to find anywhere on the internet that seems worth existing in.

Personally, I consider businesses responsible for the inevitable results of the systems they create and I’d like to see more systems-creators taking responsibility for the results of their systems, though in my experience businesses prefer to cite their good intentions and when the inevitable happens they pass off the blame to their users.

Sometimes these scripts develop organically, sometimes they are explicitly made for this purpose. Last year’s “gamer gate” thing is of interest because the creation of the scripts is documented. It’s a case of a tiny insular community explicitly strategizing about how to overwhelm the internet by outlining standard responses and scripts for behavior. This strategy was extremely successful for a short period of time. When no one understood what gamergate was or how to respond, it was easy for a tiny group to overwhelm conversations. Members had so successfully filterbubbled themselves that many truly believed their grievances were legitimate and that the game industry, even mainstream media, would sympathize with them. But their strategy was successful enough that the filter bubble popped; the outside world realized ignoring it wouldn’t make it go away and finally took the time to respond. Gamer gate was confronted with reality for the first time and many previous gamergaters quickly disassociated themselves from the group, while the most invested went back to their bubble.

Gamergate’s mistake was trying to effect real world change, when their existence was only supported by internet systems. Scripted responses and instant outrage are amplified by internet structures, but real world actions are not. Internet groups that rally around making instant anti-social-justice reactions without aspiring to real world action can do a much better job at surviving with the helping hand of the many social media systems that amplify them.

Now democracy. Voter turnout is terribly low, and everyone complains their vote doesn’t matter. And yet, people mostly vote in the big elections where their vote matters least, rather than the small local elections and primaries where their vote is extremely valuable. But no one knows their local officials, and learning enough to be firm in your vote is hard! A simple democrat/republican party line vote, once every four years, is ever so much easier. Not only can you easily choose a side, but you can easily defend it, by following the party script.

Often in democracy, people feel they are not voting for their candidate, but against the other one. I worry about the extent to which this might, beyond being true, actually be the opposite case: that people are voting for the opposite candidate. Not in the election, but for the election. For example, most republicans are just as reasonable as most democrats, and yet the republican leaders are often complete caricatures that the party only backs out of loyalty and lack of other options (and vice versa). It is fascinating to me, this process by which a party ends up with leaders that party members actively dislike. I think it is probably the case that democrats, wallowing in easy votes against the most ridiculous republicans and following the script for judgements against those easiest to judge, make those candidates extremely popular. Republicans are left defending and further popularizing these targeted caricatures, and while many would like to vote for better candidates, voting for lesser-known candidates is slow and hard and diffused.

In a system where we imagine each other as monsters, it’s no surprise when we get monsters.

Vi Hart

You can support my work here: