I spend all day on the internet. I don’t do great in big groups of people. I’ve always felt freer online to communicate, express myself, and try to solve problems through technology. As a trans woman there are a lot of daily obstacles that don’t exist online. You can be free of your body, immune to physical violence, and shielded from the societal judgement that comes with being visibly queer. It seems these factors drive many people to form important bonds via digital platforms. These communities develop a specific language representative of their collective values and experiences, making it easy to identify an outsider. This creates a system of trust that corporations and profit-seeking entities have learned to take advantage of.
Machines are starting to catalogue and learn this language in order to identify groups and ultimately modify their behavior. Behavior modification comes in many forms these days. Most of it is driven by advertising as the predominant business model for “free” services. The goal for these companies is to train you and your group well enough that you will generate profit or a favorable outcome. Training a human can be just as primal and simple as training an animal. Give them rewards and they will associate what they did with positive feedback. Take the reward away or cause them pain and it teaches them not to do that. The real trick is making them think they are in control.
Once a group becomes aware of an intent to persuade, it’s more difficult to manipulate them. This is why social media is so perfect for this kind of experiment: the people you interact with are already trusted sources. They can be your friends, family, or idols. All the machine has to do is present this trusted content to you in a context that might influence you. It may encourage you to invest further in the platform itself, or to purchase a product that another member has “liked”.
This might seem relatively innocuous until you consider the implications in other contexts. I haven’t stopped thinking about a fantastic example from sociologist Zeynep Tufekci. She questions the ethics of algorithms that identify a group of people who are more likely to buy things at a certain time of year. The catch is that this group of people happens to be bipolar and in the midst of a manic episode. This targeting happens without human interaction or oversight, and the company profits directly from the pain suffered by many neurodivergent people.
Another example of this lack of oversight and perspective was reported by the New York Times this year. The article was about Microsoft’s facial recognition software and its inaccuracy in detecting people of color. This is clearly because the application was built and tested by predominantly white males. The bias is baked into the software. It’s a clear example of how small our perspective can be when we set out to automate the world. Our digital communities often suffer from an endless echo chamber, because we seek a feeling of belonging and validation.
Xenophobia is one of the side effects of online groups. This can manifest as a protective instinct, attempting to maintain a “safe space” and a consistent culture. One of the simplest ways to assert your standing within the community is to call out possible outsiders in order to reaffirm collective values. I think this is a factor in why outrage comes so naturally online. In my case, almost all social media interaction happens via Instagram. My experience has been that the comments section is most often used for sharing the content with another user or attempting to further contextualize the post. This is where users identify themselves to the public through “value signaling”. An example might be calling someone out for using an outdated or insensitive term. One I see online quite often is the use of the word “retard”, quite an ugly word in my opinion. I’ve rarely seen an individual step back and say, “Yes, I was wrong, I apologize”. This entire confrontation, meanwhile, takes place in the comments section of a single post, a place that’s highly ephemeral and hidden beneath layers and layers of content. It’s no place to change anyone’s mind, but it’s where many of us do a significant amount of our social interaction.
On top of that you have a system of likes for popular reactions or rebuttals. I see this as a sort of gamified wokeness. The format of the medium turns real concerns and ideas into a points system. It means that certain types of responses are ineffective, complexity is a weakness, and reactionary statements read as truth. The binary system of “like or not” presents serious issues when dealing with complex and intersectional problems like gender expression, race politics, and wealth inequity, to name a few. This is the perfect environment for machines to exploit our deepest issues for profit.
Wokeness describes something. It’s not an action, like, for instance, “organizing”. To me it means that you are seeing clearly and others are not. You are an insider and others are outsiders. Ironically, this idea of otherness is exactly what the social platform wants to encourage, because it means you can be categorized and persuaded in quantifiable ways. The goal is to analyze, optimize, convert, repeat.
So how do we change this, or at least rise above it? Jaron Lanier, an American computer scientist and writer, gave a fantastic TED Talk in April on this subject. He stated that the business model of behavior modification was a mistake and needs to change. As an alternative he cites the example of Netflix, a successful subscription service that changed the way companies thought about “free” online platforms. It was a gamble to bet that consumers were willing to pay a monthly fee, but ultimately it proved successful. He stated that “sometimes when you pay for something it gets better”. I mostly agree with his statement, but I don’t feel that Netflix is really the gold standard we should be holding ourselves to.
I was hoping he would mention how decentralized platforms could play a role in solving this problem. In theory, a decentralized social platform would have no CEO asserting their subtle will or experimenting with your attention. It would not be tasked with deciding whether it’s OK to show a “female nipple” to the general public. It wouldn’t encourage fear of non-normative bodies. It’s not a panacea, however, as there are always tradeoffs.
There are some self-driven communities that are toxic and bad for the world, such as those built around hate speech, fascism, and sexual exploitation. I think we need a better understanding of how these communities think to truly mitigate their effects. The act of silencing them doesn’t allow us to properly understand how one might be tempted to join in. It’s possible that the world is better off censoring them and isolating people who think that way. I have a hunch, however, that this drives the group to be more radical and xenophobic. It quickly becomes us vs. them. We are obsessed with this type of binary thinking. Good versus evil is one of the oldest forms of divisive otherness. The truth is the world is much more complicated than that.
It seems to be in our nature as humans to seek togetherness. In my own experience, to deny someone that basic instinct can make them lose perspective. One important point to consider is that social media hasn’t been around long enough to gather definitive evidence about its long-term effects. I believe it’s better to have a free and open internet. Despite the potential for harmful content and the horrid things humans are capable of, in my opinion it’s better than a capitalist mind experiment, one designed to confuse our ability to make informed decisions and undermine the beautiful complexity of our identities.
🌱 Alice
Written for Real Review #7