Take out your phone, open the social media app or apps you use most, and do a quick scroll through your feeds. What are the first fifteen to twenty posts or videos that pop up? Is there a particular type of content or account that appears the most? What themes or trends do you observe? Do you notice a pattern in the demographics of the voices that keep showing up?
Chances are the content you consume on social media reflects your current interests and belief systems as well as your pre-existing circle, which most likely consists of people who look like you, think like you, and live like you. Your algorithm is showing you content that it thinks you will enjoy and consume.
“If you’ve ever participated in unconscious bias training… you might have been asked about your closest contacts,” says DEI manager Siobhán Kangataran, “and then asked to consider how diverse they are in comparison to your race, ethnicity, age, ability, background, gender, sexual orientation, and so on. Often these exercises are surprisingly unsurprising—we surround ourselves with people who look, sound, and live like us. Familiarity equals comfort, safety, and security.”1
This is our echo chamber—a metaphorical term that describes an environment where beliefs are reinforced by constant communication and repetition—and it’s no surprise that closing ourselves within it can feel very comfortable. For example, my own echo chamber consists of graduates from Georgetown University or people who have a similar university education level; women entrepreneurs; people who are passionate about social justice, racial justice, and diversity, equity, and inclusion; millennials; disability advocates; and those living in large metropolitan cities like New York, San Francisco, and Los Angeles. In other words, people who are very much like me.
Social media companies reinforce the comfort of our echo chamber by designing algorithms that determine what content shows up in our feeds, all to compel us to keep scrolling. These companies make more money the longer we stay on their apps. It’s a competition for our eyeballs and attention—the attention economy.
Remaining stuck in our social media echo chambers means we see only what we are predisposed to see. This is confirmation bias at work, reinforcing old beliefs, narratives, and ideologies. Combined with the “mere exposure effect”—“a psychological phenomenon in which people prefer things that they are familiar with”2—this bias creates a vicious cycle that becomes increasingly hard to break free from. These echo chambers then contribute to increasing social and political polarization and extremism in a society that is already governed by partisanship.3
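If you want to see this vicious cycle in the abstract, here is a minimal sketch in Python, with made-up topics and numbers purely for illustration, of how an engagement-ranked feed narrows over time: the feed surfaces what already feels familiar, exposure makes it feel even more familiar, and the unfamiliar never gets a chance.

```python
# Toy model of an engagement-ranked feed. Topics and scores are invented;
# "familiarity" stands in for the algorithm's predicted engagement.
topics = {
    "my_alma_mater": 0.90,
    "my_industry": 0.80,
    "disability_justice": 0.10,
    "unfamiliar_voices": 0.05,
}

FEED_SIZE = 2  # how many topics the feed surfaces each round

for round_num in range(1, 6):
    # The feed shows whatever it predicts you will engage with most.
    feed = sorted(topics, key=topics.get, reverse=True)[:FEED_SIZE]
    # Mere exposure effect: what you see becomes more familiar,
    # which raises its predicted engagement for the next round.
    for topic in feed:
        topics[topic] = min(1.0, topics[topic] + 0.05)
    print(f"Round {round_num}: {feed}")

# Every round prints the same two familiar topics. The unfamiliar ones
# are never shown, so their scores never grow: the echo chamber.
```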
Our social media feeds are showing us a version of reality that is different from everyone else’s. This means there’s a whole world of other truths, opinions, and realities out there that we will likely never be exposed to unless we seek them out. By remaining within our echo chambers, we lose the opportunity to encounter diverse perspectives and expand our worldviews. This prevents us from thinking critically, making our own decisions based on facts, and considering the viewpoints of others.
Dr. Jess Rauchberg, an expert on digital media cultures, adds that social media algorithms are not neutral technologies: “If disability is represented as something negative offline, such beliefs, representations, and tropes will influence the development and design of digital platforms.”4 For example, TikTok’s algorithm continues to promote content and hashtags that further ableism, such as the #NewTeacher challenge, where users showed their children photos of people with facial differences or other apparent disabilities while recording the children’s frightened or disgusted expressions.
Rauchberg also argues that social media platforms take things a step further by actively erasing certain types of content, including disability content. One way that TikTok enforces what Rauchberg calls “digital eugenics” is through the practice of shadowbanning, a strategy that social media platforms use to suppress the visibility of a user’s content without formal notice, resulting in lower overall engagement even if the user has a large follower count. “Shadowbans are not random, isolated, or coincidental; rather, they are intentional and deliberate… [and] are intended to surveil and control marginalized communities. They are reflections of our own offline cultural beliefs about who ‘naturally’ belongs, and who does not.” Rauchberg cites the example of TikTok’s “Auto R” moderation guidelines, which “mark certain vulnerable or minoritized user populations into ‘risk groups’ as a means to prevent cyberbullying.” Disability content creators, for instance, are placed in the “Risk 4” category, which restricts the visibility of any videos they post to accounts based in the creator’s own country. The result of these “risk protections,” of course, is lower engagement—because these accounts are hidden, censored, and eliminated from the app’s #ForYou home page.5
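TikTok has never published its moderation code, so the following is purely a hypothetical sketch, reconstructed from Rauchberg’s description of the reported “Auto R” guidelines, of how a risk-tier rule could quietly suppress a creator’s reach without any notice to them. The tier numbers and field names here are my own invention.

```python
from dataclasses import dataclass

@dataclass
class Video:
    creator_risk_tier: int  # hypothetical tier assigned by moderation rules
    creator_country: str
    viewer_country: str

def visible_on_for_you(video: Video) -> bool:
    """Hypothetical check modeled on the reported "Risk 4" rule: videos
    from creators in tier 4 are shown only to viewers in the creator's
    own country and never surface more broadly. Nothing notifies the
    creator that this filtering is happening (the shadowban)."""
    if video.creator_risk_tier >= 4:
        return video.viewer_country == video.creator_country
    return True

# A creator placed in "Risk 4" posts from the US:
print(visible_on_for_you(Video(4, "US", "US")))  # True: same-country viewers only
print(visible_on_for_you(Video(4, "US", "JP")))  # False: silently hidden elsewhere
```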
For all of these reasons, I encourage people to interrupt their social media algorithms. It may be hard to expand your circle beyond your comfort zone from the get-go, especially in terms of your daily interactions and the spaces you navigate. However, a good first step is simply going into your social media apps and diversifying your feed. You might have noticed the popularity of #DiversifyYourFeed in 2020, encouraging people to follow Black creators. I’m adapting it here, as the guidance also works for disabled content creators.
Essentially, teach your algorithm new ways of showing you information outside your bubble. You can do this by choosing to follow people who do not live like you, people who challenge your preconceived beliefs, and even people who make you feel uncomfortable at first. Note that this is different from accounts that intentionally cause harm and spread misinformation—please don’t follow those accounts. It is okay to not absorb content that is harmful to you, your identities, and your community. Remember that there is a difference between feeling uncomfortable (this is where growth happens) and experiencing harm.
In the context of becoming anti-ableist, follow a choir of disabled voices, not just one, to hear what we collectively have to say as well as what we agree or disagree on. (Remember, we are not a monolith!) Then, reflect on the root of what might be making you uncomfortable, and with time (and the frequency of seeing this content on your feed), you may notice that what once felt uncomfortable will start to feel familiar. This is the exposure effect at play again, but this time in a helpful way.
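Returning to the toy feed model from earlier, here is a sketch, again with invented numbers, of what interrupting the algorithm looks like: deliberately following unfamiliar voices forces them into the feed, and repeated exposure raises their familiarity until they would surface on their own.

```python
# Same toy feed model as before (all values invented).
topics = {
    "my_alma_mater": 0.90,
    "my_industry": 0.80,
    "disability_justice": 0.10,
    "unfamiliar_voices": 0.05,
}

# Interrupting the algorithm: deliberately followed topics get a
# guaranteed slot in the feed, regardless of their current score.
pinned = ["disability_justice", "unfamiliar_voices"]

for _ in range(20):
    ranked = sorted(topics, key=topics.get, reverse=True)
    feed = pinned + [t for t in ranked if t not in pinned][:1]
    for topic in feed:
        # The exposure effect now works in our favor:
        # the unfamiliar gradually becomes familiar.
        topics[topic] = min(1.0, topics[topic] + 0.05)

# After twenty rounds, the once-unfamiliar topics score as high as the
# old favorites and would surface in the feed without being pinned.
print(sorted(topics.items(), key=lambda kv: kv[1], reverse=True))
```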
Increasing exposure and awareness is one of the early steps we can take to learn more about diverse disability experiences. For example, when chronic illness advocate Lauren Freedman launched her podcast Uninvisible Pod in 2019, she made the conscious choice to follow disabled creators whose experiences differed from her own, which has since helped her find new guests.6
There is no denying the power of social media. And while social media algorithms have harmed, and continue to harm, disabled people, social media has also given us a platform to showcase our own narratives and make our disability experiences more accessible to the masses, even when traditional outlets haven’t deemed our stories worthy of coverage. It has made the world a smaller place, allowing us to transcend physical boundaries and access communities and viewpoints we might never have been able to see in real life. By using social media intentionally and consciously, we can transform it into a tool to expand our perspectives and gain a better understanding of anti-ableism.