14
You Want Some Privacy? You Better Work
ALICE FOX
When someone asks you to identify yourself, your explanation may very well change from context to context: At the DMV, you provide your height, weight, hair color, and sex. To an employer, you say that you are industrious, efficient, a team player. To your parents you say you are fine, have a full pantry, and are in a relationship. On Instagram you are living life to its fullest, single AF, constantly on fleek, and have friends that are (almost) as fierce as you are.
While it may be difficult to say which one of these explanations is most “authentically” you, all of them, to some extent or another, are parts of your identity. What can be said for certain, in the words of Ru herself, is that, “What it says on your driver’s license isn’t really who you are—you are something much greater than that.” Meaning, any human being is far more complex than a list of biometric data or ‘biological attributes.’ Further, it does not seem too far a stretch to assume humans are also more complex than the data one can gain from an employer, parent, or Instagram. If I were reduced only to my Instagram data, people would assume, based on my photos, that I was either a dog or a cappuccino.
The same phenomenon can be seen across RuPaul’s Drag Race. Contestants show many different faces (or ‘personas’) between challenges, heart-to-heart talks with Ru, the judging, RuPaul’s Drag Race Untucked, social media, and real life (IRL). Humans subconsciously synthesize multiple personas into an individual’s ‘identity.’ An individual cannot be reduced to only their social media presence, IRL persona, or public persona; an individual is both all and none of these to various degrees. Now, to an outside, public observer, it might be exceedingly difficult to pinpoint which is more authentic, but to an individual’s inner circle, it is likely quite apparent.
RuPaul’s purest identity may be an enigma to all of us outsiders, but to his husband it is likely clear enough. If you feel compelled to agree with what I’ve said so far, you likely also accept that, on a more general scale, identity is self-generated (I decide my identity, not anyone else), dynamic (my identity changes over time), and self-disclosing (I decide who or what to tell about myself). For Ru, this would mean that it is up to him how to identify on and off stage, how to change how he performs for the world stage, and whom to tell if he is a drag performer or not.
Most of the time, what external observers claim to understand or know of us is of little consequence. If someone on the bus hates our fabulous leopard yoga pants, it does not mean we get fired from work for dress-code violations. Everyone has some level of division to their life between work, play, and home. And it is within these divides that we find privacy.
Taking measures to ensure certain or all aspects of your life don’t influence or affect the others is taking measures to keep your private life private. After all, who wants to constantly act like they’re ‘on the job’ even while at home? Not only is it impractical, it’s also exhausting, isolating, and, depending on the job, boring. Privacy ensures that our jobs are not at risk for our online or at-home interactions, our online presence does not have to be promoted to or accepted by our IRL peers or workplace, and our IRL life is not constantly impacted by social media likability or work performance. Be it good or ill.
In the words of Adore Delano, “When you start living your life for you, then you stop giving a fuck about what other people think about you.” When there is a healthy degree of privacy, we can ‘live our lives’ without constantly looking over our shoulder for our bosses, family, or followers. It’s much easier not to give a fuck about what others think when their judgment has no impact on your life.
When we have degrees of privacy between public, private, work, and digital identities, we know the impacts of hate can be limited in those spheres. Further, between these different areas of persona are what Giorgio Agamben calls “ethical spaces” in his 2011 book Nudities. These spaces are where we have a chance to reflect upon the persona we are presenting before we are expected to perform it. In RuPaul’s Drag Race, this would be the moments when the drag queens are in the Werk Room, before they’ve decided upon or completed their final look. They have a moment to breathe and reflect on what persona they will take and how they will perform it before they’re expected to. In a more widely relatable scenario, it would be the moments in the car between leaving home and arriving at work. The spaces in which we change wigs.
Sometimes this is intentional and deliberate, but, mostly, it’s a subconscious transition you may not realize until someone asks you about it. In recent times, a more distinct loss of ethical spaces can be noted as the boundaries between the digital and IRL are blurred. Ever more frequently, individuals are being held accountable for their digital personas and content posted online. This runs parallel to platforms’ own pushes to establish user authenticity and IRL comparability: verifying users by checking driver’s licenses or passports and removing accounts that users refuse to connect to their IRL identities. These attempts to ‘get to know’ an individual’s IRL identity through collecting, analyzing, and comparing the data they generate online are what I call ‘digital surveillance.’
Digital surveillance techniques include facial recognition, gait recognition, emotion assessment, chat monitoring, tracking cookies, social connections, location tracking, and many more. Analytics companies utilize these techniques, often in some combination, to siphon data from users and feed it into algorithms that generate unique conclusions about a user or user group. Analytics firms then sell these unique conclusions to advertisers, corporations, political machines, or the government as ‘insights.’ After receiving insights, these companies, campaign managers, marketers, or website designers can use them to influence, monitor, and gain more data on individuals through ads, content layouts, or features.
Hence, platforms like Facebook, Snapchat, Twitter, and Instagram keep their services free and employ casino techniques like intermittent reward systems: they keep users engaged and generating data that these companies can sell to analytics companies or use for themselves. It’s free because you are the commodity. Add to that the removal of ethical space and social media’s obsession with finding out ‘the real you’ to maximize its influencing power, and the data you generate, along with the data generated about you by others and by algorithms, begins to speak for you more than you speak for yourself. Meaning, the person the algorithms assess you to be is of more importance and more authentic value than how you identify yourself.
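To make the casino comparison concrete, here is a toy sketch, in Python with invented numbers, of a variable-ratio reward schedule: the same reinforcement pattern slot machines use, where the player never knows which pull pays out. Nothing here is any real platform’s code; it only illustrates the mechanism.

```python
import random

def variable_ratio_feed(pull_count: int, hit_rate: float = 0.25) -> list[bool]:
    """Simulate a feed where each refresh ('pull') unpredictably pays out
    a reward (new likes, a juicy post). Variable-ratio schedules are the
    reinforcement pattern found to produce the most persistent behavior:
    you keep refreshing because the NEXT pull might be the one."""
    return [random.random() < hit_rate for _ in range(pull_count)]

# Toy demo: twenty refreshes, rewards landing on no fixed schedule.
for i, rewarded in enumerate(variable_ratio_feed(20), start=1):
    print(f"refresh {i:2d}: {'reward!' if rewarded else 'nothing...'}")
```

Run it a few times and notice that you cannot predict the next payout; neither can the user, and that uncertainty, not the reward itself, is what keeps the thumb swiping.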
I Tend to Think that Privacy Is for Criminals
You might have encountered the following argument against the relevance of privacy: Why should you, an average law-abiding citizen, care if the NSA thinks you like Violet Chachki more than Kennedy Davenport?
First, you know it’s not true, so the marketing won’t be effective on you: you’re a “one-of-a-kind collectable.”
Second, it’s not as if you do anything online you wouldn’t mind being shared; the tea’s already been spilled on one platform or another. So, why care about your privacy if it both doesn’t reveal much and is already quite open for everybody to see?
While I understand this sentiment, remember why algorithm-generated data is called ‘insights.’ The algorithms are not just collecting and reorganizing the data you provide but instead draw new conclusions about you and others from this data. While your contribution to the overall data pool might be small, it helps to create typecast and personalized profiles of users, which I will call ‘blackbox personas.’ Blackbox personas are created by using our given digital data, like photos, locations, likes, and conversations, in conjunction with the ever-growing and improving algorithmic insights. And that’s problematic.
Firstly, there is a distinct lack of transparency: It’s unclear precisely what data is being mined from users and what insights can be generated from it. Many individuals think (incorrectly) that corporations, governments, or marketers only have access to information they have given in public or consented to the collection of on various applications. This information may be about your body, like facial, gait, and gesture recognition; your habits, like Google Maps data, Snapchat filters, and store rewards cards; or your thoughts and emotions, like Facebook reacts, Instagram DMs, and Twitter RTs. In other words, people—mistakenly—think that the information accessible to surveillers is the same type of information a human observer would generate if they could see our digital content.
This is not the case. Individual humans are not the ones collecting, reading, and interpreting human-generated data—machines and algorithms are. And when machines and algorithms are given access to all of your data, from Amazon to Facebook to Instagram to general browser history, and can compare it to millions of other people, they can generate some pretty wild conclusions. How wild exactly? Algorithms can determine your sexual orientation from looking at your profile photos. They can also determine relational compatibility based on likes and posts. They can tell who on Facebook is related to you and how, and even suggest friends based on shared location data and activity on other platforms, which is why that hottie from Grindr now keeps popping up in your recommended friends on Instagram.
Secondly, while facial recognition alone may not be an indisputable gaydar, when it is used in conjunction with your ‘reacts’ to certain pages, like those of RuPaul’s Drag Race contestants, it can likely become scarily accurate. These insights are based on data layers, so the more layers advertisers have access to, the more complex and ‘accurate’ your blackbox persona becomes, and the easier you are to influence, predict, and control. Meaning: how you identify IRL and online takes a backseat to the blackbox personas, as they reveal more than you would willingly choose to. Since this information is not publicly or entirely available, it’s difficult to tell for certain.
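To see how ‘layers’ compound, here is a minimal sketch with entirely invented signals and numbers: treat each weak cue as a piece of evidence and stack them naive-Bayes-style. No single layer is a gaydar, but a handful of mediocre ones together make the machine feel very sure of itself.

```python
# Naive-Bayes-style stacking of weak signals into one "blackbox" score.
# Every signal name and likelihood ratio below is invented for illustration.

def update(prob: float, likelihood_ratio: float) -> float:
    """Fold one evidence layer into the running probability.
    The likelihood ratio says how much more likely the signal is
    under the hypothesis than under its negation."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

prob = 0.05  # baseline guess before any data layer
layers = {
    "profile photo model": 1.8,   # weak on its own
    "Drag Race page reacts": 2.5,
    "followed accounts": 2.0,
    "Amazon poster orders": 1.5,
}

for name, lr in layers.items():
    prob = update(prob, lr)
    print(f"after {name:22s}: {prob:.0%} confident")
```

Four mediocre signals take a 5-percent baseline to over 40 percent; pile on browser history and location data and the confidence keeps climbing, all of it invisible to the person being scored.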
Which leads to the following problem: If we’re ignorant not only of what conclusions algorithms draw about us but also of how or where they source these conclusions, then it’s incredibly difficult, if not impossible, to contest, change, or remove aspects of blackbox personas we disagree with. For instance, imagine trying to show coworkers Aquaria’s gag-worthy Melania Trump impression, but in the middle an ad pops up for “porn you may like” based on your purchase of bananas and condoms in the same Amazon order. Talk about highly uncomfortable, no matter who you are and what you’re into.
We cannot dispute what we’re not aware of. We could attempt to erase all digital presence in an act of defiance towards the digital powers-that-be, but there’s little evidence to support that this would indeed remove the blackbox persona entirely. Seemingly, the best we can do is to curse and beseech YouTube for showing us the fifth JJ Malibu advertisement after double-tapping one too many April Carrión pics (whoops). These problems are then magnified to a terrifying level of concern once we consider that algorithms are notorious for stereotyping race and gender attributes. So, the blackbox personas created for us, used to advertise to us, influence and appraise us, may not only be inaccurate and uncontestable but also racist and sexist. Party.
Lastly: the inability to set boundaries. Even Mark Zuckerberg tapes his laptop’s webcam to avoid exploitation from opaque data-gathering techniques he might have “accidentally” agreed to. Considering that even he seems unable to limit the data-gathering on himself, we must assume we basics are at the mercy of the machine.
So, what do we know? Well, individuals who mention these types of surveillance practices in social settings usually get asked if they left their tinfoil hats at home, leading to ostracization and ridicule. This perception of surveillance causes casual discussions on privacy to be met with eye-rolls and labels of conspiracy, relegating any “serious” dialogue to academia and tech experts. At the more extreme end of things, trying to dip out of too many socially approved surveilling angles can get you flagged for criminal activity or grab the attention of national security agents. Attempting to hide something from the digital eyes in the sky, even something as personal as a pregnancy or a gender transition, can be enough to raise those flags.
Go Back to Conspiracy City Where You Belong!
If these reasons are not enough to convince you that some shady shit is going on, and I’m not just trying to recruit you to my sweet tinfoil squad, consider this outline: Let’s say you agree with the opening remarks that identity is self-generated, self-disclosing, and dynamic. As explored above, analytics companies are assuming that identity is, at least in part, static. Meaning, there are some aspects of identity that are unchanging and almost always ‘true.’ So, if the analytics companies can discover what these attributes are, then they can draw conclusions about our identity that we do not self-disclose. Further, if these analytics companies believe that the blackbox personas generated by algorithms are of higher value, due to increased influencing power, then our self-generated identity takes a back seat. The problem occurs when blackbox personas are believed over real persons, when companies, governments, and advertisers think algorithms know people better than people know themselves.
Let’s examine the following scenario: Boris self-identifies as gay. Boris chooses not to tell individuals in his family or community, as he lives in a conservative community and does not want to face discrimination or harm for his sexuality. Boris loves watching RuPaul’s Drag Race. He actively contributes to the show’s wiki page, has ordered a poster for every season through Amazon for his college dorm, and has seen Hurricane Bianca on Netflix at least fifty times. After receiving this data from Boris’s online activity in conjunction with his display pictures and social media activity, the algorithms have assigned a “gay” identifier to his blackbox persona. Boris has been incredibly careful not to share any information publicly about his sexuality, both IRL and online.
Boris begins to see ads for Grindr and LGBTQ+ meetups on YouTube, Facebook, Instagram, and Twitter. He begins to receive mail at his home on gay retreats and counseling. If Boris’s family were to stumble upon his mail or see the advertisements on a computer he was using, it may potentially “out” Boris against his expressed desires or cause distress or anxiety about using technology around other people, for fear of what may pop up in the adverts. This is an example of one incredibly terrible situation that Boris cannot control, prevent, or change. Which is a true tragedy, because Boris should have a splendid coming out, as all good gay boys do these days: by his mom finding a topless Shawn Mendes stuffed under his mattress.
See, Boris tried to do his due diligence of not sharing things on social media he wouldn’t want everyone to know, but now he is put at risk because of the large amount of data these algorithms can access. Boris knew he consented to Facebook accessing his photos for his albums, but he didn’t know that also gave permission to XXXMarketing to buy his data and sell it to SweetDildos.net. Not to mention, Boris would never buy from a company with boxes clearly stamped SWEET DILDOS for all to see. No one’s neighbors need to be in on that business. Further, Boris didn’t realize algorithms could also determine his sexuality from his profile pictures. By sharing his pictures, he did not also consent to sharing his sexuality for targeted advertising. And if he knew this little factoid, he would have let the world continue thinking he was a fat-free, grande mochaccino.
Many online users think sharing a profile picture will allow Facebook or Instagram to see what we look like in the same way we passively allow all onlookers to do every single time we leave our homes. Perhaps Facebook can detect our eye color, our hair color, our attractiveness (based on users checking us out), maybe our height and build, but that’s about it. Because that’s about all the information we can gain from each other in public. That is the critical mistake we make when we share things online: we think that algorithms are limited to the information humans gain from one another through encounters. But when the mechanized eyes in the sky open the library, they’re not only reading our bodies; they’re also reading our moods, body language, IRL personas, and digital personas, perhaps more. By revealing our bodies and minds to the mechanized and digitalized eyes of the technological world, we reveal far more than our bodily features alone. And when we interact or interface with these technologies, we reveal far more than the quantitative data of likes, shares, or words: we’re revealing our thoughts, attitudes, preferences, emotions, habits, and behaviors in ways of which we are unaware.
This is particularly distressing when IRL consequences are leveled against us for blackbox personas we had no knowledge of, no consent to, and no control over. For example, arrest rates go up for non-white individuals because algorithms read their bodies and faces as more aggressive and angrier, and accordingly identify them more frequently as a “threat.” Or think about public shaming, transportation bans, or higher taxes for on- and offline behavior (hi, China). Or a crusade against LGBTQ+ individuals using their blackbox personas and digital data, rounding up everyone the analysts and algorithms label “non-straight” and putting them in internment camps (hi, Russia). Or, closer to home for the US, influencing elections and promoting disinformation through targeted advertising and emotion mining (hi, Russia). These scenarios are real. They’re happening. And they mean we all should be sweating like hookers in church right now.
We the People Are in Trouble
Because these insights are gleaned from all of us human beings, regardless of social media usage, I argue that it is the moral responsibility of humans to protect each other. Every human being shares the same plight: We are all stuck in meat suits we didn’t get to choose but have minds that are ever-changing. We’re all born naked; the rest is drag. Because every persona we slip into, every role we take, every sidewalk we stand on, is a performance.
And when the algorithmic machine tries to reduce us all to one truth, strip us all naked, yes, it may find that we are all indeed similar in our nudity. But tearing away the masks loses the human experience—loses the intrigue and the wildness of being human. Without performance, without faces, we are all just a bunch of meat popsicles clunking into each other trying to figure out what the hell we’re doing here. Given that, we, as people, united in this common curse, are compelled to defend our personas, our masks, and our privacy, because we are more than our bodies and biological features.
These are not new or novel problems. Even in the seventeenth century, Samuel von Pufendorf (what did you just call me?) was examining the tendencies of governments to reduce or ignore the human experience entirely in the pursuit of profit, efficiency, or expansion. Political, entrepreneurial, or social institutions are inconvenienced by the ambiguity, diversity, and complexity of the humans they serve. It hinders their goal of maximizing their interests while minimizing their effort. Throughout the centuries, we have been shown repeatedly that these institutions do not often value or prioritize what’s best for humanity’s richness or survival, despite their philosophical conception demanding just that. Look at the environment, welfare, LGBTQ+ rights, and racial consideration to see how the implementation of philosophical goodwill has often failed politically. So, our choices are to trust these institutions to do what’s best for all of us or else take matters into our own hands. Since the former has proven to yield only increasingly unauthorized and ubiquitous surveillance methods, I recommend turning to the latter.
Von Pufendorf in 1672 termed such duties ‘benefits par excellence’: we have a duty to humanity to protect the freedoms, equality, and values of our fellow humans, given our shared human experience, even if that takes more conscious and significant effort than what we would put forward if we were considering only ourselves. If we do not look after each other in this way, we can be well assured that corporations and many government institutions will not; despite being aware of algorithmic biases and privacy invasions, they continue to use and abuse the technology at the expense of the common person. It’s important to design better institutions, but till we’re there, we must demand better for and from ourselves, so that our fellow person can be whoever they so choose, without technology backstabbing us by dictating who we are to corporations and governments.
Not Today, NSA
How do we work on fulfilling this duty and protecting our fellow humans? One solution suggested in Michael Nagenborg’s 2017 “Hidden in Plain Sight” is to do just that: hide in plain sight. We can change our public appearance drastically using reflective facial stickers and hair extensions to mask our expressions and the trackable aspects of our faces when in public, a technique called “CV Dazzle.” This helps to provide us privacy from technological eyes but makes us stick out like a sore thumb to our fellow humans. Also, while this approach is a good start to thwarting surveillance practices through personal expression, it does not go far enough, as surveillers can still get a read on one’s eyes, gender, skin color, gait, clothing, and expressions through these types of masks.
And so, we need to go further to more actively protect at-risk individuals who are targeted based on these traits. And what better public educators than drag queens? Drag queens have a long and rich history of persona creation and convincing performances, both for their personal safety and for the sake of a good show, so they are a formidable choice of teachers on this topic. They can hide in plain sight so well that, by being incredibly extra in their drag performance, strangers are unable to identify a queen’s IRL persona at all. If that’s carried over to their online activities, their blackbox persona ends up so fucked by all the clown realness that it becomes beyond difficult to tell what is authentic and what is just part of the gag. Further, given that RuPaul’s Drag Race is the most accessible look into drag performance and technique, it’s the best case study for the skills everyone ought to acquire to have an impact.
Consider this: When we reduce the number and type of personas we create, we become an easier target for surveillance to read. The more consistent, cohesive, and habitual our personas are, the more useful our data is to corporate, government, and marketing institutions. We must sacrifice some convenience and become a little less lazy to make it more difficult for analysts and algorithms to tell real from not real. Because convenience is how companies keep us hooked on their services. Facebook nudges you to update your information; Instagram rewards you with little hearts and followers for posting new content. Then Facebook, Twitter, and Instagram reserve the right to suspend or terminate your account if they don’t think you’re using your actual identity. They actively encourage other users to report accounts they don’t think use authentic information. And they require driver’s licenses, passports, or other state ID to revalidate your account and give you access to your data again. It’s not easy or convenient to revolt. But that’s why they’re called benefits par excellence.
We also know it’s not easy to generate personas or become incredibly adept at rotating between them. It should be safe to assume that everyone reading this has watched at least one episode of RuPaul’s Drag Race. And the difference in dispositions and appearances of the queens in and out of drag is incredibly apparent. There’s no secret about how much work, talent, and skill go into creating personas. With RuPaul’s Drag Race, we’re in the unique position of having an inside look into the performance art of drag, readily accessible to consume online or on our TVs. The goal is to cultivate a starker difference between our own personas, just as the queens do between the Werk Room, the runway, and the competitions. We can see some examples of this in individuals who utilize ‘Finsta’ accounts—both their ‘public’ Instagram and their ‘private’ Instagram are them, but they perform different personas for different audiences on each one. What we’re trying to achieve here is to make it more difficult for algorithms to discern what data is more ‘authentic,’ and thereby more marketable, for that individual. Maybe data on one account is ‘less authentic,’ but at the end of the day, who cares? You can’t believe everything you read on the Internet anyway, and we all can make our own judgment calls about others’ identities. It means nothing to us if it thwarts corporate or government identity campaigns.
RuPaul’s Drag Race contestants offer the embodied form of this example through their craft. An individual’s drag persona is not necessarily the individual’s only or most authentic identity: Kim Chi is also Sang-Young Shin, RuPaul is also RuPaul Andre Charles, Bianca Del Rio is also Roy Haylock. They are all both and neither. The queens create their drag personas; the personas are performed with their own bodies and through their own minds. But it is a performance. And somewhere, out in the world, they have an identity that may or may not have any traces in their drag persona. Which is more ‘authentic’ or more ‘them’? No one knows, and really, no one needs to know. Like the queens, all of us have the active choice of how and who to present ourselves as in any given space.
To take an example, let’s look at “Snatch Game,” where the contestants take on the personas of celebrities. Yes, everyone realizes that Tatianna is not Britney Spears (Season Two), Chad Michaels is not Cher (Season Four), and BenDeLaCrème is not Maggie Smith (Season Six), but their performances are convincing enough that if we, as viewers, were not already familiar with who was portraying whom, it would be quite challenging to guess who the drag performers were behind their celebrity personas.
Further, it would be difficult to pair these individuals out of drag with their drag personas on a visual level. While millions of people may know and recognize Alaska (Season Five, All Stars 2), comparatively few people know and could recognize Justin Andrew Honard. While this discontinuity between IRL and drag personas may cause distress in some drag performers, this level of persona creation, expression, and performance is perfect for achieving a degree of anonymity and privacy that works even against body-surveillance devices. Drag performers are unable to utilize Apple’s facial recognition when in drag, and they skirt other surveillance techniques, like gait and expression recognition, given the significant changes made to ‘pass’ in a drag persona.
To summarize, our way to regain control over our identities in the technosphere is by reclaiming the masking practices of our pre-technology past. If we begin to strengthen the divides between public and private, both IRL and online, the grey area of the division itself expands, and so does our ethical space in which to operate—a space to “think before we act.” To make it more concrete, think of this “ethical space” as the time between when the queens in RuPaul’s Drag Race are given a challenge and when they’re expected to deliver: the time in which they establish how not to fuck up. It would be absolute pandemonium if the queens were told to deliver their looks immediately after being issued the challenge. Think of how chaotic the mini challenges are already, and they result in less than gag-worthy outcomes. However, they do showcase the queens’ unusual ability to perform under pressure and without perfection.
In reality, this is what’s expected of all of us. The public sphere demands a performance from all who enter the stage, but we, as public performers, are uncertain what the rules or expectations are. We have a general idea about respectful talking volume, personal space, or phone etiquette, but thanks to technology, it’s uncertain what’s being passively revealed about ourselves as we attempt to ‘pass’ in our public performances. Further, we’re often expected to perform with little to no advance notice of changing situations or time to prepare. Also, real life is unpredictable. It’s basically a never-ending hell of mini challenges that sometimes you get lucky on and other times you’re hot trash with a wig. It happens to everyone. But the consequences would be a lot less damning and the results a lot more successful if we had some time to assess our options before the wiki page is written.
If It Doesn’t Hurt, We’re Not Safe Yet
What, then, does RuPaul’s Drag Race offer to be implemented by all? Firstly, buy-ability: Is the queen convincing in that role? In RuPaul’s Drag Race we see this tested through a variety of appearance- versus performance-based challenges. It’s not enough to simply walk the walk; a contestant must also talk the talk. Surely all of the queens who grace the stage of RuPaul’s Drag Race are talented and skilled, but it takes talent and skill in a multitude of categories to truly be on top. For applications IRL and digitally, this means that it’s not enough to half-ass any attempt to gain anonymity. Unfortunately, throwing some shades on or using an incognito browser does little to nothing to help oneself or fellow humans protect their privacy. You want to be anonymous? You better work! Part of this comes with the painful experience of removing data from social media. The other part comes from obfuscation, or making some of your digital interactions satirical, comedic, or straight-up fictitious.
There’s nothing about your social media that must be authentic, true, or all-inclusive. Social media cannot force anyone to disclose accurate information about themselves. So, to be more proactive about privacy, we could create multiple accounts, with some true information, some halfway information, or false information. Make an Insta, a Finsta, and a Plan B-sta. It doesn’t matter which one is more “you,” or even if you put effort into the spinoffs. Even better, make a Facebook with your closest pals and all of you post on it. Sharing is caring. It’s all about generating more static so that the advertising companies and political machines don’t know how to grab your attention, or what to sell you, or how to influence you. It’s about generating confusion, and thereby, generating mystery. We could use a little bit more of the mystique that we lost with the masquerade (not the Mystique of Season Two).
On top of this, make the data a little more dubious to acquire. By using an adblocker on your browser and a tracking blocker, you’re making a stronger statement that you do not want what these companies are selling. And if the ads never even reach your eyes or ears, whatever big bucks are being spent to buy your attention are being wasted. “Fire up the smoke machines and put on your heels [and have a kiki on the Internet]” because you are now on your way to some much-needed privacy.
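As one more small, concrete example of making your data dubious to acquire, here is a sketch that scrubs common tracking parameters (the utm_* tags and click IDs marketers append to links) before you share a URL. The parameter list is a common convention, not an exhaustive one, and the example URL is made up.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Common marketing/tracking query parameters; extend as needed.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def scrub(url: str) -> str:
    """Return the URL with known tracking parameters removed, so that
    sharing a link doesn't also report who sent it to you and from where."""
    parts = urlsplit(url)
    kept = [(key, value) for key, value in parse_qsl(parts.query)
            if key not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(scrub("https://example.com/wig-sale?color=platinum"
            "&utm_source=facebook&fbclid=abc123"))
# -> https://example.com/wig-sale?color=platinum
```

Every scrubbed link is one less breadcrumb connecting your accounts and contacts to each other.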
IRL, use different gestures in public, toss them up, walk a little weird sometimes, wear different shoes or clothes that change the way you move, change your hair, wear a wig, wear hats, wear sunglasses, wear makeup. Keep as few ties between your performative personas and your personal life as possible. Use the public sphere as a place for experimentation, not for displaying your best and most authentic you. View the public sphere as your backstage or rehearsal, not as the grand finale of your truest self.
People who know you, especially in small towns, may view these types of behaviors as ultra-outlandish and shady AF. And, to be fair, there may not be as many problems with surveillance in places that barely have, or don’t have, Internet infrastructure. There are a lot more rules to live by for some people than can be squeezed into this chapter. Think of these as bare minimums for getting started on loving your neighbor and helping yourself on matters of privacy and surveillance.
Secondly, we can also learn much about consistency. The winners and runners-up of Drag Race are not the individuals who always have the best looks or performances. They are the queens who can deliver buyable, enjoyable performances, attitudes, and visuals week after week. Statistically, even the winners of RuPaul’s Drag Race did not win most challenges: the average number of wins is only three (including All Stars) for RuPaul’s Drag Race winners and two for runners-up. And everyone who has never placed in the bottom two in a challenge has ended RuPaul’s Drag Race in the Top Three. What’s the lesson here? You don’t have to be the best at everything; you simply have to not be the worst in anything, which is a much lower bar. IRL, this translates to the fact that we don’t all need to be in full drag or full performance mode in order to impact surveillance practices. We just need to do enough to shake the tree, making the data gathered more controversial, irrelevant, or incredible. This can mean taking steps to avoid platforms that abuse the data they gather and voting with our wallets for substantial change, like crowdfunding more anonymous social media options.
Thirdly, fluidity. We can extend our personas quite far, and to a highly believable level, without losing ourselves. We can be an embodiment of personas and still have a keen sense of personal identity. The drag queens in RuPaul’s Drag Race not only use their drag personas but also perform the personas of others convincingly. They can slip in and out of their skins. Each episode offers a new “Category Is” moment. Not everything needs to be homogeneous to be genuine or to count as an identity. It’s not important for the humans you interact with in public or online to know if the persona you’re displaying is the most “genuine and authentic” version of yourself, or merely your public persona. This doesn’t mean you can be an asshole for the sake of being an asshole because it’s online and thus doesn’t matter. Online, it actually matters quite a bit, because who you choose to be when the performance expectations of civilized society are stripped away may reveal more about you than you think.
So, to take these lessons from the TV and act: Like pages you don’t really ‘like’ that much, double-tap posts that you don’t really love, follow pages you don’t care about, flag ads as irrelevant even if you’re buying. Generate mystery for the machine. In real life, take different routes to work sometimes, go different places for lunch, wear hats some days—anything you can possibly do to add variance to your life and remove routines, do it. If there’s part of your nature you can’t or don’t want to change, tuck it away and whip it out later for an audience of your choosing. Without habit and predictability, it becomes much harder to make assumptions and generate blackbox identities to any degree of usability. And if our digital and public personas can’t be taken seriously, then it becomes much harder to make the case that our uniqueness is suspect. No one should be targeted or at risk for being different or for characteristics that are beyond their control.
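In the spirit of tools like TrackMeNot, which does this for search queries, here is a toy sketch of the idea: mix genuine actions with randomized decoys so that the observed ratio of signal to noise drops. The interest lists and the post_like function are hypothetical stand-ins, not any real platform’s API.

```python
import random

# Hypothetical stand-ins: things you actually like vs. decoy interests.
GENUINE = ["RuPaul's Drag Race", "cappuccino art", "dog videos"]
DECOYS = ["competitive lawnmowing", "stamp collecting", "NASCAR",
          "medieval falconry", "extreme ironing", "ferret racing"]

def post_like(page: str) -> None:
    """Stand-in for whatever 'like' action a platform exposes."""
    print(f"liked: {page}")

def noisy_session(n_actions: int, noise_ratio: float = 0.5) -> None:
    """Emit a session where roughly noise_ratio of all 'likes' are decoys,
    so any profile built from the stream is mostly static."""
    for _ in range(n_actions):
        pool = DECOYS if random.random() < noise_ratio else GENUINE
        post_like(random.choice(pool))

noisy_session(10)
# At a 0.5 noise ratio, half of what an observer collects is junk, and its
# confidence in any single inferred interest falls accordingly.
```

The point is not that anyone should run a bot; it’s that every deliberate, uncharacteristic double-tap does a tiny version of the same work.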
This increase of pointless ads being sent to everyone helps increase the privacy of all. If everyone receives, and everyone knows they receive, ads for LGBTQ+ meetups or XXXSweetDildos.com, it takes away the power of these advertisements to reveal the personal lives of the individuals who do have an interest in them. If everyone knows what it’s like to have that one dildo follow you around, no one is special or held to higher scrutiny. Further, if individuals interested in these topics flag them as irrelevant, or ‘love’ them when they really aren’t interested, then it still generates false data that reduces an algorithm’s ability to make predictions accurately or confidently.
If surveillance doesn’t turn a profit, then it will become a hard sell to corporations and governments paying for continued algorithmic services. Although these practices are far more difficult to implement IRL, any minor change is some change, and it is a worthwhile endeavor if it helps offer our fellow humans protection from unjust surveillance practices. Rock that wear-to-work wig, practice your runway walk in Times Square, and re-evaluate your habits. In a slight modification of Ru’s advice: If algorithms can’t read you, how in the hell will they sell you to anybody else?