Would you be able to trust humanity if the whole world were decentralized?
Social media may have started with the noble idea of connecting people and, in a way, decentralizing media to give people more freedom of speech. It disrupted the industry by severing the hold that traditional media had on advertisers, but social media has since made the world an angrier and more destructive place littered with misinformation.
With censorship that favors ruling military dictatorships, algorithms that amplify negative content and are ineffective at identifying fake news, and an addictiveness that corrodes mental health, social media platforms are a far cry from the utopian world of free speech they promised.
While platforms such as Twitter, Instagram, and TikTok have not been spared criticism – Elon Musk infamously pulled out of buying Twitter because he disputed the platform's reported share of bot accounts (though he followed through with the purchase several months later), Instagram has been linked to teen depression, and TikTok has removed content deemed unfriendly to China – it should be noted that Meta (in particular, Facebook) has racked up the most cases of misconduct. As such, it will be examined at length in this chapter, which deals with the social aspect of online interaction as well as commerce and regulation.
Meta, the largest social media conglomerate in the world with Facebook, Instagram, WhatsApp, and numerous other products and services under its belt, has been the subject of intense debate, with Mark Zuckerberg himself grilled by the US Congress in 2018 over the influence of Russian trolls and hackers on the 2016 US election in which Donald Trump won the presidency.
The scrutiny of Facebook's role as a key media influencer during the 2016 US election concluded with the rare occurrence of both Republican and Democratic members of Congress calling for more regulation and reprimanding Zuckerberg for downplaying what his company knew about the problems it created.
Further revelations by United Nations (UN) human rights investigators in 2018 into the genocide of the Rohingya in Myanmar, along with internal documents leaked by whistleblowers, led to Facebook's admission that it was “too slow to prevent misinformation and hate” in the military-ruled country.
When it comes to the autonomy granted to social media companies in deciding how their algorithms serve content to users, the likes of Meta (Facebook's parent company) put forth several arguments in defense of their policies. Initially, and most prevalently, the argument was freedom of speech: anger is a human emotion, hence it deserves its place in the panel of reactions offered to users, and people should be allowed to express different views.
What would social media be if people weren't allowed to post their true feelings on anything?
To be fair, social media platforms have put Community Standards in place in a bid to police online communities across the world. But social media artificial intelligence (AI) is not yet adept at detecting troubling content and often flags harmless posts by misinterpreting keywords used in messages. Scrolling through online help forums reveals countless examples of users mistakenly banned by Facebook for various reasons, usually followed by advice from other users or Facebook's user support team on how to appeal the ban.
Apart from their AI surveillance, most social media platforms rely on users to report violations, which are then reviewed by an internal integrity team that determines if the content indeed goes against its Community Standards.
Together with Alphabet, Amazon, Apple, and Microsoft, Meta is among the “Big 5” of US information technology companies. The main difference between Meta and the other companies on this list, however, is its business model. Meta's platforms primarily profit from the time its users spend on them, which is sold to advertisers. The more hours a user spends on the platform, the more ads they see. The omnipresence of social media and its unprecedented growth is what moves its stock upward.
In a 2016 survey by Common Sense Media, American parents were found to spend more than nine hours a day with screen media, most of it personal screen media (7 hours and 43 minutes on average), with only a little more than 90 minutes devoted to work-related screen use.
Out of the 1700 parents who participated in the survey, 78% believed they were good role models for their kids when it comes to media usage. Yet 56% worried about their children's social media use and online activities and that they might become addicted to technology, while 34% were concerned that high screen time affects their children's sleep.
Parents use social media and digital entertainment just as much as their kids do, yet they ironically express concern about their kids' use of the same technology while believing themselves to be good role models. But how much more parental guidance do children need online? And will it be effective when social media itself has been accused of accelerating the suicide rate among teens?
If a truly decentralized world were to govern itself, then Community Standards would need to be upheld by decentralized powers, and the power to censor content like propaganda, fake news, and hate speech would need to be exercised without the influence of profit. For social media, or any interaction over the web, we need to move past face value and reinvent the way we recognize reputation, credit, and good behavior, so that these can be held up as examples for the community.
This is not to say we should disallow freedom of speech or expressions of dissatisfaction in general, but public anger or outrage needs to amount to the right course of action: better wages as a result of workers protesting the rising cost of living and inflation, for example, or the accountability of large corporations and powerful individuals faced with accusations of corruption and abuse of power. Negative reactions should culminate in positive change by pressuring powerful entities.
Above all, with Facebook admitting the existence of so-called troll farms – large numbers of fake users, often funded by ruling parties to spread propaganda – across the world, the veil of anonymity that the metaverse provides must be addressed.
You might suggest, as a solution, that the world doesn't need social media. But even if you are a boomer who prefers real human interaction, opting out would not exclude you from how the world trades today.
The COVID-19 pandemic has not only brought exponential growth in, and reliance on, online commerce, but has also given scammers access to a market never seen before. In an article published in 2022, titled “Scammers are winning: EUR41.3 (USD47.8) billion lost in scams, up 15%,” the author reports that nearly all nations recorded large increases in reported scams, the highest being in Egypt (190%) and Nigeria (186%).
For the world to truly embrace the potential of Web 3.0, we need to establish a way to identify users that goes beyond a simple profile tied to an email account.
Social media tried to connect a faceless world and ended up centralizing it further. Now, the architects of Web 3.0 are laying the foundations for our new digital souls – permanent digital tattoos that will carry all our data and be used in every single facet of online interaction for identification and policing.
Data protection is no longer a choice but a necessity for everyone with any sort of online presence, whether it is social or financial.
As governments move closer to implementing Central Bank Digital Currencies (CBDCs), their agencies will be empowered with data on a scale never seen before. Citizens would want some level of the anonymity that cryptocurrencies and stablecoins can provide, but we would also want credibility for our digital financial portfolios, so that we know who we are engaging in commerce with.
We would want KYC data ready and available for us to verify who we are dealing with and, vice versa, for our partners to be able to trust us. This data has to be protected from hacking or tampering and be verifiable by a decentralized system – a government-backed social credit score underpinned by data from a CBDC will not cut it.
Likewise, there is a need for journalistic integrity in a world where news is disseminated by social media, where online publishers can easily reopen blogs and accounts immediately after they are detected and shut down by state censorship or social media AI, and where the platforms themselves cooperate with regimes and allow elections to be influenced.
Without credible digital identity for users, social media platforms have been open to abuse by scammers, manipulators, or trolls, acting either as individuals or in groups, and often utilizing the platforms' own tools like Pages and Live Video to spread their content to wider audiences. What is more worrying is that identity theft has become a common practice used by cybercriminals to cover their tracks.
The balancing act of purposeful identity and data privacy is similar to that of juggling freedom of expression against restricting hateful content. According to an analysis by Kepios in October 2022, there are 4.74 billion social media users in the world, the majority (59.3%) of the earth's population. Although most of us use social media, we have never reached a consensus on how it should be policed; that power is held by government regulators.
Facebook itself has been involved in the harvesting of data and metadata from its users, which most recently saw the social media company paying US$90 million to settle a decade‐old privacy lawsuit in California accusing it of tracking users' internet activity even after they logged out of the app.
In 2019, a UK parliamentary report by the Digital, Culture, Media and Sport select committee,1 following an 18-month investigation into disinformation and fake news on Facebook, accused Zuckerberg of contempt of parliament for refusing three separate demands to give evidence, instead sending junior employees who were unable to answer the committee's questions, according to The Guardian.
Apart from labeling Facebook “digital gangsters,” the report warned that British electoral law is unfit to deal with the threat of manipulation and the spread of misinformation by fake users meant to divide society, similar to what the United States faced in 2016 with Russia's interference.
The report called on the British government to investigate “foreign influence, disinformation, funding, voter manipulation and the sharing of data” in the 2014 Scottish independence referendum, the 2016 EU referendum (Brexit) and the 2017 general election.
The Labour Party's deputy leader, Tom Watson, declared that “the era of self‐regulation for tech companies must end immediately.”
There is a pressing need for us to protect ourselves – our identities and data – from the various digital gangsters, big and small, that roam the highways of information.
While Facebook's troubles with divisive content started almost immediately after the platform launched, it was only in 2017, when Facebook's algorithm began favoring the emoji reactions that had been introduced a year earlier to accompany the original “Like” button, that anger took over the platform full-time and became weaponized by political regimes. In many of these cases, Meta had full knowledge and even deliberately delayed fixing the problem in favor of profit.
Forced to testify before the US Congress, Facebook had to reveal internal documents confirming that posts with many emoji reactions were more likely to keep users engaged – which was good for Facebook's business model of selling engagement time to advertisers.
Treating emoji reactions as five times more valuable than Likes, data scientists at Facebook concluded that the emoji reaction method brought the company closer to its goal of “Meaningful Social Interaction” (MSI).
On the surface, MSI, as announced by Zuckerberg, was aimed at bringing users together to improve their well-being by encouraging more meaningful interaction between friends and family and less time spent consuming professionally produced content, which research suggested was harmful to their mental health.
MSI is quantified by a points system based on social interactions: the score a post accrues from reactions, shares, and comments determines how many people will see it.
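To make the mechanics concrete, here is a minimal sketch of how such a weighted scoring pass might look. The only figure taken from the reporting on the leaked documents is the five-times multiplier for emoji reactions; the comment and share weights, and all names, are illustrative assumptions.

```python
# A hypothetical MSI-style scoring function. Only the 5x emoji multiplier
# comes from reporting on Facebook's leaked documents; the comment and
# share weights below are illustrative assumptions.

LIKE_WEIGHT = 1
EMOJI_WEIGHT = 5 * LIKE_WEIGHT   # emoji reactions reportedly worth five Likes
COMMENT_WEIGHT = 15              # assumption: richer interactions weigh more
SHARE_WEIGHT = 30                # assumption

def msi_score(likes: int, emoji_reactions: int, comments: int, shares: int) -> int:
    """Return a post's engagement score; the feed ranks posts by this value,
    so a higher score means wider distribution. Note: there is no upper bound."""
    return (likes * LIKE_WEIGHT
            + emoji_reactions * EMOJI_WEIGHT
            + comments * COMMENT_WEIGHT
            + shares * SHARE_WEIGHT)
```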
However, as Facebook's own researchers were quick to point out, the emoji reactions, particularly the “angry face,” were driving the surge of controversial posts that could open “the door to more spam/abuse/clickbait inadvertently,” a staffer wrote in one of the internal documents, as reported in The Washington Post. A colleague responded, “It's possible.”
Facebook ignored clear data showing that users were getting angrier, prioritizing profit through engagement time instead. In 2019, Facebook data scientists confirmed that posts drawing the angry face emoji were highly likely to include misinformation, toxicity, and violence-inducing, low-quality news.
With the algorithm boosting the traffic of whatever content the MSI points system deemed viral, regular posts and updates from friends and family took a backseat to negative politics and scapegoating, as Facebook's traffic-serving AI delivered whatever received high interaction, no matter how questionable the content. In the United States, this heightened already escalating tensions around the Black Lives Matter movement and the Capitol Hill riot that claimed the lives of innocent individuals.
The Wall Street Journal, another publisher in the consortium of media that were allowed to review the internal Facebook documents, wrote that in 2018, BuzzFeed chief executive Jonah Peretti emailed a top official at Facebook with a warning. Peretti had noted the success of a BuzzFeed post titled “21 Things That Almost All White People are Guilty of Saying,” which received 13 000 shares and 16 000 comments on Facebook, with many criticizing BuzzFeed for writing it, and arguing with other commenters about race. Other content produced by BuzzFeed around the same time, from news videos to articles on self‐care and animals, did not manage to get as many engagements as expected. In fact, positive content almost always performed badly in sharp contrast to divisive content.
Peretti blamed Facebook's new algorithm. “MSI ranking isn't actually rewarding content that drives meaningful social interactions,” he wrote in his email to the Facebook official, adding that the content creators under him felt “pressure to make bad content” or otherwise, they would “underperform.”
His staff weren't just producing material that exploited racial divisions, but also “fad/junky science,” “extremely disturbing news,” and gross images, according to his email. By the time Facebook decided that the cons outweighed the pros, it was too late. Even when points allocated to the angry face emoji reaction were halved, in an attempt to curb the spread of hateful content, it didn't work, as society was already feeding off negativity in many parts of the world. And as there was no ceiling to the algorithm's score, the halved scores of negative content were still higher than those of the most viral positive content.
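A bit of illustrative arithmetic, using the assumed weights from the earlier sketch and entirely invented engagement figures, shows why halving did not help: with no ceiling on the score, a heavily engaged divisive post still dwarfs a far better-liked positive one.

```python
# All figures invented for illustration, reusing the earlier assumed weights.
# Halving the angry-reaction weight (5 -> 2.5) barely dents a viral post.
divisive = 16_000 * 2.5 + 16_000 * 15 + 13_000 * 30       # 670,000 points
positive = 50_000 * 1 + 1_000 * 5 + 500 * 15 + 800 * 30   # 86,500 points
```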
Today, after compounded allegations of abuse of power by Facebook, the angry face emoji reaction carries no points in the algorithm's scoring system, the company having tweaked it over and over to reduce its significance. Yet comments and shares still carry high value in the algorithm, and this is where trolls, who are often employed by dictatorships, thrive.
In Asia, social media companies do not employ enough locals and thus lack the ability and language skills to identify questionable posts from users in a number of developing countries. In Facebook's case, there even seems to be direct alignment with ruling regimes, giving dictators the freedom to say anything they want about any group of people.
Confirming the allegations, Facebook whistleblower, former product manager Frances Haugen, who leaked a cache of internal documents to expose the social media giant, testified before Congress that the company does not police abusive content in developing nations where hate speech is likely to cause the most harm.
In 2017, more than 730 000 Rohingya were forced to flee Myanmar's Rakhine state after military crackdowns that served as cover for ethnic cleansing. Many Rohingya refugees were denied asylum by neighboring Asian countries and perished at sea. As villages burned and bodies riddled with bullets fell to the ground, human rights groups aided the refugees and documented the atrocities, which included the killing of children and rape.
Myanmar authorities said they were battling an insurgency in response to attacks on border police outposts and denied the allegations.
Facebook admitted in an official statement that it hadn't done enough to prevent its platform “from being used to foment division and incite offline violence” in Myanmar. It further said: “We know we need to do more to ensure we are a force for good in Myanmar, and in other countries facing their own crises.”
Zuckerberg once again told US senators that the problem was being addressed, with Facebook hiring dozens more Burmese speakers to review hate speech posted in Myanmar on the platform.
But months later, a Reuters analysis found that hate speech was still flourishing on Facebook in Myanmar. One of the many such posts read: “We must fight them the way Hitler did the Jews, damn kalars!” (kalar being a common racial expletive used against the Rohingya). A UN investigator tasked to the matter said that the platform had “turned into a beast.”
Facebook has in the past banned Rohingya rebel groups from its platform, labeling them as “dangerous organizations.” But Facebook did not impose any restrictions on the accounts operated by or linked to the Myanmar military until August 2018, despite the widely reported humanitarian crisis caused by the military in forcing hundreds of thousands of Rohingya to flee to Bangladesh.
Despite the UN accusing Myanmar's military of war crimes and crimes against humanity, Facebook seemed intent on helping the regime. In December 2021, a group of surviving Rohingya refugees filed a US$150 billion class-action lawsuit against the company in California, over allegations that it did not act against hate speech that contributed to their persecution.
Similarly in India, a report by the Wall Street Journal revealed how Facebook India's head of public policy, Ankhi Das, “opposed applying hate speech rules” to at least four figures from the ruling Bharatiya Janata Party (BJP) who had posted violence‐inciting, Islamophobic content on their profiles.
According to the report, Das did so to remain in the ruling party's good books and protect the social media giant's business prospects in one of the largest markets in the world, at a time when it faced stiff competition from new contenders like TikTok.
This is not the first time Facebook has looked away from manipulation and the peddling of hatred by political parties – similar reports have emerged from various parts of Europe, Africa, and Asia.
Imposing bans on government- or military-linked accounts could cause state regulators to bring the ax down on social media companies and thus cut off their share of the pie.
Among the leaks revealed by whistleblower Haugen was disturbing data from internal studies on Instagram conducted by Meta itself. Another round of congressional hearings was scheduled as US lawmakers grew concerned about the impact Instagram had on children.
One leaked study found that 13.5% of UK teen girls surveyed said their suicidal thoughts became more frequent after starting on Instagram.2 In another, 17% of teen girls said their eating disorders got worse after using Instagram. About 32% of teen girls surveyed said that when they felt bad about their bodies, Instagram made them feel worse.
Senator Marsha Blackburn accused Facebook of intentionally targeting minors with an “addictive” product despite the app requiring users to be 13 years or older. “It is clear that Facebook (Meta) prioritizes profit over the well-being of children and all users,” she said.
Her concern was echoed by Consumer Protection, Product Safety, and Data Security subcommittee chair Richard Blumenthal who said, “Facebook exploited teens using powerful algorithms that amplified their insecurities,” and added that he hoped the hearing would ascertain if there was such a thing as a safe algorithm.
Haugen testified before Congress that when the issue of health and safety of children was raised by external researchers and lawmakers, the company was never truthful. “Facebook chooses to mislead and misdirect. Facebook has not earned our blind faith,” Haugen told Congress.
At the hearings, Meta officials responded that other internal research shows that young people who use Instagram feel more connected to their peers and better about their well-being.
Meta and other social media companies did not invent bad behavior or polarization. They did not create ethnic violence. But they did hide vital information from the public and the US government, as pointed out by Haugen and Congress, and in some of these instances the damage done was unforgivable.
If we tolerate toxic behavior, it will breed more toxic behavior.
On a podcast in August 2022, interviewed by Joe Rogan, Zuckerberg had this to say about managing Meta's PR crises:
You wake up in the morning. Look at my phone to get like a million messages… It's usually not good, right? I mean like, people reserve the good stuff to tell me in person, right? But it's like, okay, what's going on in the world that I need to kind of pay attention to that day, so it's almost like, every day you wake up and you're like, punched in the stomach and that's like, okay, well, fuck.
What would be a good representation of your character? Would it be a good credit score or recommendations and testimonials by your business partners? Does a person's success guarantee that they will be a righteous leader?
Ethereum creator Vitalik Buterin recently co-wrote a research paper titled “Decentralized Society: Finding Web3's Soul,” in which he introduced the concept of “Soulbound” tokens (SBTs) as the foundation of a Web 3.0 decentralized future.
SBTs are non-fungible tokens (NFTs) – unique cryptographic tokens that exist on a blockchain and cannot be replicated – containing personal data, including individual achievements and work credentials. Unlike standard NFTs, SBTs cannot be transferred; they are “soul-bound” to the individual for life and even after that individual has departed.
Buterin described them as an “extended resume.”
SBTs display a person's “commitments, credentials, and affiliations” and are stored on the blockchain to confirm “provenance and reputation.” In the 37-page paper, Buterin outlines use cases for SBTs, including bolstering people's social identities to fight scams. SBTs would be portable, protected, and revealable at the owner's discretion, for example to apply for a job or to transact.
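As a conceptual sketch only (the paper does not prescribe an implementation, and all names and fields here are assumptions), the defining property of an SBT can be captured in a few lines: the token is minted to one address and refuses any transfer, while disclosure stays under the owner's control.

```python
# A conceptual sketch of a soulbound credential; names are assumptions.
from dataclasses import dataclass

@dataclass
class SoulboundToken:
    soul: str                 # the owner's address, fixed at mint time
    issuer: str               # e.g. a university's or employer's address
    claim: str                # the credential, e.g. "BSc Computer Science"
    revealed: bool = False    # disclosure is opt-in for the owner

    def transfer(self, new_owner: str) -> None:
        # Unlike a standard NFT, ownership can never change hands.
        raise PermissionError("soulbound tokens are non-transferable")

    def reveal(self) -> str:
        # The owner chooses to show a credential, e.g. to a potential employer.
        self.revealed = True
        return self.claim
```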
Colleges could issue degrees via SBTs, and event organizers could verify a person's attendance and award them certificates in the form of SBT badges. SBTs could record academic credentials and employment, which would enable potential employers to verify a candidate's work history and grades accurately.
Another possibility for SBTs is their use as a personal credit score. This could make the borrowing and lending of assets more transparent and ease the burden of verification for state authorities, such as when a person is travelling between countries.
Buterin proposes that a community recovery process, in which “guardians” agree to unlock your wallet, could beef up data security around SBTs. Guardians could be friends, family members, or trusted institutions who verify your identity.
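A minimal sketch of that recovery flow, assuming a simple M-of-N approval rule (the paper leaves the exact mechanism open), might look like this:

```python
# A sketch of guardian-based recovery; the threshold rule is an assumption.
class GuardianRecovery:
    def __init__(self, guardians: set, threshold: int):
        if threshold > len(guardians):
            raise ValueError("threshold cannot exceed number of guardians")
        self.guardians = guardians
        self.threshold = threshold
        self.approvals = set()

    def approve(self, guardian: str) -> None:
        if guardian not in self.guardians:
            raise PermissionError("not a registered guardian")
        self.approvals.add(guardian)

    def can_recover(self) -> bool:
        # The Soul's wallet unlocks once enough guardians have approved.
        return len(self.approvals) >= self.threshold

# Example: any 2 of 3 trusted parties can restore access to a lost Soul.
recovery = GuardianRecovery({"sister", "best_friend", "bank"}, threshold=2)
recovery.approve("sister")
recovery.approve("bank")
assert recovery.can_recover()
```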
Subsets of SBTs could also be created to support your main SBT. Decentralized Autonomous Organizations (DAOs) could airdrop SBTs (Buterin calls this a “souldrop”) to people who have a known interest in a particular field or topic, so that the DAO can grow its numbers.
As with anything verified on a blockchain, where trust no longer depends on anonymous third parties, the use of SBTs could stretch beyond the real-world implications of commerce and trade.
SBTs could even be used for safer and more pleasant interaction on social media.
The potential for SBTs to effect change in public behavior through self-policing communities is evident when every individual has a reputation to maintain.
While outrageous acts are often rewarded by today's social media algorithms with wider reach, Reputation Tokens (RTs) could be a way to penalize bad behavior and reward good acts of charity and kindness. Red RTs could be given for serious offenses, orange RTs could be treated as warnings, and green RTs could be awarded for good behavior.
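As a minimal sketch of how such a ledger might tally reputation (the RT scheme is a proposal, not an existing standard, so every name and weight below is an assumption):

```python
# A sketch of the proposed RT scheme; names and weights are assumptions.
from collections import Counter
from enum import Enum

class RT(Enum):
    RED = "serious offense"
    ORANGE = "warning"
    GREEN = "good behavior"

class ReputationLedger:
    def __init__(self) -> None:
        self.tokens = Counter()

    def issue(self, token: RT) -> None:
        # On-chain, issuance would be a permanent event bound to an SBT.
        self.tokens[token] += 1

    def standing(self) -> int:
        # Net reputation: green counts for, red weighs heavily against.
        return self.tokens[RT.GREEN] - 2 * self.tokens[RT.RED]
```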
The angry face emoji reaction on Facebook does nothing to stop bad content, but serving enough red RTs would leave a permanent mark of embarrassment on a user's SBT, one that shows up when they apply for jobs or loans.
This influences and ultimately changes the behavior of modern society. Faceless commenters and paid trolls will no longer dominate the comments once the veil of anonymity that protects keyboard warriors, propagandists, and scammers is gone.
Going beyond policing societal behavior, RTs could also be used to penalize governments and big businesses that are otherwise too powerful to take down. An accumulation of enough red RTs could trigger a smart contract that bans a politician from contesting an election or even deducts a portion of a company's profits as a penalty for unethical conduct.
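Continuing the same hypothetical sketch, such a trigger could be expressed as a simple threshold check over the ledger; the threshold and the penalty are, of course, invented for illustration:

```python
# Hypothetical smart-contract-style trigger over the ledger sketched above.
RED_RT_BAN_THRESHOLD = 100   # invented threshold for illustration

def enforce(ledger: ReputationLedger):
    if ledger.tokens[RT.RED] >= RED_RT_BAN_THRESHOLD:
        return "barred from contesting the next election"
    return None
```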
RTs would be a way for us to monitor the corporate governance of companies like Meta and hold them accountable for their misconduct. SMEs would likely benefit the most from SBTs and RTs, as they could factor them into loan applications: the more green RTs, the easier it should be to get a loan.
The paper's co‐author, Glen Weyl, stated in an interview that he predicts that SBTs will be available for early use by the end of 2022 and suspects that the 2024 crypto market upcycle will focus on them.
The faster we get real on‐chain credentials, the sooner we will be able to identify malicious, fake content and accounts and penalize them ourselves without waiting for central powers to act.
In this world, where identity and reputation no longer hinge on trust issues thanks to advancements in blockchain technology, we would truly be able to be part of a worldwide decentralized social media and commerce network that belongs to the people and serves freedom of speech and trade.