Facebook is the biggest social media company in the world and continuously influences life both on and off the Internet. Not only is the parent platform home to almost 3 billion monthly active users (Statista, 2021), but the company also owns Instagram and WhatsApp, both integral parts of 21st century communication in their own right. However, concerns have arisen with this relatively new mode of communication. Wherever Facebook makes its mark on the Internet, misinformation, in some cases deadly, proliferates freely. This misinformation has been expansive, covering broad aspects of modern life.
The Real Facebook Oversight Board and the environmental non-profit Stop Funding Heat conducted a study which found that misinformation about climate change grows substantially on Facebook every day, with almost 1.4 million people viewing these inaccuracies daily (Stop Funding Heat, 2021). Ill-informed constituents can sway local elections, eventually leading to policy changes in the highest political echelons of the country.
As it happens, Facebook’s algorithm has driven far more sinister effects of misinformation. One of the largest misinformation campaigns propagated on the platform in the last two years revolves around COVID-19. In August 2021, Facebook said it had removed 20 million pieces of COVID-19 misinformation, such as posts undermining the effectiveness of the vaccines or making false statistical claims about the nature of the virus (Bloomberg, 2021). According to the World Health Organization (WHO), misinformation about the pandemic is one of the major factors perpetuating COVID-19 around the world (CNBC, 2021). Although Facebook’s internal processes ostensibly worked to combat the spread of lies on the platform, approximately half of the posts on Facebook related to the COVID-19 vaccines contained misinformation, while posts fact-checking that misinformation were viewed at a far lower rate than the posts fomenting the inaccuracies (Harvard Kennedy School, 2021).
People also use Facebook’s platforms to spread Islamophobic propaganda in India (Times of India, 2021). In 2018, misinformation spread on WhatsApp, a Facebook-owned mobile messaging app, led to the mob killings of about two dozen innocent people in India (New York Times, 2019). Although this case of mob violence is extreme, it is important to ensure that a similar circumstance never occurs again, and to take note of how invasive and dangerous false information on the Internet can be.
Additionally, Facebook has had its fair share of shady practices in the past. The company’s previous practice of storing facial recognition and other personal data has drawn extreme criticism and raised privacy concerns among users (CNN, 2021).
With this tainted history, Facebook’s rebrand as a new company, Meta, is worrying. Having faced the repercussions of its actions, including the 2019 and 2021 Senate hearings on security, cryptocurrency and unhealthy standards, among other issues, the company seems to understand that it is in trouble. Although Congress has not yet imposed tangible penalties on Facebook, it is obvious that the government has its eye on the company. The rebrand makes it seem as if Facebook, after facing the consequences of its actions, is attempting to save face and avoid possible future restrictions.
A rebrand may seem to Facebook a fresh start and a clean slate. The company has vowed to delete the facial recognition data accumulated over its years in operation (New York Times, 2021), an unexpected step, but this does not mean that the nature of the platform will fundamentally change. In fact, Facebook has stated that it will not completely remove its facial recognition software, DeepFace, nor will it promise never to use the system again, demonstrating an acute disregard for the security of its consumers. Facebook’s rebrand as Meta and its subsequent development of the MetaVerse were supposed to be a method of improving the platform, penance for its previous sins. An increased commitment to removing misinformation and fostering a community of connection and accessibility was supposed to be Facebook’s comeback. But this addition does nothing to save the platform; it is a branding scheme to distract people from the damage that Facebook and its various sub-platforms have done to society.
It is clear that Facebook, not only as a social media platform but as an integral part of 21st century life, is not ready for a feature as pervasive as the proposed MetaVerse. Between the continued rampant misinformation on the platform and the refusal to completely eliminate facial recognition practices and other security concerns, the company must do much more to get back into the good graces of the general public. Without fundamental changes to its platforms, Facebook, Meta and the MetaVerse will continue to bring sinister consequences to society.