Meta, the tech giant that owns Instagram and Facebook, has announced the end of its fact-checking program. In a Jan. 7 post on the Meta website, Joel Kaplan, Meta's Chief Global Affairs Officer, explained the decision and what Meta hopes will come from the switch. Meta frames the policy change as a defense of free speech and an end to censorship. The company believes the inaccuracies in its current system limit users' ability to share what they want and often put innocent people in "Facebook jail." Meta added that it believed about 20% of the posts it removed were mistakes.
Currently, Meta relies on independent, third-party fact-checkers. According to the Meta Transparency Center, fact-checkers certified by the nonpartisan International Fact-Checking Network (IFCN) review content and rate its accuracy. The possible ratings are False, Altered, Partly False, Missing Context, Satire and True. Meta then applies consequences to posts based on their rating: False and Altered draw the most aggressive action, Partly False and Missing Context receive less severe consequences, and posts rated True or Satire face no action at all.
Meta introduced this system in 2016 in the hope that independent experts would supply additional information and context on potentially misleading topics. With the rise of artificial intelligence and increasingly complex computer-generated images and videos, deepfakes and lies online become harder to spot by the day. Older users, who typically have more difficulty recognizing deepfake videos, often rely on these evaluations to keep misinformation from influencing important decisions.
The idea of fact-checking is not disappearing altogether at Meta. Following in the footsteps of X, Meta is adopting a "Community Notes" system to keep its online communities safe. This model puts power back into the hands of the community and aims to reduce the potential for bias among fact-checkers. It relies on community members with a diverse range of backgrounds and perspectives to ensure that all sides of a post are represented. Meta hopes this will bring it closer to its goal of free and accurate speech for a safer, better-informed community.
As Meta transitions away from third-party fact-checking, the responsibility for discerning truth online increasingly falls on individual users and community-driven initiatives. This shift raises questions about the future of misinformation management on social media platforms, and there is widespread disagreement over whether the new system will improve Meta's communities. Only time will tell whether it succeeds. In the meantime, users must stay vigilant, engaged and responsible online so that the digital space remains a place of accurate communication across a wide variety of communities. As Meta navigates these changes, the commitment to free and fair speech remains a collective responsibility.