Birdwatch represents Twitter’s most experimental response to one of the biggest lessons that social media companies drew from the historic events of 2020: that their existing efforts to combat misinformation — including labeling, fact-checking and sometimes removing content — were not enough to prevent falsehoods about a stolen election or the coronavirus from reaching and influencing broad swaths of the population. Researchers who studied enforcement actions by social media companies last year found that fact checks and labels are usually implemented too late, after a post or a tweet has gone viral.
The Birdwatch project — which for the duration of the pilot will function as a separate website — is novel in that it attempts to build new mechanisms into Twitter’s product that foreground fact-checking by its community of 187 million daily users worldwide. Rather than combing through replies to a tweet to sift fact from falsehood — or relying on Twitter employees to append a label providing additional context — users will be able to click on a separate notes folder attached to a tweet, where they can see the consensus-driven responses from the community. Twitter will have a team reviewing winning responses to prevent manipulation, though a major question is whether any part of the process will be automated and therefore more easily gamed.
Crowdsourcing models are as old as the Internet itself and are most commonly associated with services such as Wikipedia, Quora and Reddit. Each of these services has a model in which community members and administrators debate content and arrive at a conclusion, with the platform taking a limited curation and policing role. While Wikipedia’s crowdsourced model is viewed as having been very effective, Reddit has struggled.
The Birdwatch interface may slightly change the look of Twitter, but it draws from a long-standing approach.
Twitter chief executive Jack Dorsey and Facebook chief executive Mark Zuckerberg have both said they believe that the best remedy for problematic speech is more conversation and dialogue — rather than a censorship model in which content is removed or covered up. The latter tack, which the companies doubled down on during the election and its aftermath, did not make a huge dent in preventing misinformation, and it also pushed many Trump supporters and right-leaning users to smaller, ideologically friendly platforms.
The sense that tech companies have moved away from their ideals of a big tent for free expression was further cemented when the platforms banned President Donald Trump in the wake of his misinformation-fueled rally that preceded the violent Capitol siege this month. Experts have also pointed out that the sheer pace of labeling, fact-checking and other enforcement actions is unsustainable for smaller services like Twitter.
Two recent studies have pointed to the value of community fact-checking in correcting misinformation about the pandemic.