Mark Sargent saw instantly that his situation had changed for the worse. A voluble, white-haired 52-year-old, Sargent is a flat-earth evangelist who lives on Whidbey Island in Washington state and drives a Chrysler with the vanity plate “ITSFLAT.” But he's well known around the globe, at least among those who don't believe they are living on one. That's thanks to YouTube, which was the on-ramp both to his flat-earth ideas and to his subsequent international stardom.
Formerly a tech-support guy and competitive virtual pinball player, Sargent had long been intrigued by conspiracy theories, ranging from UFOs to Bigfoot to Elvis' immortality. He believed some (Bigfoot) and doubted others (“Is Elvis still alive? Probably not. He died on the toilet with a whole bunch of drugs in his system”). Then, in 2014, he stumbled upon his first flat-earth video on YouTube.
He couldn't stop thinking about it. In February 2015 he began uploading his own musings, in a series called “Flat Earth Clues.” As he has reiterated in a sprawling corpus of more than 1,600 videos, our planet is not a ball floating in space; it's a flat, Truman Show-like terrarium. Scientists who insist otherwise are wrong, NASA is outright lying, and the government dares not level with you, because then it would have to admit that a higher power (aliens? God? Sargent's not sure about this part) built our terrarium world.
Sargent's videos are intentionally lo-fi affairs. There's often a slide show that might include images of Copernicus (deluded), astronauts in space (faked), or Antarctica (made off-limits by a cabal of governments to hide Earth's edge), which appear onscreen as he speaks in a chill, avuncular voice-over.
Sargent's top YouTube video has received nearly 1.2 million views, and he has amassed 89,200 followers—hardly epic by modern influencer standards but solid enough to earn a living from the preroll ads, as well as paid speaking and conference gigs.
Crucial to his success, he says, was YouTube's recommendation system, the feature that promotes videos for you to watch on the homepage or in the “Up Next” column to the right of whatever you're watching. “We were recommended constantly,” he tells me. YouTube's algorithms, he says, figured out that “people getting into flat earth apparently go down this rabbit hole, and so we're just gonna keep recommending.”
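To make the feedback loop concrete, here is a minimal sketch in Python of an engagement-driven recommender of the general kind Sargent is describing. The catalog, topic tags, and scoring rule are invented for illustration; YouTube's actual system is vastly more elaborate and its details are not public.

```python
# Toy illustration of an engagement-driven feedback loop, not YouTube's
# actual recommender. Catalog, topic tags, and scoring are invented.
from collections import Counter

CATALOG = {
    "flat_earth_clues_1": "flat-earth",
    "flat_earth_clues_2": "flat-earth",
    "moon_landing_hoax": "conspiracy",
    "late_night_monologue": "mainstream",
    "nasa_launch_replay": "mainstream",
}

def recommend(watch_history, n=3):
    """Rank unwatched videos by how often their topic appears in the history."""
    topic_counts = Counter(CATALOG[v] for v in watch_history if v in CATALOG)
    unwatched = [v for v in CATALOG if v not in watch_history]
    return sorted(unwatched, key=lambda v: topic_counts[CATALOG[v]], reverse=True)[:n]

# One flat-earth view is enough to push the next flat-earth video to the top.
print(recommend(["flat_earth_clues_1"]))
```

In this toy version, every flat-earth video watched raises the rank of the remaining flat-earth videos, which is the rabbit-hole dynamic in miniature.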
Scholars who study conspiracy theories were realizing the same thing. YouTube was a gateway drug. One academic who interviewed attendees of a flat-earth convention found that, almost to a person, they'd discovered the subculture via YouTube recommendations. And while one might shrug at this as marginal weirdness—They think the Earth is flat, who cares? Enjoy the crazy, folks—the scholarly literature finds that conspiratorial thinking often colonizes the mind. Start with flat earth, and you may soon believe Sandy Hook was a false-flag operation or that vaccines cause autism or that Q's warnings about Democrat pedophiles are a serious matter. Once you convince yourself that well-documented facts about the solar system are a fraud, why believe well-documented facts about anything? Maybe the most trustworthy people are the outsiders, those who dare to challenge the conventions and who—as Sargent understood—would be far less powerful without YouTube's algorithms amplifying them.
For four years, Sargent's flat-earth videos got a steady stream of traffic from YouTube's algorithms. Then, in January 2019, the flow of new viewers suddenly slowed to a trickle. His videos weren't being recommended anywhere near as often. When he spoke to his flat-earth peers online, they all said the same thing. New folks weren't clicking. What's more, Sargent discovered, someone—or something—was watching his lectures and making new decisions: The YouTube algorithm that had previously recommended other conspiracies was now more often pushing mainstream videos posted by CBS, ABC, or Jimmy Kimmel Live, including ones that debunked or mocked conspiracist ideas. YouTube wasn't deleting Sargent's content, but it was no longer boosting it. And when attention is currency, that's nearly the same thing.
“You will never see flat-earth videos recommended to you, basically ever,” he told me in dismay when we first spoke in April 2020. It was as if YouTube had flipped a switch.
In a way, it had. Scores of them, really—a small army of algorithmic tweaks, deployed beginning in 2019. Sargent's was among the first accounts to feel the effects of a grand YouTube project to teach its recommendation AI how to recognize the conspiratorial mindset and demote it. It was a complex feat of engineering, and it worked; the algorithm is less likely now to promote misinformation. But in a country where conspiracies are recommended everywhere—including by the president himself—even the best AI can't fix what's broken.
After the Las Vegas shooting, executives began focusing more on the challenge. Google's content moderators grew to 10,000, and YouTube created an “intelligence desk” of people who hunt for new trends in disinformation and other “inappropriate content.” YouTube's definition of hate speech was expanded to include Alex Jones' claim that the murders at Sandy Hook Elementary School never occurred. The site had already created a “breaking-news shelf” that would run on the homepage and showcase links to content from news sources that Google News had previously vetted. The goal, as Neal Mohan, YouTube's chief product officer, noted, was not just to delete the obviously bad stuff but to boost reliable, mainstream sources. Internally, they began to refer to this strategy as a set of R's: “remove” violating material and “raise up” quality stuff.
But what about content that wasn't quite bad enough to be deleted? Like alleged conspiracies or dubious information that doesn't advocate violence or promote “dangerous remedies or cures” or otherwise explicitly violate policies? Those videos wouldn't be removed by moderators or the content-blocking AI. And yet some executives wondered whether the site was complicit in promoting them at all. “We noticed that some people were watching things that we weren't happy with them watching,” says Johanna Wright, one of YouTube's vice presidents of product management, “like flat-earth videos.” This was what executives began calling “borderline” content. “It's near the policy but not against our policies,” as Wright said.
By early 2018, YouTube executives decided they wanted to tackle the borderline material too. It would require adding a third R to their strategy—“reduce.” They'd need to engineer a new AI system that would recognize conspiracy content and misinformation and down-rank it.
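As a rough sketch of what “reduce” might look like in practice, the snippet below re-ranks recommendation candidates by discounting the scores of videos a classifier flags as likely borderline (the classifier itself is described next). The demotion factor, threshold, and function names are assumptions for illustration, not YouTube's published implementation.

```python
# Minimal sketch of "reduce": down-rank likely-borderline videos rather
# than delete them. Demotion factor and threshold are illustrative guesses.

def rerank(candidates, borderline_score, demotion=0.1, threshold=0.8):
    """Re-order recommendation candidates, demoting likely-borderline videos.

    candidates: list of (video_id, relevance_score) pairs.
    borderline_score: callable returning an estimated P(borderline) for a video_id.
    """
    adjusted = []
    for video_id, relevance in candidates:
        p = borderline_score(video_id)
        # Flagged videos keep their relevance but lose most of their ranking
        # weight; nothing is removed from the platform.
        weight = demotion if p >= threshold else 1.0
        adjusted.append((video_id, relevance * weight))
    return sorted(adjusted, key=lambda pair: pair[1], reverse=True)
```

The design point is the one the executives settled on: the video stays up, but it no longer rides the recommendation engine to new viewers.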
To create an AI classifier that can recognize borderline video content, you need to train the AI with many thousands of examples. To get those training videos, YouTube would have to ask hundreds of ordinary humans to decide what looks dodgy and then feed their evaluations and those videos to the AI, so it could learn to recognize what dodgy looks like. That raised a fundamental question: What is “borderline” content? It's one thing to ask random people to identify an image of a cat or a crosswalk—something a Trump supporter, a Black Lives Matter activist, and even a QAnon adherent could all agree on. But if they wanted their human evaluators to recognize something subtler—like whether a video on Freemasons is a study of the group's history or a fantasy about how they secretly run government today—they would need to provide guidance.
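One plausible way to turn those guided human judgments into training labels, sketched here under assumptions of my own (the question names and the 0.5 cutoff are invented), is to have each rater answer the same yes/no guidance questions and then average the answers into a soft label.

```python
# Sketch of aggregating guided human ratings into training labels.
# Question names and the 0.5 cutoff are illustrative assumptions.
from statistics import mean

def aggregate_ratings(ratings):
    """Each rating is a dict of yes/no answers to guidance questions,
    e.g. {"unsubstantiated_conspiracy": True, "harmful_misinfo": False}.
    Returns a soft label in [0, 1] plus a hard 'borderline' decision."""
    scores = [mean(1.0 if answer else 0.0 for answer in r.values()) for r in ratings]
    soft_label = mean(scores)
    return {"soft_label": soft_label, "borderline": soft_label >= 0.5}

ratings = [
    {"unsubstantiated_conspiracy": True, "harmful_misinfo": False},
    {"unsubstantiated_conspiracy": True, "harmful_misinfo": True},
    {"unsubstantiated_conspiracy": True, "harmful_misinfo": False},
]
print(aggregate_ratings(ratings))  # soft_label ~0.67, borderline: True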
The evaluators processed tens of thousands of videos, enough for YouTube engineers to begin training the system. The AI would take data from the human evaluations—that a video called “Moon Landing Hoax—Wires Footage” is an “unsubstantiated conspiracy theory,” for example—and learn to associate it with features of that video: the text under the title that the creator uses to describe the video (“We can see the wires, people!”); the comments (“It's 2017 and people still believe in moon landings ... help ... help”); the transcript (“the astronaut is getting up with the wire taking the weight”); and, especially, the title. The visual content of the video itself, interestingly, often wasn't a very useful signal. As with videos about virtually any topic, misinformation is often conveyed by someone simply speaking to the camera or (as with Sargent's flat-earth material) over a procession of static images.
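Here is a simplified sketch of a text-signal classifier along those lines, built with scikit-learn's TF-IDF and logistic regression over the title, description, comments, and transcript. The two training rows are placeholders; YouTube's production models are, of course, far larger and unpublished.

```python
# Sketch of a borderline-content classifier over the text signals the
# article describes (title, description, comments, transcript).
# Training rows are invented placeholders, not real labeled data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def join_signals(video):
    # Concatenate the text fields into one document per video.
    return " ".join([video["title"], video["description"],
                     " ".join(video["comments"]), video["transcript"]])

train_videos = [
    {"title": "Moon Landing Hoax - Wires Footage",
     "description": "We can see the wires, people!",
     "comments": ["still believe in moon landings ... help"],
     "transcript": "the astronaut is getting up with the wire taking the weight"},
    {"title": "Apollo 11 documentary",
     "description": "Archival footage from NASA",
     "comments": ["great history lesson"],
     "transcript": "the lunar module descended to the surface"},
]
labels = [1, 0]  # 1 = borderline per human evaluators, 0 = not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit([join_signals(v) for v in train_videos], labels)
print(model.predict_proba([join_signals(train_videos[0])])[:, 1])
```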
In January 2019, YouTube began rolling out the system. That's when Mark Sargent noticed his flat-earth views take a nose dive. Other types of content were getting down-ranked, too, like moon-landing conspiracies or videos perseverating on chemtrails. Over the next few months, Goodrow and Rohe pushed out more than 30 refinements to the system that they say increased its accuracy. By the summer, YouTube was publicly declaring success: It had reduced by 50 percent the watch time of borderline content that came from recommendations. By December it reported a reduction of 70 percent.
The company won't release its internal data, so it's impossible to confirm the accuracy of its claims. But there are several outside indications that the system has had an effect. One is that consumers and creators of borderline stuff complain that their favorite material is rarely boosted any more. “Wow has anybody else noticed how hard it is to find ‘Conspiracy Theory’ stuff on YouTube lately? And that you easily find videos ‘debunking’ those instead?” one comment noted in February of this year. “Oh yes, youtubes algorithm is smashing it for them,” another replied.
Then there's the academic research. Berkeley professor Hany Farid and his team found that the frequency with which YouTube recommended conspiracy videos began to fall significantly in early 2019, precisely when YouTube was beginning its updates. By early 2020, his analysis found, those recommendations had gone down from a 2018 peak by 40 percent. Farid noticed that some channels weren't merely reduced; they all but vanished from recommendations. Indeed, before YouTube made its switch, he'd found that 10 channels—including that of David Icke, the British writer who argues that reptilians walk among us—comprised 20 percent of all conspiracy recommendations (as Farid defines them); afterward, he found that recommendations for those sites “basically went to zero.”
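For a sense of how a concentration figure like that can be computed, here is a small sketch that measures what share of logged conspiracy recommendations come from the 10 most-recommended channels. The log is invented so that the share works out to roughly the 20 percent Farid reported; it is not his data or his code.

```python
# Sketch of a channel-concentration measurement over a recommendation log.
# The log below is invented for illustration.
from collections import Counter

def top_channel_share(recommended_channels, n=10):
    """Fraction of recommendations attributable to the n most-recommended channels."""
    counts = Counter(recommended_channels)
    top = sum(count for _, count in counts.most_common(n))
    return top / len(recommended_channels)

# A handful of dominant channels plus a long tail of one-offs.
log = (["icke"] * 4 + ["channel_2"] * 2
       + [f"channel_{i}" for i in range(3, 11)]
       + [f"longtail_{i}" for i in range(56)])
print(f"{top_channel_share(log):.0%}")  # 20%
```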
YouTube’s Plot to Silence Conspiracy Theories | Wired
I highlighted some good excerpts from the long article over at Wired. YouTube is admitting to a plot to get rid of content that questions our reality. This is absurd. If there is nothing to fear, then there is no reason to hide anything or remove it. So let conspiracies be talked about on YouTube. Why censor unless the videos are getting close to the truth? The article's timeline checks out with when YouTube started dropping the hammer on the conspiracy videos you could once find. Now YouTube takes down videos questioning our rigged reality all the time, and it's the norm.