Just after the El Paso shooting, a video of a young man in glasses driving and rapping to Kendrick Lamar began to spread on Telegram. It was posted by Gavin McInnes, the Proud Boys founder who has been banned from major social media platforms for violent rhetoric, with this caption: “This just in: rap fan shoots up Walmart in El Paso. Is rap the devil’s music?” Within a minute, it was reposted by Milo Yiannopoulos, who has also been kicked off the major social networks, with the words “Kendrick Lamar has some questions to answer.” But the driver, a 26-year-old from New York state, had nothing to do with the shooting, and the video itself was several years old.
Collectively, the two posts, which remain online, have been viewed by roughly 7,500 people, according to Telegram statistics (though it’s possible the two men’s audiences overlap). Yet there’s no way to tell whether the video spread further into private groups on the platform, or whether it made the leap from Telegram to other private spaces. The video highlights an emerging issue for online disinformation: As major social platforms like Facebook, YouTube, and Twitter have taken steps to ban users for spreading false, hateful, or violent messages, bad actors have turned to private spaces where their communications are harder to track.
After the El Paso and Dayton shootings, disinformation spread across spaces the public couldn’t see. In group chats, copy-and-pasted texts about active shooters jumped from group to group. On Telegram, personalities who were banned from Twitter, Facebook, and YouTube found a safe haven and continued to mislead their followers. Some of that information was then screenshotted and published to Facebook groups and Instagram or Snapchat stories. None of it could be effectively countered by fact-checkers.
The video McInnes and Yiannopoulos posted had nothing to do with the El Paso shooting. It was taken from YouTube, where it had originally been posted in 2015 by Cody Dzintars. Dzintars generally makes videos commenting on music but hasn’t uploaded anything in months. Yiannopoulos didn’t comment on questions about the disinformation, instead calling the reporter “a dumb cunt.” He then posted a message to Telegram that the video “wasn’t legit.” Neither McInnes nor Telegram responded to requests for comments.
“I wanted to clarify that I am NOT the El Paso shooter and don’t follow any agendas led by Milo or Gavin,” Dzintars told BuzzFeed News when notified of the video by email. “Luckily it was on this other app. I don’t think they have access to platforms like Twitter and Insta, but had they posted it on their Twitters it could’ve been a completely different story,” he said.
The flow of disinformation across these private spaces is harder both to track and to counter. “There’s no metadata whatsoever. There’s nothing there to go back to, to be able to trace it,” said Claire Wardle, who heads First Draft News, a nonprofit organization dedicated to fact-checking worldwide. Wardle has overseen fact-checking projects with a focus on WhatsApp in India and Brazil.
“The way that WhatsApp or Facebook Messenger groups work is, they tend to be smaller groups of people who know and trust each other,” she said.
While the small size of these groups limits their reach, it also means that people who see disinformation in their chats may be more likely to trust it. During the El Paso shooting, messages about multiple gunmen were posted to group chats, according to screenshots sent to BuzzFeed News. Some users also screenshotted those messages and posted them to other semiprivate online spaces.
One image falsely warning of multiple shooters was shared over 500 times after being tagged with Facebook’s crisis response feature. Several were also posted to Instagram stories and Snapchat, where there is no way to measure how far the misinformation spread.
Wardle says the inability to gauge how far a hoax has traveled is part of what makes private disinformation so challenging. The lack of statistics makes it more difficult to know whether something should be reported on and debunked, or ignored for fear of amplifying something not many people have seen.
“It becomes harder and harder for us, particularly going into 2020, when there is just no way of understanding how widespread something is,” she said.
Bad actors looking to spread their message are adept at exploiting messaging apps’ vulnerabilities. The inability to trace a hoax back to its origins means they can operate with relative impunity. Even on an app like Telegram, which has broadcasting capabilities, users can hide both the list of their followers and their own identities. While extremism researchers have long worried about far-right influencers moving to Telegram, Wardle now wonders whether newsrooms, politicians, and academics are prepared to deal with messaging-based disinformation on the eve of a presidential election.
“There’s just been a head-in-the-sand thing, like, ‘Oh, people in America don’t use that,’” Wardle said. “I think what we need to recognize is that they do, and we are completely ill-prepared.”
“I think this is a real wake-up call,” Wardle said. “This information is increasing, but it’s moving into spaces that we simply cannot monitor.”