Monday, February 19, 2018

After Florida School Shooting, Russian ‘Bot’ Army Pounced

SAN FRANCISCO — One hour after news broke about the school shooting in Florida last week, Twitter accounts suspected of having links to Russia released hundreds of posts taking up the gun control debate.

The accounts addressed the news with the speed of a cable news network. Some adopted the hashtag #guncontrolnow. Others used #gunreformnow and #Parklandshooting. Earlier on Wednesday, before the mass shooting at Marjory Stoneman Douglas High School in Parkland, Fla., many of those accounts had been focused on the investigation by the special counsel Robert S. Mueller III into Russian meddling in the 2016 presidential election.

“This is pretty typical for them, to hop on breaking news like this,” said Jonathon Morgan, chief executive of New Knowledge, a company that tracks online disinformation campaigns. “The bots focus on anything that is divisive for Americans. Almost systematically.”

One of the most divisive issues in the nation is how to handle guns, pitting Second Amendment advocates against proponents of gun control. And the messages from these automated accounts, or bots, were designed to widen the divide and make compromise even more difficult.

Any news event — no matter how tragic — has become fodder to spread inflammatory messages in what is believed to be a far-reaching Russian disinformation campaign. The disinformation comes in various forms: conspiracy videos on YouTube, fake interest groups on Facebook, and armies of bot accounts that can hijack a topic or discussion on Twitter.


Those automated Twitter accounts have been closely tracked by researchers. Last year, the Alliance for Securing Democracy, in conjunction with the German Marshall Fund, a public policy research group in Washington, created a website that tracks hundreds of Twitter accounts of human users and suspected bots that they have linked to a Russian influence campaign.

The researchers zeroed in on Twitter accounts posting information that was in step with material coming from well-known Russian propaganda outlets. To spot an automated bot, they looked for certain signs, like an extremely high volume of posts or content that conspicuously matched hundreds of other accounts.
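
To make those two signs concrete, here is a minimal, purely illustrative sketch of how such a flagging heuristic might look in code. It is an assumption for illustration only, not the researchers' actual methodology; the thresholds, the `flag_suspected_bots` function, and the input format are all hypothetical.

```python
# Illustrative sketch of a simple bot-flagging heuristic based on the two signs
# described above: an extremely high volume of posts, and content that matches
# posts from many other accounts. Thresholds and data shapes are assumptions.
from collections import defaultdict

def flag_suspected_bots(tweets, volume_threshold=72, duplicate_threshold=100):
    """tweets: list of (account_id, text) pairs collected over one day (assumed format)."""
    posts_per_account = defaultdict(int)
    accounts_per_text = defaultdict(set)

    for account, text in tweets:
        posts_per_account[account] += 1
        accounts_per_text[text.strip().lower()].add(account)

    flagged = set()

    # Sign 1: an extremely high volume of posts (here, more than ~72 per day).
    for account, count in posts_per_account.items():
        if count > volume_threshold:
            flagged.add(account)

    # Sign 2: identical content shared by a conspicuously large number of accounts.
    for text, accounts in accounts_per_text.items():
        if len(accounts) >= duplicate_threshold:
            flagged.update(accounts)

    return flagged
```

In practice, researchers combine signals like these with knowledge of which accounts echo known propaganda outlets, rather than relying on any single cutoff.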

The researchers said they had watched as the bots began posting about the Parkland shooting shortly after it happened.

Amplified by bot swarms, Russian-linked Twitter accounts tried to foment discord before and after the election. Hundreds of accounts promoted false stories about Hillary Clinton and spread articles based on leaked emails from Democratic operatives that had been obtained by Russian hackers.

Facebook, Google and Twitter have, to varying degrees, announced new measures to eliminate bot accounts, and have hired more moderators to help them weed out disinformation on their platforms.

But since the election, the Russian-linked bots have rallied around other divisive issues, often ones that President Trump has tweeted about. They promoted Twitter hashtags like #boycottnfl, #standforouranthem and #takeaknee after some National Football League players started kneeling during the national anthem to protest racial injustice.

The automated Twitter accounts helped popularize the #releasethememo hashtag, which referred to a secret House Republican memorandum that suggested the F.B.I. and the Justice Department abused their authority to obtain a warrant to spy on a former Trump campaign adviser. The debate over the memo widened a schism between the White House and its own law enforcement agencies.

The bots are “going to find any contentious issue, and instead of making it an opportunity for compromise and negotiation, they turn it into an unsolvable issue bubbling with frustration,” said Karen North, a social media professor at the University of Southern California’s Annenberg School for Communication and Journalism. “It just heightens that frustration and anger.”

Intelligence officials in the United States have warned that malicious actors will try to spread disinformation ahead of the 2018 midterm elections. In testimony to Congress last year and in private meetings with lawmakers, social media companies promised that they would do better in 2018 than they did in 2016.

But the Twitter campaign around the Parkland shooting is an example of how Russian operatives are still at it.

“We’ve had more than a year to get our act together and address the threat posed by Russia and implement a strategy to deter future attacks, but I believe, unfortunately, we still don’t have a comprehensive plan,” said Senator Mark Warner, the Virginia Democrat who is the vice chairman of the Senate Intelligence Committee, during a hearing this month on global threats to the United States. “What we’re seeing is a continuous assault by Russia to target and undermine our democratic institutions, and they’re going to keep coming at us.”

When the Russian bots jumped on the hashtag #Parklandshooting — initially created to spread news of the shooting — they quickly stoked tensions. Exploiting the issue of mental illness in the gun control debate, they propagated the notion that Nikolas Cruz, the suspected gunman, was a mentally ill “lone killer.” They also claimed that he had searched for Arabic phrases on Google before the shooting. Simultaneously, the bots started other hashtags, like #ar15, for the semiautomatic rifle used in the shooting, and #NRA.

The bots’ behavior follows a pattern, said Mr. Morgan, one of the researchers who worked with the German Marshall Fund to create Hamilton 68, the website that monitors Russian bot and fake Twitter activity. The bots target a contentious issue like race relations or guns. They stir the pot, often animating both sides and creating public doubt in institutions like the police or media. Any issue associated with extremist views is a ripe target.

The goal is to push fringe ideas into the “slightly more mainstream,” Mr. Morgan said. If well-known people retweet the bot messages or simply link to a website the bots are promoting, the messages gain an edge of legitimacy.

An indictment made public on Friday by Mr. Mueller as part of the investigation into Russian interference in the election mentioned a Russian Twitter feed, @TEN_GOP, which posed as a Tennessee Republican account and attracted more than 100,000 followers. Messages from this now-deleted account were retweeted by the president’s sons and close advisers including Kellyanne Conway and Michael T. Flynn, the former national security adviser.

The indictment also described how fraudulent Russian accounts on Twitter tried to push real Americans into action. The indictment said the fake Twitter account @March_for_Trump had organized political rallies for Mr. Trump in New York before the election, including a “March for Trump” rally on June 25, 2016, and a “Down With Hillary” gathering on July 23, 2016.

By Friday morning, the bots that pushed the original tweets around the Parkland shooting had moved on to the hashtag #falseflag — a term used by conspiracy theorists to refer to a secret government operation that is carried out to look like something else — with a conspiracy theory that the shooting had never happened.

By Monday, the bots had new targets: the Daytona 500 auto race in Daytona Beach, Fla., and news about William Holleeder, a man facing trial in the Netherlands for his suspected role in six gangland killings. It is unclear why.


NYT
