
Facebook Says It’s Policing Fake Accounts. But They’re Still Easy to Spot.

Transcript

Inside Russia’s Network of Bots and Trolls

How do bots and trolls work to infiltrate social media platforms and influence U.S. elections? We take a closer look at these insidious online pests to explain how they work.

They hide behind Twitter hashtags, Facebook ads and fake news stories. They’re the work of bots and trolls, and one of the most skilled countries at deploying them is Russia. So how do these entities actually work to spread disinformation? We asked two experts. This is St. Petersburg-based activist Ludmila Savchuk. She has tracked disinformation campaigns and even gone undercover to learn how they work. And this is Ben Nimmo, a London-based analyst who focuses on information warfare.

Let’s define what’s what. A bot is short for robot. It’s an automated social media account that operates without human intervention. During the 2016 presidential election, suspected Russian operators created bots on Twitter to promote hashtags like #WarAgainstDemocrats. A troll is an actual human being, motivated by passion or a paycheck to write social media posts that push an agenda. In 2015, Savchuk worked undercover for over two months at a troll factory in Russia that has gone by many names, including Glavset and the Internet Research Agency. Troll accounts are usually anonymous or pretend to be someone else, like hipsters or car repairmen. But it can get even stranger. Trolls can also set up bots to amplify a message.

Facebook is one common platform for Russian trolls and bots, which, in 2016, used fake accounts to influence U.S. elections. Here’s how some experts think that played out. American officials suspect Russian intelligence agents of using phishing attacks to obtain emails damaging to the Hillary Clinton campaign. They then, allegedly, created a site called DCLeaks.com to publish them. A troll on Facebook, using the name Melvin Redick, was one of the first to hype the site, saying it contained the “hidden truth about Hillary Clinton.” An army of bots on Twitter then promoted DCLeaks, and in one case even drove a #HillaryDown hashtag into a trending topic. Facebook believes that ads on divisive issues created by Russian trolls were shown to Americans over four million times before the elections. Russian-linked trolls and bots also tried to exploit divisive issues and undermine faith in public institutions. Federal investigators and experts believe Russian trolls created Facebook groups like Blacktivist, which reposted videos of police beatings, or another, Secured Borders, which organized anti-immigrant rallies in real life.

“Today, Russia hopes to win the second Cold War through the force of politics as opposed to the politics of force.” How can you stop them? You can’t. Even Vladimir Putin seems to agree. But ID’ing their tactics helps contain their influence. If a suspicious account is active during the workday in St. Petersburg or posting dozens of items a day, those are red flags. Decode the anonymity: look for alphanumeric scrambles in a user’s name, and try Googling its profile picture. Look at the language. If an account makes grammar mistakes typical of Russian speakers, or changes behavior during times of strained Russian-U.S. relations, then congratulations. You might have caught a bot or pro-Kremlin troll.
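The checklist at the end of the transcript lends itself to simple automation. The sketch below is illustrative only; the account fields, thresholds and example data are assumptions, not any platform’s actual interface. It scores an account on three of the red flags the experts describe: heavy posting volume, activity concentrated in the St. Petersburg workday, and a numeric scramble in the user name.

```python
import re
from datetime import datetime, timezone, timedelta

# St. Petersburg clock time (UTC+3, no daylight saving).
ST_PETERSBURG = timezone(timedelta(hours=3))

def red_flag_score(username, post_times_utc):
    """Count red flags from the video's checklist. Illustrative only."""
    score = 0

    # Flag 1: dozens of posts per day ("dozens" treated as 24+, an assumed threshold).
    days = max((max(post_times_utc) - min(post_times_utc)).days, 1)
    if len(post_times_utc) / days >= 24:
        score += 1

    # Flag 2: activity concentrated in a 9-to-6 St. Petersburg workday.
    hours = [t.astimezone(ST_PETERSBURG).hour for t in post_times_utc]
    if sum(9 <= h < 18 for h in hours) / len(hours) > 0.8:
        score += 1

    # Flag 3: an alphanumeric scramble, e.g. a long digit string in the handle.
    if re.search(r"\d{5,}", username):
        score += 1

    return score

# Hypothetical account: 30 posts a day, all during Petersburg business hours.
base = datetime(2017, 10, 30, 6, 0, tzinfo=timezone.utc)  # 09:00 in St. Petersburg
posts = [base + timedelta(days=d, minutes=16 * i) for d in range(3) for i in range(30)]
print(red_flag_score("marina77281039", posts))  # -> 3
```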


WASHINGTON — Executives of Facebook, Twitter and Google pledged to Congress this week to do more to prevent the fakery that has polluted their sites. “We understand that the people you represent expect authentic experiences when they come to our platform,” Colin Stretch, the general counsel of Facebook, told the Senate Intelligence Committee. He said the company was doubling its review staff to 20,000 and using artificial intelligence to find more “bad actors.”

Mr. Stretch, meet Keven S. Eversley. Mr. Eversley’s Facebook profile informs us that he is from Minneapolis. But a glance at the web address for his profile reveals a different name: Aleksandar Teovski. And nearly all of his Facebook friends, his family photographs, his alma mater and even his employer are in Macedonia, a center for internet fakery.

Despite months of talk about the problem of fraud facing Facebook and other tech companies, and vows to root it out, their sites remain infected by obvious counterfeits. The Russian influence operation during the 2016 election, which occasioned the three congressional hearings this week, is only one especially consequential sample of a far larger problem, in which the platforms are gamed for profit or political influence.

Image: Colin Stretch, the general counsel of Facebook, testified this week before multiple congressional committees. (Credit: Eric Thayer for The New York Times)

Most experts say financial motives for the chicanery, in fact, are far more common than political goals. “Keven Eversley” is probably a case in point. Every few days, the Eversley profile posts on Facebook links to sensational, if fact-challenged, articles, all from the same obscure website, conswriters.com: President Trump has ended welfare for immigrants; the F.B.I. was ordered to halt its investigation into the mass shooting in Las Vegas; Hillary Clinton was “hit with terrible news” about Benghazi, Libya.

Conswriters.com, like hundreds of “clickbait” sites, pastes enticing headlines on articles that read like the work of time-pressed high school students. But it is packed with Google ads that generate revenue for every click, highlighting Google’s foundational role in the ecosystem of online deception.

Jonathan L. Zittrain, who studies the internet and society at Harvard, said the companies were reluctant to aggressively purge bogus users and deceptive content because of their business model, which is built on signing up more and more people.

“These platforms are oriented to maximize user growth and retention,” Mr. Zittrain said. “That means not throwing up even tiny hurdles along the sign-up runway, and not closing accounts without significant cause. I suspect they figure there are enough accounts that are the subject of complaints to review without looking for more to assess.”

It takes no great technical expertise to spot the dubious accounts, and amateur sleuths around the country have taken up the task. Zachary Elwood, a technical writer and an author of poker books in Portland, Ore., who started tracking evidence of fake Facebook profiles this year, found dozens of impostors, including Keven Eversley.

He noticed a dozen profiles, several clearly with Macedonian content, using the same photographs and other details of a single real person, a Virginia real estate agent named Harry Taylor. Mr. Elwood found a network of what appeared to be attractive pro-Trump American women, but older posts and other details revealed that the accounts originated in the Middle East.

“It’s amazing how sloppy some of these accounts are,” Mr. Elwood said. “I hate liars and I’m drawn to understand stuff like this.”

______

Red Flags: How to Spot Fake Content

Check out who a profile is “friends” with. Does the hometown listed on the profile match where its friends live? For example, “Keven Eversley” claims to be from Minneapolis, but the majority of his friends are from Skopje, Macedonia.
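For a profile with a public friends list, that comparison is mechanical. The snippet below is a sketch over hypothetical data; Facebook offers no public API for another user’s friends, so in practice the tally comes from reading the profile page itself.

```python
from collections import Counter

# Hypothetical friend records, as read off a public profile page.
friends = [
    {"name": "A. Teovski",  "location": "Skopje, Macedonia"},
    {"name": "B. Stojanov", "location": "Skopje, Macedonia"},
    {"name": "C. Miller",   "location": "Minneapolis, MN"},
]

def hometown_mismatch(claimed_hometown, friends):
    """Flag a profile whose friends cluster somewhere other than its claimed hometown."""
    top_location, top_count = Counter(f["location"] for f in friends).most_common(1)[0]
    return top_location != claimed_hometown and top_count / len(friends) > 0.5

print(hometown_mismatch("Minneapolis, MN", friends))  # -> True, the "Eversley" pattern
```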

Compare the profile’s public name to its web address. The Eversley profile, for example, carries a different name in its address: Aleksandar Teovski.
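That check, too, reduces to a string comparison. In this sketch the web address is invented for illustration; the article does not print the profile’s actual URL.

```python
import re

def url_name_mismatch(display_name, profile_url):
    """True if no word of the public name appears in the address's user name."""
    slug = profile_url.rstrip("/").rsplit("/", 1)[-1]
    slug_words = set(re.split(r"[._\-]+", slug.lower()))
    name_words = set(display_name.lower().split())
    return not (slug_words & name_words)

# Hypothetical address echoing the mismatch the article describes.
print(url_name_mismatch("Keven Eversley", "https://facebook.com/aleksandar.teovski"))  # -> True
```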


If a profile seems suspicious, search for similar pages that draw on the same personal details or images.
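Manually, that means a reverse image search on the profile picture (Google Images or TinEye). For checking many saved images at once, a perceptual hash does the same job in bulk; the sketch below uses the third-party Pillow and ImageHash packages, and the file names are placeholders.

```python
from PIL import Image          # pip install Pillow
import imagehash               # pip install ImageHash

def same_photo(path_a, path_b, max_distance=5):
    """Perceptually compare two saved images; near-duplicates hash within a
    few bits of each other even after resizing or recompression."""
    hash_a = imagehash.average_hash(Image.open(path_a))
    hash_b = imagehash.average_hash(Image.open(path_b))
    return hash_a - hash_b <= max_distance  # Hamming distance between hashes

# Placeholder file names for pictures saved from two suspicious profiles.
print(same_photo("profile_one.jpg", "profile_two.jpg"))
```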


______

With more than two billion users worldwide, Facebook relies on complaints to police its content. So, Mr. Elwood used Facebook’s internal complaint tool to report the Keven Eversley profile and 27 others showing evidence of deception. In all but a couple of cases, Facebook responded with a standard message of thanks for the feedback but said the profiles did not violate its community standards — even though those standards require users to give their “authentic identities.”

“The reporting process is frustrating,” Mr. Elwood said. “Facebook seems to be lagging way behind the problem.”

Facebook estimates that as many as 60 million accounts, 2 to 3 percent of the company’s 2.07 billion regular visitors, are fakes. Sean Edgett, Twitter’s general counsel, testified before Congress that about 5 percent of its 330 million users are “false accounts or spam,” which would add up to more than 16 million fakes.
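A quick check of the arithmetic, using the figures as reported, shows that the upper end of Facebook’s range is where the 60 million estimate comes from, and that Twitter’s 5 percent works out to about 16.5 million:

```python
facebook_users = 2.07e9  # monthly visitors, as reported by Facebook
twitter_users = 330e6    # total users, per Twitter's testimony

print(f"Facebook fakes at 2-3%: {0.02 * facebook_users / 1e6:.1f} to {0.03 * facebook_users / 1e6:.1f} million")
print(f"Twitter fakes at 5%: {0.05 * twitter_users / 1e6:.1f} million")
# Facebook fakes at 2-3%: 41.4 to 62.1 million
# Twitter fakes at 5%: 16.5 million
```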

“Spammers and bad actors are getting better at making themselves look more real,” Mr. Edgett said.

Independent experts say the real numbers are far higher.

On Twitter, little more than an email address is needed to start tweeting. Facebook’s requirement that users be their authentic selves means the company asks for a smattering of information to sign up — name, birthday, gender and email address. But few checks exist to verify that information.

“Part of the problem is that Facebook is a black box,” said Michael Serazio, a professor of communications at Boston College. “They do what they do, and we don’t know to what degree their operations can even handle these issues — not to mention how handling them maps with their economic model.”

In fact, fighting too hard against deception may clash with the business models that have allowed the companies to thrive. Facebook, Google and Twitter all offer self-serve advertising systems allowing anyone in the world to buy, target and deliver ads for as much — or as little — money as they wish to spend. More scrutiny could hamper growth.

Facebook, for instance, reported record profits this week in its quarterly earnings even as executives testified about Russian exploitation of their services. Shares of the social network soared to an all-time high on Wednesday afternoon after the news. Mark Zuckerberg, Facebook’s chief executive, insisted on the earnings call that the company was prepared to sacrifice profits to crack down on illicit activity.

“Protecting our community is more important than maximizing our profits,” he said.

Whether public concern about the manipulation of the platforms might at some point threaten the business remains to be seen. But many customers who run up against the fakery problem end up unhappy.

Kristofer Goldsmith, an assistant director for policy and government relations at Vietnam Veterans of America, noticed last summer a look-alike Facebook page calling itself Vietnam Vets of America that initially borrowed the real group’s logo. Linked to a website hosted in Bulgaria, the upstart page pushed viral content, weighing in on N.F.L. players’ protests of police shootings. It posted looping videos that were months or years old but presented them as breaking news, he said.

“Sometimes their grammar was off,” Mr. Goldsmith said, but there was no way to know who was behind the page.

Soon, the look-alike page had 200,000 followers, more than the 120,000 following the page of the real group, which has a long history of service, a congressional charter and chapters around the country. Mr. Goldsmith said the linked website had few ads, so he suspected a political motive, probably in line with the Russian campaign to divide Americans.

In August, Mr. Goldsmith began complaining to Facebook. But officials there hesitated; hosting pages for millions of groups, they were hardly equipped to assess in detail whether one veterans group was legitimate and another was not.

Finally, in late October, Facebook shut the newer page, deciding it had illicitly stolen the intellectual property of the older page. But Mr. Goldsmith said the experience was disturbing.

“I don’t think they’re taking a very proactive approach,” he said of Facebook. “There was a foreign entity targeting American vets and inserting itself into divisive debates. Someone could do this to us every month.”

Correction: Nov. 6, 2017

An earlier version of this article misstated at one point Facebook’s most recent report of its monthly average users. The correct number is 2.07 billion, not 2.3 billion. That earlier version also misstated how many users Facebook estimates are fakes. The company estimates that 10 percent of its accounts, or 200 million, are duplicates used by real people, and that 60 million accounts are fake.


Lilia Chang contributed to this article.

A version of this article appears in print in Section B, Page 3 of the New York edition with the headline: “Facebook Says It’s Policing Fake Accounts. So Why Are They Still Easy to Spot?”
