In November 2022, a Twitter account claiming to be owned by pharmaceutical giant Eli Lilly posted a bombshell message: it was giving away a life-saving diabetes treatment for no money.
“We are excited to announce insulin is free now,” tweeted the account @EliLillyandCo. But neither the viral tweet nor the account came from Eli Lilly, which was not handing out insulin for free, and the convincing post ended up wiping 4.5 percent off the company's share value.
While this was the work of someone trying to raise concerns about the cost of healthcare in the US, not every fake profile posting fake information has such altruistic motives – which is where Israeli startup Cyabra comes in.
The Tel Aviv-based company calls itself a “social threat intelligence” company, which works to expose online risk to individuals, institutions or even governments. It says its mission is to fight misinformation, claiming it can root out even the most sophisticated threats.
Unique AI software created by Cyabra quickly identifies malicious actors using social media and other online spaces, such as comment sections, to spread false information.
The sophisticated threats usually include the use of sock puppet (fake identity) accounts that post false information and networks of computer-run accounts (bots) that share these posts multiple times in order to lend them credibility.
The actual number of bots on Twitter, where they are most prolific, has long been a subject of debate. But that debate escalated with Elon Musk’s $44 billion purchase of the social media platform last year, when the Tesla CEO commissioned a survey of the issue. The company he tasked with the investigation was Cyabra.
The startup found that some 11 percent of Twitter users were bots. Cyabra CEO Dan Brahmy told CNN that the company had carried out similar assessments of other social media platforms and that Twitter had the biggest bot problem.
To former CIA cyber-operations officer Dan Woods, however, that number is low. Woods, who is an expert on bot traffic as part of his role at American cybersecurity firm F5, claimed last year that 80 percent of Twitter’s traffic could potentially be generated by bots.
And telling real facts from convincing fiction online can be tricky.
“Sometimes these accounts, they’ve been posting actual content for months to make it look very believable,” Cyabra’s VP of Marketing Rafi Mendelsohn tells NoCamels.
“But then what you might have is bot networks and bot farms that have been created not to post content, but to amplify the content of the fake account. So you actually have different types of inauthentic profiles that are engaging with each other in order to achieve the objective.”
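One way to picture this amplification pattern is to group accounts by the source they boost: a cluster of profiles that all exist to push the same account's posts is a candidate bot network. The sketch below is purely illustrative — the pair-counting approach and all the names are assumptions, not Cyabra's method:

```python
from collections import defaultdict

def amplification_clusters(retweets, min_size=3):
    """Group amplifier accounts by the source account they boost.

    `retweets` is a list of (amplifier, source) pairs. Any source boosted
    by at least `min_size` distinct accounts is returned with its cluster
    of amplifiers -- a crude stand-in for bot-network detection.
    """
    by_source = defaultdict(set)
    for amplifier, source in retweets:
        by_source[source].add(amplifier)
    return {src: amps for src, amps in by_source.items() if len(amps) >= min_size}

# Hypothetical observations: three accounts all boosting one fake profile
retweets = [
    ("bot_a", "fake_news_1"), ("bot_b", "fake_news_1"),
    ("bot_c", "fake_news_1"), ("user_x", "real_account"),
]
print(amplification_clusters(retweets))
```

A real system would weigh timing, content similarity, and account history rather than raw retweet counts, but the grouping step is the core idea.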
What sets Cyabra apart, Mendelsohn explains, is the focus on accounts aiming to cause harm in the social sphere rather than hackers who pose what he calls “classic cybersecurity threats” such as to infrastructure or hardware.
Those companies that do examine social media activity have a much more limited scope than Cyabra, which, according to Mendelsohn, covers every platform.
Cyabra identifies its targets not just by looking at the account making the post, but by uncovering the fake accounts that promote the disinformation.
“We’re looking for the threats, and then seeing which accounts are associated,” Mendelsohn says.
AI In Action
Cyabra’s platform uses artificial intelligence in what Mendelsohn calls “semi-supervised machine learning.”
Some five or six hundred different behavioral parameters are fed into the Cyabra algorithm, Mendelsohn explains. These cover a particular account’s online behavior, including the accounts that it follows and engages with and the accounts that follow and engage with it.
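The article doesn't disclose what those parameters are, but a minimal sketch of turning an account's observed behavior into a feature vector for a classifier might look like this — every feature name here is a hypothetical illustration, not one of Cyabra's actual parameters:

```python
def behavioral_features(account):
    """Build a tiny feature vector from an account's observed behavior.

    The real system reportedly uses five to six hundred such parameters;
    these four are invented for illustration only.
    """
    followers = account["followers"]
    following = account["following"]
    return {
        # Very high posting rates can suggest automation
        "posts_per_day": account["post_count"] / max(account["age_days"], 1),
        # Following far more accounts than follow back is a common bot trait
        "follower_ratio": followers / max(following, 1),
        # How often the account's engagements are reciprocated
        "mutual_engagement": account["mutual_engagements"] / max(followers, 1),
        "profile_has_photo": float(account["has_photo"]),
    }

# A hypothetical suspicious account: young, prolific, barely followed
suspect = {
    "post_count": 4600, "age_days": 30,
    "followers": 12, "following": 900,
    "mutual_engagements": 2, "has_photo": False,
}
print(behavioral_features(suspect))
```

In a semi-supervised setup, vectors like this would be computed for a small set of labeled real and fake accounts plus a much larger unlabeled pool, and the model would propagate labels across similar vectors.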
He cites the example of malicious social media activity surrounding Russia’s invasion of Ukraine in February 2022.
“We were very interested to see the disinformation campaigns that were going on,” Mendelsohn says. “Just one of the things that we uncovered was a cluster of fake accounts – no more than 50 or 60 – on Twitter, that we believe originated from Russia.”
He says the accounts were all presented as accounts from Poland, which were posting and amplifying one another’s negative content in Polish about Ukraine. Poles at the time were welcoming Ukrainian refugees fleeing Russian attacks into their homes, and someone in Moscow was trying to create divisions between the two nations.
“We’ve developed pretty interesting new technology that allows us to be able to detect whether an account is real or fake online,” Mendelsohn tells NoCamels.
One way to tell whether an account is operated by a real person or a computer is the number of times it posts in a single day.
“If you have posted 23 of the 24 hours of the day, that’s an indication that there’s non-human behavior,” he says. “We can say, okay for that parameter, that’s a bit of a red flag.”
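That round-the-clock heuristic is simple enough to sketch directly. The function names and the 23-hour threshold below come from Mendelsohn's example; everything else is an assumed, illustrative implementation:

```python
from datetime import datetime, timedelta

def distinct_posting_hours(post_times):
    """Return the set of distinct hours of the day (0-23) with posts."""
    return {t.hour for t in post_times}

def round_the_clock_flag(post_times, threshold=23):
    """Red-flag an account posting in `threshold` or more distinct hours
    of a day -- humans sleep, so 23 of 24 hours suggests automation."""
    return len(distinct_posting_hours(post_times)) >= threshold

start = datetime(2023, 1, 1)
# A bot-like account posting once every hour of the day
bot_like = [start + timedelta(hours=h) for h in range(24)]
# A human-like account posting at breakfast, lunch, and evening
human_like = [start + timedelta(hours=h) for h in (8, 12, 19)]

print(round_the_clock_flag(bot_like))    # True
print(round_the_clock_flag(human_like))  # False
```

As Mendelsohn notes, a single parameter like this is only "a bit of a red flag" — it would be one signal among hundreds, not a verdict on its own.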
Cyabra was co-founded in 2018 by Brahmy, CPO Yossef Daar and CTO Ido Shraga. Two of the three served in information warfare units in the Israel Defense Forces, and all three are veterans of the Israeli high-tech sector.
“They developed the technical tools and skills to be able to track and fight disinformation, and then they started to use those skills for good,” Mendelsohn says.
The platform is already in use, including by foreign governments. He says the company worked, for example, with the US State Department to track foreign interference in elections and with the Taiwanese government to battle vaccine disinformation during the COVID pandemic.
“I suppose it’s useful to think of it as a social media search engine,” he says. “It’s very difficult to do that manually, because it’s like a fire hose of information coming your way.”