The Jerusalem Post

We must act now: The crisis of bots and fake users threatening our digital world - opinion

 
Facebook's new rebrand logo Meta is seen on a smartphone in front of the displayed logos of Facebook, Messenger, Instagram, WhatsApp and Oculus in this illustration picture taken October 28, 2021 (photo credit: REUTERS/DADO RUVIC)

Bots account for 47% of internet traffic, with 30% being harmful. They skew data, damage business, and threaten security.

When you read a product review on Amazon, browse through the comments section of an article on CNN, or get annoyed at a provocative tweet, can you be sure the individual behind the screen is a living, breathing person? 

Absolutely not.

A recent report by Imperva revealed that bots make up 47% of all internet traffic, with “bad bots” comprising 30%. These staggering statistics threaten the integrity upon which the open web has been built.

Yet even when a user is human, there’s a good chance that their account is operating under a fake identity, meaning “fake users” are currently as prevalent online as authentic ones.


We are no strangers to the existential risk of bot campaigns here in Israel. Following October 7, large-scale misinformation campaigns, orchestrated by bots and fake accounts, manipulated public opinion and policymakers. Monitoring online activity during the war, The New York Times found that “in a single day after the conflict began, roughly 1 in 4 accounts on Facebook, Instagram, TikTok, and X (formerly Twitter) posting about the conflict appeared to be fake... In 24 hours after the blast at Al-Ahli Arab hospital, more than 1 in 3 accounts posting about it on X were fake.”

With eighty-two countries holding elections in 2024, the risk posed by bots and fake users is reaching crisis levels. Just last week, OpenAI deactivated accounts belonging to an Iranian group that was using ChatGPT to generate content intended to influence the US elections.

TikTok: the avatar is on the way (credit: Dr. Itay Gal)

As Rwanda prepared for its July elections, researchers at Clemson University uncovered 460 accounts disseminating AI-generated messages on X in support of incumbent President Paul Kagame. In the past six months alone, the Atlantic Council's Digital Forensic Research Lab (DFRLab) has identified influence campaigns targeting Georgian protesters and spreading confusion about an Egyptian economist's death, both powered by inauthentic X accounts.

Fake users and AI fuel online fraud

Bots and fake users harm national security, but online businesses are also paying a heavy price. 




Imagine a business where bots or fake users generate 30-40% of all digital traffic. This scenario creates a cascade of problems, including skewed data that leads to misguided decision-making, impaired understanding of customer funnels and website analytics, sales teams pursuing false leads, and developers focusing on products with illusory demand.  

The implications are staggering. A study by CHEQ.ai, a Key1 portfolio company and go-to-market security platform, revealed that in 2022 alone, over $35 billion in ad spend was wasted, and more than $140 billion in potential revenue was lost. 


Ultimately, fake users and bots undermine the foundations of modern-day business, creating distrust in data and results and, in some cases, among teams.

Introducing generative AI into the mix has only fueled the fake web’s fire. The technology “democratizes” the creation of bots and fake identities, lowering the barrier to attack, increasing sophistication, and meaningfully expanding their reach.

The scope of this growing problem cannot be overstated. But what, if anything, can be done to minimize the tremendous economic, geopolitical, and social damage?  

It’s time for a global response to regain control and rebuild our trust in the internet. 

Education is crucial in combating the fake online epidemic. By raising awareness of the tactics of bots and fake accounts, we can empower society to recognize and mitigate their impact. Understanding the telltale signs of inauthentic users - such as incomplete profiles, generic information, repetitive phrases, abnormally high activity levels, shallow content, and limited engagement - is a vital first step. However, as bots become increasingly sophisticated, this challenge will only grow more complex, underscoring the need for ongoing education and vigilance.
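To make these telltale signs concrete, here is a toy sketch of how they could be combined into a simple heuristic score. All field names, thresholds, and weights are invented for illustration; real bot-mitigation platforms rely on far richer behavioral and network signals.

```python
# Toy heuristic only: scores an account against the telltale signs listed
# above (incomplete profile, generic information, repetitive phrases,
# abnormally high activity, limited engagement). Every threshold below
# is an assumption made for demonstration.

def inauthenticity_score(profile: dict) -> float:
    """Return a 0.0-1.0 score; higher means more bot-like."""
    signals = {
        "incomplete_profile": profile.get("fields_completed", 1.0) < 0.5,
        "generic_bio": profile.get("bio", "") in ("", "N/A", "Hello!"),
        "repetitive_posts": profile.get("unique_post_ratio", 1.0) < 0.3,
        "hyperactive": profile.get("posts_per_day", 0) > 100,
        "low_engagement": profile.get("replies_per_post", 1.0) < 0.05,
    }
    # Equal weighting: the fraction of warning signs that fire.
    return sum(signals.values()) / len(signals)

suspicious = {"fields_completed": 0.2, "bio": "", "unique_post_ratio": 0.1,
              "posts_per_day": 400, "replies_per_post": 0.0}
print(inauthenticity_score(suspicious))  # 1.0 — every signal fires
```

In practice no single signal is conclusive; the point of scoring is that a cluster of weak indicators, taken together, separates likely bots from ordinary users.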

In addition, public policies and regulations must take effect to restore trust in digital environments. For example, governments can and should require large social networks to implement best-of-breed bot-mitigation tools to help police fake accounts.  

Striking the right balance between the freedom of these networks, the integrity of the information posted, and the potential harm caused is not an easy task to accomplish. Yet establishing these boundaries is a necessity to preserve the longevity of these networks.  

On the business front, various tools have been developed to mitigate and block invalid traffic. These range from essential bot-mitigation solutions that prevent Distributed Denial of Service attacks to specialized software protecting APIs from bot-driven data theft attempts.

More advanced bot-mitigation solutions employ sophisticated algorithms that perform real-time tests to ensure traffic integrity. These tests analyze account behavior, interaction levels, hardware characteristics, and automation tools. They also detect non-human behavior, such as abnormally fast typing, and scrutinize email and domain history.
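One of the tests mentioned above, detecting abnormally fast typing, can be sketched in a few lines. The 80 ms threshold is an assumption chosen for illustration, not an industry benchmark, and production systems would combine this with the other behavioral and hardware checks described here.

```python
# Toy sketch of one real-time traffic-integrity test: flagging input whose
# inter-keystroke timing is implausibly fast for a human. The threshold is
# an illustrative assumption.

def looks_automated(keystroke_times_ms: list,
                    min_human_interval_ms: float = 80.0) -> bool:
    """Flag input whose median inter-keystroke interval is suspiciously fast."""
    intervals = [b - a for a, b in zip(keystroke_times_ms,
                                       keystroke_times_ms[1:])]
    if not intervals:
        return False  # not enough data to judge
    intervals.sort()
    median = intervals[len(intervals) // 2]
    return median < min_human_interval_ms

human = [0, 140, 310, 450, 620]   # ~150 ms between keys
script = [0, 5, 10, 15, 20, 25]   # 5 ms between keys — scripted paste/replay
print(looks_automated(human), looks_automated(script))  # False True
```

Using the median rather than the mean keeps a single fast burst (say, an autocompleted word) from falsely flagging a human typist.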

While AI has contributed to the bot problem, it's also proving to be a powerful tool in combating it. AI's enhanced pattern recognition capabilities allow for more accurate and rapid distinction between legitimate and non-legitimate bots. Companies like CHEQ.ai are leveraging AI to help marketers ensure their ads reach human users and are placed in safe, bot-free environments, effectively countering the growing threat of bots in digital advertising.

From national security to business integrity, the ‘fake internet’ consequences are as broad as they are dire. Yet, there are several effective methods to mitigate the problem, methods that deserve a renewed public and private focus. By raising awareness, enhancing regulation, and instituting active protection, we can all contribute to a more accurate and far safer internet environment. 
