Deepfakes fooling audiences, undermining trust in real media - study
They have been spread both during the current Russia-Ukraine War and by Hamas terrorists and their supporters in various parts of the world.
“Deepfakes” are artificially manipulated audio-visual materials, mostly produced by constructing a fake ‘face’ with artificial intelligence (AI) and integrating it with authentic video to create false footage of an event that never took place. Although fake, they can look very persuasive and are often produced to imitate a specific individual.
Now, a first-ever study of wartime deepfake videos, conducted at Ireland’s University College Cork (UCC), reveals their impact on news media and outlines implications for social media companies, media organizations, and governments. The study has just been published in the journal PLOS ONE under the title “Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine.”
The researchers analyzed close to 5,000 tweets posted on X (formerly Twitter) in the first seven months of 2022 to find out how people react to deepfake content online and to uncover evidence of the harm to trust that deepfakes have long been theorized to inflict. As deepfake technology becomes increasingly accessible, it is important to understand how such threats emerge over social media, they wrote.
The Russia-Ukraine War presented the first real-life examples of deepfakes being used in warfare. The researchers highlight several, including video game footage passed off as evidence of the mythical fighter pilot “The Ghost of Kyiv,” a deepfake of Russian President Vladimir Putin announcing peace with Ukraine, and the hacking of a Ukrainian news website to display a deepfaked message of Ukrainian President Volodymyr Zelensky surrendering.
Real media are often labeled as deepfakes
The study, led by psychologists Dr. John Twomey and Dr. Conor Linehan and colleagues, found that fears of deepfakes often undermined users’ trust in footage from the conflict, to the point where some lost trust in any footage coming out of it. The study is also the first of its kind to find evidence of online conspiracy theories that incorporate deepfakes.
They also found that much real media was labeled as deepfakes and that efforts to raise awareness around deepfakes could undermine trust in legitimate videos. They urged news media and governmental agencies to weigh the benefits of educational deepfakes and pre-debunking against the risk of undermining truth.
Similarly, news companies and media organizations should be careful in how they label suspected deepfakes, lest they cast suspicion on real media.
“Much of the misinformation the team analyzed in the X discourse dataset surprisingly came from the labeling of real media as deepfakes. Novel findings about deepfake skepticism also emerged, including a connection between deepfakes fueling conspiratorial beliefs and unhealthy skepticism,” Twomey wrote.
“The evidence in this study shows that efforts to raise awareness around deepfakes may undermine trust in legitimate videos,” he continued. “With the prevalence of deepfakes online, this will cause increasing challenges for news media companies, which should be careful in how they label suspected deepfakes, in case they cause suspicion around real media. News coverage of deepfakes needs to focus on educating people on what deepfakes are, what their potential is, what their current capabilities are, and how they will evolve in the coming years.”
Ynet reported 10 days ago that Hamas was using deepfake technology on WhatsApp to incite terror and fear. Hackers infiltrated the WhatsApp account of a woman in Rishon Lezion and uploaded a chilling voice message to a neighborhood group, in which a voice is heard shouting and using the word “kidnapped.” The message shocked her friends, triggering panicked concerns that a security incident might be unfolding.
This occurred just four days after the Hamas terrorist massacre in southern Israel. The woman said that samples of her voice were available online because of her profession, which may have made her an attractive target for the perpetrators. Before the incident, said Ynet, she received a call from an unidentified number and didn’t answer. Her husband investigated and discovered that her Facebook account was being accessed from an area in the West Bank.