The Super Election Year 2024 – Significant Increase in Disinformation and Deepfakes

2024 has been called a super election year, as more people than ever before in history will have the opportunity to vote. With elections planned in 76 countries worldwide, people in eight of the ten most populous nations – Bangladesh, Brazil, India, Indonesia, Mexico, Pakistan, Russia, and the USA – will head to the polls. As major powers such as India, the USA, the EU, and the United Kingdom prepare for elections, this coincides with a time when powerful AI tools capable of distorting reality have become widely accessible to the public. Several AI experts have warned about the potential consequences of this combination.

What are deepfakes?

Deepfakes are digitally produced forgeries of images, audio, or video clips that, through machine learning, have become so sophisticated they appear genuine. The term "deepfake" came into use in 2017 and derives from "deep learning," a form of machine learning that uses artificial neural networks with many layers. Deepfakes have many legitimate applications, ranging from entertainment to education. However, the same technology can be misused in disinformation campaigns to manipulate perceptions, undermine trust in media and authorities, influence electoral processes, and incite conflict. As the technology advances, distinguishing genuine media content from forgeries becomes harder. With the extensive reach of social media, a convincing deepfake can be spread to millions of people in a very short time, with potentially harmful effects on society. The main producers of deepfakes are hobbyists, political actors, activists, scammers, and television companies. Moreover, producing a deepfake video no longer requires extensive technical knowledge, meaning almost anyone with a computer can create convincing forgeries – which further underlines the need for awareness and protective measures.

Your voice can also be easily stolen for AI scams

Besides influencing election campaigns, AI can be employed in fraud and social manipulation in various ways. Scammers calling people, primarily the elderly, while pretending to be a grandchild in need of money have been a problem for many years. Now a new variant has become increasingly common in the USA: scams carried out with AI-cloned voices. In one case, the person on the phone sounded exactly like Ruth Card's grandson, Brandon. He needed money quickly, and because Ruth recognized his voice, she went to the bank. In another, Benjamin Perkins' parents received a call from a man claiming to be a lawyer. Benjamin was alleged to have killed another man and needed money for bail. The lawyer handed the phone to someone who sounded exactly like Benjamin. In both cases, as reported by The Washington Post, it was a new form of scam – voices cloned with the help of AI.

Previously, large amounts of audio material were required to credibly clone someone's voice, but AI development is moving at a breakneck pace – now it is faster, easier, and more accessible than ever. "What was unimaginable just a year ago is free today. The companies that make and sell AI want lots of data – that's why they give away their services," says Tobias Falk, lecturer at the Department of Computer and Systems Sciences at Stockholm University, to SVT News. "We've reached a point where machines can mimic us. Algorithms are trained to mimic how the human brain functions. It's moving very fast now, but it has taken decades to reach a point where machines can mimic us as well as they do today."
Conclusion 

Deepfakes and AI-cloned voices are now being used in ways that blur the line between authenticity and fabrication, making it difficult to tell a fake video or voice from a real one. Many people are by now accustomed to phishing emails and similar attempts at deception, but this advancing technology can make fraud attempts far more convincing. It is important to cultivate an awareness of these technological capabilities, not only to avoid being misled in personal or work-related contexts, but also, on a larger scale, to avoid inadvertently spreading fake information on social media. Such dissemination can erode trust in societal institutions and undermine the integrity of election campaigns, posing a threat to democracy and public confidence.
