Using advanced machine learning techniques (among others), deepfakes allow for the creation of realistic but false photos, audio, and videos, posing a serious challenge to information security. However, although it rarely breaks through to public awareness, such multimedia can also be used for good purposes.

In this article, I will briefly describe what deepfakes are and how they are created; how they have impacted the world and the market; how to recognize them and how to defend against them; and what potentially awaits us in the future. It will be a long article, but I hope it will increase our vigilance and dispel a few myths!

What Is a Deepfake?

A deepfake is content, most often a multimedia file, created (mainly) using artificial intelligence algorithms that simulate the appearance, behavior, and speech of real people.

In the context of video (which is what we most commonly associate with deepfakes), this technique uses deep neural networks to analyze and mimic gestures, facial expressions, and voice, allowing for the creation of media where people appear to do things they never actually did (including speaking words they never said).

To create these deepfakes, autoencoders are primarily used – specialized neural networks designed for data compression and optimal image reproduction. Autoencoders learn to recognize and recreate key facial features based on input data, such as photos or recordings of a specific person. Additionally, general AI model skills, such as pixel generation or speech synthesis, are utilized.
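The autoencoder idea can be sketched in a few lines of code. This is a toy illustration only: the "faces" below are random vectors standing in for images, and the optimal linear autoencoder is computed directly from the top principal components, whereas real deepfake tools train deep convolutional networks on large datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for face images: 200 samples of 64 "pixels" that all share
# 4 underlying features, much as faces share a few key characteristics.
basis = rng.normal(size=(4, 64))
faces = rng.normal(size=(200, 4)) @ basis + 0.01 * rng.normal(size=(200, 64))

# The optimal *linear* autoencoder is given by the top principal components.
# The idea is the same as in deep versions: squeeze each face into a small
# latent code, then rebuild the full image from that code.
mean = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
W = Vt[:4].T                       # 64 pixels -> 4-number bottleneck

def encode(x):
    return (x - mean) @ W          # compress: keep only the key features

def decode(z):
    return z @ W.T + mean          # reconstruct the image from the code

recon = decode(encode(faces))
rel_err = np.linalg.norm(recon - faces) / np.linalg.norm(faces)
print(f"latent shape: {encode(faces).shape}, relative error: {rel_err:.4f}")
```

Classic face-swapping setups exploit exactly this structure: one shared encoder is trained on two people together with a separate decoder per person, so a face encoded from person A can be decoded as person B.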

Naturally, generating a deepfake photo is much easier than generating a video. Sometimes a single model is enough, as I'll demonstrate later! Generating sound is also relatively simple – a few samples are enough.

Negative Effects of Generating Deepfakes

Let’s start with what is most often discussed, namely how deepfakes can harm specific individuals or manipulate entire societies. I’ve selected a few categories and tried to find an example for each:

Political Manipulations

In politics, deepfakes are relatively easy to produce, as there is plenty of source material to generate from. I'd venture to say that examples can be found at every latitude. I chose a few.

As early as 2018, a video created by Jordan Peele showed Barack Obama making controversial statements he never actually made publicly. The film demonstrated how easily public perception can be manipulated using deepfakes. The consequences of such actions can include wrongful accusations, mistaken beliefs, political destabilization, and the spread of misinformation. See for yourself – and note once again that this example is from 2018:

Examples can be found in Poland too. In August 2023, the Civic Platform created an election spot in which the likeness and voice of Prime Minister Mateusz Morawiecki read out the contents of incriminating emails he had sent. In the same campaign, other deepfakes mimicked the voices of Donald Tusk and Janusz Korwin-Mikke.

Such content could potentially change election outcomes, even when a deepfake does not amount to fake news (as in the situations mentioned above). It's no wonder this sparked a critical discussion about the ethics of such behaviour and whether using deepfakes in electoral campaigns is, or should be, legal.

There are also examples of humorous or satirical uses, such as top Polish politicians playing Minecraft.

Security Threats

Remaining somewhat in the realm of politics, manipulated videos featuring world leaders can affect security and provoke international or internal conflicts. One example that sticks in my mind involves Volodymyr Zelensky: in a deepfake from March 2022 (the beginning of the war in Ukraine), he appeared to urge his compatriots to surrender.

I created this photo in about 10 minutes using only the built-in capabilities of selected AI models for image generation.

In this context, I’d like to point out one side note. Videos compromising our security can be quickly and widely shared (fuelled by fear), which can magnify their negative effect. Therefore, vigilance of the media, services, and social media operators is crucial in quickly detecting, removing, and debunking such information.

Financial Frauds and Scams

An example from Hong Kong, where an employee transferred millions of dollars because of a deepfake, is a warning for the business world. What exactly happened? The employee joined a video conference during which he was convinced he was speaking with senior executives. It turned out that all the participants' images were generated by AI – let's call them avatars. The avatars convinced him to make the transfer, bypassing company procedures (after all, these were "managers").

There are likely more such examples, although companies naturally do not disclose information on this topic. I hope that examples of attacks using deepfakes will also encourage the creation of better security procedures in corporations.

Unfortunately, companies may increasingly fall victim to fraud using deepfakes. Therefore, it is important to follow security procedures and keep them updated.

Personal Rights Violations

Given the information above, it's easy to imagine how many possibilities deepfakes offer for violating an individual's good name and image. I'll point out some examples.

The first will be particularly terrifying, thankfully still rare – it concerns fabricated pornographic videos and scandalous content. Naturally, creating such content without the consent of the individuals depicted is a serious violation (I hope that in every country it’s simply a crime). I won’t write about ethical issues because they are obvious. But let’s consider the consequences, even if such content is not published – extortion for money, potential for manipulation, and psychological damage.

Minors are also vulnerable to deepfakes, even as part of silly jokes by peers. It’s important to raise parents’ awareness of this threat.

I’ll also briefly touch on an event that recently (June/July 2024) gained traction and inspired me to write this article. It concerns a fake news campaign, accompanied by deepfakes, that targeted the image of Rafał Brzoska (the CEO of InPost) and his wife Omena Mensah. I agree that in such cases social platforms must take greater responsibility for the spread of false information and deepfakes – especially when criminals use advertising tools to promote their content and further damage someone’s image.

Church leaders also appear as characters in deepfakes. The famous example of Pope Francis in a puffer jacket may not have been threatening, but a deepfake featuring Cardinal Kazimierz Nycz advertising a drug can be considered harmful to his image.

Can we somehow protect ourselves in these cases? To some extent, yes. I’ll present a few tips at the end of the article. Meanwhile…

Potentially Positive/Neutral Applications of Deepfakes

I want to show deepfakes and related AI applications from various perspectives. Regarding the examples below, I’m aware that their assessment will largely depend on the evaluator and their point of view. However, I think we are more likely to find supporters…

Movies and Entertainment

Technologies related to deepfakes and artificial intelligence are already revolutionizing the film industry. We can generate the voices of actors who, due to illness, have lost their natural abilities, or even “resurrect” individuals previously known from the silver screen. Additionally, we can update movies without the need to reshoot scenes or use advanced special effects and face editing software, improving the quality of the video and adding certain features to the characters’ appearances. We can also generate extras.

AI in the film industry

An interesting application of AI will be automatic, realistic dubbing of films in any language. Eleven Labs, a company founded by Poles, already offers appropriate solutions and will certainly refine them. Global cinema without subtitles and narrator’s voice – sounds intriguing!

Technologies behind deepfakes also allow for the creation of digital characters. For example, in the television program “Dalí Talks,” Salvador Dalí “appears” speaking about his art.

Deepfakes can also enhance the immersion and naturalness of the worlds in which computer game scenarios unfold. For example, so-called NPCs, background characters “controlled by the computer,” can say something relevant to the current situation and look “natural,” rather than delivering lines permanently written into the game’s code.

Deepfakes and the gaming industry

In both the film and gaming industries, technologies related to deepfakes can, for example, generate background characters, making scenes more authentic.

Business and Advertising

Already in 2019, an educational ad about malaria featuring David Beckham broke language barriers, in a sense. It used image manipulation technologies to show Beckham speaking nine languages (the voices seemed to come from different people, but the facial movements matched the spoken words). Below you will find the mentioned recording:

Now the possibilities are much greater, and some companies are opting to generate video ads, each intended for one specific recipient. It’s conceivable that similar solutions will find application during teleconferences, where each participant will “speak” in the language of the given person (let’s note on the sidelines video conferences in the metaverse, where each participant is represented by a “talking” avatar).

I believe that deepfake will change (because it is already changing) the fashion industry and e-commerce, enabling the display of products on models varying in skin tone, height, and weight, which was previously impossible without involving many different individuals. In this context, the model could be entirely generated by AI. Going further, thanks to deepfakes, we could become models and move to a virtual fitting room.

Deepfakes in the e-commerce and fashion industry

A virtual fitting room could be an interesting solution for online stores in the future, don’t you think?

Some companies have decided to use virtual celebrities in their marketing campaigns – social media profiles where the “celebrity” character is generated by AI. (Other companies, on the other hand, take the opposite approach in the “only human” trend.)

Health and Social Issues

The examples below may be somewhat controversial, and I don’t know to what extent they can help achieve the goals for which they are being considered, yet they are worth mentioning.

Artificial intelligence and technologies related to deepfakes can potentially help individuals suffering from Alzheimer’s disease by enabling them to interact with a younger version of themselves, which they may remember. These solutions can also digitally “restore” deceased loved ones, aiding in the grieving process, or allow transgender individuals to visualize themselves in their desired gender identity.

It’s possible that in the future, conversations with the deceased will be part of our grieving process. There is also talk of transferring our memory to a hard drive. Intriguing, yet somewhat frightening.

These potentially positive applications of deepfakes show that despite significant risks, they can also bring benefits in many areas. So what to do to maximize the benefits and neutralize the risks? It’s hard to give a definitive answer, but let’s ponder the future anyway!

The Future of Deepfakes and Legal Reality

AI and the technologies behind deepfakes will continue to evolve, becoming increasingly accessible, and deepfakes themselves more common and harder to distinguish from real photos or recordings. This is an obvious truism. What else is likely in store for us?

Regulations, as well as education, are important to neutralize the negative effects of deepfakes. Large online platforms are already obligated to take action. Are their efforts sufficient?

The growing awareness of the risks associated with deepfakes leads to more intense discussions about the need for legal regulations. The European Union has already implemented the Digital Services Act (DSA), which obliges large online platforms to combat the spread of harmful content, including deepfakes. This topic is also addressed in the AI Act. Let’s remember that EU regulations must be implemented by the member states, but they are not the only context in which the debate on deepfakes is conducted in Poland. Potential additional regulations could include issues related to privacy violations and national security.

Since deepfakes also raise a number of ethical questions, especially regarding their applications in politics and the media, I expect an increasingly broad debate, but also growing awareness. Using this technology to falsify politicians’ statements or manipulate voters poses a serious threat to democracy and electoral processes. On the other hand, citizens may lose faith in what they see; this can intensify the crisis of trust in the media that I observe in Poland, and I won’t be surprised if it proves much wider and affects other countries as well.

I believe education is crucial right now, as it will help each of us learn to distinguish deepfakes…

How to Recognize Deepfakes?

Distinguishing real content from manipulated content will become increasingly difficult. However, this does not mean we are completely defenceless. Based on various white papers, scientific publications, and my own experience, I’ve selected a few methods we can try to use. I will mainly write about video, but we can act analogously with other types of multimedia.

I have summarized the most important differences to look for when assessing content in the infographic below. I am making it available under the Creative Commons BY-ND 4.0 license, which means you can copy and distribute it as long as attribution is retained.

And now more details!

Silhouette, Face, and Inconsistencies in Facial Expressions

Firstly, it’s worth paying attention to any imperfections in lip movement or eye blinking. AI often struggles with accurately reproducing subtle facial movements, which can create a sense of artificiality. Deviations from the natural rhythm of blinking or unnaturally stiff movements (e.g., mouth movement does not cause other muscles to move) are important indicators that we may be dealing with a deepfake.

It’s also worth examining the skin and hair. If:

  • the face is free of imperfections or overly smoothed (e.g., visible extra tissue on the temples, especially when the face is seen slightly at an angle),
  • or the hair remains stiff, moves excessively, or appears artificially separated from the background or cut off

– the likelihood that we are dealing with a deepfake increases.

A quite effective method of assessing content – though it requires the video to be in high resolution – is observing the eyes. In most cases, AI-generated video neither changes the diameter of the pupils nor moves the eyes; if no such changes are visible, it may indicate that the file is fake. For a real person to keep the same pupil diameter and completely still eyes, they would have to fix their gaze on a single point (e.g., the camera lens) under unchanging light.

Head movement should emphasize what the person in the recording is saying.

A mannequin generated by AI

In the context of deepfakes, anything that looks unnatural should raise suspicions. Pay attention to details.

Additionally, if the entire silhouette is visible, it’s worth paying attention to the hands – whether there are any extra fingers and if they look natural.

Technical Analysis of Content

Among the techniques available to each of us: it’s worth paying attention to differences in lighting or shading that do not match the natural properties of the scene. These may suggest manipulation.

We can also take it a step further, though this requires some knowledge of video or photo editing. Tools for viewing video metadata can help raise legitimate suspicions (bear in mind that metadata can be artificially introduced or removed, which can cause false positives). Moreover, by manipulating contrast, colour saturation, and other “sliders,” we may notice that different parts of the content react differently to the changing parameters, which would also indicate manipulation.

Of course, the above tips can also be successfully applied to photos.
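The “slider” trick can be illustrated with a toy experiment. The image below is synthetic numpy data, not a real photo: its left half carries natural sensor-like noise, while its right half is an overly smooth “pasted” region, and a contrast boost makes the two halves respond very differently.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic composite "photo": the left half has camera-like noise,
# the right half is a suspiciously smooth, pasted-in region.
h, w = 64, 128
img = np.full((h, w), 0.5)
img[:, :64] += 0.05 * rng.normal(size=(h, 64))    # natural texture
img[:, 64:] += 0.005 * rng.normal(size=(h, 64))   # manipulated patch

def contrast(x, gain):
    """Simple contrast 'slider': stretch pixel values around mid-grey."""
    return np.clip(0.5 + gain * (x - 0.5), 0.0, 1.0)

boosted = contrast(img, 5.0)
response_left = np.abs(boosted[:, :64] - img[:, :64]).mean()
response_right = np.abs(boosted[:, 64:] - img[:, 64:]).mean()
print(f"left response: {response_left:.4f}, right response: {response_right:.4f}")
```

The pasted region barely reacts to the slider; this kind of regional inconsistency is exactly what a careful editor looks for.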

Deepfake: sound and speech generated by AI

Audio analysis can help determine if we are dealing with a deepfake. However, this requires specialized software.

Deepfakes (both videos and audio files), although increasingly polished, often contain artificial noise or sound artifacts. By analysing recordings for unnatural changes in voice tone, breaks, or noises, manipulated multimedia can be identified quite effectively. To do this, however, we need either a good ear, editing software (where we manipulate various parameters, as with videos and photos), or AI tools.

At the same time, a robotic and unnaturally “flat” tone of voice is often quite easy to recognize by ear.
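A crude version of this check can be scripted. The two signals below are synthetic stand-ins (a sine with vibrato versus a perfectly steady one); a real analysis would run proper pitch tracking on actual recordings.

```python
import numpy as np

sr = 8000                                   # sample rate in Hz
t = np.arange(sr) / sr                      # one second of audio

# A "natural" voice drifts in pitch (vibrato); a "robotic" one stays flat.
natural = np.sin(2 * np.pi * 180 * t + 10 * np.sin(2 * np.pi * 3 * t))
flat = np.sin(2 * np.pi * 180 * t)

def pitch_track(x, frame=1024):
    """Dominant frequency of each frame, from the magnitude spectrum."""
    freqs = []
    for start in range(0, len(x) - frame, frame):
        spectrum = np.abs(np.fft.rfft(x[start:start + frame]))
        freqs.append(np.argmax(spectrum) * sr / frame)
    return np.array(freqs)

natural_var = np.std(pitch_track(natural))  # noticeable pitch variation
flat_var = np.std(pitch_track(flat))        # (close to) none at all
print(f"pitch variation: natural {natural_var:.1f} Hz, flat {flat_var:.1f} Hz")
```

Near-zero pitch variation over a long utterance is one of the simplest hints of a synthetic, “flat” voice.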

Use of Artificial Intelligence

Tools that use AI, for example to analyse inconsistencies in depth and image coherence, are increasingly used to detect deepfakes. As expected, they are getting better and can detect increasingly subtle anomalies that may escape the human eye.

Thus, AI is becoming an increasingly effective weapon against unethically used… AI!
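As a minimal sketch of the idea, here is a tiny logistic-regression “detector” trained on a single invented cue: the local noise level of image patches, on the simplified assumption that generated patches are unnaturally smooth. Real detectors are deep networks that learn thousands of subtler cues, so treat this purely as an illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented feature: per-patch noise level (arbitrary units). Assumed here:
# real photo patches carry more sensor noise than generated ones.
real = rng.normal(5.0, 1.0, size=200)     # noisy, camera-like patches
fake = rng.normal(1.0, 0.5, size=200)     # overly smooth patches
x = np.concatenate([real, fake])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = real, 0 = generated

# Train a one-feature logistic regression by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

pred = 1.0 / (1.0 + np.exp(-(w * x + b))) > 0.5
acc = np.mean(pred == (y == 1))
print(f"accuracy on the toy data: {acc:.2f}")
```

Even this single-feature toy separates the two classes well; production systems do the same thing with far richer features learned from millions of examples.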

A “Digital Signature”

I believe that in the future, other solutions than artificial intelligence will help in detecting deepfakes, or at least in confirming the authenticity of multimedia content.

Deepfakes and blockchain

A hidden pixel pattern or blockchain technology could help in signing content. The future will likely bring even more solutions.

We could, for example, use blockchain technology to verify the authenticity of content by tracking its origin and integrity. It is also possible to embed a hidden pattern of pixels invisible to the naked eye (whether in AI-generated or original content) to indicate its nature. Using these solutions, however, requires browsers or other network services to perform verification “in the background.” Depending on the approach, content could then be marked as probably authentic or probably a deepfake.
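A minimal sketch of such a signing scheme, using a shared secret and an HMAC over the file’s hash. The key here is an invented placeholder; real deployments would more likely use public-key signatures and provenance standards such as C2PA, so that anyone can verify a file without being able to forge signatures.

```python
import hashlib
import hmac

# Demo secret only; a real scheme would use public-key signatures so that
# verification does not require sharing a forgery-capable key.
SECRET_KEY = b"publisher-demo-key"

def sign(content: bytes) -> str:
    """Sign the SHA-256 digest of the content with an HMAC."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(content), signature)

video = b"...original video bytes..."
tag = sign(video)
print(verify(video, tag))          # True: content untouched
print(verify(video + b"!", tag))   # False: any edit breaks the tag
```

The point of the demo is the integrity property: changing even one byte of the content invalidates the signature, which is what would let a browser flag tampered media “in the background.”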

Some services are already attempting to mark photos and propose standards, but sometimes these safeguards are very easy to remove.

Together in the Age of Deepfakes

Besides all the techniques above, I want to add a suggestion concerning general education in cybersecurity. Unfortunately, the more we immerse ourselves in virtual reality, the more exposed we are to various forms of attack. If we are aware of the methods criminals use to manipulate us, and of the solutions that can help us in this rather uneven battle, a side effect will be to reduce the effectiveness of criminal uses of deepfakes.

Deepfake

It is important for each of us to make an effort to avoid falling victim to deepfakes. I encourage the adoption of good security practices.

In this context, I particularly recommend familiarizing ourselves with the methods used in phishing attacks, but that’s just the tip of the iceberg.

I also encourage the use of password managers, setting strong and unique passwords for each service, and enabling two-factor authentication (which, in addition to a password, requires e.g., entering an SMS code). This will increase the security of our data and reduce the risk of becoming victims of deepfakes.

It is also important to publish photos and videos on the internet (especially those containing your face and voice) with an awareness of all the consequences. Perhaps it’s time to review social media and privacy settings? Additionally, be cautious of any services that, for example, offer to transform your photo into a superhero image – they may use your content to train AI models.

Remember that whatever we publish on the internet doesn’t disappear easily. Everyone has different needs and obligations (e.g., professional) that will affect how much content they publish online. However, let’s be cautious and prudent.

Organizations should also develop, and if they already have them, constantly update cybersecurity strategies that take into account the specifics of attacks using deepfakes. I can help with this if needed.

How to Survive?

Combating deepfakes requires a comprehensive approach, combining education, tools, good security practices, and appropriate legal regulations. As the technologies behind such content evolve, so must the methods of detecting them and preventing negative effects. This is an effort that must be made by governments, media and online platform owners, company boards, but also each of us.

Let’s remember, however, that deepfakes are also a fascinating manifestation of the potential of artificial intelligence.

I encourage you to share the link to this article or share the infographic I have prepared.

I invite you to sign up for the free newsletter as well, where I will inform you about further articles and more. This will help you stay up-to-date on topics related to AI! The subscription form is below.
