
Artificial intelligence, as I have written several times on this blog, brings a multitude of opportunities, but also many risks. I would like to devote more attention to security and outline how we can potentially enhance it. Today, I will focus on the spread of false information.

I will start with a topic that may seem somewhat surprising in the context of AI. First, though, it’s worth noting that what I say about disinformation also largely applies to malinformation. (Malinformation is not fabricated; it is grounded in reality and truth. However, it is used and distributed with the intent to harm others.)

AI and Socio-Political Processes

Liberal democracy was able to spread thanks to mass communication. The beginnings of mass media date back to the industrial revolution, and the trend gradually gained momentum: from the era of so-called traditional media to today’s ubiquity of the internet (seen from the perspective of the Western world). While one can argue over details, mass communication was, at least to some extent, accompanied by trust that the disseminated message was true (although in reality this varied). This, in turn, ensured relatively stable times, especially in those parts of the globe where liberal democracy prevailed.

AI and Democracy

But what will happen when, due to the development of AI, we start to lose even more trust in communication? Doesn’t this pose a threat to democracy?

Artificial intelligence can spread disinformation like never before. I’m not only talking about language models generating texts or algorithms creating deepfakes. I also mean the use of AI to manage bots so that a message spreads optimally, for example on social media. A threat worth considering in the future is the use of AI for highly targeted disinformation: not just false information well-tailored to its recipients, but also true information that is easily misinterpreted when stripped of context.

The problem lies not only in generating and disseminating data but may also concern the data on which AI models are trained.

Disinformation vs. Business

Spreading disinformation will impact businesses as well. There is much for attackers to gain – for example, one can earn a lot from the stock-price fluctuations a false story triggers. According to Gartner, companies will significantly increase their cybersecurity and marketing budgets in the coming years, precisely because of the growing threats accompanying AI development.

Companies will need to be prepared for crisis situations and should act on at least three fronts:

  1. Build a solid foundation for their image, making it harder to damage their reputation;
  2. Be ready to take quick actions to combat disinformation;
  3. Make efforts to implement better defences against cyberattacks and realistically prepare their employees in terms of cybersecurity.

In the context of combating disinformation, I foresee the development of advanced tools for authenticating messages disseminated by companies (more on this topic below). I also expect existing internet-monitoring solutions to develop further and new ones to emerge. They will be capable of sentiment analysis, similar to what Brand24 does, and of identifying subtle patterns and anomalies that indicate disinformation. All of this will enable faster and more effective responses to disinformation campaigns.
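To make this concrete, here is a minimal sketch of such monitoring, assuming the Hugging Face `transformers` library; the model choice and the alert threshold are purely illustrative, and a real system would add deduplication, language detection, and human review.

```python
# A minimal sketch of brand-mention monitoring: classify sentiment and flag
# sudden bursts of negativity that may indicate a coordinated campaign.
# Assumes the `transformers` library; the threshold is an illustrative assumption.
from collections import Counter

from transformers import pipeline  # pip install transformers

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

def flag_possible_campaign(mentions: list[str], threshold: float = 0.8) -> bool:
    """Return True if an unusually high share of mentions is negative."""
    labels = Counter(result["label"] for result in sentiment(mentions))
    total = sum(labels.values())
    return total > 0 and labels.get("NEGATIVE", 0) / total >= threshold

mentions = [
    "Their new product is a scam, avoid it!",
    "Terrible company, they lie to customers.",
    "I had a decent experience with their support.",
]
if flag_possible_campaign(mentions):
    print("Spike of negative mentions - review manually for disinformation.")
```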

AI and Advertising

We will presumably move away from search engines towards chatbot-like information services. This trend could open the door to collaborations with chatbot creators, potentially changing the future of advertising and SEO.

It is also important to develop company procedures for the use of AI tools. I’m not just talking about (not) sharing confidential information in prompts. At this stage, it’s also crucial to treat their responses with scepticism and verify them. If people use AI mindlessly, even the best procedures and security measures will be of no use.
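As a small illustration of such a procedure, below is a hedged sketch of a pre-prompt “hygiene” step that redacts obviously confidential fragments before they leave the company; the patterns are illustrative assumptions, not a complete data-loss-prevention solution.

```python
# A sketch of prompt sanitization: strip obviously confidential fragments
# before text is sent to an external AI service. Patterns are illustrative.
import re

REDACTIONS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),  # assumed key format
}

def sanitize_prompt(prompt: str) -> str:
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(sanitize_prompt("Contact jan.kowalski@example.com, key sk-abcdefghijklmnop1234"))
```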

We should also be aware of another threat, one where antivirus, anti-phishing, and anti-pharming protections can help. On so-called endpoint devices (e.g., our computers), malicious software or network manipulation may tamper with the information returned by services like ChatGPT – for example, replacing a response with one containing false information, or even impersonating the service altogether.
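One possible client-side countermeasure is certificate pinning: before trusting a response, compare the server certificate’s fingerprint with a known-good value. The sketch below uses only Python’s standard library; the pinned hash is a placeholder, and real deployments usually pin at the TLS-library or operating-system level.

```python
# A minimal sketch of certificate pinning: compare the server certificate's
# SHA-256 fingerprint with a known-good value before trusting responses.
# EXPECTED_SHA256 below is a placeholder, not a real fingerprint.
import hashlib
import ssl

EXPECTED_SHA256 = "replace-with-known-good-fingerprint"

def certificate_fingerprint(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

fingerprint = certificate_fingerprint("api.openai.com")
if fingerprint != EXPECTED_SHA256:
    raise RuntimeError("Certificate mismatch - possible impersonation or interception.")
```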
On a small scale, I conducted experiments with training ChatGPT, for example by correcting its responses. These experiments failed – I wasn’t able to teach OpenAI’s services anything. This isn’t definitive proof, but it lends some support to the claim that user corrections do not directly alter the model. Assessing responses with a thumbs up or down is a different matter.

Additionally, we often don’t know exactly who we are dealing with…

Conversations – Yes, But Preferably Between People

To counteract some dangerous trends related to AI, we must also ensure that conversations, for example on social media, take place between people.

The internet of the future might lose some of its current anonymity (this doesn’t mean that the internet is entirely anonymous today). Increasingly, creating an account may require verifying a phone number, which can also serve for two-step authentication, thereby enhancing security. However, I wouldn’t be surprised by the emergence of even more advanced methods.
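For illustration, two-step authentication codes – whether shown in an app or sent by SMS – typically rely on one-time passwords. Here is a sketch of the time-based variant (TOTP) using the `pyotp` library; the secret is generated on the spot and purely illustrative.

```python
# A sketch of time-based one-time passwords (TOTP), the mechanism behind most
# two-step authentication apps. Requires `pyotp` (pip install pyotp).
import pyotp

secret = pyotp.random_base32()        # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                      # what the user's app would display
print("Current code:", code)
print("Verified:", totp.verify(code))  # server-side check, tolerant of clock skew
```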

In Poland, one option for confirming our identity on ePUAP (the Electronic Platform of Public Administration Services) is logging in via the banking system, which transfers our data and authenticates us. In the past there was also much discussion about OpenID services, which have survived in some form – for instance, Google and Apple accounts allow us to log in to some websites. Further development in this area may therefore offer a chance to reduce the number of bots, though reducing them to zero is unlikely.

From a criminal’s perspective, safeguarding services against bots and relying on verified profiles makes identity theft and account takeovers more rewarding. Therefore: we need to take even greater care of our online security.

AI Security

To prevent identity theft and account takeovers, enhance your online security: use a password manager and strong, unique passwords.
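As a minimal illustration, generating such passwords takes a few lines with Python’s cryptographically secure `secrets` module – the same idea password managers implement, with storage and syncing on top.

```python
# Generate strong, unique passwords with a cryptographically secure RNG.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different, unguessable password on every call
```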

Sometimes, however, the presence of bots is necessary or useful. In such cases, their accounts could (or rather should) be appropriately marked.

The Value of Credible Information

It is also important to take care of journalistic content. Already, X (formerly Twitter) adds notes to some posts that outline the context based on reliable sources. I suspect this trend will continue, but such a mechanism alone is not sufficient.

There is a lot of talk about marking content generated by artificial intelligence. This makes sense in certain applications – when we talk, for example, about multimedia files: videos, photos, and sound. Here, information that a given material was created by AI can be embedded in the sound wave or in the pixel layout. Implementing such mechanisms depends on decisions by the creators of AI solutions, and consensus is needed on how they should operate.
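As a toy illustration of the pixel-layout idea, the sketch below hides a short marker in the least significant bits of an image’s red channel. Real provenance watermarks for generated media are far more robust (they survive compression and cropping); this only shows the principle.

```python
# A toy least-significant-bit (LSB) watermark: hide a marker string in the
# red channel of an image. Requires numpy and Pillow.
import numpy as np
from PIL import Image

MARK = "AI-GENERATED"

def embed(image: Image.Image, mark: str = MARK) -> Image.Image:
    pixels = np.array(image.convert("RGB"))
    bits = [int(b) for byte in mark.encode() for b in f"{byte:08b}"]
    flat = pixels.reshape(-1, 3)                                 # view on the buffer
    flat[: len(bits), 0] = (flat[: len(bits), 0] & 0xFE) | bits  # overwrite red LSBs
    return Image.fromarray(pixels)

def extract(image: Image.Image, length: int = len(MARK)) -> str:
    flat = np.array(image.convert("RGB")).reshape(-1, 3)
    bits = flat[: length * 8, 0] & 1
    return bytes(
        int("".join(str(b) for b in bits[i : i + 8]), 2)
        for i in range(0, len(bits), 8)
    ).decode()

img = Image.new("RGB", (64, 64), "white")
print(extract(embed(img)))  # -> AI-GENERATED
```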

However, especially in the context of text, a more likely and, in my opinion, justified trend is the opposite: marking content created by humans. Blockchain technology could be used to ensure greater credibility. (I have a specific idea for this, just no time for all this…)
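A minimal sketch of what such human-origin marking could look like, assuming the `cryptography` library: the author signs the text with a private key, anyone can verify the signature, and the content hash could additionally be anchored on a blockchain as a timestamped proof. This is a hypothetical outline, not an existing standard.

```python
# A sketch of signing content so its human origin can be verified.
# Requires the `cryptography` library (pip install cryptography).
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

article = "This article was written by a human author.".encode()

private_key = Ed25519PrivateKey.generate()    # kept secret by the author
public_key = private_key.public_key()         # published, e.g., on the blog

signature = private_key.sign(article)
digest = hashlib.sha256(article).hexdigest()  # candidate record for a blockchain

public_key.verify(signature, article)         # raises InvalidSignature if tampered
print("Signature valid; content hash:", digest)
```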

Artificial intelligence will be used to determine the likely origin of information and content. Existing services already try to determine whether a text is plagiarized or generated by an LLM. I think it’s only a matter of time before more sophisticated solutions emerge in the context of, for example, deepfakes.
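One popular heuristic such services rely on is perplexity: machine-generated passages often “surprise” a language model less than human prose. Below is a rough sketch using GPT-2 via `transformers`; detectors of this kind are known to be unreliable, so the score is a hint, not proof.

```python
# A rough perplexity heuristic for spotting machine-generated text.
# Requires torch and transformers; GPT-2 is a small, freely available model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

score = perplexity("The quick brown fox jumps over the lazy dog.")
print(f"Perplexity: {score:.1f}  (lower may hint at generated text)")
```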

An interesting side note: just as systems that create, for example, images can mark generated files, creators can likewise embed hidden objects in their own files. Some artists use these techniques to intentionally mislead artificial intelligence: what a human sees as a car, a computer may take for a cat. As a result, neural networks trained on such images may learn less effectively.
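The idea behind these techniques can be illustrated with the Fast Gradient Sign Method (FGSM), a classic way of crafting adversarial perturbations; the artists’ tools are more sophisticated but rest on similar principles. The class index and epsilon below are illustrative assumptions, and the random tensor stands in for a real photo.

```python
# A compact FGSM sketch: a small perturbation, invisible to humans, pushes a
# classifier away from the correct label. Requires torch and torchvision.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
label = torch.tensor([436])                             # an ImageNet car-like class (assumed)

loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03                                          # perturbation strength
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print("Prediction before:", model(image).argmax().item())
print("Prediction after: ", model(adversarial).argmax().item())
```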

The Time for Armament Has Come

Let’s not be naive: criminals also reach for top-notch technology. What I consider most important in preparing for the spread of disinformation is education. Sounds trivial? In my opinion, it is currently a big challenge, because everything is changing very quickly.

Knowledge and awareness of the possibilities and limitations will help us avoid getting carried away with AI and over-relying on its results. On the other hand, understanding the trends will enable us to use the best tools and, more importantly, shape our vision of the future world.

One way to stay up to date is to read my blog regularly, which I warmly encourage. I also recommend subscribing to the newsletter; you can find the subscription form below. Additionally, feel free to read my texts about: