Interview with text generator ChatGPT.
With the upcoming elections, there are many fears that generative artificial intelligence (AI) will interfere heavily, whether by creating deceptive or manipulative texts, images, or videos. In your opinion, are these concerns valid?
Yes. If people are not informed about the capabilities of the technology and are unable to distinguish authentic content from AI-generated content, it can affect public opinion and erode trust in information sources. Education and greater transparency in the media are important.
However, AI is already generating output of such high quality that it is virtually impossible for humans to distinguish it from authentic content.
That's true, and it increases the risk of misuse, because AI-generated content can be presented as authentic. To counteract this trend, it is important to develop tools and methods that can detect AI-generated content, as well as to educate the public on how to be critical consumers of information.
But is this situation anything new at all? After all, disinformation has always been around.
You're right, disinformation is as old as communication itself. But what is new is the speed, scale, and sophistication with which disinformation can be created and disseminated today using AI. Whereas it used to take time and human resources to create convincing false information, today AI can generate large amounts of disinformation content in a short time. In addition, digitization and social media are increasing the speed and scale of its spread.