In search of the right approach to AI: The German Council for Public Relations (DRPR) adopts guidelines

It is an honorable concern of the DRPR to provide PR professionals with binding rules for dealing with AI. And it is not alone in this: many national governments, the EU and international organizations are discussing, or already regulating, how artificial intelligence should be kept on a leash. A distinction is made between obligations that fall on the platform provider (such as OpenAI) and rules that affect the users – i.e. us. The DRPR is therefore doing what it can and telling us, in a nutshell, that what applies elsewhere also applies when dealing with artificial intelligence: be transparent and truthful.

Transparency is created by a labeling requirement that applies when content created by AI is published without being checked. This puts an end to the nonsense of labeling all content created with the help of AI, a demand that is occasionally made and that points in the wrong direction. By the same logic, one could demand that every use of sources and tools be labeled, for example like this: “This article was created with the help of Wikipedia and Microsoft Word Autocorrect”. Obviously absurd. It is therefore good that the DRPR provides clarity on this point.

Truthfulness means not spreading AI-generated fake news and not feigning relevance, for example through the use of bots. So far, so good.

The DRPR is of course aware that users cannot do what the platform operators are not obliged, or perhaps not even able, to do. Artificial intelligence works in a black box: even developers cannot trace how an AI arrives at a particular result, which sources it uses and how these are processed. The DRPR refers to a statement by the German Ethics Council, according to which platform operators must take responsibility for compliance with ethical standards and transparently explain how their systems work and what data they are based on. On this basis, communication managers would have the “final human decision”. The tricky thing is that this is merely a wish list from the Ethics Council, one that in no way reflects reality. From the user’s perspective, we can act 100% transparently and truthfully and still behave unethically or illegally. The dilemma exists, and it is questionable whether it will ever be resolved.

Is this an appeal to refrain from using AI in communication? A clear no. AI is here, and it is not going anywhere; ignoring it would be the wrong way out of the dilemma. The DRPR has wisely decided to keep the guideline updated in view of the dynamic development of AI applications and the ongoing regulatory debates. This will be necessary, if only to avoid a patchwork of different regulations across countries and areas of application. Until then, let’s stick to the good principles of transparency and truthfulness, and if everyone does that, the world will be a better place.


