- 27 December 2023
- Posted by: The Editorial Team
- Category: NEWS
EU AI Act: between overregulation and risk minimization
By Andreas Rossbach, Team Lead Corporate Communications Europe & Israel, Acronis
Member of the steering committee of the CommTech working group, impact measurement cluster
The use of AI in communication and marketing offers many opportunities, but also risks. AG CommTech member Andreas Rossbach considers what the political agreement on the AI Act means and what communicators should bear in mind.
After lengthy discussions, the EU member states reached a political compromise in December 2023 on the EU Artificial Intelligence Act (EU AI Act), the world’s first comprehensive law on artificial intelligence. The formal adoption is expected in 2024.
Commenting on the agreement, Bitkom CEO Dr. Bernhard Rohleder stated: “(…) Despite the fundamental compromise reached on paper, the major challenge is to translate this agreement into practical rules that create a solid basis for the responsible use of AI. Through over-regulation, there remains a risk of hindering rather than promoting the use and development of AI in Europe (…)”
Risk categories as a guide
In addition to defining artificial intelligence, the AI Act also classifies AI systems into risk groups. Four risk categories are currently proposed:
- Unacceptable risk
- High risk
- Limited risk
- Low risk
Systems that aim to exploit or oppress people would fall into the category of unacceptable risk under the AI Act and would therefore be prohibited once the Act comes into force. This category also includes social scoring systems that evaluate people based on their actions and statements, as well as systems that can analyze biometric data in real time. AI systems in the other three risk categories are generally permitted, but must meet a number of conditions depending on their risk level.
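To make the four-tier logic concrete, here is a minimal Python sketch. The tier assignments for the example use cases are illustrative assumptions for this sketch, not classifications taken from the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers proposed in the EU AI Act."""
    UNACCEPTABLE = 1  # prohibited once the Act is in force
    HIGH = 2          # permitted, subject to strict conditions
    LIMITED = 3       # permitted, subject to transparency duties
    LOW = 4           # permitted, few or no extra obligations

# Illustrative mapping of use cases to tiers -- these assignments are
# examples for this sketch, not an official classification.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_biometric_analysis": RiskTier.UNACCEPTABLE,
    "marketing_text_generation": RiskTier.LIMITED,
    "spam_filter": RiskTier.LOW,
}

def is_permitted(use_case: str) -> bool:
    """Only the unacceptable tier is banned outright; all other tiers
    are permitted with tier-dependent conditions."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.HIGH)  # default conservatively
    return tier is not RiskTier.UNACCEPTABLE

if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(case, "->", "permitted" if is_permitted(case) else "prohibited")
```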
Four tips for communication and marketing managers
The first step is to check whether AI is to be used in an area the EU risk pyramid classifies as problematic, so that suitable test protocols can be put in place. If a company’s communications department uses AI tools to generate content, for example, incorrect information, spelling mistakes, gender stereotypes and other problems could creep in. Such risks can, however, be minimized with appropriate measures.
Furthermore, testing protocols, “common sense” and “critical questioning” are mandatory whenever AI tools are used, regardless of the risk level. It will be some time before manufacturers are legally obliged to build in such safeguards, and errors can occur in the meantime. While it may be tempting to generate thousands of press releases or product descriptions at the click of a button, publishing them without verification carries real risks.
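What such a release check could look like in code is sketched below. This is a minimal, hypothetical example: the flagged terms, placeholder patterns and function names are invented for illustration and are not part of any standard or of the AI Act.

```python
import re

# Hypothetical blocklist of claims that must never appear unreviewed;
# in practice this would come from legal/compliance.
FLAGGED_TERMS = ["guaranteed", "100% secure", "clinically proven"]

def check_flagged_terms(text: str) -> list[str]:
    """Return any flagged terms found in the draft."""
    return [t for t in FLAGGED_TERMS if t.lower() in text.lower()]

def check_placeholders(text: str) -> list[str]:
    """Catch template residue like [COMPANY] or {{name}} left by a generator."""
    return re.findall(r"\[[A-Z_]+\]|\{\{.*?\}\}", text)

def release_gate(draft: str) -> bool:
    """A draft passes only if every automated check comes back clean.
    Human review should still follow -- this gate only filters obvious errors."""
    issues = check_flagged_terms(draft) + check_placeholders(draft)
    for issue in issues:
        print("blocked:", issue)
    return not issues

draft = "Our [PRODUCT] is guaranteed to work."
print("publish" if release_gate(draft) else "hold for review")
```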
Although AI-supported data analyses quickly deliver insights and are therefore attractive to communications departments and companies alike, it remains unclear how the tools handle the data they are given. For now, it is therefore not advisable to feed comprehensive company data into such tools, as confidentiality is not yet guaranteed.
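Until confidentiality is contractually and technically assured, one pragmatic measure is to redact confidential details before any text leaves the company. The following sketch uses a few hypothetical regular expressions and an invented internal ID format; a real deployment would rely on a dedicated data-loss-prevention or PII-detection service.

```python
import re

# Hypothetical patterns for confidential material -- placeholders only.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "internal_id": re.compile(r"\bACME-\d{6}\b"),  # invented company ID format
}

def redact(text: str) -> str:
    """Replace confidential matches with labeled placeholders before the
    text is sent to any external AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize: contract ACME-123456, contact jane.doe@example.com"
print(redact(prompt))  # now safe(r) to forward to an external tool
```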
Finally, the EU emphasizes the need for transparency even for low-risk systems, and for good reason. Communication and marketing departments should therefore already be labeling AI-supported content clearly (see also the DRPR guidelines). In an era in which algorithms amplify even substance-free information and lock us into filter-bubble-like information environments, such a label is an essential signal that at least enables critical interpretation. It is foreseeable that a transparency requirement will be introduced in some form, so it makes sense to prepare for it now.
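One lightweight way to prepare is to attach the disclosure to the content record itself, so the label cannot be forgotten at publication time. The sketch below is a hypothetical data model, not an implementation of the DRPR guidelines or of the AI Act’s transparency rules; the field names and wording are assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PublishedContent:
    """Content record carrying an explicit AI-disclosure label."""
    body: str
    ai_assisted: bool
    ai_tool: str | None = None          # hypothetical field: which tool helped
    published: date = field(default_factory=date.today)

    def disclosure_line(self) -> str:
        """Human-readable label appended to the published piece."""
        if not self.ai_assisted:
            return ""
        tool = f" ({self.ai_tool})" if self.ai_tool else ""
        return f"This text was created with AI support{tool}."

post = PublishedContent(body="Q4 results at a glance ...",
                        ai_assisted=True, ai_tool="a text generator")
print(post.body)
print(post.disclosure_line())
```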