European AI Act: “Communicators must now sit in the driver’s seat”

Interview with Thomas Mickeleit. He is the founder of AG CommTech. With his consulting boutique, he supports communications departments in their digital transformation. Among other things, Mickeleit was Head of Communications at Microsoft Germany for 15 years.

AG CommTech: Thomas, many people have only heard about the AI Act in passing. What is it essentially about?
Thomas Mickeleit: The AI Act is the first comprehensive set of regulations for artificial intelligence worldwide. The EU is trying to strike a balance: enabling innovation while at the same time protecting fundamental rights and values. This is the common thread running through the entire text – innovation yes, but under human supervision and with clear responsibility.

AG CommTech: The Act distinguishes between “providers” and “operators”. What does this mean for companies?
Thomas: Providers develop AI systems and bring them to market – the providers of the major language models such as OpenAI, of course, but also any company that builds its own AI tools and markets them. Operators – “deployers” in the Act’s official English wording – are the users. Different obligations apply to each. Providers must document in great detail, demonstrate data quality and manage risk. Operators – that is, those of us in communications – must use the systems correctly, supervise them and document how they are used. Many companies will be both at the same time.

AG CommTech: How exactly does the AI Act regulate the risks?
Thomas: The EU works with a risk pyramid:

  • Unacceptable risk: e.g. social scoring or emotion recognition in the workplace – prohibited.
  • High risk: such as biometric identification, education or critical infrastructures. The strictest requirements apply here, including CE certification.
  • Limited risk: this includes tools that are important to us, such as chatbots. They are permitted, but must be transparently recognizable as AI.
  • Minimal risk: e.g. spam filters or music recommendations – there are no special obligations here.

AG CommTech: The transparency obligation sounds particularly important for communication.
Thomas: Absolutely. Users need to know that they are interacting with an AI. This applies to chatbots, AI-generated texts and, of course, deepfakes. If you publish a text copied 1:1 from ChatGPT, you have to label it as AI-generated. Even a human review and spelling corrections do not remove the labeling obligation. In practice this is poorly understood – many media editors lightly edit AI drafts and then consider them human-made, without any labeling. With images and videos, more attention is usually paid to this. Communications departments need to know that they are operating in a gray area here.

AG CommTech: What does this mean in concrete terms for work in communications departments?
Thomas: Three things:

  1. Skills development – employees must be trained, both in general AI literacy and specifically for the tools in use. This has been mandatory since February 2025.
  2. Governance – every company needs an AI compliance function. In large companies this is a dedicated AI officer; in smaller companies it is usually Compliance or Legal, which sets the framework conditions, e.g. how the transparency and labeling obligations are to be implemented.
  3. Documentation – you must clearly record which data is used, how bias is avoided and who is responsible.

AG CommTech: Sounds like a lot of bureaucracy.
Thomas: It takes effort, no question. High-risk applications in particular are difficult to manage. But as with the GDPR, what is difficult at first can become the standard in the long term. If we take it seriously, the AI Act could become a global reference framework – even if it is a competitive disadvantage in the short term compared to less regulated or unregulated regions such as the USA or China.

AG CommTech: What role can communicators play in this process?
Thomas: A central one. Communications is close to the topic because many AI applications – text generators, image tools, chatbots – fall within its remit. Communications departments can also organize and document the transfer of skills. I often see them in the role of catalyst: they raise awareness, explain, ensure transparency and help to anchor governance. Those who actively take on this role strengthen their own position in the company.

AG CommTech: What happens when companies wait and see?
Thomas: Waiting is not an option. In Germany, the Federal Network Agency is not yet operational as the supervisory authority. But key parts of the Act will apply from 2026/27. Anyone who has not established governance structures and carried out training by then risks liability and reputational damage – a plaintiff could argue that the company breached its duty of care. Above all, a wait-and-see approach entrenches a competitive disadvantage.

AG CommTech: Your advice to communications departments?
Thomas:

  • Talk to Legal and Compliance – many don’t even know that they have to act.
  • Form cross-functional teams – communication, HR, IT and legal belong together.
  • Document pilot projects – it’s better to set standards now, before it becomes mandatory.

And: seize the opportunity. The AI Act is forcing us to build up expertise. Those who take this seriously will strengthen not only legal certainty, but also the innovative power of communications.


