Generative AI for research & analysis

Rapid advances in artificial intelligence (AI) have already brought fundamental changes to many industries in recent years. Progress in generative AI, often described as the technology's "iPhone moment", is accelerating this transformation immensely. PR, communications and marketing in particular are being revolutionized by the introduction of custom generative pre-trained transformers (GPTs), a technology that enables organizations to develop AI solutions tailored to their specific needs and data.

AI is not a new topic in the media intelligence industry; it has been in the spotlight for years under keywords such as "automation of PR", "big data" and "CommTech". So-called "discriminative AI" and other machine-learning methods for automatic text classification and information extraction have long been used for data pre-processing and to support editors and analysts. These include natural language processing (NLP) models and libraries such as fastText, spaCy, BERT and Polyglot, which automatically reveal the semantic structure of content in order to determine entities, topics and the tonality or sentiment of statements.

For entity matching, i.e. the unambiguous automatic identification of proper names such as organizations, persons and places, tools such as fastText or spaCy can be used. These models recognize proper names and disambiguate them on the basis of dictionaries or knowledge graphs. spaCy is a popular open-source library for natural language processing (NLP) in the Python programming language. Disambiguation means resolving an ambiguous name using additional context information, e.g. determining whether "Apple" refers to the head of Apple's company or simply to an apple.
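The principle of context-based disambiguation can be illustrated with a deliberately simple sketch. A production pipeline would use spaCy's trained NER models and an entity linker backed by a knowledge base; the candidate senses and context words below are invented for illustration only.

```python
# Toy dictionary-based disambiguation: pick the candidate sense whose
# known context clues overlap most with the surrounding sentence.
CONTEXT_CLUES = {
    "Apple": {
        "Apple Inc. (company)": {"iphone", "ceo", "cupertino", "stock"},
        "apple (fruit)": {"eat", "tree", "juice", "pie"},
    }
}

def disambiguate(mention: str, sentence: str) -> str:
    """Return the candidate sense with the largest context-word overlap."""
    tokens = set(sentence.lower().split())
    candidates = CONTEXT_CLUES.get(mention, {})
    if not candidates:
        return mention  # unknown mention: return unchanged
    return max(candidates, key=lambda sense: len(candidates[sense] & tokens))

print(disambiguate("Apple", "The CEO of Apple presented the new iPhone"))
# -> Apple Inc. (company)
```

Real entity linkers replace the hand-written clue sets with embeddings and knowledge-graph context, but the underlying idea, scoring candidate senses against the surrounding text, is the same.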

Sentiment analysis, which is refined using machine learning methods, works in a similar way. One model that is widely used in our industry is Google BERT (Bidirectional Encoder Representations from Transformers), a machine learning technique introduced by Google in 2018 that achieves high accuracy and validity in correctly determining the sentiment of a statement.
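To make the classification task concrete, here is a minimal lexicon-based sketch. This is the naive baseline that BERT-style models improve on dramatically by taking the full sentence context into account (e.g. negation, irony); the word lists are illustrative, not a real sentiment lexicon.

```python
# Naive word-list sentiment: count positive vs. negative cue words.
POSITIVE = {"strong", "growth", "praised", "success", "record"}
NEGATIVE = {"weak", "decline", "criticized", "loss", "scandal"}

def naive_sentiment(text: str) -> str:
    """Classify text as positive/negative/neutral by cue-word counts."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(naive_sentiment("Analysts praised the record growth"))  # -> positive
```

A sentence like "the growth was anything but strong" defeats this approach immediately, which is precisely why contextual models such as BERT became the industry standard.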

The revolutionary potential of large language models

Generative AI systems such as ChatGPT and Google Gemini, however, are revolutionizing the application possibilities: on the basis of large amounts of data, they not only perform classification tasks but also generate new content. They, too, are based on machine learning and deep learning, specifically on neural language models (the so-called large language models, such as OpenAI's GPT-4, Google's LaMDA and PaLM, or Meta's LLaMA) trained on huge volumes of data to understand questions and texts and to generate content such as text, images, music and speech. And that is the revolutionary part: entering into a natural-language dialog with the user in order to understand queries and provide answers in natural language. Like on the Enterprise, when Jean-Luc Picard talks to the on-board computer.


The Financial Times has illustrated very clearly how the underlying Transformer model works in a digital story.

ChatGPT has remarkable language-processing capabilities and can handle a variety of natural language understanding tasks. Compared with previously used models such as Google's BERT, ChatGPT shows comparable or better performance in many cases, so the two are best used in a complementary rather than competing fashion. Standardized benchmarks such as GLUE (General Language Understanding Evaluation) are used to evaluate and compare the performance of such models. GLUE is a set of nine natural language understanding tasks designed to assess a model's language-processing ability, covering, for example, text classification, entity recognition, semantic similarity and sentiment.[1]
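The evaluation logic behind benchmark suites like GLUE can be sketched in a few lines: run a model's predictions over a labeled task set and report a score such as accuracy per task. The predictor and the toy task below are invented stand-ins, not GLUE data.

```python
# Benchmark-style evaluation: compare model predictions to gold labels.
def accuracy(predict, examples):
    """Fraction of examples where the prediction matches the gold label."""
    correct = sum(1 for text, gold in examples if predict(text) == gold)
    return correct / len(examples)

# Invented toy task: classify statements as "fact" vs "opinion".
toy_task = [
    ("The report was published in 2023", "fact"),
    ("I think the report is excellent", "opinion"),
    ("In my view the data is incomplete", "opinion"),
]

def toy_predictor(text):
    opinion_markers = {"think", "view", "believe", "feel"}
    return "opinion" if opinion_markers & set(text.lower().split()) else "fact"

print(f"accuracy: {accuracy(toy_predictor, toy_task):.2f}")
```

GLUE aggregates scores of this kind across its nine tasks into a single leaderboard number, which is what makes model comparisons like "ChatGPT vs. fine-tuned BERT" possible in the first place.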

An AI for every occasion

In addition to ChatGPT, many other specialized AI tools expand, facilitate and accelerate the application possibilities, especially in the areas of conception, evaluation, visualization, interpretation and reporting. Until now, this was a highly manual, time-consuming and complex process. Generative AI now makes it possible even for non-experts to evaluate and interpret data quickly in dialog with the AI. While tools such as Graphy[2], Datasquirrel[3] and Highcharts[4] specialize in data evaluation, visualization and interpretation alongside ChatGPT's Code Interpreter and Advanced Data Analysis, tools such as Gamma[5], Tome[6] and StoryD[7] demonstrate impressive capabilities in reporting and presentation.

Customized AI assistants for companies

In recent months, various companies have developed their own internal GPTs and AI assistants for use in a protected corporate environment. An internal GPT is fed with the company's own knowledge and documents, creating a central knowledge database that employees can access at any time via a chat interface. The chatbot thus becomes the first point of contact for all kinds of questions, without employees having to switch constantly from one tool to another. Companies that have set up their own GPTs in recent months, not least due to data protection concerns, with full control over company data and maximum security, include the Otto Group (ogGPT), the drugstore chain dm (dmGPT), E.ON (Eon GPT), Siemens (Industrial Copilot), Bosch (BoschGPT), KPMG (KaiChat), PwC (with Harvey AI), Merck (myGPT), Salesforce (Einstein GPT), Bloomberg (BloombergGPT) and Mercedes-Benz (Direct Chat).[8]
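The "central knowledge database behind a chat interface" pattern rests on a retrieval step: find the internal document most relevant to the employee's question, then let the language model answer from it. The sketch below shows only the retrieval idea with naive term overlap; real internal GPTs use embedding-based semantic search, and the documents and question here are invented.

```python
# Toy retrieval over an internal document store: score each document by
# how many of the question's terms it contains, return the best match.
DOCUMENTS = {
    "travel_policy.txt": "Employees book business travel through the internal portal",
    "it_faq.txt": "Reset your password via the IT self-service portal",
    "press_guidelines.txt": "All press inquiries are handled by corporate communications",
}

def best_document(question: str) -> str:
    """Return the name of the document with the largest term overlap."""
    q_terms = set(question.lower().split())
    return max(
        DOCUMENTS,
        key=lambda name: len(q_terms & set(DOCUMENTS[name].lower().split())),
    )

print(best_document("How do I reset my password"))  # -> it_faq.txt
```

In a production setup, the retrieved passages are then inserted into the model's prompt so that answers are grounded in company documents rather than the model's general training data.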

To meet these data protection and security requirements, OpenAI has also introduced Enterprise and Team plans for ChatGPT. And since the beginning of November 2023, with the introduction of custom GPTs, OpenAI has potentially turned everyone into an app developer and programmer: with the GPT configurator, practically anyone can build their own chatbot without writing a line of code. Doris Weßels, Professor of Business Informatics at Kiel University of Applied Sciences, described this development as "as if a new rocket stage had been ignited".[9] This opens up impressive possibilities for daily communication work, but also for research and analytics, by providing access to multimodal tools that can be used in combination, e.g. web search, data analysis and image analysis. The focus here is on the ability to systematically examine and evaluate data with an AI assistant in order to uncover insights, patterns and correlations.
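Configuring such a custom GPT boils down to a handful of declarative choices rather than code. The fragment below is an illustrative sketch of what the configurator covers; the field names are paraphrased from the builder UI and the file names are invented, so this is not OpenAI's official schema.

```yaml
# Illustrative custom-GPT configuration sketch (not an official schema)
name: "Media Intelligence Assistant"
description: "Answers questions about our coverage data and reports"
instructions: >
  You are an analyst assistant. Answer using the uploaded knowledge
  files and cite the source document for every claim.
knowledge:                  # company documents the GPT can search
  - coverage_glossary.pdf
  - reporting_guidelines.docx
capabilities:               # multimodal tools enabled for this GPT
  - web_browsing
  - code_interpreter        # data and image analysis
```

The point of the no-code approach is visible here: everything that used to require prompt engineering and API integration is reduced to filling in instructions, uploading documents and ticking capabilities.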

A new era of information processing

The integration of custom GPTs into corporate communications and media intelligence is leading to a fundamental change in the way companies process and use information. Using these technologies to create a central knowledge database makes it much easier to access relevant information. They make it possible to automate processes, improve decision-making and develop new forms of customer and employee interaction.

These developments mark an important turning point in the media and communications industry and open up new horizons for the future development of business strategies and communication measures.

About the author:

Andree Blumhoff is Group Head Media Analysis and member of the Business Development Team at pressrelations. There, he focuses on trends in the areas of research & analytics, CommTech and the application of artificial intelligence. His professional career began with an apprenticeship as a bank clerk, followed by a degree in communication science, economics and philosophy at the TU Dresden. With over 20 years of experience in academic and applied communications research and product development, he has held key positions at PRIME research – F.A.Z.-Institut (now Cision Insights), PMG Presse Monitor and ARGUS DATA INSIGHTS.


[1] Cf. Zhong, Qihuang; Ding, Liang; Liu, Juhua; Du, Bo; Tao, Dacheng (2023): Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT.
[8] See also Handelsblatt, 12.04.2024: “Was firmeneigene Versionen von ChatGPT wirklich bringen”
[9] See also Handelsblatt, 27.11.2023: “This is how ChatGPT users also become programmers”
