Cases from the Prototyping Lab
Here you can find all the cases from past labs. Exciting prototypes relating to artificial intelligence as well as virtual and mixed reality.
How are innovative technologies changing the future of media? In the Prototyping Lab, we look for answers to this and similar questions.
We give talented students from Hamburg's universities and media and digital companies with an appetite for experimentation the opportunity to develop functional prototypes in just three months. In the lab, ideas for new markets are developed at the cutting edge and technological challenges faced by the companies are solved. The interdisciplinary student teams are supported by renowned industry experts as mentors.
Would you like to take part in the Prototyping Lab 2025 as a student? Find out more about how to register for the lab on the programme page!
At the Prototyping Lab 2024, the participating companies SPIEGEL, Carlsen, Onilo, Geolino and GEMA worked together with students to test innovative applications of artificial intelligence for the media and publishing world. Within just a few weeks, interdisciplinary teams developed functional prototypes that show how AI can usefully support creative, editorial and production processes - from automated data visualization and AI-supported comic production to workflows for video generation. The following project descriptions provide an insight into the respective problems, solution approaches and results.
Problem: The SPIEGEL editorial team regularly works with data-based content, but many articles remain purely textual, even though they would lend themselves to visual additions. This makes it difficult for readers to understand complex developments and trends. At the same time, journalists often lack the time to create suitable diagrams based on valid data. The aim of the project was therefore to develop a prototype that uses AI to automatically generate suitable visualization suggestions - based on publicly available data, for example from Destatis - and converts them directly into a common format, for example via Datawrapper. The challenge was to find a technical solution that actually reduces the workload for editorial staff while maintaining journalistic standards such as factual accuracy and source clarity.
Solution approach: The prototype developed with the working title "SPIEGEL-ÆI" combines LLM-based text analysis with algorithmic data search and automatic diagram generation. The starting point is the article text: This is analyzed by a language model such as GPT in order to extract central topics, key terms and potential visualization ideas. In the next step, the system searches the Destatis API for suitable data sets, checks their relevance and creates visualizations - including titles, labels and source information - directly in the Datawrapper tool. A strict separation was deliberately maintained between creative analysis (LLM) and fact-based data processing in order to avoid hallucinations. The implementation was carried out by an interdisciplinary team that brought together technical components (API linking, Python logic), editorial perspectives (visualization types, style) and UX aspects (usability, presentation of results).
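To make the three stages more tangible, the following is a minimal sketch in Python of such a pipeline, not the team's actual implementation. It assumes an OpenAI API key and a Datawrapper API token in the environment; the Destatis lookup is replaced by a hypothetical helper returning dummy rows, since the exact GENESIS queries used in the prototype are not public.

```python
import os
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def extract_topics(article_text: str) -> str:
    """Stage 1: let the LLM propose key topics and one chart idea."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, any chat model works
        messages=[
            {"role": "system", "content": "Extract key topics and one chart idea from the article."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content


def find_destatis_rows(topics: str) -> list[dict]:
    """Stage 2 (hypothetical placeholder): look up a matching Destatis data set.
    The prototype queried the GENESIS API; here we simply return dummy rows."""
    return [{"Jahr": 2022, "Wert": 101.3}, {"Jahr": 2023, "Wert": 104.8}]


def create_datawrapper_chart(title: str, rows: list[dict]) -> str:
    """Stage 3: create a chart via the public Datawrapper API (v3) and upload the data."""
    headers = {"Authorization": f"Bearer {os.environ['DATAWRAPPER_TOKEN']}"}
    chart = requests.post(
        "https://api.datawrapper.de/v3/charts",
        headers=headers,
        json={"title": title, "type": "d3-lines"},
    ).json()
    csv = "Jahr,Wert\n" + "\n".join(f"{r['Jahr']},{r['Wert']}" for r in rows)
    requests.put(
        f"https://api.datawrapper.de/v3/charts/{chart['id']}/data",
        headers={**headers, "Content-Type": "text/csv"},
        data=csv.encode("utf-8"),
    )
    return chart["id"]


topics = extract_topics("... article text ...")
chart_id = create_datawrapper_chart("Beispiel-Diagramm", find_destatis_rows(topics))
```

Keeping the LLM confined to the first stage mirrors the strict separation between creative analysis and fact-based data processing described above.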
Results: With the SPIEGEL-ÆI, a functioning prototype was created that shows how the editorial workflow in the area of data visualization can be usefully supplemented by AI. The system can generate data-based diagrams to match specific texts - quickly, automatically and in the style of established SPIEGEL graphics. This not only significantly reduces the effort required for editorial research and manual diagram creation, but also improves the reading experience for users. At the same time, the project provides important insights into the strengths and limitations of current AI systems in journalistic use: while topic identification and data research could be convincingly automated, the Destatis API still showed technical limitations, for example in terms of speed and data access. Overall, the prototype lays a solid foundation for the future-proof integration of AI-supported data visualization in everyday editorial work.
GEMA: "Spotlight" - Analytics dashboard for music content creators
Problem: GEMA is traditionally anchored in the traditional music industry, but with the rise of social media and UGC platforms, the music ecosystem has changed. Music content creators (MCCs) - musicians who are mainly active on platforms such as TikTok or Instagram - have hardly been considered as a target group to date. They are facing new challenges: lack of visibility, lack of recognition, legal uncertainties and high pressure to continuously produce content. The central question was therefore: How can GEMA offer innovative, relevant solutions to address the biggest pain points of this target group and at the same time position itself as a relevant partner in the social media environment?
Solution approach: A user-centered development process was set up as part of a design thinking challenge. After intensive research and eight qualitative interviews with semi-professional MCCs, key needs and challenges were identified. Based on this, the interdisciplinary team - in close coordination with the GEMA innovation team - developed several solution ideas. This resulted in the "Spotlight" prototype: a dashboard that is specifically tailored to the needs of MCCs. The platform combines social media analytics with gamification elements and inspiration tools. Development focused on a low-fidelity version with a focus on the front end and user experience.
Results: "Spotlight" offers three central functions that are directly geared towards the needs of MCCs:
The prototype clearly differs from existing market offerings such as Viberate or Infludata as it focuses on MCCs and does not primarily address brands or streaming data. The unique value proposition lies in the combination of visibility, recognition, inspiration and community building. In the long term, the platform could be monetized and integrated into existing GEMA offerings such as GEMAPlus. An expansion to include other data sources and target groups is also conceivable. "Spotlight" thus offers a future-proof basis for GEMA's positioning in the creator economy.
Problem: How can classic novels be efficiently transformed into modern comics or graphic novels - and what role can artificial intelligence play in the process? Carlsen Verlag approached a team of students with this question. The aim was to support creatives in their production process, particularly with literary adaptations such as Robert Louis Stevenson's Treasure Island.
Solution: The team developed an interactive prototype with which entire comic pages can be generated at the click of a mouse. The highlight: individual panels can also be regenerated without having to redesign the entire comic page. Users can select story ideas, characters and page layouts on a specially created website. With the help of various AI tools - including ChatGPT, Midjourney, Replicate and the OpenAI API - suitable panels, image motifs and speech bubbles are then created automatically.
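A minimal sketch of how such a panel loop could look, assuming the OpenAI Python SDK as a stand-in for the mix of ChatGPT, Midjourney and Replicate the team actually combined; model names and prompts are illustrative only.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def write_panel_scripts(story_idea: str, n_panels: int = 4) -> list[str]:
    """Ask the LLM for short visual descriptions plus dialogue, one per panel."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Write {n_panels} numbered comic panel descriptions with "
                       f"speech-bubble text for this scene: {story_idea}",
        }],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line for line in lines if line.strip()][:n_panels]


def render_panel(description: str) -> str:
    """Generate one panel image and return its URL; calling this again for a
    single panel regenerates it without touching the rest of the page."""
    image = client.images.generate(
        model="dall-e-3",
        prompt=f"Comic panel, classic adventure style: {description}",
        size="1024x1024",
    )
    return image.data[0].url


panels = write_panel_scripts("The dramatic finale of Treasure Island")
panel_urls = [render_panel(p) for p in panels]
```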
Results: The generated comic pages can be downloaded, shared via QR code or edited further if required. The prototype realistically demonstrates how individual work steps - such as script creation, character design or coloring - can be made significantly more efficient using AI. A complete, four-page comic including cover was created as a proof of concept, which visually realizes the dramatic finale of Treasure Island and provides the publisher with a tangible result.
Problem: Onilo produces so-called board stories - animated stories based on children's books that promote children's reading skills. However, the production is very time-consuming, especially the manual cropping of figures and objects from the original illustrations for the subsequent animation. These work steps take up over 60% of the animators' working time. At the same time, high data protection requirements apply, as licensed material is often used.
Solution: To solve this bottleneck, a prototype was developed that uses Meta's Segment Anything Model (SAM) locally. SAM can automatically recognize, segment and crop objects in images based on just a few user interactions (e.g. clicks or boxes). In addition, a user-friendly web interface was implemented that is specially tailored to Onilo's requirements.
Results: The application developed allows images to be uploaded and specific parts of the image to be cut out using SAM. Various interaction modes are available (point, box, automatic segmentation), which can also be operated intuitively by non-technical users. The masks created can be exported as individual images and used directly in the animation software. An additional slider allows the segmentation accuracy to be adjusted so that fine image details can also be taken into account.
The prototype is operated locally, which ensures the data protection-compliant processing of copyrighted material - an important point for Onilo and the cooperating publishers.
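A minimal sketch of the point-prompt workflow, assuming Meta's open-source segment-anything package and a locally downloaded ViT-H checkpoint; the file names and click coordinates are placeholders.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load the published ViT-H weights locally - no image data leaves the machine.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("illustration.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One positive click roughly on the figure to be cut out.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[420, 310]]),
    point_labels=np.array([1]),   # 1 = foreground click
    multimask_output=True,        # return several candidate masks
)
best = masks[int(np.argmax(scores))]

# Export the cut-out as a PNG with a transparent background for the animation tool.
rgba = np.dstack([image, (best * 255).astype(np.uint8)])
cv2.imwrite("figure_cutout.png", cv2.cvtColor(rgba, cv2.COLOR_RGBA2BGRA))
```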
Problem: The editorial team of "Mein erstes Geolino" wanted to produce short animated stories about the character Lu and his friends - told in a child-friendly way and tailored to the target group of 2-5 year olds. However, as the team had no knowledge of animation, video editing or technical implementation, the challenge was to develop a production process that was as automated as possible. The goal: a tool that automatically generates suitable images for a short story after a prompt is entered, animates them, sets them to music and enriches them with sound effects - in the consistent style of the brand and without manual production effort.
Solution approach: In order to implement the desired tool, a prototype was developed that combines the latest AI technologies into a realistic workflow. In a multi-stage process, story content was first generated automatically, then characters and backgrounds were created based on existing illustrations. Various methods were used for the animation - including a skeleton approach and models such as Sora from OpenAI. An AI-generated narrator's voice and sound effects complemented the visual level. For the practical application, a website was developed that depicts this process and makes it easier to get started with AI-supported production.
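As an illustration of the first stages (story text and narrator voice), here is a minimal sketch assuming the OpenAI Python SDK; the animation step with models such as Sora is left out, and the model and voice names are assumptions rather than the team's actual choices.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_story(prompt: str) -> str:
    """Turn an editor's prompt into a very short, child-friendly story about Lu."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Write a very short story for 2-5 year olds about Lu and his friends: {prompt}",
        }],
    )
    return response.choices[0].message.content


def narrate(story: str, out_path: str = "narration.mp3") -> str:
    """Create an AI narrator voice track for the story."""
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=story)
    speech.write_to_file(out_path)
    return out_path


story = generate_story("Lu and his friends build a paper boat")
audio_file = narrate(story)
```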
Results: The prototype developed is an example of how AI can support the work of the Geolino editorial team in the digital sector. The website acts as a central platform for inspiration, demonstration and workflow documentation. It illustrates how individual steps - from text creation to image and video generation to post-production - can be supported by AI. It became particularly clear: While text and image AIs already deliver convincing results, there are still technical hurdles in video animation, such as motion control and stylistic fidelity. Nevertheless, the project provides a viable basis for future developments - with clear potential for increasing efficiency and creative support in everyday editorial work.
In the Prototyping Lab 2022, 12 students from four universities developed AI solutions for Carlsen Verlag, Jahreszeiten Verlag and RMS.
Team Carlsen's challenge was to develop an AI that facilitates the lettering process when translating foreign comics. This process is normally done by hand and is therefore time-consuming, labour-intensive and costly. The AI should automatically recognise the speech bubbles of foreign comics, determine their shape and size and then refill them with German text. The text should be inserted in a visually appealing way and divided into suitable paragraphs.
The students developed a prototype that first recognises words using optical character recognition and then assigns them to groups, each of which represents a speech bubble. The speech bubbles are found and measured by the prototype in the version of the comic with empty speech bubbles. As the programme has numbered the speech bubbles in the original comic, the translation texts can be inserted into their respective speech bubbles using the measurement data.
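A minimal sketch of the two detection steps, assuming pytesseract for the OCR and OpenCV contour detection as a stand-in for the team's own bubble-measuring logic; the file names are placeholders.

```python
import cv2
import pytesseract

# Step 1: OCR on the original comic page; words that sit close together can
# later be grouped into one speech bubble.
original = cv2.imread("page_original.png")
words = pytesseract.image_to_data(original, output_type=pytesseract.Output.DICT)
word_boxes = [
    (words["left"][i], words["top"][i], words["width"][i], words["height"][i])
    for i, text in enumerate(words["text"]) if text.strip()
]

# Step 2: find and measure the empty speech bubbles in the blank version of the
# page - white, roughly elliptical regions show up as large bright contours.
blank = cv2.imread("page_blank.png", cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(blank, 240, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
bubbles = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 5000]

print(f"{len(word_boxes)} words found, {len(bubbles)} candidate bubbles measured")
```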
Radio has developed into a multi-channel medium as part of the digital transformation. One of the central questions in this digital world is that of advertising impact. How do we measure the conversion of a digital audio advert? How do we prove that an audio advert turns a listener into an interacting person? As part of the Prototyping Lab, the RMS team was therefore tasked with developing a solution to measure the impact of advertising in digital audio adverts.
To solve this challenge, the students developed an algorithm for audio recognition. Based on one second of an audio advert, the algorithm can recognise whether it is an audio advert from RMS or not. If it is recognised as RMS audio, the programme can determine which audio spot it is.
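A minimal sketch of the matching idea, assuming librosa for the audio features; the real recognition algorithm and the RMS spot catalogue are not public, so the reference "database" below is just a dictionary of placeholder files.

```python
import numpy as np
import librosa


def fingerprint(path: str, offset: float = 0.0) -> np.ndarray:
    """Average MFCCs over a one-second snippet as a compact audio fingerprint."""
    y, sr = librosa.load(path, sr=22050, offset=offset, duration=1.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)


# Reference fingerprints of known RMS spots (file names are placeholders).
catalogue = {name: fingerprint(name) for name in ["spot_a.wav", "spot_b.wav"]}


def identify(snippet_path: str, threshold: float = 25.0) -> str | None:
    """Return the best-matching spot, or None if the snippet is not an RMS advert."""
    probe = fingerprint(snippet_path)
    name, dist = min(
        ((n, np.linalg.norm(probe - f)) for n, f in catalogue.items()),
        key=lambda item: item[1],
    )
    return name if dist < threshold else None


print(identify("unknown_second.wav"))
```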
In the Prototyping Lab, the Jahreszeiten Verlag team tackled the challenge of developing a market research tool for decision-makers at Jahreszeiten Verlag that can be used to identify food trends for the 20-30 age group. In the long term, strategic product development decisions should be made on the basis of the tool's data.
The team solved the challenge by designing a prototype with several components. In the data collection process, a scraper is first used to collect data from TikTok that can be used to analyse trends. The data is then processed using other programmes and stored in such a way that it can be read by decision-makers at Jahreszeiten Verlag and used for product development decisions.
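A minimal sketch of the analysis step, assuming the scraper has already written TikTok posts to a JSON file with a caption and an engagement count per post; the scraping itself is omitted because it depends on platform access that changes frequently.

```python
import json
import re
from collections import Counter

with open("tiktok_posts.json", encoding="utf-8") as f:
    posts = json.load(f)  # e.g. [{"caption": "...", "likes": 1234}, ...]

# Weight each hashtag by the engagement of the posts it appears in.
trend_scores: Counter[str] = Counter()
for post in posts:
    for tag in re.findall(r"#(\w+)", post["caption"].lower()):
        trend_scores[tag] += post.get("likes", 0)

# The top entries are candidate food trends for the 20-30 target group.
for tag, score in trend_scores.most_common(10):
    print(f"#{tag}: {score}")
```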
Whether in a publishing house, a media company or an agency - artificial intelligence can change the future of media. In the Prototyping Lab 2021, 20 students from five universities worked together with four partner companies to develop AI solutions for challenges in the content industry.
The challenge: Carlsen Verlag publishes more than 700 books every year. How can an AI tool help determine the ideal order quantity and timing for reprinting decisions? The aim was to reduce storage, production and waste costs while ensuring delivery capability.
The prototype: First, the internal stock and sales data, title metadata and costing tables were analysed. During implementation, the team opted for an orchestration approach that combined data clustering with a recurrent neural network (RNN). The recommendation process is divided into two phases: the prediction of the sales trajectories and the calculation of the reprint decision. In a final test, the prototype impressed with realistic forecasts and considerable time savings, cutting the calculation time per title from three hours to two.
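A minimal sketch of the first phase (predicting the sales trajectory) with a small recurrent network in Keras; the window size, layer sizes and dummy sales series are assumptions, not the team's actual configuration.

```python
import numpy as np
import tensorflow as tf

WINDOW = 12  # months of sales history used to predict the next month


def make_windows(series: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Slice a monthly sales series into (history window, next value) pairs."""
    X = np.array([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    y = series[WINDOW:]
    return X[..., np.newaxis], y


# Dummy monthly sales for one title; the prototype used real stock/sales data.
sales = np.abs(np.random.randn(120).cumsum()) + 50
X, y = make_windows(sales)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.SimpleRNN(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)

next_month = model.predict(sales[-WINDOW:].reshape(1, WINDOW, 1), verbose=0)
print(f"Predicted sales next month: {next_month[0, 0]:.1f}")
```

The second phase - turning the predicted trajectory into a reprint recommendation - would then combine this forecast with stock levels and cost data.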
The challenge: The archive on spiegel.de contains more than one million articles. How can Spiegel process its large volumes of articles and meta-information in a meaningful way with the help of artificial intelligence and create new offers from them?
The prototype: The kick-off workshop gave rise to the idea of creating an interactive and clear review of the year. For the implementation, the team decided to use the pre-trained open-source AI model spaCy, which already achieves a very high hit rate (91%) in this area. Due to the short time available and the large amount of data, the prototype was limited to the year 2020 and to person entities. Under the working title "Spiegel-Zeitreise", a web application was created that uses artificial intelligence to build a visual review of the year from the archive, featuring the five most important personalities together with further information and articles about them. The backend system with its connected database could also be used for other purposes in the future, e.g. for summarising and automatically indexing texts.
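A minimal sketch of the person-entity extraction with spaCy's pretrained German pipeline; the article texts and the simple frequency ranking are placeholders for the archive data and the prototype's actual scoring.

```python
from collections import Counter
import spacy

# Install the model first: python -m spacy download de_core_news_lg
nlp = spacy.load("de_core_news_lg")


def top_persons(articles: list[str], k: int = 5) -> list[tuple[str, int]]:
    """Count PER entities across the article texts and return the k most frequent."""
    counts: Counter[str] = Counter()
    for doc in nlp.pipe(articles):
        counts.update(ent.text for ent in doc.ents if ent.label_ == "PER")
    return counts.most_common(k)


articles_2020 = ["... archive article text ...", "... another article ..."]
print(top_persons(articles_2020))
```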
The challenge: Brand management is becoming increasingly complex and challenging in the digital age. Fork Unstable Media has developed the Modular Branding approach, which interprets brands as dynamic "personalities", so-called brand tokens. How can AI be used to utilise the potential of this model to automatically adapt brand profiles?
The prototype: As a starting point, the team chose user preferences determined via machine learning, which were to be translated into a personalised presentation of the website. KPIs such as dwell time and interaction rate were defined for this purpose. The biggest problem was the lack of usage data, which is why the team decided to train with artificial users under realistic conditions. A performance AI checks the defined KPIs of the website, while a design AI automatically adapts its presentation. After 2,000 repetitions, the accuracy reached 30%, significantly higher than a human estimate would achieve.
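Reduced to its core, the interplay between design AI and performance AI resembles a bandit-style feedback loop. The sketch below is a deliberately simplified epsilon-greedy version with simulated click probabilities standing in for the artificial users; it is not the team's actual architecture.

```python
import random

variants = ["layout_a", "layout_b", "layout_c"]
true_rates = {"layout_a": 0.05, "layout_b": 0.12, "layout_c": 0.08}  # hidden from the agent
counts = {v: 0 for v in variants}
rewards = {v: 0.0 for v in variants}


def average(v: str) -> float:
    return rewards[v] / counts[v] if counts[v] else 0.0


def choose(epsilon: float = 0.1) -> str:
    """Design AI: mostly exploit the best-known variant, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=average)


for _ in range(2000):  # 2,000 simulated visits, echoing the repetitions above
    v = choose()
    interacted = random.random() < true_rates[v]  # performance AI measures the KPI
    counts[v] += 1
    rewards[v] += interacted

best = max(variants, key=average)
print(best, {v: round(average(v), 3) for v in variants})
```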
The challenge: tigermedia faces the challenge of maintaining and categorising the more than 10,000 titles in its media library, to which new content is regularly added. The aim of the lab was therefore to intelligently analyse and prepare the amount of content in order to optimise the associated processes.
The prototype: AI was used to develop a fully automated process for extracting and analysing content that can be applied to every new title. In order to generate new metadata automatically, the audio files first had to be transcribed into full text using speech-to-text technology. This text was then converted into vectors in a process known as embedding. On this basis, the individual audio tracks were grouped into clusters of similar content using the DBSCAN algorithm. In this way, suitable keywords or tags can be recommended per cluster, i.e. for several titles at once. The tool analyses and categorises 14 tracks in twelve minutes - a person cannot even listen to one track in its entirety in this time. As an add-on, a Bad Mouth filter has been added, which searches new titles for terms that are unsuitable for children and flags them.
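A minimal sketch of the embedding-and-clustering step, assuming sentence-transformers for the embeddings and scikit-learn's DBSCAN; the three short transcripts stand in for the speech-to-text output of real audio tracks, and the eps/min_samples values are placeholders.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import DBSCAN

transcripts = [
    "Pirate captain Lena sails to the treasure island ...",
    "The little dragon learns to fly with his friends ...",
    "A second pirate adventure on the high seas ...",
]

# Embed the transcripts as vectors, then cluster them by cosine distance.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(transcripts, normalize_embeddings=True)
labels = DBSCAN(eps=0.4, min_samples=2, metric="cosine").fit_predict(embeddings)

for track, label in zip(transcripts, labels):
    print(label, track[:40])  # tracks with the same label share keywords/tags
```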
Together with Der Spiegel, N-Joy and Bauer Media, students from Hamburg's universities built impressive prototypes with a focus on AI technology.
How can the Bauer Media Group's "House of Food" use AI to react quickly to new trends in the food segment? As a solution, an intelligent recipe database was developed in which ingredients were classified according to characteristics such as "vegan" or "gluten-free". Based on manual tagging of around 2000 files, the neural network was trained so that the prototype can automatically tag existing recipes and images. The following functions were also integrated: a search for food trends and diets, a reverse image search and a tool for calculating nutritional values.
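A minimal sketch of the text side of such tagging, using a scikit-learn multi-label classifier as a stand-in for the neural network mentioned above; the three dummy recipes only hint at the roughly 2,000 manually tagged files used in the prototype.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

recipes = [
    "Lentil curry with coconut milk and rice",
    "Spelt bread with butter and honey",
    "Grilled vegetables with corn tortillas",
]
tags = [{"vegan", "gluten-free"}, set(), {"vegan", "gluten-free"}]

# Turn the tag sets into a binary label matrix, one column per tag.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(tags)

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
clf.fit(recipes, y)

predicted = clf.predict(["Chickpea stew with rice"])
print(mlb.inverse_transform(predicted))  # tags suggested for the new recipe
```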
Ad fraud refers to fraudulent adverts that conceal inferior products or malware, among other things. Until now, this problem could only be solved after the fact and with a great deal of manual effort. The prototype for Der Spiegel was set up using a support vector machine and a bot on Google Ad Manager. To classify the adverts, the AI was tasked with checking all current and new adverts by analysing metadata and images and then automatically blocking questionable ones. The result was convincing: across the 2,000 test adverts, fraudulent ones were correctly identified 95% of the time.
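A minimal sketch of the classification step with scikit-learn's support vector machine; the random feature vectors and labels stand in for the metadata and image features the team extracted from real adverts.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))            # placeholder feature vectors per advert
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)  # placeholder "fraudulent" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"Accuracy on held-out adverts: {clf.score(X_test, y_test):.2f}")

# In the prototype, adverts classified as fraudulent would then be blocked
# automatically via Google Ad Manager.
```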
The challenge: In order to remain successful in competition with streaming services such as Spotify, an AI should help the music editorial team to select new songs that listeners are most likely to like.
The prototype: The AI is a simple neural network that analyses music tracks for similarities based on 33 characteristics. The song files are read into the analysis tool, and characteristics such as tempo, key and lyrics are checked and evaluated. The success of the prototype was verified during the test phase with the help of survey results and 500 songs rated in advance by the editorial team. To ensure user-friendliness for editors, a simple graphical user interface was developed for Linux systems. The Music Prediction Machine is constantly learning and can therefore make increasingly accurate predictions for new songs.
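A minimal sketch of such a rating model in Keras; the random feature matrix stands in for the 33 audio characteristics, the labels for the editors' ratings of the 500 pre-rated songs, and the network layout is an assumption.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 33))   # 500 pre-rated songs, 33 characteristics each
y = rng.uniform(size=500)        # placeholder editorial ratings between 0 and 1

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(33,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)

new_song = rng.normal(size=(1, 33))
print(f"Predicted listener rating: {model.predict(new_song, verbose=0)[0, 0]:.2f}")
```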
In the first lab, Hamburg students experimented with XR technologies (VR, AR and 360°) with Bauer, Foodboom and Der Spiegel.
How can existing digital content be adapted and utilised for VR applications? The idea: to build and design the house of your dreams with your own hands - virtually and playfully, following the principle of a DIY construction kit. The prototype for Xcel Media worked with the full range of VR technology: a VR headset, two controllers and room sensors synchronised with them to enable realistic movement in virtual space. The application could potentially even be used to implement entire theme worlds, place advertising partners, add products to a shopping basket and pay online.
How can VR storytelling be integrated into editorial processes, and what added value will such multimedia formats offer in the future? As a prototype for a VR reportage, the team developed the ten-minute VR live experience "Behind the Moon" to mark the 50th anniversary of the moon landing. Radio messages and diary recordings in audio form, as well as info markers about the spacecraft, complement the report, which was published on Spiegel Online. A 360° video was also produced so that the film can be used without VR equipment.
The challenge: As a lifestyle brand for digital food content, Foodboom is looking for new multimedia applications with added value that can be easily integrated into its own offerings and social platforms.
The prototype: The "Foodbot" uses the camera function to recognise food and then makes suitable recipe suggestions. At the end of the process, the team had created 3,500 self-photographed labels for 18 selected ingredients and 20 specially created recipes. Technologically, the smartphone app is based on augmented reality (AR) and machine learning. The advantage of AR: innovative content can be created that is low-threshold, has low acquisition costs and is therefore easier to integrate into the media mix than a VR application. At the same time, users can enjoy interactive, digital realities, clearly differentiating the functional prototype from other Foodboom offerings with this added value experience.