GenAI Zürich 2026

Conference Schedule

Experience over 120 sessions across multiple stages at GenAI
Zürich 2026, the European Summit on Applied Generative AI

Buy tickets
Day 1
1 April 2026
Day 2
2 April 2026
Use Case Stage
1st floor – Blauer Saal
Stage moderator
9:30-10:00
GenAI in Manufacturing & Construction
From GPT to Agent Orchestration
Many organizations experiment with AI through chatbots and isolated pilots but struggle to scale impact in innovation. This session shows how the Innovation Lab of ZF’s Commercial Vehicle Division moves from ad-hoc AI use to orchestrated, agent-based systems supporting the full innovation process. Participants learn the maturity path from simple GPT usage to agent orchestration and AI-native innovation, and how this enables faster, more consistent, and better-governed innovation decisions.
AI Adoption in a Global Manufacturing Company: From Pilots to Real Impact
AI Transformation Manager
@SFS Group
Manufacturing organizations are built on processes refined over decades. While AI promises efficiency and automation, introducing it into such environments often creates friction — confusion, resistance, and loss of trust among employees. In this session, I will share how AI can be introduced into a mature manufacturing organization without disrupting existing workflows, and how thoughtful AI adoption can improve day-to-day work for employees rather than complicate it. Drawing on real-world experience leading AI transformation in a global manufacturing company, this talk focuses on integrating AI into established processes in a way that feels natural, supportive, and reliable. Instead of replacing people or redesigning everything from scratch, the approach emphasizes augmenting existing roles, reducing cognitive and operational load, and building trust through transparency and gradual adoption. I will walk through how AI solutions were embedded into daily operations, how employee concerns were addressed, and how adoption was driven through practical value rather than top-down mandates.

Key takeaways:

• How to introduce AI into long-established processes without creating resistance
• Why employee trust is critical for successful AI adoption
• How AI can improve individual productivity while strengthening the overall system
• Practical lessons from integrating AI into real manufacturing operations
We Taught an AI to Design in CAD. Here’s What Happened Next.
What happens when an AI learns to design? Raven, our AI-driven CAD tool, revealed unexpected possibilities and limitations in how machines interpret space, intent, and creativity. In this talk, I will share the story behind Raven’s development, the challenges of teaching design principles to an algorithm, and the surprising ways humans respond to an intelligent design partner. You’ll see how Raven accelerates exploration, elevates decision-making, and reshapes the early stages of architectural work. This presentation invites you to imagine a future where designers and AI collaborate to unlock ideas once out of reach.
10:00-10:25
GenAI in Software Engineering
Keynote
Tech
Stop Prompting, Start Engineering

93% of developers say AI makes them more productive. Yet 91% of organizations report no impact on the top line. That gap isn't a model problem — it's a systems problem. Most teams are still treating AI like a magic autocomplete: powerful in the moment, inconsistent at scale. The result? Rework, unpredictable quality, and a growing trust gap between developers and their AI tools. In this talk, we'll explore four techniques that close the gap and turn unpredictable AI assistance into reliable engineering outcomes across the SDLC: Specs (defining what you want before the AI builds it), Steering (persistent behavioral guidance that shapes how AI operates), Skills (reusable task-specific workflows), and SOPs (step-by-step procedures that ensure consistency across complex tasks). Through real-world examples across coding, testing, and architecture review, you'll see why the teams getting the most from AI in their development lifecycle are the ones engineering the system around the model — not just the prompts they feed it.

11:00-11:30
GenAI in Legal
AI Is Defined by Its Most Predictable Error
CEO and Co-Founder
@Legartis

Problem:

In sensitive application domains such as legal AI, impressive demos are easy to produce — reliable systems are not. What matters is the ability to systematically measure, compare, and control model performance. Without transparent statements about precision, error rates, and limitations, GenAI in real-world workflows remains opaque and risk-prone.

Approach:

The talk shows why high-stakes AI requires a clearly defined gold standard: structured data, annotated samples, and systematic benchmarking of model performance against human experts. With the emergence of agentic AI, many control steps can be significantly accelerated and partially automated as operational human intervention is reduced. The core principle remains unchanged: transparency about output quality. Model precision and error rates must be measurable and clearly communicated.

Key takeaways:

• Why demos fail as a proxy for real AI performance
• Why human benchmarking remains essential even with agentic systems
• How agentic AI increases speed without removing responsibility for output quality

The Legal Challenges of Agentic AI: When Systems Start Acting on Their Own
Agentic AI systems are moving beyond passive tools. They can make decisions, initiate actions, negotiate with other systems and execute tasks with limited human intervention. This shift raises fundamental legal questions. If an AI agent enters into commitments, causes damage or processes personal data autonomously, who bears responsibility and how do existing legal frameworks apply? This session examines how current legal concepts such as liability, contract formation and data protection operate when software agents act with a high degree of autonomy. Using practical examples from emerging AI deployments, the presentation highlights where traditional legal assumptions no longer fit neatly and where companies face new forms of risk. Rather than focusing only on theory, the session takes a pragmatic approach. It identifies the key points in the lifecycle of agentic AI systems where legal exposure typically arises and explains how organizations can structure governance, oversight and contractual safeguards to manage these risks. Participants will gain a clear understanding of the main legal challenges created by agentic AI and how existing rules in areas such as liability and data protection can still be applied in practice. The session also provides concrete guidance on governance mechanisms that enable companies to deploy autonomous AI systems responsibly while continuing to innovate.
Demo to Data Room: The Minimum Viable Responsible AI Stack

Investors across the ecosystem are signing responsible AI commitments, and those commitments are now flowing downstream into term sheets, board expectations, and the data room. Founders inherit them without a governance manual and move forward assuming that someone, somewhere, has already defined what is required. In the race to build, that assumption becomes the missing layer.

The question is no longer whether you care about responsible AI. It is whether your organisation is structurally built to evidence it when investors ask for audit trails, when enterprise clients request documentation, and when regulators expect accountability.

This session draws a sharp line between responsible AI as a brand position and responsible AI as a legal and operational architecture, mapping exactly where the gap costs you. It will uncover the questions regulators, investors, and enterprise clients are now actually asking, and why current policies fail to answer them, before outlining a minimum viable responsible AI governance architecture: what to integrate first, what to defer, and what will cost you a deal if ignored.

11:35-12:05
GenAI in Media
GenAI in the News

Generative AI is beginning to reshape how news is created, translated, verified, and delivered — but for a global news organization like Reuters, innovation must go hand in hand with trust. In this talk, I will share how Reuters is approaching generative AI as both a powerful technological enabler and a responsibility‑critical capability.
Drawing on practical experience, the session explores how generative AI is being applied across the news lifecycle, including support for journalists and editors, workflow efficiency, multilingual content, and product experiences. The focus is not on experimentation for its own sake, but on real deployment decisions in a high‑stakes environment where accuracy, independence, and transparency are essential.
The talk will highlight key design choices and trade‑offs: where generative AI delivers clear value, where it must be constrained, and how human editorial judgment remains central. I will also discuss governance, risk management, and cultural adoption challenges when introducing generative AI into a trusted media organization.
The session concludes with practical lessons for media leaders and technologists navigating generative AI in environments where credibility is the product.

Social Influencer – with AI Against Hate Speech
Hate speech spreads unabated across social media platforms and online news comment sections, often overwhelming voluntary counter-speech. Although it is crucial to speak out against toxic statements, responses are often hesitant, inconsistent, or lack the strategic focus necessary to be effective. Furthermore, the actual impact of such countermeasures has only been partially researched. This presentation addresses these challenges and presents a methodologically sound approach to counter-speech, supplemented by artificial intelligence. We tested the effectiveness of structured interventions in which AI was used not to automate the conversation, but to assist in formulating precise, empathetic, and effective responses. Both the model suggestions and the final interventions are recorded, creating a dataset that allows us to analyze how AI-generated counter-speech suggestions are used and adapted by practitioners, and to inform future improvements to AI-assisted tools. This session will provide practical insights into effective counter-speech and how AI can support it.

More sessions to be revealed soon...

12:10-12:40
GenAI in Pharma & Healthcare
From Grassroots to Productivity: Sonova’s GenAI Journey in R&D
Generative AI adoption is accelerating across industries as organizations integrate it into everyday workflows to improve productivity and decision‑making. At Sonova, we are on the same journey within Hearing Aid R&D. This talk shares Sonova’s experience of adopting Generative AI in R&D workflows, starting with grassroots initiatives in software development and expanding into a broader set of use cases. A central focus of the journey has been responsible adoption. Introducing GenAI in medical‑device R&D requires careful consideration of security, quality, safety, and intellectual property. The session concludes with concrete lessons learned from our adoption journey to date and a practical overview of how GenAI is being applied in Hearing Aid R&D today.

More sessions to be revealed soon...

12:45-13:15
GenAI Perspectives: Enterprise Transformation, SME Strategy & Inclusive Teams
Driving AI Value as an SME: Organizing Change & Adoption
The Problem: Generative AI has shifted from a technological novelty to a critical organizational challenge. However, for SMEs and agencies, the reality isn't about infinite R&D budgets or massive teams—it is about limited resources, short ROI expectations, and a "fully booked" workforce that struggles to find time for innovation. How do you move from "doing the right thing" to "doing things right"?

The Approach: As the AI Lead at Webrepublic, I will share how we kill the buzz to focus on functionality that brings immediate value today. We view AI not just as a new tool, but as a "second chance" to fix essential data structures and define processes. This talk explores how we navigate the difficult balance of scaling with a small AI core team while managing diverse stakeholder fears regarding job security and business model disruption.

Key takeaways:

• Organizational shift: Strategies to push participation and adoption across departments, even when teams are at capacity.
• Smart scaling: How to secure internal budgets, build vendor partnerships, and track adoption with limited manpower.
• Real impact: Methods to moderate innovation conversations and prove the direct value of integrated AI to clients.
Inclusive Teams: Slaying Bias in GenAI Products
Co-Founder & Managing Director
@TechFace

Biased GenAI products stifle innovation, whether the root cause is flawed data or homogeneous teams missing real-world blind spots. How do you build diverse teams that catch biases early and deliver superior products? What rituals and skills ensure ethical, inclusive development from ideation to production? Join Priska Burkard to discover why diverse teams are your edge in GenAI success.

More sessions to be revealed soon...

14:00-14:25
GenAI in Transportation & Logistics
From Data Chaos to Cognitive Enterprise: How SLMs Will Transform Governance
In a world of fragmented data, slow decisions, and organizational silos, true intelligence doesn’t come from bigger models, but from smarter, domain‑aware ones. Instead of building massive models, the real breakthrough is weaving meaning into the systems we already use. This talk shows how domain‑optimized Small Language Models harmonize data, processes, and decisions across complex architectures. From semantic extraction to real‑time conflict detection, learn how SLMs evolve into intelligent agents that strengthen architecture, reduce rework, and drive coherence at scale. Discover how a semantic fabric turns complexity into clarity, enabling real‑time understanding, seamless collaboration, and intelligent governance as organizations move toward a truly cognitive future.
How Zurich Airport Plans Responsible AI & Autonomous Solutions at Scale
Zurich Airport is exploring how AI-driven and autonomous solutions can be introduced responsibly in a complex and highly regulated environment. While the potential is significant, moving from experimentation to operational use requires more than new technology — it requires careful preparation and clear structures. This talk shares how Zurich Airport is approaching this journey. We will outline the steps taken to strengthen digital foundations, clarify governance responsibilities, and ensure compliance, safety, and transparency when evaluating AI use cases. Rather than presenting large-scale transformation, the session focuses on practical progress, lessons learned, and the realities of implementation. Examples from the aviation context will illustrate where AI shows tangible value today, where limitations remain, and how expectations are managed internally. The emphasis is on building understanding, aligning stakeholders, and creating conditions that allow innovation to develop in a controlled and responsible way. The session offers a realistic perspective on what it means for an airport to move from isolated experiments toward a more structured approach to AI — acknowledging both opportunities and constraints along the way.
14:25-15:00
GenAI in the Workplace
Beyond GenAI: Scaling Global Business with Specialized Language AI
Sales Director
@DeepL

Seamless communication is the key to unlocking global markets when scaling a business across borders. However, companies face a crucial dilemma. While general generative AI solutions offer broad capabilities, relying on them for enterprise localization introduces significant risks because these tools often produce generic text that dilutes brand nuance and mistranslates industry-specific terminology. This can result in a damaged brand reputation, financial losses, and legal and security risks.

In this fast-paced session, we will explore the critical differences between general-purpose GenAI and specialized language AI. We will present a targeted approach to global communication that prioritizes secure, purpose-built language models over generic tools.

In this session, you will learn how to:

- Ensure uncompromising quality: Use purpose-built language models to generate context-aware translations that protect your brand identity.
- Drive end-to-end efficiency: Deploy a centralized AI platform to speed up global operations and reduce costs securely.
- Connect the entire enterprise: Seamlessly integrate language AI across all business communications, from text APIs to real-time voice translation.

Generative AI for Evidence-Based Hiring in Talent Acquisition and Executive Search
Traditional executive search and recruitment remain largely relationship-driven and under-digitised, with limited transparency into how candidates are evaluated and shortlisted. Data and AI are still rarely used in a structured way for leadership decisions. Tagliabue & Partners, a specialist executive search firm, partnered with thirty3, an AI solutions company, to rethink this model. Together, they built a platform that brings structure, transparency, and scalability to the hiring workflow. In this talk, we will showcase the system that automates batch CV screening and generates client-ready longlist decks with structured, evidence-based candidate evaluations — demonstrating how Generative AI can transform high-stakes talent decisions.
Scalable AI Adoption
Head of Sales & Business Development
@AI Business School
"The $10M Question: Why Is Nobody Using Your AI Tools?" You've invested. The tools are deployed. The licenses are running. But only 15% of your employees actually use AI productively. 74% of companies see zero ROI from their AI investments – not because the technology fails, but because they're ignoring the critical 70%: people and processes.

In this talk, I'll show you:

• Why the AI adoption gap is widening in 2025, not closing
• What the leading 5% of companies do differently (spoiler: they invest 70% in enablement, not tools)
• How our partners went from 15% to 85% adoption in under 6 months

With concrete numbers, measurable results, and a playbook you can implement the next day. For everyone tired of burning millions on unused AI.
15:30-16:30
GenAI in Humanitarian Aid
From Principles to Practice: The Platform for Responsible AI
Executive Director
@ICT4Peace

Ethics frameworks for AI are everywhere—but they rarely help people make decisions when designing, procuring, or deploying systems. We're building a platform to change that. It translates complex questions about tech governance, accountability, and risk into structured, searchable guidance that works at the point of decision-making. Rather than abstract principles, it offers practical implementation pathways grounded in peace, security, and human rights contexts. The platform is also designed as a governable knowledge system—modular, traceable, and AI-assisted—so expert guidance stays accessible without losing context. The core insight: responsible innovation needs operational tools, not just better intentions. This talk explores how we're turning ethics into something people can actually use.

How to Use AI Responsibly in War: The Red Cross Case
"Move fast and break things". At the International Committee of the Red Cross, breaking things means putting the lives of the most vulnerable at high risk. In this talk, the audience will learn how this 160-year-old humanitarian organisation has developed its Artificial Intelligence policy: how it found ways to take into account complex working environments — wars, autonomous weapons systems, polarization, politicization of aid, hate speech, scarce resources and data, people who have lost their homes or family members; how it embedded its fundamental principles of humanity, impartiality, neutrality and independence, which are critical to securing access to communities affected by armed conflicts; and how, at the same time, it sought to foster innovation in an accountable, responsible and fair manner.
AI for Refugee Protection: From Innovation to Impact
Global displacement is rising, placing unprecedented pressure on humanitarian systems. Refugee crises are complex, fast-moving, and data-intensive, yet the stakes are measured in human lives, dignity, and protection. The question is no longer whether AI can help, but how to deploy it responsibly in high-risk environments. This session explores how UNHCR Innovation is operationalizing AI to strengthen refugee protection, improve humanitarian decision-making, and enhance service delivery at scale. From predictive analytics that anticipate displacement patterns to AI tools that support case processing and operational planning, AI is being embedded across four strategic pillars: protection and service delivery, humanitarian decision-making, organizational efficiency, and oversight and planning. Grounded in strong humanitarian values, this approach prioritizes responsible governance, risk mitigation, and human rights safeguards. Participants will gain practical insights into deploying AI in crisis contexts, building trusted partnerships, and scaling innovation while safeguarding safety, dignity, inclusion, and measurable impact for refugees and stateless communities.
Panel Discussion
Tech
GenAI in Humanitarian Aid
Executive Director
@ICT4Peace
16:30-16:55
GenAI in Entertainment
Gran Turismo Sophy: From AI Research Breakthrough to Video Game Innovation
Strategy and Partnerships Manager
@Sony AI
Mastering the highly realistic driving simulator Gran Turismo requires more than just following a racing line. The problem is that an autonomous agent must simultaneously master complex real-time car physics, execute high-speed tactical decisions (like slipstreaming and blocking), and adhere to strict, highly nuanced racing etiquette to avoid unfair collisions with human players. To solve this, our approach utilized a novel deep reinforcement learning platform developed collaboratively by Sony AI, Polyphony Digital, and SIE. Instead of relying on hard-coded rules, we trained Gran Turismo Sophy (GT Sophy) using a massive cloud-based infrastructure. The AI learned through continuous trial and error, eventually discovering how to control cars at the absolute limits of physical friction while racing fairly. Key takeaways from this session include a high-level overview of the reinforcement learning approach used to train GT Sophy and insights into how the team went from the research breakthrough of making a Superhuman Racing AI featured in Nature magazine to productizing the AI into a fun in-game opponent enjoyed by millions of Gran Turismo players.
2026 Sponsors and Partners

Supported by
industry leaders and innovative startups

We are building a community of forward-looking practitioners who are at the frontier of GenAI innovation. Interested to join?

Become a partner

The Turing Genius

Neural Masters

Generative Starters

Startup Allies

Hackathon Co-organizers

Hackathon Partners

Corporate Partners

Special Partner for Ethical AI, Cybersecurity Governance & Human Rights

Community Partners