Experience over 120 sessions across multiple stages at GenAI
Zürich 2026, the European Summit on Applied Generative AI



93% of developers say AI makes them more productive. Yet 91% of organizations report no impact on the top line. That gap isn't a model problem — it's a systems problem. Most teams are still treating AI like a magic autocomplete: powerful in the moment, inconsistent at scale. The result? Rework, unpredictable quality, and a growing trust gap between developers and their AI tools. In this talk, we'll explore four techniques that close the gap and turn unpredictable AI assistance into reliable engineering outcomes across the SDLC: Specs (defining what you want before the AI builds it), Steering (persistent behavioral guidance that shapes how AI operates), Skills (reusable task-specific workflows), and SOPs (step-by-step procedures that ensure consistency across complex tasks). Through real-world examples across coding, testing, and architecture review, you'll see why the teams getting the most from AI in their development lifecycle are the ones engineering the system around the model — not just the prompts they feed it.

Problem:
In sensitive application domains such as legal AI, impressive demos are easy to produce — reliable systems are not. What matters is the ability to systematically measure, compare, and control model performance. Without transparent statements about precision, error rates, and limitations, GenAI in real-world workflows remains opaque and risk-prone.
Approach:
The talk shows why high-stakes AI requires a clearly defined gold standard: structured data, annotated samples, and systematic benchmarking of model performance against human experts. With the emergence of agentic AI, many control steps can be significantly accelerated and partially automated, reducing the need for operational human intervention. The core principle remains unchanged: transparency about output quality. Model precision and error rates must be measurable and clearly communicated.
Key takeaways:
• Why demos fail as a proxy for real AI performance
• Why human benchmarking remains essential even with agentic systems
• How agentic AI increases speed without removing responsibility for output quality


Investors across the ecosystem are signing responsible AI commitments, and those commitments are now flowing downstream into term sheets, board expectations, and the data room. Founders inherit them without a governance manual and move forward assuming that someone, somewhere, has already defined what is required. In the race to build, that assumption becomes the missing layer.
The question is no longer whether you care about responsible AI. It is whether your organisation is structurally built to evidence it when investors ask for audit trails, when enterprise clients request documentation, and when regulators expect accountability.
This session draws a sharp line between responsible AI as a brand position and responsible AI as a legal and operational architecture, mapping exactly where the gap costs you. It will uncover the questions regulators, investors, and enterprise clients are now actually asking, and why current policies fail to answer them, before outlining a minimum viable responsible AI governance architecture: what to integrate first, what to defer, and what will cost you a deal if ignored.

Generative AI is beginning to reshape how news is created, translated, verified, and delivered — but for a global news organization like Reuters, innovation must go hand in hand with trust. In this talk, I will share how Reuters is approaching generative AI as both a powerful technological enabler and a responsibility‑critical capability.
Drawing on practical experience, the session explores how generative AI is being applied across the news lifecycle, including support for journalists and editors, workflow efficiency, multilingual content, and product experiences. The focus is not on experimentation for its own sake, but on real deployment decisions in a high‑stakes environment where accuracy, independence, and transparency are essential.
The talk will highlight key design choices and trade‑offs: where generative AI delivers clear value, where it must be constrained, and how human editorial judgment remains central. I will also discuss governance, risk management, and cultural adoption challenges when introducing generative AI into a trusted media organization.
The session concludes with practical lessons for media leaders and technologists navigating generative AI in environments where credibility is the product.





Bias undermines GenAI products and stifles innovation, whether it stems from flawed data or from homogeneous teams missing real-world blind spots. How do you build diverse teams that catch biases early and deliver superior products? What rituals and skills ensure ethical, inclusive development from ideation to production? Join Priska Burkard to discover why diverse teams are your edge in GenAI success.



Seamless communication is the key to unlocking global markets when scaling a business across borders. However, companies face a crucial dilemma. While general generative AI solutions offer broad capabilities, relying on them for enterprise localization introduces significant risks because these tools often produce generic text that dilutes brand nuance and mistranslates industry-specific terminology. This can result in a damaged brand reputation, financial losses, and legal and security risks.
In this fast-paced session, we will explore the critical differences between general-purpose GenAI and specialized language AI. We will present a targeted approach to global communication that prioritizes secure, purpose-built language models over generic tools.
In this session, you will learn how to:
- Ensure uncompromising quality: Use purpose-built language models to generate context-aware translations that protect your brand identity.
- Drive end-to-end efficiency: Deploy a centralized AI platform to speed up global operations and reduce costs securely.
- Connect the entire enterprise: Seamlessly integrate language AI across all business communications, from text APIs to real-time voice translation.





Ethics frameworks for AI are everywhere—but they rarely help people make decisions when designing, procuring, or deploying systems. We're building a platform to change that. It translates complex questions about tech governance, accountability, and risk into structured, searchable guidance that works at the point of decision-making. Rather than abstract principles, it offers practical implementation pathways grounded in peace, security, and human rights contexts. The platform is also designed as a governable knowledge system—modular, traceable, and AI-assisted—so expert guidance stays accessible without losing context. The core insight: responsible innovation needs operational tools, not just better intentions. This talk explores how we're turning ethics into something people can actually use.





We are building a community of forward-looking practitioners at the frontier of GenAI innovation. Interested in joining?