2026 Sessions

Hear big ideas from thought leaders

Reachy Mini: Giving a Body to AI

Multimodal Research Lead
@
Hugging Face

Large language models can reason, generate code, and hold conversations. But they remain trapped behind screens. If AI is to become truly useful in our daily environments, it needs a body, a presence, and natural ways to interact with us.
In this talk, I’ll present Reachy Mini, an open and developer-friendly robot designed to explore what embodied AI can look like today. I’ll walk through how we are building its core software stack, from multimodal perception to real-time voice interaction, and why voice is emerging as the most natural interface for physical AI. We will look at how speech-to-speech pipelines, local inference, and modular backends allow Reachy Mini to move beyond scripted demos and into responsive, real-world interaction.
You will leave with a clearer understanding of what it takes to give AI a body, why voice-first interfaces matter, and how open tools can accelerate the next generation of interactive robotics.
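The speech-to-speech pipeline the abstract mentions can be pictured as three swappable stages (speech-to-text, language model, text-to-speech). The sketch below is a hedged illustration with stub backends; every function name is invented and none of it is Reachy Mini's actual API:

```python
# Hedged sketch of a speech-to-speech pipeline (STT -> LLM -> TTS).
# All three backends are stubs with invented names; a real system would
# plug local or hosted models into each stage.

def transcribe(audio: bytes) -> str:
    """Stub speech-to-text: a real backend would run an STT model locally."""
    return audio.decode("utf-8")  # pretend the audio already contains text

def generate_reply(text: str) -> str:
    """Stub language model: a real backend would call local inference."""
    return f"You said: {text}"

def synthesize(text: str) -> bytes:
    """Stub text-to-speech: a real backend would return audio samples."""
    return text.encode("utf-8")

def speech_to_speech(audio: bytes) -> bytes:
    """The whole loop; each stage is swappable because it is just a function."""
    return synthesize(generate_reply(transcribe(audio)))

print(speech_to_speech(b"hello robot"))  # b'You said: hello robot'
```

Because each stage is just a function boundary, a modular backend (a different STT model, a hosted LLM, another TTS voice) can be swapped in without touching the rest of the loop, which is the property that makes such a stack developer-friendly.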

Apertus: Democratizing Open and Compliant LLMs For Global Language Environments

This presentation introduces Apertus, a fully transparent large language model initiative developed through collaboration between EPFL, ETH Zürich, and CSCS, representing a significant public-sector response to the concentration of AI development within private corporations. The talk examines the initiative's origins and the fundamental challenges in current LLM deployment it addresses, such as opacity of training data, limited multilingual representation, and proprietary control. The initiative demonstrates how public research institutions can develop competitive language models while adhering to stringent ethical standards, including training exclusively on public data with copyright compliance and supporting over 1,000 languages.

The Robot Renaissance – When Machines Do Our Jobs

Problem:

Kaufmann argues that humanity stands at a turning point: machines and Generative AI will soon outperform us in many routine and even complex tasks, while our social, economic, and cultural systems are still built around compulsory work and industrial‑age roles of "Homo faber." Without a new vision, fears of job loss, loss of control, and foreign dominance (China/USA) over AI systems will shape the future instead of our own European values.

Approach:

Kaufmann reframes robots and Generative AI as tools, like excavators or calculators, without intrinsic power fantasies, and insists we design them to augment humans rather than replace them. He sketches a near future where every person works with several humanoid or software agents that handle routine tasks, enabling humans to focus on uniquely human, hard‑to‑automate activities. He also calls for sovereign Swiss and European AI (e.g., SwissGPT, AlpineAI) to embed local values, privacy, and trust into foundational Generative AI infrastructure.

Key takeaways:

Generative AI can trigger a shift from “Homo faber” to “Homo gaudens,” freeing people to pursue meaningful work and curiosity instead of mere survival.

A “Eutopia” – a realistic, golden age – is possible if productivity gains from AI are used to reduce compulsory work and secure public finances.

Trust, culture, and data protection will be the decisive “currency” in the global race for AI; Switzerland can lead by building reliable, privacy‑preserving Generative AI systems for governments, hospitals, and universities.

The Illusion of Intent: Why Language is the First Line of AI Governance

"The model isn't sure", "The AI assistant wants to be helpful"… We use these phrases daily as shorthand, but are they undermining your risk strategy? This talk argues that anthropomorphic language is a major sleeper risk in GenAI governance today. When we assign human verbs—thinking, deciding, lying—to probabilistic systems, we create an "Illusion of Intent." This linguistic drift isn't just a semantic annoyance; it is a governance hazard. It hacks human empathy, creates false trust, and obscures liability by treating system failures as character flaws. In this session, we will dismantle the habit of humanizing the machine. We will explore how precise, de-anthropomorphized language acts as a firewall for ethical safety and improves collaboration between Legal, Tech, and Product teams. Join us to learn why the most critical update to your governance framework isn't new code—it's a new vocabulary. Let's stop governing the ghost and start governing the tool.

The Future of AI Discovery: From Generation to Real-Time Perception

Generative AI has led to an explosion of content creation. Yet discovering relevant and inspiring content and products online is becoming increasingly difficult. Most digital experiences remain backward-looking, optimizing for historical interactions rather than understanding user interests in the moment. As a result, users are often trapped in narrow recommendation loops, while new and diverse content struggles to surface. This talk introduces a shift from AI generation to real-time perception: AI systems that continuously interpret user behavior and context as they evolve within a session. Real-time perception enables adaptive discovery, contextual search, and more effective agentic experiences, moving beyond static personalization toward moment-by-moment intelligence. Drawing from real-world deployments in large e-commerce and marketplace environments, the talk covers perception model architectures, product and system design, and concrete case studies with measurable business impact. Real-time discovery is emerging as a must-have capability for online platforms, one that directly translates into measurable gains in engagement and revenue.
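One rough way to picture "moment-by-moment" perception is an exponentially decayed interest vector that is updated on every in-session event. The toy sketch below uses invented names, a made-up blend factor, and two-dimensional category vectors; it is not an architecture from any deployed system:

```python
# Toy sketch of session-level, recency-weighted interest (invented names):
# blend the running interest vector toward each new event, so recent
# behavior dominates and older clicks decay exponentially.

def update_interest(interest, event_vec, alpha=0.5):
    """Exponentially decayed average: alpha controls how fast old signal fades."""
    return [(1 - alpha) * i + alpha * e for i, e in zip(interest, event_vec)]

# A session drifting from category 0 (e.g. shoes) to category 1 (e.g. jackets):
interest = [1.0, 0.0]
for event in ([1.0, 0.0], [0.0, 1.0], [0.0, 1.0]):
    interest = update_interest(interest, event)

print(interest)  # [0.25, 0.75] -- the session now leans toward category 1
```

The point of the sketch is the contrast with backward-looking personalization: a profile built from months of history would still say "shoes", while the in-session estimate has already shifted.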

Conscious Humans Lead: The Ethical Decision Architecture for Safe GenAI Scaling

Many organisations are accelerating their GenAI initiatives, yet most struggle to move beyond pilots into safe, reliable, enterprise scale. The core problem is not the technology. It is the quality of human judgement, ethical awareness, and decision structures that surround it. In regulated industries in particular, unconscious decision habits, legacy governance models and unclear accountability create hidden risks that block GenAI's strategic potential.

This talk introduces a Conscious Humans Lead approach to GenAI adoption. It emphasises that safe scaling begins with humans who understand the implications of AI, can recognise behavioural and ethical blind spots, and can guide the technology with clarity and intention. Drawing on applied work and research in financial services and other regulated environments, I explore how organisations can build ethical intelligence, design higher quality decisions and create value in ways that are sustainable, socially aware and aligned with long term shareholder expectations.

Participants will learn:

• why human behaviour and governance maturity are the central barriers to safe GenAI scaling
• the most common ethical and behavioural failure patterns that appear in real implementations
• practical methods for strengthening ethical intelligence and decision quality
• what leaders must prioritise to scale GenAI responsibly in 2026 and beyond

Attendees will leave with a practical framework for aligning people, decisions and AI systems to enable safe, ethical and accelerated GenAI scaling.

How Leading Banks Kill Their Biggest Hidden Cost with AI

For many financial institutions, Quality Assurance (QA) has ballooned into a massive liability, costing millions and stalling deployment. This session exposes the playbook leading banks use to turn this cost center into a competitive advantage. We explore the shift from manual grunt work to human-supervised AI agents. This approach delivers the speed of automation with the safety of expert oversight. We will analyze real-world cases showing how this specific AI implementation slashes overhead and dramatically accelerates deployment.

Designing and Delivering the First Agentic Claims-Handling Platform in Insurance

Chief Operating & Technology Officer
@
Solvd Group

Most companies are still experimenting with GenAI, but very few manage to scale beyond pilots—especially in highly regulated industries like insurance. In my talk, I will share how we moved from isolated GenAI use cases to designing and delivering the first Agentic AI–native claims-handling platform across three companies and five countries within a global insurance group. I will explain how Agentic AI changes process design, product architecture, and governance, and what it takes in reality to orchestrate multiple specialized AI agents while ensuring safety, reliability, and compliance with the AI Act, GDPR, and DORA. The talk focuses on practical learning: how to design agent roles and orchestration patterns, how to build trust and non-functional safeguards into autonomous flows, how to prepare the workforce for AI-centric operations, and how to balance speed with regulatory requirements. Attendees will leave with a concrete playbook for evolving from GenAI experiments to an AI-native operating model and with insights into the measurable impact on productivity, quality, and customer experience in claims handling.

The Playbook for a Sovereign Model-as-a-Service Platform

Senior Specialist Solution Architect
@
Red Hat

Relying on black-box AI APIs often means trading data sovereignty and cost control for convenience. This session provides a practical blueprint for architecting a private, sovereign Model-as-a-Service platform using powerful open-source models. Attendees will leave with a concrete playbook to master observability, enforce security policies, and transition from an API consumer to a platform provider.

We Taught an AI to Design in CAD. Here’s What Happened Next.

What happens when an AI learns to design? Raven, our AI-driven CAD tool, revealed unexpected possibilities and limitations in how machines interpret space, intent, and creativity. In this talk, I will share the story behind Raven’s development, the challenges of teaching design principles to an algorithm, and the surprising ways humans respond to an intelligent design partner. You’ll see how Raven accelerates exploration, elevates decision-making, and reshapes the early stages of architectural work. This presentation invites you to imagine a future where designers and AI collaborate to unlock ideas once out of reach.

Designing Context-First AI Systems

This session explores how to design context-first AI architectures where retrieval, memory, metadata, and workflow logic work together to ground LLMs in the right information at the right time. I will break down the building blocks of context engineering, vector search, structured context injection, guardrails, and adaptive memory and discuss how to build systems that reduce hallucinations, improve reasoning, and deliver reliable real-world performance in the business. Attendees will walk away with practical patterns for designing AI applications that adapt with high contextual intelligence.
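As one concrete (and entirely hypothetical) instance of the retrieval plus structured-context-injection pattern described above, the toy sketch below ranks chunks by cosine similarity over stand-in 2-d embeddings and injects the winners into a delimited prompt section; all names and data are invented:

```python
# Toy sketch of retrieval + structured context injection (all names and
# the 2-d "embeddings" are invented for illustration).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, corpus, k=2):
    """corpus: (text, embedding) pairs; return the k most similar texts."""
    ranked = sorted(corpus, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, context_chunks):
    """Structured injection: a clearly delimited context block, then the question."""
    context = "\n".join(f"- {c}" for c in context_chunks)
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

corpus = [
    ("Refunds are processed within 5 days.", [0.9, 0.1]),
    ("Our office is in Zurich.", [0.1, 0.9]),
]
print(build_prompt("How fast are refunds?", retrieve([1.0, 0.0], corpus, k=1)))
```

The delimited context block is what grounds the model: the instruction to answer only from injected context is one simple guardrail against hallucination, and the retrieval step decides which information arrives "at the right time".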

It's Hard to Talk to People

Head of ML
@
Mavenoid

Building a generative voice assistant demo is easy. Getting it to production is hard. Scaling it to thousands of calls per day? That's where the real learning begins. This talk shares battle-tested lessons from taking a voice AI from prototype to handling thousands of calls daily - covering the unexpected challenges that no tutorial prepared us for.

From GPT to Agent Orchestration

Many organizations experiment with AI through chatbots and isolated pilots but struggle to scale impact in innovation. This session shows how the Innovation Lab of ZF’s Commercial Vehicle Division moves from ad-hoc AI use to orchestrated, agent-based systems supporting the full innovation process. Participants learn the maturity path from simple GPT usage to agent orchestration and AI-native innovation, and how this enables faster, more consistent, and better-governed innovation decisions.

AI Is Defined by Its Most Predictable Error

CEO and Co-Founder
@
Legartis

Problem:

In sensitive application domains such as legal AI, impressive demos are easy to produce — reliable systems are not. What matters is the ability to systematically measure, compare, and control model performance. Without transparent statements about precision, error rates, and limitations, GenAI in real-world workflows remains opaque and risk-prone.

Approach:

The talk shows why high-stakes AI requires a clearly defined gold standard: structured data, annotated samples, and systematic benchmarking of model performance against human experts. With the emergence of agentic AI, many control steps can be significantly accelerated and partially automated as operational human intervention is reduced. The core principle remains unchanged: transparency about output quality. Model precision and error rates must be measurable and clearly communicated.

Key takeaways:

• Why demos fail as a proxy for real AI performance
• Why human benchmarking remains essential even with agentic systems
• How agentic AI increases speed without removing responsibility for output quality
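To make the "measurable precision and error rates" point concrete, here is a minimal, hypothetical sketch of scoring model judgments against human-annotated gold-standard labels; the data is invented toy data, not a benchmark from the talk:

```python
# Toy sketch of benchmarking against a human gold standard: compute
# precision and overall error rate from labeled samples (data invented).

def precision_and_error(gold, predicted):
    tp = sum(1 for g, p in zip(gold, predicted) if p and g)       # true positives
    fp = sum(1 for g, p in zip(gold, predicted) if p and not g)   # false positives
    errors = sum(1 for g, p in zip(gold, predicted) if g != p)    # any disagreement
    return tp / (tp + fp), errors / len(gold)

gold      = [True, True, False, False, True]   # human expert annotations
predicted = [True, False, True, False, True]   # model outputs
precision, error_rate = precision_and_error(gold, predicted)
print(precision, error_rate)  # precision = 2/3, error rate = 0.4
```

The value of such a harness is less the arithmetic than the discipline: once every model version is scored the same way against the same annotated sample, precision and error rates become numbers that can be compared and communicated rather than claimed.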

When AI and Human Together Created a Fictional Alternative Rock Band

Imagine you're into playing instruments, singing, and producing music, but lack the decades of practice. Still, you write great lyrics and just "want to make music". As a personal project/side quest, I bridged that gap by leveraging GenAI, my production skills, and professional post-production tools.

The result is Windlereye, a fictional alternative rock band with over 100 songs. Some of them are even good!

In this talk, I'll demystify the "one-click" misconception by sharing details on my hybrid workflow and the workarounds I used to jump the biggest GenAI hurdles (vocal consistency, artifacts, instability). I'll explain how GenAI made me a better lyricist, and how I made my first whopping $0,000,001.12 in royalties.

Finally, I'll touch on my non-expert opinion on ethics and legal matters of this new frontier.

AI Value First – How Tech Leaders Avoid Zero-Impact AI

AI rarely fails because of the algorithms; it fails because organizations focus on the technology instead of measurable outcomes. In this presentation, we share a value-first approach, proven in the Swiss market, that connects business priorities with strong technical delivery - so that AI creates measurable business impact rather than prototypes. Our method combines three elements that must work together: clear business goals, sector-specific insights, and pragmatic hands-on engineering. The journey starts with value discovery, which is then translated into clear KPIs and success criteria. We then demonstrate the impact through improvements in data, process, and systems. This builds the foundation required for reliability and sustainable adoption. Only then can AI be introduced at scale for smarter automation, faster decisions, and consistent execution. The presentation concludes with real examples that illustrate how this method translates AI strategy into tangible business results, and what technical capabilities are required for end-to-end delivery.

Driving AI Value as an SME: Organizing Change & Adoption

Problem:

Generative AI has shifted from a technological novelty to a critical organizational challenge. However, for SMEs and agencies, the reality isn't about infinite R&D budgets or massive teams—it is about limited resources, short ROI expectations, and a "fully booked" workforce that struggles to find time for innovation. How do you move from "doing the right thing" to "doing things right"?

Approach:

As the AI Lead at Webrepublic, I will share how we kill the buzz to focus on functionality that brings immediate value today. We view AI not just as a new tool, but as a "second chance" to fix essential data structures and define processes. This talk explores how we navigate the difficult balance of scaling with a small AI core team while managing diverse stakeholder fears regarding job security and business model disruption.

Key takeaways:

• Organizational shift: strategies to push participation and adoption across departments, even when teams are at capacity.
• Smart scaling: how to secure internal budgets, build vendor partnerships, and track adoption with limited manpower.
• Real impact: methods to moderate innovation conversations and prove the direct value of integrated AI to clients.

The Zero Partner Fund: Building an AI-Native VC

VCs evaluate AI startups. But what if the VC itself runs on AI? At Ellipsis Venture, we've built agentic systems for deal flow, diligence, and fund ops—turning a two-partner fund into something that operates like a much larger team. Moreover, if AI can help run a fund, can it also help founders build companies from ideation through validation to MVP? This talk is part demo, part manifesto. I'll show the tools we actually use, and present a framework for investors and founders building companies where AI isn't a feature but the foundation. If you're skeptical that agents can do real work, come and join the conversation.

Redefining Industrial Reliability & Safety: The Role of AI-Driven Risk Analysis

As industries such as transportation, energy, defense, construction, and space evolve toward increasingly complex, autonomous, and interconnected systems, the demand for robust safety and reliability measures continues to intensify. Prominent failures in recent years across various sectors have underscored the limitations of traditional, resource-intensive risk analysis methods and the urgent need for more effective solutions. Reshape Systems SA, a CERN spin-off, addresses these challenges with a proprietary, AI-driven SaaS platform purpose-built for comprehensive risk analysis across the entire project and product lifecycle. Leveraging advanced generative and explainable AI (XAI), our platform automates time-consuming hazard assessments, integrates supplier and design data, and ensures full transparency for expert oversight—meeting rising regulatory and customer demands for trustworthy, auditable AI. This automation enables early identification of potential failures, streamlines engineering processes, and delivers up to 80% savings in manpower, accelerating time-to-market while enhancing safety and reliability.

Scalable AI Adoption

"The $10M Question: Why Is Nobody Using Your AI Tools?" You've invested. The tools are deployed. The licenses are running. But only 15% of your employees actually use AI productively. 74% of companies see zero ROI from their AI investments – not because the technology fails, but because they're ignoring the critical 70%: people and processes.

In this talk, I'll show you:

• why the AI Adoption Gap is widening in 2025, not closing
• what the 5% of leader companies do differently (spoiler: they invest 70% in enablement, not tools)
• how our partners went from 15% to 85% adoption in under 6 months

With concrete numbers, measurable results, and a playbook you can implement the next day. For everyone tired of burning millions on unused AI.

AI Journey of the ZKB

This talk shares Zürcher Kantonalbank’s (ZKB) journey in developing and implementing its Artificial Intelligence strategy. It outlines how the approach was crafted, with a focus on technology, people, governance, and real-world applications. Selected use cases are highlighted to illustrate how the strategy is being applied in practice. The session also describes where ZKB stands today, reflects on key lessons learned along the way, and outlines the next steps as the organization continues to unlock the potential of AI.

From Data Chaos to Cognitive Enterprise: How SLMs Will Transform Governance

In a world of fragmented data, slow decisions, and organizational silos, true intelligence doesn’t come from bigger models, but from smarter, domain‑aware ones. Instead of building massive models, the real breakthrough is weaving meaning into the systems we already use. This talk shows how domain‑optimized Small Language Models harmonize data, processes, and decisions across complex architectures. From semantic extraction to real‑time conflict detection, learn how SLMs evolve into intelligent agents that strengthen architecture, reduce rework, and drive coherence at scale. Discover how a semantic fabric turns complexity into clarity, enabling real‑time understanding, seamless collaboration, and intelligent governance as organizations move toward a truly cognitive future.

Scaling GenAI from POC to Enterprise Readiness at Lufthansa Group

While recent studies suggest that 95% of AI pilots fail, we want to present a practical case study from the 5% that succeed. Lufthansa Group's Digital Hangar, in partnership with Cognizant Netcentric, successfully scaled SkAI—a production-grade GenAI Hub now deeply embedded across their internal data analytics ecosystem and beyond. Moving from initial prototype to enterprise readiness required solving real challenges: governance to ensure compliant usage, seamless integration with existing enterprise systems, and adoption and training that drove 75% user uptake. The impact is tangible: 25% of users save 4–8 hours weekly on routine tasks. Attendees will gain actionable strategies for overcoming organizational barriers, fostering high-impact engagement measured through clear KPIs, and technical insights into how they successfully transitioned their agentic platform from proof-of-concept to sustained business value. This session will be presented by Margot Löwenberg (Head of Data Analytics, Swiss International Air Lines) and Nicolas Athanasopoulos (Principal Data & AI, Cognizant Netcentric).

How to Make the Human/Ethics Side Work When Applying GenAI

Managing Director, Data Special Project
@
Julius Baer

Culture can make or break any strategy and is resistant to change. How do you bring your culture on the AI journey? Which skills are going to ensure future success? What happens if you don't factor in the human and ethical elements as part of your AI journey? Come and learn why the human and ethical side is so critical to sustainable strategic outcomes.

How Zurich Airport Plans Responsible AI & Autonomous Solutions at Scale

Zurich Airport is exploring how AI-driven and autonomous solutions can be introduced responsibly in a complex and highly regulated environment. While the potential is significant, moving from experimentation to operational use requires more than new technology — it requires careful preparation and clear structures. This talk shares how Zurich Airport is approaching this journey. We will outline the steps taken to strengthen digital foundations, clarify governance responsibilities, and ensure compliance, safety, and transparency when evaluating AI use cases. Rather than presenting large-scale transformation, the session focuses on practical progress, lessons learned, and the realities of implementation. Examples from the aviation context will illustrate where AI shows tangible value today, where limitations remain, and how expectations are managed internally. The emphasis is on building understanding, aligning stakeholders, and creating conditions that allow innovation to develop in a controlled and responsible way. The session offers a realistic perspective on what it means for an airport to move from isolated experiments toward a more structured approach to AI — acknowledging both opportunities and constraints along the way.

AI Adoption in a Global Manufacturing Company: From Pilots to Real Impact

AI Transformation Manager
@
SFS Group

Manufacturing organizations are built on processes refined over decades. While AI promises efficiency and automation, introducing it into such environments often creates friction — confusion, resistance, and loss of trust among employees. In this session, I will share how AI can be introduced into a mature manufacturing organization without disrupting existing workflows, and how thoughtful AI adoption can improve day-to-day work for employees rather than complicate it.

Drawing on real-world experience leading AI transformation in a global manufacturing company, this talk focuses on integrating AI into established processes in a way that feels natural, supportive, and reliable. Instead of replacing people or redesigning everything from scratch, the approach emphasizes augmenting existing roles, reducing cognitive and operational load, and building trust through transparency and gradual adoption. I will walk through how AI solutions were embedded into daily operations, how employee concerns were addressed, and how adoption was driven through practical value rather than top-down mandates.

Key takeaways:

• How to introduce AI into long-established processes without creating resistance
• Why employee trust is critical for successful AI adoption
• How AI can improve individual productivity while strengthening the overall system
• Practical lessons from integrating AI into real manufacturing operations

The Sovereign AI Stack No One Else Can Switch Off

CEO & Co-Founder
@
Xinity

In April 2025, a US president had Microsoft block European Court of Justice accounts overnight. No negotiation, no warning, just a switch flipped from across the Atlantic. For European enterprises handling sensitive data under GDPR, FINMA, or the upcoming EU Digital Networks Act, this isn't a hypothetical risk. It's a preview. Yet 92% of European enterprise AI still runs on US infrastructure. Most "sovereign cloud" offerings are policy promises: contractual protections that can be rewritten or revoked. Architectural sovereignty is different: your hardware, your building, your data, your AI never leaving your infrastructure. In this talk, Alexander Zehetmaier shows what a truly sovereign AI stack looks like in production: OpenAI-compatible APIs running on enterprise-owned hardware, deployed today at European companies protecting journalistic sources and trade secrets, with an 83% median cost reduction versus cloud equivalents. The session culminates in a live open-source launch: Xinity's API Gateway goes from private to public on GitHub, on stage, making a European sovereign AI infrastructure stack freely available for the first time. Key takeaways: why policy-based sovereignty is theater, what architectural sovereignty looks like technically, and how to deploy it today.

The Power of Hyper Personalization in Banking: Citadele Banking Group Case-Study

For years, Citadele bank has led our region in customer experience, but the expectations of banking clients are evolving rapidly. Today, customers compare their banking interactions not to other banks, but to the seamless, personalized journeys delivered by global social media platforms and world‑class e‑commerce leaders. To understand how the industry is responding, we engaged with top banks worldwide and discovered three common challenges shaping the future: the need to strengthen collaboration between Business and IT, unlock greater data availability, and choose whether to innovate slowly alone or progress faster together. Amid the noise surrounding artificial intelligence, one theme consistently proves real and impactful—hyper‑personalization. Banks are turning to AI‑driven personalization not only to boost commercial performance but to elevate customer experience to entirely new levels. In this presentation, we share how Citadele is applying hyper‑personalization in sales automation, with concrete, real‑world examples. Most importantly, we demonstrate how to measure the true value of these initiatives and showcase a major corporate success story that highlights the tangible business impact of AI done right.

From Data to Wisdom: Designing Robust Human – AI Decision Systems

Advisor, Digital Natives & Scale-Ups
@
Databricks

The future of decision-making will be won by organizations that deliberately pair human judgment with AI at scale: decision flows where machines grind through the data and people bring expertise, nuance, and accountability.

This session is for executives and senior leaders who don’t need to code models but do need to own the impact of AI‑infused decisions. We will unpack how data becomes wisdom through four layers, giving participants a clear, practical mental model for designing robust Human‑AI decision systems that are reliable, auditable, and safe to use in the boardroom.

How AI Agents Negotiate: Why Governance Matters to Scale Good Intentions

Research in AI and Decision Making
@
Private Expert

As organizations increasingly rely on AI systems to support collective decisions, a key question emerges: how do AI agents resolve conflicts when there is no "right" answer? In this talk, I share insights from nearly 500 automated negotiation experiments involving leading AI models (Claude, GPT, and Gemini). The agents faced real game-theoretic dilemmas with conflicting preferences, voting paradoxes, time pressure, and institutional consequences for failure. Rather than focusing on who "wins," the experiments reveal something more important: most successful negotiations were driven by agreement on process, not on outcomes. Over 85% of simulations reached agreement - and in virtually all successful cases, agents did so by adopting structured governance mechanisms such as voting rules, delegation, and agenda-setting. When governance was weak, even advanced models frequently failed. The key takeaway is that governance design shapes behavior more than model intelligence alone. For organizations deploying AI in procurement, compliance, or decision support, provider choice and institutional rules are strategic decisions, not technical details.
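As an illustration of the kind of governance mechanism described above (and not the talk's actual experimental protocol), a Borda-count voting rule lets agents with conflicting rankings agree on a process for picking an outcome instead of arguing over the outcome itself; the contract terms and rankings below are invented:

```python
# Illustrative governance mechanism (not the talk's actual protocol):
# a Borda-count vote over each agent's full ranking of the options.

def borda_winner(rankings):
    """rankings: each agent's options ordered best-first; best gets n-1 points."""
    scores = {}
    n = len(rankings[0])
    for ranking in rankings:
        for pos, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + (n - 1 - pos)
    return max(scores, key=scores.get), scores

# Three agents with conflicting preferences over contract terms A, B, C:
winner, scores = borda_winner([["A", "B", "C"], ["B", "C", "A"], ["B", "A", "C"]])
print(winner, scores)  # B {'A': 3, 'B': 5, 'C': 1}
```

No agent gets its first choice overruled arbitrarily: all three committed to the scoring rule in advance, which is exactly the "agreement on process, not on outcomes" pattern the experiments highlight.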

Gen AI in the News

Generative AI is beginning to reshape how news is created, translated, verified, and delivered — but for a global news organization like Reuters, innovation must go hand in hand with trust. In this talk, I will share how Reuters is approaching generative AI as both a powerful technological enabler and a responsibility‑critical capability.
Drawing on practical experience, the session explores how generative AI is being applied across the news lifecycle, including support for journalists and editors, workflow efficiency, multilingual content, and product experiences. The focus is not on experimentation for its own sake, but on real deployment decisions in a high‑stakes environment where accuracy, independence, and transparency are essential.
The talk will highlight key design choices and trade‑offs: where generative AI delivers clear value, where it must be constrained, and how human editorial judgment remains central. I will also discuss governance, risk management, and cultural adoption challenges when introducing generative AI into a trusted media organization.
The session concludes with practical lessons for media leaders and technologists navigating generative AI in environments where credibility is the product.

Generative AI for Evidence-Based Hiring in Talent Acquisition and Executive Search

Traditional executive search and recruitment remain largely relationship-driven and under-digitised, with limited transparency into how candidates are evaluated and shortlisted. Data and AI are still rarely used in a structured way for leadership decisions. Tagliabue & Partners, a specialist executive search firm, partnered with thirty3, an AI solutions company, to rethink this model. Together, they built a platform that brings structure, transparency, and scalability to the hiring workflow. In this talk, we will showcase the system that automates batch CV screening and generates client-ready longlist decks with structured, evidence-based candidate evaluations — demonstrating how Generative AI can transform high-stakes talent decisions.

Solution Study: From Prototype to Production – Scaling Trusted GenAI

Senior Director Product Marketing
@
Cloudera Switzerland GmbH

As organizations evolve from digital to AI-native, data serves as the foundation for innovation, enabling AI to transform raw information into actionable insights. Cloudera powers modern AI-driven use cases, including Generative AI, by unlocking the full potential of enterprise data. This session will explore how Cloudera can help your organization accelerate AI deployment to production without compromising performance, accuracy, or security.

Are You a Target: Predicting Cyber Attacks with AI

Europe is no longer facing random cyber attacks. We are facing coordinated, state-sponsored operations now powered by AI. From ransomware to infrastructure sabotage, Advanced Persistent Threat (APT) groups are targeting enterprises, supply chains, defense contractors, and critical infrastructure with precision and speed. The reality is that most organizations are already being targeted; they just don’t see it. In this talk, we reveal what happens before an attack actually hits. We demonstrate how AI can detect early indicators of ransomware and advanced threats by learning attack patterns and correlating telemetry, predicting attacks before impact.

Shadow AI: The Trojan Horse of AI Security

Following the rapid rise of autonomous AI agents in 2025, the enterprise security landscape is undergoing a critical transformation. While agentic AI offers massive productivity gains, it deepens the security risk through sophisticated attack vectors like memory poisoning and privilege abuse. At the same time, it also broadens the perimeter through Shadow AI - the unsanctioned use of AI-related tools by employees. This leads to a dangerous "visibility gap".

Most enterprises lack the monitoring tools to detect unauthorized agentic workflows or local Large Language Model (LLM) deployments, leaving them exposed to data exfiltration and other attacks. The impending regulations around AI, such as the EU AI Act, are shifting liability to enterprises, making ignorance of Shadow AI a significantly costly legal and financial risk.

To limit their exposure, enterprises need a strategic pivot from traditional block-and-deny tactics toward radical observability. Effective defense requires the automatic detection of unknown assets, cross-domain data correlation, and the rigorous management of non-human identities (NHIs). Security operations must evolve to employ agentic defenses - utilizing AI to monitor and counter other AI agents. To survive this "AI Wild West", companies must treat security as a business enabler rather than a barrier. Success in 2026 depends not on building higher walls, but on achieving total visibility and implementing automated, agent-driven response capabilities to secure the future of digital work.

The Next Generation of AI Tools Will Be Built Around How People Act

Co-Founder & CEO
@
thesify.ai
We have all seen them — the hastily retrofitted chatbots and assistants that have metastasised across almost every app. And by now we know where they are genuinely helpful and where they are a nuisance. Their most striking problem is that their all-or-nothing approach turns copilots into autopilots: after the initial prompt, the user loses control. Too often the result is slop, and in many contexts it encourages deceiving potential customers or readers. But a different generation of tools is emerging. Rather than a copilot, we should think of AI as an exoskeleton for the mind: something that makes us more capable and more precise, while keeping us firmly in control. At Thesify, we are building AI tools for academia, a domain with high standards of authorship, accountability and transparency. That constraint has forced us to design AI around the ways our users actually work, not the other way around. In this talk, I will illustrate these points using examples of first-generation AI tools that work and ones that don't. I will then show the simple principles that underlie the next generation of AI tools now quickly gaining popularity. Some of these examples are drawn from our own customer research at Thesify and some from other tools. I will conclude with an outlook on the main trends we can see in AI tools.

How to Create Trustworthy AI Solutions for Regulated Industries

Business Development Operations Director
@
Oxagile
We are drowning in AI demos that dazzle in the moment but fail in production. Why? Because Generative AI is incredibly good at returning an "echo" of your request - smooth and confident - but terrible at making a commitment. The biggest bottleneck for GenAI adoption in 2026 isn't intelligence or latency - it's trust. We will present a real case study of how Oxagile's 20+ years of engineering experience were converted into a trustworthy solution for a regulated Swiss organization (Pharma, FinTech).

Systems of Action: Data, Decisions and the New Agentic Operating Model

Senior Industry Solutions Consultant
@
MongoDB

Enterprises have long relied on systems of record: databases and applications optimized for capturing and reconciling business data as the authoritative source of “what happened”. These human-centric architectures with rigid schemas and batch processes create barriers for agentic AI, which requires perceiving context, reasoning, and taking real-time action. This session explores the shift to systems of action, where an intelligence layer augments (rather than replaces) existing systems of record, enriching business objects with agent-generated insights to enable faster, smarter decisions.

Open Machine Learning Ecosystem

Building with open-source AI models has many benefits: it ensures privacy, gives the application owner control and transparency over the model lifecycle, and cuts costs at scale. In this talk, I will walk through the state of open AI — the workflows, tooling, and more for building with open models.

Building Chatbots in Minutes: How Migros Made GenAI Fast, Secure and Enterprise‑Ready

GenAI promises to transform how organizations work — but turning that promise into secure, scalable, real‑world solutions is often harder than it seems. In this talk, discover how Migros built a flexible, enterprise‑grade chatbot platform that enables teams to create powerful assistants in just minutes, without compromising on security, governance, or quality.
Two years ago, we created our first chatbot — and quickly learned that technical innovation was only one part of the challenge. Alongside building early prototypes, we had to navigate stakeholder alignment and define governance models that satisfied both business needs and strict security requirements. This journey laid the foundation for what would eventually become our Chatbot Creator Platform.
In this session, we’ll show you that platform in action, share the lessons we learned while creating it, and explain how we built a system that now empowers teams across Migros to ship AI assistants in record time. You’ll walk away with practical insights for scaling GenAI in a complex enterprise environment — beyond the hype and toward real, sustainable impact.

Migrating Complex Systems with Agentic AI

Over 10,000 SAP BW systems face end-of-support by 2030 — a migration challenge so complex that traditional automation cannot scale to meet it. Manual migrations take two or more years, are error-prone, and require scarce expertise. We're using agentic AI to change that.

We'll show how multi-agent systems reverse-engineer legacy platforms, construct knowledge graphs of thousands of interdependent objects, and autonomously generate complete modernized systems — including the migration tooling itself.

Our three-phase framework — Reverse, Rethink, Rebuild — is a generalizable pattern for any complex system transformation, compressing years of work into months with automated validation at every step.

Attendees will leave with practical insights on multi-agent orchestration, knowledge graphs for spec-driven development, synthetic data strategies for safe testing at scale, and hard-won lessons about where AI excels versus where human judgment remains essential.

Grounded in production experience, not theory.

The Intelligence Bomb. Do We Want to Master or Submit?

Not a single day goes by without a new record investment in the field of artificial intelligence. The unit of measurement is no longer billions, but trillions and beyond. The arms race between tech giants seems limitless – and in any case defies the physical limits of our planet.

AI is infiltrating all areas of human activity at an unimaginable speed. Mastering artificial intelligence is the most significant quantum leap since the advent of the atomic bomb.

Does the future belong to the American and Chinese tech giants? Are Switzerland and Europe doomed to follow and submit?

There is an alternative. Sovereign, collaborative, efficient and ethical.

Giotto.ai seeks to exceed the current capabilities of AI to push the boundaries towards artificial intelligence capable of going beyond memorisation – towards reflection. Giotto.ai is developing technology that stands out for its ability to generalise tasks, solve problems and offer transformative potential for industries and society in general.

Far more efficient than large LLMs and vastly less energy- and data-intensive, the solution advocated by Giotto.ai is based on two fundamental pillars: sovereignty and efficiency.

- Yes, it is possible to escape the frantic race for resources and infrastructure.
- Yes, it is possible to assert Swiss and European leadership in artificial intelligence that serves society, democracy and humanity.

Switzerland and Europe have all the talent, universities and centres of expertise needed to succeed. Together, let's build networks of investors and developers and create the conditions to control our own destiny.

Submitting is not an option. We can decide.

AI Concierge - A Solution to Generate Counter Offers Instantaneously

Selling insurance or banking products usually involves analysing a lot of documents. With the AI Concierge, we greatly simplify our insurance and banking agents' lives. The concierge analyses third-party car insurance contracts as well as securities account statements and instantaneously creates a counter offer or an investment proposal.

More speakers will be announced shortly. Want to share your ideas?

Register

Join us on 1–2 April 2026
at Volkshaus Zürich

1–2 April 2026
Volkshaus Zürich

Conference Pass

Phase II

Available until 28 February, 11:59PM GMT+1
1 April 2026
Volkshaus Zürich

Day Pass: 1 April

Phase II

Available until 28 February, 11:59PM GMT+1
2 April 2026
Volkshaus Zürich

Day Pass: 2 April

Phase II

Available until 28 February, 11:59PM GMT+1
1–2 April 2026
Volkshaus Zürich

Student Pass

Available until 1 April 2026

All ticket purchases are non-refundable and cannot be exchanged.

The speaker list and other event details might be subject to change.

Purchasing a ticket implies consent to being photographed or filmed for event promotion.

2026 Sponsors and Partners

Supported by
industry leaders and innovative startups

We are building a community of forward-looking practitioners who are at the frontier of GenAI innovation. Interested to join?

Become a partner

The Turing Genius

Neural Masters

Generative Starters

Startup Allies

Hackathon Co-organizers

Hackathon Partners

Corporate Partners

Special Partner for Ethical AI, Cybersecurity Governance & Human Rights

Community Partners

Join us

Shape the future of GenAI in Switzerland and beyond

Join us for the next edition of GenAI Zürich to learn about the latest developments in the field, network with like-minded professionals, and gain deeper insights into the GenAI applications reshaping both our professional and personal lives.