

Co-Creation Provenance Lab – Auditing AI Summaries for Representational Fairness
- Year: 2026
- Category: Impact Achievers
- Stage: Semifinalist
Summarization is becoming infrastructure. Governments, enterprises, and researchers now rely on LLMs to compress hundreds of human contributions into a few paragraphs that shape real decisions. Yet we audit model outputs far more than we audit what gets excluded. I built this tool to make representational loss measurable, before it quietly scales.
I am applying because this feels like an inflection point. As AI systems mediate more collective input, representational fairness is no longer a theoretical concern: it affects policy, product direction, funding decisions, and public trust. I want to test this approach in serious, high-impact settings and learn where it creates the most value.
Success means two things: rigorous feedback from experts who understand applied GenAI at scale, and partnerships that put the tool into live environments where representation truly matters. If this work helps shift the standard from "Is the output correct?" to "Whose voice shaped it?", that would be a meaningful win.
Sachit Mahajan
Senior Scientist