Sports & Entertainment
How disseqt AI stress-tested an AI fan assistant serving over tens of millions of fans across four rigorous evaluation phases — surfacing vulnerabilities, validating fixes, and giving the organisation the confidence to go live responsibly.

A prominent fan engagement platform through a disseqt AI partner had built an AI assistant to answer fan questions on everything from match facts and player histories to live statistics. But with a fanbase exceeding 80 million, the stakes for getting it wrong were enormous and reputational risk.
Because the assistant relied heavily on external data sources and APIs, it was exposed to a range of risks that had to be validated before launch:
Jailbreaking: Adversarial users attempting to bypass the AI's safety guardrails through manipulative prompts.
Misinformation: Incorrect or fabricated content about players, matches, or league facts reaching fans at scale.
Bias drift: Subtle shifts in AI behaviour across longer conversations that could produce unfair or inconsistent outputs.
Privacy leaks : Unintended exposure of sensitive data or API keys through the AI's responses.
disseqt performed a comprehensive responsible AI evaluation across four structured phases, executing over 55,000 prompts to surface, validate, and resolve every safety risk before the assistant went live.
55,000+
Total prompts executed end to end
15,000+
Baseline safety, bias & privacy prompts
18,570
Adversarial jailbreak prompts across 31 techniques
1,025
Multi-turn conversational safety prompts

