Sports & Entertainment

55,000+ Prompts, Zero Unsafe Issues at Launch: Responsible AI Evaluation for fan engagement platform

55,000+ Prompts, Zero Unsafe Issues at Launch: Responsible AI Evaluation for fan engagement platform

How disseqt AI stress-tested an AI fan assistant serving over tens of millions of fans across four rigorous evaluation phases — surfacing vulnerabilities, validating fixes, and giving the organisation the confidence to go live responsibly.

  • 55,000+

    Total prompts executed end to end

  • 55,000+

    Total prompts executed end to end

  • 55,000+

    Total prompts executed end to end

  • 15,000+

    Baseline safety, bias & privacy prompts

  • 15,000+

    Baseline safety, bias & privacy prompts

  • 15,000+

    Baseline safety, bias & privacy prompts

  • 18,570

    Adversarial jailbreak prompts across 31 techniques

  • 18,570

    Adversarial jailbreak prompts across 31 techniques

  • 18,570

    Adversarial jailbreak prompts across 31 techniques

CHALLENGE

An AI assistant serving 80 million fans had to be safe before it could go live

CHALLENGE

An AI assistant serving 80 million fans had to be safe before it could go live

A prominent fan engagement platform  through a disseqt AI partner  had built an AI assistant to answer fan questions on everything from match facts and player histories to live statistics. But with a fanbase exceeding 80 million, the stakes for getting it wrong were enormous and reputational risk.

Because the assistant relied heavily on external data sources and APIs, it was exposed to a range of risks that had to be validated before launch:

  • Jailbreaking: Adversarial users attempting to bypass the AI's safety guardrails through manipulative prompts.

  • Misinformation: Incorrect or fabricated content about players, matches, or league facts reaching fans at scale.

  • Bias drift: Subtle shifts in AI behaviour across longer conversations that could produce unfair or inconsistent outputs.

  • Privacy leaks : Unintended exposure of sensitive data or API keys through the AI's responses.

SOLUTION

A four-phase responsible AI evaluation - 55,000+ prompts end to end

SOLUTION

A four-phase responsible AI evaluation - 55,000+ prompts end to end

disseqt performed a comprehensive responsible AI evaluation across four structured phases, executing over 55,000 prompts to surface, validate, and resolve every safety risk before the assistant went live.

55,000+

Total prompts executed end to end

15,000+

Baseline safety, bias & privacy prompts

18,570

Adversarial jailbreak prompts across 31 techniques

1,025

Multi-turn conversational safety prompts

Logo

AI Assurance & Governance Layer for Enterprise Agentic Systems

© DISSEQT AI LIMITED

Logo

AI Assurance & Governance Layer for Enterprise Agentic Systems

© DISSEQT AI LIMITED