Banking & Financial Services
How disseqt helped a major bank eliminate biased AI decisions, prevent prompt injection vulnerabilities, and build a compliant, auditable AI pipeline for customer dispute resolution.

As AI became central to the bank's customer operations, new vulnerabilities began to emerge. The bank's AI-powered systems, including its customer-facing Copilot and internal dispute management tools, were processing sensitive financial requests at scale, but without sufficient AI Governance and RAIOps capabilities in place.
Three critical new risks had surfaced:
Prompt injection attacks: Malicious inputs through the Copilot could compromise the system and expose sensitive customer data, creating serious security and reputational risk for the bank.
Biased AI decisions: The AI model handling chargeback requests was rejecting a high volume of disputes, deeming 34% of them fraudulent. Yet a disputed charge can stem from a forgotten purchase just as easily as from genuine fraud, and these nuances are hard for LLMs to decipher, so the supporting evidence needs to be checked. Under the EU AI Act, banks must also supply a six-month history of AI decisions so they can be audited for bias and black-box behaviour.
Compliance gaps: Leadership lacked a reliable, automated mechanism to generate regulatory reports or to give board members and regulators actionable visibility into AI-driven operations.
Left unaddressed, these issues risked regulatory penalties, erosion of customer trust, and significant financial liability.
For each dispute, the chargeback team has to answer a number of questions:
Does this look like a legitimate dispute or a pattern of abuse?
Has this customer filed multiple chargebacks recently?
Does the reason code match the transaction data?
Is this merchant known for high dispute rates?
Traditionally, these decisions were made by human analysts, but AI systems are increasingly taking on the responsibility, making autonomous or semi-autonomous decisions about whether to progress a chargeback.
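The four questions above can be sketched as deterministic triage checks that run before (or alongside) any AI decision. This is a minimal illustration, not the bank's actual logic: the data fields, the 90-day window, the three-filing threshold, and the 2% merchant dispute-rate cutoff are all assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record shape; field names are illustrative only.
@dataclass
class Chargeback:
    customer_id: str
    merchant_id: str
    reason_code: str
    transaction_mcc: str  # merchant category code on the disputed transaction
    filed_on: date

def triage_flags(cb, history, merchant_dispute_rate, reason_to_mcc):
    """Return rule-based flags for an analyst (or model) to review."""
    # Has this customer filed multiple chargebacks recently?
    recent = [h for h in history
              if h.customer_id == cb.customer_id
              and cb.filed_on - h.filed_on <= timedelta(days=90)]
    return {
        "repeat_filer": len(recent) >= 3,
        # Does the reason code match the transaction data?
        "reason_mismatch": cb.transaction_mcc not in reason_to_mcc.get(cb.reason_code, set()),
        # Is this merchant known for high dispute rates?
        "risky_merchant": merchant_dispute_rate.get(cb.merchant_id, 0.0) > 0.02,
    }
```

Any raised flag would route the dispute toward closer human review rather than an automated outcome.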
The bank deployed disseqt AI and Copilot across its chargeback resolution workflow to continuously monitor, evaluate, and govern every AI-generated response before it reached employees or customers, giving employees a high degree of confidence in decisions augmented by AI systems.
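A governance gate of this kind can be sketched as a wrapper that screens inputs for injection attempts and logs every decision for later audit. disseqt's actual API is not described in this case study, so the `govern` function, the regex patterns, and the log format below are all assumptions for illustration.

```python
import json
import re
from datetime import datetime, timezone

# Toy screening patterns; a real system would use far more robust detection.
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal .*system prompt",
]

audit_log = []  # in production: durable storage retained for the audit window

def govern(user_input: str, generate):
    """Screen the input, generate a response, and log the decision."""
    if any(re.search(p, user_input, re.IGNORECASE) for p in INJECTION_PATTERNS):
        response = "This request cannot be processed."
        outcome = "blocked"
    else:
        response = generate(user_input)  # call out to the underlying model
        outcome = "released"
    audit_log.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "input": user_input,
        "response": response,
        "outcome": outcome,
    }))
    return response
```

Because every request and response passes through the gate, the audit log doubles as the decision history a regulator could request.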
