Advanced Features

Introduction to AI Foundry & Validators

Discover DisseqtAI's comprehensive library of validators: specialized tools that automatically evaluate your AI's quality, safety, and performance across dozens of metrics. Learn what validators are, how they work, and which ones matter for your use case.

Discover DisseqtAI's comprehensive library of validators: specialized tools that automatically evaluate your AI's quality, safety, and performance across dozens of metrics. Learn what validators are, how they work, and which ones matter for your use case.

Last Updated on November 2, 2025

What Are Validators?

Validators are automated tests that evaluate specific aspects of your AI's inputs or outputs. Each validator focuses on one particular quality or concern.

The Toxicity validator checks if responses contain offensive language. The Hallucination validator detects when your AI makes up facts. The Relevance validator measures whether answers actually address the question. There are dozens more, each serving a specific purpose.

Accessing the Validator Library

Find "AI FOUNDRY" in your left sidebar. Click "LLM Validators" to see the most commonly used validators.

Five Validator Categories

LLM Validators cover general language model testing. Safety, quality, accuracy, and compliance metrics that apply to any AI generating text. These are your foundation validators.

RAG Validators focus on Retrieval-Augmented Generation systems. Faithfulness to source documents, context relevance, citation quality, and answer completeness. Use these if your AI retrieves information before answering.

Agentic Validators test AI agents that take actions and use tools. Goal achievement, planning quality, tool usage appropriateness, and task completion. Essential if your AI does more than chat.

MCP Validators evaluate Model Context Protocol compliance. Proper context handling, protocol adherence, and standardised communication.

Themes Classifier helps categorise content by topic, intent, sentiment, and themes. Useful for understanding what users are actually asking about.

Input vs Output Validation

Input Validators check user prompts before they reach your AI. Prompt Injection validator detects malicious prompts. Invisible Text validator catches hidden instructions. These protect your AI from bad inputs.

Output Validators check your AI's responses after generation. Toxicity ensures responses aren't offensive. Hallucination catches made-up facts. Relevance confirms answers address the question.

Many validators do both. Toxicity can check if user input contains toxic language and whether your AI generated toxic responses.

© Disseqt AI Product Starter Guide

© Disseqt AI Product Starter Guide

© Disseqt AI Product Starter Guide