Introduction
Authentication and API keys
Before you can use the SDK, you'll need two things: an API key and a project ID. These credentials authenticate your application and route your telemetry to the right workspace in TestSavant Studio.
Getting your API key
- Sign in to TestSavant Studio at https://app.testsavant.ai
- Navigate to API Keys at https://app.testsavant.ai/apikeys
- Create a new API key and give it a descriptive name
- Copy the key immediately — you won't be able to see it again after closing the dialog
- Note your project ID — it's displayed in your project settings
Store both values as environment variables, for example in a `.env` file:

```
TESTSAVANT_API_KEY=your_api_key_here
TESTSAVANT_PROJECT_ID=your_project_id_here
```

Then load them in your application:
```python
import os
from testsavant.guard import InputGuard

TESTSAVANT_API_KEY = os.getenv("TESTSAVANT_API_KEY")
TESTSAVANT_PROJECT_ID = os.getenv("TESTSAVANT_PROJECT_ID")

guard = InputGuard(API_KEY=TESTSAVANT_API_KEY, PROJECT_ID=TESTSAVANT_PROJECT_ID)
```

Observability and Studio integration
Every scan the SDK performs generates structured telemetry that flows into TestSavant Studio—a central control plane where your team can:
- Review traces and replay user interactions that triggered guardrails
- Export audit-ready evidence packets for SOC 2, ISO 42001, or GDPR reviews
- Tune guardrail thresholds based on real production data
- Create red-team test packs and validate defenses before launch
Studio turns your SDK usage into a continuous assurance loop: test, deploy, observe, adapt.
Core concepts
Scanners
Scanners are modular models that evaluate data against a specific risk or policy.
Input scanners protect what goes into your model:
| Scanner | Description |
|---|---|
| Anonymize | Identifies and redacts PII like emails, phone numbers, SSNs |
| BanCode | Blocks code snippets or scripts |
| BanSubstrings | Filters specific banned text patterns |
| BanTopic | Blocks requests about restricted subjects |
| Code | Detects programming languages in prompts |
| Gibberish | Stops nonsense prompts from wasting tokens |
| ImageNSFW | Screens images for explicit or unsafe content |
| ImageTextRedactor | Redacts sensitive text from images |
| InvisibleText | Detects hidden or zero-width characters |
| Language | Enforces allowed language requirements |
| PromptInjection | Detects jailbreak attempts and adversarial prompts |
| Regex | Custom pattern matching for specific use cases |
| Secrets | Detects API keys, tokens, and credentials |
| Sentiment | Analyzes emotional tone of user input |
| TokenLimit | Enforces maximum token counts |
| Toxicity | Blocks hateful, harassing, or unsafe user input |
Output scanners protect what comes out of your model:
| Scanner | Description |
|---|---|
| Anonymize | Redacts PII from model responses |
| BanCode | Strips executable code blocks from outputs |
| BanSubstrings | Removes banned text patterns from responses |
| BanTopic | Keeps restricted subjects out of completions |
| Bias | Flags biased or unfair statements |
| FactualConsistency | Compares responses against source material to detect hallucinations |
| Gibberish | Prevents meaningless responses |
| JSON | Validates and enforces JSON output format |
| Language | Ensures responses match allowed languages |
| LanguageSame | Verifies reply matches customer's input language |
| MaliciousURL | Scans for phishing or dangerous links |
| NoRefusal | Detects when the model unnecessarily refuses valid requests |
| PromptInjection | Catches reflected injection attempts in outputs |
| ReadingTime | Estimates time required to read the response |
| Regex | Custom pattern matching for output validation |
| Sentiment | Analyzes emotional tone of model responses |
| Toxicity | Catches offensive or harmful language in completions |
You can mix and match scanners, set custom thresholds, and combine multiple checks for defense-in-depth.
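As an illustration of that defense-in-depth pattern, here is a minimal sketch in which several scanner scores are checked against per-scanner thresholds. The scoring functions are toy stand-ins, not the SDK's real scanners; they only show how custom thresholds and multiple checks combine.

```python
# Toy stand-ins for scanner scoring (the real SDK scanners are ML models).
def toxicity_score(text: str) -> float:
    return 0.9 if "hate" in text.lower() else 0.1

def gibberish_score(text: str) -> float:
    # Crude heuristic: a single long run with no spaces looks like gibberish.
    return 0.8 if " " not in text else 0.05

# Per-scanner thresholds: a score at or above the threshold flags the input.
THRESHOLDS = {"Toxicity": 0.5, "Gibberish": 0.7}
SCANNERS = {"Toxicity": toxicity_score, "Gibberish": gibberish_score}

def scan(text: str) -> list[str]:
    """Return the names of all scanners that flagged the text."""
    return [name for name, score in SCANNERS.items()
            if score(text) >= THRESHOLDS[name]]

print(scan("hello world"))   # []
print(scan("asdkjhqwlekj"))  # ['Gibberish']
```

Lowering a threshold makes that scanner stricter; because every scanner runs independently, one lenient check never masks another scanner's hit.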
Guards
A Guard is the orchestrator. You configure guards with your API credentials and project ID, then add the scanners you need.
- InputGuard validates prompts before they hit your model
- OutputGuard validates completions before they reach end users
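As a sketch of how the two guards bracket a model call, the flow looks like the following. The `ScanResult` shape and the guard functions here are simplified stand-ins for `InputGuard` and `OutputGuard`, not the SDK's real classes:

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    valid: bool
    reason: str = ""

def input_guard(prompt: str) -> ScanResult:
    # Stand-in: a real InputGuard would run its configured input scanners.
    if "DROP TABLE" in prompt:
        return ScanResult(False, "PromptInjection")
    return ScanResult(True)

def output_guard(completion: str) -> ScanResult:
    # Stand-in: a real OutputGuard would run its configured output scanners.
    if "secret" in completion.lower():
        return ScanResult(False, "Secrets")
    return ScanResult(True)

def guarded_call(prompt: str, model) -> str:
    pre = input_guard(prompt)          # validate before the model sees it
    if not pre.valid:
        raise ValueError(f"input blocked: {pre.reason}")
    completion = model(prompt)
    post = output_guard(completion)    # validate before the user sees it
    if not post.valid:
        raise ValueError(f"output blocked: {post.reason}")
    return completion

print(guarded_call("What is 2+2?", lambda p: "4"))
```

The key point is the ordering: input validation happens before the (often expensive) model call, and output validation happens before anything reaches the end user.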
Deployment flexibility
TestSavant SDK works with:
- LangChain, LlamaIndex, and custom orchestration frameworks
- Agent workflows, including multi-step reasoning and tool-calling pipelines
- SaaS, VPC, or on-prem deployments with customer-managed keys and regional data residency
You can run guardrails synchronously for real-time chat or asynchronously for batch processing and background agents.
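For the asynchronous case, a blocking scan can be offloaded so it doesn't stall an event-loop-based service. This is a minimal sketch using the standard library's `asyncio.to_thread`; `scan_prompt` is a placeholder for a real synchronous guard call, since the SDK's own async interface isn't shown here:

```python
import asyncio

def scan_prompt(prompt: str) -> dict:
    # Placeholder for a blocking guard scan (e.g. a synchronous SDK call).
    return {"prompt": prompt, "flagged": False}

async def handle_request(prompt: str) -> dict:
    # Run the blocking scan in a worker thread so the event loop stays free
    # to serve other requests while the scan is in flight.
    return await asyncio.to_thread(scan_prompt, prompt)

result = asyncio.run(handle_request("Hello"))
```

The same pattern works for batch processing: schedule many `handle_request` coroutines with `asyncio.gather` and the scans run concurrently across threads.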
What's next
Ready to integrate guardrails into your AI stack? Here's where to go:
- Installation → Get the SDK installed and authenticated in under five minutes
- Quickstart → Run your first scan and see results
- Input Scanners → Explore all available input protection modules
- Output Scanners → Learn how to validate model completions