Enrich and customize the NeuralTrust platform with easy-to-integrate plugins
A dynamic load-balancing plugin that distributes traffic evenly across upstream targets for optimal performance.
Built by NeuralTrust
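As a conceptual illustration only (not the plugin's actual implementation or configuration), even traffic distribution can be pictured as simple round-robin rotation over upstream targets; the target names below are hypothetical:

```python
from itertools import cycle

# Conceptual round-robin sketch: each request goes to the next upstream
# in turn, so load spreads evenly across all targets.
class RoundRobinBalancer:
    def __init__(self, upstreams):
        self._ring = cycle(upstreams)

    def next_upstream(self):
        return next(self._ring)

balancer = RoundRobinBalancer(["model-a:8080", "model-b:8080", "model-c:8080"])
targets = [balancer.next_upstream() for _ in range(4)]
# Cycles evenly: model-a, model-b, model-c, then back to model-a.
```

A production balancer would also weigh targets by health and latency; this sketch shows only the even-distribution idea.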
Stores and retrieves responses based on semantic similarity, reducing latency and improving efficiency by reusing relevant answers from past interactions.
Built by NeuralTrust
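The core idea of semantic caching can be sketched as follows; the toy embedding vectors, threshold, and lookup function here are illustrative assumptions, not the plugin's API:

```python
import math

# Conceptual sketch of semantic caching: a stored response is reused when
# a new prompt's embedding is sufficiently similar to a cached prompt's
# embedding. Real deployments compute embeddings with a model; these
# vectors are toy values.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

cache = {(1.0, 0.0, 0.2): "Cached answer about refunds"}

def lookup(embedding, threshold=0.95):
    for key, response in cache.items():
        if cosine(embedding, key) >= threshold:
            return response  # cache hit: reuse the answer, skip the model call
    return None  # cache miss: forward the request to the model

hit = lookup((0.98, 0.05, 0.21))   # near-duplicate prompt reuses the answer
miss = lookup((0.0, 1.0, 0.0))     # unrelated prompt goes to the model
```

Because a hit skips the model call entirely, semantically repeated questions are answered with near-zero latency.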
Controls the number of requests sent to your AI models, preventing abuse, ensuring fair usage, and maintaining system performance.
Built by NeuralTrust
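Request-rate control is commonly built on a token bucket; the sketch below shows that general technique under assumed capacity and refill values, and is not TrustGate's actual configuration:

```python
import time

# Conceptual token-bucket sketch: each request consumes one token, and
# tokens refill at a fixed rate, so bursts beyond the bucket's capacity
# are rejected until tokens accumulate again.
class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True  # request permitted
        return False     # request rejected until tokens refill

bucket = TokenBucket(capacity=3, refill_per_sec=1)
results = [bucket.allow() for _ in range(5)]  # burst of 5 requests
# The first 3 pass; the rest are rejected until tokens refill.
```

In a gateway, one bucket is typically kept per API key or client, so one consumer's burst cannot starve the others.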
Restricts the size of incoming requests to protect your AI systems from overload, ensuring stability and efficient resource usage.
Built by NeuralTrust
Enhances safety by combining firewall rules and toxicity checks to block harmful, abusive, or policy-violating prompts before they reach your AI models.
Built by NeuralTrust
Analyzes prompts in real time to detect context-specific risks, ensuring sensitive or harmful requests are identified before reaching your AI systems.
Built by NeuralTrust
Adds an extra layer of safety using AWS Bedrock Guardrails to filter harmful, toxic, or unwanted content before it reaches your AI models.
Built by NeuralTrust
Uses OpenAI’s moderation tools to identify and block toxic or harmful content, ensuring safer interactions with your AI models.
Built by NeuralTrust
Leverages Azure Content Moderation to detect and filter toxic, offensive, or inappropriate content before it reaches your AI models.
Built by NeuralTrust
A sophisticated content filtering system designed to protect your AI gateway from potentially harmful or unwanted content.
Built by NeuralTrust
Enforces Cross-Origin Resource Sharing (CORS) policies to control and secure how your AI Gateway resources are accessed from different origins.
Built by NeuralTrust
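CORS enforcement can be sketched as an origin allow-list check; the header names below are the standard `Access-Control-*` response headers, while the allow-list and function are illustrative assumptions:

```python
# Conceptual CORS sketch: a preflight response carries the
# Access-Control-* headers only when the request's Origin is on the
# allow-list; otherwise the browser blocks the cross-origin call.
ALLOWED_ORIGINS = {"https://app.example.com"}

def preflight_headers(origin):
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
            "Access-Control-Allow-Headers": "Authorization, Content-Type",
        }
    return {}  # no CORS headers: the browser refuses the response

ok = preflight_headers("https://app.example.com")
blocked = preflight_headers("https://evil.example.net")
```

Note that CORS protects browsers' same-origin policy; it complements, rather than replaces, authentication on the gateway itself.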
Enforces security-related HTTP headers and validates incoming requests based on allowed hosts, HTTPS requirements, and security header policies.
Built by NeuralTrust
Detects and blocks prompt injection attacks to prevent manipulation or unauthorized control of your AI models.
Built by NeuralTrust
Analyzes and cleans code prompts to remove potentially dangerous or malicious instructions before they are processed by your AI models.
Built by NeuralTrust
The TrustLens telemetry provider enables observability in TrustGate by sending trace-level data to the TrustLens platform.
Built by NeuralTrust
Collects metrics such as request throughput, latency distributions, and service health indicators for monitoring your deployment.
Built by NeuralTrust
Enhance your AI systems with powerful plugins that provide seamless scalability, comprehensive monitoring, and full compliance with global standards.