Enhance documentation: add details on rubric criteria, refine rubric feature, and update judge settings information
Update documentation for v0.4.0 release: revise version numbers, enhance module descriptions, and adjust links for clarity
Add Module Tier Labels feature with enum and interface updates; enhance documentation for clarity
Update documentation to include Red Teaming module details and version bump to v0.3.0
Add comprehensive documentation for developers and users
- Created Developer-Contributing.md to outline contribution guidelines, branching strategy, versioning, changelog discipline, and code conventions.
- Added Developer-Getting-Started.md to guide new developers in setting up the local development environment.
- Introduced Developer-Module-Development.md detailing how to create agent connector and judge modules.
- Established User-API-Keys.md for managing API keys for programmatic access to the REST API.
- Added User-Agents.md to explain agent management and configuration.
- Created User-Audit-Log.md to provide an overview of logged actions and how to read the audit log.
- Introduced User-Dashboard.md for aggregate quality metrics and run history.
- Added User-Documents.md for managing knowledge-base documents for AI-generated test questions.
- Created User-Getting-Started.md to provide an overview of mate and its deployment modes.
- Added User-Rubrics.md to define custom evaluation criteria for AI agent responses.
- Established User-Settings.md for configuring AI judge, question generation, and module settings.
- Created User-Test-Suites.md to manage test suites and cases within the evaluation workflow.