Test an AI agent
EARLY ACCESS
There are two ways to test an AI agent:
- Preview - Available directly in the AI agent builder. Send messages to the agent and review responses in real time. Use this during development to quickly check behavior after making changes.
- Quality center - A separate page for running bulk predefined test cases. Create test groups and test cases that define expected agent behavior, then run them to verify consistency and catch regressions.
Preview
Use Preview to interact with your agent in real time while you configure it.
To start a preview:
- Open your agent in the AI agent builder.
- Select the Preview tab.
- Type a message in the Chat panel and send it to the agent.
The Chat panel has the following options:
- Restart: Reset the conversation and start a new session.
- Show logs panel: Open a side panel that shows detailed logs for each agent response, including tool calls and subagent calls.
- Options menu:
  - Save as test case: Save the conversation to reuse as a test case in Quality center.
  - Enter user destination: Simulate a user coming from a specific destination.
Save as test case
You can save a preview conversation as a test case to reuse in Quality center.
- After exchanging one or more messages with the agent, select Options > Save as test case.
- Enter a name for the conversation.
- Select Save as test case.
The saved conversation becomes a test case: the messages you sent become the inputs, and the agent's replies become the expected responses.
Quality center
For structured, repeatable testing, use Quality center to create and run test groups before publishing your agent.
- Run tests - Send test messages manually or run automated test groups to validate agent behavior.
- Create test group - Organize test cases into groups for a specific AI agent or use case.
- Create test cases - Define individual test scenarios with inputs and expected agent responses.
- Review test results - Analyze pass and fail outcomes and identify areas where the agent needs improvement.
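Conceptually, a test case pairs scripted user inputs with expected agent responses, and a test group runs each case and reports pass or fail. The sketch below is purely illustrative of that model, not the product's actual implementation; the `respond` callable and the exact-match comparison are hypothetical stand-ins for sending a message to the agent and judging its reply.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Messages the simulated user sends, in order.
    inputs: list
    # Responses the agent is expected to give, in the same order.
    expected: list
    name: str = "unnamed"

@dataclass
class TestGroup:
    name: str
    cases: list = field(default_factory=list)

    def run(self, respond):
        # `respond` is a hypothetical stand-in for the agent:
        # it takes a user message and returns the agent's reply.
        results = {}
        for case in self.cases:
            actual = [respond(msg) for msg in case.inputs]
            results[case.name] = actual == case.expected
        return results

# Toy agent that always greets, to show one pass and one fail:
agent = lambda msg: "Hello! How can I help?"
group = TestGroup("greetings", [
    TestCase(["Hi"], ["Hello! How can I help?"], name="basic-greeting"),
    TestCase(["Refund please"], ["Sure, let me check your order."], name="refund"),
])
print(group.run(agent))  # basic-greeting passes, refund fails
```

A real Quality center run evaluates responses against the configured agent rather than an exact string match, but the input/expected pairing and the per-case pass/fail outcome follow this shape.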