Plan your agent
EARLY ACCESS
Before you build your AI agent, plan ahead to ensure a successful implementation. Good planning covers architectural decisions, tool selection, and workflow design.
You should also plan behavioral guidelines to define how the agent communicates and handles edge cases.
Identify business outcomes
Define the following before you build:
- What do you want to achieve? For example: reduce support time, increase conversion rate, improve customer satisfaction.
- What tasks must be performed? For example: book tickets, update records, schedule meetings.
- How will you measure success? For example: average handling time, first contact resolution, lead conversion rate.
Example core tasks by team:
- Support: Ticket classification, knowledge base lookup, status updates, escalation management.
- Sales: Lead qualification, opportunity creation, meeting scheduling, CRM updates.
Multi-agent system considerations
In a multi-agent system, multiple AI agents work together to achieve a goal.
When to use a multi-agent system
Consider a multi-agent approach instead of a single large agent when:
- The agent needs to handle multiple distinct task categories.
- Different capabilities are independent or interact minimally.
- You want to keep complexity manageable as the system scales.
Architecture components
Consider the following architecture components:
- Orchestrator: Receives end user requests, identifies which subagent handles them, and manages tool routing.
- Subagents: Each agent focuses on a specific domain: knowledge retrieval, CRM updates, scheduling, classification, and so on.
For detailed architecture, example workflows, and subagent versioning, see Orchestration.
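The orchestrator-and-subagent split above can be pictured as a simple router. The following is a minimal sketch, not the product's API: the subagent functions and the keyword-based routing are hypothetical, and a real orchestrator would rely on the platform's intent classification rather than substring matching.

```python
# Minimal sketch of an orchestrator routing end user requests to
# domain-specific subagents. All names here are illustrative.

def knowledge_agent(request: str) -> str:
    # Subagent focused on knowledge retrieval.
    return f"[knowledge] answering: {request}"

def crm_agent(request: str) -> str:
    # Subagent focused on CRM updates.
    return f"[crm] updating record for: {request}"

def scheduling_agent(request: str) -> str:
    # Subagent focused on scheduling.
    return f"[scheduling] booking: {request}"

# Hypothetical keyword-to-subagent routing table.
SUBAGENTS = {
    "refund": crm_agent,
    "order": crm_agent,
    "meeting": scheduling_agent,
    "schedule": scheduling_agent,
}

def orchestrate(request: str) -> str:
    """Route a request to the first matching subagent;
    fall back to knowledge retrieval."""
    lowered = request.lower()
    for keyword, agent in SUBAGENTS.items():
        if keyword in lowered:
            return agent(request)
    return knowledge_agent(request)
```

The key design point is that the orchestrator owns routing while each subagent stays narrow, so adding a new capability means adding a subagent rather than growing one large agent.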
Plan the workflow
Follow this process before you implement AI agents:
- Decide if AI agents are required: Determine whether the use case requires AI agents or if chatbots are sufficient.
- Define KPIs, business outcomes, and core tasks: Identify specific outcomes you want to achieve.
- Identify all use cases: List all scenarios the agent handles, including edge cases and unexpected situations.
- Assess architectural complexity: Decide between a single-agent and a multi-agent system.
- Identify required tools: Identify both external integrations and internal systems.
- Validate integration paths: Confirm how the agent integrates with Chatbots and define escalation flows to human agents.
- Define behavioral guidelines, safety rules, and tone: Set clear boundaries for what the agent can and cannot do. See Behavioral rules and guardrails and Write prompts for AI agents.
- Identify test cases and evaluation criteria: Define how you will measure success before you start building.
- Develop, test, and refine: Refine the agent based on performance against your test cases.
- Deploy and monitor: Monitor performance and refine based on real-world usage.
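The "identify test cases" step above can start as a plain list of scenario and expected-behavior pairs that you write before development and reuse during refinement. A minimal sketch with hypothetical scenarios:

```python
# Hypothetical test cases for a support agent, written before development.
# Each case pairs an end user input with the behavior you expect.
TEST_CASES = [
    {"input": "Where is my order?",
     "expect": "looks up order status and reports it"},
    {"input": "Mail my refund to me in cash",
     "expect": "explains supported refund methods; does not invent new ones"},
    {"input": "Give me another customer's address",
     "expect": "refuses; never shares personally identifiable information"},
    {"input": "asdf qwerty ???",
     "expect": "asks a clarifying question or escalates to a human agent"},
]

def evaluate(agent, cases):
    """Run each case through an agent callable and pair inputs
    with responses for manual or automated review."""
    return [(case["input"], agent(case["input"])) for case in cases]
```

Reviewing the collected responses against each `expect` description, by hand at first, gives you a repeatable baseline for the develop-test-refine loop.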
Plan behavioral guidelines
AI agents behave differently from rule-based systems. To ensure a successful implementation, understand how their behavior differs, what you can and cannot control, and how to define the right boundaries.
Predictability: rule-based vs AI agents
| Action | Rule-based systems | AI agents |
|---|---|---|
| Behavior | Predefined and predictable. | Probabilistic and adaptive. |
| Control | Fully controlled. | Context-driven. |
| Responses | You know exactly how the system responds in every scenario. | The agent interprets the situation and chooses an appropriate response. |
| Adaptability | Cannot adapt if end users deviate from the intended path. | Can handle unexpected inputs, but some behavior cannot be predicted or controlled in advance. |
What you cannot control
Understanding these limitations is essential before you deploy an AI agent in production:
- Exact phrasing: The agent generates responses dynamically. If you need exact wording, such as legal disclaimers, write it into your application rather than relying on the agent to generate it.
- Synonyms and rewording: The agent understands intent. "I want to cancel my order" and "Can you stop my shipment?" may be handled the same way. You cannot script exact responses.
- Scope boundaries: If you do not explicitly restrict a behavior, the agent may attempt it if it seems reasonable.
- Individual messages: Unlike rule-based chatbots, you cannot approve every possible response. You define guidelines and boundaries, not exact outputs.
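For wording that must be exact, such as a legal disclaimer, the guidance above is to add it in your application rather than rely on the agent to generate it. A minimal sketch of that pattern; the function name and disclaimer text are hypothetical:

```python
# Append exact, application-controlled wording after the agent's
# dynamically generated reply, so the wording never depends on generation.
LEGAL_DISCLAIMER = (  # hypothetical text; use your own approved wording
    "This response is provided for informational purposes only."
)

def deliver_response(agent_text: str) -> str:
    """Combine the agent's dynamic reply with fixed wording
    that the application, not the agent, controls."""
    return f"{agent_text}\n\n{LEGAL_DISCLAIMER}"
```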
Behavioral rules and guardrails
Define explicit guidelines across these areas:
| Area | Guidelines |
|---|---|
| Capability boundaries | Define what the agent can and cannot do. Be specific and comprehensive. Never promise capabilities that depend on tools not added to the agent. |
| Communication style | Set tone of voice, brand voice, language preferences, and level of formality. |
| Mandatory restrictions | Always confirm before modifying end user data. Never share personally identifiable information. Never offer actions that depend on unavailable tools. |
| Safety and compliance | Include legal disclaimers, privacy requirements, industry-specific regulations, and rules for when to escalate to a human agent. |
If you do not explicitly define these constraints, the agent makes assumptions. It may claim capabilities it does not have, take unintended actions, or generate responses that violate your guidelines.
Balancing control and adaptability
When you use AI agents, you trade strict word-for-word control for adaptability and intelligence. The agent handles unexpected situations more effectively, but requires well-defined boundaries and behavioral rules to stay within the intended scope.
Include these guidelines in your system prompt.
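A system prompt covering the guideline areas above might look like the following sketch. The brand name, wording, and structure are illustrative, not a required format; adapt them to your own policies.

```python
# Illustrative system prompt covering capability boundaries, communication
# style, mandatory restrictions, and escalation. "Acme" is a placeholder.
SYSTEM_PROMPT = """\
You are a customer support agent for Acme.

Capabilities:
- You can look up order status and answer questions from the knowledge base.
- You cannot issue refunds; offer to escalate instead.

Communication style:
- Friendly and concise; match the end user's language.

Mandatory restrictions:
- Always confirm before modifying end user data.
- Never share personally identifiable information.
- Never offer actions that depend on unavailable tools.

Escalation:
- Transfer to a human agent for legal questions, billing disputes,
  or repeated misunderstandings.
"""
```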
For examples and best practices, see Write prompts for AI agents.