Agent Editor

How to create and edit an agent using the Agent Editor

Every agent's behavior is fully configurable in its Agent Editor. There are four main sections to the Agent Editor:

  • Prompt: On the left-hand side, these are the natural language instructions you are giving the agent.

  • Testing interface: An easy-to-use interface for testing your agent.

    • Click the "lightbulb" button to get an explanation of why the agent responded this way.

    • Click the "thumbs down" button for line-by-line suggestions on how to improve your prompt.

  • Simulations: Create and run intelligent, automated tests of common interactions and user pathways.

  • Ask AI: An intelligent assistant that can explain, provide feedback on, and even rewrite your prompt to achieve your desired behavior.

Toggle between the Testing interface and Ask AI on the right-hand side of the page. This lets you view and edit the prompt while simultaneously testing it and getting feedback from Ask AI.

Save changes to the prompt by clicking "Save draft." Your colleagues can test a draft before it is published.

Publish your agent by clicking "Publish." While an agent can have many drafts, it may only have one published version at a time. The published version is what your leads interact with.

How can Ask AI help me?

Ask AI is an intelligent assistant that can explain, provide feedback on, and even rewrite your prompt to achieve your desired behavior. It is also integrated into the Test chat interface. Some examples of questions you can ask:

  • "Check for any conflicting instructions in my prompt."

  • "Review the test conversation. Find any examples of the AI failing to adhere to the prompt instructions."

  • "How do I make my agent respond with friendlier, shorter sentences?"

  • "Why did the AI respond this way?" (Ask AI will reference the most recent test message.)

How do I create and run a Simulation?

Simulations are easy to create and run.

  • Each Simulation has just two parts: a) how the simulated user should behave, and b) a list of success criteria.

  • Simulations can be run individually or in batches.

  • Failed simulations explain what went wrong and, when applicable, point out where the simulation's expected behavior doesn't align with your AI instructions.
