Quickstart
Each account can have up to 32 avatar applications.
Create an avatar from template
You can manually create an avatar or use one of the existing templates. Templates provide a ready-to-use avatar with a predefined scenario and all the necessary intents and entities, making it easy to integrate into your system.
You can also modify the template’s scenario and intents to customize your avatar.
To use a template:
- Log in to your control panel.
- In the main menu, go to the Templates section.

- Choose one of the avatar templates and click on it.
- Read the avatar description and if it suits you, click the Install button.
- Specify the avatar's settings: language, time zone, and type (chatbot or omnichannel bot).
- Click Create.
Your avatar is ready! Once you open it, you can test the bot in Debug mode and integrate it into your product.
You cannot open the debug window in multiple browser windows. Use only one window in Debug mode to test your avatar.
Create an avatar manually
To manually create an avatar, you must first create an avatar instance, then write an avatar scenario, create intents, and finally integrate it with your telephony or chat system.
Let's create a first-line support robot for a restaurant. The robot handles:
- Questions about opening hours (the "open_hours" intent)
- Questions about delivery options (the "delivery" intent)
- Reservation requests (the "reservation" intent)
Create an avatar
Go to the Avatar section of the control panel and click the Create button. Enter your avatar name and click Create again.

An avatar application's name cannot exceed 128 characters.
Now, you have an empty avatar with predefined basic intents and a basic dialog scenario.
Set up intents
To add an intent, click the Add intent button, type “open_hours” in the modal window, and click Create:

Now, you can set up your intent by adding phrases that a user might say and default responses to those phrases.
Add 10-20 training phrases, for example:
What are your opening hours?
Do you work tomorrow?
How late are you open tonight?

Then, navigate to the Responses tab and add a default response: “We are open daily from 7 am to 9 pm.”
Repeat this procedure for the "delivery" and "reservation" intents. To avoid doing this manually, you can import all the data from this file by clicking the Import button.

When you add the data, you see the "Training required" hint.

This means that you need to retrain your neural network with the added data. Click the hint and wait until the training finishes.

Your avatar is now equipped to handle natural speech, so let’s proceed with building a scenario.
Write a dialog scenario
Navigate to the Dialog scenario tab. Here you find the default conversation scenario, which you need to replace with your own. Let's begin with a straightforward one-state dialog.
Upon entering the state, your avatar greets the user. If the user says something that matches the list of training phrases, the avatar replies with the default response you defined in the UI in the previous step. If the intent is unknown, the avatar responds with "cannot get that" and continues listening.
Key points:
- addState registers a dialog state. The dialog can be in only one state at a time. A state defines the avatar's reactions to user inputs.
- onEnter is called when the dialog enters a state. It is like a doorway where you can greet the user.
- onUtterance is called when the user says something to the avatar while the dialog is in that state. Here you can check what the user's intent is and which entities are extracted, and then respond.
- onTimeout is called when the state reaches its timeout. You can specify a custom timeout for each state; for voice avatars, there is a default timeout.
- Response creates a special object that defines the avatar's reaction when a handler triggers. With it, you can return the utterance to send to the user, set the listen flag that specifies whether the avatar should listen for further user input, and set nextState to indicate the state the dialog should jump to.
- setStartState defines the dialog's entry point.
If you do not specify the onTimeout callback in the avatar's state, the avatar crashes after reaching the timeout.
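Put together, the one-state scenario looks roughly like the sketch below. The stub definitions at the top are stand-ins for the platform's addState, setStartState, and Response primitives (their real signatures come from the Avatar SDK), and the wording of the delivery and reservation responses is a placeholder, so treat this as a runnable illustration of the flow rather than the exact API:

```javascript
// --- Stand-ins for platform primitives so the sketch runs anywhere ---
const states = {};
let startState = null;
const addState = (state) => { states[state.name] = state; };
const setStartState = (name) => { startState = name; };
const Response = (opts) => opts; // carries utterance, listen, nextState

// Default responses configured in the Responses tab.
// The delivery and reservation wordings here are placeholders.
const defaultResponses = {
  open_hours: 'We are open daily from 7 am to 9 pm.',
  delivery: 'Yes, we deliver within the city.',
  reservation: 'Sure, let me book a table for you.',
};

addState({
  name: 'start',
  // Greet the user when the dialog enters the state.
  onEnter: () =>
    Response({ utterance: 'Hello! How can I help you?', listen: true }),
  // Answer known intents; apologize and keep listening otherwise.
  onUtterance: ({ intent }) =>
    Response({
      utterance: defaultResponses[intent] || 'Sorry, I cannot get that.',
      listen: true,
    }),
  // Without an onTimeout handler, the avatar crashes on timeout.
  onTimeout: () =>
    Response({ utterance: 'Are you still there?', listen: true }),
});
setStartState('start');

// Drive the sketch by hand:
console.log(states.start.onEnter().utterance); // Hello! How can I help you?
console.log(states.start.onUtterance({ intent: 'open_hours' }).utterance);
```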
It’s time to test your avatar! To do this, save your scenario and click the Debug button. A dialog box will then appear.

Type some questions and check if your avatar responds accurately.

You can have only one avatar debug session per account.
Everything works, but you need to add exit points to the scenario to prevent it from becoming infinite. For instance:
- The avatar fails to understand the user 3 times in a row.
- You ask the user "Can we help you with anything else?" and the user says "no".
With these exit points, the scenario becomes a bit more intricate:
- Check if the entry state is entered for the first time. If yes, let your avatar greet the user; if no, let it ask if the user needs help with something else. For this purpose, use visitsCounter and increment it every time the dialog enters the state.
- In the onUtterance handler, add a branch to handle the "no" and "yes" intents for "Can we help you with anything else?". Depending on the response, finish the dialog (by going to the final state) or continue the conversation.
- Every time the user says something while the dialog stays in the same handler, increment utteranceCounter; reset it when leaving the handler. If the user says something unknown again and again without leaving the handler, the counter grows. When it reaches 3, the avatar stops the dialog.
- The final state terminates the dialog. You can pass additional information to the scenario in this state; for example, you can add a custom key, needRedirectionToOperator.
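The exit-point logic above can be sketched as a small standalone state holder. The counter names follow the steps; the greeting, farewell, and operator-transfer texts are illustrative, and the returned objects only mimic the fields of the platform's Response object:

```javascript
// Minimal holder for the entry state's exit-point logic.
function createEntryState(maxUnknown = 3) {
  let visitsCounter = 0;     // how many times the dialog entered this state
  let utteranceCounter = 0;  // misunderstood utterances in a row

  return {
    // Stand-in for onEnter: greet on the first visit, otherwise
    // ask whether the user needs anything else.
    onEnter() {
      visitsCounter += 1;
      return visitsCounter === 1
        ? 'Hello! How can I help you?'
        : 'Can we help you with anything else?';
    },
    // Stand-in for onUtterance: handle "no" and count unknown intents.
    onUtterance(intent) {
      if (intent === 'no') {
        return { utterance: 'Goodbye!', nextState: 'final' };
      }
      if (intent === 'unknown') {
        utteranceCounter += 1;
        if (utteranceCounter >= maxUnknown) {
          // Three misses in a row: stop the dialog.
          return {
            utterance: 'Let me transfer you to an operator.',
            nextState: 'final',
          };
        }
        return { utterance: 'Sorry, I cannot get that.', listen: true };
      }
      utteranceCounter = 0; // a recognized intent resets the counter
      return { utterance: 'Sure!', listen: true };
    },
  };
}

const state = createEntryState();
console.log(state.onEnter()); // first visit, so this is the greeting
```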
Enable reservations
Now that your avatar can comprehend user intentions and respond accordingly, it is time to instruct it on how to make reservations.
When filling out the reservation form, track the conversation context and state alongside the reservation information. To achieve this, declare the reservationForm object at the beginning of the scenario and use it to store all the collected information about the reservation.
Since the user can request a reservation with phrases both like "Can I book a table?" (no parameters) and "Can I book a table for two?" (people: 2), extract the date and the number of people from the first phrase in the onUtterance handler of the start state. Store this information in the reservation form.
In the new reservation state, construct a question for the user based on the information already provided in the form. If the user fails to respond meaningfully and the weirdUtterancesInRow counter exceeds its limit, stop trying to complete the form and return to the start state.
In the onUtterance handler of the reservation state, check if the requested information is given through entities. If yes, add it to the form and continue the loop. If not, increment the weirdUtterancesInRow counter to avoid getting stuck in an endless loop.
At the same time, the avatar can continue answering clarifying questions without stopping the form-filling process, using the intent check in the onUtterance handler.
In the reservationConfirm state, verify the provided information. If everything is in order, return to the start state to ask whether the user needs anything else.
The final state passes the information from the form to the scenario. Alternatively, you can send it to your CRM/backend right from the avatar scenario by calling the httpRequest method.
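The form-filling loop can be sketched the same way. reservationForm and weirdUtterancesInRow are the names used in the text; the question wordings and the shape of the extracted-entities object are assumptions made for the sake of a runnable example:

```javascript
// Collected reservation data, declared at the top of the scenario.
const reservationForm = { date: null, people: null };
let weirdUtterancesInRow = 0;
const MAX_WEIRD = 3;

// Build the next question from whatever is still missing in the form.
function nextQuestion() {
  if (!reservationForm.date) return 'For which date would you like to book?';
  if (!reservationForm.people) return 'For how many people?';
  return null; // form is complete
}

// Stand-in for the reservation state's onUtterance handler.
// "entities" is a plain object such as { date: 'tomorrow', people: 2 }.
function onReservationUtterance(entities) {
  if (entities.date || entities.people) {
    if (entities.date) reservationForm.date = entities.date;
    if (entities.people) reservationForm.people = entities.people;
    weirdUtterancesInRow = 0;           // progress resets the counter
    return nextQuestion() || 'confirm'; // null means go to reservationConfirm
  }
  weirdUtterancesInRow += 1;
  if (weirdUtterancesInRow >= MAX_WEIRD) {
    return 'giveUp';                    // back to the start state
  }
  return nextQuestion();                // repeat the pending question
}

console.log(onReservationUtterance({ people: 2 })); // asks for the date next
```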
Now, you have a fully functional bot with a complex scenario that supports a flexible and natural flow of dialogue. Let’s integrate it with telephony.
Integrate with telephony and chat
You have two integration options:
- Using the Avatar class, which gives you fine-grained control over the integration.
- Using the VoiceAvatar class, which already integrates ASR and TTS with telephony and the avatar logic.
Let’s implement the second one.
To do that:
- Copy the code from the Integration section of your avatar.
- Create a platform scenario and paste the code there.
- Specify ASR and TTS options in the configuration step (select a language and voice).
- Set everything up to handle calls via this scenario and test it.
The complete VoxEngine scenario for Avatar will look like this:
Congratulations! Now you can call your avatar and have a voice conversation with it.
Refer to the AvatarEngine and VoxEngine.VoximplantAvatar API references to create a more complex solution.