This tutorial shows how to create a new character with NPCEditor. The example character will greet the player and then prepare them a cake through a series of yes-or-no questions. This is the tutorial character's dialog graph:
NPCEditor stores its data in the plist format, an XML-based format. These files contain the questions and answers for the character, as well as the links between them. There is also an accounts plist, required to connect a character to the other modules, which is stored separately from the character plist.
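For reference, a plist file is XML with the following general shape. The key and value below are placeholders for illustration, not NPCEditor's actual schema:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>examplePlaceholderKey</key>
    <string>example value</string>
</dict>
</plist>
```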
To start NPCEditor, run run.bat in your NPCEditor install; or, from the Toolkit, run the Virtual Human Toolkit Launcher, expand the '<< Advanced' options, and click 'Launch' in the NPCEditor row. NPCEditor can also be set to 'Save files automatically if application is idle for 15 seconds.'
Use 'File->New' to create a new plist and save it as 'cake.plist'. Select the 'People' tab and create a new person named 'cake vendor'. This person will handle the initial greeting exchange.
A note on "People" in NPCEditor
Make sure you set both 'first name' and 'last name' for your person. If your person/domain's name is just one word, you can set the last name or the first name to a space, or you can split the word into two parts, one for the first name and the other for the last name, such as 'CakeVendor'.
Select 'scriptable' as the type of dialog manager in the 'Conversations' tab. A new 'Dialog Manager' tab will appear next to 'Conversations', containing an initial dialog manager script. The script is written in Groovy and can be edited to suit your needs.
Set the parent property of 'cake vendor' to 'Anybody' (the default value). This step defines an inheritance hierarchy among the various domains: the 'cake vendor' domain inherits the utterances defined for the 'Anybody' domain. Create the remaining domains as people as well: 'cake type', 'sponge cake flavor', and 'cheese cake flavor'. Set the parent property of each of them to the 'cake vendor' domain.
Define a 'Type' and a 'Speaker' category in the 'Settings' tab. The first is used by the default dialog manager script to handle off-topic utterances from the user. The second (speaker) must be set for proper communication with the rest of the modules in the Virtual Human Toolkit. Note that the DM uses the ID of a category for selection, so change the autogenerated key to something unique, such as the category's name; the 'Type' category's ID would then be "Type".
Finally, make sure the 'Toss' category is set to be included in the answers and used by the classifier. This completes the setup, and we can move on to creating new content.
To handle non-understanding, create a new answer and set its type to 'opaque'. Then define an appropriate text (e.g., "I didn't understand what you said, try to rephrase it"). For all answers you want the agent to be able to speak and animate, set the speaker to 'Brad'. Next, add a greeting answer with the text 'Hi'. For the 'Hi' example, leave the type of the answer unset (it is not associated with non-understanding, but is an opening greeting). Set the speaker to 'Brad' and the domain to 'cake vendor'. On the user pane (the left half), add a new utterance with the text 'Hi'. Finally, click on both newly added utterances (they both turn blue) and set the link strength to '6' to specify that the two greetings are a question/answer pair. Do the same to add replies for 'thank you' and 'bye'.
We now want the character to toss the conversation from 'cake vendor' to 'cavities' when the user says they want a cake. To do so, add a new utterance as a response, and set its toss property to the target domain. Note that training the 'Anybody' classifier will fail because it contains no links to learn between user utterances and system answers.
After creating a new example and saving the plist, to make it load automatically every time NPCEditor runs, you need to edit the configuration of the Virtual Human Toolkit launcher: either edit the NPCEditor launch script in svn_vhtoolkit/trunk/tools/launch-scripts/run-toolkit-npceditor.bat and then re-compile vhtoolkit.sln, or, to make the change without re-compiling, edit the installed copy of that script directly. In that script, change the pointer from the default plist file to your new plist file. Also set the 'Connect at startup' option in the 'People' tab for the person/domain associated with the SmartBody connection.
The toss to a new domain is decided in the dialog manager script (visible in the 'Dialog Manager' tab). The default script tosses after the character has finished speaking the complete utterance. This can cause problems when the user interrupts the agent, since their question will still be classified in the first domain. To move the toss from the end of the utterance to the beginning, move the two selected lines from the method npcBehavior_done to the method npcBehavior_begin, as displayed in the following image:
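In rough outline, the change amounts to something like this (plain Java-style syntax, which Groovy also accepts; the two callback names match the script, but the toss helper, the domain name, and the trace list are simplified stand-ins for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class TossTimingSketch {
    final List<String> trace = new ArrayList<>();
    String pendingTossDomain = "cake type"; // hypothetical target domain

    // Called when the agent STARTS speaking an utterance.
    void npcBehavior_begin() {
        trace.add("begin");
        // Tossing here makes the domain switch take effect even if the
        // user barges in before the agent finishes speaking.
        doToss();
    }

    // Called when the agent FINISHES speaking; the default script tosses here.
    void npcBehavior_done() {
        trace.add("done");
    }

    // Stand-in for the two toss lines moved between the callbacks.
    void doToss() {
        if (pendingTossDomain != null) {
            trace.add("toss:" + pendingTossDomain);
            pendingTossDomain = null;
        }
    }

    public static void main(String[] args) {
        TossTimingSketch s = new TossTimingSketch();
        s.npcBehavior_begin();
        s.npcBehavior_done();
        System.out.println(s.trace);
    }
}
```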
To debug the dialog manager script, you can add logging instructions. Add this import: import com.leuski.af.Application; then use a logging expression in which a is the object you want to print in the log.
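As a minimal stand-in (the actual script logs through com.leuski.af.Application, whose exact call is not reproduced here; this sketch uses the standard java.util.logging API instead, in Java-style syntax that Groovy also accepts):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DmLogSketch {
    // Stand-in logger; in the real script the log would go through
    // com.leuski.af.Application rather than a raw JUL logger.
    static final Logger LOG = Logger.getLogger("dialog-manager");

    // Render an arbitrary object 'a' for logging, safely handling null.
    static String describe(Object a) {
        return String.valueOf(a); // yields "null" when a is null
    }

    public static void main(String[] args) {
        Object a = java.util.Map.of("ID", "utt-42");
        LOG.log(Level.INFO, describe(a));
    }
}
```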
The log is saved to a file on disk. If you are unsure of the location of the log file, you can use tools like Process Explorer to see which files the NPCEditor process has open.
An additional way to debug is to use the debugging capabilities of an IDE. With IntelliJ it is easy to import the project directly from the source checked out of the SVN repository. IntelliJ figures out most of the dependencies (but not all): try to recompile all projects, then resolve the remaining unresolved-symbol errors one by one by adding to the project's dependencies the module that implements the unresolved classes. The main method that starts NPCEditor is the one in its main application class. IntelliJ can also debug Groovy together with Java (the dialog manager script is written in Groovy).
To recompile NPCEditor, run ant from its source directory. If you debug NPCEditor this way, you may want to disable the NPCEditor row in the launcher so that only the instance started from IntelliJ is running.
Another way to handle state changes is to keep track of the state in the dialog manager script itself. When the user says or types something, the classifier receives it and returns a list of the most appropriate answers. This list is what the expression

List<Map<String,Object>> answers = engine.search(global.domainName, event);

returns (near the top of the method public boolean vrSpeech_asr_complete(Event event)). Each answer is a map from string keys (such as 'ID') to values.
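The shape of that return value can be sketched as follows (Java-style syntax, which Groovy also accepts; apart from 'ID', the key and value names here are illustrative assumptions, not NPCEditor's actual schema):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AnswersShapeSketch {
    // Build a list shaped like the classifier's return value:
    // each element is one candidate answer, as a key/value map.
    static List<Map<String, Object>> fakeSearchResult() {
        List<Map<String, Object>> answers = new ArrayList<>();
        Map<String, Object> best = new HashMap<>();
        best.put("ID", "hi_answer"); // 'ID' is the key the script can override
        best.put("text", "Hi");      // illustrative key: the answer text
        answers.add(best);
        return answers;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> answers = fakeSearchResult();
        // Inspect the first (best-ranked) candidate.
        System.out.println(answers.get(0).get("ID"));
    }
}
```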
Within the script, one can keep a state variable; the state can then be changed based on the list returned by the classifier (i.e., answers), and a particular reply can be sent to the virtual agent. To send a particular reply, change the value associated with the key 'ID' of the element in answers that you want the virtual agent to speak and animate. Each answer in the 'Utterances' tab in NPCEditor has an 'External ID' column. So, to send the utterance you want, get its ID (the value of the 'External ID' column, stored beforehand in the state variable), pick one of the objects in answers, change the value associated with the key 'ID' to the selected 'External ID', and send the modified object.
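The steps above can be sketched as follows (Java-style syntax, which Groovy also accepts; the state variable, its values, and the stored 'External ID' are hypothetical, and the final send step is omitted):

```java
import java.util.List;
import java.util.Map;

public class StateOverrideSketch {
    // Hypothetical state kept across turns in the dialog manager script.
    String state = "awaiting_cake_choice";
    // Hypothetical 'External ID' of the utterance we want to force, stored earlier.
    String storedExternalId = "offer_sponge_cake";

    // Pick one of the classifier's answers and redirect it to our utterance.
    Map<String, Object> chooseReply(List<Map<String, Object>> answers) {
        Map<String, Object> reply = answers.get(0);
        if (state.equals("awaiting_cake_choice")) {
            // Overwrite 'ID' with the External ID of the utterance we want;
            // the modified object would then be sent for speech and animation.
            reply.put("ID", storedExternalId);
            state = "cake_chosen";
        }
        return reply;
    }
}
```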