General

Where can I get the Toolkit?

Please see the Download section on how to obtain the toolkit.

How do I run the provided example scenario?

Please see the Running the Example Scenario tutorial.

Is the Toolkit open source?

Some components, like SmartBody, Launcher, Logger and VHMsg, are open source, while others are binary-only, including NPCEditor, NVBG and MultiSense. SmartBody has its own SourceForge page, which you can access here. See the License Agreement page for licensing information.

What is the license agreement for the Toolkit?

Among other things, the License Agreement states:

  • The Toolkit and any of its components are to be used for academic research and US Government purposes only.
  • Cite us appropriately when using the Toolkit or any of its components for published research. See Papers for details.
  • Toolkit users are required to honor all licenses of components and supporting software as defined in Exhibit A of the License Agreement.

The complete License Agreement and supporting documentation are found in the License section.

Is there a commercial version of the Toolkit?

Not at the moment, but we are looking into it. Please contact us directly if you are interested in a commercial license.

Which platforms does the Toolkit support?

The Toolkit as a whole is currently targeted at the Microsoft Windows platform only, in particular Windows 10, 64 bit. Note that MultiSense works best on Windows 7 and is not supported on Windows 10. Some components are multi-platform, most notably SmartBody and the NPCEditor, as well as the Launcher and Logger, and C# components should work on Mac OS when using Mono. However, we do not currently support non-Windows platforms. Build instructions for other platforms can be found here.

I know of this great character at ICT that uses Toolkit technology. Can I get access to the data driving that character?

With the Toolkit we aim to provide a technical platform that allows researchers to more quickly create their own virtual humans. Whether project-specific agents will be made available to the public is determined on a case-by-case basis, but this usually falls outside the scope of the Toolkit.

Running the Example Scenario

Why can't I run the Launcher? It came up fine after the installation.

It might be that the executable used to start the Launcher cannot be found. Please make sure that javaw.exe is part of your path. The location is usually "C:\Program Files (x86)\Java\jre1.6.0_26\bin". The path is defined in your Environment Variables; right-click on My Computer, go to the Advanced tab and click Environment Variables. The Toolkit is currently only compatible with the 32-bit version of Java.
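
A quick way to verify is to run the following from a command prompt (the where command ships with Windows Vista and later); if it prints nothing, javaw.exe is not on your path:

    where javaw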

Why do I get many errors, and why are all of the Launcher rows orange?

Orange means that the Launcher has tried to launch a component, but hasn't received a confirmation message back indicating it is up and running. This is likely because the ActiveMQ service is not running. Please make sure it is running (right-click My Computer, select Manage, then Services and Applications, then Services). If it is not running, start it (the Play button at the top). If it is not present, please install it using the Toolkit installer with 3rd party software.

Why won't the renderer load?

  1. Be sure to install all 3rd party software that comes with the installer. 
  2. If this doesn't work, make sure that you have the latest video drivers installed.
  3. Make sure that your video card supports shader model 3 or higher. This is usually the case with newer dedicated video cards. Integrated video cards might not work.
  4. See if the problem happens with both Unity and Ogre. If Ogre works in OpenGL mode, it might be a DirectX problem.
  5. In rare cases, a virus scanner may block the Unity executable. Refer to the documentation of your virus scanner to see if the executable has been quarantined or deleted. Update your virus definitions. The Unity executable is \vhtoolkit\bin\vhtoolkitUnity\vhtoolkitUnity.exe.

Why don't I hear Brad and Rachel?

See if the characters are moving their lips. If so, make sure your volume is not all the way down, that it is not muted, and that you know where sound should come from (i.e., desktop speakers, or headphones that have just been plugged in). If they are not moving their lips, make sure the NVBG, NPCEditor and speech recognition software are all running (check that their row is green in the Launcher). Try to identify where in the chain things break; when you talk, white text indicates the speech recognition result, and yellow text indicates the characters are responding. You can look in the Unity console window (~ key) for any error messages that might give you an idea of what is going wrong.

Why did Brad and Rachel stop reacting to me?

They might not have the correct answer for you, especially if you have tried asking the same question a couple of times. There might also be a delay in some of the modules. Try waiting 30 seconds before trying again. If there is no response at all, make sure all modules are still green in the Launcher. If a module that is checked is neutral, try launching it. If one of the checked rows is orange, the module might have crashed; restart it. If this doesn't help, quit all modules and restart them. As a last resort, try rebooting your computer and starting all modules.

ActiveMQ

How can I check whether ActiveMQ is running?

Right-click on Computer in the start menu, select Manage, expand Services and Applications, click on Services and see if the ActiveMQ service is running.  If it is listed and not running, select it and click the Play icon above or right click and select Start. If it is not listed, use the Toolkit installer with 3rd party software and make sure the ActiveMQ box is selected. 
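
You can also check from a command prompt (assuming the service is registered under the name ActiveMQ; verify the exact name in the Services list):

    sc query ActiveMQ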

SmartBody

For all SmartBody-related questions, see the dedicated SmartBody website.

NVBG

How do I add a new Character?

Edit the run-toolkit-NVBG-C#-all.bat by adding "-create_character <CharacterName> <CharacterFile>.ini" (without quotes).  The .ini files are found in data\nvbg-toolkit.  It's recommended to use Brad.ini or Rachel.ini to get started.
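
For example, to create a hypothetical character named MyChar based on Brad's settings, the added argument would look like this (MyChar is a made-up name; Brad.ini ships with the Toolkit in data\nvbg-toolkit):

    -create_character MyChar Brad.ini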

NPCEditor

How do I add a new Character?

  1. Click on the Settings Tab
  2. Click on Speaker in the Categories column on the left side of the application
  3. Click the Add button under Tokens

I am asking my character a predefined question, but he doesn't respond. How can I solve that?

There could be a number of reasons why this is happening:

  1. First, see if the answer you expect comes up on the Chat panel in the NPCEditor. If it does, you know the right answer is selected by the NPCEditor and that something further down the pipeline might be going wrong; see points 7 and higher for more options. If the answer does not show up, continue with the other steps:
  2. Check that the question has an answer associated with it. You can see this by looking at the column that has the hash symbol at top ('#'). This shows you the number of answers linked to this question.
  3. Check that you are asking the question exactly as it appears on the left hand side of the Utterance panel. Even though the NPCEditor can handle questions that are only similar to predefined questions, this is a good step to test if something is wrong.
  4. Make sure the ID for the human speaker (the person interacting with the character) is consistent between the NPCEditor and the AcquireSpeech client. This means that the Domain variable on the question (left-hand) side of NPCEditor is either Anybody (i.e., all speakers are accepted), or matches exactly the ID of the human speaker (the Speaker ID as defined in the AcquireSpeech module). You can change the AcquireSpeech speaker ID in the batch file startup script in \tools\launcher-scripts.
  5. Select the question you are asking on the left hand side of the NPCEditor, and order the rows by the strength of the link between questions and answers. You can do this by clicking on the associated column name at the top. This is likely the third column, which has a dash at the top ('-'), in between the error column and the Score column. Click on the column name (the dash) to order all rows. Do the same thing on the right hand side, the answers. On the answer side, select all answers associated with the particular question. First, select a single answer that is linked to the questions you have selected. This will unselect any previous answers. Now, hold CTRL and click all answers that have a link value (this value is usually 6). This in turn will highlight all questions that are linked to these answers. While holding CTRL, also select all the questions that are linked to this selection of answers. You now have selected a set of questions, together with the set of all linked answers. You can scroll through both the questions and the answers list. Ideally, you only see white and green rows (although any selected row will be blue). If you see yellow rows, it means that the selected sets are not completely linked one-to-one. Although this is not always necessary, the NPCEditor might have trouble if sets of answers and questions are overlapping. Try creating unique sets of questions that link to unique sets of answers.
  6. If you have added a lot of new data to the NPCEditor, you might want to retrain it. To do this, go to the classifier, select the domain you want to train (this is the Addressee variable, which corresponds to the character), make sure the 'Test on training data' box is checked, and click 'Start Training'. If you see more than one Addressee, it means you have defined more than one domain on the answer side. This is likely due to the default Anybody domain. If you only have one character, you should usually only have one domain. In order to achieve this, go to the Utterances tab, select all answers (CTRL + A) and select one value from the Domain pull down list at the bottom.
  7. Make sure that all modules use the same character ID (like 'brad' in the example). For SmartBody, this is defined in the sequence file; for the Nonverbal Behavior Generator, in the startup batch file; and for the NPCEditor, in the Speaker variable (accessible from the Speaker tab).
  8. Make sure that on the answer side, all rows have a Speaker defined. This is the character ID that will be sent out to the Nonverbal Behavior Generator (NVBG). By default the speaker will be empty, which results in the NPCEditor sending out a request that the NVBG ignores, because it is only listening for requests for specific characters.

Why do I hear Brad's voice twice?

  1. Ensure that you have only one NPCEditor open. If you have more than one NPCEditor running and you ask a question, it is sent to every running NPCEditor and each one sends a reply, therefore duplicating Brad's voice.
  2. If closing the extra NPCEditor doesn't solve the problem, kill the NPCEditor and launch it again.

How do I delete local or user specific NPCEditor data?

If you have installed previous versions of the toolkit and used it before on your system, then the NPCEditor has most likely created user setting files on your machine. Once in a while, a change from our side to the NPCEditor's settings needs to be propagated to users, which requires them to delete these user files on their machines so that they don't interfere with or override the new settings.

The NPCEditor creates .plist files, which it stores in the user directory on Windows. These files contain NPCEditor-specific information. If you need to delete these user settings from your machine, follow the steps below.

Depending on which OS you are using, these files will be present in the following folders:

Windows 7:

If you are using Windows 7, then you should look under

<Your primary drive>:\Users\<Your user name>\AppData\Roaming\NPCEditor\people

You should see files with the extension ".plist". These are the local files created by the NPCEditor on your machine to save user settings. Delete these files. If these files don't exist, then it shouldn't matter.

Windows XP:

Under Windows XP, these files should be present under

<Your primary drive>:\Documents and Settings\<Your user name>\Application Data\NPCEditor\people

These are the local files created by the NPCEditor on your machine to save user settings. Delete these files. If these files don't exist, then it shouldn't matter.

Once you delete these files, the NPCEditor will, by default, use the .plist files that the installer has copied into the toolkit folder, with the new settings.

Note: The AppData/Application Data folders under both OSes are hidden, so you will not be able to see them unless hidden folders are visible in Windows Explorer. To enable this, open any Explorer window, go to 'Tools->Folder Options' in the menu at the top, select the 'View' tab and, under 'Advanced settings', check "Show hidden files, folders, and drives". This will display hidden folders in Windows Explorer.

Unity

What keyboard commands can I use in Unity?

  • W,A,S,D - camera movement. Q & E - camera up/down
  • J - mouse visibility toggle - mouse look mode
  • L - toggles the fake recognizer text box
  • O - toggles the user's recognized text
  • M - toggles speech recognition mode. When on, click and hold and talk into the mic. Release to stop talking.
  • X - reset camera
  • Z - show debug statistics
  • I - Toggles character subtitles
  • P - Toggle entire GUI
  • Alt-enter - toggle windowed / fullscreen
  • Escape - quit

What console commands can I use in Unity?

Use the ~ key to bring up the console.

Commands:

  • q - quit
  • play_intro - Play intro sequence
  • vhmsg ... - Send a vhmsg out to the system. Use 'vhmsg sbm ...' to send a SmartBody command
  • set_resolution x y - Set the resolution to 'x' by 'y'. Example: 'set_resolution 1024 768'
  • toggle_fullscreen - toggle windowed / fullscreen
  • set_loco_char_name - ...

How do Unity and SmartBody interact?

We have a C++ DLL, called the vhWrapper, that wraps API calls into SmartBody. This DLL is loaded into Unity and its functions are exposed in the SmartbodyExternals.cs file in Unity. These functions are then called from the SmartbodyManager.cs class (also inside of Unity). This DLL is used for 2-way interaction between Unity and SmartBody. For example, you can send Python commands from Unity to SmartBody using the function SmartbodyManager.PythonCommand(..) or play an animation using SmartbodyManager.SBPlayAnim(..).

Each frame, Unity passes its time step to SmartBody, and SmartBody returns to Unity all the positions/rotations of the joints of each character. You can see the structures that are marshaled between the C# and C++ worlds in the SmartbodyCharacterWrapper and UnityCharacterData structs. These structs contain all the data that is passed from Unity to SmartBody and back each frame.

Look at the SmartbodyManager.LateUpdate, GetUnityCharacterData and UnitySmartbodyCharacter.OnBoneTransformations() functions to see how the data is used. A minimal sketch of the command-sending side follows below.
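
As an illustration, here is a minimal sketch of driving SmartBody from a Unity C# script. The method names PythonCommand and SBPlayAnim come from SmartbodyManager.cs, but the exact signatures, the way the manager instance is obtained, and the character, animation and command strings used here are assumptions; check the Toolkit sources for the real API.

    // Minimal sketch; signatures and names are assumptions, verify
    // against SmartbodyManager.cs in your Toolkit installation.
    using UnityEngine;

    public class SmartBodySketch : MonoBehaviour
    {
        void Start()
        {
            // Find the SmartbodyManager component in the scene.
            SmartbodyManager sb = FindObjectOfType<SmartbodyManager>();

            // Send a Python command to SmartBody through the vhWrapper DLL
            // (the command string is hypothetical).
            sb.PythonCommand("print('hello from SmartBody')");

            // Play an animation on a character (character and animation
            // names are hypothetical).
            sb.SBPlayAnim("brad", "ChrBrad@Idle01");
        }
    }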

Ogre

To what extent is Ogre supported?

Ogre is only provided with the toolkit to show a source code proof of concept integration with SmartBody. It is a very bare bones implementation. The main renderer for the toolkit is Unity.

How can I look around in the world / move the camera?

You can use the A, S, W and D keys to move the camera. Hit the J key to toggle using mouse look (you can move the mouse to look around).

How do I get my mouse cursor back?

Hit the J key, or use ALT + TAB to switch to another window.

How do I quit Ogre?

You can either use the Kill button on the Launcher, or hit the Q key on your keyboard, while in the Ogre window.

Speech Recognition / Text Input

How can I talk to my character?

You can push the Start button on the first tab. This will bring you to the Recorder tab and start a new session, in which all speech input is saved to disk. The Toolkit only comes with PocketSphinx, which has not yet been optimized for use with the Toolkit.

Where can I type in my input for the character?

The AcquireSpeech client starts on the first tab, where you can modify some of the default settings. The Recorder and Player tabs allow users to type in their utterances. The Player tab also loads a script with some example questions, which can be used by double-clicking on them.

What is PocketSphinx Wrapper?

PocketSphinx Wrapper is a wrapper over the PocketSphinx speech recognition system which allows the Toolkit to communicate with it using the sonic protocol.

Launcher

How can I bring up console / debug information?

In the Advanced menu, click the Show Console option (CTRL + T).

How can I disable the periodic component status pinging / vrAllCall messages?

In the Advanced menu, uncheck the Enable vrAllCall option (CTRL + SHIFT + T).

How do I save changes to a certain profile?

Unsaved changes are automatically saved to a Last Known Config profile, which will be loaded by default on start up. If you want to save changes under a specific name, for instance to save several often used configurations, you can do that in the Profiles menu, Save As New Profile (CTRL + SHIFT + S). This will bring up a new window in which you can enter your profile name.

How do I switch between Build and SVN mode for a certain component?

If supported, right-click on any component name and select the appropriate Switch To option. Note that this feature assumes a symmetrical installation of both a build version and an SVN (version control) version of your project. This is currently not supported by the toolkit.

Logger

Where did the Logger go?

The Logger window is minimized on start-up. On the task bar, you can usually find it next to the Launcher.

Why do new messages periodically appear, even when I'm not interacting with a character?

By default, the Launcher sends out status request messages to all modules and tools (vrAllCall). These then respond with a confirmation message, indicating they are online. This enables the Launcher to show which components are still online. You can turn this behavior off in the Advanced menu of the Launcher (Enable vrAllCall option). Note that the Launcher will then not correctly indicate component status anymore. For more information on this particular message protocol, see Messages.
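
As an illustration, the exchange looks roughly like this (see the Messages page for the exact format; the component ID below is hypothetical):

    Launcher -> all components:  vrAllCall
    each component -> Launcher:  vrComponent <component-id>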

Troubleshooting

The Launcher does not start

Make sure you have installed all required 3rd party software; these are included with the appropriate installers.  It might be that the executable used to start the Launcher cannot be found.  Please make sure that javaw.exe is part of your path.  The location is usually "C:\Program Files\Java\jre1.6.0_26\bin" or "C:\Program Files (x86)\Java\jre1.6.0_26\bin".  The path is defined in your Environment Variables; right-click on My Computer, go to the Advanced tab and click Environment Variables. The Toolkit currently only supports 32-bit Java 1.6_26.

None of the modules start

Make sure you have installed all required 3rd party software; these are included with the appropriate installers.  Click 'Advanced' in the Launcher and see if all the checked rows are green.  If all of them are orange, make sure ActiveMQ is running; right-click on Computer in the start menu, select Manage, expand Services and Applications, click on Services and see if the ActiveMQ service is running.  If it is listed and not running, select it and click the Play icon above or right click and select Start. If it is not listed, use the Toolkit installer with 3rd party software and make sure the ActiveMQ box is selected. If some Launcher rows are green and others are orange, try launching the orange ones manually and note any errors. You can bring up the Launcher command console with 'CTRL + T'. 

Characters come up but don't respond

Expand the Launcher view (click 'Advanced') and see which checked rows are green.  If some are orange, try launching them manually.  If that fails, bring up the command console with 'CTRL + T' for any error messages that may help point you in the right direction.

If all else fails

Look at the Support page for an overview of available support.

Glossary

ActiveMQ

General purpose open source messaging system, used by the Toolkit components. See documentation. Technical basis of VHMsg standard.

AcquireSpeech

AcquireSpeech is one of the Toolkit modules and a client to the speech recognizer. It can also be used for typing in questions or for selecting utterances from a predefined script.

Component

A component is either a module, tool, library or 3rd party software that's part of the Toolkit.

Developers

Someone who modifies the Toolkit technology in some way, or uses the technology in other systems.

ICT

The Institute for Creative Technologies, part of the University of Southern California.

Institute for Creative Technologies

The Institute for Creative Technologies, part of the University of Southern California.

Library

A software component of the Toolkit. Supports modules and/or tools and is usually hidden from users.

Module

A run-time Toolkit component that is part of the toolkit architecture. Modules are essential for running a virtual character, as opposed to tools or libraries which just play a support role.

NVBG

Generates nonverbal behavior, based on textual input. One of the Toolkit modules. Often referred to with acronym NVBG. Developed at the Institute for Creative Technologies.

Nonverbal Behavior Generator

Generates nonverbal behavior, based on textual input. One of the Toolkit modules. Often referred to with acronym NVBG. Developed at the Institute for Creative Technologies.

NPCEditor

The NPCEditor is a statistical text classifier that takes text input and selects text output. One of the Toolkit modules. Serves as a character's brain and source for natural language input and output. Developed at the Institute for Creative Technologies.

Ogre

Ogre is an open source renderer that is provided with the Toolkit as an example of how to integrate SmartBody with a renderer. The main renderer of the Toolkit is Unity.

Plist

Data file used by the NPCEditor. Although a general XML format, in the Toolkit the plist is often used when referring to a character's knowledge.

SB

Stands for SmartBody. It is a character animation platform that provides locomotion, steering, object manipulation, lip syncing, gazing and nonverbal behavior in real time through the Behavior Markup Language (BML). Developed at the Institute for Creative Technologies.

SmartBody

SmartBody is a character animation platform that provides locomotion, steering, object manipulation, lip syncing, gazing and nonverbal behavior in real time through the Behavior Markup Language (BML). Developed at the Institute for Creative Technologies.

Text-To-Speech

General process of turning text into speech. Toolkit characters can use this through the ttsRelay to speak lines that have no prerecorded speech available. 

Tool

A Toolkit component that provides supporting functions to users and developers.

Toolkit

See Virtual Human Toolkit.

TTS

See Text-To-Speech.

TtsRelay

TtsRelay is a Toolkit module that interfaces to a variety of text-to-speech engines, including Festival and MS SAPI.

USC

See University of Southern California.

Unity 3D

Main renderer for the Toolkit through vhtoolkitUnity. The Toolkit only contains the executable, but you can download the free version of Unity or purchase Unity Pro from their website.

University of Southern California

University with which the Institute for Creative Technologies is affiliated.

User

Uses the provided Toolkit technology as is, usually either running a component or using it to create new content.

VH

An acronym for virtual humans, often used to refer to the ICT Virtual Humans group.

VHToolkit

See Virtual Human Toolkit.

Virtual Human Toolkit

Collection of modules, tools and libraries that allow users to create their own virtual humans. This software is being developed at the University of Southern California Institute for Creative Technologies and is freely available for the research community. Sometimes referred to as VHToolkit.

Watson

Real-time visual feedback recognition library for interactive interfaces that can recognize head gaze, head gestures, eye gaze and eye gestures using the images of a monocular or stereo camera. One of the Toolkit modules.
