Welcome to the ICT Virtual Human Toolkit Website.

The ICT Virtual Human Toolkit is a collection of modules, tools and libraries that allow users, authors and developers to create their own virtual humans. This software is being developed at the University of Southern California Institute for Creative Technologies and can be licensed without cost for academic research purposes.

Download Virtual Human Toolkit

Toolkit Overview

Goal

The University of Southern California Institute for Creative Technologies (ICT) has created the Virtual Human Toolkit with the goal of reducing some of the complexity inherent in creating virtual humans. Our toolkit is an ever-growing collection of innovative technologies, fueled by basic research performed at ICT and by its partners. The toolkit provides a solid technical foundation and a modularity that makes it relatively easy to mix and match toolkit technology with a research project's proprietary or third-party software. Through this toolkit, ICT hopes to provide the virtual humans research community with a widely accepted platform on which new technologies can be built.

What is it

The ICT Virtual Human Toolkit is a collection of modules, tools and libraries that supports the creation of virtual human conversational characters. At the core of the toolkit lie innovative, research-driven technologies, which are combined with other software components to provide a complete embodied conversational agent. Since all ICT virtual human software is built on top of a common framework, as part of a modular architecture, researchers using the toolkit can do any of the following:
* utilize all components or a subset thereof;
* utilize certain components while replacing others with non-toolkit components;
* utilize certain components in other existing systems.

The technology emphasizes natural language interaction, nonverbal behavior and visual recognition. The main modules are:
* [[NPCEditor|Non Player Character Editor (NPCEditor)]], a package for creating dialogue responses to inputs for one or more characters. It contains a text classifier based on cross-language relevance models that selects a character's response based on the user's text input, an authoring interface for entering and relating questions and answers, and a simple dialogue manager to control aspects of output behavior (a simplified sketch of this response-selection step follows this list).
* [[NVBG|Nonverbal Behavior Generator (NVBG)]], a rule-based behavior planner that generates behaviors by inferring communicative functions from a surface text and selects behaviors to augment and complement the expression of those functions.
* [[SmartBody|SmartBody]], a character animation platform that provides locomotion, steering, object manipulation, lip syncing, gazing and nonverbal behavior in real time, using the Behavior Markup Language.
* [[Watson|Watson]], a real-time visual feedback recognition library for interactive interfaces that can recognize head gaze, head gestures, eye gaze and eye gestures using the images of a monocular or stereo camera.
* [[AcquireSpeech|Speech Client (AcquireSpeech)]], a tool that can send audio to one or more speech recognizers and relay the results to the rest of the system. It also allows text to be typed into the system, simulating speech input. The toolkit uses PocketSphinx as a third-party speech recognition solution.
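
As a purely illustrative aside, the response-selection step mentioned for the NPCEditor can be sketched in a few lines of code. The example below is not the NPCEditor's actual cross-language relevance model; it is a minimal Python stand-in, with invented question-answer pairs, that ranks stored questions by bag-of-words similarity to the user's input and falls back to an "I don't know" answer when nothing matches well.

<pre>
# Simplified sketch of the NPCEditor idea: pick the stored answer whose linked
# question best matches the user's input. The real NPCEditor uses cross-language
# relevance models; this toy version uses bag-of-words cosine similarity.
from collections import Counter
from math import sqrt

# Hypothetical question-answer pairs, in the spirit of the Brad example character.
QA_PAIRS = [
    ("what is the virtual human toolkit",
     "It is a collection of modules, tools and libraries for building virtual humans."),
    ("who created you",
     "I was created at the USC Institute for Creative Technologies."),
    ("what can you do",
     "I can answer general questions about the toolkit."),
]

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_response(user_input, threshold=0.2):
    query = bag_of_words(user_input)
    score, answer = max((cosine(query, bag_of_words(q)), a) for q, a in QA_PAIRS)
    return answer if score >= threshold else "I don't know."

print(select_response("what is the toolkit"))
</pre>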

The target platform for the overall toolkit is Microsoft Windows, although some components are multi-platform.

What is it not

Although the toolkit supports virtual human development, some components are prototypes rather than state-of-the-art technologies. The [[Components]] section lists several potential alternatives for some components.

The toolkit does not contain many of the current basic research technologies at ICT, such as the reasoning [[Projects#SASO|SASO]] agents. Most of the toolkit technology, however, is the result of basic research, which is continually evaluated for potential use in future releases.

Currently, we are not at liberty to publicly distribute any project-specific data. However, interested parties are encouraged to [[Contact|contact us]] directly. In addition, we are considering creating a forum where users can share their creations.

Who can use it

The toolkit has three target audiences:
* Users, who can run any of the modules without any modifications. A simple example character, Brad, is included for everyone to interact with.
* Authors, who can create their own virtual human characters using the provided software. Authors can modify the provided Brad character, or create their own virtual human completely from scratch.
* Developers, who can either use the provided modules, tools and libraries in their own systems or extend those components that are open source.

All toolkit software can be used without cost for academic research purposes, provided all associated licenses are honored. If you are using the toolkit or any of its components for published research, please cite us appropriately, as per clause 3 of the license. See the [[Links_and_Papers|Papers & Links]] section for more details. Please [[Contact|contact us]] if you are interested in a commercial license.

Where To Get It

Please see the [[Support]] section for instructions on how to obtain the ICT Virtual Human Toolkit. The Getting Started section below will guide you through the first steps of using the software.

License and Disclaimers

The complete License Agreement and supporting documentation can be read in the [[License]] section. The License Agreement includes, but is not limited to, the following terms:
* The toolkit and any of its components can only be used for academic research purposes.
* If you are using the toolkit or any of its components for published research, please cite us appropriately. See [[Links_and_Papers|Papers & Links]] for details.
* Toolkit users are required to honor all licenses of components and supporting software as defined in Exhibit A of the License Agreement.

Please [[Contact|contact us]] if you are interested in a commercial license.

Please be aware that the toolkit consists of research software for which documentation and support are limited. However, both the software and the accompanying documentation are actively being developed and updated.

Current Toolkit Users

There are many [[Projects|ICT projects]] that use a subset of the technology provided with the toolkit. Below is a list of some examples:
* [[Projects#SASO|SASO]]
* [[Projects#Virtual Patient|Virtual Patient]]
* [[Projects#Sergeant Star|Sergeant Star]]
* [[Projects#Gunslinger|Gunslinger]]
* Elect BiLat
* [http://www.mos.org/interfaces/ InterFaces Project] (with Boston Museum of Science)
* Tactical Questioning

In addition, many groups outside of ICT use some of the toolkit components, most notably [[SmartBody]] and [[Watson]]:
* University of Reykjavik
* German Research Center for Artificial Intelligence
* ArticuLab at Northwestern University
* Telecom Paris Tech
* Affective Computing Research group at MIT Media Lab
* ICSI/UCB Vision Group at UC Berkeley
* Human-Centered, Intelligent, Human-Computer Interaction group at Imperial College
* Worcester Polytechnic Institute
* Microsoft Research
* Relational Agents group at Northeastern University
* Component Analysis Lab at Carnegie Mellon University

Getting Started

Please go to the [[Support]] page for instructions on how to obtain the toolkit. This page will also give you further guidance on how to install and run the provided scenario.

For navigation on this website, please use the menu on the left. Each of the listed sections is described below:
* '''[[Architecture]]''' - Gives an overview of the toolkit architecture, based on the ICT Virtual Human Architecture.
* '''[[Components]]''' - Lists all modules, tools and libraries that make up the toolkit and links to available documentation and third party enhancements.
* '''[[Tutorials]]''' - Lists all available tutorials, including how to run the provided examples and how to create your own virtual human.
* '''[[Projects]]''' - An overview of some projects that use technology included in the toolkit; see also the Current Toolkit Users section above.
* '''[[FAQ]]''' - Frequently Asked Questions about the toolkit in general and all of its components in detail. Also contains a glossary of frequently used terms and acronyms.
* '''[[Support]]''' - An overview of available support.
* '''[[Links and Papers|Papers & Links]]''' - List of links to related sites.

Users

Users can run all the needed components in order to interact with Brad, the basic example character provided with the toolkit. After obtaining the toolkit, see the instructions on the [[Support]] page on how to install it. When the installer is done, you get the option to immediately start the Launcher. From the Launcher, it should be as simple as clicking the first Launch button (in the Run Checked row under Run It All), quickly clicking OK on the Gamebryo settings window, and waiting for about 30 seconds for all components to launch. Note that Gamebryo needs to be up and running before SmartBody can be launched. When using the Run All functionality, the Launcher will start Gamebryo and wait 15 seconds before loading all other modules. If this is not enough time, launch Gamebryo manually, uncheck Gamebryo in the Launcher and then use Run All. When all the non-tool rows in the Launcher are green, you are ready to start interacting with Brad. Brad is a very basic character and shows off some, but not all, of the toolkit elements.

Many windows will have popped up, but the only ones you need right now are the Gamebryo window for the graphics and AcquireSpeech for typing in questions. In AcquireSpeech, go to the Player tab, type your question in the Text field, and click the associated Send button or hit Enter. Brad should respond to your question by talking back. By default, Brad uses the Text-To-Speech voice that comes with Windows, so depending on your Windows version, you might hear a very outdated computer voice, or a woman's voice. Brad is authored to answer general questions about the toolkit. Be aware that he only has a general overview of the toolkit; he does not serve as an interactive tutorial or tutor.

In the bin/sbmonitor folder, there is an application called sbmonitor.exe. This tool is the SmartBody Monitor and is used for debugging and interacting with any SmartBody process it connects to. While the toolkit renderer is running with SmartBody, start sbmonitor.exe and press the orange button in the top left corner. After a few seconds, you will be connected to the SmartBody process within the toolkit. There are many dialogs and tools that you can use to interact with SmartBody from within the SmartBody Monitor. The Utils tab on the right side of the screen and the menu options under "Tools" on the toolbar provide many options for interacting with SmartBody.

For more detailed instructions on how to run the provided example, including troubleshooting, please see [[Tutorials:Run Example Domain|here]].

Authors

Authors can create their own virtual humans. They should first get familiar with the basic example character provided with the toolkit: Brad. Brad shows off some of the toolkit elements, in particular natural language interaction and nonverbal behavior. The graphics are simple, the default voice is outdated, and vision and speech recognition are not integrated.

Once you are familiar with the Brad character, you can read up on the technology behind the [[Components]] and some of the [[Tutorials]]. The documentation of these is a work in progress.

Creating a character involves several elements:
* Natural language, using the [[NPCEditor]]. See below for some basic instructions.
* Nonverbal behavior, using the [[NVBG|Nonverbal Behavior Generator]]. You can edit some of the language-to-behavior rules in C:\vhtoolkit\core\nvb_generator\NVBGenerator\xslt\rule_input_brad.xml. In this file, keywords are associated with certain animations (see the sketch after this list).
* Animations, using a third-party application like Maya or 3D Studio Max, in combination with [[SmartBody]] exporters.
* Character and background models, using a third-party application like Maya or 3D Studio Max, in combination with [[Ogre]] exporters.
* Textures, using a third-party application like Photoshop or GIMP.

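The rule file itself is XML; its exact schema is defined by the NVBG distribution, so the snippet below is only an illustrative sketch under assumed element and attribute names (rule, keyword, animation), not the real format of rule_input_brad.xml. It shows how a keyword-to-animation mapping of that general shape could be read in Python with the standard library.

<pre>
# Illustrative only: parse a hypothetical keyword-to-animation rule file in the
# general spirit of NVBG's rule_input_brad.xml. The element and attribute names
# are invented for this example; consult the actual file for the real schema.
import xml.etree.ElementTree as ET

EXAMPLE_RULES = """
<rules>
  <rule keyword="hello">
    <animation name="wave"/>
  </rule>
  <rule keyword="everything">
    <animation name="sweeping_gesture"/>
  </rule>
</rules>
"""

root = ET.fromstring(EXAMPLE_RULES)
keyword_to_animations = {
    rule.get("keyword"): [anim.get("name") for anim in rule.findall("animation")]
    for rule in root.findall("rule")
}
print(keyword_to_animations)  # {'hello': ['wave'], 'everything': ['sweeping_gesture']}
</pre>
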
Of these, the NPCEditor is the easiest to start with. In the Utterance tab you see questions a human user can ask on the left, and the possible answers on the right. Questions are linked to answers with a value from 1 to 6; usually there are only links with value 6. When you select a question, all answers linked to that question turn green. You can select multiple questions and answers by holding Ctrl, thus creating sets of links. If sets are not mapped completely 1 to 1, elements from one set that are only partially linked to elements in the other set are displayed in yellow. If you want to create a 1-to-1 mapping between the sets, select all yellow rows on both sides and set the link value to 6, either by selecting it at the bottom or by pressing Ctrl+6. Similarly, you can make new links between questions and answers by selecting them and setting the link value. You can add new questions or answers with the Add button. A new, empty line appears, which you have to select in order to fill in the text field. Be aware that on the answer side, the Compile / Script section might hide the text field. You can drag the compile section down and ignore it.

First try extending the Brad data file (called a plist) before creating a completely new character of your own. Think of a question Brad cannot currently answer. Try out that question and confirm that Brad says something like "I don't know." Then, add the new question on the left. All components can stay up for this; you can edit the plist in real time. Also create the answer on the right, and be sure to set the Speaker value to Brad. Now, select both the question and the answer, and make sure no other elements are selected. Hit Ctrl+6 to link them together, and save your file. Ask the same question again, and Brad should now give the answer you just created. From here on you can let your creativity flow. Have fun!
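
Conceptually, the authoring step just described boils down to a small data model: questions, answers with a Speaker, and links between them with a strength from 1 to 6. The sketch below is a hypothetical, plain-Python illustration of that idea (the example question and answer are invented); it is not the NPCEditor plist format itself.

<pre>
# Hypothetical model of the authoring workflow described above: add a question,
# add an answer spoken by Brad, and link the two with the maximum value (6).
# This mirrors the NPCEditor concepts; it is not its plist file format.
questions = []
answers = []
links = {}  # (question_index, answer_index) -> link value 1..6

def add_question(text):
    questions.append(text)
    return len(questions) - 1

def add_answer(text, speaker="Brad"):
    answers.append({"text": text, "speaker": speaker})
    return len(answers) - 1

def link(q_idx, a_idx, value=6):
    links[(q_idx, a_idx)] = value  # GUI equivalent: select both rows, press Ctrl+6

q = add_question("What is your favorite color?")
a = add_answer("I have always been partial to green.", speaker="Brad")
link(q, a)

print(answers[a]["speaker"], "answers:", answers[a]["text"])
</pre>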

Developers

Developers can extend toolkit components, use toolkit components in their own systems, or use their own components within the toolkit. First, read up on the [[Architecture]] and [[Components]], and read the [[Tutorials:Develop a New Module|tutorial]] on how to develop a new module. The documentation of these is a work in progress.

The toolkit architecture consists of modules communicating by message passing; the [[Messages]] section defines some of these messages. You can use the [[Logger]] to get more detailed and practical information. Replacing or using a module means adhering to the existing messaging interface.
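
As a rough illustration of this pattern only, the sketch below shows a module subscribing to named messages and publishing its own on a tiny in-process bus. The message names ("user_utterance", "character_response") and the bus itself are invented for the example; the toolkit's real message names and transport are documented in the [[Messages]] section.

<pre>
# Minimal, self-contained illustration of message passing between modules:
# modules register callbacks for named messages and publish messages of their
# own. A replacement module only has to honor the same message names and
# payloads to slot into the system. All names here are invented for the example.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, message_name, callback):
        self._subscribers[message_name].append(callback)

    def publish(self, message_name, payload):
        for callback in self._subscribers[message_name]:
            callback(payload)

bus = MessageBus()

# A stand-in "dialogue" module listening for user input and publishing a reply.
def dialogue_module(utterance):
    bus.publish("character_response", "You said: " + utterance)

bus.subscribe("user_utterance", dialogue_module)
bus.subscribe("character_response", print)

bus.publish("user_utterance", "hello Brad")  # prints: You said: hello Brad
</pre>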

If you are interested in extending the components within the toolkit, we would love to hear from you. See the [[Support]] page on how to contact us. Note that not all provided software is open source, and that not all open source software is accessible from a repository. [[SmartBody]] is a SourceForge project, accessible [http://sourceforge.net/projects/smartbody/ here].
