Welcome to the ICT Virtual Human Toolkit Website.
The ICT Virtual Human Toolkit is a collection of modules, tools, and libraries designed to support users, authors, and developers in creating virtual human conversational characters. The toolkit is an ongoing, ever-changing, innovative system fueled by basic research performed at the University of Southern California (USC) Institute for Creative Technologies (ICT) and its partners.
Designed for easy mixing and matching with a research project’s proprietary or 3rd-party software, this toolkit provides a widely accepted platform on which new technologies can be built. It is our hope that, together as a research community, we can further develop and explore virtual human technologies. The Virtual Human Toolkit can be licensed without cost for academic research purposes.
Request Virtual Human Toolkit
See Release Notes for details.
- Jul 11 2012 - Released a minor update to the toolkit, fixing some usability and stability issues.
- May 31 2012 - Released an exciting new version of the toolkit offering the MultiSense framework, the Rapport research platform, and the SBMonitor tool. MultiSense is a perception framework that enables multiple sensing and understanding modules to interoperate simultaneously, broadcasting data through the Perception Markup Language. MultiSense currently contains GAVAM, CLM FaceTracker, and FAAST, which you can use with a webcam or Kinect. The Rapport agent is a “virtual human listener” providing nonverbal feedback based on human nonverbal and verbal input. It has been used in a variety of international studies related to establishing rapport between real and virtual humans. Finally, SBMonitor is a stand-alone tool for easy debugging of SmartBody applications, including testing available (facial) animations, gazes, and more complex BML commands.
- Mar 2 2012 - Released a minor update of the toolkit updating the Unity version to 3.5 as well as providing incremental changes to the Unity/SmartBody debug tools in the Unity Editor (VH menu in Unity).
- Dec 22 2011 - Happy holidays! Released a new version of the toolkit which includes the ability to interrupt Brad, improved support for higher resolutions, and a fix for text-to-speech not working properly.
- Aug 10 2011 - Released a new version of the toolkit, which offers support for the free version of Unity 3D; users may now create scenes for Brad. Download Unity here: http://www.unity3d.com. For instructions on how to use the Unity 3D Editor with the toolkit, see the vhtoolkitUnity section. In addition, the user interaction has been improved; Unity now launches in full-screen automatically and users get visual feedback when talking to Brad. To directly talk to Brad, first make sure you have a microphone plugged in, wait for Brad to finish his introduction, and close the tips window. Now click and hold the left mouse button while asking Brad a question; release the mouse button when you are done talking. The recognized result will be displayed above Brad in white font (toggle on/off with the O key), and Brad will answer your question. It is advised to update Java and ActiveMQ, which are provided with the 3rd party installer versions.
The goal of the Virtual Human Toolkit, created by the University of Southern California (USC) Institute for Creative Technologies (ICT), is to make creating virtual humans easier and more accessible, and thus to expand the realm of virtual human applications.
What it is
Our research has led to the creation of ground-breaking technologies which we have coupled with other software components to form a complete embodied conversational agent. All ICT virtual human software is built on top of a common, modular architecture framework that allows toolkit users to do any of the following:
- utilize the toolkit and all of its components as is;
- utilize certain components while replacing others with non-toolkit components;
- utilize certain components in other existing systems.
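This mix-and-match flexibility comes from the architecture being message-based: modules interoperate by broadcasting messages over a shared bus (ActiveMQ in the released toolkit) rather than calling each other directly. As a rough illustration of that loosely coupled design, here is a minimal in-process publish/subscribe sketch; the class, topic, and payload names are hypothetical and not part of the toolkit's actual API:

```python
# Toy publish/subscribe bus illustrating the toolkit's loosely coupled
# module design. The real system broadcasts over ActiveMQ; names here
# are hypothetical.
from collections import defaultdict
from typing import Callable

class MessageBus:
    def __init__(self) -> None:
        # Map each topic to the handlers subscribed to it.
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: str) -> None:
        # Deliver the payload to every subscriber of this topic.
        for handler in self._subscribers[topic]:
            handler(payload)

# Two "modules" wired together without knowing about each other:
# a speech recognizer broadcasts text, and a listener reacts to it.
bus = MessageBus()
received = []
bus.subscribe("speech.recognized", received.append)
bus.publish("speech.recognized", "hello brad")
```

Because modules only share topics and message formats, any one of them can be swapped for a proprietary or third-party replacement that speaks the same messages.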
Our technology emphasizes natural language interaction, nonverbal behavior, and visual recognition, and is organized into the following main modules:
- Non-Player Character Editor (NPCEditor): A suite of tools which work together to create appropriate dialogue responses to users’ inputs. A text classifier selects responses based on cross-language relevance models; the authoring interface relates questions and answers; and a simple dialogue manager controls aspects of output behavior.
- Nonverbal Behavior Generator (NVBG): A rule-based behavior planner that infers communicative functions from the surface text and selects appropriate behaviors that augment and complement the characters’ dialogue.
- SmartBody (SB): A modular, controller-based character animation system that uses the Behavior Markup Language.
- Watson: A real-time visual feedback recognition library for interactive interfaces which uses the images from either a monocular or stereo camera to recognize eye and head gazes and gestures.
- Speech Client (AcquireSpeech): A tool that sends audio or text to speech recognizers and relays the results to the rest of the system. The toolkit uses PocketSphinx as a 3rd-party speech recognition solution.
- MultiSense: A perception framework that enables multiple sensing and understanding modules to interoperate simultaneously, broadcasting data through the Perception Markup Language. It currently contains GAVAM, CLM FaceTracker, and FAAST, which work with a webcam or Kinect.
- Rapport: An agent that provides nonverbal feedback based on human nonverbal and verbal input. Rapport has been used in a variety of international studies related to establishing rapport between real and virtual humans.
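Since SmartBody is driven by the Behavior Markup Language (BML), a short sketch may help make the module descriptions above concrete. The snippet below assembles a generic BML 1.0-style block that schedules speech, gaze, and a beat gesture; the element names follow the public BML specification, while the specific ids and attribute values are illustrative rather than a verbatim toolkit message:

```python
# Illustrative only: build a generic BML 1.0-style behavior block of the
# kind a behavior planner (e.g. NVBG) hands to a realizer like SmartBody.
# Ids and values are hypothetical, not an actual toolkit message.
import xml.etree.ElementTree as ET

bml = ET.Element("bml", id="bml1")

# The character speaks a line of dialogue.
speech = ET.SubElement(bml, "speech", id="s1")
ET.SubElement(speech, "text").text = "Hello, I'm Brad."

# Gaze at the user, and time a beat gesture to the start of the speech.
ET.SubElement(bml, "gaze", id="g1", target="user")
ET.SubElement(bml, "gesture", id="g2", lexeme="BEAT", stroke="s1:start")

print(ET.tostring(bml, encoding="unicode"))
```

The key idea BML captures is synchronization: the gesture's `stroke="s1:start"` ties its stroke phase to a point in the speech, so nonverbal behavior stays aligned with the dialogue.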
The target platform for the overall toolkit is Microsoft Windows, although some components are multi-platform.
What it is not
Our ongoing, ever-evolving toolkit is not a set of finished, state-of-the-art technologies; many of our components are prototypes. Our Components section lists several potential alternatives, should you wish to use them instead.
The toolkit does not contain all of the basic research technologies currently being developed and utilized at the ICT, such as the reasoning SASO agents. However, we continually evaluate our basic research findings for potential inclusion in future releases.
Currently, we are not at liberty to publicly distribute any project-specific data. However, we encourage all interested parties to contact us directly.
Who can use it
The toolkit has two target audiences:
- Users are people who use the provided technology as is, usually either running a component or using it to create new content. Users can configure and run systems and are expected to have basic computer skills and some minor scripting skills.
- Developers are software engineers or programmers who can build and modify the code and create new capabilities for the system, either by modifying or extending existing code or by creating new modules that interface with the rest of the system.
All toolkit software can be used without cost for academic research purposes, provided all associated licenses are honored. If you are using the toolkit or any of its components for published research, please cite us appropriately, as per clause 3 of the license. See the Papers section for more details. Please contact us if you are interested in a commercial license.
Where To Get It
Please see the Support section for instructions on how to obtain the ICT Virtual Human Toolkit. The Getting Started section below will guide you through the first steps of using the software.
License and Disclaimers
The complete License Agreement and supporting documentation can be read in the License section. The License Agreement includes, but is not limited to, the following terms:
- The toolkit and any of its components can only be used for academic research and US Government purposes.
- If you are using the toolkit or any of its components for published research, cite us appropriately. See Papers for details.
- Toolkit users are required to honor all licenses of components and supporting software as defined in Exhibit A of the License Agreement.
Please contact us if you are interested in a commercial license.
Please be aware that the toolkit consists of research software for which documentation and support are limited. However, both the software and the accompanying documentation are actively being developed and updated.
There are many [[Projects|ICT projects]] that use a subset of the technology provided with the toolkit. Below is a list of some examples:
* [[Projects#Virtual Patient|Virtual Patient]]
* [[Projects#Sergeant Star|Sergeant Star]]
* Elect BiLat
* [http://www.mos.org/interfaces/ InterFaces Project] (with Boston Museum of Science)
* Tactical Questioning
In addition, many groups outside of ICT use some of the toolkit components, most notably [[SmartBody]] and [[Watson]]:
* University of Reykjavik
* German Research Center for Artificial Intelligence
* ArticuLab at Northwestern University
* Telecom Paris Tech
* Affective Computing Research group at MIT Media Lab
* ICSI/UCB Vision Group at UC Berkeley
* Human-Centered, Intelligent, Human-Computer Interaction group at Imperial College
* Worcester Polytechnic Institute
* Microsoft Research
* Relational Agents group at Northeastern University
* Component Analysis Lab at Carnegie Mellon University