Welcome to the ICT Virtual Human Toolkit Website.
The ICT Virtual Human Toolkit is a collection of modules, tools and libraries that allow users, authors and developers to create their own virtual humans. This software is being developed at the University of Southern California Institute for Creative Technologies and can be licensed without cost for academic research purposes.
The University of Southern California Institute for Creative Technologies (ICT) has created the Virtual Human Toolkit with the goal of reducing some of the complexity inherent in creating virtual humans. Our toolkit is an ever-growing collection of innovative technologies, fueled by basic research performed at ICT and by its partners. The toolkit provides a solid technical foundation and a modular design that make it relatively easy to mix and match toolkit technology with a research project's proprietary or third-party software. Through this toolkit, ICT hopes to provide the virtual humans research community with a widely accepted platform on which new technologies can be built.
What is it
The ICT Virtual Human Toolkit is a collection of modules, tools and libraries that supports the creation of virtual human conversational characters. At the core of the toolkit lie innovative, research-driven technologies, which are combined with other software components to provide a complete embodied conversational agent. Because all ICT virtual human software is built on a common framework as part of a modular architecture, researchers using the toolkit can do any of the following:
* utilize all components or a subset thereof;
* utilize certain components while replacing others with non-toolkit components;
* utilize certain components in other existing systems.
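As a conceptual illustration of this modularity, the sketch below shows a toy publish/subscribe pattern of the kind that lets components be mixed, matched and replaced. All names in it are purely illustrative; this is not the toolkit's actual API or messaging layer.

<syntaxhighlight lang="python">
# Conceptual sketch only: a toy publish/subscribe bus standing in for the
# toolkit's shared messaging layer. None of these names are toolkit APIs.
from collections import defaultdict
from typing import Callable

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: str) -> None:
        # Every module listening on this topic receives the message, so any
        # subscriber can be swapped for a non-toolkit component.
        for handler in self._subscribers[topic]:
            handler(payload)

bus = MessageBus()
# A hypothetical behavior planner listens for utterances...
bus.subscribe("utterance", lambda text: print(f"planning behavior for: {text!r}"))
# ...and a hypothetical dialogue manager publishes one.
bus.publish("utterance", "Hello, I am Brad.")
</syntaxhighlight>

Because modules only agree on message topics rather than on each other's internals, any side of such an exchange can be a toolkit component, a proprietary module or third-party software.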
The technology emphasizes natural language interaction, nonverbal behavior and visual recognition. The main modules are:
* [[NPCEditor|Non Player Character Editor (NPCEditor)]], a package for creating dialogue responses to inputs for one or more characters. It contains a text classifier based on cross-language relevance models that selects a character's response based on the user's text input (a simplified sketch of this selection step follows this list), as well as an authoring interface for entering and relating questions and answers, and a simple dialogue manager that controls aspects of output behavior.
* [[NVBG|Nonverbal Behavior Generator (NVBG)]], a rule-based behavior planner that generates behaviors by inferring communicative functions from a surface text and selects behaviors to augment and complement the expression of those functions.
* [[SmartBody|SmartBody]], a character animation platform that provides locomotion, steering, object manipulation, lip syncing, gazing and nonverbal behavior in real time, driven by the Behavior Markup Language (BML); an example BML request also appears after this list.
* [[Watson|Watson]], a real-time visual feedback recognition library for interactive interfaces that can recognize head gaze, head gestures, eye gaze and eye gestures from the images of a monocular or stereo camera.
* [[AcquireSpeech|Speech Client (AcquireSpeech)]], a tool that sends audio to one or more speech recognizers and relays the results to the rest of the system. It also allows text to be typed into the system, simulating speech input. The toolkit uses PocketSphinx as its third-party speech recognition solution.
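The sketch below illustrates, in strongly simplified form, the response-selection step that NPCEditor performs: score each authored question against the user's input and return the answer paired with the best match. NPCEditor's actual classifier uses cross-language relevance models; the cosine scoring here is only a conceptual placeholder, and all names are hypothetical.

<syntaxhighlight lang="python">
# Simplified stand-in for NPCEditor-style response selection. NPCEditor's
# real classifier uses cross-language relevance models; plain cosine
# similarity over word counts is used here only to convey the idea.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_response(user_input: str, qa_pairs: list[tuple[str, str]]) -> str:
    query = Counter(user_input.lower().split())
    # Pick the answer whose authored question best matches the input.
    best_q, best_a = max(
        qa_pairs,
        key=lambda qa: cosine(query, Counter(qa[0].lower().split())))
    return best_a

pairs = [("what is your name", "I am Brad."),
         ("where do you work", "I work at ICT.")]
print(select_response("tell me your name", pairs))  # -> I am Brad.
</syntaxhighlight>

Below is a minimal example of the kind of BML request that drives SmartBody: an utterance with a gaze and a beat gesture synchronized to it. Element and attribute names follow the public BML drafts; SmartBody's exact BML dialect and the surrounding message format may differ.

<syntaxhighlight lang="xml">
<!-- Illustrative BML request: speech plus a gaze and a beat gesture
     synchronized to the speech; attribute details vary by BML version. -->
<bml>
  <speech id="s1" type="text/plain">Hello, I am Brad.</speech>
  <gaze id="g1" target="user" start="s1:start"/>
  <gesture id="h1" type="BEAT" stroke="s1:start"/>
</bml>
</syntaxhighlight>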
The target platform for the overall toolkit is Microsoft Windows, although some components are multi-platform.
Although the toolkit supports virtual human development, some components are prototypes rather than state-of-the-art technologies. The [[Components]] section lists several potential alternatives for some components.
The toolkit does not contain many of the current basic research technologies at ICT, such as the reasoning [[Projects#SASO|SASO]] agents. Most of the toolkit technology, however, is the result of basic research, which is continually evaluated for potential use in future releases.
Currently, we are not at liberty to publicly distribute any project-specific data. However, interested parties are encouraged to [[Contact|contact us]] directly. In addition, we are considering creating a forum where users can share their creations.
The toolkit has three target audiences:
* Users, who can run any of the modules without any modifications. A simple example character, Brad, is included for everyone to interact with.
* Authors, who can create their own virtual human characters using the provided software. Authors can modify the provided Brad character, or create their own virtual human completely from scratch.
* Developers, who can use the provided modules, tools and libraries in their own systems, or extend those components that are open source.
All toolkit software can be used without cost for academic research purposes, provided all associated licenses are honored. If you are using the toolkit or any of its components for published research, please cite us appropriately, as per clause 3 of the license. See the [[Links_and_Papers|Papers & Links]] section for more details. Please [[Contact|contact us]] if you are interested in a commercial license.
Please see the [[Support]] section for instructions on how to obtain the ICT Virtual Human Toolkit. The Getting Started section below will guide you through the first steps of using the software.
The complete License Agreement and supporting documentation can be read in the [[License]] section. The License Agreement includes, but is not limited to, the following terms:
* The toolkit and any of its components can only be used for academic research purposes.
* If you are using the toolkit or any of its components for published research, please cite us appropriately. See [[Links_and_Papers|Papers & Links]] for details.
* Toolkit users are required to honor all licenses of components and supporting software as defined in Exhibit A of the License Agreement.
Please [[Contact|contact us]] if you are interested in a commercial license.
Please be aware that the toolkit consists of research software for which documentation and support are limited. However, both the software and the accompanying documentation are actively being developed and updated.
There are many [[Projects|ICT projects]] that use a subset of the technology provided with the toolkit. Below is a list of some examples:
* [[Projects#Virtual Patient|Virtual Patient]]
* [[Projects#Sergeant Star|Sergeant Star]]
* Elect BiLat
* [http://www.mos.org/interfaces/ InterFaces Project] (with Boston Museum of Science)
* Tactical Questioning
In addition, many groups outside of ICT use some of the toolkit components, most notably [[SmartBody]] and [[Watson]]:
* Reykjavik University
* German Research Center for Artificial Intelligence
* ArticuLab at Northwestern University
* Telecom ParisTech
* Affective Computing Research group at MIT Media Lab
* ICSI/UCB Vision Group at UC Berkeley
* Human-Centered, Intelligent, Human-Computer Interaction group at Imperial College
* Worcester Polytechnic Institute
* Microsoft Research
* Relational Agents group at Northeastern University
* Component Analysis Lab at Carnegie Mellon University
Please go to the Download & Support page for instructions on how to obtain the toolkit. This page will also give you further guidance on how to install and run the provided scenario.
For navigation on this website, please use the menu on the left. Each of the listed sections is described below:
Architecture - Gives an overview of the toolkit architecture, based on the ICT Virtual Human Architecture.
Components - Lists all modules, tools and libraries that make up the toolkit and links to available documentation and third party enhancements.
Getting Started - Lists all available tutorials, including how to run the provided examples and how to create your own virtual human.
Projects - An overview of some projects that use technology included in the toolkit; see also the list above.
FAQ - Frequently Asked Questions about the toolkit in general and all of its components in detail. Also contains a glossary for often used terms and acronyms.
Download & Support - An overview of available support.
Papers & Links - Lists related papers and links to related sites.