
Introduction

Welcome to the ICT Virtual Human Toolkit Website.

The ICT Virtual Human Toolkit is a collection of modules, tools, and libraries designed to aid and support users and developers with the creation of virtual human conversational characters. The Toolkit is an on-going, ever-changing, innovative system fueled by basic research performed at the University of Southern California (USC) Institute for Creative Technologies (ICT) and its partners. 

Designed for easy mixing and matching with a research project’s proprietary or 3rd-party software, the Toolkit provides a widely accepted platform on which new technologies can be built. It is our hope that, together as a research community, we can further develop and explore virtual human technologies. The Virtual Human Toolkit can be licensed (License Agreement) without cost for academic research purposes. Please contact us if you are interested in a commercial license.

Click here to request the ICT Virtual Human Toolkit

Once you've downloaded the Toolkit, go to Getting Started for detailed directions.


News

  • Feb 12 2013 - The latest Toolkit release fixes a couple of recent regressions: the Toolkit loads correctly on 32-bit systems again, and both the SmartBody Monitor and the TTS debug button work once more. We also made a number of smaller upgrades and improvements; for a full list, see the Release Notes.
  • Jan 11 2013 - We released a minor version of the Toolkit today. Java is now included in the distribution and is no longer a 3rd-party dependency. Unity has been updated to version 4 and Ogre to 1.8.1. In addition, character creation with SmartBody has been simplified and asset loading has been streamlined. See the Release Notes for details.
  • Oct 31 2012 - We are happy to announce that with the latest release of the Toolkit you can now interact with multiple characters: Brad and Rachel. Both have been rebuilt from the ground up, with updated character models, animations, environments and voices. In addition, we implemented a drag-and-drop feature for new SmartBody characters and are providing several 3rd-party (as yet non-talking) Mixamo characters. This release also contains a variety of miscellaneous improvements, including new Festival text-to-speech voices, NVBG configuration improvements and more ways to use MultiSense. For a full list of changes, see the Release Notes.
  • Jul 11 2012 - We have released a minor update to the Toolkit, fixing some usability and stability issues. See Release Notes for details.
  • May 31 2012 - An exciting new version of the Toolkit is now available, offering the MultiSense framework, the Rapport research platform and the SBMonitor tool. MultiSense is a perception framework that enables multiple sensing and understanding modules to inter-operate simultaneously, broadcasting data through the Perception Markup Language. MultiSense currently contains GAVAM, CLM FaceTracker and FAAST, which you can use with a webcam or Kinect. The Rapport agent is a “virtual human listener” providing nonverbal feedback based on human nonverbal and verbal input. It has been used in a variety of international studies related to establishing rapport between real and virtual humans. Finally, the SBMonitor is a stand-alone tool for easy debugging of SmartBody applications, including testing available (facial) animations, gazes and more complex BML commands.
  • Mar 2 2012 - A minor release of the Toolkit is now available, updating the Unity version to 3.5 and providing incremental changes to the Unity/SmartBody debug tools in the Unity Editor (VH menu in Unity).

See News Archive for older notes.

Toolkit Overview

Goal

The goal of the Virtual Human Toolkit developed by the University of Southern California Institute for Creative Technologies (ICT) is to make creating virtual humans easier and more accessible, and thus expand the realm of virtual human research and applications.

What it is

Our research has led to the creation of ground-breaking technologies which we have coupled with other software components to form a complete embodied conversational agent. All ICT virtual human software is built on top of a common, modular architecture which allows Toolkit users to do any of the following:

  • utilize the Toolkit and all of its components as is;
  • utilize certain components while replacing others with non-Toolkit components;
  • utilize certain components in other existing systems.
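
To make this mix-and-match idea concrete, the sketch below shows a minimal in-process publish/subscribe bus in Python. It is only an illustration of the modular, message-passing style described above; the class names, topic strings, and payloads here are hypothetical and do not reflect the Toolkit's actual messaging API.

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-process stand-in for a shared message layer."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)

class BehaviorPlanner:
    """Toy module: listens for recognized speech, emits a behavior request."""
    def __init__(self, bus):
        self.bus = bus
        bus.subscribe("speech.recognized", self.on_speech)

    def on_speech(self, utterance):
        # A real behavior planner would infer gestures from the text.
        self.bus.publish("behavior.request",
                         {"speak": utterance, "gesture": "beat"})

bus = MessageBus()
planner = BehaviorPlanner(bus)
requests = []
bus.subscribe("behavior.request", requests.append)

# Any module can be swapped out as long as it speaks the same messages.
bus.publish("speech.recognized", "Hello there")
print(requests)  # [{'speak': 'Hello there', 'gesture': 'beat'}]
```

Because modules only depend on the message topics, replacing one component with a proprietary or 3rd-party equivalent means implementing the same messages, not changing the other modules.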

Our technology emphasizes natural language interaction, nonverbal behavior, and perception, and is organized into the following main modules:

  • AcquireSpeech: A tool that sends audio or text to speech recognizers and relays the results to the rest of the system. The Toolkit uses PocketSphinx as its 3rd-party speech recognition solution.
  • MultiSense: A perception framework that enables multiple sensing and understanding modules to inter-operate simultaneously, broadcasting data through Perception Markup Language. It currently contains GAVAM, CLM FaceTracker, and FAAST which work with a webcam or Kinect.
  • Non-Player Character Editor (NPCEditor): A suite of tools which work together to create appropriate dialogue responses to users’ inputs. A text classifier selects responses based on cross-language relevance models; the authoring interface relates questions and answers; and a simple dialogue manager controls aspects of output behavior.
  • Nonverbal Behavior Generator (NVBG): A rule-based behavior planner that infers communicative functions from the surface text and selects appropriate behaviors that augment and complement the characters’ dialogue.
  • Rapport 1.0: An agent that provides nonverbal feedback based on human nonverbal and verbal input. Rapport has been used in a variety of international studies related to establishing rapport between real and virtual humans.
  • SmartBody (SB): A modular, controller-based character animation system that uses the Behavior Markup Language.
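
Several of these modules, notably NVBG and SmartBody, exchange behavior requests expressed in the Behavior Markup Language (BML). As a rough illustration (the element and attribute names follow common BML conventions, but the `make_bml` helper is hypothetical and not a Toolkit API), a module might construct a BML request pairing speech with a gesture like this:

```python
import xml.etree.ElementTree as ET

def make_bml(utterance, gesture_lexeme="BEAT"):
    """Build a minimal BML block pairing speech with a gesture."""
    bml = ET.Element("bml")
    speech = ET.SubElement(bml, "speech", id="s1")
    ET.SubElement(speech, "text").text = utterance
    # Synchronize the gesture stroke with the start of the speech behavior.
    ET.SubElement(bml, "gesture", id="g1",
                  lexeme=gesture_lexeme, stroke="s1:start")
    return ET.tostring(bml, encoding="unicode")

print(make_bml("Hello, welcome."))
```

An animation system such as SmartBody consumes blocks like this and resolves the synchronization constraints into a coordinated performance.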

The target platform for the overall Toolkit is Microsoft Windows, although some components are multi-platform.

What it is not

The Toolkit does not contain all of the basic research technologies currently being developed and utilized at the ICT, such as the reasoning SASO agents. However, we continually evaluate our basic research findings for potential inclusion in future releases. 

Currently, we are not at liberty to publicly distribute any project-specific data. However, we encourage all interested parties to contact us.

Who can use it

The Toolkit has two target audiences:

  • Users: People who use the provided technology as is to either run a component or create new content. Users with basic computer and minor scripting skills will be able to configure and run systems.
  • Developers: Software engineers or programmers who are able to build and modify code. Developers may create new capabilities for the system, either by modifying or extending existing code or by creating new modules that interface with the rest of the system.

All Toolkit software may be used without cost for academic research and US Government purposes provided all associated licenses are honored. Please cite us appropriately, as per clause 3 of the license, when using the Toolkit or any of its components for published research. See the Papers section for more details.

License and Disclaimers

The License Agreement states, but is not limited to:

  • The Toolkit and its components are to be used only for academic research and US Government purposes.
  • Cite us appropriately when using the Toolkit or any of its components for published research. See Papers for details.
  • Toolkit users are required to honor all licenses of components and supporting software as defined in Exhibit A of the License Agreement.

The complete License Agreement and supporting documentation can be found in the License section.

Please be aware that the Toolkit consists of research software for which documentation and support are limited. However, both the software and the accompanying documentation are actively being developed and updated.

Who Uses Toolkit Technology

Many ICT projects use a subset of the technology provided with the Toolkit.

In addition, many groups outside of ICT use some of the Toolkit components, most notably SmartBody and Watson:

    • University of Reykjavik
    • German Research Center for Artificial Intelligence
    • ArticuLab at Northwestern University
    • Telecom Paris Tech
    • Affective Computing Research group at MIT Media Lab
    • ICSI/UCB Vision Group at UC Berkeley
    • Human-Centered, Intelligent, Human-Computer Interaction group at Imperial College
    • Worcester Polytechnic Institute
    • Microsoft Research
    • Relational Agents group at Northeastern University
    • Component Analysis Lab at Carnegie Mellon University
