Introduction
Welcome to the ICT Virtual Human Toolkit Website.
...
Once you've downloaded the Toolkit, go to Getting Started for detailed directions.
News
- Oct 31 2012 - We are happy to announce that with the latest release of the Toolkit you can now interact with multiple characters: Brad and Rachel. Both have been built from the ground up, with updated character models, animations, environment and voices. In addition, we implemented a drag and drop feature for new SmartBody characters and are providing several 3rd party (non-talking) Mixamo characters. This release also contains a variety of miscellaneous improvements, including new Festival text-to-speech voices, NVBG configuration improvements and more ways to use MultiSense. For a full list of changes, see the Release Notes.
- Jul 11 2012 - We have released a minor update to the Toolkit, fixing some usability and stability issues. See the Release Notes for details.
- May 31 2012 - An exciting new version of the Toolkit is now available, offering the MultiSense framework, the Rapport research platform and the SBMonitor tool. MultiSense is a perception framework that enables multiple sensing and understanding modules to interoperate simultaneously, broadcasting data through the Perception Markup Language. MultiSense currently contains GAVAM, CLM FaceTracker and FAAST, which you can use with a webcam or Kinect. The Rapport agent is a “virtual human listener” that provides nonverbal feedback based on human nonverbal and verbal input. It has been used in a variety of international studies related to establishing rapport between real and virtual humans. Finally, SBMonitor is a stand-alone tool for easy debugging of SmartBody applications, including testing available (facial) animations, gazes and more complex BML commands.
- Mar 2 2012 - A minor release of the Toolkit is now available, updating the Unity version to 3.5 and providing incremental changes to the Unity/SmartBody debug tools in the Unity Editor (VH menu in Unity).
- Dec 22 2011 - Happy holidays! The latest release of the Toolkit includes the ability to interrupt Brad, improved support for higher resolutions, and a fix for text-to-speech not working properly. See the Release Notes for details.
...
- AcquireSpeech: A tool that sends audio or text to speech recognizers and relays the information to the rest of the system. The Toolkit uses PocketSphinx as a 3rd party speech recognition solution.
- MultiSense: A perception framework that enables multiple sensing and understanding modules to interoperate simultaneously, broadcasting data through the Perception Markup Language. It currently contains GAVAM, CLM FaceTracker and FAAST, which work with a webcam or Kinect.
- Non-Player Character Editor (NPCEditor): A suite of tools that work together to create appropriate dialogue responses to users’ inputs. A text classifier selects responses based on cross-language relevance models; the authoring interface relates questions and answers; and a simple dialogue manager controls aspects of output behavior.
- Nonverbal Behavior Generator (NVBG): A rule-based behavior planner that infers communicative functions from the surface text and selects appropriate behaviors that augment and complement the characters’ dialogue.
- Rapport 1.0: An agent that provides nonverbal feedback based on human nonverbal and verbal input. Rapport has been used in a variety of international studies related to establishing rapport between real and virtual humans.
- SmartBody (SB): A modular, controller-based character animation system that uses the Behavior Markup Language.
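Several of these components connect through the Behavior Markup Language: NVBG plans nonverbal behavior as BML blocks, which SmartBody then realizes as character animation. As a rough illustration of what such a message looks like, the sketch below composes a minimal BML block in Python. The element and attribute names follow general BML 1.0 conventions, but the exact schema a given SmartBody version accepts may differ, and the helper function name is purely illustrative.

```python
# Illustrative sketch only: composes a BML-style request of the kind NVBG
# might hand to SmartBody (speech plus a head nod synchronized to it).
# Element/attribute names follow BML 1.0 conventions; a given SmartBody
# version may expect a slightly different schema.
import xml.etree.ElementTree as ET

def make_bml(utterance: str) -> str:
    """Build a BML block containing speech and a synchronized head nod."""
    bml = ET.Element("bml")
    speech = ET.SubElement(bml, "speech", id="s1")
    ET.SubElement(speech, "text").text = utterance
    # Time the nod's start against the speech behavior's "start" sync point.
    ET.SubElement(bml, "head", id="h1", type="NOD", start="s1:start")
    return ET.tostring(bml, encoding="unicode")

print(make_bml("Hello, my name is Brad."))
```

The sync-point reference (`start="s1:start"`) is the key BML idea: behaviors are coordinated by pointing at named timing points of other behaviors rather than at absolute clock times.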
...