...

The ICT Virtual Human Toolkit is a collection of modules, tools, and libraries designed to aid and support the creation of virtual human conversational characters.  The toolkit is an on-going, ever-changing, innovative system fueled by basic research performed at the University of Southern California (USC) Institute for Creative Technologies (ICT) and its partners. 

Designed for easy mixing and matching with a research project’s proprietary or 3rd-party software, this toolkit provides a widely accepted platform on which new technologies can be built.  It is our hope that, together as a research community, we can further develop and explore virtual human technologies.  The Virtual Human Toolkit can be licensed without cost for academic research purposes. 

Request Virtual Human Toolkit

...

News Archive

Toolkit Overview

Goal

The goal of the Virtual Human Toolkit is to make creating virtual humans easier and more accessible, and thus expand the realm of virtual human applications.

What it is

The ICT Virtual Human Toolkit is a collection of modules, tools and libraries to aid and support the creation of virtual human conversational characters. At the core of the toolkit lie innovative, research-driven technologies, which are combined with other software components in order to provide a complete embodied conversational agent. All ICT virtual human software is built on top of a common, modular framework which allows toolkit users to do any of the following:

  • utilize the toolkit and all of its components as is;
  • utilize certain components while replacing others with non-toolkit components;
  • utilize certain components in other existing systems.

Our technology emphasizes natural language interaction, nonverbal behavior, and visual recognition, and is broken up into the following main modules:

  • Non-Player Character Editor (NPCEditor): A suite of tools which work together to create appropriate dialogue responses to users’ inputs for one or more characters. A text classifier selects responses based on cross-language relevance models; the authoring interface relates questions and answers; and a simple dialogue manager controls aspects of output behavior.
  • Nonverbal Behavior Generator (NVBG): A rule-based behavior planner that infers communicative functions from the surface text and selects appropriate behaviors that augment and complement the characters’ dialogue.
  • SmartBody (SB): A modular, controller-based character animation platform that provides locomotion, steering, object manipulation, lip syncing, gazing and nonverbal behavior in real time, using the Behavior Markup Language.
  • MultiSense
  • Rapport 1.0
  • Watson: A real-time visual feedback recognition library for interactive interfaces which uses the images from either a monocular or stereo camera to recognize eye and head gazes and gestures.
  • Speech Client (AcquireSpeech): A tool to send audio, or text, to speech recognizers and to relay the information to the entire system.  The toolkit uses PocketSphinx as a 3rd party speech recognition solution.
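To give a feel for how these modules fit together: SmartBody consumes behavior requests expressed in the Behavior Markup Language (BML). The fragment below is only an illustrative sketch based on the public BML drafts, not an excerpt from the toolkit; the target name "user" is hypothetical, and the exact elements and attributes a given SmartBody release accepts may differ.

```xml
<!-- Illustrative BML request (hypothetical example, not from the toolkit):
     speak a line, gaze at a target named "user", and nod while speaking. -->
<bml id="bml1">
  <speech id="s1">Hello, nice to meet you.</speech>
  <gaze id="g1" target="user"/>
  <head id="h1" lexeme="NOD" start="s1:start"/>
</bml>
```

In a typical pipeline, a module such as NVBG would generate a request like this from the character’s dialogue line and pass it to SmartBody for realization.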

The target platform for the overall toolkit is Microsoft Windows, although some components are multi-platform.

What it is not

Although the toolkit supports virtual human development, some components are prototypes rather than finished, state-of-the-art technologies. The Components section lists several potential alternatives for some components, should you wish to use them instead. 

The toolkit does not contain all of the basic research technologies currently being developed and utilized at the ICT, such as the reasoning SASO agents.  However, we continually evaluate our basic research findings for potential inclusion in future releases. 

Currently, we are not at liberty to publicly distribute any project-specific data.  However, we encourage all interested parties to contact us directly. In addition, we are considering creating a forum where users can share their creations.

Who can use it

The toolkit has two target audiences:

...