The Virtual Human Toolkit is a set of components (modules, tools and libraries) that implements one possible version of the Virtual Human Architecture. It has the following main modules:

  • NPCEditor, a statistical text classifier that matches novel input (a question the user asks) to authored output (a line the character speaks); a toy illustration of this matching idea follows the list.
  • SmartBody (SB), a character animation platform that provides locomotion, steering, object manipulation, lip syncing, gazing and nonverbal behavior in real time through the Behavior Markup Language (BML).
  • Nonverbal Behavior Generator (NVBG), a rule-based system that takes a character utterance as input and generates a nonverbal behavior schedule (gestures, head nods, etc.) in the form of BML as output; a sketch of such a BML block also follows this list.
  • MultiSense, a perception framework that enables multiple sensing and understanding modules to inter-operate simultaneously, broadcasting data through the Perception Markup Language (PML).
  • Unity, a proprietary game engine. The Toolkit only contains the executable, but you can download the free version of Unity or purchase Unity Pro from their website. The Toolkit includes Ogre as an open source example of how to integrate SmartBody with a renderer.
  • PocketSphinx, an open source speech recognition engine. In the Toolkit, PocketSphinx is the speech server for our AcquireSpeech client.
  • Text-to-speech engines, including Festival and MS SAPI, accessed through the Toolkit's Text To Speech Interface.
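
To make the matching idea behind NPCEditor concrete, here is a toy sketch in Python: it scores a user question against a few authored question/answer pairs using bag-of-words cosine similarity and returns the best line, falling back when nothing matches well. This only illustrates the input-to-output matching concept; NPCEditor itself uses a statistical relevance-model classifier, and the questions, answers, and threshold below are made up.

```python
import math
from collections import Counter

# Toy question/answer pairs of the kind an author creates in NPCEditor.
# (Content is made up for illustration.)
QA_PAIRS = [
    ("what is your name", "My name is Brad."),
    ("where are you from", "I was built at a lab in Los Angeles."),
    ("what can you do", "I can listen, talk, and gesture."),
]

def vectorize(text):
    """Bag-of-words count vector for a lowercased, whitespace-split string."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(n * b[w] for w, n in a.items() if w in b)
    norm = math.sqrt(sum(n * n for n in a.values()))
    norm *= math.sqrt(sum(n * n for n in b.values()))
    return dot / norm if norm else 0.0

def best_response(question, threshold=0.3):
    """Pick the authored line whose question best matches the input;
    fall back to an off-topic line when nothing scores above threshold."""
    q = vectorize(question)
    score, answer = max((cosine(q, vectorize(text)), line)
                        for text, line in QA_PAIRS)
    return answer if score >= threshold else "I'm not sure I understood that."

if __name__ == "__main__":
    print(best_response("tell me your name"))    # -> "My name is Brad."
    print(best_response("is it raining today"))  # -> fallback line
```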
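
The BML that NVBG produces and SmartBody consumes is plain XML. The sketch below assembles a minimal block in Python: one spoken line plus a head nod synchronized to it. The element names follow BML, but the exact attributes, ids, and character name here are illustrative, and the message transport around it (the Toolkit's VHMsg/ActiveMQ bus) is not shown.

```python
import xml.etree.ElementTree as ET

def build_bml(character, line):
    """Build a minimal BML block: one spoken line plus a head nod.

    Element names follow BML, but the exact attributes a realizer
    accepts vary between BML drafts and SmartBody versions, so treat
    this as an illustrative sketch rather than a canonical message.
    """
    bml = ET.Element("bml", id="bml1", character=character)

    # The utterance itself; this is the text NVBG starts from.
    speech = ET.SubElement(bml, "speech", id="s1")
    ET.SubElement(speech, "text").text = line

    # A head nod scheduled against the speech via a BML sync reference.
    ET.SubElement(bml, "head", id="h1", type="NOD", start="s1:start")

    return ET.tostring(bml, encoding="unicode")

if __name__ == "__main__":
    print(build_bml("Brad", "Hello there."))
```

In the running Toolkit, a block like this travels over the message bus to SmartBody, which realizes it as synchronized speech and animation.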

For a complete overview of all the modules, tools and libraries, please see the Components section.
