
...

The Non-Verbal Behavior Generator (NVBG), as the name suggests, is the module that generates behavior other than speech, such as gestures, facial expressions and gaze. NVBG generates this behavior based on the speech input and other messages that it receives. These behaviors serve to augment and emphasize spoken communication. Based on who speaks and who listens, NVBG can characterize the NPC as a listener, speaker or bystander and generate appropriate behavior. These behaviors are configurable using XML files provided as input to NVBG. Using these XML files, we can specify which words or parts-of-speech trigger which animations, as well as idle animations and idle gazes that get triggered when the character has been idle for a specified amount of time.
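For illustration only, a word-triggered rule might look like the sketch below, in the style of the rule files described later on this page. The exact element nesting, and the posture and clip names, are assumptions for illustration rather than shipped configuration:

<rule keyword="greeting" priority="5" >
  <pattern>hello</pattern>
  <posture name="standing">
    <!-- placeholder animation name; played when "hello" is spoken -->
    <clip>Idle01_BeatLeft01</clip>
  </posture>
</rule>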

...

-write_to_file: specifies whether the output BML should be written to a file instead of being sent out as a VH message. Default is "false".

-write_to_file_path: path of the file the output is written to when write_to_file is true.

-parsetree_cachefile_path: specifies the path to the file used for caching responses from the parser used by NVBG. If this file doesn't exist, it is created.

-hide_GUI: hides the GUI if set to "true".

-expressions_file_name: specifies the path to the expressions file (if one is being used).

-saliency_idle_gaze: specifies whether idle gaze is turned on or off. "false" by default.

-speaker_gaze: specifies whether gazes should be generated while speaking.

...

-storypoint: specifies which story-point should be loaded from the saliency-map XML; the saliency map is updated accordingly, with priorities assigned to the appropriate pawns.

Once the required command-line arguments are provided, NVBG works with the other VH components in accordance with the rules specified for it. A hypothetical invocation is sketched below.
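As a sketch only: assuming NVBG is started from an executable named NVBG.exe (the launcher name, paths and values here are assumptions; use whatever run script ships with your toolkit), an invocation combining the parameters above might look like:

NVBG.exe -write_to_file "false" -parsetree_cachefile_path "data/cache/parsetree_cache.xml" -hide_GUI "true" -saliency_idle_gaze "true" -storypoint "intro"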

 

Sending input messages

 

NVBG mainly subscribes to vrExpress messages, plus a few other control messages that allow setting some options. Below is a list of messages that NVBG subscribes to:

...
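For context, a vrExpress message carries the speaker, the addressee, an utterance ID, and an XML payload describing the communicative intent. The following is a hedged sketch only; the exact payload schema varies by toolkit version, and the names here (Brad, user, msg_42) are placeholders:

vrExpress Brad user msg_42 <?xml version="1.0" encoding="UTF-8"?>
<act>
  <participant id="Brad" role="actor" />
  <bml>
    <speech id="sp1">Hello there!</speech>
  </bml>
</act>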

The <pattern> tag contains the word that is to be matched, and the <clip> tags contain the animations that should be played when that word is spoken. The <posture> tag is used to set animations for each character posture; only animations in the current character's posture will be played by NVBG. The 'priority' attribute allows NVBG to prioritize between animations when multiple ones overlap.

...

<rule keyword="first_NP" priority="5" >
<pattern>first_NP</pattern>
</rule>

<rule keyword="noun_phrase" priority="5" >
<pattern>NP</pattern>
</rule>

 

Notice that, in the above case, no animations are specified (although they could be). In this case, NVBG checks the spoken sentence for parts of speech (first_NP, noun_phrase, etc.) and inserts placeholder XML tags with the keyword specified in the rule. Later, when NVBG applies the XSL transform to the intermediate BML (with the placeholder tags), it generates output behavior based on what the XSL rules specify (more on this later, under the "POS Transform rules" section).

The rule input file is thus a collection of these rules that NVBG parses at runtime. Based on whether animations are specified directly in this file as <pattern> entries or further down the pipeline in the XSL rules, NVBG generates the appropriate behavior; a sketch of the placeholder-to-behavior step follows.
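To make the two stages concrete, here is a minimal, hypothetical sketch: an intermediate BML fragment with a placeholder tag inserted for a matched noun phrase, followed by an XSL template that could turn that placeholder into an animation. The placeholder element name, attributes and template are assumptions for illustration, not the shipped transform.

Intermediate BML with a placeholder tag:

<bml>
  <speech id="sp1">the red ball rolled away</speech>
  <!-- placeholder inserted by NVBG for the matched noun phrase -->
  <keyword name="noun_phrase" start="sp1:T0" end="sp1:T2" />
</bml>

XSL rule mapping the placeholder to a behavior (a fragment that would live inside the full xsl:stylesheet):

<xsl:template match="keyword[@name='noun_phrase']">
  <!-- emit a placeholder animation aligned to the phrase's time marks -->
  <animation start="{@start}" end="{@end}" name="HandsBeat" priority="5" />
</xsl:template>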

...

The saliency map specifies which objects/characters/pawns in the environment are important to the character, and with what priority. These priorities can vary based on the current story-point, i.e. certain objects become important only later in a scene.

Using these priorities, the saliency map generates idle gazes and other actions in response to spoken sentences and events. A hypothetical saliency-map sketch follows.
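As a purely illustrative sketch (the element and attribute names here are assumptions, not the actual saliency-map schema), a story-point-aware saliency map might look like:

<saliency_map>
  <storypoint name="intro">
    <pawn name="ball" priority="1" />
    <pawn name="door" priority="3" />
  </storypoint>
  <storypoint name="escape">
    <!-- the door becomes the most salient object later in the scene -->
    <pawn name="door" priority="1" />
    <pawn name="ball" priority="4" />
  </storypoint>
</saliency_map>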

...