
Overview

The Toolkit provides an example of how to use the MultiSense framework (also known as the multimodal framework, developed by the Multicomp Lab).

MultiSense uses these technologies:

Each module is implemented within the MultiSense framework and runs in a separate thread in a synchronized manner. The output of the MultiSense framework is PML (the PML XML schema can be found in PML.xsd), sent via vrPerception messages (through VHMsg).
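
For orientation, a VHMsg message is a message name followed by its arguments, so the perception output looks roughly like the line below. The payload contents are only sketched here; the actual elements and attributes are defined by PML.xsd.

    vrPerception <pml> ... elements as defined in PML.xsd ... </pml>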

The Toolkit includes two main components that use MultiSense:

  • ssi_vhmsger (\core\multisense\ssi_vhmsger\)
    This is the core application that uses MultiSense. It only works if a web cam is installed on the system. It integrates many different MultiSense components, such as vision, audio, and analysis, into a single application that outputs PML.
  • PerceptionTest (\core\multisense\PerceptionApplication)
    This application is a middle-man between ssi_vhmsger and any other application that wants to use the PML for specific capabilities. For example, in the Toolkit, this application takes the PML as input (via the vrPerception message) and outputs messages to SmartBody so that a character mimics the head orientation of the detected user.

Users

MultiSense currently supports multiple modules, including a vision module (for tracking face features such as smile, gaze, attention, and activity) and a speech recognition module. To use MultiSense, run the MultiSense application; it will broadcast vrPerception messages (based on PML) generated by each module, which can be received by any other module or external component.
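
As a minimal sketch of such an external receiver, the C++ snippet below subscribes to vrPerception using the vhmsg-tt client library that ships with the Toolkit; the function names follow the vhmsg-tt API used elsewhere in the Toolkit, but the exact signatures and the TTU_SUCCESS constant should be verified against vhmsg-tt.h.

    #include <cstdio>
    #include <cstring>
    #include "vhmsg-tt.h"   // VHMsg C++ client library shipped with the Toolkit

    // Called by vhmsg for every registered message; args holds the PML payload.
    static void OnVHMsg( const char * op, const char * args, void * /*userData*/ )
    {
       if ( strcmp( op, "vrPerception" ) == 0 )
       {
          printf( "PML received: %s\n", args );   // parse the PML XML here
       }
    }

    int main()
    {
       if ( vhmsg::ttu_open() != TTU_SUCCESS )    // connect to ActiveMQ
          return 1;

       vhmsg::ttu_set_client_callback( OnVHMsg );
       vhmsg::ttu_register( "vrPerception" );     // subscribe to MultiSense output

       for ( ;; )                                 // pump incoming messages until killed (Ctrl+C)
          vhmsg::ttu_poll();
    }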

Using MultiSense

Two components are required to make full use of MultiSense: the MultiSense Application and the Perception Application.

Running MultiSense Application

To run the MultiSense Application, either

  • Run the application from the Launcher
  • Navigate to the \bin\multisense\ssi_vhmsger\bin folder and run "run.bat" (see the example below).
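
For example, from a command prompt opened at the root of the Toolkit installation:

    cd bin\multisense\ssi_vhmsger\bin
    run.bat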

By default in the Toolkit, only the Gavam and CLM face trackers are turned on (outside the Toolkit, only Gavam is turned on by default).

Notes:

  • If you wish to run other modules, edit \bin\multisense\ssi_vhmsger\bin\config.ini and set the corresponding "useXXXX" parameter to "true" to enable that module (see the example excerpt after these notes).
  • If the dependencies for a module are not set up correctly, ssi_vhmsger will crash.
  • If you want to run FAAST, make sure the "useKinect" and "useFAAST" parameters are both set to "true" in config.ini.
  • Make sure a web cam is connected to the computer.
  • If the application fails to start, one of the dependencies might be missing.
  • ssi_vhmsger will not exit when you click the "Kill" button in the Launcher; if you try to kill it from the Launcher, the ssi component will turn yellow.
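
The excerpt below sketches what such an edit might look like. Only useKinect and useFAAST are named in these notes; the remaining key is a hypothetical example of the useXXXX pattern, and the exact syntax should be checked against the shipped config.ini.

    useKinect = true    ; required for FAAST
    useFAAST  = true
    useGavam  = true    ; hypothetical key name, shown only to illustrate the useXXXX pattern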

Running the Perception Application (PerceptionTest)

To run the PerceptionTest application, either

  • Run it from the Launcher
  • Navigate to \bin\multisense\PerceptionTest\ and run "PerceptionTest.exe".

Please contact either suri or stratou if you have any issues running the application.

Developers

Setting up MultiSense

To set up MultiSense, the user will need to check out the repository from:

https://svn.ict.usc.edu/svn_vision/Projects/trunk/ssi-ict
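
For example, using the Subversion command-line client:

    svn checkout https://svn.ict.usc.edu/svn_vision/Projects/trunk/ssi-ict ssi-ict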

After downloading the framework, the user will need to build the module that sends and receives VHMsgs (located in \tools\ssi_vhmsger; the solution file is at \tools\ssi_vhmsger\build\ssi_vhmsger.sln):

  • Set "ssi_vhmsger_test" as the Startup Project.
  • Set the working directory to "..\..\..\..\bin".
  • Install ActiveMQ, which this module depends on; the latest apache-activemq installer can be found at \\vhbuild\SASO-Installs.
  • Once the project builds successfully, connect a web cam to the computer; the module uses it to send vrPerception messages over ActiveMQ.
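
For a rough idea of the send side, the sketch below broadcasts a single vrPerception message through the same vhmsg-tt client; the payload string is a placeholder, and real messages must carry PML as defined by PML.xsd.

    #include "vhmsg-tt.h"   // VHMsg C++ client library

    int main()
    {
       if ( vhmsg::ttu_open() != TTU_SUCCESS )   // requires ActiveMQ to be running
          return 1;

       // Placeholder payload; real messages carry PML as defined by PML.xsd.
       const char * pml = "<pml> ... </pml>";

       vhmsg::ttu_notify2( "vrPerception", pml );   // message name + arguments
       vhmsg::ttu_close();
       return 0;
    }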

A detailed description of how to set up the project is available at the following location:

vrPerception messages with SSI-ICT framework

As an illustration of integrating the Toolkit with MultiSense, please check out the following link:

VHToolkit with Perception

If there is a problem installing MultiSense, please contact suri or stratou.

Message API

Sends:

vrComponent

 

Receives:

vrPerception

vrPerceptionProtocol

vrAllCall

vrKillComponent (does not react to it yet)
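
As an illustration of the Toolkit's standard handshake, vrAllCall is typically answered with a vrComponent message identifying the sender; the component id shown below is illustrative only.

    vrAllCall
    vrComponent ssi_vhmsger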

Known Issues

  • The MultiSense Application currently does not listen to the vrKillComponent message, which means that if the user tries to kill it from the Launcher, it will turn yellow and not close. The user has to close the application manually.
  • If the application does not seem to start up, make sure your camera is working properly. Also, if there is a camera.option file present at "\bin\multisense\ssi_vhmsger\bin", delete it and try to run the application again (see the example command after this list). If there is a problem with running MultiSense, please contact us at vh-support@ict.usc.edu.
  • If the user enables Gavam and sees a big red X on the screen, it means that Gavam is not able to detect the face properly. Currently, there is no way to reset it other than restarting the application. The user can, however, restart the CLM tracker by right-clicking the output window.
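
The camera.option file can be removed from a command prompt opened at the root of the Toolkit installation, for example:

    del bin\multisense\ssi_vhmsger\bin\camera.option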

FAQ

See the Main FAQ for frequently asked questions regarding the installer. Please use the Google Groups mailing list for unlisted questions.
