
Overview

This page explains how to use the MultiSense framework (also known as the multimodal framework, developed by the Multicomp Lab). The framework is built on the SSI library (developed by Johannes Wagner, University of Augsburg) and focuses on the CLM face tracker (developed by Jason Saragih et al.), the Gavam face tracker (developed by Louis-Philippe Morency's Multicomp Lab), the Kinect, and FAAST (developed by the Mixed Reality Group). Each of these modules is implemented within the MultiSense framework and runs in its own thread in a synchronized manner. The output of the MultiSense framework is PML (the PML XML schema can be found at PML.xsd), which is currently passed on as vrPerception messages (based on the VHMsg API).
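
For orientation, a vrPerception message carries a PML document; a very rough, hypothetical sketch is shown below. The element and attribute names are illustrative assumptions only, not the real schema, so consult PML.xsd for the authoritative structure.

    <!-- Hypothetical PML sketch; element and attribute names are assumptions, see PML.xsd for the real schema -->
    <pml>
      <head>
        <!-- head pose of the detected user, e.g. from the Gavam tracker -->
        <rotation x="0.02" y="-0.10" z="0.01"/>
        <position x="0.00" y="0.10" z="0.60"/>
      </head>
    </pml>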

There are two main components within MultiSense -

  • ssi_vhmsger (\core\multisense\ssi_vhmsger\)
    This is the core application; it will only work if you have a web cam installed on the system. It integrates all the different components, such as vision and audio, and packages them into a single application whose output is PML.
  • PerceptionTest (\core\multisense\PerceptionApplication)
    This application is a middle-man between ssi_vhmsger and any other application that plans to use the PML for specific capabilities. For example, in the toolkit, this application takes the PML (vrPerception messages) as input and outputs messages to SmartBody to mimic the head orientation of the detected user (a minimal sketch of this pattern follows this list).

Users

MultiSense currently supports a number of modules, such as vision (for tracking face features, smiles, gaze, attention, activity, gestures, among others) and speech. To use MultiSense, the user needs to run the MultiSense application, which sends out vrPerception messages (based on PML) that any module can subscribe to for specific data.

How to run MultiSense

There are two main steps involved in running MultiSense -

How to run the MultiSense Application

To run the MultiSense module, either

  • Navigate to \bin\multisense\ssi_vhmsger\bin folder and run "run.bat".
  • Or, after you run the launcher from \run-launcher.bat, click Launch for MultiSense.
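
Equivalently, from a Windows command prompt at the toolkit root:

    rem Start the MultiSense application directly (same as double-clicking run.bat)
    cd bin\multisense\ssi_vhmsger\bin
    run.bat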

By default, only the Gavam and CLM face trackers are turned on (for the toolkit; otherwise only Gavam is turned on). If you wish to run other modules, edit \bin\multisense\ssi_vhmsger\bin\config.ini and change the corresponding useXXXX setting to true. Note that if the dependencies for a module are not set up correctly, ssi_vhmsger will crash. Also note that if you want to run FAAST, make sure both useKinect and useFAAST are set to true in config.ini. Make sure a web cam is connected to the computer; if the application fails to start, one of the dependencies might be missing. Currently, ssi_vhmsger is not killed when you click the "Kill" button in the launcher; if you try to kill it from the launcher, the ssi tool will turn yellow.
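
For reference, the relevant part of config.ini looks roughly like the following. Only useKinect and useFAAST are named on this page; the other keys are assumed examples of the useXXXX pattern, so check the shipped config.ini for the actual switch names.

    ; Hypothetical excerpt of \bin\multisense\ssi_vhmsger\bin\config.ini
    ; Only useKinect and useFAAST are taken from this page; the remaining keys are assumed.
    useGavam = true
    useCLM = true
    useKinect = false
    useFAAST = false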

How to run the Perception application (PerceptionTest)

To run the PerceptionTest application, either

  • Navigate to \bin\multisense\PerceptionTest\ and run "PerceptionTest.exe".
  • Or, after you run the launcher from \run-launcher.bat, click Launch for the Perception Application.

Please contact either suri or stratou if you have any issues running the application.

Developers

Setting up MultiSense

To set up MultiSense, the user will need to check out the repository from -

https://svn.ict.usc.edu/svn_vision/Projects/trunk/ssi-ict
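
For example, from a command prompt with an SVN client installed (the target folder name here is just an example):

    rem Check out the MultiSense/SSI-ICT source tree
    svn checkout https://svn.ict.usc.edu/svn_vision/Projects/trunk/ssi-ict ssi-ict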

After downloading the framework, the user will need to build the module which sends and receives VHMsgs (located in \tools\ssi_vhmsger; the solution file is located at \tools\ssi_vhmsger\build\ssi_vhmsger.sln). Set "ssi_vhmsger_test" as the Startup project and set its working directory to "..\..\..\..\bin". Once the project builds successfully, the user will need a web cam connected to the computer to send vrPerception messages over ActiveMQ (note that this module has a dependency on ActiveMQ, which will need to be installed; the latest installer for apache-activemq can be found at \\vhbuild\SASO-Installs).
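
Conceptually, sending a vrPerception message over VHMsg looks something like the sketch below. The vhmsg function names are written from memory; treat them as assumptions and check them against the headers in the checkout.

    // Hedged sketch of the send side: publish a PML document as a vrPerception message.
    #include <string>
    #include "vhmsg-tt.h"   // assumption: the VHMsg C++ API header

    void sendPerception( const std::string & pml )
    {
       // Assumes vhmsg::ttu_open() was already called once at startup to connect to ActiveMQ.
       vhmsg::ttu_notify2( "vrPerception", pml.c_str() );   // message name + PML payload as the argument string
    }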

You can check out the detailed description of how to set up the project at the following location -

VrPerception messages with SSI-ICT framework

As an illustration of integration, please check out the following link, which integrates the toolkit with MultiSense -

VHToolkit with Perception

If there is a problem with installing MultiSense, please contact suri or stratou.

Message API

vrPerception

vrPerceptionProtocol

Known Issues

  • The MultiSense application currently does not listen to the vrKillComponent message, which means that if the user tries to kill it from the launcher, it will turn yellow and not close. The user has to close the application manually.
  • If the application does not seem to start up, make sure the camera is working properly on your computer. Also, if there is a camera.option file present at "\bin\multisense\ssi_vhmsger\bin", delete it and try to run the application again. If there is a problem with running MultiSense, please contact us at vh-support@ict.usc.edu.
  • If the user enables Gavam and sees a big red X on the screen, it only means that Gavam is not able to detect the face properly. Currently, there is no way to reset it (other than restarting the application). The user can, however, restart the CLM tracker by right-clicking the output window.

FAQ

See Main FAQ for frequently asked questions regarding the installer. Please use the Google Groups mailing list for unlisted questions.
