This page explains how to use the MultiSense framework (also known as the multimodal framework, developed by the Multicomp Lab), focusing on the CLM face tracker (developed by Jason Saragih et al.), the Gavam face tracker (developed by Louis-Philippe Morency's Multicomp Lab), Kinect, and FAAST (developed by the Mixed Reality Group), all built on the ssi lib (developed by Johannes Wagner, University of Augsburg). Each of these modules is implemented within the MultiSense framework and runs in a separate thread in a synchronized manner. The output of the MultiSense framework is PML (the PML XML schema can be found in PML.xsd), which is currently passed on in vrPerception messages (based on the VHMsg API).
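The authoritative structure of a perception payload is defined in PML.xsd; as a rough illustration only (the element and attribute names below are hypothetical placeholders, not taken from the actual schema), a PML payload carried in a vrPerception message might look something like:

```xml
<!-- Hypothetical sketch only; consult PML.xsd for the real element names -->
<pml>
  <face tracker="CLM">
    <gaze horizontal="0.12" vertical="-0.05"/>
    <smile level="0.8"/>
  </face>
</pml>
```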
...
MultiSense currently supports many modules, including vision (tracking face features, smiles, gaze, attention, activity, and gestures, among others) and speech. To run MultiSense, the user launches the ssi_vhmsger MultiSense application, which sends out vrPerception messages (based on PML) that any module can subscribe to in order to receive specific data.
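A subscribing module ultimately receives the PML XML as the body of a vrPerception message and parses out the fields it cares about. The sketch below shows only that parsing step, using Python's standard library; the sample payload and its element/attribute names (`face`, `smile`, `level`) are hypothetical placeholders rather than the actual PML.xsd schema, and receiving the message itself would go through the VHMsg API.

```python
# Minimal sketch of parsing a (hypothetical) PML payload from a
# vrPerception message. Element and attribute names are placeholders;
# consult PML.xsd for the real schema.
import xml.etree.ElementTree as ET

SAMPLE_PML = """\
<pml>
  <face tracker="CLM">
    <smile level="0.8"/>
  </face>
</pml>
"""

def extract_smile_level(pml_xml: str) -> float:
    """Pull the smile level out of a hypothetical PML payload."""
    root = ET.fromstring(pml_xml)
    smile = root.find("./face/smile")
    return float(smile.get("level"))

if __name__ == "__main__":
    print(extract_smile_level(SAMPLE_PML))  # prints 0.8
```

In practice a subscriber would register a callback for the vrPerception message type and run this kind of extraction on each incoming payload.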
...
There are two main steps involved in running MultiSense:
...
To run the ssi_vhmsger MultiSense module, either
...
vrPerceptionProtocol
See the Main FAQ for frequently asked questions regarding the installer. Please use the Google Groups mailing list for unlisted questions.