Quick facts:
The application expects a config file to be specified on the command line as follows:
-c [file-name]
This config file should contain the following options:
-fwdflat
-bestpath
-lm [the language model file to be used]
-dict [the dictionary to be used]
-hmm [the acoustic model to be used]
By default, the Virtual Human Toolkit uses the Wall Street Journal acoustic model that ships with PocketSphinx, together with the CMU pronunciation dictionary. You can change these options to use your own models.
-samprate [the sampling rate]
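Putting the options above together, a minimal config file might look like the sketch below. All file names and the sampling rate are illustrative placeholders, not files that necessarily ship with the toolkit, and the yes/no values reflect the usual PocketSphinx convention that boolean options such as -fwdflat and -bestpath take an explicit value:

```
-fwdflat yes
-bestpath yes
-lm wsj.lm
-dict cmudict.dict
-hmm wsj_acoustic_model/
-samprate 16000
```

The server would then be started with this file passed via -c (the executable name here is an assumption for illustration): pocketsphinx-sonic-server -c config.txt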
To create your own language model for use with pocketsphinx-sonic-server, you will need the "jasr" tool, which is located under lib/jasr in the Virtual Human Toolkit folder. To build the actual language models you can use 'cmuslm' or 'srilm', both of which are included in the jasr folder. Note that 'cmuslm' currently only works on Linux, while 'srilm' works on both Linux and Windows XP.
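If you take the srilm route, the usual SRILM workflow is to build an n-gram model from a plain-text training corpus with its ngram-count tool. A sketch, where corpus.txt and model.lm are placeholder file names of your choosing:

```
ngram-count -text corpus.txt -order 3 -lm model.lm
```

This produces a trigram model in ARPA format, which is the format the -lm option in the config file expects.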
You should do the following
How can programmers modify or add functionality?
List of common known issues, like why something isn't working, why it's implemented in a certain way, limitations, etc. If there are major Jira tickets, link to those as well.
What is pocketsphinx-sonic-server?
pocketsphinx-sonic-server is a wrapper around the PocketSphinx speech recognition system that allows us to communicate with it using the sonic protocol.
Where can I find pocketsphinx-sonic-server on svn?
http://svn.ict.usc.edu/svn_vh/trunk/core/pocketsphinx-sonic-server/
If I have questions or problems, who should I contact?
Link to the appropriate section in the main FAQ page.