VoxForge
I want to ask whether what I'm saying is right or wrong. I searched thoroughly for 8 to 9 hours and came to this conclusion:
If I have to build user-dependent continuous recognition, I have to:
I train the system on my voice as in the tutorial:
http://www.voxforge.org/home/dev/acousticmodels/linux/create/htkjulius/how-to/run-julian
*But without the grammar and voca files.
In the .jconf, I have to specify the language model file (.bin) and comment out the lines for the voca and .dict files, roughly as in the sketch below.
Language model file: http://www.speech.cs.cmu.edu/sphinx/models/hub4opensrc_jan2002/hub4opensrc.6000.mdef
And then I start using Julius for continuous speech recognition.
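Something like this is what I mean in the .jconf (just a sketch; the file names are placeholders, and I'm not sure whether the word dictionary can really be dropped in N-gram mode):

    ## Acoustic model trained on my own voice per the VoxForge tutorial
    -h hmmdefs                # HTK-format acoustic model
    -hlist tiedlist           # list of tied HMM names

    ## Grammar-based (Julian) lines commented out:
    # -dfa sample.dfa         # finite-state grammar built from the .grammar file
    # -v sample.dict          # word dictionary built from the .voca file

    ## N-gram language model instead:
    -d languagemodel.bin      # binary N-gram language model
    # Julius may still need a pronunciation dictionary (-v) here even in
    # N-gram mode; only the grammar/DFA part is dropped?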
--- (Edited on 3/28/2012 7:26 pm [GMT-0500] by ) ---
To build user-dependent dictation, you need to go through the CMU Sphinx adaptation tutorial:
http://cmusphinx.sourceforge.net/wiki/tutorialadapt
You need about 30 minutes of dictated adaptation data to build a very good speaker-dependent model on top of the WSJ acoustic model. Later you will be able to use it with pocketsphinx in your dictation application; a rough sketch of the steps follows.
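Roughly, the pipeline looks like this (a sketch only; the exact arguments are in the tutorial above, and the paths here are placeholders for your own files):

    # Extract features from your recorded adaptation utterances
    sphinx_fe -argfile wsj/feat.params -samprate 16000 \
        -c adapt.fileids -di wav -do mfc -ei wav -eo mfc -mswav yes

    # Accumulate observation counts against the baseline WSJ model
    bw -hmmdir wsj -moddeffn wsj/mdef -dictfn cmudict.dict \
        -ctlfn adapt.fileids -lsnfn adapt.transcription -accumdir bwaccum

    # MAP-update the model parameters with the accumulated counts
    map_adapt -moddeffn wsj/mdef -meanfn wsj/means -varfn wsj/variances \
        -mixwfn wsj/mixture_weights -tmatfn wsj/transition_matrices \
        -accumdir bwaccum -mapmeanfn wsj_adapt/means -mapvarfn wsj_adapt/variances \
        -mapmixwfn wsj_adapt/mixture_weights -maptmatfn wsj_adapt/transition_matrices

    # Point pocketsphinx at the adapted model for dictation
    pocketsphinx_continuous -hmm wsj_adapt -lm your.lm.dmp -dict cmudict.dict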
--- (Edited on 3/29/2012 07:59 [GMT+0400] by nsh) ---