Comments

flat start
User: hamid
Date: 4/11/2009 9:04 am
Views: 4611
Rating: 13

Hi,

In this tutorial we use the flat start scheme for HMM initialization and parameter estimation. What's the difference if we had labeled training data, for example the TIMIT database? In that case, do we need phones0.mlf or phones1.mlf?


Best Regards,

Hamid.

Re: flat start
User: kmaclean
Date: 4/22/2009 12:39 pm
Views: 99
Rating: 10

Hi hamid,

>In this tutorial we use the flat start scheme for HMM initialization and
>parameter estimation. What's the difference if we had labeled training
>data [...]

from the HTK Book:

8.3 Flat Starting with HCompV
One limitation of using HInit for the initialisation of sub-word models is that it requires labelled training data. For cases where this is not readily available, an alternative initialisation strategy is to make all models equal initially and move straight to embedded training using HERest. The idea behind this so-called flat start training is similar to the uniform segmentation strategy adopted by HInit since by making all states of all models equal, the first iteration of embedded training will effectively rely on a uniform segmentation of the data.
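For reference, the flat-start path described in that excerpt corresponds roughly to the following commands from the VoxForge tutorial (file names such as train.scp, proto, phones0.mlf and monophones0 follow the tutorial's conventions and may differ in your setup):

    # compute a global mean and variance from all the training data and copy
    # them into every state of the prototype model (the "flat" start)
    HCompV -C config -f 0.01 -m -S train.scp -M hmm0 proto

    # go straight to embedded re-estimation over the whole training set
    HERest -C config -I phones0.mlf -t 250.0 150.0 1000.0 -S train.scp \
           -H hmm0/macros -H hmm0/hmmdefs -M hmm1 monophones0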

So I think if you want to use labelled training data, you need to use HInit, following the process described in section 8.2 (Initialisation using HInit) of the HTK Book.
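As a rough sketch only (the MLF name timit.mlf is an assumption, and in practice you would loop over every phone in your model list), initialising one model from labelled TIMIT-style segments would look something like:

    # pick out the labelled segments for phone "aa", run Viterbi-style
    # initialisation with HInit, then refine the single model with HRest
    HInit -C config -S train.scp -I timit.mlf -l aa -o aa -M hmm0 proto
    HRest -C config -S train.scp -I timit.mlf -l aa -M hmm1 hmm0/aa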

Re: flat start
User: Kiran
Date: 2/15/2010 10:21 pm
Views: 96
Rating: 10

I want to use the embedded training process and speed up the operation of the HERest command. Could anyone please describe the whole iteration process and at which step this should be done?

Re: flat start
User: kmaclean
Date: 2/17/2010 9:14 pm
Views: 149
Rating: 11

>I want to use the embedded training process and speed up the operation
>of the HERest command.

I thought we already use embedded training when we run HERest in the VoxForge tutorial... is there another approach using HERest?
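If the goal is just to make HERest faster, it does have a parallel mode: each invocation with -p n processes one chunk of the training list and writes an accumulator file (HERn.acc), and a final pass with -p 0 combines the accumulators into the re-estimated models. A rough sketch, assuming the training list has been split into train.1.scp and train.2.scp (those names are not from the tutorial):

    # run two chunks in parallel; each writes HER1.acc / HER2.acc into hmm1
    HERest -C config -I phones0.mlf -S train.1.scp -p 1 \
           -H hmm0/macros -H hmm0/hmmdefs -M hmm1 monophones0
    HERest -C config -I phones0.mlf -S train.2.scp -p 2 \
           -H hmm0/macros -H hmm0/hmmdefs -M hmm1 monophones0

    # combine the accumulators and write the updated model set
    HERest -C config -p 0 -H hmm0/macros -H hmm0/hmmdefs -M hmm1 \
           monophones0 hmm1/HER1.acc hmm1/HER2.acc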
