[scilab-Users] markov state sequence with grand()

vinsz vincent.guffens at googlemail.com
Thu Apr 15 19:12:55 CEST 2010


On 15/04/2010 16:45, 诚 甄 wrote:
> Hello fellow Scilab Users,
>
> I am using the grand() function to generate a state sequence for a
> 2-state Markov chain.
> It works fine to generate the sequence, but I don't quite understand
> how the initial state parameter 'x0' works. Here is the code I use:
>
> //-------------------
> N=50; //length of sequence
> P=[0.6 0.4;0.1 0.9]; //state transition matrix
> q=grand(N,'markov',P,[1]); //initial state is '1'
> q=-(q*2.0-3.0); //normalize to -1:+1 range
>
> clf();
> plot2d3([1:N],[q],[0],rect=[0,-1.5,N,1.5],leg='2-state Markov chain');
> //------------------
>
> If you run this piece of code a few times, sometimes it will start
> with state '1', but not always.
>
> From the help page I understand that:
>
> - if I want the chain to start in state '1', then I just have to make
> 'x0 = [1]', as in the code above;
>
> - if I want the state sequence to start from state '2', then I should
> do 'x0=[2]'.
>
> - if I want to get paths starting from both states I would make x0=[1
> 2], and each column of q (the output) would represent a path starting
> at the corresponding state.
>
> However, if I make x0=[1] or x0=[2], I get realizations starting from
> either state (sometimes state 1 and sometimes state 2), which does not
> make much sense to me: what limiting-state probabilities would the
> function then be using to choose the initial state?
> According to what is explained in the grand() help page, if I set, for
> example, x0=[1], shouldn't the sequence 'always' start in state 1?
>
> I appreciate any help and clarifications on the behaviour of this
> function.
>
Hi Dan,

I think the initial state itself is not included in the grand() output:
the first value returned is already the state reached after one
transition from x0. So the first sample can be either state, with
probabilities given by the row of your transition matrix that
corresponds to x0. For instance:

N = 10000;
P = [0.6 0.4; 0.1 0.9];                  // transition matrix from your post
b = zeros(N, 1);
for i = 1:N
    b(i) = grand(1, 'markov', P, [1]);   // one transition starting from state 1
end
ii = find(b == 2);

size(ii, '*')/N                          // fraction of first samples equal to state 2
ans =

0.4095

This is close to P(1,2) = 0.4, the probability of going from state 1 to
state 2 when the chain is in state 1.
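
If you want the plotted sequence to begin in the initial state, one
workaround (just a sketch, not something grand() does for you) is to
prepend x0 to the output yourself, e.g.:

//-------------------
N = 50;                                  // length of the generated part
P = [0.6 0.4; 0.1 0.9];                  // transition matrix from your post
x0 = 1;                                  // desired initial state
q = [x0, grand(N, 'markov', P, x0)];     // N+1 states, the first one is x0
q = -(q*2.0 - 3.0);                      // map states 1,2 to +1,-1 as before

clf();
plot2d3(0:N, q, rect=[0,-1.5,N,1.5], leg='2-state Markov chain');
//-------------------

That way the sequence always begins in state 1 (or in whatever x0 you
choose), and grand() only provides the transitions.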

Hope it helps,


