Markov Jump Processes Flashcards

0
Q

Residual holding time for time homogeneous process

A

P(X(s + w) = j | X(s) = i, R(s) = w) = μ_ij / λ_i, for j ≠ i

i.e. given the current state i and residual holding time R(s) = w, the probability that the next jump takes the process to state j is μ_ij / λ_i, where λ_i is the total transition rate out of state i. For a time-homogeneous process this does not depend on s or on w.

1
Q

What is a Markov jump process?

A

A Markov process with

a continuous time set

a discrete state space

2
Q

What is the distribution of the holding time in a Markov jump process?

A

If the total transition rate out of state 1 is μ (so that μ₁₁ = −μ), then the holding time T₁ is exponentially distributed: T₁ ~ Exp(μ), with mean 1/μ.
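As a quick numerical sketch (the rate μ = 0.5 below is an assumed value, not from the card), the sample mean of simulated holding times should match 1/μ:

```python
import random

# Holding time in a state with total transition rate mu is Exp(mu).
# Simulate many holding times and compare the sample mean with 1/mu.
mu = 0.5                     # assumed transition rate (illustrative)
random.seed(42)
samples = [random.expovariate(mu) for _ in range(100_000)]
mean_holding_time = sum(samples) / len(samples)
# The sample mean should be close to the theoretical mean 1/mu = 2.
```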

3
Q

Find the variance of the MLE estimator in a Markov chain

A

The asymptotic variance can be found from the Cramér–Rao lower bound: −1 divided by the second derivative of the log-likelihood with respect to the transition rate, evaluated at the MLE. For ℓ(μ) = d ln μ − μv this gives Var(μ̂) ≈ μ̂²/d.
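A sketch of the calculation with illustrative values of d (observed transitions) and v (total waiting time); the numerical second derivative should match the closed form ℓ''(μ) = −d/μ²:

```python
import math

# Log-likelihood of a transition rate mu given d transitions observed
# during total waiting time v: l(mu) = d*ln(mu) - mu*v (illustrative d, v).
d, v = 40, 200.0

def loglik(mu):
    return d * math.log(mu) - mu * v

mu_hat = d / v                          # MLE of the transition rate

# CRLB: variance ≈ -1 / (second derivative of the log-likelihood at the MLE).
h = 1e-4
second_deriv = (loglik(mu_hat + h) - 2 * loglik(mu_hat) + loglik(mu_hat - h)) / h**2
crlb_variance = -1.0 / second_deriv

# Closed form: l''(mu) = -d/mu^2, so the variance is mu_hat^2 / d.
```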

4
Q

Probability of remaining in state A for at least 5 years.
The transition rate μ_AA = −Σ(transition rates out of A); here the total rate out of A is 0.15.

A

p̄_AA(5) = exp(−0.15 × 5) ≈ 0.472
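A sketch checking the answer numerically (the 0.15 rate and 5-year horizon are from the card; the simulation size is arbitrary):

```python
import math
import random

# Holding time in state A is Exp(0.15), so
# P(remain in A for at least 5 years) = exp(-0.15 * 5).
lam_A = 0.15
p_stay = math.exp(-lam_A * 5)

# Cross-check: fraction of simulated Exp(0.15) holding times exceeding 5.
random.seed(0)
n = 200_000
frac = sum(random.expovariate(lam_A) > 5 for _ in range(n)) / n
```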

5
Q

Derive an expression for F(i), the probability that a person currently in state i will never be in state P.

A

F(A) = μ_AT/λ_A × F(T) + μ_AP/λ_A × F(P) + μ_AD/λ_A × F(D), where λ_A = −μ_AA is the total transition rate out of A.
F(D) = 1, as a person can never be in P once dead.
F(P) = 0, since someone currently in state P cannot "never visit" it.
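Under hypothetical transition rates (every rate below is made up for illustration; states are A = able, T = temporarily ill, P = permanently ill, D = dead), the resulting equations form a small linear system that can be solved directly:

```python
# Hypothetical transition rates (illustrative values only).
mu = {('A', 'T'): 0.3, ('A', 'P'): 0.1, ('A', 'D'): 0.1,
      ('T', 'A'): 0.5, ('T', 'P'): 0.2, ('T', 'D'): 0.2}
lam_A = sum(v for (i, _), v in mu.items() if i == 'A')  # total rate out of A
lam_T = sum(v for (i, _), v in mu.items() if i == 'T')  # total rate out of T

# F(P) = 0 and F(D) = 1; conditioning on the first jump gives
#   F(A) = (mu_AT * F(T) + mu_AD * 1) / lam_A
#   F(T) = (mu_TA * F(A) + mu_TD * 1) / lam_T
a = mu[('A', 'T')] / lam_A
b = mu[('A', 'D')] / lam_A
c = mu[('T', 'A')] / lam_T
d = mu[('T', 'D')] / lam_T

F_A = (b + a * d) / (1 - a * c)   # substitute F(T) into the F(A) equation
F_T = c * F_A + d
```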

6
Q

Calculate the expected future duration spent in state P for a person currently in state A, given that the probability that a person in state A will ever visit state P is 0.5.

A

If the person does visit state P, the time spent there is exponentially distributed with parameter λ equal to the transition rate out of P (0.2), so the mean waiting time in P is 1/λ = 1/0.2 = 5.
The expected future duration spent in state P for a person currently in A is therefore 0.5 × 5 = 2.5 years.
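A sketch with a simulation cross-check (the 0.5 visit probability and the 0.2 rate out of P are from the card; the simulation assumes a single sojourn in P):

```python
import random

# Expected future time in P = P(ever visit P) * E[time in P | visit P].
p_visit = 0.5                   # given in the card
lam_P = 0.2                     # transition rate out of P, so E[sojourn] = 5
expected_duration = p_visit * (1 / lam_P)   # = 2.5 years

# Simulation: with probability 0.5 spend an Exp(0.2) sojourn in P, else 0.
random.seed(1)
n = 200_000
total = sum(random.expovariate(lam_P) if random.random() < p_visit else 0.0
            for _ in range(n))
sim_mean = total / n
```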

7
Q

What are the parameters of the Markov jump process?

A

The parameters of the model are the transition rates μ_ij between the states, where
μ_ij = lim (h → 0) p_ij(h)/h, for i ≠ j.
List all the transition rates.

8
Q

Confidence interval for μ, given d (number of transitions) and v (total waiting time)

A

μ̂ = d / v
For a 95% confidence interval use:
μ̂ ± 1.96 × √(μ̂²/d)
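With illustrative values of d and v, the estimate and interval work out as:

```python
import math

# MLE of mu from d transitions and total waiting time v (illustrative values),
# with an approximate 95% confidence interval mu_hat +/- 1.96*sqrt(mu_hat^2/d).
d, v = 40, 200.0
mu_hat = d / v
se = math.sqrt(mu_hat**2 / d)        # estimated standard error of mu_hat
ci = (mu_hat - 1.96 * se, mu_hat + 1.96 * se)
```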

9
Q

Irreducible chain definition

A

Every state can ultimately be reached from any other state.

10
Q

Periodic chain

A

A chain with period d is one where a return to a given state is possible only in a number of steps that is a multiple of d.

If d =1 then chain is aperiodic

11
Q

Finite chain definition

A

When a chain has a finite number of states.

12
Q

If the state space is finite ……

A
  • there is at least one stationary distribution

- but the process may not conform to this distribution in the long term

13
Q

If a chain is finite and irreducible …..

A
  • there is a unique stationary distribution

- but the process may not conform in the long run

14
Q

If a chain is finite, irreducible and aperiodic …..

A
  • there is a unique stationary distribution pi
  • the process will conform to this distribution in the long term
    lim (n → ∞) p_ij(n) = π_j
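An illustration with an assumed 2-state probability matrix: repeated multiplication drives every row of Pⁿ towards the stationary distribution, here π = (2/3, 1/3):

```python
# A finite, irreducible, aperiodic 2-state chain (illustrative matrix).
P = [[0.9, 0.1],
     [0.2, 0.8]]

def mat_mult(A, B):
    # Plain matrix multiplication for small lists of lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Pn = P
for _ in range(100):        # compute a high power of P
    Pn = mat_mult(Pn, P)

# Every row of Pn is now (approximately) pi = (2/3, 1/3).
```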
15
Q

Stationary distribution

A

If π is a solution of π = πP with π_j ≥ 0 and Σ π_j = 1, then π is a stationary distribution.
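For a 2-state chain (matrix values illustrative) the system π = πP reduces to the balance equation π₁p₁₂ = π₂p₂₁ together with π₁ + π₂ = 1, and can be solved by hand:

```python
# Illustrative 2-state transition matrix.
P = [[0.7, 0.3],
     [0.4, 0.6]]
p12, p21 = P[0][1], P[1][0]

# Balance equation pi1*p12 = pi2*p21 plus pi1 + pi2 = 1 gives:
pi = (p21 / (p12 + p21), p12 / (p12 + p21))

# Verify pi = pi * P componentwise.
pi_P = (pi[0] * P[0][0] + pi[1] * P[1][0],
        pi[0] * P[0][1] + pi[1] * P[1][1])
```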

16
Q

Key assumptions underlying a Markov multiple state model:

A
  • the probability of moving from state i to state j depends on the current state only; no previous information is required (the Markov assumption)
  • for any two states g and h, over a short interval dt (t > 0):
    dt p(t)^gh = μ(t)^gh dt + o(dt)
    (the probability of a g → h transition over [t, t + dt])
  • μ(t)^gh is constant for 0 ≤ t < 1
17
Q

When calculating the probability of being in state j in n years, starting in state i….

A

The solution depends on which matrix we are given:
- Generator matrix (containing the transition rates μ_ij): the probability of staying in a state for t years is exp(−λt), where λ is the holding rate (total force of transition) out of that state. When jumping, e.g. from state 2 to state 3, use μ₂₃/λ₂, where λ₂ is the total force of transition out of state 2. We know it is a generator matrix if each row of transition rates sums to zero.
- Probability matrix (each row of probabilities sums to 1): probabilities after a number of years are then found simply through matrix multiplication.
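Both cases can be sketched in a few lines (all rates and matrices below are illustrative):

```python
import math

# Case 1: generator matrix given.
# P(stay in state 2 for t years) = exp(-lambda_2 * t),
# with lambda_2 the total force of transition out of state 2 (assumed 0.25).
lam_2 = 0.25
p_stay_2yrs = math.exp(-lam_2 * 2)

# Case 2: one-year probability matrix given. n-year probabilities are P^n.
P = [[0.8, 0.2],
     [0.1, 0.9]]

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P2 = mat_mult(P, P)     # two-year transition probabilities
# e.g. P2[0][1] = 0.8*0.2 + 0.2*0.9 = 0.34
```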
18
Q

Expected time to reach a subsequent state k formula

A

It is given in the Tables, next to the Kolmogorov forward and backward equations.
