Discrete-time Markov processes Flashcards
(19 cards)
What is a Markov process?
A stochastic process is a Markov process if the probability of future states depends only on the present state and not on previous states.
What is the Markov property?
The Markov property states that the conditional distribution of the process at time t_{n+1}, given its entire history up to time t_n, depends only on its value at time t_n, not on prior values.
What is an intuitive explanation of the Markov property?
The Markov property means that given the current state, future events are independent of past events.
Why is the Markov property useful?
The Markov property simplifies probability calculations, allowing joint probabilities to be expressed as products of simple conditional probabilities.
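A minimal sketch of this factorisation in Python, assuming a hypothetical two-state chain with illustrative numbers (the distribution pi and matrix p are not from the deck):

# Hypothetical two-state chain, states 0 and 1 (illustrative numbers only).
pi = [0.6, 0.4]                  # initial distribution, pi[i] = P(X_0 = i)
p = [[0.7, 0.3],                 # one-step transitions, p[i][j] = P(X_{n+1} = j | X_n = i)
     [0.2, 0.8]]

# Markov property: a joint probability factorises into simple conditional probabilities.
# P(X_0 = 0, X_1 = 1, X_2 = 1) = P(X_0 = 0) * P(X_1 = 1 | X_0 = 0) * P(X_2 = 1 | X_1 = 1)
joint = pi[0] * p[0][1] * p[1][1]
print(joint)                     # 0.6 * 0.3 * 0.8 = 0.144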
What does the law of total probability help with in a Markov process?
It helps calculate the probability of reaching a particular state by conditioning on the next state the process will move to.
What is a time-homogeneous Markov chain?
A time-homogeneous Markov chain is a chain whose transition probabilities do not depend on the time step, i.e., P(X_{n+1} = j | X_n = i) is the same for all n.
What is a transition matrix?
A transition matrix is a matrix where the (i,j)-th element represents the probability of transitioning from state i to state j in one time step.
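A minimal sketch in Python, assuming a hypothetical 3-state chain (the numbers are illustrative, not from the deck):

import numpy as np

# Hypothetical one-step transition matrix: row i gives the probabilities of
# moving from state i to each state j in one time step.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])

# Each row is a conditional probability distribution, so every row sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)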
What is the Chapman-Kolmogorov equation?
The Chapman-Kolmogorov equation relates multi-step transition probabilities to sums of products of shorter-step ones: p^(m+n)_ij = Σ_k p^(m)_ik p^(n)_kj. In matrix form, the n-step transition matrix is the n-th power of the one-step transition matrix.
What is the n-step transition probability?
The n-step transition probability p^(n)_ij is the probability that a process currently in state i will be in state j after n steps.
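A sketch of the matrix-power view in Python, reusing the hypothetical 3-state matrix P from above:

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])

# Chapman-Kolmogorov in matrix form: the n-step transition matrix is P raised
# to the n-th power, so p^(n)_ij is the (i, j) entry of P^n.
n = 4
Pn = np.linalg.matrix_power(P, n)
print(Pn[0, 2])   # probability of being in state 2 after 4 steps, starting from state 0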
What are marginal probabilities in a Markov process?
Marginal probabilities specify the distribution of a Markov process at a particular time step without conditioning on previous states.
What does the duration of stay in a state in a Markov chain follow?
The duration of stay in a state in a Markov chain follows a geometric distribution with parameter (1 - p_ii), where p_ii is the probability of staying in the same state.
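A sketch of the geometric duration, assuming a hypothetical value p_ii = 0.7 (illustrative only):

p_ii = 0.7

# Duration of stay T in state i: P(T = k) = p_ii**(k - 1) * (1 - p_ii), k = 1, 2, ...
# (k - 1 steps that stay in state i, followed by one step that leaves it).
def prob_stay_exactly(k):
    return p_ii ** (k - 1) * (1 - p_ii)

print(prob_stay_exactly(3))   # 0.7**2 * 0.3 = 0.147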
What is the first step decomposition (FSD) technique?
The FSD technique involves conditioning on the first step of the Markov process to compute probabilities or expectations of interest.
How can FSD be used to calculate the probability of reaching a state?
FSD computes the probability of reaching a state by conditioning on where the process goes next and applying the law of total probability.
What is an example of using FSD to compute a probability?
In Example 2.2, FSD is used to compute the probability of reaching state 1 from state 2 in a Markov process by conditioning on the next state.
What is the expected duration of stay in a state i?
The expected duration of stay in state i is 1 / (1 - p_ii), where p_ii is the probability of staying in state i.
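A quick empirical check of this formula, again assuming the hypothetical value p_ii = 0.7:

import numpy as np

rng = np.random.default_rng(0)
p_ii = 0.7

# The stay duration is geometric with parameter 1 - p_ii, so its mean is 1 / (1 - p_ii).
samples = rng.geometric(1 - p_ii, size=100_000)
print(samples.mean(), 1 / (1 - p_ii))   # both close to 3.33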
How do you solve for the probability of reaching a state using FSD?
By recursively writing the probability of reaching a state in terms of the probabilities of reaching it from other states and solving the resulting system of equations.
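A minimal sketch of this recursion in Python, assuming a hypothetical 4-state chain in which state 0 is the target and state 3 is absorbing (illustrative numbers, not from the deck):

import numpy as np

P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.4, 0.2, 0.3, 0.1],
              [0.1, 0.3, 0.3, 0.3],
              [0.0, 0.0, 0.0, 1.0]])

# First step decomposition: for the transient states i in {1, 2},
#   h_i = sum_j p_ij * h_j,   with boundary conditions h_0 = 1 and h_3 = 0.
# Moving the unknowns h_1, h_2 to one side gives the linear system (I - Q) h = r,
# where Q is the transient-to-transient block of P and r holds the one-step
# probabilities of hitting state 0.
Q = P[1:3, 1:3]
r = P[1:3, 0]
h = np.linalg.solve(np.eye(2) - Q, r)
print(h)   # h[0] = P(reach 0 | start in 1), h[1] = P(reach 0 | start in 2)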
What does P_C represent in Example 2.4?
P_C represents the probability that the Markov process will eventually reach state 1, given an initial distribution.
How do you compute P_C in Example 2.4?
P_C is computed by taking a weighted sum of the probabilities of reaching state 1 from each initial state, weighted by the initial distribution.
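A sketch of the weighted sum, using hypothetical hitting probabilities and an initial distribution (illustrative numbers, not those of Example 2.4):

# h[i] = P(eventually reach state 1 | X_0 = i); from state 1 itself this probability is 1.
h = [0.66, 1.0, 0.43]
# alpha[i] = P(X_0 = i), the initial distribution.
alpha = [0.2, 0.5, 0.3]

# Law of total probability over the initial state.
P_C = sum(a * hi for a, hi in zip(alpha, h))
print(P_C)   # 0.2*0.66 + 0.5*1.0 + 0.3*0.43 = 0.761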
How do we handle marginal probabilities in a Markov chain?
Marginal probabilities can be computed by multiplying the initial distribution by the n-step transition matrix to get the probability distribution at time n.
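A sketch in Python, reusing the hypothetical matrix P from above with an illustrative initial distribution:

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])
mu0 = np.array([1.0, 0.0, 0.0])   # start in state 0 with probability 1

# Marginal distribution at time n: mu_n = mu_0 P^n (row vector times the n-step matrix).
n = 5
mu_n = mu0 @ np.linalg.matrix_power(P, n)
print(mu_n)   # P(X_5 = j) for j = 0, 1, 2; the entries sum to 1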