Checking Markov Property and sample problems Flashcards
(15 cards)
What is the first step decomposition technique and formula?
First step decomposition is a technique for computing expectations and probabilities in Markov chains by conditioning on the first step, i.e. the next state of the process. It rests on the law of total expectation, E[X] = E[E[X|Y]]: conditioning on the first step X_1 gives, for example for a hitting time T started from state i, E_i[T] = 1 + sum_j p_ij E_j[T].
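The first-step equations form a small linear system. A minimal sketch, using a hypothetical 3-state chain (states 0 and 1 transient, state 2 absorbing) with assumed transition probabilities, not the chain from Example 5.1:

```python
# First-step decomposition for expected absorption time on a hypothetical
# 3-state chain: states 0 and 1 are transient, state 2 is absorbing.
# Q holds the assumed transition probabilities among the transient states.
Q = [[0.2, 0.3],
     [0.4, 0.1]]

# First-step equations: E_i = 1 + sum_j Q[i][j] * E_j, i.e. (I - Q) E = 1.
# Solve the resulting 2x2 system by Cramer's rule.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
E0 = (1 * d - b * 1) / det
E1 = (a * 1 - 1 * c) / det
print(E0, E1)  # both 2.0 for these assumed probabilities
```

For larger chains the same equations are solved with a general linear solver; the structure (one equation per transient state, conditioning on the first move) is unchanged.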
How is the law of conditional expectation applied in Markov chains?
The law of conditional expectation is used to compute expectations by conditioning on the next state of the chain, which breaks a complex problem into simpler subproblems, one for each possible first step.
In Example 5.1, how do we compute the expected time for Geraint to deliver the parcel?
We use first step decomposition to condition on Geraint's next move, apply the law of total expectation, and solve the resulting system of linear equations for the expected delivery times.
What is the expected time for Geraint to deliver the parcel in Example 5.1?
The expected time for Geraint to deliver the parcel is 2.5 minutes.
What is the second question in Example 5.1 about Geraint’s delivery?
The second question asks how many times Geraint will visit location B on average before delivering the parcel.
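Expected visit counts before absorption satisfy first-step equations of the same shape as expected times, with the constant 1 replaced by an indicator of the target state. A sketch on an assumed hypothetical chain (not the actual Example 5.1 chain), counting visits to state 1 playing the role of "B":

```python
# Expected visits to a transient state before absorption, via the first-step
# equations V_i = [i == target] + sum_j Q[i][j] * V_j, i.e. (I - Q) V = e_target.
# Hypothetical chain: states 0 and 1 transient, state 2 absorbing (assumed values).
Q = [[0.2, 0.3],
     [0.4, 0.1]]
target = 1  # count visits to state 1 ("B")

# Solve the 2x2 system (I - Q) V = e_target by Cramer's rule.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
rhs = [1.0 if i == target else 0.0 for i in range(2)]
V0 = (rhs[0] * d - b * rhs[1]) / det
V1 = (a * rhs[1] - rhs[0] * c) / det
print(V0)  # expected visits to state 1 starting from state 0: 0.5
```

Note the counting convention: V1 includes the visit at time 0 when the chain starts in the target state itself.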
How can you tell if a process is Markov?
To check whether a process is Markov, you can either argue logically from how the process is constructed that its future depends only on its current state, or verify the Markov property mathematically using conditional probabilities.
What does it mean if the Markov property does not hold?
If the Markov property does not hold, it means that the future state of the process depends on more than just the current state, violating the memoryless property.
How can you mathematically prove that a process is not Markov?
To prove that a process is not Markov, exhibit a counterexample: two histories that end in the same current state but give different conditional probabilities for the next state, violating the Markov property.
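Such a counterexample can be checked by exact enumeration. A sketch, assuming the textbook-style setup of i.i.d. fair coin flips X_0, X_1, ... and the derived process Y_n = X_n + X_{n-1} (this example is illustrative, not from the flashcards):

```python
from itertools import product

# Y_n = X_n + X_{n-1} for i.i.d. fair coin flips X_n is not Markov.
# Enumerate all equally likely flip sequences and compare
# P(Y_3 = 2 | Y_2 = 1, Y_1 = 0) with P(Y_3 = 2 | Y_2 = 1, Y_1 = 2):
# if Y were Markov, the extra conditioning on Y_1 could not change the answer.

def conditional(y1):
    hit = tot = 0
    for x in product([0, 1], repeat=4):            # (X_0, X_1, X_2, X_3)
        y = [x[i] + x[i - 1] for i in (1, 2, 3)]   # (Y_1, Y_2, Y_3)
        if y[0] == y1 and y[1] == 1:               # condition on (Y_1, Y_2)
            tot += 1
            hit += (y[2] == 2)
    return hit / tot

print(conditional(0))  # 0.5 (Y_1 = 0 forces X_2 = 1, so Y_3 = 2 iff X_3 = 1)
print(conditional(2))  # 0.0 (Y_1 = 2 forces X_2 = 0, so Y_3 can never be 2)
```

The two conditional probabilities differ, so knowing Y_{n-1} on top of Y_n changes the prediction: a direct violation of the Markov property.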
How do you decide if a new process derived from a Markov process is also Markov?
To determine if a new process is Markov, check whether the transformation function is bijective, so that the original state can be recovered from the new one. If it is, the new process will be Markov.
What happens if the transformation function is not bijective in a derived Markov process?
If the transformation function is not bijective, the new process may or may not be Markov. You may need to test it using counterexamples.
In Example 5.2, what is the relationship between X_n and Y_n?
Y_n is a function of X_n, and the function is not bijective: several values of X_n map to the same value of Y_n, so X_n cannot be recovered from Y_n.
Why is {Y_n, n=0,1,2,…} not a Markov process in Example 5.2?
The process {Y_n} is not Markov because the transformation from X_n to Y_n is not bijective: knowing Y_n does not pin down X_n, so the distribution of Y_{n+1} can depend on which underlying value of X_n produced the current Y_n.
What is the key feature that makes {Z_n, n=0,1,2,…} a Markov process?
{Z_n} is a Markov process because the transformation from X_n to Z_n is bijective, meaning we can uniquely determine X_n from Z_n.
How do you check if a new process derived from a Markov process is Markov?
To check, determine if the transformation function between the original process and the new process is bijective. If it is, the new process will also be Markov.
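A sketch of why bijectivity suffices: under a bijective relabeling, the new process's transition probabilities are just the old ones with the states renamed, so no information is lost. The two-state chain and relabeling below are hypothetical, with assumed transition values:

```python
# Hypothetical two-state Markov chain with transition probabilities P
# (assumed values), and a bijective relabeling g of its state space.
P = {('A', 'A'): 0.7, ('A', 'B'): 0.3,
     ('B', 'A'): 0.4, ('B', 'B'): 0.6}

g = {'A': 1, 'B': 2}  # bijective: its inverse recovers X_n from Z_n = g(X_n)

# Transition matrix of Z_n = g(X_n): exactly P with states renamed via g,
# so Z_n inherits the Markov property from X_n.
PZ = {(g[i], g[j]): p for (i, j), p in P.items()}
print(PZ[(1, 2)])  # 0.3, same as P[('A', 'B')]
```

If g were not injective, two distinct rows of P could collapse onto the same Z-state, and no single well-defined transition probability might exist; that is exactly the failure tested with counterexamples.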
In Example 5.2, why is {X_n, n=0,1,2,…} a Markov process?
Because {X_n} is a sequence of independent random variables, and any sequence of independent random variables satisfies the Markov property: the next value depends on nothing at all, and in particular not on the past beyond the current state.