Inference for point processes (estimation of any type) Flashcards

(18 cards)

1
Q

How is the Importance Weight Distribution defined?

A

w_{θ,θ_0,n}(Y_m) = (h_θ(Y_m) / h_{θ_0}(Y_m)) / ∑_{i=0}^{n−1} (h_θ(Y_i) / h_{θ_0}(Y_i)), m = 0, ..., n−1
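A minimal Python sketch of these weights, assuming hypothetical callables log_h_theta and log_h_theta0 for the log unnormalised densities (working on the log scale guards against overflow in the ratios):

```python
import numpy as np

def importance_weights(log_h_theta, log_h_theta0, samples):
    """Self-normalised weights w_{theta,theta_0,n}(Y_m).

    samples: point patterns Y_0, ..., Y_{n-1} drawn from f_{theta_0};
    log_h_theta / log_h_theta0: hypothetical callables giving the log
    unnormalised density of a pattern under theta and theta_0.
    """
    log_ratios = np.array([log_h_theta(y) - log_h_theta0(y) for y in samples])
    log_ratios -= log_ratios.max()   # stabilise before exponentiating
    ratios = np.exp(log_ratios)
    return ratios / ratios.sum()     # normalised: the weights sum to one
```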

2
Q

What is the Importance Sampling Formula?

A

𝔼_θ[k(X)] = 𝔼_{θ_0}[k(X) h_θ(X) / h_{θ_0}(X)] / (c_θ / c_{θ_0})
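The identity is just a change of measure: since f_θ = h_θ / c_θ, the sketch below (with μ the unit-rate Poisson reference measure) rewrites the expectation under θ as one under θ_0; taking k ≡ 1 also expresses the ratio of normalising constants as an expectation.

```latex
\mathbb{E}_\theta[k(X)]
  = \frac{1}{c_\theta}\int k(x)\,\frac{h_\theta(x)}{h_{\theta_0}(x)}\,
    h_{\theta_0}(x)\,\mathrm{d}\mu(x)
  = \frac{\mathbb{E}_{\theta_0}\!\bigl[k(X)\,h_\theta(X)/h_{\theta_0}(X)\bigr]}
         {c_\theta / c_{\theta_0}},
\qquad
\frac{c_\theta}{c_{\theta_0}}
  = \mathbb{E}_{\theta_0}\!\biggl[\frac{h_\theta(X)}{h_{\theta_0}(X)}\biggr]
  \quad (k \equiv 1).
```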

3
Q

How does a Toroidal Approximation work?

A

S ⊃ W is chosen to be a rectangle wrapped onto a torus, so that points at opposite edges are considered to be neighbours.
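A sketch of the resulting wrap-around metric on a rectangular window, with hypothetical side lengths widths; coordinate differences larger than half a side length wrap to the opposite edge:

```python
import numpy as np

def toroidal_distance(p, q, widths):
    """Distance between 2D points p and q on a rectangle of side
    lengths `widths`, wrapped onto a torus: opposite edges are glued,
    so points near opposite edges can be neighbours."""
    d = np.abs(np.asarray(p, float) - np.asarray(q, float))
    d = np.minimum(d, np.asarray(widths, float) - d)  # wrap each axis
    return float(np.hypot(d[0], d[1]))
```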

4
Q

Conditional Approach

A

A smaller window is chosen, and the log likelihood is then found by conditioning on the points outside the new window.

The missing data problem is then avoided.

5
Q

Newton-Raphson

A

For a starting value θ_0 = θ̂^{(0)}, the Newton-Raphson iterations for maximising the log likelihood are given by
θ̂^{(m+1)} = θ̂^{(m)} + u(θ̂^{(m)}) j(θ̂^{(m)})^{−1}, m = 0, 1, ...

Similar iterations for maximising the approximate log likelihood are given by
θ̂^{(m+1)} = θ̂^{(m)} + u_{θ_0,n}(θ̂^{(m)}) j_{θ_0,n}(θ̂^{(m)})^{−1}, m = 0, 1, ...
where an MCMC sample with importance sampling parameter θ_0 is used.
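A minimal sketch of the iteration, with hypothetical callables score and info standing in for u and j (or for their MCMC approximations u_{θ_0,n} and j_{θ_0,n}):

```python
import numpy as np

def newton_raphson(theta0, score, info, tol=1e-8, max_iter=100):
    """Maximise the (approximate) log likelihood by Newton-Raphson.

    score: callable returning u(theta) as a vector;
    info:  callable returning j(theta), the observed information matrix.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(info(theta), score(theta))  # j(theta)^{-1} u(theta)
        theta = theta + step
        if np.linalg.norm(step) < tol:   # stop once the update is negligible
            break
    return theta
```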

6
Q

What is an MLE?

A

It is the unique maximum, θ̂, of the log likelihood l(θ).

It is a solution to the score equation u(θ) = 0.

7
Q

Exponential family density

A

For x ∈ N_f and θ ∈ Θ,
f_θ(x) = b(x) exp( θ · t(x) ) / c_θ,
where b : N_f → [0, ∞) and t : N_f → ℝ^p are functions and · is the usual inner product.

8
Q

Define the Strauss process in terms of the exponential family density

A

A Strauss process with fixed interaction range R > 0 is an exponential family model with b ≡ 1, θ = (θ_1, θ_2) = (log β, log γ), t(x) = (n(x), s_R(x)), and Θ = ℝ × (−∞, 0].
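A sketch of the canonical sufficient statistic for a pattern stored as an (n, 2) coordinate array (counting pairs with ‖u − v‖ ≤ R as R-close):

```python
import numpy as np
from scipy.spatial.distance import pdist

def strauss_statistic(x, R):
    """t(x) = (n(x), s_R(x)): the number of points and the number of
    pairs of points within the interaction range R of each other."""
    x = np.atleast_2d(x)
    n = x.shape[0]
    s_R = int(np.sum(pdist(x) <= R)) if n > 1 else 0
    return n, s_R
```

With b ≡ 1, the log unnormalised density is then log h_θ(x) = θ · t(x) = n(x) log β + s_R(x) log γ.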

9
Q

Normalising Constant

A

The normalising constant c_θ generally cannot be calculated explicitly; instead, ratios c_θ / c_{θ_0} are approximated, e.g. by importance sampling (take k ≡ 1 in the importance sampling formula).

10
Q

(minimal) canonical sufficient statistic

A

The canonical sufficient statistic is defined as

V_θ(X) = d log h_θ(X) / dθ = t(X)

If the density is identifiable, it is the minimal canonical sufficient statistic.

11
Q

Importance Sampling Approximation of the Log Likelihood Ratio

A

For fixed θ_0 ∈ Θ,
l_{θ_0,n}(θ) = log( h_θ(x) / h_{θ_0}(x) ) − log( (1/n) ∑_{m=0}^{n−1} h_θ(Y_m) / h_{θ_0}(Y_m) )
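A sketch of this approximation, again assuming a hypothetical callable log_h(θ, x) for the log unnormalised density and an MCMC sample from f_{θ_0}:

```python
import numpy as np

def approx_log_likelihood_ratio(theta, theta0, x, samples, log_h):
    """l_{theta_0,n}(theta): importance sampling approximation of the
    log likelihood ratio, using samples Y_0, ..., Y_{n-1} ~ f_{theta_0}."""
    log_ratios = np.array([log_h(theta, y) - log_h(theta0, y)
                           for y in samples])
    # log( (1/n) * sum_m exp(log_ratios[m]) ), computed stably
    m = log_ratios.max()
    log_mean = m + np.log(np.mean(np.exp(log_ratios - m)))
    return log_h(theta, x) - log_h(theta0, x) - log_mean
```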

12
Q

Free Boundary Log Likelihood

A

For Y ∼ Poisson(S̃, 1), where c_{θ,S̃}(∅) is the normalising constant for h_{θ,S̃}(·|∅),

log 𝔼_θ[f_{θ,W}(x | X_{∂W}) | X_{∂S̃} = ∅] = log 𝔼[h_{θ,S̃}(x ∪ Y_{S̃\W} | ∅)] − log c_{θ,S̃}(∅)

13
Q

Missing Data Log Likelihood

A

If W ⊂ S, the log likelihood function is given by the more complicated expression
l_mis(θ) = log 𝔼 f_θ(x ∪ Y_{S\W}) = log 𝔼 h_θ(x ∪ Y_{S\W}) − log c_θ,
where Y ∼ Poisson(S, 1).
However, as 𝔼 h_θ(x ∪ Y_{S\W}) cannot generally be calculated explicitly, we have a missing data problem.
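Only the expectation inside the first log needs simulation (the −log c_θ term remains intractable). A naive Monte Carlo sketch, assuming a hypothetical sample_missing callable that simulates the unit-rate Poisson process on S\W:

```python
import numpy as np

def log_expectation_term(x, log_h_theta, sample_missing, n_sim=1000):
    """Monte Carlo estimate of log E h_theta(x ∪ Y_{S\\W}).

    x: observed pattern in W as an (n, 2) array;
    sample_missing(): hypothetical simulator returning one Poisson(S\\W, 1)
    pattern as an (m, 2) array.
    """
    vals = np.array([log_h_theta(np.vstack([x, sample_missing()]))
                     for _ in range(n_sim)])
    m = vals.max()                    # log-mean-exp for numerical stability
    return m + np.log(np.mean(np.exp(vals - m)))
```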

14
Q

Missing Data Approaches

A

They are methods for handling edge effects for Markov point processes.

15
Q

Identifiability

A

f_θ ≠ f_θ̃ for different θ, θ̃ ∈ Θ

It is often required in the context of Markov point process densities (exponential families).

16
Q

Which parameter is called the Importance Sampling Parameter?

A

θ_0: the MCMC sample Y_0, ..., Y_{n−1} is generated from f_{θ_0}, and the resulting importance weights are used to approximate quantities under any θ from that single sample.

17
Q

What is the Importance Sampling Estimator of 𝔼_θ[k(X)]?

A

𝔼_{θ,θ_0,n}[k] = ∑_{m=0}^{n−1} k(Y_m) w_{θ,θ_0,n}(Y_m)

18
Q

How is the Importance Sampling Approximation given?

A

𝔼_θ[k(X)] ≈ ∑_{m=0}^{n−1} k(Y_m) w_{θ,θ_0,n}(Y_m),
where Y_0, ..., Y_{n−1} are generated from f_{θ_0} (typically by MCMC)
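Putting cards 17 and 18 together, a usage sketch that reuses the importance_weights function from card 1:

```python
def importance_sampling_estimate(k, samples, weights):
    """Estimate E_theta[k(X)] as sum_m k(Y_m) * w_{theta,theta_0,n}(Y_m),
    where samples are Y_0, ..., Y_{n-1} drawn from f_{theta_0} (e.g. by
    MCMC) and weights come from the importance_weights sketch above."""
    return sum(w * k(y) for y, w in zip(samples, weights))
```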