Definitions and Explanations Flashcards

(55 cards)

1
Q

Poisson statistics

A

the number of photons arriving at our detector from a given source will fluctuate,

so we treat the arrival rate of photons statistically

2
Q

Poisson statistics assumptions

A
  1. Photons arrive independently in time
  2. Average photon arrival rate is constant
3
Q

as Rτ increases

A

the shape of the Poisson distribution becomes more symmetrical

and tends to a normal or Gaussian distribution

4
Q

variance

A

is a measure of the spread in the Poisson distribution
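
A minimal numpy sketch of cards 1-4 (illustration only; the mean count Rτ = 100 is an assumed example value): Poisson counts fluctuate, their mean and variance are equal, and for large Rτ the shape approaches a Gaussian.

    import numpy as np

    rng = np.random.default_rng(42)

    # photon counts from many identical exposures with mean count R*tau
    rate_times_tau = 100.0
    counts = rng.poisson(rate_times_tau, size=100_000)

    # for a Poisson distribution the mean and the variance are both R*tau
    print(counts.mean(), counts.var())  # both close to 100

    # for large R*tau the histogram of counts is well approximated by a
    # Gaussian with mean R*tau and variance R*tau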

5
Q

Laplace’s basis for plausible reasoning

A

Probability measures our degree of belief that something is true

6
Q

probability density function

A

used when measuring continuous variables, which can take on infinitely many possible values; the probability of the variable lying in a given interval is the integral of the pdf over that interval

7
Q

sketch of a Poisson PDF

A

see notes

8
Q

sketch of a uniform PDF

A

see notes

9
Q

sketch of a central/normal or Gaussian pdf

A

see notes

10
Q

sketch of a cumulative distribution function (CDF)

A

see notes

11
Q

the 1st moment is called

A

the mean or expectation value

12
Q

the 2nd moment is called

A

the mean square

13
Q

the median divides

A

the CDF into two equal halves

14
Q

the mode is

A

the value of x for which the pdf is a maximum

15
Q

central limit theorem

A

explains the importance of the normal pdf in statistics

but it is still based on the asymptotic behaviour of an infinite ensemble of samples that we didn’t actually observe

16
Q

Isoprobability contours for the bivariate normal pdf

A

ρ > 0 : positive correlation; y tends to increase as x increases

ρ < 0 : negative correlation; y tends to decrease as x increases

17
Q

as |ρ| -> 1

A

contours become narrower and steeper

18
Q

the principle of maximum likelihood

A

a method for estimating the parameters of a distribution that best fit the observed data

19
Q

if we obtain a very small P-value

A

we can interpret this as providing little support for the null hypothesis,

which we may then choose to reject

20
Q

Monte Carlo methods

A

methods for generating random variables

21
Q

we can test pseudo-random numbers for randomness in several ways

A

a) histogram of sampled values
b) correlations between neighbouring pseudo-random numbers
c) autocorrelation
d) chi squared

22
Q

Markov Chain Monte Carlo

A

method for sampling from PDFs

  1. start off at some randomly chosen value (a(1),b(1))
  2. compute L(a(1),b(1)) and its gradient
  3. move in the direction of steepest +ve gradient
  4. repeat from step 2 until (a(n),b(n)) converges on the maximum likelihood
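
The steps above describe a steepest-ascent climb to the maximum-likelihood point (the random-sampling side of MCMC is spelled out on cards 24-25). A minimal sketch, assuming a hypothetical 2-D log-likelihood peaked at (a,b) = (1, 2):

    import numpy as np

    # gradient of an assumed log-likelihood with a single peak at (1, 2)
    def grad_log_L(p):
        return np.array([1.0 - p[0], 2.0 - p[1]])

    p = np.array([5.0, -3.0])            # step 1: randomly chosen start
    for _ in range(1000):
        step = 0.1 * grad_log_L(p)       # steps 2-3: move up the gradient
        p = p + step
        if np.linalg.norm(step) < 1e-9:  # step 4: repeat until converged
            break

    print(p)  # close to the maximum-likelihood point (1, 2)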
23
Q

MCMC provides

A

a simple Metropolis algorithm for generating random samples of points from L(a,b)

24
Q

MCMC

A
  1. sample random initial point P(1) = (a(1),b(1))
  2. Centre a new pdf, Q, called the proposal density, on P(1)
  3. Sample tentative new point P’=(a’,b’) from Q
  4. Compute R = L(a’,b’)/L(a(1),b(1))

see notes for diagram

if R > 1 : P’ is uphill, we accept P’
if R < 1 : P’ is downhill, we may reject P’

25
Q

whether we accept when R < 1

A

generate a random number x ~ U[0,1]:
if x < R then accept P’
if x > R then reject P’

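A minimal sketch of the Metropolis loop from cards 24-25, assuming an unnormalised bivariate normal as the target L(a,b) and a Gaussian proposal density Q:

    import numpy as np

    rng = np.random.default_rng(1)

    def L(a, b):
        # assumed target likelihood: an unnormalised bivariate normal
        return np.exp(-0.5 * (a**2 + b**2))

    a, b = 3.0, -2.0                    # step 1: initial point P(1)
    samples = []
    for _ in range(10_000):
        # steps 2-3: sample a tentative point P' from Q centred on P
        a_new = a + rng.normal(0.0, 0.5)
        b_new = b + rng.normal(0.0, 0.5)
        R = L(a_new, b_new) / L(a, b)   # step 4: likelihood ratio
        # R > 1: uphill, always accepted; R < 1: accepted only if x < R
        if rng.uniform() < R:
            a, b = a_new, b_new
        samples.append((a, b))
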
26
Q

correlation theorem

A

the FT of the first time-domain function, multiplied by the complex conjugate of the FT of the second time-domain function, is equal to the FT of their correlation

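A quick numerical check of the theorem with numpy's FFT (a sketch; the two random series and the circular form of the correlation are assumptions of the example):

    import numpy as np

    rng = np.random.default_rng(2)
    f = rng.normal(size=256)
    g = rng.normal(size=256)

    # left-hand side: FT(f) times the complex conjugate of FT(g)
    lhs = np.fft.fft(f) * np.conj(np.fft.fft(g))

    # right-hand side: FT of the (circular) correlation of f and g
    corr = np.array([np.sum(g * np.roll(f, -k)) for k in range(256)])
    rhs = np.fft.fft(corr)

    print(np.allclose(lhs, rhs))  # True
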
27
Q

the broader the Gaussian in the time domain

A

the narrower the Gaussian in the frequency domain

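A sketch of this reciprocal-width behaviour (the time-domain widths 0.1 and 0.5 are assumed example values): the half-maximum frequency of the spectrum shrinks as the time-domain Gaussian broadens.

    import numpy as np

    dt, N = 0.01, 4096
    t = (np.arange(N) - N // 2) * dt

    for width in (0.1, 0.5):
        g = np.exp(-0.5 * (t / width) ** 2)  # Gaussian in the time domain
        G = np.abs(np.fft.rfft(g))           # its (Gaussian) spectrum
        f = np.fft.rfftfreq(N, d=dt)
        # frequency at which the spectrum falls to half its peak value
        print(width, f[np.argmin(np.abs(G - G[0] / 2))])
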
28
Q

sketch the probability density function of the CDF

A

should be a normal distribution with two probability density functions

29
Q

how a CDF of a random variable can be used to generate a random sample

A

to generate a sample from p(x), sample y from U[0,1], a uniform number in the range 0 -> 1

compute x = P^-1(y) (equivalently, y = P(x)); then x ~ p(x)

see notes for graph

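A minimal sketch of the method, assuming a unit exponential as the target pdf, for which the inverse CDF has the closed form P^-1(y) = -ln(1 - y):

    import numpy as np

    rng = np.random.default_rng(3)

    # target pdf p(x) = exp(-x) for x >= 0, with CDF P(x) = 1 - exp(-x)
    y = rng.uniform(size=100_000)  # sample y from U[0,1]
    x = -np.log(1.0 - y)           # x = P^-1(y), so x ~ p(x)

    print(x.mean(), x.var())       # both close to 1 for the unit exponential
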
30
Q

how is a maximum likelihood constructed

A
  1. first consider the best model which fits the data
  2. a visual inspection can be used to see if it is a uniform, normal, ...
  3. then calculate the likelihood of each data point for the chosen distribution
  4. the individual likelihoods are then multiplied together, and a minimisation (of the negative log-likelihood) is used to ‘maximise’ the likelihood

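A minimal sketch of steps 3-4, assuming normally distributed data; the product of likelihoods is maximised by minimising the negative log-likelihood with scipy:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    data = rng.normal(loc=2.0, scale=0.5, size=200)  # assumed data set

    # negative log of the product of individual normal likelihoods
    def neg_log_like(params):
        mu, sigma = params
        return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * np.log(sigma)

    result = minimize(neg_log_like, x0=[0.0, 1.0],
                      bounds=[(None, None), (1e-6, None)])
    print(result.x)  # close to the true (2.0, 0.5)
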
31
Q

quantisation noise likelihood function

A

uniform distribution

32
Q

(x(i) - µ)

A

is the residual used in the least squares problem

33
Q

histogram of sampled values

A

will show that all values in the interval are equally likely to occur; the histogram should be flat

34
Q

correlations between neighbouring pseudo-random numbers

A

plotting x(i) versus x(i+1), the data should be randomly scattered and show no pattern

35
Q

autocorrelation

A

the autocorrelation should be unity for zero lag, and zero for all other values

36
Q

chi-squared test

A

a confidence limit of p > 0.05 will show whether the hypothesis is believable

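A minimal sketch of tests (a)-(d) from card 21 applied to numpy's uniform generator (the sample size and bin count are assumed values):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.uniform(size=10_000)

    # (a) histogram: bin counts should be roughly flat
    counts, _ = np.histogram(x, bins=10)

    # (b) neighbour correlations: x(i) vs x(i+1) should show no pattern
    r = np.corrcoef(x[:-1], x[1:])[0, 1]  # close to zero

    # (c) autocorrelation: unity at zero lag, near zero elsewhere
    xc = x - x.mean()
    acf = np.correlate(xc, xc, mode="full") / (xc @ xc)

    # (d) chi-squared against the flat expectation; p > 0.05 is believable
    chi2, p = stats.chisquare(counts)
    print(r, acf[x.size - 1], p)
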
37
Q

statistics that describe noisy data

A

noise is drawn from random distributions, e.g. a uniform or normal distribution

38
Q

the upper frequency is given by

A

the Nyquist-Shannon sampling theorem

39
Q

the lower frequency is given by

A

the lower bound is set by the total data length

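In formulas, for M samples at interval dt (total length T = M·dt), the upper frequency is 1/(2·dt) and the lower is 1/T; a sketch with assumed values:

    import numpy as np

    M, dt = 1024, 0.01                # assumed sampling parameters
    f_upper = 1.0 / (2.0 * dt)        # Nyquist-Shannon limit
    f_lower = 1.0 / (M * dt)          # set by the total data length

    freqs = np.fft.rfftfreq(M, d=dt)  # FFT frequency grid
    print(f_lower, f_upper)           # match freqs[1] and freqs[-1]
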
40
Q

low pass filter

A

from the Nyquist-Shannon sampling theorem we want to remove power above the Nyquist frequency; the ideal low pass filter is a top-hat function in the frequency domain, which corresponds to a sinc function in the time domain

41
Q

low pass filter sketch

A

see notes

42
Q

why is a low pass filter important

A

to reduce the noise and to remove the problem of aliasing

43
Q

sketch of no correlation

A

see notes

44
Q

sketch of positive correlation

A

see notes

45
Q

sketch of negative correlation

A

see notes

46
Q

central limit theorem

A

for any pdf with finite variance σ^2, as M -> ∞, µ(hat) follows a normal pdf with mean µ and variance σ^2 / M

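A minimal numerical check, assuming samples of size M = 50 drawn from the (non-normal) uniform pdf, whose variance is σ^2 = 1/12:

    import numpy as np

    rng = np.random.default_rng(5)
    M = 50

    # average many samples of size M drawn from U[0,1]
    means = rng.uniform(size=(100_000, M)).mean(axis=1)

    # the sample means follow a normal pdf with variance sigma^2 / M
    print(means.var(), (1.0 / 12.0) / M)  # close to each other
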
47
Q

probability density function sketch

A

see notes

48
Q

means for Poisson

A

e.g. the number of photons/second counted by a CCD, or the number of galaxies/degree^2

49
Q

if correlation coefficient = 0

A

then x and y are uncorrelated; for the bivariate normal pdf this also means they are independent

50
Q

the residuals are equally likely

A

to be positive or negative, and all have equal variance

51
Q

weighted least squares

A

makes good use of small data sets

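A minimal sketch of a weighted straight-line fit on a small, assumed data set; numpy's polyfit takes weights w = 1/σ for Gaussian errors:

    import numpy as np

    rng = np.random.default_rng(6)
    x = np.linspace(0.0, 10.0, 15)              # small data set
    sigma = rng.uniform(0.2, 1.0, size=x.size)  # per-point errors
    y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)

    coeffs = np.polyfit(x, y, deg=1, w=1.0 / sigma)  # weighted least squares
    print(coeffs)  # close to slope 2, intercept 1
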
52
Q

ordinary least squares and weighted least squares plot

A

see notes

53
Q

chi 2 used when

A

we know there are definite outcomes; no errors on measurement

54
Q

reduced chi 2 used when

A

we know there is uncertainty or variance in a measured quantity; errors on measurement

55
Q

reduced chi 2 degrees of freedom

A

are the number of data points minus the number of fitted parameters
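
A minimal helper reflecting cards 54-55 (the names are illustrative): reduced chi-squared divides by ν = number of data points minus number of fitted parameters.

    import numpy as np

    def reduced_chi2(y, model, sigma, n_params):
        # nu = (number of data points) - (number of fitted parameters)
        nu = y.size - n_params
        return np.sum(((y - model) / sigma) ** 2) / nu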