Week 8: Confirmatory Factor Analysis (CFA) Flashcards

(12 cards)

1
Q

Comparison between EFA and confirmatory factor analysis (CFA)

A

Confirmatory factor analysis (CFA) is where the model is specified before the analysis and we test whether the data confirm that model, unlike exploratory factor analysis (EFA), where the results statistically drive the model. The aims are the same as for EFA: we want to say how many factors we have, see where our items load, and see whether our factors are correlated with each other. CFA is theory-driven; the two approaches share the same aims, just different techniques.
With CFA, you tell the software how many factors you have (e.g., 2 parenting styles) and specify where each item should load.
SEM is very powerful: multiple hypotheses, analyses and DVs can be tested simultaneously in the one model.
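A minimal sketch of what specifying the model up front can look like in code rather than in the Amos point-and-click interface, assuming the third-party semopy package (exact syntax may differ by version); the item names and data file below are hypothetical placeholders, not from the actual parenting scale:

```python
# Minimal CFA sketch (assumes the semopy package; item and file names are hypothetical).
import pandas as pd
from semopy import Model

# Theory-driven specification: two parenting-style factors, and which item loads where.
MODEL_DESC = """
Authoritarian =~ raise_voice + smack + punish
Reasoning =~ explain + demonstrate + negotiate
Authoritarian ~~ Reasoning
"""

data = pd.read_csv("parenting_items.csv")   # hypothetical file of item responses
model = Model(MODEL_DESC)                   # the model is fixed before seeing the results
model.fit(data)
print(model.inspect())                      # factor loadings, factor covariance, error variances
```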

2
Q

Amos and structural equation modelling (SEM)

A

As confirmatory factor analysis is theoretically driven, we have to specify exactly where everything goes by drawing a model in the program. Following is a model from Amos, built in the graphics version using data from SPSS. The output gives you a diagram with the values of the correlations and the factor loadings placed in your model.
In the model, squares or rectangles are observed variables (the measured variables), and circles or ellipses always mean an unobserved variable. There are also small error circles representing error in the measurement of each observed variable. One of the powerful things about structural equation modelling is that it takes the error out of the model and looks at the predictions between variables with that error removed from the relationship.

A two-headed arrow always means a correlation. A one-headed arrow in any structural equation model is called a regression weight and represents one variable predicting another (an independent variable pointing to a dependent variable), similar to multiple regression.
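A hedged illustration of how those diagram conventions translate into lavaan-style model syntax (as used by packages such as semopy); the variable names here are hypothetical placeholders:

```python
# Mapping Amos diagram elements onto lavaan-style model syntax (hypothetical names).
ARROWS = """
Authoritarian =~ raise_voice + smack
Authoritarian ~~ Reasoning
child_outcome ~ Authoritarian
"""
# =~ : a latent factor (ellipse) measured by observed items (rectangles)
# ~~ : a two-headed arrow, i.e. a correlation/covariance between two variables
# ~  : a one-headed arrow, i.e. a regression weight (one variable predicting another)
# The small error circles attached to each observed variable are estimated by default.
```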

3
Q

amos2

A

The E1, E2, E3 circles etc. represent the error in each of those observed variables.
One-way arrows represent predictions (standardised regression weights).
The theory states that the factors exist and thus lead to the responses on the observed variables; that is why the arrows point from the factors to the items.
You can see the chi-squared value change as you change what you put into the model.

4
Q

Conceptualisation of variance in CFA

A

What CFA does is correlate the two factors and count only the variance they actually share. The idea is that the error is taken out, and anything the variables don't share is removed from the equations or held separately, so that we are only looking at what is related. CFA tries to explain an item's variation by working out how much is due to the factor and how much is due to error variance.
In the bottom image, the blue area shows the shared variance.
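A tiny numeric sketch of "counting the shared variance" between two factors, using a hypothetical factor correlation of .30:

```python
# Shared variance between two correlated factors (the correlation here is hypothetical).
r_factors = 0.30
shared_variance = r_factors ** 2       # proportion of variance the two factors share
print(round(shared_variance, 2))       # 0.09, i.e., the factors share 9% of their variance
```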

5
Q

cfa2

A

E.g., the Raised Voice item has variance that is due to the Factor and variance that is not due to the Factor: it has its own unique variation plus the variation it shares with the other items on the Factor.

6
Q

cfa3

A

Standardised Regression Weights (or Factor Loadings) are interpreted like correlations and range between −1 and +1; in a typical CFA they fall between 0 and 1.
Correlations are estimated between the Factors. In Amos output these are usually labelled Covariances; a covariance is simply the unstandardised version of a correlation.
The chi-squared statistic is the test of Model Fit, i.e., are the correlation values what we would expect if the (e.g., 2-Factor) model is correct?
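A small sketch of the covariance/correlation distinction, with hypothetical numbers: a correlation is just a covariance standardised by the two standard deviations.

```python
# Standardising a covariance into a correlation (all values hypothetical).
cov_xy = 1.2                    # unstandardised covariance, as Amos reports it
sd_x, sd_y = 2.0, 1.5
r_xy = cov_xy / (sd_x * sd_y)   # correlation = covariance / (SD_x * SD_y)
print(round(r_xy, 2))           # 0.4
```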

7
Q

cfa4

A

(Standardised regression weight)² = Squared Multiple Correlation. E.g., in the example below, .68 × .68 = .46, so 46% of the item Demonstrate's variability is explained by the Reasoning Parenting Style factor.

8
Q

Squared multiple correlations (SMC)

A

As the name implies, the squared multiple correlation (SMC) is the square of the standardised regression weight; it is interpreted just like R-squared. Amos does not give you an equivalent value for the error variable, but you can work it out easily, because an item's variance is made up of error plus what it shares with the factor.

Because those two parts must add up to the whole, you can subtract the SMC from 1 (or from 100 if you are working in percentages) to find out how much of the variation the error explains. If you really wanted the value of the error path itself, you would take the square root of that value.
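A short worked version of that reasoning, reusing the .68 loading for the Demonstrate item from the earlier card (the decimals here go slightly beyond the rounded .46 shown there):

```python
import math

loading = 0.68                          # standardised regression weight from the Demonstrate example
smc = loading ** 2                      # squared multiple correlation, about 0.46 (46% explained by the factor)
error_variance = 1 - smc                # about 0.54, the proportion explained by error
error_path = math.sqrt(error_variance)  # about 0.73, the value of the error path if you wanted it
print(round(smc, 2), round(error_variance, 2), round(error_path, 2))
```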

9
Q

squared multiple correlation2

A
10
Q

Further benefits of CFA over EFA

A

CFA is more sophisticated than EFA: with CFA you can obtain much more information about the interrelationships between variables (both observed and unobserved).
These are some of the additional analyses you can consider when using CFA to understand those interrelationships further:

Because you have error variables, you can see whether two of them are uniquely correlated by adding a correlation between the error terms, e.g., you can test whether 2 items are correlated with each other over and above all of the items correlating through the one Factor.
You can predict the error variables from other variables you put in the model, so you can start explaining why there is error. That is really powerful in psychological measurement.
With structural equation modelling, you can compare a model across different groups at the same time (e.g., the authoritarian and reasoning parenting model could be split into groups of mums and dads) to see whether the model fits in each group, and for anything in that model you can get a statistical test of whether it is significantly different across the two groups.
You can test the factor structure over time and look at what predicts what over time, e.g., whether a latent variable at time 1 predicts the same latent variable at time 2, so you can also test questions about the direction of effects.
You can allow errors to be correlated over time, because our measurement instruments contain error and that error can be correlated across occasions. Taking that out of the equation gives you a purer estimate of stability or change over time, carried by the arrows between the latent variables.

In SEM, you can also statistically compare the fit of different models in order to determine which model best fits your data. This is done with the chi-squared goodness-of-fit statistic, which is the same in CFA as it was for EFA. It looks at how well the model fits the pattern of covariances in the data: it is a comparison between the actual covariances and what the ideal covariances or correlations would be if the model were true.
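A minimal numpy sketch of what "comparing the actual covariances to the covariances the model implies" means, using hypothetical standardised loadings and a hypothetical factor correlation (the chi-squared statistic is essentially a summary of how large these discrepancies are):

```python
import numpy as np

# Hypothetical standardised 2-factor model: 3 items per factor.
loadings = np.array([
    [0.70, 0.00],
    [0.65, 0.00],
    [0.60, 0.00],
    [0.00, 0.75],
    [0.00, 0.68],
    [0.00, 0.72],
])
phi = np.array([[1.00, 0.30],
                [0.30, 1.00]])                    # factor correlation matrix
theta = np.diag(1 - np.sum(loadings**2, axis=1))  # error variances for standardised items

implied_cov = loadings @ phi @ loadings.T + theta  # what the correlations should be if the model is true
observed_cov = np.eye(6)                           # placeholder: the sample correlation matrix goes here
residuals = observed_cov - implied_cov             # model fit asks how small these discrepancies are
print(np.round(implied_cov, 2))
```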

11
Q

Chi Squared Statistic

A

The chi-squared statistic compares the actual correlations/covariances in the data to the estimated (or ideal) values implied by the model, to see how well the model fits. You want the chi-squared statistic to NOT be significant, as this means the model is a good match for the data. In practice the chi-squared statistic is often significant, so it is not used as the sole criterion of how good a model is, BUT it is the main consideration when comparing whether one model is better than another.
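A hedged sketch of checking that "not significant" criterion with scipy, using a hypothetical chi-squared value and degrees of freedom taken from a model's output:

```python
from scipy.stats import chi2

chisq, df = 12.5, 8           # hypothetical model chi-squared and degrees of freedom
p_value = chi2.sf(chisq, df)  # survival function = P(chi-squared >= observed value)
print(round(p_value, 3))      # here p is about 0.13 > .05, so the model is NOT rejected (good fit)
```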

12
Q

chi squared statistic 2

A

To evaluate the chi-squared value you need to consult the chi-squared table (against the model's degrees of freedom).
The example below shows how to compare different models.
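A hedged sketch of that comparison for two nested models, with hypothetical chi-squared values and degrees of freedom; the critical value from scipy replaces looking it up in a printed chi-squared table:

```python
from scipy.stats import chi2

# Hypothetical chi-squared output from two nested models (e.g., 1-factor vs 2-factor).
chisq_1factor, df_1factor = 85.0, 9
chisq_2factor, df_2factor = 14.0, 8

delta_chisq = chisq_1factor - chisq_2factor   # 71.0
delta_df = df_1factor - df_2factor            # 1

critical_value = chi2.ppf(0.95, delta_df)     # about 3.84, the table value for alpha = .05
p_value = chi2.sf(delta_chisq, delta_df)      # p for the chi-squared difference test
print(delta_chisq > critical_value, p_value < 0.05)  # True True: the 2-factor model fits significantly better
```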
