EDTEC 544 - Chapter 3 Flashcards

(22 cards)

1
Q

Coherence Principle

A

Keep the lesson uncluttered: Avoid adding material that does not support the instructional goal.

2
Q

Weeding

A

To uproot words, graphics, or sounds that are not central to the instructional goal of the lesson.

3
Q

Evidence-based practice

A

Base instructional techniques on research findings and research-based theory

4
Q

Instructional effectiveness

A

Identifying instructional methods or features that have been shown to improve learning

5
Q

Three Approaches to Research on Instructional Effectiveness

A

What works?

When does it work?

How does it work?

6
Q

What works?

A

Example: Does an instructional method cause learning?

Research method: Experimental comparison

7
Q

When does it work?

A

Example: Does an instructional method work better for certain learners, materials, or environments?

Research method: Factorial experimental comparison

8
Q

How does it work?

A

Example: What learning processes determine the effectiveness of an instructional method?

Research method: Observation, interview, questionnaire

9
Q

What to Look for in Experimental Comparisons:

A

Focus on:

Step 1: situations like yours.
Step 2: studies that use the appropriate research method.
Step 3: experimental comparisons that meet the criteria of good research methodology.

10
Q

Three criteria to look for in experimental comparisons

A
  1. experimental control
  2. random assignment
  3. appropriate measures
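Of these three criteria, random assignment is concrete enough to sketch in code: shuffling learners before dealing them into conditions spreads any pre-existing differences by chance alone. A minimal Python illustration (the function name and group labels are my own, not from the chapter):

```python
import random

def randomly_assign(learners, groups=("treatment", "control"), seed=None):
    """Shuffle the learner pool, then deal learners round-robin into
    groups, so pre-existing differences are spread by chance alone."""
    rng = random.Random(seed)
    pool = list(learners)
    rng.shuffle(pool)
    assignment = {g: [] for g in groups}
    for i, learner in enumerate(pool):
        assignment[groups[i % len(groups)]].append(learner)
    return assignment

groups = randomly_assign(range(50), seed=1)
print(len(groups["treatment"]), len(groups["control"]))  # 25 25
```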
11
Q

How to interpret “no effect” in experimental comparisons

A
  1. Ineffective treatment
  2. Inadequate sample size
  3. Insensitive measure
  4. Inadequate treatment implementations
  5. Insensitivity to learners
  6. Confounding variables
12
Q

Ineffective treatment

A

The treatment truly does not influence learning (there is no real improvement to detect)

13
Q

Inadequate sample size

A

Not enough learners in the study (sample size < 25)

14
Q

Insensitive measure

A

Not enough experimental items to detect differences in learning outcomes

15
Q

Inadequate treatment implementations

A

Treatment and control group conditions were not different enough

16
Q

Insensitivity to learners

A

The treatment works for some learners but not for those in the study (for example, a method that helps novices may not help the experts who were tested)

17
Q

Confounding variables

A

Treatment and control groups differ on another important variable

18
Q

Interpreting research statistics (look for):

A
  1. Probability is less than .05 (p < .05)
  2. Effect size of .5 or greater
19
Q

What does p < .05 signify?

A

There is less than a 5% probability that a difference this large would occur by chance alone if the treatment actually had no effect.

In other words, the result is unlikely to be a fluke, so it is judged statistically significant.
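One way to see what a p-value measures is a permutation test: shuffle the group labels many times and count how often chance alone produces a treatment advantage as large as the one observed. A sketch with made-up post-test scores (the data and function are illustrative, not from the chapter):

```python
import random

def permutation_p_value(treatment, control, n_permutations=10_000, seed=0):
    """Estimate how often chance alone (random relabeling of the same
    scores) produces a treatment advantage at least as large as the
    one actually observed."""
    rng = random.Random(seed)
    n_t = len(treatment)
    observed = sum(treatment) / n_t - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    at_least_as_extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_t]) / n_t - sum(pooled[n_t:]) / len(control)
        if diff >= observed:
            at_least_as_extreme += 1
    return at_least_as_extreme / n_permutations

# Hypothetical post-test scores for two groups of eight learners
treatment = [78, 85, 90, 74, 88, 82, 91, 79]
control = [70, 72, 68, 75, 71, 66, 73, 69]
print(permutation_p_value(treatment, control))  # a small value, well below .05
```

Because the treatment scores here sit far above the control scores, almost no random relabeling matches the observed difference, so the estimated p-value falls well under .05.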

20
Q

What does an effect size > .5 signify?

A

On average, the treatment group scored more than half a standard deviation higher than the control group.
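This effect size is commonly computed as Cohen's d: the difference between group means divided by the pooled standard deviation. A sketch using Python's statistics module (the scores are hypothetical, not from the chapter):

```python
import statistics

def cohens_d(treatment, control):
    """Effect size: difference between group means expressed in units
    of the pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    var_t = statistics.variance(treatment)  # sample variance (n - 1)
    var_c = statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical post-test scores
treatment = [78, 85, 90, 74, 88, 82, 91, 79]
control = [70, 72, 68, 75, 71, 66, 73, 69]
d = cohens_d(treatment, control)
print(round(d, 2))  # well above the .5 threshold
```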

21
Q

How to identify relevant research

A
  1. How similar are learners in research to my learners?
  2. Are conclusions based on an experimental research design?
  3. Are the experimental results replicated?
  4. Is learning measured by tests that measure application?
  5. Does the data analysis reflect practical significance as well as statistical significance?
22
Q

What to look for in experimental e-learning research

A
  1. Were subjects randomly assigned to treatments?
  2. Were there enough subjects to detect differences in learning?
  3. Were treatments similar except for the instructional method being tested?
  4. Was the outcome measure appropriate to measure relevant learning differences?
  5. Were the results statistically and practically significant?
  6. To what extent did the learners and lesson features (content, length, etc.) reflect your own environment?
  7. Have several experiments been conducted that support the same conclusions?