Principles of test selection & administration Flashcards

1
Q

Reasons for testing (5)

A
  • Assessment of athletic talent
  • Identification of physical abilities & areas in need of improvement
  • Setting of realistic goals using baseline measurements
  • Evaluation of progress
  • Identification of physical staleness, burnout & overtraining
2
Q

The process of collecting data

A

Measurement

3
Q

A test administered one or more times during the training period to assess progress and modify the program as needed to maximize benefit

A

Midtest

4
Q

A procedure for assessing ability in a particular endeavor

A

Test

5
Q

Periodic reevaluation based on midtests administered during the training period, usually at regular intervals

A

Formative evaluation
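
To make the pretest-to-midtest comparison concrete, here is a minimal sketch (all numbers are invented, not from the slides) that expresses each midtest result as a percent change from the pretest baseline, the kind of comparison a formative evaluation relies on:

```python
# Hypothetical example: formative evaluation of a back squat 1RM.
# The baseline comes from the pretest; each midtest is compared against it.
baseline_1rm = 100.0                  # pretest 1RM (kg)
midtests_1rm = [104.0, 107.5, 112.5]  # midtests at regular intervals (kg)

for week, score in zip((4, 8, 12), midtests_1rm):
    change = 100 * (score - baseline_1rm) / baseline_1rm
    print(f"week {week}: {score} kg ({change:+.1f}% vs. pretest)")
```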

6
Q

The process of analyzing test results for the purpose of making decisions

A

Evaluation

7
Q

A test administered after the training period to determine the success of the training program in achieving the training objectives

A

Posttest

8
Q

A test used to assess ability that is performed away from the laboratory and does not require extensive training or expensive equipment

A

Field test

9
Q

A test administered before the beginning of training to determine the athlete’s initial basic ability levels

A

Pretest

10
Q

Types of validity (4)

A
  • Construct validity
  • Face validity
  • Content validity
  • Criterion-referenced validity
11
Q

Types of reliability (4)

A
  • Test-retest reliability
  • Intrasubject variability
  • Interrater reliability
  • Intrarater variability
12
Q

What is validity?

A

The degree to which a test or test item measures what it is supposed to measure

13
Q

The most important characteristic of testing

A

Validity

14
Q

The ability of a test to represent the underlying construct

A

Construct validity

15
Q

What does construct mean?

A

The theory developed to organize & explain some aspects of existing knowledge & observations

16
Q

What 3 things are secondary to construct validity?

A

Face validity
Content validity
Criterion-referenced validity

17
Q

The appearance to the athlete & other casual observers that the test measures what it is supposed to measure
Generally informal and nonquantitative

A

Face validity

18
Q

The assessment by experts that the testing covers all relevant subtopics or component abilities in appropriate proportions

A

Content validity

19
Q

As an example, for soccer players, a test battery should include tests for what? (4)

A

Sprinting speed
Agility
Coordination
Kicking power

20
Q

The extent to which test scores are associated with some other measure of the same ability
Often estimated statistically

A

Criterion-referenced validity
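
Because criterion-referenced validity is usually estimated statistically, the sketch below shows one common approach: correlating field test scores with a criterion ("gold standard") laboratory measure. The test choice and all numbers are assumptions made up for illustration, not values from the slides.

```python
# Minimal sketch: Pearson correlation between a field test and a criterion measure.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: 1.5-mile run times (s) vs. lab-measured VO2max (ml/kg/min)
field_test = [540, 610, 585, 500, 655, 572]
criterion = [58.0, 49.5, 52.0, 61.5, 45.0, 53.5]

print(f"r = {pearson_r(field_test, criterion):.2f}")
# A strong negative r here supports validity: faster run times track higher VO2max.
```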

21
Q

Types of criterion-referenced validity

A

Concurrent validity
Predictive validity
Discriminant validity

22
Q

The extent to which test scores are associated with those of other accepted tests that measure the same ability

A

Concurrent validity

23
Q

What is the type of validity related to “the gold standard”?

A

Convergent validity

24
Q

The extent to which the test score corresponds with future behavior or performance

A

Predictive validity

25
Q

The ability of a test to distinguish between 2 different constructs; evidenced by a low correlation between the results of the test & those of a test of a different construct

A

Discriminant validity

26
Q

Reliability

A

A measure of the degree of consistency or repeatability of a test

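Reliability is also typically quantified. As an illustration only (the data are invented, and the statistic chosen is one of several in common use), the sketch below computes a test-retest "typical error" for a vertical jump test administered on two days, one way of expressing intrasubject variability:

```python
# Illustrative sketch: test-retest reliability of a vertical jump test.
from statistics import mean, stdev

day1 = [55.2, 61.0, 48.5, 52.3, 58.8, 50.1]  # jump height (cm), session 1
day2 = [54.6, 62.1, 47.9, 53.0, 57.9, 51.0]  # same athletes, session 2

diffs = [b - a for a, b in zip(day1, day2)]

# Typical error (within-athlete variation): SD of the differences / sqrt(2)
typical_error = stdev(diffs) / 2 ** 0.5

# Express it as a coefficient of variation (% of the grand mean)
cv_percent = 100 * typical_error / mean(day1 + day2)

print(f"typical error = {typical_error:.2f} cm (CV = {cv_percent:.1f}%)")
# Smaller values indicate a more repeatable (reliable) test.
```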
27
Q

T or F: a test must be reliable to be valid

A

TRUE

28
Q

T or F: a reliable test may not be valid

A

TRUE because it may not measure what it is supposed to measure

29
Q

What are the factors that produce measurement error?

A

- Intrasubject variability
- Lack of interrater reliability or agreement
- Intrarater variability
- Failure of the test itself to provide consistent results

30
Q

Sources of interrater differences are variations in: (3)

A

- Calibration of testing devices
- Preparation of athletes
- Administration of the test

31
Q

Sources of intrarater error (5)

A

- Unintentional leniency
- Inadequate training
- Inattentiveness
- Lack of concentration
- Failure to follow standardized procedures

32
Q

Test selection should be based on which components?

A

- Metabolic energy system specificity
- Biomechanical movement pattern specificity
- Athlete experience and training status
- Age and sex
- Environmental factors

33
Q

Effects of high ambient temperature + high humidity?

A

- Impair endurance exercise performance
- Lower the validity of aerobic endurance tests
- Pose health risks

34
Q

Effects of altitude

A

- Impairs performance on aerobic endurance tests
- Does not impair performance on tests of strength and power

35
Q

What are the components of test administration? (7)

A

- Health and safety considerations
- Selection and training of testers
- Recording forms
- Test format
- Testing batteries & multiple testing trials
- Sequence of tests
- Preparing athletes for testing

36
Q

The Strength & Conditioning Professional must:
- be aware of testing conditions that can threaten the health of athletes
- be observant of signs and symptoms of health problems that warrant exclusion from testing
- remain attentive to the health status of athletes, especially before, during, and after maximal exertions

A

Health and safety considerations

37
Q

Testers:
- should be well trained
- should possess a thorough understanding of all procedures & protocols
- should perform & score all tests correctly
- must have sufficient practice
- should be trained to explain & administer the tests as consistently as possible

A

Selection and training of testers

38
Q

Scoring forms should:
- be developed before the testing session
- have space for all test results & comments

A

Recording forms

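As a purely hypothetical sketch of the recording-form idea (the structure and field names below are my own, not from the slides), a digital scoring form can be laid out before the session so that every planned trial and any tester comments have a place:

```python
# Hypothetical recording-form layout: prepared before testing, filled in during it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestRecord:
    athlete: str
    test_name: str
    trials: list[Optional[float]]  # one slot per planned trial
    units: str = ""
    comments: str = ""

    def best(self) -> Optional[float]:
        """Best completed trial, or None if no valid trials have been recorded."""
        scores = [t for t in self.trials if t is not None]
        return max(scores) if scores else None

# Form created ahead of the session with space for three trials and comments
record = TestRecord("A. Athlete", "Vertical jump", trials=[None, None, None], units="cm")
record.trials[0] = 55.5
record.comments = "Trial 2 disallowed: athlete stepped before takeoff"
print(record.best(), record.units)  # -> 55.5 cm
```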
39
Q

- Consider whether athletes will be tested all at once or in groups.
- The same tester should administer a given test to all athletes if possible.
- Each tester should administer one test at a time, especially when the tests require complex movements.

A

Test format

40
Q

- Duplicate test setups can be used for large groups.
- When multiple trials of a test or a battery of tests are performed, allow complete recovery between trials.

A

Testing batteries and multiple testing trials

41
Q

Fundamental principle of test sequencing?

A

One test should not affect the performance of a subsequent test

42
Q

From 1 to 7, what is the sequence of tests?

A

1. Non-fatiguing tests
2. Agility tests
3. Maximum power & strength tests
4. Sprint tests
5. Local muscular endurance tests
6. Fatiguing anaerobic capacity tests
7. Aerobic capacity tests

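To show how the sequencing principle can be applied, here is an illustrative sketch (the example battery and the category assigned to each test are my own assumptions, not from the slides) that orders a day's tests by the 1-7 sequence above:

```python
# Recommended test order, least to most fatiguing, per the sequence above
SEQUENCE = [
    "non-fatiguing",
    "agility",
    "maximum power & strength",
    "sprint",
    "local muscular endurance",
    "fatiguing anaerobic capacity",
    "aerobic capacity",
]

# Hypothetical battery: (test name, category it is assumed to belong to)
battery = [
    ("1.5-mile run", "aerobic capacity"),
    ("1RM back squat", "maximum power & strength"),
    ("skinfold measurements", "non-fatiguing"),
    ("T-test", "agility"),
    ("40-yard sprint", "sprint"),
]

# Sort so that no test fatigues the athlete for a subsequent one
for name, category in sorted(battery, key=lambda t: SEQUENCE.index(t[1])):
    print(f"{category:>30}: {name}")
```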
43
Q

The instructions should cover:
- the purpose of the test
- how it is to be performed
- the amount of warm-up recommended
- the number of practice attempts allowed
- the number of trials
- test scoring
- criteria for disallowing attempts
- recommendations for maximizing performance

A

Preparing athletes for testing

44
Q

Read slides 45-46

A

Preparing athletes for testing