Section 7 Neuropsychological Rehabilitation Flashcards

1
Q

What is the purpose of assessing patient functioning?

A

The purpose of assessing patient functioning is for diagnosis, prognosis and evaluation.

2
Q

Why is it less common to measure outcome systematically after treatment in clinical practice?

A

Because neuropsychologists often conduct extensive testing during pre-treatment assessment.

3
Q

What are some important aspects to consider when choosing an outcome measure?

A

Psychometric properties (reliability, validity, responsiveness), assessment criteria for outcome measures, and the ICF framework.

4
Q

What is the definition of rehabilitation according to the British Society of Rehabilitation Medicine (BSRM) and Royal College of Physicians (RCP)?

A

“Rehabilitation is defined as a process of active change by which a person who has become disabled acquires the knowledge and skills needed for optimal physical, psychological, and social function and the use of all means to minimize the impact of disabling conditions.”

5
Q

Measuring the level of …. in society is considered important in rehabilitation contexts.

A

Participation

6
Q

What is the purpose of knowing about bias in research?

A

To ensure the validity and scientific quality of a study and the robustness of the evidence to guide clinical practice.

7
Q

What are some strategies to avoid or minimize bias in research methodologies?

A

Randomised controlled trials (RCTs), single-case designs, systematic reviews, and clinical practice guidelines.

8
Q

What is bias in research methodology and what does it refer to according to Higgins and Altman (2008)?

A

According to Higgins and Altman (2008), bias is “a systematic error, or deviation from the truth, in results”, i.e. an incorrect estimate of the association between exposure and the health outcome.

Bias occurs when an estimated association (odds ratio, difference in means, etc.) deviates from the true measure of association.

Systematic error is introduced into sampling or testing by selecting or encouraging one outcome or answer over others.

9
Q

What are the five types of bias enumerated by Sackett (1979) with regard to the interpretation and reporting of results in research?

A

(1) bias of rhetoric – use of arguments that are not based on evidence;
(2) all’s well literature bias – publishing studies that ignore or minimise conflicting results;
(3) one-sided references bias – citing references that support only one side of the argument;
(4) positive result bias – the more likely submission by investigators, and acceptance by editors, of studies reporting positive results; and
(5) hot stuff bias – when a topic is fashionable (‘hot’), investigators may be less critical in their approach to their research, and investigators and editors may not be able to resist the temptation to publish the results.

10
Q

What is a critical appraisal (or methodological quality) instrument used for in research?

A

A critical appraisal (or methodological quality) instrument is used to evaluate whether a report meets scientific standards and to identify biases in the planning and conduct of a study.

Examples of such instruments include the AMSTAR for systematic reviews, PEDro Scale for RCTs, and RoBiNT Scale for single-case research.

11
Q

What are the four major classes of validity in research?

A

Internal validity; construct validity; statistical conclusion validity; and external validity.

12
Q

What is internal validity in research?

A

Internal validity reflects the degree to which changes in the dependent variable are attributable to the effect of the independent variable rather than some other factor or confounder.

13
Q

What is the goal of internal validity in research?

A

To ensure that changes in the dependent variable are solely the result of the intervention being studied.

14
Q

What are some common threats to internal validity in research?

A

History, maturation, assignment, attrition, instrumentation, testing, regression, participant reactivity, and investigator-related expectancy effects.

15
Q

What are history and maturation?

A

“History” refers to the influence of environmental factors (including historical events) that are not under the control of the investigator, such as changes in a participant’s personal circumstances.

“Maturation” refers to changes within a participant over time, such as spontaneous recovery or adjustment to disability.

16
Q

What are assignment, attrition and instrumentation?

A

“Assignment” refers to the potential for important differences among participants in a study (for example, age) that may be related to performance on the dependent variable.

“Attrition” refers to the loss of participants from a study sample, which can bias results if it is greater in one group than in another.

“Instrumentation” refers to the measurement tools used to assess the dependent variable. The reliability of these instruments must be considered and improved.

17
Q

What are testing, regression, participant reactivity, and investigator-related expectancy effects?

A

“Testing” refers to the potential for practice effects on tests or familiarity with testing procedures to impact performance.

“Regression” refers to the tendency for extreme scores to return to the mean on subsequent testing occasions, in the absence of real changes in function.

“Participant reactivity” refers to the ways in which participants may respond in a way that complies with what they think the investigator expects. This can result from self-report measures or from the participant’s perceptions of the investigator’s expectations.

“Investigator-related expectancy effects” refers to the influence of an investigator’s expectations on a participant’s outcome. This can include compensatory equalization of treatments, where an investigator gives additional attention to participants.
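
Regression to the mean can be demonstrated with a short simulation. The population parameters below (true ability with mean 100 and SD 10, measurement noise with SD 10, selecting the bottom 5%) are illustrative assumptions, not values from the text: a group selected for extreme baseline scores drifts back toward the population mean on retest even though nothing about their underlying ability has changed.

```python
import random

random.seed(42)

# True ability is fixed; each test score = ability + independent measurement noise.
# Selecting people on an extreme first score guarantees that their second score
# tends back toward the population mean, with no real change in function.
N = 10_000
abilities = [random.gauss(100, 10) for _ in range(N)]
test1 = [a + random.gauss(0, 10) for a in abilities]
test2 = [a + random.gauss(0, 10) for a in abilities]

# Select the bottom 5% on test 1 (e.g. classified "impaired" at baseline).
cutoff = sorted(test1)[int(0.05 * N)]
selected = [i for i in range(N) if test1[i] <= cutoff]

mean1 = sum(test1[i] for i in selected) / len(selected)
mean2 = sum(test2[i] for i in selected) / len(selected)
print(f"baseline mean of extreme group: {mean1:.1f}")
print(f"retest mean of the same group:  {mean2:.1f}")  # closer to 100
```

Without a control group, this spontaneous "improvement" could easily be mistaken for a treatment effect.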

18
Q

What is construct validity and what are two core features of it?

A

Construct validity refers to how well the variable being measured reflects the higher order construct the variable is meant to represent. Two core features of construct validity are clarity regarding the construct under investigation and how the construct is measured.

19
Q

What are some common threats to construct validity?

A

Poorly defined constructs, construct under-representation, and treatment-sensitive factorial structure bias.

20
Q

What is construct explication and why is it important?

A

Construct explication is the process of operationally defining the construct under investigation. It is important because, without a clear definition, elements irrelevant to the construct may be measured and inaccurate conclusions drawn.

21
Q

What is construct confounding and what is an example of it?

A

Construct confounding refers to the extent to which constructs either overlap with or are independent of each other. In other words, two variables are so closely related that it is difficult to determine which one is responsible for the observed relationship.

An example of construct confounding would be assuming a general construct (e.g. memory) from a specific construct (e.g. prospective memory) that is being studied, leading to inaccurate conclusions.

22
Q

What is mono-operation bias and what is an example of it?

A

Mono-operation bias refers to a single operation (i.e. a measure) that may under-represent a construct, only capturing a single facet of a complex multidimensional construct. An example of mono-operation bias would be measuring anxiety only by documenting the frequency of episodes of agitation.

23
Q

What is mono-method bias and what is an example of it?

A

Mono-method bias refers to the possibility of measuring a construct using the same method, resulting in the method of measurement becoming part of the construct. An example of mono-method bias would be if several measures of a construct were taken using ratings by independent observers, the construct would become “performance as rated by independent observers.”

24
Q

What are poorly defined constructs?

A

Poorly defined constructs refer to a lack of clarity in the definition or operationalization of the construct being studied. This can lead to inaccurate conclusions because elements that are irrelevant to the construct are also being measured.

25
Q

What is construct under-representation?

A

Construct under-representation occurs when the chosen measures fail to capture all the components of a construct. It takes two forms: mono-method bias, in which a construct is measured using only one method, so that the method of measurement becomes part of the construct; and mono-operation bias, in which a single measure (or operation) is used that is inadequate to capture all the components of the construct.

26
Q

What is treatment-sensitive factorial structure bias and what is an example of it?

A

Treatment-sensitive factorial structure bias refers to the idea that when someone receives an intervention or treatment, it can change the way they think about or experience the thing being measured. For example, if a treatment made people more aware of their memory failures, they might report more memory failures than they would have otherwise, even if their memory hasn’t actually gotten worse. This can make it harder to compare the results of people who received the treatment with people who didn’t, because they might be reporting different things, even though they’re being measured using the same tool.

27
Q

What does statistical conclusion validity refer to?

A

To the extent to which analytic techniques used in a study influence the risk of Type I or Type II errors.

28
Q

How does the integrity of the data affect statistical conclusion validity (the degree to which conclusions about the relationships among variables, based on the data, are correct)?

A

The range of variability, heterogeneity of the sample, and confounding factors can threaten the validity of statistical conclusions.

29
Q

What is the main reason for low statistical power in a study?

A

The main reason for low statistical power in a study is a sample size that is too small, which occurs due to inadequate planning, low recruitment rates, and high attrition.
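
The link between sample size and power can be sketched with a simulation. The numbers here are illustrative assumptions (a true effect of 0.5 SD, known unit variance, and a simple z-test rather than a t-test, for brevity): an underpowered study usually fails to detect a real effect, i.e. it commits a Type II error.

```python
import math
import random

random.seed(0)

def estimated_power(n, effect=0.5, sims=2000, z_crit=1.96):
    """Fraction of simulated two-group studies (true effect = 0.5 SD)
    whose group difference reaches z-significance at p < .05."""
    hits = 0
    for _ in range(sims):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(effect, 1) for _ in range(n)]
        diff = sum(treated) / n - sum(control) / n
        se = math.sqrt(2 / n)  # SE of a difference of two means, SD = 1
        if abs(diff / se) > z_crit:
            hits += 1
    return hits / sims

small = estimated_power(n=10)    # underpowered study
large = estimated_power(n=100)   # adequately powered study
print(f"power with n=10 per group:  {small:.2f}")
print(f"power with n=100 per group: {large:.2f}")
```

With only 10 participants per group, most simulated trials miss the true effect; with 100 per group, power rises above the conventional 0.80 threshold.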

30
Q

What happens if the assumptions of statistical tests are not met by the data being examined?

A

The results of statistical analyses may be unreliable.

31
Q

How can the effectiveness of an intervention be overestimated?

A

If multiple statistical tests are conducted without correcting for the number of comparisons, which will inflate the risk of Type I error.
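
The inflation of Type I error under multiple uncorrected tests follows directly from the arithmetic of independent comparisons; the Bonferroni adjustment shown below is one common correction, named here for illustration rather than taken from the text.

```python
# With no true effect anywhere, the chance of at least one spuriously
# "significant" result grows with the number of uncorrected tests:
# P(at least one false positive) = 1 - (1 - alpha)^k.
alpha, k = 0.05, 10
family_wise = 1 - (1 - alpha) ** k
print(f"{k} tests at alpha={alpha}: {family_wise:.2f} chance of a false positive")

# Bonferroni correction: test each comparison at alpha / k,
# which holds the family-wise error rate at or below alpha.
bonferroni_alpha = alpha / k
corrected = 1 - (1 - bonferroni_alpha) ** k
print(f"after Bonferroni (alpha/k): {corrected:.3f}")
```

Ten uncorrected tests give roughly a 40% chance of at least one false positive; the corrected threshold brings this back under 5%.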

32
Q

What is a type I error and a type II error?

A

A Type I error occurs when a null hypothesis is rejected when it is actually true.

A Type II error occurs when a null hypothesis is not rejected when it is actually false.

33
Q

What are threats to statistical conclusion validity?

A
  1. Low sample size: a small sample size increases the risk of making a Type II error.
  2. Violated assumptions of statistical tests: some statistical tests have assumptions that must be met for the results to be valid.
  3. Multiple comparisons: conducting multiple statistical tests without correcting for the number of comparisons increases the risk of making a Type I error.

34
Q

What is external validity?

A

External validity refers to the extent to which results of a study are applicable in other contexts.

35
Q

What is the generality of findings in external validity?

A

Refers to the extent to which the results of a study can be applied to other participants, practitioners, settings, timeframes, and variations of intervention components.

36
Q

How does single-case research impact external validity?

A

Single-case research with a sample size of n=1 can impact external validity as it can be challenging to claim applicability of an intervention beyond the single individual.

37
Q

What is the problem with selection criteria in terms of external validity?

A

Overly restrictive inclusion and exclusion criteria for a study can pose a threat to external validity as it can result in a homogeneous sample that may not be representative of the general population.

38
Q

How can the setting pose a threat to external validity?

A

An intervention that was successful in one setting may not be applicable to another setting.

39
Q

What is a practical strategy to evaluate validity?

A

The use of critical appraisal tools that evaluate bias and threats to validity (a low score on such a tool indicates that the study may be subject to bias and threats to validity).

40
Q

What is an RCT, and why is it considered the gold standard in research design?

A

An RCT, or a randomized controlled trial, is a research design where participants are randomly assigned to different groups. It is considered the gold standard in research design because it has the capacity to minimize the risk of bias in the experimental design.

41
Q

What are the main strategies to reduce risks of bias in the randomized controlled trial (RCT)?

A

Randomization and blinding are the main strategies to reduce risks of bias in the RCT.

42
Q

How does randomization control for bias in the randomized controlled trial (RCT)?

A

Randomization controls for bias by ensuring that differences among participants that could significantly influence the outcome will, theoretically, be equally distributed between the groups.
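
A minimal sketch of simple randomisation, using a hypothetical participant pool in which age is a potential confounder (the pool and its age range are invented for illustration): shuffling before allocation means neither group is systematically older, so the confounder is balanced in expectation.

```python
import random

random.seed(7)

# Hypothetical participant pool: ages vary widely, a potential confounder.
participants = [{"id": i, "age": random.randint(18, 80)} for i in range(200)]

# Simple randomisation: shuffle the pool, then split it in half.
random.shuffle(participants)
treatment, control = participants[:100], participants[100:]

def mean_age(group):
    return sum(p["age"] for p in group) / len(group)

print(f"treatment mean age: {mean_age(treatment):.1f}")
print(f"control mean age:   {mean_age(control):.1f}")
# In expectation the two means are equal; any residual gap is due to chance.
```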

43
Q

What is blinding in the RCT, and what does it primarily address?

A

Blinding refers to the practice of keeping participants and/or researchers unaware of the group to which participants have been allocated. Blinding primarily addresses the risk of bias from some components of internal validity (e.g. investigator expectancies, diffusion, compensatory equalisation of treatments).

44
Q

What are critical appraisal instruments?

A

Critical appraisal instruments are tools used to assess the quality and validity of published research studies

45
Q

What are some strategies to reduce the risk of bias in single-case research?

A

Ensuring adequate sampling of the dependent variable (the outcome being measured), so that the observations accurately capture its variation; establishing high inter-rater agreement on observations; and using experimental designs with sufficient experimental control.
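
Inter-rater agreement is often quantified with a chance-corrected statistic; Cohen’s kappa, sketched below, is one common choice (it is not named in the text, and the two-rater observation codes are invented for illustration).

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over categorical codes:
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical behavioural observations coded by two independent raters.
a = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task"]
b = ["on-task", "off-task", "on-task", "off-task", "off-task", "on-task"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

Raw percentage agreement here is 5/6, but kappa discounts the agreement expected by chance alone, giving a more conservative 0.67.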

46
Q

What are reporting guidelines and what is their purpose?

A

Reporting guidelines are standards that describe the details of interventions in studies, such as the rationale, content, process, dose, and fidelity of behavioural interventions. Their purpose is to provide a systematic listing of such details to make it easier to compare the effects of different treatments.

47
Q

What is the Rehabilitation Treatment Taxonomy (RTT)?

A

The Rehabilitation Treatment Taxonomy (RTT) is a framework for characterizing rehabilitation treatments. It specifies the treatment target, active ingredients, and mechanism of action.

48
Q

What are the disadvantages of randomized controlled trial (RCTs)?

A

Their complexity and expense. Trials that are tightly focused on internal validity (efficacy) must exclude patients with comorbidities and therefore tend to be weak in external validity.

49
Q

What are practical clinical trials (PCTs)?

A

Practical clinical trials (PCTs) compare clinically relevant alternative interventions that may be widespread in practice, using more diverse samples than an RCT and more distal measures of outcome (e.g. satisfaction with life)

Designed to evaluate the effectiveness of interventions in real-life routine practice conditions, rather than focusing on proving causative explanations for outcomes

50
Q

What is the difference between practical clinical trials (PCTs) and RCTs?

A

PCTs use more diverse samples than an RCT and more distal measures of outcome.

51
Q

What are adaptive and multi-component treatment trials?

A

Adaptive treatment trials are clinical trials that use a flexible treatment approach, where the treatment is modified or adjusted based on the individual participant’s response to the treatment.

Multi-component treatment trials, on the other hand, involve using more than one type of treatment intervention to address a particular condition or disease.

52
Q

Why is it important to mask the person who assesses the outcomes of a trial to the participants’ group assignment?

A

To minimize bias in the study results. If outcome assessors know which treatment a participant received, they may be more likely to rate their outcomes positively, leading to overestimation of treatment effects.

53
Q

What is the difference between multidisciplinary and interdisciplinary teams?

A

Members of multidisciplinary teams work in parallel but have clear role definitions and tasks, whereas interdisciplinary teams collaboratively discuss and set treatment goals and jointly carry out treatment plans.

54
Q

What is the role of assessment in neuropsychological rehabilitation and what does it consist of?

A

Assessment is the basis of neuropsychological rehabilitation, and includes not only standardised testing but also behavioural observations, functional tasks, interviews and questionnaires

55
Q

What is the goal of outcome measurement in neuropsychological rehabilitation?

A

Special emphasis is placed on the measurement of participation after brain injury, since participation in society is one of the major goals of rehabilitation

56
Q

What are some cons of the more novel forms of rehabilitation, such as computer-based cognitive retraining and non-invasive brain stimulation?

A

They are mostly restricted to trained tasks and generalization to daily life is limited.