Chapter 4 Flashcards
Basic Components of a Research Study
The basic research process is very simple. You start with an educated guess, called a hypothesis, about what you expect to find. When you decide how you want to test this hypothesis, you have a research design. This includes the aspects you want to measure in the people you are studying (the dependent variable) and the influences on their behaviours (the independent variable). For example, a researcher interested in understanding the relationship between panic attacks and alcohol abuse might choose to study the effects of anxiety induction in the lab (the independent variable) on how much alcohol research participants choose to drink (the dependent variable). Finally, two forms of validity are specific to research studies: internal and external validity. Internal validity is the extent to which we can be confident that the independent variable is causing the dependent variable to change. External validity refers to how well the results relate to things outside your study, in other words, how well your findings describe similar individuals or processes outside the laboratory.
Hypothesis
Human beings look for order and purpose. We want to know why the world works as it does, and why people behave the way they do. Robert Kegan (cited in Lefrancois, 1990) describes us as “meaning-making” organisms, constantly striving to make sense of what is going on around us. In fact, fascinating research from social psychology tells us that we may have a heightened motivation to make sense of the world, especially if we experience situations that seem to threaten our sense of order and meaning (Heintzelman & King, 2014).
The familiar search for meaning and order also characterizes the field of abnormal behaviour. Almost by definition, abnormal behaviour defies the regularity and predictability we desire. It is this departure from the norm that makes the study of abnormal behaviour so intriguing. In an attempt to make sense of these phenomena, behavioural scientists construct hypotheses and then test them. Hypotheses are nothing more than educated guesses about the world, often informed by previous research. You may believe that watching violent television programs will cause children to be more aggressive. You may think that bulimia is influenced by media depictions of supposedly ideal female body types. You may suspect that someone abused as a child is likely to abuse his or her significant other or child. These concerns are all testable hypotheses.
Once a scientist decides what to study, the next step is to put it in words that are unambiguous and in a form that is testable. Consider a study of how self-esteem (how you feel about yourself) affects depression. Ulrich Orth from the University of California–Davis and his colleagues from around the world gathered information from more than 4000 people over several years (Orth et al., 2009). They knew from previous research that at least over a short period, having feelings of low self-esteem seems to put people at risk for later depression. The researchers posed the following hypothesis: Prior low self-esteem will be a predictor of later depression across all age groups of participants. The way the hypothesis is stated suggests the researchers already know the answer to their question. They won’t know what they will find until the study is completed, but phrasing the hypothesis in this way makes it testable. If, for example, people with high self-esteem are at equal risk for later depression, then other influences must be studied. This concept of testability (the ability to confirm or refute the hypothesis) is important for science because it allows us to say that in this case, either (1) low self-esteem signals later depression, so maybe we can use this information for prevention efforts, or (2) there is no relationship between self-esteem and depression, so let’s look for other early signs that might predict who will become depressed. The researchers did find a strong relationship between self-esteem and later depression for people in all age groups, which may prove useful for detecting people at risk for this debilitating disorder.
When they develop a hypothesis, researchers also specify the dependent and independent variables. A dependent variable is what is expected to change or be influenced by the study. Psychologists studying abnormal behaviour typically measure an aspect of the disorder, such as overt behaviours, thoughts, and feelings, or biological symptoms. In the study by Orth and colleagues (2009), the main dependent variable (level of depression) was measured using the person’s responses on a questionnaire about his or her depression (Center for Epidemiologic Studies Depression Scale). Independent variables are those factors thought to affect the dependent variables. The independent variable in the study was measured using responses on a questionnaire on self-esteem (the Rosenberg Self-Esteem Scale). In other words, self-esteem was thought to influence later levels of depression. When possible, the independent variable is manipulated by the researcher, to provide a better test of its influence on the dependent variable. In the case of the Orth and colleagues’ study, the independent variable was not manipulated but simply observed.
Internal and External Validity
The researchers in the study on self-esteem and depression used responses on the questionnaires collected from two very large studies conducted in the United States and Germany. Suppose they found that, unknown to them, most people who agree to participate in these types of studies have higher self-esteem than people who do not participate. This would have affected the data in a way that would limit what they could conclude about self-esteem and depression and would change the meaning of their results. This situation, which relates to internal validity, is called a confound (or confounding variable), defined as any factor occurring in a study that makes the results uninterpretable because a variable (in this instance, the type of population being studied) other than the independent variable (having high or low self-esteem) may also affect the dependent variable (depression).
Scientists use many strategies to ensure internal validity in their studies, three of which we discuss here: control groups, randomization, and analogue models. In a control group, people are similar to the experimental group in every way except that members of the experimental group are exposed to the independent variable and those in the control group are not. Because researchers can’t prevent people from being exposed to many things around them that could affect the outcomes of the study, they try to compare people who receive the treatment with people who go through similar experiences except for the treatment (control group). Control groups help rule out alternative explanations for results, thereby strengthening internal validity.
Randomization is the process of assigning people to different research groups in such a way that each person has an equal chance of being placed in any group. Researchers can, for example, randomly place people in groups but still end up with more of certain people (e.g., people with more severe depression) in one group than another. Placing people in groups by flipping a coin or using a random number table helps improve internal validity by eliminating any systematic bias in assignment. You will see later that people sometimes put themselves in groups, and this self-selection can affect study results. Perhaps a researcher treating people with depression offers them the choice of being either in the treatment group, which requires coming into the clinic twice a week for two months, or in a wait-list control group, which means waiting until some later time to be treated. The most severely depressed individuals may not be motivated to come to frequent treatment sessions and so will choose the wait-list group. If members of the treated group are less depressed after several months, it could be because of the treatment or because group members were less depressed to begin with. Groups assembled randomly avoid these problems.
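The coin-flip idea can be sketched in a few lines of Python. This is an illustrative sketch only; the function name, group labels, and participant IDs are invented for the example and are not from the text.

```python
import random

def randomize(participants, groups=("treatment", "wait-list")):
    """Assign participants to groups so each person has an equal
    chance of landing in any group, removing self-selection bias."""
    pool = list(participants)
    random.shuffle(pool)                      # the "coin flip"
    assignment = {g: [] for g in groups}
    for i, person in enumerate(pool):
        # deal the shuffled pool out alternately, like cards
        assignment[groups[i % len(groups)]].append(person)
    return assignment

groups = randomize([f"P{n}" for n in range(20)])
print(len(groups["treatment"]), len(groups["wait-list"]))  # 10 10
```

Because nobody can opt into the wait-list here, any later difference between groups is less likely to reflect who chose which condition.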
Analogue models create in the controlled conditions of the laboratory aspects that are comparable (analogous) to the phenomenon under study. Bulimia researchers could ask volunteers to binge eat in the laboratory, questioning them before they ate, while they were eating, and after they finished to learn whether eating in this way made them feel more or less anxious, guilty, and so on. Such “artificial” studies help improve internal validity.
In a research study, internal and external validity often seem to be in opposition. On the one hand, we want to be able to control as many things as possible to conclude that the independent variable (the aspect of the study we manipulated) was responsible for the changes in the dependent variables (the aspects of the study we expected to change). On the other hand, we want the results to apply to people other than the participants of the study and in other settings; this is generalizability, the extent to which results apply to everyone with a particular disorder. If we control all aspects of a study so that only the independent variable changes, the result may not be relevant to the real world. For example, if you reduce the influence of gender issues by studying only males, and if you reduce age variables by selecting only people from 25 to 30 years of age, and finally, if you limit your study to those with university degrees so that education level isn’t an issue—then what you study (in this case, 25- to 30-year-old male university graduates) may not be relevant to many other populations. Internal and external validity are in this way often inversely related. Researchers constantly try to balance these two concerns and, as you will see later in this chapter, the best solution for achieving both internal and external validity is to conduct several different studies on the same research question.
Statistical versus Clinical Significance
The introduction of statistics is part of psychology’s evolution from a prescientific to a scientific discipline. Statisticians gather, analyze, and interpret data from research. As an example, consider a study evaluating whether a drug (naltrexone)—when added to a psychological intervention—helps those with alcohol addiction stay sober longer (Anton et al., 2006). The study found that people receiving the combination of medication and psychotherapy stayed abstinent for an average of 77 days, while those receiving a placebo stayed abstinent for an average of 75 days. This difference was statistically significant. But is it an important difference? The difficulty is in the distinction between statistical significance (a mathematical calculation about the difference between groups) and clinical significance (whether or not the difference was meaningful for those affected) (Thirthalli & Rajkumar, 2009).
Closer examination of the results leads to concern about the size of the effect. Because this research studied a large group of people dependent on alcohol (1383 volunteers), even this small difference (75 versus 77 days) was statistically significant. Few of us, however, would say staying sober for two extra days was worth taking medication and participating in extensive therapy—in other words, the difference may not be clinically significant.
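Why a two-day difference can be statistically significant in a sample of 1383 can be illustrated with a hand-rolled two-sample z statistic. Only the 77-day and 75-day means come from the study as described above; the standard deviation (15 days) and the per-group sizes below are assumptions invented for the illustration.

```python
import math

def two_sample_z(mean1, mean2, sd, n_per_group):
    """z statistic for the difference between two equal-sized groups
    that are assumed to share a common standard deviation."""
    standard_error = sd * math.sqrt(2 / n_per_group)
    return (mean1 - mean2) / standard_error

# The same 2-day difference in mean days abstinent, two sample sizes:
z_large = two_sample_z(77, 75, sd=15, n_per_group=690)  # ~half of 1383 per group
z_small = two_sample_z(77, 75, sd=15, n_per_group=20)
print(z_large > 1.96, z_small > 1.96)  # True False
```

With hundreds of people per group, the standard error shrinks enough that a tiny difference clears the conventional 1.96 cutoff; with 20 per group, the identical difference does not. Statistical significance thus tracks sample size as much as the effect itself.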
Fortunately, concern for the clinical significance of results has led researchers to develop statistical methods that address not just that groups are different but also how large these differences are, or effect size. Calculating the actual statistical measures involves fairly sophisticated procedures that take into account how much each treated and untreated person in a research study improves or worsens. Some researchers have used more subjective ways of determining whether truly important change has resulted from treatment. The late behavioural scientist Montrose Wolf (1978) advocated the assessment of what he called social validity. This technique involves obtaining input from the person being treated, as well as from significant others, about the importance of the changes that have occurred. In the example here, we might ask the participants and family members if they thought the treatment led to truly important improvements in alcohol abstinence. If the effect of the treatment is large enough to impress those who are directly involved, the treatment effect is clinically significant. Statistical techniques of measuring effect size and assessing subjective judgments of change will let us better evaluate the results of our treatments.
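A widely used effect-size measure is Cohen's d: the difference between group means divided by their pooled standard deviation. The abstinence scores below are invented toy data (constructed so the means are 77 and 75 days, matching the example above); the calculation itself is the standard formula.

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference between two groups (Cohen's d)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def sample_var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    n1, n2 = len(group1), len(group2)
    pooled_sd = math.sqrt(((n1 - 1) * sample_var(group1) +
                           (n2 - 1) * sample_var(group2)) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

treated = [47, 67, 77, 87, 107]   # toy data, mean 77 days abstinent
control = [45, 65, 75, 85, 105]   # toy data, mean 75 days abstinent
print(round(cohens_d(treated, control), 2))  # 0.09
```

By the common rule of thumb (0.2 small, 0.5 medium, 0.8 large), a d of 0.09 is negligible, even when the group difference is statistically significant.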
The Average Client
Too often we look at results from studies and make generalizations about the group, ignoring individual differences. Kiesler (1966) labelled the tendency to see all participants as one homogeneous group the patient uniformity myth. Comparing groups according to their mean scores (“Group A improved by 50 percent over Group B”) hides important differences in individual reactions to our interventions.
The patient uniformity myth leads researchers to make inaccurate generalizations about disorders and their treatments. To continue with our previous example, what if the researchers studying the treatment of alcoholism concluded that the treatment was a good approach? And suppose we found that, although some participants improved with treatment, others worsened. Such differences would be averaged out in the analysis of the group as a whole, but for the person whose drinking increased with the treatment, it would make little difference that, on average, people improved. Because people differ in such ways as age, cognitive abilities, gender, and history of treatment, a simple group comparison may be misleading. Practitioners who deal with all types of disorders understand the heterogeneity of their clients and therefore do not know whether treatments that are statistically significant will be effective for a given individual. In our discussions of various disorders, we return to this issue.
Studying Individual Cases
One method is to use the case study method, investigating intensively one or more individuals who display the behavioural and physical patterns of interest.
One way to describe the case study method is by noting what it is not. It does not use the scientific method. Few efforts are made to ensure internal validity and, typically, many confounding variables are present that can interfere with conclusions. Instead, the case study method relies on a clinician’s observations of differences among one person or one group with a disorder, people with other disorders, and people with no psychological disorders. The clinician usually collects as much information as possible to obtain a detailed description of the person. Interviewing the person under study yields a great deal of information on personal and family background, education, health, and work history, as well as the person’s opinions about the nature and causes of the problems being studied.
Case studies are important in the history of psychology. Sigmund Freud developed psychoanalytic theory and the methods of psychoanalysis on the basis of his observations of dozens of cases. Freud and Josef Breuer’s description of Anna O. (see Chapter 1) led to development of the clinical technique known as free association. Sexuality researchers Virginia Johnson and William Masters based their work on many case studies and helped shed light on numerous myths regarding sexual behaviour (Masters & Johnson, 1966). Joseph Wolpe, author of the landmark book Psychotherapy by Reciprocal Inhibition (1958), based his work with systematic desensitization on more than 200 cases. As our knowledge of psychological disorders has grown, psychological researchers’ reliance on the case study method has gradually decreased.
One difficulty with depending heavily on individual cases is that sometimes coincidences occur that are irrelevant to the condition under study. Unfortunately, coincidences in people’s lives often lead to mistaken conclusions about what causes certain conditions and what treatment appears to be effective. Because a case study does not have the controls of an experimental study, the results may be unique to a particular person without the researcher realizing it or may derive from a special combination of factors that are not obvious. Complicating our efforts to understand abnormal behaviour is the portrayal of sensational cases in the media. For example, on April 16, 2007, a shooter on the campus of Virginia Tech University took the lives of 32 faculty members and students. Immediately after this horrific mass killing there was speculation about the shooter, including early bullying, descriptions of him being a “loner,” and depictions of notes he wrote against “rich kids,” “deceitful charlatans,” and “debauchery” (Kellner, 2008). Attempts have been made to discover childhood experiences that could possibly explain this later behaviour. We must be careful, however, about concluding anything from such sensational portrayals, since many people are bullied as children, for example, but do not go on to kill dozens of innocent people.
As another illustration of both the limits and potential of the case study method, Canadian researchers Earls and Lalumière (2002) described the case of a man who showed a preference for sex with a horse over sex with humans (or any other species for that matter). The man in question had been convicted of animal cruelty, had received a diagnosis of antisocial personality disorder, and scored below average on a measure of IQ (80). The authors noted that the finding of low IQ was consistent with previous research and discussions linking low intelligence to acts of bestiality and to zoophilia (a sexual preference for animals). Later, the authors were contacted by a man who suggested that some high-functioning men also have a strong preference for animals, using himself as an example: “You published one case study and I am another one. Who determines which one is typical?” Earls and Lalumière (2009) later published a case report on this occupationally successful man: He had a long-standing sexual interest in horses (one that preceded his actual contact with horses), was a published medical doctor, and was married with children; he eventually left his wife to live on a farm alone with two horses, which he called his “mare-wives.”
Researchers in cognitive psychology point out that the public and researchers themselves are often, unfortunately, more highly influenced by dramatic accounts than by scientific evidence (Nisbett & Ross, 1980). Remembering our tendency to ignore this fact, we highlight research findings in this book. To advance our understanding of the nature, causes, and treatment of abnormal behaviour, we must guard against premature and inaccurate conclusions.
Research by Correlation
One of the fundamental questions posed by scientists is whether two variables are related to each other. A statistical relationship between two variables is called a correlation. For example, is schizophrenia related to the size of ventricles (spaces) in the brain? Are people with depression more likely to have negative attributions (negative explanations for their own and others’ behaviour)? Is the frequency of hallucinations higher among older people? The answers depend on determining how one variable (e.g., number of hallucinations) is related to another (e.g., age). Unlike experimental designs, which involve manipulating or changing conditions, correlational designs are used to study phenomena just as they occur. The result of a correlational study—whether variables occur together—is important to the ongoing search for knowledge about abnormal behaviour.
One of the clichés of science is that correlation does not imply causation. In other words, two things occurring together does not necessarily mean that one caused the other. For example, the occurrence of marital problems in families is correlated with behaviour problems in children (e.g., Yoo & Huang, 2012). If you conduct a correlational study in this area, you will find that in families with marital problems you tend to see children with behaviour problems; in families with fewer marital problems, you are likely to find children with fewer behaviour problems. The most obvious conclusion is that having marital problems will cause children to misbehave. If only it were as simple as that! The nature of the relationship between marital discord and childhood behaviour problems can be explained in a number of ways. It may be that problems in a marriage cause disruptive behaviour in the children. Some evidence suggests, however, the opposite may be true as well: The disruptive behaviour of children may cause marital problems (Rutter & Giller, 1984). In addition, evidence suggests genetic influences may play a role in conduct disorders and in marital discord (D’Onofrio et al., 2006; Lynch et al., 2006), so parents who are genetically more inclined to argue pass on those genes to children who then have an increased tendency to misbehave.
This example points out the challenges in interpreting the results of a correlational study. We know that variable A (marital problems) is correlated with variable B (child behaviour problems). We do not know from these studies whether A causes B (marital problems cause child problems), whether B causes A (child problems cause marital problems), or whether some third variable, C, causes both (genes influence both marital and child problems).
The association between marital discord and child problems represents a positive correlation. This means that higher scores in one variable (a great deal of marital distress) are associated with higher scores in the other variable (more child disruptive behaviour). At the same time, lower scores in one variable (less marital distress) are associated with lower scores in the other (less disruptive behaviour). When there is a negative correlation, the relationship between the two variables is reversed. That is, higher scores in one variable are associated with lower scores in the other, and vice versa. The correlation coefficient can vary from –1.0 (a perfect negative correlation) to 0.0 (no correlation) to +1.0 (a perfect positive correlation). See Figure 4.1 for an illustration of positive and negative correlations.
Marital problems in families and behaviour problems in children have a relatively strong positive correlation represented by a number around +0.50. Schizophrenia and height are not related, so the correlation is likely close to 0.00. We used an example of a negative correlation in Chapter 2, when we discussed social supports and illness. The more social supports that are present, the less likely it is that a person will become ill. The negative relationship between social supports and illness could be represented by a number such as –0.40.
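The correlation coefficient described above can be computed directly. Below is a minimal sketch of the Pearson coefficient in Python; the two small data sets are invented to mimic a positive relationship (distress and behaviour problems) and a negative one (supports and illness), and do not come from any study cited here.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, ranging from -1.0 to +1.0."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: marital distress vs. child behaviour problems (positive),
# then social supports vs. illness episodes (negative).
print(round(pearson_r([1, 2, 4, 5, 7], [2, 3, 3, 6, 7]), 2))   # 0.93
print(round(pearson_r([1, 2, 4, 5, 7], [8, 7, 5, 5, 2]), 2))   # -0.98
```

Note that the function reports only the strength and direction of the association; as the marital-discord example shows, it says nothing about which variable, if either, causes the other.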
Epidemiological Research
Scientists often think of themselves as detectives, searching for the truth by studying clues. One type of correlational research that is very much like the efforts of detectives is called epidemiology, the study of the incidence, distribution, and consequences of a particular problem or set of problems in a population. Epidemiologists expect that by tracking a disorder among many people, they will find important clues to why the disorder exists. One strategy is to determine the incidence of a disorder—the estimated number of new cases during a specific period. For example, as we see in Chapter 12, the incidence of new cases of cocaine use has been decreasing over the past decade among most age groups in Canada. A related strategy involves determining prevalence, the number of people with a disorder at any one time. For example, the prevalence of alcohol dependence among Canadian adults is about 3 percent (Statistics Canada, 2002a). Epidemiologists study the incidence and prevalence of disorders among different groups of people. For instance, data from epidemiological research conducted by Statistics Canada indicate that the prevalence of alcohol dependence among women is substantially lower than among men (Statistics Canada, 2002a).
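The incidence/prevalence distinction boils down to two different ratios: new cases over a period versus all current cases at one point in time. A sketch (the counts are invented round numbers chosen to echo the ~3 percent figure above, not the actual Statistics Canada data):

```python
def incidence(new_cases, population):
    """Proportion of the population that are NEW cases during a period."""
    return new_cases / population

def prevalence(current_cases, population):
    """Proportion of the population with the disorder at one point in time."""
    return current_cases / population

print(prevalence(3, 100))    # 0.03: 3 current cases per 100 adults
print(incidence(5, 1000))    # 0.005: 5 new cases this period per 1000 people
```

A chronic, long-lasting disorder can have a high prevalence even when its incidence is low, which is why epidemiologists track both.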
Although the primary goal of epidemiology is to determine the extent of medical problems, it is also useful in the study of psychological disorders. In the early 20th century, many people displayed symptoms of a strange mental disorder. Its symptoms were similar to those of organic psychosis, which is often caused by mind-altering drugs or great quantities of alcohol. Many patients appeared catatonic (immobile for long periods) or exhibited symptoms similar to those of paranoid schizophrenia. Victims were likely to be poor, which led to speculation about class inferiority. Using the methods of epidemiological research, however, researcher Joseph Goldberger found correlations between the disorder and diet, and he identified the cause of the disorder as a deficiency of the B vitamin niacin among people with poor diets. The symptoms were successfully eliminated by niacin therapy and improved diets. A long-term, widespread benefit of Goldberger’s findings was the introduction of vitamin-enriched bread in the 1940s (Colp, 2009).
Researchers have used epidemiological techniques to study the effects of stress on psychological disorders. For example, researchers have examined the psychological effects of the September 11, 2001, terrorist attacks on the U.S. World Trade Center and the American Pentagon. Following those events, Blanchard et al. (2004) examined rates of two anxiety disorders— acute stress disorder and post-traumatic stress disorder—in three samples of university students: those attending the University of Albany in New York state, those attending North Dakota State University in North Dakota, and those attending Augusta State University in Georgia. They found significantly greater rates of both acute stress disorder (28 percent versus 10 percent versus 19 percent, respectively) and post-traumatic stress disorder (11 percent versus 3 percent versus 7 percent, respectively) in the New York students.
A similar study conducted in Saskatchewan by Gordon Asmundson and his colleagues showed rates of disorder comparable to those obtained by Blanchard and colleagues (2004) in the students from North Dakota and Georgia. More specifically, about 4 percent of the Canadian sample met the criteria for full or partial post-traumatic stress disorder following the events of September 11, 2001 (Asmundson et al., 2004).
Taken together, these findings suggest a relationship between geographical proximity and impact of the trauma, with those living closer to the site of the terrorist attacks showing the greatest levels of distress. The studies by Blanchard et al. (2004) and Asmundson et al. (2004) are correlational studies because the investigators did not manipulate the independent variable. Like other types of correlational research, epidemiological research can’t tell us conclusively what causes a particular phenomenon. Knowledge about the prevalence and course of psychological disorders is extremely valuable to our understanding, however, because it points researchers in the right direction.
Research by Experiment
An experiment involves the manipulation of an independent variable and the observation of its effects. We manipulate the independent variable to answer the question of causality. If we observe a correlation between social supports and psychological disorders, we can’t conclude which of these factors influenced the other. We can, however, change the extent of social supports and see whether it triggers an accompanying change in the prevalence of psychological disorders—in other words, do an experiment.
What will this experiment tell us about the relationship between these two variables? If we increase the number of social supports and find no change in the frequency of psychological disorders, it may mean that the lack of such supports does not cause psychological problems. If, however, we find that psychological disorders diminish with increased social support, we can be more confident that lack of support does contribute to disorders. However, because we are never 100 percent confident that our experiments are internally valid—that no other explanations are possible—we are cautious about interpreting our results. In the following section, we describe different ways researchers conduct experiments and consider how each one brings us closer to understanding abnormal behaviour.
Experimental Designs
With correlational designs, researchers observe people to see how different variables are associated. In experimental designs, researchers are more active. They actually change an independent variable to see how the behaviour of the people is affected. Suppose researchers design an intervention to help reduce insomnia in older adults, who are particularly affected by the condition (Ancoli-Israel & Ayalon, 2009). They treat a number of individuals and follow them for 10 years to learn whether their sleep patterns improve. The treatment is the independent variable; that is, it would not have occurred naturally. They then assess the treated group to learn whether their behaviour changed as a function of what the researchers did. Introducing or withdrawing a variable in a way that would not have occurred naturally is called manipulating a variable.
Unfortunately, a decade later the researchers find that the older adults treated for sleep problems still, as a group, sleep less than eight hours per night. Is the treatment a failure? Maybe not. The question that can’t be answered in this study is what would have happened to group members if they hadn’t been treated. Perhaps their sleep patterns would have been worse. Fortunately, researchers have devised ingenious methods to help sort out these challenging questions.
A special type of experimental design is used more and more frequently in the treatment of psychological disorders and is referred to as a clinical trial (Durand & Wang, 2011; Pocock, 2013). A clinical trial is an experiment used to determine the effectiveness and safety of a treatment. The term clinical trial implies a level of formality with regard to how it is conducted. As a result, a clinical trial is not a design by itself but rather a method of evaluation that follows a number of generally accepted rules. For example, these rules cover how you should select the research participants, how many individuals should be included in the study, how they should be assigned to groups, and how the data should be analyzed—and this represents only a partial list. Also, treatments are usually applied using formal protocols to ensure that everyone is treated the same.

The terms used to describe these experiments can be confusing. “Clinical trials” is the overarching term used to describe the general category of studies that follow the standards described previously. Within the “clinical trial” category are “randomized clinical trials,” which are experiments that employ randomization of participants into each group. Another subset of clinical trials is “controlled clinical trials,” which are used to describe experiments that rely on control conditions to be used for comparison purposes. Finally, the preferred method of conducting a clinical trial, which uses both randomization and one or more control conditions, is referred to as a “randomized controlled trial.” We next describe the nature of control groups and randomization, and discuss their importance in treatment outcome research.
Control Groups
One answer to the what-if dilemma is to use a control group—people who are similar to the experimental group in every way except they are not exposed to the independent variable. In the previous study looking at sleep in older adults, suppose another group who didn’t receive treatment was selected. Further suppose that the researchers also follow this group of people, assess them 10 years later, and look at their sleep patterns over this period. They probably observe that, without intervention, people tend to sleep fewer hours as they get older (Cho et al., 2008). Members of the control group, then, might sleep less than people in the treated group, who might themselves sleep somewhat less than they did 10 years earlier. Using a control group allows the researchers to see that their treatment did help the treated participants keep their sleep time from decreasing further.
Ideally, a control group is nearly identical to the treatment group in such factors as age, gender, socioeconomic background, and the problems being reported. Furthermore, a researcher would give the same assessments before and after the independent variable manipulation (e.g., a treatment) to people in both groups. Any later differences between the groups could therefore be attributed only to what was changed.
People in a treatment group often expect to get better. When behaviour changes as a result of a person’s expectation of change rather than as a result of any manipulation by an experimenter, the phenomenon is known as a placebo effect (from the Latin word placebo, which means “I shall please”). Conversely, people in the control group may be disappointed that they are not receiving treatment (analogously, we could label this a frustro effect, from the Latin word meaning “to disappoint”). Depending on the type of disorder they experience (e.g., depression), disappointment may make them worse. This phenomenon would also make the treatment group look better by comparison.
One way researchers address the expectation concern is through placebo control groups. The placebo is given to members of the control group to make them believe they are getting treatment. A placebo control in a medication study can be carried out with relative ease because people in the untreated group receive something that looks like the medication administered to the treatment group (e.g., a sugar pill). In psychological treatments, however, it is not always easy to devise something that people believe may help them but does not include the component the researcher believes is effective. Clients in these types of control groups are often given part of the actual therapy—for example, the same homework as the treated group—but not the portions the researchers believe are responsible for improvements.
Note that you can look at the placebo effect as one portion of any treatment. If someone you provide with a treatment improves, you may attribute the improvement to a combination of your treatment and the client’s expectation of improving. Therapists want their clients to expect improvement; this helps strengthen the treatment. However, when researchers conduct an experiment to determine what portion of a particular treatment is responsible for the observed changes, the placebo effect is a confound that can dilute the validity of the research. Thus, researchers use a placebo control group to help distinguish the results of positive expectations from the results of the active treatment ingredients.
The double-blind control is a variant of the placebo control group procedure. As the name suggests, not only are the participants in the study “blind,” or unaware of what group they are in or what treatment they are given (single blind), but so are the researchers or therapists providing treatment (double blind). This type of control eliminates the possibility that an investigator might bias the outcome. For example, a researcher comparing two treatments who expected one to be more effective than the other might try harder if the preferred treatment wasn’t working as well as expected. On the other hand, if the treatment that wasn’t expected to work seemed to be failing, the researcher might not push as hard to see it succeed. This reaction might not be deliberate, but it does happen. This phenomenon is referred to as an allegiance effect (Dragioti et al., 2015). If, however, both the participants and the researchers or therapists are blind, there is less chance that bias will affect the results.
A double-blind placebo control does not work perfectly in all cases. If medication is part of the treatment, participants and researchers may be able to tell whether or not they have received it by the presence or absence of physical reactions (side effects). Even with purely psychological interventions, participants often know whether or not they are receiving a powerful treatment, and they may alter their expectations for improvement accordingly.
As an alternative to using no-treatment control groups to help evaluate results, some researchers compare different treatments. In this design, the researcher gives different treatments to two or more comparable groups of people with a particular disorder and can then assess how or whether each treatment helped the people who received it. This is called comparative treatment research. In the sleep study we discussed, two groups of older adults could be selected, with one group given medication for insomnia, the other given a cognitive-behavioural intervention, and the results compared.
Process and Outcome of Treatment
The process and outcome of treatment are two important issues to be considered when different approaches are studied. Process research focuses on the mechanisms responsible for behaviour change, or “why does it work?” In an old joke, someone goes to a physician for a new miracle cure for the common cold. The physician prescribes the new drug and tells the patient the cold will be gone in seven to ten days. As most of us know, colds typically improve in seven to ten days without treatment. The new drug probably does nothing to further the improvement of the patient’s cold. The process aspect of testing medical interventions involves evaluating biological mechanisms responsible for change. Does the medication cause lower serotonin levels, for example, and does this account for the changes we observe? Similarly, in looking at psychological interventions, we determine what is “causing” the observed changes. This is important for several reasons. First, if we understand what the “active ingredients” of our treatment are, we can often eliminate aspects that are not important, thereby saving clients’ time and money. For example, one study of insomnia found that adding a relaxation training component to a treatment package provided no additional benefit—allowing clinicians to reduce the amount of training and focus on only those aspects that really improve sleep (e.g., cognitive-behavioural therapy) (Harvey et al., 2002). In addition, knowing what is important about our interventions can help us create more powerful, newer versions that may be more effective.
Outcome research focuses on the positive and negative effects (results) of the treatment. In other words, does it work? Remember, treatment process involves finding out why or how your treatment works.
Single-Case Experimental Designs
B. F. Skinner’s innovations in scientific methodology were among his most important contributions to psychopathology. Skinner formalized the concept of single-case experimental designs. This method involves the systematic study of individuals under a variety of experimental conditions. Skinner thought it was much better to know a lot about the behaviour of one individual than to make only a few observations of a large group for the sake of presenting the “average” response. Psychopathology is concerned with the suffering of specific people, and this methodology has greatly helped us understand the factors involved in individual psychopathology (Barlow et al., 2009; Kazdin, 2011). Many applications throughout this book reflect Skinnerian methods.
Single-case experimental designs differ from case studies in their use of various strategies to improve internal validity, thereby reducing the number of confounding variables. As you will see, these strategies have strengths and weaknesses in comparison with traditional group designs. Although we use examples from treatment research to illustrate the single-case experimental designs, they, like other research strategies, can help explain why people engage in abnormal behaviour, as well as how to treat them.
Repeated Measurements
One of the more important strategies used in single-case experimental design is repeated measurement, in which a behaviour is measured several times instead of only once before you change the independent variable and once afterward.
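As a toy illustration of why repeated measurement helps (the nightly sleep values below are invented, not data from any study mentioned in the text), we can compare whole phases of observations rather than a single before-and-after pair:

```python
# Hypothetical nightly sleep durations (hours) for one participant.
baseline_phase = [5.1, 5.3, 4.9, 5.0, 5.2]   # measured repeatedly before treatment
treatment_phase = [6.0, 6.4, 6.1, 6.3, 6.2]  # measured repeatedly during treatment

def mean(values):
    return sum(values) / len(values)

# A stable shift between phase averages is more convincing than one
# pre/post pair, because the repeated measures show the behaviour's
# typical level and its night-to-night variability in each phase.
phase_change = mean(treatment_phase) - mean(baseline_phase)
```

With only a single measurement per phase, an unusually good or bad night could masquerade as a treatment effect; repeated measures make such flukes visible.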
Withdrawal Design
In a withdrawal design, a researcher tries to determine whether the independent variable is responsible for changes in behaviour by removing the treatment for a time and observing whether the behaviour returns to its earlier level.