Applied Research Methods Flashcards

1
Q

What is research?

A
  • Investigation undertaken to gain knowledge and understanding
  • A detailed study of a subject, especially in order to discover new information or reach a new understanding
  • Research methods developed in academia are applied in many real-world domains
    ○ Eg commercial marketing
    ○ Government/social organisations
2
Q

What is the applied research process?

A

Gap in knowledge -> question -> design + data -> need -> context -> insight -> needs met

3
Q

What are the types of applied research objectives?

A
  • Exploratory
    ○ Aims to discover the nature of a topic that is not clearly understood or defined
    ○ Qualitative or mixed methods
  • Descriptive
    ○ Aims to describe or define the topic
    ○ Quantitative or mixed methods
    ○ Analyses like correlations and between-groups and within-groups/repeated-measures comparisons
  • Explanatory/causal
    ○ Aims to explain why or how things work the way they do
    ○ Quantitative methods
    ○ Experimental designs like A/B testing
4
Q

What are some important applied research design considerations?

A
  • Population/s of interest
    ○ How specific is it? Do you need to look at different groups within the sample?
  • Factor/s of interest
    ○ What are you measuring?
  • Practical considerations
    ○ Budget
    ○ Timeframe
5
Q

What are some mixed methods designs?

A
  • Qualitative insights used to inform the design of a quantitative phase
  • Quantitative insights raise questions that are best understood through qualitative examination
  • Qualitative insights used to design a quantitative evaluation, then quantitative findings are explored with qualitative methods
6
Q

How is applied research reporting different from regular research?

A
  • Slide decks vs report format
  • Storytelling approach to communicate findings
  • Design for your audience - what is most important to them?
  • Keep it short - put detailed results in an appendix
  • Include an overview/executive summary at the start - helps orient people to what they’re about to hear/read
  • More visuals and less text - show, don’t tell
  • Insights are the golden egg
7
Q

What are the 8 principles covering how to conduct research, how to treat others, and how to behave in a professional manner?

A

-Honesty
-Rigour
-Transparency
-Fairness
-Respect
-Recognition
-Accountability
-Promotion

8
Q

How to design research for inclusion?

A

-Develop cultural competence
-Design for accessibility
-Consider potential biases
-Consider impact of cultural norms
-Involve specific participant groups in end-to-end process
-Build neurodiversity into methodology

9
Q

How to ensure psychological safety in applied research?

A

-Wellbeing of participants is always more important than the research
-Follow trauma-informed practice principles
-May be at risk of vicarious trauma from unexpected or expected disclosures

10
Q

What is a survey?

A
  • The most popular data collection tool
  • Commonly used to assess opinions, attitudes, and preferences, and to enable self-report of behaviours and intentions
  • Different to psychological assessment tools, which objectively measure constructs such as personality traits and knowledge, or assess psychopathology or mood
  • Questions are most often closed-ended, producing quantitative data, but can be open-ended, producing qualitative data
11
Q

What is the Net Promoter Score (NPS)?

A

  • The most used user feedback score
  • A bit like the Myers-Briggs Type Indicator: very popular in industry etc, but very love-or-hate
  • Often has a comment section after the scale to get users to discuss why they responded the way they did - this is where the ‘gold’ comes from
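To make the scoring concrete - not from the notes, but the standard NPS convention - respondents rate 0-10; 9-10 count as promoters, 0-6 as detractors, and NPS = % promoters minus % detractors. A minimal Python sketch with made-up responses:

```python
# Minimal NPS calculation sketch (illustrative data, not from the notes).
# Convention: 9-10 = promoter, 0-6 = detractor, 7-8 = passive.
responses = [10, 9, 9, 8, 7, 6, 3, 10, 5, 9]  # hypothetical 0-10 ratings

promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)
nps = 100 * (promoters - detractors) / len(responses)
print(f"NPS = {nps:.0f}")  # ranges from -100 to +100
```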

12
Q

Why is Net Promoter Score (NPS) divisive?

A
  • Negatives
    ○ Many professional researchers think the psychometric qualities don’t stand up; low replicability
    ○ People don’t use them consistently (in terms of labels etc)
    ○ We are bombarded with so many survey feedback requests that you have to have had a really intense experience to want to respond
    ○ Intention-behaviour gap
    ○ They are often used for services that people just don’t recommend
  • Positives
    ○ Asking people for one scale response is easy to do, then you ask for their comments on why they responded that way
13
Q

What are some customer experience (CX) measures other than NPS?

A

-Customer Satisfaction Score (CSAT): gets you to rate how satisfied you are; has face validity
-Customer Effort Score (CES): face validity, bipolar scale
-Star rating

14
Q

What are the limitations of surveys?

A

  • Surveys are prone to biases
    ○ Social desirability
      § Affects accuracy
    ○ Intention-behaviour gap
      § The size of the gap may depend on factors such as
        □ Whether the intention is based on personal attitudes or on social pressure to act (social norms) - the former has a smaller gap than the latter
        □ How much effort the behaviour requires - a verbal recommendation requires relatively little effort compared to changing to a keto diet or buying a house
      § What to do?
        □ Minimise the use of behavioural-intention questions and make sure your client knows their limitations
        □ Consider using big data on actual consumer behaviour as well, or instead
    ○ Acquiescence/agreement
      § Tendency to just agree with things
    ○ Question order
      § Priming
      § Primacy
      § Recency
        □ We have a tendency to be influenced by how recently we heard the information
    ○ Recall bias

15
Q

How to design a good survey?

A
  • Design is for optimising, not satisficing - Krosnick
  • Optimising
    ○ Interpreting and responding to a survey question using a careful and considered process
  • Satisficing
    ○ Taking shortcuts when responding to a survey question
16
Q

What are some optimising strategies when designing a survey?

A
  • Reduce the task difficulty
    ○ Make questions easy to understand
    ○ Keep the survey short
      § No more than 30 mins (~30 responses, but you’ll need a pre-test)
      § For a mobile-first survey, 7 mins max
    ○ Minimise distractions
  • Increase respondent motivation
    ○ Use incentives and gratitude
    ○ Ask respondents to commit to providing their best responses
    ○ Emphasise the importance of the survey and their responses
17
Q

What is the conventional wisdom for survey question order?

A
  • Start with questions on the topic described to respondents
    ○ Easy questions early
    ○ Group by topic
    ○ General to specific
    ○ Sensitive topics at the end
    ○ Use filters to avoid asking unnecessary questions
18
Q

How can you guide your participants through a survey?

A
  • Include introductory text that clearly informs respondents what the survey is about, why you’re asking them to do it, how their answers will be used, and how long it should take
  • At the start of each topic section, consider opening with a sentence saying what the questions are about, for example
    ○ For demographic questions: ‘First, we’d like to find out a bit about you’
    ○ For a block of rating questions: ‘In the following section, you will be shown pictures of different foods and asked your opinions of these foods.’
  • At the end, make sure you thank them
  • Consider including a progress bar that shows how far along in the survey they are
19
Q

What is the conventional wisdom on survey question wording?

A
  • Use simple, familiar language
  • Use simple syntax
  • Be specific and concrete (as opposed to general and abstract)
  • Make response options exhaustive and mutually exclusive
  • Common mistakes
    ○ Using ambiguous words
    ○ Leading or loaded questions
    ○ Double-barrelled questions
    ○ Double-negative wording
    ○ Emotionally charged words
20
Q

What are rating questions in surveys?

A
  • Most used question format
  • Obtains a judgement on an object (described by the question wording) along a dimension (provided by a response scale)
  • Choosing the number of response options is a trade-off between having enough options to differentiate between respondents as much as (validly) possible and maintaining high reliability in responses (which comes with fewer options)
  • According to Krosnick, the ideal number is 5 points for unipolar scales (not at all satisfactory - very satisfactory) and 7 points for bipolar scales (extremely dissatisfied - extremely satisfied), but consider 5 points if you have mobile-first data collection
  • Ideally, label all points on your response scales with words; do not use (only) numbers
21
Q

Describe multiple choice questions in surveys

A
  • Enable respondents to indicate one or more responses from a list, eg preferences, behaviours etc
  • Allow you to apply a pre-existing structure to your data, eg groupings or other categories like demographics (except age)
22
Q

Describe ranking questions in surveys

A
  • Enable comparisons between multiple things at once
  • Useful when wanting to measure comparative or choice-relative value
  • May be more reliable than rating questions, particularly for items at the ends of the ranking scale
23
Q

Describe open-ended questions in surveys

A
  • Enable you to ask exploratory questions and gather qualitative data
  • Often good to add an Other (please specify) option to your multiple-choice questions
  • But
    ○ They can increase task difficulty
    ○ They are more time-consuming to analyse
24
Q

How to minimise bias in survey responses

A
  • Social desirability
    ○ Remind people of anonymity
    ○ Use the wording to make the less socially desirable response ok
  • Acquiescence/agreement
    ○ Avoid communicating the intent of the research
    ○ Keep it short
    ○ Vary response scales
    ○ Add attention checks
  • Order effects
    ○ Randomise question order and/or response order (where appropriate) - see the sketch below
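A minimal sketch of that last point - shuffling question order per respondent so order effects wash out across the sample (the question list is hypothetical):

```python
import random

# Hypothetical question bank; shuffle a copy per respondent so that
# order effects (priming, primacy, recency) average out across the sample.
questions = ["Q1: satisfaction", "Q2: ease of use", "Q3: value for money"]

def questions_for_respondent(seed: int) -> list[str]:
    order = questions.copy()
    random.Random(seed).shuffle(order)  # per-respondent reproducible order
    return order

print(questions_for_respondent(1))
print(questions_for_respondent(2))
```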
25
Q

What is benchmarking/baselining in surveys?

A
  • Sometimes you will need to create a standard to measure your results against, and this can influence your choice of research design and the questions you use
26
Q

Describe the survey testing process

A
  • Pilot testing
    ○ At least 5-10 people from your population of interest. You can add a question at the end asking for any feedback on the survey. Look at the data to check for completeness or any unusual patterns
  • For larger surveys, developers sometimes use cognitive interviewing for testing. This involves people doing the survey while an interviewer prompts them to ‘think aloud’ and asks questions to explore their comprehension, information retrieval, judgement, and response
  • Factors to consider
    ○ Comprehension - respondents understand the question wording and any instructions
    ○ Logic and flow - questions follow a logical order; nothing seems out of place or creates biases for what follows
    ○ Acceptability - none of the questions could be considered offensive or inappropriate
    ○ Length - most respondents finish without losing interest
    ○ Technical quality - the survey operates well on any device
27
Q

How to identify your population and sampling approach for surveys?

A
  • Census designs - in which the whole population participates - are uncommon
  • The remainder of studies use a sample of the population of interest
  • Your approach to sampling, as part of your research design, is informed by the objectives of your research as well as practical constraints (eg money and time)
  • The two broad categories of sampling are random (aka probability) and non-random (aka non-probability)
  • How you sample your participants affects the generalisability of your findings (external validity) to your population of interest
  • The size of your sample affects your statistical power - a consideration if you want to do hypothesis testing (correlations, between-groups differences, pre- and post-differences)
28
Q

Describe random sampling

A
  • A random sample is a sub-group of your target population that has been selected randomly, such that each person in the population has an equal chance of being selected for the sample (see the sketch below)
  • This process reduces the risk of bias that comes from selection methods that systematically increase or decrease the occurrence of particular characteristics in the sample
  • However, as research participation is voluntary, most samples are not truly random because participants opt in/self-select
  • Random samples are best practice for evaluative and experimental research, where null hypothesis statistical testing is used for making inferences about relationships between variables in the populations of interest (so systematic bias needs to be avoided)
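A minimal sketch of simple random selection using Python’s standard library, assuming you had a full sampling frame (the names are hypothetical):

```python
import random

# Hypothetical sampling frame: every member of the population of interest.
population = [f"person_{i}" for i in range(1000)]

# Simple random sample of 50: each person has an equal chance of selection.
random.seed(42)  # for reproducibility
sample = random.sample(population, k=50)
print(len(sample), sample[:5])
```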
29
Q

What are non-random samples?

A
  • Members of the population of interest do not have an equal chance of being selected for the sample, so there is a higher risk of bias in the data they produce
  • Types of non-random samples
    ○ Convenience
      § People who are readily available in a non-random way (eg online panels, Researcher Experience Program)
    ○ Quota
      § Often used with convenience samples; the researcher selects sub-sets of the sample based on characteristics (usually demographics) to increase the representativeness of the sample
    ○ Purposive
      § Selecting people because they have characteristics of interest for the research
    ○ Snowballing
      § Finding participants who then refer other potential participants to you
  • To help control the risk of bias created by non-random sampling, researchers use quota sampling and weighting of data to make the sample findings better represent the population
30
Q

Describe online panels

A
  • Online panels are now the most common source of participants for applied research
  • Panel providers have a large database of people who sign up to do research regularly for a small payment
  • Different types
    ○ General population
      § Can use random and non-random recruitment from within the panel
    ○ Specialist
      § Industry or population sectors
    ○ Proprietary
      § Research organisations collect their own data, create products, and sell them
  • Panels that specialise in supplying to research agencies are much better than those that supply directly to consumers
  • For general-population studies, look for large size (eg an Australian panel of 1 million is good) with a large range of attributes
  • Data quality issues
    ○ Some panels have major issues with bots/server farms producing significant proportions of junk data
31
Q

What are the types of data collection?

A
  • Online
    ○ Most common; cheap and fast
  • Telephone
    ○ CATI - computer-assisted telephone interviewing
    ○ Survey is a structured interview with instructions
    ○ Best with multiple-choice questions; bad for open-ended
  • Face to face
    ○ Most expensive
    ○ Sometimes referred to as CAPI - computer-assisted personal interviewing
    ○ Useful for in-field/contextual data collection
32
Q

Describe the process of translation in surveys

A
  • The goal of survey translation should be to achieve a functionally equivalent version in the target language
  • Usually in survey research this means following an ask-the-same-question approach, where the questions are translated so the same concept is measured in the same way across languages
  • Ideally, use at least two translators with a background in survey research to separately draft a full translation of the questions. These drafts are then reviewed and integrated with another person
33
Q

What are the different types of validity?

A
  • Construct validity
    ○ Are you measuring what you say you’re measuring?
  • Internal validity
    ○ Are your causal claims valid?
  • External validity
    ○ Are your claims about generalisability valid?
      § Population validity
      § Ecological validity
  • Statistical validity
    ○ Are your statistical conclusions valid?
34
Q

What are the different aims of research?

A

-Testing for differences (between existing groups or manipulated groups): internal validity is very important
-Generalising to a population: external validity is very important

35
Q

When are manipulated groups not better than existing groups in research?

A

○ Some things can’t be manipulated
○ Some things can only be manipulated by modifying groups so much that we can no longer generalise to the situations we want to generalise to, or by making other compromises
○ Sometimes we choose an outcome first (eg depression) and want to know its causes. Experimentation can’t help with the search for candidate causes
○ Experiments are good for getting at ‘descriptive causation’ but not very good for getting at ‘explanatory causation’
○ Waiting on an experiment can mean delaying evidence-based solutions
○ Experiments favour the testing of light-touch interventions rather than structural changes (because structural changes are hard to manipulate)
○ People willing to be randomised may be unrepresentative
○ Knowing you’re part of an experiment may alter the results

36
Q

What are some possible threats to the research aim of testing for differences (in terms of threatening internal validity)

A

○ Differences between existing groups
§ Cause-and-effect relationship (‘X causes Y’) is very difficult to establish
§ Possible threats
□ Y causes X - reverse causality
□ Z causes X and Y - third variable
○ Differences between manipulated groups
§ Cause-and-effect relationship can be established
§ Possible threats - smaller but still exist
□ Manipulation of X can also affect other variables - confound
□ Participants drop out in non-random ways - selective attrition

37
Q

In terms of a research aim seeking to test for differences, what are the key types of external validity?

A

○ Population validity
§ Getting a representative sample is often quite difficult
§ Convenience samples are very common and often accepted
§ Representative samples are always better, all else equal
□ Convenience samples always come with a risk that results were due to an unrepresentative sample
○ Ecological validity
§ Your study should not be too contrived, or too far from the real-world context that you want to draw conclusions about
* Essentially there is a trade-off between internal and external validity: good internal validity requires a highly controlled and manipulated experiment, whereas external validity benefits from observational designs

38
Q

What is a Type I error?

A

False positive

39
Q

What is a Type II error?

A

False negative

40
Q

What is a p value?

A
  • Null hypothesis significance testing (NHST)
  • Starts by assuming the null hypothesis (no difference) is true (we don’t really believe this, but we pretend we do)
  • Asks: how unlikely is the difference I observed if I assume the null hypothesis is true?
  • Smaller p-value = more extreme result. If p < .05, we conclude that the null hypothesis is false. But that’s not sound logic!
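For illustration, a sketch of NHST on two simulated groups using scipy - the p-value is how unlikely a difference at least this extreme would be if the null were true (all data here are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated outcomes for two groups; group B's true mean is slightly higher.
group_a = rng.normal(loc=5.0, scale=1.0, size=100)
group_b = rng.normal(loc=5.4, scale=1.0, size=100)

# Independent-samples t-test under the null hypothesis of equal means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# If p < .05 the result is conventionally called 'significant' -
# but per the card's caution, that is not proof the null is false.
```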
41
Q

What are the statistical conclusions when testing for differences?

A
  • There is a difference between groups (null hypothesis is false)
    ○ Chance of error: when the null hypothesis is true, 5% chance
      § For this to be the case, we need to play by the ‘rules’ (see next card), and we often don’t
    ○ How big is the effect? (Effect size and 95% confidence interval)
  • We didn’t detect a difference between groups (null hypothesis couldn’t be rejected)
    ○ Chance of error: depends
42
Q

What are the rules of Null Hypothesis Significance Testing to keep the p-value honest?

A
  • A p-value is valid if:
    ○ You do only one significance test (or correct for multiple tests)
    ○ You choose your test ahead of time - you can’t change your analysis after you see the data (eg exclude outliers, transform variables)
    ○ You don’t peek at your data - you must collect all data, then analyse once
  • Solutions
    ○ Pre-register your data collection and analysis plan
      § Be specific - what is your key hypothesis test?
      § Report any unexpected or unplanned result as exploratory (a new hypothesis generated); confirm with a replication if possible
      § Report all studies you ran and all statistical tests you conducted. Pay attention to the entire set of results, not just the ‘best’ ones
43
Q

When would it be appropriate for a research aim to be generalising to a population, and what are the most important factors?

A
  • When you want to know frequencies, proportions, levels (eg how many people commute to work by bus?)
  • Most important factors:
    ○ Valid measures (construct validity)
    ○ Representative sample (external validity)
    ○ Large sample (statistical validity)
    ○ A small but representative sample is better than a large but unrepresentative sample
44
Q

Describe the difference between noise and bias

A
  • Noise = random error, imprecision
    ○ Can reduce noise by collecting more data - aggregation cancels out random error
  • Bias = systematic error, not random
    ○ Can reduce bias by collecting a representative sample
  • Large samples reduce noise but not bias
    ○ A large, unrepresentative sample gives you a precise estimate, but it may be inaccurate
  • Representative samples reduce bias
    ○ A small, representative sample gives you an imprecise estimate, but likely in the right ballpark
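A toy simulation of those last two points - the numbers are assumed, just to show precise-but-wrong vs noisy-but-right:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy population: true mean outcome is 50.
population = rng.normal(loc=50, scale=10, size=100_000)

# Small but representative sample: imprecise, but unbiased.
small_rep = rng.choice(population, size=30, replace=False)

# Large but biased sample: drawn only from the top half of the distribution.
top_half = population[population > np.median(population)]
large_biased = rng.choice(top_half, size=5_000, replace=False)

print(f"True mean:            {population.mean():.1f}")
print(f"Small representative: {small_rep.mean():.1f}")    # noisy, right ballpark
print(f"Large biased:         {large_biased.mean():.1f}")  # precise, wrong
```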
45
Q

What are some important considerations for systematic reviews?

A
  • Good idea to mix quantitative and qualitative methods, to allow your own judgement to be inserted
  • If you leave things too open, biases can come in and the conclusions/interpretations of the data can be misguided
    ○ Note that these two considerations are quite contradictory - one stresses the importance of your own judgement, the other stresses predetermining things to make the review as objective as possible. This is a tricky weigh-up
46
Q

What step in the systematic review stage is most subject to biases?

A
  • Data extraction and appraisal is the step where bias most commonly comes into play
  • The most important thing is to be transparent about what you did, so the reader can assess whether it is clouded by bias
47
Q

What to look out for when evaluating research quality?

A
  • Something to look out for: the closer a p-value is to .05 (eg .03), the less strong the effect and the more likely it is to be influenced by inadvertent p-hacking
    ○ If the authors preregistered their plan - eg recruitment methods, data collection methods etc - that would reduce the reasonable scepticism
  • Many literature reviews do little or no quality evaluation
    ○ Even academic papers (meta-analyses, systematic reviews, research syntheses) often do not evaluate the quality or validity of individual studies
48
Q

What are the three components to quality research?

A

-Transparency
-Strong methods (valid)
-Calibration

49
Q

What is transparency and why is it critical to quality research?

A

What is transparency?
* Procedures, materials, and data are reported in full
○ It would be easy to replicate the study
○ It would be easy to reanalyse the data/reproduce the results
* All relevant studies and all relevant results are reported
* Unplanned analyses and results are clearly marked
○ Pre-registration lets readers see what was planned and what wasn’t
○ Registered reports provide protection against p-hacking and HARKing
* Conflicts of interest are declared
○ Common in medicine but not as much in psychology
○ It’s a bit of a grey area what counts as a conflict of interest within psychology
* Author contributions are reported
* Peer review history is public
* Transparency is necessary for evaluating quality
○ Without transparency you can’t assess quality, but transparency alone doesn’t make a paper credible
Why transparency?
* Transparency is necessary for credible science
* ‘Nullius in verba’ - take no one’s word for it
* HARKing = Hypothesising After the Results are Known
○ Changing your hypothesis and pretending it was planned
* P-hacking = analysing your data many different ways and only reporting the significant results
○ Changing your analysis and pretending it was planned
* File-drawering = not publishing unsatisfactory findings
* Publication bias = journals not publishing findings because they are not significant, or not the findings they want, etc

50
Q

What is pre-registration?

A
1. Decide your design (sample size, conditions, measures), analysis plan, and key hypothesis tests
2. Write it down and time-stamp it
3. Collect and analyse your data
4. Share your plan when you share your paper
51
Q

What are registered reports?

A

§ Another way to block these biases from coming in; stronger than pre-registration
§ Authors write their intro, lit review, method etc without having done the study yet, then submit it to a journal for peer review
§ Authors commit to taking on the edits, and the editors commit to publishing the paper if the edits are taken on
§ After the data have been collected and analysed and the paper is written, it gets reviewed again before publishing
□ This makes HARKing, p-hacking, file-drawering, and publication bias very unlikely

52
Q

What is calibration with regards to research quality?

A
  • If the study can speak to the research question, but the conclusion drawn is much more extreme than the evidence supports - that is an issue of calibration
53
Q

What are the red flags to look for when evaluating evidence quality?

A

○ Research isn’t reported transparently
§ You can’t tell what they did, wouldn’t be able to repeat it
§ The data aren’t available (and there’s no good reason given)
§ You can’t tell what was planned and what wasn’t
○ The methods are weak or not well-suited to the research question
§ Bad measures or manipulations (or mismatched to the aims/interpretations)
§ Bad sample (or mismatched to the aims/interpretations)
§ Causal inference
§ Etc
○ Authors make grandiose or exaggerated claims

54
Q

What are some qualitative approaches in research theory, method, and analysis?

A
  • Ethnographic
    ○ Observations of people in situations, to capture external influences (socio-cultural, environmental)
  • Phenomenological
    ○ Exploring what emerges - why people act and react the way they do (beyond unreliable self-reports), ie interpretative phenomenological analysis
    ○ Extracting meanings from observation - not trying to explain, but looking for a picture of what’s going on. Similar to ethnographic, but in ethnographic research you situate yourself in someone else’s context and take in what role their environmental factors could play
  • Grounded theory
    ○ Builds on existing theory, question, or data, which gives structure to data collection and analysis
  • Case study
    ○ Deep exploration of one thing - eg a person, group, organisation
  • Narrative
    ○ Done over time, with the aim of developing a comprehensive story of the phenomenon of interest
    ○ Looking for a story, not to explain, but to build a picture
  • Historical
    ○ Use of past events/instances as models to explore current situations
    ○ Using events of the past to help explain what’s going on now
  • A lot of these methods are not necessarily used very often - many researchers use thematic analysis instead
55
Q

What are some popular qualitative methods?

A
  • Interviews/in-depth interviews (IDIs)
    ○ By far the most popular
  • Focus groups
  • Content analysis (text, visual)
    ○ Can sometimes be used interchangeably with thematic analysis
  • Online/insight communities
  • Observation/ethnography
  • Open-ended survey questions
56
Q

When to use interviews

A
  • Use when
    ○ You’re interested in individual perspectives and experiences
    ○ The topic is sensitive - so confidentiality is important, and disclosure will require some degree of trust and rapport
    ○ There are concerns about fear of reprisal
    ○ You want to avoid group effects
  • Budget is a consideration - interviews are resource-intensive/expensive
  • Sampling for interviews
    ○ Sample sizes are smaller in qual than quant because we are looking for theoretical saturation (not hearing any new ideas) rather than statistical power
    ○ Key informants - sometimes you need people with really specific knowledge on the matter
      § All part of the sample - not considered separately
57
Q

What are some interviewing tips?

A

○ Rapport is critical because people will only talk candidly if they
§ Feel comfortable
§ Feel secure about confidentiality
§ Trust they will be understood
§ Trust they won’t be judged
○ Beware of
§ Influencing by leading questions that convey your own view (implicitly or explicitly) or by giving examples
§ Moving too quickly from one topic to another
§ Moving off topic
§ Interrupting the person or speaking over them to ask a question
○ The majority of interview types in qual are semi-structured

58
Q

What is an appropriate interview guide for how to begin and commence the interview process?

A

○ Good to actually start with warm-up questions like ‘How has your day been?’, then
○ Begin by asking factual, socio-demographic questions - but if there are a lot, or some are sensitive (eg income, age, and sexual orientation), it might be better to ask them at the end
○ Your first question/s about the research topic should ask for relatively neutral, descriptive information (eg ‘Tell me about your career so far’)
○ Aim to use open-ended questions, eg
§ ‘Tell me about a time when…’

59
Q

What are Patterson’s six interview question categories?

A

○ Experience and behaviour
§ Elicits behaviours, actions, and activities
§ Eg ‘Tell me about a typical day; how do you start the morning?’
○ Sensory
§ Similar to ‘experience and behaviour’ questions, but try to elicit more specific detail about what was seen, heard, touched, and so forth
§ Eg ‘Could you explain what … looked like at that moment?’ ‘What could you hear?’
○ Opinion and values
§ Taps into interviewees’ beliefs, attitudes, or opinions
§ Eg ‘What is your opinion/view of…’
○ Feelings
§ Taps the affective dimension
§ Eg ‘How do you feel about…; what was coming up for you when…’
○ Knowledge
§ Elicits the interviewee’s actual factual knowledge about a topic
§ Eg ‘What do you know about…’
○ Background/demographic
* Other question types
○ Hypothetical
§ Ask what the interviewee might do, or what it might be like, in a particular situation
§ These usually begin with ‘What if…’
○ Devil’s advocate
§ Challenge the interviewee to consider an opposing view of or explanation for a situation
○ Ideal position
§ Ask the respondent to describe an ideal situation
§ Eg ‘In an ideal world…’
○ Interpretive
§ Make a tentative explanation or interpretation of what the interviewee has been saying and ask for feedback
§ Eg ‘How does that sound to you?’
* Prompting
○ ‘Can you help me understand a little more’
○ ‘Tell me more’

60
Q

What are focus groups and what do you need to keep in mind when conducting them?

A
  • The traditional focus group
    ○ Gatherings of 5-10 people (ideal is 6-8) specifically recruited to discuss an issue
    ○ Typically run for 60-90 mins
    ○ Led by a facilitator/moderator
    ○ Often observed by clients (behind mirrored glass or via camera)
  • Things to keep in mind about focus groups
    ○ They can’t provide a reliable/robust read on how a group of the population thinks or behaves
    ○ They are not a generalisable method for descriptive or explanatory questions (such as ‘which ad is the most effective’, ‘what’s the best way to get people to do X’)
    ○ They are subject to psychological biases
      § Group effects: social desirability, hierarchical relationships, peer relationships, peer pressure, group-think, group polarisation, dominance of the loudest
      § Individual effects: confirmation bias, anchoring/adjustment, overconfidence
    ○ Your unit of measure is the group, not the individuals in it - so four groups of eight people is not a sample of n=32 individuals, it is 4 groups of 8
61
Q

What is the sampling process for focus groups?

A

○ How you recruit participants depends on the research
○ For ‘general population’ samples, you can use specialist recruitment agencies to source participants
○ The quality of participants depends on how they’re recruited - use a recruitment screening questionnaire, but don’t reveal the purpose of the research the way some recruitment ads do
○ Generalisability of participants is not the same as in quantitative research - however, focus group members are often sampled to represent characteristics of interest to the research. In this case, you may need to recruit to fulfil the ratios of these characteristics

62
Q

What are online insights/communities?

A

○ Often used in designing new products/branding
○ Size and duration depend on research purpose, eg from 20 to 10,000 people, and from a few days to years. Also one-off (pop-up) activities with an established group
○ Their iterative data collection is a big selling point
○ Used for purposes such as
§ Exploring consumer attitudes, sentiments, and interests
§ Understanding consumer context, behaviour, product usage
○ Community activity is managed by moderators who deliver activities and interact with participants
○ Types of research activities include
§ Discussion boards
§ Surveys
§ Co-creation exercises (eg shown an image and asked ‘What do you like about that? What is noticeable about that?’)
§ Video and audio diaries
§ Competitions (gamification)

63
Q

What are observational studies?

A
  • You are most interested in behaviour - eg dialogue and other social interactions, or using products in a natural context
  • You are researching a new area and need more input to develop your research question/s - exploratory purpose
  • You don’t trust the accuracy of self-reports, eg sensitive topics, proneness to bias, unawareness
  • Can combine with interviews in a methodology called contextual inquiry, eg ‘shop-alongs’, where the researcher observes participants while they shop and then asks them about their choices
64
Q

What are the ethical considerations for observational studies?

A

○ What are the requirements for informed consent for everyone who is observed? Can depend on the setting
○ Does the need for consent change if you want to record observations?
§ In public you cannot record - you don’t have consent
§ In a contained environment you need to get informed consent - but how much does that then change the way people behave?
○ Confidentiality is crucial
○ What if you accidentally find out about something unlawful or otherwise harmful?

65
Q

How to conduct observational research?

A

○ Your approach to observation can be more or less structured eg grounded theory vs phenomenology
○ Taking detailed field notes is critical to good observational research
○ Developing an observational template is useful in structured or semi-structured approaches
○ Audio or video recording may be an option but can be intrusive
○ Only collect data relevant to your research questions

66
Q

How to analyse qualitative data

A

○ Qual analysis involves identifying or interpreting meaning from content
○ Often referred to as thematic analysis or content analysis
○ Components of analysis are:
§ Codes
□ A word or short phrase that represents the meaning or attribute
§ Themes
□ A higher-level concept that codes are grouped under
§ Code framework (or structure)
□ Codes represented in a hierarchy with overarching themes
○ Code structures can be created using pre-existing themes (aka deductive, eg from your research questions or literature review), or the themes can emerge out of the data (inductive)
○ Either way, good content analysis is an iterative process - you go back and forth between analysing your data, creating new codes, and revising your code structure

67
Q

What is the challenge of qualitative analysis?

A

○ Interpretive meaning is
§ Subjective
§ Context-dependent
□ It can differ depending on
® Why it is being done (purpose)
® Where it is being done (socio-cultural, geographic)
® When it is being done (historical)

68
Q

When is co-creation/co-design used?

A
  • For community services, eg health, mental health, housing & other social services
  • With community groups that have been marginalised by traditional approaches
  • When community buy-in is essential
    ○ Secondary reason - the people you design for are not just the ‘end users’, but also the people who work in the service
69
Q

What makes co-creation/co-design research different from other types?

A
  • Takes the design process out of ‘the office’ and into the community
  • The power balance is shifted from the ‘expert’ researcher/designer and shared with the ‘expert’ users and other stakeholders, eg service staff
    ○ Democratisation
  • The practitioner facilitates the community’s participation in the research design process
  • The community provides its lived-experience expertise and participates in decisions about design and implementation
    ○ What’s different about this from other research methods is that the community is involved, not just consulted
  • Often mixed methods, with an emphasis on group processes, eg focus groups, online communities, workshops
  • May also incorporate design methods such as system mapping, personas, user journeys, and prototyping
70
Q

Explain primary vs secondary data

A
  • Secondary
    ○ Already exists
    ○ Not collected for the purpose/project you’re using it for
  • Primary
    ○ Collected for your purpose
71
Q

What forms does secondary data come in?

A

○ Raw
§ Will mostly be de-identified
○ Aggregated
§ Has been summarised to a certain level
○ Described
§ You get frequencies - essentially summaries of the data by subgroups
○ Analysed
§ Using someone else’s research

72
Q

What are some sources of secondary data?

A
  • Government
    ○ Australian Bureau of Statistics (ABS)
      § Collects census and other population data
    ○ Other government agencies, eg Australian Institute of Health and Welfare, Australian Institute of Family Studies
    ○ State/territory and local governments also collect and release data; see the Victorian Government portal Data Vic
    ○ Can collect population data, or data on the government services they provide
      § Collecting information about the people they deliver services to
  • Researchers
    ○ Open access sources & longitudinal studies
  • Organisations
    ○ Commercial organisations may give access to their data, eg user/member/employee records (aggregated or de-identified)
    ○ Not-for-profit organisations undertake research for advocacy, eg Mission Australia’s annual Youth Survey
    ○ Intergovernmental organisations, eg OECD, United Nations
    ○ For-profit organisations, eg Gallup World Poll
73
Q

How to use secondary data

A
  • Design and analysis
    ○ Usual psychology methods as well as epidemiological approaches
      § Epidemiology has a different frame for research than psychological research - particularly around causal statements
        □ Epidemiologists look at longitudinal data and produce statistics based on risks and odds of what will happen, given what has happened in the past - they will make causal statements, but based on a different statistical foundation
  • Common research questions
    ○ Within groups
    ○ Between groups
  • Australian Criminal Intelligence Commission
    ○ If looking into drug use, you can look at wastewater reports from the ACIC
    ○ Needs to be pro-rated to the population (raw drug amounts in a certain city don’t tell you about per-person consumption)
73
Q

What are some social media research methods?

A
  • Example: querying six billion tweets to explore gendered hate speech
    ○ 8 years’ worth of Twitter data
      § 1% sample stream
    ○ Made a visualisation
      § A network used to analyse the social media data
      § Based on keywords like #alphamale etc
      § Every point on the network represents a particular hashtag, and the clusters represent hashtags that were often used together
    ○ Results showed that posts with #sexyselfie come from people/contexts where people are concerned about their social standing
  • Social media research can also be useful for communities who feel ‘overstudied’ by surveys, so this can make more connection
    ○ Eg people with ASD
  • Also can be helpful for addressing/exploring misinformation
74
Q

What are some advantages of using social media for research?

A

○ Data is not collected in a lab - it is observational
§ People are not being influenced by the fact that they know they are being watched
○ Can collect large amounts of data

75
Q

What are some disadvantages of using social media for research?

A

○ Sampling is a problem
§ You don’t know the quality of the sample or the biases
§ Can be a black box
§ Eg Twitter skews young, male, and educated - so may not be fully generalisable
○ Identifying location and gender
§ Can be an issue for identification
§ Can also be very binary
§ People can also create new identities for themselves - so may not be accurate
○ How do you locate people?
§ Location data is generally about where the person was when they posted, rather than where they live
□ Location can be gained from contextual cues from friends and text content
○ Ambiguities around ethics
§ You definitely need ethics approval for data gained across the internet

76
Q

Describe the ethics for using social media in research

A
  • Lines between public and private can get blurred
  • Something to consider is informed consent
    ○ While people posting things on social media are inadvertently agreeing to the public witnessing or using them, on an individual level it is often thought of as ‘no one would see/care about this, so I’ll post it’
      § Ways around this
        □ Not naming private accounts
        □ Only referring to public figure/account names
        □ Protect people by using a language generation model to produce similar text, so people can’t find the original post
    ○ Terms of service
      § Researchers often can’t conduct research if it breaches the site’s terms of service - some studies will allow this to be overruled
      § Might be more of a legal issue
  • Theoretical/philosophical concerns
  • Practical concerns
77
Q

What is web scraping?

A
  • Allows you to get data you can see on the page in your browser
    ○ Can’t get geo-located data this way
    ○ But can get whatever your browser loads
  • Can be done using the HTML tags to pull specific data from webpages
  • Can also be done using an Application Programming Interface (API) - see the sketch below
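A sketch of both routes, assuming a hypothetical URL and the commonly used requests/BeautifulSoup libraries (the notes don’t prescribe specific tools):

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical page - substitute a real URL you are permitted to scrape.
url = "https://example.com/articles"

# Route 1: pull specific data out of the page's HTML tags.
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")
headlines = [h.get_text(strip=True) for h in soup.find_all("h2")]
print(headlines)

# Route 2 (see next card): ask the site's API for structured data instead.
# data = requests.get("https://example.com/api/articles", timeout=10).json()
```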
78
Q

What is Application Programming Interface (API) with regards to web scraping?

A

○ A way for machines to communicate with other machines
○ Provides a window into someone else’s data
○ Scraping raw HTML is often problematic, so APIs typically impose limits on how much you can pull and where you have to stop
○ You can request user data (eg put a profile name into the endpoint) or keyword data (eg put in a hashtag and it will identify all uses of that hashtag)
○ A window that a company allows other companies to have into its data
○ Can be problematic
○ For social media, the data is their user platform
○ There are often incentives to make APIs
§ Companies can make them to attract more people to their site, to build apps that work on top of the site
○ Not only for social media sites though - can also be for libraries, such as Trove
○ Some uses are unauthorised though - eg the Optus leak
○ Can access them by writing code, or via GUIs (graphical user interfaces)
○ Not just about getting data - can also be used to annotate data
§ Eg Perspective API
□ For moderation; can also be used in research
□ You might send data off to the Perspective API, and its models will tag your data for ‘attributes’, eg ‘severe toxicity’, ‘sexually explicit’ etc
® Helps to identify and remove/hide unwanted or inappropriate content
® The issue is that sometimes they can’t differentiate between toxic comments about a group of people and someone identifying themselves as part of that community (eg someone announcing that they are a proud black queer woman would be flagged, because of the general hateful comments usually targeted towards people within those communities)

79
Q

What are the practicalities of web scraping?

A

§ You often need a key (a long password) that is unique to you (providers want to track how much of their data you are pulling)
§ To get the key you typically need to prove you have an app (eg authenticate with your accounts)
§ There are lots of different tools that will help you access API data, eg Postman, Chrome developer tools, Google Sheets
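An illustrative authenticated API request - the endpoint, parameters, and key here are hypothetical; real APIs document their own:

```python
import requests

API_KEY = "your-unique-key-here"  # issued after you register/authenticate
# Hypothetical endpoint returning posts for a hashtag.
url = "https://api.example.com/v1/posts"

resp = requests.get(
    url,
    params={"hashtag": "alphamale", "limit": 100},
    headers={"Authorization": f"Bearer {API_KEY}"},  # key lets them track your usage
    timeout=10,
)
resp.raise_for_status()  # rate limits often surface as HTTP 429 errors
posts = resp.json()
print(len(posts))
```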

80
Q

What are some research issues to consider when web scraping?

A

§ Peripheral users can be under-represented in APIs
§ You may not know the quality of the sample
§ Hard to remove ‘spambots’, fake accounts
§ Informed consent and ethical concerns

81
Q

What is social network analysis?

A
  • Focuses on relationships between (not within) people
  • Looks at connections/ties (eg location, language etc) between people/whatever entity you are interested in
    ○ Can use it to look at who is the most popular in their social network
      § Someone who is popular will have high in-degree centrality (in a diagram, they will have a lot of lines directed at them)
    ○ Could also have high betweenness centrality (which means they are an important node connecting parts of the network)
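A toy sketch of both centrality measures using the networkx library (the network itself is made up):

```python
import networkx as nx

# Toy directed network: an edge A -> B means 'A follows/mentions B'.
g = nx.DiGraph()
g.add_edges_from([
    ("ana", "bo"), ("cal", "bo"), ("dee", "bo"),   # bo receives many ties
    ("bo", "eve"), ("eve", "cal"), ("dee", "ana"),
])

# In-degree centrality: popularity (lots of lines directed at you).
print(nx.in_degree_centrality(g))

# Betweenness centrality: how often a node sits on paths between others.
print(nx.betweenness_centrality(g))
```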
82
Q

What is sentiment analysis?

A
  • Computer-aided detection and analysis of emotion in texts
  • Also known as ‘opinion mining’
  • Can be conducted via machine learning (machines drawing on past examples of human emotion labels and extrapolating) or via a lexicon (pre-specifying emotion words of interest and identifying where they are used) - see the sketch below
  • Applications
    ○ Monitoring customer satisfaction
    ○ Recommendation of services
    ○ Political forecasting
    ○ Sociological research: how do people use emotional language online?
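A sketch of the lexicon route using NLTK’s VADER analyser - one common tool, not prescribed by the notes:

```python
# Lexicon-based sentiment sketch using NLTK's VADER
# (run nltk.download("vader_lexicon") once beforehand).
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
for text in ["I love this service!", "Worst support experience ever."]:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos + compound in [-1, 1]
    print(text, "->", scores["compound"])
```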
83
Q

What is topic modelling in social media research?

A
  • Unsupervised machine learning
    ○ The model automatically recognises patterns in the data (doesn’t need prior human-labelled examples)
  • A machine learning technique to discover abstract topics within a collection of documents
  • Algorithms like Latent Dirichlet Allocation (LDA) are most commonly used - see the sketch below
  • Can identify trending topics in real time
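A minimal LDA sketch with scikit-learn on toy documents (real applications use far larger corpora):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "bus train commute traffic city",
    "election vote policy government",
    "train station commute delay",
    "government budget policy tax",
]

# Bag-of-words counts, then fit a 2-topic LDA model (unsupervised).
vectorizer = CountVectorizer().fit(docs)
X = vectorizer.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Show the top words per discovered topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i}: {top}")
```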
84
Q

Describe how AI (ChatGPT) could be used in research

A
  • Works by predicting the next token
    ○ A token is any unit of language analysis
    ○ Trained on incredibly large training data, with a large amount of context
  • Used via an API - see the sketch below
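A sketch of calling a model over an API - the endpoint shape follows OpenAI’s chat completions API at the time of writing, but treat the details (model name, fields) as assumptions and check the current docs:

```python
import requests

API_KEY = "your-key-here"  # hypothetical placeholder
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",  # assumed model name; check what is current
        "messages": [
            {"role": "user",
             "content": "Suggest candidate codes for this interview excerpt: ..."},
        ],
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```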
85
Q

What do we mean by ‘experiment’?

A
  • A method used when we want to test hypotheses about causal relationships between factors
  • Involves manipulating one factor (IV) and measuring how its different levels or conditions affect another factor (DV)
  • An essential ingredient of experimental methods is using strategies to control the effect of other factors (extraneous or confounding variables) on your DV
  • Potential confounds include characteristics of the participants, the environment, the research protocols, and unconscious expectations
86
Q

What are some control strategies in experimentation?

A
  • Random allocation of participants to experimental groups
  • Use of a control group that does not receive an experimental manipulation. In some treatment studies, a ‘waitlist control’ group is used
  • Blinding or concealing which groups participants belong to from everyone (including researchers - not possible in all experiments)
  • Strict protocols to ensure all participants have the same experience other than the experimental manipulation
87
Q

What is an example of a field experiment in practice?

A

The baby simulators
* Schools give teenagers robot babies to de-incentivise the risky behaviours leading to teen pregnancy - showing the annoyance of caring for a baby is meant to make it much more real for the teenagers
* However, it wasn’t robustly assessed
* When its success was trialled, it was found that:
○ Schools that were given the robot baby had higher rates of teen pregnancies and school dropouts

88
Q

Explain how the Behavioural Insights Unit improved the forms used by mental health services clinics under the Mental Health Act

A

○ There are many forms used by clinicians or given to patients that are required for various treatments under the Mental Health Act
○ They are inherently complex forms; patients need to understand their rights/options, and clinicians need to follow the process accurately
○ These forms needed updating as a new version of the act came into effect this month
○ What did the BIU do?
§ Worked with the Dept. of Health to revise the forms to make them more user friendly for clinicians and patients - increasing understanding and reducing errors:
□ Conducted a ‘de-sludging’ workshop with DH staff and health service representatives to brainstorm changes
□ Revised the forms and tested these with legal staff, data managers, people with lived experience, and clinicians/the Office of the Chief Psychiatrist to gather their feedback
§ Sludging = the opposite of nudging - making things more difficult
§ Changed the layout and format of the forms to make them easier to read and engage with

89
Q

Explain how the Behavioural Insights Unit helped Reduce pedestrian traffic related injuries through in-situ comms

A

○ Leading causes of pedestrian traffic/road-related injuries come from inattention or misjudging risk, eg stepping out of a tram without looking, risking running across a level crossing, not looking both ways when crossing behind a bus
○ The Dept of Transport and Planning (DTP) were working on a poster campaign to get people to pay more attention in risky settings, but wanted to test and evaluate their options
○ What did the Behavioural Insights Unit do?
§ Provided initial comments to DTP on their campaign and advised on evaluation strategy
§ Advised on how to test the campaign in-situ, as well as how to get feedback on the design

90
Q

Explain how the Behavioural Insights Unit supported positive parenting behaviours through a text-based intervention

A

○ $5 billion commitment over 10 years to deliver three-year-old kindergarten programs across Victoria
○ The Dept of Education (DE) wanted to design a program to support parents with their child’s education (home factors are very important in a child’s learning outcomes)
○ What did the BIU do?
§ Worked with DE to design a text-based program to support parental engagement in their child’s learning:
□ The program design was informed by Ready4K, an 8-month SMS program run in the United States aimed at supporting parents of 4-year-olds
□ Ready4K successfully applied behavioural insights principles to increase the frequency with which parents performed positive parent-child literacy-related interactions (eg reciting nursery rhymes, looking at pictures in a book) at home

91
Q

How to select your type of experiment

A
  • Pilots
    ○ Iterative testing to improve a product
    ○ More often used in the private sector to develop prototypes
    ○ Can involve qual and quant data collection (eg focus groups and opinions of different designs)
  • Trials or experiments
    ○ Testing different versions of a final product before full rollout/scaling
    ○ Eg lab or field experiments and academic studies with a control group
    ○ Mostly involves quant data collection, using new or existing data
    ○ (‘Experiment’ can mean with randomisation, while ‘quasi-experiment’ means another way of having control)
  • Evaluation
    ○ Measuring the impact something had after it has been rolled out
    ○ Used more in government, where trials/experiments were not pursued
    ○ A backwards-looking way of quantifying impact, using existing data in most cases
    ○ Can use existing data for ‘natural experiments’
92
Q

What are the key differences between lab and field experiments?

A
  • Lab
    ○ Have greater control over conditions, so can more easily isolate causes and effects
    ○ Less external validity - not as close to real-world conditions or people, so cannot say with as much certainty that acting on findings will lead to X impact
  • Field
    ○ Have less control over conditions, so external factors may interfere with the experiment - BUT if designed well, should still have high internal validity
    ○ Higher external validity - you are measuring things as they would normally occur, so any impact should replicate at scale
93
Q

What is the key idea behind any trial?

A

To determine the counterfactual for your intervention
* Essentially what would have happened otherwise - without the intervention as opposed to with the intervention

94
Q

Why are randomised control trials considered the gold standard?

A
  • Randomisation means people in the treatment and control groups should be similar on all observable and unobservable dimensions
  • Because everyone has an equal chance of being treated or not
  • Controls for differences between groups of people as well as external factors/events - eg the groups run at the same time, and everyone is just as likely to have outlier events happen that could impact the outcome measure
  • Potential differences to control for
    ○ People
    § Are the treated and non-treated groups different?
    ○ Selection
    § Is your sample representative of the wider population?
    ○ Time/events
    § Did something else change between the treatment and control?
    ○ Geography/area
    § Are the treatment and control groups drawn from comparable areas?
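As a minimal sketch (not from the course materials), individual-level randomisation can be as simple as shuffling a participant list with a fixed seed; the ids and 50/50 split here are illustrative assumptions:

```python
# Minimal sketch of individual-level random allocation.
import random

def randomise(ids, seed=42):
    """Shuffle participant ids and split them into treatment and control."""
    rng = random.Random(seed)   # fixed seed makes the allocation reproducible/auditable
    shuffled = list(ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

treatment, control = randomise(range(1000))
print(len(treatment), len(control))  # 500 500
```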
95
Q

Explain the Hungry Judge Ruling as an example of non-randomisation

A

○ Judges' rulings became harsher the closer they were to meal times
○ It was thought that because the allocation of hearing times was random, this was due to their hunger levels
○ But it was found that allocation actually isn't random - cases with no representation/that seem more straightforward tend to be allocated before meal times
§ Therefore, it was not random

96
Q

What are the practical considerations for randomisation in trials?

A

○ How is the intervention delivered?
§ Does it allow for individual-level randomisation?
○ Does the system allow for random allocation or not?
§ Eg setting up a different mailing list
○ Is there adequate data?
§ To both randomise and obtain outcome data that can be tracked back to your different allocations
§ Eg if you send out different forms/letters randomly, will you know which form/letter those who then act on them received?
○ Always check your randomisation
§ Do the groups seem similar? If not, randomise again (see the balance-check sketch below)
○ Don't rely on pseudo-randomisation if you can avoid it
§ Eg randomising by first letter of name, or days of the week
§ There's always a chance that isn't completely random
□ Some cultural communities might have higher proportions of names starting with M, for example
§ Date of birth might be a better one
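A minimal balance-check sketch for the point above (the covariate, sample values and tolerance are invented for illustration):

```python
# Hypothetical post-randomisation balance check: compare group means on an
# observable covariate (age here) and flag the allocation if they diverge.
import statistics

def looks_balanced(treatment_ages, control_ages, tolerance=2.0):
    """Crude check: are the mean ages within `tolerance` years of each other?"""
    gap = abs(statistics.mean(treatment_ages) - statistics.mean(control_ages))
    return gap <= tolerance

# If this returns False, re-randomise and check again.
print(looks_balanced([34, 41, 29, 50], [36, 39, 31, 48]))  # True
```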

97
Q

What spillovers should be looked out for in randomisation of trials?

A

○ When treatment effect might not be isolated to the treatment group
○ Is there likely to be a problem of people in the treatment and control groups interacting or hearing about what the other group is getting?
§ Can you track comms and see if texts are being forwarded, or send unique links to websites etc?
○ Can you control how often people are exposed to the intervention, and that they won’t see the other interventions?
§ Eg in an online social media trial, does the algorithm show a random version of the message per individual viewer, or a random version each time an ad appears?

98
Q

What are some options for when you can’t randomise?

A
  • Often referred to as quasi-experimental methods, as we don't have a 'pure' control to determine causation
  • The option you take may depend on the issue stopping you from randomising, or how the thing you want to trial will be rolled out
  • Some of these options cannot work / don't work as well if you have multiple treatment effects - eg where rollout means everyone gets the same intervention
  • Use a cluster RCT when you cannot randomise at the individual level
  • Use Difference-in-Differences if you can't select your control group randomly
  • Use phased rollout/stepped wedge design if you cannot stop some people getting treated
  • Use before-after when you cannot get a control group or phase the rollout
99
Q

What is a cluster randomised control trial (RCT)?

A

○ Basically just randomising at the institution/area level - eg schools, police precincts etc
○ Often still looking at individual-level data (but not always)
○ Main issue - are those at different clusters similar?
§ Eg culture, policies/practices
○ Also potential for spillovers
§ Do schools/precincts, or the people within them, talk to each other?
○ Also more difficult to analyse and to calculate power for (see the design-effect sketch below)
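One standard way the power problem shows up (the formula is assumed from general practice, not from the card) is the design effect, which inflates the sample size a cluster trial needs:

```python
# Sketch of the design-effect adjustment for cluster randomisation:
# DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is
# the intracluster correlation. All numbers are invented for illustration.
def design_effect(cluster_size, icc):
    return 1 + (cluster_size - 1) * icc

n_individual = 400  # sample size a simple individual-level RCT would need
deff = design_effect(cluster_size=30, icc=0.05)  # eg 30 pupils per school
print(round(n_individual * deff))  # 980 -> far more people once clustering is accounted for
```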

100
Q

What is a Difference-in-Differences trial, and when/why would you use it?

A

Used if you can't select your control group randomly
○ If differences between groups are constant, you can measure whether there is a difference in the group differences
○ Uses a statistical comparator group as the control
○ Main problem - assumes a consistent relative difference between groups (the 'parallel trends' assumption)
§ Other things could change over time, and change differently between groups (eg a different policy announcement in a different city)
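A minimal 2x2 sketch of the estimator (all numbers invented for illustration):

```python
# Difference-in-differences on a simple 2x2 (group x period) design.
# Under the parallel-trends assumption described above, subtracting the
# control group's change removes the shared time trend.
treated_before, treated_after = 10.0, 16.0   # invented outcome means
control_before, control_after = 8.0, 11.0

did_estimate = (treated_after - treated_before) - (control_after - control_before)
print(did_estimate)  # 3.0 -> estimated treatment effect net of the common trend
```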

101
Q

What is phased rollout/stepped wedge design, and when/why would you use this?

A

Used if you cannot stop some people getting treated
○ If everyone will receive the treatment, but you can change when they receive it, you can use data from those yet to be treated as your control
○ Main problem - if the control group changes behaviour in anticipation of getting the treatment, or hears about the treatment/impacts during rollout (spillovers between groups)

102
Q

What is a before-after experimental design, and when/why would you use this?

A

Used when you cannot get a control group or phase the rollout
○ Close to what we'd call an evaluation - the control group is the same population before the intervention is rolled out
○ Main problem - cannot control for the impact of anything else that changes over time on the results
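A minimal before-after sketch (scores invented); note the result is a raw change, not a causal estimate:

```python
# Before-after comparison: the same population measured pre and post rollout.
# Anything else that changed over the same period is confounded with the
# intervention, which is the design's main weakness.
from statistics import mean

before = [52, 48, 55, 50, 47]
after = [58, 54, 60, 55, 53]

print(round(mean(after) - mean(before), 1))  # 5.6 -> raw pre/post change only
```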

103
Q

Explain outcome vs output

A

○ Outcome
§ The goal or objective we care about. Can be the behaviour itself we are trying to change, or the good that the behaviour leads to
○ Output
§ Steps taken or specific products/measures related to activities delivered. Eg no. of letters sent out or people visiting a website.
§ May also be the behaviour we’re interested in, but usually a leading indicator of success or a sub-behaviour related to our outcome behaviour

104
Q

Describe Behaviours vs self-reports vs leading indicators

A

○ Behavioural measures
§ A physical behaviour someone does that we directly measure, as opposed to asking people what they did (self-report)
○ Leading indicators/non-behaviour measures
§ Other things we can measure, like values, attitudes, feelings, psychological measures etc

105
Q

Why would you want to focus on behavioural outcomes in experiments?

A
  • Changes in attitudes and beliefs are important, but they may not go along with changes in behaviour
    ○ The intention-action gap
    ○ + Social desirability bias
  • People often don't correctly remember what they have done or why they have done it
    ○ 39% of men and 29% of women said they achieved the minimum level of physical activity each week
    § But with a tracker, only 6% of men and 4% of women actually met their goals
  • People often don't accurately predict what they are going to do
    ○ 71% of people chose a high-brow movie to watch next week
    ○ But only 34% chose a high-brow film to watch that day
106
Q

What are some considerations for choosing behavioural measures in experiments?

A
  • We may be able to easily collect data on the behaviour we care about (primary outcome), or we may need to choose a related behaviour
    ○ Do you have existing data on the behaviour or will you need to collect it? If so, how will you do this and is it feasible?
    ○ Will collecting behavioural data change people's behaviour?
    § Hawthorne effect
    □ People change behaviour if observed
    § John Henry effect
    □ People resent missing out on treatment, and work hard to overcome this 'disadvantage'
    □ Can work the other way around too
    ○ Can the behaviour be recorded accurately and consistently?
    § Are the demands on service delivery staff reasonable, and how can they be supported to deliver the intervention and record results?
    ○ Cost, time, feasibility etc
107
Q

Why might you not be able to use the actual behaviour you care about in your trial?

A

There may be problems with using your preferred outcome measure:
○ Time - you need results quicker to support decision making so need a leading indicator
§ Eg workplace wellbeing interventions take a long time to show up in retention, and short-term interventions that support educational attainment take a long time to show up in results
○ Logistics - you cannot observe the behaviour you want
§ Eg how would you measure whether people are self-testing for COVID? Or hours of homework completed?
○ Costly to collect
§ It requires a lot of effort to gather that data as it doesn’t already exist
§ Eg need to send people on site to count how many people drive vs walk
○ Ethics
§ It may be invasive or otherwise risky to collect or record that data

108
Q

Describe the study on 911 Burnout

A
  • Sent an email once a week to 911 dispatchers, to help overcome burnout
  • Emails gave tips on how to cope or asked for tips, with a different topic each week, and people could see other participants' tips
  • Immediate outcome measure was a burnout score
    ○ Using the validated Copenhagen Burnout Inventory (8pt reduction from the intervention)
  • Secondary behavioural outcome - sick days taken (leading indicator): actually increased!
    ○ Could have reflected increased self-care, rather than taking time off because they were struggling
  • Main behavioural outcome - retention/resignations: more than halved
109
Q

What is marketing research?

A
  • Obtains consumer insights to help inform business strategies
  • Brand loyalty
    ○ How brands are perceived by consumers
    § How consumers choose brands, products, services
  • Consumer satisfaction
    ○ How consumers interact with the brand
  • How brands communicate with the market
  • Sizing an opportunity
    ○ Identifying a need/gap in the market
  • Understanding why consumers switch, to stem defection before it happens
  • Both qual and quant techniques
110
Q

Describe the case study on portable defibrillators

A

§ Qual phase
□ Qualitative research was used to understand the market's perception of and experience with cardiac health and AEDs, before exploring initial reactions and price expectations for the new device
□ Used an online community to help deal with a sensitive, and sometimes confronting, topic
□ Able to capture insights in writing and through shared videos and images; powerful insights for the client and their marketing teams
§ Business outcomes for client
□ Through qual and quant research, it became clear that essential elements of their go-to-market strategy needed to be around
® Education
® Awareness
® Partnerships (ie linked with St Johns Ambulance, hospitals)
® Not just the right price to maximise uptake
◊ This led to an increased focus on these areas ahead of the Australian launch, and significant media coverage was achieved in December 2021 to support this
® RRR launched the CellAED in Australia in December 2021 with a clear go-to-market strategy and prioritisation framework, at the price recommended by Choice Modelling

111
Q

What is choice modelling?

A

® A quantitative research technique used to understand individuals' preferences and the value placed on various product attributes in their purchase decision-making
® Choice modelling evaluates the trade-offs that individuals make by studying the joint effect of multiple attributes of a product simultaneously
® Can determine the relative importance of choice attributes
® Through difficult trade-offs, we can understand and learn which attributes individuals truly value
® Products or services can be deconstructed into features/attributes
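A hedged sketch of the underlying mechanics (the part-worth values and logit choice rule are assumptions for illustration; real studies estimate part-worths from respondents' choices):

```python
# Toy choice-model scoring: each option is a bundle of attribute levels, each
# level has a part-worth utility, and choice shares follow a logit rule.
import math

partworths = {
    ("price", "low"): 1.2, ("price", "high"): -0.8,
    ("brand", "known"): 0.6, ("brand", "unknown"): -0.2,
}

def utility(option):
    return sum(partworths[level] for level in option)

options = {
    "A": [("price", "low"), ("brand", "unknown")],
    "B": [("price", "high"), ("brand", "known")],
}
denom = sum(math.exp(utility(o)) for o in options.values())
for name, o in options.items():
    print(name, round(math.exp(utility(o)) / denom, 2))  # A 0.77, B 0.23
```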

112
Q

What is segmentation?

A
  • The aim of segmentation is to develop targeted strategies to effectively reach your market, and to identify and quantify high-priority customer segments based on immediate and future value; this requires a robust segmentation solution
  • The final segment solution should divide the target market into clearly identifiable groups which have similar characteristics based on distinct sets of needs, preferences and behaviours
  • The segmentation can be achieved through a variety of methods, including hierarchical clustering, k-means, latent class etc (see the k-means sketch below)
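A minimal k-means sketch for one of the methods listed above (the survey features, k=3 and scikit-learn usage are illustrative assumptions):

```python
# Toy segmentation with k-means. X stands in for standardised survey scores
# (needs/attitudes/behaviours); real studies would use actual respondent data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # 200 respondents x 4 invented survey measures

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(segments))  # segment sizes -> are they substantial enough to target?
```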
113
Q

What are the key criteria for successful segmentation?

A

-Identifiable (measurable indicators, allowing for future identification)

-Accessible (should be able to reach segments through communications channels)

-Substantial (segments should be large enough to validate resources required to target them)

-Unique (different enough segments to warrant different approaches)

-Durable (stability to endure through the intended tenure of segmentation)

-Actionable (ability of the brand to deliver on the needs of each segment)

-Stable (should be tested across multiple stakeholder groups to ensure it is useful and durable for diverse purposes over time)

114
Q

Describe the SunSuper segmentation case study

A

○ Qualitative phase
§ Qual research was used to understand the market’s barriers to engagement with their super
§ Through 42 in-depth interviews, we were able to better understand the unique challenges in the superannuation space
§ From these findings, the team predicted that the segments would be separated by where they landed on six scales
○ Quant phase
§ A questionnaire was developed, using learnings from the Qual research to identify, define, and size the segments in the market
§ To get market view, they surveyed 1400 Australians, split between
□ 1000 Australians who hold a superannuation accumulation product and
□ 400 Australian retirees who have an income product with a super fund
○ Solution insights
§ The final solution was implemented into the SunSuper CRM to allow for efficient targeting of communications, advertising, initiatives, etc
§ A simplified algorithm was used in following SunSuper studies to help provide greater insights into those studies
§ An example is a study where SunSuper's brand assets were evaluated
□ The aim was to identify which assets were strongly linked to the brand, and which took it a step further to become value-adding assets
□ The simplified algorithm allowed them to see which assets resonated more with certain segments

115
Q

Describe advertising testing

A

○ Fundamentally, 2 key criteria for advertising to be effective:
§ Stand out/getting noticed
§ Coded to memory such that ad impact can be felt at the purchase occasion
○ Typical outcome measures used to assess effectiveness
§ Brand Linkage
§ Strength of Distinctive Assets
§ Message Take Out
§ Call to Action
§ Alignment to drivers of brand choice

116
Q

Describe creative advertising testing (Optus example)

A

○ Back in 2020, Optus had a new company vision and brand strategy
○ They had already developed a Triple Play strategy with Forethought to understand the key drivers of brand choice: brand statements that spoke to Quality, Price, and the Emotion to elicit
○ Optus had 3 creative executions to communicate their 'new' brand. Which one should they invest in and turn into TV advertising?
○ Research objective
§ To understand which execution elicits the Triple Play drivers best, and which execution Optus should go to market with first

117
Q

Explain research in government settings

A

○ Can be done for a range of purposes including
§ Program design
□ To review the best practice literature/evidence for what works
§ Program operationalisation and implementation
□ To explore the feasibility of a program
§ Program delivery
□ To test specific research questions and build the evidence of what works

118
Q

Explain evaluation in government settings

A

○ Often driven by policy makers/funders to make value judgements about what works, for whom, in what settings, and other key questions (driven by stakeholders)
○ Takes place in applied settings
○ Can be done at the beginning (including as part of design), during implementation, and at the end (eg to assess outcomes/impact)

119
Q

What are enablers for program evaluation?

A

○ Data
○ Capability within teams doing evaluation
○ Leadership/executive buy-in
○ Organisational culture

120
Q

What are barriers to program evaluation?

A

○ Funding
○ Capability within organisation
§ To understand that even though we are explaining things in simple ways, it is very complicated and based on a great depth and breadth of knowledge
○ Data availability and quality
○ Analytical skills
○ Design and practicality
§ Balance the theoretical ideal, the minimum that can be done, and what is most practical

121
Q

What are reasons to evaluate programs?

A

○ Assessing impact
○ Informing continuous quality improvements
○ Accountability
○ To support funding decisions
○ Assess whether programs are meeting their policy intent
○ Ensure programs are meeting community needs
○ Demonstrate effective use of resources
○ Help inform what works in applied settings

122
Q

Who does program evaluation?

A

○ Internal
§ Likely have governing documents about approaches (eg government evaluation toolkit)
○ Blended internal and external
§ Sometimes will be brought in for additional capabilities - maybe they already have some research and they bring you in to help build framework etc.
§ Limitations:
□ Analytics - teaching someone who has never done any analytics is very complicated; often you need to convince them to pay someone
○ External
§ Tendering/procurement of a provider to complete the evaluation/evaluation supports

123
Q

What are the decision points for internal vs external program evaluations?

A

§ Capacity and capability
§ Working relationships/internal politics
§ Independence

124
Q

What are the key steps for program evaluation?

A

○ Key steps:
§ Need for research/evaluation is identified and project set up/commissioned
§ Planning/scope determined (including stakeholder analysis)
§ Evaluation Framework and Plan developed (ethics)
§ Data collection
§ Data analysis
§ Reporting
○ Key notes:
§ Approach always looks different - depending on the work, the client, the needs, timelines, budget, and other key decision points
§ The Rainbow Framework from BetterEvaluation is a good resource

125
Q

How to decide on evaluation type?

A

○ There are a range of methodologies/approaches
○ Designing which ones to use is as much an art as a science, and includes considering:
§ Needs of the project
§ Appropriate methodology that is practical in the applied setting
§ Preferences of funders/government
§ Purpose of the work
§ Funding/resourcing
§ Personal views about approaches

126
Q

What are the types of evaluation?

A

-Formative evaluation (gathering info to plan, refine, and improve the intervention; supports innovative development to guide adaptation; begins in design and development, with evaluators asking questions that allow trial of ideas)
-Process/implementation evaluation (measures the activities that occur while a program is running, identifying whether the separate components of the program, and the program as a whole, are being implemented as intended; use as soon as the program begins - during operation of an existing program)
-Impact evaluation (measures the immediate effect of a program; during operation of an existing program, at intervals/at the end of the program)
-Outcome evaluation (measures long-term effects of a program; use after the program has made contact with at least one person or group in the target population and has had time to achieve long-term impacts)

127
Q

What are the types of methodologies/approaches to evaluation?

A

-Experimental design (strongest methodology for demonstrating a causal relationship between pre-defined program activities and outcomes; measures changes in the desired outcome for participants in an intervention group and a control group)
-Quasi-experimental design (used when experimental designs aren't feasible/ethical but some form of comparison is possible; high-quality ones can show causal links)
-Non-experimental (aka descriptive/observational studies; examine changes in participants before and after program implementation, or use qual data)
§ Participatory codesign
□ Evaluations designed with stakeholders
□ All stakeholders should have a voice across all parts of the project
§ Contribution analysis
□ Evaluation in policy-saturated spaces
□ How to understand the contribution (not attribution) of programs in complex settings
§ Indigenous/First Nations evaluations
□ Additional considerations including Aboriginal and/or Torres Strait Islander leadership, data sovereignty, reciprocity, self-determination, and transparency/accountability to communities

128
Q

What is contribution analysis?

A

○ A theory-based approach to evaluation aimed at making credible causal claims about interventions and their results
○ CA is based on the existence of, or more usually, the development of a postulated theory of change for the intervention being examined
○ The analysis examines and tests this theory against logic and the evidence available from observed results, probes the assumptions behind the theory of change, and examines other influencing factors
○ Endorsed by the World Bank and other key groups as a gold standard in policy saturated spaces

129
Q

What are the steps of contribution analysis?

A
  1. Set out the attribution problem to be addressed
  2. Develop a theory of change and risks to it
  3. Gather the existing evidence on the theory of change
  4. Assemble and assess the contribution story and challenges to it
  5. Seek out additional evidence
  6. Revise and strengthen the contribution story
130
Q

What are evaluation frameworks?

A
  • Guide for purpose and approach of the evaluation - often includes
    ○ Background information
    ○ Program logic/theory of change
    ○ Evaluation questions/TOR
    ○ Data sources/requirements
    ○ Governance
    ○ Risk management
    ○ Reporting
    • Supplemented by
      ○ Evaluation plan
      ○ Stakeholder/consultation plan
131
Q

Describe evaluation questions

A

○ Evaluation questions should be tailored to the program and will always differ slightly
○ There are lots of guides for evaluation questions, including across government jurisdictions
○ Suggestions for evaluation questions include assessing the outcomes, benefits, intended and unintended consequences and efficiency of programs and policies
○ 7-12 questions is the most you want; ~2-4 questions per group is ideal

132
Q

Explain data sources and collection for program evaluation

A

○ Where possible
§ Have to be pragmatic
§ Need to think about completeness, availability, quality, and who are the data custodians
§ Maximise use of existing data
§ Minimise process burden on stakeholders
§ Keep data collection and collation as simple and consistent as possible
§ The ‘logistics of data collection and analysis’, eg data linkage, requests, data custodians, etc.

133
Q

Describe data analysis for program evaluation

A

○ There is no single approach to data analysis
§ Dependent on the needs of evaluation, the availability of data, the funding/budget, purpose, scope etc
§ Most evaluations will adopt mixed-methods approaches to answer KEQs
Types:
-Economic (cost-minimisation; breakeven; cost-effectiveness; cost-utility; cost-benefit analyses) - see the cost-effectiveness sketch below
-Quantitative (descriptive; relationship; group-difference analyses)
-Qual (thematic; causal layered; discourse analyses)
-Mixed methods (triangulation of data from all sources)
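A minimal cost-effectiveness sketch for the economic analyses listed above (all figures invented; the incremental cost-effectiveness ratio is a standard construct, not something specified on the card):

```python
# Toy incremental cost-effectiveness ratio (ICER):
# extra cost of the program per extra unit of outcome versus a comparator.
program_cost, comparator_cost = 500_000.0, 300_000.0   # invented dollars
program_outcome, comparator_outcome = 120.0, 80.0      # eg successes achieved

icer = (program_cost - comparator_cost) / (program_outcome - comparator_outcome)
print(icer)  # 5000.0 -> each additional success costs an extra $5,000
```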

134
Q

What are some key considerations for program evaluation?

A

○ What is practical/possible
○ ‘Comparative to what’
○ Indicators - are they policy relevant, aligned to goals/objectives, operationalised, are there risks of ceiling effects
○ Evaluation capability - in the team and in the client (when to source externally, how to choose good evaluators)
○ Evaluator skills - allaying anxiety, building rapport, maintaining credibility, recognising social dynamics, managing coercion attempts, managing hostility
○ Ethics and good practice (and how is this different for LMICs)
○ First Nations evaluations
○ Translating evaluation findings

135
Q

What are the tips for reporting evaluations?

A
  1. Communicate findings to stakeholders and decision-making bodies throughout the process
  2. Align the reporting and dissemination of findings to the agency's and government's strategic outcomes/goals
  3. Link findings to the agency's and government's strategic outcomes/goals
  4. Present findings in a format understandable to stakeholders
  5. Use results to present an argument
  6. Develop an action plan based on findings
136
Q

What is user research (UX)?

A
  • User research is the systematic study of the goals, needs, and capabilities of users so as to specify design, construction, or improvement of tools to benefit how users work and live
    ○ Schumacher, 2010
    • We study people so that we can build them useful stuff
137
Q

What makes a good design?

A

* Good designs are
○ Usable and useful
○ This predicts whether someone will use the product
* Bad designs can be
○ Usable but not useful
○ Useful but hard to use

138
Q

What is thick data?

A

○ Thick data is brought to light using qualitative, ethnographic research methods that uncover people’s emotions, stories, and models of the world
○ Big data does not mean good, small sample does not mean bad. We can make use of almost all data

139
Q

Explain Tricia and Nokia as an example of triangulating big data

A

Tricia was an ethnographic researcher for Nokia in rural China
Was studying street vendors, internet café workers etc
Found that there was an appetite in the market for a lower-cost smartphone
Nokia at the time was very focussed on the iPhone-style market (very expensive, high-end) - the opposite of what these people were after
When she brought this to Nokia, they dismissed it as 'small data' because it was qualitative - they valued quant insights from big data, which suggested the market wanted high-end devices
But it turns out they were measuring the wrong things in their studies, and Tricia had correctly identified an important driver
This means Nokia overlooked an important insight and is now not as successful as it could have been
* Tricia proposes triangulating big data and small samples, so you have the depth and insight but also know how it can scale - this gives the most robust method

140
Q

Describe the 'Discover' stage of the diamond method in co-creation/design

A

Want to build useful things, so need to discover:
-user goals (what are people trying to achieve?)
-user needs (Maslow's hierarchy of needs)

141
Q

What are some methods used to ‘get to the need’ in user experience research?

A
  • Diary studies
  • Observational research
  • Interviewing
    ○ In the 'Discover' stage
    ○ Interviewing users
    § Usually semi-structured, meaning:
    □ Writing a topic guide
    □ Researcher guides, but conversation is user-led
    § Can also have unstructured or structured interviews
    § Sampling theory still applies
    □ Effect size (power)
    □ Representativeness
    □ Variability
    ○ Interviewing pointers
    § Build rapport
    § Wide-to-narrow flow
    § Active listening; probe
    § Avoid leading questions
    § Use activities and props creatively
  • Surveys
  • Product analytics
142
Q

Describe the ‘Define’ stage of co-creation/codesign diamond approach

A

Defining the problem space
Refine down to a clearly defined problem
* Analysing qualitative data
* Moving from rich data to simple insights
* Use transcripts plus observations from video
* Bottom-up vs top-down
Can use:
-Thematic analysis
-Coding analysis
-Affinity mapping
* Interviews to insights
○ One person is the interviewer, one person is the participant, one person is the note taker
○ Need to find out about (using unimelb in covid example):
§ How their learning experience has changed now vs a few years back
§ What did they like/dislike about the change?
§ How do they feel about learning (more) online

143
Q

Describe the 'Develop' stage of the co-design/co-creation diamond

A

Come up with different solutions to the problem, collaborate
* It can be tempting to jump into a solution as we learn
* Not the best approach, because often our first idea is not our best
* Need to come up with many ideas, and test which is best = optimisation
* Also important because the people we are working with are expensive - we don't want to pay for an inadequate solution
* Engineers are expensive
* We want to make sure engineers work on the right things
○ Design thinking
§ Gives us activities that help divergent thinking
§ And encourages us to collaborate with different perspectives
□ This process increases fidelity - start off with very low fidelity (a very rough sketch of the concept), then test all those low-fidelity options, and use the feedback to build up the fidelity
□ Based on the feedback, nicer prototypes implementing the suggestions are created and then tested, (maybe top 5 options)
□ Based on this next round of feedback, some of the designs have been narrowed down, and the good bits of each get blended together to make a high fidelity prototype which looks very much like a final version, and undergo testing on that
○ Insights into design concepts
§ Methods we use to come up with ideas
□ Wireframing or prototyping
® Wireframing = low fidelity (a general model without details) - makes it cheaper, and allows people to focus on the concept rather than the nitty-gritty details
□ Co-design/design workshops
□ Card sorts
○ Insights into design validation
§ Solutions (designs) will be tested during iteration loops
§ Methods we use to test ideas
□ Guerilla test
□ Wizard of Oz
□ Usability or concept testing

144
Q

Describe the ‘Deliver’ stage of the co-creation/co-design diamond

A

Find the best solution to the problem, eliminate the bad ones, refine the good
* Optimisation
○ Our job isn't done once it's built
○ How is it used in the wild?
§ Is it actually useful and usable?
§ Can we make it better?
○ Methods we use
§ Usability testing
§ Product analytics
§ Experimentation (A/B testing or Variant testing)
§ Survey (often CSAT or NPS studies)
§ In product survey

145
Q

What is variant testing?

A

-Uses the scientific method
-Control vs experimental conditions
-Measures behaviour
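A minimal analysis sketch for such a test (conversion counts invented; scipy assumed available) using a two-proportion z-test:

```python
# Toy analysis of a control-vs-variant behavioural test with a
# two-proportion z-test. Counts are invented for illustration.
from math import sqrt
from scipy.stats import norm

conv_a, n_a = 120, 1000  # control: 12.0% did the behaviour
conv_b, n_b = 150, 1000  # variant: 15.0% did the behaviour

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))
print(round(z, 2), round(p_value, 3))  # ~1.96, ~0.05 -> borderline significant
```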