service evaluation and surveys Flashcards
What is service evaluation
It comes under the umbrella of quality improvement.
It measures current practice within a service. The results of the service evaluation help towards producing internal recommendations for improvements that are not intended to be generalised beyond the service area.
What does service evaluation play an important role in
Planning and developing services
Service improvement
Providing a quality service
Ensuring intervention is evidence based
What are the three areas of interest for an evaluation?
Project monitoring: looking at the routine functioning of your improvement work. Is it doing what you wanted it to?
Process evaluation: looking at the way in which your improvement work is implemented and runs. Can you learn from the process?
Impact evaluation: looking at whether or not your improvement work is delivering the objectives set. Are you getting the outcomes you planned for?
(providing quality service) What is quality? The SIX DIMENSIONS OF HEALTHCARE QUALITY
STEEEP
Safe (avoiding harm to patients)
Timely (reducing waits/harmful delays)
Effective (evidence based services)
Efficient (avoiding waste)
Equitable (care doesn’t vary in quality because of a person’s characteristics)
Person-centred (establishing a partnership between practitioners and patients)
How does a service evaluation differ from an audit
It does not measure performance against a standard. It provides practical information about whether a development or service should continue and about what needs to change or improve.
How do service evaluations and audits differ from research?
Service evaluation looks at an intervention/care that is routine; research may involve a new treatment.
Service evaluation uses analysis of existing data (though it may involve additional interviews/questionnaires).
For audits/service evaluations, results are only relevant locally; research findings are intended to apply more widely.
Research requires REC (Research Ethics Committee) approval.
Key differences: Service evaluation looks at intervention that is in routine use – with selection of intervention based on choice of health care professional.
Service evaluation involves analysis of existing data but may involve additional interview/questionnaire.
Research can look at novel treatments; the choice of treatment is governed by the research design and may involve randomisation.
Research involves collection and analysis of data not normally part of routine care; hence research needs ethical approval.
What designs and methods can you use?
Surveys
Semi-structured interviews/focus groups
Objective outcome measures e.g. assessment results, goal attainment
Subjective outcome measures e.g. therapist reported outcomes, patient reported outcomes
5 steps of service evaluation
- Develop an outline plan to evaluate the work. (question, design, data to be collected, way it will be analysed, who will do evaluation)
- Work with your stakeholders (patients, staff, organisation leadership, commissioners of the service, other parts of public sector)
- Be clear about the data needed. Where possible data that are routinely available should be used. Specific data may be required for the evaluation which is not already collected routinely. It is critical that a practical approach to collecting the data is developed, and that those collecting the data are able to collect it in a way that does not impact on their day-to-day work.
- Develop a plan for the evaluation (key milestones, who is responsible for what, timescales)
- Plan the dissemination
* who is the principal audience for the evaluation?
* how do you intend to feed back the findings of the evaluation to them?
* have you asked them what they want to see, and in what format?
* is there anyone else with whom you should be sharing the findings?
* will you be generating important learning that should be shared more widely?
What are surveys
Surveys are the use of a systematised proforma to elicit the views of a particular constituent group through their responses to questions; in short, a structured way to find out about something.
Commonly used in SLT practice to find out about people’s views, experiences, and satisfaction levels regarding services or a specific aspect of practice.
Surveys may be:
* Interviewer administered or
* Self-administered
carried out:
* Face to face
* Video call
* By telephone
* Online survey
* Through the post
* By email
Or a combination.
You can ask closed or open-ended questions. Closed questions will give you quantitative data and open will give you qualitative data.
What are the pros and cons of closed-ended questions
Pros
Quicker and easier to answer
Higher response rates
Easier to compare responses
Fewer irrelevant answers
Cons
Cannot provide all possible answers
Don’t allow respondents to expand on answers/offer alternative views
Can be frustrating
Participants may select any response at random.
Pros and cons of open ended questions
Pros- Allow for unlimited responses
Provide more detail
Offer richer qualitative data
Deliver new insights that the researcher may not have thought of.
Cons- Time-consuming to answer
Lower response rates
Potentially irrelevant information
Trickier to interpret and analyse
How is the survey response rate found?
The number of people who completed the survey divided by the number of people it was sent to, usually expressed as a percentage.
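As a quick sketch of the calculation, with hypothetical figures (the counts below are invented for illustration):

```python
# Hypothetical example: 120 surveys sent out, 78 completed and returned.
sent = 120
completed = 78

# Response rate = completed / sent, expressed as a percentage.
response_rate = completed / sent * 100
print(f"Response rate: {response_rate:.1f}%")
```
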
What does having a good response rate help to do?
Produce a more representative sample
Increase sample size and statistical power
Reduce wasted time and materials
How to maximise survey response rate
Pilot and revise survey
Advance and cover letters (saying WHY you’re doing it)
Incentives
Follow up reminders
Keep survey concise and clear
Optimise survey for all devices
Be flexible
Assure confidentiality
Translators and interpreters.
Why do we pilot surveys
To ensure the questions are right, which helps maximise the response rate. Work closely with stakeholders to get this right. Consider translation and interpreting services if needed.
Things to watch out for:
Simplicity – is the language used accessible to the sample subjects? Have confusing acronyms/jargon been avoided?
Clarity – have ambiguous questions and answers been avoided? E.g., double-barreled, double-negatives
Length – are the questions and answers concise? Could length be distracting from the key issue being asked?
Wording – are the questions thoughtfully designed to elicit the desired information from the respondents?
Order effects – are there any potential sources of bias due to the order in which questions are asked or the way response options are presented?
Question structure – are conceptually similar questions grouped together?
Pros & cons of using open-ended/closed-ended questions
Response choices – are they mutually exclusive and exhaustive? Are questions with too few choices forcing respondents to answer imprecisely?
Should questions be compulsory? What happens if they are/are not?
Lie detectors/attention checks
Non-response – could a certain question not be answered because it is confusing/accidentally missed?
Beware of question order and sampling bias with surveys.
Question order bias occurs when the order in which questions are asked in a survey or study influences the answers given.
Sampling: the sample needs to be representative of the target population. Who needed to be surveyed, and who actually was? What proportion of the target group responded? Watch for survey attrition and sampling bias.
What to watch out for: Data analysis.
Cleaning and preparing data
Cross tabulations- comparing different groups
Descriptive stats e.g. %
Dealing with missing data
Stats
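A minimal sketch of the analysis steps above, using only the Python standard library (a library such as pandas offers `crosstab` and richer descriptives at scale); the respondent groups, ratings, and counts are invented for illustration:

```python
from collections import Counter

# Hypothetical cleaned survey data: (respondent group, satisfaction rating).
responses = [
    ("patient", "satisfied"), ("patient", "unsatisfied"),
    ("patient", "satisfied"), ("staff", "satisfied"),
    ("staff", "unsatisfied"), ("staff", "unsatisfied"),
    ("patient", None),  # missing answer: report it, don't silently drop it
]

# Deal with missing data explicitly before analysis.
complete = [(g, r) for g, r in responses if r is not None]
missing = len(responses) - len(complete)

# Cross-tabulation: counts of each (group, rating) combination.
crosstab = Counter(complete)

# Descriptive stats: percentage satisfied within each group.
for group in ("patient", "staff"):
    total = sum(n for (g, _), n in crosstab.items() if g == group)
    satisfied = crosstab[(group, "satisfied")]
    print(f"{group}: {satisfied / total:.0%} satisfied (n={total})")
print(f"missing responses: {missing}")
```
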
What to watch out for: Reporting
Reporting should be clear and transparent so that readers can assess the strengths and limitations of the study
Use of visuals when presenting data aids understanding of the information conveyed
Link your findings to your research aims and knowledge in field
Be careful with wording; be cautious not to over-interpret findings
Consider implications for practice and future research.
Evaluating survey reports what should you consider:
Coverage
Sampling
Non-response
Measurement
Other factors
Evaluating: Coverage
Did most members of the target population that the sample is meant to represent have a chance to be selected? If not, are those who did not have a chance to be selected different in important ways from those who did?
If the sample did not come from a traditional sampling frame, how were potential respondents identified and recruited?
Evaluating: Sampling
How was the sample selected?
What steps were taken as part of the sampling and/or data collection process to ensure that the sample is representative of the target population?
How can I tell if these steps were effective?
What about sampling error?
Evaluating: Non-response
- What was the response rate (for a probability sample) or the participation rate (for a non-probability sample)?
- How concerned should I be that not everyone who was selected actually responded?
- How can I tell if nonresponse is a problem? Might it be leading to bias in the survey results?
- What steps, if any, were taken to adjust for nonresponse?
- What impact did these adjustments have on the survey results?
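The cards do not name a specific adjustment method; one common option is post-stratification weighting, where each respondent group is re-weighted to match its share of the target population. A minimal sketch with hypothetical numbers (the group names, shares, and counts are invented for illustration):

```python
# Hypothetical example: the service's caseload is 60% adults / 40% children,
# but adults were likelier to respond, so they are over-represented.
population_share = {"adult": 0.60, "child": 0.40}
respondents = {"adult": 90, "child": 30}     # returned surveys per group
said_satisfied = {"adult": 63, "child": 12}  # "satisfied" answers per group

n = sum(respondents.values())

# Unweighted estimate ignores who actually answered.
unweighted = sum(said_satisfied.values()) / n

# Weight each group by population share / sample share, so the weighted
# sample matches the target population's composition.
weights = {g: population_share[g] / (respondents[g] / n) for g in respondents}
weighted = sum(said_satisfied[g] / n * weights[g] for g in respondents)

print(f"unweighted: {unweighted:.0%}, weighted: {weighted:.0%}")
```

A noticeable gap between the weighted and unweighted estimates is itself a sign that nonresponse may be biasing the raw results.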
Evaluating: Measurement
How was the survey administered (e.g. in person, by telephone, online, multiple modes, etc.)?
Were the questions well constructed, clear, and not leading or otherwise biasing?
What steps, if any, were taken to ensure that respondents were providing truthful answers to the questions, and were any respondents removed from the final dataset (e.g., speeders, multiple completions)?