Week 11 Flashcards

(19 cards)

1
Q

What is quality improvement in AH

A
• Systematic activities that aim to monitor, assess and improve the quality of care in healthcare settings
• Continually refining how services are delivered so that patient care is safer, more effective and aligned with best practices
• Involves continuous assessment, data analysis and feedback to implement changes that enhance patient outcomes and service efficiency
• An ongoing cycle of identifying areas for improvement, implementing changes and evaluating results
2
Q

Why is evaluating outcomes important in EBP

A
• Closes the loop between research and practice by confirming if an intervention worked as intended
• Tells us if evidence-based changes are truly making a positive difference in real-world patient care
3
Q

The relationship between EBP, patient outcomes and quality improvement

A
• EBP provides the research evidence and interventions that clinicians implement; patient outcomes are how we measure the success of those interventions
• QI provides the framework for testing and refining practice
• Think of QI as the process that bridges research and practice – by using data on patient outcomes to continually adapt and improve care processes.
• For example, if research says a certain swallowing exercise helps stroke patients, a speech pathologist will implement it (EBP) and then use QI methods to monitor patients’ swallowing outcomes.
• If outcomes improve, the practice is reinforced; if not, the clinician investigates why and adjusts the approach. In this way, QI creates a feedback loop: evidence-based changes are applied, outcomes are measured, and care processes are adjusted to achieve the best possible results for patients.
4
Q

Common challenges in monitoring and evaluating EBP interventions

A
• Isolating the effect of an intervention on outcomes – healthcare is complex, and many factors can influence patient results.
• Quality improvement projects are often conducted in real-world clinical settings where multiple changes happen at once, making it “very difficult to isolate the actual causes of change”.
• Additionally, time and resource constraints are frequent barriers. Collecting data on outcomes (e.g. conducting assessments, surveys, follow-ups) takes time that clinicians may struggle to find during busy clinical days.
• Allied health practitioners might also experience a knowledge or skill gap in data analysis – they are experts in clinical care but may not feel as confident in interpreting statistics or designing outcome measures.
• Engaging staff and patients in QI can be challenging too; for example, if a new documentation process is introduced to track outcomes, clinicians might be reluctant if they view it as extra paperwork.
• Lastly, sustaining improvements over time is difficult – even after a successful change, maintaining momentum and continually monitoring outcomes requires ongoing effort and leadership support.
5
Q

PDSA cycle

A

The PDSA (Plan–Do–Study–Act) cycle is a four-step iterative method used to test changes on a small scale and build learning into improvement work
- PDSA cycles are meant to be rapid and repeated
Rather than implementing a big change across an entire organisation and hoping for the best, PDSA encourages small-scale tests of change

6
Q

PDSA process

A

Plan: Identify a goal or an area for improvement and develop a plan for a change or intervention.
- This includes defining your objective, making predictions about what will happen, and deciding what data or metrics will be collected to measure success.
- The plan should be detailed: Who will carry out the change? When and where will it happen? What resources are needed? And importantly, what outcome will indicate improvement?
Do:
- Carry out the plan on a small scale (pilot the change) and begin collecting data.
- It’s important to implement the change as designed in the Plan step, but usually on a limited basis to minimise risk.
- During the Do phase, the team gathers data (e.g. recording the wait times during the trial period) and notes any problems or unexpected observations. This phase is essentially trying out the change and documenting what happens.
Study:
- Analyse the results and compare the data against the expectations and objectives set in the Plan phase. This is where learning occurs.
- The team examines whether the change led to improvement, and by how much. Continuing the example, the physiotherapy team would calculate the new average wait times from the pilot and compare them to the baseline wait times. Did wait times drop as predicted?
- They might also look at other effects – for instance, did patient satisfaction improve on feedback forms? In the Study phase, you reflect on what was learned: Were there any unintended consequences? Did anything surprising occur? This systematic analysis determines whether the change is an improvement.
- If the goal was to reduce wait times by 15 minutes and the data shows a reduction of 20 minutes, the change is a success. If no improvement (or a negative outcome) is seen, that’s important information too.
Act:
- Based on the analysis, decide how to proceed. If the change was successful, the team might Adopt the change – implementing it on a larger scale or making it the new standard process.
- If the results were mixed or the change didn’t work as expected, the team might Adapt the approach – modify the plan and prepare for another PDSA cycle to test the tweaks. If the change clearly failed or caused issues, the decision might be to Abandon that idea and go back to the drawing board.
- In our example, if wait times went down and patients responded positively, the clinic could roll out the new check-in process to all therapists’ schedules (adopt) and perhaps plan another PDSA cycle to see if wait times can be reduced even further.
- On the other hand, if wait times didn’t improve, the team would use what they learned (maybe the new process caused confusion at the front desk) to adjust the plan and test a different change. Act is about using the knowledge gained to make an informed decision and then starting the cycle again for continuous improvement.
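The Study/Act logic above can be sketched as a tiny calculation. This is a minimal illustration in Python, assuming the clinic's wait-time example: all the figures, sample sizes and the 15-minute target below are made-up numbers for demonstration, not data from the card.

```python
# Minimal sketch of the Study/Act decision in a PDSA cycle,
# using a hypothetical wait-time example. All numbers are illustrative.

baseline_waits = [42, 38, 45, 40, 44]   # minutes, before the change (Plan)
pilot_waits = [22, 25, 20, 24, 23]      # minutes, during the small-scale trial (Do)

def mean(xs):
    return sum(xs) / len(xs)

# Study: compare pilot results against baseline and the prediction made in Plan
reduction = mean(baseline_waits) - mean(pilot_waits)
target_reduction = 15  # minutes, the improvement predicted in the Plan step

# Act: adopt, adapt, or abandon based on what was learned
if reduction >= target_reduction:
    decision = "adopt"      # roll out more widely, then test further refinements
elif reduction > 0:
    decision = "adapt"      # partial improvement: tweak the plan, run another cycle
else:
    decision = "abandon"    # no improvement: rethink the change idea

print(f"Mean reduction: {reduction:.1f} min -> {decision}")
```

In a real project the Study phase would also weigh qualitative observations and unintended effects, not just a single threshold comparison.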

7
Q

Practical applications of PDSA in AH

A
• Speech Pathology Example: Improving home-program adherence. A community speech pathology service wants to increase patient adherence to home speech exercises (many patients forget to practice between sessions).
• Using PDSA, they Plan to introduce weekly reminder phone calls as a change, predicting this will improve adherence rates. They’ll measure adherence by asking patients to report how many days they practiced.
• They Do the pilot with 10 patients for one month, making reminder calls every Monday. In the Study phase, they find adherence improved in 7 of 10 patients (from an average of 2 days/week of practice to 4 days/week). Three patients, however, said the calls didn’t help – two felt they didn’t need reminders and one had technical issues getting the calls.
• Act: The speech pathologists decide to continue the calls (adopt) but adapt the process by offering text-message reminders for those who prefer that. They then begin another PDSA cycle with the text option included. Over time, this iterative process might significantly boost overall adherence to home programs, leading to better therapy outcomes.
8
Q

Lean Six Sigma

A

Lean focuses on eliminating waste – any activity that doesn’t add value to patient care (for example, redundant paperwork, waiting time, unnecessary transport of patients). It categorises types of waste (often using the acronym TIMWOODS) and aims to streamline workflow.
Six Sigma focuses on reducing variation and defects in processes through data-driven methods.
When combined as Lean Six Sigma, the approach is to improve efficiency and quality simultaneously – removing non-value-added steps while reducing errors and variability in outcomes. In practice, Lean Six Sigma in an allied health context could look like a project to reduce the time patients spend waiting in a clinic by reorganising workflow (Lean), while also reducing the error rate in patient documentation by standardising forms (Six Sigma). For example, a physiotherapy department might use Lean techniques to reorganise equipment storage so therapists don’t waste time searching for items, and Six Sigma tools to analyse why certain clinic visits run overtime. The combined result could be a more efficient clinic with more consistent appointment lengths.
Lean Six Sigma projects often involve staff training (you may hear terms like “Green Belt” or “Black Belt” for Six Sigma expertise) and a strong emphasis on measuring results (e.g. tracking time saved or error rates before and after the changes). The key takeaway is that Lean Six Sigma is about maximising value for patients by improving processes, cutting out waste, and using data to drive decisions.

9
Q

Model for improvement

A

Model for Improvement: Three fundamental questions for QI projects.
The Model for Improvement, popularised by the Institute for Healthcare Improvement (IHI), is actually the framework that incorporates the PDSA cycle. It starts with three fundamental questions:
1. “What are we trying to accomplish?” – Define a clear aim. This should be specific, measurable, and time-bound (e.g., “Improve patient satisfaction scores in the rehab unit from 3.5 to 4.5 within 6 months”)
2. “How will we know that a change is an improvement?” – Identify outcome measures or key performance indicators (KPIs) that will signal success. Essentially, decide what data will be collected to track progress (for the above aim, the measure is the satisfaction score, but you might have multiple measures like response rates to surveys or number of complaints).
3. “What change can we make that will result in improvement?” – Brainstorm and select specific changes or interventions to test. This could be adopting a best practice from literature or a creative idea from front-line staff. For example, to improve satisfaction, one change might be implementing a “patient orientation” session on admission to the rehab unit.
Once these questions are answered, the team then uses PDSA cycles to test and refine the chosen changes on a small scale. The Model for Improvement essentially combines thoughtful planning (through the 3 questions) with the iterative power of PDSA. It’s widely used because it ensures teams have a clear aim and measures before jumping into action. Many QI projects you’ll see in allied health (and beyond) are structured this way: Aim -> Measures -> Ideas -> PDSA cycles. It’s a simple yet powerful model to accelerate improvement. Think of the Model for Improvement as the overall game plan, and PDSA as the play-by-play method to execute that plan.

10
Q

Clinical Audits

A

Other frameworks you might hear about include Clinical Audits (which systematically review performance against set standards) and industry-inspired methods like Total Quality Management (TQM) or ISO quality standards. All share the common goal of improving care, but PDSA and Lean Six Sigma remain among the most practical for day-to-day healthcare improvements.

11
Q

Selecting key performance indicators and outcome measures

A
• What outcomes or indicators will tell you if your change has led to improvement?
• A Key Performance Indicator (KPI) in healthcare is a specific, measurable element of practice that can be used to assess the quality and safety of care
12
Q

What to consider when selecting KPIs or outcome measures for a QI project

A
  • Relevance: Choose measures that matter to patient care and align with your goals. For example, if you’re improving a falls prevention program, a highly relevant outcome measure is the number of patient falls in a month. Other relevant measures might be fall-related injuries or fear-of-falling survey scores. If you’re streamlining a therapy scheduling process, relevant measures include wait times, did-not-attend rates, or therapist utilization rates.
  • Measurability: The indicator should be something you can actually measure reliably with available tools. Patient satisfaction, for instance, can be measured via surveys (a Likert scale rating or Net Promoter Score). Functional outcomes can be measured with clinical instruments (e.g., a gait speed test for physiotherapy, a swallowing scale for speech pathology). Ensure the data is collectible – do you have the means to get this data without excessive burden?
  • Sensitivity to change: A good outcome measure should be able to detect change over the time of your project. If your project is only 8 weeks, measuring 1-year mortality isn’t useful because you won’t see the impact in that timeframe. Instead, choose intermediate outcomes or process measures that will move in response to your intervention (e.g., % of patients screened for fall risk, which should go up quickly if your intervention is training staff on a screening tool).
• Mix of quantitative and qualitative: Often, using a combination gives a fuller picture. Quantitative measures (numbers, percentages, scores) provide objectivity, while qualitative measures (feedback, observations, interviews) provide context and depth. For example, a speech pathology QI project might track the percentage of clients meeting their communication goals (quantitative) and also collect parent feedback on communication confidence (qualitative). The quantitative data shows “what changed” in numbers, and qualitative data helps explain “why or how it changed” from the perspective of those involved.
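To illustrate measurability, two of the KPIs mentioned above (a did-not-attend rate and the percentage of clients meeting their goals) can be computed from simple records. The field names and toy data in this Python sketch are assumptions for the example, not real figures.

```python
# Illustrative KPI calculations from toy records.
# "attended" flags and goal outcomes are made-up example data.

appointments = [
    {"patient": "A", "attended": True},
    {"patient": "B", "attended": False},
    {"patient": "C", "attended": True},
    {"patient": "D", "attended": True},
]

goal_outcomes = [True, True, False, True]  # did each client meet their goal?

# Did-not-attend (DNA) rate: missed appointments / total appointments
dna_rate = sum(1 for a in appointments if not a["attended"]) / len(appointments)

# Percentage of clients meeting their goals
goals_met_pct = 100 * sum(goal_outcomes) / len(goal_outcomes)

print(f"Did-not-attend rate: {dna_rate:.0%}")
print(f"Clients meeting goals: {goals_met_pct:.0f}%")
```

The point is that a good KPI reduces to data you can actually collect and recompute the same way each cycle, so changes over time are comparable.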
13
Q

Examples of KPIs

A
  • Physiotherapy: improvement in functional outcome scores (like change in a 10-meter walk test time, or an Oxford Shoulder Score), number of treatment sessions to reach a milestone, re-injury rate, pain level reduction on a numeric scale.
  • Speech Pathology: number of successful swallow trials without aspiration, speech intelligibility ratings, language test score improvements, patient/family satisfaction ratings.
  • Occupational Therapy: ADL (Activities of Daily Living) independence scores, number of falls (for falls prevention programs), proportion of patients able to return to home vs. needing higher care, cognitive assessment improvements for cognitive rehab, compliance with assistive device use.
  • Across all disciplines: Patient satisfaction (often via standardized surveys), therapy adherence rates (e.g., attendance, homework completion), waiting times, service access (number of patients seen within a target timeframe), and safety incidents (falls, adverse events).
14
Q

Quantitative data in quality improvement

A

Quality improvement projects often yield a lot of quantitative data – numbers like rates, times, scores, percentages. This data is invaluable because it provides objective evidence of change. For example, seeing that the average length of a hospital stay dropped from 7 days to 6 days after a new care pathway is strong evidence of improvement. Quantitative data can be analyzed statistically to determine if changes are likely due to the intervention or just random variation. Common quantitative tools in QI include run charts, control charts, and basic statistics (like means, medians, percentages). In fact, run charts (graphs of an outcome over time) are one of the most important tools for assessing improvement, as they help visualize trends and whether changes coincide with interventions.
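One common run-chart heuristic is the “shift” rule: roughly six or more consecutive points on the same side of the centre line suggests a non-random change. Below is a minimal Python sketch of that check, with illustrative length-of-stay data; using the baseline median as the centre line is a simplification of full run-chart methodology.

```python
# Sketch of a run-chart "shift" check: six or more consecutive points
# below the centre line suggests a non-random improvement.
# All length-of-stay values are illustrative, not real data.
from statistics import median

los_days = [7, 8, 7, 9, 7, 8,        # baseline length-of-stay values (days)
            6, 6, 5, 6, 6, 5, 6]     # values after the new care pathway

baseline_median = median(los_days[:6])  # centre line from pre-change points

def longest_run_below(values, centre):
    """Length of the longest run of consecutive points below the centre line."""
    longest = current = 0
    for v in values:
        current = current + 1 if v < centre else 0
        longest = max(longest, current)
    return longest

run = longest_run_below(los_days[6:], baseline_median)
shift_detected = run >= 6
print(f"Baseline median: {baseline_median}, longest run below: {run}, "
      f"shift: {shift_detected}")
```

In practice the points would also be plotted over time, since visual inspection of trends is a large part of what makes run charts useful.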

15
Q

Qualitative data in quality improvement

A

This is where qualitative data comes in – things like open-ended survey responses, interview quotes, observations, and case narratives. Qualitative data provides context and insight into the human factors behind the numbers. For instance, suppose an OT fall prevention program didn’t reduce falls as much as expected. Interviews with participants might reveal that some found the home exercises too difficult, or they didn’t understand the instructions. That insight is crucial to refining the program. Likewise, patient stories can highlight unintended benefits or issues that raw numbers won’t show. Maybe a patient didn’t improve on a balance test (quantitative measure), but in an interview they mention they feel much more confident walking outside now (qualitative insight) – that’s important information for the team to consider.

16
Q

Effective QI projects that use both qual and quant data

A

Effective QI projects often use both: quantitative measures to track what changes, and qualitative feedback to understand why it changed or how to make the intervention better. Allied health professionals should be comfortable reading simple data reports and gathering feedback through conversations or observations. Even informal qualitative data (like a quick chat with a patient about how they felt during a new treatment process) can be very illuminating.

17
Q

Tools for QI evaluation

A

• Spreadsheet software (Excel / Google Sheets)
• Online survey tools
• Statistical software
• Run chart and control chart apps
• Cause-and-effect diagrams and flowcharts

18
Q

TIMWOODS categorisation of waste in Lean Six Sigma

A

TIMWOODS: Transportation, Inventory, Motion, Waiting, Over-processing, Over-production, Defects, Skills underutilised. Categorising waste this way helps teams identify non-value-adding steps and streamline workflow.

19
Q

Six Sigma structured problem-solving process (DMAIC)

A

Six Sigma uses a structured problem-solving process (DMAIC: Define, Measure, Analyse, Improve, Control) to identify root causes of problems and ensure processes perform reliably with minimal errors.