15 Flashcards
(49 cards)
You can’t manage what you don’t measure.
Metrics are only one of many factors influencing behavior
Measurements
Attributes
Measurements are numbers; attributes are characteristics that are either present or absent. Measurements include, for example, time stamps, pressure, temperature, or energy consumption; attributes, whether or not a part is defective or a machine is down.
Performance management usually involves synthesizing the raw observations of measurements and attributes into statistics called…
…metrics
Metrics are chosen to act as indicators of performance.
KPIs: 1–10
Metrics and PIs: 100s
Measurements and observations: 1000s
Key performance indicator = KPI
Sometimes direct measurements can be KPIs
Not all metrics are “key.”
Leading and lagging metrics
Leading indicators predict the future; lagging indicators report the past.
A factory needs both. Different
stakeholders want different metrics. A shareholder may be satisfied to know how well the company did
last month and use this number as a predictor of future performance. A factory manager, on the other
hand, needs to know how well the factory performs today and will in the next hours, days, weeks, and
months. The same factory manager is also interested in how the factory performed in the immediate
past, because it has career implications.
What ultimately matters is outcomes, but we cannot manage a process based only on its outcome. The best way to achieve good outcome measures is to have good leading metrics.
Generally speaking, the closer you move to measuring process inputs and activities, the closer you get to leading indicators of the lagging performance outcomes. If you are measuring aggregated results at an organizational level, you are more likely using lagging indicators. The key to picking good leading measures is to understand the process that leads to the outcome.
SMART requirements for good metrics
Good goals are specific, measurable, attainable, relevant, and timely
SMART Specific
A good metric measures what it intends to measure and is immediately understandable.
No training or even explanation is required, and the number directly maps to reality, free of any
manipulation. One type of common manipulation is to assume that one particular ratio cannot
possibly be over 85%, and redefine 85% for this ratio as “100% performance.” While this makes
performance look better, it also makes the number misleading. Some companies calculate scores
based on points awarded or deducted for a checklist of observations. This kind of highly processed
data is not immediately understandable. Managers then often use scores to rank people, departments, or companies. A rank only measures performance relative to a peer group, not in
absolute terms. It is possible to perform poorly and yet rank #1 if all others do worse, which
breeds complacency.
SMART Measurable
The input data of the metric should be easy to collect. Lead time statistics, for example, require entry and exit timestamps for each unit of production. The difference between these timestamps, however, only gives you the lead time in calendar time, not in work time. To get lead times in work time, you then have to match the timestamps against the plant's work calendar. Lead time information can also be inferred from WIP and WIP age data, which can be collected by direct observation of WIP on the shop floor. Metrics of WIP, therefore, contain some of the same information but are easier to collect.
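A common way to make this inference explicit is Little's Law (average lead time = average WIP ÷ average throughput). A minimal sketch, with hypothetical numbers:

```python
def littles_law_lead_time(avg_wip_units: float,
                          throughput_units_per_day: float) -> float:
    """Estimate average lead time (in working days) from average WIP
    and throughput, via Little's Law: lead time = WIP / throughput."""
    return avg_wip_units / throughput_units_per_day

# Hypothetical plant: 120 units of WIP observed on the floor,
# 30 units completed per working day.
print(littles_law_lead_time(120, 30))  # → 4.0 working days
```

Because the inputs are direct counts of WIP, no timestamp matching against the work calendar is needed.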
SMART Attainable
People see how they can affect the outcome. With a good metric, each employee understands what kind of actions can affect the value. A shop floor metric, for example, should not be a function of the price of oil in the world market, because there is nothing operators can do to affect it. On the other hand, they can affect the number of labor hours required per unit, or the rework rate.
SMART Relevant
A better value for the metric always means better business performance for the company.
This is perhaps the most difficult characteristic to guarantee. Equipment efficiency measures are notorious for failing at this, because maximizing them often leads to overproduction and WIP accumulation. Metrics should also have the appropriate sensitivity. If daily fluctuations are not of interest, they need to be filtered out. A common method is to plot 5-day moving averages instead of individual values: the point plotted today is the average of the values observed over the last five days. Daily fluctuations are smoothed away, but weekly trends show clearly. There are many other methods, depending on the nature of the metric, whether its current value is correlated with past values, and the accuracy and precision of the measurement methods.
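The 5-day moving average can be sketched in a few lines (function name and sample data are illustrative):

```python
def moving_average(values, window=5):
    """Moving average: each output point is the mean of the last
    `window` observations, smoothing out daily fluctuations."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Hypothetical daily scrap counts over two weeks:
daily = [12, 15, 9, 14, 10, 13, 11, 16, 8, 12]
print(moving_average(daily))  # first point is the mean of days 1-5
```

The plotted series starts on day 5, since earlier days have no full 5-day window.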
SMART Timely
Metrics should be timely. There is little point in knowing the percentage of faulty units produced after they have been shipped to customers. The time dimension also underlines the importance of having leading indicators supporting lagging indicators. Leading indicators help users manage performance actively.
The language of things or the language of money?
Shop floor metrics should be in the language of things rather than the language of money.
Metrics posted on the shop floor must therefore be nonfinancial. This does not mean that financials should be hidden from shop floor personnel, just that they should not be the basis for the metrics these people review every day.
Accountants translate the shop floor metrics in the language of things into the language of money for
communication up the management chain.
The Balanced Scorecard
The Balanced Scorecard uses four
perspectives to encourage a more balanced performance management:
1 The finance perspective. Typical measures here are earnings, revenue, return on
investment, manufacturing cost, cost of poor quality.
2 The customer perspective. Typical measures are customer satisfaction, complaint
rates, and market share.
3 The internal business perspective. This is where we measure the direct performance of
our operations. We can track process quality, product quality, scrap rates, speed and
efficiency, and a range of other metrics.
4 The innovation and learning perspective. These measures relate only indirectly to today’s operations but will impact how well we perform in the future. Examples include metrics for continuous improvement and skill development.
The DuPont model
The DuPont model is a schematic way to break down a company’s profitability into its components.
The point here is not to use the DuPont model to calculate ROI (shown at the far right of the diagram), but to understand the factors that ultimately drive financial results.
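The classic DuPont breakdown factors return on investment into profit margin and asset turnover; a small illustrative sketch (all figures hypothetical):

```python
def dupont_roi(net_income: float, sales: float, total_assets: float) -> float:
    """Classic DuPont decomposition: ROI = profit margin x asset turnover.
    The two factors show *why* ROI moved, not just that it moved."""
    profit_margin = net_income / sales     # how much of each sale is profit
    asset_turnover = sales / total_assets  # how hard the assets are working
    return profit_margin * asset_turnover

# Hypothetical figures in Mio. EUR: 8 net income, 100 sales, 80 assets.
print(dupont_roi(8, 100, 80))  # → 0.1, i.e. 10% return on assets
```

A drop in ROI can then be traced to either a thinner margin or slower asset turnover, which points to different corrective actions.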
Management by Objectives = MBO
There are plenty of traps in measurement: (list 7)
More recently, it has been rephrased to Objectives and Key Results (OKR)
- The MBO aphorism “what gets measured, gets done” has some truth to it, and managers risk getting exactly what they measure, not what they want. MBO also ignores the fact that many things that are not measured also get done.
- The use of performance measurement creates a measurement-driven organizational culture, not necessarily focused on customer needs.
- What is easy to measure is usually not what is right to measure.
- While measurement may create extrinsic motivation, it can hurt the more effective intrinsic motivation.
- A performance focus creates internal competition, which almost always will be gamed, and reduces cooperation and sharing between teams and departments. This is particularly a problem when performance metrics are used for compensation and rewards.
- When measures are not met, people can suffer low self-esteem, burnout, and low job satisfaction. When measures are always met, people can become complacent.
- Performance measurement systems can be costly bureaucracies that drain resources away from delivering performance.
Objectives and Key Results (OKR)
While OKR is not a rigid method, it has a few important principles. One is that every objective (O) should be significant, concrete, clearly defined, and inspirational(!). Each objective
should have 3–5 key results (KR) tied to it. KRs should be unambiguously measurable, in
such a way that one can answer the question “Did we achieve that result? Yes or no.” The
target success rate for KRs should be 70% according to Doerr. OKR prefers leading indicators to lagging indicators. Tasks are planned to achieve the key results.
Hoshin kanri
- What is it?
- Name key features
Hoshin Kanri (also known as Policy Deployment) is a strategic planning and execution system used to align an organization’s goals with its daily operations, while engaging all levels of the organization in the process.
Key Features:
* Strategic Focus: Defines 5–6 key strategic priorities for the next 1–3 years.
* Cascading Goals: These priorities are translated into actionable goals at every level of the organization.
* Interactive Planning (“Catchball”):
* Senior leadership proposes goals.
* Middle management refines and responds.
* This back-and-forth (like tossing a ball) continues down to frontline managers.
* All levels align their action plans with overarching strategic goals.
* Communication via A3 Reports: Each goal is typically summarized on an A3-sized sheet, promoting clarity and focus.
Metrics in manufacturing: SQDCEP
Metrics in factories are often organized by dimensions of performance, like “safety,” “quality,” “delivery,” “cost,” “environment,” and “people,” but many variants exist. “Productivity” is also very common. Others replace “people” with “morale,” and others again use “time” instead of “delivery.” The dimensions under which one sorts the metrics are less important than the choice of the metrics themselves.
Metrics of productivity
Value added = Sales − (Material costs + Energy costs + Outsourced services costs)
Value-added per employee = Value added ÷ Number of employees
Another common productivity metric is physical output per employee, e.g., cars per employee per year.
If a company employs more people than another to produce the same output, yes, it may be because it is less productive. But it may also be because it goes deeper into the manufacturing process. The number of cars/employee/year cannot be used to compare productivity between car companies with different manufacturing depths. Perhaps VW is lagging behind Toyota in productivity, but this metric does not prove it.
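The value-added calculation can be sketched as follows (all figures hypothetical):

```python
def value_added_per_employee(sales, material_costs, energy_costs,
                             outsourced_services, employees):
    """Value added = sales minus externally purchased inputs; dividing
    by headcount gives a productivity metric that, unlike cars/employee,
    is not distorted by manufacturing depth."""
    value_added = sales - (material_costs + energy_costs + outsourced_services)
    return value_added / employees

# Hypothetical plant, figures in Mio. EUR, 500 employees:
print(value_added_per_employee(200, 90, 10, 20, 500))  # → 0.16 Mio. EUR/employee
```

A plant that outsources more buys more services, so its value added shrinks along with its headcount, keeping the ratio comparable.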
Metrics of quality
Cost of Quality (COQ)
Quality is more effectively measured by using multiple, simpler metrics, covering different subtopics,
such as:
* Ratings by external agencies for consumer goods.
* Counts of customer claims.
* Rejection rates.
* First-Pass Yield, also known as Right-First-Time.
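The difference between overall yield and First-Pass Yield can be shown in a few lines (batch numbers are hypothetical):

```python
def yields(total, scrapped, reworked):
    """Overall yield counts reworked units as good; First-Pass Yield
    (Right-First-Time) counts only units that needed no rework."""
    good = total - scrapped
    overall_yield = good / total
    first_pass_yield = (good - reworked) / total
    return overall_yield, first_pass_yield

# Hypothetical batch: 100 units, 5 scrapped, 10 reworked then accepted.
print(yields(100, 5, 10))  # → (0.95, 0.85)
```

A large gap between the two numbers signals hidden rework loops that the plain yield conceals.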
Equipment metrics
The most commonly (mis)used metric for equipment performance is Overall Equipment Effectiveness
(OEE):
OEE = Availability × Performance × Quality
While the OEE summarizes metrics that are individually of interest, not much use can be made of it without unbundling it into its different factors. Since the meaning and the calculation methods for its factors vary across companies, it cannot be used for benchmarking. Yet, people erroneously use OEE for
benchmarking all the time.
The problem is that, in practice, increasing the OEE is often confused with increasing utilization.
Total Productive Maintenance (TPM)
Problem with OEE
The 3 factors of the OEE are defined differently in different organizations. There are issues with all 3 factors in the OEE formula:
- Availability. The availability of any device is the probability that it can be used when needed, as in the probability that a spindle is up and ready whenever you have a workpiece to put on it. In the OEE context, it is usually calculated as the ratio of the net time available to assign work to the machine in a planning period to the length of this planning period. If, in a 480-minute shift, a machine stops during a 30-minute break and has up to 60 minutes of unscheduled downtime and setups, the planning period is the 450 minutes of scheduled time, and the planner can count on 450 − 60 = 390 minutes in which to schedule work, which yields: Availability = 390/450 ≈ 87%.
This assumes that the machine’s ability to do work is proportional to the time it is up. For example, your connection to a server may work 99% of the time while uploading a large file and break every time you try to save it. The formula makes it look as if it has 99% availability when in fact it is 0%. This is not to say that the formula is wrong, only that it commingles the effects of many causes and that its relevance is not universal. There may be better ways to quantify availability depending on the characteristics of a machine and the work it is assigned. Companies that calculate OEEs often do not bother with such subtleties and simply equate availability with uptime.
- Performance. Performance is a generic term with many different meanings. As a factor in the OEE, it is the ratio of nominal to actual process time of the machine. If the machine actually takes 2 minutes to process a part when it is supposed to take only 1, its performance is 50%. The times used are net of setups and don’t consider any quality issue, because quality is accounted for in the last factor. This factor is meant to account for microstoppages and reduced speeds, and it is a relevant and important equipment metric in its own right.
- Quality. Quality is not a metric but a whole dimension of manufacturing performance with many relevant metrics. In the OEE, this factor is just the yield of the operation, meaning the ratio of good parts to total parts produced. It is not the First-Pass Yield, because reworked parts are still counted as good.
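The three factors can be computed from raw shift data roughly as follows. This sketch assumes the common convention that the planning period excludes breaks; definitions vary by company, and all numbers are hypothetical:

```python
def oee_factors(shift_min, break_min, downtime_setup_min,
                nominal_min_per_part, actual_min_per_part,
                total_parts, good_parts):
    """Compute the three OEE factors and their product from raw shift
    data. Reporting only the product hides which factor moved."""
    planned = shift_min - break_min                        # scheduled time
    availability = (planned - downtime_setup_min) / planned
    performance = nominal_min_per_part / actual_min_per_part
    quality = good_parts / total_parts                     # plain yield, not FPY
    return availability, performance, quality, availability * performance * quality

# Hypothetical shift: 480 min, 30 min break, 60 min downtime and setups,
# 1.0 min nominal vs 1.25 min actual per part, 380 good out of 400 parts.
a, p, q, oee = oee_factors(480, 30, 60, 1.0, 1.25, 400, 380)
print(round(a, 3), round(p, 2), round(q, 2), round(oee, 3))
```

Keeping the factors separate shows whether a low OEE comes from downtime, slow cycles, or defects, which call for very different countermeasures.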