Estimation - CL2 Flashcards

1
Q

Cone of Uncertainty

A

In most projects, we are asked to estimate up front. To understand why this is such a problem, we must examine the Cone of Uncertainty, which Barry Boehm introduced to us in 1981 and Steve McConnell re-introduced in 1997 in his book Software Project Survival Guide.
The cone demonstrates that we have the most uncertainty at the beginning of any project (a range of 0.25x to 4x around the nominal estimate). This variance means that what we estimate to be a one-year project could actually end up taking anywhere from 3 to 48 months. The beginning of any project is the time when we are least certain about it, yet it is also when we are asked to deliver very precise estimates.
In agile, we try to move from uncertainty to certainty in as short a cycle as possible. This is accomplished by maximizing early learning about the system and how it should be designed. To do this, we create a single path through the system, a complete and working story. We use this to flush out design and requirement assumptions early, which allows us to move to certainty much more quickly and with much more confidence.
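
As a minimal sketch of the numbers above (using the 0.25x to 4x multipliers the cone gives for the start of a project), the implied range around a nominal estimate can be computed directly:

```python
# Rough sketch: Cone of Uncertainty range at project inception.
# The 0.25x-4x multipliers are the bounds cited for the widest part of the
# cone; later project phases would use narrower multipliers.

def uncertainty_range(nominal_months, low_factor=0.25, high_factor=4.0):
    """Return the (low, high) range implied by the cone's variability."""
    return nominal_months * low_factor, nominal_months * high_factor

low, high = uncertainty_range(12)  # a "one-year" estimate made up front
print(f"A 12-month estimate made at inception really means {low:.0f} to {high:.0f} months")
# -> 3 to 48 months
```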

Link:
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2013/hh765979(v=vs.120)?redirectedfrom=MSDN#why-estimation-is-hard

2
Q

Source of Estimation Errors

A

Estimation error creeps into estimates from four generic sources:
1. Inaccurate information about the project being estimated
2. Inaccurate information about the capabilities of the organization
that will perform the project
3. Too much chaos in the project to support accurate estimation
(that is, trying to estimate a moving target)
4. Inaccuracies arising from the estimation process itself

Common examples of project chaos include the following:
Requirements that weren’t investigated very well in the first place
Lack of end-user involvement in requirements validation
Poor designs that lead to numerous errors in the code
Poor coding practices that give rise to extensive bug fixing
Inexperienced personnel
Incomplete or unskilled project planning
Prima donna team members
Abandoning planning under pressure
Developer gold-plating
Lack of automated source code control

Unstable Requirements
Requirements changes have often been reported as a common source of
estimation problems (Lederer and Prasad 1992, Jones 1994, Stutzke
2005). In addition to all the general challenges that unstable
requirements create, they present two specific estimation challenges.
The first challenge is that unstable requirements represent one specific
flavor of project chaos. If requirements cannot be stabilized, the Cone of
Uncertainty can’t be narrowed, and estimation variability will remain high
through the end of the project.
The second challenge is that requirements changes are often not tracked
and the project is often not reestimated when it should be. In a well-run
project, an initial set of requirements will be baselined, and cost and
schedule will be estimated from that baselined set of requirements. As
new requirements are added or old requirements are revised, cost and
schedule estimates will be modified to reflect those changes. In practice,
project managers often neglect to update their cost and schedule
assumptions as their requirements change. The irony in these cases is
that the estimate for the original functionality might have been accurate,
but after dozens of new requirements have been piled onto the project—
requirements that have been agreed to but not accounted for—the
project won’t have any chance of meeting its original estimates, and the
project will be perceived as being late, even though everyone agreed that
the feature additions were good ideas.
The estimation techniques described in this book will certainly help you
estimate better when you have high requirements volatility, but better
estimation alone cannot address problems arising from requirements
instability. The more powerful responses are project control responses
rather than estimation responses. If your environment doesn’t allow you
to stabilize requirements, consider alternative development approaches
that are designed to work in high-volatility environments, such as short
iterations, Scrum, Extreme Programming, DSDM (Dynamic Systems
Development Method), time box development, and so on.

Unfounded Optimism
Optimism assails software estimates from all sources. On the developer
side of the project, Microsoft Vice President Chris Peters observed that
“You never have to fear that estimates created by developers will be too
pessimistic, because developers will always generate a too-optimistic
schedule” (Cusumano and Selby 1995). In a study of 300 software
projects, Michiel van Genuchten reported that developer estimates
tended to contain an optimism factor of 20% to 30% (van Genuchten
1991). Although managers sometimes complain otherwise, developers
don’t tend to sandbag their estimates—their estimates tend to be too low!
Don’t reduce developer estimates—they’re probably too
optimistic already.
Optimism applies within the management ranks as well. A study of about
100 schedule estimates within the U.S. Department of Defense found a
consistent “fantasy factor” of about 1.33 (Boehm 1981). Project
managers and executives might not assume that projects can be done
30% faster or cheaper than they can be done, but they certainly want the
projects to be done faster and cheaper, and that is a kind of optimism in
itself.
Common variations on this optimism theme include the following:
- We’ll be more productive on this project than we were on the last
project.
- A lot of things went wrong on the last project. Not so many things
will go wrong on this project.
- We started the project slowly and were climbing a steep learning
curve. We learned a lot of lessons the hard way, but all the
lessons we learned will allow us to finish the project much faster
than we started it.
Considering that optimism is a near-universal fact of human nature,
software estimates are sometimes undermined by what I think of as a Collusion of Optimists. Developers present estimates that are optimistic.
Executives like the optimistic estimates because they imply that desirable
business targets are achievable. Managers like the estimates because
they imply that they can support upper management’s objectives. And so
the software project is off and running with no one ever taking a critical
look at whether the estimates were well founded in the first place.

Subjectivity and Bias
Subjectivity creeps into estimates in the form of optimism, in the form of
conscious bias, and in the form of unconscious bias. I differentiate
between estimation bias, which suggests an intent to fudge an estimate
in one direction or another, and estimation subjectivity, which simply
recognizes that human judgment is influenced by human experience,
both consciously and unconsciously.
As far as bias is concerned, the response of customers and managers
when they discover that the estimate does not align with the business
target is sometimes to apply more pressure to the estimate, to the
project, and to the project team. Excessive schedule pressure occurs in
75% to 100% of large projects (Jones 1994).
As far as subjectivity is concerned, when considering different estimation
techniques our natural tendency is to believe that the more “control
knobs” we have on an estimate—that is, the more places there are to
tweak the estimate to match our specific project—the more accurate the
estimate will be.
The reality is the opposite. The more control knobs an estimate has, the
more chances there are for subjectivity to creep in. The issue is not so
much that estimators deliberately bias their estimates. The issue is more
that the estimate gets shaded slightly higher or slightly lower with each of
the subjective inputs. If the estimation technique has a large number of
subjective inputs, the cumulative effect can be significant.

Off-The-Cuff Estimates
Project teams are sometimes trapped by off-the-cuff estimates. Your boss
asks, for example, “How long would it take to implement print preview on
the Gigacorp Web site?” You say, “I don’t know. I think it might take about
a week. I’ll check into it.” You go off to your desk, look at the design and
code for the program you were asked about, notice a few things you’d
forgotten when you talked to your manager, add up the changes, and
decide that it would take about five weeks. You hurry over to your
manager’s office to update your first estimate, but the manager is in a
meeting. Later that day, you catch up with your manager, and before you
can open your mouth, your manager says, “Since it seemed like a small
project, I went ahead and asked for approval for the print-preview
function at the budget meeting this afternoon. The rest of the budget
committee was excited about the new feature and can’t wait to see it next
week. Can you start working on it today?”
One of the errors people commit when estimating solely from personal
memory is that they compare the new project to their memory of how
long a past project took, or how much effort it required. Unfortunately,
people sometimes remember their estimate for the past project rather
than the actual outcome of the past project. If they use their past estimate
as the basis for a new estimate, and the past project’s actual outcome
was that it overran its estimate, guess what? The estimator has just
calibrated a project overrun into the estimate for the new project.
Don’t give off-the-cuff estimates. Even a 15-minute estimate
will be more accurate.

Unwarranted Precision
In casual conversation, people tend to treat “accuracy” and “precision” as
synonyms. But for estimation purposes, the distinctions between these
two terms are critical.
Accuracy refers to how close to the real value a number is. Precision
refers merely to how exact a number is. In software estimation, this
amounts to how many significant digits an estimate has. A measurement
can be precise without being accurate, and it can be accurate without
being precise. The single digit 3 is an accurate representation of pi to one
significant digit, but it is not precise. 3.37882 is a more precise
representation of pi than 3 is, but it is not any more accurate.
Airline schedules are precise to the minute, but they are not very
accurate. Measuring people’s heights in whole meters might be accurate,
but it would not be at all precise.
For software estimation purposes, the distinction between accuracy and
precision is critical. Project stakeholders make assumptions about project
accuracy based on the precision with which an estimate is presented.
When you present an estimate of 395.7 days, stakeholders assume the
estimate is accurate to 4 significant digits! The accuracy of the estimate
might be better reflected by estimating 1 year, 4 quarters, or 13 months,
rather than 395.7 days. Using an estimate of 395.7 days instead of 1 year
is like representing pi as 3.37882—the number is more precise, but it’s
really less accurate.
Match the number of significant digits in your estimate (its
precision) to your estimate’s accuracy.
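
A small sketch of the idea: round a falsely precise figure to a unit that matches its accuracy. Reporting in whole calendar months (rather than tenths of days) is an illustrative choice, not a rule from the text.

```python
# Sketch: match the precision of a reported estimate to its accuracy.
# Whole calendar months are used here purely for illustration.

DAYS_PER_MONTH = 30.44  # average calendar month

def report_in_months(estimate_days):
    """Round a falsely precise day count to whole months."""
    return round(estimate_days / DAYS_PER_MONTH)

print(report_in_months(395.7))  # -> 13 months, not "395.7 days"
```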

Other Sources of Error
The sources of error described in the first nine sections of this chapter
are the most common and the most significant, but they are not
exhaustive. Here are some of the other ways that error can creep into an
estimate:
Unfamiliar business area
Unfamiliar technology area
Incorrect conversion from estimated time to project time (for
example, assuming the project team will focus on the project
eight hours per day, five days per week)
Misunderstanding of statistical concepts (especially adding
together a set of “best case” estimates or a set of “worst case”
estimates)
Budgeting processes that undermine effective estimation
(especially those that require final budget approval in the wide
part of the Cone of Uncertainty)
Having an accurate size estimate, but introducing errors when
converting the size estimate to an effort estimate
Having accurate size and effort estimates, but introducing errors
when converting those to a schedule estimate
Overstated savings from new development tools or methods
Simplification of the estimate as it’s reported up layers of
management, fed into the budgeting process, and so on

3
Q

Diseconomies of Scale

A

People naturally assume that a system that is 10 times as large as
another system will require something like 10 times as much effort to
build. But the effort for a 1,000,000-LOC system is more than 10 times as
large as the effort for a 100,000-LOC system, as is the effort for a
100,000-LOC system compared to the effort for a 10,000-LOC system.
The basic issue is that, in software, larger projects require coordination
among larger groups of people, which requires more communication
(Brooks 1995). As project size increases, the number of communication
paths among different people increases as a squared function of the
number of people on the project. Figure 5-2 illustrates this dynamic.
The consequence of this disproportionate growth in communication paths
(along with some other factors) is that effort also grows faster than
linearly as project size increases. This is known as a
diseconomy of scale.
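
For reference, the number of communication paths among n people is n(n - 1)/2, which grows roughly with the square of team size. A quick sketch:

```python
# Communication paths among n people: n * (n - 1) / 2 (Brooks 1995).
def communication_paths(n_people):
    return n_people * (n_people - 1) // 2

for team in (3, 10, 30, 100):
    print(team, "people ->", communication_paths(team), "paths")
# 3 -> 3, 10 -> 45, 30 -> 435, 100 -> 4950
```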
Outside software, we usually discuss economies of scale rather than
diseconomies of scale. An economy of scale is something like, “If we
build a larger manufacturing plant, we’ll be able to reduce the cost per
unit we produce.” An economy of scale implies that the bigger you get,
the smaller the unit cost becomes.
A diseconomy of scale is the opposite. In software, the larger the system
becomes, the greater the cost of each unit. If software exhibited
economies of scale, a 100,000-LOC system would be less than 10 times
as costly as a 10,000-LOC system. But the opposite is almost always the
case.
For software estimation, the implications of diseconomies of scale are a
case of good news, bad news. The bad news is that if you have large
variations in the sizes of projects you estimate, you can’t just estimate a
new project by applying a simple effort ratio based on the effort from
previous projects. If your effort for a previous 100,000-LOC project was
170 staff months, you might figure that your productivity rate is
100,000/170, which equals 588 LOC per staff month. That might be a
reasonable assumption for another project of about the same size as the
old project, but if the new project is 10 times bigger, the estimate you
create that way could be off by 30% to 200%.
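
A minimal sketch of why the simple ratio misleads across sizes. The 1.10 exponent below is an assumed diseconomy value chosen for illustration, not a calibrated figure; real values come from your own historical data or an estimation tool.

```python
# Sketch: linear ratio vs. a diseconomy-of-scale (power-law) model.
# The 1.10 exponent is an illustrative assumption.

past_size_loc = 100_000
past_effort_sm = 170            # staff months
new_size_loc = 1_000_000        # ten times larger

# Naive linear ratio (about 588 LOC per staff month)
linear_estimate = new_size_loc / (past_size_loc / past_effort_sm)

# Power-law model calibrated to reproduce the past project,
# then applied to the larger project: effort = a * size^b
b = 1.10
a = past_effort_sm / past_size_loc**b
diseconomy_estimate = a * new_size_loc**b

print(round(linear_estimate), "vs", round(diseconomy_estimate), "staff months")
# The power-law estimate is noticeably larger than the simple ratio suggests.
```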
There’s more bad news: There isn’t a simple technique in the art of
estimation that will account for a significant difference in the size of two
projects. If you’re estimating a project of a significantly different size than
your organization has done before, you’ll need to use estimation software
that applies the science of estimation to compute the estimate for the
new project based on the results of past projects. My company provides
a free software tool called Construx® Estimate™ that will do this kind of
estimate. You can download a copy at www.construx.com/estimate.
Use software estimation tools to compute the impact of diseconomies of scale.

When You Can Safely Ignore Diseconomies of Scale
After all that bad news, there is actually some good news. The majority of
projects in an organization are often similar in size. If the new project
you’re estimating will be similar in size to your past projects, it is usually
safe to use a simple effort ratio, such as lines of code per staff month, to
estimate a new project. Figure 5-5 illustrates the relatively minor
difference in linear versus exponential estimates that occurs within a
specific size range.
If you use a ratio-based estimation approach within a restricted range of
sizes, your estimates will not be subject to much error. If you used an
average ratio from projects in the middle of the size range, the estimation
error introduced by diseconomies of scale would be no more than about
10%. If you work in an environment that experiences higher-than-average
diseconomies of scale, the differences could be higher.
If you’ve completed previous projects that are about the same
size as the project you’re estimating—defined as being within a factor of 3 from largest to smallest—you can safely use a
ratio-based estimating approach, such as lines of code per
staff month, to estimate your new project.

Importance of Diseconomy of Scale in Software Estimation
Much of the software-estimating world’s focus has been on determining
the exact significance of diseconomies of scale. Although that is a
significant factor, remember that the raw size is the largest contributor to
the estimate. The effect of diseconomy of scale on the estimate is a
second-order consideration, so put the majority of your effort into
developing a good size estimate. We’ll discuss how to create software
size estimates more specifically in Chapter 18, “Special Issues in
Estimating Size.”

4
Q

Count, Compute, Judge techniques

A

Suppose you’re at a reception for the world’s best software estimators.
The room is packed, and you’re seated in the middle of the room at a
table with three other estimators. All you can see as you scan the room
are wall-to-wall estimators. Suddenly, the emcee steps up to the
microphone and says, “We need to know exactly how many people are in
this room so that we can order dessert. Who can give me the most
accurate estimate for the number of people in the room?”
The estimators at your table immediately break out into a vigorous
discussion about the best way to estimate the answer. Bill, the estimator
to your right, says, “I make a hobby of estimating crowds. Based on my
experience, it looks to me like we’ve got about 335 people in the room.”
The estimator sitting across the table from you, Karl, says, “This room
has 11 tables across and 7 tables deep. One of my friends is a banquet
planner, and she told me that they plan for 5 people per table. It looks to
me like most of the tables do actually have about 5 people at them. If we
multiply 11 times 7 times 5, we get 385 people. I think we should use that
as our estimate.”
The estimator to your left, Lucy, says, “I noticed on the way into the room
that there was an occupancy limit sign that says this room can hold 485
people. This room is pretty full. I’d say 70 to 80 percent full. If we multiply those percentages by the room limit, we get 340 to 388 people. How
about if we use the average of 364 people, or maybe just simplify it to
365?”
Bill says, “We have estimates of 335, 365, and 385. It seems like the right
answer must be in there somewhere. I’m comfortable with 365.”
“Me too,” Karl says.
Everyone looks at you. You say, “I need to check something. Would you
excuse me for a minute?” Lucy, Karl, and Bill give you curious looks and
say, “OK.”
You return a few minutes later. “Remember how we had to have our
tickets scanned before we entered the room? I noticed on my way into
the room that the handheld ticket scanner had a counter. So I went back
and talked to the ticket taker at the front door. She said that, according to
her scanner, she has scanned 407 tickets. She also said no one has left
the room so far. I think we should use 407 as our estimate. What do you
say?”

Count First
What do you think the right answer is? Is it the answer of 335, created by
Bill, whose specialty is estimating crowd sizes? Is it the answer of 385,
derived by Karl from a few reasonable assumptions? Is it Lucy’s 365,
also derived from a few reasonable assumptions? Or is the right number
the 407 that was counted by the ticket scanner? Is there any doubt in
your mind that 407 is the most accurate answer? For the record, the story
ended by your table proposing the answer of 407, which turned out to be
the correct number, and your table was served dessert first.
One of the secrets of this book is that you should avoid doing what we
traditionally think of as estimating! If you can count the answer directly,
you should do that first. That approach produced the most accurate
answer in the story.
If you can’t count the answer directly, you should count something else
and then compute the answer by using some sort of calibration data. In
the story, Karl had the historical data of knowing that the banquet was
planned to have 5 people per table. He counted the number of tables and
then computed the answer from that.
Similarly, Lucy based her estimate on the documented fact of the room’s
occupancy limit. She used her judgment to estimate the room was 70 to
80 percent full.
The least accurate estimate came from Bill, the person who used only
judgment to create the answer.
Count if at all possible. Compute when you can’t count. Use
judgment alone only as a last resort.

What to Count
Software projects produce numerous things that you can count. Early in
the development life cycle, you can count marketing requirements,
features, use cases, and stories, among other things.
In the middle of the project, you can count at a finer level of granularity—
engineering requirements, Function Points, change requests, Web
pages, reports, dialog boxes, screens, and database tables, just to name
a few.
Late in the project, you can count at an even finer level of detail—code
already written, defects reported, classes, and tasks, as well as all the
detailed items you were counting earlier in the project.
You can decide what to count based on a few goals.
Find something to count that’s highly correlated with the size of the software you’re estimating.
If your features are fixed and you’re estimating cost and schedule, the biggest influence on a project estimate is the size of the software. When you look for something to count, look for something that will be a strong indicator of the software’s size. Number of marketing requirements, number of engineering requirements, and Function Points are all examples of countable quantities that are strongly associated with final system size.
In different environments, different quantities are the most accurate
indicators of project size. In one environment, the best indicator might be
the number of Web pages. In another environment, the best indicator
might be the number of marketing requirements, test cases, stories, or
configuration settings. The trick is to find something that’s a relevant
indicator of size in your environment.
Look for something you can count that is a meaningful
measure of the scope of work in your environment.
Find something to count that’s available sooner rather than later in the development cycle.
The sooner you can find something meaningful to count, the sooner you’ll be able to provide long-range predictability. The count of lines of code for a project is often a great indicator of project effort, but the code won’t be available to count until the very end of the project. Function Points are strongly associated with ultimate project size, but they aren’t available until you have detailed requirements. If you can find something you can count earlier, you can use that to create an estimate earlier. For example, you might create a rough estimate based on a count of marketing requirements and then tighten up the estimate later based on a Function Point count.
Find something to count that will produce a statistically meaningful average.
Find something that will produce a count of 20 or more. Statistically, you need a sample of at least 20 items for the average to be meaningful. Twenty is not a magic number, but it’s a good guideline for statistical validity.
Understand what you’re counting.
For your count to serve as an accurate basis for estimation, you need to be sure the same assumptions apply to the count your historical data is based on and to the count you’re using for your estimate. If you’re counting marketing requirements, be sure that what you counted as a “marketing requirement” for your historical data is similar to what you count as a “marketing requirement” for your estimate. If your historical data indicates that a past project team in your company delivered 7 user stories per week, be sure your assumptions about team size, programmer experience, development technology, and other factors are similar in the project you’re estimating.
Find something you can count with minimal effort.
All other things being equal, you’d rather count something that requires the least effort. In the story at the beginning of the chapter, the count of people in the room was readily available from the ticket scanner. If you had to go around to each table and count people manually, you might decide it wasn’t worth the effort.

Use Computation to Convert Counts to Estimates
If you collect historical data related to counts, you can convert the counts
to something useful, such as estimated effort. Table 7-1 lists examples of
quantities you might count and the data you would need to compute an
estimate from the count.
Example of counting defects late in a project
Once you have the kind of data described in the table, you can use that data as a more solid basis for creating estimates than expert judgment. If you know that you have 400 open defects, and you know that the 250 defects you’ve fixed so far have averaged 2 hours per defect, you know that you have about 400 x 2 = 800 hours of work to fix the open defects.
Example of estimation by counting Web pages
If your data says that so far your project has taken an average of 40 hours to design, code, and test each Web page with dynamic content, and you have 12 Web pages left, you know that you have something like 12 x 40 = 480 hours of work left on the remaining Web pages.
The important point in these examples is that there is no judgment in these estimates. You count, and then you compute. This process helps keep the estimates free from bias that would otherwise degrade their accuracy. For counts that you already have available—such as number of defects—such estimates also require very low effort.
Don’t discount the power of simple, coarse estimation models
such as average effort per defect, average effort per Web
page, average effort per story, and average effort per use
case.
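
A minimal sketch of the count-then-compute pattern described above. The rates are the ones from the two examples in the text; in practice they would come from your own historical data.

```python
# Sketch: convert counts to effort using historical averages.
# The rates below are taken from the two worked examples in the text.

def remaining_effort(count, historical_hours_per_item):
    return count * historical_hours_per_item

open_defects = 400
hours_per_defect = 2.0   # average over the 250 defects already fixed
print(remaining_effort(open_defects, hours_per_defect))   # -> 800.0 hours

web_pages_left = 12
hours_per_page = 40.0    # design + code + test per dynamic page so far
print(remaining_effort(web_pages_left, hours_per_page))   # -> 480.0 hours
```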

Use Judgment Only as a Last Resort
So-called expert judgment is the least accurate means of estimation.
Estimates seem to be the most accurate if they can be tied to something
concrete. In the story told at the beginning of this chapter, the worst
estimate was the one created by the expert who used judgment alone.
Tying the estimate to the room occupancy limit was a little better,
although it was subject to more error because that approach required a
judgment about how full the room was as a percentage of maximum
occupancy, which is an opportunity for subjectivity or bias to contaminate
the estimate.
Historical data combined with computation is remarkably free from the
biases that can undermine more judgment-based estimates. Avoid the
temptation to tweak computed estimates to conform to your expert
judgment. When I wrote the second edition of Code Complete
(McConnell 2004a), I had a team that formally inspected the entire first
edition—all 900 pages of it. During our first inspection meeting, our
inspection rate averaged 3 minutes per page. Realizing that 3 minutes
per page implied 45 hours of inspection meetings, I commented after the
first meeting that I thought we were just beginning to gel as a team, and,
in my judgment, we would speed up in future meetings. I suggested using
a working number of 2 or 2.5 minutes per page instead of 3 minutes to
plan future meetings. The project manager responded that, because we
had only one meeting’s worth of data, we should use that meeting’s
number of 3 minutes per page as a guide for planning the next few
meetings. We could adjust our plans later based on different data from
later meetings, if we needed to.
Nine hundred pages later, how many minutes per page do you think we
averaged for the entire book? If you guessed 3 minutes per page, you’re
right!
Avoid using expert judgment to tweak an estimate that has
been derived through computation. Such “expert judgment”
usually degrades the estimate’s accuracy.

5
Q

Delphi method

A

Group expert judgment techniques are useful when estimating early in a
project or for estimating large unknowns. This chapter presents an
unstructured group judgment technique (group reviews) and a structured
technique called Wideband Delphi.

Group Reviews
A simple technique for improving the accuracy of estimates created by
individuals is to have a group review the estimates. When I have groups
review estimates, I require three simple rules:
- Have each team member estimate pieces of the project individually, and then meet to compare your estimates. Discuss differences in the estimates enough to understand the sources of the differences. Work until you reach consensus on the high and low ends of the estimation ranges.
- Don’t just average your estimates and accept that. You can compute the average, but you need to discuss the differences among individual results. Do not just take the calculated average automatically.
- Arrive at a consensus estimate that the whole group accepts. If you reach an impasse, you can’t vote. You must discuss differences and obtain buy-in from all group members.

Wideband Delphi
Wideband Delphi is a structured group-estimation technique. The original
Delphi technique was developed by the Rand Corporation in the late
1940s for use in predicting trends in technology (Boehm 1981). The
name Delphi comes from the ancient Greek oracle at Delphi. The basic
Delphi technique called for several experts to create independent
estimates and then to meet for as long as necessary to converge on, or
at least agree upon, a single estimate.
An initial study on the use of Delphi for software estimation found that the
basic Delphi technique was no more accurate than a less structured
group meeting. Barry Boehm and his colleagues concluded that the
generic Delphi meetings were subject to too much political pressure and
were also likely to be dominated by the more assertive estimators in the
group. Consequently, Boehm and his colleagues extended the basic
Delphi technique into what has become known as Wideband Delphi.
Table 13-1 describes the basic procedure.
Table 13-1: Wideband Delphi Technique
1. The Delphi coordinator presents each estimator with the
specification and an estimation form
2. Estimators prepare initial estimates individually. (Optionally,
this step can be performed after step 3.)
3. The coordinator calls a group meeting in which the estimators
discuss estimation issues related to the project at hand. If the
group agrees on a single estimate without much discussion,
the coordinator assigns someone to play devil’s advocate.
4. Estimators give their individual estimates to the coordinator
anonymously.
5. The coordinator prepares a summary of the estimates on an
iteration form (shown in Figure 13-2) and presents the iteration
form to the estimators so that they can see how their estimates compare with other estimators’ estimates.
6. The coordinator has estimators meet to discuss variations in
their estimates.
7. Estimators vote anonymously on whether they want to accept
the average estimate. If any of the estimators votes “no,” they
return to step 3.
8. The final estimate is the single-point estimate stemming from
the Delphi exercise. Or, the final estimate is the range created
through the Delphi discussion and the single-point Delphi
estimate is the expected case.
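
A hedged sketch of the iteration mechanics in steps 4 through 7: collect anonymous estimates, summarize them, and repeat until the group accepts the result. The convergence rule below (every estimate within 10% of the mean) is an illustrative stand-in for the anonymous accept/reject vote, not part of the published procedure.

```python
# Sketch of the Wideband Delphi iteration loop (steps 4-7).
# The "accepts" rule is an illustrative stand-in for the anonymous vote.

from statistics import mean

def summarize_round(estimates):
    """Summarize one round of anonymous estimates."""
    return mean(estimates), max(estimates) - min(estimates)

def accepts(estimates, tolerance=0.10):
    avg = mean(estimates)
    return all(abs(e - avg) <= tolerance * avg for e in estimates)

rounds = [
    [12, 20, 35, 8],    # initial individual estimates (staff weeks)
    [15, 18, 22, 14],   # after discussing the variations
    [17, 18, 19, 17],   # converging
]
for i, estimates in enumerate(rounds, start=1):
    avg, spread = summarize_round(estimates)
    print(f"Round {i}: average {avg:.1f}, spread {spread}, accepted: {accepts(estimates)}")
```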

When to Use Wideband Delphi
In the difficult group estimation exercise I’ve discussed in this chapter,
Wideband Delphi reduced the average estimation error from 290% to
170%. Errors of 290% and 170% are very high, characteristic of
estimates created in the wide part of the Cone of Uncertainty. Still,
reducing error by 40% is valuable, whether the reduction is from 290% to
170% or from 50% to 30%.
Although my data seems to endorse the use of Wideband Delphi,
industry studies on the question of how to combine estimates created by
different estimators have been mixed. Some studies have found that
group-based approaches to combining estimates work best, and others
have found that simple averaging works best (Jørgensen 2002).
Because Wideband Delphi requires a meeting, it burns a lot of staff time,
making it an expensive way to estimate. It is not appropriate for detailed
task estimates.
Wideband Delphi is useful if you’re estimating work in a new business
area, work in a new technology, or work for a brand-new kind of software.
It is useful for creating “order of magnitude” estimates at product
definition or software concept time, before you’ve pinned down many of
the requirements. It’s also useful if a project will draw heavily from
diverse specialties, such as a combined need for uncommon usability,
algorithmic complexity, exceptional performance, intricate business rules,
and so on. It also tends to sharpen the definition of the scope of work,
and it’s useful for flushing out estimation assumptions. In short,
Wideband Delphi is most useful for estimating single, focused items that
require input from numerous disciplines in the very wide part of the Cone
of Uncertainty. In these uncertain situations, Wideband Delphi can be
invaluable.

Use Wideband Delphi for early-in-the-project estimates, for
unfamiliar systems, and when several diverse disciplines will
be involved in the project itself.

6
Q

Challenges with Estimating Size

A

Once you move from directly estimating effort and schedule to computing
them from historical data, size becomes the most difficult quantity to
estimate. Iterative projects might use a size estimate to help determine
how many features can be delivered within an iteration, but they usually
focus on techniques designed to estimate features more directly.
Estimation in the later stages of sequential projects tends to focus on
bottom-up effort estimates created by the people who will be doing the
work. Estimating size is thus most applicable to the early and middle
stages of sequential projects. The purpose of a size estimate is to
support long-range predictability in the wide part of the Cone of
Uncertainty.
The common size measures of lines of code and function points have
different strengths and weaknesses, as do custom measures defined by
organizations for their own use. Creating estimates by using multiple size
measures and then looking for convergence or spread tends to produce
the most accurate results.
This chapter describes how to create the size estimate. Chapter 19,
“Special Issues in Estimating Effort,” explains how to convert this
chapter’s size estimates into an effort estimate, and Chapter 20, “Special
Issues in Estimating Schedule,” describes how to convert the effort estimate into a schedule estimate.

Challenges with Estimating Size
Numerous measures of size exist, including the following:
Features
User stories
Story points
Requirements
Use cases
Function points
Web pages
GUI components (windows, dialog boxes, reports, and so on)
Database tables
Interface definitions
Classes
Functions/subroutines
Lines of code
The lines of code (LOC) measure is the most common size measure
used for estimation, so we'll discuss that first.

Role of Lines of Code in Size Estimation
Using lines of code is a mixed blessing for software estimation. On the
positive side, lines of code present several advantages:
Data on lines of code for past projects is easily collected via tools.
Lots of historical data already exists in terms of lines of code in many organizations.
Effort per line of code has been found to be roughly constant
across programming languages, or close enough for practical
purposes. (Effort per line of code is more a function of project
size and kind of software than of programming language, as
described in Chapter 5, “Estimate Influences.” What you get for
each line of code will vary dramatically, depending on the
programming language.)
Measurements in LOC allow for cross-project comparisons and
estimation of future projects based on data from past projects.
Most commercial estimation tools ultimately base their effort and
schedule estimates on lines of code.
On the negative side, LOC measures present several difficulties when
used to estimate size:
Simple models such as “lines of code per staff month” are error-
prone because of software’s diseconomy of scale and because of
vastly different coding rates for different kinds of software.
LOC can’t be used as a basis for estimating an individual’s task
assignments because of the vast differences in productivity
between different programmers.
A project that requires more code complexity than the projects
used to calibrate the productivity assumptions can undermine an
estimate’s accuracy.
Using the LOC measure as the basis for estimating requirements
work, design work, and other activities that precede the creation
of the code seems counterintuitive.
Lines of code are difficult to estimate directly, and must be
estimated by proxy.
What exactly constitutes a line of code must be defined carefully
to avoid the problems described in “Issues Related to Size Measures” in Section 8.2, “Data to Collect.”
Some experts have argued against using lines of code as a measure of
size because of problems associated with using them to analyze
productivity across projects of different sizes, kinds, programming
languages, and programmers (Jones 1997). Other experts have pointed
out that variations of the same basic issues apply to other size
measurements, including function points (Putnam and Myers 2003).
The underlying issue that’s common to lines of code, function points, and
other simple size measures is that measuring anything as multifaceted as
software size using a single-dimensional measure will inevitably give rise
to anomalies in at least a few circumstances (Gilb 1988, Gilb 2005).
We don’t use single-dimensional measures to describe the economy or
other complex entities. We can’t even use a single measure to determine
who the best hitter in baseball is. We consider batting average, home
runs, runs batted in, on-base percentage, and other factors—and then we
still argue about what the numbers mean. If we can’t measure the best
hitter using a simple measure, why would we expect we could measure
something as complex as software size using a simple measure?
My personal conclusion about using lines of code for software estimation
is similar to Winston Churchill’s conclusion about democracy: The LOC
measure is a terrible way to measure software size, except that all the
other ways to measure size are worse. For most organizations, despite
its problems, the LOC measure is the workhorse technique for measuring
size of past projects and for creating early-in-the-project estimates of new
projects. The LOC measure is the lingua franca of software estimation,
and it is normally a good place to start, as long as you keep its limitations
in mind.
Your environment might be different enough from the common
programming environments that lines of code are not highly correlated
with project size. If that’s true for you, find something that is more
proportional to effort than lines of code, count that, and base your size
estimates on that instead, as discussed in Chapter 8, “Calibration and
Historical Data.” Try to find something that’s easy to count, highly correlated with effort, and meaningful for use across multiple projects.

Use lines of code to estimate size, but remember both the
general limitations of simple measures and the specific
hazards of the LOC measure.

Function-Point Estimation
One alternative to the LOC measure is function points. A function point is
a synthetic measure of program size that can be used to estimate size in
a project’s early stages (Albrecht 1979). Function points are easier to
calculate from a requirements specification than lines of code are, and
they provide a basis for computing size in lines of code. Many different
methods for counting function points exist. The standard for function-
point counting is maintained by the International Function Point Users
Group (IFPUG) and can be found on their Web site at www.ifpug.org.
The number of function points in a program is based on the number and
complexity of each of the following items:
External Inputs Screens, forms, dialog boxes, or control signals
through which an end user or other program adds, deletes, or
changes a program’s data. They include any input that has a
unique format or unique processing logic.
External Outputs Screens, reports, graphs, or control signals
that the program generates for use by an end user or other
program. They include any output that has a different format or
requires a different processing logic than other output types.
External Queries Input/output combinations in which an input
results in an immediate, simple output. The term originated in the
database world and refers to a direct search for specific data,
usually using a single key. In modern GUI and Web applications,
the line between queries and outputs is blurry, but, generally,
queries retrieve data directly from a database and provide only
rudimentary formatting, whereas outputs can process, combine,
or summarize complex data and can be highly formatted.
Internal Logical Files Major logical groups of end-user data or
control information that are completely controlled by the program.
A logical file might consist of a single flat file or a single table in a
relational database.
External Interface Files Files controlled by other programs with which the program being counted interacts. This includes each
major logical group of data or control information that enters or
leaves the program.
Count function points to obtain an accurate early-in-the-
project size estimate.
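
As a rough sketch, an unadjusted function point count weights the counts of the five item types by complexity. The weights below are the commonly published IFPUG low/average/high values; treat them as an assumption and defer to the IFPUG counting manual for real counts.

```python
# Sketch: unadjusted function point count from the five item types.
# Weights are the commonly published IFPUG low/average/high values;
# the authoritative counting rules are maintained by IFPUG (www.ifpug.org).

WEIGHTS = {
    "external_inputs":          (3, 4, 6),
    "external_outputs":         (4, 5, 7),
    "external_queries":         (3, 4, 6),
    "internal_logical_files":   (7, 10, 15),
    "external_interface_files": (5, 7, 10),
}

def unadjusted_function_points(counts):
    """counts: {item_type: (low_count, average_count, high_count)}"""
    total = 0
    for item_type, (low, avg, high) in counts.items():
        w_low, w_avg, w_high = WEIGHTS[item_type]
        total += low * w_low + avg * w_avg + high * w_high
    return total

example = {  # invented counts, for illustration only
    "external_inputs":          (6, 4, 2),
    "external_outputs":         (5, 3, 1),
    "external_queries":         (4, 2, 0),
    "internal_logical_files":   (2, 1, 0),
    "external_interface_files": (1, 0, 0),
}
print(unadjusted_function_points(example))  # -> 137
```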

7
Q

Challenges with Estimating Effort

A

Most projects eventually estimate effort directly from a detailed task list.
But early in a project, effort estimates are most accurate when computed
from size estimates. This chapter describes several means of computing
those early estimates.

Influences on Effort
The largest influence on a project’s effort is the size of the software being
built. The second largest influence is your organization’s productivity.
Table 19-1 illustrates the ranges of productivities between different
software projects. The data in the table illustrates the hazards both of
using industry-average data and of not considering the effect of
diseconomies of scale. Embedded software projects, such as the Lincoln
Continental and IBM Checkout Scanner projects, tend to generate code
at a slower rate than shrink-wrapped projects such as Microsoft Excel. If you
used “average” productivity data from the wrong kind of project, your
estimate could be wrong by a factor of 10 or more.
Within the same industry, productivity can still vary significantly. Microsoft
Excel 3.0 produced code at about 10 times the rate that Lotus 123 v.3
did, even though both projects were trying to build similar products and
were conducted within the same timeframe.
Even within the same organization, productivity can still vary because of
diseconomies of scale and other factors. The Microsoft Windows NT
project produced code at a much slower rate than other Microsoft
projects did, both because it was a systems software project rather than
an applications software project and because it was much larger.
The lowest rate of productivity in Table 19-1 on a line-of-code-per-staff-
year basis is the Space Shuttle software, but it would be a mistake to
characterize that development team as unproductive. For projects of that
size, the odds of outright failure exceed 50% (Jones 1998). The fact that
the project finished at all is a major accomplishment. Its productivity was only 15% less than the Windows NT project even though the Space
Shuttle software was 10 times the size of the Windows NT project, which
is impressive.
If you don’t have historical data on your organization’s productivity, you
can approximate your productivity by using industry-average figures for
different kinds of software: internal business systems, life-critical
systems, games, device drivers, and so on. But beware of the factor of
10 differences in productivity for different organizations within the same
industry. If you do have data on your organization’s historical productivity,
you should use that data to convert your size estimates to effort
estimates instead of using industry-average data.

Computing Effort from Size
Computing an effort estimate from a size estimate is where we start to
run into some of the weaknesses of the art of estimation and need to rely
more on the science of estimation.
Computing Effort Estimates by Using Informal Comparison to Past Projects
If your historical data is for projects within a narrow size range (say, a
factor of 3 difference from smallest to largest), you are probably safe
using a linear model to compute the effort estimate for a new project
based on the effort results from similar past projects. Table 19-2 shows
an example of past-project data that could form the basis for such an
estimate.
Suppose you’re estimating the effort for a new business system, and
you’ve estimated the size of the new software to be 65,000 to 100,000
lines of Java code, with a most likely size of 80,000 lines of code. Project
C is too small to use for comparison purposes because it is less than
one-third the size of the low end of your range. Project E is too large to
use for comparison purposes because it is more than 3 times the top end
of your range. Thus your relevant historical productivity range is 986 LOC
per staff month (Project B) to 1,612 LOC per staff month (Project A).
Dividing the lowest end of your size range by the highest productivity rate
gives a low estimate of 40 staff months. Dividing the highest end of your
size range by the lowest productivity gives a high estimate of 101 staff
months. Your estimated effort is 40 to 101 staff months.
A good working assumption is that the range includes 68% of the
possible outcomes (that is, ±1 standard deviation, unless you have
reasons to assume otherwise). You can refer back to Table 10-6,
“Percentage Confident Based on Use of Standard Deviation,” to consider
other probabilities that the 40 to 101 staff-month range might include.
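
A minimal sketch of the informal comparison just described, using the figures given in the text (the productivity rates of Projects A and B and the 65,000 to 100,000 LOC size range):

```python
# Sketch: effort range from a size range plus historical productivity,
# using the figures given in the text for Projects A and B.

size_low_loc, size_high_loc = 65_000, 100_000
best_productivity = 1_612     # LOC per staff month (Project A)
worst_productivity = 986      # LOC per staff month (Project B)

effort_low = size_low_loc / best_productivity      # optimistic pairing
effort_high = size_high_loc / worst_productivity   # pessimistic pairing

print(f"Estimated effort: {effort_low:.0f} to {effort_high:.0f} staff months")
# -> roughly 40 to 101 staff months, treated as about a +/- 1 standard deviation range
```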

What Kinds of Effort Are Included in This Estimate?
Because you’re using historical data to create this estimate, it includes
whatever effort is included in the historical data. If the historical data
included effort only for development and testing, and only for the part of
the project from end of requirements through system testing, that’s what
the estimate includes. If the historical data also included effort for
requirements, project management, and user documentation, that’s what
the estimate includes.
In principle, estimates that are based on industry-average data usually
include all technical work, but not management work, and all
development work except requirements. In practice, the data that goes
into computing industry-average data doesn’t always follow these
assumptions, which is part of why industry-average data varies as much
as it does.

Computing Effort Estimates by Using the Science of Estimation
The science of estimation produces somewhat different results than the
informal comparison to past projects does. If you plug the same
assumptions into Construx Estimate (that is, using the historical data
listed to calibrate the estimate), you get an expected result of 80 staff
months, which is in the middle of the range produced by the less-formal
approach. Construx Estimate gives a Best Case estimate (20%
confident) of 65 staff months, and a Worst Case (80% confident) estimate
of 94 staff months.
When Construx Estimate is calibrated with industry-average data instead
of historical data, it produces a nominal estimate of 84 staff months and a
20% to 80% range of 47 to 216 staff months, which is a much wider
range. This again highlights the benefit of using historical data, whenever
possible.

Use software tools based on the science of estimation to
most accurately compute effort estimates from your size
estimates.

8
Q

Challenges with Estimating Schedule

A

The need to meet customer deadlines, trade show deadlines, seasonal
sales-cycle deadlines, regulatory deadlines, and other calendar-oriented
deadlines seems to put much of the estimation pressure on the schedule.
The schedule estimate seems to produce most of the heat in estimation
discussions.
Ironically, once you move from intuitive estimation approaches to
approaches based on historical data, the schedule estimate becomes a
simple computation that flows from the size and effort estimates. If T.S.
Eliot had written poems about software, he might have written
This is the way the estimate ends
This is the way the estimate ends
This is the way the estimate ends
Not with a bang but a whimper

The Basic Schedule Equation
A rule of thumb is that you can estimate schedule early in a project using
the Basic Schedule Equation:
ScheduleInMonths = 3.0 × StaffMonths^(1/3)
In case your math is a little rusty, the 1/3 exponent in the equation works
the same as taking the cube root of StaffMonths.
Sometimes the 3.0 is a 2.0, 2.5, 3.5, 4.0 or similar number, but the basic
idea that schedule is a cube-root function of effort is almost universally
accepted by estimation experts. (The specific number is one that can be
derived through calibration with your organization’s historical data.) Barry
Boehm commented in 1981 that this formula was one of the most
replicated results in software engineering (Boehm 1981). Additional
analysis over the past few decades has continued to affirm the validity of
the schedule equation (Boehm 2000, Stutzke 2005).
To use the equation, suppose you’ve estimated that you will need 80 staff
months to build your project. The schedule computed from this formula
ranges from 8.6 to 17.2 months depending on what coefficient from 2.0 to
4.0 is used. The nominal schedule will be (3.0 × 80^(1/3)), which is 12.9
months. (I don’t recommend presenting the schedule estimate with this
much precision; I’m including it here to make the calculations easier to
follow.)
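
A minimal sketch of the equation applied to the 80 staff-month example; the coefficient of 3.0 is the nominal value from the text and should be calibrated from your own historical data.

```python
# Sketch: Basic Schedule Equation, ScheduleMonths = coefficient * StaffMonths^(1/3).
# The coefficient typically falls between 2.0 and 4.0 and is best calibrated
# from your organization's historical data.

def schedule_months(staff_months, coefficient=3.0):
    return coefficient * staff_months ** (1 / 3)

effort = 80  # staff months
print(f"Nominal schedule: {schedule_months(effort):.1f} months")        # ~12.9
print(f"Range: {schedule_months(effort, 2.0):.1f} to "
      f"{schedule_months(effort, 4.0):.1f} months")                     # ~8.6 to 17.2
```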
The schedule equation is the reason that the uncertainty ranges in Figure
20-1 are much broader for efforts than they are for schedules. Effort
increases in proportion to scope, whereas schedule increases in
proportion to the cube root of effort.
The schedule equation implicitly assumes that you’re able to adjust the
team size to suit the size implied by the equation. If your team size is
fixed, the schedule won’t vary in proportion to the cube root of the scope;
it will vary more widely based on your team-size constraints. Section
20.7, “Schedule Estimation and Staffing Constraints,” will discuss this
issue in more detail.
The Basic Schedule Equation is also not intended for estimation of small
projects or late phases of larger projects. You should switch to some
other technique when you know the names of the specific people working
on the project.

9
Q

Story-based scope definition: scoping a project, release planning

A

To get a quick feel for how big the project is, run the planning process at coarse
resolution.
Let’s say you’re the only one on the project so far. What’s the first step? How can you use
the shopping metaphor to bring a project into existence?
• Items— Big stories
• Prices— Rough estimates of the time to implement each story
• Budget— Roughly how many people you have to work on the project
• Constraints— Supplied by someone with business knowledge
The purpose of this first plan is to quickly answer the question “Does the project make
any sense at all?” Often these sanity plans are made before there are any technical people
on the project at all. Don’t worry about getting perfect numbers. If the project makes any
sense then you’ll invest enough to prepare a plan you have some confidence in.
What if we were to implement a space-age travel system? (For the full story of the system
see Example of Stories.) We might have a few big stories in mind. Before we can assign
prices to them, we have to know a little more about the system. We ask a few questions:
• How many reservations do we need to handle?
• How much of the time do we need to have the system available?
• What kind of machines will be used to access the system?

We make some simplifying assumptions as we go along.
• The stories are completely independent of each other.
• We will develop the necessary infrastructure along with the story, but only the
infrastructure absolutely needed for that story.
We know these assumptions aren’t exactly accurate, but then again neither is anything
else. If we were trying to predict the future, this would worry us. Since we aren’t, it
doesn’t.
So, the bottom line is that we can implement the system in 24 months.
The shouting starts. “We have to go to market in six months, tops, or we’re dead.” Yes,
we understand. “If you can’t do it, we’ll hire someone who can.” You should do that, but
perhaps we can talk a little first. “You programmers can’t tell me what to do.” Of course
not, but perhaps you would like to know what you can’t do.
Now the negotiation starts. “What if we just made a booking system first? We’d need the
first three stories. That’s four months. But we can’t launch without the holographic
simulation. What can you give me in two months?”
Within a few hours or days, we have a rough plan from which we can move forward.

Making the Big Plan
The purpose of the big plan is to answer the question “Should we invest more?” We
address this question in three ways:
• Break the problem into pieces.
• Bring the pieces into focus by estimating them.
• Defer less valuable pieces.
Start with a conversation about the system (this works best if you involve at least one
other person). As you talk, write down your thoughts, one per index card. If your
thoughts get too detailed, stop writing until you get abstract again.
Some cards will contain business functionality. These are stories. Lay these out in the
middle of a big table. Some of the cards will contain ideas that are context—throughput,
reliability, budget, sketches of happy customers. Set these to one side.
Now you need to estimate how long each story would take your team to implement (just
guess at a size at first). Give yourself plenty of padding. There will be plenty of time for
stone-cold reality later. Bask in the glow of infinite possibilities for the moment.
If your estimates are too small (like days or weeks), you’ve slipped into detail land. Put
those cards to one side and start over. If you can’t imagine being able to estimate a story
(“Easy to Use” is the classic example), put it to one side. Better yet, think about some
specific things that would make the system easy to use, and turn them into stories (for
example, “Personal Profiles”).
You can only estimate from experience. What if you don’t have any experience? Then
you’d better fake it. Write a little prototype. Ask a friend who knows. Invite a
programmer into the conversation.
Move fast. You’re sketching here, trying to quickly capture a picture of the whole system.
Don’t spend more than a few hours on your first rough plan.

Release Planning
The big plan helped us decide that it wasn’t patently stupid to invest in the project. Now
we need to synchronize the project with the business. We have to synchronize two
aspects of the project:
• Date
• Scope
Often, important dates for a project come from outside the company:
• The date on the contract
• COMDEX
• When the VC money will run out
Even if the date of the next release is internally generated, it will be set for business
reasons. You want to release often to stay ahead of your competition, but if you release
too often, you won’t ever have enough new functionality to merit a press release, a new
round of sales calls, or champagne for the programmers.
Release planning allocates user stories to releases and iterations—what should we work
on first? what will we work on later? The strategies you will use are similar to making the
big plan in the first place:
• Break the big stories into smaller stories.
• Sharpen the focus on the stories by estimating how long each will take.
• Defer less valuable stories until what is left fits in the time available.
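
As a rough sketch of "defer less valuable stories until what is left fits": sort stories by business value and fill the release up to the available capacity. All story names, values, and estimates below are invented for illustration.

```python
# Sketch: defer less valuable stories until what remains fits the release.
# All story data is invented for illustration.

stories = [
    # (name, business_value, estimate_in_ideal_weeks)
    ("Book a flight", 10, 4),
    ("Holographic simulation", 9, 8),
    ("Personal profiles", 6, 3),
    ("Loyalty points", 4, 5),
    ("Seat-map preview", 3, 2),
]
capacity_weeks = 12  # what the team can deliver before the release date

release, deferred, used = [], [], 0
for name, value, estimate in sorted(stories, key=lambda s: s[1], reverse=True):
    if used + estimate <= capacity_weeks:
        release.append(name)
        used += estimate
    else:
        deferred.append(name)

print("This release:", release, f"({used} weeks)")
print("Deferred:", deferred)
```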

Who Does Release Planning?
Release planning is a joint effort between the customer and the programmers. The
customer drives release planning, and the programmers help navigate. The customer chooses which stories to place in the release and which stories to implement later, while
the programmers provide the estimates required to make a sensible allocation.
The customer
• Defines the user stories
• Decides what business value the stories have
• Decides what stories to build in this release
The programmers
• Estimate how long it will take to build each story
• Warn the customer about significant technical risks
• Measure their team progress to provide the customer with an overall budget

How Stable Is the Release Plan?
Not at all.
The only thing we know for certain about a plan is that development won’t go according
to it. So release planning happens all the time. Every time the customer changes his mind
about the requirements and their priority, this changes the plan. Every time the developers
learn something new about the speed of doing things, this changes the plan.
The plan is therefore just a snapshot of the current view of what things will be done. This
snapshot helps people get an idea of what to expect, but it is no statement of certainty. It
will be revised frequently. Everyone—developers, customers, and management—needs
to accept constant change.

How Far in Advance Do You Plan?
How far in advance do you build a release plan for? We know that the further ahead we
plan, the less accurate we will be, so there’s little point going into great detail for years
into the future. We prefer to plan one or two iterations in advance and one or two releases
in advance.
Focusing on one or two iterations means that the programmers clearly need to know what
stories are in the iteration they are currently working on. It’s also useful to know what’s in
the next iteration. Beyond that the iteration allocation is not so useful.
However, the business needs to know what is currently in this release, and it’s useful to
have an idea of what will be in the release after that.
The real decider for how far in advance you should plan is the cost of keeping the plan
up to date versus the benefit you get from it, bearing in mind that plans are inherently
unstable. You have to honestly weigh the value of the plan against its volatility.

How Do You Plan Infrastructure?
When you plan in a function-oriented way, such as we suggest, the obvious question is
how to deal with infrastructure. Before we can start building functionality we have to put
together the distributed object messaging infrastructure, the database persistence
infrastructure, and the dynamic GUI frameworks. This suggests a plan where you spend
several months building the infrastructure components before you deliver any customer
functionality.
This style of development is a common feature of the dead and dying projects we've
seen—and we don't think it's a coincidence. Doing infrastructure without customer
function leads to the following risks:
• You spend a lot of time not delivering things that are valuable to the customer,
which strains the relationship with the customer.
• You try to make the infrastructure cover everything you think you might need,
which leads to an overly complex infrastructure.
Therefore, evolve the infrastructure as you build the functionality. For each iteration,
build just enough infrastructure for the stories in that iteration. You won't build a more
complex infrastructure than you need, and the customer is engaged in building the
infrastructure because she sees the dependent functionality as it's evolving.

How Do You Store the Release Plan?
Our preferred form of release plan is a set of cards. Each card represents a user story and
contains the essential information to describe what the story is about. You group the
cards together to show which stories are in this release. Lay out stories with adhesive on a
wall, or pin them up on a cork board. Wrap future stories with a rubber band and stick
them safely in a drawer.
We like cards because they are simple, physical devices that encourage everyone to
manipulate them. It’s always that little bit harder for people to see and manipulate things
that are stored in a computer.
However, if you want to put your stories in a computer, go ahead. Just do it in a simple
way. A simple spreadsheet often does the job best. People who use complicated project
management packages are prone to spending time fiddling with the package when they
should be communicating with other people.
Another computer format that many people are using is Wiki, a collaborative Web tool
invented by Ward Cunningham (c2.com). Teams rapidly evolve conventions within
Wiki’s flexible format for recording stories, tasks, and status.

How Much Can You Put into a Release?
If you have stories, iterations, and releases, you need to know how many stories you can
do in each iteration and release. We use the term velocity to represent how much the
team can do in an iteration. We measure the velocity of the team and estimate the amount
of effort required for each story (see Chapter 12 for details on how we do this).
The sum of the effort for all the work you want to do cannot exceed the available effort.
When you are planning an iteration, the sum of the story estimates cannot exceed the
team’s velocity. When you are planning a release, the sum of the story estimates cannot
exceed the team’s velocity times the number of iterations.
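
As a rough illustration of this constraint, the following sketch keeps the most valuable stories that fit within velocity times the number of iterations and defers the rest. The story names, point estimates, and business values are hypothetical examples, not from the text.

# Minimal sketch: fit stories into a release budget of velocity * iterations,
# deferring the least valuable stories. All numbers are made-up examples.
def plan_release(stories, velocity_per_iteration, iterations):
    budget = velocity_per_iteration * iterations
    planned, deferred, used = [], [], 0
    # Consider the most valuable stories (as judged by the customer) first.
    for name, points, value in sorted(stories, key=lambda s: s[2], reverse=True):
        if used + points <= budget:
            planned.append(name)
            used += points
        else:
            deferred.append(name)
    return planned, deferred, used, budget

stories = [
    ("Book a flight", 5, 90),            # (story, estimate in points, business value)
    ("Holographic simulation", 13, 70),
    ("Personal profiles", 3, 40),
    ("Seat-map preview", 8, 30),
]
planned, deferred, used, budget = plan_release(stories, velocity_per_iteration=10, iterations=2)
print(f"Planned: {planned} ({used}/{budget} points)")
print(f"Deferred to a later release: {deferred}")

The greedy value-first ordering is just one way to defer less valuable stories; in practice the customer makes these trade-offs by moving cards around, not by running a script.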

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
10
Q

Documenting and presenting estimation results

A

The way you communicate an estimate suggests how accurate the
estimate is. If your presentation style implies an unfounded accuracy, you
lay the groundwork for a difficult discussion about the estimate itself. This
chapter presents several options for presenting estimates.

Communicating Estimate Assumptions
An essential practice in presenting an estimate is to document the
assumptions embodied in the estimate. Assumptions fall into several
familiar categories:
Which features are required
Which features are not required
How elaborate certain features need to be
Availability of key resources
Dependencies on third-party performance
Major unknowns
Major influences and sensitivities of the estimate
How good the estimate is
What the estimate can be used for

Expressing Uncertainty
The key issue in estimate presentation is documenting the estimate’s
uncertainty in a way that communicates the uncertainty clearly and that
also maximizes the chances that the estimate will be used constructively
and appropriately. This section describes several ways to communicate
uncertainty.
Plus-or-Minus Qualifiers
An estimate with a plus-or-minus qualifier is an estimate such as “6
months, ±2 months” or “$600,000, +$200,000, -$100,000.” The plus-or-
minus style indicates both the amount and the direction of uncertainty in
the estimate. An estimate of 6 months, +1/2 month, -1/2 month says that
the estimate is quite accurate and that there’s a good chance of meeting
the estimate. An estimate of 6 months, +4 months, -1 month says that the
estimate isn’t very accurate and that there is less chance of meeting the
estimate.
When you express an estimate with plus-or-minus qualifiers, consider
how large the qualifiers are and what they represent. A typical practice is
to make the qualifiers large enough to include one standard deviation on
each side of the core estimate. With this approach, you’ll still have a 16%
chance that the actual result will come in above the top of your estimate
and a 16% chance that it will come in below the bottom. If you need to
account for more than the 68% probability in the middle of the one-
standard-deviation range, use qualifiers that account for more than one
standard deviation of variability. (See Table 10-6, “Percentage Confident
Based on Use of Standard Deviation,” on page 121, for a list of standard
deviations and associated probabilities.) Be sure to consider whether the
minus qualifier should be the same as the plus qualifier. If you’re dealing
with effort or schedule, typically the minus side will be smaller than the
plus side for the reasons discussed in Section 1.4, “Estimates as
Probability Statements.”
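To see how the size of the qualifiers maps to confidence, here is a small sketch. The 6-month estimate and 1-month standard deviation are assumed numbers, and the sketch assumes a symmetric normal distribution, which is a simplification given that the minus side is usually smaller than the plus side.

# Minimal sketch: plus-or-minus qualifiers sized to one standard deviation,
# and the confidence that such a range actually implies. Numbers are assumptions.
from statistics import NormalDist

nominal = 6.0   # core estimate, in months
std_dev = 1.0   # assumed standard deviation of the estimate

low, high = nominal - std_dev, nominal + std_dev
dist = NormalDist(mu=nominal, sigma=std_dev)
confidence = dist.cdf(high) - dist.cdf(low)

print(f"Estimate: {nominal:g} months, +{high - nominal:g} / -{nominal - low:g} months")
print(f"Chance the actual result falls inside the range: {confidence:.0%}")          # ~68%
print(f"Chance of coming in above the top of the range: {1 - dist.cdf(high):.0%}")   # ~16%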
A weakness of the plus-or-minus style is that, as the estimate is passed
through the organization, it tends to get stripped down to just the core estimate. Occasionally, managers simplify such an estimate out of a
desire to ignore the variability implied by the estimate. More often, they
simplify the estimate because their manager or their corporate budgeting
system can handle only estimates that are expressed as single-point
numbers. If you use this technique, be sure you can live with the single-
point number that’s left after the estimate gets converted to a simplified
form.

Using Ranges (of Any Kind)
As discussed throughout this book, ranges are the most accurate way to
reflect the inherent inaccuracy in estimates at various points in the Cone
of Uncertainty. You can combine ranges with the other techniques
described in this chapter (that is, ranges of coarse time periods, using
ranges for a risk-quantified estimate instead of plus-or-minus qualifiers,
and so on).
When you present an estimate as a range, consider the following
questions:
What level of probability should your range include? Should
it include ±1 standard deviation (68% of possible outcomes), or
does the range need to be wider?
How do your company’s budgeting and reporting processes
deal with ranges? Be aware that companies’ budgeting and
reporting processes often won’t accept ranges. Ranges are often
simplified for reasons that have little to do with software
estimation, such as “The company budgeting spreadsheet won’t
allow me to enter a range.” Be sensitive to the restrictions your
manager is working under.
Can you live with the midpoint of the range? Occasionally, a
manager will simplify a range by publishing the low end of the
range. More often, managers will average the high and low ends
and use that if they are not allowed to use a range.
Should you present the full range or only the part of the
range from the nominal estimate to the top end of the range?
Projects rarely become smaller over time, and estimates tend to
err on the low side. Do you really need to present the low end to
high end of your estimate, or should you present only the part of
the range from the nominal estimate to the high end?
Can you combine the use of ranges with other techniques?
You might want to consider presenting your estimate as a range
and then listing assumptions or quantified risks.
Use an estimate presentation style that reinforces the
message you want to communicate about your estimate’s
accuracy.

Usefulness of Estimates Presented as Ranges
Project stakeholders might think that presenting an estimate as a wide
range makes the estimate useless. What’s really happening is that
presentation of the estimate as a wide range accurately conveys the fact
that the estimate is useless! It isn’t the presentation that makes the
estimate useless; it’s the uncertainty in the estimate itself. You can’t
remove the uncertainty from an estimate by presenting it without its
uncertainty. You can only ignore the uncertainty, and that’s to everyone’s
detriment.
The two largest professional societies for software developers—the IEEE
Computer Society and the Association for Computing Machinery—have
jointly decided that software developers have a professional responsibility
to include uncertainty in their estimates. Item 3.09 in the IEEE-CS/ACM
Software Engineering Code of Ethics reads as follows:
Software engineers shall ensure that their products and related
modifications meet the highest professional standards possible. In
particular, software engineers shall, as appropriate:
3.09 Ensure realistic quantitative estimates of cost, scheduling,
personnel, quality and outcomes on any project on which they work
or propose to work and provide an uncertainty assessment of these
estimates. [emphasis added]
Including uncertainty in your estimates isn’t just a nicety, in other words.
It’s part of a software professional’s ethical responsibility.

Ranges and Commitments
Sometimes, when stakeholders push back on an estimation range,
they’re really pushing back on including a range in the commitment. In
that case, you can present a wide estimation range and recommend that too much variability still exists in the estimate to support a meaningful
commitment.
After you’ve reduced uncertainty enough to support a commitment,
ranges are generally not an appropriate way to express the commitment.
An estimation range illustrates what the nature of the commitment is—
more or less risky—but the commitment itself should normally be
expressed as a single-point number.
Don’t try to express a commitment as a range. A
commitment needs to be specific.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
11
Q

PERT analysis

A

PERT is a method of analyzing the tasks involved in completing a given project, especially the time needed to complete each task, and of identifying the minimum time needed to complete the total project. It incorporates uncertainty by making it possible to schedule a project without knowing precisely the details and durations of all the activities. It is an event-oriented technique rather than a start- and completion-oriented one, and is used more in projects where time, rather than cost, is the major factor. It is applied to very large-scale, one-time, complex, non-routine infrastructure projects and to research and development projects.

PERT offers a management tool, which relies “on arrow and node diagrams of activities and events: arrows represent the activities or work necessary to reach the events or nodes that indicate each completed phase of the total project.”[1]

PERT and CPM are complementary tools, because “CPM employs one time estimation and one cost estimation for each activity; PERT may utilize three time estimates (optimistic, expected, and pessimistic) and no costs for each activity. Although these are distinct differences, the term PERT is applied increasingly to all critical path scheduling.”
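
For the three-estimate flavor mentioned above, the standard PERT calculation weights the most likely (middle) estimate four times as heavily as the optimistic and pessimistic ones. A minimal sketch follows; the durations are made-up examples.

# Minimal sketch of the classic PERT three-point estimate for one activity.
# E = (O + 4M + P) / 6, with standard deviation approximately (P - O) / 6.
def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

expected, std_dev = pert_estimate(optimistic=4, most_likely=6, pessimistic=14)
print(f"Expected duration: {expected:.1f} weeks (std dev {std_dev:.1f})")  # 7.0 weeks (std dev 1.7)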

Link:
https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique

How well did you know this?
1
Not at all
2
3
4
5
Perfectly