Estimation - CL1 Flashcards

1
Q

Scope Concept

A

In project management, the term scope has two distinct uses: Project Scope and Product Scope.

Scope covers both the information required to start a project and the features the product must have to meet its stakeholders’ requirements.

Project Scope: “The work that needs to be accomplished to deliver a product, service, or result with the specified features and functions.”[1]
Product Scope: “The features and functions that characterize a product, service, or result.”[1]
Notice that Project Scope is more work-oriented (the hows), while Product Scope is more oriented toward functional requirements (the whats).

If requirements aren’t completely defined and described and if there is no effective change control in a project, scope or requirement creep may ensue.

Scope Management is the listing of the items to be produced or tasks to be done, to the required quantity, quality, and variety, in the time and with the resources available and agreed upon, and the modification of those variable constraints through dynamic, flexible juggling when circumstances change; when such changes go uncontrolled, the result is the scope creep described above.

Link:
https://en.wikipedia.org/wiki/Scope_(project_management)

2
Q

Estimates, Targets, and Commitments

A

Strictly speaking, the dictionary definition of estimate is correct: an
estimate is a prediction of how long a project will take or how much it will
cost. But estimation on software projects interplays with business targets,
commitments, and control.
A target is a statement of a desirable business objective. Examples
include the following:
- “We need to have Version 2.1 ready to demonstrate at a trade
show in May.”
- “We need to have this release stabilized in time for the holiday
sales cycle.”
- “These functions need to be completed by July 1 so that we’ll be
in compliance with government regulations.”
- “We must limit the cost of the next release to $2 million, because
that’s the maximum budget we have for that release.”
Businesses have important reasons to establish targets independent of
software estimates. But the fact that a target is desirable or even
mandatory does not necessarily mean that it is achievable.
While a target is a description of a desirable business objective, a
commitment is a promise to deliver defined functionality at a specific level
of quality by a certain date. A commitment can be the same as the
estimate, or it can be more aggressive or more conservative than the
estimate. In other words, do not assume that the commitment has to be
the same as the estimate; it doesn’t.

3
Q

Overestimate vs Underestimate

A

Intuitively, a perfectly accurate estimate forms the ideal planning
foundation for a project. If the estimates are accurate, work among
different developers can be coordinated efficiently. Deliveries from one
development group to another can be planned to the day, hour, or
minute. We know that accurate estimates are rare, so if we’re going to
err, is it better to err on the side of overestimation or underestimation?

Arguments Against Overestimation
Managers and other project stakeholders sometimes fear that, if a project
is overestimated, Parkinson’s Law will kick in—the idea that work will
expand to fill available time. If you give a developer 5 days to deliver a
task that could be completed in 4 days, the developer will find something
to do with the extra day. If you give a project team 6 months to complete
a project that could be completed in 4 months, the project team will find a
way to use up the extra 2 months. As a result, some managers
consciously squeeze the estimates to try to avoid Parkinson’s Law.
Another concern is Goldratt’s “Student Syndrome” (Goldratt 1997). If
developers are given too much time, they’ll procrastinate until late in the
project, at which point they’ll rush to complete their work, and they
probably won’t finish the project on time.
A related motivation for underestimation is the desire to instill a sense of
urgency in the development team. The line of reasoning goes like this:
The developers say that this project will take 6 months. I think there’s
some padding in their estimates and some fat that can be squeezed
out of them. In addition, I’d like to have some schedule urgency on
this project to force prioritizations among features. So I’m going to
insist on a 3-month schedule. I don’t really believe the project can be
completed in 3 months, but that’s what I’m going to present to the
developers. If I’m right, the developers might deliver in 4 or 5
months. Worst case, the developers will deliver in the 6 months they originally estimated.
Are these arguments compelling? To determine that, we need to examine
the arguments in favor of erring on the side of overestimation.

Arguments Against Underestimation
Underestimation creates numerous problems—some obvious, some not
so obvious.
Reduced effectiveness of project plans: Low estimates undermine
effective planning by feeding bad assumptions into plans for specific
activities. They can cause planning errors in the team size, such as
planning to use a team that’s smaller than it should be. They can
undermine the ability to coordinate among groups—if the groups aren’t
ready when they said they would be, other groups won’t be able to
integrate with their work.
If the estimation errors caused the plans to be off by only 5% or 10%,
those errors wouldn’t cause any significant problems. But numerous
studies have found that software estimates are often inaccurate by 100%
or more (Lawlis, Flowe, and Thordahl 1995; Jones 1998; Standish Group
2004; ISBSG 2005). When the planning assumptions are wrong by this
magnitude, the average project’s plans are based on assumptions that
are so far off that the plans are virtually useless.
Statistically reduced chance of on-time completion: Developers
typically estimate 20% to 30% lower than their actual effort (van
Genuchten 1991). Merely using their normal estimates makes the project
plans optimistic. Reducing their estimates even further simply reduces
the chances of on-time completion even more.
Poor technical foundation leads to worse-than-nominal results: A low
estimate can cause you to spend too little time on upstream activities
such as requirements and design. If you don’t put enough focus on
requirements and design, you’ll get to redo your requirements and redo
your design later in the project—at greater cost than if you’d done those
activities well in the first place (Boehm and Turner 2004, McConnell 2004a). This ultimately makes your project take longer than it would have
taken with an accurate estimate.
Destructive late-project dynamics make the project worse than nominal: Once a project gets into “late” status, project teams engage in
numerous activities that they don’t need to engage in during an “on-time”
project. Here are some examples:
- More status meetings with upper management to discuss how to
get the project back on track.
- Frequent reestimation, late in the project, to determine just when
the project will be completed.
- Apologizing to key customers for missing delivery dates
(including attending meetings with those customers).
- Preparing interim releases to support customer demos, trade
shows, and so on. If the software were ready on time, the
software itself could be used, and no interim release would be
necessary.
- More discussions about which requirements absolutely must be
added because the project has been underway so long.
- Fixing problems arising from quick and dirty workarounds that
were implemented earlier in response to the schedule pressure.
The important characteristic of each of these activities is that they don’t
need to occur at all when a project is meeting its goals. These extra
activities drain time away from productive work on the project and make it
take longer than it would if it were estimated and planned accurately.

Weighing the Arguments
Goldratt’s Student Syndrome can be a factor on software projects, but
I’ve found that the most effective way to address Student Syndrome is
through active task tracking and buffer management (that is, project
control), similar to what Goldratt suggests, not through biasing the estimates.
As Figure 3-1 shows, the best project results come from the most
accurate estimates (Symons 1991). If the estimate is too low, planning
inefficiencies will drive up the actual cost and schedule of the project. If
the estimate is too high, Parkinson’s Law kicks in.
I believe that Parkinson’s Law does apply to software projects. Work
does expand to fill available time. But deliberately underestimating a
project because of Parkinson’s Law makes sense only if the penalty for
overestimation is worse than the penalty for underestimation. In software,
the penalty for overestimation is linear and bounded—work will expand to
fill available time, but it will not expand any further. But the penalty for
underestimation is nonlinear and unbounded—planning errors,
shortchanging upstream activities, and the creation of more defects
cause more damage than overestimation does, and with little ability to
predict the extent of the damage ahead of time.
Don’t intentionally underestimate. The penalty for
underestimation is more severe than the penalty for
overestimation. Address concerns about overestimation
through planning and control, not by biasing your estimates.
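
The asymmetry can be made concrete with a toy model. The shape of the curves below follows the argument in the text, but the coefficients are invented for illustration; they are not data from Symons (1991) or Figure 3-1.

```python
# Toy cost model illustrating the asymmetric penalties described above.
# Coefficients are hypothetical; only the shape of the curves matters.

def modeled_actual_cost(estimate, nominal=100):
    """Return a modeled actual cost for a project whose true cost is `nominal`."""
    if estimate >= nominal:
        # Overestimation: Parkinson's Law expands work to fill the time,
        # so the penalty grows linearly and is bounded by the estimate itself.
        return estimate
    # Underestimation: planning errors, shortchanged upstream work, and
    # late-project churn compound, so the penalty grows nonlinearly.
    shortfall = (nominal - estimate) / nominal
    return nominal * (1 + 2 * shortfall + 3 * shortfall ** 2)

for est in (60, 80, 100, 120, 140):
    print(f"estimate {est:>3} -> modeled actual cost {modeled_actual_cost(est):.0f}")
```

In this toy model, overestimating a 100-unit project by 20% costs 120, while underestimating it by 20% costs about 152, which is the bounded-versus-unbounded distinction the text describes.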

4
Q

Decomposition and Recomposition

A

Decomposition is the practice of separating an estimate into multiple
pieces, estimating each piece individually, and then recombining the
individual estimates into an aggregate estimate. This estimation
approach is also known as “bottom up,” “micro estimation,” “module build
up,” “by engineering procedure,” and by many other names (Tockey
2005).
Decomposition is a cornerstone estimation practice—as long as you
watch out for a few pitfalls. This chapter discusses the basic practice in
more detail and explains how to avoid such pitfalls.

How Small Should the Estimated Pieces Be?
Seen from the perspective shown in Figure 10-1, software development
is a process of making larger numbers of steadily smaller decisions. At
the beginning of the project, you make such decisions as “What major
areas should this software contain?” A simple decision to include or
exclude an area can significantly swing total project effort and schedule
in one direction or another. As you approach top-level requirements, you
make a larger number of decisions about which features should be in or
out, but each of those decisions on average exerts a smaller impact on
the overall project outcome. As you approach detailed requirements, you
typically make hundreds of decisions, some with larger implications and
some with smaller implications, but on average the impact of these
decisions is far smaller than the impact of the decisions made earlier in
the project.
By the time you focus on software construction, the granularity of the
decisions you make is tiny: “How should I design this class interface?
How should I name this variable? How should I structure this loop?” And
so on. These decisions are still important, but the effect of any single
decision tends to be localized compared with the big decisions that were
made at the initial, software-concept level.
The implication of software development being a process of steady
refinement is that the further into the project you are, the finer-grained
your decomposed estimates can be. Early in the project, you might base
a bottom-up estimate on feature areas. Later, you might base the
estimate on marketing requirements. Still later, you might use detailed
requirements or engineering requirements. In the project’s endgame, you
might use developer and tester task-based estimates.
The limits on the number of items to estimate are more practical than
theoretical. Very early in a project, it can be a struggle to get enough
detailed information to create a decomposed estimate. Later in the
project, you might have too much detail. You need 5 to 10 individual
items before you get much benefit from the Law of Large Numbers, but
even 5 items are better than 1.
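
As a rough illustration of how recombination works, here is a minimal sketch using hypothetical tasks and three-point (PERT-style) estimates; the specific technique and numbers are illustrative assumptions, not something prescribed in this passage. Summing the expected values and combining the uncertainties as a root-sum-square shows why errors across 5 to 10 individually estimated pieces partially cancel.

```python
import math

# Per-task estimates in staff-days: (best case, most likely, worst case).
# Task names and numbers are hypothetical.
tasks = {
    "login feature":    (3, 5, 9),
    "reporting module": (5, 8, 15),
    "data import":      (2, 4, 8),
    "admin screens":    (4, 6, 10),
    "notifications":    (1, 2, 5),
}

def expected(best, likely, worst):
    """PERT-style expected value for a single task."""
    return (best + 4 * likely + worst) / 6

def stddev(best, worst):
    """Rough PERT-style standard deviation for a single task."""
    return (worst - best) / 6

total_expected = sum(expected(*t) for t in tasks.values())

# If per-task errors are independent, they partially cancel: the aggregate
# standard deviation is the root-sum-square, not the plain sum.
naive_sum = sum(stddev(b, w) for b, _, w in tasks.values())
combined = math.sqrt(sum(stddev(b, w) ** 2 for b, _, w in tasks.values()))

print(f"Expected total effort: {total_expected:.1f} staff-days")
print(f"Uncertainty if errors simply added: +/- {naive_sum:.1f} staff-days")
print(f"Uncertainty with partial cancellation: +/- {combined:.1f} staff-days")
```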

Link:
https://en.wikipedia.org/wiki/Decomposition_(computer_science)

5
Q

Analogy-based estimations

A

The basic approach that Mike is using in this example is estimation by
analogy, which is the simple idea that you can create accurate estimates
for a new project by comparing the new project to a similar past project.
I’ve had several hundred estimators create estimates for the Triad
project. Using the approach implied in the example, their estimates have
ranged from 30 to 144 staff months, with an average of 53 staff months.
The standard deviation of their estimates is 24, or 46% of the average
answer. That is not very good! A little bit of structure on the process helps
a lot.
Here is a basic estimation by analogy process that will produce better
results:
1. Get detailed size, effort, and cost results for a similar previous
project. If possible, get the information decomposed by feature
area, by work breakdown structure (WBS) category, or by some
other decomposition scheme.
2. Compare the size of the new project piece-by-piece to the old
project.
3. Build up the estimate for the new project’s size as a percentage
of the old project’s size.
4. Create an effort estimate based on the size of the new project
compared to the size of the previous project.
5. Check for consistent assumptions across the old and new
projects.
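
Below is a minimal sketch of steps 2 through 4, with hypothetical feature areas, sizes, and effort figures; it assumes effort scales roughly linearly with size, which the consistency check in step 5 would need to confirm.

```python
# Minimal sketch of analogy-based estimation, steps 2-4 above.
# Feature areas, sizes, and effort figures are hypothetical.

# Old project: measured size per feature area (e.g., lines of code) and total effort.
old_sizes = {"user interface": 12_000, "database": 9_000, "reports": 6_000, "foundation": 11_000}
old_effort_staff_months = 30

# Step 2: judge each piece of the new project relative to the old one (size multipliers).
relative_size = {"user interface": 1.5, "database": 1.0, "reports": 0.5, "foundation": 1.2}

# Step 3: build up the new project's size from the piece-by-piece comparison.
new_sizes = {area: old_sizes[area] * relative_size[area] for area in old_sizes}
size_ratio = sum(new_sizes.values()) / sum(old_sizes.values())

# Step 4: create the effort estimate by scaling the old project's effort.
# (Assumes effort scales roughly linearly with size; a large difference in size
# would call for a diseconomy-of-scale adjustment.)
new_effort = old_effort_staff_months * size_ratio

print(f"New/old size ratio: {size_ratio:.2f}")
print(f"Estimated effort for the new project: {new_effort:.1f} staff months")
```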

Estimate new projects by comparing them to similar past
projects, preferably decomposing the estimate into at least
five pieces.

6
Q

Story based estimations

A

Story Points as Unit of Measure
Mike Cohn summarizes story points best: “Story points are a unit of measure for expressing the overall size of a user story, feature or other piece of work.” Story points tell us how big a story is, relative to others, either in terms of size or complexity. Mike often refers to “dog points” when helping teams understand the concept of relative sizing. A 2-point (small) dog would be a Chihuahua. A 13-point (big) dog would be a Great Dane. With those two guides in mind, it’s fairly easy to size the other dog breeds relative to a Chihuahua or Great Dane. A Beagle, which is about twice as big as a Chihuahua, might be a 5. A Labrador, which is bigger than a Beagle but smaller than a Great Dane, might be an 8.

When you are first learning to use story points, your team will need to establish your own fixed comparison points. To do this, choose a story from your product backlog you can all agree is small (either in terms of size or complexity) and one you all agree is huge. I like having my small story be a two-point story because, if I need to go smaller (say I discover a Toy Chihuahua), I can. If I limit my smallest known story to a one-point story and I need to go smaller, I’m in trouble. The other stories can then be sized relative to these.

When it comes to choosing numbers to represent these sizes, I find the Fibonacci sequence works best. Each Fibonacci number is the sum of the previous two: 1 and 2 give 3, 2 and 3 give 5, 3 and 5 give 8, and so forth. I prefer Fibonacci over, say, T-shirt sizing or exponential growth (4/8/16/32/64/128/256, etc.) because we humans are comfortable with small base-ten numbers; once values climb out of that range, or are replaced with labels such as XS, S, M, L, XL, comparisons become confusing. Fibonacci numbers are simple, easily understood, and provide enough accuracy to reach the goal: relative estimates. You can choose a different set of numbers, but remember that the important thing is to be consistent.
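
Here is a minimal sketch of the scale described above, generating Fibonacci-style values from the 1 and 2 starting points; the upper cutoff of 100 is an arbitrary assumption for illustration.

```python
def fibonacci_scale(first=1, second=2, ceiling=100):
    """Generate Fibonacci-style story point values, each the sum of the previous two."""
    scale = [first, second]
    while scale[-1] + scale[-2] <= ceiling:
        scale.append(scale[-1] + scale[-2])
    return scale

print(fibonacci_scale())  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

Both reference points from the dog analogy, the 2-point Chihuahua and the 13-point Great Dane, appear on this scale.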

Story points are relative values, not fixed. There is no direct correlation between hours and points. For example, we cannot say with any degree of certainty that a two-point story is equal to 12.2 hours because stories in the two-point range will vary greatly in how many actual hours it takes to complete them. Similarly, you cannot compare one team’s story points with another’s with any degree of certainty. Story points are created by and are specific to the team that estimated them, will likely include a degree of complexity that is understood only by the team, and are not absolute.
