Metrics Flashcards

(34 cards)

1
Q

Work in progress

A

Open PRs with activity in the last 72 hours

Why: A low WIP will lead to improved cycle time

How: Look at this number in relation to Active Contributors in your group.

2
Q

Active contributors

A

Individuals who authored or pushed a commit in the last 24 hours

Why: When team members are available to code, it can be a strength

How: Use it to get a better sense of what's impacting availability.

3
Q

PR Activity

A

Total time that has gone into a pull request.

Why: CD requires work to be done in small batches in order to ship frequently

How: A high amount of activity within a PR could signal an issue.

4
Q

Weekly coding days

A

Average number of days per week that a developer spends coding.

Why: Represents a team's capacity.

How: When the number is low, it warrants a re-prioritization of coding time. When the number is high, it can indicate that developers are overworked.

5
Q

Commits per day

A

Average number of times code is committed per day

Why: Smaller commits represent more frequent checkpointing.

How: A low number of commits can indicate a bottleneck or an issue with the codebase.

6
Q

Pushes per day

A

Average number of pushes per day

Why: Serves as another checkpoint

How: Frequent pushing relates to working in small batches and making incremental changes.

7
Q

Rework

A

The percentage of code that is rewritten.

Why: Rework or churn represents efficiency at the coding level.

How: High Rework can be due to late changes in product requirements, or unfamiliarity with the codebase.

8
Q

Time to open

A

Proxy for how long it takes to develop something.

Why: Provides visibility of an engineer’s work to the rest of the team.

How: A high Time to Open may indicate that the team doesn't have clearly established norms around when pull requests are ready for review.

9
Q

PR size

A

Lines of code that were changed, added, or removed

Why: Smaller PRs are typically easier to review.

How: When this number is high, it's typically a good coaching opportunity.

10
Q

Impact

A

Magnitude of changes to the codebase over a period of time.

Why: This is a synthetic measure we developed to help teams understand the significance of all the changes to the code.

How: Impact represents the collective significance of commits to the codebase.

11
Q

Review Cycles

A

Time it takes for a PR to go back and forth between author and reviewer.

Why: It’s important to keep cycle times low

How: It gives you a better sense of where bottlenecks can be occurring in your process.

12
Q

Unreviewed PRs

A

The number of PRs that were merged without review

Why: Unreviewed PRs represent a risk to your codebase.

How: Gain a better understanding of your review process.

13
Q

PRs reviewed

A

The count of pull requests that have been reviewed.

Why: Helps you understand how well the burden of code review is distributed amongst the team.

How: A low number of PRs Reviewed may be a sign of conflicting priorities, disengagement, or difficulty with the codebase or process.

14
Q

Review speed

A

Time between when a pull request is opened and when the reviewer first submits feedback.

Why: You'll want to make sure that work is consistently moving through the software development process.

15
Q

Review coverage

A

The percentage of files changed that receive at least one code review comment.

Why: Review Coverage is a proxy for review thoroughness

How: Gives you an understanding of how the quantity of feedback compares to its actionability.

16
Q

Review influence

A

Review comments that are addressed either by a response comment or a change to the code.

Why: Shows you how meaningful review feedback is to pull request authors

How: Use it to ensure that the team is leaving meaningful comments.

17
Q

PR throughput

A

A count of how many pull requests are merged over a period of time

Why: A total count of merged pull requests can serve as a proxy for value delivered.

How: This metric signals whether your engineering organization is getting more or less productive

18
Q

Cycle time

A

The time between when the first commit is authored to when a pull request is merged.

Why: Cycle Time represents your team’s time-to-market, or how quickly software is delivered to customers.

How: Use it to understand baseline productivity.
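The Cycle Time definition above can be sketched in a few lines, assuming you have the first-commit and merge timestamps for a pull request (the function name and example timestamps are illustrative, not Velocity's API):

```python
# Cycle Time: elapsed time from the first authored commit to the merge.
# The timestamps below are made-up examples, not real Velocity data.
from datetime import datetime, timedelta

def cycle_time(first_commit_at: datetime, merged_at: datetime) -> timedelta:
    """Time between the first authored commit and the pull request merge."""
    return merged_at - first_commit_at

ct = cycle_time(datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 3, 17, 30))
print(ct.total_seconds() / 3600)  # 56.5 (hours)
```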

19
Q

PR Success rate

A

The percentage of opened pull requests that are successfully merged rather than closed without merging.

Why: Measurement of overall efficiency

How: Determine what's affecting the rate (changing requirements, unclear technical direction, or implementation challenges) and course correct
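As a minimal sketch of how this percentage could be computed, assuming closed PRs are available as records with a hypothetical "merged" flag (not Velocity's actual data model):

```python
def pr_success_rate(closed_prs: list[dict]) -> float:
    """Percentage of closed pull requests that were merged rather than discarded."""
    if not closed_prs:
        return 0.0
    merged = sum(1 for pr in closed_prs if pr.get("merged", False))
    return 100.0 * merged / len(closed_prs)

# Three of the four closed PRs in this made-up sample were merged.
prs = [{"merged": True}, {"merged": True}, {"merged": False}, {"merged": True}]
print(pr_success_rate(prs))  # 75.0
```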

20
Q

Revert rate

A

Percentage of total pull requests opened that are reverts

Why: Reflects changes that the author had to reverse.

How: Reverts identify waste or defects in the process.

21
Q

Change type

A

The portion of your team’s efforts that are spent on developing new features, refactoring, and rework.

Why: New Code Written can be used as a proxy for raw innovation; Updating Older Code represents refactoring; and Rework is a form of churn.

How: Depending on your org’s structure, this information can mean different things.

22
Q

Time to merge

A

The duration between when a pull request is opened and when it is merged.

Why: Time to Merge is an indicator of how much inventory your team is managing at any given point in time.

How: When this metric is high, it’s usually a good time to re-evaluate collaborative processes. It may be that the latter end of the software delivery process has too many bottlenecks.

23
Q

Reviewers count

A

The total number of unique reviewers per pull request.

Why: The Reviewers Count shows you how many developers have to context switch in order to make time for code reviews.

How: If it's low, expectations may not have been clearly communicated. If it's high, there may be over-review taking place.

24
Q

File count

A

The total number of files changed in a pull request.

Why: File Count is another way of understanding the magnitude of a pull request

How: This isn’t a metric you need to optimize.

25
Q

Time to first review

A

The time between when a pull request is opened and when it receives its first review.

Why: This represents the amount of time, on average, that a submitter is left waiting for feedback.

How: If the Time to First Review is elevated, consider aligning on what the code review expectations are and how reviews can be better integrated into the day-to-day.
26
Q

Defect rate

A

The percentage of merged pull requests that are addressing defects.

How: We identify these by looking for "fix", "revert", "bug", or "repair" in the title.
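The keyword heuristic above can be sketched as a simple title check (the function names and sample titles are hypothetical, not Velocity's implementation):

```python
# Keywords from the Defect Rate heuristic: a PR counts as defect-related
# if its title contains any of these words (case-insensitive).
DEFECT_KEYWORDS = ("fix", "revert", "bug", "repair")

def is_defect_pr(title: str) -> bool:
    """Flag a PR as defect-related based on its title."""
    lowered = title.lower()
    return any(keyword in lowered for keyword in DEFECT_KEYWORDS)

def defect_rate(merged_titles: list[str]) -> float:
    """Percentage of merged PRs whose titles look defect-related."""
    if not merged_titles:
        return 0.0
    defects = sum(is_defect_pr(t) for t in merged_titles)
    return 100.0 * defects / len(merged_titles)

titles = ["Fix login crash", "Add dark mode", "Revert #123", "Update docs"]
print(defect_rate(titles))  # 50.0
```

Note that a substring match like this will also catch words such as "prefix" or "debug"; a production heuristic would likely match whole words.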
27
Q

Overview tab

A

Visualization of the quantity of work from your team. Great place to identify trends.
28
Q

Workflow tab

A

A simple way to measure speed and efficiency of your PR process
29
Q

People tab

A

Understand how teams or individuals are working quantitatively
30
Q

Coaching Summary tab

A

Shows how each member of your team is doing compared to the metrics we have in Velocity
31
Q

Metrics tab

A

A BI-like interface to help you answer specific questions about your data
32
Q

Snapshot tab

A

Heads-up display of the work that's currently taking place in your development iterations.
33
Q

Code review

A

Provides teams and members with information around the metrics associated with their process
34
Q

WIP

A

All PRs and Issues that are currently in progress