Architectural evaluation Flashcards

(23 cards)

1
Q

Why do architects perform architectural evaluation?

A

To gather evidence, before or after coding, that the chosen architecture will satisfy the project’s priority quality attributes and business drivers.

2
Q

Name the three major families of evaluation techniques in the lecture.

A

Analytical/questioning, scenario-based, and measurement-based.

3
Q

Give one example technique for each family.

A

Analytical: checklist review; scenario-based: ATAM; measurement-based: load-test prototype.

4
Q

At what project stages can evaluation be applied?

A

Before design (feasibility), after a candidate architecture has been sketched, or against a running implementation.

5
Q

State the goal of the Architecture Trade-off Analysis Method (ATAM).

A

To reveal how well an architecture satisfies competing quality goals and to surface risks, sensitivity points, and trade-offs.

6
Q

List the six core steps of an ATAM workshop.

A

(1) Present the business drivers; (2) present the architecture; (3) identify architectural approaches; (4) create a quality-attribute utility tree; (5) analyse the approaches; (6) brainstorm and prioritise scenarios, then report the results.

7
Q

What is a “sensitivity point” in ATAM terminology?

A

A single architectural parameter whose value strongly influences a quality attribute.

8
Q

Define a “trade-off point.”

A

A design decision where improving one quality attribute degrades another, forcing an explicit compromise.

9
Q

What tangible artefact captures stakeholder quality priorities in ATAM?

A

The quality-attribute utility tree, which ranks scenarios by importance and difficulty.
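
A minimal sketch of how utility-tree leaves could be recorded and ranked; the scenarios, attribute names, and H/M/L ratings below are illustrative assumptions, not taken from the lecture:

```python
# Hypothetical quality-attribute utility tree: each leaf is a concrete
# scenario rated High/Medium/Low for business importance and for the
# architectural difficulty of achieving it.
from dataclasses import dataclass

RANK = {"H": 3, "M": 2, "L": 1}

@dataclass
class Scenario:
    attribute: str   # e.g. "Performance", "Availability"
    refinement: str  # sub-characteristic the scenario sits under
    text: str        # the concrete, testable scenario
    importance: str  # business importance: "H", "M" or "L"
    difficulty: str  # architectural difficulty: "H", "M" or "L"

utility_tree = [
    Scenario("Performance", "Latency",
             "Checkout responds within 200 ms at 1,000 concurrent users", "H", "M"),
    Scenario("Availability", "Failover",
             "A primary database failure is masked within 30 s", "H", "H"),
    Scenario("Modifiability", "New provider",
             "A new payment provider is integrated in under two weeks", "M", "L"),
]

# (H, H) leaves come out first: high business importance and hard to
# achieve, so they are the ones to analyse first.
for s in sorted(utility_tree,
                key=lambda s: (RANK[s.importance], RANK[s.difficulty]),
                reverse=True):
    print(f"({s.importance},{s.difficulty}) {s.attribute}/{s.refinement}: {s.text}")
```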

10
Q

Name two typical outputs of an ATAM session besides the utility tree.

A

Any two of: the list of risks, the list of non-risks (strengths), sensitivity points, and trade-off points.

11
Q

What differentiates QAW from ATAM?

A

QAW (Quality-Attribute Workshop) elicits and prioritises scenarios early; ATAM analyses a specific architecture against those scenarios.

12
Q

Explain the purpose of aSQA (architectural Software Quality Assurance).

A

To provide a lightweight, continuous dashboard of quality health by scoring components against target levels for each attribute.

13
Q

What four data items are recorded for each component-attribute pair in aSQA?

A

Target level, measured level, health (gap), and importance (criticality).
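
A simplified sketch of one way those four items might be kept per component-attribute pair; the 1-5 scales, the health formula, and the component names are illustrative assumptions, not the technique's prescribed values:

```python
# Hypothetical aSQA-style record: one entry per (component, attribute).
from dataclasses import dataclass

@dataclass
class AsqaEntry:
    component: str
    attribute: str
    target: int      # desired level on an assumed 1-5 scale
    measured: int    # level observed in the latest evaluation
    importance: int  # criticality of this component-attribute pair, 1-5

    @property
    def health(self) -> int:
        # Assumed mapping: 5 = at or above target, lower = widening gap.
        return max(1, 5 - max(0, self.target - self.measured))

entries = [
    AsqaEntry("OrderService", "Performance",   target=4, measured=2, importance=5),
    AsqaEntry("OrderService", "Security",      target=5, measured=5, importance=4),
    AsqaEntry("ReportModule", "Modifiability", target=3, measured=2, importance=2),
]

# Review the dashboard worst-health-first, breaking ties by importance.
for e in sorted(entries, key=lambda e: (e.health, -e.importance)):
    print(f"{e.component:13s} {e.attribute:14s} "
          f"target={e.target} measured={e.measured} "
          f"health={e.health} importance={e.importance}")
```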

14
Q

How often should aSQA metrics be reviewed in an agile setting?

A

At least once per sprint or program increment (PI), so that drifting attributes trigger timely action.

15
Q

Which evaluation family gives the hardest evidence for latency requirements?

A

Measurement-based techniques (e.g., load-test prototypes or simulations).
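
A minimal load-test sketch in the measurement-based style; the endpoint URL, request count, and concurrency level are placeholder assumptions for a local prototype:

```python
# Fire a batch of concurrent requests at a prototype and report the
# latency percentiles a latency requirement would be judged against.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/checkout"  # hypothetical prototype endpoint
REQUESTS = 200
CONCURRENCY = 20

def timed_call(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=5) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000  # latency in milliseconds

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_call, range(REQUESTS)))

print(f"median = {statistics.median(latencies):.1f} ms")
print(f"p95    = {latencies[int(0.95 * len(latencies)) - 1]:.1f} ms")
```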

16
Q

Why are questionnaires alone insufficient for performance assessment?

A

They capture expert opinion but no empirical numbers, so they cannot predict throughput under load.

17
Q

Give two metrics commonly mined in evolution-based evaluation.

A

Churn (commit frequency/size) and logical coupling (files that change together).
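
A rough sketch of mining both metrics from version history, assuming it is run inside a local git working copy; it simply lists the most-churned files and the most frequently co-changed file pairs:

```python
# Mine commit churn and logical coupling from `git log`.
import subprocess
from collections import Counter
from itertools import combinations

# One "@@commit" marker per commit, followed by the files it touched.
log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:@@commit"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter()     # file -> number of commits touching it
coupling = Counter()  # (file_a, file_b) -> number of commits touching both
for block in log.split("@@commit"):
    files = sorted({line.strip() for line in block.splitlines() if line.strip()})
    churn.update(files)
    coupling.update(combinations(files, 2))

print("Most churned files:        ", churn.most_common(5))
print("Strongest logical coupling:", coupling.most_common(5))
```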

18
Q

What is the chief benefit of combining evaluation techniques?

A

Different methods reveal complementary insights, reducing blind spots (e.g., ATAM finds trade-offs, prototypes supply numbers).

19
Q

Describe Lehman’s “law of continuing change” in one sentence.

A

An evolving software system must continuously adapt or become progressively less useful.

20
Q

How does evaluation help combat architectural erosion?

A

By detecting divergences and risks early, enabling corrective refactoring before erosion becomes too costly.

21
Q

Which attribute is particularly suited to scenario-based evaluation but hard to benchmark?

A

Security, which is best probed through misuse/abuse scenarios rather than load metrics.

22
Q

What key data feeds a measurement-based simulation?

A

Workload models, performance parameters of components, and deployment topology.
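
A toy simulation sketch that combines exactly those three inputs; the arrival rate, service times, and three-component call chain are invented for illustration, not measured values:

```python
# Simple tandem-queue simulation: Poisson arrivals (workload model),
# exponential service times per component (performance parameters),
# and the call chain a request traverses (deployment topology).
import random

random.seed(1)

ARRIVAL_RATE = 50.0                                    # requests per second
SERVICE_MS = {"gateway": 2.0, "app": 15.0, "db": 8.0}  # mean service time (ms)
TOPOLOGY = ["gateway", "app", "db"]                    # chain per request

def simulate(n_requests=10_000):
    free_at = {c: 0.0 for c in TOPOLOGY}  # when each single-server component frees up
    clock, latencies = 0.0, []
    for _ in range(n_requests):
        clock += random.expovariate(ARRIVAL_RATE) * 1000.0  # next arrival (ms)
        t = clock
        for comp in TOPOLOGY:
            start = max(t, free_at[comp])                    # wait if component busy
            t = start + random.expovariate(1.0 / SERVICE_MS[comp])
            free_at[comp] = t
        latencies.append(t - clock)
    latencies.sort()
    return latencies[len(latencies) // 2], latencies[int(0.95 * len(latencies))]

median, p95 = simulate()
print(f"median ~ {median:.1f} ms, p95 ~ {p95:.1f} ms")
```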

23
Q

Provide a concise exam definition of architectural evaluation.

A

The systematic assessment of an architecture, using analytical, scenario-based, or measurement techniques, to judge whether it will meet stakeholder quality goals and to expose risks and trade-offs.