Architectural evaluation Flashcards
(23 cards)
Why do architects perform architectural evaluation?
To gather evidence—before or after coding—that the chosen architecture will satisfy the project’s priority quality attributes and business drivers.
Name the three major families of evaluation techniques in the lecture.
Analytical/questioning, Scenario-based, and Measurement-based.
Give one example technique for each family.
Analytical: checklist review; Scenario-based: ATAM; Measurement-based: load-test prototype.
At what project stages can evaluation be applied?
Before design (feasibility), after a candidate architecture is sketched, or against a running implementation.
State the goal of the Architecture Trade-off Analysis Method (ATAM).
To reveal how well an architecture satisfies competing quality goals and to surface risks, sensitivity points, and trade-offs.
List the six core steps of an ATAM workshop.
Present business drivers; present the architecture; identify architectural approaches; create a quality-attribute utility tree; analyse the approaches; and brainstorm and prioritise scenarios, then report the results.
What is a “sensitivity point” in ATAM terminology?
A single architectural parameter whose value strongly influences a quality attribute.
Define a “trade-off point.”
A design decision where improving one quality attribute degrades another, forcing an explicit compromise.
What tangible artefact captures stakeholder quality priorities in ATAM?
The quality-attribute utility tree, which ranks scenarios by importance and difficulty.
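A minimal sketch of how a utility tree's leaves might be represented and prioritised, assuming the usual high/medium/low (importance, difficulty) ratings; the attribute names and scenarios below are illustrative assumptions, not content from the lecture.

```python
from dataclasses import dataclass

# Hypothetical utility-tree leaf: ATAM rates each scenario by
# (importance, difficulty), commonly on an H/M/L scale.
@dataclass
class Scenario:
    text: str
    importance: str  # "H", "M", or "L"
    difficulty: str  # "H", "M", or "L"

# Utility tree: quality attribute -> refinement -> leaf scenarios.
utility_tree = {
    "Performance": {
        "Latency": [Scenario("Checkout responds in < 200 ms at 1,000 users", "H", "M")],
    },
    "Modifiability": {
        "New provider": [Scenario("Add a payment provider in < 2 person-weeks", "M", "H")],
    },
}

# (H, H) leaves deserve analysis first: both important and hard.
rank = {"H": 2, "M": 1, "L": 0}
leaves = [s for refs in utility_tree.values() for ss in refs.values() for s in ss]
for s in sorted(leaves, key=lambda x: (rank[x.importance], rank[x.difficulty]), reverse=True):
    print(f"({s.importance},{s.difficulty}) {s.text}")
```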
Name two typical outputs of an ATAM session besides the utility tree.
Lists of risks, non-risks (strengths), sensitivity points, and trade-off points.
What differentiates QAW from ATAM?
QAW (Quality-Attribute Workshop) elicits and prioritises scenarios early; ATAM analyses a specific architecture against those scenarios.
Explain the purpose of aSQA (architectural Software Quality Assurance).
To provide a lightweight, continuous dashboard of quality health by scoring components against target levels for each attribute.
What four data items are recorded for each component-attribute pair in aSQA?
Target level, measured level, health (gap), and importance (criticality).
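A hypothetical sketch of one aSQA record per component-attribute pair; the 1-5 ordinal scale and the field names are assumptions for illustration, with health computed as the measured-minus-target gap.

```python
from dataclasses import dataclass

# Hypothetical aSQA entry: one row per component-attribute pair.
@dataclass
class AsqaEntry:
    component: str
    attribute: str
    target: int      # desired level (1-5, assumed scale)
    measured: int    # current level (1-5, assumed scale)
    importance: int  # criticality weight (1-5, assumed scale)

    @property
    def health(self) -> int:
        # Gap between measured and target; negative means below target.
        return self.measured - self.target

entries = [
    AsqaEntry("OrderService", "performance", target=4, measured=3, importance=5),
    AsqaEntry("OrderService", "security", target=5, measured=5, importance=4),
]

# Surface the worst, most critical gaps first.
for e in sorted(entries, key=lambda e: (e.health, -e.importance)):
    flag = "ACTION" if e.health < 0 else "ok"
    print(f"{e.component}/{e.attribute}: health={e.health:+d}, importance={e.importance} -> {flag}")
```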
How often should aSQA metrics be reviewed in an agile setting?
At least once per sprint or program increment (PI), so that drifting attributes trigger timely action.
Which evaluation family gives the hardest evidence for latency requirements?
Measurement-based techniques (e.g., load-test prototypes or simulations).
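A minimal latency-probe sketch of the measurement-based idea; `handle_request` is a hypothetical stand-in for the system under test, and a real campaign would drive a deployed prototype with a dedicated load-testing tool.

```python
import statistics
import time

def handle_request() -> None:
    time.sleep(0.005)  # placeholder for the real request path

# Collect wall-clock latency samples in milliseconds.
samples = []
for _ in range(200):
    start = time.perf_counter()
    handle_request()
    samples.append((time.perf_counter() - start) * 1000)

samples.sort()
print(f"median = {statistics.median(samples):.1f} ms, "
      f"p95 = {samples[int(0.95 * len(samples))]:.1f} ms")
```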
Why are questionnaires alone insufficient for performance assessment?
They capture expert opinion but no empirical numbers, so they cannot predict throughput under load.
Give two metrics commonly mined in evolution-based evaluation.
Churn (commit frequency/size) and logical coupling (files that change together).
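A rough sketch of mining both metrics from version-control history, assuming it runs inside a git repository clone; the log parsing is deliberately naive.

```python
import subprocess
from collections import Counter
from itertools import combinations

# One "@@<hash>" header per commit, followed by the files it touched.
log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:@@%H"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter()     # file -> number of commits touching it
coupling = Counter()  # (file_a, file_b) -> co-change count
for commit in log.split("@@")[1:]:
    files = [line for line in commit.splitlines()[1:] if line]
    churn.update(files)
    coupling.update(combinations(sorted(set(files)), 2))

print("Highest churn:", churn.most_common(5))
print("Strongest logical coupling:", coupling.most_common(5))
```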
What is the chief benefit of combining evaluation techniques?
Different methods reveal complementary insights, reducing blind spots (e.g., ATAM finds trade-offs, prototypes supply numbers).
Describe Lehman’s “law of continuing change” in one sentence.
An evolving software system must continuously adapt or become progressively less useful.
How does evaluation help combat architectural erosion?
By detecting divergences and risks early, enabling corrective refactoring before erosion becomes too costly.
Which attribute is particularly suited to scenario-based evaluation but hard to benchmark?
Security—best probed through misuse/abuse scenarios rather than load metrics.
What key data feeds a measurement-based simulation?
Workload models, performance parameters of components, and deployment topology.
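A back-of-envelope sketch of how those three inputs combine, assuming each component behaves as an M/M/1 queue (a simplification for illustration, not the lecture's method); all rates and service times below are invented.

```python
workload_rps = 40.0  # workload model: mean arrival rate (requests/s)

# Performance parameters: mean service time (s) per component,
# in deployment order (web tier -> app tier -> database).
topology = {"web": 0.005, "app": 0.010, "db": 0.015}

total = 0.0
for name, service_time in topology.items():
    utilisation = workload_rps * service_time
    assert utilisation < 1.0, f"{name} is saturated"
    # M/M/1 mean residence time: S / (1 - rho).
    residence = service_time / (1.0 - utilisation)
    total += residence
    print(f"{name}: rho = {utilisation:.2f}, residence = {residence * 1000:.1f} ms")

print(f"end-to-end mean latency ~ {total * 1000:.1f} ms")
```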
Provide a concise exam definition of architectural evaluation.
The systematic assessment of an architecture, using analytical, scenario-based, or measurement techniques, to judge whether it will meet stakeholder quality goals and to expose risks and trade-offs.