Validation and Evaluation Flashcards

1
Q

Definition: Evaluation

A
  • Evaluation is the process of computing quantitative information about some key characteristics of a certain (possibly partial) design
  • quantitative: yields a number expressing how good the design is (in contrast to the yes/no decision of validation)
2
Q

Definition: Validation

A

Validation is the process of checking whether or not a certain (possibly partial) design is appropriate for its purpose, meets all constraints and will perform as expected (yes/no decision)

3
Q

Give criteria for evaluation

A
  • average- & worst-case delay
  • power/energy consumption
  • thermal behavior
  • reliability, safety, security
  • cost, size, weight
  • EMC characteristics
  • radiation hardness, environmental friendliness
4
Q

What is the solution space and what is the objective space?

A
  • Solution space: the design decisions (number of processors, size of memories, type and width of buses, …)
  • Objective space: the evaluation criteria that result from the decisions in the solution space, e.g. power/energy consumption, size, weight, …
5
Q

Pareto points: when does a solution dominate?

A

A vector u dominates a vector v if u is “better” than v with respect to at least one objective and not worse than v with respect to all other objectives
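
(Note: a minimal Python sketch of this dominance test, assuming all objectives are to be minimized; the function name is illustrative:)

    def dominates(u, v):
        """True if objective vector u dominates v (all objectives are minimized)."""
        # u must be no worse than v in every objective ...
        no_worse = all(ui <= vi for ui, vi in zip(u, v))
        # ... and strictly better in at least one objective
        strictly_better = any(ui < vi for ui, vi in zip(u, v))
        return no_worse and strictly_better

u and v are indifferent (see the next card) if neither dominates(u, v) nor dominates(v, u) holds.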

6
Q

Pareto points: When is a solution indifferent?

A

A vector u is indifferent with respect to v if neither u dominates v nor v dominates u.

7
Q

Pareto-optimal?

A

If x is a solution and no other solution dominates x, then x is a Pareto point and the solution is Pareto-optimal.

Equivalently: x is Pareto-optimal if and only if it is non-dominated with respect to all other solutions.

8
Q

Pareto-Set, Pareto-Front?

A
  • The Pareto set is the set of all Pareto-optimal solutions.
  • The Pareto set defines the Pareto front (the boundary of the dominated subspace).

9
Q

What is Design Space Exploration (DSE)?

A

Design space exploration (DSE) based on Pareto points is the process of finding and returning a set of Pareto-optimal designs to the user, enabling the user to select the most appropriate design
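
(Note: a small sketch building on the dominates() function from the dominance card — filtering candidate designs down to their Pareto set; the objective vectors are made up for illustration:)

    def pareto_set(candidates):
        """Keep only the non-dominated objective vectors (the Pareto set)."""
        return [u for u in candidates
                if not any(dominates(v, u) for v in candidates if v != u)]

    # illustrative objective vectors: (energy in mJ, latency in ms)
    designs = [(5.0, 20.0), (7.0, 15.0), (6.0, 25.0), (9.0, 30.0)]
    print(pareto_set(designs))  # -> [(5.0, 20.0), (7.0, 15.0)]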

10
Q

Execution time: which objectives must be checked?

A
  • Average execution time and worst-case execution time (WCET)
  • Since the real WCET is usually unknown, an estimate WCETest is used. WCETest must satisfy the timing constraint, it must be safe (never smaller than the real WCET), and it should be tight (only a small difference between WCETest and the real WCET)
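
(Note: the same requirements written as inequalities, in standard notation:)

    \mathrm{WCET} \le \mathrm{WCET}_{EST} \quad \text{(safe)}, \qquad
    \mathrm{WCET}_{EST} - \mathrm{WCET}\ \text{small} \quad \text{(tight)}, \qquad
    \mathrm{WCET}_{EST} \le \text{time constraint}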
11
Q

In general, the exact WCET cannot be determined, especially for modern complex systems. How can a WCET estimate (WCETest) be obtained?

A
  • Hardware: detailed knowledge of the timing behavior is required
  • Software: availability of the machine program is required; complex analysis
  • static analysis of loop bounds and of pipeline/cache behavior
  • analysis of path lengths (the longest path determines WCETest)
12
Q

Real-time calculus / MPA (Modular Performance Analysis): arrival curves vs. service curves?

A
  • arrival curves: the demand the physical environment places on our system (upper and lower bounds capture the variation/‘jitter’)
  • service curves: the capabilities the embedded device provides (e.g. a TDMA [‘round-robin’] bus granting bandwidth to a task, or available computing power)

(V.5 p. 28ff)
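
(Note: a standard example, not from the slides cited above — for a periodic event stream with period p and jitter j, the arrival curves over any interval of length \Delta are commonly bounded by:)

    \alpha^u(\Delta) = \left\lceil \frac{\Delta + j}{p} \right\rceil, \qquad
    \alpha^l(\Delta) = \max\left(0, \left\lfloor \frac{\Delta - j}{p} \right\rfloor\right)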

13
Q

Real-time calculus / MPA: workload characterization?

A
  • how much computation time does it take to handle a certain number of events (upper/lower bounds)
  • –> derived from WCET and BCET
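
(Note: a common simple formulation, assuming each event takes at most WCET and at least BCET processing time — the workload bounds for e events are:)

    \gamma^u(e) = e \cdot \mathrm{WCET}, \qquad \gamma^l(e) = e \cdot \mathrm{BCET}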
14
Q

Realtime calculus (RTC) in Modular Performance Analysis: Remarks?

A
  • the three kinds of curves with lower/upper bounds (arrival curves, service curves, workload characterization) together yield an overall understanding of the timing of the system
  • high level of abstraction; mathematical (and formally proven)
15
Q

Pros of Modular Performance Analysis for timing analysis?

A
  • models are easy to construct
  • evaluation is fast; its runtime grows linearly with model complexity
  • little information is needed to construct early models
  • even though the underlying mathematics is very complex, the method is easy to use (e.g. via the Matlab toolbox)
16
Q

Average vs. Worst-Case Energy Consumption

A
  • Average energy consumption is based on the consumption for selected sets of input data
  • Worst-case energy consumption is a safe upper bound on the energy consumption
17
Q

Energy-consumption predictability?

A
  • hardly predictable from the source code (the impact of compiler & linker is not known)
  • small changes can lead to large variations in energy consumption (example: shifting code in memory by one byte)
  • energy consumption must therefore be predicted from the executable code (as for the WCET)
  • it might even depend on which physical instance of the hardware is used
18
Q

Instruction-dependent costs in the CPU?

A
  • depend on the number of one-bits in registers/variables (ones are energy-expensive)
  • depend on the kind and order of instructions (ADD, MUL, SHL, etc.) –> the switching activity between consecutive instructions/operands is related to their Hamming distance
19
Q

What is the hamming distance?

A
  • the Hamming distance between two binary words (e.g. consecutive instruction encodings or operand values) is the number of bit positions in which they differ
  • a larger Hamming distance means more bits are switched, which contributes to energy consumption
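
(Note: a minimal Python sketch with made-up 16-bit instruction words, showing how the Hamming distance between consecutive words measures switching activity:)

    def hamming_distance(a: int, b: int) -> int:
        """Number of bit positions in which a and b differ."""
        return bin(a ^ b).count("1")

    # illustrative 16-bit instruction words executed in sequence
    trace = [0x1A2F, 0x1A30, 0xF000]
    switches = [hamming_distance(x, y) for x, y in zip(trace, trace[1:])]
    print(switches)  # -> [5, 7] bit flips between consecutive words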
20
Q

Terms: failure, error, fault?

A
  • (Service) failure: an event that occurs when the delivered service of a system deviates from the correct service
  • Error: the part of the total state of the system that may lead to a subsequent service failure
  • Fault: the adjudged or hypothesized cause of an error (can be internal or external to the system)
21
Q

Define the Reliability R(t) and Failure F(t)

A
  • The reliability R(t) is the probability that the time until the first failure is larger than t
  • The failure probability F(t) is the probability that the first failure has already occurred by time t

F(t) + R(t) = 1
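
(Note: with T denoting the time of the first failure, the standard formulation is:)

    R(t) = P(T > t), \qquad F(t) = P(T \le t) = 1 - R(t)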

22
Q

Define Failure rate

A

The failure rate λ(t) is the probability of the system failing between time t and t + Δt, given that it was still operational at time t
(a probability per time interval)
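
(Note: in the usual density notation, with f(t) = dF(t)/dt, the failure rate is:)

    \lambda(t) = \frac{f(t)}{R(t)} = \frac{\mathrm{d}F(t)/\mathrm{d}t}{1 - F(t)}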

23
Q

What is FIT?

A
  • FIT (‘failure in time’) is a unit of measurement for the failure rate
  • 1 FIT corresponds to a failure rate of 10^-9 per hour
    (i.e. on average one failure per 10^9 hours of operation)
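
(Note: a worked example with a made-up rating, assuming a constant failure rate so that MTTF = 1/\lambda:)

    \lambda = 500\ \text{FIT} = 500 \cdot 10^{-9}\ \text{h}^{-1}
    \;\Rightarrow\; \mathrm{MTTF} = 1/\lambda = 2 \cdot 10^{6}\ \text{h} \approx 228\ \text{years}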
24
Q

What can make a system more reliable than its components?

A

Redundancy
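
(Note: a standard illustration of why this works, assuming independent component failures — n redundant components of reliability R(t) in parallel, and triple modular redundancy (TMR) with a perfect voter:)

    R_{\text{parallel}}(t) = 1 - (1 - R(t))^n, \qquad
    R_{\text{TMR}}(t) = 3R(t)^2 - 2R(t)^3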

25
Q

Kopetz’s 12 Design Principles (1-3)?

A
  1. Safety considerations are part of the specification, driving the entire design process
  2. Precise specifications of design hypotheses (including expected failures and their probability) must be documented
  3. Fault containment regions (FCRs) must be determined right at the beginning. Faults in one FCR should not affect other FCRs
26
Q

Kopetz’s 12 Design Principles (4-6)?

A
  4. A consistent notion of time and state must be established; otherwise it will be impossible to differentiate between original and follow-up errors
  5. Well-defined interfaces have to hide the internals of components (abstraction)
  6. It must be ensured that components fail independently (otherwise redundancy is ineffective)
27
Q

Kopetz’s 12 Design Principles (7-9)?

A
  7. Principle of self-confidence: a component should consider itself to be correct unless two or more other components claim the contrary to be true
  8. Fault tolerance mechanisms must be designed such that they do not create any additional difficulty in explaining the behavior of the system; they should also be decoupled from the regular function
  9. The system must be designed for diagnosis; for example, it has to be possible to identify existing (but masked) errors
28
Q

Kopetz’s 12 Design Principles (10-12)

A
  10. The man-machine interface must be intuitive and forgiving; safety should be maintained despite mistakes made by humans
  11. Every anomaly should be recorded. These anomalies may be unobservable at the regular interface level; recording should include internal effects, otherwise they may be masked by the fault tolerance mechanisms
  12. Provide a never-give-up strategy. Embedded systems may have to provide uninterrupted service; going offline is unacceptable
29
Q

Limitations of Simulations?

A
  • typically slower than the actual design –> timing constraints may be violated when the simulator interacts with the real environment
  • simulations in the real environment may be dangerous
  • huge amounts of data –> it may be impossible to simulate enough data in the available time
  • most actual systems are too complex to simulate all possible cases (inputs)
  • simulations can help find errors in designs, but they cannot guarantee the absence of errors!
30
Q

What is Rapid Prototyping?

A
  • a quickly generated embedded system which behaves similarly to the final product
  • may be larger, consume more power and have other properties that are acceptable during the validation phase
  • can be built, for example, using FPGAs
31
Q

What is Emulation?

A

A hybrid approach: parts of the system are simulated based on models that are approximations of the real system, while other parts are implemented in real hardware

32
Q

What is formal verification? (ideal/real)

A
  • formally proving a system correct, using the language of mathematics
  • formal model required -> obtaining this cannot be automated
  • when model available -> try to prove properties

Ideal: formally verified tools transforming specifications into implementations (‘correctness by construction’)
Real: non-verified tools and manual design steps -> validation of each and every design required

33
Q

Model checking: verification and analysis of the state space of the system. Three steps?

A
  1. Generation of a formal model of the system to be verified
  2. Definition of the properties expected
  3. Model checking (actual verification)
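
(Note: an illustrative example for step 2, not from the card — a mutual-exclusion property for two hypothetical processes, written as a CTL formula stating that they are never in their critical sections at the same time:)

    AG\,\neg(\mathit{crit}_1 \wedge \mathit{crit}_2)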