Validation and Evaluation Flashcards
Definition: Evaluation
- Evaluation is the process of computing quantitative information about some key characteristics of a certain (possibly partial) design
- quantitative: it yields numbers indicating how good the design is
Definition: Validation
Validation is the process of checking whether or not a certain (possibly partial) design is appropriate for its purpose, meets all constraints, and will perform as expected (yes/no decision)
Give criteria for evaluation
- average- & worst-case delay
- power/energy consumption
- thermal behavior
- reliability, safety, security
- cost, size, weight
- EMC characteristics, radiation hardness, environmental friendliness
What is the solution space and what is the objective space?
- Solution space: Design decisions (number of processors, size of memories, type and width of busses …)
- Objective space: (results from decisions in solution space) for example: power/energy consumption, size, weight…
Pareto points: when does a solution dominate?
A vector u dominates a vector v if u is “better” than v with respect to one objective and not worse than v with respect to all other objectives
Pareto points: When is a solution indifferent?
A vector u is indifferent with respect to v if neither u dominates v nor v dominates u.
Pareto-optimal?
If x is a solution and there is no other solution that dominates x, then x is a Pareto point and the solution is Pareto-optimal.
Equivalently, x is Pareto-optimal if it is non-dominated with respect to all other solutions.
Pareto-Set, Pareto-Front?
- The Pareto set is the set of all Pareto-optimal solutions.
- The Pareto set defines the Pareto front (the boundary of the dominated subspace)
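The dominance test and Pareto-set construction above can be sketched in a few lines (a minimal Python sketch assuming all objectives are minimized; function and variable names are illustrative):

```python
def dominates(u, v):
    """u dominates v (minimization): u is no worse in every
    objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_set(solutions):
    """Return all non-dominated objective vectors (the Pareto set)."""
    return [u for u in solutions
            if not any(dominates(v, u) for v in solutions if v != u)]

# Objective vectors, e.g. (energy, delay):
designs = [(2, 9), (3, 5), (4, 4), (5, 6), (7, 3)]
print(pareto_set(designs))  # (5, 6) is dominated by (3, 5) and (4, 4)
```

The Pareto set here is exactly the set of points on the Pareto front; every dropped point is dominated by at least one kept point.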
What is Design Space Evaluation?
(DSE) based on Pareto-points is the process of finding and returning a set of Pareto-optimal designs to the user, enabling the user to select the most appropriate design
Execution time, what objectives must be checked?
- Average execution time and worst-case execution time (WCET)
- The estimate WCETest must satisfy the timing constraint. It must also be safe, meaning the real WCET is smaller than (or equal to) WCETest, and tight (small difference between WCETest and the real WCET)
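The safety and tightness conditions can be stated as a tiny check (illustrative Python; the variable names are assumptions, not fixed terminology):

```python
def check_wcet_estimate(wcet_est, wcet_real, deadline):
    """A WCET estimate is usable if it is safe (upper-bounds the
    real WCET) and meets the timing constraint; the overestimation
    measures how tight it is (smaller is tighter)."""
    safe = wcet_est >= wcet_real
    meets_constraint = wcet_est <= deadline
    tightness = wcet_est - wcet_real
    return safe, meets_constraint, tightness

print(check_wcet_estimate(wcet_est=12.0, wcet_real=10.5, deadline=15.0))
# → (True, True, 1.5)
```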
In general, the exact WCET cannot be determined, especially for modern complex systems; WCETest is an estimate. What approaches exist to obtain WCETest?
- hardware-level analysis: requires detailed knowledge of the timing behavior
- software-level analysis: requires availability of the machine program; complex analysis
- static analysis of loop bounds and of pipeline/cache behavior
- analysis of path lengths (the longest path yields the WCET)
Realtime calculus / MPA (modular performance analysis): arrival curves vs. service curves?
- arrival curves: demand the physical environment places on our system (upper and lower bounds capture 'jitter')
- service curves: capabilities the embedded device provides (e.g., a TDMA ['round-robin'] bus granting bandwidth, or computing power granted to a task)
(V.5 p. 28ff)
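For a concrete feel, the upper arrival curve of a periodic event stream with jitter can be sketched as follows (Python; the formula alpha_u(Δ) = ceil((Δ + J)/P) is the common periodic-with-jitter model and is used here as an assumption):

```python
import math

def alpha_upper(delta, period, jitter=0.0):
    """Upper arrival curve: maximum number of events that can arrive
    in any time window of length delta, for a periodic event stream
    with period P and jitter J."""
    if delta <= 0:
        return 0
    return math.ceil((delta + jitter) / period)

# At most 3 events fit in any 25 ms window of a 10 ms periodic stream:
print(alpha_upper(25, period=10))            # → 3
print(alpha_upper(25, period=10, jitter=6))  # → 4 (jitter squeezes in one more)
```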
Realtime calculus / MPA: work load characterization?
- bounds on how much time it takes to handle a certain number of events (upper/lower bound)
- –> derived from WCET and BCET
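Under the simple assumption that every event costs between BCET and WCET, the workload characterization is linear in the number of events (a minimal sketch; function names are illustrative):

```python
def gamma_upper(events, wcet):
    """Upper workload curve: maximum time to process `events` events."""
    return events * wcet

def gamma_lower(events, bcet):
    """Lower workload curve: minimum time to process `events` events."""
    return events * bcet

# 4 events, WCET = 3.0 ms, BCET = 1.0 ms per event:
print(gamma_upper(4, wcet=3.0), gamma_lower(4, bcet=1.0))  # → 12.0 4.0
```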
Realtime calculus (RTC) in Modular Performance Analysis: Remarks?
- the three curve pairs with lower/upper bounds (arrival curves, service curves, workload characterization) together contribute to an overall timing model of the system
- high level abstraction, mathematical (and formally proven)
Pros of Modular Performance Analysis for timing analysis?
- easy to construct models
- evaluation is fast; runtime scales linearly with model complexity
- needs little information to construct early models
- even though the underlying mathematics is very complex, the method is easy to use (e.g., via a Matlab toolbox)
Average vs. Worst-Case Energy Consumption
- Average energy consumption is based on the consumption for selected sets of input data
- worst-case energy consumption is a safe upper-bound on the energy consumption
Energy-consumption predictability?
- hardly predictable from source code (the impact of compiler & linker is not known)
- small changes can lead to large variations in energy consumption (example: shifting code in memory by one byte)
- energy consumption must be predicted from executable code! (like WCET)
- might even depend on which instance of the hardware is used
Instruction-dependent costs in the CPU?
- depend on the one-bits in registers/operands (ones are energy-expensive)
- depend on the kind and order of instructions (ADD, MUL, SHL, etc.); the switching activity between consecutive instructions is captured by the 'Hamming distance'
What is the hamming distance?
- the number of bit positions in which two binary words differ (e.g., the encodings of consecutive instructions such as ADD, SHL, MUL, or consecutive values on a bus)
- a large Hamming distance means many bits toggle between consecutive values, which contributes to energy consumption
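As a sketch, the Hamming distance between two instruction encodings (or consecutive bus values) can be computed via XOR and a popcount:

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bit positions; more toggling bits on a
    bus roughly means more switching energy."""
    return bin(a ^ b).count("1")

# Consecutive 8-bit bus values: 0b10110010 followed by 0b10011010
print(hamming_distance(0b10110010, 0b10011010))  # → 2
```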
Terms: failure, error, fault?
- (Service) failure: is an event that occurs when the delivered service of a system deviates from the correct service.
- Error: is the part of the total state of the system that may lead to its subsequent service failure
- Fault: the adjudged or hypothesized cause of an error (can be internal or external to the system)
Define the Reliability R(t) and Failure F(t)
- The reliability R(t) is the probability that the time until the first failure is larger than some time t
- The failure probability F(t) is the probability that the first failure occurs at or before time t
- F(t) + R(t) = 1
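For the common special case of a constant failure rate λ, the time to first failure is exponentially distributed, so R(t) = e^(−λt) and F(t) = 1 − R(t) (illustrative Python; the numeric values of λ and t are assumptions):

```python
import math

def reliability(t, lam):
    """R(t) = exp(-lambda * t) for a constant failure rate lambda."""
    return math.exp(-lam * t)

def failure_prob(t, lam):
    """F(t) = 1 - R(t)."""
    return 1.0 - reliability(t, lam)

lam = 1e-5   # constant failure rate in failures per hour (assumed value)
t = 1000.0   # operating hours
r, f = reliability(t, lam), failure_prob(t, lam)
print(round(r, 4), round(f, 4))  # R(t) + F(t) = 1 by construction
```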
Define Failure rate
The failure rate λ(t) is the probability that the system fails between time t and t + dt, given that it has worked until time t (a conditional probability per time interval); λ(t) = f(t)/R(t), where f(t) is the failure density
What is FIT?
- 'Failures in time': a unit of measurement for the failure rate
- 1 FIT = a rate of 10^-9 failures per hour
(i.e., on average one failure per 10^9 hours of operation)
What can make a system more reliable than its components?
Redundancy