Lecture 1 Dependability and Security Flashcards
- Economic and human activities are increasingly dependent on software-intensive systems. These can be thought of as critical systems.
- For critical systems, the costs of failure are likely to significantly exceed the costs of system development and operation.
- Consequently, the dependability and security of the system are the most important development considerations.
- Critical systems are often subject to external regulation.
What is meant by Dependability?
Dependability is the ability of a system to deliver a service that can be trusted. It generally reflects the extent of the user’s confidence that the system will operate as expected and will not ‘fail’ in normal circumstances.
Summary: The dependability of a system reflects the user’s degree of trust in that system.
Why is system dependability important?
- System failures may have widespread effects with large numbers of people affected by the failure.
- Systems that are not dependable and are unreliable, unsafe or insecure may be rejected by their users.
- The costs of system failure may be very high if the failure leads to economic losses or physical damage.
- Undependable systems may cause information loss with a high consequent recovery cost.
How is Dependability subjective?
The dependability of a system is not absolute but depends on the judgement of a system’s stakeholders. What seems to be a system failure to one stakeholder is normal behaviour to another.
Note: The dependability of a system reflects the user’s degree of trust in that system.
How may a system Specification be incorrect or incomplete in relation to dependability?
(DHFT)
Dependability can only be defined formally with respect to a system specification i.e. a failure is a deviation from a specification.
However, many specifications are incomplete or incorrect - hence, a system that conforms to its specification may ‘fail’ from the perspective of system users.
Furthermore, users don’t read specifications, so they don’t know how the system is supposed to behave.
Therefore perceived dependability is more important in practice.
Name the Principal Dependability Properties
(Probability that the system)
(A judgement of how likely it is that)
Availability
- The probability that the system will be up and running and able to deliver useful services to users.
Reliability
- The probability that the system will correctly deliver services as expected by users
Safety
- A judgment of how likely it is that the system will cause damage to people or its environment
Security
- A judgment of how likely it is that the system can resist accidental or deliberate intrusions.
Name examples of a system not being dependable
- Safe system operation depends on the system being available and operating reliably.
- A system may be unreliable because its data has been corrupted by an external attack (insecurity).
- Denial of service attacks on a system (insecurity) are intended to make it unavailable.
- If a system is infected with a virus (insecurity), you cannot be confident in its reliability or safety.
What is Availability and Reliability in terms of Dependability?
Reliability: the probability of failure-free system operation over a specified time in a given environment for a given purpose.
Alternative definition of Reliability: the probability that the system will correctly deliver the requested services at a given point in time.
Availability: the probability that a system, at a point in time, will be operational and able to deliver the requested services.
Both reliability and availability attributes can be expressed quantitatively, e.g., availability of 0.999 means that the system is up and running for 99.9% of the time.
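This quantitative expression can be sketched as a short computation (a minimal illustration, not from the lecture; the function name and figures are my own):

```python
def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Availability = uptime / (uptime + downtime)."""
    return uptime_hours / (uptime_hours + downtime_hours)

# A system that is up for 999 of every 1000 hours has availability 0.999,
# i.e. it is up and running 99.9% of the time.
print(availability(999.0, 1.0))  # 0.999
```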
Explain whether Availability and Reliability are related or not.
Availability and reliability are closely related
- Obviously, if a system is unavailable it is not delivering the specified system services.
- However, it is possible to have systems with low reliability that must be available: so long as system failures can be repaired quickly and do not damage data, some system failures may not be a problem.
- Availability is therefore best considered as a separate attribute reflecting whether or not the system can deliver its services.
- Availability takes repair time into account, if the system has to be taken out of service to repair faults.
How can perceived availability be measured?
Availability is usually expressed as a percentage of the time that a system is available to deliver services e.g. 99.95%.
However, this ignores:
- The number of users affected by the service outage. Loss of service in the middle of the night is less important for many systems than loss of service during peak usage periods.
- The length of the outage. The longer the outage, the more the disruption. Several short outages are less likely to be disruptive than one long outage. Long repair times are a particular problem.
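The limitation of a bare percentage can be seen in a small sketch (hypothetical figures, my own illustration): two systems with identical measured availability but very different outage patterns.

```python
WEEK_MINUTES = 7 * 24 * 60  # minutes in one week

# System A: one 30-minute outage during peak usage.
downtime_a = 30
# System B: thirty 1-minute outages in the middle of the night.
downtime_b = 30 * 1

avail_a = 1 - downtime_a / WEEK_MINUTES
avail_b = 1 - downtime_b / WEEK_MINUTES

# Both report the same availability, yet the disruption to users
# is quite different -- the percentage alone hides this.
assert avail_a == avail_b
print(f"{avail_a:.4%}")  # 99.7024%
```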
Checkpoint (summary of above)
- Economic and human activities are increasingly dependent on software intensive systems. These can be thought of as critical systems.
- For critical systems, the costs of failure are likely to significantly exceed the costs of system development and operation.
- Consequently, the dependability and security of the system are the most important development considerations.
- Critical systems are often subject to external regulation.
Why is it important to continuously check the dependability of a system?
(3 points, Evolution, New Functionality)
- It is important for us to continuously check the dependability of a system.
- The evolution of a system throughout its lifetime will require us to check that correct service is still delivered (e.g., changes in underlying protocols, etc)
- New functionality added to a system may introduce unforeseen problems and hence requires appropriate checks and evidence of dependability
Name and Describe each Reliability Term (4)
Human Error or Mistake
- Human behaviour that results in the introduction of faults into a system. For example, in the wilderness weather system, a programmer might decide that the way to compute the time for the next transmission is to add 1 hour to the current time. This works except when the transmission time is between 23.00 and midnight (midnight is 00.00 in the 24-hour clock).
System Fault
- A characteristic of a software system that can lead to a system error. The fault is the inclusion of the code to add 1 hour to the time of the last transmission, without a check if the time is greater than or equal to 23.00.
System Error
- An erroneous system state that can lead to system behaviour that is unexpected by system users. The value of transmission time is set incorrectly (to 24.XX rather than 00.XX) when the faulty code is executed.
System Failure
- An event that occurs at some point in time when the system does not deliver a service as expected by its users. No weather data is transmitted because the time is invalid.
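The weather-system example above can be sketched in code (function names are my own; a minimal illustration of the fault the terminology describes):

```python
def next_transmission_faulty(hour: int) -> int:
    # System fault: 1 hour is added with no check for hour >= 23,
    # so 23 becomes 24 -- an erroneous state (system error).
    return hour + 1

def next_transmission_fixed(hour: int) -> int:
    # Wrapping with modulo keeps the hour in the valid range 0..23.
    return (hour + 1) % 24

print(next_transmission_faulty(23))  # 24 -> invalid time; no data sent (failure)
print(next_transmission_fixed(23))   # 0  -> midnight, as intended
```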
How do failures occur and how can they be prevented?
- Failures are usually a result of system errors that are derived from faults in the system.
- However, faults do not necessarily result in system errors:
  - The erroneous system state resulting from the fault may be transient and ‘corrected’ before an error arises.
  - The faulty code may never be executed.
- Errors do not necessarily lead to system failures:
  - The error can be corrected by built-in error detection/recovery.
  - The failure can be protected against by built-in protection facilities. These may, for example, protect system resources from system errors.
What are the three key areas to achieving reliability?
(Development Techniques, V & V techniques, Run-time techniques)
Fault Avoidance
- Development Techniques are used that either minimise the possibility of mistakes or trap mistakes before they result in the introduction of system faults.
Fault Detection and Removal
- Verification and validation techniques that increase the probability of detecting and correcting errors before the system goes into service.
Fault Tolerance
- Run-time techniques are used to ensure that system faults do not result in system errors and/or that system errors do not lead to system failures.
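A minimal sketch of fault tolerance (hypothetical sensor values and thresholds, my own illustration): a run-time range check stops an erroneous value from propagating into a failure.

```python
LAST_GOOD_READING = 15.0  # last value known to be valid

def plausible(value: float) -> bool:
    # Run-time check: reject physically implausible temperatures.
    return -30.0 <= value <= 60.0

def temperature(raw_reading: float) -> float:
    # If the reading is erroneous (e.g. corrupted in transit), fall
    # back to the last known-good value instead of failing.
    return raw_reading if plausible(raw_reading) else LAST_GOOD_READING

print(temperature(21.5))    # 21.5  (normal reading passes through)
print(temperature(-999.0))  # 15.0  (erroneous state masked at run time)
```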
What is meant by the term “Safety” in the context of critical systems?
Safety is a property of a system that reflects the system’s ability to operate, normally or abnormally, without danger of causing human injury or death and without damage to the system’s environment.
It is important to consider software safety because most devices whose failure is critical now incorporate software-based control systems.