Basics of software standards compliance (Part 6)
Following a series of fatal accidents in the mid-1980s, a formal investigation was conducted into the Therac-25 radiotherapy machine. Led by Nancy Leveson of the University of Washington, the investigation resulted in a set of recommendations on how to create safety-critical software solutions objectively. Since then, industries as disparate as aerospace, automotive and industrial control have encapsulated the practices and processes for creating safety- and/or security-critical systems in an objective manner into industry standards.
Although subtly different in wording and emphasis, the standards across industries follow a similar approach to ensuring the development of safe and/or secure systems. This common approach includes ten phases:
1. Perform a system safety or security assessment
2. Determine a target system failure rate
3. Use the system target failure rate to determine the appropriate level of development rigor
4. Use a formal requirements capture process
5. Create software that adheres to an appropriate coding standard
6. Trace all code back to its source requirements
7. Develop all software and system test cases based on requirements
8. Trace test cases to requirements
9. Use coverage analysis to assess test completeness against both requirements and code
10. For certification, collect and collate the process artifacts required to demonstrate that an appropriate level of rigor has been maintained.
Phase 9 is discussed in this article. One of the truisms of software development is that it is not possible to fully test a program, so the question becomes: how much testing is enough? In addition, for safety-critical systems, how can it be proved to the relevant authorities that enough testing has been performed on the software under development? The answer is software coverage analysis. While it has proven to be an effective metric for assessing test completeness, it is only meaningful when applied within the framework of a disciplined test environment.
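To make this concrete, the sketch below shows what structural coverage measurement can look like in practice. It uses the GNU gcov toolchain, which is just one of many tools in use and is not mandated by any of the standards; the function, test values and file name are invented for illustration.

```c
/* coverage_demo.c -- a minimal sketch of structural coverage analysis.
 * One way to build and measure, using the GNU toolchain:
 *   gcc --coverage -O0 coverage_demo.c -o coverage_demo
 *   ./coverage_demo
 *   gcov -b coverage_demo.c    # line and branch coverage report
 */
#include <stdio.h>

/* Hypothetical function under test: clamps a sensor reading. */
int clamp_reading(int value, int lo, int hi)
{
    if (value < lo)        /* branch 1 */
        return lo;
    if (value > hi)        /* branch 2 */
        return hi;
    return value;
}

int main(void)
{
    /* A single in-range test would execute the function yet leave both
     * early returns untaken; the coverage report exposes exactly that
     * gap. These three tests together exercise every branch outcome. */
    printf("%d\n", clamp_reading(-3, 0, 10)); /* takes branch 1 */
    printf("%d\n", clamp_reading(42, 0, 10)); /* takes branch 2 */
    printf("%d\n", clamp_reading(5, 0, 10));  /* falls through  */
    return 0;
}
```

The point of the report is not the percentage itself but the untested code it points to: each uncovered line or branch is either missing a test or missing a requirement.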
When performing exhaustive testing on a piece of software, ensuring that every path through the code is executed at least once sounds like a reasonable place to start. However, examining the possible execution paths in even simple programs soon reveals how difficult it is to test software to completion. For example, in a 2006 lecture on software testing, Professor I.K. Lundquist of MIT described a simple flow chart containing five decision points (including a loop) and six functional blocks that, when analysed, contained 10¹⁴ possible execution paths. When we compare this number to the age of the universe (about 4 × 10¹⁷ seconds), the difficulty of complete path analysis becomes clear. As a result, one of the persistent questions when it comes to developing safety-critical software is: When has enough software testing been performed to confirm that the system does what it is supposed to do?
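A sketch of the arithmetic behind that explosion follows. The decision structure here is invented (it is not Lundquist's flow chart), but it shows how a handful of decisions inside a bounded loop multiply into an astronomically large path count.

```c
#include <stdio.h>

/* Hypothetical structure: three independent decisions inside a loop
 * that runs up to N times. Invented purely to show the arithmetic. */
#define N 20   /* maximum loop iterations */

int main(void)
{
    /* Each pass through the loop body can take 2^3 = 8 distinct
     * decision outcomes, and the loop may iterate 0..N times, so the
     * total path count is the sum over k of 8^k: geometric growth. */
    double paths = 1.0;          /* the k = 0 (loop skipped) path */
    double per_iteration = 8.0;  /* 2^3 outcomes per loop body    */
    double term = 1.0;
    for (int k = 1; k <= N; ++k) {
        term *= per_iteration;
        paths += term;
    }
    printf("paths through up to %d iterations: %.3g\n", N, paths);
    /* prints roughly 1.3e18 -- at one path tested per second, that
     * is already longer than the age of the universe. */
    return 0;
}
```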
The avionics community has addressed this problem by adopting coverage analysis as the metric of choice for assessing test completeness. As Tom DeMarco, a well-known software engineering author and teacher, says: "You can't control what you can't measure."