
Software Quality Journal, Vol. 4, No. 4, Dec. 1995

Editorial

Background and Motivation

This issue of "Software Quality Journal" contains papers which represent major trends of the current discussions on testing in the German special interest group on "Testing, Analysis and Verification of Software".

Testing - based on execution and observation - is a challenging issue in software and system development. Since it provides a good potential for raising productivity and improving product quality, it is of great practical importance. This is mirrored by many activities in this area including major conferences on software testing such as STAR and EuroSTAR, as well as main tracks in more general conferences such as the International Quality Week and the European Software Quality Conference.

Recognizing the growing importance of the topic, the German special interest group was founded in 1991 as part of the "Gesellschaft für Informatik" (GI, the German Society of Computer Science). The group now has more than 100 members, about 80% of whom come from industry. It meets twice a year (with an average attendance of 45) to exchange practical experience, present ongoing work, and discuss new ideas and approaches as well as current trends and developments. Topics covered include static analysis, inspection & walkthrough, measurement, test management, process improvement, tools, and test techniques. Selected contributions are published in proceedings (in German).

The aim of producing this special issue is to give an overview of ongoing work, hoping to foster some exchange between this and similar groups in Europe. Further information about the special interest group may be obtained from the group's speaker, Prof. Dr. Andreas Spillner (Department of Information Technology, Bremen Polytechnic (Hochschule Bremen), Neustadtswall 30, D-28199 Bremen, phone: +49 (421) 5905-467, fax: +49 (421) 5905-476).
 

Current Trends

In the group's discussions of testing, three major streams may be identified: testing various types of systems, the influence of formal methods on testing, and the integration of testing with other analytical techniques.

While the emphasis was traditionally on concepts and tools for testing small to medium-sized sequential programs written in imperative languages, the focus today is on testing large software systems and other types of systems, such as object-oriented software and reactive systems.

  • Testing large software is as complex as constructing it; we have to cope with complexity and size. Therefore, the test phase, generally, is decomposed into steps such as unit, integration, and system test. Is it adequate to use a concept such as statement testing in every test step, or are there alternatives we should consider?
  • Testing object-oriented software has become a pressing problem, as object-oriented technology increasingly pervades software development. Traditional concepts, such as control flow based testing, may be used for testing individual methods. However, since the methods of a class are in general relatively small, the real problems of testing object-oriented software lie elsewhere: how does one test classes, and how does one cope with inheritance?
  • Software is often just one part of a system that also contains hardware and mechatronic components. Examples are control systems in cars and airplanes, televisions and even razors. Such systems are reactive: they continuously respond to events from their environment, and their behaviour is characterized by sequences of inputs and outputs rather than by single pairs. How does this influence testing? (A small sketch follows this list.)
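
As an illustration, the following is a minimal sketch in Python, built around a hypothetical door controller invented for this purpose (not taken from any of the papers), of what a reactive test case looks like: the expected behaviour is a whole sequence of responses to a sequence of stimuli, not a single input/output pair.

    # Hypothetical reactive component (illustration only): it reacts to events
    # and keeps internal state, so correct behaviour can only be judged over
    # event sequences.
    class DoorController:
        def __init__(self):
            self.open = False

        def react(self, event):
            if event == "open_request" and not self.open:
                self.open = True
                return "motor_open"
            if event == "close_request" and self.open:
                self.open = False
                return "motor_close"
            return "ignore"

    def run_sequence(controller, stimuli):
        # Drive the controller with a whole event sequence and record its outputs.
        return [controller.react(e) for e in stimuli]

    # A test case is a pair of sequences, not a single input/output pair:
    stimuli  = ["open_request", "open_request", "close_request"]
    expected = ["motor_open", "ignore", "motor_close"]
    assert run_sequence(DoorController(), stimuli) == expected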

Application areas such as aircraft or railway control systems need rigorous construction and analysis methods. In such areas formal methods have a strong influence on system development. Although they are not always used for formal verification, they at least provide a precise specification. Europe has a traditional strength in formal methods. This does not, however, mean that testing is no longer needed. On the contrary, we should use this strength to make testing more formal and thus more systematic and reproducible. An unambiguous specification allows an objective decision on correct behaviour, and it also provides a basis for systematic black box testing. Without a precise specification the selection of black box test cases depends too much on the intuition of an individual tester. Furthermore, a formal specification allows automation with respect to generating test runs and checking test results.
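
To make this concrete, here is a minimal sketch (my own illustrative example in Python, using an integer square root specification that is not taken from any of the papers) of a specification predicate acting as a test oracle: test inputs can be generated mechanically and the results checked against the specification without relying on a tester's intuition.

    # Illustrative sketch only (assumed example, not from the papers):
    # a specification predicate used as an automatic test oracle.
    import math
    import random

    def spec_isqrt(x, y):
        # Specification as a predicate: y is the integer square root of x.
        return y >= 0 and y * y <= x < (y + 1) * (y + 1)

    def isqrt_under_test(x):
        # Implementation under test (here simply the library routine).
        return math.isqrt(x)

    # The precise specification yields an objective verdict for every test run,
    # so test inputs can be generated and results checked fully automatically.
    for _ in range(1000):
        x = random.randrange(0, 10**6)
        assert spec_isqrt(x, isqrt_under_test(x)), "specification violated for %d" % x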

Since testing is no longer seen in isolation, the integration of testing with other analytical techniques is becoming an issue. Integrating testing and static analysis is obvious: we should not waste time testing a program that has not previously been checked for faults or anomalies such as type inconsistencies and non-initialized variables. Measurement may be used to support testing, but coverage and testability measures are only one facet. Another possibility is to use measures of static complexity to support the selection of test concepts for a given test object. Testing may be complemented by formal verification, at least in those technical applications with high safety requirements. In such cases neither testing nor formal verification alone may be regarded as sufficient.
 

Contents of this Collection

The first paper, "Costs and benefits of early defect detection" by Rudolf van Megen and Dirk Meyerhoff, also addresses the usage of measurement for supporting testing and, more general, defect detection. It reports on experiences from large projects. Based on metrics dependencies between the costs of early defect detection, late defect detection, and defect fixing are analyzed. Furthermore, guidelines for timing and resource allocation are given that support cost-effective defect detection. A conclusion from this real life experience is that a substantial amount of a project's budget and time can be saved by introducing defect detection and systematic testing early in the project, i.e. as soon as the requirements document is declared finished.

The second paper, entitled "A set of complexity metrics for guiding the software test process" by Peter Liggesmeyer, addresses the integration of testing and measurement: it presents an approach to using measures of static complexity to support the selection of test concepts. It argues that there are many test concepts and that it is difficult to select those suitable for the specific software under test. The idea is that program constructs with high complexity should be tested very thouroughly. A test technique which requires, for instance, that predicates are covered by test cases will be more appropriate for testing a program containing complex structured decisions than a technique like branch testing which ignores the complexity of decisions. The set of metrics specifically defined to support the selection of suitable test techniques contains control flow metrics, data flow metrics, data declaration metrics, and arithmetic metrics. The method is demonstrated and empirical data are given.

The next paper, entitled "Test criteria and coverage measures for software integration testing" by Andreas Spillner, deals with testing large systems and the use of measurement. Concentrating on integration testing it is argued that, for economic reasons, integration testing should concentrate on looking for interfaces faults. It is shown how control and data flow oriented techniques may be reused by adapting them to the specific needs of integration testing. In addition, respective coverage metrics are defined, and it is discussed how those may be used for assessing the effectiveness and the quality of the test process.

The paper "Systematic testing and formal verification to validate reactive programs" by Monika Müllerburg, Leszek Holenderski, Olivier Maffeis, Agathe Merceron and Matthew Morley, dsicusses the integration of testing and verification. It is argued that, regarding testing and verification as complementary rather than competing techniques, reactive systems which are implemented in synchronous languages can be validated both more effectively and more efficiently. Using only testing would be too expensive since establishing confidence with respect to safety properties would need far too many test cases. On the other hand, using only verification would be too expensive since some properties are difficult to verify given the complexity of the software. The approach is demonstrated using the well known lift example.

The last paper, entitled "Using formal specifications for supporting software testing " by Hans-Martin Hörcher and Jan Paleska, addresses the topic of formal methods. It discusses the use of formal specifications for systematically deriving test data for achieving complete coverage of the requirements specifications, as well as for automatically evaluating test results. Though the discussion is based on the specification language Z, the ideas may well be used for other specification languages.

Monika Müllerburg
Ken Croucher

Contents

  • R. van Megen, D. Meyerhoff:
    Costs and benefits of early defect detection: experiences from developing client server and host applications
  • P. Liggesmeyer:
    A set of complexity metrics for guiding the software test process
  • A. Spillner:
    Test criteria and coverage measures for software integration testing
  • M. Müllerburg, L. Holenderski, O. Maffeis, A. Merceron, M. Morley:
    Systematic testing and formal verification to validate reactive programs
  • H.-M. Hörcher, J. Peleska:
    Using formal specifications to support software testing