
Testing

by Benedikt Liegener last modified Apr 25, 2012 14:20

Definitions

Term: Testing

Domain: Cross-cutting issues
  Engineering and Design (KM-ED)
  Adaptation and Monitoring (KM-AM)
  Quality Definition, Negotiation and Assurance (KM-QA)
  Generic (domain independent)

Domain: Layers
  Business Process Management (KM-BPM)
  Service Composition and Coordination (KM-SC)
  Service Infrastructure (KM-SI)
  Generic (domain independent)
(Software) Testing is an activity performed for evaluating product quality, and for improving it, by identifying defects and problems [SWEBOK].
The goal of Testing is to (systematically) execute services or service-based applications in order to uncover failures. During testing, the service or service-based application under test is fed with concrete inputs, and the produced outputs are observed. The observed outputs can deviate from the expected outputs with respect to functionality as well as quality of service (e.g., performance or availability). When the observed outputs deviate from the expected outputs, a failure of the service or service-based application has been uncovered.

Failures can be caused by faults (or defects) of the test object. Examples of faults are a wrong exit condition for a loop in the software code that implements a service, or a wrong order of service invocations in a BPEL specification. Finding such faults is typically not part of the testing activities but is the aim of debugging.

A special case of testing is profiling. During profiling, a service or service-based application is systematically executed in order to determine specific properties. For example, the execution times of individual services in a service composition could be measured for 'typical' or 'extreme' inputs in order to identify optimization potential.

Testing cannot guarantee the absence of faults, because it is infeasible (except in trivial cases) to test all potential concrete inputs of a service or service-based application. As a consequence, a sub-set of all potential inputs has to be selected for testing. The quality of the tests strongly depends on how well this sub-set is chosen: ideally, it should include concrete inputs that are representative of all potential inputs (even those which are not tested), and it should include inputs that, with high probability, uncover failures. Since choosing such an ideal sub-set is typically infeasible, it is important to employ other quality assurance techniques and methods that complement testing. [PO-JRA-1.3.1]

The process of exercising a product to identify differences between expected and actual behaviour [FOLDOC].
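As a minimal sketch of these two activities, the following example uses a hypothetical service (the function, its inputs, and the expected outputs are illustrative assumptions, not part of any definition above). Testing feeds the service concrete inputs and compares observed against expected outputs; profiling executes it repeatedly to measure a specific property, here execution time:

```python
import time

def currency_conversion_service(amount, rate):
    # Hypothetical service under test: converts an amount using a fixed rate.
    return round(amount * rate, 2)

# Testing: a chosen sub-set of concrete inputs with their expected outputs.
# A deviation of observed from expected output uncovers a failure.
test_cases = [
    ((100.0, 0.85), 85.0),  # representative input
    ((0.0, 0.85), 0.0),     # boundary input, chosen to provoke failures
]
for inputs, expected in test_cases:
    observed = currency_conversion_service(*inputs)
    assert observed == expected, f"failure uncovered: {observed} != {expected}"

# Profiling: systematically execute the service to determine a property,
# here the total execution time of repeated invocations.
start = time.perf_counter()
for _ in range(10_000):
    currency_conversion_service(100.0, 0.85)
elapsed = time.perf_counter() - start
print(f"10,000 invocations took {elapsed:.4f} s")
```

Note that the assertions check only the listed inputs; as stated above, passing them cannot guarantee the absence of faults for inputs outside this sub-set.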

 

Competencies

References




