Guidelines:
Each test artifact below is listed with its Purpose and with Tailoring guidance indicating whether it is optional or recommended.
Test Evaluation Summary
Purpose: Summarizes the Test Results for use primarily by the management team and other stakeholders external to the test team.
Tailoring: Recommended for most projects. Where the project culture is relatively informal, it may be appropriate simply to record test results and not create formal evaluation summaries. In other cases, Test Evaluation Summaries can be included as a section within other Assessment artifacts, such as the Iteration Assessment or Review Record.
Test Results
Purpose: The analyzed result determined from the raw data in one or more Test Logs.
Tailoring: Recommended. Most test teams retain some form of reasonably detailed record of the results of testing. Manual testing results are usually recorded directly here and combined with the distilled Test Logs from automated tests. In some cases, test teams will go directly from the Test Logs to producing the Test Evaluation Summary.
[Master] Test Plan
Purpose: Defines the high-level testing goals, objectives, approach, resources, schedule, and deliverables that govern a phase or the entire lifecycle.
Tailoring: Optional. Useful for most projects. A Master Test Plan defines the high-level strategy for the test effort over large parts of the software development lifecycle. Optionally, you can include the Test Plan as a section within the Software Development Plan. Consider whether to maintain a "Master" Test Plan in addition to the "Iteration" Test Plans. The Master Test Plan covers mainly logistic and process enactment information that typically relates to the entire project lifecycle and is therefore unlikely to change between iterations.
[Iteration] Test Plan
Purpose: Defines the finer-grained testing goals, objectives, motivations, approach, resources, schedule, and deliverables that govern an iteration.
Tailoring: Recommended for most projects. A separate Test Plan per iteration is recommended to define the specific, fine-grained test strategy. Optionally, you can include the Test Plan as a section within the Iteration Plan.
Test Ideas List
Purpose: An enumerated list of ideas, often partially formed, to be considered as useful tests to conduct.
Tailoring: Recommended for most projects. In some cases these lists will be informally defined and discarded once Test Scripts or Test Cases have been defined from them.
Test Script, Test Data
Purpose: The Test Scripts and Test Data are the realization or implementation of the test: the Test Script embodies the procedural aspects, and the Test Data the defining characteristics.
Tailoring: Recommended for most projects. Where projects differ is in how formally these artifacts are treated. In some cases they are informal and transitory, and the test team is judged on other criteria. In other cases, especially with automated tests, the Test Scripts and associated Test Data (or some subset thereof) are regarded as major deliverables of the test effort. (A minimal sketch of a Test Script with separate Test Data follows this list.)
Test Suite
Purpose: Used to group individual related tests (Test Scripts) together in meaningful subsets.
Tailoring: Recommended for most projects. Also needed to define any Test Script execution sequences required for tests to work correctly. (See the Test Suite sketch after this list.)
Test Case
Purpose: Defines a specific set of test inputs, execution conditions, and expected results. Documenting test cases allows them to be reviewed for completeness and correctness, and considered before implementation effort is planned and expended. This is most useful where the inputs, execution conditions, and expected results are particularly complex.
Tailoring: We recommend that on most projects, where the conditions required to conduct a specific test are complex or extensive, you should define Test Cases. You will also need to document Test Cases where they are a contractually required deliverable. In most other cases we recommend maintaining the Test Ideas List and the implemented Test Scripts instead of detailed textual Test Cases. Some projects will simply outline Test Cases at a high level and defer details to the Test Scripts. Another common style is to document the Test Case information as comments within the Test Scripts, as illustrated in the Test Script sketch after this list.
Workload Analysis Model
Purpose: A specialized type of Test Case, used to define a representative workload so that the quality risks associated with the system operating under load can be assessed.
Tailoring: Recommended for most systems, especially those where system performance under load must be evaluated, or where there are other significant quality risks associated with system operation under load. Not usually required for systems that will be deployed on a standalone target system.
Test Classes in the Design Model; Test Components in the Implementation Model
Purpose: The Design Model and Implementation Model include Test Classes and Test Components if the project has to develop significant additional specialized behavior to accommodate and support testing.
Tailoring: Where required. Stubs are a common category of Test Class and Test Component. (A stub sketch follows this list.)
Test Log
Purpose: The raw data output during test execution, typically produced by automated tests.
Tailoring: Optional. Many projects that perform automated testing will have some form of Test Log. Where projects differ is in whether the Test Logs are retained or discarded after the Test Results have been determined. You might retain Test Logs if you need to satisfy certain audit requirements, if you want to analyze how the raw test output data changes over time, or if you are unsure at the outset what analysis you will be required to provide.
Test Automation Architecture
Purpose: Provides an architectural overview of the test automation system, using a number of different architectural views to depict different aspects of the system.
Tailoring: Optional. Recommended on projects where the test architecture is relatively complex, where a large number of staff will collaborate on building automated tests, or where the test automation system is expected to be maintained over a long period of time. In some cases this might simply be a whiteboard diagram that is recorded centrally for interested parties to consult.
Test Interface Specification
Purpose: Defines a set of behaviors required of a classifier (specifically, a Class, Subsystem, or Component) for the purposes of testing (testability). Common types include test access, stubbed behavior, diagnostic logging, and test oracles.
Tailoring: Optional. On many projects there is sufficient accessibility for test in the visible operations on classes, user interfaces, and so on. Common reasons to create Test Interface Specifications include UI extensions that allow GUI test tools to interact with the application, and diagnostic message-logging routines, especially for batch processes. (A sketch of a diagnostic-logging test interface follows this list.)
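To make the distinction between Test Script and Test Data concrete, here is a minimal sketch using Python's unittest framework. RUP is tool-neutral, so treat the framework choice, the function under test, and the data values as illustrative assumptions only. The procedural Test Script is kept separate from the Test Data it consumes, and the Test Case information is documented as comments within the script, one of the styles mentioned in the Test Case entry above.

```python
import unittest

# Test Data: the defining characteristics of each test, kept separate from the
# procedural Test Script so the same procedure can be re-run with new data.
# Each tuple is (principal, annual_rate, years, expected_balance); values are hypothetical.
INTEREST_TEST_DATA = [
    (1000.00, 0.05, 1, 1050.00),
    (1000.00, 0.00, 5, 1000.00),
    (0.00,    0.05, 5, 0.00),
]

def compound_balance(principal, rate, years):
    """Hypothetical function under test."""
    return round(principal * (1 + rate) ** years, 2)

class InterestCalculationScript(unittest.TestCase):
    """Test Script for interest calculation.

    Test Case (documented as comments, a style noted in the Test Case entry above):
      Inputs:     principal, annual rate, term in years (see INTEREST_TEST_DATA)
      Conditions: calculation runs offline; no external services are required
      Expected:   balance matches the independently calculated expected value
    """

    def test_compound_balance(self):
        for principal, rate, years, expected in INTEREST_TEST_DATA:
            with self.subTest(principal=principal, rate=rate, years=years):
                self.assertEqual(compound_balance(principal, rate, years), expected)

if __name__ == "__main__":
    unittest.main()
```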
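A Test Suite can be as simple as a small program that fixes which Test Scripts run and in what order. The sketch below, again assuming Python's unittest purely for illustration, groups three hypothetical scripts and enforces the execution sequence they depend on.

```python
import unittest

# Hypothetical Test Scripts; in practice these would be imported from separate modules.
class CreateAccountScript(unittest.TestCase):
    def test_create(self):
        self.assertTrue(True)  # placeholder for the real create-account procedure

class DepositScript(unittest.TestCase):
    def test_deposit(self):
        self.assertTrue(True)  # placeholder for the real deposit procedure

class CloseAccountScript(unittest.TestCase):
    def test_close(self):
        self.assertTrue(True)  # placeholder for the real close-account procedure

def account_lifecycle_suite():
    """Test Suite: groups related Test Scripts and fixes the execution sequence
    (create, then deposit, then close) that the tests need to work correctly."""
    suite = unittest.TestSuite()
    suite.addTest(CreateAccountScript("test_create"))
    suite.addTest(DepositScript("test_deposit"))
    suite.addTest(CloseAccountScript("test_close"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(account_lifecycle_suite())
```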
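Stubs, the most common kind of Test Class and Test Component, stand in for a collaborator that is unavailable or impractical to use during testing. The sketch below is hypothetical (the service names and the order-processing function are invented for illustration); the "Stub" prefix keeps the test element clearly separated from the core design, in line with the naming advice later in this section.

```python
class CreditCheckService:
    """Hypothetical production interface: normally calls an external credit bureau."""
    def approve(self, customer_id: str, amount: float) -> bool:
        raise NotImplementedError("requires a live connection to the credit bureau")

class StubCreditCheckService(CreditCheckService):
    """Test stub (Test Component): returns canned answers so tests can run offline."""
    def __init__(self, approved_customers):
        self.approved_customers = set(approved_customers)

    def approve(self, customer_id: str, amount: float) -> bool:
        # Deterministic, canned behavior instead of a network call.
        return customer_id in self.approved_customers

def place_order(credit_service: CreditCheckService, customer_id: str, amount: float) -> str:
    """Hypothetical behavior under test; the stub is injected where the real service would be."""
    return "ACCEPTED" if credit_service.approve(customer_id, amount) else "REJECTED"

assert place_order(StubCreditCheckService({"C042"}), "C042", 99.0) == "ACCEPTED"
assert place_order(StubCreditCheckService({"C042"}), "C999", 99.0) == "REJECTED"
```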
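A Test Interface Specification often amounts to a small amount of extra behavior, such as diagnostic logging or test-only access to internal state, added to a production class. The sketch below shows one possible shape for a hypothetical batch process; the class and method names are assumptions for illustration, not part of any prescribed interface.

```python
import logging

class BatchReconciliationJob:
    """Hypothetical batch process exposing a small test-oriented interface."""

    def __init__(self):
        self.records_processed = 0
        self._log = logging.getLogger("reconciliation.diagnostics")

    def run(self, records):
        for record in records:
            self._log.debug("processing record %s", record)  # diagnostic message logging
            self.records_processed += 1
        self._log.info("run complete: %d records", self.records_processed)

    def get_progress_for_test(self):
        """Test access: lets a Test Script verify internal progress without
        parsing operator-facing output."""
        return self.records_processed

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    job = BatchReconciliationJob()
    job.run(["r1", "r2", "r3"])
    assert job.get_progress_for_test() == 3
```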
Tailor each artifact by performing the steps described in the Activity: Develop Development Case, under the heading "Tailor Artifacts per Discipline".
This section gives some guidelines to help you decide how you should review the test artifacts. For more details, see Guidelines: Review Levels.
Test Cases are created by the test team and are usually treated as Informal, meaning they are approved by someone within the test team.
Where useful, Test Cases might be approved by other team members and should then be treated as Formal-Internal.
If a customer wants to validate a product before it's released, some subset of the Test Cases could be selected as the basis for that validation. These Test Cases should be treated as Formal-External.
Test Scripts are usually treated as Informal; that is, they are approved by someone within the test team.
If the Test Scripts are to be used by many testers, and shared or reused for many different tests, they should be treated as Formal-Internal.
Test Classes are found in the Design Model, and Test Components in the Implementation Model. There are also two other related, although not test-specific, artifacts: Packages in the Design Model and Subsystems in the Implementation Model.
These artifacts are like design and implementation artifacts; however, they're created for the purpose of testing. The natural place to keep them is with the design and implementation artifacts. Remember to name or otherwise label them in such a way that they are clearly separated from the design and implementation of the core system.
Defects are usually treated as Informal and are usually handled in a defect-tracking system. On a small project, you can manage the defects as a simple list, for example, using your favorite spreadsheet. This is only manageable for small systems; when the number of people involved and the number of defects grow, you'll need to start using a more flexible defect-tracking system.
Another decision to make is whether you need to separate the handling of defects (also known as symptoms or failures) from faults, the actual errors. For small projects, you may manage to track only the defects and handle the faults implicitly. However, as the system grows, you usually need to separate the management of defects from faults. For example, several defects may be caused by the same fault. Therefore, if a fault is fixed, it's necessary to find the reported defects and inform the users who submitted them, which is only possible if defects and faults can be identified separately.
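The sketch below illustrates, with hypothetical record structures, why this separation pays off as a system grows: several defects can trace to one fault, and once that fault is fixed you can find every submitter to notify. The field names and data are invented for illustration only.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fault:
    """The underlying error in the system."""
    fault_id: str
    description: str
    fixed: bool = False

@dataclass
class Defect:
    """A reported symptom or failure, linked to a fault once diagnosed."""
    defect_id: str
    reported_by: str
    symptom: str
    fault_id: Optional[str] = None  # filled in once the cause is identified

def submitters_to_notify(fault: Fault, defects: List[Defect]) -> List[str]:
    """When a fault is fixed, find everyone who reported a defect caused by it."""
    return [d.reported_by for d in defects if d.fault_id == fault.fault_id]

fault = Fault("F-17", "rounding error in interest calculation", fixed=True)
defects = [
    Defect("D-101", "alice@example.com", "balance off by one cent", fault_id="F-17"),
    Defect("D-102", "bob@example.com", "statement total incorrect", fault_id="F-17"),
    Defect("D-103", "carol@example.com", "login page typo"),  # unrelated defect
]
print(submitters_to_notify(fault, defects))  # ['alice@example.com', 'bob@example.com']
```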
In any project where the testing is nontrivial, you need a Test Plan for each iteration. Optionally, you might also maintain a Master Test Plan. In many cases, the Test Plan is treated as Informal; that is, it's not reviewed and approved. Where testing has important visibility to external stakeholders, it could be treated as Formal-Internal or even Formal-External.
You must decide who is responsible for determining whether an iteration has met its criteria. As you enter each iteration, strive to have clearly defined upfront how the test effort is expected to demonstrate this and how it will be measured. This individual or group then decides whether the iteration has met its criteria, whether the system fulfills the desired quality criteria, and whether the Test Results are sufficient and satisfactory to support these conclusions.
The following are examples of ways to handle iteration approval:
This is an important decision: you cannot reach a goal if you don't know what it is.