Software Testing, QA

Glossary

A

Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Actual result: The behavior produced/observed when a component or system is tested.

Ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

Automated testware: Testware used in automated testing, such as tool scripts.

B

Basis test set: A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.

Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

Blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.

Boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis: A black box test design technique in which test cases are designed based on boundary values. See also boundary value.
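
For illustration, a minimal sketch in Python (the is_valid_age function and the 18-65 range are hypothetical, not part of the glossary): boundary value analysis picks the edges of the valid range and the values just outside them.

    def is_valid_age(age):
        # Hypothetical component under test: accepts ages from 18 to 65 inclusive.
        return 18 <= age <= 65

    # Boundary values: the edges of the valid partition and the nearest values outside it.
    assert is_valid_age(18) is True    # lower boundary
    assert is_valid_age(65) is True    # upper boundary
    assert is_valid_age(17) is False   # just below the lower boundary
    assert is_valid_age(66) is False   # just above the upper boundary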

Bug: See defect.

Bug report: See defect report.

Bug taxonomy: See defect taxonomy.

Bug tracking tool: See defect management tool.

C

Capture/playback tool: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.
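
As a sketch of what a generated script might look like (Selenium WebDriver in Python; the URL and element ids are hypothetical assumptions, not the output of any particular tool):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Replay of a recorded login sequence.
    driver = webdriver.Chrome()
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login").click()
    assert "Dashboard" in driver.title   # checkpoint captured during recording
    driver.quit()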

Certification: The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.

Checklist-based testing: An experience-based test design technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified. See also experience-based testing.

Cost of quality: The total costs incurred on quality activities and issues and often split into prevention costs, appraisal costs, internal failure costs and external failure costs.

Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

D

Debugging: The process of finding, analyzing and removing the causes of failures in software.

Debugging tool: A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
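
In Python, for example, the built-in debugger can be invoked from inside the program to halt execution, step through statements and inspect variables (the divide function below is a hypothetical example):

    def divide(a, b):
        breakpoint()   # halts here and opens the pdb debugger (Python 3.7+)
        return a / b   # step with "n", print variables with "p a, b", continue with "c"

    divide(10, 0)      # reproduce the failure (ZeroDivisionError) under the debugger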

Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect management tool: A tool that facilitates the recording and status tracking of defects and changes. Such tools often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects, and provide reporting facilities. See also incident management tool.

Defect report: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.

Defect taxonomy: A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.

Documentation testing: Testing the quality of the documentation, e.g. user guide or installation guide.

E

Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. See also simulator.

Equivalence class: See equivalence partition.

Equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

Equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
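
Continuing the hypothetical age example, equivalence partitioning splits the input domain into three partitions (below, inside and above the valid range) and tests one representative from each:

    def is_valid_age(age):
        # Hypothetical component under test: accepts ages from 18 to 65 inclusive.
        return 18 <= age <= 65

    # One representative value per equivalence partition.
    assert is_valid_age(10) is False   # partition: below the valid range
    assert is_valid_age(40) is True    # partition: inside the valid range
    assert is_valid_age(80) is False   # partition: above the valid range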

Error: A human action that produces an incorrect result.

Exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.
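
A quick, purely hypothetical calculation shows why exhaustive testing is rarely practical: even three independent inputs with modest value ranges already produce tens of thousands of combinations.

    import itertools

    # Hypothetical input form with three fields.
    countries = range(200)        # 200 country codes
    ages = range(0, 130)          # 130 possible ages
    plans = ["free", "pro", "enterprise"]

    combinations = list(itertools.product(countries, ages, plans))
    print(len(combinations))      # 78000 test cases for just three inputs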

Expected result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.

Exploratory testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

F

Fail: A test is deemed to fail if its actual result does not match its expected result.

Failure: Deviation of the component or system from its expected delivery, service or result.

Feature: An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints).

Functional requirement: A requirement that specifies a function that a component or system must perform.

Functional testing: Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.

H

High level test case: A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available. See also low level test case.

L

Load testing: A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system. See also performance testing, stress testing.
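
A simplified sketch of the idea (the place_order stub, user counts and timings are hypothetical): drive the system with an increasing number of parallel users and record how long each load step takes.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def place_order():
        # Hypothetical operation under test; in practice this would call the real system.
        time.sleep(0.01)

    def run_load_step(parallel_users, requests_per_user=10):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=parallel_users) as pool:
            for _ in range(parallel_users * requests_per_user):
                pool.submit(place_order)
        return time.perf_counter() - start

    # Increase the load step by step and observe the behavior.
    for users in (1, 10, 50, 100):
        print(users, "parallel users:", round(run_load_step(users), 2), "seconds")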

Low level test case: A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. See also high level test case.
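
To illustrate the difference between the two levels (the transfer scenario and field names are hypothetical):

    # High level test case: logical operators, no concrete values yet.
    high_level = {
        "input": "an amount greater than the account balance",
        "expected": "the transfer is rejected with an insufficient-funds message",
    }

    # Low level test case: the same test with concrete, implementation-level values.
    low_level = {
        "balance": 100.00,
        "transfer_amount": 250.00,
        "expected_error": "INSUFFICIENT_FUNDS",
    }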

M

Memory leak: A memory access failure due to a defect in a program’s dynamic store allocation logic that causes it to fail to release memory after it has finished using it, eventually causing the program and/or other concurrent processes to fail due to lack of memory.
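
The definition is phrased for languages with manual memory management, but the same effect appears in garbage-collected languages whenever references are kept forever. A hypothetical Python sketch:

    # The module-level cache grows with every request and is never evicted,
    # so the garbage collector can never reclaim the response objects.
    _cache = []

    def handle_request(request_id):
        response = "x" * 100_000                 # large response payload
        _cache.append((request_id, response))    # reference kept forever
        return response

    for i in range(1000):
        handle_request(i)                        # memory use climbs with every call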

N

Negative testing: Tests aimed at showing that a component or system does not work. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions.
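
For example, a negative test deliberately feeds invalid input and checks that the component rejects it cleanly (the parse_quantity function is hypothetical; the tests use the pytest style of asserting exceptions):

    import pytest

    def parse_quantity(text):
        # Hypothetical component under test: quantities must be positive integers.
        value = int(text)
        if value <= 0:
            raise ValueError("quantity must be positive")
        return value

    def test_rejects_non_numeric_input():
        with pytest.raises(ValueError):
            parse_quantity("abc")   # invalid input: not a number

    def test_rejects_negative_quantity():
        with pytest.raises(ValueError):
            parse_quantity("-5")    # invalid input: negative quantity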

Non-functional testing: Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

P

Pair testing: Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.

Pass: A test is deemed to pass if its actual result matches its expected result.

Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Performance testing: The process of testing to determine the performance of a software product.

Project: A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.

Q

Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

Quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled.

R

Record/playback tool: See capture/playback tool.

Regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

Requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.

Result: The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result, expected result.

Risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.

Robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.

Robustness testing: Testing to determine the robustness of the software product.

S

Scripted testing: Test execution carried out by following a previously documented sequence of tests.

Security testing: Testing to determine the security of the software product.

Session-based testing: An approach to testing in which test activities are planned as uninterrupted sessions of test design and execution, often used in conjunction with exploratory testing.

Simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system.

Simulator: A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs. See also emulator.

Smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.
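
A hypothetical sketch of a smoke test for a small inventory component: a handful of checks covering only the most crucial functions, with no finer details.

    # Hypothetical component under test.
    INVENTORY = {"sku-1001": 5, "sku-1002": 0}

    def in_stock(sku):
        return INVENTORY.get(sku, 0) > 0

    # Smoke checks: the build is only promoted if these pass.
    def test_smoke_inventory_loads():
        assert len(INVENTORY) > 0

    def test_smoke_known_item_is_in_stock():
        assert in_stock("sku-1001")

    def test_smoke_unknown_item_is_not_in_stock():
        assert not in_stock("sku-9999")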

Software: Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system.

Software lifecycle: The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software lifecycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.

Software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

Specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied.

Stress testing: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified work loads, or with reduced availability of resources such as access to memory or servers. See also performance testing, load testing.

System: A collection of components organized to accomplish a specific function or set of functions.

T

Test: A set of one or more test cases.

Test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made on the basis of the (test) project’s goal and the risk assessment carried out, the starting points regarding the test process, the test design techniques to be applied, the exit criteria and the test types to be performed.

Test automation: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

Test basis: All documents from which the requirements of a component or system can be inferred; the documentation on which the test cases are based. If a document can be amended only by way of a formal amendment procedure, then the test basis is called a frozen test basis.

Test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
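
A minimal, hypothetical illustration in Python's unittest style, showing the four ingredients: precondition, input values, expected result and postcondition.

    import unittest

    class Account:
        # Hypothetical component under test.
        def __init__(self, balance=0):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    class WithdrawTestCase(unittest.TestCase):
        def setUp(self):
            # Execution precondition: an account with a balance of 100 exists.
            self.account = Account(balance=100)

        def test_withdraw_within_balance(self):
            self.account.withdraw(30)                    # input value
            self.assertEqual(self.account.balance, 70)   # expected result

        def tearDown(self):
            # Execution postcondition: the account is discarded after the test.
            self.account = None

    if __name__ == "__main__":
        unittest.main()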

Test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

Test data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

Test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Test estimation: The calculated approximation of a result related to various aspects of testing (e.g. effort spent, completion date, costs involved, number of test cases, etc.) which is usable even if input data may be incomplete, uncertain, or noisy.

Test execution: The process of running a test on the component or system under test, producing actual result(s).

Test execution automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.

Test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

Test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and the entry and exit criteria to be used (with the rationale for their choice), and any risks requiring contingency planning. It is a record of the test planning process.

Test process: The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting, and test closure activities.

Test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
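
A hypothetical sketch with Python's unittest: individual test cases are collected into one suite and run together (the tests here are independent placeholders; chaining post- and preconditions between tests is possible but makes the suite order-dependent).

    import unittest

    class LoginTests(unittest.TestCase):
        def test_login_page_exists(self):
            self.assertTrue(True)          # placeholder for a real check

    class CheckoutTests(unittest.TestCase):
        def test_cart_starts_empty(self):
            self.assertEqual(len([]), 0)   # placeholder for a real check

    # Collect the test cases into a single suite and execute it.
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_login_page_exists"))
    suite.addTest(CheckoutTests("test_cart_starts_empty"))
    unittest.TextTestRunner().run(suite)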

Tester: A skilled professional who is involved in the testing of a component or system.

Testing: The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

Testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.

U

Usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.

Usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

Use case: A sequence of transactions in a dialogue between an actor and a component or system with a tangible result, where an actor can be a user or anything that can exchange information with the system.

Use case testing: A black box test design technique in which test cases are designed to execute scenarios of use cases.

V

Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

W

White-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

White-box testing: Testing based on an analysis of the internal structure of the component or system.
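
For illustration (the discount function is hypothetical): a white-box technique such as branch coverage derives tests from the internal structure, so both outcomes of the if statement must be exercised.

    def discount(order_total):
        # Hypothetical component under test with one decision point (two branches).
        if order_total >= 100:
            return order_total * 0.9   # branch 1: 10% discount applied
        return order_total             # branch 2: no discount

    # Tests derived from the internal structure: one per branch.
    assert discount(150) == 135.0   # covers the "discount applied" branch
    assert discount(50) == 50       # covers the "no discount" branch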

These term definitions are taken from the ISTQB glossary.

If you’d like us to apply our experience and knowledge to making your software better, you might be interested in our software testing service.
