April 25, 2008

Types of Test Reports

The documents outlined in the IEEE Standard for Software Test Documentation cover test planning, test specification, and test reporting. Test reporting covers four document types:
1. A Test Item Transmittal Report identifies the test items being transmitted for testing from the development group to the testing group when a formal beginning of test execution is desired. Details to be included in the report - Purpose, Outline, Transmittal-Report Identifier, Transmitted Items, Location, Status, and Approvals.
2. A Test Log is used by the test team to record what occurred during test execution.
Details to be included in the report - Purpose, Outline, Test-Log Identifier, Description, Activity and Event Entries, Execution Description, Procedure Results, Environmental Information, Anomalous Events, Incident-Report Identifiers
3. A Test Incident Report describes any event that occurs during test execution and requires further investigation. Details to be included in the report - Purpose, Outline, Test-Incident-Report Identifier, Summary, and Impact.
4. A Test Summary Report summarizes the testing activities associated with one or more test-design specifications. Details to be included in the report - Purpose, Outline, Test-Summary-Report Identifier, Summary, Variances, Comprehensiveness Assessment, Summary of Results, Summary of Activities, and Approvals.

Defect Tracking

What is a defect?
As discussed earlier, a defect is the variance from a desired product attribute (it can be wrong, missing or extra data). It can be of two types - a defect in the product or a variance from customer/user expectations. It is a flaw in the software system and has no impact until it affects the user/customer or the operational system.

What are the defect categories?
With the knowledge of testing gained so far, you should now be able to categorize the defects you find. Defects can be categorized into different types based on the core issues they address. Some defects address security or database issues, while others refer to functionality or UI issues.
Security Defects: Application security defects generally involve improper handling of data sent from the user to the application. These defects are the most severe and given highest priority for a fix.
Examples:
- Authentication: Accepting an invalid username/password
- Authorization: Access to pages for which permission has not been granted
Data Quality/Database Defects: Deals with improper handling of data in the database.
Examples:
- Values not deleted/inserted into the database properly
- Improper/wrong/null values inserted in place of the actual values
Critical Functionality Defects: The occurrence of these bugs hampers the crucial functionality of the application.
Examples:
- Exceptions
Functionality Defects: These defects affect the functionality of the application.
Examples:
- All Javascript errors
- Buttons like Save, Delete, Cancel not performing their intended functions
- A missing functionality (or) a feature not functioning the way it is intended to
- Continuous execution of loops
User Interface Defects: As the name suggests, these bugs deal with problems related to the UI and are usually considered less severe.
Examples:
- Improper error/warning/UI messages
- Spelling mistakes
- Alignment problems

How is a defect reported?
Once the test cases are developed using the appropriate techniques, they are executed, and this is when bugs are found. It is very important that these bugs be reported as soon as possible, because the earlier you report a bug, the more time remains in the schedule to get it fixed.
A simple example: if you report a wrongly documented functionality in the Help file a few months before the product release, the chances that it will be fixed are very high. If you report the same bug a few hours before the release, the odds are that it won't be fixed. The bug is the same whether you report it a few months or a few hours before the release; what matters is the time. It is not enough just to find bugs; they should also be reported and communicated clearly and efficiently, keeping in mind the number of people who will read the defect report.
Defect tracking tools (also known as bug tracking tools, issue tracking tools or problem
trackers) greatly aid the testers in reporting and tracking the bugs found in software
applications. They provide a means of consolidating a key element of project information in one place. Project managers can then see which bugs have been fixed, which are
outstanding and how long it is taking to fix defects. Senior management can use reports to understand the state of the development process.

How descriptive should your bug/defect report be?
You should provide enough detail when reporting a bug, keeping in mind the people who will use it - test lead, developer, project manager, other testers, newly assigned testers, etc. This means the report you write should be concise, straightforward and clear. Following are the details your report should contain (a small sketch of such a report as a data structure appears after this list):
- Bug Title
- Bug identifier (number, ID, etc.)
- The application name or identifier and version
- The function, module, feature, object, screen, etc. where the bug occurred
- Environment (OS, Browser and its version)
- Bug Type or Category/Severity/Priority
o Bug Category: Security, Database, Functionality (Critical/General), UI
o Bug Severity: Severity with which the bug affects the application – Very High, High,
Medium, Low, Very Low
o Bug Priority: Recommended priority to be given for a fix of this bug – P0, P1, P2, P3,
P4, P5 (P0-Highest, P5-Lowest)
- Bug status (Open, Pending, Fixed, Closed, Re-Open)
- Test case name/number/identifier
- Bug description
- Steps to Reproduce
- Actual Result
- Tester Comments
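
To make the checklist above concrete, here is a minimal sketch of how such a report could be captured as a data structure, using a Python dataclass. The field names and example values are purely illustrative assumptions, not the schema of any particular defect tracking tool.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DefectReport:
        bug_id: str                     # Bug identifier (number, ID, etc.)
        title: str                      # Bug title
        application: str                # Application name/identifier and version
        module: str                     # Function, module, feature, screen, etc.
        environment: str                # OS, browser and version
        category: str                   # Security, Database, Functionality, UI
        severity: str                   # Very High, High, Medium, Low, Very Low
        priority: str                   # P0 (highest) .. P5 (lowest)
        status: str = "Open"            # Open, Pending, Fixed, Closed, Re-Open
        test_case_id: str = ""          # Test case name/number/identifier
        description: str = ""           # Bug description
        steps_to_reproduce: List[str] = field(default_factory=list)
        actual_result: str = ""
        tester_comments: str = ""

    # Example usage with made-up values:
    report = DefectReport(
        bug_id="BUG-101",
        title="Save button does not persist changes",
        application="OrderEntry 2.3",
        module="Order form",
        environment="Windows XP, IE 6",
        category="Functionality",
        severity="High",
        priority="P1",
        description="Clicking Save discards the edited order.",
        steps_to_reproduce=["Open an existing order", "Edit the quantity", "Click Save"],
        actual_result="The original quantity is shown after reload.",
    )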

What does the tester do when the defect is fixed?
Once the reported defect is fixed, the tester needs to re-test to confirm the fix. This is usually done by executing the possible scenarios where the bug can occur. Once retesting is completed, the fix can be confirmed and the bug can be closed. This marks the end of the bug life cycle.

Test Case Design Techniques

The test case design techniques are broadly grouped into two categories: black box techniques and white box techniques. A few other techniques do not fall under either category.

Black Box (Functional)
- Specification derived tests
- Equivalence partitioning
- Boundary Value Analysis
- State-Transition Testing
White Box (Structural)
- Branch Testing
- Condition Testing
- Data Definition - Use Testing
- Internal boundary value testing
Other
- Error guessing
Specification Derived Tests
As the name suggests, test cases are designed by walking through the relevant specifications. It is a positive test case design technique.

Equivalence Partitioning
Equivalence partitioning is the process of taking all of the possible test values and placing them into classes (partitions or groups). Test cases should be designed to test one value from each class. It thereby uses the fewest test cases to cover the maximum input requirements.
For example, suppose a program accepts integer values only from 1 to 10. The possible test values for such a program are the range of all integers, and all integers below 1 or above 10 will cause an error. It is reasonable to assume that if 11 fails, all values above it will fail too, and likewise for values below 1.
If an input condition is a range of values, let one valid equivalence class be the range itself (1 to 10 in this example), and let the values below and above the range form two invalid equivalence classes (represented by, say, 0 and 11). One value from each of these three partitions can then be used as a test case for the above example.
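
As a rough illustration, the following Python sketch applies equivalence partitioning to the 1-to-10 example above; accepts_value is a hypothetical stand-in for the unit under test.

    # One representative value per equivalence class is enough:
    #   invalid below the range (0), valid (5), invalid above the range (11).
    def accepts_value(n: int) -> bool:
        return 1 <= n <= 10

    partitions = {
        "invalid below range": (0, False),
        "valid range":         (5, True),
        "invalid above range": (11, False),
    }

    for name, (value, expected) in partitions.items():
        assert accepts_value(value) == expected, f"{name} failed for {value}"
    print("all equivalence-partition checks passed")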

Boundary Value Analysis
This is a selection technique in which test data are chosen to lie along the boundaries of the input domain or the output range. The technique deliberately stresses the boundaries and incorporates a degree of negative testing in the test design by anticipating that errors will occur at or around the partition boundaries.
For example, suppose a field is required to accept amounts of money between $0 and $10. As a tester, you first need to check what the specification means: does it include amounts up to and including $10, or only up to $9.99? Assuming $10 itself is not acceptable, the boundary values are $0, $0.01, $9.99 and $10, and the following tests can be executed: a negative value should be rejected, $0 should be accepted (it lies on the boundary), $0.01 and $9.99 should be accepted, and null and $10 should be rejected. In this way, the technique uses the same concept of partitions as equivalence partitioning.
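
The following Python sketch turns the boundary values above into checks, assuming the field accepts amounts from $0.00 up to but not including $10.00; accepts_amount is a made-up stand-in for the field's validation logic, and the null case is omitted for brevity.

    from decimal import Decimal

    def accepts_amount(amount: Decimal) -> bool:
        # Assumed rule: $0.00 inclusive up to, but not including, $10.00.
        return Decimal("0.00") <= amount < Decimal("10.00")

    boundary_cases = [
        (Decimal("-0.01"), False),  # just below the lower boundary
        (Decimal("0.00"),  True),   # on the lower boundary
        (Decimal("0.01"),  True),   # just above the lower boundary
        (Decimal("9.99"),  True),   # just below the upper boundary
        (Decimal("10.00"), False),  # on the (excluded) upper boundary
    ]

    for amount, expected in boundary_cases:
        assert accepts_amount(amount) == expected, f"boundary check failed at {amount}"
    print("all boundary-value checks passed")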

State Transition Testing
As the name suggests, test cases are designed to test the transition between the states by creating the events that cause the transition.
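
For illustration, here is a minimal Python sketch of state-transition testing for a toy two-state switch; the transition table and events are invented for the example.

    # Transition table: (current state, event) -> next state.
    TRANSITIONS = {
        ("off", "press"): "on",
        ("on",  "press"): "off",
    }

    def next_state(state: str, event: str) -> str:
        # Events not defined for a state leave the state unchanged.
        return TRANSITIONS.get((state, event), state)

    # Each test case triggers an event and checks the resulting state.
    assert next_state("off", "press") == "on"
    assert next_state("on", "press") == "off"
    assert next_state("on", "unknown") == "on"   # invalid event: no transition
    print("state-transition checks passed")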

Branch Testing
In branch testing, test cases are designed to exercise the control flow branches or decision points in a unit. This is usually aimed at achieving a target level of decision (branch) coverage: both the IF and the ELSE branches need to be tested, and all branches and compound conditions (e.g. loops and array handling) within the unit should be exercised at least once.

Condition Testing
The object of condition testing is to design test cases to show that the individual components of logical conditions and combinations of the individual components are correct. Test cases are designed to test the individual elements of logical expressions, both within branch conditions and within other expressions in a unit.
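
The short Python sketch below illustrates both of the preceding techniques on a made-up unit, approve_discount: the first pair of cases exercises the IF and ELSE branches, and the second pair varies each component of the compound condition individually.

    def approve_discount(is_member: bool, total: float) -> bool:
        if is_member and total > 100.0:   # compound condition with two components
            return True
        else:
            return False

    # Branch testing: at least one case takes the IF branch, one takes the ELSE branch.
    assert approve_discount(True, 150.0) is True      # IF branch
    assert approve_discount(False, 150.0) is False    # ELSE branch

    # Condition testing: vary each component of the condition independently.
    assert approve_discount(True, 50.0) is False      # second component false
    assert approve_discount(False, 50.0) is False     # both components false
    print("branch and condition checks passed")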

Data Definition – Use Testing
Data definition-use testing designs test cases to test pairs of data definitions and uses. Data definition is anywhere that the value of a data item is set. Data use is anywhere that a data item is read or used. The objective is to create test cases that will drive execution through paths between specific definitions and uses.
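
A minimal Python sketch of the idea, using an invented shipping_cost function: the variable rate is defined on two different paths and used once, giving two definition-use pairs to cover.

    def shipping_cost(weight: float, express: bool) -> float:
        if express:
            rate = 2.0       # definition 1
        else:
            rate = 1.0       # definition 2
        return weight * rate  # use

    assert shipping_cost(3.0, express=True) == 6.0    # covers definition 1 -> use
    assert shipping_cost(3.0, express=False) == 3.0   # covers definition 2 -> use
    print("definition-use checks passed")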

Internal Boundary Value Testing
In many cases, partitions and their boundaries can be identified from a functional specification for a unit, as described under equivalence partitioning and boundary value analysis above. However, a unit may also have internal boundary values that can only be identified from a structural specification.
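
As a small illustration, the Python sketch below has an internal bulk-pricing threshold that does not appear in any functional specification; the function and the threshold value are invented for the example.

    def price(quantity: int) -> float:
        if quantity <= 100:          # internal boundary at 100 (implementation detail)
            return quantity * 2.00
        return quantity * 1.50       # bulk pricing path

    # Internal boundary tests: values either side of, and on, the internal boundary.
    for quantity in (99, 100, 101):
        print(quantity, "->", price(quantity))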

Error Guessing
Error guessing is a test case design technique in which testers use their experience to guess the possible errors that might occur and design test cases accordingly to uncover them. Using any one, or a combination, of the test case design techniques described above, you can develop effective test cases.

What is a Use Case?
A use case describes the system’s behavior under various conditions as it responds to a request from one of the users. The user initiates an interaction with the system to accomplish some goal.
Different sequences of behavior, or scenarios, can unfold, depending on the particular requests made and conditions surrounding the requests. The use case collects together those different scenarios. Use cases are popular largely because they tell coherent stories about how the system will behave in use. The users of the system get to see just what this new system will be and get to react early.

The Test Planning Process

What is a Test Strategy? What are its Components?
Test Policy - A document characterizing the organization’s philosophy towards software testing.
Test Strategy - A high-level document defining the test phases to be performed and the testing within those phases for a programme. It defines the process to be followed in each project. This sets the standards for the processes, documents, activities etc. that should be followed for each project.
For example, if a product is given for testing, you should decide if it is better to use black-box testing or white-box testing and if you decide to use both, when will you apply each and to which part of the software? All these details need to be specified in the Test Strategy.
Project Test Plan - A document defining the test phases to be performed and the testing within those phases for a particular project. A Test Strategy should cover more than one project and should address the following issues: an approach to testing high-risk areas first, planning for testing, how to improve the process based on previous testing, environments/data used, test management (configuration management, problem management), what metrics are followed, whether the tests will be automated and if so which tools will be used, what the testing stages and testing methods are, the post-testing review process, and templates. Test planning needs to start as soon as the project requirements are known. The first document that needs to be produced is the Test Strategy/Testing Approach, which sets the high-level approach for testing and covers all the other elements mentioned above.
Test Planning – Sample Structure
Once the approach is understood, a detailed test plan can be written. Test plans can be written in different styles and can differ completely from project to project within the same organization.

Major Test Planning Tasks
Like any other process in software testing, the major tasks in test planning are to - Develop Test Strategy, Identify Critical Success Factors, Define Test Objectives, Identify Needed Test Resources, Plan Test Environment, Define Test Procedures, Identify Functions To Be Tested, Identify Interfaces With Other Systems or Components, Write Test Scripts, Define Test Cases, Design Test Data, Build Test Matrix, Determine Test Schedules, Assemble Information, and Finalize the Plan.

Test Case Development
A test case is a detailed procedure that fully tests a feature or an aspect of a feature. While the test plan describes what to test, a test case describes how to perform a particular test. You need to develop test cases for each test listed in the test plan.
General Guidelines
As a tester, the best way to determine the compliance of the software with requirements is by designing effective test cases that provide a thorough test of a unit. Various test case design techniques enable testers to develop effective test cases. Besides implementing the design techniques, every tester needs to keep in mind general guidelines that will aid in test case design:
a. The purpose of each test case is to run the test in the simplest way possible. [Suitable techniques - Specification derived tests, Equivalence partitioning]
b. Concentrate initially on positive testing, i.e. the test case should show that the software does what it is intended to do. [Suitable techniques - Specification derived tests, Equivalence partitioning, State-transition testing]
c. Existing test cases should be enhanced and further test cases should be designed to show that the software does not do anything that it is not specified to do, i.e. negative testing. [Suitable techniques - Error guessing, Boundary value analysis, Internal boundary value testing, State-transition testing]
d. Where appropriate, test cases should be designed to address issues such as performance, safety requirements and security requirements. [Suitable techniques - Specification derived tests]
e. Further test cases can then be added to the unit test specification to achieve specific test coverage objectives. Once coverage tests have been designed, the test procedure can be developed and the tests executed. [Suitable techniques - Branch testing, Condition testing, Data definition-use testing, State-transition testing]

April 24, 2008

Software Testing Levels and Types

There are basically three levels of testing: Unit Testing, Integration Testing and System Testing. Various types of testing come under these levels.
Unit Testing
To verify a single program or a section of a single program
Integration Testing
To verify interaction between system components
Prerequisite: unit testing completed on all components that compose a system
System Testing
To verify and validate behaviors of the entire system against the original system objectives
Software testing is a process that identifies the correctness, completeness, and quality of software.
Following is a list of various types of software testing and their definitions in a random order:
Formal Testing: Performed by test engineers
Informal Testing: Performed by the developers
Manual Testing: That part of software testing that requires human input, analysis, or
evaluation.
Automated Testing: Software testing that utilizes a variety of tools to automate the testing process. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tools and the software being tested to set up the test cases.
Black box Testing: Testing software without any knowledge of the back-end of the system, structure or language of the module being tested. Black box test cases are written from a definitive source document, such as a specification or requirements document.
White box Testing: Testing in which the software tester has knowledge of the back-end, structure and language of the software, or at least its purpose.
Unit Testing: Unit testing is the process of testing a particular compiled program, i.e., a window, a report, an interface, etc., independently as a stand-alone component/program. The types and degrees of unit tests can vary among modified and newly created programs. Unit testing is mostly performed by the programmers, who are also responsible for creating the necessary unit test data. (A minimal unit test sketch appears at the end of this list of definitions.)
Incremental Testing: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers.
System Testing: System testing is a form of black box testing. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed.
Integration Testing: Testing two or more modules or functions together with the intent of finding interface defects between the modules/functions.
System Integration Testing: Testing of software components that have been distributed across multiple platforms (e.g., client, web server, application server, and database server) to produce failures caused by system integration defects (i.e. defects involving distribution and back-office integration).
Functional Testing: Verifying that a module functions as stated in the specification and establishing confidence that a program does what it is supposed to do.
End-to-end Testing: Similar to system testing - testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, application, or system.
Sanity Testing: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes testing basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
Regression Testing: Testing with the intent of determining if bug fixes have been successful and have not created any new problems.
Acceptance Testing: Testing the system with the intent of confirming readiness of the product and customer acceptance. Also known as User Acceptance Testing.
Ad hoc Testing: Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an addition to formal testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed, usually by skilled testers. Sometimes ad hoc testing is referred to as exploratory testing.
Configuration Testing: Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
Load Testing: Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.
Stress Testing: Testing done to evaluate the behavior when the system is pushed beyond the breaking point. The goal is to expose the weak links and to determine if the system manages to recover gracefully.
Performance Testing: Testing with the intent of determining how efficiently a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.
Usability Testing: Usability testing is testing for 'user-friendliness'. A way to evaluate and measure how users interact with a software product or site. Tasks are given to users and observations are made.
Installation Testing: Testing with the intent of determining if the product is compatible with a variety of platforms and how easily it installs.
Recovery/Error Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security Testing: Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.
Penetration Testing: Penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
Compatibility Testing: Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.
Exploratory Testing: Any testing in which the tester dynamically changes what they're doing for test execution, based on information they learn as they're executing their tests.
Comparison Testing: Testing that compares software weaknesses and strengths to those of competitors' products.
Alpha Testing: Testing after code is mostly complete or contains most of the functionality and prior to reaching customers. Sometimes a selected group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
Beta Testing: Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large.
Gamma Testing: Gamma testing is testing of software that has all the required features but has not gone through all the in-house quality checks.
Mutation Testing: A method to determine test thoroughness by measuring the extent to which the test cases can discriminate the program from slight variants (mutants) of the program.
Independent Verification and Validation (IV&V): The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software.
Pilot Testing: Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Typically involves many users, is conducted over a short period of time and is tightly controlled. (See beta testing)
Parallel/Audit Testing: Testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.
Glass Box/Open Box Testing: Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.
Closed Box Testing: Closed box testing is the same as black box testing. It is a type of testing that considers only the functionality of the application.
Bottom-up Testing: Bottom-up testing is a technique for integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.
Smoke Testing: A quick, non-exhaustive test of a new build's basic functionality, conducted to determine whether the build is stable enough for further testing.
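
To tie the unit testing level described earlier to something concrete, here is a minimal sketch using Python's standard unittest module; the add function is just a placeholder for the component under test.

    import unittest

    def add(a: int, b: int) -> int:
        # Stand-in for the single program/component being verified.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_handles_negatives(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()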

All about Bugs



Bug Life Cycle
The Bug Life Cycle starts when an unintended software behavior is found and ends when the fix is verified and the bug is closed. A bug, when found, should be communicated and assigned to a developer who can fix it. Once fixed, the problem area should be re-tested, and it should also be confirmed that the fix did not create problems elsewhere. In most cases the life cycle gets very complicated and difficult to track, making it imperative to have a bug/defect tracking system in place.
Following are the different phases of a Bug Life Cycle:
Open: A bug is in Open state when a tester identifies a problem area
Accepted: The bug is then assigned to a developer for a fix. The developer accepts it if it is valid.
Not Accepted/Won't fix: If the developer considers the bug to be low level or does not accept it as a bug, it is pushed into the Not Accepted/Won't fix state.
Such bugs are assigned to the project manager, who decides whether the bug needs a fix. If it does, the manager assigns it back to the developer; if it doesn't, it is assigned back to the tester, who closes the bug.
Pending: A bug accepted by the developer may not be fixed immediately. In such cases, it can be put under Pending state.
Fixed: The developer fixes the bug and resolves it as Fixed.
Close: The fixed bug is assigned back to the tester, who verifies the fix and puts it in the Close state.
Re-Open: Fixed bugs can be re-opened by the testers if the fix produces problems elsewhere.
In bug tracking tools (BTT), these states and the transitions between them are typically presented as a bug life cycle diagram.
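
As an illustration only, the Python sketch below encodes the states above as an allowed-transition table; the exact transitions are an assumption, since real bug tracking tools define their own workflows.

    ALLOWED = {
        "Open":         {"Accepted", "Not Accepted"},
        "Not Accepted": {"Accepted", "Closed"},       # project manager decides
        "Accepted":     {"Pending", "Fixed"},
        "Pending":      {"Fixed"},
        "Fixed":        {"Closed", "Re-Open"},        # tester verifies the fix
        "Re-Open":      {"Accepted"},
        "Closed":       set(),
    }

    def move(state: str, new_state: str) -> str:
        # Reject any transition the workflow does not allow.
        if new_state not in ALLOWED[state]:
            raise ValueError(f"illegal transition {state} -> {new_state}")
        return new_state

    # Example: a bug that is fixed, re-opened, fixed again and finally closed.
    state = "Open"
    for step in ["Accepted", "Fixed", "Re-Open", "Accepted", "Fixed", "Closed"]:
        state = move(state, step)
    print("final state:", state)
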
What is a bug? Why do bugs occur?
A software bug may be defined as a coding error that causes an unexpected defect, fault, flaw, or imperfection in a computer program. In other words, if a program does not perform as intended, it is most likely a bug. Bugs exist in software due to unclear or constantly changing requirements, software complexity, programming errors, timelines, errors in bug tracking, communication gaps, documentation errors, deviation from standards, etc.
• Unclear software requirements are due to miscommunication as to what the software should or shouldn't do. On many occasions, the customer may not be completely clear about how the product should ultimately function. This is especially true when the software is developed for a completely new product. Such cases usually lead to a lot of misinterpretations on either or both sides.
• Constantly changing software requirements cause a lot of confusion and pressure both on the development and testing teams. Often, a new feature added or existing feature removed can be linked to the other modules or components in the software. Overlooking such issues causes bugs.
• Also, fixing a bug in one part/component of the software might give rise to another bug in a different or the same component. Lack of foresight in anticipating such issues can cause serious problems and an increase in the bug count. This is one of the major reasons bugs occur, since developers are very often under pressure related to timelines, frequently changing requirements, an increasing number of bugs, etc.
• Designing and re-designing, UI interfaces, integration of modules and database management all add to the complexity of the software and the system as a whole.
• Fundamental problems with software design and architecture can cause problems in programming. Developed software is prone to error, as programmers can make mistakes too. As a tester you can check for data reference/declaration errors, control flow errors, parameter errors, input/output errors, etc.
• Rescheduling of resources, re-doing or discarding already completed work, and changes in hardware/software requirements can affect the software too. Assigning a new developer to the project midway can cause bugs, especially if proper coding standards have not been followed, code documentation is poor, or knowledge transfer is ineffective. Discarding a portion of the existing code might leave its trail behind in other parts of the software; overlooking or not eliminating such code can cause bugs. Serious bugs can especially occur in larger projects, as it gets tougher to identify the problem area.
• Programmers usually tend to rush as the deadline approaches. This is when most bugs occur, and it is possible to spot bugs of all types and severities during this period.
• The complexity of keeping track of all the bugs can itself cause bugs. This gets harder when a bug has a very complex life cycle, i.e. when the number of times it has been closed, re-opened, not accepted, ignored, etc. keeps increasing.

Cost of fixing bugs
The cost of fixing a bug increases roughly tenfold with each later stage at which it is found. A bug found and fixed during the early stages (the requirements or product specification stage) can be fixed by a brief interaction with the people concerned and may cost next to nothing.
During coding, a swiftly spotted mistake may take only a little effort to fix. During integration testing, it costs the paperwork of a bug report and a formally documented fix, as well as the delay and expense of a re-test.
During system testing it costs even more time and may delay delivery. Finally, during operations it may cause anything from a nuisance to a system failure, possibly with catastrophic consequences in a safety-critical system such as an aircraft or an emergency service.

Software Testing Life Cycle


The Software Testing Life Cycle consists of the following (generic) phases: 1) Planning, 2) Analysis, 3) Design, 4) Construction, 5) Testing Cycles, 6) Final Testing and Implementation and 7) Post Implementation. Each phase in the life cycle is described below with its respective activities.
Planning. Plan the high-level test plan and the QA plan (quality goals); identify reporting procedures, problem classification, acceptance criteria, databases for testing, measurement criteria (defect quantities/severity level and defect origin) and project metrics; and finally begin the schedule for project testing. Also, plan to maintain all test cases (manual or automated) in a database.
Analysis. Involves activities to - develop functional validation based on business requirements (writing test cases based on these details), develop the test case format (time estimates and priority assignments), develop test cycles (matrices and timelines), identify test cases to be automated (if applicable), define areas for stress and performance testing, plan the test cycles required for the project and for regression testing, define procedures for data maintenance (backup, restore, validation), and review documentation.
Design. Activities in the design phase - revise the test plan based on changes, revise test cycle matrices and timelines, verify that the test plan and cases are in a database or repository, continue to write test cases and add new ones based on changes, develop risk assessment criteria, formalize details for stress and performance testing, finalize test cycles (number of test cases per cycle based on time estimates per test case and priority), finalize the test plan, and estimate resources to support development in unit testing.
Construction (Unit Testing Phase). Complete all plans, complete Test Cycle matrices and timelines, complete all test cases (manual), begin Stress and Performance testing, test the automated testing system and fix bugs, (support development in unit testing), run QA acceptance test suite to certify software is ready to turn over to QA.
Test Cycle(s) / Bug Fixes (Re-Testing/System Testing Phase). Run the test cases (front and back end), bug reporting, verification, revise/add test cases as required.
Final Testing and Implementation (Code Freeze Phase). Execution of all front end test cases - manual and automated, execution of all back end test cases - manual and automated, execute all Stress and Performance tests, provide on-going defect tracking metrics, provide on-going complexity and design metrics, update estimates for test cases and test plans, document test cycles, regression testing, and update accordingly.
Post Implementation. A post-implementation evaluation meeting can be conducted to review the entire project. Activities in this phase - prepare the final defect report and associated metrics, identify strategies to prevent similar problems in future projects, and, for the automation team: 1) review test cases to evaluate other cases to be automated for regression testing, 2) clean up automated test cases and variables, and 3) review the process of integrating results from automated testing with results from manual testing.

April 23, 2008

Common Software Errors

Let's discuss a few common errors generally encountered.
Following is a list of the most common software errors; being aware of them helps you identify errors systematically and increases the efficiency and productivity of software testing.
Types of errors with examples
User Interface Errors: Missing/Wrong Functions, Doesn't do what the user expects, Missing information, Misleading or confusing information, Wrong content in Help text, Inappropriate error messages. Performance issues - Poor responsiveness, Can't redirect output, Inappropriate use of the keyboard.
Error Handling: Inadequate - protection against corrupted data, tests of user input, version control; Ignores – overflow, data comparison, Error recovery – aborting errors, recovery from hardware problems.
Boundary related errors: Boundaries in loop, space, time, memory, mishandling of cases outside boundary.
Calculation errors: Bad Logic, Bad Arithmetic, Outdated constants, Calculation errors, Incorrect conversion from one data representation to another, Wrong formula, Incorrect approximation.
Initial and Later states: Failure to set a data item to zero, to initialize a loop-control variable, to re-initialize a pointer, or to clear a string or flag; incorrect initialization.
Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields wrong result, Missing/wrong default, Data Type errors.
Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after an error exit or user abort.
Race Conditions: Assumption that one event or task has finished before another begins, Resource races, A task starts before its prerequisites are met, Messages cross or don't arrive in the order sent.
Load Conditions: Required resources are not available, No available large memory area, Low priority tasks not put off, Doesn't erase old files from mass storage, Doesn't return unused memory.
Hardware: Wrong Device, Device unavailable, Underutilizing device intelligence,
Misunderstood status or return code, Wrong operation or instruction codes.
Source, Version and ID Control: No Title or version ID, Failure to update multiple copies of data or program files.
Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case, Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear how to reproduce the problem, Failure to check for unresolved problems just before release, Failure to verify fixes, Failure to provide summary report.

TERMS USED IN TESTING

Let's get familiar with some testing terms.

Testing Terms
Bug: A software bug may be defined as a coding error that causes an unexpected defect,
fault or flaw. In other words, if a program does not perform as intended, it is most likely a bug.
Error: A mismatch between the program and its specification is an error in the program.
Defect: A defect is the variance from a desired product attribute (it can be wrong, missing or extra data). It can be of two types - a defect in the product or a variance from customer/user expectations. It is a flaw in the software system and has no impact until it affects the user/customer or the operational system. It is often claimed that up to 90% of all defects are caused by process problems.
Failure: A defect that causes an error in operation or negatively impacts a user/ customer.
Quality Assurance: Oriented towards preventing defects. Quality assurance ensures that all parties concerned with the project adhere to the process and procedures, standards and templates, and test readiness reviews.
Quality Control: Quality control or quality engineering is a set of measures taken to ensure that defective products or services are not produced, and that the design meets performance requirements.
Verification: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
Validation: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.