Our Approach to Testing and QA

As a rule, testing spans all stages of the software lifecycle and requires careful planning and diligent execution. It starts with a test plan: a document that identifies the high-level project information, the software components to be tested, and the testing environment. It also describes the project's testing strategy and identifies QA resources, effort, schedule, and cost. Test cases are then developed, with specific steps to exercise the features of the software. This discipline ensures that high-quality software is delivered to your customers cost-effectively.

The more testing is packed into the early development stages, the better the payoff: defects found early are far cheaper to fix than defects found late.

Testing should start once builds (early releases of the software) are made available. Key areas of testing typically include:

  • User interface testing
  • Database testing
  • Security and authorization testing
  • Performance testing
  • Stress testing
  • Fault-tolerance and fail-over testing
  • Compatibility and configuration testing
  • Installation testing

Different types of testing discover different categories of errors.

Our QA engineers combine two types of testing:

  • Testing to spec (black-box testing). Testers assume no knowledge of the software's internals. They test functionality purely on the basis of the requirements and specifications, i.e., the intended behavior. The software is given expected input, and the results are analyzed for errors.
  • Testing to code (white-box testing). This type of testing requires close interaction between our developers and QA engineers. It employs knowledge of the internal program logic to determine the input required to exercise all execution paths in the software.

Testing to code is assisted by code coverage tools, such as Rational PureCoverage. These tools report which parts of the source code were executed and how extensively. Code coverage figures are also an integral part of test results and are indispensable in assessing the readiness of software modules.

Test Automation for Regression Testing

There are many builds and releases throughout a project's lifecycle. Each new release introduces the possibility that features which previously worked no longer work properly. This problem is addressed by regression testing: the re-execution of previous test cases, with the same data, on every new release of the software. In simple terms, what worked in the previous release should NOT be broken in the new one.

As the number of builds increases, automating regression testing becomes an important task. Automation achieves:

  • Higher software quality
  • Increased productivity
  • Fewer recurring defects
  • Shorter development cycle

Automated testing is implemented using tools such as QTP, Rational Functional Tester, SilkTest, or TestComplete for GUI and Web-based interfaces. Various programmatic tools and scripts are used to test internal modules such as database layers, business logic, and other server-side components.