Testing Strategy
1.0 Introduction
The goal of testing is to deliver a high-quality product that meets business requirements in the most efficient way possible. To do so, resources are allocated toward developing a stable and reliable automated test suite. This allows more time to be spent fixing bugs or performing exploratory testing, and less time spent executing repetitive test cases manually.
The purpose of this document is to define the proposed test plan for the pivot web application. This involves ensuring pivot meets all design, functional, and performance benchmarks.
2.0 Scope
This test strategy covers all aspects of pivot, with an emphasis on the main features of the application. It accounts for both the client side and server side of the application.
3.0 Testing Overview
3.1 Unit Testing
Definition: Unit testing is the level at which the individual functions in the code are tested, in order to validate the business logic of the application. These tests are implemented by the developer and are intended to test small pieces of the codebase in isolation, ensuring functions return the expected responses when given different parameters or values.
Implementation: Unit tests are written as part of the codebase and are quick to execute. Ideally, they are run before pushing new code changes to catch errors locally before committing the changes; a minimal sketch follows the list below.
- These tests will be written and executed by Software Developers.
- Run when new code is committed.
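For illustration, the sketch below shows what such a test might look like, assuming a Jest-style runner written in TypeScript and a hypothetical calculateTotal helper; the actual pivot functions and test framework may differ.

// A unit-test sketch, assuming Jest. calculateTotal is an illustrative pure
// function, not an actual pivot module.
function calculateTotal(prices: number[], taxRate: number): number {
  const subtotal = prices.reduce((sum, price) => sum + price, 0);
  return subtotal * (1 + taxRate);
}

describe('calculateTotal', () => {
  it('applies the tax rate to the sum of the prices', () => {
    expect(calculateTotal([10, 20], 0.1)).toBeCloseTo(33);
  });

  it('returns 0 for an empty list of prices', () => {
    expect(calculateTotal([], 0.1)).toBe(0);
  });
});

Because the function is tested in isolation, the test needs no browser, server, or database, which is what keeps unit tests fast enough to run on every commit.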
3.2 Functional Testing
Definition: Functional testing focuses on the application from a user’s perspective. It ensures that the user experience is as intended, and that bugs or errors are caught before being released to the user. Functional testing covers the main features and the main paths a typical user would follow, as well as edge cases that not every user may hit but that may still be encountered. Functional testing also ensures that if the application does fail, it at least fails gracefully.
Implementation: Functional tests are typically considered black-box tests. They are performed on the application from the user’s perspective, meaning the focus is not on how or why something works, but on the result. For example, clicking a “save” button saves the user’s parameters. Functional testing for this feature does not account for how the request is made or where the data is stored. Similarly, negative testing for this save button could involve trying to save parameters of the wrong data type and validating that an error message is shown to the user, and nothing deeper.
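The sketch below illustrates such a black-box test of the save flow, assuming Cypress as the test runner; the page route, selectors, and messages are hypothetical placeholders rather than the real pivot UI.

// A functional-test sketch, assuming Cypress. Selectors, routes, and messages
// are hypothetical.
describe('Save parameters', () => {
  it('saves valid parameters and confirms to the user', () => {
    cy.visit('/settings');
    cy.get('[data-test=threshold-input]').clear().type('42');
    cy.get('[data-test=save-button]').click();
    cy.contains('Parameters saved').should('be.visible');
  });

  it('shows an error message for an invalid value (negative test)', () => {
    cy.visit('/settings');
    cy.get('[data-test=threshold-input]').clear().type('not-a-number');
    cy.get('[data-test=save-button]').click();
    cy.contains('must be a number').should('be.visible');
  });
});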
3.2.1 Design Verification
Definition: Design verification is a subsection of functional testing that validates the product looks and works as it was designed, assuring that the product follows the intended user interface and user flow.
Implementation: Design verification can be compared to a health check, where the product is tested to make sure it looks and feels right. To do this, pivot undergoes a design verification at the end of each sprint, specifically checking that design bugs have been fixed and that new features match the mock-ups.
- Executed by UI/UX Design.
- Run when new designs are implemented in the sprint.
3.2.2 UI Testing
Ensure that the software provides a consistent user experience and that no visual or graphical elements on the screen are broken.
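As an illustration, the sketch below checks that the page header renders and that no images are broken, assuming Cypress; the route and selectors are hypothetical.

// A UI-testing sketch, assuming Cypress; the route and selectors are hypothetical.
describe('UI health', () => {
  it('renders the header and loads every image', () => {
    cy.visit('/');
    cy.get('header').should('be.visible');
    cy.get('img').each(($img) => {
      // A naturalWidth of 0 indicates the image failed to load.
      expect(($img[0] as HTMLImageElement).naturalWidth).to.be.greaterThan(0);
    });
  });
});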
3.3 Sanity Testing
Sanity tests are exploratory tests for new features. When a new feature is added to the application, high-level sanity testing is executed to make sure that it works as expected; for example, a quick test on a new build to validate that a bug has been fixed. For this reason, sanity tests are typically not documented until they are completed, after which a formal test case can be added to the regression testing suite.
Verification tests for bug fixes are generally completed at the end of each sprint, but any changes are also reviewed and tested during PR reviews.
3.4 Integration Testing
Definition: Integration testing checks how well the different units of an application work with each other, to see if they work as a system.
Implementation: A typical example of integration testing would be to perform an action using the UI, like uploading a file, and then verifying that the file is saved in the database. Integration testing could also be done using APIs rather than the UI, as long as they are the same APIs the application itself uses.
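The sketch below illustrates this API-then-database pattern, assuming Jest-style globals, Node 18+ fetch, a hypothetical /api/files endpoint, and a PostgreSQL test database queried through the pg client; the real pivot endpoints and schema will differ.

// An integration-test sketch: call the API, then verify the database record.
// The endpoint, table, and environment variable are hypothetical.
import { Client } from 'pg';

it('uploading a file through the API persists a record in the database', async () => {
  const form = new FormData();
  form.append('file', new Blob(['hello']), 'hello.txt');

  // Call the same API the UI would call.
  const res = await fetch('http://localhost:3000/api/files', { method: 'POST', body: form });
  expect(res.status).toBe(201);

  // Verify the record landed in the test database.
  const db = new Client({ connectionString: process.env.TEST_DATABASE_URL });
  await db.connect();
  const result = await db.query('SELECT name FROM files WHERE name = $1', ['hello.txt']);
  expect(result.rowCount).toBe(1);
  await db.end();
});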
3.4.1 Unit Testing vs Integration Testing
Unit tests are good for testing at the codebase level, and UI tests are good for testing from a user’s perspective, but it is also important to make sure that these two components function correctly when integrated together.
3.4.2 Functional vs Integration Testing
Integration testing and functional testing are related in the sense that they both involve testing the main functions of the application at more of an end-to-end level, rather than testing the business logic as unit testing does. For pivot, functional tests are intended to be UI tests, ensuring that pivot meets all design benchmarks. Integration tests are more focused on the backend/server side, such as API testing and database validation. For this reason, automated tests will be integration tests that also verify the functionality of the application.
- Written and executed by: Quality Assurance.
- Run nightly (smoke test) or before a release (regression test).
3.5 Regression Testing
Definition: Verify that the new code didn't break any existing functionality.
The developers are constantly adding new features and functions, fixing bugs, and so on. There is a chance that this new code might break existing functionality that was previously working.
Users dislike using a product that is broken after they download and install a new release. They expect a consistent and reliable experience from the software, no matter which version they are using. They also expect that previously working features will keep on working and won't break in the future.
Regression testing is a testing technique where a tester makes sure that the new features didn't break any existing functionality. Its goal is to ensure that previously developed and tested functionality still works after adding new code. When a tester performs the regression testing automatically using testing frameworks and tools, it's known as automated regression testing.
In automated regression testing, a tester runs the suite of regression tests after each new release of the software. If the tests pass, the tester continues with other types of testing. However, if any test fails, there is no point in proceeding with further tests until the developers fix what broke. Hence, regression tests also act as a time-saver for the tester and ensure quality in the software before shipping it.
3.6 Smoke Testing
Smoke tests are a subset of the regression testing suite. The smoke tests for pivot can be found by filtering on the smoke tag in the Automation Repo. These tests are intended to catch the most critical bugs as soon as possible, without someone having to test the application manually every single day.
The smoke tests validate the critical functionality and features of pivot and should pass at all times. When a new feature is added to pivot, it will be discussed with the PM whether the feature is considered critical. If the feature is critical, test cases are added to the smoke testing suite.
Ideally, these tests run nightly through continuous integration and send notifications if any of the tests fail. This is one of the first goals when implementing CircleCI in the automation repo.
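For illustration, the sketch below shows one possible tagging convention, assuming a Jest-style runner where a "@smoke" marker in the test title lets CI select only those cases (for example with jest -t "@smoke"); the actual convention used in the Automation Repo may differ.

// A smoke-test tagging sketch: "@smoke" in the title lets the nightly CI job
// filter the suite, e.g. `jest -t "@smoke"`. The endpoint and credentials are
// hypothetical placeholders.
describe('Login @smoke', () => {
  it('lets a valid user authenticate', async () => {
    const res = await fetch('http://localhost:3000/api/login', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ user: 'qa-user', password: 'secret' }),
    });
    expect(res.status).toBe(200);
  });
});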
3.7 Performance Testing
Ensure that the software won't crash and will perform reasonably under heavy load or stringent conditions.
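For illustration, the sketch below runs a very small concurrent load check using Node 18+ fetch against a hypothetical endpoint; dedicated tools such as JMeter or k6 would be used for realistic load profiles.

// A minimal load-check sketch: fire 50 concurrent requests and report the
// average latency. The URL and the request count are hypothetical placeholders.
async function timedRequest(url: string): Promise<number> {
  const start = Date.now();
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return Date.now() - start;
}

async function main() {
  const url = 'http://localhost:3000/api/health';
  const latencies = await Promise.all(Array.from({ length: 50 }, () => timedRequest(url)));
  const average = latencies.reduce((sum, ms) => sum + ms, 0) / latencies.length;
  console.log(`average latency over ${latencies.length} requests: ${average.toFixed(1)} ms`);
}

main();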
3.8 Browser Compatibility Test
3.8.1 Browser Automation
Browser automation is the technique of programmatically launching a web application in a browser and automatically executing various actions, just as a regular user would. Browser automation provides a speed and efficiency that would be impossible for a human tester. Protractor, Cypress, and Selenium are some of the popular tools used in browser testing.
Some of the activities performed in browser automation are as follows (a minimal sketch follows the list):
- Navigate to the application URL and make sure it launches
- Test the various links on the web page and ensure they are not broken.
- Keep a record of the broken links on the page.
- Perform load and performance testing on our web application.
- Launch multiple instances of the browsers with different test users and ensure that concurrent actions work as expected.
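The sketch below shows a minimal browser-automation check using the selenium-webdriver package; the application URL, the expected title, and the link count check are hypothetical placeholders rather than the actual pivot suite.

// A browser-automation sketch, assuming the selenium-webdriver package with a
// local Chrome driver. The URL and expected title are hypothetical.
import { Builder, By, until } from 'selenium-webdriver';

async function checkHomePage(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://localhost:3000');              // navigate to the application URL
    await driver.wait(until.titleContains('pivot'), 5000);  // make sure it launches
    const links = await driver.findElements(By.css('a'));   // collect links for a broken-link check
    console.log(`found ${links.length} links on the home page`);
  } finally {
    await driver.quit();
  }
}

checkHomePage();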
3.8.2 Cross-browser Testing
Cross-browser testing is a type of browser automation testing where the tester verifies if the web application will work smoothly on different browsers. Some of the popular browsers include Google Chrome, Mozilla Firefox, Internet Explorer, Safari, etc.
With web applications, we can't guarantee which browsers, platforms, or devices our users might use to access our software. Some users could be using Google Chrome on their Android phones, some might use Firefox on a Windows desktop machine, and others could use Safari on their MacBooks. Hence, it's crucial to test the web application or website on multiple major browsers running on different operating systems.
Cross-browser testing means launching the application on various browsers running on different operating systems, e.g. Windows, macOS, Linux, etc., and verifying that the application works as expected. The tester looks for design/rendering issues, checks the functionality of the application and device-specific behavior, and ensures that the web application works as expected on different versions of popular browsers across multiple platforms and devices. This ensures that users get the same experience and features irrespective of which browser they use. It helps reach a wide range of users and allows users to switch browsers and devices while still getting the same user experience, increasing customer satisfaction and building a loyal user base.
Though it can be tedious to perform manually, sophisticated tools exist that allow testers to automate cross-browser testing. Some examples include Selenium Box, BrowserStack, Browsershots, LambdaTest, etc.
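As a sketch of how the same check could be repeated across browsers with selenium-webdriver (assuming local drivers or a remote Selenium grid are available for each browser named):

// A cross-browser sketch: run the same title check against several browsers.
// The browser list and application URL are hypothetical placeholders.
import { Builder } from 'selenium-webdriver';

async function titleCheck(browser: string): Promise<void> {
  const driver = await new Builder().forBrowser(browser).build();
  try {
    await driver.get('http://localhost:3000');
    console.log(`${browser}: title is "${await driver.getTitle()}"`);
  } finally {
    await driver.quit();
  }
}

(async () => {
  for (const browser of ['chrome', 'firefox', 'safari']) {
    await titleCheck(browser);
  }
})();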
3.9 Security Testing
Definition: Security testing verifies that the application protects its data and functionality from unauthorized access. In pivot platform development, security testing involves ensuring confidential data is secure and/or encrypted, and that the system cannot be easily hacked.
- User Password and Security Key in Integrations are encrypted. More details can be found on the Confluence page Security.
- PEN testing is done by a QA Automation Engineer quarterly, as described on the Confluence page PEN (Penetration) Testing.
4.0 Manual Testing vs Automated Testing
The difference between manual testing and automated testing is that manual testing relies on a human tester executing test steps and judging the results by hand, while automated testing uses scripts, frameworks, and tools to execute those steps and check the results. A test is a good candidate for automation under the following conditions:
- The test is repeatable.
- The feature under test doesn’t change its behavior frequently.
- It’s time-consuming for a human tester.
- The test involves complicated calculations.
- The test ensures the previous functionality didn’t break after a new change.
5.0 Testing Environments
A test environment is a computer, a server, or an environment on which a tester can test the software. After the team builds the software, the tester installs it on this computer with all its dependencies, just like the production environment. This allows the tester to test the software in a real-world scenario.
A test environment enables the tester to create reliable test setups that are identical whenever a new version of the software is released. The test environment includes the test bed, which is the test data with which the tester will test the software. This data helps the tester verify test cases that need a particular setup.
Typically, the test environment is an identical copy of the production environment. Having a duplicate copy allows the tester to reliably reproduce the bugs reported by the customers and provide the exact steps to the developers to fix them.
Here are some prerequisites for a good test environment:
- A server with a similar configuration, including software and hardware, to match the production environment.
- Sample test data with which to test the software.
- A test database with reasonably realistic data; it can be a copy of an actual production database.
- The installed software under test.
6.0 Best Practices in Test Automation
Here are some of the best practices the software development and testing teams will use to ensure quality software.
- Decide what to automate: It’s not possible or practical to automate certain tests, such as usability, accessibility, exploratory testing, or non-repetitive test cases that frequently change.
- Assign test cases based on skill and experience: When dividing test cases, take into account the skills and experience of the tester and the complexity and severity of the feature under test.
- Remove uncertainty: The whole goal of test automation is to have reliable, accurate, consistent tests that provide helpful feedback to the tester. If tests fail due to bugs in the tests themselves, or they produce false positives, the ROI of test automation starts to decrease.
- Choose the right frameworks and tools: There are a lot of tools for automation testing. Picking the wrong tool for the test at hand will waste time and provide false confidence to release software that may fail in production.
- Keep test records in a bug database: Using a bug database is a best practice whether a team uses test automation or not. Whenever new bugs are found by the automation tool or by the testers, they will be recorded in a bug tracking tool with the exact steps to reproduce them and other details.