Automating the Test Execution
Various automation approaches and frameworks exist that can be used in mobile application testing.
The choice of approach will partly be determined by the type of application.
Two common test automation approaches are:
- User-agent based testing
- Device-based testing
User-agent based testing utilizes the user-agent identifier string sent by the browser to spoof a particular browser on a particular device. This approach can be used for testing mobile web applications. Device-based testing, on the other hand, involves running the application under test directly on the device. This approach can be used for all types of mobile applications.
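As a minimal sketch of the user-agent approach (the user-agent string and URL below are illustrative assumptions, not tied to any particular tool), a desktop HTTP client can present itself as a mobile browser simply by overriding the User-Agent header:

```python
from urllib.request import Request

# Illustrative iPhone Safari user-agent string (an assumption for this sketch).
IPHONE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) "
             "Version/17.0 Mobile/15E148 Safari/604.1")

def mobile_request(url: str) -> Request:
    """Build a request that spoofs a mobile browser via the User-Agent header."""
    return Request(url, headers={"User-Agent": IPHONE_UA})

req = mobile_request("https://example.com/")
# The server now sees the spoofed mobile identity instead of the real client.
print(req.get_header("User-agent"))
```

A web application that adapts its content to the reported device can then be exercised from a desktop test environment.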
The application type can also determine the test automation framework that would be suitable for that application. Mobile web can be tested using the usual web application automation tools on the desktop, whilst native apps might need specific tools. Platform providers may also provide automation tools dedicated for the platform.
Automation approaches used for conventional applications are often applicable to mobile applications as well. These include capture/playback, data-driven, keyword-driven and behavior-driven testing.
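For instance, the data-driven approach separates the test procedure from the test data it runs over. A minimal sketch, where the login check and the case table are hypothetical stand-ins for driving a real app:

```python
# Data-driven sketch: one test procedure runs against a table of test data.
# The cases and the check_login stand-in are hypothetical.
LOGIN_CASES = [
    ("alice", "secret1", True),   # valid credentials -> login succeeds
    ("bob",   "",        False),  # empty password    -> login rejected
]

def check_login(user: str, password: str) -> bool:
    # Placeholder for the steps that would drive the app under test.
    return bool(user) and bool(password)

results = [check_login(user, pw) == expected for user, pw, expected in LOGIN_CASES]
print(all(results))  # True when every data row behaves as expected
```

Adding a new test case then means adding a data row, not writing new test logic.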
Key capabilities that a mobile application testing framework should typically include are:
- Object identification
- Object operations
- Test reports
- Application programming interfaces and extendable capabilities
- Adequate documentation
- Integrations with other tools
- Independence from test development practices
To develop automated tests, the tester needs to understand the automation script recording or creation mechanism, and how to access and interact with the application’s graphical objects such as buttons, list boxes, and input fields.
Several methods exist for identifying graphical objects in mobile test automation. These include image recognition, OCR/text recognition, and object recognition (web or native, depending on the app type).
A Mobile Application Tester needs not only to practice graphical object detection and identification, but also to understand which object identification method is best suited to running tests successfully on a wide variety of mobile devices, in parallel and continuously.
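To keep object identification maintainable across devices, scripts often isolate locators behind logical names. A minimal sketch, in which all strategies and identifier values are hypothetical:

```python
# Object-repository sketch: logical names map to (strategy, value) locators,
# so a changed identifier is updated in one place rather than in every script.
# All entries below are hypothetical examples.
LOCATORS = {
    "login_button":   ("accessibility_id", "btn_login"),
    "username_field": ("xpath", "//input[@name='username']"),
}

def locator(name: str) -> tuple[str, str]:
    """Resolve a logical object name to its identification strategy and value."""
    return LOCATORS[name]

strategy, value = locator("login_button")
print(strategy, value)
```

A real automation framework would pass the resolved strategy and value to its element-finding API; the point of the pattern is that test scripts never hard-code raw identifiers.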
Key differences between the script creation methods are:

| Items of comparison | Object Identification | Image/OCR Comparison |
| --- | --- | --- |
| Reliability | As long as the identifier is constant, the screen layout can be changed. The risk is that objects can be identified and interacted with in the code while being hidden from the user, which may lead to false negative test results. | Images can be scaled according to screen size, but tests will fail as soon as the layout changes. |
| User experience | Usually manual scripting is required, at least to improve recorded scripts for readability and maintainability. | Full GUI-based testing without the need for scripting. |
| Execution speed | Tends to be faster than Image/OCR comparison, especially when using native tools provided by the system manufacturer. | Tends to be slower due to the need to compare the screen pixel by pixel with a baseline image. |
| Maintenance | Depends on the quality of the test scripts. | Mainly consists of providing changed baseline images. |
| Authoring challenge | Knowledge of the scripting language and of software design methods is required to build a sustainable automation solution. | Generation of baseline images, especially when the app changes often. |
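The speed and fragility trade-offs of image comparison follow from its pixel-by-pixel nature. A minimal sketch, where images are represented as nested lists of RGB tuples rather than the captured bitmaps a real tool would use:

```python
def images_match(baseline, actual, tolerance=0):
    """Compare two images pixel by pixel against a baseline.

    Images are nested lists of (R, G, B) tuples; a per-channel difference
    above `tolerance` on any pixel fails the comparison.
    """
    if len(baseline) != len(actual):
        return False  # screen size or layout changed
    for row_b, row_a in zip(baseline, actual):
        if len(row_b) != len(row_a):
            return False
        for pixel_b, pixel_a in zip(row_b, row_a):
            if any(abs(b - a) > tolerance for b, a in zip(pixel_b, pixel_a)):
                return False
    return True

baseline = [[(255, 255, 255), (0, 0, 0)]]
same     = [[(255, 255, 255), (0, 0, 0)]]
shifted  = [[(0, 0, 0), (255, 255, 255)]]  # layout change: pixels swapped

print(images_match(baseline, same))     # True
print(images_match(baseline, shifted))  # False
```

Every pixel must be visited, which explains the slower execution speed, and any layout shift moves pixels away from the baseline, which explains the reliability risk noted in the table above.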
Automation Tools Evaluation
To be successful in creating test automation solutions, test automation teams need to choose an appropriate set of tools. This means understanding the key differences between the available tools and evaluating their suitability against the project requirements.
The evaluation parameters for test automation tools can be broken into two categories:
- Organizational fit
- Technical fit
Technical fit parameters include the following:
- Test automation requirements and complexities such as the use of new features like FaceID, fingerprint and chatbots by the app.
- Test environment requirements, such as varying network conditions, importing or creating test data, and server-side virtualization.
- Test reporting and feedback loop capabilities.
- The ability of the framework to manage and drive execution on a large scale either locally or in a test lab in the cloud.
- Integration of the test framework with other tools used in the organization.
- Support and documentation availability for current and future upgrades.
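Driving execution at scale typically means fanning the same tests out across a device pool. A minimal sketch using thread-based parallelism, where the device names and the test stub are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical device pool; in a real lab each name would address a physical
# or cloud-hosted device.
DEVICES = ["pixel-8", "iphone-15", "galaxy-s24"]

def run_smoke_test(device: str) -> tuple[str, str]:
    """Stand-in for dispatching one test session to one device."""
    return device, "passed"

# Run one session per device in parallel and collect a verdict per device.
with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
    verdicts = dict(pool.map(run_smoke_test, DEVICES))

print(verdicts)
```

A framework that can manage such fan-out, collect the per-device results, and feed them back into reporting is what the scale-related parameters above are probing for.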
Approaches for setting up an Automation Test Lab
When performing mobile application testing, developers and testers can choose the type of device test lab against which to target their test automation:
- On-premise device test lab
- Remote device test lab
Various combinations of these approaches can be applied.
On-premise device test labs are generally difficult and time consuming to maintain. Having devices locally in parallel with emulators and simulators would best serve the early development and testing phases of the mobile app.
When the app reaches a more advanced stage of development, teams need to perform full regression tests, functional tests, and non-functional tests. These tests are best executed on a full device lab. This is where a remote device test lab comes in: one that is managed, continuously updated, and maintained in the cloud.
Such remote device test labs complement an on-premise device test lab and ensure that sufficient combinations of device and operating system are available and up to date. By making use of commonly available remote device test labs, teams get access to a larger set of supported capabilities including richer test reports and advanced test automation capabilities.
Lastly, when executing at scale through a test automation framework or through a continuous integration job (CI), stability of the overall test lab is key for test efficiency and reliability. Such labs are typically designed to ensure that devices and operating systems are always available and stable.
Remote device test labs are not always necessary in the later development stages of the app. A well-designed and well-maintained on-premise device test lab can be as good as, or better than, any remote device test lab.