Practical aspects of cross-platform automated mobile testing

Mobile development is no longer the frontier of information technology. The device that transformed our understanding of the term "smartphone" was released about 9 years ago. Since then we have gained several mobile operating systems, a number of development tools, tons of documentation, and at least one new programming language. All of these are intended to simplify the development process.

I would like to share one possible solution for the automated testing of native mobile applications.


Let's start by specifying the key requirements for the testing process:

  1. Business logic testing must be performed at the API level, or at the lowest level possible. There are important reasons for this:

    • Both the iOS and Android applications use REST API services as a backend. This means that a single API test increases test coverage for two applications.

    • Atomic API tests require much less implementation effort compared to UI scenarios.

    • Execution times differ significantly as well. A single API test can run within a second, while a UI scenario requires several minutes to initialize a virtual machine, start a simulator, and install and launch the application on it.

    • The data-driven testing approach, where a single scenario is executed over and over again with different parameters, can be easily implemented for API tests.

  2. Testing scenarios must be universal for all platforms (iOS and Android at the moment).

  3. Automated tests must be part of the Continuous Integration process.

  4. Tests must support multi-threaded execution, both across different devices (or OS versions) and for a set of tests running on the same device (same OS version).
    Since maintaining our own device farm would be quite complex, we decided to use on-demand virtual machines with mobile simulators.

As for the technology stack, our team decided to use the Continuous Integration server TeamCity, which runs Appium tests via SpecFlow+ Runner on a remote Sauce Labs virtual machine farm.

API tests

The least challenging part of the solution is the API tests. They are implemented in the following way: for each request type we create a scenario which consists of three parts:

  • A request template definition filled with default values.

  • A template transformation which overrides some of the default values, sends the request, and receives the response.

  • Response validation.

A scenario example in the Gherkin language used in SpecFlow:

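A minimal illustrative Scenario Outline of this kind (the request type, step wording, and field values are hypothetical):

```gherkin
Scenario Outline: Create a user via the API
  Given I have a 'CreateUser' request template with default values
  When I override the field 'email' with '<email>' and send the request
  Then the response status code should be <status>

  Examples:
    | email             | status |
    | valid@example.com | 201    |
    | not-an-email      | 400    |
```

Each row of the 'Examples' table becomes a separate test run, which is what makes the data-driven approach so cheap to extend.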

This approach allows us to add new tests simply by adding new rows to the 'Examples' section.

UI tests

Mobile UI tests have a similar structure. They consist of the following components:

  1. Scenario in Gherkin language.

  2. Classes which implement each of the scenario steps in C#.

  3. Classes which implement the 'page object' pattern and describe each application screen with its elements and interaction methods.

The step classes (#2) use methods of the page object classes (#3) to implement complex actions (e.g. fill in a form and submit it). The page object classes, in turn, implement atomic interactions (e.g. enter a value into a field or tap a button).

This approach makes our classes quite universal, because the steps and page objects are common to iOS, Android, and any other possible platform. Adapting the interaction methods can be done inside the page object class, based on the current configuration. The page object itself knows whether a value should be reached by swiping a picker or selected from a list by tapping. Thanks to this, whenever we have a test scenario implemented for one platform, we can easily add it for another; usually this only requires adding new element locators.
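A sketch of such a platform-aware page object, assuming the Appium .NET client (the class, method, and identifier names are hypothetical):

```csharp
// Hypothetical page object for a screen with a date selector.
// The same step classes call SelectDate() on every platform;
// the page object decides how the interaction is actually performed.
public class DateScreen
{
    private readonly AppiumDriver<AppiumWebElement> driver;
    private readonly bool isIOS;

    public DateScreen(AppiumDriver<AppiumWebElement> driver, bool isIOS)
    {
        this.driver = driver;
        this.isIOS = isIOS;
    }

    public void SelectDate(string date)
    {
        if (isIOS)
        {
            // iOS: set the value on the wheel picker directly.
            driver.FindElement(MobileBy.AccessibilityId("date_picker")).SendKeys(date);
        }
        else
        {
            // Android: tap the matching item in a list.
            driver.FindElement(MobileBy.AccessibilityId(date)).Click();
        }
    }
}
```

Adding a new platform then mostly comes down to supplying new locators and, where needed, a new branch in the interaction methods.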

Parallel execution

As you add new tests, you will find yourself looking for ways to shorten test execution time. Fortunately, SpecFlow allows multi-threaded execution out of the box. Moreover, a single test scenario can be executed several times with different configurations. We use this to run the full set of tests for every supported OS version.
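With SpecFlow+ Runner, the thread count and per-configuration targets live in the .srprofile file; a rough sketch (the target names and filters are illustrative, and attribute details may vary between runner versions):

```xml
<?xml version="1.0" encoding="utf-8"?>
<TestProfile xmlns="http://www.specflow.org/plus/runner/">
  <!-- run up to four scenarios in parallel -->
  <Execution testThreadCount="4" />
  <!-- each target re-runs the selected tests with its own configuration,
       e.g. one target per supported OS version -->
  <Targets>
    <Target name="iOS">
      <Filter>ios</Filter>
    </Target>
    <Target name="Android">
      <Filter>android</Filter>
    </Target>
  </Targets>
</TestProfile>
```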

What can go wrong?

All of this looks pretty much like common web testing; however, there are some points I would like to mention, because they can make your life harder:

  1. Mobile testing is usually more time-consuming, due to additional initialization procedures, so there is no sense in trying to cover everything possible with UI tests. It is usually a good idea to improve coverage with low-level tests (API and unit tests) wherever possible.

  2. The screen element hierarchy is more homogeneous than on web pages: you often have several elements of the same class which are hard to tell apart. To avoid searching for elements by their indexes, you can ask the development team to add unique IDs to all elements during development.

  3. Adding custom elements may seem like a good idea, but very likely you will end up realizing that the testing tools cannot interact with them correctly.
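The unique-ID suggestion above pays off immediately in the locators; a short sketch using the Appium .NET client (the identifier is hypothetical):

```csharp
// Locate the element by the unique accessibility id the developers assigned,
// instead of by a fragile index among same-class siblings.
var submitButton = driver.FindElement(MobileBy.AccessibilityId("checkout_submit"));
submitButton.Click();
```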

I hope you found this article useful and that you will appreciate the practical benefits of this solution.

Pavel Furs, Lead Software Test Automation Engineer, SolbegSoft