Automation testing with Appium for React Native and Flutter
Building apps, creating easy-to-use interfaces, and watching people solve problems and make their lives easier with them is genuinely rewarding! And I am pretty sure that is true not only for Looming Tech engineers but for everyone. However, as you add more functionality and complexity, simply making sure your product works as intended can become painful and frustrating, and ultimately generate more waste (slower releases) or degrade the quality of what is built.
Before an app can be released to customers, you need to be sure it will work smoothly and fulfill the client's requirements. That is why you need to test it thoroughly. End-to-end tests in particular provide an automated way of testing from an end user's perspective, making them a great choice for anyone who wants to take their testing to the next level.
In this article we are going to look at how we at Looming Tech test mobile apps built with React Native and Flutter, and showcase our approach to testing the applications we build. We will also explain how we think about what matters in a test framework, so you can weigh the same criteria for your own needs.
Flutter and React Native - why are we focused on these two?
Since we are required to build apps for both iOS and Android, the projects built by Looming Tech use Flutter and React Native to reduce friction and increase development productivity and velocity. This way we can leverage a single codebase for the whole application and cover both iOS and Android.
Both of these frameworks also have a wide user base and extensive open source community support. In addition, different teams inside our company have cultivated knowledge of these frameworks and can help each other whenever a new app emerges and a project needs to be kickstarted.
Automated E2E testing tool
Moving on to the framework used for our mobile E2E automation tests. To choose the best one for our needs, and to keep it somewhat of a standard across our different projects, we had to analyze the current options on the market and commit to one of them. What criteria did we use to make the decision?
Which platforms does the tool support?
Ideally we wanted support for both iOS and Android, so we could write cross-platform tests and use the same codebase - similar to how the app itself is developed. Frameworks built for a single platform, like Robotium and Espresso (Android) or XCUITest (iOS), moved down our list because of this.
Which programming languages are supported?
Here we had to weigh our team's current knowledge and language preferences against what the different tools support. Is it worth investing time in learning a new technology, or should we use something the team is most confident in? Here we had two extremes - Appium, which works with any language that has a Selenium/WebDriver client (the most widely used languages do), and Calabash (Ruby only).
Ease of use and how active the community is. Can it be used with cloud device farms?
We also factored in the ease of use of each tool's API and the configuration effort required, as well as how active the community is: is it open source? Are there many maintainers? Does it support running tests on cloud device farm solutions, and which ones? How easy is it to integrate the tool into a CI/CD pipeline?
Is the tool easily extended, easily maintained, flexible and low cost/no cost?
Can the tests be run in parallel?
Performance of the selected tool - do tests get flaky?
Based on this we decided to go with Appium, as it covered most of the criteria and still seems to be the market standard and the most widely used tool. On paper Detox seemed to do a lot of things better than Appium:
Gray box testing, which allows the test framework to monitor the app from the inside and helps reduce test flakiness.
Works in sync with the app.
Faster and easier to set up.
At the end of the day the deal breakers of Detox for us were:
It is built for React Native only, and we wanted a tool that covers Flutter as well.
Although it is cross-platform, it did not support running tests against a real iOS device (it does support real Android devices). The team behind it has plans to extend this.
It supports fewer cloud device farm solutions - BrowserStack, for example, does not support it as of this writing.
Appium is more widely used, so support for it is easier to find.
Installation and Configuration
Since we decided on Appium, let's set it up. Initial configuration can take some time, as Appium has a lot of dependencies and requirements. First we need to install it:
npm install -g appium
Then you will need to set up all of Appium's requirements; you can install appium-doctor to help you with that:
npm install -g appium-doctor
This is a helper tool that lists all the requirements for running Appium and checks whether you have them installed locally. It can be run with: appium-doctor. For example, you need the Java JDK and the Android SDK installed, with the corresponding environment variables (such as JAVA_HOME and ANDROID_HOME) set.
After installing all of the Appium requirements we can create our test project, and here a question arises:
Should you keep End-To-End tests in the same repo as the development code under a /tests submodule or not?
The answer is, of course, "it depends" - both approaches have their own pros and cons. Keeping them together also requires the development code and the tests to be written in the same language.
In our case we decided to use the same repository for both; some of the benefits we saw in this approach are:
Shared resources - this was a big one. We often notice code duplicated between the development and test projects when they are kept separate - constants or properties, but also models such as request bodies, which you need when sending a request to the server to assert that some information changed after an action on the UI. If the project supports localisation, this also includes the translation files, which you won't need to duplicate inside the test project.
Friction is reduced, as there is no need to maintain two different git repositories and add both of them to a workflow. Developers and testers run everything in the same codebase, which also helps with running the tests as the code is developed - you no longer need to switch between two repos and keep both of them up to date locally.
We feel it also increases the sense of ownership throughout the team and helps with collaboration, since all PRs are reviewed by all team members, keeping developers and testers up to date with what the others are doing. You can also integrate git hooks more easily (we mostly used this for API tests; you can see an example in the article below).

What are your opinions on this? Would love to hear how others approach this.
Now back to setting up the tests. WDIO offers a command that helps you with a quicker setup by answering a sequence of questions; to initiate it, run:
npm init wdio .
OR
npx wdio config
As you can see in the screenshot, you can choose a testing library (Mocha, Jasmine or Cucumber), decide whether you want to use a compiler (we would recommend going with TypeScript) and whether you want to use the page object pattern, which we will. WDIO will also create some example files we can inspect to see the general project structure suggested by WDIO.
After inspecting the created files, you will see a wdio.conf.ts file which stores all your configuration options. We suggest creating a new config subfolder and placing a common config file in it, as well as separate files for Android and iOS (you can also create a separate one for your device farm provider, which we will explore in an example for AWS Device Farm below; keep in mind WDIO has services for BrowserStack and Sauce Labs, so if you are going to use those, you should integrate them instead). The shared config holds all the defaults, so the iOS and Android configs only need to hold the capabilities and specs required for running on iOS and/or Android (app or browser).
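To make the split more concrete, here is a minimal sketch of what the shared and Android config files could look like - file paths, device names and capability values are illustrative, not the exact ones from our projects:

// config/wdio.shared.conf.ts - defaults shared by every platform
export const config: WebdriverIO.Config = {
  runner: 'local',
  framework: 'mocha',
  specs: ['../src/specs/**/*.spec.ts'],
  reporters: ['spec'],
  services: ['appium'],
  capabilities: [],
  mochaOpts: { timeout: 120000 },
};

export default config;

// config/wdio.android.app.conf.ts - only Android-specific options live here
import config from './wdio.shared.conf';

config.capabilities = [
  {
    platformName: 'Android',
    'appium:automationName': 'UiAutomator2',
    'appium:deviceName': 'Pixel_5_API_31', // illustrative emulator/device name
    'appium:app': './apps/app-release.apk', // illustrative path to the built APK
  },
];

exports.config = config;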
Locator strategy
Since we are using Page Object Pattern we are going to have a page class for each of the different screens of the app. The goal of using page objects is to abstract any page information away from the actual tests. Ideally, you should store all selectors or specific instructions that are unique for a certain page in a page object, so that you still can run your test after you've completely redesigned your page. Locators between Android and iOS are different so you will have to keep that in mind and use the appropriate one based on your platform.
If you are using React Native, it makes your life a bit easier by providing the testID property, which lets the same selector be used on both Android and iOS. As of React Native 0.64 this property also works on Android; on older versions Appium had no way of retrieving the attribute in Android apps, so it was useful only on iOS. Because of that, many test setups fell back on the Accessibility ID, which was far from ideal and actually a bad practice - you had to sacrifice the accessibility of your app! So if you cannot upgrade your React Native version, it is better to use a separate selector for each platform than to misuse the Accessibility ID.
Appium also seems to require the package name to be part of the resource-id itself on Android, so the safest way to guarantee everything works is to prefix the package name to the testID value. We can define a helper function for that inside the front-end code and call it when adding test IDs to the React elements, like this:
<TextArea
  autoCompleteType="off"
  bg="lightText"
  fontSize="md"
  testID={tID('homePageTextArea')}
  totalLines={5}
  value={someVal}
  w="full"
/>
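The tID helper used above is not part of React Native itself - it is a small function we add to the app code. A minimal sketch, assuming it mirrors the elementPrefix helper shown below, could look like this (the package name is just a placeholder):

import { Platform } from 'react-native';

// Prefix the Android package so Appium sees the testID as a fully qualified
// resource-id; iOS keeps the plain identifier.
export const tID = (id: string, pack = 'com.your.package') =>
  Platform.OS === 'android' ? `${pack}:id/${id}` : id;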
This way, in your page object classes you can build selectors from the same testID (see the sketch after the helper below). The elementPrefix helper method just prepends the package on Android, because, as we said, it is required there; this way the selector works for both platforms:
export const elementPrefix = (selector: string, pack = 'com.your.package:id') => {
  if (!browser.isAndroid) {
    return selector;
  }
  return `${pack}/${selector}`;
};
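And here is a hedged sketch of how a page object might consume it - the page name, element and testID are made up for the example, and we assume the testID surfaces as a resource-id on Android and as an accessibility id on iOS:

import { elementPrefix } from '../helpers/elementPrefix'; // assumed path

class HomePage {
  // Same testID on both platforms: fully qualified resource-id on Android,
  // plain accessibility id on iOS.
  get textArea() {
    return driver.isAndroid
      ? $(`//*[@resource-id="${elementPrefix('homePageTextArea')}"]`)
      : $('~homePageTextArea');
  }

  async typeMessage(text: string) {
    await this.textArea.setValue(text);
  }
}

export default new HomePage();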
Now, if you are using Flutter, there is no such built-in functionality yet, so you will have to define and use different selectors, for example:
get submitButton() {
  const androidSelector = '//android.widget.Button[@content-desc="Get started"]';
  const iosSelector = '//XCUIElementTypeButton[@name="Get started"]';
  return $(super.returnSelector() ? androidSelector : iosSelector);
}
In your base page object, or as a separate helper, you can define the returnSelector function as:
returnSelector() { return driver.isAndroid; }
This way your tests will find the element based on the platform you are running against.
Creating spec files
We will also have a spec file for each app functionality we are going to test. We can use the spec folder created by WDIO to store all our spec files, organized in subfolders based on the different parts of the app (some functionalities are bigger and might need more than one spec file to hold all their tests) - for example admin, onboarding and home.
For example, here is a simple spec file holding the login flow tests (note that depending on the test framework of your choice the syntax might differ slightly, e.g. before -> beforeAll):
describe('Login tests', () => {
  const loginSteps = new LoginSteps();
  const tokenStore = new TokenStore();
  const userCreatorService = new UserCreatorService(tokenStore);
  let user: IKeycloackUserDto;

  before(async () => {
    user = await createNewUser();
  });

  beforeEach(async () => {
    await startApp();
    await handleFirstLaunch();
  });

  afterEach(async () => {
    await driver.closeApp();
    await driver.reloadSession();
  });

  after(async () => {
    await userCreatorService.deleteUserWithEmail(MASTER_USER, user.email);
  });

  it('should not be able to login without providing correct credentials', async () => {
    await loginSteps.typeUsername('validUsername@test.test');
    await loginSteps.typePassword(DEFAULT_PASS);
    await loginSteps.clickLogin();
    await loginSteps.assertInvalidLoginError();
  });
});
Creating so-called steps files can further help with abstraction: the spec files talk to the steps file, which in turn fetches elements from the page object file and performs actions on them. This way the page objects are only responsible for storing the elements, and the steps files perform the actions on them.
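As an illustration, a steps class for the login flow used in the spec above could look roughly like this - the page object and its element names are assumptions, not the exact code from our project:

import LoginPage from '../pageobjects/login.page'; // assumed page object exposing the elements below

export class LoginSteps {
  async typeUsername(username: string) {
    await LoginPage.usernameInput.setValue(username);
  }

  async typePassword(password: string) {
    await LoginPage.passwordInput.setValue(password);
  }

  async clickLogin() {
    await LoginPage.loginButton.click();
  }

  async assertInvalidLoginError() {
    // expect-webdriverio assertion, bundled with WDIO by default
    await expect(LoginPage.invalidLoginError).toBeDisplayed();
  }
}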
As you can also see in the spec file, we create a new user before the run and delete it afterwards. This prevents state from building up, since every run starts from scratch with a fresh user (it also lets you test things like onboarding functionality, which appears only the first time a user is created; if you want to test already-onboarded users, you should still create new ones and onboard them through API calls to your server - you can see some examples of how to do so in the API testing step).
To do this you can create a new subfolder called services and add there all the methods that call your back-end services for different purposes - in this example, user creation. Since we are using Keycloak for user management, the call to create a user looks like this:
async createNewUser(username: string, keycloakUserDto: IKeycloackUserDto): Promise<Test> {
  return await keycloakClient()
    .post(`/admin/realms/${process.env.KEYCLOAK_REALM}/users`)
    .set('Authorization', `Bearer ${this.tokenStore.getAccessTokenFor(username)}`)
    .send(keycloakUserDto);
}
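The keycloakClient call is not shown in the snippet; judging by the chained .post().set().send() calls it is a supertest client, so under that assumption a minimal sketch (the base URL and env variable name are ours, not the project's) could be:

import supertest from 'supertest';

// Points supertest at the Keycloak admin REST API base URL.
export const keycloakClient = () =>
  supertest(process.env.KEYCLOAK_BASE_URL ?? 'https://keycloak.example.com');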
To run the tests you should have separate scripts in your package.json for iOS and Android, like the ones below; this way wdio will run the tests with the appropriate config:
"scripts": {
"wdio.ios": "wdio ./config/wdio.ios.app.conf.ts",
"wdio.android": "wdio ./config/wdio.android.app.conf.ts"
}
After the run is finished you should see an output showing how many tests were run and how many of them passed:
» /src/specs/native/onboarding/login.spec.ts
Login tests
   ✓ should not be able to login without providing a correct email
   ✓ should not be able to login without providing a correct password
   ✓ should be able to login when correct credentials are provided

3 passing (1m 26.1s)
Device Farm
AWS Device Farm is a service that lets us test our mobile app on real devices owned and managed by Amazon. This way we can run our tests without having to own and manage a lot of different physical devices ourselves. It is a bit different from other cloud providers, such as BrowserStack and Sauce Labs, because here our code is uploaded to AWS infrastructure in a predefined format and runs there, whereas with other providers the code runs on your own machine and interacts with their cloud via web services. This means we have to structure our code according to AWS requirements.
The first thing you have to do before heading to Device Farm is prepare your tests and bundle everything into a zip file that AWS will accept. To do this, we use the npm-bundle package.
npm install --save-dev npm-bundle
After that you need to add this command to your package.json scripts:
"package": "npm install && npx npm-bundle && zip -r bundle.zip *.tgz && rm *.tgz"
This installs all your dependencies and bundles them into a .zip file, which we will upload to Device Farm. Keep in mind that you have to list all the dependencies your tests require in your package.json under "bundledDependencies".
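For example, a bundledDependencies entry could look like this - the exact package list depends on what your tests import:

"bundledDependencies": [
  "webdriverio",
  "@wdio/cli",
  "@wdio/local-runner",
  "@wdio/mocha-framework"
]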
Another thing Amazon requires is a config file with essentially empty capabilities. As discussed above, we already have a config folder in our test project; all we have to do is create a new file, "wdio.device.farm.conf.ts", with the following code in it:
import config from './wdio.shared.conf';

config.capabilities = [
  {
    maxInstances: 1
  },
];

exports.config = config;
The only requirement is that config.capabilities defines nothing but maxInstances; all other options are passed in dynamically by AWS, including the platform (iOS/Android), the device we are running on, app-specific properties and so on. This lets us use the same file for both platforms on Device Farm. We also inherit all the common configuration from the shared config file - specs, services, reporters, etc. - everything except the capabilities.
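Since the AWS spec file shown below ends with npm run wdio.device.farm, you will also want a matching script next to the iOS and Android ones (the path is assumed to match the config file we just created):

"scripts": {
  "wdio.device.farm": "wdio ./config/wdio.device.farm.conf.ts"
}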
After that we are ready to upload our .apk/.ipa file to Device Farm and run our tests in the cloud! You will need your app file, so build it with whichever framework you are using.
Running the tests is very simple: go to your AWS console, open the Device Farm service and create a new project (a very simple process - just follow the steps and provide a project name, description and similar information). After you have created a project, open it and click on "Create a new run". You will see a screen like this:
You will see a flow of 5 steps, where you first upload your app file. After that you select the test framework you are using - in our case "Appium Node.js":
Then you upload the tests .zip file we just generated! Finally, you provide a test runner config file for Amazon (they have examples for both Android and iOS, which you might need to alter a bit, for example to bump the Node or Appium version if it does not match the one you are using). Here is an Android example for the setup in this guide:
version: 0.1

phases:
  install:
    commands:
      # You can switch to an alternate Node version using the command below.
      - nvm install 16.13.1
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - npm install *.tgz
      - export APPIUM_VERSION=1.22.0
      - avm $APPIUM_VERSION
      - ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js

  # The pre_test phase includes commands that set up your test environment.
  pre_test:
    commands:
      # The Appium server log goes to the $DEVICEFARM_LOG_DIR directory.
      # The environment variables below are auto-populated at run time.
      - echo "Start appium server"
      - >-
        appium --log-timestamp
        --default-capabilities "{\"deviceName\": \"$DEVICEFARM_DEVICE_NAME\", \"platformName\":\"$DEVICEFARM_DEVICE_PLATFORM_NAME\",
        \"app\":\"$DEVICEFARM_APP_PATH\", \"udid\":\"$DEVICEFARM_DEVICE_UDID\", \"platformVersion\":\"$DEVICEFARM_DEVICE_OS_VERSION\",
        \"chromedriverExecutable\":\"$DEVICEFARM_CHROMEDRIVER_EXECUTABLE\"}"
        >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
      - >-
        start_appium_timeout=0;
        while [ true ];
        do
          if [ $start_appium_timeout -gt 60 ];
          then
            echo "appium server never started in 60 seconds. Exiting";
            exit 1;
          fi;
          grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
          if [ $? -eq 0 ];
          then
            echo "Appium REST http interface listener started on 0.0.0.0:4723";
            break;
          else
            echo "Waiting for appium server to start. Sleeping for 1 second";
            sleep 1;
            start_appium_timeout=$((start_appium_timeout+1));
          fi;
        done;

  # The test phase includes commands that start your test suite execution.
  test:
    commands:
      # Go into the root folder containing your source code and node_modules.
      - echo "Navigate to test source code"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH/node_modules/*
      - echo "Start Appium Node test"
      - npm run wdio.device.farm

  post_test:
    commands:

artifacts:
  - $DEVICEFARM_LOG_DIR
After that you just have to select the devices you want to run against and additionally you can provide some extra device configurations.
To learn more about AWS Device Farm, check out the documentation here.
Conclusion
Building apps is fun, but without a proper regression testing approach it is truly hard to release frequently and be sure nothing is broken. The key challenge with mobile apps is that we have to support two different operating systems (Android and iOS), and that Android is fragmented across many vendor variants (Samsung, Xiaomi, stock Android, ...) which are sometimes incompatible with each other.
By maintaining good automated test coverage, using the device farm and integrating with CI/CD, we manage to avoid headaches when developing multi-platform apps. Of course, every case is different and has its nuances, so you need to find your own balance between coverage and the effort to build the tests in order to get good quality at a reasonable cost.