
What do mobile testing strategies look like at top tech companies?

Senior Software Engineer at Series E Startup · a year ago

I work as an iOS engineer. My current company is a startup that's scaling up fast. In classic startup fashion, testing was historically not a top priority here. As a consequence, we struggle with lots of manual QAing and low code confidence when modifying pieces of poorly tested legacy code.

This has improved in the past year simply by unit testing every new piece of code by default and by introducing unit tests in legacy code as we touch it, boy scouting style.

At this point, I think we need to step it up and I'm looking forward to formalising a testing strategy for our mobile team.

Here are some ideas I have:

  • In the realm of unit tests:
    • I want to introduce API contract tests because our back-end API lacks documentation. In these tests we'd unit test the way we build our requests to API endpoints, and we'd have a sample JSON response from the server to test that the app can parse it into our data models as expected.
    • I want to introduce localisation tests because our app supports many languages but we are always manually testing localisation. In these tests, we'd assert that the outputs of our formatting logic are the ones we want for different locales.
  • In the realm of user interface tests:
    • I want to introduce snapshot testing, because right now the only way we can test that our views are rendered as expected for different view configurations (for example RTL vs LTR, empty content, loading content, content in verbose languages, etc.) is manually.
    • I want to introduce UI tests (sparingly, only for key flows) because we have no e2e tests. Currently, key flows are tested manually and frequently to prevent regressions, which takes lots of time.
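To make the first idea concrete, here is a minimal sketch of a contract test in Swift. The `UserProfile` model, its fields, and the sample JSON are hypothetical stand-ins, not taken from any actual API:

```swift
import Foundation

// Hypothetical data model - names are illustrative, not from a real endpoint.
struct UserProfile: Codable, Equatable {
    let id: Int
    let displayName: String
    let locale: String

    enum CodingKeys: String, CodingKey {
        case id
        case displayName = "display_name"
        case locale
    }
}

// A sample JSON response captured from the server and checked into the repo.
let sampleJSON = """
{ "id": 42, "display_name": "Ada", "locale": "en_GB" }
""".data(using: .utf8)!

// The "contract": the app must parse the sample into the model as expected.
// In a real suite this would be an XCTest asserting each decoded field.
let decoded = try! JSONDecoder().decode(UserProfile.self, from: sampleJSON)
```

In a real test target this decoding would live in an `XCTestCase` with `XCTAssertEqual` on each field, and the sample JSON would be refreshed whenever the back-end changes the response shape.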

What other testing strategies can I propose to establish some standard we can use to further improve our code confidence, reduce bugs, and rely more on automated testing?

How do the top tier tech companies approach mobile testing?



  • 12
    Android Engineer @ Robinhood
    a year ago

    At Robinhood, we have a mix of screenshot tests, unit tests, integration tests (powered by Selenium), and manual QA.

    While all of those testing frameworks are valid approaches to improving test coverage (and quality), I think you should first identify the single largest problem you're looking to address by introducing new testing paradigms/frameworks. All of those avenues take a significant amount of time to introduce and to support: rolling them all out at once will drain a significant amount of resources in the short to mid term (a block of focused dev time is needed up front for the implementation to succeed) and will lower overall velocity in the long term (more tests per commit means slower turnaround for landing code).

    If you don't have a clear understanding of the problem you're trying to solve for the business, then you'll likely end up hurting the business (even if the rollout succeeds), or your ideas will get shot down since it isn't clear how the proposals are a net gain for the company. I recommend loosely computing the yield of every testing practice you wish to introduce (benefit to the business / time needed to implement) and then pushing the 1-2 practices at the top of the list.
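    The yield calculation above can be sketched in a few lines. All the numbers here are made-up placeholders for illustration, not real estimates:

    ```swift
    // Rough yield ranking for testing practices: benefit / cost.
    // Benefit and cost figures are invented placeholders.
    struct Practice {
        let name: String
        let benefit: Double   // estimated value to the business (arbitrary units)
        let weeks: Double     // estimated implementation + support cost
        var yield: Double { benefit / weeks }
    }

    let practices = [
        Practice(name: "API contract tests", benefit: 8, weeks: 2),
        Practice(name: "Snapshot tests",     benefit: 9, weeks: 4),
        Practice(name: "UI tests",           benefit: 7, weeks: 10),
        Practice(name: "Localisation tests", benefit: 5, weeks: 1),
    ]

    // Rank by yield and push only the top 1-2 practices.
    let ranked = practices.sorted { $0.yield > $1.yield }
    let topTwo = ranked.prefix(2).map { $0.name }
    ```

    Even a back-of-the-envelope ranking like this forces the benefit and cost assumptions into the open, which is usually the valuable part of the exercise.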

    Hope this helps!

  • 8
    Software dev
    a year ago

    At this point, I think we need to step it up and I'm looking forward to formalising a testing strategy for our mobile team.

    That makes sense, and it's a good opportunity to agree on testing terminology and test boundaries. I have often seen the terms E2E, UI, functional, and acceptance testing used interchangeably. Each test category should clearly define which code it exercises, which architecture layers it covers, and what can and cannot be mocked.

    In mobile, there is no one-size-fits-all solution, but a good place to start is to follow the mobile testing pyramid and tweak it to your needs. Ideally, you will have a combination of unit, integration, and UI testing.

    I want to introduce API contract tests because our back-end API lacks documentation

    This is good, but it might be beneficial to reuse the mocked JSON in integration tests. That way you will exercise the whole UI, domain, and network layers. In both cases, you have to manually maintain the JSON files. Ideally, these tests will start from a single screen, and you can mock the response to test multiple view states. I know this is hard on iOS due to the closed nature of the platform.
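    The "mock the response to test multiple view states" idea could be sketched like this. The `FeedState` enum, `FeedResponse` shape, and JSON payloads are hypothetical, just to show the pattern:

    ```swift
    import Foundation

    // Hypothetical view states for a single screen.
    enum FeedState: Equatable {
        case empty
        case loaded(count: Int)
    }

    struct FeedResponse: Codable {
        let items: [String]
    }

    // Derive the view state from a (mocked) server response.
    func feedState(fromJSON data: Data) throws -> FeedState {
        let response = try JSONDecoder().decode(FeedResponse.self, from: data)
        return response.items.isEmpty ? .empty : .loaded(count: response.items.count)
    }

    // Two mocked responses exercising two different view states.
    let emptyJSON = #"{ "items": [] }"#.data(using: .utf8)!
    let fullJSON  = #"{ "items": ["a", "b", "c"] }"#.data(using: .utf8)!

    let emptyState = try! feedState(fromJSON: emptyJSON)
    let fullState  = try! feedState(fromJSON: fullJSON)
    ```

    The point is that one checked-in JSON file per view state lets a single integration test cover empty, loaded, and error renderings without touching the real network.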

    Snapshot testing is good, and it might cover your localisation tests; snapshots are also beneficial when refactoring the UI or migrating to a new design system.
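    For the formatting logic that snapshots don't cover, a plain unit test can assert the output per locale. A minimal sketch, assuming a simple `NumberFormatter`-based helper rather than the poster's actual formatting code:

    ```swift
    import Foundation

    // Stand-in for the app's real price-formatting logic - purely illustrative.
    func formattedPrice(_ value: Double, localeID: String) -> String? {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        formatter.locale = Locale(identifier: localeID)
        return formatter.string(from: NSNumber(value: value))
    }

    // Assert output per locale instead of eyeballing it in the simulator.
    let us = formattedPrice(1234.5, localeID: "en_US")
    let de = formattedPrice(1234.5, localeID: "de_DE")
    ```

    In a real suite each supported locale would get its own assertion on the exact expected string, so a change in formatting behaviour fails the build instead of reaching QA.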

    You are spot on to keep the UI tests to core flows only as they are expensive to run. It is good to generate some automated reporting nightly so the team can check the state of these tests.

    Lastly, be cautious about the time you spend keeping these tests healthy versus the benefit you are getting out of them. I would also track the number of times a regression has been caught before release because of these tests - this information will help you prioritise your time!

  • 6
    Tech Lead @ Robinhood, Meta, Course Hero
    a year ago

    At Instagram, it was a mix of:

    • Unit tests
    • Snapshot tests
    • End to end tests - These were flaky, so we ramped down investment over time
    • Manual QA through mostly offshore firms
    • Extremely aggressive dogfooding program - This was the crown jewel of our mobile testing strategy

    Instagram was unique though as:

    • It's a very easy app to use yourself as a universal consumer product
    • Employees had main automatically deployed to their devices (you couldn't opt out of it)
    • Instagram's core engineering principle is "Do the simple thing first"

    I talk about Instagram's testing philosophy more in-depth here: "What should I do when no one is available to help?" - This engineer is stuck on introducing new mobile tests

    Here are some other good resources around testing as well:

  • 9
    Tech Lead @ Robinhood, Meta, Course Hero
    a year ago

    Adding on to my response, HWalia had some really great advice and I want to extend it.

    Automated tests are hard, almost always harder than people think. Mobile is tricky as it's still a newer space that's less well-defined than something like traditional back-end, and iOS is especially tricky due to the closed nature of the platform + the thrashier nature of the SDK compared to Android. Something I have seen time and time again is:

    1. People add some automated tests, which isn't that hard and feels good. Yay, code quality!
    2. The tests naturally decay over time and become flaky. The impact becomes less and less clear.
    3. People lose faith in the tests and see adding/maintaining them as a chore.

    What this means for you as a testing/quality lead is that you need to do everything in your power to prevent #3 in this cycle - You need to make sure that adding or fixing an automated test is extremely easy and the impact of the team maintaining its test suite is crystal clear.

    To me, this is far more important than the tactical types of tests you're going to cobble together. For example, end-to-end tests can theoretically catch bugs the best as they exercise the entire flow and catch an error at every step in the process, but the problem is that they're a huge pain to maintain and write.

    Here are the questions you should always be thinking about:

    1. How easy is it to write a new automated test, even if the engineer has literally never written one before and is a new-hire junior engineer?
      1. Is there a thorough set of wikis and existing example tests that people can use to write their own test?
      2. Is your code cleanly modularized so that it is easily testable?
    2. How smooth is it to get the testing environment setup and run an individual automated test or suite of automated tests?
    3. If a test is broken, is it easy to debug why and fix it?
    4. When you write automated tests, is it easy for you to see the impact?
      1. Can you measure how many bugs your tests have prevented?
      2. Can you see the severity of bugs your tests have prevented?
    5. Are engineers rewarded for holding the bar with automated test coverage?
      1. Do the quality gains translate into concrete increases in their performance reviews?
      2. Do managers understand that the team may move a bit slower completing projects and meeting deadlines to ensure that good automated test coverage is paired with development?

    Striving to answer all of these questions positively will naturally lead you down the right road: optimizing the testing framework, creating clear dashboards, and building the proper systems and alignment with engineering leadership.

    It wasn't about automated testing, but I did have to show all these behaviors when I completely revamped the oncall system for my ~20 engineer org at Instagram (another quality effort). You can watch the in-depth case study here: [Case Study] Revamping Oncall For 20 Instagram Engineers - Senior to Staff Project

    Also, becoming a true testing/quality champion is definitely staff scope, and it seems like your company is well-established at Series E. I hope this all becomes a shining star of your senior -> staff packet!
