
How to write good unit tests?

Mid-Level Software Engineer at Taro Community
2 months ago

I'm currently working through a task where I have to write unit tests. In theory, these unit tests should exercise the class in isolation from any other classes or functions and verify the business logic behind how its functions/methods are called.

But where do I draw the line in terms of how much testing I need? I'm currently struggling with this PR because there are a lot of lines of code (bad, I know), but I do need to figure out what a complete unit test looks like.

If you asked me, complete unit testing is when the tests traverse all code paths and include both positive and negative tests that show how the code should be used.

If there's some guidance online, that would be helpful. Otherwise, I'm mimicking unit tests that have already been approved, based on their style and usage.


Discussion

(4 comments)
  • 10
    Eng @ Taro
    2 months ago

    You can try to think of the tests as protecting you and your teammates when one of you refactors the code a few years down the line, after a lot of the context behind the code has been forgotten. A test case that fails at that point catches something that saves you all from an incident in production.

    From that angle, can you think of test cases such that, as long as they pass after the class gets refactored, the core functionality of the class is still intact?

    If you are writing unit tests for a class, you can create unit tests for every public method.

    I like your thinking about having both positive and negative test cases, but try to avoid redundancy across them.
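
    To make the positive/negative idea concrete, here's a minimal sketch in Python with pytest. The ShoppingCart class and its add_item/total methods are hypothetical stand-ins for whatever public API your class exposes.

    ```python
    import pytest

    # Hypothetical class under test; imagine it lives in shopping_cart.py.
    from shopping_cart import ShoppingCart


    def test_add_item_increases_total():
        # Positive case: valid input produces the documented behavior.
        cart = ShoppingCart()
        cart.add_item("apple", price=2.50)
        assert cart.total() == 2.50


    def test_add_item_rejects_negative_price():
        # Negative case: invalid input is rejected instead of silently accepted.
        cart = ShoppingCart()
        with pytest.raises(ValueError):
            cart.add_item("apple", price=-1.00)
    ```

    One pair like this per public method usually covers the positive/negative angle without testing the same behavior from five slightly different directions.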

  • 8
    Helpful Tarodactyl
    Taro Community
    2 months ago

    If you asked me, complete unit testing is when the tests traverse all code paths.

    Yes, a complete unit test suite will traverse most code paths, but setting this as the goal can be dangerous, since it may lead you to create tests that are tightly coupled to the implementation, making them harder to maintain. Instead of using the number of code paths covered as the main metric, test the behavior of each function.
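
    To make "test the behavior" concrete, here's a hedged sketch. The pricing module and apply_discount function are hypothetical; the first test pins down observable behavior and survives refactors, while the second is coupled to an internal helper and breaks the moment the implementation changes, even though the behavior didn't.

    ```python
    from unittest import mock

    import pricing  # hypothetical module under test


    def test_discount_behavior():
        # Behavior-focused: asserts only on inputs and outputs.
        # This keeps passing even if the internals are rewritten.
        assert pricing.apply_discount(price=100.0, percent=10) == 90.0


    def test_discount_calls_internal_helper():
        # Implementation-coupled (the anti-pattern): asserts that a private
        # helper was called. Inlining or renaming _compute_multiplier breaks
        # this test even though the observable behavior is unchanged.
        with mock.patch("pricing._compute_multiplier", return_value=0.9) as helper:
            pricing.apply_discount(price=100.0, percent=10)
            helper.assert_called_once()
    ```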

    But where do I draw the line in terms of how much testing I need? I'm currently struggling with this PR because there are a lot of lines of code (bad, I know), but I do need to figure out what a complete unit test looks like.

    A class is a collection of functions that operate on some data. Each function has a specification: what values the client (whoever is using the class) provides and what values the function outputs. Your goal is to validate the behavior of these functions using the smallest set of test cases. MIT's software engineering course has a pretty good explanation that uses abs() and max() as examples (the page is quite comprehensive):

    https://web.mit.edu/6.102/www/sp23/classes/02-testing/
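
    Roughly in the spirit of that reading, here's what partitioning the inputs of Python's built-in max(a, b) might look like: split the input space by the relationship between a and b, then cover each partition once rather than piling on redundant cases.

    ```python
    # Partitions for max(a, b): a < b, a == b, a > b.
    # One test per partition is the "smallest set of test cases" idea in action.

    def test_max_a_less_than_b():
        assert max(1, 2) == 2


    def test_max_a_equals_b():
        assert max(3, 3) == 3


    def test_max_a_greater_than_b():
        assert max(5, -5) == 5
    ```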

    Another concept is the idea of class invariants: a set of rules that must hold before and after a client calls a function of the class. For example, a binary search tree has several invariants: every key in a node's left subtree is smaller than the node's key, every key in its right subtree is larger, and every node has at most 2 children. You should write tests that also check these invariants.
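
    A minimal sketch of invariant checking might look like this. The BinarySearchTree class and its insert/root/key/left/right members are hypothetical; the point is that after every public operation, the test walks the structure and re-checks the invariants.

    ```python
    from bst import BinarySearchTree  # hypothetical class under test


    def assert_bst_invariants(node, lower=float("-inf"), upper=float("inf")):
        # Every node's key must sit within the bounds implied by its ancestors
        # (left subtree < parent < right subtree). The recursion only ever
        # follows .left and .right, i.e. at most 2 children per node.
        if node is None:
            return
        assert lower < node.key < upper
        assert_bst_invariants(node.left, lower, node.key)
        assert_bst_invariants(node.right, node.key, upper)


    def test_insert_preserves_invariants():
        tree = BinarySearchTree()
        for key in [5, 3, 8, 1, 4, 7, 9]:
            tree.insert(key)
            # The invariants must hold after every call, not just at the end.
            assert_bst_invariants(tree.root)
    ```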

    So if I were you, I would do the following steps:

    1. Understand what the class is doing and what each function does
    2. Write a specification for each function, detailing the expected behavior, inputs, and outputs. Also cover edge cases
    3. Write the tests and try to reduce the test cases to the smallest possible set
    4. Bonus step: use mutation testing to validate that your test suite is robust (a sketch of the idea follows below). It takes a while to run
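
    To give a flavor of step 4: mutation testing tools make small changes ("mutants") to your code and re-run your suite against each one; a mutant that no test fails on ("survives") points at behavior your tests don't pin down. Here's a hand-rolled illustration with a hypothetical is_adult function.

    ```python
    # Hypothetical function under test.
    def is_adult(age: int) -> bool:
        return age >= 18


    # A mutation tool might flip >= into >, producing a mutant like this,
    # and then re-run the test suite with the mutant in place of the original.
    def is_adult_mutant(age: int) -> bool:
        return age > 18


    def test_typical_value():
        # Passes for both the original and the mutant, so on its own
        # it would let the mutant survive.
        assert is_adult(30) is True


    def test_boundary_value():
        # Would fail if is_adult were replaced by the mutant (18 > 18 is False),
        # i.e. this test "kills" the mutant. Boundary tests like this are
        # exactly what mutation testing rewards.
        assert is_adult(18) is True
    ```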

    As for resources, I think the test-driven development community has developed a good sense of what makes a good test suite. You don't need to do TDD (having practiced it, I found it tanks your dev velocity if you try to be perfect), but it helps you understand what makes a good test case and how to avoid common pitfalls.

    I’d recommend reading Bob Martin’s Clean Code and also Sandi Metz’s 99 Bottles of OOP. Kent Beck’s Test Driven Development by Example is also good.

    • 1
      Thoughtful Tarodactyl
      Taro Community
      2 months ago

      Saving this post. One of the best answers. Thank you sir

  • 7
    Tech Lead @ Robinhood, Meta, Course Hero
    2 months ago

    I'm going to take a different approach here and talk more about the "meta" part of writing a good unit test (and just writing good tests in general). In short, a good unit test is one that matters. This sounds obvious, but I have seen countless engineering teams mess this up as they chase arbitrary test coverage goals.

    So what does it mean to matter? Well, with every area that you could write tests for, ask yourself these 3 questions:

    1. How often does this code change? - Every codebase has code that's written once and never really touched again (often for less important features, like niche stuff in settings). There's not much point writing tests for that code, as it's nearly impossible to break something nobody touches.
    2. What's the cost if this code breaks? - Again, there's a power law here. If the home feed in the Facebook app breaks, that's a SEV0 and the company loses millions of dollars. If the blood drive locating feature breaks in the Facebook app, nobody really cares (yep, this feature exists, and almost nobody uses it as it's buried like 5 layers deep in settings). Prioritize writing tests for the components where it's actually really bad if something goes wrong.
    3. How much has this code broken in the past? - Fool me once, shame on you, fool me twice, shame on me. If this code has broken before, especially if #1 and #2 are satisfied, then you should definitely write tests for it.

    I led huge automated testing efforts at Instagram (both unit and integration tests spanning across features powering billions in revenue), and I have found that the key to becoming a testing master is to focus more on the question of "Do I write a test or no test at all?" vs. "How do I write a good test instead of a bad test?" (but of course, you should always try to write good tests if you are writing them and the other advice in this thread is awesome for that).

    Going even deeper, the overall orchestration around tests is very important, which I talk about in depth here: "What do mobile testing strategies look like at top tech companies?"