Post-Merge Code Reviews are Great!

One aspect of our workflow that people often find odd or unfamiliar is that we don’t require an up-front code review or a pull request signed off by another engineer before pushing to production. This often comes up in interviews and conversations with engineers interested in jobs at ClassDojo, with reactions ranging from curiosity to aversion.

The reactions are understandable. Many organizations require blocking pull requests, and many developers work in environments with fewer safety nets (test automation, linting, and other code quality tools) and less trust. Don’t get me wrong: code reviews are a good tool for getting feedback, sharing context, and spreading useful patterns, and we do create pull requests and ask for input all the time. Review just isn’t required.

Code reviews often happen too late for certain kinds of feedback, and both research and experience[1] show that mandatory external code reviews don’t catch issues at a higher rate than other techniques and don’t reduce production outages. Pull requests and feature branches that sit waiting for review drift out of date with the main branch, going stale and needing extra rework that slows the time to release. The only real, predictable outcome of forced code reviews is slower velocity.

In general, no single silver-bullet tool or process guarantees quality; you need a combination of techniques to reduce defects and produce stable releases. Even in circumstances where bugs or bad code could be outright dangerous, resulting in loss of life or irreversible harm, a code inspection process coupled with other techniques such as defensive programming can be a better fit than a blocking code review.

Code reviews have their place as a collaboration tool, but depending on what you want to get out of that collaboration, there are other, more effective techniques to reach for. Mandated blocking code reviews that are external to the team are often motivated by an underlying need for control, a lack of trust in systems or people, or a desire to enforce collaboration.

Trust is important, and at ClassDojo we strive for a high-trust, low-control engineering environment. If you can't trust someone to make good changes without careful review, you can try several things to set them up for success. Options range from upskilling through training, mentoring, and pairing, to changing the way that you work. For instance, you can promote safer changes by adding tests, instrumentation, and alerting. Ultimately, if you really don’t feel you can trust a person to write good code, you might consider parting ways with them.

What do we do?

At ClassDojo we’re strong advocates of pair and mob programming as collaboration practices. Both help drive attention and energy toward problem solving, and both provide live, in-context feedback on code quality and technical decisions. Fast feedback is one of the most important contributors to velocity, and getting that feedback while writing the code, when the mental model is fresh in your head, is far preferable to getting it at a later date.

Careful scope management helps make this kind of collaboration work for us. As a team picks up a project or idea, we investigate what the smallest possible first implementation could be and start there. Smaller changes are easier to reason about as you work on them, and easier to roll back if necessary.

For larger changes or changes with a lot of uncertainty, we have upfront design conversations, sometimes in writing but often as simple as saying "I'm planning on sticking X in Y for Z. Does that sound crazy to anyone?" That can be a better fit for up-front feedback on direction, and is something we encourage engineers and teams to do informally and often.

Additionally, we’re very invested in test automation and instrumentation to make going to production safe. We want to prioritize making it easy to fix or roll back changes over the idea that we can predict the future and never be wrong. We also encourage post-merge code reviews. After the code is live and hopefully contributing user and business value, we expect to iterate on the code. We have processes and systems that make it easy to do that, which rely on all of the practices we’ve talked about.
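To give a feel for the kind of automated safety net that stands in for a blocking review, here’s a minimal sketch of an integration-style test. The route, payload, and field names are hypothetical, and Node’s built-in test runner stands in for whatever framework a team actually uses; the point is only that checks like this run on every push, before anyone needs to look at a diff.

```typescript
// Hypothetical integration-style test. The /api/classes route and its
// response shape are illustrative, not ClassDojo's real API.
import { test } from "node:test";
import assert from "node:assert/strict";

const BASE_URL = process.env.API_URL ?? "http://localhost:3000";

test("creating a class returns the new class with an id", async () => {
  const res = await fetch(`${BASE_URL}/api/classes`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ name: "Playtest 101" }),
  });

  assert.equal(res.status, 201);

  const body = await res.json();
  assert.equal(body.name, "Playtest 101");
  assert.ok(body.id, "expected the created class to have an id");
});
```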

There are of course drawbacks to this approach. Services like GitHub lack good tools or notifications for post-merge code reviews, and it can be hard to remember to follow up and act on new suggestions or issues. This way of working might also not be a good fit if you don’t already have a good level of monitoring, alerting, and test automation that enables you to move swiftly with confidence.

While this way of working might not appeal to everyone, we have found that it works well in our engineering culture. Engineers at ClassDojo tend to be very collaborative, often using pairing or mob programming to problem solve together, and also tend to be motivated by the craft of achieving a high level of quality in code and products. Our practices as a team are enabled by our team values, so we work hard at maintaining both.

If our development practices sound like an interesting way of working, and you’re excited about working in a team that values trust and collaboration, have a look at our open roles and get in touch!

[1] McConnell, S. (2004). Section 20.3, “Relative Effectiveness of Quality Techniques.” In Code Complete (2nd ed., pp. 469–472). Microsoft Press.

One aspect of ClassDojo’s engineering culture that tends to surprise people is that we don’t have dedicated QA engineers or testers. We use a variety of strategies to keep our end-to-end feature development fast, continuous, and high quality. These strategies include a strong culture of writing automated tests, along with regular playtesting sessions.

What do we do instead of QA?

At ClassDojo, we prioritize truly continuous deployment of our backend monolith, which means removing as many manual blocking steps as we can, and we’ve built systems to do so safely. Engineers have end-to-end ownership of features, and we rely heavily on writing automated unit and integration tests to ensure that we’re not pushing code that causes unexpected behavior or errors. Our canary containers help automatically roll back any commits that cause errors, and we monitor our logs carefully.
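To make the canary idea a bit more concrete, here’s a rough sketch of the kind of comparison such a system can make before deciding to roll a commit back. The metric shape, threshold, and numbers are illustrative assumptions, not our actual implementation.

```typescript
// Illustrative canary check: compare the error rate of the canary
// containers against the stable fleet and decide whether to roll back.
// The metric shape, threshold, and numbers below are assumptions.
interface FleetStats {
  requests: number;
  errors: number;
}

function errorRate(stats: FleetStats): number {
  return stats.requests === 0 ? 0 : stats.errors / stats.requests;
}

// Roll back if the canary's error rate is meaningfully worse than baseline.
function shouldRollBack(
  canary: FleetStats,
  baseline: FleetStats,
  maxAbsoluteIncrease = 0.01, // tolerate at most +1 percentage point
): boolean {
  return errorRate(canary) > errorRate(baseline) + maxAbsoluteIncrease;
}

// Example with made-up numbers: the canary is erroring far more often
// than the rest of the fleet, so the latest commit should be rolled back.
const canary: FleetStats = { requests: 5_000, errors: 120 };     // 2.4%
const baseline: FleetStats = { requests: 200_000, errors: 900 }; // 0.45%

if (shouldRollBack(canary, baseline)) {
  console.log("Canary looks unhealthy: rolling back the latest commit");
}
```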

However, as thorough as we try to be, automated tests and monitoring can’t catch everything. To supplement our automated systems, we hold regular playtesting sessions that serve as both manual testing and context sharing.

What is playtesting?

Playtesting is simply manually testing the products and features the team has built. You may have also heard the term “dogfooding,” which refers to actually using your product as a real user. Some folks on the team are able to dogfood as legitimate parent users since their kids’ teachers use ClassDojo. However, since most of us are neither teachers in a classroom nor parents connected to a real one, we run intentional playtesting sessions instead.

How do we run a playtesting session?

Who should join?

Individual teams might playtest new features right before shipping, in which case the whole team should join. We also schedule sessions that are open to anyone at the company so teams can share their recent features and solicit feedback from the wider group. For the best insight and context sharing, we encourage including folks from various functions, such as engineers, designers, PMs, marketers, and customer success agents. Playtesters don’t need to prepare anything ahead of time, or even have any knowledge of the flows. They just need to have the right type of device if it’s a mobile feature that’s only released on one platform.

The process

The first step is scheduling a time for the group of testers to get together and play through certain flows. Usually 30-45 minutes is sufficient, depending on how large or complex the feature is. Ahead of time, the product owner should prepare a set of loose guidelines for how to run through the flows, including any necessary setup steps such as installing an alpha build or turning on certain feature flags.

At the beginning of the session, the facilitator or product owner gives a brief overview of the flows and setup, then all the playtesters simply go through the flows independently on their own devices and accounts. Doing this synchronously instead of async lets us quickly identify whether an issue is widespread or a particular edge case, and answer any questions on the spot. We typically use an Asana board where playtesters can add cards for anything that comes up: not just bug reports but also general product feedback and points of confusion.

We reserve 5-10 minutes at the end of the session to go through the cards and make sure the issues are clear. From there, the product owner and team can prioritize them at their next prioritization meeting.

What are the benefits?

Our playtesting often finds bugs in obscure edge cases not covered by automated tests. With an app and user network as complex as ours, it’s nearly impossible to cover all use cases with test fixtures, so it helps to have a variety of people testing on their own real-world accounts.

Playtesting is one of our best methods for cross-team and cross-functional context sharing. Screenshots, product specs, and demos can only go so far in conveying what a new feature really involves. Having teammates actually get their hands on the features is a great way to share what’s being built. It’s also valuable to get fresh perspectives from folks who perhaps hadn’t explored that product area before. It’s like having an in-house focus group to give feedback.

If you have ideas on how we can improve our playtesting or manual testing strategies, reach out! We’d love to hear from you.

One of the main engineering backbones here at ClassDojo is our CI/CD pipeline. By having a pipeline that automates tests, migrations, rolling out to production, and so much more, we allow the engineering team to focus on building great products. No managing a release train or babysitting a deployment!
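As a loose sketch of what “automating tests, migrations, and rolling out” can look like when strung together, here’s an illustrative model of the stages a pipeline like ours might run for every push to the main branch. The stage names, ordering, and hooks are assumptions for the example, not a description of our actual tooling.

```typescript
// Illustrative only: a simplified model of a continuous deployment pipeline.
// Each stage would call out to real tooling; here they are just placeholders.
type Stage = { name: string; run: () => Promise<void> };

const pipeline: Stage[] = [
  { name: "lint and unit tests", run: async () => { /* e.g. run the test suite */ } },
  { name: "integration tests",   run: async () => { /* spin up dependencies, run suite */ } },
  { name: "database migrations", run: async () => { /* apply any pending migrations */ } },
  { name: "canary deploy",       run: async () => { /* ship to a small slice, watch error rates */ } },
  { name: "full rollout",        run: async () => { /* promote the build to the whole fleet */ } },
];

// Run the stages in order; any failure stops the deploy (and a failing
// canary would trigger a rollback rather than a full rollout).
async function deploy(): Promise<void> {
  for (const stage of pipeline) {
    console.log(`Running stage: ${stage.name}`);
    await stage.run();
  }
  console.log("Deploy complete");
}

deploy().catch((err) => {
  console.error("Pipeline failed, stopping deploy:", err);
  process.exit(1);
});
```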

In this episode, I chat with Dom and Gregg about what CI/CD means to us, how we do it, and what steps you can take to start your own journey to continuous deployment.

Listen to Episode 2 of the Engineering Dojo Podcast: How and Why we use CI/CD