Checklists

Summary

Before submitting a large commit, go through the checklist below to ensure it meets all the requirements. If submitting a large patchset, submit it in parts, and ensure that feedback from reviews of the first few parts of the patchset is applied to subsequent parts.

Overall principles

In order to break sets of reviews down into chunks, there are a few key principles we need to stick to:

  • Review patches which are as small as possible, but no smaller (see here, here and here)
  • Learn from each review so the same review comments do not need to be made more than once
  • Use automated tools to eliminate many of the repetitive and time consuming parts of patch review and rework
  • Do high-level API review first, ideally before implementing those APIs, to avoid unnecessary rework

Order of reviewing commits

Before proposing a large patchset, work out what order the patches should be submitted in. If multiple new classes are being submitted, they should be submitted depth-first, as the APIs of the root classes affect the implementation of everything else.

The high-level API of any major new feature should be submitted first, and its implementation only once that high-level API has been reviewed. There is no point in starting to review the implementation before the high-level API, as the high-level review could suggest some big changes which invalidate a lot of the implementation review.

Revisiting earlier commits

Rather than trying to get everything comprehensive first time, we should aim to get everything correct and minimal first time. This is especially important for base classes. The commit which introduces a new base class should be fairly minimal, and subsequent commits can add functionality to it as that functionality becomes needed by new class implementations.

The aim here is to reduce the amount of initial review needed on base classes, and to ensure that the non-core parts of the API are motivated by specific needs in subclasses, rather than being added speculatively.

Automated tests

One of the checklist items requires checking the code coverage of the automated tests for a class. We are explicitly not requiring that the code coverage reaches some target value, as the appropriateness of such a value would vary wildly between patches and classes. Instead, we require that the code coverage report (lcov output) is checked for each patch, and that the developer considers whether it would be easy to add additional automated tests to increase the coverage of the code in that patch.

Pre-submission checklist

(A rationale for each of these points is given in the section below to avoid cluttering this one.)

Before submitting any patch, please make sure that it passes this checklist, to avoid the review getting hung up on avoidable issues:

  1. All new code must follow the coding guidelines, especially the API design guidelines, namespacing guidelines, memory management guidelines, pre- and post-condition guidelines, and introspection guidelines. Some key points from these are pulled out below, but they are not the only points to pay attention to.
  2. All new public API must be namespaced correctly.
  3. All new public API must have a complete and useful documentation comment (ignore the build system comments on that page – we use hotdoc now – the guidelines about the comments themselves are all still relevant).
  4. All new public API documentation comments must have GObject Introspection annotations where appropriate; g-ir-scanner (part of the build process) must emit no warnings when run with --warn-all --warn-error (which should be set by $(WARN_SCANNERFLAGS) from AX_COMPILER_FLAGS).
  5. All new public methods must have pre- and post-conditions to enforce constraints on the accepted parameter values (see the annotated sketch after this checklist, which also illustrates items 2 to 4).
  6. The code must compile without warnings, after ensuring that AX_COMPILER_FLAGS is used and enabled in configure.ac (if it is correctly enabled, compiling liblightwood should fail if there are any compiler warnings) — remember to add $(WARN_CFLAGS), $(WARN_LDFLAGS) and $(WARN_SCANNERFLAGS) to new Makefile.am targets as appropriate.
  7. The introduction documentation comment for each new class must give a usage example for that class in each of the main modes it can be used (for example, if done for the roller, there would be one example for fixed mode, one for variable mode, one for linked rollers, one for each animation mode, etc.).
  8. All new code must be formatted as per the coding guidelines, using clang-format not GNU indent.
  9. There must be an example program for each new class, which can be used to manually test all of the class’s main modes of operation (for example, if done for the roller, there would be one example for fixed mode, one for variable mode, one for linked rollers, one for each animation mode, etc.) — these examples may be submitted in a separate patch from the class implementation, but must be submitted at the same time as the implementation in order to allow review in parallel. Example programs must be usable when installed or uninstalled, so they can be used during development and on production machines.
  10. There must be automated tests (using the GTest framework in GLib) for construction of each new class, and for getting and setting each of its properties; a test sketch is given after this checklist.
  11. The code coverage of the automated tests must be checked (using make check-code-coverage; see D3673) before submission, and if it’s possible to add more automated tests (and for them to be reliable) to improve the coverage, this should be done; the final code coverage figure for the class should be mentioned in a comment on the diff, and it would be helpful to have the lcov reports for the class saved somewhere for analysis as part of the review.
  12. There must be no definite memory leaks reported by Valgrind when running the automated tests under it (using AX_VALGRIND_CHECK and make check-valgrind; see D3673).
  13. All automated tests must be installed as installed-tests and must be run when liblightwood is built into a package (we can help with the initial setup of this infrastructure if needed).
  14. build-snapshot -Is $build_machine must pass before submission of any patch (where $build_machine is a machine running an up-to-date copy of Apertis, which may be localhost — this is a standard usage of build-snapshot).
  15. All new code must be checked to ensure it doesn’t contradict review comments from previous reviews of other classes (i.e. we want to avoid making the same review comments on every submitted class).
  16. Commit messages must explain why they make the changes they do.
  17. The dependency information between Phabricator diffs must be checked after submitting diffs, to ensure it is in the correct order.
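
The sketch below, which is not taken from liblightwood, illustrates what items 2 to 5 ask for: a correctly namespaced public method with a complete documentation comment, a GObject Introspection annotation and a pre-condition. The ExampleRoller class and example_roller_get_label() method are hypothetical names used purely for illustration; real liblightwood API uses the Lightwood namespace.

    #include <glib-object.h>

    /* Hypothetical class used only for illustration of the checklist items;
     * liblightwood's real classes use the Lightwood namespace. */
    #define EXAMPLE_TYPE_ROLLER (example_roller_get_type ())
    G_DECLARE_FINAL_TYPE (ExampleRoller, example_roller, EXAMPLE, ROLLER, GObject)

    struct _ExampleRoller
    {
      GObject parent_instance;
    };

    G_DEFINE_TYPE (ExampleRoller, example_roller, G_TYPE_OBJECT)

    static void
    example_roller_class_init (ExampleRollerClass *klass)
    {
    }

    static void
    example_roller_init (ExampleRoller *self)
    {
    }

    /**
     * example_roller_get_label:
     * @self: an #ExampleRoller
     * @index: index of the item to query, starting from 0
     *
     * Gets the label displayed for the item at @index.
     *
     * Returns: (transfer full): a newly allocated label; free with g_free()
     */
    gchar *
    example_roller_get_label (ExampleRoller *self,
                              guint          index)
    {
      /* Pre-condition: catch programmer errors at the API boundary. */
      g_return_val_if_fail (EXAMPLE_IS_ROLLER (self), NULL);

      return g_strdup_printf ("Item %u", index);
    }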

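The following test sketch shows the shape of the automated tests required by item 10, written with the GTest framework. It assumes the hypothetical ExampleRoller class above has gained a writable "mode" string property and is declared in a (hypothetical) example-roller.h header; the type, property names and assertions would of course have to match the real class under test.

    #include <glib-object.h>
    #include "example-roller.h"  /* hypothetical header declaring EXAMPLE_TYPE_ROLLER */

    /* Construction of the class must be covered by a test. */
    static void
    test_roller_construction (void)
    {
      GObject *roller = g_object_new (EXAMPLE_TYPE_ROLLER, NULL);

      g_assert_nonnull (roller);
      g_object_unref (roller);
    }

    /* Getting and setting each property must be covered by a test;
     * the "mode" property here is an assumed example. */
    static void
    test_roller_mode_property (void)
    {
      GObject *roller = NULL;
      gchar *mode = NULL;

      roller = g_object_new (EXAMPLE_TYPE_ROLLER, "mode", "fixed", NULL);
      g_object_get (roller, "mode", &mode, NULL);
      g_assert_cmpstr (mode, ==, "fixed");

      g_free (mode);
      g_object_unref (roller);
    }

    int
    main (int argc, char *argv[])
    {
      g_test_init (&argc, &argv, NULL);

      g_test_add_func ("/roller/construction", test_roller_construction);
      g_test_add_func ("/roller/properties/mode", test_roller_mode_property);

      return g_test_run ();
    }
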
Rationales

  1. Each coding guideline has its own rationale for why it’s useful, and many of them significantly affect the structure of a diff, so are important to get right early on.
  2. Namespacing is important for the correct functioning of a lot of the developer tools (for example, GObject Introspection), and to avoid symbol collisions between libraries — checking it is a very mechanical process which it is best to not have to spend review time on.
  3. Documentation comments are useful to both the reviewer and to end users of the API — for the reviewer, they act as an explanation of why a particular API is necessary, how it is meant to be used, and can provide insight into implementation choices. These are questions which the reviewer would otherwise have to ask in the review, so writing them up lucidly in a documentation comment saves time in the long run.
  4. GObject Introspection annotations are a requirement for the platform’s language bindings (to JavaScript, for example) to work, so must be added at some point. Fixing the error messages from g-ir-scanner is sufficient to ensure that the API can be introspected.
  5. Pre- and post-conditions are a form of assertion in the code, which check for programmer errors at runtime. If they are used consistently throughout the code on every API entry point, they can catch programmer errors much nearer their origin than otherwise, which speeds up debugging both during development of the library, and when end users are using the public APIs. They also act as a loose form of documentation of what each API will allow as its inputs and outputs, which helps review (see the comments about documentation above).
  6. The set of compiler warnings enabled by AX_COMPILER_FLAGS has been chosen to balance false positives against false negatives in detecting bugs in the code. Each compiler warning typically identifies a single bug in the code which would otherwise have to be fixed later in the life of the library; fixing bugs later is always more expensive in terms of debugging time.
  7. Usage examples are another form of documentation (as discussed above), which specifically make it clearer to a reviewer how a particular class is intended to be used. In writing usage examples, the author of a patch can often notice awkward aspects of their API design, which can then be fixed before review; this is faster than them being caught in review and sent back for modification.
  8. Well-formatted code is a lot easier to read and review than poorly formatted code. It allows the reviewer to think about the function of the code they are reviewing, rather than (for example) which function call a given argument actually applies to, or which block of code a statement is actually part of.
  9. Example programs are a loose form of testing, and also act as usage examples and documentation for the class (see above). They provide an easy way for the reviewer to run the class and (for example, if it is a widget) review its visual appearance and interactivity, which is very hard to do by simply looking at the code in a patch. Their biggest benefit will be when the class is modified in future — the example programs can be used to test changes to it and ensure that its behaviour changes (or does not) as expected. Availability of example programs which covered each of the modes of using LightwoodRoller would have made it easier to test changes to the roller in the last two releases, and discover that they broke some modes of operation (like coupling two rollers).
  10. For each unit test for a piece of code, the behaviour checked by that unit test can be guaranteed to be unchanged across modifications to the code in future. This prevents regressions (especially as the unit tests for Apertis projects are set up to be run automatically on each commit by @apertis-qa-bot, which is more frequent than in other projects). The value of unit tests when initially implementing a class is in the way they guide API design to be testable in the first place. It is often the case that an API will be written without unit tests, and later someone will try to add unit tests and find that the API is untestable, typically because it relies on internal state which the test harness cannot affect. By that point, the API is stable and cannot be changed to allow testing.
  11. Looking at code coverage reports is a good way to check that unit tests are actually checking what they are expected to check about the code. Code coverage provides a simple, coarse-grained metric of code quality — the quality of untested code is unknown.
  12. Every memory leak is a bug, and hence needs to be fixed at some point. Checking for memory leaks in a code review is a very mechanical, time-consuming process. If memory leaks can be detected automatically, by using Valgrind on the unit tests, this reduces the amount of time needed to catch them during review. This is an area where higher code coverage provides immediate benefits. Another way to avoid leaks is to use g_autoptr() to automatically free memory when leaving a control block; however, as this is a completely new technique to learn, we are not mandating its use yet, though you might find it easier (a brief sketch is given after this list).
  13. If all automated tests are run at package build time, they will be run by @apertis-qa-bot for every patch submission; and can also be run as part of the system-wide integration tests, to check that liblightwood behaviour doesn’t change when other system libraries (for example, Clutter or libthornbury) are changed. This is one of the motivations behind installed-tests. This is a one-time setup needed for liblightwood, and once it’s set up, does not need to be done for each commit.
  14. build-snapshot ensures that a Debian package can be built successfully from the code, which also entails running all the unit tests, and checking that examples compile. This is the canonical way to ensure that liblightwood remains deliverable as a Debian package, which is important, as the deliverable for Apertis is essentially a bunch of Debian packages.
  15. If each patch is updated to learn from the results of previous patch reviews, the amount of time spent making and explaining repeated patch review comments should be significantly reduced, which saves everyone’s time.
  16. Commit messages are a form of documentation of the changes being made to a project. They should explain the motivation behind the changes, and clarify any design decisions which the author thinks the reviewer might question. If a commit message is inadequate, the reviewer is going to ask questions in the review which could have been avoided otherwise.
  17. git-phab should manage this automatically, but there have been problems in the past after doing interactive rebases, which have caused confusion in the patch review, and have broken @apertis-qa-bot’s checking of patches, which slows down the review.
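
As mentioned in rationale 12, g_autoptr() and g_autofree free a variable automatically when it goes out of scope, which makes leaks on early-return paths much harder to write. Below is a minimal sketch of the technique; the file path and the load_greeting() function are made-up names used only for illustration.

    #include <gio/gio.h>

    /* Hypothetical helper demonstrating automatic cleanup. */
    static gboolean
    load_greeting (GError **error)
    {
      g_autoptr(GFile) file = g_file_new_for_path ("/tmp/greeting.txt");
      g_autofree gchar *contents = NULL;

      if (!g_file_load_contents (file, NULL, &contents, NULL, NULL, error))
        return FALSE;   /* file and contents are freed automatically here */

      g_print ("Loaded: %s\n", contents);
      return TRUE;      /* ...and here too, with no explicit unref or free */
    }

    int
    main (void)
    {
      g_autoptr(GError) error = NULL;

      if (!load_greeting (&error))
        g_printerr ("Failed: %s\n", error->message);

      return 0;
    }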