This document summarizes the test process that LabKey uses to ensure reliable, performant releases of LabKey Server:

  1. A client proposes a new feature or enhancement and provides a set of requirements and scenarios.
  2. LabKey writes a specification that details changes to be made to the system. The specification often points out areas and scenarios that require special attention during testing.
  3. Specifications are reviewed by developers, testers, and clients.
  4. Developers implement the functionality based on the specification.
  5. If deviations from the specification are needed (e.g., unanticipated complications in the code or implications that weren’t considered in the original specification), these are discussed with other team members and the client, and the specification is revised.
  6. If the change modifies existing functionality, the developer ensures relevant existing unit, API, and browser-based automated tests continue to pass. Any test failures are addressed before initial commit.
  7. If the change adds new functionality, then new unit, API, and/or browser-based automated tests (as appropriate, based on the particular functionality) are written and added to our test suites before the feature is delivered.
  8. Developers perform ad hoc testing on the areas they change before they commit.
  9. Developers also run the Developer Regression Test (DRT) locally before every commit. This quick, broad automated test suite ensures no major functionality has been affected.
  10. TeamCity, our continuous integration server farm, builds the system after every commit and immediately runs a large suite of tests, the Build Verification Test (BVT).
  11. TeamCity runs much larger suites of tests on a nightly basis. In this way, all automated tests are run on the system every day, on a variety of platforms (operating systems, databases, etc.).
  12. TeamCity runs the full suite again on a weekly basis, using the oldest supported versions of external dependencies (databases, Java, Tomcat, etc.), and in production mode. This ensures compatibility with the range of production servers that we support.
  13. Test failures are reviewed every morning. Test failures are assigned to an appropriate developer or tester for investigation and fixing.
  14. A buddy tester (a different engineer who is familiar with the area and the proposed changes) performs ad hoc testing on the new functionality. This is typically 3 – 5 hours of extensive testing of the new area.
  15. Clients obtain test builds by syncing and building anytime they want, downloading a nightly build from LabKey, retrieving a monthly sprint build, etc. Clients test the functionality they've sponsored, reporting issues to the development team.
  16. As an open-source project, the public syncs, builds, and tests the system and reports issues via our support boards.
  17. Production instances of LabKey Server send reports of unhandled exceptions to a LabKey-managed exception reporting server. Exception reports are reviewed, assigned, investigated, and fixed. In this way, every unhandled exception from the field is addressed. (Note that administrators can control the amount of exception information their servers send, even suppressing the reports entirely if desired.)
  18. Developers and testers are required to clear their issues frequently. Open issues are prioritized (1 – 4). Pri 1 bugs must be addressed immediately, Pri 2 bugs must be addressed by the end of the month, and Pri 3 bugs by the end of the four-month release cycle. Resolved issues and exception reports must be cleared monthly.
  19. At the end of each monthly sprint, a "sprint branch" is made and a sprint build is created. This build is then:
    1. Tested by the team on the official LabKey staging server.
    2. Deployed for public, production testing.
    3. Pushed to key clients for testing on their test and staging servers.
  20. The fourth month of every release cycle is treated as a "stabilization month," where the product is prepared for production-ready release.
    1. All team members are required to progressively reduce their issue counts to zero by the end of the month.
    2. Real-world performance data is gathered from customer production servers (only for customers who have agreed to share this information). Issues are opened for problem areas.
    3. Performance testing is performed and issues are addressed.
  21. The final release process occurs in the two weeks after the stabilization month:
    1. The sprint build at the end of the stabilization month is considered the first release candidate.
    2. This build is tested on staging and deployed to all LabKey-managed production servers, including our hosted server and various client servers that we manage.
    3. The build is pushed to all key clients for extensive beta testing.
    4. Clients provide feedback, which results in issue reports and fixes.
    5. Once clients verify the stability of the release, clients deploy updated builds to their production servers.
  22. After all issues are closed (all test suites pass, all client concerns are addressed, etc.) an official, production-ready release is made.
  23. Bugs discovered by LabKey or clients after official release are considered for hotfix treatment if they meet the criteria documented here.
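The issue-triage deadlines in step 18 (Pri 1 immediately, Pri 2 by the end of the month, Pri 3 by the end of the four-month release cycle) can be expressed as a simple rule. The sketch below is purely illustrative — the function name and signature are assumptions, not part of LabKey's codebase:

```python
from datetime import date, timedelta

def triage_deadline(priority: int, opened: date, cycle_end: date) -> date:
    """Latest acceptable fix date for an issue, per the triage policy above.

    `opened` is the date the issue was filed; `cycle_end` is the last day of
    the current four-month release cycle. (Hypothetical helper for illustration.)
    """
    if priority == 1:
        # Pri 1 bugs must be addressed immediately.
        return opened
    if priority == 2:
        # Pri 2 bugs must be addressed by the end of the month they were opened in.
        if opened.month == 12:
            return date(opened.year, 12, 31)
        return date(opened.year, opened.month + 1, 1) - timedelta(days=1)
    if priority == 3:
        # Pri 3 bugs must be addressed by the end of the four-month release cycle.
        return cycle_end
    raise ValueError(f"no deadline rule stated for priority {priority}")
```

For example, a Pri 2 issue opened mid-February of a leap year is due February 29, while a Pri 3 issue opened the same day is due at the end of the release cycle.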

