LabKey performs the vast majority of product testing during the development cycle of a new release. The development of every new feature includes buddy testing, creation of automated unit tests, and creation of browser-based integration tests. Our automated servers run large suites of tests against every commit and even larger suites on a nightly and weekly basis to identify new bugs and regressions. We distribute monthly sprint builds to many clients, encouraging them to exercise these builds on their test servers and promptly report problems they find in new and existing functionality. After our final (stabilization) sprint, we push bi-weekly release candidates to our clients and ask them to validate these on their servers using their own data. This culminates in LabKey making an official release of a build that has been tested thoroughly by us and many of our clients, typically a couple of weeks after the end of the stabilization sprint.
Our clients often find bugs in released builds. In most cases, we fix these problems as part of the next release cycle. We don’t typically fix bugs in released products for several reasons:
We evaluate every hotfix candidate using the following factors and questions:
Evaluating a hotfix candidate is a subjective risk-versus-reward trade-off. In most cases, we and our clients find that the reward is simply not worth the risk and cost. But, as hinted in #7 above, the length of time since the last release does affect the evaluation. A critical issue discovered shortly after release needs to be evaluated seriously, but an issue that isn't reported until three months into a release is almost certainly not a high priority (we release new versions every four months). Combining this temporal element with the other factors leads to some general guidelines that we use to quickly assess whether an issue is a hotfix candidate.
|Issue|Hotfix candidate?|
|---|---|
|Significant data loss issue|Always (until next release)|
|Blocking issue in old functionality (regression)|Two months after release|
|Blocking issue in new functionality|One month after release|
|Issue present in previous release|Not a hotfix candidate|
|New feature or improvement request|Not a hotfix candidate|
|Issue with reasonable workaround|Not a hotfix candidate|
|Issue with limited impact|Not a hotfix candidate|
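The guidelines above amount to a simple decision rule combining issue severity with time since release. As a minimal illustrative sketch (the issue categories, function name, and day-based windows below are our own assumptions, not LabKey tooling), the triage could be expressed as:

```python
from datetime import timedelta

# Hypothetical sketch of the hotfix guidelines; category names and
# 30/60-day windows are illustrative stand-ins for the table above.
def is_hotfix_candidate(issue_type: str, time_since_release: timedelta) -> bool:
    """Return True if the issue is still within its hotfix window."""
    windows = {
        "data_loss": None,                          # always a candidate until the next release
        "regression": timedelta(days=60),           # blocking issue in old functionality
        "new_feature_blocker": timedelta(days=30),  # blocking issue in new functionality
    }
    # Feature requests, issues with workarounds, limited-impact issues,
    # and issues present in previous releases are never candidates.
    if issue_type not in windows:
        return False
    window = windows[issue_type]
    return window is None or time_since_release <= window

print(is_hotfix_candidate("regression", timedelta(days=45)))           # True
print(is_hotfix_candidate("new_feature_blocker", timedelta(days=45)))  # False
```

As the policy notes, the actual decision also weighs the risk and cost of the fix, so a rule like this would only be a first-pass filter.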
The above guidelines are not hard and fast rules. The risks or costs of a fix may preclude an otherwise worthy hotfix. On the other hand, we'll occasionally take a simple, low risk fix that doesn’t meet these criteria.
We encourage all clients to test new functionality promptly (as the sprint builds are made available) and perform regular regression testing of important existing functionality. Reporting all issues before public release is the best way to avoid hotfixes entirely.