To export a study folder, go to Admin > Manage Study and click the Export Study button at the bottom of the page. The same export interface is also available on the Export tab of the Admin > Folder > Management page and is described in Export / Import a Folder.
Details about the study objects can be found here: Export Study Objects.
Importing a study archive is the same as importing any folder archive, with a few additional considerations. Like other imports, you first create a folder of type "Study", then navigate to it before importing. You can import from an exported archive, or from another study folder on the same server. From either source, you can select which study objects to import; for example, you can bring in an existing study's configuration and structure without including the actual dataset data. If you do want to import dataset data or specimen data into a new study, you must use the archive import option, as the folder template method does not support data transfer.
By default, queries are validated upon import of a study archive, and any validation failure causes the import job to raise an error. To suppress this validation step, uncheck the Validate all queries after import option. If you are using the check-for-reload action in the custom API, passing its suppress-query-validation parameter achieves the same effect as unchecking this box.
By default, new visit rows are created in the study during import for any dataset or specimen rows that reference a new, undefined visit. If you instead want the import to fail when it would create visits not already present in the destination study or the imported visit map, check the box "Fail Import for Undefined Visits".
If you are importing a new visit map for a visit-based study, the import will fail if the new map causes overlapping visits.
A study without its actual dataset data can be used as a template for generating new studies with the same configuration, structure, and layout. To generate one, you can either:
When you import any study archive, or import from a study template, you can select only the objects of interest.
It is also possible to import a study into multiple folders at once. More information about these options can be found here: Advanced Folder Import Options.
A study can be configured to reload study data from the pipeline root, either manually or automatically at pre-set intervals, which can be useful for refreshing studies whose data is managed externally. For example, if the database of record is SAS, a SAS script could automatically generate TSVs nightly to be reloaded into LabKey Server. This simplifies the process of using LabKey tools for administration, analysis, reporting, and data integration without forcing migration of existing storage or data collection frameworks.
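As an illustration of that nightly export step, here is a minimal sketch in Python standing in for the SAS script mentioned above. The database, table, column names, and output path are all hypothetical; a real export would mirror your study's dataset definitions.

```python
# Minimal sketch of a nightly export from an external database of record to a
# TSV that a LabKey study reload can pick up. All names and paths are assumptions.
import csv
import sqlite3  # stand-in for whatever driver your database of record uses

EXPORT_PATH = "/labkey/pipeline/MyStudy/datasets/Demographics.tsv"  # hypothetical pipeline location

conn = sqlite3.connect("clinical.db")  # hypothetical source database
rows = conn.execute(
    "SELECT participant_id, visit_date, age, gender FROM demographics"
)

with open(EXPORT_PATH, "w", newline="") as out:
    writer = csv.writer(out, delimiter="\t")
    writer.writerow(["ParticipantId", "Date", "Age", "Gender"])  # dataset column headers
    writer.writerows(rows)

conn.close()
```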
Caution: Reloading a study will replace existing data with the data contained in the imported archive.
To set up reload of study data:
When reload is attempted, the server checks the modification time on a file named studyload.txt in the pipeline root folder. If it has changed since the last reload, the server reloads the study archive from this folder. LabKey Server ignores the contents of studyload.txt, looking only at the file's modification timestamp.
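For example, a scheduled job can simply update the file's timestamp once new data files are in place. A minimal sketch, assuming a hypothetical pipeline root path:

```python
# Bump the modification time of studyload.txt so the server picks up the change
# on its next reload check. The pipeline root path is an assumption.
from pathlib import Path

PIPELINE_ROOT = Path("/labkey/pipeline/MyStudy")  # hypothetical pipeline root

# The server ignores the file's contents, so touching an empty file is enough.
(PIPELINE_ROOT / "studyload.txt").touch()
```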
Study reload can be used to copy study data from an external database of record into LabKey Server, often on a nightly basis, to enable analysis and integration. The process typically involves an automated script that follows steps similar to these:
The skipQueryValidation parameter is optional. If provided, it instructs the server to skip the query validation step that normally runs after the study is reloaded. That validation flags query errors that might otherwise go unnoticed, but it can be time-consuming.
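As a hedged sketch, the check-for-reload action could be invoked over HTTP with query validation suppressed, as below. The action path, server URL, and credentials are assumptions; consult your server's API documentation for the exact endpoint and supported authentication methods.

```python
# Sketch of triggering a study reload check via the check-for-reload action,
# passing skipQueryValidation to skip post-reload query validation.
# URL, container path, and credentials are assumptions.
import requests

BASE_URL = "https://labkey.example.com"   # hypothetical server
CONTAINER = "MyProject/MyStudy"           # hypothetical project/folder path

resp = requests.post(
    f"{BASE_URL}/study/{CONTAINER}/checkForReload.api",   # assumed action path
    params={"skipQueryValidation": "true"},               # parameter described above
    auth=("reload_bot@example.com", "example-password"),  # basic auth shown for illustration
    timeout=60,
)
resp.raise_for_status()
print(resp.text)
```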