Table of Contents

     Set Up a Development Machine
       Access the Source Code
       Add the Proteomics Binaries
       Gradle Build Overview
       Build LabKey from Source
       Customize the Build
       SVN and Git Ignore Configurations
       Build Offline
       Gradle Cleaning
       Gradle Properties
       Gradle: How to Add Modules
       Gradle: Declare Dependencies
       Gradle Tips and Tricks
       Run Selenium Tests
       Create Production Builds
       Machine Security
       Notes on Setting up OSX for LabKey Development
       Tomcat 7 Encoding
       Troubleshoot Development Machines
       Premium Resource: IntelliJ Reference
     LabKey Client APIs
       JavaScript API
         Tutorial: Create Applications with the JavaScript API
           Step 1: Create Request Form
           Step 2: Confirmation Page
           Step 3: R Histogram (Optional)
           Step 4: Summary Report For Managers
           Repackaging the App as a Module
         Tutorial: Use URLs to Pass Data and Filter Grids
           Choose Parameters
           Show Filtered Grid
         Tutorial: Visualizations in JavaScript
           Step 1: Export Chart as JavaScript
           Step 2: Embed the Script in a Wiki
           Modify the Exported Chart Script
           Display the Chart with Minimal UI
         JavaScript API - Examples
         Adding a Report to a Data Grid with JavaScript
         Export Data Grid as a Script
         Custom HTML/JavaScript Participant Details View
         Example: Master-Detail Pages
         Custom Button Bars
           Premium Resource: Invoke JavaScript from Custom Buttons
         Premium Resource: Sample Status Demo
         Insert into Audit Table via API
         Programming the File Repository
         Declare Dependencies
         Loading ExtJS On Each Page
         Licensing for the ExtJS API
         Search API Documentation
         Naming & Documenting JavaScript APIs
           Naming Conventions for JavaScript APIs
           How to Generate JSDoc
           JsDoc Annotation Guidelines
       Java API
         LabKey JDBC Driver
         Remote Login API
         Security Bulk Update via API
       Perl API
       Python API
         Premium Resource: Python API Demo
       Rlabkey Package
         Troubleshoot Rlabkey
         Premium Resource: Example Code for QC Reporting
       SAS Macros
         SAS Setup
         SAS Macros
         SAS Security
         SAS Demos
       HTTP Interface
         Examples: Controller Actions / API Test Page
         Example: Access APIs from Perl
       External ODBC Connections
         ODBC Data Sources and SQL Server Reporting Service (SSRS)
         Secure ODBC Connections
       API Keys
       Compliant Access via Session Key
     Develop Modules
       Premium Resource: Migrate Module from SVN to GitHub
       Tutorial: Hello World Module
       Map of Module Files
       Example Modules
       Modules: Queries, Views and Reports
         Module Directories Setup
         Module Query Views
         Module SQL Queries
         Module R Reports
         Module HTML and Web Parts
       Modules: JavaScript Libraries
       Modules: Assay Types
         Assay Custom Domains
         Assay Custom Details View
         Loading Custom Views
         Example Assay JavaScript Objects
         Assay Query Metadata
         Customize Batch Save Behavior
         SQL Scripts for Module-Based Assays
         Transformation Scripts
           Example Workflow: Develop a Transformation Script (perl)
           Example Transformation Scripts (perl)
           Transformation Scripts in R
           Transformation Scripts in Java
           Transformation Scripts for Module-based Assays
           Run Properties Reference
           Transformation Script Substitution Syntax
       Warnings in Transformation Scripts
       Modules: Folder Types
       Modules: Query Metadata
       Modules: Report Metadata
       Modules: Custom Footer
       Modules: Custom Header
       Modules: SQL Scripts
       Modules: Database Transition Scripts
       Modules: Domain Templates
       Modules: Java
         Module Architecture
         Getting Started with the Demo Module
         Tutorial: Hello World Java Module
         The LabKey Server Container
         Implementing Actions and Views
         Implementing API Actions
         Integrating with the Pipeline Module
         Integrating with the Experiment API
         Using SQL in Java Modules
         GWT Integration
         GWT Remote Services
         Database Development Guide
         Java Testing Tips
         HotSwapping Java classes
       Modules: Custom Login Pages
       ETL: Extract Transform Load
         Tutorial: Extract-Transform-Load (ETL)
           ETL Tutorial: Set Up
           ETL Tutorial: Run an ETL Process
           ETL Tutorial: Create a New ETL Process
         ETL: Define an ETL Using XML
         ETL: User Interface
         ETL: Configuration and Schedules
         ETL: Filter Strategies
         ETL: Column Mapping
         ETL: Queuing ETL Processes
         ETL: Stored Procedures
           ETL: Stored Procedures in MS SQL Server
           ETL: Functions in PostgreSQL
           ETL: Check For Work From a Stored Procedure
         ETL: SQL Scripts
         ETL: Remote Connections
         ETL: Logs and Error Handling
         ETL: All Jobs History
         ETL: Examples
         ETL: Reference
         Premium Resource: ETL Best Practices
       Deploy Modules to a Production Server
       Upgrade Modules
       Main Credits Page
       Module Properties Reference
       Node.js Build Dependency
     Common Development Tasks
       Trigger Scripts
         Availability of Server-side Trigger Scripts
       Script Pipeline: Running R and Other Scripts in Sequence
       LabKey URLs
         URL Actions
       How To Find schemaName, queryName & viewName
       LabKey/Rserve Setup Guide
       Web Application Security
         HTML Encoding
         Cross-Site Request Forgery (CSRF) Protection
       Profiler Settings
       Using loginApi.api
       Configuring IntelliJ for XML File Editing
       Premium Resource: LabKey Coding Standards and Practices
       Premium Resource: Feature Branch Workflow
       Premium Resource: Git Branch Naming
       Premium Resource: Best Practices for Writing Automated Tests
     LabKey Open Source Project
       Release Schedule
       Premium Resource: Previous Releases
         Premium Resource: Previous Release Details
       Open Source Project: Entering Issues
       Branch Policy
       Test Procedures
       Run Automated Tests
       Hotfix Policy
       Submit Contributions
         Confidential Data
         CSS Design Guidelines
         UI Design Patterns
         Documentation Style Guide
         Check in to the Source Project
         Renaming files in Subversion
     Developer Reference


Developer Resources

LabKey Server is broadly API-enabled, giving developers rich tools for building custom applications on the LabKey Server platform. Client libraries make it easy to read and write data to the server using familiar languages such as Java, JavaScript, SAS, Python, Perl, and R. Developers can use other languages (such as PHP) to interact with LabKey Server through HTTP requests; however, using the client libraries is recommended.

Stack diagram for the LabKey Server Platform:

Client API Applications

Create applications by adding API-enhanced content (such as JavaScript) to wiki or HTML pages in the file system. Application features can include custom reports, SQL query views, HTML views, R views, charts, folder types, assay definitions, and more.

  • LabKey Client APIs - Write simple customization scripts or sophisticated integrated applications for LabKey Server.
  • Tutorial: JavaScript/HTML Application - Create an application to manage reagent requests, including a web-based request form, confirmation page, and summary report for managers. It reads and writes to the database.
  • Tutorial: JavaScript Chart APIs - Select data from the database and render as a chart using the JavaScript API.

Scripting and Reporting

LabKey Server also includes 'hooks' for using scripts to validate and manipulate data during import, and allows developers to build reports that display data within the web user interface.

Module Applications

Developers can create larger features by encapsulating them in modules.

LabKey Server Open Source Project

Set Up a Development Machine

This topic provides step-by-step instructions for acquiring the LabKey Server source code, installing required components, and building LabKey Server from source. The instructions are written for a Windows machine; to set up development on a Mac or Linux machine, use them in conjunction with the topic Notes on Setting up OSX for LabKey Development.


A checklist, guiding you through the setup process, is available for download: LabKey_Development_Server_Checklist.xlsx

Obtain the LabKey Source Files

The LabKey source files are stored in two version control systems: (1) the build system and test sample data are stored in a Subversion (SVN) repository and (2) the core platform and all commonly distributed modules are stored in multiple GitHub repositories. To build LabKey Server, you need to check out code from both version control systems.

Install TortoiseSVN

The following instructions apply to Windows machines. To install SVN on non-Windows machines see Access the Source Code.

  • Download the latest version of TortoiseSVN.
  • Install TortoiseSVN on your local computer.
  • On the list of features to install, include the command line client tools.
  • Add the TortoiseSVN/bin directory to your PATH (if it was not automatically added).

Checkout LabKey Source Files

  • Create a new directory to hold the LabKey source files, for example, on Windows: C:\dev\labkey\trunk. This directory is referred to as <LABKEY_HOME> below.
  • In Windows Explorer, right-click the new directory and select SVN Checkout.
  • Enter the URL for the LabKey repository:
    • No username/password is required.
  • Click OK to checkout the source files.

Install a Git Client

Clone Core Modules from GitHub

  • Clone the following core repositories. Note that the first two go into <LABKEY_HOME>\server\modules, while the third goes into <LABKEY_HOME>\server.
<LABKEY_HOME>\server\modules> git clone

<LABKEY_HOME>\server\modules> git clone

<LABKEY_HOME>\server> git clone

Install Required Prerequisites



Install a Database

Node.js and npm

The LabKey build depends on Node.js and the node package manager (npm). The build process automatically downloads and installs the versions of npm and node that it requires. You should not install npm and node yourself. For details on the Node.js dependency see Node.js Build Dependency.

Gradle Configuration

Note that you do not need to install Gradle. LabKey uses the gradle wrapper to make updating version numbers easier. The gradle wrapper script (either gradlew or gradlew.bat) is included in the SVN sync and is already in the <LABKEY_HOME> directory.

Create a gradle properties file to contain information about global settings for your gradle build following these steps:

  • Create a ".gradle" directory in your home directory (on OSX and Linux: /Users/<you>/.gradle. On Windows: C:\Users\<you>\.gradle). Note: the Windows file explorer may not allow you to create a folder beginning with a period. To work around this, navigate to C:\Users\<you>\ in a command prompt and type mkdir .gradle.
  • Create a file in the .gradle directory using the following process:
    • Copy the file <LABKEY_HOME>/gradle/global_gradle.properties_template to C:\Users\<you>\.gradle, then rename the copy to create your gradle properties file.
    • In the file, substitute your <CATALINA_HOME> directory (the location of your Tomcat installation), including the specific version number, for the value after systemProp.tomcat.home. Use forward slashes, not backslashes, for the Tomcat path, even on Windows. For example:
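For example, the relevant line in your gradle properties file might look like the following (the Tomcat path and version number are assumptions; substitute your own installation directory, using forward slashes):

```
systemProp.tomcat.home=C:/apache/apache-tomcat-9.0.41
```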

Environment Variables and System PATH

    • Create or modify the system environment variable JAVA_HOME so it points to your JDK installation location (for example, C:\java\jdk-##.#.#). Note: If you've already set the JAVA_HOME variable to point to your installation of the JRE, you should modify it to point to the JDK.
    • Create or modify the system environment variable CATALINA_HOME so that it points to your Tomcat installation (for example, C:\apache\tomcat-#.#.##).
  • PATH
    • Add the <LABKEY_HOME>\build\deploy\bin directory to your system PATH. This directory won't exist yet, but add it to the path anyway.
For example, C:\dev\labkey\trunk\build\deploy\bin.

OSX Example

On OSX, for example, you would place the environment variables in your .bash_profile:

export JAVA_HOME=`/usr/libexec/java_home -v 1.11`
export CATALINA_HOME=$HOME/apps/tomcat
export LABKEY_HOME=$HOME/labkey/trunk
export PATH=$LABKEY_HOME/build/deploy/bin:$PATH

GWT Browser Settings (Optional)

The default developer build is optimized for Chrome, but the target browser can be controlled either through a command line parameter or by setting the 'gwtBrowser' property in your gradle properties file. Available settings are: gwt-user-chrome (the default value), gwt-user-firefox, or gwt-user-ie.
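As a sketch, assuming the property is read from your gradle properties file, targeting Firefox would look like:

```
gwtBrowser=gwt-user-firefox
```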

Open the LabKey Project in IntelliJ

The LabKey development team develops LabKey using IntelliJ IDEA. You can use the license-free Community Edition if you are planning on modifying or extending the LabKey source code. Below we describe how to configure the IntelliJ development environment; we recommend employing the same general principles if you are using a different development environment. Some developers have experimented with Eclipse as the IDE and you can find some set up details on the Developer Message Board.

Install IntelliJ

  • Download and install the latest version of IntelliJ IDEA. Either the Community or Ultimate Editions will work.

Configure the LabKey Project in IntelliJ

  • Create the workspace.xml file as follows:
    • Copy the file <LABKEY_HOME>/.idea/workspace.template.xml. Rename the copy to create a file named <LABKEY_HOME>/.idea/workspace.xml
    • This file configures the debug information for LabKey project. To review the debug settings go to Run > Edit Configurations in IntelliJ.
  • Open the LabKey project in IntelliJ:
    • Launch IntelliJ.
    • If your IntelliJ install is brand new, you will see the "Welcome to IntelliJ" pop up screen. Click Open.
    • If you have previously installed IntelliJ, select File > Open.
    • Select the LabKey IntelliJ project directory: <LABKEY_HOME>
    • If asked about an "Unlinked Gradle project", DO NOT "Import Gradle project" in the default way from IntelliJ. See the troubleshooting section Starting Over with Gradle + IntelliJ for more information.
    • Select File > Settings > Appearance & Behavior > Path Variables. (On a Mac, the menu path is IntelliJ IDEA > Preferences > Appearance & Behavior > Path Variables).
    • Click the green plus icon in the upper right. Set the CATALINA_HOME path variable to <CATALINA_HOME>, the root directory of your Tomcat installation, for example, C:\labkey\apps\apache\apache-tomcat-#.#.##.
    • Click OK to close the Settings window.
  • Configure the Target JDK
    • In IntelliJ, select File > Project Structure.
    • Under Project Settings, click Project.
    • Under Project SDK click New and then click JDK.
    • Browse to the path of your JDK, for example, (C:\java\jdk-##.#.#), and click OK.
    • Click Edit. Change the name of the JDK to "labkey".
    • Click OK to close the Project Structure window.
  • Open the Gradle tool window at View > Tool Windows > Gradle.
    • Click the refresh icon. This sync can take 15-30 minutes. You should see messages about its progress; if not, something is probably hung up. Wait for the sync to complete before proceeding with further IntelliJ configuration steps.
  • Edit configuration options as follows:
    • Select Run > Edit Configurations. (If the menu is greyed-out, wait until IntelliJ finishes indexing the project files.)
    • Open the Application node in the left panel and select LabKey Dev.
    • VM options: Confirm that the path separators are appropriate for your operating system. On Windows, ensure that the paths to the jar files are separated by semicolons. For example: "./bin/bootstrap.jar;./bin/tomcat-juli.jar;C:/Program Files (x86)/JetBrains/IntelliJ IDEA 2016.3.3/lib/idea_rt.jar". For Macs, the paths should be separated by a colon.
    • Confirm that Working Directory points to your current Tomcat installation (i.e. to CATALINA_HOME).
    • Confirm that the dropdown labeled Use classpath of module is set to api_main or org.labkey-api_main (whichever is available).
    • Click OK, to close the Run/Debug Configurations window.
  • Be sure that IntelliJ has enough heap memory. The default max is OK if you’re just dealing with the core modules, but you will likely need to raise the limit if you’re adding in customModules, optionalModules, etc. 3GB seems sufficient.

Build and Run LabKey

Configure the Appropriate .properties File

The LabKey source includes two configuration files, one for use with PostgreSQL and one for use with Microsoft SQL Server, each specifying JDBC settings, including URL, port, username, password, etc.

  • If using PostgreSQL, open the PostgreSQL configuration file in <LABKEY_HOME>/server/configs/
  • If using MS SQL Server, open the MS SQL Server configuration file in <LABKEY_HOME>/server/configs/
  • Edit the appropriate file, adding your values for the jdbcUser and jdbcPassword. (This password is the one you specified when installing PostgreSQL or MS SQL Server. If your password contains an ampersand or other special XML characters, you will need to escape it in the .properties file, as the value will be substituted into an XML template without encoding. For example, if your JDBC password is "this&that", then use the escaped version "this&amp;that".)
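As a sketch of the escaping involved, the following shell function (hypothetical; not part of LabKey) applies the standard XML entity substitutions to a password before you paste it into the .properties file:

```shell
# Hypothetical helper: XML-escape a JDBC password. The ampersand must be
# replaced first, so that the '&' introduced by the other entities is not
# itself escaped a second time.
escape_xml() {
  printf '%s' "$1" | sed -e 's/&/\&amp;/g' \
                         -e 's/</\&lt;/g' \
                         -e 's/>/\&gt;/g' \
                         -e "s/'/\&apos;/g" \
                         -e 's/"/\&quot;/g'
}

escape_xml 'this&that'   # prints: this&amp;that
```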

Run pickPg or pickMSSQL

  • In a command window, go to the directory <LABKEY_HOME>
  • Run "gradlew pickPg" or "gradlew pickMSSQL" to configure labkey.xml with the corresponding database settings.
  • You may need to manually create the directory <CATALINA_HOME>/conf/Catalina/localhost.

When you build LabKey, the values that you've specified in your .properties file are copied into the LabKey configuration file, labkey.xml, overwriting previous values. This file is then copied into <CATALINA_HOME>/conf/Catalina/localhost.

Build LabKey

To learn more about the build process, the various build targets available, and how the source code is transformed into deployed modules, see Build LabKey from Source.

  • On the command line, go to the <LABKEY_HOME> directory, and invoke the gradle build target:
    gradlew deployApp

To control which modules are included in the build, see Customize the Build.

Run LabKey Server

To run and debug LabKey:
  • Select Run > Debug 'LabKey Dev' in IntelliJ.
  • If Tomcat starts up successfully, navigate your browser to http://localhost:8080/labkey to begin debugging (assuming that your local installation of Tomcat is configured to use the Tomcat default port 8080).

While you are debugging, you can usually make changes, rebuild, and redeploy LabKey to the server without stopping and restarting Tomcat. Occasionally you may encounter errors that do require stopping and restarting Tomcat.

Post-installation Steps

Install R

Run the Basic Test Suite

  • Run the command 'gradlew :server:test:uiTest -Psuite=DRT' from within your <LABKEY_HOME> directory, to initiate automated tests of LabKey's basic functionality.

Note that 'R' must first be configured for these tests to run. Other automated tests are available. For details, see Run Automated Tests.

Optional Modules on GitHub

Many optional modules are available from the LabKey repository on GitHub. To include these modules in your build, install a Git client and clone individual modules into the LabKey Server source.

Clone Modules from LabKey's GitHub Repository

  • To add a GitHub module to your build, clone the desired module into <LABKEY_HOME>/server/optionalModules. For example, to add the 'workflow' module:
C:\svn\trunk\server\optionalModules>git clone

Note that you can get the URL by going to the module page on GitHub, clicking Clone or Download, and copying the displayed URL.

Manage GitHub Modules via IntelliJ

Once you have cloned a GitHub module, you can have IntelliJ handle any updates:

To add the GitHub-based module to IntelliJ (and have IntelliJ generate an .iml file for the module):

  • Edit your settings.gradle file to include the new module
  • In IntelliJ, open the Gradle tool window at View > Tool Windows > Gradle.
  • Refresh the Gradle window by clicking the arrow circle in the upper left of the Gradle window
To update the GitHub-based module using IntelliJ:
  • To have IntelliJ handle source updates from GitHub, go to File > Settings (or Intellij > Preferences).
  • Select Version Control.
  • In the Directory panel, scroll down to the Unregistered roots section, select the module, and click the Plus icon in the lower left.
  • In the Directory panel, select the target module and set its VCS source as Git, if necessary.
  • Note that IntelliJ sometimes thinks that subdirectories of the module, like module test suites, have their sources in SVN instead of Git. You can safely delete these SVN sources using the Directory panel.
  • To sync to a particular GitHub branch: in IntelliJ, go to VCS > Git > Branches. A popup menu will appear listing the available Git modules. Use the popup menu to select the branch to sync to.
If you have added a new module to your enlistment, be sure to customize the build to include it in your Gradle project and then refresh the Gradle window to incorporate it into IntelliJ, as described above.
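The branch sync above can also be done from the command line; the branch name here is hypothetical:

```
git fetch origin
git checkout release18.3   # hypothetical branch name
```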

Install Optional Components

Mass Spec and Proteomics Tools

LabKey Server's mass spectrometry and proteomics binaries are provided as a separate (and optional) enlistment. To add these binaries, follow the instructions in the topic: Add the Proteomics Binaries

Related Topics

Premium Resource Available

Subscribers to premium editions of LabKey Server can learn more about using IntelliJ in this topic

Learn more about premium editions

Access the Source Code

The LabKey source files are stored in two version control systems: (1) the build system and test sample data are stored in a Subversion (SVN) repository and (2) the core platform and all commonly distributed modules are stored in multiple GitHub repositories.

To access the source code, you'll need to install both a Subversion client and a Git client, then checkout/clone the desired repositories.


If you are developing on Windows, we recommend that you install TortoiseSVN, a helpful graphical interface to Subversion. If you are developing on a Mac, Subversion is shipped with MacOS X and is accessible from the terminal.

Install TortoiseSVN (Recommended for Windows)

  • Download the latest version of TortoiseSVN from the TortoiseSVN download page.
  • Install TortoiseSVN on your local computer.
  • On the list of features to install to the local hard drive, include the command line tools
  • Add the TortoiseSVN/bin directory to your PATH

Check Out Source Files Using TortoiseSVN

TortoiseSVN integrates with the Windows file system UI. To use the TortoiseSVN commands, open Windows Explorer, right-click a file or folder, and select a SVN command.

  • Create a new directory in the Windows file system. This will be the root directory for your enlistment.
  • In Windows Explorer, right-click the new directory and select SVN Checkout...
  • Enter the URL for the LabKey repository
  • Make sure that the checkout directory refers to the location of your root directory.
  • Click OK to create a local enlistment. At this point, all the SVN-based LabKey source files and sample data will be copied to your computer.

Install Command Line SVN Client (Recommended for Non-Windows Operating Systems)

  • Download the most recent Subversion package by visiting the Apache Subversion Packages page and choosing the appropriate link for your operating system.
  • Install Subversion on your local computer following instructions from the Apache Subversion website. Provide the server and account information from above.
  • Extensive Subversion documentation is available in the Subversion Book.

Check Out Source Files Using Command Line SVN

Use the svn checkout command, for example:
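A minimal sketch (the repository URL is intentionally left as a placeholder; substitute the read-only URL given under SVN Access URLs below):

```
svn checkout <LABKEY_SVN_URL> trunk
```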

(Optional) Add the Mass Spec and Proteomics Binaries

LabKey Server's mass spectrometry and proteomics binaries are provided as a separate (and optional) enlistment. To add these binaries, follow the instructions in the topic: Add the Proteomics Binaries

SVN Access URLs

Read-only access is available using the following configuration:

Anonymous access is enabled, so no username/password is necessary. If you have a read-write account in the Subversion project, specify your username and password when accessing the repository.

More Information


The core platform and most server modules are located on GitHub. For a list of available modules see:

For details on acquiring the core server modules, see Clone Core Modules on GitHub.

The optional modules can be added to your build on a module-by-module basis. For details on installing a Git client and cloning individual modules see Optional Modules on GitHub.

Supported Versions

If you are running a production LabKey server, you should install only official releases of LabKey on that server. VCS access is intended for developers who wish to peruse, experiment with, and debug LabKey code against a test database. Daily drops of LabKey are not stable and, at times, may not even build. We cannot support servers running any version other than an officially released version of LabKey.

Related Topics

Add the Proteomics Binaries

LabKey makes available pre-built Windows binaries of various proteomics analysis tools, including executables such as X!Tandem, Comet, the Trans-Proteomic Pipeline, and ProteoWizard. This step is optional; it is a convenience for users and developers interested in developing proteomics functionality and/or running proteomics-related tests. TeamCity, LabKey's automated build and test system, is configured to automatically grab these tools as part of its normal build and test process.

These tools are available in a special location on the standard SVN server. For those who will be developing and testing proteomics-related functionality on Windows, we recommend checking out the current versions into a standard LabKey Server enlistment using the following commands (the same can be accomplished using TortoiseSVN or other tools) from your %LABKEY_ROOT%/external/windows directory:

svn co 
svn co
svn co

This will create separate subdirectories for each set of tools. Invoking "gradlew deployApp" will deploy the binaries into the standard %LABKEY_ROOT%/build/deploy/bin directory, where they will be available for use.

Related Topics

Gradle Build Overview

This topic provides an overview of LabKey's Gradle-based build system.

General Setup

Before following any of the steps below, you'll need to Set Up a Development Machine, including completing the Gradle Configuration steps.

In the steps below, we use LABKEY_ROOT to refer to the directory into which you checked out your SVN enlistment (i.e., the parent of the server directory).

Your First Gradle Commands

1. Execute a gradle command to show you the set of currently configured projects (modules). You do not need to install gradle and should resist the urge to do so. We use the gradle wrapper to make updating version numbers easier. The gradle wrapper script (either gradlew or gradlew.bat) is included in the SVN sync and is already in the <LABKEY_ROOT> directory.

On the command line, type ./gradlew projects (Mac/Linux) or gradlew projects (Windows)

2. Execute a gradle command to build and deploy the application

./gradlew :server:deployApp

This will take some time as it needs to pull down many resources to initialize the local caches. Subsequent builds will be much faster.

Changing the Set of Projects

Gradle uses the <LABKEY_ROOT>/settings.gradle file to determine which projects (modules) are included in the build. To include a different set of projects in your build, you will need to edit this file. By default, only modules in the server/modules directory and the server/test and server/test/modules directories are included in the build. See the file for examples of different ways to include different subsets of the modules.
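As a sketch, adding one extra module beyond the defaults might look like the following in settings.gradle (the module path is an assumption; see the examples in the file itself for the supported patterns):

```
// Hypothetical: include one additional optional module in the build.
include ':server:optionalModules:workflow'
```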

Commonly Used Gradle Commands

For a list of commonly used Gradle commands, see Build LabKey From Source.


See Gradle Tips and Tricks.


See the topic Gradle Cleaning.

IntelliJ Setup

Follow these steps so that IntelliJ can find all the source code and classpath elements, and can run tests.


See the troubleshooting section in Set Up a Development Machine.

Related Topics

Build LabKey from Source

The process of building and deploying LabKey Server from source is covered in this topic.

Build Directories

These are the various directories involved in the build process, listed in the order in which they are used in the build process:

  • Source directory - Where the source code lives. This is where developers do their work. The build process uses this as input only.
  • Build directory - Where all the building happens. This is the top-level output directory for the build. It is created if it does not exist when doing any of the build steps.
  • Module build directory - Where the building happens for a module. This is the output directory for a module's build steps. It is created if it does not exist when doing a build.
  • Staging directory - A gathering point for items that are to be deployed. This is just a way station for modules and other jars. The artifacts are collected here so they can be copied all at once into the deploy directory, preventing Tomcat from reloading the application multiple times when multiple modules are being updated.
  • Deploy directory - The directory where the application is deployed and recognized by Tomcat.
The table below shows the commands used to create, modify and delete the various directories. Since various commands depend on others, there are generally several commands that will affect a given directory. We list here only the commands that would be most commonly used or useful.
Directory Type | Path (relative to <LABKEY_HOME>) | Added to by... | Removed from by... | Deleted by...
Build | build | Any build step | Any cleaning step | cleanBuild
Module build | build/modules/<module> | module | N/A | :server:modules:<module>:clean
One of the key things to note here is that the cleanBuild command removes the entire build directory, requiring all of the build steps to be run again. This is generally not what you want or need to do, since Gradle's up-to-date checks should be able to determine when things need to be rebuilt. (If that does not seem to be the case, please file a bug.) The one exception to this rule is when we change the LabKey version number after a release; then you need to do a cleanBuild to get rid of the artifacts with the previous release version in their names.

Application Build Steps

Source code is built into jar files (and other types of files) in a module's build directory. The result is a .module file, which contains potentially several jar files as well as other resources used by the module. This .module file is copied from the module's build directory into the staging directory and from there into the deploy directory. This is all usually accomplished with the './gradlew deployApp' command. The 'deployApp' task also configures and copies the labkey.xml Tomcat context file. This file points to the deploy directory as a context path for Tomcat. Changes in that directory will therefore be noticed by Tomcat and cause it to reload the application.

Module Build Steps

A single module is deployed using the './gradlew deployModule' command. This command creates a .module file in the module's build directory, then copies it first into the staging modules directory and then into the deploy modules directory.

Build Targets

A few important targets:

gradlew tasks - Lists all of the available tasks in the current project.

gradlew pickPg / gradlew pickMSSQL - Specify the database server to use. The first time you build LabKey, you need to invoke one of these targets to configure your database settings. If you are running against PostgreSQL, invoke the pickPg target. If you are running against SQL Server, invoke the pickMSSQL target. These targets copy the settings specified in the pg.properties or mssql.properties file, which you previously modified, to the LabKey configuration file, labkey.xml.

gradlew deployApp - Build the LabKey Server source for development purposes. This is a development-only build that skips many important steps needed for production environments, including GWT compilation for popular browsers, gzipping of scripts, production of Java & JavaScript API documentation, and copying of important resources to the deployment location. Builds produced by this target will not run in production mode.

gradlew :server:modules:wiki:deployModule (or, from within server/modules/wiki: gradlew deployModule) - For convenience, every module can be deployed separately. If your changes are restricted to a single module, building just that module is faster than a full build. For example, to build the wiki module: 'gradlew :server:modules:wiki:deployModule'.

gradlew deployApp -PdeployMode=prod - Build the LabKey Server source for deployment to a production server. This build takes longer than 'gradlew deployApp' but results in artifacts that are suitable and optimized for production environments.

gradlew cleanBuild - Delete all artifacts from previous builds.

gradlew cleanBuild deployApp - Delete all artifacts from previous builds and build the LabKey Server from source. This sequence is sometimes required after certain updates but should generally be avoided, as it causes Gradle to start over and not take advantage of previous build work.

gradlew startTomcat / gradlew stopTomcat - Starts/stops the server in dev mode.

gradlew projects - Lists the current modules included in the build.

gradlew :server:test:uiTest - Opens the test runner's graphical UI.

gradlew :server:test:uiTest -Psuite=DRT - Runs the basic automated test suite.

Gradle targets can also be invoked from within IntelliJ via the "Gradle projects" panel, but this has not been widely tested.

Parallel Build Feature

To speed up the build, consider using the parallel build feature in Gradle.

Use the --parallel flag on the command line:

./gradlew --parallel deployApp

Or set the Gradle property org.gradle.parallel to true by adding this line:

org.gradle.parallel=true

either in your user-level gradle.properties file (that is, the one in your <USER_DIR>/.gradle/ directory, where you originally set up your Artifactory password and such) or in the narrower-scoped <LABKEY_ROOT>/gradle.properties file. The --parallel flag shown above is the per-invocation equivalent of setting this property.
If Gradle warns that "JVM heap space is exhausted", add more memory as described in the topic Gradle Tips and Tricks.

Related Topics

Customize the Build

The LabKey Server module build process is designed to be flexible, consistent, and customizable. The process is driven by a manifest file that dictates which module directories to build. Module directories are listed either individually or using wildcards.

A few of the options this enables:

  • Modify your build manifest files to remove modules that you never use, speeding up your build.
  • Add your custom module directories to an existing build location (e.g., /server/modules) to automatically include them in the standard build.
  • Create a custom manifest file. See 'local_settings.gradle' below for an example.
After changing any of these files, we recommend that you sync gradle, by clicking the Refresh icon in IntelliJ's Gradle projects panel.


By default, the standard build tasks use the manifest file "/settings.gradle". You can edit this file to customize the modules that are built. settings.gradle includes a mix of wildcards and individually listed modules.

Wildcard Example. The following builds every module under the localModules directory that is contained in an app directory. Note that app is the parent of the module directories, not the module directory itself.

def excludedModules = ["inProgress"]
// The line below includes all modules under the localModules directory that are contained in a directory "app".
// The modules are subdirectories of app, not app itself.
BuildUtils.includeModules(this.settings, rootDir, ["**/localModules/**/app/*"], excludedModules);

Module Directory Example. The following builds every module directory in "server/modules":

def excludedModules = ["inProgress"]
// The line below includes all modules in the server/modules directory (except the ones indicated as to be excluded)
BuildUtils.includeModules(this.settings, rootDir, [BuildUtils.SERVER_MODULES_DIR], excludedModules);

Individual Module Example. The following adds the 'workflow' module to the build.

// The line below is an example of how to include a single module
include ":server:optionalModules:workflow"

Custom Module Manifests

You can also create custom module manifest files. For example, the following manifest file 'local_settings.gradle' provides a list of individually named modules:


include ':remoteapi:java'
include ':schemas'
include ':server:internal'
include ':server:api'
include ':server:bootstrap'
include ':server:modules:announcements'
include ':server:modules:audit'
include ':server:modules:core'
include ':server:modules:experiment'
include ':server:modules:filecontent'
include ':server:modules:pipeline'
include ':server:modules:query'
include ':server:modules:wiki'
include ':server:modules:bigiron'
include ':server:modules:dataintegration'
include ':server:modules:elisa'
include ':server:modules:elispotassay'
include ':server:modules:flow'
include ':server:modules:issues'
include ':server:modules:list'
include ':server:modules:luminex'
include ':server:modules:microarray'
include ':server:modules:ms1'
include ':server:modules:ms2'
include ':server:modules:nab'
include ':server:modules:search'
include ':server:modules:study'
include ':server:modules:survey'
include ':server:modules:visualization'
include ':server:customModules:targetedms'
include ':server:customModules:fcsexpress'

The following uses the custom manifest file in the build:

gradlew -c local_settings.gradle deployApp

gradle/settings files

Instead of supplying a local settings file and using the -c option, you can put your settings file in the <LABKEY_HOME>/gradle/settings directory and use the moduleSet property to tell Gradle to pick up this settings file. The property value should be the basename of the settings file you wish to use. For example,

gradlew -PmoduleSet=all
will cause Gradle to incorporate the file <LABKEY_HOME>/gradle/settings/all.gradle to define the set of projects (modules) to include.
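The moduleSet property combines with any build task. For instance (the module-set name myModules is a hypothetical example; the file would live at <LABKEY_HOME>/gradle/settings/myModules.gradle):

```
gradlew -PmoduleSet=myModules deployApp
```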

Skipping a Module

The build targets can be made to ignore a module if you define the property skipBuild for its project. You can do this by adding a gradle.properties file in the project's directory with the following content:

skipBuild
Note that we check only for the presence of this property, not its value.

Related Topics

SVN and Git Ignore Configurations

When you build with Gradle, it creates a .gradle directory in the directory in which the build command is issued. This .gradle directory contains various operational files and directories for Gradle and should not be checked in. Though the svn:ignore property has been updated to include .gradle in the root directory and server directory, you should probably tell svn to ignore it in every directory. The best way to do this is to edit the .subversion/config file in your home directory and add the following under the miscellany section.

global-ignores = .gradle

You may also want to add this to the .gitignore file for any of your Git modules:

.gradle

We’ve also updated the credits page functionality for the Gradle build: the build now produces a file dependencies.txt as a companion to the jars.txt file in a module’s resources/credits directory. This file does not need to be checked in, so it should also be ignored, and the best way to do that is again to change your .subversion/config file:

global-ignores = .gradle dependencies.txt

And in the .gitignore file for Git modules it would be this:

.gradle
dependencies.txt

Build Offline

Gradle will check by default for new versions of artifacts in the repository each time you run a command. If you are working offline or for some other reason don’t want to check for updated artifacts, you can set Gradle in offline mode using the --offline flag on the command line. If you don’t want to have to remember this flag, you can either set up an alias or use an init script to set the parameter startParameter.offline=true
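For example, a minimal init script sketch (the ~/.gradle/init.gradle location is one standard place Gradle looks for user-level init scripts):

```groovy
// ~/.gradle/init.gradle
// Forces offline mode for every build, equivalent to passing --offline each time.
startParameter.offline = true
```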

If you are running commands from within IntelliJ, there is also a setting for turning on and off offline mode. Select File > Settings > Build, Execution, Deployment > Build Tools > Gradle and check the box for Offline Work.

You can also toggle this setting in the Gradle window as shown in this screenshot:

Gradle Cleaning

Before learning more about how to use Gradle cleaning in this topic, familiarize yourself with the details of the build process covered in Gradle Build Overview. Understanding the progression of the build will help you understand how to do the right amount of cleanup in order to switch contexts or fix problems.
Gradle is generally very good about keeping track of when things have changed and so you can, and should, get out of the habit of wiping things clean and starting from scratch because it just takes more time. If you find that there’s some part of the process that does not recognize when its inputs have changed or its outputs are up-to-date, please file a bug or post to the developer support board so we can get that corrected.

The gradle tasks also provide much more granularity in cleaning. Generally, for each task that produces an artifact, we try to have a corresponding cleaning task that removes that artifact. This leads to a plethora of cleaning tasks, but there are only a few that you will probably ever want to use.

In this topic we summarize the most commonly useful cleaning tasks, indicating what the outcome of each task is and providing examples of when you might use each.

Building and Cleaning

This table summarizes the commands used to create and remove the various directories. Note that the cleaning behavior reflected here is accurate as of version 1.2 of the gradlePlugins.

Directory Type | Path (relative to <LABKEY_HOME>) | Added to by... | Removed from by... | Deleted by...
Build | build | Any build step | Any cleaning step | cleanBuild
Module build | build/modules/<module> | module | N/A | :server:modules:<module>:clean

Application Cleaning


Running 'gradlew cleanDeploy' removes the build/deploy directory. This will also stop the Tomcat server if it is running.


  • Stops Tomcat
  • Removes the staging directory: <LABKEY_HOME>/build/staging
  • Removes the deploy directory: <LABKEY_HOME>/build/deploy
Use when:
  • Removing a set of modules from your LabKey instance (i.e., after updating settings.gradle to remove some previously deployed modules)
  • Troubleshooting a LabKey server that appears to have built properly


Running 'gradlew cleanStaging' removes the build/staging directory. This does not affect the running server.


  • Removes the staging directory: <LABKEY_HOME>/build/staging
Use when:
  • Using a version of gradle plugins earlier than 1.2 and you need to clean out a deployment (i.e., use it in conjunction with cleanDeploy). For versions of the plugin later than 1.2, you should not need to call this task.


Running 'gradlew cleanBuild' removes the build directory entirely, requiring all of the build steps to be run again. This will also stop the Tomcat server if it is running. This is the big hammer that you should avoid using unless there seems to be no other way out.

This is generally not what you want or need to do since Gradle's up-to-date checks should be able to determine when things need to be rebuilt. The one exception to this rule is that when the LabKey version number is incremented with each major release, you need to do a cleanBuild to get rid of all artifacts with the previous release version in their names.


  • Stops Tomcat
  • Removes the build directory: <LABKEY_HOME>/build
Use when:
  • Updating the LabKey version number in the gradle.properties file in an enlistment
  • All else fails

Module Cleaning

The most important tasks for cleaning modules follow. The example module name used here is "MyModule".


Running 'gradlew :server:modules:myModule:clean' removes the build directory for the module. This task comes from the standard Gradle lifecycle, and is generally followed by a deployModule or deployApp command.


  • Removes myModule's build directory: <LABKEY_HOME>/build/modules/myModule
  • Note that this will have little to no effect on a running server instance. It will simply cause gradle to forget about all the building it has previously done so the next time it will start from scratch.
Use when:
  • Updating dependencies for a module
  • Troubleshooting problems building a module


Running 'gradlew :server:modules:myModule:undeployModule' removes all artifacts for this module from the staging and deploy directories. This is the opposite of deployModule, which copies artifacts from the build directories into the staging (<LABKEY_HOME>/build/staging) and then the deployment (<LABKEY_HOME>/build/deploy) directories; undeployModule removes this module's artifacts from both. This will cause a restart of a running server, since Tomcat will recognize that the deployment directory has changed.


  • Removes staged module file: <LABKEY_HOME>/build/staging/modules/myModule.module
  • Removes module’s deploy directory and deployed .module file: <LABKEY_HOME>/build/deploy/modules/myModule.module
and <LABKEY_HOME>/build/deploy/modules/myModule
  • Restarts Tomcat.
Use when:
  • There were problems at startup with one of the modules that you do not need in your LabKey server instance
  • Always use when switching between feature branches: artifacts created in a feature branch have the feature branch name in their version number and thus look different from artifacts produced from a different branch. If you don’t do the undeployModule, you’ll likely end up with multiple versions of your .module file in the deploy directory, and thus on the classpath, which will cause confusion.
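For example, when switching a hypothetical module 'myModule' to a feature branch, you might run:

```
# Remove artifacts built from the old branch before building on the new one
./gradlew :server:modules:myModule:undeployModule
git checkout myFeatureBranch
./gradlew :server:modules:myModule:deployModule
```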


Running 'gradlew :server:modules:myModule:cleanModule' removes the build directory for the module as well as all artifacts for this module from the staging and deploy directories. Use this to remove all evidence of having built a module. It combines undeployModule and clean to remove the build, staging, and deployment directories for a module.


  • Removes myModule's build directory: <LABKEY_HOME>/build/modules/myModule
  • Removes staged module file: <LABKEY_HOME>/build/staging/modules/myModule.module
  • Removes module’s deploy directory and deployed .module file: <LABKEY_HOME>/build/deploy/modules/myModule.module
and <LABKEY_HOME>/build/deploy/modules/myModule
  • Tomcat restarts
Use when:
  • Removing a module that is in conflict with other modules
  • Troubleshooting a build problem for a module

Related Topics

Gradle Properties

Gradle properties can be set at four different places:
  • Globally - Global properties are applied at the user or system-level. The intent of global properties is the application of common settings across multiple Gradle projects. For example, using globally applied passwords makes maintenance easier and eliminates the need to copy passwords into multiple areas.
  • In a Project - Project properties apply to a whole Gradle project, such as the LabKey Server project. Use project-level properties to control the default version numbers of libraries and other resources, and to control how Gradle behaves over the whole project, for example, whether or not to build from source.
  • In a Module - Module properties apply to a single module in LabKey Server. For example, you can use module-level properties to control the version numbers of jars and other resources.
  • On the command line, for example: -PbuildFromSource=false
Property settings lower down this list override any settings made higher in the list. So a property set at the Project level will override the same property set at the Global level, and a property set on the command line will override the same property set at the Global, Project, or Module levels. See the Gradle documentation for more information on setting properties.
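For example, a command-line flag wins over the same property set in any gradle.properties file:

```
# Overrides buildFromSource=true set at the global, project, or module level
./gradlew deployApp -PbuildFromSource=false
```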

Global Properties

The global properties file should be located in your home directory:

  • On Windows: C:\Users\<you>\.gradle\gradle.properties
  • On Mac/Linux: /Users/<you>/.gradle/gradle.properties
A template global properties file is available at <LABKEY_HOME>/gradle/global_gradle.properties_template SVN link

For instructions on setting up the global properties file, see Set Up a Development Machine.

Some notable global properties are described below:

  • deployMode - Possible values are "dev" or "prod". This controls whether a full set of build artifacts is generated that can be deployed to production-mode servers, or a more minimal set that can be used on development mode machines.
  • systemProp.tomcat.home - Value is the path to the Tomcat home directory. Gradle needs to know the Tomcat home directory for some of the dependencies in the build process. IntelliJ does not pick up the $CATALINA_HOME environment variable, so if working with the IntelliJ IDE, you need to set the tomcat.home system property either here (i.e., as a global Gradle property) or on the command line with -Dtomcat.home=/path/to/tomcat/home. Regardless of OS, use the forward slash (/) as a file separator in the path (yes, even on Windows).
  • includeVcs - We do not check the value of this property, only its presence or absence. If present, population of the VCS revision number and URL is enabled. Generally this is left absent when building in development mode and present when building in production mode.
  • svn_user - Your svn username
  • svn_password - Your svn password

Project Root Properties

The project properties file resides at <LABKEY_HOME>/gradle.properties.

For the most part, this file sets the version numbers for external tools and libraries, which should not be modified.

Notable exceptions are described below:

  • buildFromSource - Indicates whether we should use previously published artifacts or build from source. This setting applies to all projects unless overridden on the command line or in a project's own gradle.properties file. The default properties file in <LABKEY_HOME>/gradle.properties sets buildFromSource to "true". This setting causes a build command to construct all the jars and .module files that are necessary for a server build. If you are not changing code, consider setting "buildFromSource=false". The following table illustrates how you can use buildFromSource to build just the code you need:
If you want to... | ...then...
Build nothing from source | Set buildFromSource=false and run "gradlew deployApp".
Build everything from source | Set buildFromSource=true and run "gradlew deployApp".
Build a single module from source | Set buildFromSource=false and run the deployModule command for that module (for example, "gradlew :server:modules:wiki:deployModule"). Alternatively, create a gradle.properties file within the module you want to build from source, include the setting "buildFromSource=true", and call "gradlew deployApp".
Build a subset of modules from source | Set buildFromSource=false and run the deployModule command for the subset of modules you wish to build. Alternatively, create gradle.properties files within the modules you want to build from source, include the setting "buildFromSource=true" in each, and call "gradlew deployApp".

Note that, unlike other boolean properties we use, the value of the buildFromSource property matters. This is because we want to be able to set this property for individual modules, and the mere presence of a property at a higher level in the property hierarchy cannot be overridden at a lower level. If the property is defined but not assigned a value, this has the same effect as setting it to false.

Community developers who want to utilize 'buildFromSource=false' will need to limit the list of modules built. Use a custom manifest of modules, such as 'local_settings.gradle', as described in the topic Customize the Build.

  • labkeyVersion - The default version for LabKey artifacts that are built or that we depend on. Override in an individual module's gradle.properties file as necessary.

Module-Specific Properties

Module-level properties are most commonly used to set version number properties for the dependencies of a particular module, which, while not strictly necessary, is a good practice for maintainability. For an example, see the module-specific properties file in /server/test, which sets various version properties.

Other common properties you may want to set at the module level:

  • skipBuild - This will cause Gradle to not build artifacts for this module even though the module's Gradle project is included by the settings.gradle file. We do not check the value of this property, only its presence.
  • buildFromSource - As mentioned above, this will determine whether a module should be built from source or whether its artifacts should be downloaded from the artifact repository. We do check the value of this property, so it should be set to either true or false. If the property is set but not assigned a value, this will have the same effect as setting it to false.

Gradle: How to Add Modules

Adding a Module: Basic Process

The basic workflow for building and deploying your own module within the LabKey source tree goes as follows:

  • Apply plugins in the module's build.gradle file.
  • Declare dependencies in build.gradle (if necessary).
  • Include the module in your build manifest file (if necessary).
  • Sync Gradle.
  • Deploy the module to the server.
Details for file-based modules vs. Java modules are described below.

Apply Plugins

File-based Modules

For file-based modules, add the following plugins to your build.gradle file:


apply plugin: 'java'
apply plugin: 'org.labkey.fileModule'

Java Modules

For Java modules, add the following plugins:


apply plugin: 'java'
apply plugin: 'org.labkey.module'

Stand-alone Modules

For modules whose source resides outside the LabKey Server source repository, you must provide your own gradle files, and other build resources, inside the module. The required files are:

  • build.gradle - The build script where plugins are applied, tasks are added, dependencies are declared, etc.
  • module.properties - Module-scoped properties; see Module Properties Reference.
  • module.template.xml - The build system uses this file in conjunction with module.properties to create a deployable properties file at /config/module.xml.
You will also probably want to put a gradle wrapper in your module. You can do this by using the gradle wrapper from the LabKey distribution with the following command in your module's directory:
/path/to/labkey/enlistment/gradlew wrapper
This will create the following in the directory where the command is run:
  • gradlew - The Gradle wrapper linux shell script. You get this when you install the Gradle wrapper.
  • gradlew.bat - The Windows version of the Gradle wrapper. You get this when you install the Gradle wrapper.
  • gradle - directory containing the properties and jar file for the wrapper
If you do not have a LabKey enlistment or access to another gradle wrapper script, you will first need to install gradle in order to install the wrapper.
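In that case, once Gradle is installed, the wrapper can be generated with the standalone installation instead:

```
# Requires a locally installed gradle; run in the module's directory
gradle wrapper
```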

The module directory will be laid out as follows. Note that this directory structure is based on the structure you get from createModule (except for module.template.xml and the gradlew wrapper files):

│   build.gradle
│   module.properties
│   module.template.xml
│   gradlew
│   gradlew.bat
│
├───gradle
│   └───wrapper
│           gradle-wrapper.jar
│           gradle-wrapper.properties
│
└───resources
    ├───schemas
    ├───web
    └───...

Sample files for download:

  • build.gradle - a sample build file for a stand-alone module
  • standaloneModule.tgz - a simple stand-alone module with minimal content from which you can create a module file to be deployed to a LabKey server
Also see Tutorial: Hello World Java Module.

Declare Dependencies

When your module code requires some other artifacts like a third-party jar file or another LabKey module, declare a dependency on that resource inside the build.gradle file. The example below adds a dependency to the workflow module. Notice that the dependency is declared for the 'apiCompile' configuration of the LabKey module. This is in keeping with the Module Architecture for LabKey modules.

compile project(path: ":server:optionalModules:workflow", configuration: 'apiCompile')

The example below adds dependencies on two external libraries:

external 'commons-httpclient:commons-httpclient:3.1'
external 'org.apache.commons:commons-compress:1.13'

Here we use the 'external' configuration instead of the compile configuration to indicate that these libraries should be included in the lib directory of the .module file that is created for this project. The 'compile' configuration extends this LabKey-defined 'external' configuration.

There are many other examples in the server source code.

For a detailed topic on declaring dependencies, see Gradle: Declare Dependencies.

Include the module in your build manifest file

With the default settings.gradle file in place, there will often be no changes required to incorporate a new module into the build. By placing the module in one of the directories referenced in the settings.gradle file (server/modules, server/customModules, server/optionalModules, externalModules, and the various existing subdirectories within externalModules), it will automatically be picked up during the initialization phase. If you put the new module in a different location, you will need to modify the settings.gradle file to include this in the configuration.

For example, adding the following line to your build manifest file incorporates the helloworld module into the build. For details see Customize the Build.

include ':localModules:helloworld'

Sync Gradle

  • In IntelliJ, on the Gradle projects panel, click Refresh all gradle projects.

Deploy the Module

In your module's directory, call the following gradle task to build and deploy it:

path/to/gradlew deployModule

Related Topics

Gradle: Declare Dependencies

If the module you are developing has dependencies on third-party libraries or modules other than server/api, internal or schema, you will need to add a build.gradle file in the module’s directory that declares these dependencies. For example, the build.gradle file for the pipeline module includes the following:

import org.labkey.gradle.util.BuildUtils

dependencies {
    external 'org.apache.activemq:activemq-core:4.1.2'
    external 'org.mule.modules:mule-module-builders:1.4.4:embedded'
    external 'com.thoughtworks.xstream:xstream:1.2.2'
    BuildUtils.addLabKeyDependency(project: project, config: "compile", depProjectPath: ":server:modules:core", depProjectConfig: 'apiCompile')
}

External Dependencies

In order for external dependencies to be found in the artifact repository, it is necessary to use the proper group name (the string before the first colon, e.g., 'org.apache.activemq'), artifact name (the string between the first and second colons, e.g., 'activemq-core'), and version number (the string after the second colon and before the third colon, if any, e.g., '4.1.2'). You may also need to include a classifier for the dependency (the string after the third colon, e.g., 'embedded'). To find the proper syntax for an external dependency, you can query Bintray or Maven Central, look for the version number you are interested in, and then copy and paste from the Gradle tab. It is generally best practice to set up properties for the versions in use.

External dependencies should be added to the LabKey-defined "external" configuration, while dependencies on other LabKey modules should be added to the "compile" configuration. The "external" configuration is used when creating the .module file to determine which libraries to include with the module.
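The positional structure of a coordinate can be seen by splitting on colons; this sketch uses the mule artifact from the example above:

```shell
# Split a Gradle dependency coordinate into group:artifact:version:classifier
coord='org.mule.modules:mule-module-builders:1.4.4:embedded'
IFS=':' read -r group artifact version classifier <<< "$coord"
echo "group=$group artifact=$artifact version=$version classifier=$classifier"
```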

If the library you need is not available in one of the existing repositories, those who have access to the LabKey Artifactory can navigate to the "ext-release-local" artifact set, click the Deploy link in the upper right corner, and upload the JAR file. The Artifactory will attempt to guess what a reasonable group, artifact, and version number might be, but correct as needed. Once added, it can be referenced in a module's build.gradle file like any other dependency.

Internal Dependencies

For internal dependencies, the BuildUtils.addLabKeyDependency method referenced above uses the buildFromSource Gradle property to determine whether to declare a project dependency (meaning the artifacts are produced by a local build) or a package dependency (meaning the artifacts are pulled from the artifact server). The argument to this method is a map with the following entries:

  • project: The current project where dependencies are being declared
  • config: The configuration in which the dependency is being declared
  • depProjectPath: The (Gradle) path for the project that is the dependency
  • depProjectConfig: The configuration of the dependency that is relevant. This is optional and defaults to the default configuration (i.e., an artifact without a classifier).
  • depVersion: The version of the dependency to retrieve. This is optional; it defaults to the parent project’s version if the dependent project is not found in the Gradle configuration, or to the dependent project’s version if it is found.
  • specialParams: A closure that can be used to configure the dependency, just as you can with the closure for a regular Gradle dependency. This is particularly useful for declaring a dependency that is not transitive.

To declare a compile-time dependency between one module and the API jar of a second module, you will do this:

import org.labkey.gradle.util.BuildUtils

dependencies {
    BuildUtils.addLabKeyDependency(project: project, config: "compile", depProjectPath: ":server:modules:someModule", depProjectConfig: 'apiCompile')
}
The assumption is that the module that is depended upon will be in the server distribution, and thus its jar files will be on the classpath, so there is no need to include its API jar in the lib directory of the first module. If this assumption is not valid and you do need or want to include the API jar of the second module within the lib directory of the first, prior to Gradle Plugins release 1.3.2, you would declare this as an "external" dependency like so:
import org.labkey.gradle.util.BuildUtils

dependencies {
    BuildUtils.addLabKeyDependency(project: project, config: "external", depProjectPath: ":server:modules:someModule", depProjectConfig: 'apiCompile')
}
This will put the API jar file for :server:modules:someModule in the lib directory of the module that has declared this dependency and will require an entry in the resources/credits/jars.txt file so the credits check does not notice a discrepancy.

As of Gradle Plugins release 1.3.2, a new "labkey" configuration was introduced to obviate the need for declaring the dependency within the jars.txt file. The API dependencies for LabKey API jars of modules should now be declared using this "labkey" configuration, and it is likely this dependency should not be transitive:

import org.labkey.gradle.util.BuildUtils

dependencies {
    BuildUtils.addLabKeyDependency(project: project, config: "labkey", depProjectPath: ":server:modules:someModule", depProjectConfig: 'apiCompile', specialParams: { transitive = false })
}
No entry will be required in the jars.txt file, but the API jar file will be included in the lib directory.

Module Dependencies

The moduleDependencies module-level property is used by LabKey server to determine the module initialization order and to control the order in which SQL scripts run. As of gradlePlugins version 1.2.3, the dependencies can be declared within a module's build.gradle file.

For moduleA that depends on moduleB, you would add the following line to the moduleA/build.gradle file:

import org.labkey.gradle.util.BuildUtils

dependencies {
    BuildUtils.addLabKeyDependency(project: project, config: "modules", depProjectPath: ":server:myModules:moduleB", depProjectConfig: 'published', depExtension: 'module')
}

Then you can remove the corresponding line from moduleA's file:

ModuleDependencies: moduleB

This feature is still incubating, which means the behavior when a module is not in the settings.gradle file and/or when you do not have an enlistment for a module may change. Currently, the behavior is as follows:

  • If :server:myModules:moduleB is not included in the settings.gradle file, moduleB will be treated like an external dependency and its .module file will be downloaded from Artifactory and placed in the build/deploy/modules directory by the deployApp command
  • If :server:myModules:moduleB is included in your settings.gradle file, but you do not have an enlistment in moduleB, by default this will cause a build error such as "Cannot find project for :server:myModules:moduleB". You can change this default behavior by using the parameter -PdownloadLabKeyModules, which will cause the .module file to be downloaded from Artifactory and deployed to build/deploy/modules, as in the previous case
  • If :server:myModules:moduleB is included in settings.gradle and you have an enlistment in moduleB, it will be built and deployed as you might expect.
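For reference, including a module in the build is a one-line addition to settings.gradle (the path here matches the illustrative moduleB example above):

```groovy
// settings.gradle -- make moduleB part of the Gradle build
include ':server:myModules:moduleB'
```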

Resolving Conflicts

After adding a new external dependency, or updating the version of an existing external dependency, you will want to make sure the dependency hasn't introduced a version inconsistency with other modules. To do this, run the task 'showDiscrepancies'. You will want to include as many modules as possible for this task, so using the module set 'all' is a good idea:
./gradlew -PmoduleSet=all showDiscrepancies
If there are any discrepancies in external jar version numbers, this task will produce a report that shows the various versions in use and by which modules as shown here.
commons-collections:commons-collections has 3 versions as follows:
3.2 [:server:modules:query, :server:optionalModules:saml]
3.2.1 [:externalModules:labModules:LDK]
3.2.2 [:server:api]
Each of these conflicts should be resolved before the new dependency is added or updated. Preferably, the resolution will be achieved by choosing a different version of a direct dependency in one or more modules. The task 'allDepInsight' can help to determine where a dependency comes from:
./gradlew allDepInsight --configuration=external --dependency=commons-collections

If updating direct dependency versions does not resolve the conflict, you can force a certain version of a dependency, which will apply to direct and transitive dependencies. See the root-level build.gradle file for examples of the syntax for forcing a version.
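The standard Gradle syntax for forcing a version looks like the sketch below; the coordinates are illustrative, and the root-level build.gradle shows the project's actual convention:

```groovy
// root-level build.gradle -- force one version of a library across
// all configurations, for both direct and transitive dependencies
allprojects {
    configurations.all {
        resolutionStrategy {
            force 'commons-collections:commons-collections:3.2.2'
        }
    }
}
```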

Version Conflicts in Local Builds

When version numbers are updated, either for LabKey itself or for external dependencies, a local build can accumulate multiple, conflicting versions of certain jar files in its deploy directory, or the individual module build directories. This is never desirable. With gradlePlugin version 1.3, tasks have been added to the regular build process that check for such conflicts.

By default, the build will fail if a version conflict is detected, but the property 'versionConflictAction' can be used to control that behavior. Valid values for this property are:

  • 'delete' - this causes individual files in the deploy directory that conflict with ones to be produced by the build to be deleted. For example:
> Task :server:api:checkModuleJarVersions 
INFO: Artifact versioning problem(s) in directory /Users/susanhert/Development/labkey/trunk/build/modules/api/explodedModule/lib:
Conflicting version of commons-compress jar file (1.14 in directory vs. 1.16.1 from build).
Conflicting version of objenesis jar file (1.0 in directory vs. 2.6 from build).
INFO: Removing existing files that conflict with those from the build.
Deleting /Users/susanhert/Development/labkey/trunk/build/modules/api/explodedModule/lib/commons-compress-1.14.jar
Deleting /Users/susanhert/Development/labkey/trunk/build/modules/api/explodedModule/lib/objenesis-1.0.jar

Note that when multiple versions of a jar file are found to already exist in the build directory, none will be deleted. Manual intervention is required here to choose which version to keep and which to delete.
Execution failed for task ':server:api:checkModuleJarVersions'.
> Artifact versioning problem(s) in directory /Users/susanhert/Development/labkey/trunk/build/modules/api/explodedModule/lib:
Multiple existing annotations jar files.
Run the :server:api:clean task to remove existing artifacts in that directory.


  • 'fail' (default) - this causes the build to fail when the first version conflict or duplicate version is detected.
Execution failed for task ':server:api:checkModuleJarVersions'.
> Artifact versioning problem(s) in directory /Users/susanhert/Development/labkey/trunk/build/modules/api/explodedModule/lib:
Conflicting version of commons-compress jar file (1.14 in directory vs. 1.16.1 from build).
Conflicting version of objenesis jar file (1.0 in directory vs. 2.6 from build).
Run the :server:api:clean task to remove existing artifacts in that directory.

  • 'warn' - this will issue a warning message about conflicts, but the build will succeed. This can be useful in finding how many conflicts you have since the 'fail' option will show only the first conflict that is found.
> Task :server:api:checkModuleJarVersions 
WARNING: Artifact versioning problem(s) in directory /Users/susanhert/Development/labkey/trunk/build/modules/api/explodedModule/lib:
Conflicting version of commons-compress jar file (1.14 in directory vs. 1.16.1 from build).
Conflicting version of objenesis jar file (1.0 in directory vs. 2.6 from build).
Run the :server:api:clean task to remove existing artifacts in that directory.


Though these tasks are included as part of the task dependency chains for building and deploying modules, the four tasks can also be executed individually, which can be helpful for resolving version conflicts without resorting to cleaning out the entire build directory. The tasks are:

  • checkModuleVersions - checks for conflicts in module file versions
  • checkWebInfLibJarVersions - checks for conflicts in jar files included in the WEB-INF/lib directory
  • checkModuleJarVersions - checks for conflicts in the jar files included in an individual module
  • checkVersionConflicts - runs all of the above tasks
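If you prefer not to pass -PversionConflictAction on each invocation, Gradle project properties can also be set in a gradle.properties file (a sketch; 'delete' is one of the values described above):

```properties
# gradle.properties -- control behavior when jar/module version conflicts are detected
# valid values: fail (default), delete, warn
versionConflictAction=delete
```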

Related Topics

Gradle Tips and Tricks

Below are some additional tips on using Gradle to your advantage and to help learn the Gradle command line.

Flags and Options

Use gradlew -h to see the various options available for running gradle commands.

By default, Gradle outputs information about the progress of the build and which tasks it considers up to date or skipped. If you don't want to see this, or any other output about the progress of the build, add the -q flag:

gradlew -q projects

Set up an alias if you’re a command-line kind of person who can’t abide output to the screen.

Offline Mode

If working offline, use the --offline option to prevent Gradle from contacting the artifact server. (You won't have success if you do this for your first build.)

Or you can toggle offline mode in IntelliJ.

Efficient Builds

If doing development in a single module, there is a command available to you that can be sort of a one-stop shopping experience:

/path/to/gradlew deployModule

This will build the jar files and the .module file, and then copy the .module file to the build/deploy/modules directory, which will cause Tomcat to refresh.

Gradle Paths

Build tasks are available at a fairly granular level, which allows you to use just the tasks you need for your particular development process. The targeting is done by providing the Gradle path (Gradle paths use colons as separators instead of slashes in either direction) to the project as part of the target name. For example, to deploy just the wiki module, you would do the following from the root of the LabKey enlistment:

./gradlew :server:modules:wiki:deployModule

Note that Gradle requires only unique prefixes in the path elements, so you could achieve the same results with less typing as follows:

./gradlew :se:m:w:depl

And when you mistype a target name it will suggest possible targets nearby. For example, if you switch back to ant task mode momentarily and type

gradlew pick_pg

Gradle responds with:

* What went wrong:

Task 'pick_pg' not found in project ':server'. Some candidates are: 'pickPg'.

Gradle's Helpful Tasks

Gradle provides many helpful tasks to advertise the capabilities and settings within the build system. Start with this and see where it leads you:

gradlew tasks

Other very useful tasks for troubleshooting are:

  • projects - lists the set of projects included in the build
  • dependencies - lists all dependencies, including transitive dependencies, for all the configurations for your build

Paths and Aliases

Placing the gradlew script in your path is not ideal in an environment where different directories/branches may use different versions of gradle (as you will invariably do if you develop on trunk and release branches). Instead, you can:

  • Always run the gradlew command from the root of the enlistment using ./gradlew
  • On Linux/Mac, use the direnv tool.
You may also want to create aliases for your favorite commands in Gradle. If you miss having the timestamp output at the end of your command, check out the tips in this issue.


Improving Performance

Gradle will never be as performant as Ant. It does more work, so it takes more time. But there are various things you can do to improve performance:

  • Use targeted build steps to do just what you know needs to be done.
  • Use the -a option on the command line to avoid rebuilding project dependencies (though be aware that sometimes your changes do require a rebuild of dependencies you're not aware of, in which case you will need to remove the -a option).
  • You can add more memory to the JVM by setting the org.gradle.jvmargs property to something like this:

If you want to update your properties with either of these settings, we suggest that you put them in your <user>/.gradle/ file so they apply to all Gradle instances and so you won't accidentally check them in.
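For example, Gradle reads org.gradle.jvmargs from a gradle.properties file in the user's Gradle home directory; the heap size below is illustrative, so tune it to your machine:

```properties
# ~/.gradle/gradle.properties -- give the Gradle daemon a larger heap
org.gradle.jvmargs=-Xmx2g
```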


If you don't need the absolute latest code, and would be happy with the latest binary distribution, do the following to avoid the compilation steps altogether.

  1. Download the appropriate distribution zip/tar.gz file.
  2. Put it in a directory that does not contain another file with the same extension as the distribution (.tar.gz or .zip)
  3. Use the ./gradlew :server:cleanStaging :server:cleanDeploy :server:deployDistribution command, perhaps supplying a -PdistDir=/path/to/dist/directory property value to specify the directory that contains the downloaded distribution file and/or a -PdistType=zip property to specify that you're deploying from a zip file instead of a .tar.gz file.

Run Selenium Tests

The server/test Project

Many of our tests reside in the server/test directory and you can run your tests in that directory using the custom test runner interface or command line. The gradle command is:

gradlew :server:test:uiTests

We use uiTests instead of test because "test", by convention, refers to unit tests, which are run with each build; the "test" name is reserved for possible future development of more unit tests located in a more standard place. By default, this will bring up the UI for the test runner and allow you to choose which tests you want to run, with various other parameters.

If you want to run a particular test or suite, you can specify properties on the command line. Use a -P flag to specify the property, not the -D flag previously used with the Ant build. For example, to run the DRT tests, you would use the following command:

gradlew :server:test:uiTests -Psuite=DRT

You can use the 'test' property to specify a comma-separated list of test classes

gradlew :server:test:uiTests -Ptest=BasicTest,IssuesTest

To retain the artifacts created by the test, use -Pclean=false.

gradlew :server:test:uiTests -Ptest=BasicTest,IssuesTest -Pclean=false

To run a particular test method or methods (annotated @Test) within a test class, append a dot-separated list of methods to the class that contains them.

gradlew :server:test:uiTests -Ptest=BasicTest.testCredits.testScripts,IssuesTest.emailTest

You can also specify properties using the server/test/ file.

Individual Modules

Gradle brings with it the ability to do very targeted builds and tasks; thus we have the ability to run the tests for a particular module without using the test runner project.

Until we resolve the issue of making the test runner able to run tests from all modules while still allowing the individual tests to be run, you must set the project property enableUiTests in order to be able to run the tests for an individual module.

For any module that has its own test/src directory, there will be a task that lets you run only the tests for that module. As of the 1.4.0 release of gradlePlugins, the name of the task is moduleUiTests; prior to this release, it was uiTests. So, for example, you can run the tests for MS2 by using this command:

gradlew -PenableUiTests :server:modules:ms2:moduleUiTests

It is still required that you have the :server:test project since that is the location of the "" file and the helper methods for running tests.

Related Topics

Create Production Builds

By default, running gradlew deployApp creates a development build. It creates the minimum number of build artifacts required to run LabKey Server on a development machine. Some artifacts aren't strictly required to run LabKey Server (such as pre-created .gz versions of resources like .js files, which let the web server skip dynamically compressing files for faster download), and others can be used directly from the source directories when the server is run in development mode (via the -DdevMode=true JVM argument). This means the development builds are faster and smaller than they would otherwise be.

Note that individual modules built in development mode will not deploy to a production server. On deployment, the server will show the error: "Module <module-name>...was not compiled in production mode". You can correct this by running 'gradlew deployApp -PdeployMode=prod' or, to build an individual module in production mode, you can add the following line to the file.

BuildType: Production

Production servers do not have access to the source directories, and should be optimized for performance, so they require that all resources be packaged in each module's build artifacts. These artifacts can be created by running gradlew deployApp -PdeployMode=prod instead. If you have existing build artifacts on your system, you will need to run gradlew cleanBuild first so that the build recognizes that it can't use existing .module files.

All standard LabKey Server distributions (the .zip and .tar.gz downloads) are compiled in production mode.

Related Topics

Machine Security

LabKey requires that everyone committing changes to the source code repository exercise reasonable security precautions.

Virus Scanning

It is the responsibility of each individual to exercise reasonable precautions to protect their PC(s) against viruses.  We recommend that all committers:

  • Run with the latest operating system patches
  • Make use of software and/or hardware firewalls when possible
  • Install and maintain up-to-date virus scanning software 

We reserve the right to revoke access to any individual found to be running a system that is not properly protected from viruses. 

Password Protection

It is the responsibility of each individual to ensure that their PC(s) are password protected at all times.  We recommend the use of strong passwords that are changed at a minimum of every six months. 

We reserve the right to revoke access to any individual found to be running a system that is not exercising reasonable password security. 

Notes on Setting up OSX for LabKey Development

In addition to the general process described in Set Up a Development Machine, follow these extra steps when setting up OSX machines for LabKey development.

Software Installation

1. Install the Apple OSX developer tools. This contains a number of important tools you will need.

2. Java for OSX:

  • OSX 10.6 and below: Apple's Java comes pre-installed with OSX.
  • OSX 10.7 (Lion) and above: Java is not pre-installed with OSX versions 10.7 and above. To get the latest version of OpenJDK, you will need OSX 10.7.3 and above.
3. Set up environment variables:
  • PATH = <LABKEY_HOME>/build/deploy/bin:<your-normal-path>
You can do this via traditional linux methods (in ~/.bash_profile) or via OSX's plist environment system.

To add the environment variables using ~/.bash_profile, edit the file and add the lines:

export JAVA_HOME=`/usr/libexec/java_home -v 11`
export CATALINA_HOME=$HOME/apps/tomcat
export LABKEY_HOME=$HOME/labkey/trunk
export PATH=$LABKEY_HOME/build/deploy/bin:$PATH

To add the environment variables using the OSX plist editor, open the file ~/.MacOSX/environment.plist. This should open in the plist editor (from Apple developer tools).

  • Create the env vars shown above
  • Log out and back in

IntelliJ IDEA

The setup for IntelliJ is described in the common documentation, but a few additional troubleshooting notes may be helpful:

Run/Debug LabKey Error

Error: "Could not find or load main class org.apache.catalina.startup.Bootstrap"

  • You might see this error in the console when attempting to start LabKey server. Update the '-classpath' VM option for your Run/Debug configuration to have Linux/OSX (:) path separators, rather than Windows path separators (;).

SVN annotate/history

Problems while loading file history: svn: E175002

  • Notes on upgrading on Yosemite, with Subversion 1.8.13:
  • From terminal, execute these commands:
    • Get Brew, if you don't have it already: $ ruby -e "$(curl -fsSL"
    • Uninstall svn: $ brew uninstall svn
    • Install svn: $ brew install svn
    • Link: $ brew link --overwrite subversion
    • Test the version: $ svn --version (without successful linking, svn won't be recognized as a valid command)

Linking Error


Linking /usr/local/Cellar/subversion/1.8.13...
Error: Could not symlink include/subversion-1/mod_authz_svn.h
/usr/local/include/subversion-1 is not writable.

To resolve, perform these steps:

  • Take ownership:
    $ sudo chown -R $USER /usr/local/include
  • Try Linking again:
    $ brew link --overwrite subversion
  • Configure IntelliJ to use the installed binary:
    • from Terminal execute : which svn
    • In IntelliJ, go to 'IntelliJ IDEA' menu --> Preferences --> Version Control --> Subversion --> Under "Use command line client:", copy the resultant path from 'which svn' command --> Apply.


To do development or testing using a database that is not supported on OSX (e.g., SQL Server or Oracle), it is recommended to set up a VirtualBox instance for the target operating system (Windows or Linux). (This is generally preferred for developers over using Parallels, but the installation instructions once you have an OS installed are the same regardless.)

  1. Download and install VirtualBox.
  2. Create a new Virtual Box VM and install the desired OS on it. The easiest way is to download an ISO file for the OS and use it as the installation media for your VM.
  3. Once the ISO file is downloaded start Virtual Box and create a new VM for your target OS (most defaults are acceptable).
  4. Start the new VM for the first time.
  5. When a VM gets started for the first time, another wizard -- the "First Start Wizard" -- will pop up to help you select an installation medium. Since the VM is created empty, it would otherwise behave just like a real computer with no operating system installed: it will do nothing and display an error message that no bootable operating system was found.
  6. Select the ISO file that was previously downloaded; this should launch the installation wizard.
  7. You may also want to install the Guest Additions for the VM so the window can be expanded to a more usable size. This will also enable you to share files between your OSX machine and the VM, which can sometimes be helpful.
  8. Once the OS is installed, you can install your target database on it. See below for specifics on SQLServer or Oracle.
  9. To allow for remote access to the database you've installed, you will need to create a hole for the database connections in the firewall. For Windows, follow the instructions in the "TCP Access" section of this TechNet note using the port number appropriate for your database installation.
  10. You also need to configure Virtual Box so that a connection to the database can be made from the instance of LabKey running on your Mac. The easiest way to do this is through port forwarding over NAT.
In the VirtualBox Manager, select your Windows VM and edit the settings. In the Network tab, select NAT and click on Port Forwarding.

Create a new record using TCP and localhost. Set the host and guest ports to be the same as the configuration in your file (typically 1433). Note: To get the IP address of the guest OS, you can run "ipconfig" in a command window on the Windows VM. You will want the IPv4 address.

SQL Server on VM

Typically SQL Server Express is adequate for development. Follow the instructions here for the installation. Note that you should not need to do the extra steps to get GROUP_CONCAT installed. It will be installed automatically when you start up LabKey server for the first time pointing to your SQL Server database.

SQL Server Browser Setup

During the installation, you will want to set the SQL Server Browser to start automatically. You can do this from within the SQL Server Configuration Manager. Under SQL Server Services, right click on the SQL Server Browser and open the Properties window. Go to the Service tab and change the Start Mode to "Automatic."

Remote Access to SQL Server

To allow for remote access to SQL Server, you will need to:

  1. Create a hole for SQL Server in the Windows firewall. Follow the instructions in the "TCP Access" section of this TechNet note.
  2. Make some configuration changes to allow remote connections and set up a login for LabKey server to use:
  • Open SQL Server Management Studio (which is not the same as the SQL Server Configuration Manager)
  • Right click on the <Server Name> and choose Properties -->Connections, check "Allow remote connections to this server"
  • From <Server Name> --> Properties, --> Security, set Server Authentication to “SQL Server & Windows Authentication mode”
  • Click OK and Close the Properties window
  • Choose Security --> Logins --> double click on 'sa' --> Status, set Login to Enabled. This is the user that will be used by LabKey server, so set the password and take note of it.
  • From Sql Server Configuration Manager, select SQL Server Network Configuration --> Protocols for MSSQLSERVER.
    • Enable TCP/IP (If not enabled already).
    • Right Click on TCP/IP --> Properties --> IP Addresses tab
    • Make sure the ports with the relevant IP addresses (including the one used in port forwarding, which you found using ipconfig) are Enabled.
    • Restart your computer.
  3. Restart SQL Server & SQL Server Browser from the Services control panel.

LabKey Properties Files

  1. Edit the config file under /Labkey/server/configs. If you have set up the NAT forwarding mentioned above, set the databaseDefaultHost to the forwarded localhost address. Otherwise, set the databaseDefaultHost to the Windows IP (use ipconfig to find out what this is; you want the IPv4 address); just using the name of the host instead does not seem to work. If you have multiple datasources defined in your labkey.xml file, the IP address needs to be used for those data sources as well.
  2. Edit the config file further by updating the jdbcUser and jdbcPassword information. This is where you use the "sa" user and the password you had setup during the SQL Server install.
  3. Pick SQL Server for LabKey (run "gradlew pickMSSQL" - either from the command line or within IntelliJ)
  4. Restart your LabKey server instance.

Oracle on VM

Oracle Express Edition is probably sufficient for development and testing purposes. Follow the instructions in the installation docs on Oracle's site and then refer to the page for using Oracle as an external data source for some LabKey specifics.

Remote Access to Oracle

After the initial installation, Oracle Database XE will be available only from the local server, not remotely. Be sure to follow the steps for making Oracle available to Remote Clients. In particular, you will need to run the following command from within SQL*Plus connected as the system user


SQL Developer (Oracle Client UI)

For troubleshooting and development, you will probably want to install a version of SQL Developer, the Oracle client application. There is a version of the client that works on OSX, so it is probably easiest to download and install on your OSX machine. It may also be useful to install a version on the VM. If installing on the VM, Java is required unless you get the version of SQL Developer that also bundles Java.

Tomcat 7 Encoding

Using non-ASCII characters on a production deployment, or running the Build Verification Test (BVT) in development, requires that your server support UTF-8 URI encoding. If running Tomcat 7.0.x, you need to modify your server configuration in <CATALINA_HOME>/conf/server.xml to specify this encoding. Add the following attributes to your Connector element:

useBodyEncodingForURI="true" URIEncoding="UTF-8"
For example, the modified Tomcat non-SSL HTTP/1.1 connector might appear as follows:

<!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
<Connector port="8080" maxHttpHeaderSize="8192"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true"
useBodyEncodingForURI="true" URIEncoding="UTF-8"/>

For more information on configuring Tomcat HTTP connectors, see the Tomcat documentation at:

URIEncoding defaults to UTF-8 with later versions, so this step is not required for Tomcat 8.5.x or 9.0.x.

Related Topics

Troubleshoot Development Machines

This topic covers troubleshooting some problems you may encounter building LabKey Server from source.

If you don't find your issue here, try searching the LabKey Developer Forum.

IntelliJ Troubleshooting

IntelliJ Warnings and Errors

  • Warning: Class "org.apache.catalina.startup.Bootstrap" not found in module "LabKey": You may ignore this warning in the Run/Debug Configurations dialog in IntelliJ.
  • Error: Could not find or load main class org.apache.catalina.startup.Bootstrap on OSX (or Linux): you might see this error in the console when attempting to start LabKey server. Update the '-classpath' VM option for your Run/Debug configuration to have Unix (:) path separators, rather than Windows path separators (;).
  • Can't find workspace.template.xml? On older enlistments of LabKey, for example version 15.3, copy <LABKEY_HOME>/server/LabKey.iws.template to LabKey.iws instead.
  • On Windows, if you are seeing application errors, you can try resetting the winsock if it has gotten into a bad state. To reset:
    • Open a command window in administrator mode
    • Type into the command window: netsh winsock reset
    • When you hit enter, that should reset the winsock and your application error may be resolved. You might need to restart your computer for the changes to take effect.

IntelliJ Slow

You can help IntelliJ run faster by increasing the amount of memory allocated to it. To increase memory:

  • Go to C:\Program Files\JetBrains\IntelliJ IDEA <Version Number>\bin, assuming that your copy of IntelliJ is stored in the default location on a Windows machine.
  • Right click on the idea.exe.vmoptions file and open it in notepad.
  • Edit the first two lines of the file to increase the amount of memory allocated to IntelliJ. For example, on a 2 Gig machine, it is reasonable to increase memory from 32m to 512m. The first two lines of this file then read:
  • Save the file
  • Restart IntelliJ
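As a sketch, assuming the default idea.exe.vmoptions layout where the first two lines set the initial and maximum heap sizes, the edited file might begin like this (values illustrative):

```
-Xms512m
-Xmx512m
```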

Server Not Starting Using a LabKey-Provided Run/Debug Configs

Problem: Server does not start in IntelliJ using one of the LabKey-provided Run/Debug configurations. In the console window the following error appears:

-agentlib:jdwp=transport=dt_socket,address=,suspend=y,server=n -Dcatalina.base=./
-Dcatalina.home=./ -Ddevmode=true -ea -Xmx1G
-XX:MaxPermSize=160M -classpath "./bin/*;/Applications/IntelliJ"
-Dfile.encoding=UTF-8 org.apache.catalina.startup.Bootstrap start
Connected to the target VM, address: '', transport: 'socket'
Error: Could not find or load main class org.apache.catalina.startup.Bootstrap
Disconnected from the target VM, address: '', transport: 'socket'

Cause: This is most likely caused by an incorrect path separator in the Run/Debug configuration's classpath argument.

Solution: Edit the Run/Debug configuration and change the separator to the one appropriate to your platform (semicolon for Windows; colon for Mac/Linux).

Fatal Error in Java Runtime Environment

Error: When starting LabKey or importing data to a new server, you might see a virtual machine crash similar to this:

# A fatal error has been detected by the Java Runtime Environment:
# SIGSEGV (0xb) at pc=0x0000000000000000, pid=23893, tid=39779
# JRE version: Java(TM) SE Runtime Environment (8.0_45-b14) (build 1.8.0_45-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.45-b02 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C 0x0000000000000000
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

Cause: These are typically bugs in the Java Virtual Machine itself.

Solution: Ensuring that you are on the latest patched release for your preferred Java version is best practice for avoiding these errors. If you have multiple versions of Java installed, be sure that JAVA_HOME and other configuration is pointing at the correct location. If you are running through the debugger in IntelliJ, check the JDK configuration. Under Project Structure > SDKs check the JDK home path and confirm it points to the newer version.

Gradle task already exists

Error: "task with name ':server:modules:query:antlrSqlBase' already exists"

Cause: A somewhat recent version of IntelliJ has started to put output files in an 'out' directory in each Gradle project's directory, so if you run a Gradle build command from IntelliJ these directories will be created. Some of the tasks in LabKey's custom gradlePlugins jar are a little too greedy when looking for input files and will pick up files from this out directory as well.

Solution: This bug for the antlr plugin has been fixed in version 1.1 of the gradlePlugins jar so you can update to that version of the plugins for LabKey 17.2 or later. Also, you can work around this by removing the out directories that Intellij creates and running the build again from the command line.

Cannot find dependencies for compressClientLibs task

Error: "Could not determine the dependencies of task ':server:modules:survey:compressClientLibs'"

Cause: A relatively recent version of IntelliJ started putting output files in an 'out' directory within each Gradle project's directory, so if you run a Gradle build command from IntelliJ these directories will be created. Some of the tasks in LabKey's custom gradlePlugins jar are a little too greedy when looking for input files and will pick up files from this out directory as well.

Solution: This bug has been fixed in version 1.2 of the gradlePlugins jar, so you can update to that version of the plugins for LabKey 17.2 or later. Alternatively, you can work around this by removing the out directories that IntelliJ creates and running the build again from the command line.

Cannot find class X when hot-swapping

Cause: IntelliJ puts its build output files, which it uses when doing hot-swapping, in different directories than the command-line Gradle build does, so if you have not built the entire project (or at least the jars that the file you are attempting to swap in depends on) within IntelliJ, IntelliJ will not find them.

Solution: Open the Gradle window in IntelliJ and run the root-level "build" task from that window.

XML Classes not found after Gradle Refresh

Problem: After doing a Gradle Refresh, some of the classes with package names like org.labkey.SOME_SCOPE.xml are not found.

Cause: This is usually due to the relevant schema jars not having been built yet.

Solution: Run the deployApp command and then do a Gradle Refresh within IntelliJ. Though you can use the more targeted schemaJar Gradle task to build just the jar file that is missing, you might find yourself running many of these to resolve all missing classes, so it is usually most efficient to simply use deployApp.

Gradle Troubleshooting

Understanding how Gradle builds the server, and the relationship between building and cleaning, can help you diagnose many build problems. See these related topics:

Update version of gradlePlugins

We maintain release notes for the gradlePlugins that describe changes and bug fixes to the plugins. Check these notes to find out if the problem you are having has already been addressed. Be sure to pay attention to the "Earliest compatible LabKey version" indicated with each release to know if it is compatible with your current LabKey version. If you want to update to a new version of the gradle plugins to pick up a change, you need only update the gradlePluginsVersion property in the root-level gradle.properties file:
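For example, assuming the property lives in the root-level gradle.properties file (the version number below is illustrative, not a recommendation):

```properties
# Bump this to pick up gradlePlugins changes and bug fixes (illustrative value)
gradlePluginsVersion=1.2
```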


Gradle Refresh

Problem: Gradle Refresh in IntelliJ has no effect on project structure after the settings.gradle file is updated.

Cause: Name conflict between the gel_test project and the gel_test IntelliJ module that would be created from the gel project’s ‘test’ source set. (IDEA-168284)

Workarounds: Do one of the following

  • Remove gel_test from the set of projects you are including in your settings.gradle file, then do the Gradle Refresh.
  • Within IntelliJ, do the following:
    • Go to Preferences -> Build, Execution, Deployment -> Build Tools -> Gradle and uncheck the setting “Create separate module per source set”. Click “OK”.
    • Do the Gradle Refresh in the Gradle window.
    • Return to Preferences -> Build, Execution, Deployment -> Build Tools -> Gradle and re-check “Create separate module per source set”. Click “OK”.
    • Do the Gradle Refresh in the Gradle window.

Could not resolve all files for configuration

Problem: When compiling a module, an error such as the following appears

Execution failed for task ':server:modules:myModule:compileJava'.
> Could not resolve all files for configuration ':server:modules:myModule:compileClasspath'.
> Could not find XYZ-api.jar (project :server:modules:XYZ).
Similarly, after doing a Gradle Refresh, some of the files that should be on the classpath are not found within IntelliJ. If you look in the Build log window, you see messages such as this:
<ij_msg_gr>Project resolve errors<ij_msg_gr><ij_nav>/Development/labkey/trunk/build.gradle<ij_nav><i><b>root project 'trunk': Unable to resolve additional project configuration.</b><eol>Details: org.gradle.api.artifacts.ResolveException: Could not resolve all dependencies for configuration ':server:modules:myModule:compileClasspath'.<eol>Caused by: org.gradle.internal.resolve.ArtifactNotFoundException: Could not find XYZ-api.jar (project :server:modules:XYZ).</i>

Cause: The settings.gradle file has included only a subset of the dependencies within the transitive dependency closure of your project and Gradle is unable to resolve the dependency for the api jar file. In particular, though your settings file likely includes the :server:modules:XYZ project, it likely does not include another project that myModule depends on that also depends on XYZ. That is, you have the following dependencies:

  • myModule depends on otherModule
  • myModule depends on XYZ
  • otherModule depends on XYZ
And in your settings file you have:
include ':server:modules:myModule'
include ':server:modules:XYZ'
(Notice that 'otherModule' is missing here.) When Gradle resolves the dependency for otherModule, it will pick it up from the artifact repository. That published artifact contains a reference to the API jar file for XYZ, and Gradle will then try to resolve that dependency using your :server:modules:XYZ project; however, referencing an artifact with a classifier (the -api suffix here) is not currently supported by Gradle, so it will fail. This may cause IntelliJ (particularly older versions) to mark some dependencies within this dependency tree as "Runtime" instead of "Compile" dependencies, which will cause it to not resolve some references to classes within the Java files.

Solution: Include the missing project in your settings.gradle file

include ':server:modules:myModule'
include ':server:modules:otherModule'
include ':server:modules:XYZ'


Problem: The server seems to have started fine, but there are no log messages in the console after server startup.

Cause: The log4j.xml file that controls where messages are logged does not contain the appropriate appender to tell it to log to the console. This appender is added when you deploy the application in development mode (as a consequence of running the deployApp command). If, for some reason, the file build/deploy/modules/labkeyWebapp/WEB-INF/classes/log4j.xml has been created without the <appender-ref ref="CONSOLE"/> element in the <root> element at the bottom of the file, the log messages will not be sent to the console. For gradlePlugin versions 1.2.2 and earlier, there is a bug in the configureLog4j task that copies and modifies the log4j.xml file such that it will fail to update the deployed version of the file in certain cases.
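A correctly deployed file ends with a <root> element that includes the console appender reference, roughly like this (any other appender references in your deployment are elided here):

```xml
<root>
    <!-- other appender-ref elements as deployed -->
    <appender-ref ref="CONSOLE"/>
</root>
```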

Workaround: For gradlePlugin versions 1.2.2 and earlier, you will need to make a modification to the <LABKEY_HOME>/webapps/log4j.xml file (e.g., adding a comment to the file) and then run the ./gradlew deployApp command. (The modification is needed so Gradle will understand that the source file has changed. It does not base this off of the timestamp of the file.)

Starting Over with Gradle + IntelliJ

If you get into a situation where the IntelliJ configuration has been corrupted or created in a way that is incompatible with what we expect (which can happen if, for example, you say 'Yes' when IntelliJ asks if you want to have it link an unlinked Gradle project), these are the steps you can follow to start over with a clean setup for Gradle + IntelliJ:

  • Shut down IntelliJ.
  • Revert the <LABKEY_HOME>/.idea/gradle.xml file to the version checked in to VCS.
  • Remove the <LABKEY_HOME>/.idea/modules directory.
  • Remove the <LABKEY_HOME>/.idea/modules.xml file.
  • Start up IntelliJ.
  • Open the Gradle window (View > Tool Windows > Gradle) and click the Refresh icon, as described above.
This may also be required if the version of LabKey has been updated in your current enlistment but you find that your IntelliJ project still refers to jar files created with a previous version.


Tomcat Fails to Start

If Tomcat fails to start successfully, check the steps above to ensure that you have configured your JDK and development environment correctly. Some common errors you may encounter include:

org.postgresql.util.PSQLException: FATAL: password authentication failed for user "<username>" or java.sql.SQLException: Login failed for user '<username>'

This error occurs when the database user name or password is incorrect. If you provided the wrong user name or password in the .properties file that you configured above, LabKey will not be able to connect to the database. Check that you can log into the database server with the credentials that you are providing in this file.

Address already in use: JVM_Bind:<port x>:

This error occurs when another instance of Tomcat or another application is running on the same port. Specifically, possible causes include:

  • Tomcat is already running under IntelliJ.
  • Tomcat is running as a service.
  • Microsoft Internet Information Services (IIS) is running on the same port.
  • Another application is running on the same port.
In any case, the solution is to ensure that your development instance of Tomcat is running on a free port. You can do this in one of the following ways:
  • Shut down the instance of Tomcat or the application that is running on the same port.
  • Change the port for the other instance or application.
  • Edit the Tomcat server.xml file to specify a different port for your development installation of Tomcat.
java.lang.NoClassDefFoundError: com/intellij/rt/execution/application/AppMain:
Error: Could not find or load main class com.intellij.rt.execution.application.AppMain:

In certain developer configurations, you will need to add an IntelliJ utility JAR file to your classpath.

  • Edit the Debug Configuration in IntelliJ.
  • Under the "VM Options" section, find the "-classpath" argument.
  • Find your IntelliJ installation. On Windows machines, this is typically "C:\Program Files\JetBrains\IntelliJ IDEA <Version Number>" or similar. On OSX, this is typically "/Applications/IntelliJ IDEA <Version Number>.app" or similar.
  • The required JAR file is in the IntelliJ installation directory, and is ./lib/idea_rt.jar. Add it to the -classpath argument value, separating it from the other values with a ":" on OSX and a ";" on Windows.
  • Save your edits and start Tomcat.

Database State Troubleshooting

If you build the LabKey source yourself from the source tree, you may need to periodically delete and recreate your LabKey database. The daily drops often include SQL scripts that modify the data and schema of your database.

Database Passwords Not Working

Problem: My passwords to PostgreSQL and MS SQL Server aren't working.

Solution: Unlike Ant, the Gradle build system will automatically escape any special XML characters, such as quotes and ampersands, in the .properties files. When migrating these files from Ant to Gradle, replace any escaped ampersands (&amp;) with plain-text ampersands (&).

Environment Variables Troubleshooting


Most users will not have this problem. However, if you see a build error like the following:

error: unmappable character for encoding ASCII

then setting this environment variable may fix the problem:

export JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8

Ext Libraries

Problem: Ext4 is not defined.

The page or a portion of the page is not rendering appropriately. Upon opening the browser console you see an error stating "Ext4 is not defined". This usually occurs due to a page not appropriately declaring client-side code dependencies.

Example of the error as seen in Chrome console.


Solution: Declare a dependency on the code/libraries you make use of on your pages. Depending on the mechanism you use for implementing your page/webpart, we provide different hooks to help ensure dependent code is available on the page.

  • Wiki: In a wiki you can use LABKEY.requiresScript() with a callback.
  • File-based module view: It is recommended that you use file-scoped dependencies by declaring a view.xml and using the <dependencies> attribute.
  • Java: In Java you can use ClientDependency.fromPath(String p) to declare dependencies for your view. Note: be sure to declare these before the view is handed off for rendering; otherwise your dependency will not be respected.
  • JSP: Override JspBase.addClientDependencies(ClientDependencies dependencies) in the .jsp. Here is an example.
Background: In the 17.3 release we were able to move away from ExtJS 4.x for rendering menus. This was the last “site-wide” dependency we had on ExtJS, however, we still continue to use it throughout the server in different views, reports, and dialogs. To manage these usages we use our dependency framework to ensure the correct resources are on the page. The framework provides a variety of mechanisms for module-based views, JavaServer Pages, and Java.

Related Topics

Premium Resource: IntelliJ Reference

LabKey Client APIs

[JavaScript Tutorial] [JavaScript API Reference]


The LabKey client libraries provide secure, auditable, programmatic access to LabKey data and services.

The purpose of the client APIs is to let developers and statisticians write scripts or programs in various programming languages to extend and customize LabKey Server. The specifics depend on the exact type of integration you hope to achieve. For example, you might:

  • Analyze and visualize data stored in LabKey in a statistical tool such as R or SAS
  • Perform routine, automated tasks in a programmatic way.
  • Query and manipulate data in a repeatable and consistent way.
  • Enable customized data visualizations or user interfaces for specific tasks that appear as part of the existing LabKey Server user interface.
  • Provide entirely new user interfaces (web-based or otherwise) that run apart from the LabKey web server, but interact with its data and services.
All APIs are executed within a user context with normal security and auditing applied. This means that applications run with the security level of the currently logged-in user, limiting what they can do based on that user's permission settings.

Currently, LabKey supports working with the following programming languages/environments.

Related Topics:

JavaScript API

LabKey's JavaScript client library makes it easy to write custom pages and applications that interact with LabKey Server. A few examples of ways you might use the JavaScript API:
  • Add JavaScript to a LabKey HTML page to create a custom renderer for your data, transforming and presenting the data to match your vision.
  • Upload an externally-authored HTML page that uses rich UI elements such as editable grids, dynamic trees, and special-purpose data entry controls.
  • Create a series of HTML/JavaScript pages that provide a custom workflow packaged as a module.


Premium Resources

Subscribers to premium editions of LabKey Server can learn more with the example code in these topics:

Additional Resources:

Tutorial: Create Applications with the JavaScript API

This tutorial shows you how to create an application for managing requests for reagent materials. It is a model for creating other applications which:
  • Provide web-based access to users and system managers with different levels of access.
  • Allow users to enter, edit, and review their requests.
  • Allow reagent managers to review requests in a variety of ways to help them optimize their fulfillment system.
The application is implemented using:
  • JavaScript/HTML pages - Provides the user interface pages.
  • Several Lists - Holds the requests, reagent materials, and user information.
  • Custom SQL queries - Filtered views on the Lists.
  • R Reports - Provides visualization of user activity.
See the interactive demo version of this application: Reagent Request Application


To complete this tutorial, you will need:

Tutorial Steps:

First Step

Related Topics

Step 1: Create Request Form

In this step, you will create the user interface for collecting requests. Users specify the desired reagent, a desired quantity, and some user contact information, submitting requests with a form like the following:

Folders and Permissions

First create a separate folder where your target users have "insert" permissions. Creating a separate folder allows you to grant these expanded permissions only for the folder needed and not to any sensitive information. Further, insertion of data into the lists can then be carefully controlled and granted only through admin-designed forms.

  • Log in to your server and navigate to your "Tutorials" project. Create it if necessary.
    • If you don't already have a server to work on where you can create projects, start here.
    • If you don't know how to create projects and folders, review this topic.
  • Create a new subfolder named "Reagent Request Tutorial." Accept the default Collaboration folder type.
    • On the User/Permissions page, click Finish and Configure Permissions.
    • Uncheck Inherit permissions from parent.
    • Next to Submitter, add the group "Site: All Site Users".
    • Remove "Guest" from the Reader role if present.
    • Click Save and Finish.

You will now be on the home page of your new tutorial folder.

Import Lists

Our example reagent request application uses two lists. One records the available reagents, the other records the incoming requests. Below you import the lists in one pass, using a "list archive". (We've pre-populated these lists to simulate a system in active use.)

  • Click to download this list archive:
  • Go to (Admin) > Manage Lists.
  • Click Import List Archive.
  • Click Browse or Choose File and select the list archive you just downloaded.
  • Click Import List Archive.

You will see the two lists now available.

Create the Request Page

Requests submitted via this page will be inserted into the Reagent Requests list.

  • Click the folder name link ( Reagent Request Tutorial) to return to the main folder page.
  • In the Wiki web part, click Create a new wiki page.
  • Give it the name "reagentRequest" and the title "Reagent Request Form".
  • Click the Source tab.
  • Scroll down to the Code section of this page.
  • Copy and paste the HTML/JavaScript code block into the Source tab.
  • Click Save and Close.

The page reagentRequest now displays the submission form, as shown at the top of this page.

See a live example.

Notes on the source code

The following example code uses LABKEY.Query.selectRows and LABKEY.Query.insertRows to handle traffic with the server. For example code that uses Ext components, see LABKEY.ext.Store.

View the source code in your application, or view similar source in the interactive example. Search for the items in orange text to observe any or all of the following:

  • Initialization. The init() function pre-populates the web form with several pieces of information about the user.
  • User Info. User information is provided by the LABKEY.Security.currentUser API. Note that the user is allowed to edit some of the user information obtained through this API (their email address and name), but not their ID.
  • Dropdown. The dropdown options are extracted from the Reagent list. The LABKEY.Query.selectRows API is used to populate the dropdown with the contents of the Reagents list.
  • Data Submission. To insert requests into the Reagent Requests list, we use LABKEY.Query.insertRows. The form is validated before being submitted.
  • Asynchronous APIs. The success callback in LABKEY.Query.insertRows is used to move the user on to the next page only after all data has been submitted. The success function executes only after rows have been successfully inserted, which helps you deal with the asynchronous processing of HTTP requests.
  • Default onFailure function. In most cases, it is not necessary to explicitly include an onFailure function for APIs such as LABKEY.Query.insertRows. A default failure function is provided automatically; create one yourself if you wish a particular mode of failure other than the simple, default notification message.

Confirmation page dependency. Note that this source code requires that a page named "confirmation" exists before you can actually submit a request. Continue to the next step: Step 2: Confirmation Page to create this page.


<div style="float: right;">    <input value='View Source' type='button' onclick='gotoSource()'><br/><br/>    <input value='Edit Source' type='button' onclick='editSource()'> </div>

<form name="ReagentReqForm">    <table cellspacing="0" cellpadding="5" border="0">        <tr>            <td colspan="2">Please use the form below to order a reagent.                All starred fields are required.</td>        </tr> <tr><td colspan="2"><br/></td></tr>        <tr>            <td colspan="2"><div id="errorTxt" style="display:none;color:red"></div></td>        </tr>        <tr>            <td valign="top" width="100"><strong>Name:*</strong></td>            <td valign="top"><input type="text" name="DisplayName" size="30"></td>        </tr> <tr><td colspan="2" style="line-height:.6em"><br/></td></tr>        <tr>            <td valign="top" width="100"><strong>Email:*</strong></td>            <td valign="top"><input type="text" name="Email" size="30"></td>        </tr> <tr><td colspan="2" style="line-height:.6em"><br/></td></tr>        <tr>            <td valign="top" width="100"><strong>UserID:*</strong></td>            <td valign="top"><input type="text" name="UserID" readonly="readonly" size="30"></td>        </tr> <tr><td colspan="2" style="line-height:.6em"><br/></td></tr>        <tr>            <td valign="top" width="100"><strong>Reagent:*</strong></td>            <td valign="top">                <div>                    <select id="Reagent" name="Reagent">                        <option>Loading...</option>                    </select>                </div>            </td>        </tr> <tr><td colspan="2" style="line-height:.6em"><br/></td></tr>        <tr>            <td valign="top" width="100"><strong>Quantity:*</strong></td>            <td valign="top"><select id="Quantity" name="Quantity">                <option value="1">1</option>                <option value="2">2</option>                <option value="3">3</option>                <option value="4">4</option>                <option value="5">5</option>                <option value="6">6</option>                <option value="7">7</option>                <option value="8">8</option>       
         <option value="9">9</option>                <option value="10">10</option>            </select></td>        </tr> <tr><td colspan="2" style="line-height:.6em"><br/></td></tr>        <tr>            <td valign="top" width="100"><strong>Comments:</strong></td>            <td valign="top"><textarea cols="53" rows="5" name="Comments"></textarea></td>        </tr> <tr><td colspan="2" style="line-height:.6em"><br/></td></tr>

<tr>            <td valign="top" colspan="2">                <div align="center">                    <input value='Submit' type='button' onclick='submitRequest()'>            </td>        </tr>    </table> </form> <script type="text/javascript">

window.onload = init();

// Navigation functions. Demonstrates simple uses for LABKEY.ActionURL.    function gotoSource() { window.location = LABKEY.ActionURL.buildURL("wiki", "source", LABKEY.ActionURL.getContainer(), {name: 'reagentRequest'});    }

function editSource() { window.location = LABKEY.ActionURL.buildURL("wiki", "edit", LABKEY.ActionURL.getContainer(), {name: 'reagentRequest'});    }

// Initialize the form by populating the Reagent drop-down list and    // entering data associated with the current user.    function init() {        LABKEY.Query.selectRows({            schemaName: 'lists',            queryName: 'Reagents',            success: populateReagents        });

document.getElementById("Reagent").selectedIndex = 0;

// Set the form values        var reagentForm = document.getElementsByName("ReagentReqForm")[0];        reagentForm.DisplayName.value = LABKEY.Security.currentUser.displayName;        reagentForm.Email.value = LABKEY.Security.currentUser.email;        reagentForm.UserID.value = LABKEY.Security.currentUser.id;    }

// Populate the Reagent drop-down menu with the results of    // the call to LABKEY.Query.selectRows.    function populateReagents(data) {        var el = document.getElementById("Reagent");        el.options[0].text = "<Select Reagent>";        for (var i = 0; i < data.rows.length; i++) {            var opt = document.createElement("option");            opt.text = data.rows[i].Reagent;            opt.value = data.rows[i].Reagent;            el.options[el.options.length] = opt;        }    }

// Enter form data into the reagent request list after validating data    // and determining the current date.    function submitRequest() {        // Make sure the form contains valid data        if (!checkForm()) {            return;        }

// Insert form data into the list.        LABKEY.Query.insertRows({            schemaName: 'lists',            queryName: 'Reagent Requests',            rowDataArray: [{                "Name": document.ReagentReqForm.DisplayName.value,                "Email": document.ReagentReqForm.Email.value,                "UserID": document.ReagentReqForm.UserID.value,                "Reagent": document.ReagentReqForm.Reagent.value,                "Quantity": parseInt(document.ReagentReqForm.Quantity.value),                "Date": new Date(),                "Comments": document.ReagentReqForm.Comments.value,                "Fulfilled": 'false'            }],            success: function(data) {                // The set of URL parameters.                var params = {                    "name": 'confirmation', // The destination wiki page. The name of this parameter is not arbitrary.                    "userid": LABKEY.Security.currentUser.id // The name of this parameter is arbitrary.                };

// This changes the page after building the URL. Note that the wiki page destination name is set in params.                var wikiURL = LABKEY.ActionURL.buildURL("wiki", "page", LABKEY.ActionURL.getContainer(), params);                window.location = wikiURL;            }        });    }

// Check to make sure that the form contains valid data. If not,    // display an error message above the form listing the fields that need to be populated.    function checkForm() {        var result = true;        var ob = document.ReagentReqForm.DisplayName;        var err = document.getElementById("errorTxt");        err.innerHTML = '';        if (ob.value == '') {            err.innerHTML += "Name is required.";            result = false;        }        ob = document.ReagentReqForm.Email;        if (ob.value == '') {            if(err.innerHTML != '')                err.innerHTML += "<br>";            err.innerHTML += "Email is required.";            result = false;        }        ob = document.ReagentReqForm.Reagent;        if (ob.value == '') {            if(err.innerHTML != '<Select Reagent>')                err.innerHTML += "<br>";            err.innerHTML += "Reagent is required.";            result = false;        }        if(!result)            document.getElementById("errorTxt").style.display = "block";        return result;    }


Start Over | Next Step

Step 2: Confirmation Page

Now that you have created a way for users to submit requests, you are ready to create the confirmation page. This page will display the count of requests made by the current user, followed by a grid view of requests submitted by all users, similar to the following. You can sort and filter the grid to see specific subsets of requests, such as your own.

After you have submitted one or more "requests" the tallies will change and your new requests will be shown.

See a live example.

Create the Confirmation Page

  • Return to your "Reagent Request Tutorial" folder if you navigated away.
  • Click the (triangle) menu on Reagent Request Form and select New.
  • Name: "confirmation" (this page name is already embedded in the code for the request page, and is case sensitive).
  • Title: "Reagent Request Confirmation".
  • Confirm that the Source tab is selected.
  • Copy and paste the contents of the code section below into the source panel.
  • Click Save & Close.
  • You will see a grid displayed showing the sample requests from our list archive. If you tried submitting a request before creating this page, you would have seen an error that the confirmation page didn't exist, but your request will still appear in this list.
  • Click Reagent Request Form in the Pages web part to return to the first wiki you created and submit some sample requests to add data to the table.

Notes on the JavaScript Source

LABKEY.Query.executeSql is used to calculate total reagent requests and total quantities of reagents for the current user and for all users. These totals are output to text on the page to provide the user with some idea of the length of the queue for reagents.

Note: The length property (e.g., data.rows.length) is used to calculate the number of rows in the data table returned by LABKEY.Query.executeSql. It is used instead of the rowCount property because rowCount returns only the number of rows that appear in one page of a long dataset, not the total number of rows on all pages.


<p>Thank you for your request. It has been added to the request queue and will be filled promptly.</p> <div id="totalRequests"></div> <div id="allRequestsDiv"></div> <div id="queryDiv1"></div>

<script type="text/javascript">

window.onload = init();

function init() {

var qwp1 = new LABKEY.QueryWebPart({            renderTo: 'queryDiv1',            title: 'Reagent Requests',            schemaName: 'lists',            queryName: 'Reagent Requests',            buttonBarPosition: 'top',            // Uncomment below to filter the query to the current user's requests.            // filters: [ LABKEY.Filter.create('UserID', LABKEY.Security.currentUser.id) ],            sort: '-Date'        });

// Extract a table of UserID, TotalRequests and TotalQuantity from Reagent Requests list.        LABKEY.Query.executeSql({            schemaName: 'lists',            queryName: 'Reagent Requests',            sql: 'SELECT "Reagent Requests".UserID AS UserID, ' +   'Count("Reagent Requests".UserID) AS TotalRequests, ' +   'Sum("Reagent Requests".Quantity) AS TotalQuantity ' +   'FROM "Reagent Requests" Group BY "Reagent Requests".UserID',            success: writeTotals        });


// Use the data object returned by a successful call to LABKEY.Query.executeSQL to    // display total requests and total quantities in-line in text on the page.    function writeTotals(data)    {        var rows = data.rows;

// Find overall totals for all user requests and quantities by summing        // these columns in the sql data table.        var totalRequests = 0;        var totalQuantity = 0;        for(var i = 0; i < rows.length; i++) {            totalRequests += rows[i].TotalRequests;            totalQuantity += rows[i].TotalQuantity;        }

// Find the individual user's total requests and quantities by looking        // up the user's id in the sql data table and reading off the data in the row.        var userTotalRequests = 0;        var userTotalQuantity = 0;        for(i = 0; i < rows.length; i++) {            if (rows[i].UserID == LABKEY.Security.currentUser.id){                userTotalRequests = rows[i].TotalRequests;                userTotalQuantity = rows[i].TotalQuantity;                break;            }        }

document.getElementById('totalRequests').innerHTML = '<p>You have requested <strong>' +                userTotalQuantity + '</strong> individual bottles of reagents, for a total of <strong>'                + userTotalRequests + '</strong> separate requests pending. </p><p> We are currently '                + 'processing orders from all users for <strong>' + totalQuantity                + '</strong> separate bottles, for a total of <strong>' + totalRequests                + '</strong> requests.</p>';    }


Previous Step | Next Step

Step 3: R Histogram (Optional)

This is an optional step. If you wish you can skip to the last step in the tutorial: Step 4: Summary Report For Managers

To further explore the possibilities available, let's add an R data visualization plot of the "Reagent Requests" list to the confirmation page, to create a page that looks like the following:

Set Up R

If you have not already configured your server to use R, follow these instructions before continuing: Install and Set Up R.

Create an R Histogram

  • Return to the home page of your "Reagent Request Tutorial" folder by clicking the Reagent Request Tutorial link or using the project and folder menu.
  • Select (Admin) > Manage Lists.
  • In the Available Lists grid, click Reagent Requests.
  • Select (Reports) > Create R Report. If this option is not shown, you must configure R on your machine.
  • Paste the following code onto the Source tab (replacing the default contents).
if (length(labkey.data$userid) > 0) {
hist(labkey.data$quantity, xlab = c("Quantity Requested", labkey.url.params$displayName),
ylab = "Count", col="lightgreen", main= NULL)
} else {
write("No requests are available for display.", file = "${txtout:histogram}")
}
  • Check the "Make this report available to all users" checkbox.
  • Scroll down and click Save.
  • Enter a Report Name, such as "Reagent Histogram".
  • Click OK.
  • Click the Report tab to see the R report, if it is not selected by default.
  • Notice the reportId in the page URL. You will need this number to reference the report in your confirmation page. For example, in a URL containing "reportId=db%3A90", the reportId is 90.
    • The "%3A" is the URL-encoded colon ":" character, just as "%20" represents a space.
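As a quick sanity check, these escapes decode with plain browser JavaScript (no LabKey APIs involved):

```javascript
// URL escapes decode with the standard global decodeURIComponent:
console.log(decodeURIComponent('db%3A90'));     // prints "db:90"
console.log(decodeURIComponent('My%20Report')); // prints "My Report"
```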

This histogram gives a view of all requests listed in the "Reagent Requests" table.

Update the Confirmation Page

  • Return to the main page by clicking the Reagent Request Tutorial link near the top of the page.
  • Open the "Reagent Request Confirmation" wiki page for editing. Click it in the Pages web part, then click Edit. We will make three changes to customize the page:
  • Add the following to the end of the block of <div> tags at the top of the page:
    <div id="reportDiv">Loading...</div>
    • This will give users an indication that additional information is coming while the histogram is loading.
  • Uncomment the line marked with "// Uncomment below to filter the query to the current user's requests."
    filters: [ LABKEY.Filter.create('UserID', LABKEY.Security.currentUser.id) ],
    • This will reduce the grid displayed; if you have not entered any sample requests, an empty table will be shown.
  • Add the following to the init() function after the "//Extract a table..." section:
// Draw a histogram of the user's requests.
var reportWebPartRenderer = new LABKEY.WebPart({
partName: 'Report',
renderTo: 'reportDiv',
frame: 'title',
partConfig: {
title: 'Reagent Request Histogram',
reportId: 'db:XX',
showSection: 'histogram'
}
});
  • Note the reference "db:XX". Replace XX with the report number for your R report, which you obtained from the URL earlier.
  • Click Save & Close.

You will now see the histogram on the Reagent Request Confirmation page.

Link to a live example.

Note that the R histogram script returns data for all users. The wiki page does the work of filtering the view to the current user by passing a filtered view of the dataset to the R script (via the partConfig parameter of LABKEY.WebPart). To see the web part configuration parameters available, see: Web Part Configuration Properties.

When creating a filter over the dataset, you will need to determine the appropriate filter parameter names (e.g., 'query.UserID~eq'). To do so, go to the dataset and click on the column headers to create filters that match the filters you wish to pass to this API. Read the filter parameters off of the URL.

You can pass arbitrary parameters to the R script by adding additional fields to partConfig. For example, you could pass a parameter called myParameter with a value of 5 by adding the line "myParameter: 5,". Within the R script editor, you can extract URL parameters using the labkey.url.params variable, as described at the bottom of the "Help" tab.
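To make the mechanics concrete, here is a self-contained sketch of how partConfig fields end up as URL parameters that the R script can then read via labkey.url.params. Note that buildPartConfigParams is a hypothetical helper written for this illustration, not a LabKey API:

```javascript
// Hypothetical helper illustrating how partConfig fields become URL parameters.
function buildPartConfigParams(partConfig) {
  return Object.keys(partConfig)
    .map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(partConfig[k]);
    })
    .join('&');
}

var qs = buildPartConfigParams({
  'query.UserID~eq': 1016, // a filter parameter name read off a filtered grid URL
  myParameter: 5           // an arbitrary extra parameter for the R script
});
console.log(qs); // prints "query.UserID~eq=1016&myParameter=5"
```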

Previous Step | Next Step

Step 4: Summary Report For Managers

In this step we create a report page for application managers, containing information they can use to coordinate their efforts to fulfill requests. The page will look like the following:

See a live example.

Create Custom SQL Queries

We create three custom SQL queries over the "Reagent Requests" list in order to distill the data in ways that are useful to reagent managers. We create custom SQL queries using the LabKey UI, then use LABKEY.QueryWebPart to display the results as a grid. As part of writing custom SQL, we can add Metadata XML to provide a URL link to the subset of the data listed in each column.

Query #1: Reagent View

First we define a query that returns all the reagents, the number of requests made, and the number requested of each.

  • Return to the home page of your "Reagent Request Tutorial" folder.
  • Select (Admin) > Developer Links > Schema Browser.
  • Select the lists schema.
  • Click Create New Query.
  • Define your first of three SQL queries:
    • What do you want to call the new query?: Enter "Reagent View"
    • Which query/table do you want this new query to be based on?: Select Reagent Requests
    • Click the Create and Edit Source button.
    • Paste this SQL onto the Source tab (replace the default text):
"Reagent Requests".Reagent AS Reagent,
Count("Reagent Requests".UserID) AS TotalRequests,
Sum("Reagent Requests".Quantity) AS TotalQuantity
FROM "Reagent Requests"
Group BY "Reagent Requests".Reagent
    • Click the XML Metadata tab and paste the following (replace the default):
<tables xmlns="http://labkey.org/data/xml">
<table tableName="Reagent View" tableDbType="NOT_IN_DB">
<columns>
<column columnName="TotalRequests"/>
<column columnName="TotalQuantity"/>
</columns>
</table>
</tables>
    • Click Save & Finish to see the results.
  • Depending on what requests have been entered, the results might look something like this:

Query #2: User View

The next query we add will return the number of requests made by each user.

  • Click lists Schema above the grid to return to the Schema Browser. (Notice your new "Reagent View" request is now included.)
  • Click Create New Query.
    • Call this query "User View" and again base it on Reagent Requests.
    • Click Create and Edit Source.
    • Paste this into the source tab:
"Reagent Requests".Name AS Name,
"Reagent Requests".Email AS Email,
"Reagent Requests".UserID AS UserID,
Count("Reagent Requests".UserID) AS TotalRequests,
Sum("Reagent Requests".Quantity) AS TotalQuantity
FROM "Reagent Requests"
Group BY "Reagent Requests".UserID, "Reagent Requests".Name, "Reagent Requests".Email
    • Paste this into the XML Metadata tab:
<tables xmlns="http://labkey.org/data/xml">
<table tableName="User View" tableDbType="NOT_IN_DB">
<columns>
<column columnName="TotalRequests"/>
<column columnName="TotalQuantity"/>
</columns>
</table>
</tables>
    • Click Save & Finish to see the results.

Query #3: Recently Submitted

  • Return to the lists Schema again.
  • Click Create New Query.
    • Name the query "Recently Submitted" and again base it on the list Reagent Requests.
    • Click Create and Edit Source.
    • Paste this into the source tab:
SELECT Y."Name",
MAX(Y.Today) AS Today,
MAX(Y.Yesterday) AS Yesterday,
MAX(Y.Day3) AS Day3,
MAX(Y.Day4) AS Day4,
MAX(Y.Day5) AS Day5,
MAX(Y.Day6) AS Day6,
MAX(Y.Day7) AS Day7,
MAX(Y.Day8) AS Day8,
MAX(Y.Day9) AS Day9,
MAX(Y.Today) + MAX(Y.Yesterday) + MAX(Y.Day3) + MAX(Y.Day4) + MAX(Y.Day5)
+ MAX(Y.Day6) + MAX(Y.Day7) + MAX(Y.Day8) + MAX(Y.Day9) AS Total
(SELECT X."Name",
SELECT Count("Reagent Requests".Key) AS C,
DAYOFYEAR("Reagent Requests".Date) AS DayIndex, "Reagent Requests"."Name"
FROM "Reagent Requests"
WHERE timestampdiff('SQL_TSI_DAY', "Reagent Requests".Date, NOW()) < 10
GROUP BY "Reagent Requests"."Name", DAYOFYEAR("Reagent Requests".Date)
GROUP BY X."Name", X.C, X.DayIndex)
    • There is nothing to paste into the XML Metadata tab.
    • Click Save & Finish.

If you do not see much data displayed by the "Recently Submitted" query, the dates of reagent requests may be too far in the past. To see more data here, you can:

  • Manually edit the dates in the list to occur within the last 10 days.
  • Edit the source XLS to bump the dates to occur within the last 10 days, and re-import the list.
  • Create a bunch of recent requests using the reagent request form.

Create Summary Report Wiki Page

  • Click Reagent Request Tutorial to return to the main page.
  • On the Pages web part, select (triangle) > New to create a new wiki.
  • Enter the following:
    • Name: reagentManagers
    • Title: "Summary Report for Reagent Managers"
    • Scroll down to the Code section of this page.
    • Copy and paste the code block into the Source tab.
    • Click Save & Close.

This summary page, like other grid views of data, is live - if you enter new requests, then return to this page, they will be immediately included.

Notes on the JavaScript Source

You can reopen your new page for editing or view the source code below to observe the following parts of the JavaScript API.

Check User Credentials

The script uses the LABKEY.Security.getGroupsForCurrentUser API to determine whether the current user has sufficient credentials to view the page's content.

Display Custom Queries

We use the LABKEY.QueryWebPart API to display our custom SQL queries in the page. Note the use of aggregates to provide sums and counts for the columns of our queries.

Display All Data

Lastly, we display a grid view of the entire "Reagent Requests" list on the page using the LABKEY.QueryWebPart API, allowing the user to select rows and create views using the buttons above the grid.


The source code for the reagentManagers page.

<div align="right" style="float: right;">
<input value='View Source' type='button' onclick='gotoSource()'>
<input value='Edit Source' type='button' onclick='editSource()'>
</div>
<div id="errorTxt" style="display:none; color:red;"></div>
<div id="listLink"></div>
<div id="reagentDiv"></div>
<div id="userDiv"></div>
<div id="recentlySubmittedDiv"></div>
<div id="plotDiv"></div>
<div id="allRequestsDiv"></div>

<script type="text/javascript">

window.onload = init;

// Navigation functions. Demonstrates simple uses for LABKEY.ActionURL.
function gotoSource() {
var thisPage = LABKEY.ActionURL.getParameter("name");
window.location = LABKEY.ActionURL.buildURL("wiki", "source", LABKEY.ActionURL.getContainer(), {name: thisPage});
}

function editSource() {
var editPage = LABKEY.ActionURL.getParameter("name");
window.location = LABKEY.ActionURL.buildURL("wiki", "edit", LABKEY.ActionURL.getContainer(), {name: editPage});
}

function init() {

// Ensure that the current user has sufficient permissions to view this page.
LABKEY.Security.getGroupsForCurrentUser({
successCallback: evaluateCredentials
});
}

// Check the group membership of the current user.
// Display page data if the user is a member of the appropriate group.
function evaluateCredentials(results)
{
// Determine whether the user is a member of "All Site Users" group.
var isMember = false;
for (var i = 0; i < results.groups.length; i++) {
if (results.groups[i].name == "All Site Users") {
isMember = true;
break;
}
}

// If the user is not a member of the appropriate group,
// display alternative text.
if (!isMember) {
var elem = document.getElementById("errorTxt");
elem.innerHTML = '<p>You do '
+ 'not have sufficient permissions to view this page. Please log in to view the page.</p>'
+ '<p>To register for an account, please go <a href="">here</a></p>';
elem.style.display = "inline";
}
else {
displayData();
}
}

// Display page data now that the user's membership in the appropriate group
// has been confirmed.
function displayData()
{
// Link to the Reagent Request list itself.
LABKEY.Query.getQueryDetails({
schemaName: 'lists',
queryName: 'Reagent Requests',
success: function(data) {
var el = document.getElementById("listLink");
if (data && data.viewDataUrl) {
var html = '<p>To see an editable list of all requests, click ';
html += '<a href="' + data.viewDataUrl + '">here</a>';
html += '.</p>';
el.innerHTML = html;
}
}
});

// Display a summary of reagents
var reagentSummaryWebPart = new LABKEY.QueryWebPart({
renderTo: 'reagentDiv',
title: 'Reagent Summary',
schemaName: 'lists',
queryName: 'Reagent View',
buttonBarPosition: 'none',
aggregates: [
{column: 'Reagent', type: LABKEY.AggregateTypes.COUNT},
{column: 'TotalRequests', type: LABKEY.AggregateTypes.SUM},
{column: 'TotalQuantity', type: LABKEY.AggregateTypes.SUM}]
});

// Display a summary of users
var userSummaryWebPart = new LABKEY.QueryWebPart({
renderTo: 'userDiv',
title: 'User Summary',
schemaName: 'lists',
queryName: 'User View',
buttonBarPosition: 'none',
aggregates: [
{column: 'UserID', type: LABKEY.AggregateTypes.COUNT},
{column: 'TotalRequests', type: LABKEY.AggregateTypes.SUM},
{column: 'TotalQuantity', type: LABKEY.AggregateTypes.SUM}]
});

// Display how many requests have been submitted by which users
// over the past 10 days.
var resolvedWebPart = new LABKEY.QueryWebPart({
renderTo: 'recentlySubmittedDiv',
title: 'Recently Submitted',
schemaName: 'lists',
queryName: 'Recently Submitted',
buttonBarPosition: 'none',
aggregates: [
{column: 'Today', type: LABKEY.AggregateTypes.SUM},
{column: 'Yesterday', type: LABKEY.AggregateTypes.SUM},
{column: 'Day3', type: LABKEY.AggregateTypes.SUM},
{column: 'Day4', type: LABKEY.AggregateTypes.SUM},
{column: 'Day5', type: LABKEY.AggregateTypes.SUM},
{column: 'Day6', type: LABKEY.AggregateTypes.SUM},
{column: 'Day7', type: LABKEY.AggregateTypes.SUM},
{column: 'Day8', type: LABKEY.AggregateTypes.SUM},
{column: 'Day9', type: LABKEY.AggregateTypes.SUM},
{column: 'Total', type: LABKEY.AggregateTypes.SUM}]
});

// Display the entire Reagent Requests grid view.
var allRequestsWebPart = new LABKEY.QueryWebPart({
renderTo: 'allRequestsDiv',
title: 'All Reagent Requests',
schemaName: 'lists',
queryName: 'Reagent Requests',
aggregates: [{column: 'Name', type: LABKEY.AggregateTypes.COUNT}]
});
}
</script>



Congratulations! You have created a functioning JavaScript application. Return to your tutorial page, make a few requests and check how the confirmation and summary pages are updated.

Related Topics

Previous Step

Repackaging the App as a Module

Converting your application into a module has a number of advantages. For example, the application source can be checked into a source control environment, and it can be distributed and deployed as a module unit.

The jstutorial.module file shows how to convert two of the application pages (reagentRequest and confirmation) into views within a module. The .module file is a renamed .zip archive. To unzip the file and see the source, rename it to "jstutorial.zip" and unzip it.

To deploy and use the .module file:

Tutorial: Use URLs to Pass Data and Filter Grids

This tutorial covers how to:
  • Pass parameters between pages via a URL
  • Filter a grid using a received URL parameter
To accomplish this, you will:
  1. Collect user input from an initial page
  2. Build a parameterized URL to pass the user's input to a second page.
  3. Use information packaged in the URL to filter a data grid.
We will use a list of reagents as our sample data; our finished application will filter the list of reagents to those that start with the user-provided value. For example, if the user enters 'tri', the grid will display only those reagents whose name starts with 'tri'.

See a completed version of what you will build in this tutorial.


To complete this tutorial, you will need:

First Step

Related Topics

Choose Parameters

In this step, we create a page to collect a parameter from the user. This value will be used to filter for items in the data that start with the text provided. For example, if the user enters 'tri', the server will filter for data records that start with the value 'tri'.

Set Up

First, set up the folder with underlying data to filter.

  • Click here to download this sample data:
    • This is a set of TSV files packaged as a list archive, and must remain zipped.

  • Log in to your server and navigate to your "Tutorials" project. Create it if necessary.
    • If you don't already have a server to work on where you can create projects, start here.
    • If you don't know how to create projects and folders, review this topic.
  • Create a new subfolder named "URL Tutorial." Accept the defaults in the folder creation wizard.

  • Import our example data to your new folder:
    • Select (Admin) > Manage Lists.
    • Click Import List Archive.
    • Click Browse or Choose File.
    • Select the file, and click Import List Archive.
  • The lists inside are added to your folder.
  • Click URL Tutorial to return to the main page of your folder.

Create an HTML Page

  • In the Wiki web part, click Create a new wiki page.
    • Name: 'chooseParams'
    • Title: 'Choose Parameters'
    • Click the Source tab and copy and paste the code below.
      • If you don't see a source tab, Convert the page to the HTML type.
<script type="text/javascript">

var searchText = "";

function buttonHandler()
{
if (document.SubmitForm.searchText.value)
{
//Set the name of the destination wiki page,
//and the text we'll use for filtering.
var params = {};
params['name']= 'showFilter';
params['searchText'] = document.SubmitForm.searchText.value;

// Build the URL to the destination page.
// In building the URL for the "Show Filtered Grid" page, we use the following arguments:
// controller - The current controller (wiki)
// action - The wiki controller's "page" action
// containerPath - The current container
// parameters - The parameter array we just created above (params)
window.location = LABKEY.ActionURL.buildURL("wiki", "page", LABKEY.ActionURL.getContainer(), params);
}
else {
alert('You must enter a value to submit.');
}
}
</script>

<form name="SubmitForm" onsubmit="buttonHandler(); return false;">
Search Text:<br>
<input type="text" name="searchText"><br>
<input type="submit" value="Submit">
</form>
    • Click Save & Close.

We use the "params" object to package up all the URL parameters. In this tutorial, we place only two parameters into the object, but you could easily add additional parameters of your choice. The two parameters:

  • name -- The name of the destination wiki page, with the value "showFilter". This page doesn't exist yet.
  • searchText -- The text we'll use for filtering on the "showFilter" page. This will be provided through user input.

Use the Wiki to Build the URL

  • In the Choose Parameters section, enter some text, for example, "a", and click Submit.
  • The destination page (showFilter) doesn't exist yet, so you will see an error.
  • Notice the URL in the browser which was built from the parameters provided, especially the query string portion following the '?': '?name=showFilter&searchText=a'.
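The query string you see is simply the params object URL-encoded and appended after the '?'. The following self-contained sketch shows that step; appendParams is a hypothetical stand-in for part of what LABKEY.ActionURL.buildURL does:

```javascript
// Hypothetical sketch of the query-string step of buildURL.
function appendParams(baseUrl, params) {
  var pairs = Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
  });
  return baseUrl + '?' + pairs.join('&');
}

var url = appendParams('/wiki/home/page.view', {name: 'showFilter', searchText: 'a'});
console.log(url); // prints "/wiki/home/page.view?name=showFilter&searchText=a"
```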

Previous Step | Next Step

Show Filtered Grid

Now create the "showFilter" destination page that will display the filtered data grid using the user's input.

Create a Destination HTML Page

  • Click URL Tutorial to return to the work folder.
  • In the Pages section to the right, select (triangle) > New.
  • Create a new HTML page with the following properties:
    • Name: "showFilter" (Remember this name is hard coded and case-sensitive).
    • Title: Show Filtered List
    • Click the Source tab and copy and paste the following code into it.
    • Click Save & Close.
<script type="text/javascript">

window.onload = function(){

// We use the 'searchText' parameter contained in the URL to create a filter.
var myFilters = [];
if (LABKEY.ActionURL.getParameter('searchText')) {
myFilters = [ LABKEY.Filter.create('Reagent',
LABKEY.ActionURL.getParameter('searchText'),
LABKEY.Filter.Types.STARTS_WITH) ];
}

// In order to display the filtered list,
// we render a QueryWebPart that uses the 'myFilters' array (created above) as its filter.
// Note that it is recommended to either use the 'renderTo' config option
// (as shown below) or the 'render( renderTo )' method, but not both.
// These both issue a request to the server, so it is only necessary to call one of them.
var qwp = new LABKEY.QueryWebPart({
schemaName : 'lists',
queryName : 'Reagents', // Change to use a different list, for example: 'Instruments'
renderTo : 'filteredTable',
filters : myFilters
});
};
</script>

<div id="filteredTable"></div>

Notice that the entire list of reagents is displayed because no filter has been applied yet. The query string in the URL is currently "?name=showFilter".

Display a Filtered Grid

Now we are ready to use our parameterized URL to filter the data.

  • Click URL Tutorial to return to the main page.
  • In the Choose Parameters web part, enter some search text, for example the single letter 'a' and click Submit.
  • The URL is constructed and takes you to the destination page.
  • Notice that only those reagents that start with 'a' are shown.

Notice the query string in the URL is now "?name=showFilter&searchText=a".

You can return to the Choose Parameters web part or simply change the URL directly to see different results. For example, change the searchText value from 'a' to 'tri' to see all of the reagents that begin with 'tri'. The "showFilter" page understands the "searchText" value whether it is provided directly or via handoff from the other wiki page.


Congratulations! You've now completed the tutorial and created a simple application to pass user input via the URL to filter data grids.

For another API tutorial, try Tutorial: Create Applications with the JavaScript API.

Previous Step

Tutorial: Visualizations in JavaScript

Once you have created a chart and filtered and refined it using the data grid and user interface, you can export it as JavaScript. Then, provided you have Developer permissions (that is, you are a member of the "Developers" site group), you can insert it into an HTML page, such as a wiki, and edit it directly. The powerful LabKey visualization libraries include many ways to customize the chart beyond the features available in the UI. This lets you rapidly prototype and collaborate with others to get the precise presentation of data you would like.

The exported JavaScript from a chart will:

  • Load the dependencies necessary for visualization libraries
  • Load the data to back the chart
  • Render the chart
Because the exported script selects data in the database directly, if the data changes after you export and edit, the chart will reflect the data changes as well.

In this tutorial, we will:

  1. Export Chart as JavaScript
  2. Embed the Script in a Wiki
  3. Modify the Exported Chart Script
  4. Display the Chart with Minimal UI
This example uses the sample study datasets imported in the study tutorial. If you have not already set that up, follow the instructions in this topic: Install the Sample Study.

First Step

Related Topics

Step 1: Export Chart as JavaScript

We will start by making a time chart grouped by treatment group, then export the JavaScript to use in the next tutorial steps. This example uses the sample study datasets imported in the study tutorial.

Create a Timechart

  • Navigate to the home page of your sample study, "HIV Study." If you don't have one already, see Install the Sample Study.
  • Click the Clinical and Assay Data tab.
  • Open the Lab Results data set.
  • Select (Charts) > Create Chart.
  • Click Time.
  • Drag CD4+ from the column list to the Y Axis box.
  • Click Apply.
  • You will see a basic time chart. Before exporting the chart to JavaScript, we can customize it within the wizard.
  • Click Chart Type.
  • In the X Axis box, change the Time Interval to "Months".
  • Click Apply and notice the X axis now tracks months.
  • Click Chart Layout, then change the Subject Selection to "Participant Groups". Leave the default "Show Mean" checkbox checked.
  • Change the Number of Charts to "One per Group".
  • Click Apply.
  • In the Filters > Groups panel on the left, select Treatment Group and deselect anything that was checked by default. The chart will now be displayed as a series of four individual charts in a scrollable window, one for each treatment group:

Export to JavaScript

  • Hover over the chart to reveal the Export buttons, and click to Export as Script.
  • You will see a popup window containing the HTML for the chart, including the JavaScript code.
  • Select All within the popup window and Copy the contents to your browser clipboard.
  • Click Close in the popup. Then Save your chart with the name of your choice.
  • Before proceeding, paste the copied chart script to a text file on your local machine for safekeeping. In this tutorial, we use the name "ChartJS.txt".

Start Over | Next Step

Step 2: Embed the Script in a Wiki

You can embed an exported JavaScript chart without further modifications into a Wiki or any other HTML page. To complete this step you must have Developer permissions, meaning you are a member of the "Developers" site group.

  • Click the Overview tab to go to the home page of your study, or navigate to any tab where you would like to place this exported chart.
  • Add a Wiki web part on the left.
  • Create a new wiki:
    • If the folder already contains a wiki page named "default", the new web part will display it. Choose New from the web part (triangle) menu.
    • Otherwise, click Create a new wiki page in the new wiki web part.
  • Give the page the name of your choice. Wiki page names must be unique, so be careful not to overwrite something else unintentionally.
  • Enter a Title such as "Draft of Chart".
  • Click the Source tab. Note: if there is no Source tab, click Convert To..., select HTML and click Convert.
  • Paste the JavaScript code you copied above onto the source tab. Retrieve it from your text file, "ChartJS.txt" if it is no longer on your browser clipboard.
  • Scroll up and click Save.
  • You could also add additional HTML to the page before or after the pasted JavaScript of the chart, or make edits as we will explore in the next tutorial step.
    • Caution: Do not switch to the Visual tab. The visual editor does not support this JavaScript element, so switching to that tab would cause the chart to be deleted. You will be warned if you click the Visual tab. If you do accidentally lose the chart, you can recover the JavaScript using the History of the wiki page, your ChartJS.txt file, or by exporting it again from the saved timechart.
  • Scroll up and click Save & Close.
  • Return to the tab where you placed the new wiki web part. If it does not already show your chart, select Customize from the (triangle) menu for the web part and change the Page to display to the name of the wiki you just created. Click Submit.
  • Notice that the web part now contains the series of single timecharts as created in the wizard.

Previous Step | Next Step

Modify the Exported Chart Script


The chart wizard itself offers a variety of tools for customizing your chart. However, by editing the exported JavaScript for the chart directly, you can exercise much finer-grained control and make modifications that are not available in the wizard. In this step we will modify the chart to use an accordion layout and change the size to better fit the page.

  • Open your wiki for editing by clicking Edit or the pencil icon if visible.
  • Confirm that the Source tab is selected. Reminder: Do not switch to the Visual tab.
  • Scroll down to find the line that looks like this:
    LABKEY.vis.TimeChartHelper.renderChartSVG('exportedChart', queryConfig, chartConfig);
  • Replace that line with the following code block. It is good practice to mark your additions with comments such as those shown here.
// ** BEGIN MY CODE **
// create an accordion layout panel for each of the treatment group plots
var accordionPanel = Ext4.create('Ext.panel.Panel', {
renderTo: 'exportedChart',
title: 'Time Chart: CD4 Levels per Treatment Group',
width: 760,
height: 500,
layout: 'accordion',
items: []
});

// loop through the array of treatment groups
var groupIndex = 0;
Ext4.each(chartConfig.subject.groups, function(group) {
// add a new panel to the accordion layout for the given group
var divId = 'TimeChart' + groupIndex;
accordionPanel.add(Ext4.create('Ext.panel.Panel', {
title: group.label,
html: '<div id="' + divId + '"></div>'
}));

// clone and modify the queryConfig and chartConfig for the plot specific to this group
var groupQueryConfig = Ext4.clone(queryConfig);
groupQueryConfig.defaultSingleChartHeight = 350;
groupQueryConfig.defaultWidth = 750;
var groupChartConfig = Ext4.clone(chartConfig);
groupChartConfig.subject.groups = [group];

// call the plot render method using the cloned config objects
LABKEY.vis.TimeChartHelper.renderChartSVG(divId, groupQueryConfig, groupChartConfig);

groupIndex++;
});

// ** END MY CODE **
  • Click Save and Close to view your new chart, now in an "accordion panel" style. Use the buttons on the right to expand/collapse the individual chart panels.

Previous Step | Next Step

Display the Chart with Minimal UI

To embed an exported chart without surrounding user interface, create a simple file-based module where your chart is included in a myChart.html file. Create a myChart.view.xml file next to that page with the following content. This will load the necessary dependencies and create a page displaying only the simple chart. (To learn how to create a simple module, see Tutorial: Hello World Module.)

<view xmlns="http://labkey.org/data/xml/view" template="print" frame="none"/>


Congratulations! You have completed the tutorial and learned to create and modify a visualization in JavaScript.

Previous Step

Related Topics

JavaScript API - Examples

The samples below will get you started using the JavaScript API to create enhanced HTML pages and visualizations of data.

Other JavaScript API Samples

Show a QueryWebPart

Displays a query in the home/ProjectX folder. The containerFilter property broadens the scope of the query to pull data from all folders on the site.

<div id='queryDiv1'></div>
<script type="text/javascript">
var qwp1 = new LABKEY.QueryWebPart({
renderTo: 'queryDiv1',
title: 'Some Query',
schemaName: 'someSchema',
queryName: 'someQuery',
containerPath: 'home/ProjectX',
containerFilter: LABKEY.Query.containerFilter.allFolders,
buttonBarPosition: 'top',
maxRows: 25
});
</script>

Files Web Part - Named File Set

Displays the named file set 'store1' as a Files web part.

<div id="fileDiv"></div>

<script type="text/javascript">

// Displays the named file set 'store1'.
var wp1 = new LABKEY.WebPart({
title: 'File Store #1',
partName: 'Files',
partConfig: {fileSet: 'store1'},
renderTo: 'fileDiv'
});
</script>


Inserting a Wiki Web Part

Note that the Web Part Configuration Properties covers the configuration properties that can be set for various types of web parts inserted into a wiki page.

<div id='myDiv'></div>
<script type="text/javascript">
var webPart = new LABKEY.WebPart({partName: 'Wiki',
renderTo: 'myDiv',
partConfig: {name: 'home'}
});
</script>

Retrieving the Rows in a List

This script retrieves all the rows in a user-created list named "People." Please see LABKEY.Query.selectRows for detailed information on the parameters used in this script.

<script type="text/javascript">
function onFailure(errorInfo, options, responseObj)
{
if (errorInfo && errorInfo.exception)
alert("Failure: " + errorInfo.exception);
else
alert("Failure: " + responseObj.statusText);
}

function onSuccess(data)
{
alert("Success! " + data.rowCount + " rows returned.");
}

LABKEY.Query.selectRows({
schemaName: 'lists',
queryName: 'People',
columns: ['Name', 'Age'],
success: onSuccess,
error: onFailure
});
</script>

The success and failure callbacks defined in this example illustrate how you might manage the fact that JavaScript requests to LabKey server use AJAX and are asynchronous. You don't get results immediately upon calling a function, but instead at some point in the future, and at that point the success or failure callbacks are run. If you would like to ensure a certain behavior waits for completion, you could place it inside the success callback function as in this example:

var someValue = 'Test value';

LABKEY.Query.selectRows({
    schemaName: 'lists',
    queryName: 'People',
    columns: ['Name', 'Age'],
    success: function (data) {
        alert("Success! " + data.rowCount + " rows returned and value is " + someValue);
    },
    failure: onFailure
});
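Because these APIs are callback-based, sequential logic tends to nest inside success handlers. If you prefer promises, you can wrap any callback-style call yourself. The sketch below uses a stub in place of LABKEY.Query.selectRows so it runs standalone; it is an illustration, not a LabKey API:

```javascript
// Sketch: wrapping a callback-style API in a Promise. "selectRows" here is any
// function that takes a config object with success/failure callbacks.
function selectRowsAsync(selectRows, config) {
    return new Promise(function (resolve, reject) {
        config.success = resolve;
        config.failure = reject;
        selectRows(config);
    });
}

// A stub standing in for LABKEY.Query.selectRows, for illustration only.
var stubSelectRows = function (config) { config.success({ rowCount: 2 }); };

selectRowsAsync(stubSelectRows, {}).then(function (data) {
    console.log("rows: " + data.rowCount);
});
```

On a live server you would pass LABKEY.Query.selectRows (bound appropriately) instead of the stub.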

Displaying a Grid

Update Issues via the API

The following Ajax request will insert a new issue in the issue tracker.

  • The action property supports these values: insert, update, resolve, close, and reopen.
  • issueDefId is required when inserting.
  • issueDefName can be used when updating.
var formData = new FormData();
var issues = [];
issues.push({
    assignedTo : 1016,
    title : 'My New Issue',
    comment : 'To repro this bug, do the following...',
    notifyList : '',
    priority : 2,
    issueDefId : 20,
    action : 'insert'
});
formData.append('issues', JSON.stringify(issues));

LABKEY.Ajax.request({
    url: LABKEY.ActionURL.buildURL('issues', 'issues.api'),
    method: 'POST',
    form: formData,
    success: LABKEY.Utils.getCallbackWrapper(function(response){
        // handle success
    }),
    failure: LABKEY.Utils.getCallbackWrapper(function(response){
        // handle failure
    })
});

Example Ajax request for updating an issue.

var formData = new FormData();
var issues = [];
issues.push({
    assignedTo : 1016,
    comment : 'I am not able to repro this bug.',
    notifyList : '',
    //issueDefId: 20,
    issueDefName : 'mybugs',
    issueId : 25,
    action : 'update'
});
formData.append('issues', JSON.stringify(issues));

LABKEY.Ajax.request({
    url: LABKEY.ActionURL.buildURL('issues', 'issues.api'),
    method: 'POST',
    form: formData,
    success: LABKEY.Utils.getCallbackWrapper(function(response){
        // handle success
    }),
    failure: LABKEY.Utils.getCallbackWrapper(function(response){
        // handle failure
    })
});

Adding a Report to a Data Grid with JavaScript

JavaScript Reports

A JavaScript report links a specific data grid with code that runs in the user's browser. The code can access the underlying data, transform it as desired, and render a custom visualization or representation of that data (for example, a chart, grid, summary statistics, etc.) to the HTML page. Once the new JavaScript report has been added, it is accessible from the (Reports) menu on the grid.

Create a JavaScript Report

To create a JavaScript report:

  • Navigate to the data grid of interest.
  • Select (Reports) > Create JavaScript Report.
  • Note the "starter code" provided on the Source tab. This starter code simply retrieves the data grid and displays the number of rows in the grid. The starter code also shows the basic requirements of a JavaScript report. Whatever JavaScript code you provide must define a render() function that receives two parameters: a query configuration object and an HTML div element. When a user views the report, LabKey Server calls this render() function to display the results to the page using the provided div.
  • Modify the starter code, especially the onSuccess(results) function, to render the grid as desired. See an example below.
  • If you want other users to see this report, place a checkmark next to Make this report available to all users.
  • Choose whether you want the report to be available in child folders, on data grids where the schema and table match this data grid.
  • Click Save, provide a name for the report, and click OK.
  • Confirm that the JavaScript report has been added to the grid's Reports menu.
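The render() contract described above can be sketched outside the server. In this illustration, LABKEY.Query.selectRows is stubbed with fake data and the div is a plain object, so the sketch runs standalone; on a live server, LabKey supplies both:

```javascript
// Stub of LABKEY.Query.selectRows so this sketch runs outside the server.
// The real API is asynchronous; the stub invokes success immediately with fake data.
var LABKEY = { Query: { selectRows: function (config) {
    config.success({ rows: [{}, {}, {}], rowCount: 3 });
}}};

// The contract: the report must define render(queryConfig, div). LabKey calls it
// when a user views the report, passing a query config and a target div.
function render(queryConfig, div) {
    queryConfig.success = function (results) {
        div.innerHTML = 'Number of rows: ' + results.rowCount;
    };
    LABKEY.Query.selectRows(queryConfig);
}

// Simulate the server invoking render() with a config and a div-like object.
var fakeDiv = { innerHTML: '' };
render({ schemaName: 'study', queryName: 'Demographics' }, fakeDiv);
console.log(fakeDiv.innerHTML); // Number of rows: 3
```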

GetData API

There are two ways to retrieve the actual data you wish to see, which you control using the JavaScript Options section of the source editor.

  • If Use GetData API is selected (the default setting), you can pass the data through one or more transforms before retrieving it. When selected, you pass the query config to LABKEY.Query.GetData.getRawData().
  • If Use GetData API is not selected, you can still configure columns and filters before passing the query config directly to LABKEY.Query.selectRows().

Modifying the Query Configuration

Before the data is retrieved, the query config can be modified as needed. For example, you can specify filters, columns, sorts, maximum number of rows to return, etc. The example below specifies that only the first 25 rows of results should be returned:

queryConfig.maxRows = 25;

Your code should also add parameters to the query configuration to specify functions to call when selectRows succeeds or fails. For example:

. . .
queryConfig.success = onSuccess;
queryConfig.error = onError;
. . .

function onSuccess(results)
{
    // ...render results as HTML to div...
}

function onError(errorInfo)
{
    jsDiv.innerHTML = errorInfo.exception;
}


Your JavaScript code is wrapped in an anonymous function, which provides unique scoping for the functions and variables you define; your identifiers will not conflict with identifiers in other JavaScript reports rendered on the same page.
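The scoping behavior can be illustrated with plain JavaScript: two function wrappers can define the same identifier without conflict, which is exactly why two reports on the same page cannot interfere with each other's variables:

```javascript
// Two anonymous function wrappers, each defining its own "count".
var reportA = (function () {
    var count = 1;           // private to this wrapper
    return function () { return count; };
})();

var reportB = (function () {
    var count = 99;          // same name, different scope - no conflict
    return function () { return count; };
})();

console.log(reportA(), reportB()); // 1 99
```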


This sample can be attached to any dataset or list. To run it, select a dataset or list, create a JavaScript report (see above), and paste this sample code into the Source tab.

var jsDiv;

// When the report is viewed, LabKey calls the render() function, passing a query config
// and a div element. This sample code calls selectRows() to retrieve the data from the server,
// and displays the data, inserting line breaks for each new row.
// Note that the query config specifies the appropriate query success and failure functions
// and limits the number of rows returned to 4.
function render(queryConfig, div)
{
    jsDiv = div;
    queryConfig.success = onSuccess;
    queryConfig.error = onError;
    // Only return the first 4 rows
    queryConfig.maxRows = 4;
    LABKEY.Query.selectRows(queryConfig);
}

function onSuccess(results)
{
    var data = "";

    // Display the data with white space after each column value and line breaks after each row.
    for (var idxRow = 0; idxRow < results.rows.length; idxRow++)
    {
        var row = results.rows[idxRow];

        for (var col in row)
        {
            if (row[col] && row[col].value)
                data = data + row[col].value + " ";
        }

        data = data + "<br/>";
    }

    // Render the HTML to the div.
    jsDiv.innerHTML = data;
}

function onError(errorInfo)
{
    jsDiv.innerHTML = errorInfo.exception;
}

Related Topics

Export Data Grid as a Script

Export/Generate Scripts

LabKey Server provides a rich API for building client applications -- for example, applications that retrieve and interact with data from the database. To get started, LabKey Server can generate a client script that retrieves a grid of data from the database. Adapt and extend the script's capabilities to meet your needs. You can generate a script snippet for any data grid. The following script languages are supported:

  • Java
  • JavaScript
  • Perl
  • Python
  • R
  • SAS

You can also generate a Stable URL from this export menu, which can be used to reload the query, preserving any filters, sorts, or custom sets of columns.
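As a sketch of what such a URL preserves: LabKey encodes grid filters as `query.<column>~<operator>=<value>` parameters. The helper below is a hypothetical illustration of how those parameters compose, not a LabKey API:

```javascript
// Sketch: assemble a stable-URL-style query string from filter descriptors.
// Base path, schema, query, and column names here are illustrative.
function stableUrl(base, schemaName, queryName, filters) {
    var params = [
        'schemaName=' + encodeURIComponent(schemaName),
        'query.queryName=' + encodeURIComponent(queryName)
    ];
    filters.forEach(function (f) {
        // e.g. "query.Age~gte=21" - column, operator, and value in one parameter
        params.push('query.' + f.column + '~' + f.op + '=' + encodeURIComponent(f.value));
    });
    return base + '?' + params.join('&');
}

var url = stableUrl('/labkey/query-executeQuery.view', 'lists', 'People',
                    [{column: 'Age', op: 'gte', value: '21'}]);
console.log(url);
// /labkey/query-executeQuery.view?schemaName=lists&query.queryName=People&query.Age~gte=21
```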

To generate a script for a given dataset:

  • Navigate to the grid view of interest and click (Export).
  • Select the Script tab and select an available language: Java, JavaScript, Perl, Python, R, or SAS.
  • Click Create Script to generate a script.

For example, the Physical Exam dataset in the LabKey Demo Study can be retrieved using this snippet of JavaScript:

<script type="text/javascript">

LABKEY.Query.selectRows({
    requiredVersion: 9.1,
    schemaName: 'study',
    queryName: 'Physical Exam',
    columns: 'ParticipantId,date,height_cm,Weight_kg,Temp_C,SystolicBloodPressure,DiastolicBloodPressure,Pulse',
    filterArray: null,
    sort: null,
    success: onSuccess,
    error: onError
});

function onSuccess(results)
{
    var data = "";
    var length = Math.min(10, results.rows.length);

    // Display first 10 rows in a popup dialog
    for (var idxRow = 0; idxRow < length; idxRow++)
    {
        var row = results.rows[idxRow];

        for (var col in row)
        {
            data = data + row[col].value + " ";
        }

        data = data + "\n";
    }

    alert(data);
}

function onError(errorInfo)
{
    alert(errorInfo.exception);
}

</script>

Filters. Filters that have been applied to the grid view are included in the script. Note that some module actions apply special filters to the data (e.g., an assay may filter based on a "run" parameter in the URL); these filters are not included in the exported script. Always test the generated script to verify it's retrieving the data you expect, and modify the filter parameters as appropriate.

Column List. The script explicitly includes a column list so the column names are obvious and easily usable in the code.

Foreign Tables. The name for a lookup column will be the name of the column in the base table, which will return the raw foreign key value. If you want a column from the foreign table, you need to include that explicitly in your view before generating the script, or add "/<ft-column-name>" to the field key.
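A minimal sketch of the field-key syntax described above (the column names are hypothetical):

```javascript
// Sketch: extend a selectRows column list so a lookup column resolves a column
// from its foreign (target) table using the "Lookup/Column" field-key syntax.
function withLookupColumn(columns, lookupCol, foreignCol) {
    return columns.concat([lookupCol + '/' + foreignCol]);
}

// 'CreatedBy' alone returns the raw foreign key value;
// 'CreatedBy/DisplayName' asks the server to resolve the lookup.
var cols = withLookupColumn(['Name', 'CreatedBy'], 'CreatedBy', 'DisplayName');
console.log(cols.join(',')); // Name,CreatedBy,CreatedBy/DisplayName
```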

Use Exported Scripts

JavaScript Examples:

  • You can paste a script into a <script> block in an HTML wiki.
  • For a better development experience, you can create a custom module. HTML pages in that module can use the script to create custom interfaces.

R Examples:

  • Use the script in a custom R view.
  • Use the script within an external R environment to retrieve data from LabKey Server. Paste the script into your R console. See documentation on the Rlabkey CRAN package.

Related Topics

Custom HTML/JavaScript Participant Details View

You can override the default participant details view by providing an alternative participant.html file. You can provide the alternative page either (1) through the web user interface (see Participant Details View), (2) or through a file-based module.

To add the participant details page through a file-based module, place a file named "participant.html" in the views/ directory:


Then enable the module in your study folder. When the participant details view is called, LabKey Server will use the participant.html you have provided.

Example Custom Participant.html

The following page grabs the participantid from the URL, queries the database for the details about that participant, and builds a custom HTML view/summary of the data with a different appearance than the default.

<style type="text/css">

div.wrapper {
    /*margin-left: auto;*/
    /*margin-right: auto;*/
    margin-top: -10px;
    width: 974px;
}

div.wrapper .x4-panel-body {
    background-color: transparent;
}

div.main {
    background-color: white;
    padding: 10px 20px 20px 20px;
    margin-top: 10px;
    box-shadow: 0 1px 1px rgba(0,0,0,0.15), -1px 0 0 rgba(0,0,0,0.06), 1px 0 0 rgba(0,0,0,0.06), 0 1px 0 rgba(0,0,0,0.12);
}

div.main h2 {
    display: inline-block;
    text-transform: uppercase;
    font-weight: normal;
    background-color: #126495;
    color: white;
    font-size: 13px;
    padding: 9px 20px 7px 20px;
    margin-top: -20px;
    margin-left: -20px;
}

div.main h3 {
    text-transform: uppercase;
    font-size: 14px;
    font-weight: normal;
    padding: 10px 0px 10px 50px;
    border-bottom: 1px solid darkgray;
}

#demographics-content .detail {
    font-size: 15px;
    padding-left: 30px;
    padding-bottom: 5px;
}

#demographics-content .detail td {
    font-size: 15px;
}

#demographics-content h3 {
    margin-bottom: 0.5em;
    margin-top: 0.5em;
}

#demographics-content td {
    padding: 3px;
}

#demographics-content td.label,
td.label, div.label, a.label {
    font-size: 12px;
    color: #a9a9a9;
    vertical-align: text-top;
}

div.main-body {
    margin-top: 0.5em;
}

#assays-content .detail td {
    font-size: 15px;
    padding: 3px;
}

.thumb.x-panel-header {
    background-color: transparent;
}

</style>
<div id="participant-view"></div>

<script type="text/javascript">

var outer_panel = null;
var subject_accession = null;

Ext4.onReady(function() {

    subject_accession = LABKEY.ActionURL.getParameter('participantId') || 'SUB112829';
    outer_panel = Ext4.create('Ext.panel.Panel', {
        renderTo : 'participant-view',
        border : false, frame : false,
        cls : 'wrapper',
        layout : 'column',
        items : [{
            xtype : 'container',
            id : 'leftContainer',
            columnWidth : .55,
            padding : 10,
            items : []
        },{
            xtype : 'container',
            id : 'rightContainer',
            columnWidth : .45,
            padding : 10,
            items : []
        }]
    });
});

function getDemographicCfg()
{
    var tpl = new Ext4.XTemplate(
        '<div id="demographics" class="main">',
        '<div id="demographics-content">',
        '<table class="detail" style="margin-left: 30px">',
        '<tr><td class="label" width="120px">ParticipantId</td><td>{ParticipantId:this.renderNull}</td></tr>',
        '<tr><td class="label" width="120px">Gender</td><td>{Gender:this.renderNull}</td></tr>',
        '<tr><td class="label" width="120px">StartDate</td><td>{StartDate:this.renderNull}</td></tr>',
        '<tr><td class="label" width="120px">Country</td><td>{Country:this.renderNull}</td></tr>',
        '<tr><td class="label" width="120px">Language</td><td>{Language:this.renderNull}</td></tr>',
        '<tr><td class="label" width="120px">TreatmentGroup</td><td>{TreatmentGroup:this.renderNull}</td></tr>',
        '<tr><td class="label" width="120px">Status</td><td>{Status:this.renderNull}</td></tr>',
        '<tr><td class="label" width="120px">Height</td><td>{Height:this.renderNull}</td></tr>',
        '</table>',
        '</div>',
        '</div>',
        {
            renderNull : function(v) {
                return (v == undefined || v == null || v == "") ? "--" : v;
            }
        }
    );

    var cfg = {
        xtype : 'component',
        id : 'demographics-' + subject_accession,
        tpl : tpl,
        border : false, frame : false,
        data : {}
    };

    var sql = "SELECT Demographics.ParticipantId, " +
            "Demographics.StartDate, " +
            "Demographics.Country, " +
            "Demographics.Language, " +
            "Demographics.Gender, " +
            "Demographics.TreatmentGroup, " +
            "Demographics.Status, " +
            "Demographics.Height " +
            "FROM Demographics " +
            "WHERE Demographics.ParticipantId='" + subject_accession + "'";

    var demo_store = Ext4.create('', {
        schemaName : 'study',
        sql : sql,
        autoLoad : true,
        listeners : {
            load : function(s) {
                var c = Ext4.getCmp('demographics-' + subject_accession);
                if (c) { c.update(s.getAt(0).data); }
            },
            scope : this
        },
        scope : this
    });

    return cfg;
}

</script>

Example: Master-Detail Pages

This topic shows you how to create an application providing master-detail page views of some study data. Adapt elements from this sample to create your own similar data dashboard utilities.

First, create a new study folder and import this study archive:

This archive contains a sample study with extra wikis containing our examples; find them on the Overview tab.

Participant Details View

On the Overview tab of the imported study, click Participant Details View. This wiki displays an Ext4 combo box, into which all the participant IDs from the demographics dataset have been loaded.

Use the dropdown to Select a Participant Id. Upon selection, the wiki loads the demographics data as well as several LABKEY.QueryWebParts for various datasets filtered to that selected participant ID.

Review the source for this wiki to see how this is accomplished. You can edit the wiki directly if you uploaded the example archive, or download the source here:

Participant Grid View

The "Participant Grid View" is a master-detail page using Ext4 grid panels (LABKEY.Ext4.GridPanel). It loads the demographics dataset and waits for a row click to load details. Note that the participant ID itself is already a link to another participant view in the folder, so click elsewhere in the row to activate this master-detail view.

After clicking any non-link portion of the row, you will see a set of other datasets filtered for that participant ID as Ext4 grids.

Review the source for this wiki to see how this is accomplished. You can edit the wiki directly if you uploaded the example archive, or download the source here:

Custom Button Bars

The button bar appears by default above data grids and contains icon and text buttons providing various features. You'll find a list of the standard buttons and their functions here.

The standard button bars for any query or table can be customized through XML or the JavaScript client API. You can add, replace, or delete buttons. Buttons can be linked words, icons, or drop-down menus. You can also control the visibility of custom buttons based on a user's security permissions. Custom button bars can leverage the functionality supplied by default buttons.


XML metadata

An administrator can add additional buttons using XML metadata for a table or query. To add or edit XML metadata within the UI:

  • Select (Admin) > Go To Module > Query.
    • If you don't see this menu option you do not have the necessary permissions.
  • Select the schema, then the query or table.
  • Click Edit Metadata then Edit Source.
  • Click the XML Metadata tab.
  • Type or paste in the XML to use.
  • Click the Data tab to see the grid with your XML metadata applied - including the custom buttons once you have added them.
  • When finished, click Save & Finish.


This example was used to create a "Custom Dropdown" menu item on the Medical History dataset in the demo study. Two examples of actions that can be performed by custom buttons are included:

  • Execute an action using the onClick handler ("Say Hello").
  • Navigate the user to a custom URL ("").
<tables xmlns="">
  <table tableName="Medical History" tableDbType="TABLE">
    <buttonBarOptions includeStandardButtons="true">
      <item text="Custom Dropdown" insertPosition="end">
        <item text="Say Hello">
          <onClick>alert('Hello');</onClick>
        </item>
        <item text="">
          <target></target>
        </item>
      </item>
    </buttonBarOptions>
  </table>
</tables>

Click here to see and try this custom button.

The following example returns the number of records selected in the grid.

<tables xmlns="">
  <table tableName="Physical Exam" tableDbType="TABLE">
    <buttonBarOptions includeStandardButtons="true">
      <item text="Number of Selected Records" insertPosition="end">
        <onClick>
          var dataRegion = LABKEY.DataRegions[dataRegionName];
          var checked = dataRegion.getChecked();
          alert(checked.length);
        </onClick>
      </item>
    </buttonBarOptions>
  </table>
</tables>

The following example hides a button (Grid Views) on the default button bar.

<tables xmlns="">
  <table tableName="Participants" tableDbType="NOT_IN_DB">
    <buttonBarOptions includeStandardButtons="true">
      <item hidden="true">
        <originalText>Grid views</originalText>
      </item>
    </buttonBarOptions>
  </table>
</tables>

Premium users can access additional code examples here:

Parameters and Button Positioning

Review the API documentation for all parameters available, and valid values, for the buttonBarOptions and buttonBarItem elements:

A note about positioning: As shown in the example, you can add new buttons while retaining the standard buttons. By default, any new buttons you define are placed after the standard buttons. To position your new buttons along the bar, use one of these buttonBarItem parameters:
  • insertBefore="Name-of-existing-button"
  • insertAfter="Name-of-existing-button"
  • insertPosition="#" OR "beginning" OR "end" : The button positions are numbered left to right starting from zero. insertPosition can place the button in a specific order, or use the string "beginning" or "end".
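For example, a hypothetical button pinned immediately after an existing button named "Export" might be declared like this (the button names are illustrative):

```xml
<item text="My Button" insertAfter="Export">
    <onClick>alert('Clicked');</onClick>
</item>
```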

Invoke JavaScript Functions

You can also define a button to invoke a JavaScript function. The function itself must be defined in a .js file included in the resources of a module, typically by using the includeScript element. This excerpt can be used in the XML metadata for a table or query, provided you have "moreActionsHandler" defined.

<item requiresSelection="true" text="More Actions">
    <onClick>moreActionsHandler(dataRegion);</onClick>
</item>
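A hypothetical moreActionsHandler is sketched below. It returns a message rather than alerting so it can run standalone, and `dataRegion` stands in for the grid's LABKEY.DataRegion object:

```javascript
// Hypothetical handler invoked from the button XML. "dataRegion" is expected to
// expose getChecked(), which returns the primary keys of the selected rows.
function moreActionsHandler(dataRegion) {
    var checked = dataRegion.getChecked();
    return checked.length + ' row(s) selected';
}

// Usage with a stand-in data region object:
var msg = moreActionsHandler({ getChecked: function () { return ['101', '102']; } });
console.log(msg); // 2 row(s) selected
```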

LABKEY.QueryWebPart JavaScript API

The LABKEY.QueryWebPart API includes the buttonBar parameter for defining custom button bars. The custom buttons defined in this example include:

  • Folder Home: takes the user back to the folder home page.
  • Test Script: an onClick alert message is printed
  • Test Handler: A JavaScript handler function is called.
  • Multi-level button: The Test Menu has multiple options including a flyout to a sub menu.

Developers can try this example in a module which is enabled in a folder containing a LabKey demo study. See below for details.

<div id='queryTestDiv1'></div>
<script type="text/javascript">

// Custom Button Bar Example

var qwp1 = new LABKEY.QueryWebPart({
    renderTo: 'queryTestDiv1',
    title: 'Query with Custom Buttons',
    schemaName: 'study',
    queryName: 'Medical History',
    buttonBar: {
        includeStandardButtons: true,
        items: [
            {text: 'Folder Home', url: LABKEY.ActionURL.buildURL('project', 'begin')},
            {text: 'Test Script', onClick: "alert('Test Script button works!'); return false;"},
            {text: 'Test Handler', handler: onTestHandler},
            {text: 'Test Menu', items: [
                {text: 'Item 1', handler: onItem1Handler},
                {text: 'Fly Out', items: [
                    {text: 'Sub Item 1', handler: onItem1Handler}
                ]},
                '-', //separator
                {text: 'Item 2', handler: onItem2Handler}
            ]}
        ]
    }
});

function onTestHandler(dataRegion)
{
    alert("onTestHandler called!");
    return false;
}

function onItem1Handler(dataRegion)
{
    alert("onItem1Handler called!");
}

function onItem2Handler(dataRegion)
{
    alert("onItem2Handler called!");
}

</script>



  • A custom button can get selected items from the current page of a grid view and perform a query using that info. Note that only the selected options from a single page can be manipulated using onClick handlers for custom buttons. Cross-page selections are not currently recognized.
  • The allowChooseQuery and allowChooseView configuration options for LABKEY.QueryWebPart affect the buttonBar parameter.

Install the Example Custom Button Bar

To see the above example in action, you can install it in your own local build following these steps.

If you do not already have a module where you can add this example, create the basic structure of the "HelloWorld" module, and understand the process of building and deploying it following this tutorial first:

  • To your module, add the file /resources/views/myDemo.html.
  • Populate it with the above example.
  • Create a file /resources/views/myDemo.view.xml and populate it with this:
    <view xmlns=""
          title="Custom Buttons">
        <permissions>
            <permission name="read"/>
        </permissions>
    </view>

  • Confirm that your module is built, deployed, and then enable it in the folder in which you installed the demo study.
    • Go to (Admin) > Folder > Management > Folder Type.
    • Check the box for "HelloWorld" (or the module you are using).
    • Click Update Folder.
  • Still in your demo study folder, select (Admin) > Go To Module > HelloWorld.
  • The default 'HelloWorld-begin.view?' is shown. Edit the URL so it reads 'HelloWorld-myDemo.view?' (substituting your module name for HelloWorld if needed) and hit return.
  • You will see this example, can try the buttons for yourself, and use it as the basis for exploring more options.

  • To experiment with these custom buttons, edit the myDemo.html file, restart your server, then refresh your page view to see your changes.

Related Topics

Premium Resource Available

Subscribers to premium editions of LabKey Server can learn more with the example code in this topic:

Learn more about premium editions

Premium Resource: Invoke JavaScript from Custom Buttons

Premium Resource: Sample Status Demo

Insert into Audit Table via API

You can insert records into the audit log table via the standard LabKey Query APIs, such as LABKEY.Query.insertRows() in the JavaScript client API. For example, you can insert records in order to log backup events, client-side errors, etc.

Insert rows into the "Client API Actions" query in the "auditLog" schema. Logged-in users can insert into the audit log for any folder to which they have read access. Guests cannot insert in the audit table. Rows can only be inserted, they cannot be deleted or updated. A simple example using the JavaScript API:

LABKEY.Query.insertRows({
    schemaName: 'auditLog',
    queryName: 'Client API Actions',
    rows: [ {
        comment: 'Test event insertion via client API',
        int1: 5
    } ]
});

For details on the API itself, see the documentation for LABKEY.Query.

Programming the File Repository

The table exp.Files is available for users to manage files included under the @files, @pipeline, and @filesets file roots. (Note that file attachments are not included.) You can add custom properties to the File Repository as described in the topic Files Web Part Administration. These custom properties will be added to the exp.Files table, which you can manage programmatically using the LabKey APIs.

The table exp.Files is a filtered table over exp.Data which adds a number of columns that aren't available in exp.Data, namely, RelativeFolder and AbsoluteFilePath.

To allow access to the AbsoluteFilePath and DataFileUrl fields in exp.Files, assign the site-level role "See Absolute File Paths". For details see Configure Permissions.

JavaScript API Examples

The following example code snippets demonstrate how to use the APIs to control the file system using the exp.Files table.

// List all file records with default columns (including deleted files).
LABKEY.Query.selectRows({
    schemaName: 'exp',
    queryName: 'files',
    success: function(data) {
        var rows = data.rows;
        for (var i = 0; i < rows.length; i++) {
            var row = rows[i];
            // process each row
        }
    }
});

// List all files containing the "LabKey" substring in the filename,
// with specified columns, ordered by Name.
LABKEY.Query.selectRows({
    schemaName: 'exp',
    queryName: 'files',
    // Both 'AbsoluteFilePath' and 'DataFileUrl' require the SeeFilePath permission.
    columns: ['Name', 'FileSize', 'FileExists', 'RelativeFolder', 'AbsoluteFilePath', 'DataFileUrl'],
    filterArray: [
        LABKEY.Filter.create('Name', 'LabKey', LABKEY.Filter.Types.CONTAINS)
    ],
    sort: 'Name',
    success: function(data) {
        var rows = data.rows;
        for (var i = 0; i < rows.length; i++) {
            var row = rows[i];
            // process each row
        }
    }
});

// Query files with custom property fields.
// Custom1 & Custom2 are the custom property fields for files.
LABKEY.Query.selectRows({
    schemaName: 'exp',
    queryName: 'files',
    columns: ['Name', 'FileSize', 'FileExists', 'RelativeFolder', 'Custom1', 'Custom2'],
    filterArray: [
        LABKEY.Filter.create('Custom1', 'Assay_Batch_1')
    ],
    sort: 'Custom1',
    success: function(data) {
        var rows = data.rows;
        for (var i = 0; i < rows.length; i++) {
            var row = rows[i];
            console.log(row['Name'] + ", " + row['Custom1']);
        }
    }
});

// Update the custom property value for file records.
LABKEY.Query.updateRows({
    schemaName: 'exp',
    queryName: 'files',
    rows: [{
        RowId: 1, // update by rowId
        Custom1: 'Assay_Batch_2'
    },{
        DataFileUrl: 'file:/LabKey/Files/run1.xslx', // update by dataFileUrl
        Custom1: 'Assay_Batch_2'
    }],
    success: function (data) {
        var rows = data.rows;
        for (var i = 0; i < rows.length; i++) {
            var row = rows[i];
            console.log(row['RowId'] + ", " + row['Custom1']);
        }
    },
    failure: function(errorInfo, options, responseObj) {
        if (errorInfo && errorInfo.exception)
            alert("Failure: " + errorInfo.exception);
        else
            alert("Failure: " + responseObj.statusText);
    }
});

// Insert file records with a custom property.
LABKEY.Query.insertRows({
    schemaName: 'exp',
    queryName: 'files',
    rows: [{
        AbsoluteFilePath: '/Users/xing/Downloads/labs.txt',
        Custom1: 'Assay_Batch_3'
    }],
    success: function (data) {
        var rows = data.rows;
        for (var i = 0; i < rows.length; i++) {
            var row = rows[i];
            console.log(row['RowId'] + ", " + row['Custom1']);
        }
    },
    failure: function(errorInfo, options, responseObj) {
        if (errorInfo && errorInfo.exception)
            alert("Failure: " + errorInfo.exception);
        else
            alert("Failure: " + responseObj.statusText);
    }
});

// Delete file records by rowId.
LABKEY.Query.deleteRows({
    schemaName: 'exp',
    queryName: 'files',
    rows: [{
        RowId: 195
    }],
    success: function (data) {
        // handle success
    },
    failure: function(errorInfo, options, responseObj) {
        if (errorInfo && errorInfo.exception)
            alert("Failure: " + errorInfo.exception);
        else
            alert("Failure: " + responseObj.statusText);
    }
});

RLabKey Examples

newfile <- data.frame(

# Insert a new file record.
insertedRow <- labkey.insertRows(

newRowId <- insertedRow$rows[[1]]$RowId

# Query for file record by rowId.
colFilter=makeFilter(c("RowId", "EQUALS", newRowId))

# Update the custom file property for a file record.
updatedRow <- labkey.updateRows(

# Delete a file record.
deleteFile <- data.frame(RowId=newRowId)
result <- labkey.deleteRows(

Sample Wiki for Grouping by AssayId (Custom File Property)

<h3>View Assay Files</h3>
<div id="assayFiles"></div>
<script type="text/javascript">
var onSuccess = function(data) {
    var rows = data.rows, html = "<ol>";
    for (var i = 0; i < rows.length; i++) {
        var row = rows[i];
        html += "<li><a href=\"query-executeQuery.view?schemaName=exp&query.queryName=Files&query.AssayId~eq=" + row.AssayId + "\">" + row.AssayId + "</a></li>";
    }
    html += "</ol>";
    var targetDiv = document.getElementById("assayFiles");
    targetDiv.innerHTML = html;
};

LABKEY.Query.executeSql({
    schemaName: 'exp',
    sql: 'SELECT DISTINCT AssayId From Files Where AssayId Is Not Null',
    success: onSuccess
});
</script>

Declare Dependencies

This topic explains how to declare dependencies on script files, libraries, and other resources. For example, when rendering visualizations, you should explicitly declare a dependency on LABKEY.vis to load the necessary libraries.

Declare Module-Scoped Dependencies

To declare dependencies for all the pages in a module, do the following:

First, create a config file named "module.xml" at the module's root folder:


Then, add <clientDependencies> and <dependency> tags that point to the required resources. These resources will be loaded whenever a page from your module is called. The path attribute is relative to your /web dir or is an absolute http or https URL. See below for referencing libraries, like Ext4, with the path attribute.

<module xmlns="">
    <clientDependencies>
        <dependency path="Ext4"/>
        <dependency path="" />
        <dependency path="extWidgets/IconPanel.css" />
        <dependency path="extWidgets/IconPanel.js" />
    </clientDependencies>
</module>

Production-mode builds will create a minified JavaScript file that combines the individual JavaScript files in the client library. Servers running in production mode serve the minified version of the .js file to reduce page load times.

If you wish to debug browser behavior against the original version of the JavaScript files, you can add the "debugScripts=true" URL parameter to the current page's URL. This prevents the server from using the minified version of the resources.

Declare File-Scoped Dependencies

For each HTML file in a file-based module, you can create an XML file with associated metadata. This file can be used to define many attributes, including the set of script dependencies. The XML file allows you to provide an ordered list of script dependencies. These dependencies can include:

  • JS files
  • CSS files
  • libraries
To declare dependencies for HTML views provided by a module, just create a file with the extension '.view.xml' with the same name as your view HTML file. For example, if your view is called 'Overview.html', then you would create a file called 'Overview.view.xml'. An example folder structure of the module might be:


The example XML file below illustrates loading a library (Ext4), a single script (Utils.js) and a single CSS file (stylesheet.css):

<view xmlns="">
    <dependencies>
        <dependency path="Ext4"/>
        <dependency path="/myModule/Utils.js"/>
        <dependency path="/myModule/stylesheet.css"/>
    </dependencies>
</view>

Within the <dependencies> tag, you can list any number of scripts to be loaded. These should be the path to the file, as you might have used previously in LABKEY.requiresScript() or LABKEY.requiresCss(). The example above includes a JS file and a CSS file. These scripts will be loaded in the order listed in this file, so be aware of this if one script depends on another.

In addition to scripts, libraries can be loaded. A library is a collection of scripts. In the example above, the Ext4 library is listed as a dependency. Supported libraries include:

  • Ext3: Will load the Ext3 library and dependencies. Comparable to LABKEY.requiresExt3()
  • Ext4: Will load the Ext4 library and dependencies. Comparable to LABKEY.requiresExt4Sandbox()
  • clientapi: Will load the LABKEY Client API. Comparable to LABKEY.requiresClientAPI()
Declaring dependencies in a .view.xml file is the preferred method of declaring script dependencies where possible. The advantage of declaring dependencies in this manner is that the server will automatically write <script> tags to load these scripts when the HTML view is rendered. This can reduce timing problems that can occur from a dependency not loading completely before your script is processed.

An alternative method described below is intended for legacy code and special circumstances where the .view.xml method is unavailable.

Using LABKEY.requiresScript()

From JavaScript on an HTML view or wiki page, you can load scripts using LABKEY.requiresScript() or LABKEY.requiresCss(). Each of these helpers accepts the path to your script or CSS resource. In addition to the helpers for loading single scripts, LabKey provides several helpers to load entire libraries:

<script type="text/javascript">
    // Require that ExtJS 4 be loaded
    LABKEY.requiresExt4Sandbox(function() {

        // List any JavaScript files here
        var javaScriptFiles = ["/myModule/Utils.js"];

        LABKEY.requiresScript(javaScriptFiles, function() {
            // Called back when all the scripts are loaded onto the page
            alert("Ready to go!");
        });
    });

    // This is equivalent to what is above
    LABKEY.requiresScript(["Ext4", "myModule/stylesheet.css", "myModule/Utils.js"], function() {
        // Called back when all the scripts are loaded onto the page
        alert("Ready to go!");
    });
</script>

Loading Visualization Libraries

To properly render charts and other visualizations, explicitly declare the LABKEY.vis dependency. For example, a script that uses "timeChartHelper.js" could load it like this (the path shown is illustrative; use the actual location of the script on your server):

// Load the script dependencies for charts
LABKEY.requiresScript("/vis/timeChart/timeChartHelper.js", function() {
    // Called back once the chart helper is available
});

Create Custom Client Libraries

If you find that many of your views and reports depend on the same set of javascript or css files, it may be appropriate to create a library of those files so they can be referred to as a group. To create a custom library named "mymodule/mylib", create a new file "mylib.lib.xml" in the web/mymodule directory in your module's resources directory. Just like dependencies listed in views, the library can refer to web resources and other libraries:

<libraries xmlns="">
    <script path="/mymodule/Utils.js"/>
    <script path="/mymodule/stylesheet.css"/>
    <dependency path="Ext4"/>
    <dependency path=""/>
</libraries>

Note that external dependencies (i.e. https://.../someScript.js) can only be declared as a dependency of the library, and not as a defining script.

Troubleshooting: Dependencies on Ext3

Past implementations of LabKey Server relied heavily on Ext3, and therefore loaded the ExtJS v3 client API on each page by default. This made it possible to define views, pages, and scripts without explicitly declaring client dependencies. Beginning with LabKey Server v16.2, DataRegion.js no longer depends on Ext3, so Ext3 is no longer loaded by default and such views may break at run time.

Symptoms: Either a view will fail to operate properly, or a test or script will fail with a JavaScript alert about an undefined function (e.g. "LABKEY.ext.someFn").

Workaround: Isolate and temporarily work around this issue by forcing the inclusion of ext3 on every page. Note that this override is global and not an ideal long term solution.

  • Open (Admin) > Site > Admin Console.
  • Click Admin Console Links, then Site Settings.
  • Check one or both boxes to "Require ExtJS v3… be loaded on each page."


Correct views and other objects to explicitly declare their dependencies on client-side resources as described above, or use one of the following overrides:

Override getClientDependencies()

For views that extend HttpView, you can override getClientDependencies(), as in this sketch (the body of the if block is illustrative):

@Override
public LinkedHashSet<ClientDependency> getClientDependencies()
{
    LinkedHashSet<ClientDependency> resources = new LinkedHashSet<>();
    if (!DataRegion.useExperimentalDataRegion())
    {
        // add the Ext3-based resources this view still needs
    }
    return resources;
}

Override in .jsp views

Note the <%! syntax when declaring an override, as shown in this example from core/project/projects.jsp (the body shown here is illustrative):

<%!
    @Override
    public void addClientDependencies(ClientDependencies dependencies)
    {
        // add the view's required resources here, e.g. dependencies.add("Ext4");
    }
%>

Related Topics

Loading ExtJS On Each Page

To load ExtJS on each page of your server:

  • Go to Admin > Site > Admin Console.
  • On the Admin Console Links tab, click Site Settings.
  • Scroll down to Customize LabKey system properties.
  • Two checkboxes, for two different libraries, are available:
    • Require ExtJS v3.4.1 be loaded on each page
    • Require ExtJS v3.x based Client API be loaded on each page

Note that it is your responsibility to obtain an ExtJS license, if your project does not meet the open source criteria set out by ExtJS. See Licensing for the ExtJS API for details.

Licensing for the ExtJS API

The LabKey JavaScript API provides several extensions to the Ext JavaScript Library. LABKEY.ext.EditorGridPanel is one example.

If you use LabKey APIs that extend the Ext API, your code either needs to be open source, or you need to purchase commercial licenses for Ext.

For further details, please see the Ext JavaScript licensing page.

Search API Documentation

Search Client API Reference Documentation:

Naming & Documenting JavaScript APIs

This section provides topics useful to those writing their own LabKey JavaScript APIs.


Naming Conventions for JavaScript APIs

This page covers recommended patterns for naming methods, fields, properties and classes in our JavaScript APIs. Capitalization guidelines have been chosen for consistency with our existing JavaScript APIs.

Avoid web resource collisions

The web directory is shared across all modules so it is a best practice to place your module's resources under a unique directory name within the web directory. It is usually sufficient to use your module's name to scope your resources. For example,

└── resources
    ├── web
    │   └── mymodule
    │       ├── utils.js
    │       └── style.css
    └── views
        └── begin.html

Choose concise names

General guidelines:

  • Avoid:
    • Adding the name of the class before the name of a property, unless required for clarity.
    • Adding repetitive words (such as "name" or "property") to the name of a property, unless required for clarity.
  • Consider:
    • Creating a class to hold related properties if you find yourself adding the same modifier to many properties (e.g., "lookup").
Examples of names that should be more concise:

A good example of a concise name:

Choose consistent names

These follow Ext naming conventions.

Listener method names

  • Document failure as the name of a method that listens for errors.
    • Also support: failureCallback and errorCallback but not "errorListener"
  • Document success as the name of a method that listens for success.
    • Also support: successCallback
failure listener arguments
  • Use error as the first parameter (not "errorInfo" or "exceptionObj"). This should be a JavaScript Error object caught by the calling code.
    • This object should have a message property (not "exception").
  • Use response as the second parameter (not "request" or "responseObj"). This is the XMLHttpRequest object that generated the request. Make sure to say "XMLHttpRequest" in explaining this parameter, not "XMLHttpResponse," which does not exist.
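As an illustrative sketch (getWidgets is a hypothetical function, not a real LabKey API), a method following the listener conventions above might be shaped like this:

```javascript
// Hypothetical API demonstrating the "success"/"failure" listener naming
// convention: error first (an Error with a message), response second.
function getWidgets(config) {
    try {
        var widgets = ["a", "b"]; // stand-in for data fetched by a real request
        if (config.success) {
            config.success(widgets);
        }
    } catch (e) {
        if (config.failure) {
            // e is an Error object; the second argument would be the XMLHttpRequest
            config.failure(e, null);
        }
    }
}

getWidgets({
    success: function (data) {
        console.log("loaded " + data.length + " widgets"); // prints "loaded 2 widgets"
    },
    failure: function (error, response) {
        console.log("failed: " + error.message);
    }
});
```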

Use consistent capitalization

General guidelines:

  • Use UpperCamelCase for the names of classes.
  • Use lowercase for the names of events.
  • Use lowerCamelCase for the names of methods, fields and properties. See the special cases for acronyms below.
Special Case: Four-letter acronyms:

Special Case: Three-letter or shorter acronyms:

Special Case: "ID":

How to Generate JSDoc


LabKey's JavaScript API reference files are generated automatically when you build LabKey Server. These files can be found in the ROOT\build\client-api\javascript\docs directory, where ROOT is the directory where you have placed the files for your LabKey Server installation.

Generating API docs separately can come in handy when you wish to customize the JSDoc compilation settings or alter the JSDoc template. This page helps you generate API reference documentation from annotated javascript files. LabKey uses the open-source JsDoc Toolkit to produce reference materials.

Use the Gradle Build Target

From the ROOT\server directory, use the following to generate the JavaScript API docs:

gradlew jsdoc

You will find the results in the ROOT\build\clientapi_docs folder. Click on the "index.html" file to see your new API reference site.

If you need to alter the output template, you can find the JsDoc Toolkit templates in the ROOT\tools\jsdoc-toolkit\templates folder.

Use an Alternative Build Method

You can also build the documents directly from within the jsdoc-toolkit folder.

First, place your annotated .js files in a folder called "clientapi" in the jsdoc-toolkit folder (<JSTOOLKIT> in the code snippet below). Then use a command line similar to the following to generate the docs:

C:\<JSTOOLKIT>>java -jar jsrun.jar app\run.js clientapi -t=templates\jsdoc

You will find the resulting API doc files in a folder called "out" in your jsdoc-toolkit folder. Click the "index.html" file inside the jsdocs folder inside "out" to see your new API reference site.

Further Info on JsDocs and Annotating Javascript with Tags

JsDoc Annotation Guidelines

A few recommendations for writing JSDoc annotations:
  • Follow LabKey's JavaScript API naming guidelines.
  • When documenting objects that are not explicitly included in the code (e.g., objects passed via successCallbacks), avoid creating extra new classes.
    • Ideally, document the object inline as HTML list in the method or field that uses it. LABKEY.Security contains many examples.
    • If you do need to create an arbitrary class to describe an object, use the @name tag. See LABKEY.Domain.DomainDesign for a simple example. You'll probably need to create a new class to describe the object IF:
      • Many classes use the object, so it's confusing to doc the object inline in only one class.
      • The object is used as the type of many other variables.
      • The object has (or will have) both methods and fields, so it would be hard to distinguish them in a simple HTML list.
  • Caution: Watch for a bug if you use metatags to write annotations once and use them across a group of enclosed APIs. If you doc multiple, similar objects that have field names in common, you may have to fully specify the name of the field-in-common. If this bug is problematic, fields that have the same names across APIs will not show links.
    • An example of a fix: Query.js uses fully specified @names for several fields (e.g., LABKEY.Query.ModifyRowsOptions#rows).
  • When adding a method, event or field, please remember to check whether it is correctly marked static.
    • There are two ways to get a method to be marked static, depending on how the annotations are written:
      • Leave both "prototype" and "#" off of the end of the @scope statement (now called @lends) for a @namespace
      • Leave both "prototype" and "#" off of the end of the @method statement
    • Note: If you have a mix of static and nonstatic fields/methods, you may need to use "prototype" or "#" on the end of a @fieldOf or @memberOf statement to identify nonstatic fields/methods.
    • As of 9.3, statics should all be marked correctly.
  • Check out the formatting of @examples you’ve added – it’s easy for examples to overflow the width of their boxes, so you may need to break up lines.
  • Remember to take advantage of LabKey-defined objects when defining types instead of just describing the type as an {Object}. This provides cross-linking. For example, see how the type is defined for LABKEY.Specimen.Vial#currentLocation.
  • Use @link often to cross-reference classes. For details on how to correctly reference instance vs. static objects, see NamePaths.
  • Cross-link to the main doc tree whenever possible.
  • Deprecate classes using a red font. See GridView for an example. Note that a bug in the toolkit means that you may need to hard-code the font color for the class that’s listed next in the class index (see Message for an example).
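To illustrate several of the tags discussed above, here is a minimal sketch of an annotated namespace (MyModule and its method are hypothetical, for annotation style only):

```javascript
/**
 * A hypothetical namespace, used only to illustrate annotation style.
 * @namespace
 */
var MyModule = {};

/**
 * Adds two numbers. Marked static because it lives directly on the
 * namespace (no "prototype" or "#" in the name path).
 * @memberOf MyModule
 * @param {Number} a The first addend.
 * @param {Number} b The second addend.
 * @returns {Number} The sum of a and b.
 * @example
 * MyModule.add(2, 3); // 5
 */
MyModule.add = function (a, b) {
    return a + b;
};

console.log(MyModule.add(2, 3)); // prints 5
```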

Java API

The client-side library for Java developers is a separate JAR from the LabKey Server code base. It can be used by any Java program, including another Java web application.


LabKey JDBC Driver

This topic is under construction for the 19.3.0 release of LabKey Server. For current documentation of this feature, click here.

Premium Feature — This feature is available in the Professional Plus and Enterprise Editions. Learn more or contact LabKey

The JDBC driver for LabKey Server allows client applications to query against the schemas, tables, and queries that LabKey Server exposes using LabKey SQL. It implements a subset of the full JDBC functionality, supporting read only (SELECT) access to the data. Update, insert, and delete operations are not supported.

The following client applications have been successfully tested with the driver:

Other tools may work with the driver as it is currently implemented, but some tools may require driver functionality that has not yet been implemented.

Containers (projects and folders) are exposed as JDBC catalogs. Schemas within a given container are exposed as JDBC schemas.

Acquire the JDBC Driver

The JDBC driver is included in the Professional Plus and Enterprise distributions of LabKey Server.

To download the driver:

  • Go to your customer support portal.
  • Click the Server Builds button.
  • Scroll below the node Related Products, Source Code and Previous Releases
  • To download the driver click labkey-api-jdbc-XX.X.jar (XX.X will be the number of the current server release.)

Note that this driver jar also contains the LabKey Java client API and all of its dependencies.

Driver Usage

  • Driver class: org.labkey.jdbc.LabKeyDriver
  • Database URL: The base URL of the web server, including any context path, prefixed with "jdbc:labkey:". Examples include "jdbc:labkey:http://localhost:8080/labkey" and "jdbc:labkey:". You may include a folder path after a # to set the default target, without the need to explicitly set a catalog through JDBC. For example, "jdbc:labkey:http://localhost:8080/labkey#/MyProject/MyFolder"
  • Username: Associated with an account on the web server
  • Password: Associated with an account on the web server


The driver also supports the following properties, which can be set either in Java code, via the Properties handed to DriverManager.getConnection(), or on the returned Connection by calling setClientInfo().

  • rootIsCatalog - Setting rootIsCatalog to true forces the root container on the server to be exposed only as a catalog in calls to getTables(); otherwise its schemas/tables are also exposed at the top level of the tree. Note that folders and projects in LabKey Server are exposed as individual catalogs (databases) through the JDBC driver. Ordinarily the schemas for the LabKey Server root container would be exposed both at the top level of the tree and in a catalog named "/". This can be problematic if the connecting user doesn’t have permissions to the root container (i.e., is not an admin): attempting to enumerate the top-level schemas results in a 403 (Forbidden) response. Setting the rootIsCatalog flag to true causes the driver to skip enumerating the top-level schemas and expose root only as a catalog.
  • timeout - In DbVisualizer, set the Timeout in the Properties tab on the connection configuration. The default timeout is 60 seconds for any JDBC command. You may set it to 0 to disable the timeout, or the specific timeout you'd like, in milliseconds.
  • containerFilter - Specify a container (folder) filter for queries to control what folders and subfolders of data will be queried. Possible values are:
    • Current (Default)
    • CurrentAndSubfolders
    • CurrentPlusProject
    • CurrentAndParents
    • CurrentPlusProjectAndShared
    • AllFolders
For example,
Properties props = new Properties();
props.put("user", "$<USERNAME>");
props.put("password", "$<MYPASSWORD>");
props.put("containerFilter", "CurrentAndSubfolders");
Connection connection = DriverManager.getConnection("$<DATABASE URL>", props);
connection.setClientInfo("Timeout", "0");
ResultSet rs = connection.createStatement().executeQuery("SELECT * FROM core.Containers");

Learn how to use the container filter with Spotfire in this topic: Spotfire Integration.


The driver has the following logging behavior:

  • Unimplemented JDBC methods get logged as SEVERE (java.util.logging) / ERROR (log4j/slf4j)
  • Queries that are returned and many other operations get logged as FINE / DEBUG
  • The package space for logging is org.labkey.jdbc.level

Example Java Code

Connection connection = DriverManager.getConnection("jdbc:labkey:", "", "mypassword");
connection.setClientInfo("Timeout", "0");
ResultSet rs = connection.createStatement().executeQuery("SELECT * FROM core.Containers");

Related Topics

Remote Login API

This topic is under construction for the 19.3.0 release of LabKey Server. For current documentation of this feature, click here.

Note: The remote login API service described in this topic is still supported, but we recommend using the CAS identity provider as the preferred LabKey identity provider service.

This document describes the simple remote login and permissions service available in LabKey Server.

Remote Login API Overview

The remote login/permissions service allows cooperating websites to:

  • Use a designated LabKey Server for login
  • Attach permissions to their own resources based on permissions to containers (folders) on the LabKey Server.
The remote login/permissions service has two styles of interaction:
  • Simple URL/XML based API which can be used by any language
  • Java wrapper classes that make the API a little more convenient for people building webapps in java.
  • PHP wrapper classes that make the API a little more convenient for people building webapps in PHP.
The remote login/permissions service supports the following operations
  • Get a user email and opaque token from the LabKey Server. This is accomplished via a web redirect; the LabKey Server’s login page is shown if the user does not currently have a logged-in session active in the browser.
  • Check permissions for a folder on the LabKey Server.
  • Invalidate the token, so that it cannot be used for further permission checking.

Base URL

A LabKey Server has a base URL that we use throughout this API description. This doc will use ${baseurl} to refer to this base URL.

The base URL is of the form:


For example, a local development machine might be using port 8080 and the context path "labkey", so the base URL would be:


On some servers there is no context path, and the base URL is simply:

Set Up Allowable External Redirect Hosts

Before calling any of the actions described below, you must first "whitelist" the authentication provider server as an allowable URL for external redirects.

  • On your client server go to (Admin) > Site > Admin Console.
  • Click Admin Console Links.
  • Under Configuration, click External Redirect Hosts.
  • Add the server providing authentication to the list.

For details see Configure Allowable External Redirect Hosts.


There are 3 main actions supported by the Login controller.


To ensure that a user is logged in and to get a token for further calls, a client must redirect the browser to the URL:

${baseurl}/login/createToken.view?returnUrl=${url of your page}

Where ${url of your page} is a properly encoded URL parameter for a page in the client web application to which control will be returned. After the user is logged in (if necessary), the browser is redirected back to ${url of your page} with the following two extra parameters, which your page will have to save somewhere (usually session state):

  • labkeyToken: This is a hex string that your web application will pass into subsequent calls to check permissions.
  • labkeyEmail: This is the email address used to log in. It is not required to be passed in further calls.
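The redirect URL can be assembled programmatically, taking care to percent-encode the return URL so its own query string survives the redirect. A sketch (the server names are placeholders):

```javascript
// Build the createToken redirect URL; base URL and return page are placeholders.
var baseUrl = "https://labkey.example.com/labkey";
var returnUrl = "https://client.example.com/myapp/page.jsp?id=7";

// returnUrl must be encoded so its "?" and "=" do not break the outer URL.
var redirect = baseUrl + "/login/createToken.view?returnUrl=" +
    encodeURIComponent(returnUrl);

console.log(redirect);
// prints https://labkey.example.com/labkey/login/createToken.view?returnUrl=https%3A%2F%2Fclient.example.com%2Fmyapp%2Fpage.jsp%3Fid%3D7
```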


To create a token for the web page


You would use the following URL:

After the login the browser would return to your page with additional parameters:



This URL returns an XML document indicating what permissions are available for the logged in user on a particular folder. It is not intended to be used from the browser (though you certainly can do so for testing).

Your web app will access this URL and parse the resulting page. Note that your firewall configuration must allow your web server to call out to the LabKey Server. The general form is:


Where ${containerPath} is the path on the LabKey Server to the folder you want to check permissions against, and ${token} is the token sent back to your returnUrl from createToken.view.


To check permissions for the home folder on, here’s what you’d request:
An XML document is returned. There is currently no XML schema for the document, but it is of the form:
<TokenAuthentication success="true" token="${token}" email="${email}" permissions="${permissions}" />

Where permissions is an integer with the following bits turned on for permissions to the folder.

READ: 0x00000001
INSERT: 0x00000002
UPDATE: 0x00000004
DELETE: 0x00000008
ADMIN: 0x00008000
If the token is invalid the return will be of the form:
<TokenAuthentication success="false" message="${message}"/>
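The permissions attribute can be decoded with bitwise tests against the masks above; a sketch (the sample value is hypothetical):

```javascript
// Permission bit masks from the table above.
var READ   = 0x00000001;
var INSERT = 0x00000002;
var UPDATE = 0x00000004;
var DELETE = 0x00000008;
var ADMIN  = 0x00008000;

// Hypothetical permissions value: READ + INSERT + UPDATE + DELETE.
var permissions = 0x0000000F;

// A bit is set if masking it out leaves a nonzero value.
function has(mask) {
    return (permissions & mask) !== 0;
}

console.log("read:  " + has(READ));  // prints "read:  true"
console.log("admin: " + has(ADMIN)); // prints "admin: false"
```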


This URL invalidates a token and optionally returns to another URL. It is used as follows:

${baseurl}/login/invalidateToken.view?labkeyToken=${token}&returnUrl=${url of your page}

Where ${token} is the token received from createToken.view and returnUrl is any page you would like to redirect back to. returnUrl should be supplied when calling from a browser and should NOT be supplied when calling from a server.

Java API

The Java API wraps the calls above with some convenient java classes that

  • store state in web server session
  • properly encode parameters
  • parse XML files and decode permissions
  • cache permissions
The Java API provides no new functionality over the URL/XML API.

To use the Java API, place remoteLogin.jar in the WEB-INF/lib directory of your web application. The API provides two main classes:

  • RemoteLogin: Contains a static method to return a RemoteLoginHelper instance for the current request.
  • RemoteLoginHelper: Interface providing methods for calling back to the server.
Typically a protected resource in a client application will do something like this:
RemoteLoginHelper rlogin = RemoteLogin.getHelper(request, REMOTE_SERVER);
if (!rlogin.isLoginComplete())
{
    // redirect the browser to the LabKey Server login URL and return
}

Set<RemoteLogin.Permission> permissions = rlogin.getPermissions(FOLDER_PATH);

if (permissions.contains(RemoteLogin.Permission.READ))
{
    // Show data
}
else
{
    // Permission denied
}

The API is best described by the Javadoc and the accompanying sample web app.

HTTP and Certificates

The Java API uses the standard Java URL class to connect to server and validates certificates from the server. To properly connect to an https server, clients may have to install certificates in their local certificate store using keytool.

Help can be found here:

The default certificate store shipped with JDK 1.6 supports more certificate authorities than previous JDKs, so it may be easier to run your web app under 1.6 than to install a certificate on your client JDK.

Related Topics

Security Bulk Update via API

Creation and updates of security groups and role assignments may be scripted and performed automatically using the LabKey Security API. New user IDs are automatically created as needed.

Bulk Update

Operations available:

  • Create and Populate a Group
  • Ensure Group and Update, Replace, or Delete Members
Group members can be specified in one of these ways:
  • email - specifies a user; if the user does not already exist in the system, it will be created and will be populated with any of the additional data provided
  • userId - specifies a user already in the system. If the user does not already exist, this will result in an error message for that member. If both email and userId are provided, this will also result in an error.
  • groupId - specifies a group member. If the group does not already exist, this will result in an error message for that member.
public static class GroupForm
{
    private Integer _groupId;             // Nullable; used first as identifier for group
    private String _groupName;            // Nullable; required for creating a group
    private List<GroupMember> _members;   // can be used to provide more data than just email address; can be empty;
                                          // can include groups, but group creation is not recursive
    private Boolean _createGroup = false; // if true, the group should be created if it doesn't exist;
                                          // otherwise the operation will fail if the group does not exist
    private MemberEditOperation _editOperation; // indicates the action to be performed with the given users in this group
}

public enum MemberEditOperation
{
    add,     // add the given members; do not fail if already exist
    replace, // replace the current members with the new list (same as delete all then add)
    delete   // delete the given members; does not fail if member does not exist in group;
             // does not delete group if it becomes empty
}

Sample JSON

{
    'groupName': 'myNewGroup',
    'editOperation': 'add',
    'createGroup': 'true',
    'members': [
        {'email': '', 'firstName': 'Me', 'lastName': 'Too'},
        {'email': '', 'firstName': 'You', 'lastName': 'Too'},
        {'email': '@invalid', 'firstName': 'Not', 'lastName': 'Valid'},
        {'groupId': 1234},
        {'groupId': 314}
    ]
}

If you want to provide only the email addresses for user members, it would look like this:

{
    'groupName': 'myNewGroup',
    'editOperation': 'add',
    'createGroup': 'true',
    'members': [
        {'email': ''},
        {'email': ''},
        {'email': 'invalid'}
    ]
}
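The request body can also be built programmatically before POSTing it to the server. A JavaScript sketch (the email addresses are placeholders, and the endpoint is not shown here):

```javascript
// Assemble a bulk group-update payload like the sample above.
// The member email addresses are placeholders.
var payload = {
    groupName: "myNewGroup",
    editOperation: "add",
    createGroup: true,
    members: [
        { email: "me@example.com", firstName: "Me", lastName: "Too" },
        { email: "you@example.com", firstName: "You", lastName: "Too" },
        { groupId: 1234 }
    ]
};

// Serialize to JSON for the request body.
var body = JSON.stringify(payload);
console.log(JSON.parse(body).members.length); // prints 3
```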

A response from a successful operation will include the groupId, groupName, a list of users that were added to the system, lists of members added or removed from the group, as well as a list of members if any, that had errors:

‘id’: 123,
‘name’: ‘myNewGroup’,
‘newUsers’ : [ {email: ‘’, userId: 3123} ],
‘members’ : {
‘added’: [{‘email’: ‘’, ‘userId’: 2214}, {‘email’: ‘’, ‘userId’: 3123},
{‘name’: ‘otherGroup’, ‘userId’ : 1234}],
‘removed’: []
‘errors’ :[
‘invalid’ : ‘Invalid email address’,
‘314’ : ‘Invalid group id. Member groups must already exist.’

This mimics, to a certain degree, the responses from the following actions:

  • CreateGroupAction, which includes in its response just the id and name in a successful response
  • AddGroupMemberAction, which includes in its response the list of ids added
  • RemoveGroupMemberAction, which includes in its response the list of ids removed
  • CreateNewUserAction, which includes in its response the userId and email address for users added as well as a possible message if there was an error

Error Reporting

Invalid requests may have one of these error messages:
  • Invalid format for request. Please check your JSON syntax.
  • Group not specified
  • Invalid group id <id>
  • validation messages from UserManager.validGroupName
  • Group name required to create group
  • You may not create groups at the folder level. Call this API at the project or root level.
Error messages for individual members include, but may not be limited to:
  • Invalid user id. User must already exist when using id.
  • Invalid group id. Member groups must already exist.
  • messages from exceptions SecurityManager.UserManagementException or InvalidGroupMembershipException

Perl API

LabKey's Perl API allows you to query, insert and update data on a LabKey Server from Perl. The API provides functionality similar to the following LabKey JavaScript APIs:
  • LABKEY.Query.selectRows()
  • LABKEY.Query.executeSql()
  • LABKEY.Query.insertRows()
  • LABKEY.Query.updateRows()
  • LABKEY.Query.deleteRows()



Configuration Steps

  • Install Perl, if needed.
    • Most Unix platforms, including Macintosh OSX, already have a Perl interpreter installed.
    • Binaries are available here.
  • Install the Perl module from CPAN:
    • Using cpanm:
      • cpanm LabKey::Query
    • Using CPAN:
      • perl -MCPAN -e "install LabKey::Query"
    • To upgrade from a prior version of the module:
      • perl -MCPAN -e "upgrade"
    • For more information on module installation please visit the detailed CPAN module installation guide.
  • Create a .netrc or _netrc file in the home directory of the user running the Perl script.
    • The netrc file provides credentials for the API to use to authenticate to the server, required to read or modify tables in secure folders.
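The netrc file format is one entry per server; a sketch with placeholder host and credentials:

```
machine mylabkeyserver.example.com
login user@example.com
password mypassword
```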

Python API

LabKey's Python APIs allow you to query, insert and update data on a LabKey Server from Python.

Detailed documentation is available on GitHub:

Premium Resource Available

Subscribers to premium editions of LabKey Server can use the example code in this topic to learn more:

Learn more about premium editions

Premium Resource: Python API Demo

Related Topics

Rlabkey Package

The LabKey client library for R makes it easy for R users to load live data from a LabKey Server into the R environment for analysis, provided users have permissions to read the data. It also enables R users to insert, update, and delete records stored on a LabKey Server, provided they have appropriate permissions to do so. The Rlabkey APIs use HTTP requests to communicate with a LabKey Server.

All requests to the LabKey Server are performed under the user's account profile, with all proper security enforced on the server. User credentials are obtained from a separate location than the running R program so that R programs can be shared without compromising security.

The Rlabkey library can be used from the following locations:


Configuration Steps

Typical configuration steps for a user of Rlabkey include:

  • Install R from
  • Install the Rlabkey package once using the following command in the R console. (You may want to change the value of repos depending on your geographical location.)
install.packages("Rlabkey", repos="")
  • Load the Rlabkey library at the start of every R script using the following command:
library(Rlabkey)
  • Create a netrc file to set up authentication.
    • Necessary if you wish to modify a password-protected LabKey Server database through the Rlabkey macros.
    • Note that Rlabkey handles sessionid and authentication internally. Rlabkey passes the sessionid as an HTTP header for all API calls coming from that R session. LabKey Server treats this just as it would a valid JSESSIONID parameter or cookie coming from a browser.


The Rlabkey package supports the transfer of data between a LabKey Server and an R session.

  • Retrieve data from LabKey into a data frame in R by specifying the query schema information (labkey.selectRows and getRows) or by using SQL commands (labkey.executeSql).
  • Update existing data from an R session (labkey.updateRows).
  • Insert new data either row by row (labkey.insertRows) or in bulk (labkey.importRows) via the TSV import API.
  • Delete data from the LabKey database (labkey.deleteRows).
  • Use Interactive R to discover available data via schema objects (labkey.getSchema).
For example, you might use an external instance of R to do the following:
  • Connect to a LabKey Server.
  • Use metadata queries to show which schemas are available within a specific project or sub-folder.
  • Use metadata queries to show which datasets are available within a schema and query of interest in a folder.
  • Create colSelect and colFilter parameters for the labkey.selectRows command on the selected schema and query.
  • Retrieve a data frame of the data specified by the current url, folder, schema, and query context.
  • Perform transformations on this data frame locally in your instance of R.
  • Save a data frame derived from the one returned by the LabKey Server back into the LabKey Server.
Within the LabKey interface, the Rlabkey macros are particularly useful for accessing and manipulating datasets across folders and projects.

Troubleshoot Rlabkey

This topic provides basic diagnostic tests and solutions to common connection errors related to configuring the Rlabkey package to work with LabKey Server.

Diagnostic Tests

Check Basic Installation Information

The following will gather basic information about the R configuration on the server. Run the following in an R view. To create an R view: from any data grid, select Report > Create R Report.

cat("Output of SessionInfo \n")
sessionInfo()
cat("\n\n\nOutput of Library Search path \n")
.libPaths()

This will output important information such as the version of R being run, the version of each R library, the Operating System of the server, and the location of where the R libraries are being read from.

Check that you are running a modern version of R and using the latest versions of Rlabkey (2.1.129) and RCurl. If anything is out of date, we recommend updating the packages.

Test HTTPS Connection

The following confirms that R can make an HTTPS connection to a known good server. Run the following in an R View:

cat("\n\nAttempt a connection to Google. If it works, print first 200 characters of website. \n")
x = getURLContent("")

If this command fails, then the problem is with the configuration of R on your server. If the server is running Windows, the problem is most likely that there are no CA certificates defined. You will need to fix the configuration of R to ensure a CA certificate is defined. Use the RLABKEY_CAINFO_FILE environment variable. See

Diagnose RCurl or Rlabkey

Next, check whether the problem is coming from the RCurl library or the Rlabkey library. Run the following in an R View, replacing the URL with your server's address:

cat("\n\n\nAttempt a connection to your server using only RCurl. If it works, print first 200 characters of website.\n")
y = getURLContent("https://<MyServer>")

If this command fails, it means there is a problem with the SSL Certificate installed on the server.

Certificate Test

The fourth test is to have R ignore any problems with certificate name mismatches and certificate chain integrity (for example, when using a self-signed certificate, or a certificate signed by a CA that R does not trust). In an R view, add the following line after library(Rlabkey):

labkey.setCurlOptions(ssl_verifypeer=FALSE, ssl_verifyhost=FALSE)

If this command fails, then there is a problem with the certificate. A great way to see the information on the certificate is to run the following from Linux or OSX:

openssl s_client -showcerts -connect <MyServer>:443

This will show all certificates in the cert chain and whether they are trusted. If the output includes "Verify return code: 0 (ok)", the certificate is good.

Common Issues

Syntax Change from . to _

The syntax for arguments to setCurlOptions has changed. If you see an error like this:

Error in labkey.setCurlOptions(ssl.verifypeer = FALSE, ssl.verifyhost = FALSE) : 
The legacy config : ssl.verifyhost is no longer supported please update to use : ssl_verifyhost
Execution halted

Use the arguments ssl_verifypeer and ssl_verifyhost instead.

TLSv1 Protocol Replaces SSLv3

By default, Rlabkey will connect to LabKey Server using the TLSv1 protocol. If your attempt to connect fails, you might see an error message similar to one of these:

Error in function (type, msg, asError = TRUE) : 
error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number

Error in function (type, msg, asError = TRUE) : 
error:1411809D:SSL routines:SSL_CHECK_SERVERHELLO_TLSEXT:tls invalid ecpointformat list

First confirm that you are using the latest versions of Rlabkey and RCurl, both available on CRAN.

If you still encounter this issue, you can add the following to your R scripts or R session. This command tells R to use the TLSv1+ protocol (instead of SSLv3) for all connections:

labkey.setCurlOptions(sslversion=1)

Tomcat GET Header Limit

By default, Tomcat sets a size limit of 4096 bytes (4 KB) for the GET header. If your API calls hit this limit, you can increase the default header size for GETs.

To increase the allowable header size, edit Tomcat's server.xml file, adding a maxHttpHeaderSize attribute to the Connector entries. (If this attribute is not present, the value defaults to 4096 bytes.) For example, to increase the size to 64KB:

<Connector port="8080" maxHttpHeaderSize="65536"...

(Windows) Failure to Connect

Rlabkey uses the package RCurl to connect to the LabKey Server. On Windows, older versions of the RCurl package are not configured for SSL by default. In order to connect, you may need to perform the following steps:

1. Create or download a "ca-bundle" file.

We recommend using the ca-bundle file published by Mozilla.

2. Copy the ca-bundle.crt file to a location on your hard-drive.

If you will be the only person using the Rlabkey package on your computer, we recommend that you

  • create a directory named `labkey` in your home directory
  • copy the ca-bundle.crt into the `labkey` directory
If you are installing this file on a server where multiple users may use the Rlabkey package, we recommend that you
  • create a directory named `c:\labkey`
  • copy the ca-bundle.crt into the `c:\labkey` directory
3. Create a new Environment variable named `RLABKEY_CAINFO_FILE`

On Windows 7, Windows Server 2008 and earlier

  • Select Computer from the Start menu.
  • Choose System Properties from the context menu.
  • Click Advanced system settings > Advanced tab.
  • Click Environment Variables.
  • Under System Variables, click the New button.
  • For Variable Name: enter RLABKEY_CAINFO_FILE
  • For Variable Value: enter the path of the ca-bundle.crt you created above.
  • Click OK to close all the windows.
On Windows 8, Windows 2012 and above
  • Move the mouse pointer to the bottom-right corner of the screen.
  • Click the Search icon and type: Control Panel.
  • Click Control Panel > System and Security.
  • Click System > Advanced system settings > Advanced tab.
  • In the System Properties window, click Environment Variables.
  • Under System Variables, click the New button.
  • For Variable Name: enter RLABKEY_CAINFO_FILE
  • For Variable Value: enter the path of the ca-bundle.crt you created above.
  • Click OK to close all the windows.
Now you can start R and begin working.

Self-Signed Certificate Authentication

If you are using a self-signed certificate and connecting via HTTPS on an OSX or Linux machine, you may see the following issues as Rlabkey attempts unsuccessfully to validate that certificate.

Peer Verification

If you see an error message that looks like the following, you can tell Rlabkey to ignore any failures when checking if the server's SSL certificate is authentic.

> rows <- labkey.selectRows(baseUrl="https://SERVERNAME", folderPath="home",schemaName="lists", queryName="myFavoriteList") 
Error in function (type, msg, asError = TRUE) :
SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

To bypass the peer verification step, add the following to your script:

labkey.setCurlOptions(ssl_verifypeer=FALSE)

Certificate Name Conflict

You can tell Rlabkey to ignore failures when checking whether the server name used in baseUrl matches the one specified in the SSL certificate. An error like the following can occur when the name on the certificate differs from the SERVERNAME used.

> rows <- labkey.selectRows(baseUrl="https://SERVERNAME", folderPath="home",schemaName="lists", queryName="ElispotPlateReader") 
Error in function (type, msg, asError = TRUE) :
SSL peer certificate or SSH remote key was not OK

To bypass the host verification step, add the following to your script:

labkey.setCurlOptions(ssl_verifyhost=FALSE)

Troubleshoot .netrc / _netrc Files

If you experience authentication or other access problems, you may need to check your netrc file. For details see: Create a netrc file

Related Topics

Premium Resource: Example Code for QC Reporting

SAS Macros


The LabKey Client API Library for SAS makes it easy for SAS users to load live data from a LabKey Server into a native SAS dataset for analysis, provided they have permissions to read those data. It also enables SAS users to insert, update, and delete records stored on a LabKey Server, provided they have appropriate permissions to do so.

All requests to the LabKey Server are performed under the user's account profile, with all proper security enforced on the server. User credentials are obtained from a separate location than the running SAS program so that SAS programs can be shared without compromising security.

The SAS macros use the Java Client Library to send, receive, and process requests to the server. They provide functionality similar to the Rlabkey Package.



SAS Setup

Set up SAS to use the SAS/LabKey Interface

The LabKey/SAS client library is a set of SAS macros that retrieve data from an instance of LabKey Server as SAS data sets. The SAS macros use the Java Client Library to send, receive, and process requests to the server.

Configure your SAS installation to use the SAS/LabKey interface:

  1. Install SAS
  2. Retrieve the latest file (e.g., LabKey<version>) from the "All Downloads" tab on the LabKey Server download page.
  3. Extract this file to a local directory (these instructions assume "c:\sas"). The directory should contain a number of .jar files (the Java client library and its dependencies) and 12 .sas files (the SAS macros).
  4. Open your default SAS configuration file, sasv9.cfg (e.g., in c:\Program Files\SASHome\x86\SASFoundation\9.3\nls\en)
  5. In the -SET SASAUTOS section, add the path to the SAS macros to the end of the list (e.g., "C:\sas")
  6. Configure your Java Runtime Environment (JRE) based on your SAS version:
  • Instructions for SAS 9.3
    • Install the SAS update TS1M2, in order to run it with Java 7, instead of Java 6 (which is the default for SAS 9.3)
    • Near the top of sasv9.cfg, add -set classpath "<full paths to all .jar files separated by ; (on Windows) or : (on Mac)>" (see below)
  • Instructions for SAS 9.4
    • No configuration of the Java runtime is necessary on SAS 9.4 since it runs a private Java 7 JRE, installed in the SASHOME directory
    • Near the top of sasv9.cfg, add -set classpath "<full paths to all .jar files separated by ; (on Windows) or : (on Mac)>"; (see below)
Example Java classpath for Windows:

-set classpath "C:\sas\commons-codec-1.6.jar;C:\sas\commons-logging-1.1.3.jar;C:\sas\fluent-hc-4.3.5.jar;C:\sas\httpclient-4.3.5.jar;C:\sas\httpclient-cache-4.3.5.jar;

Example Java classpath for Mac:

-set classpath "/sas/commons-codec-1.6.jar:/sas/commons-logging-1.1.3.jar:/sas/fluent-hc-4.3.5.jar:/sas/httpclient-4.3.5.jar:/sas/httpclient-cache-4.3.5.jar:

Configure LabKey Server and run the test script:

  1. On your local version of LabKey Server, configure a list called "People" in your home folder and import demo.xls to populate it with data
  2. Configure your .netrc or _netrc file in your home directory. For further information, see: Create a netrc file.
  3. Run SAS
  4. Execute "proc javainfo; run;" in a program editor; this command should display detailed information about the java environment in the log. Verify that java.version matches the JRE you set above.
  5. Load the test script in SAS
  6. Run it

SAS Macros

SAS/LabKey Library

The SAS/LabKey client library provides a set of SAS macros that retrieve data from an instance of LabKey Server as SAS data sets and allows modifications to LabKey Server data from within SAS. All requests to the LabKey Server are performed under the user's account profile, with all proper security enforced on the server.

The SAS macros use the Java Client Library to send, receive and process requests to the server. This page lists the SAS macros, parameters and usage examples.

The %labkeySetDefaults Macro

The %labkeySetDefaults macro sets connection information that can be used for subsequent requests. These parameters can either be set once via %labkeySetDefaults, or passed individually to each macro call.

The %labkeySetDefaults macro allows the SAS user to set the connection information once regardless of the number of calls made. This is convenient for developers, who can write more maintainable code by setting defaults once instead of repeatedly setting these parameters.

Subsequent calls to %labkeySetDefaults will change any defaults set with an earlier call to %labkeySetDefaults.

%labkeySetDefaults accepts the following parameters:

baseUrl (string, optional): The base URL for the target server. This includes the protocol (http, https) and the port number. It will also include the context path (commonly "/labkey"), unless LabKey Server has been deployed as the root context. Example: "http://localhost:8080/labkey"
folderPath (string, optional): The LabKey Server folder path in which to execute the request
schemaName (string, optional): The name of the schema to query
queryName (string, optional): The name of the query to request
userName (string, optional): The user's login name. Note that the netrc file includes both the userName and password. It is best to use the values stored there rather than passing them in via a macro, because passwords passed as parameters will show up in the log files, producing a potential security hole. However, for cron jobs or other automated processes, it may be necessary to pass in userName and password via macro parameters.
password (string, optional): The user's password. See userName (above) for further details.
containerFilter (string, optional): This parameter modifies how the query treats the folder. The possible settings are listed below. If not specified, "Current" is assumed.

Options for the containerFilter parameter:

  • Current -- The current container
  • CurrentAndSubfolders -- The current container and any folders it contains
  • CurrentPlusProject -- The current container and the project folder containing it
  • CurrentAndParents -- The current container and all of its parent containers
  • CurrentPlusProjectAndShared -- The current container, its project folder and all shared folders
  • AllFolders -- All folders to which the user has permission
Example usage of the %labkeySetDefaults macro:
%labkeySetDefaults(baseUrl="http://localhost:8080/labkey", folderPath="/home", 
schemaName="lists", queryName="People");

The %labkeySelectRows Macro

The %labkeySelectRows macro allows you to select rows from any given schema and query name, optionally providing sorts, filters and a column list as separate parameters.

Parameters passed to an individual macro override the values set with %labkeySetDefaults.

Parameters are listed as required when they must be provided either as an argument to %labkeySelectRows or through a previous call to %labkeySetDefaults.

This macro accepts the following parameters:

dsn (string, required): The name of the SAS dataset to create and populate with the results
baseUrl (string, required): The base URL for the target server. This includes the protocol (http, https), the port number, and optionally the context path (commonly "/labkey"). Example: "http://localhost:8080/labkey"
folderPath (string, required): The LabKey Server folder path in which to execute the request
schemaName (string, required): The name of the schema to query
queryName (string, required): The name of the query to request
viewName (string, optional): The name of a saved custom grid view of the given schema/query. If not supplied, the default grid will be returned.
filter (string, optional): One or more filter specifications created using the %labkeyMakeFilter macro
columns (string, optional): A comma-delimited list of column names to request (if not supplied, the default set of columns is returned)
sort (string, optional): A comma-delimited list of column names to sort by. Use a "-" prefix to sort descending.
maxRows (number, optional): If set, this will limit the number of rows returned by the server.
rowOffset (number, optional): If set, this will cause the server to skip the first N rows of the results. This, combined with the maxRows parameter, enables developers to load portions of a dataset.
showHidden (1/0, optional): By default hidden columns are not included in the dataset, but the SAS user may pass 1 for this parameter to force their inclusion. Hidden columns are useful when the retrieved dataset will be used in a subsequent call to %labkeyUpdate or %labkeyDelete.
userName (string, optional): The user's login name. Please see the %labkeySetDefaults section for further details.
password (string, optional): The user's password. Please see the %labkeySetDefaults section for further details.
containerFilter (string, optional): This parameter modifies how the query treats the folder. The possible settings are listed in the %labkeySetDefaults macro section. If not specified, "Current" is assumed.


The SAS code to load all rows from a list called "People" can define all parameters in one function call:

%labkeySelectRows(dsn=all, baseUrl="http://localhost:8080/labkey", 
folderPath="/home", schemaName="lists", queryName="People");

Alternatively, default parameter values can be set first with a call to %labkeySetDefaults. This leaves default values in place for all subsequent macro invocations. The code below produces the same output as the code above:

%labkeySetDefaults(baseUrl="http://localhost:8080/labkey", folderPath="/home", 
schemaName="lists", queryName="People");
%labkeySelectRows(dsn=all);

This example demonstrates column list, column sort, row limitation, and row offset:

%labkeySelectRows(dsn=limitRows, columns="First, Last, Age", 
sort="Last, -First", maxRows=3, rowOffset=1);

Further examples are available in the %labkeyMakeFilter section below.

The %labkeyMakeFilter Macro

The %labkeyMakeFilter macro constructs a simple compare filter for use in the %labkeySelectRows macro. It can take one or more filters, with the parameters listed as triples in the arguments. All operators except "MISSING" and "NOT_MISSING" require a "value" parameter.

column (string, required): The column to filter upon
operator (string, required): The operator for the filter. See below for a list of acceptable operators.
value (any, required): The value for the filter. Not used when the operator is "MISSING" or "NOT_MISSING".

The operator may be one of the following:

  • IN
  • NOT_IN
Note: For simplicity and consistency with other client libraries, EQUALS_ONE_OF has been renamed IN and EQUALS_NONE_OF has been renamed NOT_IN. You may need to update your code to support these new filter names.


/*  Specify two filters: only males less than a certain height. */
%labkeySelectRows(dsn=shortGuys, filter=%labkeyMakeFilter("Sex", "EQUAL", 1,
"Height", "LESS_THAN", 1.2));
proc print label data=shortGuys; run;

/* Demonstrate an IN filter: only people whose age is specified. */
%labkeySelectRows(dsn=lateThirties, filter=%labkeyMakeFilter("Age",
"IN", "36;37;38;39"));
proc print label data=lateThirties; run;

/* Specify a grid and a not missing filter. */
%labkeySelectRows(dsn=namesByAge, viewName="namesByAge",
filter=%labkeyMakeFilter("Age", "NOT_MISSING"));
proc print label data=namesByAge; run;

The %labkeyExecuteSql Macro

The %labkeyExecuteSql macro allows SAS users to execute arbitrary LabKey SQL, filling a SAS dataset with the results.

Required parameters must be provided either as an argument to %labkeyExecuteSql or via a previous call to %labkeySetDefaults.

This macro accepts the following parameters:

dsn (string, required): The name of the SAS dataset to create and populate with the results
sql (string, required): The LabKey SQL to execute
baseUrl (string, required): The base URL for the target server. This includes the protocol (http, https), the port number, and optionally the context path (commonly "/labkey"). Example: "http://localhost:8080/labkey"
folderPath (string, required): The folder path in which to execute the request
schemaName (string, required): The name of the schema to query
maxRows (number, optional): If set, this will limit the number of rows returned by the server.
rowOffset (number, optional): If set, this will cause the server to skip the first N rows of the results. This, combined with the maxRows parameter, enables developers to load portions of a dataset.
showHidden (1/0, optional): Please see the description in %labkeySelectRows.
userName (string, optional): The user's login name. Please see the %labkeySetDefaults section for further details.
password (string, optional): The user's password. Please see the %labkeySetDefaults section for further details.
containerFilter (string, optional): This parameter modifies how the query treats the folder. The possible settings are listed in the %labkeySetDefaults macro section. If not specified, "Current" is assumed.


/*	Set default parameter values to use in subsequent calls.  */
%labkeySetDefaults(baseUrl="http://localhost:8080/labkey", folderPath="/home",
schemaName="lists", queryName="People");

/* Query using custom SQL… GROUP BY and aggregates in this case. */
%labkeyExecuteSql(dsn=groups, sql="SELECT People.Last, COUNT(People.First)
AS Number, AVG(People.Height) AS AverageHeight, AVG(People.Age)
AS AverageAge FROM People GROUP BY People.Last");
proc print label data=groups; run;

/* Demonstrate UNION between two different data sets. */
%labkeyExecuteSql(dsn=combined, sql="SELECT MorePeople.First, MorePeople.Last
FROM MorePeople UNION SELECT People.First, People.Last FROM People ORDER BY 2");
proc print label data=combined; run;

The %labkeyInsertRows, %labkeyUpdateRows and %labkeyDeleteRows Macros

The %labkeyInsertRows, %labkeyUpdateRows and %labkeyDeleteRows macros are all quite similar. They each take a SAS dataset, which may contain the data for one or more rows to insert/update/delete.

Required parameters must be provided either as an argument to %labkeyInsert/Update/DeleteRows or via a previous call to %labkeySetDefaults.


dsn (dataset, required): A SAS dataset containing the rows to insert/update/delete
baseUrl (string, required): The base URL for the target server. This includes the protocol (http, https), the port number, and optionally the context path (commonly "/labkey"). Example: "http://localhost:8080/labkey"
folderPath (string, required): The folder path in which to execute the request
schemaName (string, required): The name of the schema
queryName (string, required): The name of the query within the schema
userName (string, optional): The user's login name. Please see the %labkeySetDefaults section for further details.
password (string, optional): The user's password. Please see the %labkeySetDefaults section for further details.

The key difference between the macros involves which columns are required for each case. For insert, the input dataset should not include values for the primary key column ('lsid' for study datasets), as this will be automatically generated by the server.

For update, the input dataset must include values for the primary key column so that the server knows which row to update. The primary key value for each row is returned by %labkeySelectRows and %labkeyExecuteSql if the 'showHidden' parameter is set to 1.

For delete, the input dataset needs to include only the primary key column. It may contain other columns, but they will be ignored by the server.

Example: The following code inserts new rows into a study dataset:

/*  Set default parameter values to use in subsequent calls.  */
%labkeySetDefaults(baseUrl="http://localhost:8080/labkey", folderPath="/home",
schemaName="lists", queryName="People");

data children;
input First : $25. Last : $25. Appearance : mmddyy10. Age Sex Height;
format Appearance DATE9.;
datalines;
Pebbles Flintstone 022263 1 2 .5
Bamm-Bamm Rubble 100163 1 1 .6
;
run;

/* Insert the rows defined in the children data set into the "People" list. */
%labkeyInsertRows(dsn=children);

Quality Control Values

The SAS library accepts special values in datasets as indicators of the quality control status of data. The QC values currently available are:

  • 'Q': Data currently under quality control review
  • 'N': Required field marked by site as 'data not available'
The SAS library will save these as “special missing values” in the data set.

SAS Security

The SAS library performs all requests to the LabKey Server under a given user account with all the proper security enforced on the server. User credentials are obtained from a separate location than the running SAS program so that SAS programs may be shared without compromising security.

As in the Rlabkey package, user credentials are read from a file in the user’s home directory, so as to keep those credentials out of SAS programs, which may be shared between users. Most Unix Internet tools already use the .netrc file, so the LabKey SAS library also uses that file.

For further information, see: Create a netrc file.

SAS Demos

Simple Demo

You can select (Export) > Script > SAS above most query views to export a script that selects the columns shown.

For example, performing this operation on the custom grid shown here: Grid View: Join for Cohort Views in the demo study, produces SAS code that includes:

queryName="Lab Results",
viewName="Grid View: Join for Cohort Views",

This SAS macro selects the rows shown in this custom grid into a dataset called 'mydata'.

Full SAS Demo

The archive attached to this page provides a SAS script and Excel data files. You can use these files to explore the selectRows, executeSql, insert, update, and delete operations of the SAS/LabKey Library.

Steps for setting up the demo:

  1. Make sure that you or your admin has Set Up SAS to use the SAS/LabKey Interface.
  2. Make sure that you or your admin has set up a .netrc file to provide you with appropriate permissions to insert/update/delete. For further information, see Create a netrc file.
  3. Download and unzip the demo files: the zip archive contains a SAS demo script and two data files (People.xls and MorePeople.xls). The spreadsheets contain demo data that goes with the script.
  4. Add the "Lists" web part to a portal page of a folder on your LabKey Server if it has not yet been added to the page.
  5. Create a new list called “People” and choose the “Import from file” option at list creation time to infer the schema and populate the list from People.xls.
  6. Create a second list called “MorePeople” and “Import from file” using MorePeople.xls.
  7. Change the two references to baseUrl and folderPath in the script to match your server and folder.
  8. Run the script in SAS.

HTTP Interface



We strongly recommend using the client library corresponding to your preferred programming language to interact programmatically with LabKey Server. Our client libraries provide flexible authentication mechanisms and automatically handle cookies & sessions, CSRF tokens, marshalling of parameters & payloads, and returning results in native data structures that are easy to manipulate. If absolutely required (e.g., a client library does not exist for your preferred language), you can interact with a LabKey Server through direct HTTP requests, but this requires significantly more effort than using a client library.

The HTTP Interface exposes a set of API endpoints that accept parameters & JSON payloads and return JSON results. These may be called from any program capable of making an HTTP request and decoding the JSON format used for responses (e.g., C++, C#, etc.).

This document describes the API actions that can be used by HTTP requests, detailing their URLs, inputs and outputs. For information on using the JavaScript helper objects within web pages, see JavaScript API. For an example of using the HTTP Interface from Perl, see Example: Access APIs from Perl.

Calling API Actions from Client Applications and Scripts

The API actions documented below may be used by any client application or script capable of making an HTTP request and handling the response. Consult your programming language’s or operating environment’s documentation for information on how to submit an HTTP request and process the response. Most modern languages include HTTP and JSON libraries or helpers.

Several actions accept or return information in the JavaScript Object Notation (JSON) format, which is widely supported; libraries and plug-ins for parsing and generating JSON are available for most languages.

Most of the API actions require the user to be authenticated so that the correct permissions can be evaluated. Clients should use basic authentication over HTTPS so that the headers will be encrypted. See the HTTP Basic authentication documentation for details on the headers to include and how to encode the user name and password. The "realm" can be set to any string, as the LabKey server does not support the creation of multiple basic authentication realms. The credentials provided can be a username & password combination or an API key.
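As a minimal sketch of the encoding (not LabKey-specific; the credentials shown are placeholders), a client might build the Basic header like this:

```python
import base64

def basic_auth_header(username, password):
    """Encode credentials for an HTTP Basic 'Authorization' header.

    Only send this header over HTTPS so it is encrypted in transit.
    """
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": "Basic " + token}

# Placeholder credentials for illustration only:
headers = basic_auth_header("user@example.com", "secret")
```

An API key, when used, takes the place of the password in the same encoding.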

CSRF Token

Important: All mutating API actions (including insertRows, updateRows, and deleteRows) require a CSRF token in addition to user credentials. (For background and rationale, see Cross-Site Request Forgery (CSRF) Protection.) CSRF tokens are handled automatically by the client libraries, but code that invokes APIs via direct HTTP must obtain a CSRF token and send it with every API request. Follow these steps:
  • Execute a GET request to the whoAmI API:
  • Retrieve the CSRF token from the JSON response
  • Send the "X-LABKEY-CSRF" cookie back to the server on every request. Note: Many HTTP libraries will re-send server cookies automatically.
  • Add an "X-LABKEY-CSRF" header with the value of the CSRF token to every request. Note: Many HTTP libraries have a mechanism for setting an HTTP header globally.
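The token-handling steps above can be sketched in Python. This is illustrative only: it assumes the whoAmI JSON response exposes the token in a property named "CSRF", and the sample response text is invented for the demonstration.

```python
import json

def csrf_header(whoami_response_text):
    """Extract the CSRF token from a whoAmI JSON response and build the
    X-LABKEY-CSRF header to attach to every subsequent request."""
    body = json.loads(whoami_response_text)
    token = body["CSRF"]  # property name assumed; check your server's actual response
    return {"X-LABKEY-CSRF": token}

# Invented sample response for illustration:
sample = '{"displayName": "jdoe", "id": 1001, "CSRF": "0123456789abcdef"}'
header = csrf_header(sample)
```

In a real client you would also re-send the X-LABKEY-CSRF cookie, which most HTTP libraries handle automatically via a session object.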
You can verify that your code is correctly handling CSRF tokens by invoking the test csrf action and ensuring a success response:

If you are looking for information on building a custom login page, see Modules: Custom Login Pages.

The following sections document the supported API actions in the current release of LabKey Server.

For further examples of these actions in use, plus a tool for experimenting with "Get" and "Post" parameters, see Examples: Controller Actions / API Test Page

Query Controller API Actions

selectRows Action

The selectRows action may be used to obtain any data visible through LabKey’s standard query grid views.

Example URL:


where "<MyServer>", "<MyProject>", and "<MyFolder>" are placeholders for your server, project, and folder names.

HTTP Method: GET

Parameters: Essentially, anything you see on a query string for an existing query grid is legal for this action.

The following table describes the basic set of parameters.

schemaName: Name of a public schema.
query.queryName: Name of a valid query in the schema.
query.viewName: (Optional) Name of a valid custom grid view for the chosen queryName.
query.columns: (Optional) A comma-delimited list of column names to include in the results. You may refer to any column available in the query, as well as columns in related tables using the 'foreign-key/column' syntax (e.g., 'RelatedPeptide/Protein'). If not specified, the default set of visible columns will be returned.
query.maxRows: (Optional) Maximum number of rows to return (defaults to 100).
query.offset: (Optional) The row number at which results should begin. Use this with maxRows to get pages of results.
query.showAllRows: (Optional) Include this parameter, set to true, to get all rows for the specified query instead of a page of results at a time. By default, only a page of rows will be returned to the client, but you may include this parameter to get all the rows on the first request. If you include the query.showAllRows parameter, you should not include the query.maxRows or query.offset parameters. Reporting applications will typically set this parameter to true, while interactive user interfaces may use the query.maxRows and query.offset parameters to display only a page of results at a time.
query.sort: (Optional) Sort specification. This can be a comma-delimited list of column names, where each column may have an optional dash (-) before the name to indicate a descending sort.
query.<column-name>~<oper>=<value>: (Optional) Filter specification. You may supply multiple parameters of this type, and all filters will be combined using AND logic. The valid operators are as follows:
eq = equals
neq = not equals
gt = greater-than
gte = greater-than or equal-to
lt = less-than
lte = less-than or equal-to
dateeq = date equal (visitdate~dateeq=2001-01-01 is equivalent to visitdate >= 2001-01-01:00:00:00 and visitdate < 2001-01-02:00:00:00)
dateneq = date not equal
neqornull = not equal or null
isblank = is null
isnonblank = is not null
contains = contains
doesnotcontain = does not contain
startswith = starts with
doesnotstartwith = does not start with
in = equals one of a semi-colon delimited list of values ('a;b;c').

For example, query.BodyTemperature~gt=98.6
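As a sketch, the parameter conventions above can be assembled into a query string like this (a hypothetical helper, not part of any LabKey client library):

```python
from urllib.parse import urlencode

def select_rows_query_string(schema_name, query_name, filters=(), sort=None, max_rows=None):
    """Build the query string for a selectRows request.

    `filters` is a sequence of (column, operator, value) triples; the server
    combines multiple filters with AND logic.
    """
    params = [("schemaName", schema_name), ("query.queryName", query_name)]
    if sort is not None:
        params.append(("query.sort", sort))
    if max_rows is not None:
        params.append(("query.maxRows", str(max_rows)))
    for column, oper, value in filters:
        params.append((f"query.{column}~{oper}", str(value)))
    return urlencode(params)

qs = select_rows_query_string("lists", "People",
                              filters=[("BodyTemperature", "gt", 98.6)],
                              sort="-Age", max_rows=50)
```

The resulting string is appended to the selectRows action URL after a "?".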

Response Format:

The response can be parsed into an object using any of the many available JSON parsers.

The response object contains four top-level properties:

  • metaData
  • columnModel
  • rows
  • rowCount
metaData: This property contains type and lookup information about the columns in the resultset. It contains the following properties:
root: The name of the property containing rows (“rows”). This is mainly for the Ext grid component.
totalProperty: The name of the top-level property containing the row count (“rowCount”) in our case. This is mainly for the Ext grid component.
sortInfo: The sort specification in Ext grid terms. This contains two sub-properties, field and direction, which indicate the sort field and direction (“ASC” or “DESC”) respectively.
id: The name of the primary key column.
fields: An array of field information:
  • name = name of the field
  • type = JavaScript type name of the field
  • lookup = if the field is a lookup, there will be three sub-properties listed under this property: schema, table, and column, which describe the schema, table, and display column of the lookup table (query).

columnModel: The columnModel contains information about how one may interact with the columns within a user interface. This format is generated to match the requirements of the Ext grid component. See Ext.grid.ColumnModel for further information.

rows: This property contains an array of rows, each of which is a sub-element/object containing a property per column.

rowCount: This property indicates the number of total rows that could be returned by the query, which may be more than the number of objects in the rows array if the client supplied a value for the query.maxRows or query.offset parameters. This value is useful for clients that wish to display paging UI, such as the Ext grid.
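For example, a client might navigate the four top-level properties like this. This is a minimal Python sketch; the embedded response is a trimmed, invented example that follows the shape described above.

```python
import json

# A trimmed selectRows-style response; the values are invented, but the
# shape (metaData, columnModel, rows, rowCount) follows the description above.
payload = """{
  "metaData": {
    "root": "rows",
    "totalProperty": "rowCount",
    "id": "Key",
    "fields": [
      {"name": "Key", "type": "int"},
      {"name": "FirstName", "type": "string"}
    ]
  },
  "columnModel": [{"dataIndex": "Key"}, {"dataIndex": "FirstName"}],
  "rows": [{"Key": 1, "FirstName": "A"}, {"Key": 2, "FirstName": "B"}],
  "rowCount": 2
}"""

data = json.loads(payload)
pk = data["metaData"]["id"]                         # primary key column name
names = [f["name"] for f in data["metaData"]["fields"]]
print(pk, names, data["rowCount"], len(data["rows"]))
```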

updateRows Action

The updateRows action allows clients to update rows in a list or user-defined schema. This action may not be used to update rows returned from queries to other LabKey module schemas (e.g., ms2, flow, etc). To interact with data from those modules, use API actions in their respective controllers.

Example URL:



POST body: The post body should contain JSON in the following format:

{"schemaName": "lists",
"queryName": "Names",
"rows": [
{"Key": 5,
"FirstName": "Dave",
"LastName": "Stearns"}
]
}
Content-Type Header: Because the post body is JSON and not HTML form values, you must include the 'Content-Type' HTTP header set to 'application/json' so that the server knows how to parse the incoming information.

The schemaName and queryName properties should match a valid schema/query name, and the rows array may contain any number of rows. Each row must include its primary key value as one of the properties, otherwise, the update will fail.

By default, all updates are transacted together (meaning that they all succeed or they all fail). To override this behavior, include a "transacted": false property at the top level. If 'transacted' is set to 'false', updates are not atomic and partial updates may occur if an error occurs mid-transaction. For example, if an update produces an error after some rows have already been updated, those earlier rows will remain updated.
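Putting the pieces together, the request body and headers might be assembled as in the following Python sketch. No request is actually sent; the row values are the example values above, and the "transacted" flag is shown only for illustration.

```python
import json

# Build an updateRows POST body following the format described above.
body = {
    "schemaName": "lists",
    "queryName": "Names",
    "transacted": True,  # the default; set False to allow partial updates
    "rows": [
        {"Key": 5, "FirstName": "Dave", "LastName": "Stearns"},
    ],
}
headers = {"Content-Type": "application/json"}  # required: body is JSON, not form data
post_data = json.dumps(body)
print(post_data)
```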

The response from this action, as well as the insertRows and deleteRows actions, will contain JSON in the following format:

{ "schemaName": "lists",
"queryName": "Names",
"command": "update",
"rowsAffected": 1,
"rows": [
{"Key": 5,
"FirstName": "Dave",
"LastName": "Stearns"}
]
}

The response can be parsed into an object using any one of the many available JSON parsers.

The response object will contain five properties:

  • schemaName
  • queryName
  • command
  • rowsAffected
  • rows
The schemaName and queryName properties will contain the same schema and query name the client passed in the HTTP request. The command property will be "update", "insert", or "delete" depending on the API called (see below). These properties are useful for matching requests to responses, as HTTP requests are typically processed asynchronously.

The rowsAffected property will indicate the number of rows affected by the API action. This will typically be the same number of rows passed in the HTTP request.

The rows property contains an array of row objects corresponding to the rows updated, inserted, or deleted, in the same order as the rows supplied in the request. However, the field values may have been modified by server-side logic, such as LabKey's automatic tracking feature (which automatically maintains columns with certain names, such as "Created", "CreatedBy", "Modified", "ModifiedBy", etc.), or database triggers and default expressions.

insertRows Action

Example URL:



Content-Type Header: Because the post body is JSON and not HTML form values, you must include the 'Content-Type' HTTP header set to 'application/json' so that the server knows how to parse the incoming information.

The post body for insertRows should look the same as updateRows, except that primary key values for new rows need not be supplied if the primary key columns are auto-increment.

deleteRows Action

Example URL:



Content-Type Header: Because the post body is JSON and not HTML form values, you must include the 'Content-Type' HTTP header set to 'application/json' so that the server knows how to parse the incoming information.

The post body for deleteRows should look the same as updateRows, except that the client need only supply the primary key values for the row. All other row data will be ignored.
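As described, a deleteRows body needs only the primary key values. A Python sketch (no request is sent; the list name is the example name above):

```python
import json

# deleteRows requires only the primary key value(s); any other row data
# would be ignored by the server.
delete_body = {
    "schemaName": "lists",
    "queryName": "Names",
    "rows": [{"Key": 5}],
}
post_data = json.dumps(delete_body)
print(post_data)
```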

executeSql Action

This action allows clients to execute SQL.

Example URL:



Post Body:

The post body should be a JSON-encoded object with two properties: schemaName and sql. Example:

{ "schemaName": "study",
"sql": "SELECT * FROM MyDataset" }

The response comes back in exactly the same shape as the selectRows action, which is described at the beginning of the Query Controller API Actions section of this page.
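For example, an executeSql body could be built as in this Python sketch. The schema name and the SQL (including the column names) are illustrative only, and no request is sent.

```python
import json

# Build the two-property executeSql POST body described above.
body = {
    "schemaName": "study",
    "sql": "SELECT ParticipantId, BodyTemperature FROM MyDataset",
}
post_data = json.dumps(body)
print(post_data)
```

The JSON returned by the server can then be handled exactly like a selectRows response.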

Project Controller API Actions

getWebPart Action

The getWebPart action allows the client to obtain the HTML for any web part, suitable for placement into a <div> defined within the current HTML page.

Example URL:


HTTP Method: GET

Parameters: The web part name parameter should be the name of a web part available within the specified container. Look at the Select Web Part drop-down menu for the valid form of any web part name.

All other parameters will be passed to the chosen web part for configuration. For example, the Wiki web part can accept a “name” parameter, indicating the wiki page name to display. Note that this is the page name, not the page title (which is typically more verbose).
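As a sketch of the above, the URL might be assembled like this in Python. The selector parameter is assumed here to be spelled "webpart.name" (verify against your server's documentation); the server name and container path are placeholders.

```python
from urllib.parse import urlencode

# Build a getWebPart URL for the Wiki web part. "webpart.name" is an
# assumed parameter name; the extra "name" parameter is passed through
# to the Wiki web part itself to select the page to display.
base = "https://myserver/labkey/project/MyProject/MyFolder/getWebPart.api"
params = {"webpart.name": "Wiki", "name": "somepage"}
url = base + "?" + urlencode(params)
print(url)
```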

Assay Controller API Actions

assayList Action

The assayList action allows the client to obtain a list of assay definitions for a given folder. This list includes all assays visible to the folder, including those defined at the folder and project level.

Example URL:


HTTP Method: GET

Parameters: None

Return value: Returns an array of assay definition descriptors.

Assay definition descriptor has the following properties:

  • name: String name of the assay.
  • id: Unique integer ID for the assay.
  • type: String name of the assay type. "ELISpot", for example.
  • projectLevel: Boolean indicating whether this is a project-level assay.
  • description: String containing the assay description.
  • plateTemplate: String containing the plate template name if the assay is plate-based. Undefined otherwise.
  • domains: An object mapping from String domain name to an array of domain property objects. (See below.)

Domain property objects have the following properties:

  • name: The String name of the property.
  • typeName: The String name of the type of the property. (Human readable.)
  • typeURI: The String URI uniquely identifying the property type. (Not human readable.)
  • label: The String property label.
  • description: The String property description.
  • formatString: The String format string applied to the property.
  • required: Boolean indicating whether a value is required for this property.
  • lookupContainer: If this property is a lookup, this contains the String path to the lookup container, or null if the lookup is in the same container. Undefined otherwise.
  • lookupSchema: If this property is a lookup, this contains the String name of the lookup schema. Undefined otherwise.
  • lookupQuery: If this property is a lookup, this contains the String name of the lookup query. Undefined otherwise.
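To make the nesting concrete, here is a small Python sketch that walks a hypothetical assayList result. All values are invented; only the property names follow the descriptors above.

```python
# A hypothetical assayList result, shaped like the descriptors above.
assays = [{
    "name": "Cell Culture",
    "id": 12,
    "type": "ELISpot",
    "projectLevel": True,
    "description": "Example assay definition",
    "domains": {
        "Batch Fields": [
            {"name": "ParticipantVisitResolver", "typeName": "String", "required": True},
        ],
        "Run Fields": [
            {"name": "Comments", "typeName": "String", "required": False},
        ],
    },
}]

# Collect the required property names per domain.
summary = {}
for assay in assays:
    for domain, props in assay["domains"].items():
        summary[domain] = [p["name"] for p in props if p["required"]]
print(summary)
```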

Troubleshooting Tips

If you hit an error, here are a few "obvious" things to check:

Spaces in Parameter Names. If the name of any parameter used in the URL contains a space, you will need to use "%20" or "+" instead of the space.
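In Python, for instance, both encodings are available in the standard library:

```python
from urllib.parse import quote, quote_plus

# Escape a space in a URL parameter value, either way described above.
name = "API Test List"
percent_form = quote(name)    # spaces become %20
plus_form = quote_plus(name)  # spaces become +
print(percent_form, plus_form)
```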

Controller Names: "project" vs. "query" vs "assay." Make sure your URL uses the controller name appropriate for your chosen action. Different actions are provided by different controllers. For example, the "assay" controller provides the assay API actions while the "project" controller provides the web part APIs.

Container Names. Different containers (projects and folders) provide different schemas, queries and grid views. Make sure to reference the correct container for your query (and thus your data) when executing an action.

Capitalization. The parameters schemaName, queryName and viewName are case sensitive.

Related Topics

Examples: Controller Actions / API Test Page


This page provides examples to help you get started using the HTTP Interface.


The API Test Tool

Please note that only admins have access to the API Test Tool.

To reach the test screen for the HTTP Interface, enter the following URL in your browser, substituting the name of your server for "<MyServer>" and the name of your container path for "<MyProject>/<MyFolder>"


Note that 'labkey' in this URL represents the default context path, but your server may be configured with a different context path, or no context path. This documentation assumes that 'labkey' (the default) is your server's context path.

Define a List

You will need a query table that can be used to exercise the HTTP Interface. In this section, we create and populate a list to use as our demo query table.

  • Go to the Manage Lists page at (Admin) > Manage Lists.
  • Under Available Lists click Create a New List.
  • Name the list "API Test List" and retain default parameters.
  • Click "Create List."
  • Add fields to the list. Under List Fields click the Add Field button.
  • Add two fields:
    • FirstName - a String
    • Age - an Integer
  • Click Save.

Populate the List

  • On the Create New List page, click Import Data.
  • Copy and paste the following table into the text box labeled Copy/paste text:
List Data


Query Controller API Actions: getQuery Action

The getQuery action may be used to obtain any data visible through LabKey’s standard query views.

GET URL:



{ "schemaName" : "lists",
"queryName" : "API Test List",
"formatVersion" : 8.3,
"metaData" : {
"importTemplates" : [ {

<<< snip >>>

} ],
"title" : "API Test List",
"importMessage" : null,
"columnModel" : [ {
"hidden" : false,
"dataIndex" : "FirstName",
"editable" : true,
"width" : 200,
"header" : "First Name",
"scale" : 4000,
"sortable" : true,
"align" : "left",
"required" : false
}, {
"hidden" : false,
"dataIndex" : "Age",
"editable" : true,
"width" : 60,
"header" : "Age",
"scale" : 4000,
"sortable" : true,
"align" : "right",
"required" : false
}, {
"hidden" : true,
"dataIndex" : "Key",
"editable" : false,
"width" : 180,
"header" : "Key",
"scale" : 4000,
"sortable" : true,
"align" : "right",
"required" : true
} ],
"rows" : [ {
"FirstName" : "A",
"_labkeyurl_FirstName" : "/labkey/list/MyProject/MyFolder/details.view?listId=1&pk=1",
"Age" : 10,
"Key" : 1
}, {
"FirstName" : "B",
"_labkeyurl_FirstName" : "/labkey/list/MyProject/MyFolder/details.view?listId=1&pk=2",
"Age" : 15,
"Key" : 2
}, {
"FirstName" : "C",
"_labkeyurl_FirstName" : "/labkey/list/MyProject/MyFolder/details.view?listId=1&pk=3",
"Age" : 20,
"Key" : 3
} ],
"rowCount" : 3
}

Query Controller API Actions: updateRows Action

The updateRows action allows clients to update rows in a list or user-defined schema. This action may not be used to update rows returned from queries to other LabKey module schemas (e.g., ms2, flow, etc). To interact with data from those modules, use API actions in their respective controllers.

POST URL:


Post Body:

{ "schemaName": "lists",
"queryName": "API Test List",
"rows": [
{"Key": 1,
"FirstName": "Z",
"Age": "100"}
]
}


{ "rowsAffected" : 1,
"queryName" : "API Test List",
"schemaName" : "lists",
"containerPath" : "/MyProject/MyFolder",
"rows" : [ {
"EntityId" : "5ABF2605-D85E-1035-A8F6-B43C73747420",
"FirstName" : "Z",
"Key" : 1,
"Age" : 100
} ],
"command" : "update"
}



Query Controller API Actions: insertRows Action

POST URL:


Post Body:

Note: The primary key values for new rows need not be supplied when the primary key columns are auto-increment.

{ "schemaName": "lists",
"queryName": "API Test List",
"rows": [
{"FirstName": "D",
"Age": "30"}
]
}


{ "rowsAffected" : 1,
"queryName" : "API Test List",
"schemaName" : "lists",
"containerPath" : "/MyProject/MyFolder",
"rows" : [ {
"EntityId" : "5ABF26A7-D85E-1035-A8F6-B43C73747420",
"FirstName" : "D",
"Key" : 4,
"Age" : 30
} ],
"command" : "insert"
}



Query Controller API Actions: deleteRows Action

POST URL:


Post Body:

Note: Only the primary key values for the row to delete are required.

{ "schemaName": "lists",
"queryName": "API Test List",
"rows": [
{"Key": 3}
]
}


{ "rowsAffected" : 1,
"queryName" : "API Test List",
"schemaName" : "lists",
"containerPath" : "/MyProject/MyFolder",
"rows" : [ {
"EntityId" : "5ABF27C9-D85E-1035-A8F6-B43C73747420",
"FirstName" : "C",
"Key" : 3,
"Age" : 20
} ],
"command" : "delete"
}



Project Controller API Actions: getWebPart Action

The URL of Project Controller actions uses "project" instead of "query," in contrast to the Query Controller Actions described above.

Lists. The Lists web part:


"html" : "<!--FrameType.PORTAL--><div name=\"webpart\" id=\"webpart_0\"><div class=\"panel panel-portal\"> <<< snip: full web part markup, including links to View Data, View Design, View History, Delete List, and manage lists >>> </div></div><!--/FrameType.PORTAL-->",
"requiredJsScripts" : [ ],
"implicitJsIncludes" : [ ],
"requiredCssScripts" : [ ],
"implicitCssIncludes" : [ ],
"moduleContext" : {
"pipeline" : { },
"core" : { },
"internal" : { },
"search" : { },
"experiment" : { },
"filecontent" : { },
"query" : { },
"wiki" : { },
"api" : { },
"announcements" : { },
"issues" : { }

Wiki. Web parts can take the name of a particular page as a parameter, in this case the page named "somepage":


Assay List. Some web part names have spaces. You can find the valid form of web part names in the Select Web Part drop-down menu. A web part with a space in its name:


Example: Access APIs from Perl

You can use the client-side language of your choice to access LabKey's HTTP Interface.

The Perl script logs into a server and retrieves the contents of a list query called "i5397." It prints out the results decoded using JSON.

Note that the JSON 2.07 module for Perl can be downloaded from CPAN.

Please use the script attached to this page (click the link to download) in preference to copy/pasting the same script below. The wiki editor is known to improperly escape certain common perl characters. The code below is included for ease of reference only.

#!/usr/bin/perl -w
use strict;

# Fetch some information from a LabKey server using the client API
my $email = '';
my $password = 'mypassword';

use LWP::UserAgent;
use HTTP::Request;
my $ua = new LWP::UserAgent;
$ua->agent("Perl API Client/1.0");

# Setup variables
# schemaName should be the name of a valid schema.
# The "lists" schema contains all lists created via the List module
# queryName should be the name of a valid query within that schema.
# For a list, the query name is the name of the list
# project should be the folder path in which the data resides.
# Use a forward slash to separate the path
# host should be the domain name of your LabKey server
# labkeyRoot should be the root of the LabKey web site
# (if LabKey is installed on the root of the site, omit this from the url)
my $schemaName="lists";
my $queryName="MyList";
my $project="MyProject/MyFolder/MySubFolder";
my $host="localhost:8080";
my $labkeyRoot = "labkey";
my $protocol="http";

#build the url to call the selectRows.api action
#for other APIs, see the example URLs in the HTTP Interface documentation
my $url = "$protocol://$host/$labkeyRoot/query/$project/" .
    "selectRows.api?schemaName=$schemaName&query.queryName=$queryName";

#Fetch the actual data from the query
my $request = HTTP::Request->new("GET" => $url);
$request->authorization_basic($email, $password);
my $response = $ua->request($request);

# use the JSON module (2.07 or later, available from CPAN) to decode the response
use JSON;
my $json_obj = JSON->new->utf8->decode($response->content);

# the number of rows returned will be in the 'rowCount' property
print $json_obj->{rowCount} . " rows:\n";

# and the rows array will be in the 'rows' property.
foreach my $row(@{$json_obj->{rows}}){
    #Results from this particular query have a "Key" and a "Value"
    print $row->{Key} . ":" . $row->{Value} . "\n";
}

External ODBC Connections

Premium Feature — Available in the Professional Plus and Enterprise Editions. Learn more or contact LabKey.

An ODBC Connection exposes the LabKey schema and queries as a data source to external clients for analysis and reporting. Encrypted connections using TLS are supported and are recommended for production deployments. For details see Secure ODBC Connections.

Tested and supported clients are described below, along with other clients that may be compatible.

The underlying exposure mechanism is an implementation of the PostgreSQL ODBC wire protocol. Each LabKey container (a project or folder) is surfaced to clients as a separate PostgreSQL "database". These "databases" expose the LabKey virtual schema (the same view of the data provided by the Query Schema Browser).

Queries through an ODBC connection respect all of the security settings present on the LabKey Server container. Clients must have at least the Reader role to query the data.

Only read access is supported; data cannot be inserted or updated using the virtual schema over an ODBC connection.

Note that ODBC connections are not supported in cloud hosted server environments, including trial instances of LabKey Server.

ODBC Connection Set Up

By default ODBC connections are disabled; to enable them, follow the instructions below.

  • Navigate to (Admin) > Site > Admin Console. Click Admin Console Links. Under Premium Features, click External Analytics Connections.
  • On the page Enable External Analytics Connections, place a check mark next to Allow Connections.
  • By default the server will listen for client requests on port 5435. If desired, you can change the port number within the range: 1 to 65535.
  • Click Save.

Windows: Install PostgreSQL Driver

  • On the client machine, install the latest version of the PostgreSQL ODBC driver.
    • Downloads for Windows are available from the PostgreSQL website.
    • Note that there are 32-bit and 64-bit drivers available. You can install both, or install the version that matches your client tool, not your host machine. For example, if you have a 32-bit version of Excel, then install the 32-bit ODBC driver, even if you have a 64-bit machine.

Windows: Create a Data Source Name (DSN)

On the client machine, create a "data source name" (DSN) to wrap a data container on LabKey Server. Creating a "system" DSN, as shown below, makes it available to various clients. Client tools use the ODBC driver to query this DSN.

  • On Windows, open the ODBC Data Source Administrator.
  • Click the System DSN tab.
  • Click Add....
  • Select the PostgreSQL driver you installed above and click Finish.
    • Data Source - This is the name used by the client tool.
    • Description - This can be any text.
    • Database - A LabKey container path, that is, the project or folder you are connecting to. Include a leading slash in the path, for example, "/Home" or "/Home/MyDataFolder".
    • SSL Mode - Set to "disable".
    • Server - The server you are connecting to; for example, localhost.
    • Port - This number must match the port enabled on the server. 5435 is the default used by LabKey Server.
    • User Name - The user this connection will authenticate against. This user should have at least the Reader role in the LabKey Server container.
    • Password - The password for the above user.
    • Click Test to ensure the connection is successful.
    • Click Save to finish.
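Because the server speaks the PostgreSQL wire protocol, the DSN fields above correspond to ordinary PostgreSQL connection parameters. The Python sketch below only illustrates that mapping; all values are placeholders matching the defaults described above, and no connection is attempted.

```python
# Map the DSN fields above onto PostgreSQL-style connection parameters.
# All values are placeholders for illustration.
fields = {
    "host": "localhost",
    "port": 5435,                    # LabKey's default listener port
    "dbname": "/Home/MyDataFolder",  # a LabKey container path
    "user": "reader@example.com",
    "sslmode": "disable",
}
conninfo = " ".join(f"{k}={v}" for k, v in fields.items())
print(conninfo)
```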

Tableau Desktop

To load data into Tableau Desktop:

  • In Tableau Desktop, go to Data > New Data Source > More… > Other Databases (ODBC).
  • Place a checkmark next to DSN and select your DSN in the dropdown. Click Connect.
  • Search for and select the Schema - 'core' is shown in the screenshot below.
  • Search for and select the Table - 'Modules' is shown in the screenshot below.

We recommend that you set the Connection to "Extract" instead of "Live". (This helps to avoid the following errors from the ODBC driver: "ODBC escape convert error".)


Microsoft Excel

To load data into Excel:

  • In Excel (Office 365 version), open an empty sheet and click the Data tab
  • Select Get Data > From Other Sources > From ODBC. (Note this path may vary, depending on your version of Excel.)
  • In the From ODBC popup dialog, select the system Data Source Name (DSN) you created above. Optionally, you can enter a SQL query under Advanced options.
  • If you chose not to provide a SQL query, select the table to load using the Navigator dialog. Select the desired table and click Load.
  • The data will be selected from the server and loaded into the worksheet.

Controlling Excel Data Loading

To control the SQL SELECT statement used by Excel to get the data, such as adding a WHERE or JOIN clause, double-click the table/query in the Queries and Connections panel. In the Power Query Editor, click Advanced Editor.

To control the refresh behavior, go to Data tab > Connections > Properties. The Refresh control panel provides various options, such as refreshing when the sheet is opened.

Note that saving a sheet creates a snapshot of the data locally. Use with caution if you are working with PHI or otherwise sensitive data.


Microsoft Access

Access can be run in snapshot or dynamic modes. Loading data into Access also provides a path to processing in Visual Basic.

See the Microsoft documentation at Add an ODBC Data Source

Microsoft SQL Server Reporting Services (SSRS)

SQL Server Reporting Services is used for creating, publishing, and managing reports, and delivering them to the right users in different ways, whether that's viewing them in a web browser, on their mobile device, or via email.

For detailed setup instructions, see ODBC Data Sources and SQL Server Reporting Service (SSRS).


MATLAB

  • In MATLAB, click the Apps tab.
  • Open the Database Explorer app. (If you don't see it, install the Database Toolbox.)
  • In the Database Explorer, click New Query.
  • In the Connect to a Data Source dialog, select your DSN and provide a username and password.
  • In the popup dialog, select the target Schema. The Catalog (a LabKey container) is determined by the DSN.
  • Select a table to generate SQL statements.


Troubleshooting

Error message: "Bad Connection..."

The following error may occur from the ODBC driver. This error has been seen especially with Tableau Desktop when working with date fields.

Bad Connection: Tableau could not connect to the data source. 
ODBC escape convert error

<snip>Generated SQL statement is shown here</snip>


When connecting Tableau Desktop to the DSN, select "Extract". See above for a screenshot.

Other Tools

These other external tools have not been extensively tested and are not officially supported, but have been reported to be compatible with LabKey using ODBC connections.

Related Topics

ODBC Data Sources and SQL Server Reporting Service (SSRS)

Premium Feature — Available in the Professional Plus and Enterprise Editions. Learn more or contact LabKey.

This topic explains how to set up SSRS to create reports and perform analyses on data stored in LabKey Server. It assumes that you have previously set an ODBC Connection to provide data to SSRS. For details see External ODBC Connections.

SSRS Setup

The PostgreSQL ODBC driver is used to connect to LabKey regardless of the underlying database used by the LabKey Server.

  • Download both the 32-bit and 64-bit PostgreSQL ODBC drivers from the PostgreSQL website.
  • Under the msi folder, download the zip files for both the 32-bit and 64-bit drivers.
  • Unzip and double-click the .msi to install (do the same if you want to uninstall).
  • You will likely want to set up a LabKey account with the minimum amount of permissions required for your reports.
  • From Firewall settings, set Inbound and Outbound rules for port 5435 (or the port you set in later steps for your ODBC interface).
  • For Inbound rule, click on New Rule.
  • Select Port → Next.
  • Select TCP → Select Specific local ports: 5435
  • Click Next.
  • Select ‘Allow the connection’.
  • Check the appropriate network profile - either Domain and/or Private and/or Public. Click Next.
  • Provide a meaningful name and click Finish.
  • For the Outbound rule, click on New Rule and follow the same steps as the Inbound Rule setup above.
  • You may need to restart your computer for the firewall rules to take effect.
  • Since Visual Studio and Report Builder are 32-bit and recent versions of SSRS and MS SQL Server are 64-bit, you will likely have to set up both the 32-bit and 64-bit PostgreSQL ODBC data sources. ODBC data source setup steps:

32 bit ODBC Data Source

  • Go to the 32 bit ODBC Data Source Administrator
  • You’ll want to create a System DSN, so click that tab
  • Select Add then select the 32 bit postgres driver (ANSI or Unicode), and the following window will appear:

An example of the ODBC driver setup:

  • Data Source: the name of this data source, could be anything you prefer.
  • Database: your LabKey folder or container, e.g. /Home or any folder of your LabKey data source, since each LabKey folder is exposed as its own database.
  • Server: your server host name; use "localhost" for a local setup/dev mode.
  • User Name: your LabKey Server user name.
  • Password: your LabKey Server password.
  • Port: 5435 is the default Postgres ODBC port. This value needs to match whatever is set in the Admin Console of your LabKey Server.
  • Go to Admin → Site → Admin Console → Admin Console Links.
  • Under Premium Features, click External Analytics Connections.
  • Clicking on Test should give you Connection Successful if all the setup is correct.
  • Save.
  • Click OK to close ODBC Data Source Administration (32-bit) window.

64 bit ODBC Data Source

  • Go to the 64 bit ODBC Data Source Administrator
  • Go through same steps as 32 bit ODBC data source above.
    • Important: You will want to name the 32 bit and 64 bit ODBC data sources the exact same name.

SSRS Reporting Services setup

  • Go to Report Server Configuration Manager, select Server Name and Report Server Instance: SSRS → Connect
  • In the Reporting Services Configuration Manager, ensure your screens look similar to below:
  • a) Report Server Status:

  • b) Service Account:
  • c) WebService URL & Web Portal URL (should be clickable):
  • d) Database (Change Database or Change Credentials to set these values for the first time):
  • e) You may need to setup an Execution Account.
  • The Account will request a format like <Domain>\<Username>; there is no domain on the LabKey Server, so leave that part blank.
  • Use the same account as your ODBC data sources (i.e your LabKey user credentials)

Visual Studio

  • In Visual Studio create a Report Server Project
  • To set up the data source, add a new Shared Data Source
  • Set type to ODBC
  • The Connection string just needs to show which DSN to use. This should be the name that you used for both your 32 bit and 64 bit data sources.
  • Click Credentials and select Do not use credentials
  • Right click on Project → Properties → set properties as shown below:
  • You should now be able to use this data source to create datasets and reports. Your report should work in preview mode and when deployed to the report server.


  • The Query Designer is an optional UI to aid in the creation of report Datasets. Because it generates and parses SQL queries using common SQL dialects like Transact-SQL, some of the enhanced features of LabKey SQL cannot be generated or parsed using the Query Designer. In particular, LabKey pivot queries must be entered manually; the Query Designer cannot be used on a LabKey pivot query.
  • The Table Query Designer (available when Dataset Query Type is Table) is not available when using an ODBC data source, instead the Text Query Designer will be shown. This is a limitation of the .Net ODBC controller.
  • Parameters in Queries. To use parameters in Dataset queries, insert '?' in your query as a placeholder for each parameter. Then, in the Parameters section, define the names of the parameters used in the report UI. Parameters are listed in the order they appear in the query.

Related Topics

Secure ODBC Connections

LabKey Server can support secure ODBC connections using TLS. Secure ODBC connections piggyback on Tomcat for TLS configurations (both certificates and keys).

TLS connections are recommended for production deployments. Currently, TLS connections are supported only for on-premise deployments; they are not supported for cloud-based deployments.

See below for details on setting up a secure configuration.

Configure Tomcat for TLS Connections

For details see Configure the LabKey Web Application.

Cipher delimiter characters: While Tomcat does not care which delimiter is used in the server's xml config file, to make it work with ODBC connections, a colon delimiter must be used in separating cipher suites. For example:

sslProtocol="TLSv1.2" protocols="TLSv1.2"
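For illustration only, a connector entry with colon-delimited cipher suites might look like the following. The port, protocol attributes, and the specific cipher suite names are placeholders chosen to show the delimiter, not a recommended configuration; use the suites appropriate for your deployment.

```xml
<Connector port="8443" SSLEnabled="true" scheme="https" secure="true"
           sslProtocol="TLSv1.2" protocols="TLSv1.2"
           ciphers="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"/>
```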


Configure PostgreSQL Client for Secure Connections

PostgreSQL supports the following TLS connection modes:

  • disable
  • allow
  • prefer
  • require
  • verify-ca
  • verify-full
For details on these modes, see the PostgreSQL documentation at Protection Provided in Different Modes.

Currently, when secure connections are enforced through LabKey Server, connections through disable and allow modes are not successful.

When LabKey's Enforce TLS switch is turned off (see below), connections through all modes are successful, provided Tomcat is set up for secure connections.

For modes verify-ca and verify-full, clients (that is, users who want to connect to a LabKey Server data source) will need to place the certificate for the server in the location specified in the PostgreSQL docs at Client Verification of Server Certificates.

Configure DSN

When setting up the DSN wrapper for the ODBC connection, clients should select one of these modes:

  • prefer
  • require
  • verify-ca
  • verify-full

Self-signed certificates can be supported by using the following modes:

  • prefer
  • require

If the client has been configured to trust the certificate (by adding it to the CA list), verify-ca will also work.

Require TLS on LabKey Server

To set up TLS on LabKey Server, see Creating & Installing SSL/TLS Certificates on Tomcat.

To turn on the TLS enforcement for ODBC connections:

  • Open the Admin Console at (Admin) > Site > Admin Console.
  • Click Admin Console Links. In the section Premium Features, click External Analytics Connections.
  • On the page Enable External Analytics Connections, place a checkmark next to Require TLS.
  • Click Save.

Related Topics

API Keys

API Keys can be used to authenticate client code accessing LabKey Server using one of the LabKey Client APIs. Authentication with an API key avoids needing to store your LabKey password or other credential information on the client machine. An API key can be specified in .netrc, provided to API functions, and used with external clients that support Basic authentication. API keys have security benefits over passwords (they are tied to a specific server, they're usually configured to expire, and they can be revoked), but a valid API key provides complete access to your data and actions, so it should be kept secret.

An administrator can configure the server to allow users to obtain an API Key (or token) once they have logged in. API keys can be configured to expire after a duration specified by the administrator. An administrator also retains the power to immediately deactivate one or more API keys whenever necessary.

In cases where access must be tied to the current browser session and run under the current context (e.g., your user, your authorizations and if applicable, your declared terms of use and PHI level, your current impersonation state, etc.), such as some compliance environments, you will need to use a session key. Session keys expire at the end of the session, whether by timeout or explicit logout.

Configure API Keys (Admin)

  • Select (Admin) > Site > Admin Console.
  • Click the Admin Console Links tab.
  • Under Configuration, click Site Settings.
  • Under Configure API Keys, check the box for Let users create API keys.
  • Select when to Expire API keys. Options:
    • Never (default)
    • 7 days
    • 30 days
    • 90 days
    • 365 days
  • Click Save.

Access and Use an API Key (Developers/Users)

Once enabled, a logged-in user can retrieve an API key from the username menu:

The API key is a long, randomly generated token that provides an alternative authentication credential for use with APIs; it has the prefix "apikey|".

Click Generate API Key to see it; click Copy to Clipboard to grab it. The button will read Copied! when the copy has completed. Then click Done.

You can then use this key in a .netrc file or via clients that authenticate using Basic authentication. All access to the system will be subject to your authorization and logged with your user information.
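Under the hood, a client that uses Basic authentication sends the API key as the password for the user "apikey". A minimal sketch of the header such a client would construct (the key value below is a placeholder):

```python
import base64

# Sketch: build the HTTP Basic Authorization header for an API key.
# The login is the literal string "apikey"; the key is a placeholder.
def basic_auth_header(api_key, login="apikey"):
    token = base64.b64encode(f"{login}:{api_key}".encode()).decode()
    return {"Authorization": "Basic " + token}
```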

Note: When an administrator is impersonating a user, group or role, they cannot generate an API key.

Example: .netrc File

To avoid embedding credentials into your code, you can use the API key as a password within a .netrc file. When doing so, the username is "apikey" (instead of your email address) and the password is the entire API key including the prefix. This is the recommended method of using an API key; it is compatible with all LabKey client libraries.

machine localhost
login apikey
password apikey|the_rest_of_the_long_api_key_copied

Any API use via a LabKey client library will be able to access the server with your permissions, until the key expires or is terminated by an administrator.
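As an illustration of how a client picks up these credentials, here is a sketch using Python's standard netrc module. The explicit path argument is for demonstration only; libraries normally read ~/.netrc (or ~/_netrc on Windows) automatically:

```python
from netrc import netrc

# Sketch: read the "apikey" credentials for a LabKey host from a .netrc file.
def labkey_credentials(host, netrc_path):
    auth = netrc(netrc_path).authenticators(host)
    if auth is None:
        raise LookupError("no .netrc entry for " + host)
    login, _account, password = auth
    return login, password
```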

Manage API Keys (Admin)

A site administrator can manage API keys generated on the server using the APIKey query. Link to it from the top of the username > API Keys page.

You will see the keys that have been generated on this server, listed by username and displaying the time of creation as well as expiration (where applicable). Note that session keys are not listed here, and there is no ability for a non-admin user to see or delete their own keys.

To revoke an API Key, such as in a case where it has been compromised or shared, select the row and click (Delete). To revoke all API keys, select all rows and delete.

Related Topics

Compliant Access via Session Key

Regulatory compliance may impose stringent data access requirements, such as having the user declare their intended use of the data, provide their IRB number and necessary PHI level, and sign associated terms of use documents every time they log in. This information is logged with each access of the data for later review or audit. To enable programmatic use of data as if "attached" to a given session, an administrator can configure the server to let users obtain a Session Key (or token) once they have logged in via the web UI. This key can be used to authorize client code accessing LabKey Server using one of the LabKey Client APIs. Using any API key avoids copying and storing your credentials on the client machine. In the case of a session key, this access is tied to the current browser session and runs under the current context (e.g., your user, your authorizations and if applicable, your declared terms of use and PHI level, your current impersonation state, etc.) then expires at the end of the session, whether by timeout or explicit logout.

Enable Session Keys

  • Select (Admin) > Site > Admin Console.
  • Click the Admin Console Links tab.
  • Under Configuration, click Site Settings.
  • Under Configure API Keys, check Let users create session keys.
  • Click Save.

Obtain and Use a Session Key

Once enabled, the user can log in, providing all the necessary compliance information, then retrieve their unique session key from the username > API Keys menu:

Click Generate Session Key. The session key is a long, randomly generated token, beginning with the prefix "session|" that is valid for only this single browser session. Click Copy to Clipboard to grab it (the button will read "Copied!" when copied). Then click Done.

You can then paste this key into a script, tying that code's authorization to the browser session where the key was generated. The session key can also be used in a .netrc file or via an external client that supports Basic authentication, as shown in API Keys. When using a session key within a netrc file, use the login "apikey". When using a session key, the code's actions and access will be logged with your user information and the assertions you made at login time.

Example: netrc File

To avoid embedding credentials into your code, you can use a session key as a password within a .netrc/_netrc file. When doing so, the username is "apikey" and the password is the entire session key including the prefix.

machine localhost
login apikey
password session|the_rest_of_the_long_session_key_copied

Example: R

For example, if you were accessing data via R, the following shows the usage:

labkey.setDefaults(apiKey="session|the_rest_of_the_long_session_key_copied")

You will then be able to access the data from R until the session associated with that key is terminated, whether via timeout or log out.


Related Topics

Develop Modules

Modules encapsulate functionality, packaging resources together for simple deployment within LabKey Server. Modules are developed by incrementally adding file resources within a standardized directory structure. For deployment, the files are archived as a .module file (a standard .zip file renamed with a custom file extension).

This topic helps module developers extend LabKey Server. To learn more about pre-existing modules built into LabKey Server, see Modules in LabKey Server Editions.

A wide variety of resources can be used, including R reports, SQL queries and scripts, API-driven HTML pages, CSS, JavaScript, images, custom web parts, XML assay definitions, and compiled Java code. Much module development can be accomplished without compiling Java code, letting you directly deploy and test module source, oftentimes without restarting the server.

Module Functionality

  • Hello World: A simple "Hello World" module. (Tutorial: Hello World Module)
  • Queries, Views, and Reports: A module that includes queries, reports, and/or views directories. Create file-based SQL queries, reports, views, web parts, and HTML/JavaScript client-side applications. No Java code required, though you can easily evolve your work into a Java module if needed. (Modules: Queries, Views and Reports)
  • Assay: A module with an assay directory included, for defining a new assay type. (Modules: Assay Types)
  • Extract-Transform-Load: A module with an etl directory included, for configuring data transfer and synchronization between databases. (ETL: Extract Transform Load)
  • Script Pipeline: A module with a pipeline directory included, for running scripts in sequence, including R scripts, JavaScript, Perl, Python, etc. (Script Pipeline: Running R and Other Scripts in Sequence)
  • Java: A module with a Java src directory included. Develop Java-based applications to create server-side code. (Modules: Java, Tutorial: Hello World Java Module)

Do I Need to Compile Modules?

Modules do not need to be compiled, unless they contain Java code. Most module functionality can be accomplished without the need for Java code, including "CRUD" applications (Create-Retrieve-Update-Delete applications) that provide views and reports on data on the server, and provide some way for users to interact with the data. These applications will typically use some combination of the following client APIs: LABKEY.Query.selectRows, insertRows, updateRows, and deleteRows.
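For illustration, the kind of request URL that a selectRows call resolves to can be sketched as below. The endpoint path used here is an assumption, not a documented contract; real code should go through the client libraries (e.g., LABKEY.Query.selectRows) rather than building raw URLs:

```python
from urllib.parse import urlencode

# Sketch only: a hypothetical selectRows URL of the kind the client
# libraries issue under the hood. The "query-selectRows.api" path is
# an assumption for illustration purposes.
def select_rows_url(base_url, container_path, schema_name, query_name):
    params = urlencode({"schemaName": schema_name,
                        "query.queryName": query_name})
    return f"{base_url}/{container_path}/query-selectRows.api?{params}"
```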

Also note that client-side APIs are generally guaranteed to be stable, while server-side APIs are not guaranteed to be stable and are liable to change as the LabKey Server code base evolves -- so modules based on the server API may require changes to keep them up to date.

More advanced client functionality, such as defining new assay types, working with the security API, and manipulating studies, can also be accomplished with a simple module without Java.

To create your own server actions (i.e., code that runs on the server, not in the client), Java is generally required. Trigger scripts, which also run on the server, are an exception: they are a powerful feature, sufficient in many cases to avoid the need for Java code. Note that Java modules require a build/compile step, but modules without Java code don't need to be compiled before deployment to the server.

Module Development Setup

Use the following topic to set up a development machine for building LabKey modules: Set Up a Development Machine


The topics below show you how to create a module, how to develop the various resources within the module, and how to package and deploy it to LabKey Server.

Premium Resource: Migrate Module from SVN to GitHub

Tutorial: Hello World Module

LabKey Server's functionality is packaged inside of modules. For example, the query module handles the communication with the databases, the wiki module renders Wiki/HTML pages in the browser, the assay module captures and manages assay data, etc.

You can extend the functionality of the server by adding your own module. Here is a partial list of things you can do with a module:

  • Create a new assay type to capture data from a new instrument.
  • Add a new set of tables and relationships (= a schema) to the database by running a SQL script.
  • Develop file-based SQL queries, R reports, and HTML views.
  • Build a sequence of scripts that process data and finally insert it into the database.
  • Define novel folder types and web part layouts.
  • Set up Extract-Transform-Load (ETL) processes to move data between databases.
Modules provide an easy way to distribute and deploy code to other servers, because they are packaged as single .module files, really just a renamed .zip file. When the server detects a new .module file, it automatically unzips it, and deploys the module resources to the server. In many cases, no server restart is required. Also, no compilation is necessary, assuming the module does not contain Java code or JSP pages.

The following tutorial shows you how to create your own "Hello World" module and deploy it to a local testing/development server.

Set Up a Development Machine

In this step you will set up a test/development machine, which compiles LabKey Server from its source code.

If you already have a working build of the server, you can skip this step.

  • Download the server source code and complete an initial build of the server by completing the steps in the following topic: Set Up a Development Machine
  • Before you proceed, build and deploy the server. Confirm that the server is running by visiting the URL http://localhost:8080/labkey/project/home/begin.view?
  • For the purposes of this tutorial, we will call the location where you have synced the server source code LABKEY_SRC. On Windows, a typical location for LABKEY_SRC would be C:/dev/trunk

Module Properties

In this step you create the main directory for your module and set basic module properties.

  • Go to LABKEY_SRC, the directory where you synced the server source code.
  • If necessary, in LABKEY_SRC, create a directory named "externalModules".
  • Inside LABKEY_SRC/externalModules, create a directory named "helloworld".
  • Inside the helloworld directory, create a file named "module.properties", resulting in the following: LABKEY_SRC/externalModules/helloworld/module.properties
  • Add the following property/value pairs to module.properties. This is a minimal list of properties needed for deployment and testing. You can add a more complete list of properties later on, including your name, links to documentation, required server and database versions, etc. For a complete list of available properties see Module Properties Reference.
Name: HelloWorld
ModuleClass: org.labkey.api.module.SimpleModule
Version: 1.0

Build and Deploy the Module

  • Add a file named "build.gradle" to the root directory of your module, resulting in: LABKEY_SRC/externalModules/helloworld/build.gradle
  • Add these lines to the file build.gradle:
apply plugin: 'java'
apply plugin: 'org.labkey.fileModule'
  • Open the file settings.gradle (at LABKEY_SRC/settings.gradle) and add the following line to the bottom of the file:
include ":externalModules:helloworld"
  • Confirm that your module will be included in the build:
    • Open a command window.
    • Go to the directory LABKEY_SRC.
    • Call the Gradle task:
gradlew projects

In the list of projects, you should see the following:

Root project
+--- Project ':externalModules'
|    \--- Project ':externalModules:helloworld'

  • Build and deploy the server by calling the Gradle task:
gradlew deployApp

OR, for a more targeted build, you can call the gradle task:

gradlew :externalModules:helloworld:deployModule

Confirm the Module Has Been Deployed

  • Start the server:
gradlew startTomcat
  • In a browser go to: http://localhost:8080/labkey/project/home/begin.view?
  • Sign in.
  • Confirm that HelloWorld has been deployed to the server by going to (Admin) > Site > Admin Console. Click Module Information (on the right). Open the node HelloWorld. Notice the module properties you specified are displayed here: Name: HelloWorld, Version: 1.0, etc.

Add a Default Page

Each module has a default home page called "begin.view". In this step we will add this page to our module. The server interprets your module resources based on a fixed directory structure. By reading the directory structure and the files inside, the server knows their intended functionality. For example, if the module contains a directory named "assays", this tells the server to look for XML files that define a new assay type. Below, we will create a "views" directory, telling the server to look for HTML and XML files that define new pages and web parts.

  • Inside helloworld, create a directory named "resources".
  • Inside resources, create a directory named "views".
  • Inside views, create a file named "begin.html". (This is the default page for any module.)
helloworld
│   build.gradle
│   module.properties
└───resources
    └───views
            begin.html
  • Open begin.html in a text editor, and add the following HTML code:
<p>Hello, World!</p>
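The directory steps above can be sketched as a small script. This is a sketch only; "module_root" stands in for LABKEY_SRC/externalModules/helloworld:

```python
from pathlib import Path

# Sketch: scaffold the minimal begin view layout described above
# (resources/views/begin.html under the module root).
def scaffold_begin_view(module_root):
    views = Path(module_root) / "resources" / "views"
    views.mkdir(parents=True, exist_ok=True)
    begin = views / "begin.html"
    begin.write_text("<p>Hello, World!</p>\n")
    return begin
```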

Test the Module

  • Stop and restart the Tomcat server using 'gradlew stopTomcat' and then 'gradlew startTomcat'.
  • Build the server using 'gradlew deployApp', or in the module directory, use 'gradlew deployModule'.
  • Wait for the server to redeploy.
  • Enable the module in some test folder:
    • Navigate to some test folder on your server.
    • Go to (Admin) > Folder > Management and click the Folder Type tab.
    • In the list of modules on the right, place a checkmark next to HelloWorld.
    • Click Update Folder.
  • Confirm that the view has been deployed to the server by going to (Admin) > Go to Module > HelloWorld.
  • The following view will be displayed:

Modify the View with Metadata

You can control how a view is displayed by using a metadata file. For example, you can define the title, framing, and required permissions.

  • Add a file to the views directory named "begin.view.xml". Note that this file has the same name (minus the file extension) as begin.html: this tells the server to apply the metadata in begin.view.xml to begin.html.
helloworld
│   build.gradle
│   module.properties
└───resources
    └───views
            begin.html
            begin.view.xml
  • Add the following XML to begin.view.xml. This tells the server to: display the title 'Begin View', display the HTML without any framing, and that Reader permission is required to view it.
<view xmlns=""
      title="Begin View"
      frame="none">
    <permission name="read"/>
</view>
  • Save the file.
  • Refresh your browser to see the result. (You do not need to rebuild or restart the server.)
  • The begin view now looks like the following:
  • Experiment with other possible values for the 'frame' attribute:
    • portal (If no value is provided, the default is 'portal'.)
    • title
    • dialog
    • div
    • left_navigation
    • none
  • When you are ready to move to the next step, set the 'frame' attribute to 'portal'.

Hello World Web Part

You can also package the view as a web part using another metadata file.

  • In the helloworld/resources/views directory add a file named "begin.webpart.xml". This tells the server to surface the view inside a webpart. Your module now has the following structure:
helloworld
│   build.gradle
│   module.properties
└───resources
    └───views
            begin.html
            begin.view.xml
            begin.webpart.xml
  • Paste the following XML into begin.webpart.xml:
<webpart xmlns=""
         title="Hello World Web Part">
    <view name="begin"/>
</webpart>
  • Save the file.
  • Return to your test folder using the hover menu in the upper left.
  • In your test folder, enter > Page Admin Mode, then click the pulldown menu that will appear: <Select Web Part>.
  • Select the web part Hello World Web Part and click Add.
  • The following web part will be added to the page:
  • Click Exit Admin Mode.

Hello World User View

The final step provides a more interesting view that uses the JavaScript API to retrieve information about the current user.

  • Open begin.html and replace the HTML with the content below.
  • Refresh the browser to see the changes. (You can directly edit the file begin.html in the module -- the server will pick up the changes without needing to rebuild or restart.)
<p>Hello, <script>document.write(LABKEY.Security.currentUser.displayName);</script>!</p>

<p>Your account info: </p>
<table>
<tr><td>id</td><td><script>document.write(LABKEY.Security.currentUser.id);</script></td></tr>
<tr><td>displayName</td><td><script>document.write(LABKEY.Security.currentUser.displayName);</script></td></tr>
<tr><td>email</td><td><script>document.write(LABKEY.Security.currentUser.email);</script></td></tr>
<tr><td>canInsert</td><td><script>document.write(LABKEY.Security.currentUser.canInsert);</script></td></tr>
<tr><td>canUpdate</td><td><script>document.write(LABKEY.Security.currentUser.canUpdate);</script></td></tr>
<tr><td>canUpdateOwn</td><td><script>document.write(LABKEY.Security.currentUser.canUpdateOwn);</script></td></tr>
<tr><td>canDelete</td><td><script>document.write(LABKEY.Security.currentUser.canDelete);</script></td></tr>
<tr><td>isAdmin</td><td><script>document.write(LABKEY.Security.currentUser.isAdmin);</script></td></tr>
<tr><td>isGuest</td><td><script>document.write(LABKEY.Security.currentUser.isGuest);</script></td></tr>
<tr><td>isSystemAdmin</td><td><script>document.write(LABKEY.Security.currentUser.isSystemAdmin);</script></td></tr>
</table>
  • Save the file.
  • Once you've refreshed the browser, the web part will display the following.
  • Also, try out rendering a query in your view. The following renders the table core.Modules, a list of all of the modules available on the server.
<div id='queryDiv'/>
<script type="text/javascript">
    var qwp1 = new LABKEY.QueryWebPart({
        renderTo: 'queryDiv',
        title: 'LabKey Modules',
        schemaName: 'core',
        queryName: 'Modules',
        filters: [
            LABKEY.Filter.create('Organization', 'LabKey')
        ]
    });
</script>

Make a .module File

You can distribute and deploy a module to a production server by making a helloworld.module file (a renamed .zip file).

  • In anticipation of deploying the module on a production server, add the property 'BuildType: Production' to the module.properties file:
Name: HelloWorld
ModuleClass: org.labkey.api.module.SimpleModule
Version: 1.0
BuildType: Production
  • Rebuild the module by going to the module directory and calling:
gradlew deployModule
  • The build process creates a helloworld.module file at:

This file can be deployed by copying it to another server's externalModules directory. When the server detects changes in this directory, it will automatically unzip the .module file and deploy it. You may need to restart the server to fully deploy the module.
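Since a .module file is just a renamed .zip archive, packaging a module directory by hand can be sketched as follows. This is a sketch under the stated assumption, not the build's actual implementation; paths are placeholders:

```python
import shutil
from pathlib import Path

# Sketch: zip a module directory and rename the archive to .module,
# mirroring what the build produces. Paths are placeholders.
def make_module_file(module_dir, out_dir):
    module_dir = Path(module_dir)
    base = Path(out_dir) / module_dir.name
    zip_path = Path(shutil.make_archive(str(base), "zip", root_dir=module_dir))
    module_path = base.with_suffix(".module")
    zip_path.replace(module_path)
    return module_path
```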

Here is a completed helloworld module: helloworld.module

Related Topics

These topics show more functionality that you can package as a module:

These topics describe the build process generally:

Map of Module Files

This page shows the directory structure for modules, and the content types that can be included.

Module Directories and Files

The following directory structure follows the pattern for modules as they are checked into source control. The structure of the module as deployed to the server is somewhat different, for details see below and the topic Module Properties Reference. If your module contains Java code or Java Server Pages (JSPs), you will need to compile it before it can be deployed.

Items shown in lowercase are literal values that should be preserved in the directory structure; items shown in UPPERCASE should be replaced with values that reflect the nature of your project.

MODULE_NAME
│   module.properties                    docs
└───resources
    │   module.xml                       docs, example
    ├───assay                            docs
    ├───credits                          docs, example
    ├───domain-templates                 docs
    ├───etls                             docs
    ├───folderTypes                      docs
    ├───olap                             example
    ├───pipeline                         docs, example
    ├───queries                          docs
    │   └───SCHEMA_NAME
    │       │   QUERY_NAME.js            docs, example
    │       │   QUERY_NAME.query.xml     docs, example
    │       │   QUERY_NAME.sql           example
    │       └───QUERY_NAME
    │               VIEW_NAME.qview.xml  docs, example
    ├───reports                          docs
    │   └───schemas
    │       └───SCHEMA_NAME
    │           └───QUERY_NAME
    │                   MyRScript.r           example
    │                   MyRScript.report.xml  docs, example
    │                   MyRScript.rhtml       docs
    │                   MyRScript.rmd         docs
    ├───schemas                          docs
    │   │   SCHEMA_NAME.xml              example
    │   └───dbscripts
    │       ├───postgresql
    │       │       SCHEMA_NAME-X.XX-Y.YY.sql  example
    │       └───sqlserver
    │               SCHEMA_NAME-X.XX-Y.YY.sql  example
    ├───scripts                          docs, example
    ├───views                            docs
    │       VIEW_NAME.html               example
    │       VIEW_NAME.view.xml           example
    │       TITLE.webpart.xml            example
    └───web                              docs
        └───MODULE_NAME
                SomeImage.jpg
                somelib.lib.xml
                SomeScript.js            example

Module Layout - As Source

If you are developing your module inside the LabKey Server source, use the following layout. The standard build targets will automatically assemble the directories for deployment. In particular, the standard build target makes the following changes to the module layout:

  • Moves the contents of /resources one level up into /mymodule.
  • Uses module.properties to create the file config/module.xml via string replacement into an XML template file.
  • Compiles the Java /src dir into the /lib directory.
mymodule
│   module.properties
├───resources
│   ├───assay
│   ├───etls
│   ├───folderTypes
│   ├───queries
│   ├───reports
│   ├───schemas
│   ├───views
│   └───web
└───src (for modules with Java code)

Module Layout - As Deployed

The standard build targets transform the source directory structure above into the form below for deployment to Tomcat.

mymodule
├───config
│   └───module.xml
├───lib (holds compiled Java code)
├───assay
├───etls
├───folderTypes
├───queries
├───reports
├───schemas
├───views
└───web

Related Topics

Example Modules

Use the modules listed below as examples for developing your own modules.

To acquire the source code for these modules, enlist in the LabKey Server open source project: Access the Source Code

Module Location: Description / Highlights

  • Modules on GitHub: The core modules for LabKey Server are located here, containing the core server action code (written in Java).
  • The test module runs basic tests on the server. Contains many basic examples to clone from.

Other Resources

Modules: Queries, Views and Reports

This tutorial shows you how to create a variety of module-based reports, queries, and views, and how to surface them in the LabKey Server user interface. The module makes use of multiple resources, including: R reports, SQL queries, SQL query views, HTML views, and web parts.

The Scenario

Suppose that you want to present a series of R reports, database queries, and HTML views. The end-goal is to deliver these to a client as a unit that can be easily added to their existing LabKey Server installation. Once added, end-users should not be able to modify the queries or reports, ensuring that they keep running as expected. The steps below show how to fulfill these requirements using a file-based module.


Use the Module on a Production Server

This tutorial is designed for developers who build LabKey Server from source. But even if you are not a developer and do not build the server from source, you can get a sense of how modules work by installing the module that is the final product of this tutorial. To install the module, download reportDemo.module and copy the file into the directory LABKEY_HOME\externalModules (on a Windows machine this directory is typically located at C:\Program Files (x86)\LabKey Server\externalModules). Notice that the server will detect the .module file and unzip it, creating a directory called reportDemo, which is deployed to the server. Look inside reportDemo to see the resources that have been deployed to the server. Read through the steps of the tutorial to see how these resources are surfaced in the user interface.

First Step

Module Directories Setup

Here we install sample data to work with and we create the skeleton of our module, the three empty directories:
  • queries - Holds SQL queries and views.
  • reports - Holds R reports.
  • views - Holds user interface files.

Set Up a Dev Machine

Complete the topics below. This will set up a machine that can build LabKey Server (and the proteomics tools) from source.

Install Sample Data

Create Directories

  • Go to the externalModules/ directory, and create the following directory structure and standard file:
reportDemo
│   build.gradle
│   module.properties
└───resources
    ├───queries
    ├───reports
    └───views

Add the following contents to module.properties:

ModuleClass: org.labkey.api.module.SimpleModule
Name: ReportDemo

Add the following contents to build.gradle:

apply plugin: 'java'
apply plugin: 'org.labkey.fileModule'
  • In the settings.gradle file in the root of your enlistment, add the following line:
include ":externalModules:reportDemo"

Build the Module

  • In a command shell, go to the module directory.
  • Call 'gradlew deployModule' to build the module.
  • Restart the server to deploy the module.

Enable Your Module in a Folder

To use a module, enable it in a folder.

  • Go to the LabKey Server folder where you want add the module functionality.
  • Select (Admin) > Folder > Management.
  • Click the Folder Type tab.
  • Under the list of Modules click on the check box next to ReportDemo to activate it in the current folder.

Start Over | Next Step

Module Query Views

The queries directory holds SQL queries, and ways to surface those queries in the LabKey Server UI. The following file types are supported:
  • SQL queries on the database (.sql files)
  • Metadata on the above queries (.query.xml files)
  • Named views on pre-existing queries (.qview.xml files)
  • Trigger scripts attached to a query (.js files) - these scripts run whenever there is an event (insert, update, etc.) on the underlying table.
In this step you will define a "query view" on the Peptides table, in particular on the default query of the Peptides table, a built-in query on the server. Notice that the target schema and query are determined by the directories the view rests inside -- a view located at "ms2/Peptides/SomeView.qview.xml" means "a view on the Peptides query in the ms2 schema".

Additionally, if you wish to create a default view that overrides the system-generated one, name the file exactly ".qview.xml", with no base name. If you instead use default.qview.xml, this will create another view named "default", but it will not override the existing default.
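The path-to-target convention described above can be sketched as a small helper. This is a hypothetical illustration, not part of LabKey:

```python
from pathlib import PurePosixPath

# Sketch: derive the target schema, query, and view name from a
# .qview.xml file's location under queries/, per the convention
# queries/<schema>/<query>/<view>.qview.xml.
def qview_target(relpath):
    parts = PurePosixPath(relpath).parts
    if parts[0] != "queries" or not parts[-1].endswith(".qview.xml"):
        raise ValueError("not a module query view path: " + relpath)
    schema_name, query_name = parts[1], parts[2]
    view_name = parts[-1][:-len(".qview.xml")]  # "" means default view override
    return schema_name, query_name, view_name
```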

Create a Query View

  • Add two directories (ms2 and Peptides) and a file (High Prob Matches.qview.xml), as shown below.
  • The directory structure tells LabKey Server that the view is in the "ms2" schema and on the "Peptides" table.

reportDemo
└───resources
    ├───queries
    │   └───ms2
    │       └───Peptides
    │               High Prob Matches.qview.xml
    ├───reports
    └───views

View Source

The view will display peptides with high Peptide Prophet scores (greater than or equal to 0.9).

  • Save High Prob Matches.qview.xml with the following content:
<customView xmlns="">
    <columns>
        <column name="Scan"/>
        <column name="Charge"/>
        <column name="PeptideProphet"/>
        <column name="Fraction/FractionName"/>
    </columns>
    <filters>
        <filter column="PeptideProphet" operator="gte" value="0.9"/>
    </filters>
    <sorts>
        <sort column="PeptideProphet" descending="true"/>
    </sorts>
</customView>

  • The root element of the qview.xml file must be <customView> and you should use the namespace indicated.
  • <columns> specifies which columns are displayed. Lookup columns can be included (e.g., "Fraction/FractionName").
  • <filters> may contain any number of filter definitions. (In this example, we filter for rows where PeptideProphet >= 0.9.) (docs: <filter>)
  • Sorts in the <sorts> section are applied in the order they appear. In this example, we sort descending by the PeptideProphet column. To sort ascending, simply omit the descending attribute.

See the View

To see the view on the ms2.Peptides table:

  • Build and restart the server.
  • Go to the Peptides table and click (Grid Views) -- the view High Prob Matches has been added to the list.
    • Select (Admin) > Developer Links > Schema Browser.
    • Open ms2, scroll down to Peptides.
    • Select (Grid Views) > High Prob Matches.

Previous Step | Next Step

Module SQL Queries

Here we add more resources to the queries directory, adding SQL queries and associated metadata files to provide additional properties.

If supplied, the metadata file should have the same name as the .sql file, but with a ".query.xml" extension (e.g., PeptideCounts.query.xml). (docs: query.xsd)

Below we will create two SQL queries in the ms2 schema.

  • Add two .sql files in the queries/ms2 directory, as follows:

reportDemo
└───resources
    ├───queries
    │   └───ms2
    │       │   PeptideCounts.sql
    │       │   PeptidesWithCounts.sql
    │       └───Peptides
    │               High Prob Matches.qview.xml
    ├───reports
    └───views

Add the following contents to the files:


PeptideCounts.sql:

SELECT
    COUNT(Peptides.TrimmedPeptide) AS UniqueCount,
    Peptides.TrimmedPeptide,
    Peptides.Fraction.Run AS Run
FROM Peptides
WHERE Peptides.PeptideProphet >= 0.9
GROUP BY Peptides.TrimmedPeptide, Peptides.Fraction.Run

PeptidesWithCounts.sql:

SELECT
    pc.UniqueCount,
    pc.TrimmedPeptide,
    pc.Run,
    p.PeptideProphet,
    p.FractionalDeltaMass
FROM PeptideCounts pc
INNER JOIN Peptides p
    ON (p.Fraction.Run = pc.Run AND pc.TrimmedPeptide = p.TrimmedPeptide)
WHERE pc.UniqueCount > 1

Note that the .sql files may contain spaces in their names.

See the SQL Queries

  • Build and restart the server.
  • To view your SQL queries, go to the schema browser at (Admin) > Developer Links > Schema Browser.
  • On the left side, open the nodes ms2 > user-defined queries > PeptideCounts.

Optionally, you can add metadata to these queries to enhance them. See Modules: Query Metadata.

Previous Step | Next Step

Module R Reports

The reports directory holds different kinds of reports and associated configuration files which determine how the reports are surfaced in the user interface.

Below we'll make an R report script that is associated with the PeptidesWithCounts query (created in the previous step).

  • In the reports/ directory, create the following subdirectories: schemas/ms2/PeptidesWithCounts, and a file named "Histogram.r", as shown below:
reportDemo
│
└───resources
    ├───queries
    │   ├───ms2
    │   │       PeptideCounts.sql
    │   │       PeptidesWithCounts.sql
    │   │
    │   └───Peptides
    │           High Prob Matches.qview.xml
    │
    ├───reports
    │   └───schemas
    │       └───ms2
    │           └───PeptidesWithCounts
    │                   Histogram.r
    │
    └───views

  • Open the Histogram.r file, enter the following script, and save the file. (Note that .r files may have spaces in their names.)

png(filename="${imgout:histogram.png}")
hist(labkey.data$fractionaldeltamass,
     xlab="Fractional Delta Mass",
     ylab="Count",
     main=NULL,
     col = "light blue",
     border = "dark blue")
dev.off()

Report Metadata

Optionally, you can add associated metadata about the report. See Modules: Report Metadata.

Test your SQL Query and R Report

  • Go to the Query module's home page at (Admin) > Go to Module > Query. Note that the home page of the Query module is the Query Browser.
  • Open the ms2 node, and see your two new queries in the user-defined queries section.
  • Click on PeptidesWithCounts and then View Data to run the query and view the results.
  • While viewing the results, you can run your R report by selecting (Reports) > Histogram.

Previous Step | Next Step

Module HTML and Web Parts

The views directory holds user interface elements, like HTML pages, and associated web parts.

Since getting to the Query module's start page is not obvious for most users, we will provide an HTML view for a direct link to the query results. You can do this in a wiki page, but that must be created on the server, and our goal is to provide everything in the module itself. Instead we will create an HTML view and an associated web part.

Add an HTML Page

Under the views/ directory, create a new file named reportdemo.html, and enter the following HTML:

<a id="pep-report-link"
   href="<%=contextPath%>/query<%=containerPath%>/executeQuery.view?schemaName=ms2&query.queryName=PeptidesWithCounts">
   Peptides With Counts Report</a>

Note that .html view files must not contain spaces in the file names. The view servlet expects that action names do not contain spaces.

Token Replacement: contextPath and containerPath

Note the use of the <%=contextPath%> and <%=containerPath%> tokens in the URL's href attribute. Since the href in this case needs to refer to an action in another controller, we can't use a simple relative URL, as it would refer to another action in the same controller. Instead, these tokens will be replaced with the server's context path and the current container path respectively.

Token replacement/expansion is applied to html files before they are rendered in the browser. Available tokens include:

  • contextPath - The token "<%=contextPath%>" will expand to the context root of the labkey server (e.g. "/labkey")
  • containerPath - The token "<%=containerPath%>" will expand to the current container (e.g., "/MyProject/MyFolder"). Note that the containerPath token always begins with a slash, so you don't need to put a slash between the controller name and this token. If you do, it will still work, as the server automatically ignores double-slashes.
  • webpartContext - The token <%=webpartContext%> is replaced by a JSON object of the form:

{
    wrapperDivId: <String: the unique generated div id for the webpart>,
    id: <Number: webpart rowid>,
    properties: <JSON: additional properties set on the webpart>
}

Web resources such as images, JavaScript, and HTML files can be placed in the /web directory at the root of the module. To reference an image from one of the views pages, use a URL such as:

<img src="<%=contextPath%>/my-image.png" />

Define a View Wrapper

Create a view metadata file with the same base name as the HTML file, "reportdemo", but with an extension of ".view.xml". In this case, the file should be called reportdemo.view.xml, and it should contain the following:

<view xmlns="http://labkey.org/data/xml/view"
      frame="none" title="Report Demo">
</view>

Define a Web Part

To make this view visible inside a web part, create our final file, the web part definition. Create a file in the views/ directory called reportdemo.webpart.xml and enter the following content:

<webpart xmlns="http://labkey.org/data/xml/webpart" title="Report Demo">
    <view name="reportdemo"/>
</webpart>

After creating this file, you should now be able to refresh the portal page in your folder and see the "Report Demo" web part in the list of available web parts. Add it to the page, and it should display the contents of the reportdemo.html view, which contains links to take users directly to your module-defined queries and reports.

Your directory structure should now look like this:

reportDemo
│
└───resources
    ├───queries
    │   ├───ms2
    │   │       PeptideCounts.sql
    │   │       PeptidesWithCounts.sql
    │   │
    │   └───Peptides
    │           High Prob Matches.qview.xml
    │
    ├───reports
    │   └───schemas
    │       └───ms2
    │           └───PeptidesWithCounts
    │                   Histogram.r
    │
    └───views
            reportdemo.html
            reportdemo.view.xml
            reportdemo.webpart.xml

Set Required Permissions

You might also want to require specific permissions to see this view. That is easily added to the reportdemo.view.xml file like this:

<view xmlns="http://labkey.org/data/xml/view" title="Report Demo">
    <permission name="read"/>
</view>

You may add other permission elements, and they will all be combined together, requiring all permissions listed. If all you want to do is require that the user is signed in, you can use the value of "login" in the name attribute.

The XSD for this meta-data file is view.xsd in the schemas/ directory of the project. The LabKey XML Schema Reference provides an easy way to navigate the documentation for view.xsd.

Related Topics

Previous Step

Modules: JavaScript Libraries

To use a JavaScript library in your module, do the following:
  • Acquire the library .js file you want to use.
  • In your module resources directory, create a subdirectory named "web".
  • Inside "web", create a subdirectory with the same name as your module. For example, if your module is named 'helloworld', create the following directory structure:
helloworld
└───resources
    └───web
        └───helloworld

  • Copy the library .js file into your directory structure. For example, if you wish to use a JQuery library, place the library file as shown below:
helloworld
└───resources
    └───web
        └───helloworld
                jquery-2.2.3.min.js

  • For any HTML pages that use the library, create a .view.xml file, adding a "dependencies" section.
  • For example, if you have a page called helloworld.html, then create a file named helloworld.view.xml next to it:
helloworld
└───resources
    ├───views
    │       helloworld.html
    │       helloworld.view.xml
    └───web
        └───helloworld
                jquery-2.2.3.min.js

  • Finally add the following "dependencies" section to the .view.xml file:
<view xmlns="http://labkey.org/data/xml/view" title="Hello, World!">
    <dependencies>
        <dependency path="helloworld/jquery-2.2.3.min.js"></dependency>
    </dependencies>
</view>

Note: if you declare dependencies explicitly in the .view.xml file, you don't need to use LABKEY.requiresScript on the HTML page.

Remote Dependencies

In some cases, you can declare your dependency using a URL that points directly to the remote library, instead of copying the library file and distributing it with your module:

<dependency path=""></dependency>

Related Topics

Modules: Assay Types

Module-based assays allow a developer to create a new assay type with a custom schema and custom views without becoming a Java developer. A module-based assay type consists of an assay config file, a set of domain descriptions, and view html files. The assay is added to a module by placing it in an assay directory at the top-level of the module. When the module is enabled in a folder, assay designs can be created based on the type defined in the module. For information on the applicable API, see: LABKEY.Experiment#saveBatch.


Examples: Module-Based Assays

There are a handful of module-based assays in the LabKey SVN tree. You can find the modules in <LABKEY_HOME>/server/customModules. Examples include:

  • <LABKEY_HOME>/server/customModules/exampleassay/resources/assay
  • <LABKEY_HOME>/server/customModules/iaviElisa/elisa/assay/elisa
  • <LABKEY_HOME>/server/customModules/idri/resources/assay/particleSize

File Structure

The assay consists of an assay config file, a set of domain descriptions, and view html files. The assay is added to a module by placing it in an assay directory at the top-level of the module. The assay has the following file structure:

<module-name>/
    assay/
        ASSAY_NAME/
            config.xml
            domains/
                batch.xml
                run.xml
                result.xml
            views/
                begin.html
                upload.html
                batches.html
                batch.html
                runs.html
                run.html
                results.html
                result.html
            queries/
                Batches.query.xml
                Run.query.xml
                Data.query.xml
                CUSTOM_ASSAY_QUERY.query.xml
                CUSTOM_ASSAY_QUERY.sql (a query that shows up in the schema for all assay designs of this provider type)
                CUSTOM_ASSAY_QUERY/
                    CUSTOM_VIEW.qview.xml
            scripts/
                script1.R

The only required part of the assay is the <assay-name> directory. The config.xml, domain files, and view files are all optional.

This diagram shows the relationship between the pages. The details link will only appear if the corresponding details html view is available.

How to Specify an Assay "Begin" Page

Module-based assays can be designed to jump to a "begin" page instead of a "runs" page. If an assay has a begin.html in the assay/<name>/views/ directory, users are directed to this page instead of the runs page when they click on the name of the assay in the assay list.

Assay Custom Domains

A domain is a collection of fields under a data type. Each data type (e.g., Assays, Lists, Datasets, etc.) provides specialized handling for the domains it defines. Assays define multiple domains (batch, run, etc.), while Lists and Datasets define only one domain each.

An assay module can define a custom domain to replace LabKey's built-in default assay domains, by adding a schema definition in the domains/ directory. For example:


The name of the assay is taken from the <assay-name> directory. The <domain-name>.xml file contains the domain definition and conforms to the <domain> element from assayProvider.xsd, which is in turn a DomainDescriptorType from the expTypes.xsd XML schema. There are three built-in domains for assays: "batch", "run", and "result". The following result domain replaces the built-in result domain for assays:


<ap:domain xmlns:exp="http://cpas.fhcrc.org/exp/xml"
           xmlns:ap="http://labkey.org/study/assay/xml">
    <exp:Description>This is my data domain.</exp:Description>
    <exp:PropertyDescriptor>
        <exp:Name>SampleId</exp:Name>
        <exp:Description>The Sample Id</exp:Description>
        <exp:Required>true</exp:Required>
        <exp:Label>Sample Id</exp:Label>
    </exp:PropertyDescriptor>
</ap:domain>

To deploy the module, the assay directory is zipped up as a <module-name>.module file and copied to the LabKey server's modules directory.

When you create a new assay design for that assay type, it will use the fields defined in the XML domain as a template for the corresponding domain. Changes to the domains in the XML files will not affect existing assay designs that have already been created.

Assay Custom Details View

Add a Custom Details View

Suppose you want to add a [details] link to each row of an assay run table, that takes you to a custom details view for that row. You can add new views to the module-based assay by adding html files in the views/ directory, for example:


The overall page template will include JavaScript objects as context so that they're available within the view, avoiding an extra client API request to fetch them from the server. For example, the result.html page can access the assay definition and result data as LABKEY.page.assay and LABKEY.page.result respectively. Here is an example custom details view named result.html:

1 <table>
2 <tr>
3 <td class='labkey-form-label'>Sample Id</td>
4 <td><div id='SampleId_div'>???</div></td>
5 </tr>
6 <tr>
7 <td class='labkey-form-label'>Time Point</td>
8 <td><div id='TimePoint_div'>???</div></td>
9 </tr>
10 <tr>
11 <td class='labkey-form-label'>Double Data</td>
12 <td><div id='DoubleData_div'>???</div></td>
13 </tr>
14 </table>
16 <script type="text/javascript">
17 function setValue(row, property)
18 {
19 var div = Ext.get(property + "_div");
20 var value = row[property];
21 if (!value)
22 value = "<none>";
23 div.dom.innerHTML = value;
24 }
26 if (LABKEY.page.result)
27 {
28 var row = LABKEY.page.result;
29 setValue(row, "SampleId");
30 setValue(row, "TimePoint");
31 setValue(row, "DoubleData");
32 }
33 </script>

Note on line 28 the details view is accessing the result data from LABKEY.page.result. See Example Assay JavaScript Objects for a description of the LABKEY.page.assay and LABKEY.page.result objects.

Add a custom view for a run

Same as for the custom details page for the row data, except the view file name is run.html and the run data will be available as the LABKEY.page.run variable. See Example Assay JavaScript Objects for a description of the LABKEY.page.run object.

Add a custom view for a batch

Same as for the custom details page for the row data, except the view file name is batch.html and the batch data will be available as the LABKEY.page.batch variable. See Example Assay JavaScript Objects for a description of the LABKEY.page.batch object.

Related Topics

Loading Custom Views

Module-based custom views should be loaded based on their association with the target query name. In past releases, associations with the table title were also supported; the table title is found in the query's metadata.xml file. Using the table title technique to bind a custom view to a query is obsolete, and searching for these table titles has a significant negative effect on performance when rendering a grid or dataview web part. Support for this legacy technique will be removed in LabKey Server version 17.3.

Disable Loading by Table Title

In LabKey Server v17.2, an administrator can find and fix these table title references by proactively disabling the "alwaysUseTitlesForLoadingCustomViews" flag using an experimental feature. If you want to improve performance, removing reliance on this feature will help.

  • Select (Admin) > Site > Admin Console.
  • Click the Admin Console Links tab.
  • Under Configuration, click Experimental Features.
  • Click Enable for Remove support for loading of Custom Views by Table Title.

Custom views loading by table title will now generate a warning enabling you to find and fix them.

Loading Custom Views

The correct way to attach a custom view to a table is to bind via the query name. For instance, if you have a query in the elispot module called QueryName, which includes the table name definition as TableTitle, and your custom view is called MyView, you would place the xml file here:


Fixing Legacy Views

With the "alwaysUseTitlesForLoadingCustomViews" flag set, you would also have been able to load the above example view by binding it to the table name, i.e.:


In version 17.3, this flag will be removed, so to fix legacy views and remove reliance on this flag, use the experimental feature described above to disable it in version 17.2 and modify any module based custom views to directly reference the query name.

Example Assay JavaScript Objects

These JavaScript objects are automatically injected into the rendered page (example page: result.html), to save developers from needing to make a separate JavaScript client API request via AJAX to separately fetch them from the server.

The assay definition is available as LABKEY.page.assay for all of the html views. It is a JavaScript object of type LABKEY.Assay.AssayDesign:

LABKEY.page.assay = {
    "id": 4,
    "projectLevel": true,
    "description": null,
    "name": <assay name>,
    // domain objects: one each for batch, run, and result.
    "domains": {
        // array of domain property objects for the batch domain
        "<assay name> Batch Fields": [
            {
                "typeName": "String",
                "formatString": null,
                "description": null,
                "name": "ParticipantVisitResolver",
                "label": "Participant Visit Resolver",
                "required": true,
                "typeURI": ""
            },
            {
                "typeName": "String",
                "formatString": null,
                "lookupQuery": "Study",
                "lookupContainer": null,
                "description": null,
                "name": "TargetStudy",
                "label": "Target Study",
                "required": false,
                "lookupSchema": "study",
                "typeURI": ""
            }
        ],
        // array of domain property objects for the run domain
        "<assay name> Run Fields": [{
            "typeName": "Double",
            "formatString": null,
            "description": null,
            "name": "DoubleRun",
            "label": null,
            "required": false,
            "typeURI": ""
        }],
        // array of domain property objects for the result domain
        "<assay name> Result Fields": [
            {
                "typeName": "String",
                "formatString": null,
                "description": "The Sample Id",
                "name": "SampleId",
                "label": "Sample Id",
                "required": true,
                "typeURI": ""
            },
            {
                "typeName": "DateTime",
                "formatString": null,
                "description": null,
                "name": "TimePoint",
                "label": null,
                "required": true,
                "typeURI": ""
            },
            {
                "typeName": "Double",
                "formatString": null,
                "description": null,
                "name": "DoubleData",
                "label": null,
                "required": false,
                "typeURI": ""
            }
        ]
    },
    "type": "Simple"
};

The batch object is available as LABKEY.page.batch on the upload.html and batch.html pages. The JavaScript object is an instance of LABKEY.Exp.RunGroup and is shaped like:

LABKEY.page.batch = new LABKEY.Exp.RunGroup({
    "id": 8,
    "createdBy": <user name>,
    "created": "8 Apr 2009 12:53:46 -0700",
    "modifiedBy": <user name>,
    "name": <name of the batch object>,
    "runs": [
        // array of LABKEY.Exp.Run objects in the batch. See next section.
    ],
    // map of batch properties
    "properties": {
        "ParticipantVisitResolver": null,
        "TargetStudy": null
    },
    "comment": null,
    "modified": "8 Apr 2009 12:53:46 -0700",
    "lsid": ""
});

The run detail object is available as LABKEY.page.run on the run.html pages. The JavaScript object is an instance of LABKEY.Exp.Run and is shaped like:

LABKEY.page.run = new LABKEY.Exp.Run({
    "id": 4,
    // array of LABKEY.Exp.Data objects added to the run
    "dataInputs": [{
        "id": 4,
        "created": "8 Apr 2009 12:53:46 -0700",
        "name": "run01.tsv",
        "dataFileURL": "file:/C:/Temp/assaydata/run01.tsv",
        "modified": null,
        "lsid": <filled in by the server>
    }],
    // array of objects, one for each row in the result domain
    "dataRows": [
        {
            "DoubleData": 3.2,
            "SampleId": "Monkey 1",
            "TimePoint": "1 Nov 2008 11:22:33 -0700"
        },
        {
            "DoubleData": 2.2,
            "SampleId": "Monkey 2",
            "TimePoint": "1 Nov 2008 14:00:01 -0700"
        },
        {
            "DoubleData": 1.2,
            "SampleId": "Monkey 3",
            "TimePoint": "1 Nov 2008 14:00:01 -0700"
        },
        {
            "DoubleData": 1.2,
            "SampleId": "Monkey 4",
            "TimePoint": "1 Nov 2008 00:00:00 -0700"
        }
    ],
    "createdBy": <user name>,
    "created": "8 Apr 2009 12:53:47 -0700",
    "modifiedBy": <user name>,
    "name": <name of the run>,
    // map of run properties
    "properties": {"DoubleRun": null},
    "comment": null,
    "modified": "8 Apr 2009 12:53:47 -0700",
    "lsid": ""
});

The result detail object is available as LABKEY.page.result on the result.html page. The JavaScript object is a map for a single row and is shaped like:

LABKEY.page.result = {
    "DoubleData": 3.2,
    "SampleId": "Monkey 1",
    "TimePoint": "1 Nov 2008 11:22:33 -0700"
};

Assay Query Metadata

Query Metadata for Assay Tables

You can associate query metadata with an individual assay design, or all assay designs that are based on the same type of assay (e.g., "NAb" or "Viability").

Example. Assay table names are based upon the name of the assay design. For example, consider an assay design named "Example" that is based on the "Viability" assay type. This design would be associated with three tables in the schema explorer: "Example Batches", "Example Runs", and "Example Data."

Associate metadata with a single assay design. To attach query metadata to the "Example Data" table, you would normally create a /queries/assay/Example Data.query.xml metadata file. This would work well for the "Example Data" table itself. However, this method would not allow you to re-use this metadata file for a new assay design that is also based on the same assay type ("Viability" in this case).

Associate metadata with all assay designs based on a particular assay type. To permit re-use of the metadata, you need to create a query metadata file whose name is based upon the assay type and table name. To continue our example, you would create a query metadata file called /assay/Viability/queries/Data.query.xml to attach query metadata to all data tables based on the Viability-type assay.

As with other query metadata in module files, the module must be activated (in other words, the appropriate checkbox must be checked) in the folder's settings.

See Modules: Queries, Views and Reports and Modules: Query Metadata for more information on query metadata.

Customize Batch Save Behavior

You can enable file-based assays to customize their own Experiment.saveBatch behavior by writing Java code that implements the AssaySaveHandler interface. This allows you to customize saving your batch without having to convert your existing file-based assay UI code, queries, views, etc. into a Java-based assay.

The AssaySaveHandler interface enables file-based assays to extend the functionality of the SaveAssayBatch action with Java code. A file-based assay can provide an implementation of this interface by creating a Java-based module and then putting the class under the module's src directory. This class can then be referenced by name in the <saveHandler/> element in the assay's config file. For example, an entry might look like:


To implement this functionality:

  • Create the skeleton framework for a Java module. This consists of a controller class, manager, etc. See Tutorial: Hello World Java Module for details on autogenerating the boiler plate Java code.
  • Add an assay directory underneath the Java src directory that corresponds to the file-based assay you want to extend. For example: myModule/src/org.labkey.mymodule/assay/tracking
  • Implement the AssaySaveHandler interface. You can choose to either implement the interface from scratch or extend default behavior by having your class inherit from the DefaultAssaySaveHandler class. If you want complete control over the JSON format of the experiment data you want to save, you may choose to implement the AssaySaveHandler interface entirely. If you want to follow the pre-defined LABKEY experiment JSON format, then you can inherit from the DefaultAssaySaveHandler class and only override the specific piece you want to customize. For example, you may want custom code to run when a specific property is saved. (See below for more implementation details.)
  • Reference your class in the assay's config.xml file. For example, notice the <ap:saveHandler/> entry below. If a non-fully-qualified name is used (as below) then LabKey Server will attempt to find this class under org.labkey.[module name].assay.[assay name].[save handler name].
<ap:provider xmlns:ap="http://labkey.org/study/assay/xml">
    <ap:name>Flask Tracking</ap:name>
    <ap:description>
        Enables entry of a set of initial samples and then tracks
        their progress over time via a series of daily measurements.
    </ap:description>
    <ap:saveHandler>TrackingSaveHandler</ap:saveHandler>
</ap:provider>
  • The interface methods are invoked when the user chooses to import data into the assay or otherwise calls the SaveAssayBatch action. This is usually invoked by the Experiment.saveBatch JavaScript API. On the server, the file-based assay provider will look for an AssaySaveHandler specified in the config.xml and invoke its functions. If no AssaySaveHandler is specified then the DefaultAssaySaveHandler implementation is used.

SaveAssayBatch Details

The SaveAssayBatch function creates a new instance of the SaveHandler for each request. SaveAssayBatch will dispatch to the methods of this interface according to the format of the JSON Experiment Batch (or run group) sent to it by the client. If a client chooses to implement this interface directly then the order of method calls will be:

  • beforeSave
  • handleBatch
  • afterSave
A client can also inherit from DefaultAssaySaveHandler class to get a default implementation. In this case, the default handler does a deep walk through all the runs in a batch, inputs, outputs, materials, and properties. The sequence of calls for DefaultAssaySaveHandler are:
  • beforeSave
  • handleBatch
  • handleProperties (for the batch)
  • handleRun (for each run)
  • handleProperties (for the run)
  • handleProtocolApplications
  • handleData (for each data output)
  • handleProperties (for the data)
  • handleMaterial (for each input material)
  • handleProperties (for the material)
  • handleMaterial (for each output material)
  • handleProperties (for the material)
  • afterSave
Because LabKey Server creates a new instance of the specified SaveHandler for each request, your implementation can preserve instance state across interface method calls within a single request but not across requests.

Related Topics

SQL Scripts for Module-Based Assays

How do you add supporting tables to your assay type? For example, suppose you want to add a table of Reagents, which your assay domain refers to via a lookup/foreign key?

Some options:

1) Manually import a list archive into the target folder.

2) Add the tables via SQL scripts included in the module. To insert data: use SQL DML scripts or create an initialize.html view that populates the table using LABKEY.Query.insertRows().

To add the supporting table using SQL scripts, add a schemas directory, as a sibling to the assay directory, as shown below.

<module-name>
│
└───resources
    ├───assay
    │   └───example
    │       │   config.xml
    │       │
    │       ├───domains
    │       │       batch.xml
    │       │       result.xml
    │       │       run.xml
    │       │
    │       └───views
    │               upload.html
    │
    └───schemas
        └───dbscripts
            ├───postgresql
            │       myreagents-0.00-1.00.sql
            └───sqlserver
                    myreagents-0.00-1.00.sql



To support only one database, include a script only for that database, and configure your module properties accordingly -- see "SupportedDatabases" in Module Properties Reference.

LabKey Server does not currently support adding assay types or lists via SQL scripts, but you can create a new schema to hold the table, for example, the following script creates a new schema called "myreagents" (on PostgreSQL):


CREATE SCHEMA myreagents;

CREATE TABLE myreagents.Reagents
(
    RowId SERIAL NOT NULL,
    ReagentName VARCHAR(30) NOT NULL
);

ALTER TABLE ONLY myreagents.Reagents
    ADD CONSTRAINT reagents_pkey PRIMARY KEY (RowId);

INSERT INTO myreagents.Reagents (ReagentName) VALUES ('Acetic Acid');
INSERT INTO myreagents.Reagents (ReagentName) VALUES ('Baeyers Reagent');
INSERT INTO myreagents.Reagents (ReagentName) VALUES ('Carbon Disulfide');

Update the assay domain, adding a lookup/foreign key property to the Reagents table:


If you'd like to allow admins to add/remove fields from the table, you can add an LSID column to your table and make it a foreign key to the exp.Object.ObjectUri column in the schema.xml file. This will allow you to define a domain for the table much like a list. The domain is per-folder so different containers may have different sets of fields.

For example, see customModules/reagent/resources/schemas/reagent.xml. It wires up the LSID lookup to the exp.Object.ObjectUri column:

<ns:column columnName="Lsid">
    <ns:fk>
        <ns:fkDbSchema>exp</ns:fkDbSchema>
        <ns:fkTable>Object</ns:fkTable>
        <ns:fkColumnName>ObjectUri</ns:fkColumnName>
    </ns:fk>
</ns:column>

...and adds an "Edit Fields" button that opens the domain editor.

function editDomain(queryName)
{
    var url = LABKEY.ActionURL.buildURL("property", "editDomain", null, {
        domainKind: "ExtensibleTable",
        createOrEdit: true,
        schemaName: "myreagents",
        queryName: queryName
    });
    window.location = url;
}

Transformation Scripts


Transformation scripts are attached to assay designs and used to clean and validate assay data, solving a wide variety of challenges. For example:

  • Instrument-generated files often contain header lines before the main data table, denoted by a leading #, !, or other symbol. These lines may contain useful metadata about the protocol, reagents, or samples tested which should either be incorporated into the data import or skipped over to find the main data table.
  • File or data formats might be optimized for display, not for efficient storage and retrieval. Transformation scripts can clean, validate, and reformat imported data.
  • During import, display values from a lookup column may need to be mapped to foreign key values for storage.
  • You may need to fill in additional quality control values with imported assay data, or calculate contents of a new column from columns in the imported data.
Transformation scripts can inspect an uploaded file and change the data or populate empty columns in the uploaded data. They can also modify run- and batch-level properties. If validation only needs to be done for individual field values, the simpler mechanism is to use a validator within the field properties for the column.

Any scripting language that can be invoked via the command line and has the ability to read/write files is supported for transformation scripts, including:

  • Perl
  • Python
  • R
  • Java
Transformation scripts (which are always attached to assay designs) are different from trigger scripts, which are attached to a dataset (database table or query).

Use Transformation Scripts

Each assay design can be associated with one or more validation or transformation scripts, which are run in the order specified. The script file extension (.r, .pl, etc.) identifies the script engine that will be used to run the transform script. For example, a script with a .pl extension will be run with the Perl scripting engine. Before you can run validation or transformation scripts, you must configure the necessary Scripting Engines.

This section describes the process of using a transformation script that has already been developed for your assay type. An example workflow for how to create an assay transformation script in perl can be found in Example Workflow: Develop a Transformation Script (perl).

Identifying the Path to the Script File

It is convenient to upload the script file to the File Repository in the same folder as the assay design. The absolute path to the script file can be determined by concatenating the file root for the folder (available at (Admin) > Folder > Management > Files tab) plus the path to the script file in the Files web part (for example, "scripts\LoadData.R"). In the file path, LabKey Server accepts either backslashes (the default Windows format) or forward slashes.

When working on your own developer workstation, you can put the script file wherever you like, but putting it within the File Repository will make it easier to deploy to a production server. It also makes iterative development against a remote server easier, since you can use a Web-DAV enabled file editor to directly edit the same script file that the server is calling.

When you decide where to locate your transformation script file, consider that it is convenient to keep script files in the same location as the assay. You will enter the full path when you configure your assay design to run this script. You can also use the built-in substitution token "${srcDirectory}", which the server automatically expands to the directory where the called script file (the one identified in the Transform Scripts field) is located.

Accessing and Using the Run Properties File

The primary mechanism for communication between the LabKey assay framework and the transform script is the run properties file. Again, a substitution token, ${runInfo}, tells the script code where to find this file. The script file should contain a line like:

run.props = labkey.transform.readRunPropertiesFile("${runInfo}");

The run properties file contains three categories of properties:

1. Batch and run properties as defined by the user when creating an assay instance. These properties are of the format: <property name> <property value> <java data type>

for example,

gDarkStdDev 1.98223 java.lang.Double

When the transform script is called these properties will contain any values that the user has typed into the “Batch Properties” and “Run Properties” sections of the upload form. The transform script can assign or modify these properties based on calculations or by reading them from the raw data file from the instrument. The script must then write the modified properties file to the location specified by the transformedRunPropertiesFile property.

2. Context properties of the assay such as assayName, runComments, and containerPath. These are recorded in the same format as the user-defined batch and run properties, but they cannot be overwritten by the script.

3. Paths to input and output files. These are absolute paths that the script reads from or writes to. They are in a <property name> <property value> format without property types. The paths currently used are:

  • a. runDataUploadedFile: the raw data file that was selected by the user and uploaded to the server as part of an import process. This can be an Excel file, a tab-separated text file, or a comma-separated text file.
  • b. runDataFile: the imported data file after the assay framework has attempted to convert the file to .tsv format and match its columns to the assay data result set definition. The path will point to a subfolder below the script file directory (a path containing a segment like AssayId_22\42). The AssayId_22\42 part of the directory path serves to separate the temporary files from multiple executions by multiple scripts in the same folder.
  • c. AssayRunTSVData: This file path is where the result of the transform script will be written. It will point to a unique file name in an “assaydata” directory that the framework creates at the root of the files tree. NOTE: this property is written on the same line as the runDataFile property.
  • d. errorsFile: This path is where a transform or validation script can write out error messages for use in troubleshooting. Not normally needed by an R script because the script usually writes errors to stdout, which are written by the framework to a file named “<scriptname>.Rout”.
  • e. transformedRunPropertiesFile: This path is where the script writes out the updated values of batch- and run-level properties that are listed in the runProperties file.
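Since the run properties file is plain tab-delimited text, reading it is straightforward in any language. The following Python sketch is illustrative only (the helper function and the sample property names are assumptions, not part of LabKey); it parses run-properties content into a dictionary keyed by property name:

```python
import csv
import io

def read_run_properties(text):
    """Parse runProperties.tsv content into {property name: remaining columns}.

    Each line is tab-delimited: property name, property value, and, for
    batch/run properties, a Java data type; file-path properties may carry
    a transformed-data location in a fourth column instead.
    """
    props = {}
    for row in csv.reader(io.StringIO(text), delimiter="\t"):
        if row:
            props[row[0]] = row[1:]
    return props

# Hypothetical file content mirroring the three categories described above.
sample = (
    "gDarkStdDev\t1.98223\tjava.lang.Double\n"   # user-defined run property
    "assayName\tMyAssay\tjava.lang.String\n"     # context property
    "errorsFile\t/tmp/errors.tsv\n"              # file-path property
)
props = read_run_properties(sample)
print(props["gDarkStdDev"][0])  # the user-entered value, as a string
```

A real transform script would read the file at the path substituted for ${runInfo} rather than an in-memory string.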

Choosing the Input File for Transform Script Processing

The transform script developer can choose to use either the runDataFile or the runDataUploadedFile as its input. The runDataFile would be the right choice for an Excel-format raw file and a script that fills in additional columns of the data set. By using the runDataFile, the assay framework does the Excel-to-TSV conversion and the script doesn’t need to know how to parse Excel files. The runDataUploadedFile would be the right choice for a raw file in TSV format that the script is going to reformat by turning columns into rows. In either case, the script writes its output to the AssayRunTSVData file.

Associate the Script with an Assay

To specify a transform script in an assay design, you enter the full path to the script, including the file extension, in the Transform Scripts field.

  • Open the assay designer for a new assay, or edit an existing assay design.
  • Click Add Script.
  • Enter the full path to the script in the Transform Scripts field.
  • You may enter multiple scripts by clicking Add Script again.
  • Confirm that other Properties required by your assay type are correctly specified.
  • Click Save and Close.

When you import (or re-import) run data using this assay design, the script will be executed.

There are two useful options presented as checkboxes in the Assay designer.

  • Save Script Data tells the framework to not delete the intermediate files such as the runProperties file after a successful run. This option is important during script development. It can be turned off to avoid cluttering the file space under the TransformAndValidationFiles directory that the framework automatically creates under the script file directory.
  • Import In Background tells the framework to create a pipeline job as part of the import process, rather than tying up the browser session. It is useful for importing large data sets.
A few notes on usage:
  • Client API calls are not supported in transform scripts.
  • Columns populated by transform scripts must already exist in the assay definition.
  • Executed scripts show up in the experimental graph, providing a record that transformations and/or quality control scripts were run.
  • Transform scripts are run before field-level validators.
  • The script is invoked once per run upload.
  • Multiple scripts are invoked in the order they are listed in the assay design.
Note that non-programmatic quality control remains available -- assay designs can be configured to perform basic checks for data types, required values, regular expressions, and ranges in uploaded data. See the Validators section of the Field Properties topic and Dataset QC States - Admin Guide.

The general purpose assay tutorial includes another example use of a transformation script in Set up a Data Transformation Script.

How Transformation Scripts Work

Script Execution Sequence

Transformation and validation scripts are invoked in the following sequence:

  1. A user uploads assay data.
  2. The server creates a runProperties.tsv file and rewrites the uploaded data in TSV format. Assay-specific properties and files from both the run and batch levels are added. See Run Properties Reference for full lists of properties.
  3. The server invokes the transform script by passing it the information created in step 2 (the runProperties.tsv file).
  4. After script completion, the server checks whether any errors have been written by the transform script and whether any data has been transformed.
  5. If transformed data is available, the server uses it for subsequent steps; otherwise, the original data is used.
  6. If multiple transform scripts are specified, the server invokes the other scripts in the order in which they are defined.
  7. Field-level validator/quality-control checks (including range and regular expression validation) are performed. (These field-level checks are defined in the assay definition.)
  8. If no errors have occurred, the run is loaded into the database.

Passing Run Properties to Transformation Scripts

Information on run properties can be passed to a transform script in two ways. You can put a substitution token into your script to identify the run properties file, or you can configure your scripting engine to pass the file path as a command line argument. See Transformation Script Substitution Syntax for a list of available substitution tokens.

For example, using perl:

Option #1: Put a substitution token (${runInfo}) into your script and the server will replace it with the path to the run properties file. Here's a snippet of a perl script that uses this method:

# Open the run properties file. Run or upload set properties are not used by
# this script. We are only interested in the file paths for the run data and
# the error file.

open my $reportProps, '${runInfo}';

Option #2: Configure your scripting engine definition so that the file path is passed as a command line argument:

  • Go to (Admin) > Site > Admin Console and click Admin Console Links.
  • Under Configuration, click Views and Scripting.
  • Select and edit the perl engine.
  • Add ${runInfo} to the Program Command field.

Example Workflow: Develop a Transformation Script (perl)

This example workflow describes the process for developing a perl transformation script. There are two potential use cases:
  • transform run data
  • transform run properties
This page will walk through the process of creating an assay transformation script for run data, and give an example of a run properties transformation at the end.

Script Engine Setup

Before you can develop or run validation or transform scripts, configure the necessary Scripting Engines. You only need to set up a scripting engine once per type of script. You will need a copy of Perl running on your machine to set up the engine.

  • Select (Admin) > Site > Admin Console and click Admin Console Links.
  • Under Configuration, click Views and Scripting.
  • Click Add > New Perl Engine.
  • Fill in as shown, specifying the "pl" extension and full path to the perl executable.
  • Click Submit.

Add Script to Assay Design

Create a new empty .pl file in the development location of your choice and include it in your assay design. This topic uses the folder and simple assay design you would have created while completing the Assay Tutorial.

  • Navigate to the Assay Tutorial folder.
  • Click GenericAssay in the Assay List web part.
  • Select Manage Assay Design > Copy assay design.
  • Click Copy to Current Folder.
  • Enter a new name, such as "TransformedAssay".
  • Click Add Script and type the full path to the new script file you are creating.
  • Check the box for Save Script Data.
  • Confirm that the batch, run, and data fields are correct.
  • Click Save & Close.

Download Test Data

To assist in writing your transform script, you will next obtain sample "runData.tsv" and "runProperties.tsv" files showing the state of your data import 'before' the transform script would be applied. To generate useful test data, you need to import a data run using the new assay design with the "Save Script Data" box checked.

  • Open and select the following file in the files web part (if you have already imported this file during the tutorial, you will first need to delete that run):
  • Click Import Data.
  • Select Use TransformedAssay (the design you just defined) then click Import.
  • Click Next, then Save and Finish.
  • When the import completes, select Manage Assay Design > Edit assay design.
  • Click the Download Test Data button.
  • Unzip the downloaded "sampleQCData" package to see the .tsv files.
  • Open the "runData.tsv" file to view the current fields.
Date	VisitID	ParticipantID	M3	M2	M1	SpecimenID
12/17/2013	1234	demo value	1234	1234	1234	demo value
12/17/2013	1234	demo value	1234	1234	1234	demo value
12/17/2013	1234	demo value	1234	1234	1234	demo value
12/17/2013	1234	demo value	1234	1234	1234	demo value
12/17/2013	1234	demo value	1234	1234	1234	demo value

Save Script Data

Typically transform and validation script data files are deleted on script completion. For debug purposes, it can be helpful to be able to view the files generated by the server that are passed to the script. When the Save Script Data checkbox is checked, files will be saved to a subfolder named: "TransformAndValidationFiles", in the same folder as the original script. Beneath that folder are subfolders for the AssayId, and below that a numbered directory for each run. In that nested subdirectory you will find a new "runDataFile.tsv" that will contain values from the run file plugged into the current fields.

participantid	Date	M1	M2	M3
249318596	2008-06-07 00:00	435	1111	15.0
249320107	2008-06-06 00:00	456	2222	13.0
249320107	2008-03-16 00:00	342	3333	15.0
249320489	2008-06-30 00:00	222	4444	14.0
249320897	2008-05-04 00:00	543	5555	32.0
249325717	2008-05-27 00:00	676	6666	12.0

Define the Desired Transformation

The runData.tsv file gives you the basic fields layout. Decide how you need to modify the default data. For example, perhaps for our project we need an adjusted version of the value in the M1 field - we want the doubled value available as an integer.
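You can prototype such a transformation outside the assay framework before committing it to a transform script. This Python sketch is illustrative only (the function name is invented; the column names come from the example data above):

```python
import csv
import io

def add_adjusted_m1(tsv_text):
    """Append an 'Adjusted M1' column holding double the M1 value as an integer."""
    rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
    for row in rows:
        row["Adjusted M1"] = int(float(row["M1"]) * 2)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()),
                            delimiter="\t", lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

sample = "participantid\tM1\n249318596\t435\n"
print(add_adjusted_m1(sample))
```

The real transform script reads its input from the path given by runDataFile (or runDataUploadedFile) and writes the result to the AssayRunTSVData path, rather than working on in-memory strings.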

Add Required Fields to the Assay Design

  • Select Manage Assay Design > Edit assay design.
  • Scroll down to the TransformedAssay Data Fields section and click Add Field.
  • Enter "AdjustM1", "Adjusted M1", and select type "Integer".
  • Click Save & Close.

Write a Script to Transform Run Data

Now you have the information you need to write and refine your transformation script. Open the empty script file and paste the contents of the Modify Run Data box from this page: Example Transformation Scripts (perl).

Iterate over the Test Run to Complete Script

Re-import the same run using the transform script you have defined.

  • From the run list, select the run and click Re-import Run.
  • Click Next.
  • Under Run Data, click Use the data file(s) already uploaded to the server.
  • Click Save and Finish.

The results now show the new field populated with the Adjusted M1 value.

Until the results are as desired, edit the script and use Re-import Run to retry.

Once your transformation script is working properly, re-edit the assay design one more time to uncheck the Save Script Data box - otherwise your script will continue to generate artifacts with every run and could eventually fill your disk.

Debugging Transformation Scripts

If your script has errors that prevent import of the run, you will see red text in the Run Properties window; this happens, for example, if you fail to select the correct data file.

If there is a type mismatch between your script results and the defined destination field, you will also see an error reported there.

Errors File

If the validation script needs to report an error that is displayed by the server, it adds error records to an error file. The location of the error file is specified as a property entry in the run properties file. The error file is in a tab-delimited format with three columns:

  1. type: error, warning, info, etc.
  2. property: (optional) the name of the property that the error occurred on.
  3. message: the text message that is displayed by the server.
Sample errors file:

error	runDataFile	A duplicate PTID was found : 669345900
error	assayId	The assay ID is in an invalid format
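Any language that can write tab-delimited text can produce this file. A minimal Python sketch (the helper name is invented; a real transform script would write to the errorsFile path taken from the run properties file, not a temporary directory):

```python
import os
import tempfile

def write_assay_errors(path, errors):
    """Write (type, property, message) triples in the tab-delimited
    three-column errors-file format described above."""
    with open(path, "w") as f:
        for err_type, prop, message in errors:
            f.write("%s\t%s\t%s\n" % (err_type, prop, message))

# Demo with a temporary path.
errors_path = os.path.join(tempfile.mkdtemp(), "errors.tsv")
write_assay_errors(errors_path, [
    ("error", "runDataFile", "A duplicate PTID was found : 669345900"),
    ("error", "assayId", "The assay ID is in an invalid format"),
])
```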

Example Transformation Scripts (perl)

There are two use cases for writing transformation scripts:
  • Modify Run Data
  • Modify Run Properties
This page shows an example of each type of script using perl.

Modify Run Data

This script is used in the Example Workflow: Develop a Transformation Script (perl) and populates a new field with data derived from an existing field in the run.

use strict;
use warnings;

# Open the run properties file. Run or upload set properties are not used by
# this script. We are only interested in the file paths for the run data and
# the error file.

open my $reportProps, '${runInfo}' or die "Can't open run properties file: $!";

my $transformFileName = "unknown";
my $dataFileName = "unknown";

my %transformFiles;

# Parse the data file properties from reportProps and save the transformed data location
# in a map. It's possible for an assay to have more than one transform data file, although
# most will only have a single one.

while (my $line=<$reportProps>)
{
    my @row = split(/\t/, $line);

    if ($row[0] eq 'runDataFile')
    {
        $dataFileName = $row[1];

        # transformed data location is stored in column 4
        $transformFiles{$dataFileName} = $row[3];
    }
}

my $key;
my $value;
my $adjustM1 = 0;

# Read each line from the uploaded data file and insert new data (double the value in the M1 field)
# into an additional column named 'Adjusted M1'. The additional column must already exist in the assay
# definition and be of the correct type.

while (($key, $value) = each(%transformFiles)) {

    open my $dataFile, '<', $key or die "Can't open '$key': $!";
    open my $transformFile, '>', $value or die "Can't open '$value': $!";

    # copy the header row, appending the new column name
    my $line=<$dataFile>;
    $line =~ s/[\r\n]*$//;
    print $transformFile $line, "\t", "Adjusted M1", "\n";

    while (my $line=<$dataFile>)
    {
        $adjustM1 = substr($line, 27, 3) * 2;
        $line =~ s/[\r\n]*$//;
        print $transformFile $line, "\t", $adjustM1, "\n";
    }

    close $dataFile;
    close $transformFile;
}

Modify Run Properties

You can also define a transform script that modifies the run properties, as shown in this example, which parses the short filename out of the full path:

use strict;
use warnings;

# Open the run properties file. Run or upload set properties are not used by
# this script. We are only interested in the file paths for the run data and
# the error file.

open my $reportProps, '<', $ARGV[0] or die "Can't open run properties file: $!";

my $transformFileName = "unknown";
my $uploadedFile = "unknown";

while (my $line=<$reportProps>)
{
    my @row = split(/\t/, $line);

    if ($row[0] eq 'transformedRunPropertiesFile')
    {
        $transformFileName = $row[1];
    }
    if ($row[0] eq 'runDataUploadedFile')
    {
        $uploadedFile = $row[1];
    }
}

if ($transformFileName eq 'unknown')
{
    die "Unable to find the transformed run properties data file";
}

open my $transformFile, '>', $transformFileName or die "Can't open '$transformFileName': $!";

# parse out just the filename portion (between the last path separator and the extension)
my $i = rindex($uploadedFile, "\\") + 1;
my $j = rindex($uploadedFile, ".");

# add a value for FileID
print $transformFile "FileID", "\t", substr($uploadedFile, $i, $j-$i), "\n";
close $transformFile;

Transformation Scripts in R

The R language is a good choice for writing assay transformation scripts, because it contains a lot of built-in functionality for manipulating tabular data sets.

General information about creating and using transformation scripts can be found in this topic: Transformation Scripts. This topic contains information related to using R as the transformation scripting language.

Include Other Scripts

If your transform script calls other script files to do its work, the normal way to pull in the source code is using the source statement, for example (the helper script name here is illustrative):

source("myHelperFunctions.R");

To keep dependent scripts together so that they are easily moved to other servers, it is better to keep the script files in the same directory and reference them with the built-in substitution token "${srcDirectory}", which the server automatically fills in with the directory where the called script file (the one identified in the Transform Scripts field) is located, for example:

source("${srcDirectory}/myHelperFunctions.R");

Connecting Back to the Server from an R Transform Script

Sometimes a transform script needs to connect back to the server to do its job; one example is translating lookup display values into key values. The Rlabkey library, available on CRAN, has the functions needed to connect to, query, and insert or update data in the local LabKey Server where the script is running. To give the connection the right security context (that of the current user), the assay framework provides the substitution token ${rLabkeySessionId}. Including this token on a line by itself near the beginning of the transform script eliminates the need for a config file holding a username and password for this loopback connection. The token is replaced with two lines that look like:

labkey.sessionCookieName = "JSESSIONID"
labkey.sessionCookieContents = "TOMCAT_SESSION_ID"

where TOMCAT_SESSION_ID is the actual ID of the user's HTTP session.

Debugging an R Transform Script

You can load an R transform script into the R console/debugger and run the script with debug(<functionname>) commands active. Since the substitution tokens described above ( ${srcDirectory} , ${runInfo}, and ${rLabkeySessionId} ) are necessary to the correct operation of the script, the framework conveniently writes out a version of the script with these substitutions made, into the same subdirectory where the runProperties.tsv file is found. Load this modified version of the script into the R console.

Example Script

Input Data TSV File

Suppose you have the following Assay data in a TSV format:


You want a transform script that flags values less than 0 or greater than 1 as "Out of Range", so that the data enters the database in the form:

SpecimenId	Date	Score	Message
S-4	2018-11-02	-1	Out of Range
S-5	2018-11-02	99	Out of Range


The following R transform script accomplishes this and will write to the Message column if it sees out of range values:


# Read in the run properties and results data. #

library(Rlabkey);

run.props = labkey.transform.readRunPropertiesFile("${runInfo}");

# save the important run.props as separate variables
run.data.file = labkey.transform.getRunPropertyValue(run.props, "runDataFile");
run.output.file = run.props$val3[run.props$name == "runDataFile"];
error.file = labkey.transform.getRunPropertyValue(run.props, "errorsFile");

# read in the results data file content
run.data = read.delim(run.data.file, header=TRUE, sep="\t", stringsAsFactors = FALSE);

# Transform the data. #

# Your transformation code goes here.

# If any Score value is less than 0 or greater than 1,
# then place "Out of Range" in the Message vector.
for (i in 1:nrow(run.data))
{
    if (run.data$Score[i] < 0 | run.data$Score[i] > 1) { run.data$Message[i] <- "Out of Range" }
}

# Write the transformed data to the output file location. #

# write the new set of run data out to an output file
write.table(run.data, file=run.output.file, sep="\t", na="", row.names=FALSE, quote=FALSE);

# print the ending time for the transform script
writeLines(paste("\nProcessing end time:", Sys.time(), sep=" "));


Before installing this sample, ensure that an R engine is configured on your server.

  • Create a new folder of type Assay.
  • Download this R script: sampleTransform.R
  • Upload the script to the Files Repository of your new folder.
    • Select (Admin) > Go To Module > FileContent then drop the sampleTransform.R file into the drag and drop area and it will upload.
  • Create an Assay Design named "Score" with the following data fields. You can either enter them yourself or download and import this assay design: Score.xar
    • SpecimenId - type Text (String)
    • Date - type DateTime
    • Score - type Number (Double)
    • Message - type Text (String)
  • Determine the absolute path to the script in the files repository. You can see it after uploading or by concatenating the <folder-root> with "/@files/sampleTransform.R"
  • Import data to the Assay Design. Include values less than 0 or greater than 1 to trigger "Out of Range" values in the Message field. You can use this example data file: R Script Assay Data.tsv
  • View the transformed results imported to the database to confirm that the R script is working correctly.

Transformation Scripts in Java

LabKey Server supports transformation scripts for assay data at upload time. This feature is primarily targeted for Perl or R scripts; however, the framework is general enough that any application that can be externally invoked can be run as well, including a Java program.

Java appeals to programmers who want a more strongly typed language than most scripting languages. Most importantly, using a Java-based validator allows a developer to leverage the remote client API and take advantage of the classes available for assays, queries, and security.

This page outlines the steps required to configure and create a Java-based transform script. The ProgrammaticQCTest script, available in the BVT test, provides an example of a script that uses the remote client API.

Configure the Script Engine

In order to use a Java-based validation script, you will need to configure an external script engine to bind a file with the .jar extension to an engine implementation.

  • Select (Admin) > Site > Admin Console.
  • Click Admin Console Links.
  • Under Configuration, click Views and Scripting.
  • Select Add > New External Engine.
  • Set up the script engine by filling in its required fields:
    • File extension: jar
    • Program path: (the absolute path to java.exe)
    • Program command: -jar "${scriptFile}" "${runInfo}"
      • scriptFile: The full path to the (processed and rewritten) transform script. This is usually in a temporary location the server manages.
      • runInfo: The full path to the run properties file the server creates. For more about this file, see "How Transformation Scripts Work".
      • srcDirectory: The original directory of the transform script (usually specified in the assay definition).
  • Click Submit.

The program command configured above will invoke the java.exe application against a .jar file passing in the run properties file location as an argument to the java program. The run properties file contains information about the assay properties including the uploaded data and the location of the error file used to convey errors back to the server. Specific details about this file are contained in the data exchange specification for Programmatic QC.

Implement a Java Validator

The implementation of your java validator class must contain an entry point matching the following function signature:

public static void main(String[] args)

The location of the run properties file will be passed from the script engine configuration (described above) into your program as the first element of the args array.

The following code provides an example of a simple class that implements the entry point and handles any arguments passed in:

public class AssayValidator
{
    private String _email;
    private String _password;
    private File _errorFile;
    private Map<String, String> _runProperties;
    private List<String> _errors = new ArrayList<String>();

    private static final String HOST_NAME = "http://localhost:8080/labkey";
    private static final String HOST = "localhost:8080";

    public static void main(String[] args)
    {
        if (args.length != 1)
            throw new IllegalArgumentException("Input data file not passed in");

        File runProperties = new File(args[0]);
        if (runProperties.exists())
        {
            AssayValidator qc = new AssayValidator();
            // ... continue with validation using the run properties file
        }
        else
            throw new IllegalArgumentException("Input data file does not exist");
    }
}

Create a Jar File

Next, compile and jar your class files, including any dependencies your program may have. This will save you from having to add a classpath parameter in your engine command. Make sure that a ‘Main-Class’ attribute is added to your jar file manifest. This attribute points to the class that implements your program entry point.
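For example, assuming the entry-point class from the sketch above is named AssayValidator, the manifest file would contain:

```
Main-Class: AssayValidator
```

A command like `jar cfm AssayValidator.jar MANIFEST.MF *.class` then builds the jar with that manifest.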

Set Up Authentication for Remote APIs

Most of the remote APIs require login information in order to establish a connection to the server. Credentials can be hard-coded into your validation script or passed in on the command line. Alternatively, a .netrc file can be used to hold the credentials necessary to log in to the server. For further information, see: Create a netrc file.
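For example, a .netrc entry for a server running locally might look like the following (the credentials shown are placeholders):

```
machine localhost
login user@example.com
password mypassword
```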

The following sample code can be used to extract credentials from a .netrc file:

private void setCredentials(String host) throws IOException
{
    NetrcFileParser parser = new NetrcFileParser();
    NetrcFileParser.NetrcEntry entry = parser.getEntry(host);

    if (null != entry)
    {
        _email = entry.getLogin();
        _password = entry.getPassword();
    }
}

Associate the Validator with an Assay

Finally, the QC validator must be attached to an assay. To do this, you will need to edit the assay design and specify the absolute location of the .jar file you have created. The engine created earlier will bind the .jar extension to the java.exe command you have configured.

Related Topics

Transformation Scripts

Transformation Scripts for Module-based Assays

A transformation script can be included in a module-based assay by including a directory called 'scripts' in the assay directory. In this case, the exploded module structure looks something like:


The scripts directory contains one or more script files; e.g., a Perl script such as "validation.pl".

The order of script invocation can be specified in the config.xml file. See the <transformScripts> element. If scripts are not listed in the config.xml file, they will be executed in alphabetical order based on file name.

A script engine must be defined for the appropriate type of script (for the example script named above, this would be a Perl engine). The rules for defining a script engine for module-based assays are the same as they are for Java-based assays.

When a new assay instance is created, you will notice that the script appears in the assay designer, but it is read-only (the path cannot be changed or removed). Just as for Java-defined assays, you will still see an additional text box where you can specify one or more additional scripts.

Run Properties Reference

Run properties are defined as part of assay design and values are specified at run upload. The server creates a runProperties.tsv file and rewrites the uploaded data in TSV format. Assay-specific properties from both the run and batch levels are included.

There are standard default assay properties which apply to most assay types, as well as additional properties specific to the assay type. For example, NAb, Luminex, and ELISpot assays can include specimen, analyte, and antigen properties which correspond to locations on a plate associated with the assay instance.

The runProperties.tsv file also contains additional context information that the validation script might need, such as username, container path, assay instance name, assay id. Since the uploaded assay data will be written out to a file in TSV format, the runProperties.tsv also specifies the destination file's location.

Run Properties Format

The runProperties file has three (or four) tab-delimited columns in the following order:

  1. property name
  2. property value
  3. data type – The Java class name of the property value (for example, java.lang.String). This column may have a different meaning for properties like the run data, transformed data, or errors file. More information can be found in the property descriptions below.
  4. transformed data location – The full path to the location where the transformed data are rewritten in order for the server to load them into the database.
The file does not contain a column header row because the column order is fixed.
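Because the column order is fixed and there is no header, each line can be split on tabs directly. This Python sketch (illustrative, not a LabKey API; the sample data-type value is an assumption) pads every parsed line to four columns so the optional transformed-data location is always addressable:

```python
def parse_run_properties_line(line):
    """Split one runProperties.tsv line into its four possible columns:
    property name, property value, data type, transformed data location.
    Missing trailing columns are returned as empty strings."""
    cols = line.rstrip("\r\n").split("\t")
    return (cols + [""] * 4)[:4]

name, value, dtype, transformed = parse_run_properties_line(
    "runDataFile\t/tmp/in.tsv\tjava.io.File\t/tmp/out.tsv\n")
print(name, transformed)
```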

Generic Assay Run Properties

Property Name	Data Type	Property Description
assayId	String	The value entered in the Assay Id field of the run properties section.
assayName	String	The name of the assay design given when the new assay design was created.
assayType	String	The type of this assay design. (GenericAssay, Luminex, Microarray, etc.)
baseUrl	URL String	For example, http://localhost:8080/labkey
containerPath	String	The container location of the assay. (for example, /home/AssayTutorial)
errorsFile	Full Path	The full path to a .tsv file where any validation errors are written. See details below.
originalFileLocation	Full Path	The full path to the original location of the file being imported as an assay.
protocolDescription	String	The description of the assay definition when the new assay design was created.
protocolId	String	The ID of this assay definition.
protocolLsid	String	The assay definition LSID.
runComments	String	The value entered into the Comments field of the run properties section.
runDataUploadedFile	Full Path	The original data file that was selected by the user and uploaded to the server as part of an import process. This can be an Excel file, a tab-separated text file, or a comma-separated text file.
runDataFile	Full Path	The imported data file after the assay framework has attempted to convert the file to .tsv format and match its columns to the assay data result set definition.
transformedRunPropertiesFile	Full Path	File where the script writes out the updated values of batch- and run-level properties that are listed in the runProperties file.
userName	String	The user who created the assay design.
workingDir	String	The temp location that this script is executed in. (e.g. C:\AssayId_209\39\)


Validation errors can be written to a TSV file as specified by full path with the errorsFile property. This output file is formatted with three columns:

  • Type - "error" or "warn"
  • Property - the name of the property raising the validation error
  • Message - the actual error message
For additional information about handling errors and warnings in transformation scripts, see: Warnings in Transformation Scripts.

Additional Assay Specific Run Properties


Property Name	Data Type	Property Description
sampleData	String	The path to a file that contains sample data written in a tab-delimited format. The file will contain all of the columns from the sample group section of the assay design. A wellgroup column will be written that corresponds to the well group name in the plate template associated with this assay instance. A row of data will be written for each well position in the plate template.
antigenData	String	The path to a file that contains antigen data written in a tab-delimited format. The file contains all of the columns from the antigen group section of the assay design. A wellgroup column corresponds to the well group name in the plate template associated with this assay instance. A row of data is written for each well position in the plate template.



NAb (TZM-bl Neutralizing Antibody) Assay

Property Name	Data Type	Property Description
sampleData	String	The path to a file that contains sample data written in a tab-delimited format. The file contains all of the columns from the sample group section of the assay design. A wellgroup column corresponds to the well group name in the plate template associated with this assay instance. A row of data is written for each well position in the plate template.

General Purpose Assay Type (GPAT)

Property Name	Data Type	Property Description
severityLevel (reserved)	String	This is a property name used internally for error and warning handling. Do not define your own property with the same name in a GPAT assay.
maximumSeverity (reserved)	String	This is a property name reserved for use in error and warning handling. Do not define your own property with the same name in a GPAT assay. See Warnings in Transformation Scripts for details.

Transformation Script Substitution Syntax

LabKey Server supports a number of substitutions that can be used with transformation scripts. These substitutions work both on the command-line being used to invoke the script (configured in the Views and Scripting section of the Admin Console), and in the text of transformation scripts themselves. See Transformation Scripts for a description of how to use this syntax.

Script Syntax	Description	Substitution Value
${runInfo}	File containing metadata about the run	Full path to the file on the local file system
${srcDirectory}	Directory in which the script file is located	Full path to the parent directory of the script
${rLabkeySessionId}	Information about the current user's HTTP session	labkey.sessionCookieName = "COOKIE_NAME" followed on a second line by labkey.sessionCookieContents = "USER_SESSION_ID". Note that this substitution is multi-line. The cookie name is typically JSESSIONID, but is not in all cases.
${httpSessionId}	The current user's HTTP session ID	The string value of the session identifier, which can be used for authentication when calling back to the server for additional information
${sessionCookieName}	The name of the session cookie	The string value of the cookie name, which can be used for authentication when calling back to the server for additional information.
${baseServerURL}	The server's base URL and context path	The string of the base URL and context path. (ex. "http://localhost:8080/labkey")
${containerPath}	The current container path	The string of the current container path. (ex. "/ProjectA/SubfolderB")

Warnings in Transformation Scripts

In General Purpose Assay (GPAT) designs, you can enable reporting of warnings in a transformation script. Ordinarily, errors will stop the execution of a script and the assay import, but if warnings are configured, you can have the import pause on warnings and allow an operator to examine transformed results and elect to proceed or cancel the upload. Note that this feature applies only to the General Purpose Assay Type (GPAT) and is not a generic assay feature. Warning reporting is optional, and invisible unless you explicitly enable it. If your script does not update maximumSeverity, then no warnings will be triggered and no user interaction will be required.

Enable Support for Warnings in a Transformation Script

To raise a warning from within your transformation script, set maximumSeverity to WARN within the transformedRunProperties file. To report an error, set maximumSeverity to ERROR. To display a specific message with either a warning or error, write the message to errors.html in the current directory. For example, this snippet from an R transformation script defines a warning and error handler:

# Writes the maximumSeverity level to the transformedRunProperties file and the
# error/warning message to the errors.html file. The server reads these files after
# execution to determine whether an error or warning occurred and handles it appropriately.
# Note: fileConn is a connection to the transformedRunProperties file, opened elsewhere
# in the full sample script.
handleErrorsAndWarnings <- function()
{
    if (run.error.level > 0)
    {
        if (run.error.level == 1) {
            writeLines(paste("maximumSeverity", "WARN", sep="\t"), fileConn)
        } else {
            writeLines(paste("maximumSeverity", "ERROR", sep="\t"), fileConn)
        }

        # This file is read and displayed directly as warnings or errors, depending on
        # the maximumSeverity level. (In the full sample, fileConn is re-opened on
        # errors.html before this write.)
        writeLines(run.error.msg, fileConn)
    }
}


Click here to download a sample transformation script including this handler and other configuration required for warning reporting.

Workflow for Warnings from Transformation Scripts

When a warning is triggered during assay import, the user is shown a screen with the option to Proceed or Cancel the import after examining the output files:

After examining the output and transformed data files, if the user clicks Proceed, the transform script is rerun, and no warnings are raised on the second pass. Quieting warnings on the approved import is handled using the value of an internal property called severityLevel in the run properties file. Errors will still be raised if necessary.

Priority of Errors and Warnings:

  1. Script error (syntax, runtime, etc.) <- Error
  2. Script returns a non-zero value <- Error
  3. Script writes ERROR to maximumSeverity in the transformedRunProperties file <- Error
    • If the script also writes a message to errors.html, it will be displayed; otherwise a server-generated message is shown.
  4. Script writes WARN to maximumSeverity in the transformedRunProperties file <- Warning
    • If the script also writes a message to errors.html, it will be displayed; otherwise a server-generated message is shown.
    • The Proceed and Cancel buttons are shown, requiring a user selection to continue.
  5. Script does not write a value to maximumSeverity in transformedRunProperties but does write a message to errors.html. This is interpreted as an error.
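As a sketch of how these rules combine, the following hypothetical helper (illustrative only, not LabKey's implementation) maps what a transformation script did to the resulting outcome:

```python
# Hypothetical sketch of the precedence rules above; not LabKey code.
def resolve_outcome(script_crashed, exit_code, maximum_severity, wrote_errors_html):
    """Return "ERROR", "WARN", or None for a clean import."""
    if script_crashed or exit_code != 0:
        return "ERROR"                 # rules 1 and 2: any script failure is an error
    if maximum_severity == "ERROR":
        return "ERROR"                 # rule 3
    if maximum_severity == "WARN":
        return "WARN"                  # rule 4: user must choose Proceed or Cancel
    if wrote_errors_html:
        return "ERROR"                 # rule 5: a message without a severity is an error
    return None                        # no errors or warnings; import continues
```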

Modules: Folder Types

LabKey Server includes a number of built-in folder types, which define the enabled modules and the location of web parts in the folder. Built-in folder types include study, assay, flow, and others, each of which combine different default tools and web parts for different workflows and analyses.

Advanced users can define custom folder types in an XML format for easy reuse. This document explains how to define a custom folder type in your LabKey Server module. A folder type can be thought of as a template for the layout of the folder. The folder type specifies the tabs, web parts and active modules that are initially enabled in that folder.

Each folder type can provide the following:

  • The name of the folder type.
  • Description of the folder type.
  • A list of tabs (provide a single tab for a non-tabbed folder).
  • A list of the modules enabled by default for this folder.
  • Whether the menu bar is enabled by default. If this is true, when the folderType is activated in a project (but not a subfolder), the menu bar will be enabled.
Per tab, the following can be set:
  • The name and caption for the tab.
  • An ordered list of 'required web parts'. These web parts cannot be removed.
  • An ordered list of 'preferred web parts'. The web parts can be removed.
  • A list of permissions required for this tab to be visible (ie. READ, INSERT, UPDATE, DELETE, ADMIN)
  • A list of selectors. These selectors are used to test whether this tab should be highlighted as the active tab or not. Selectors are described in greater detail below.

Define a Custom Folder Type

Module Location

The easiest way to define a custom folder type is via a module, which is just a directory containing various kinds of resource files. Modules can be placed in the standard modules/ directory, or in the externalModules/ directory. By default, the externalModules/ directory is a peer to the modules/ directory.

To tell LabKey Server to look for external modules in a different directory, simply add the following to your VM parameters:
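For example (the system property name shown here is the one conventionally used for this setting; verify it against your LabKey version):

```
-Dlabkey.externalModulesDir=C:/externalModules
```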


This will cause the server to look in C:/externalModules for module files in addition to the normal modules/ directory under the web application.

Module Directory Structure

Create a directory structure like the following, replacing 'MyModule' with the name of your module. Within the folderTypes directory, any number of XML files defining new folder types can be provided.
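A sketch of one possible layout (file names are illustrative; in source modules these directories typically sit under a resources/ directory):

```
MyModule/
└── folderTypes/
    └── myType.foldertype.xml
```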


Definition file name and location

Custom folder types are defined via XML files in the folderTypes directory. Folder type definition files can have any name, but must end with a ".foldertype.xml" extension. For example, the following file structure is valid:
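For example (file names are illustrative):

```
MyModule/
└── folderTypes/
    ├── firstType.foldertype.xml
    └── secondType.foldertype.xml
```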


Example #1

The full XML schema (XSD) for folder type XML is documented and available for download. However, the complexity of XML schema files means it is often simpler to start from an example. The following XML defines a simple folder type:

<folderType xmlns="">
<name>My XML-defined Folder Type</name>
<description>A demonstration of defining a folder type in an XML file</description>
<property name="title" value="A customized web part" />
<property name="schemaName" value="study" />
<property name="queryName" value="SpecimenDetail" />
<name>Data Pipeline</name>
<name>Experiment Runs</name>
<name>Sample Sets</name>
<name>Run Groups</name>

Valid Web Part Names

Each <webPart> element must contain a <name> element. The example above specified that a query web part is required via the following XML:
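Assuming the structure of Example #1, the relevant fragment looks like this (the wrapper element is reconstructed):

```xml
<webPart>
    <name>Query</name>
    <property name="title" value="A customized web part" />
    <property name="schemaName" value="study" />
    <property name="queryName" value="SpecimenDetail" />
</webPart>
```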

Valid values for the name element can be found by looking at the 'Select Web Part' dropdown visible when in page admin mode. Note that you may need to enable additional LabKey modules via the 'customize folder' administrative option to see all available web part names.

Valid Module Names

The modules and defaultModules sections define which modules are active in the custom folder type. From the example above:


Valid module names can be found by navigating through the administrative user interface to create a new LabKey Server folder, or by selecting 'customize folder' for any existing folder. The 'customize folder' user interface includes a list of valid module names on the right-hand side.

Example #2 - Tabs

This is another example of an XML file defining a folder type:

<folderType xmlns="" xmlns:mp="">
<name>Laboratory Folder</name>
<description>The default folder layout for basic lab management</description>
<name>Laboratory Home</name>
<name>Lab Tools</name>

<name>Lab Tools</name>
<name>Data Views</name>
<name>Lab Tools</name>
<name>Lab Settings</name>
<name>Lab Tools</name>
<permission name=""/>

Tabbed Folders - The Active Tab

When creating a tabbed folder type, it is important to understand how the active tab is determined. The active tab is determined by the following checks, in order:

  1. If there is a 'pageId' param on the URL that matches a tab's name, that tab is selected. This most commonly occurs after directly clicking a tab.
  2. If no URL param is present, the tabs are iterated from left to right, checking the selectors provided by each tab. If any one of the selectors from a tab matches, that tab is selected. The first tab with a matching selector is used, even if more than one tab would have a match.
  3. If none of the above are true, the left-most tab is selected.
Each tab is able to provide any number of 'selectors'. These selectors are used to determine whether this tab should be marked active (ie. highlighted) or not. The currently supported selector types are:
  1. View: This string will be matched against the viewName from the current URL (ie. 'page', from the current URL). If they are equal, the tab will be selected.
  2. Controller: This string will be matched against the controller from the current URL (ie. 'wiki', from the current URL). If they are equal, the tab will be selected.
  3. Regex: This is a regular expression that must match against the full URL. If it matches against the entire URL, the tab will be selected.
If a tab provides multiple selectors, only 1 of these selectors needs to match. If multiple tabs would have matched to the URL, the left-most tab (ie. the first matching tab encountered) will be selected.

Modules: Query Metadata

To provide additional properties for a query, you may optionally include an associated metadata file for the query.

If supplied, the metadata file should have the same name as the .sql file, but with a ".query.xml" extension (e.g., PeptideCounts.query.xml). For details on setting up the base query, see: Module SQL Queries.

For syntax details, see the following:


See Query Metadata: Examples.

The sample below adds table- and column-level metadata to a SQL query.

<query xmlns="">
<tables xmlns="">
<table tableName="ResultsSummary" tableDbType="NOT_IN_DB">
<column columnName="Protocol">
<column columnName="Formulation">
<column columnName="DM">
<column columnName="wk1">
<columnTitle>1 wk</columnTitle>
<column columnName="wk2">
<columnTitle>2 wk</columnTitle>

Metadata Overrides

Metadata is applied in the following order:

  1. JDBC driver-reported metadata.
  2. Module schemas/<schema>.xml metadata.
  3. Module Java code creates UserSchema and FilteredTableInfo.
  4. Module queries/<schema>/<query>.query.xml metadata. The first .query.xml found in the active set of modules in the container is used.
  5. User-override query metadata within the LabKey database, specified through the Query Schema Browser. The first metadata override found by searching up the container hierarchy and the Shared container is used.
  6. For LABKEY.QueryWebPart, the optional metadata config parameter.

LabKey custom queries apply their metadata on top of the underlying LabKey table's metadata. A Linked Schema may have metadata associated with its definition, which is applied on top of the source schema's metadata. Linked Schema tables and queries may also have module .query.xml and metadata overrides applied, using the same algorithm, on top of the source schema's tables and queries.

An exception to this overriding sequence is that if a foreign key is applied directly to a database table (1), it will not be overridden by metadata in a schemas/<schema>.xml file (2), but can be overridden by metadata in a queries/<schema>/<query>.query.xml file (4).

Related Topics

Modules: Report Metadata

The following topic explains how to add an R report (in a file-based module) to the Reports menu on a dataset.

Report File Structure

Suppose you have a file-based R report on a dataset called "Physical Exam". The R report (MyRReport.r) is packaged as a module with a directory structure like the following (the reports/schemas/<schemaName>/<queryName> layout is the standard location for file-based reports; datasets belong to the "study" schema):

reports
└── schemas
    └── study
        └── Physical Exam
            └── MyRReport.r

Include Thumbnail and Icon Images

To include static thumbnail or icon images in your file based report, create a folder with the same name as the report. Then place a file for each type of image you want; extensions are optional:

  • Thumbnail: The file must be named "Thumbnail" or "Thumbnail.*"
  • Mini-icon: The file must be named "SmallThumbnail" or "SmallThumbnail.*"
Note that the folder containing the images is a sibling folder to the report itself (and report XML file if present). To add both types of image to the same example used above ("MyRReport" on the "Physical Exam" dataset) the directory structure would look like this:
Physical Exam
├── MyRReport.r
└── MyRReport
    ├── Thumbnail.png
    └── SmallThumbnail.png

Report Metadata

To add metadata to the report, create a report metadata file, named after the report with a ".report.xml" extension (MyRReport.report.xml), in the "Physical Exam" directory:

Physical Exam
├── MyRReport.r
└── MyRReport.report.xml

Using a metadata file, you can set the report as hidden, set the label and description, and other properties. For instance, you can set a "created" date and set the status to "Final" by including the following:

<Prop name="status">Final</Prop>
<Prop name="moduleReportCreatedDate">2016-12-01</Prop>

Setting a report as hidden (including <hidden>true</hidden> in the metadata XML) will hide it in the Data Views web part and the Views menu on a data grid, but does not prevent the report from being displayed if its URL is called directly.

For more details see the report metadata xml docs: ReportDescriptor.

Sample Report Metadata

A sample report metadata file. Note that label, description, and category are picked up by and displayed in the Data Views web part.

<?xml version="1.0" encoding="UTF-8" ?>
<!-- Root element reconstructed; see the ReportDescriptor schema docs for the exact format. -->
<ReportDescriptor>
    <label>My R Report</label>
    <description>A file-based R report.</description>
    <Prop name="status">Final</Prop>
    <Prop name="moduleReportCreatedDate">2016-12-01</Prop>
</ReportDescriptor>

Modules: Custom Footer

Premium Feature — This feature is available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

The server provides a default site-wide footer which renders the text “Powered by LabKey” with a link to the home page.

To create a custom footer that appears on all pages throughout the site, place a file named _footer.html in your module, at the following location:
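For example (assuming the views resource directory, the same location used for custom headers):

```
MyModule/resources/views/_footer.html
```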


The footer can be written as an HTML fragment, without the <head> or <body> tags. The file can render any kind of HTML content, such as links, images, and scripts. It is also responsible for its own formatting, dependencies, and resources.

Images and CSS Files

Associated images and CSS files can be located in the same module, as follows:
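For example (the resources/web prefix is the standard module layout for static web resources served from the context path; the customfooter directory and image name match the snippet below, and the stylesheet name is illustrative):

```
MyModule/resources/web/customfooter/myimage.png
MyModule/resources/web/customfooter/myStylesheet.css
```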



The following _footer.html file references myimage.png.

<p align="center">
<img src="<%=contextPath%>/customfooter/myimage.png"/> This is the Footer Text!

Choosing Between Multiple Footers

If _footer.html files are contributed by multiple modules, you can select which footer to display from the Admin Console.

  • Go to (Admin) > Site > Admin Console.
  • Click Admin Console Links.
  • Under Premium Features, click Configure Footer.
  • Select the module containing the footer to use (see below).
  • Click Save.

The dropdown list is populated by footers residing in modules deployed on the server, whether or not those modules are enabled in any folder.

  • Core will show the standard LabKey footer "Powered by LabKey".
  • Default will display the footer with the highest priority, where priority is determined by module dependency order. If module A depends on module B, then the footer in A has higher priority. Note that only modules that are enabled in at least one folder will provide a footer to the priority ranking process.

Related Topics

Modules: Custom Header

Premium Feature — This feature is available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

To create a custom header that appears on all pages throughout the site, place a file named _header.html in any module you will include in your server. Place it at the following location:
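For example (using the resources/views location referenced later in this topic):

```
MyModule/resources/views/_header.html
```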


The header is written as an HTML fragment, without the <head> or <body> tags. The file can render any kind of HTML content, such as links, images, and scripts. It is also responsible for its own formatting, dependencies, and resources.


The following _header.html file adds a simple banner message alongside the LabKey logo. Create a new .html file in the location described above, and use this content.

<p align="center" style="color:white">
<span class="fa fa-flag"></span>&nbsp;&nbsp; Custom Header Appears Here &nbsp;&nbsp;<span class="fa fa-flag"></span>

In this example, the header file was placed in the resources/views directory of the "HelloWorld" module created in our module development tutorial. If _header.html files are defined in several modules, you can select which one to display within the admin console.

  • Go to (Admin) > Site > Admin Console.
  • Click Admin Console Links.
  • Under Premium Features, click Configure Header.
  • Check the box to Show HTML header.
  • Select the module in which the header is defined.
  • Click Save.
The new header will appear at the top of every page.

Related Topics

Modules: SQL Scripts

LabKey includes a database schema management system that module writers use to automatically install and upgrade schemas on the servers that deploy their modules, providing convenience and reliability to server admins. Module writers should author their SQL scripts carefully, test them on multiple databases, and follow some simple rules to ensure compatibility with the script runner. Unlike most code bugs, a SQL script bug has the potential to destroy data and permanently take down a server. We suggest reading this page completely before attempting to write module SQL scripts. If you have any questions, please contact the LabKey team.

If your module is checked in to LabKey's subversion repository, or your module has the potential to be installed on additional servers (including by other developers), you should be especially conscious of updates to SQL scripts. Once a script has been checked in to LabKey's repository or run by another instance of LabKey, it is a good guideline to consider it immutable. If a table needs to be altered, no matter how trivial the change, a new upgrade script should normally be used. This is because if another server installs or upgrades using this script, it will not be re-run; if the script is then edited, the other machine can be left with an incomplete schema, which can easily result in errors downstream or on subsequent updates.

For scripts checked in to LabKey's subversion repository, be aware that other developers and LabKey's testing servers routinely run all checked-in scripts, and it is very easy for problems to arise from inappropriately changed scripts. See the Hints and Advanced Topics section below for ways to make this process easier.

For the core LabKey Server modules, version numbers correspond with the name of the release. For example, modules will show that they are at 1.10 for the 1.1 release, 1.20 for the 1.2 release, etc. This is just a convention; there is no requirement that each module be at the same version number as other modules.

Note that module-based SQL scripts for assay types are not supported.

SQL Script Manager

You must name your SQL scripts correctly and update your module versions appropriately; otherwise your scripts might not run at all, might get skipped, or might run in the wrong order. The LabKey SQL Script Manager gets called when a new version of a module is installed. Specifically, a module gets updated at startup time if (and only if) the version number listed for the module in the database is less than the current version in the code. The module version in the database is stored in core.Modules; the module version in code is returned by the getVersion() method in each Module class (Java module) or listed in module.properties (file-based module).

Rule #1: The module version must be bumped to get any scripts to run.

When a module is upgraded, the SQL Script Manager automatically runs the appropriate scripts to upgrade to the new schema version. It determines which scripts to run based on the version information encoded in the script name. The scripts are named using the following convention: <dBschemaName>-<fromVersion #.00>-<toVersion #.00>.sql

Rule #2: Use the correct format when naming your scripts; anything else will get ignored.

Use dashes, not underscores. Use two (or three, if required) decimal places for version numbers (0.61, 1.00, 12.10). We support three decimal places for very active modules, those that need more than 10 incremental scripts per point release. But most modules should use two decimal places.

Some examples:

  • foo-0.00-1.00.sql: Upgrades foo schema from version 0.00 to 1.00
  • foo-1.00-1.10.sql: Upgrades foo schema from version 1.00 to 1.10
  • foo-1.10-1.20.sql: Upgrades foo schema from version 1.10 to 1.20
  • foo-0.00-1.20.sql: Upgrades foo schema from version 0.00 to 1.20
(Note that the schema produced by running the first three scripts above should be the same as the schema produced by running the fourth script alone.)

The script directories can have many incremental & full scripts to address a variety of upgrade scenarios. The SQL Script Manager follows a specific algorithm when determining which script(s) to run for an upgrade. This is what it does:

  • Determine installed module version number ("old") and new module version number ("new").
  • Find all scripts in the directory that start at or above "old" and end at or below "new". Eliminate any scripts that have already been run on this database (see the core.SqlScripts table).
  • Of these scripts, find the script(s) with the lowest "from" version. If there's just a single script with this "from" version, pick it. If there are more than one, pick the script with the highest "to" version.
  • Run that script. Now the schema has been updated to the "to" version indicated in the script just run.
  • Determine if more scripts need to be run. To do this, treat the "to" version of the script just run as the currently installed version.
  • Repeat all the steps above (create list of scripts in the new range, eliminate previously run scripts, choose the script with the lowest starting point having the greatest range, and run it) until there are no more scripts left.
A few scenarios based on the "foo" example above may help clarify the process:

Installed Module Version | New Module Version | Script(s) Run
0.00 (not installed) | 1.10 | foo-0.00-1.00.sql, foo-1.00-1.10.sql
0.00 (not installed) | 1.20 | foo-0.00-1.20.sql
1.00 | 1.20 | foo-1.00-1.10.sql, foo-1.10-1.20.sql
1.11 | 1.20 | None of these scripts
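The selection algorithm can be sketched as follows (an illustrative model, not LabKey's implementation; script names are parsed as <schema>-<from>-<to>.sql):

```python
# Illustrative model of the SQL Script Manager's selection algorithm; not LabKey code.
def parse(script):
    # "foo-1.00-1.10.sql" -> (1.00, 1.10)
    schema, frm, to = script[:-len(".sql")].rsplit("-", 2)
    return float(frm), float(to)

def select_scripts(scripts, installed, target, already_run=()):
    """Return, in order, the scripts the manager would run for this upgrade."""
    chosen = []
    current = installed
    while True:
        # Scripts in range that have not already been run on this database.
        candidates = [s for s in scripts
                      if s not in already_run and s not in chosen
                      and parse(s)[0] >= current and parse(s)[1] <= target]
        if not candidates:
            return chosen
        # Lowest "from" version wins; ties go to the highest "to" version.
        lowest = min(parse(s)[0] for s in candidates)
        best = max((s for s in candidates if parse(s)[0] == lowest),
                   key=lambda s: parse(s)[1])
        chosen.append(best)
        current = parse(best)[1]   # treat the "to" version as now installed
```

The scenarios in the table above can be used as checks on the model.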

Rule #3: Name your script as starting at the current module version in code.

This rule is important, but easily forgotten. If the most recent script in a directory is "foo-0.90-1.00.sql" and the new module version will be 2.00, it may be tempting to name the new script "foo-1.00-2.00.sql". This is almost certainly a mistake. What matters is the module version in code, not the ending version of the last script. The module number in code gets bumped for a variety of reasons (e.g., for a major release, for other schemas, or to force after-schema-update code to run), so a script that starts where the last script left off will probably never run. You must look at the current module version in code instead. There will be "gaps" in the progression; this is expected and normal.

If you're creating a new incremental script, here is a (nearly) foolproof set of steps that will produce a correct script name for module "Foo" that uses schema "foo":

  • Finalize and test your script contents.
  • Do an svn update to get current on all files. This ensures that no one else has bumped the version or checked in an incremental script with the same name.
  • Find the current version number returned by the FooModule getVersion() method. Let's say it's 1.02.
  • Name your script "foo-1.02-1.03.sql". (Incrementing by 0.01 gives you room to get multiple schema changes propagated and tested during the development period between major releases.)
  • Bump the version number returned by FooModule.getVersion() to 1.03.
  • Build, test, and commit your changes.
Everyone who syncs to your repository (e.g., all the developers on your team, your continuous integration server) will update, build, start their servers, and automatically run your upgrade script, resulting in Foo module version 1.03 successfully installed (unless you make a mistake… in which case you get to fix their database). After your commit there's no going back; you can't change scripts once they've been run. Instead, you must check in a new incremental that produces the appropriate changes (or rolls back your changes, etc.).
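The naming step above can be sketched with a small hypothetical helper (not part of LabKey):

```python
# Hypothetical helper: given the current module version in code, produce the
# next incremental script name, incrementing the version by 0.01.
def next_script_name(schema, current_version, step=0.01):
    return "%s-%.2f-%.2f.sql" % (schema, current_version, current_version + step)
```

For the Foo module at version 1.02, next_script_name("foo", 1.02) yields "foo-1.02-1.03.sql", matching the steps above.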

Rule #4: After a release, the next incremental script is still a point version of the release

Just before releasing a new version of LabKey Server, the LabKey team "rolls up" each module's incremental scripts into a single script for that release (e.g., for the release of version 1.1, foo-1.00-1.01.sql, foo-1.01-1.02.sql, and foo-1.02-1.03.sql get concatenated into foo-1.00-1.10.sql). This cleans things up a bit and reduces the number of script files, but it isn't required at all. The critical step is to get the incremental script right; you only get one chance for that.

The LabKey team will also bump all module versions to match the release; foo would now be version 1.10. The next script, intended for the 1.20 release, would be foo-1.10-1.11.sql. Never bump the module version past the in-progress LabKey release. (e.g., if you get up to foo-1.18-1.19.sql before the 1.20 release and still need another script, it would be foo-1.19-1.191.sql)

If you're testing an extensive schema change you may want to check in a script but not have it run on other developers' machines yet. This is simple; check in the script but don't bump the version number in code. When you're done testing, bump the version and everyone will upgrade.

The above guidelines eliminate most, but not all, problems with script naming. In particular, if multiple developers are working on the same module they must coordinate with each other to ensure scripts don't conflict with each other.

Remember that all scripts adhere to the module version number progression. If a single module manages multiple database schemas you must be extra careful about rule #3 and plan to see many gaps between each schema's script files.

Hints and Advanced Topics

  • Modules are upgraded in dependency order, which allows schemas to safely depend on each other.
  • Modules can (optionally) include two special scripts for each schema: <schema>-create.sql and <schema>-drop.sql. The drop script is run before all module upgrades and the create script is run after that schema's scripts are run. The primary purpose is to create and drop SQL views in the schema. The special scripts are needed because some databases don't allow modifying tables that are used in views. So LabKey drops all views, modifies the schema, and re-creates all views on every upgrade.
  • Java upgrade code. Some schema upgrades require code. One option is to implement and register a class in your module that implements UpgradeCode and invoke its methods from inside a script via the core.executeJavaUpgradeCode stored procedure. This works well for self-contained code that assumes a particular schema structure; the code is run once at exactly the right point in your upgrade sequence.
  • After schema update. Another option for running Java code is to call it from the Module afterUpdate() method. This can be useful if the upgrade code needs to call library methods that change based on the current schema. Be very careful here; the schema could be in a completely unknown state (if the server hasn't upgraded in a while then your code could execute after two years of future upgrade scripts have run).
  • bootstrap. On a developer machine: shut down your server, run "gradlew bootstrap", and restart your server to initiate a full bootstrap on your currently selected database server. This is a great way to test SQL scripts on a clean install. Use "gradlew pickPg" and "gradlew pickMSSQL" to test against the other database server.
  • The Admin Console provides other helpful tools. The "Sql Scripts" link shows all scripts that have run and those that have not run on the current server. From there, you can choose to "Consolidate Scripts" (e.g., rolling up incremental scripts into version upgrade scripts or creating bootstrap scripts, <schema>-0.00-#.00.sql). While viewing a script you have the option to "Reorder" the script, which attempts to parse and reorder all the statements to group all modifications to each table together. This can help streamline a script (making redundant or unnecessary statements more obvious), but is recommended only for advanced users.
  • In addition to these scripts, if you want to specify metadata properties on the schema, such as Caption, Display Formats, etc, you will need to create a schema XML file. This file is located in the /scripts folder of your module. There is one XML file per schema. This file can be auto-generated for an existing schema. To get an updated XML file for an existing schema, go to the Admin Console then pick 'Check Database'. There will be a menu to choose the schema and download the XML. If you are a Site Administrator, you can use a URL along these lines directly: http://localhost:8080/labkey/admin/getSchemaXmlDoc.view?dbSchema=<yourSchemaName>. Simply replace the domain name & port with the correct values for your server. Also put the name of your schema after 'dbSchema='. Note: Both the schema XML file name and 'dbSchema=' value are case-sensitive. They must match the database schema name explicitly.
  • LabKey offers automated tests that will compare the contents of your schema XML file with the actual tables present in the DB. To run this test, visit a URL similar to: http://localhost:8080/labkey/junit/begin.view?, but substitute the correct domain name and port. Depending on your server configuration, you may also need to omit "/labkey" if labkey is run as the root webapp. This page should give a list of all Junit test. Run the test called "org.labkey.core.admin.test.SchemaXMLTestCase".
  • Schema delete. When developing a new module, schemas can change rapidly. During initial development, it may be useful to completely uninstall / reinstall a module in order to rebuild the schema from scratch, rather than make changes via a large number of incremental scripts. Uninstalling a module requires several steps: drop the schema, delete the entry in the core.Modules table, delete all the associated rows in the core.SqlScripts table. The "Module Details" page (from the Admin Console) provides a quick way to uninstall a module; when your server is restarted, the module will be reinstalled and the latest scripts run. Use extreme caution… deleting a schema or module should only be done on development machines. Also note that while this is useful for development, see warnings above about editing scripts once checked into subversion and/or otherwise made available to other instances of LabKey.

Script Conventions

The conventions below are designed to help everyone write better scripts. They 1) allow developers to review and test each other's scripts and 2) produce schemas that can be changed easily in the future. These conventions have been developed while building, deploying, and changing production LabKey installations over the last eight years; we've learned some lessons along the way.

Databases & Schemas

Most modules support both PostgreSQL and Microsoft SQL Server. LabKey Server uses a single primary database (typically named "labkey") divided into 20 to 30 "schemas" that provide separate namespaces, usually one per module. Note that, in the past, SQL Server used the term "owner" instead of "schema," but that term is being retired.


SQL keywords should be in all caps. This includes SQL commands (SELECT, CREATE TABLE, INSERT), type names (INT, VARCHAR), and modifiers (DEFAULT, NOT NULL).

Identifiers such as table, view, and column names are always initial cap camel case. For example, ProtInfoSources, IonPercent, ZScore, and RunId. Note that we append 'Id' (not 'ID') to identity column names.

We use a single underscore to separate individual identifiers in compound names. For example, a foreign key constraint might be named 'FK_BioSource_Material'. More on this below.

Constraints & Indexes

Do not use the PRIMARY KEY modifier on a column definition to define a primary key. Do not use the FOREIGN KEY modifier on a column definition to define a foreign key. Doing either will cause the database to create a random name that will make it very difficult to drop or change the index in the future. Instead, explicitly declare all primary and foreign keys as table constraints after defining all the columns. The SQL Script Manager will enforce this convention.

  • Primary Keys should be named 'PK_<TableName>'
  • Foreign Keys should be named 'FK_<TableName>_<RefTableName>'. If this is ambiguous (multiple foreign keys between the same two tables), append the column name as well
  • Unique Constraints should be named 'UQ_<TableName>_<ColumnName>'
  • Normal Indexes should be named 'IX_<TableName>_<ColumnName>'
  • Defaults are also implemented as constraints in some databases, and should be named 'DF_<TableName>_<ColumnName>'
  • Almost all columns that have a foreign key, whether explicitly defined as a constraint in the table definition or as a soft foreign key, should have an associated index for performance reasons.
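Putting these naming conventions together, a table definition in a script might look like the following sketch (the schema, table, and column names here are hypothetical, not from an actual LabKey module):

```sql
-- Hypothetical example illustrating the constraint and index naming conventions
CREATE TABLE test.BioSource
(
    RowId INT NOT NULL,
    MaterialId INT NOT NULL,
    SourceName VARCHAR(200) NOT NULL,

    -- Explicit table constraints; never rely on generated constraint names
    CONSTRAINT PK_BioSource PRIMARY KEY (RowId),
    CONSTRAINT FK_BioSource_Material FOREIGN KEY (MaterialId) REFERENCES test.Material (RowId),
    CONSTRAINT UQ_BioSource_SourceName UNIQUE (SourceName)
);

-- Index the foreign key column for performance, per the last convention above
CREATE INDEX IX_BioSource_MaterialId ON test.BioSource (MaterialId);
```

Because every constraint has a predictable name, a later upgrade script can drop or alter it without querying the database's system catalogs first.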

Keep Your SQL as Database-Independent as Possible

You may prefer using PostgreSQL over SQL Server (or vice versa), but don't forget about the other database: write your scripts to work with both databases and you'll save yourself many headaches. Test your scripts on both databases.

  • NVARCHAR is preferred for almost all text-based columns on SQL Server since it allows extended characters.

Statement Endings

Every statement should end with a semicolon, on both PostgreSQL and SQL Server. In older versions of SQL Server, "GO" statements needed to be interjected frequently within SQL scripts. They are rarely needed now, except in a few isolated cases:

  • After creating a new user-defined type (sp_addtype), which is rare
  • Before and after a stored procedure definition; SQL Server requires each stored procedure definition to be executed in its own block
  • After a DROP and re-CREATE
  • After an ALTER statement, if the altered object is referenced later in the scripts
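The stored procedure case can be sketched as follows (the procedure name and body are hypothetical); the surrounding GO statements isolate the definition in its own batch, as SQL Server requires:

```sql
GO
CREATE PROCEDURE test.ResetSourceNames AS
BEGIN
    -- Hypothetical body; the point is the GO before and after the definition
    UPDATE test.BioSource SET SourceName = UPPER(SourceName);
END;
GO
```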

Scripting from SQL Server

It is often convenient to create SQL Server objects or data via visual tools first, and then have SQL Server generate the correct CREATE, INSERT, etc. scripts. This is fine; however, be aware that the generated script will contain some artifacts that should be removed before committing your upgrade script:

  • A "USE database name" statement at the top. The database name in other deployments will likely differ from your database name, which will break the script.
  • SET ANSI_NULLS ON and SET QUOTED_IDENTIFIER ON. These scripts are run on pooled connections, and these settings will remain for the next operation which gets the connection the script ran on.
  • General style recommendations for consistency and to make scripts less verbose:
    • Square brackets around many identifiers (schema, table, and column names). These should only be used if absolutely necessary.
    • Foreign key constraints get created WITH CHECK. This should be removed.
    • Foreign keys will then have a separate statement to CHECK the constraint. This statement should also be removed.
    • Indexes are created with an explicit setting for every option, even though most/all of them will be the default setting. Only necessary options should be included.


Modules: Database Transition Scripts

The schemas directory includes SQL scripts that are run when the module is first loaded. The scripts can define database schema and insert data.

Modules that need to store their own data may find it useful to create a new schema and set of related tables in the relational database used by LabKey Server. Modules can transition schemas between versions by including database transition scripts.

Generate a schema

You can generate a basic version of the schema file for an existing schema by navigating to a URL of this form:

http://localhost:8080/labkey/admin/getSchemaXmlDoc.view?dbSchema=<yourSchemaName>
Save the result to the schemas/<schema-name>.xml file in your module.

Store schema transition scripts

Schema transition scripts should live in the schemas/dbscripts/<db-type>/ directory of your module. Currently, the following database types are supported:

Database Type           Directory
PostgreSQL              schemas/dbscripts/postgresql/
Microsoft SQL Server    schemas/dbscripts/sqlserver/

The name of the script is important. Each script in this directory moves the database schema from one version of your module to another. The name of the file indicates which versions the script will transition from and to. The general format is <schemaname>-<oldversion>-<newversion>.sql. For more details about how these scripts work, see Modules: SQL Scripts.

For example, to create a new schema with some tables for your module (which we have assigned a version number of 1.0) on a PostgreSQL database, you would create a new SQL script file in the following location:

schemas/dbscripts/postgresql/<schemaname>-0.00-1.00.sql
Your schema name can be anything that does not conflict with any existing schema name, so it's generally best for your schema to be named the same as your module.

When a new version of a module appears, the server will restart and, during initialization, execute any relevant database scripts. Once the scripts to bring the module to version 1.0 have been executed, the module will report its version as 1.0, and those scripts will not be run again. If you need to make changes to your database schema, bump your module version to 1.1 and create a new SQL script that transitions the database schema from version 1.0 to 1.1. The file name for that would be:

schemas/dbscripts/postgresql/<schemaname>-1.00-1.10.sql
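The contents of such an upgrade script are ordinary DDL statements. For example, a hypothetical 1.0-to-1.1 script that adds one column (the schema, table, and column names are illustrative) might contain just:

```sql
-- Hypothetical upgrade: add a Notes column introduced in module version 1.1
-- (this syntax works on both PostgreSQL and SQL Server)
ALTER TABLE myschema.Subject ADD Notes VARCHAR(4000);
```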
Related Topics

See Modules: SQL Scripts, which describes these files in detail.

Modules: Domain Templates

A domain template is an XML file, included in a module, that specifies the shape of a Domain, for example, a List, SampleSet, or DataClass. An example template XML file can be found in our test module:

test/modules/simpletest/resources/domain-templates/todolist.template.xml

A domain template includes:

  • a name
  • a set of columns
  • an optional set of indices (to add a uniqueness constraint)
  • an optional initial data file to import upon creation
  • domain specific options (e.g., for SampleSet, the list of columns that make the Name column unique)
The XML file corresponds to the domainTemplate.xsd schema.

While not present in the domainTemplate.xsd, a column in a domain template can be marked as "mandatory". The domain editor will not allow removing or changing the name of mandatory columns. For example,


<template xsi:type="ListTemplateType">
  <table tableName="Category" tableDbType="NOT_IN_DB" hidden="true">
    <dat:column columnName="category" mandatory="true"/>
    ...
  </table>
</template>

All domains within a template group can be created from the template via the JavaScript API (LABKEY.Domain.create):

LABKEY.Domain.create({
    domainGroup: "todolist",
    importData: false
});

Or a specific domain:

LABKEY.Domain.create({
    domainGroup: "todolist",
    domainTemplate: "Category",
    importData: false
});

When "importData" is false, the domain will be created but the initial data won't be imported. The importData flag is true by default.

When "createDomain" is false, the domain will not be created, however any initial data will be imported.

A template group typically contains templates with unique names, but it is possible to have templates with the same name but different domain kinds -- for example, a DataClass template and a SampleSet template both named "CellLine". In this situation, you will need to disambiguate which template is intended using a "domainKind" parameter. For example:

LABKEY.Domain.create({
    domainGroup: "biologics",
    domainTemplate: "CellLine",
    domainKind: "SampleSet",
    createDomain: false,
    importData: true
});

Modules: Java

Module Architecture

Deploy Modules

At deployment time, a LabKey module consists of a single .module file. The .module file bundles the webapp resources (static content such as .gif and .jpeg files, JavaScript files, SQL scripts, etc.), class files (inside .jar files), and so forth.

The built .module file should be copied into your /modules directory. This directory is usually a sibling directory to the webapp directory.

At server startup time, LabKey Server extracts the modules so that it can find all the required files. It also cleans up old files that might be left from modules that have been deleted from the modules directory.

Build Modules

The build process for a module produces a .module file and copies it into the deployment directory. The standalone_build.xml file can be used for modules whose source code resides outside the standard LabKey source tree. If you're developing this way, make sure the VM parameter -Dproject.root is not specified, or LabKey won't find the files it loads directly from the source tree in dev mode (such as .sql and .gm files).

The createModule Gradle task will prompt you for the name of a new module and a location on the file system where it should live. It then creates a minimal module that's an easy starting point for development. You can add the .iml file to your IntelliJ project and you're up and running. Use the build.xml file in the module's directory to build it.

Each module is built independently of the others. All modules can see shared classes, like those in API or third-party JARs that get copied into WEB-INF/lib. However, modules cannot see one another's classes. If two modules need to communicate with each other, they must do so through interfaces defined in the LabKey Server API, or placed in a module's own api-src directory. Currently there are many classes that are in the API that should be moved into the relevant modules. As a long-term goal, API should consist primarily of interfaces and abstract classes through which modules talk to each other. Individual modules can place third-party JARs in their lib/ directory.


The LabKey Server build process enforces that modules and other code follow certain dependency rules. Modules cannot depend directly on each other's implementations, and the core API cannot depend on individual modules' code. A summary of the allowed API/implementation dependencies is shown here:

Upgrade Modules

See Upgrade Modules.

Delete Modules

To delete an unused module, delete both the .module file and the expanded directory of the same name from your deployment. The module may be in either the /modules or /externalModules directory.

Getting Started with the Demo Module

The LabKey Server source code includes a sample module (the Demo module) for getting started on building your own LabKey Server module using Java. The Demo module demonstrates all the basic concepts you need to understand to extend LabKey Server with your own module. You can use it as a template for building your own module. Or, to create your own module from scratch, see the help topic on creating a new module.

Before you get started, you need to enlist in the version control project to obtain the source code. You will then need to set up your development environment to build the source code.

About the Demo Module

The Demo module is a simple sample module that displays names and ages for some number of individuals. Its purpose is to demonstrate some of the basic data display and manipulation functionality available in LabKey Server.

You can enable the Demo module in a project or folder to try it out:

  • Select (Admin) > Folder > Management.
  • Click the Folder Type tab.
  • Enable the Demo module using the checkbox.
  • Add the Demo Summary web part to your project or folder. A web part is an optional component that can provide a summary of the data contained in your module.
Click the Add Person button to add names and ages. Once you have a list of individuals, you can click on a column heading to sort the list by that column, in ascending or descending order. You can click the Filter icon next to any column heading to filter the list on the criteria you specify. Click Bulk Update to update multiple records at once, and Delete to delete a record.

A Tour of the Demo Module

In the following sections, we'll examine the different files and classes that make up the Demo module.

Take a look at the source code at <labkey-home>\modules. The modules\ directory contains the source code for all of the modules; each subdirectory is an individual module.

The LabKey Server web application uses a model-view-controller (MVC) architecture based on Spring.

You may also want to look at the database component of the Demo module. The Person table stores data for the Demo module.

The Object Model (Person Class)

The Person class comprises the object model for the Demo module. The Person class can be found in the org.labkey.demo.model package (and, correspondingly, in the <labkey-home>\modules\server\demo\src\org\labkey\demo\model directory). It provides methods for setting and retrieving Person data from the Person table. Note that the Person class does not retrieve or save data to the database itself, but only stores in memory data that is to be saved or has been retrieved. The Person class extends the Entity class, which contains general methods for working with objects that are stored as rows in a table in the database.

The Controller File (DemoController Class)

Modules have one or more controller classes, which handle the flow of navigation through the UI for the module. A controller class manages the logic behind rendering the HTML on a page within the module, submitting form data via both GET and POST methods, handling input errors, and navigating from one action to the next.

A Controller class is a Java class that defines individual action classes, all of which are auto-registered with the controller's ActionResolver. Action classes can also be defined outside the controller, in which case they must be registered with the ActionResolver. Action classes are annotated to declare permissions requirements.

The controller for the Demo module, the DemoController class, is located in the org.labkey.demo package (that is, in <labkey-home>\server\modules\demo\src\org\labkey\demo). If you take a look at some of the action classes in the DemoController class, you can see how the controller manages the user interface actions for the module. For example, the BeginAction in the DemoController displays data in a grid format. It doesn't write out the HTML directly, but instead calls other methods that handle that task. The InsertAction class displays a form for inserting new Person data when GET is used and calls the code that handles the database insert operation when POST is used.

A module's controller class should extend the SpringActionController class, LabKey's implementation of the Spring Controller class.

The primary controller for a module is typically named <module-name>Controller.

The Module View

The module controller renders the module user interface and also handles input from that user interface. Although you can write all of the necessary HTML from within the controller, we recommend that you separate the user interface from the controller in most cases and use the LabKey Server rendering code to display blocks of HTML. LabKey Server primarily uses JSP templates to render the module interface.

The bulkUpdate.jsp File

The bulkUpdate.jsp file displays an HTML form that users can use to update more than one row of the Person table at a time. BulkUpdateAction renders the bulkUpdate.jsp file and accepts posts from that HTML form. The data submitted by the user is passed to handlePost() as values on an object of type BulkUpdateForm. The form values are accessible via getters and setters on the BulkUpdateForm class that are named to correspond to the inputs on the HTML form.
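As a sketch of what such a form bean looks like (the field names here are hypothetical, not the actual BulkUpdateForm source), each HTML input name corresponds to a getter/setter pair on the bean:

```java
// Hypothetical Spring-style form bean: an HTML input named "name" is bound
// through setName(), and an input named "age" through setAge().
public class PersonUpdateForm
{
    private String name;
    private int age;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}
```

Because the binding is driven purely by property names, adding a new input to the JSP only requires adding a matching getter/setter pair to the form class.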

The bulkUpdate.jsp file provides one example of how you can create a user interface to your data within your module. Keep in mind that you can take advantage of a lot of the basic data functionality that is already built into LabKey Server, described elsewhere in this section, to make it easier to build your module. For example, the DataRegion class provides an easy-to-use data grid with built-in sorting and filtering.

The DemoWebPart Class

The DemoWebPart class is located in the org.labkey.demo.view package. It comprises a simple web part for the demo module. This web part can be displayed only on the Portal page. It provides a summary of the data that's in the Demo module by rendering the demoWebPart.jsp file. An object of type ViewContext stores in-memory values that are also accessible to the JSP page as it is rendering.

The web part class is optional, although most modules have a corresponding web part.

The demoWebPart.jsp File

The demoWebPart.jsp file displays Person data on an HTML page. The JSP retrieves data from the ViewContext object in order to render that data in HTML.

The Data Manager Class (DemoManager Class)

The data manager class contains the logic for operations that a module performs against the database, including retrieving, inserting, updating, and deleting data. It handles persistence and caching of objects stored in the database. Although database operations can be called from the controller, as a design principle we recommend separating this layer of implementation from the navigation-handling code.

The data manager class for the Demo module, the DemoManager class, is located in the org.labkey.demo package. Note that the DemoManager class makes calls to the LabKey Server table layer, rather than making direct calls to the database itself.

The Module Class (DemoModule Class)

The DemoModule class is located in the org.labkey.demo package. It extends the DefaultModule class, which is an implementation of the Module interface. The Module interface provides generic functionality for all modules in LabKey Server and manages how the module plugs into the LabKey Server framework and how it is versioned.

The only requirement for a module is that it implement the Module interface. However, most modules have additional classes like those seen in the Demo module.

The Schema Class (DemoSchema Class)

The DemoSchema class is located in the org.labkey.demo package. It provides methods for accessing the schema of the Person table associated with the Demo module. This class abstracts schema information for this table, so that the schema can be changed in just one place in the code.

Database Scripts

The <labkey-home>\server\modules\demo\webapp\demo\scripts directory contains two subdirectories, one for PostgreSQL and one for Microsoft SQL Server. These directories contain functionally equivalent scripts for creating the Person table on the respective database server.

Note that there are a set of standard columns that all database tables in LabKey Server must include. These are:

  • _ts: the timestamp column
  • RowId: an autogenerated integer field that serves as the primary key
  • CreatedBy: a user id
  • Created: a date/time column
  • ModifiedBy: a user id
  • Modified: a date/time column
  • Owner: a user id
Additionally, the CREATE TABLE statement creates the columns that are unique to the Person table and adds the constraint that enforces the primary key.
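A sketch of such a CREATE TABLE statement, following the conventions described earlier (types shown in PostgreSQL flavor, and the Person-specific columns are illustrative; the actual Demo module scripts are the authoritative version):

```sql
CREATE TABLE demo.Person
(
    -- Standard LabKey columns
    _ts TIMESTAMP DEFAULT now(),
    RowId SERIAL,            -- autogenerated integer (IDENTITY on SQL Server)
    CreatedBy INT,
    Created TIMESTAMP,
    ModifiedBy INT,
    Modified TIMESTAMP,
    Owner INT,

    -- Columns specific to the Person table (hypothetical definitions)
    Name VARCHAR(100) NOT NULL,
    Age INT,

    CONSTRAINT PK_Person PRIMARY KEY (RowId)
);
```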

Tutorial: Hello World Java Module

This topic is under construction.

This tutorial shows you how to create a new Java module and deploy it to the server.

Create a Java Module

The createModule Gradle task
The Gradle task called createModule makes it easy to create a template Java module with the correct file structure and template Controller classes. We recommend using it instead of trying to copy an existing module, as renaming a module requires editing and renaming many files.

Invoke createModule

> gradlew createModule

It will prompt you for the following 5 parameters:

  1. The module name. This should be a single word (or multiple words concatenated together), for example MyModule, ProjectXAssay, etc. This is the name used in the Java code, so you should follow Java naming conventions.
  2. A directory in which to put the files.
  3. Will this module create and manage a database schema? (Y/n)
  4. Create test stubs (y/N)
  5. Create API stubs (y/N)
Enter the following parameters:

1. "JavaTutorial" 
2. "C:\dev\labkey\trunk\externalModules\JavaTutorial" (depending on where your externalModules directory is)
3. Y
4. N
5. N

The JavaTutorial dir will be created, and the following resources added to it:

├── resources
│   ├── schemas
│   │   ├── dbscripts
│   │   │   ├── postgresql
│   │   │   │   └── javatutorial-<0.00>-<currentver>.sql
│   │   │   └── sqlserver
│   │   │       └── javatutorial-<0.00>-<currentver>.sql
│   │   └── javatutorial.xml
│   └── web
└── src
    └── org
        └── labkey
            └── JavaTutorial
                └── view
                    └── hello.jsp

ContainerListener class
Use this to define actions on the container, such as what happens when the container is moved.

Controller class
This is a subclass of SpringActionController that links requests from a browser to code in your application.

Manager class
In LabKey Server, the Manager classes encapsulate much of the business logic for the module. Typical examples include fetching objects from the database, inserting, updating, and deleting objects, and so forth.

Module class
This is the entry point for LabKey Server to talk to your module. Exactly one instance of this class will be instantiated. It allows your module to register providers that other modules may use.

Schema class
Schema classes provide places to hook in to the LabKey Server Table layer, which provides easy querying of the database and object-relational mapping.

Schema XML file
This provides metadata about your database tables and views. In order to pass the developer run test (DRT), you must have entries for every table and view in your database schema. To regenerate this XML file, see Modules: Database Transition Scripts. For more information about the DRT, see Check in to the Source Project.

web directory
All static web content that will be served by Tomcat should go into this directory. These items typically include things like .gif and .jpg files. The contents of this directory will be combined with the other modules' webapp content, so we recommend adding content in a subdirectory to avoid file name conflicts.

.sql files
These files are the scripts that create and update your module's database schema. They are automatically run at server startup time. See Modules: SQL Scripts for details on how to create and modify database tables and views. LabKey Server supports these scripts for PostgreSQL and Microsoft SQL Server.

module.properties file
At server startup time, LabKey Server uses this file to determine your module's name, class, and dependencies.

Build and Deploy the Java Module

1. In the root directory of the module, add a build.gradle file with this content:

apply plugin: 'java'
apply plugin: 'org.labkey.module'

2. Add the module to your settings.gradle file:

include ':externalModules:JavaTutorial'

3. In IntelliJ, on the Gradle tab, sync by clicking the Refresh button. For details see Gradle: How to Add Modules.

4. To build the module, use the targeted Gradle task:

gradlew :externalModules:JavaTutorial:deployModule

Or use the main deployApp target, which will rebuild all modules.

You can run these Gradle tasks via the command line, or using IntelliJ's Gradle tab.

Either task will compile your Java files and JSPs, package all code and resources into a .module file, and deploy it to your local server.

5. Enable the module in a LabKey folder. The BeginAction (the "home page" for the module) is available at:

or go to (Admin) > Go To Module > JavaTutorial

Add a Module API

A module may define its own API, which is available to the implementations of other modules; the createModule task described above will offer to create the required files for you. We declined this step in the tutorial above. If you answer yes to the API and test options, the following files and directories are created:


├───api-src
│   └───org
│       └───labkey
│           └───api
│               └───javatutorial
├───resources
│   ├───schemas
│   │   │   javatutorial.xml
│   │   │
│   │   └───dbscripts
│   │       ├───postgresql
│   │       │       javatutorial-0.00-19.01.sql
│   │       │
│   │       └───sqlserver
│   │               javatutorial-0.00-19.01.sql
│   │
│   └───web
│       └───org
│           └───labkey
│               └───javatutorial
└───src
    └───org
        └───labkey
            └───javatutorial
                └───view
                        hello.jsp


To add an API to an existing module:

  • Create a new api-src directory in the module's root.
  • In IntelliJ, refresh the gradle settings:
    • Go to the Gradle tab
    • Find the Gradle project for your module in the listing, like ':server:externalModules:JavaTutorial'
    • Right click and choose Refresh Gradle project from the menu.
  • Create a new package under your api-src directory, "org.labkey.MODULENAME.api" or similar.
  • Add Java classes to the new package, and reference them from within your module.
  • Add a module dependency to any other modules that depend on your module's API.
  • Develop and test.
  • Commit your new Java source files.


The LabKey Server Container

Data in LabKey Server is stored in a hierarchy of projects and folders which looks similar to a file system, although it is actually managed by the database. The Container class represents a project or folder in the hierarchy.

The Container on the URL

The container hierarchy is always included in the URL. For example, the URL below shows that it is in the /Documentation folder:

The getExtraPath() method of the ViewURLHelper class returns the container path from the URL. On the Container object, the getPath() method returns the container's path.

The Root Container

LabKey Server also has a root container which is not apparent in the user interface, but which contains all other containers. When you are debugging LabKey Server code, you may see the Container object for the root container; its name appears as "/".

In the core.Containers table in the LabKey Server database, the root container has a null value for both the Parent and Name fields.

You can use the isRoot() method to determine whether a given container is the root container.

Projects Versus Folders

Given that they are both objects of type Container, projects and folders are essentially the same at the level of the implementation. A project will always have the root container as its parent, while a folder's parent will be either a project or another folder.

You can use the isProject() method to determine whether a given container is a project or a folder.

Useful Classes and Methods

Container Class Methods

The Container class represents a given container and persists all of the properties of that container. Some of the useful methods on the Container class include:

  • getName(): Returns the container name
  • getPath(): Returns the container path
  • getId(): Returns the GUID that identifies this container
  • getParent(): Returns the container's parent container
  • hasPermission(user, perm): Returns a boolean indicating whether the specified user has the given level of permissions on the container
The ContainerManager Class

The ContainerManager class includes a number of static methods for managing containers. Some useful methods include:

  • create(container, string): Creates a new container
  • delete(container): Deletes an existing container
  • ensureContainer(string): Checks to make sure the specified container exists, and creates it if it doesn't
  • getForId(): Returns the container with this EntityId (a GUID value)
  • getForPath(): Returns the container with this path
The ViewController Class

The controller class in your LabKey Server module extends the ViewController class, which provides the getContainer() method. You can use this method to retrieve the Container object corresponding to the container in which the user is currently working.

Implementing Actions and Views

The LabKey platform includes a generic infrastructure for implementing your own server actions and views.

Actions are the "surface area" of the server: everything you invoke on the server, whether a view on data or a manipulation of data, is some action or set of actions. An Action is implemented using the Model-View-Controller paradigm, where:

  • the Model is implemented as one or more Java classes, such as standard JavaBean classes
  • the View is implemented as JSPs, or other technologies
  • the Controller is implemented as Java action classes
Forms submitted to an action are bound to the JavaBean classes by the Spring framework.

Views are typically implemented in parent-child relationships, such that a page is built from a template view that wraps one or more body views. Views often render other views, for example, one view per pane or a series of similar child views. Views are implemented using a variety of different rendering technologies; if you look at the subclasses of HttpView and browse the existing controllers you will see that views can be written using JSP, GWT, out.print() from Java code, etc. (Note that most LabKey developers write JSPs to create new views. The JSP syntax is familiar and supported by all popular IDEs, JSPs perform well, and type checking & compilation increase reliability.)

Action Life Cycle

What happens when you submit to an Action in LabKey Server? The typical life cycle looks like this:

  • ViewServlet receives the request and directs it to the appropriate module.
  • The module passes the request to the appropriate Controller which then invokes the requested action.
  • The action verifies that the user has permission to invoke it in the current folder. (If the user is not assigned an appropriate role in the folder then the action will not be invoked.) Action developers typically declare required permissions via a @RequiresPermission() annotation.
  • The Spring framework instantiates the form bean associated with the action and "binds" parameter values to it. In other words, it matches URL parameter names to bean property names; for each match, it converts the parameter value to the target data type, performs basic validation, and sets the property on the form by calling the setter.
  • The Controller now has data, typed and validated, that it can work with. It performs the action, and typically redirects to a results page, confirmation page, or back to the same page.
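The binding step above can be illustrated with a small self-contained sketch. This reflection-based binder is a toy stand-in for Spring's real data binder, and PersonForm is a hypothetical bean; it shows how parameter names map to setters and how values are converted to the target type:

```java
import java.lang.reflect.Method;
import java.util.Map;

public class BinderSketch
{
    // Hypothetical form bean with standard JavaBean setters
    public static class PersonForm
    {
        private String name;
        private int age;
        public void setName(String name) { this.name = name; }
        public String getName() { return name; }
        public void setAge(int age) { this.age = age; }
        public int getAge() { return age; }
    }

    // Toy binder: match each URL parameter name to a setter, convert the
    // value to the setter's parameter type, and invoke it -- roughly what
    // Spring's data binder does for an action's form bean.
    public static void bind(Object bean, Map<String, String> params)
    {
        try
        {
            for (Map.Entry<String, String> e : params.entrySet())
            {
                String setter = "set" + Character.toUpperCase(e.getKey().charAt(0)) + e.getKey().substring(1);
                for (Method m : bean.getClass().getMethods())
                {
                    if (m.getName().equals(setter) && m.getParameterCount() == 1)
                    {
                        Class<?> type = m.getParameterTypes()[0];
                        Object value = (type == int.class) ? Integer.parseInt(e.getValue()) : e.getValue();
                        m.invoke(bean, value);
                    }
                }
            }
        }
        catch (ReflectiveOperationException ex)
        {
            throw new RuntimeException(ex);
        }
    }
}
```

The real binder also reports conversion failures through the Errors object rather than throwing, which is why actions receive a BindException alongside the populated form.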

Example: Hello World JSP View

The following action takes a user to a static "Hello World" JSP view.


<%= h("Hello, World!") %>


// If the user does not have Read permissions, the action will not be invoked.
@RequiresPermission(ReadPermission.class)
public class HelloWorldAction extends SimpleViewAction
{
    public ModelAndView getView(Object o, BindException errors)
    {
        JspView view = new JspView("/org/labkey/javatutorial/view/helloWorld.jsp");
        view.setTitle("Hello World");
        return view;
    }

    public NavTree appendNavTrail(NavTree root)
    {
        return root;
    }
}

The HelloWorld Action is called with this URL:

Example: Submitting Forms to an Action

The following action processes a form submitted by the user.


This JSP is for submitting posts, and displaying responses, on the same page:

<%@ taglib prefix="labkey" uri="" %>
<%@ page import="org.labkey.api.view.HttpView"%>
<%@ page import="org.labkey.javatutorial.JavaTutorialController" %>
<%@ page import="org.labkey.javatutorial.HelloSomeoneForm" %>
<%@ page extends="org.labkey.api.jsp.JspBase" %>
<%
    HelloSomeoneForm form = (HelloSomeoneForm) HttpView.currentModel();
%>
<labkey:errors />
<labkey:form method="POST" action="<%=urlFor(JavaTutorialController.HelloSomeoneAction.class)%>">
    <h2>Hello, <%=h(form.getName()) %>!</h2>
    <table width="100%">
        <tr>
            <td class="labkey-form-label">Who do you want to say 'Hello' to next?: </td>
            <td><input name="name" value="<%=h(form.getName())%>"></td>
            <td><labkey:button text="Go" /></td>
        </tr>
    </table>
</labkey:form>

Action for handling posts:

// If the user does not have Read permissions, the action will not be invoked.
@RequiresPermission(ReadPermission.class)
public class HelloSomeoneAction extends FormViewAction<HelloSomeoneForm>
{
    public void validateCommand(HelloSomeoneForm form, Errors errors)
    {
        // Do any error handling here
    }

    public ModelAndView getView(HelloSomeoneForm form, boolean reshow, BindException errors) throws Exception
    {
        return new JspView<>("/org/labkey/javatutorial/view/helloSomeone.jsp", form, errors);
    }

    public boolean handlePost(HelloSomeoneForm form, BindException errors) throws Exception
    {
        return true;
    }

    public ActionURL getSuccessURL(HelloSomeoneForm form)
    {
        // Redirect back to the same action, adding the submitted value to the URL.
        ActionURL url = new ActionURL(HelloSomeoneAction.class, getContainer());
        url.addParameter("name", form.getName());

        return url;
    }

    public NavTree appendNavTrail(NavTree root)
    {
        root.addChild("Say Hello To Someone");
        return root;
    }
}

Below is the form used to convey the URL parameter value to the Action class. Note that the form follows a standard JavaBean format. The Spring framework attempts to match URL parameter names to property names in the form. If it finds matches, it interprets the URL parameters according to the data types it finds in the Bean property and performs basic data validation on the values provided on the URL:

package org.labkey.javatutorial;

public class HelloSomeoneForm
{
    private String _name = "World";

    public void setName(String name)
    {
        _name = name;
    }

    public String getName()
    {
        return _name;
    }
}

URL that invokes the action in the home project:

Example: Export as Script Action

This action exports a query as a reusable script, as JavaScript, R, Perl, or SAS. (The action is surfaced in the user interface on a data grid, at Export > Script.)

public static class ExportScriptForm extends QueryForm
{
    private String _type;

    public String getScriptType()
    {
        return _type;
    }

    public void setScriptType(String type)
    {
        _type = type;
    }
}

public class ExportScriptAction extends SimpleViewAction<ExportScriptForm>
{
    public ModelAndView getView(ExportScriptForm form, BindException errors) throws Exception
    {
        return ExportScriptModel.getExportScriptView(QueryView.create(form, errors),
                form.getScriptType(), getPageConfig(), getViewContext().getResponse());
    }

    public NavTree appendNavTrail(NavTree root)
    {
        return null;
    }
}

Example: Delete Cohort

The following action deletes a cohort category from a study (provided it is an empty cohort). It then redirects the user back to the Manage Cohorts page.

public class DeleteCohortAction extends SimpleRedirectAction<CohortIdForm>
{
    public ActionURL getRedirectURL(CohortIdForm form) throws Exception
    {
        CohortImpl cohort = StudyManager.getInstance().getCohortForRowId(getContainer(), getUser(), form.getRowId());
        if (cohort != null && !cohort.isInUse())
            StudyManager.getInstance().deleteCohort(cohort);

        return new ActionURL(CohortController.ManageCohortsAction.class, getContainer());
    }
}

Packaging JSPs

JSPs can be placed anywhere in the src directory, but by convention they are placed in a view directory within the module's source tree.


Implementing API Actions

This page describes how to implement API actions within the LabKey Server controller classes. It is intended for Java developers building their own modules or working within the LabKey Server source code. An API Action is a Spring-based action that derives from one of the abstract base classes:
  • org.labkey.api.action.ReadOnlyApiAction
  • org.labkey.api.action.MutatingApiAction



API actions build upon LabKey’s controller/action design. Each API action extends one of the “API” base classes; the derived action class interacts with the database or other server functionality and returns raw data to the base class, which serializes it into one of LabKey’s supported response formats.

Leveraging the current controller/action architecture provides a range of benefits, particularly:

  • Enforcement of user login for actions that require login, thanks to reuse of LabKey’s existing, declarative security model (@RequiresPermission annotations).
  • Reuse of many controllers’ existing action forms, thanks to reuse of LabKey’s existing Spring-based functionality for binding request parameters to form beans.
Conceptually, API actions are similar to SOAP/RPC calls, but are far easier to use. If the action selects data, the client may simply request the action’s URL, passing parameters on the query string. For actions that change data, the client posts a relatively simple object, serialized into one of our supported formats (for example, JSON), to the appropriate action.

API Action Design Rules

In principle, actions are autonomous, may be named, and can do whatever the controller author wishes. However, in practice, we suggest adhering to the following general design rules when implementing actions:

  • Actions should be named with a verb/noun pair that describes what the action does in a clear and intuitive way (e.g., getQuery, updateList, translateWiki, etc.).
  • Insert, update, and delete of a resource should all be separate actions with appropriate names (e.g., getQuery, updateRows, insertRows, deleteRows), rather than a single action with a parameter to indicate the command.
  • Wherever possible, actions should remain agnostic about the request and response formats. This is accomplished automatically through the base classes, but actions should refrain from reading the post body directly or writing directly to the HttpServletResponse unless they absolutely need to.
  • For security reasons, actions that respond to GET should not mutate the database or otherwise change server state. Actions that change state (e.g., insert, update, or delete actions) should only respond to POST and extend MutatingApiAction.
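The GET/POST rule in the last bullet can be sketched as a tiny guard (illustrative only, not LabKey API): read-only actions may answer GET or POST, while mutating actions must be POST-only.

```java
// Minimal sketch of the rule: mutating actions should refuse HTTP GET.
public class MethodGuardSketch
{
    public static boolean isAllowed(String httpMethod, boolean mutating)
    {
        // Read-only actions: any method; mutating actions: POST only
        return !mutating || "POST".equals(httpMethod);
    }
}
```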

API Actions

An API Action is a Spring-based action that derives from one of the abstract base classes:

  • org.labkey.api.action.ReadOnlyApiAction
  • org.labkey.api.action.MutatingApiAction
API actions do not implement the getView() or appendNavTrail() methods that view actions do; instead, they implement the execute() method. MyForm is a simple bean representing the parameters sent to the action. Actions should usually extend MutatingApiAction. If an action (a) does not update server state and (b) needs to be accessible via HTTP GET requests, it can extend ReadOnlyApiAction instead.

public class GetSomethingAction extends MutatingApiAction<MyForm>
{
    public ApiResponse execute(MyForm form, BindException errors) throws Exception
    {
        ApiSimpleResponse response = new ApiSimpleResponse();

        // Get the resource...
        // Add it to the response...

        return response;
    }
}

JSON Example

A basic API action class looks like this:

public class ExampleJsonAction extends MutatingApiAction<Object>
{
    public ApiResponse execute(Object form, BindException errors) throws Exception
    {
        ApiSimpleResponse response = new ApiSimpleResponse();

        response.put("param1", "value1");
        response.put("success", true);

        return response;
    }
}

A URL like the following invokes the action:

The action returns the following JSON object:

{
    "success" : true,
    "param1" : "value1"
}

Example: Set Display for Table of Contents

public class SetTocPreferenceAction extends MutatingApiAction<SetTocPreferenceForm>
{
    public static final String PROP_TOC_DISPLAYED = "displayToc";

    public ApiResponse execute(SetTocPreferenceForm form, BindException errors)
    {
        // Use the same category as the editor preference to save on storage
        PropertyManager.PropertyMap properties = PropertyManager.getWritableProperties(
                getUser(), getContainer(),
                SetEditorPreferenceAction.CAT_EDITOR_PREFERENCE, true);
        properties.put(PROP_TOC_DISPLAYED, String.valueOf(form.isDisplayed()));
        properties.save();

        return new ApiSimpleResponse("success", true);
    }
}

Execute Method

public ApiResponse execute(FORM form, BindException errors) throws Exception

In the execute method, the action does whatever work it needs to do and responds by returning an object that implements the ApiResponse interface. This ApiResponse interface allows actions to respond in a format-neutral manner. It has one method, getProperties(), that returns a Map<String,Object>. Two implementations of this interface are available: ApiSimpleResponse, which should be used for simple cases; and ApiQueryResponse, which should be used for returning the results of a QueryView.

ApiSimpleResponse has a number of constructors that make it relatively easy to send back simple response data to the client. For example, to return a simple property of “rowsUpdated=5”, your return statement would look like this:

return new ApiSimpleResponse("rowsUpdated", rowsUpdated);

where rowsUpdated is an integer variable containing the number of rows updated. Since ApiSimpleResponse derives from HashMap<String, Object>, you may put as many properties in the response as you wish. A property value may also be a nested Map, Collection, or array.
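Since ApiSimpleResponse behaves like a HashMap<String, Object>, the shape of a response with nested objects can be sketched with a plain HashMap (the property names here are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a response with nested structure: a nested Map serializes as a
// nested JSON object, and a Collection serializes as a JSON array.
public class ResponseSketch
{
    public static Map<String, Object> buildResponse(int rowsUpdated, List<String> warnings)
    {
        Map<String, Object> response = new HashMap<>();
        response.put("success", true);
        response.put("rowsUpdated", rowsUpdated);

        Map<String, Object> details = new HashMap<>(); // nested JSON object
        details.put("warnings", warnings);             // JSON array
        response.put("details", details);
        return response;
    }
}
```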

The mutating or read-only action base classes take care of serializing the response into the appropriate JSON format.

Although nearly all API actions return an ApiResponse object, some actions necessarily need to return data in a specific format, or even binary data. In these cases, the action can use the HttpServletResponse object directly, which is available through getViewContext().getResponse(), and simply return null from the execute method.

Form Parameter Binding

If the request uses a standard query string with a GET method, form parameter binding uses the same code as used for all other view requests. However, if the client uses the POST method, the binding logic depends on the content-type HTTP header. If the header contains the JSON content-type (“application/json”), the API action base class parses the post body as JSON and attempts to bind the resulting objects to the action’s form. This code supports nested and indexed objects via the BeanUtils methods.
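The content-type dispatch described above can be sketched as follows (the method and return values are invented; the real base classes do much more than choose a binding path):

```java
// Sketch of the dispatch rule: a POST whose Content-Type is application/json
// is bound from the parsed JSON body; anything else falls back to standard
// query-parameter binding.
public class ContentTypeDispatchSketch
{
    public static String bindingMode(String httpMethod, String contentType)
    {
        if ("POST".equals(httpMethod) && contentType != null
                && contentType.toLowerCase().startsWith("application/json"))
            return "json-body";
        return "query-parameters";
    }
}
```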

For example, if the client posts JSON like this:

{
    "name": "Lister",
    "address": {
        "street": "Top Bunk",
        "city": "Red Dwarf",
        "state": "Deep Space"
    },
    "categories": ["unwashed", "space", "bum"]
}

The form binding uses BeanUtils to effectively make the following calls via reflection:

form.setName("Lister");
form.getAddress().setStreet("Top Bunk");
form.getAddress().setCity("Red Dwarf");
form.getAddress().setState("Deep Space");
form.getCategories().set(0, "unwashed");
form.getCategories().set(1, "space");
form.getCategories().set(2, "bum");
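Written as compilable Java, the bean shape this binding expects looks like the sketch below. The Address and PersonForm classes are hypothetical, invented to match the posted JSON above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical form bean matching the posted JSON: nested bean for "address",
// indexed List for "categories".
public class NestedFormSketch
{
    public static class Address
    {
        private String _street, _city, _state;
        public void setStreet(String s) { _street = s; }
        public String getStreet() { return _street; }
        public void setCity(String c) { _city = c; }
        public String getCity() { return _city; }
        public void setState(String s) { _state = s; }
        public String getState() { return _state; }
    }

    public static class PersonForm
    {
        private String _name;
        private final Address _address = new Address();
        private final List<String> _categories = new ArrayList<>(Arrays.asList("", "", ""));

        public void setName(String n) { _name = n; }
        public String getName() { return _name; }
        public Address getAddress() { return _address; }
        public List<String> getCategories() { return _categories; }
    }

    // The calls the binder effectively makes, written out literally:
    public static PersonForm bindExample()
    {
        PersonForm form = new PersonForm();
        form.setName("Lister");
        form.getAddress().setStreet("Top Bunk");
        form.getAddress().setCity("Red Dwarf");
        form.getAddress().setState("Deep Space");
        form.getCategories().set(0, "unwashed");
        form.getCategories().set(1, "space");
        form.getCategories().set(2, "bum");
        return form;
    }
}
```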

Where an action must deal with the posted data in a dynamic way (e.g., the insert, update, and delete query actions), the action’s form may implement the CustomApiForm interface to receive the parsed JSON data directly. If the form implements this interface, the binding code simply calls the setJsonObject() method, passing the parsed JSONObject instance, and will not perform any other form binding. The action is then free to use the parsed JSON data as necessary.

Jackson Marshalling (Experimental)

Experimental Feature: Instead of manually unpacking the JSONObject from .getJsonObject() or creating a response JSONObject, you may use Jackson to marshal a Java POJO form and return value. To enable Jackson marshalling, add the @Marshal(Marshaller.Jackson) annotation to your Controller or Action class. When adding the @Marshal annotation to a controller, all actions defined in the Controller class will use Jackson marshalling. For example,

@Marshal(Marshaller.Jackson)
public class ExampleJsonAction extends MutatingApiAction<MyStuffForm>
{
    public Object execute(MyStuffForm form, BindException errors) throws Exception
    {
        // Retrieve the resource from the database
        MyStuff stuff = ...;

        // Instead of creating an ApiResponse or JSONObject, return the POJO
        return stuff;
    }
}

Error and Exception Handling

If an API action adds errors to the errors collection or throws an exception, the base action will return a response with status code 400 and a JSON body using the format below. Clients may then choose to display the exception message or react in any way they see fit. For example, if an error is added to the errors collection for the "fieldName" field of the action's form class with message "readable message", the response will be serialized as:

{
    "success": false,
    "exception": "readable message",
    "errors": [ {
        "id" : "fieldName",
        "msg" : "readable message"
    } ]
}

Integrating with the Pipeline Module

The Pipeline module provides a basic framework for performing analysis and loading data into LabKey Server. It maintains a queue of jobs to be run, delegates them to a machine to perform the work (which may be a remote server, or more typically the same machine that the LabKey Server web server is running on), and ensures that jobs are restarted if the server is shut down while they are running. Other modules can register themselves as providing pipeline functionality, and the Pipeline module will let them indicate the types of analysis that can be done on files, as well as delegate to them to do the actual work.

Integration Points

PipelineProviders let modules hook into the Pipeline module's user interface for browsing through the file system to find files on which to operate. This is always done within the context of a pipeline root for the current folder. The Pipeline module calls updateFileProperties() on all the PipelineProviders to determine what actions should be available. Each module provides its own URL which can collect additional information from the user before kicking off any work that needs to be done.

For example, the org.labkey.api.exp.ExperimentPipelineProvider registered by the Experiment module provides actions associated with .xar and .xar.xml files. It also provides a URL that the Pipeline module associates with the actions. If the user clicks to load a XAR, the user's browser will go to the Experiment module's URL.

PipelineProviders are registered by calling org.labkey.api.pipeline.PipelineService.registerPipelineProvider().

PipelineJobs allow modules to do work relating to a particular piece of analysis. PipelineJobs sit in a queue until the Pipeline module determines that it is their turn to run. The Pipeline module then calls the PipelineJob's run() method. The PipelineJob base class provides logging and status functionality so that implementations can inform the user of their progress.

The Pipeline module attempts to serialize the PipelineJob object when it is submitted to the queue. If the server is restarted while there are jobs in the queue, the Pipeline module will look for all the jobs that were not in the COMPLETE or ERROR state, deserialize the PipelineJob objects from disk, and resubmit them to the queue. A PipelineJob implementation is responsible for restarting correctly if it is interrupted in the middle of processing. This might involve resuming analysis at the point it was interrupted, or deleting a partially loaded file from the database before starting to load it again.
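The restart contract described above can be sketched generically (this is not LabKey API; the class, field, and checkpoint mechanism are invented to show the pattern a job's run() should follow):

```java
import java.util.List;

// Generic sketch of a restart-safe job: checkpoint progress after each step so
// a restarted job resumes where it left off instead of redoing completed work.
public class ResumableJobSketch
{
    private int _lastCompletedStep; // would be persisted with the serialized job

    public int getLastCompletedStep() { return _lastCompletedStep; }

    public void run(List<Runnable> steps)
    {
        // Skip any steps completed before an interruption
        for (int i = _lastCompletedStep; i < steps.size(); i++)
        {
            steps.get(i).run();
            _lastCompletedStep = i + 1; // checkpoint after each completed step
        }
    }
}
```

A real PipelineJob might instead delete partially loaded rows before re-running a step; the essential point is that run() must tolerate being invoked again after an interruption.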

For example, the org.labkey.api.exp.ExperimentPipelineJob provided by the Experiment module knows how to parse and load a XAR file. If the input file is not a valid XAR, it will put the job into an error state and write the reason to the log file.

PipelineJobs do not need to be explicitly registered with the Pipeline module. Other modules can add jobs to the queue using the org.labkey.api.pipeline.PipelineService.queueJob() method.

Pipeline Serialization using Jackson

Pipeline jobs are serialized to JSON using Jackson.

To ensure a pipeline job serializes properly, it needs either:

  • a default constructor (no params), if no member fields are final, OR
  • a constructor annotated with @JsonCreator, with a parameter for each final field annotated with @JsonProperty("<field name>").
If there are member fields that are other classes the developer has created, those classes may need a constructor as specified in the two options above.
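The two constructor shapes above look like the sketch below. The class names are invented, and the Jackson annotations are shown in comments so the sketch compiles without Jackson on the classpath; real classes would import them from com.fasterxml.jackson.annotation:

```java
// Sketch of the two constructor shapes Jackson can use for deserialization.
public class JobStateSketch
{
    // Shape 1: no final member fields, so a default constructor is enough.
    public static class SimpleState
    {
        private String _status;

        public SimpleState() {} // default (no-param) constructor

        public void setStatus(String status) { _status = status; }
        public String getStatus() { return _status; }
    }

    // Shape 2: a final member field requires an annotated constructor:
    //   @JsonCreator on the constructor,
    //   @JsonProperty("description") on the matching parameter.
    public static class FinalFieldState
    {
        private final String _description;

        public FinalFieldState(/* @JsonProperty("description") */ String description)
        {
            _description = description;
        }

        public String getDescription() { return _description; }
    }
}
```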

Developers should generally avoid a member field that is a map with a non-String key. But if needed they can annotate such a member as follows:

@JsonSerialize(keyUsing = ObjectKeySerialization.Serializer.class)
@JsonDeserialize(keyUsing = ObjectKeySerialization.Deserializer.class)
private Map<PropertyDescriptor, Object> _propMap;

Developers should avoid non-static inner classes and circular references.

Note: Prior to release 18.3, pipeline job serialization was performed using XStream.

Integrating with the Experiment API

The Experiment module is designed to allow other modules to hook in to provide functionality that is particular to different kinds of experiments. For example, the MS2 module provides code that knows how to load different types of output files from mass spectrometers, and code that knows how to provide a rich UI around that data. The Experiment module provides the general framework for dealing with samples, runs, data files, and more, and will delegate to other modules when loading information from a XAR, when rendering it in the experiment tables, when exporting it to a XAR, and so forth.

Integration points

The ExperimentDataHandler interface allows a module to handle specific kinds of files that might be present in a XAR. When loading from a XAR, the Experiment module will keep track of all the data files that it encounters. After the general, Experiment-level information is fully imported, it will call into the ExperimentDataHandlers that other modules have registered. This gives other modules a chance to load data into the database or otherwise prepare it for later display. The XAR load will fail if an ExperimentDataHandler throws an ExperimentException, indicating that the data file was not as expected.

Similarly, when exporting a set of runs as a XAR, the Experiment module will call any registered ExperimentDataHandlers to allow them to transform the contents of the file before it is written to the compressed archive. The default exportFile() implementation, provided by AbstractExperimentDataHandler, simply exports the file as it exists on disk.

The ExperimentDataHandlers are also interrogated to determine if any modules provide UI for viewing the contents of the data files. By default, users can download the content of the file, but if the ExperimentDataHandler provides a URL, it will also be available. For example, the MS2 module provides an ExperimentDataHandler that hands out the URL to view the peptides and proteins for a .pep.xml file.

Prior to deleting a data object, the Experiment module will call the associated ExperimentDataHandler so that it can do whatever cleanup is necessary, like deleting any rows that have been inserted into the database for that data object.

ExperimentDataHandlers are registered by implementing the getDataHandlers() method on Module.

RunExpansionHandlers allow other modules to modify the XML document that describes the XAR before it is imported. This means that modules have a chance to run Java code to make decisions on things like the number and type of outputs for a ProtocolApplication based on any criteria they desire. This provides flexibility beyond just what is supported in the XAR schema for describing runs. They are passed an XMLBeans representation of the XAR.

RunExpansionHandlers are registered by implementing the getRunExpansionHandlers() method on Module.

ExperimentRunFilters let other modules drive what columns are available when viewing particular kinds of runs in the experiment run grids in the web interface. The filter narrows the list of runs based on the runs' protocol LSID.

Using the Query module, the ExperimentRunFilter can join in additional columns from other tables that may be related to the run. For example, for MS2 search runs, there is a row in the MS2Runs table that corresponds to a row in the exp.ExperimentRun table. The MS2 module provides ExperimentRunFilters that tell the Experiment module to use a particular virtual table, defined in the MS2 module, to display the MS2 search runs. This virtual table lets the user select columns for the type of mass spectrometer used, the name of the search engine, the type of quantitation run, and so forth. The virtual tables defined in the MS2 schema also specify the set of columns that should be visible by default, meaning that the user will automatically see some of the files that were the inputs to the run, like the FASTA file and the mzXML file.

ExperimentRunFilters are registered by implementing the getExperimentRunFilters() method on Module.

Generating and Loading XARs

When a module does data analysis, typically performed in the context of a PipelineJob, it should generally describe the work that it has done in a XAR and then cause the Experiment module to load the XAR after the analysis is complete.

It can do this by creating a new ExperimentPipelineJob and inserting it into the queue, or by calling org.labkey.api.exp.ExperimentPipelineJob.loadExperiment(). The module will later get callbacks if it has registered the appropriate ExperimentDataHandlers or RunExpansionHandlers.

API for Creating Simple Protocols and Experiment Runs

Version 2.2 of LabKey Server introduces an API for creating simple protocols and simple experiment runs that use those protocols. It is appropriate for runs that start with one or more data/material objects and output one or more data/material objects after performing a single logical step.

To create a simple protocol, call org.labkey.api.exp.ExperimentService.get().insertSimpleProtocol(). You must pass it a Protocol object that has already been configured with the appropriate properties. For example, set its description, name, container, and the number of input materials and data objects. The call will create the surrounding Protocols, ProtocolActions, and so forth, that are required for a full-fledged Protocol.

To create a simple experiment run, call org.labkey.api.exp.ExperimentService.get().insertSimpleExperimentRun(). As with creating a simple Protocol, you must populate an ExperimentRun object with the relevant properties. The run must use a Protocol that was created with the insertSimpleProtocol() method. The run must have at least one input and one output. The call will create the ProtocolApplications, DataInputs, MaterialInputs, and so forth that are required for a full-fledged ExperimentRun.

Using SQL in Java Modules

Ways to Work with SQL

Options for working with SQL from Java code:

Table Class

Using the Table class with a simple Java class/bean works well when you want other code to be able to work with the class, and the class fields map directly to what you're using in the database. This approach usually results in the fewest lines of code. See the demoModule for an example of this approach.

SQLFragment/SQLExecutor

SQLFragment and SQLExecutor are good approaches when you need more control over the SQL you're generating. They are also used for operations that work on multiple rows at a time.

Prepared SQL Statements

Use prepared statements (java.sql.PreparedStatement) when you're dealing with many data rows and want the performance gain from being able to reuse the same statement with different values.
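The reuse pattern can be sketched as follows. The schema, table, and helper names are hypothetical; in LabKey code the Connection would come from the module's DbScope:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Sketch of reusing a single PreparedStatement across many rows: prepare the
// statement once, then bind different values for each row.
public class PreparedInsertSketch
{
    static final String INSERT_SQL = "INSERT INTO demo.person (name, age) VALUES (?, ?)";

    public static int insertAll(Connection conn, List<Object[]> rows) throws SQLException
    {
        int total = 0;
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL))
        {
            for (Object[] row : rows)
            {
                ps.setString(1, (String) row[0]);
                ps.setInt(2, (Integer) row[1]);
                total += ps.executeUpdate();
            }
        }
        return total;
    }
}
```

For very large batches, addBatch()/executeBatch() on the same statement reduces round trips further.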

Client-Side Options

You can also develop SQL applications without needing any server-side Java code by using the LABKEY.Query.saveRows() and related APIs from JavaScript code in the client. In this scenario, you'd expose your table as part of a schema, and rely on the default server implementation. This approach gives you the least control over the SQL that's actually used.

GWT Integration

Some pages within LabKey Server use the Google Web Toolkit (GWT) to create rich UI. GWT compiles Java code into JavaScript that runs in a browser. For more information about GWT see the GWT home page.

LabKey has moved away from GWT for new development projects and is in the midst of replacing existing GWT usages. We do not recommend using it for new code.

Points of note for the existing implementations using GWT:

  • The org.labkey.api.gwt.Internal GWT module can be inherited by all other GWT modules to include tools that allow GWT clients to connect back to the LabKey server more easily.
  • There is a special incantation to integrate GWT into a web page. The org.labkey.api.view.GWTView class allows a GWT module to be incorporated in a standard LabKey web page.
    • GWTView also allows passing parameters to the GWT page. The org.labkey.api.gwt.client.PropertyUtil class can be used by the client to retrieve these properties.
  • GWT supports asynchronous calls from the client to servlets. To enforce security and the module architecture, a few classes have been provided to allow these calls to go through the standard LabKey security and PageFlow mechanisms.
    • The client side org.labkey.api.gwt.client.ServiceUtil class enables client->server calls to go through a standard LabKey action implementation.
    • The server side org.labkey.api.gwt.server.BaseRemoteService class implements the servlet API but can be configured with a standard ViewContext for passing a standard LabKey url and security context.
    • Create an action in your controller that instantiates your servlet (which should extend BaseRemoteService) and calls doPost(getRequest(), getResponse()). In most cases you can simply create a subclass of org.labkey.api.action.GWTServiceAction and implement the createService() method.
    • Use ServiceUtil.configureEndpoint(service, "actionName") to configure client async service requests to go through your PageFlow action on the server.
Examples of this can be seen in the study.designer and plate.designer packages within the Study module.

The checked-in jars allow GWT modules within LabKey modules to be built automatically. Client-side classes (which can also be used on the server) are placed in a gwtsrc directory parallel to the standard src directory in the module.

While GWT source can be built automatically, effectively debugging GWT modules requires installation of the full GWT toolkit (we are currently using 2.5.1). After installing the toolkit you can debug a page by launching GWT's custom client, which runs Java code rather than the cross-compiled JavaScript. The debug configuration is a standard Java app with the following requirements:

  1. gwt-user.jar and gwt-dev.jar from your full install need to be on the runtime classpath. (Note: since we did not check in client .dll/.so files, you need to point to a manually installed local copy of the GWT development kit.)
  2. The source root for your GWT code needs to be on the runtime classpath
  3. The source root for the LabKey GWT internal module needs to be on the classpath
  4. Main class is
  5. Program parameters should be something like this:
    -noserver -startupUrl "http://localhost:8080/labkey/query/home/metadataQuery.view?schemaName=issues&query.queryName=Issues" org.labkey.query.metadata.MetadataEditor
    • -noserver tells the GWT client not to launch its own private version of tomcat
    • The URL is the url you would like the GWT client to open
    • The last parameter is the module name you want to debug


For example, here is a configuration from a developer's machine. It assumes that the LabKey Server source is at c:\labkey and that the GWT development kit has been extracted to c:\JavaAPIs\gwt-windows-2.5.1. It will work with GWT code from the MS2, Experiment, Query, List, and Study modules.

  • Main class:
  • VM parameters:
    -classpath C:/labkey/server/internal/gwtsrc;C:/labkey/server/modules/query/gwtsrc;C:/labkey/server/modules/study/gwtsrc;C:/labkey/server/modules/ms2/gwtsrc;C:/labkey/server/modules/experiment/gwtsrc;C:/JavaAPIs/gwt-2.5.1/gwt-dev.jar; C:/JavaAPIs/gwt-2.5.1/gwt-user.jar;c:/labkey/external/lib/build/gxt.jar;C:/labkey/server/modules/list/gwtsrc;C:/labkey/external/lib/server/gwt-dnd-3.2.0.jar;C:/labkey/external/lib/server/gxt-2.2.5.jar;
  • Program parameters:
    -noserver -startupUrl "http://localhost:8080/labkey/query/home/metadataQuery.view?schemaName=issues&query.queryName=Issues" org.labkey.query.metadata.MetadataEditor
  • Working directory: C:\labkey\server
  • Use classpath and JDK of module: QueryGWT
A note about upgrading to future versions of GWT: as of GWT 2.6.0 (the current release as of this writing), GWT supports Java 7 syntax and stops building permutations for IE 6 and 7 by default. However, it introduces a few breaking API changes; upgrading would also force a move to GXT 3.x, which is unfortunately a major upgrade and requires significant changes to our UI code that uses it.

GWT Remote Services

Integrating GWT Remote services is a bit tricky within the LabKey framework.  Here's a technique that works.

1. Create a synchronous service interface in your GWT client code:

    public interface MyService extends RemoteService
    {
        String getSpecialString(String inputParam) throws SerializableException;
    }

2.  Create the asynchronous counterpart to your synchronous service interface.  This is also in client code:

    public interface MyServiceAsync
    {
        void getSpecialString(String inputParam, AsyncCallback async);
    }

3. Implement your service within your server code:

    import org.labkey.api.gwt.server.BaseRemoteService;
    import org.labkey.api.gwt.client.util.ExceptionUtil;
    import org.labkey.api.view.ViewContext;

    public class MyServiceImpl extends BaseRemoteService implements MyService
    {
        public MyServiceImpl(ViewContext context)
        {
            super(context);
        }

        public String getSpecialString(String inputParameter) throws SerializableException
        {
            if (inputParameter == null)
                throw ExceptionUtil.convertToSerializable(new
                    IllegalArgumentException("inputParameter may not be null"));
            return "Your special string was: " + inputParameter;
        }
    }

 4. Within the server Spring controller that contains the GWT action, provide a service entry point:

    import org.labkey.api.gwt.server.BaseRemoteService;
    import org.labkey.api.action.GWTServiceAction;

    public class MyServiceAction extends GWTServiceAction
    {
        protected BaseRemoteService createService()
        {
            return new MyServiceImpl(getViewContext());
        }
    }

5. Within your GWT client code, retrieve the service with a method like this. Note that caching the service instance is important, since construction and configuration are expensive.

    import com.google.gwt.core.client.GWT;
    import org.labkey.api.gwt.client.util.ServiceUtil;

    private MyServiceAsync _myService;

    private MyServiceAsync getService()
    {
        if (_myService == null)
        {
            _myService = (MyServiceAsync) GWT.create(MyService.class);
            ServiceUtil.configureEndpoint(_myService, "myService");
        }
        return _myService;
    }

6. Finally, call your service from within your client code:

    public void myClientMethod()
    {
        getService().getSpecialString("this is my input string", new AsyncCallback()
        {
            public void onFailure(Throwable throwable)
            {
                // handle failure here
            }

            public void onSuccess(Object object)
            {
                String returnValue = (String) object;
                // returnValue now contains the string returned from the server.
            }
        });
    }

Database Development Guide

This document concerns low-level database access from Java code running in LabKey Server. “Low-level” specifically means access to JDBC functionality to communicate directly with the database and the various helpers we use internally that wrap JDBC.

This document does not cover the “user schema” layer that presents an abstraction of the underlying database to the user and which supports our APIs and LabKey SQL engine.

There is more to be said about these classes later, but just as a guideline, remember these points. The “low-level” objects start with DbScope and DbSchema and spread out from there. The “user-level” objects start with DefaultSchema which hands out UserSchema objects. The TableInfo class is a very important class and is shared between both DbSchema and UserSchema. When using a TableInfo it’s important to keep in mind which world you are dealing with (DbSchema or UserSchema).


It is very important to remember that when you are directly manipulating the database, you are responsible for enforcing permissions. The rules that any particular API or code path may need to enforce can be completely custom, so there is no blanket rule about what you must do. However, be mindful of common patterns involving the “container” column. Many tables in LabKey are partitioned into containers (aka folders in the UI) by a column named “container” that joins to core.containers. All requests (API or UI) to the LabKey server are done in the context of a container, and that container has a SecurityPolicy that is used as the default for evaluating permissions. For instance, the security annotations on an action are evaluated against the SecurityPolicy associated with the “container” of the current request.

@RequiresPermission(InsertPermission.class)
public class InsertAction extends AbstractIssueAction
// Because of the @RequiresPermission annotation,
// we know the current user has insert permission in the current container:
assert getContainer().hasPermission(getUser(), InsertPermission.class);

If you are relying on these default security checks, it is very important to make sure that your code only operates on rows associated with the current container. For instance, if the url provides the value of a primary key of a row to update or delete, your code must validate that the container of that row matches the container of the current request.



DbScope directly corresponds to a javax.sql.DataSource configured in your Tomcat context file (usually labkey.xml or ROOT.xml), and is therefore a source of JDBC connections that you can use directly in your code. We also think of a DbScope as a collection of schemas, the caching layer for metadata associated with schemas and tables, and our transaction manager.

You usually do not need to handle JDBC Connections directly, as most of our helpers take a DbScope/DbSchema/TableInfo rather than a Connection. However, DbScope.getConnection() is available if you need it.

DbSchema schema = DbSchema.get("core", DbSchemaType.Module);
DbScope scope = schema.getScope();
Connection conn = scope.getConnection();

DbScope also provides a very helpful abstraction over JDBC’s built-in transaction API. Writing correct transaction code directly using JDBC can be tricky. This is especially true when you are writing a helper function that does not know whether a higher-level routine has already started a transaction. This is very easy to handle using DbScope.ensureTransaction(). Here is the recommended pattern to follow:

DbSchema schema = DbSchema.get("core", DbSchemaType.Module);
DbScope scope = schema.getScope();
try (DbScope.Transaction tx = scope.ensureTransaction())
{
    site = Table.insert(user, AccountsSchema.getTableInfoSites(), site);
    tx.commit();
}

When there is no transaction already pending on this scope object, a Connection object is created that is associated with the current thread. Subsequent calls to getConnection() on this thread return the same connection. This ensures that everyone in this scope/thread participates in the same transaction. If the code executes successfully, the call to tx.commit() then completes the transaction. When the try-with-resources block completes it closes the transaction. If the transaction has been committed then the code continues on. If the transaction has not been committed, the code assumes an error has occurred and aborts the transaction.

If there is already a transaction pending, the following happens. We recognize that a database transaction is pending, so we do not start a new one. However, the Transaction object keeps track of the depth of nesting of ensureTransaction() scopes. A nested call to commit() does not commit to the database; it just pops one level of nesting. The final, outermost commit() in the calling code commits the transaction.
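The nesting behavior can be sketched with a toy model. This is not LabKey's DbScope implementation; the class and every name below are invented purely to illustrate the depth counting described above:

```java
// Toy illustration of depth-counted transaction nesting. This is NOT
// LabKey's DbScope/Transaction code; all names here are invented.
class ToyTransaction implements AutoCloseable
{
    private static int _depth = 0;              // nesting depth for this "scope"
    private static boolean _committed = false;  // did the real commit happen?
    private boolean _commitCalled = false;

    static ToyTransaction ensureTransaction()
    {
        _depth++;                               // only the outermost call would BEGIN
        return new ToyTransaction();
    }

    void commit()
    {
        _commitCalled = true;
        if (_depth == 1)
            _committed = true;                  // only the outermost commit() hits the DB
    }

    @Override
    public void close()
    {
        _depth--;
        if (!_commitCalled)
            _committed = false;                 // closing without commit() means abort
    }

    static boolean wasCommitted()
    {
        return _committed;
    }
}
```

A nested helper can call ensureTransaction()/commit() freely; only the outermost pair actually begins and commits the database transaction.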

In the case of any sort of error involving the database, it is almost always best to throw. With PostgreSQL, almost any error returned by JDBC causes the connection to be marked as unusable, so there is probably nothing the calling code can do to recover from the DB error and keep going. Only the code with the outermost transaction may be able to “recover” by retrying the whole transaction (or reporting an error).

See class DbScope.TransactionImpl for more details.


DbSchema corresponds to a database schema, e.g. a collection of related tables. This class is usually the starting point for any code that wants to talk to the database. You can request a DbSchema via the static method DbSchema.get(schemaName, DbSchemaType). E.g. to get the core schema which contains the user, security, and container related tables you would use DbSchema.get("core", DbSchemaType.Module).
  • getScope() returns the scope that contains this schema, used for transaction management among other things.
  • getSqlDialect() returns a helper object with methods to aid in writing cross-platform compatible SQL.
  • getTableNames() returns a list of tables contained in this schema.
  • getTable(tablename) returns a TableInfo for the requested table. See below.


You all know about SQL injection, or you should. It is best practice to use parameter markers in your SQL statements rather than directly concatenating constants. That’s all fine until you have lots of bits of code generating different bits of SQL with parameter markers and associated parameter values. SQLFragment simplifies this by carrying the SQL text and the parameter values in one object. We use this class in many of our helpers. Here is an example:

SQLFragment select = new SQLFragment("SELECT *, ? as name FROM table", name);
SQLFragment where = new SQLFragment("WHERE rowid = ? and container = ?",
    rowid, getContainer());
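Because each fragment carries its own parameters, fragments can be concatenated without the SQL text and the parameter list getting out of sync. The mechanics can be sketched with a minimal stand-in class (this is not LabKey's SQLFragment; it only illustrates the idea):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal stand-in illustrating the idea behind SQLFragment: keep the SQL
// text and its parameter values together so pieces compose safely.
// This is NOT LabKey's SQLFragment class.
class MiniFragment
{
    private final StringBuilder _sql = new StringBuilder();
    private final List<Object> _params = new ArrayList<>();

    MiniFragment(String sql, Object... params)
    {
        _sql.append(sql);
        _params.addAll(Arrays.asList(params));
    }

    // Appending another fragment appends both its text and its parameters
    MiniFragment append(MiniFragment other)
    {
        _sql.append(' ').append(other._sql);
        _params.addAll(other._params);
        return this;
    }

    String getSQL() { return _sql.toString(); }
    List<Object> getParams() { return _params; }
}
```

Appending the WHERE fragment to the SELECT fragment yields one object holding the combined SQL and all three parameter values, ready to hand to an executor.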



We’re ready to actually do database stuff. Let’s update a row. Say we have a name and a rowid and we want to update a row in the database. We know we’re going to start with our DbScope, and using a SQLFragment is always a good idea. We also have a helper called SqlExecutor, so you don’t have to deal with Connection and PreparedStatement directly. It also translates checked SQLExceptions into runtime exceptions.

DbScope scope = DbScope.get("myschema");
SQLFragment update = new SQLFragment("UPDATE mytable SET name=? WHERE rowid=?",
    name, rowid);
long count = new SqlExecutor(scope).execute(update);

Don’t forget your container filter! And you might want to do more than one update in a single transaction… So...

DbScope scope = DbScope.get("myschema");
try (DbScope.Transaction tx = scope.ensureTransaction())
{
    SQLFragment update = new SQLFragment(
        "UPDATE mytable SET name=? WHERE rowid=? AND container=?",
        name, rowid, getContainer());
    long count = new SqlExecutor(scope).execute(update);
    // You can even write JDBC code here if you like
    Connection conn = scope.getConnection();
    // update some stuff
    tx.commit();
}



SqlSelector is the data reading version of SqlExecutor. However, it’s a bit more complicated because it does so many useful things. The basic pattern is

SQLFragment select = new SQLFragment("SELECT * FROM table WHERE container = ?",
    getContainer());
RESULT r = new SqlSelector(scope, select).getRESULT(..)

Where getRESULT is some method that formats the result in some useful way. Most of the interesting methods are specified by the Selector interface (implemented by both SqlSelector and TableSelector, explained below). Here are a few useful ones.

  • .exists() -- Does this query return 1 or more rows? Logically equivalent to .getRowCount() > 0, but .exists() is usually more concise and efficient.
  • .getRowCount() -- How many rows does this query return?
  • .getResultSet() -- returns a JDBC ResultSet (loaded into memory and cached by default). getResultSet(false) returns a “raw” uncached JDBC ResultSet.
  • .getObject(class) -- return a single value from a one-row result. Class can be an intrinsic like String.class for a one-column result. It can be Map.class to return a map (name -> value) representing the row, or a java bean, e.g. MyClass.class; in this case, it is populated using reflection to bind column names to field names. You can also customize the bean construction code (see ObjectFactory, BeanObjectFactory, BuilderObjectFactory).
  • .getArrayList(class), .getCollection(class) -- like getObject(), but for queries where you expect more than one row.
  • .forEach(lambda) .forEachMap(lambda) -- pass in a function to be called to process each row of the result
getResultSet(false) and the forEach() variants stream results without pulling all the rows into memory, making them useful for processing very large results. getResultSet() brings the entire result set into memory and returns a result set that is disconnected from the underlying JDBC connection; this simplifies error handling, but is best for modest-sized results. Other result-returning methods, such as getArrayList() and getCollection(), must load all the data to populate the data structures they return (although they are populated from a streaming result set, so only one copy exists at a time). Use the forEach() methods to process results if at all possible; they are both efficient and easy to use: iterating and closing are handled automatically, and there is no need to deal with the checked SQLExceptions thrown by every ResultSet method.
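A sketch of the streaming style (this runs only inside a LabKey server; "myschema" and "mytable" are placeholders, as in the earlier examples):

```java
// Sketch only: assumes code running in the LabKey server and the
// placeholder schema/table used throughout this section.
DbSchema schema = DbSchema.get("myschema", DbSchemaType.Module);
SQLFragment select = new SQLFragment("SELECT rowid, name FROM myschema.mytable WHERE container = ?",
    getContainer());

// forEachMap() streams rows one at a time; iteration, closing, and
// checked SQLExceptions are all handled by the selector.
new SqlSelector(schema.getScope(), select).forEachMap(row -> {
    String name = (String) row.get("name");
    // process each row here
});
```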


TableInfo captures metadata about a database table. The metadata includes information about the database storage of the table and columns (e.g. types and constraints) as well as a lot of information about how the table should be rendered by the LabKey UI.

Note that TableInfo objects that are returned by a UserSchema are virtual tables and may have arbitrarily complex mapping between the virtual or “user” view of the schema and the underlying physical database. Some may not have any representation in the database at all (see EnumTableInfo).

If you are writing SQL against known tables in a known schema, you may never need to touch a TableInfo object. However, if you want to query tables created by other modules, say the issues table or a sample set, you probably need to start with a TableInfo and use a helper to generate your select SQL. As for updates, you probably should not be updating other modules’ tables directly! Look for a service interface or use QueryUpdateService (see below).

DbSchema schema = DbSchema.get("issues", DbSchemaType.Module);
SchemaTableInfo ti = schema.getTable("issues");


TableSelector is a SQL generator (QueryService.getSelectSQL()) wrapped around SqlSelector. Given a TableInfo, you can execute queries by specifying column lists, filters, and sorts in code.

DbSchema schema = DbSchema.get("issues", DbSchemaType.Module);
SchemaTableInfo ti = schema.getTable("issues");
new TableSelector(ti,
    new SimpleFilter(new FieldKey(null, "container"), getContainer()),
    new Sort("IssueId"));

The poorly named Table class has some very helpful utilities for inserting, updating, and deleting rows of data while maintaining basic LabKey semantics. It automatically handles the columns created, createdby, modified, and modifiedby. For simple UI-like interactions with the database (e.g. updates in response to a user action) we strongly recommend using these methods. Either that, or go through the front door via QueryUpdateService, which is how our query APIs are implemented.
  • .insert() -- insert a single row with data in a Map or a java bean.
  • .update() -- update a single row with data in a Map or a java bean.
  • .delete() -- delete a single row.
  • .batchExecute() -- this helper can be used to execute the same statement repeatedly over a collection of data. Note that we have a lot of support for fast data import, so don’t reach for this method first. However, it can be useful.
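For example (a sketch; it runs only inside the LabKey server, and the table and column names are hypothetical -- Table.insert() fills in created/createdby/modified/modifiedby for you):

```java
// Sketch only: "myschema", "mytable", and the columns are hypothetical.
TableInfo table = DbSchema.get("myschema", DbSchemaType.Module).getTable("mytable");

Map<String, Object> row = new HashMap<>();
row.put("name", "example");                    // hypothetical column

// insert() returns the inserted row, including defaulted columns
row = Table.insert(getUser(), table, row);

// update() takes the primary key value(s) as the last argument
row.put("name", "renamed");
Table.update(getUser(), table, row, row.get("rowid"));
```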


QueryService is the primary interface to LabKey SQL functionality. However, QueryService.getSelectSQL() was migrated from the Table class in the hope of one day integrating that bit of SQL generation with the LabKey SQL compiler (the dream lives). getSelectSQL() is still a stand-alone helper for generating select statements for any TableInfo. When using this method, you are still responsible for providing all security checks and adding the correct container filter.

High level API

If you are writing custom APIs, it is just as likely that you are trying to modify/combine/wrap existing functionality on existing schemas and tables. If you are not updating tables that you “own”, e.g. created by SQL scripts in your own module, you should probably avoid trying to update those tables directly. Doing so may circumvent any special behavior that the creator of those tables relies on, may leave caches in an inconsistent state, and may not correctly enforce security. In cases like this, you may want to act very much as the “user”, by accessing high-level APIs with the same behavior as if the user had executed a JavaScript API.

In addition to QueryUpdateService, documented below, also look for public services exposed by other modules. E.g.

  • ExperimentService
  • StudyService
  • SearchService
  • QueryService
  • ContainerManager (not strictly speaking a service)


QueryUpdateService is also not really a service; it isn’t a global singleton. For TableInfo objects returned by user schemas, QueryUpdateService is the half of the interface that deals with updating data. It can be useful to just think of the dozen or so methods on this interface as belonging to TableInfo. Note that not all tables visible in the schema browser support insert/update/delete via QueryUpdateService. If not, there is probably an internal API that can be used instead. For instance, you can’t create a folder by calling insert on the core.containers table; you would use ContainerManager.createContainer().

Here is an example of inserting one row using QueryUpdateService:

UserSchema lists = DefaultSchema.get(user, c).getSchema("lists");
TableInfo mylist = lists.getTable("mylist");
Map<String, Object> row = new HashMap<>();
row.put("Name", "First value");  // hypothetical column/value for illustration
BatchValidationException errors = new BatchValidationException();
mylist.getUpdateService().insertRows(user, c, Arrays.asList(row), errors, null, null);
if (errors.hasErrors())
    throw errors;
  • .insertRows() -- insert rows, and returns rows with generated keys (e.g. rowids)
  • .importRows() -- like insert, but may be faster. Does not reselect rowids.
  • .mergeRows() -- only supported by a few table types
  • .updateRows() -- updates specified rows
  • .deleteRows() -- deletes specified rows
  • .truncateRows() -- delete all rows

Java Testing Tips

This PowerPoint presentation provides an overview of Java debugging techniques used by the LabKey Team.

HotSwapping Java classes

Java IDEs and VMs support a feature called HotSwapping. It allows you to update the version of a class while the virtual machine is running, without needing to redeploy the webapp, restart, or otherwise interrupt your debugging session. It's a huge productivity boost if you're editing the body of a method.


You cannot change the "shape" of a class. This means you can't add or remove member variables, methods, change the superclass, etc. This restriction may be relaxed by newer VMs someday. The VM will tell you if it can't handle the request.

You cannot change a class that hasn't been loaded by the VM already. The VM will ignore the request.

The webapp will always start up with the version of the class that was produced by the Gradle build, even if you HotSwapped during an earlier debug session.

Changes to your class will be reflected AFTER the current stack has exited your method.


These are the steps in IntelliJ. Other IDEs should be very similar.

  1. Do a Gradle build.
  2. In IntelliJ, do Build > Make Project. This gets IntelliJ's build system primed.
  3. Start up Tomcat, and use the webapp so that the class you want to change is loaded (the line breakpoint icon will show a check in the left hand column once it's been loaded).
  4. Edit the class.
  5. In IntelliJ, do Build > Compile <MyClass>.java.
  6. If you get a dialog, tell the IDE to HotSwap and always do that in the future.
  7. Make your code run again. Marvel at how fast it was.
If you need to change the shape of the class, we suggest killing Tomcat, doing a Gradle build, and restarting the server. This leaves you poised to HotSwap again because the class will be the right "shape" already.

Modules: Custom Login Pages

By default, LabKey Server uses the login page at /server/modules/core/resources/views/login.html, but you can provide your own login page deployed in a module.

Use the standard login page as a template for your custom page. Copy the template HTML file into your module (at MODULE_NAME/views/LOGIN_PAGE_NAME.html) and modify it according to your requirements. Note that the standard login page works in conjunction with a login.view.xml file (at /server/modules/core/resources/views/login.view.xml) and a JavaScript file (at /server/modules/core/webapp/login.js). The login.js file provides access to the Java actions that handle user authentication, such as loginApi.api and acceptTermsOfUseApi.api. Your login page should retain the use of these actions.

Once you have deployed your custom login page, you will need to tell the server to use it instead of the standard login page. For details see Look and Feel Settings.

Related Topics

ETL: Extract Transform Load

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

Extract-Transform-Load functionality lets you encapsulate some of the most common database tasks, especially (1) extracting data from a database, (2) transforming it, and finally (3) loading it into another database.

Some scenarios where ETLs are useful:

  • Assembling data warehouses that integrate data from multiple data sources.
  • Migration from one database schema to another, especially where the source schema is an external database.
  • Coalescing many tables into one.
  • Distributing (aka, provisioning) one table into many.
  • Cloning the current state of a table.
  • Normalizing data from different systems.
  • Moving data in scheduled increments.
  • When migration processes require logging and auditing.
ETL functionality can be encoded (1) in a custom module or (2) using the folder management UI on the server. The following topics will get you started developing ETL scripts and processes and packaging them as modules:

Related Topics

Tutorial: Extract-Transform-Load (ETL)

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

Data Warehouse

This tutorial shows you how to create and use a simple ETL. You can use this example as a starting point for further development.

As you go through the tutorial, imagine you are a researcher who wants to identify a group of participants for a research study. The participants must meet certain criteria to be included in the study, such as having a certain condition or diagnosis. You already have the following in place:

  • You have a running installation of LabKey Server which includes the dataintegration module.
  • You already have access to a large database of demographic information of candidate participants. This database is continually being updated with new data and new candidates for your study.
  • You have an empty table called "Patients" on your LabKey Server which is designed to hold the study candidates.
How do you get the records from the outside database into your LabKey Server, especially those records that meet your study's criteria? In this tutorial, you will set up an ETL process to solve this problem. The ETL script will automatically query the source database for participants that fit your criteria. If it finds any such records, it will automatically copy them into your system. The ETL process will run on a schedule: every hour it will re-query the database looking for new, or updated, records that fit your criteria.

Tutorial Steps

First Step

ETL Tutorial: Set Up

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

In this step you will download and install a basic workspace for working with ETL processes. In our example, the data warehouse is represented by another table on the same server, rather than requiring configuration of multiple machines.

Set Up ETL Workspace

In this step you will import a pre-configured workspace in which to develop ETL processes. (Note that there is nothing mandatory about the way this workspace has been put together -- your own ETL workspace may be different, depending on the needs of your project. This particular workspace has been configured especially for this tutorial as a shortcut to avoid many setup steps, such as connecting to source datasets, adding an empty dataset to use as the target of ETL scripts, and adding ETL-related web parts.)

  • Download the folder archive:

  • Log in to your server and navigate to your "Tutorials" project. Create it if necessary.
    • If you don't already have a server to work on where you can create projects, start here.
    • If you don't know how to create projects and folders, review this topic.
  • Create a new subfolder named "ETL Workspace". Choose the folder type "Study" and accept other defaults.
  • Import into the folder:
    • In the Study Overview panel, click Import Study.
    • On the Folder Management page, confirm Local zip archive is selected and click Choose File.
    • Select the folder archive that you downloaded:
    • Click Import Study.
    • When the import is complete, click ETL Workspace to see the workspace. (You may need to refresh your browser to see the complete status.)

You now have a workspace where you can develop ETL scripts. It includes:

  • A LabKey Study with various datasets to use as data sources
  • An empty dataset named Patients to use as a target destination
  • The ETL Workspace tab provides an area to manage and run your ETL processes. Notice that this tab contains three web parts:
    • Data Transforms shows the available ETL processes. Currently it is empty because there are none defined.
    • The Patients dataset (the target dataset for the process) is displayed, also empty because no ETL process has been run yet. When you run an ETL process in the next step, the empty Patients dataset will begin to fill with data.
    • The Demographics dataset (the source dataset for this tutorial) is displayed with more than 200 records.

Create an ETL

  • Click the ETL Workspace tab to ensure you are on the main folder page.
  • Select (Admin) > Folder > Management.
  • Click the ETLs tab. If you don't see this tab, you may not have the dataintegration module enabled on your server. Check on the Folder Type tab, under Modules.
  • Click (Insert new row) under Custom ETL Definitions.
  • Replace the default XML in the edit panel with the following code:
    <?xml version="1.0" encoding="UTF-8"?>
    <etl xmlns="">
      <name>Demographics >>> Patients (Females)</name>
      <description>Update data for study on female patients.</description>
      <transforms>
        <transform id="femalearv">
          <source schemaName="study" queryName="FemaleARV"/>
          <destination schemaName="study" queryName="Patients" targetOption="merge"/>
        </transform>
      </transforms>
      <schedule>
        <poll interval="1h"/>
      </schedule>
    </etl>
  • Click Save.
  • Click the ETL Workspace tab to return to the main dashboard.
  • The new ETL named "Demographics >>> Patients (Females)" is now ready to run. Notice it has been added to the list under Data Transforms.

Start Over | Next Step

ETL Tutorial: Run an ETL Process

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

In this step you will become familiar with the ETL user interface, and run the ETL process you just added to the server.

ETL User Interface

The web part Data Transforms lists all of the ETL processes that are available in the current folder. It lets you review current status at a glance, and run any transform manually or on a set schedule. You can also reset state after a test run.

For details on the ETL user interface, see ETL: User Interface.

Run the ETL Process

  • If necessary, click the ETL Workspace tab to return to the Data Transforms web part.
  • Click Run Now for the "Demographics >>> Patients (Females)" row to transfer the data to the Patients table. Note that you will need to be signed in to see this button.
  • You will be taken to the ETL Job page, which provides updates on the status of the running job.
  • Refresh your browser until the Status field shows the value COMPLETE.
  • Click the ETL Workspace link to see the records that have been added to the Patients table. Notice that 36 records (out of over 200 in the source Demographics query) have been copied into the Patients query. The ETL process filtered to show female members of the ARV treatment group.

Experiment with ETL Runs

Now that you have a working ETL process, you can experiment with different scenarios.

Suppose the records in the source table had changed; to reflect those changes in your target table, you would rerun the ETL.
  • First, roll back the rows added to the target table (that is, delete the rows and return the target table to its original state) by selecting Reset State > Truncate and Reset.
  • Confirm the deletion in the popup window.
    • You may need to refresh your browser to see the empty dataset.
  • Rerun the ETL process by clicking Run Now.
  • The results are the same because we did not in fact change any source data yet. Next you can actually make some changes to show that they will be reflected.
  • Edit the data in the source table Demographics:
    • Click the ETL Workspace tab.
    • Scroll down to the Demographics dataset - remember this is our source data.
    • Hover over a row for which the Gender is "m" and the Treatment Group is "ARV", then click the (pencil) icon that appears. You could also apply column filters to find this set of records.
    • Change the Gender to "f" and click Submit to save.
  • Rerun the ETL process by first selecting Reset State > Truncate and Reset, then clicking Run Now.
  • Click the ETL Workspace tab to return to the main dashboard.
  • The resulting Patients table will now contain the additional matching row for a total count of 37 matching records.

Previous Step | Next Step

ETL Tutorial: Create a New ETL Process

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

Suppose you wanted to expand the Patients dataset to also include male participants who are "Natural Controllers" of HIV.

To do this, we use another SQL query that returns a selection of records from the Demographics table, in particular all Male participants who are Natural Controllers.

We'll create a new ETL from scratch, drawing on that SQL query.

Define Source Query

The archive has predefined the query we will use. To review it and see how you could add a new one, follow these steps:

  • Select (Admin) > Go To Module > Query.
  • Click study to open the study schema. If you were going to define your own new query, you could click Create New Query here.
  • Click MaleNC to open the predefined one.
  • Click Edit Source to see that the source code for this query looks like this:
    SELECT Demographics.ParticipantId
    FROM Demographics
    WHERE Demographics.Gender = 'm' AND Demographics.TreatmentGroup = 'Natural Controller'
  • Click the Data tab to see that 6 participants are returned by this query:

Create a New ETL Process

ETL processes are defined using XML to specify the data source, the data target, and other properties. You can install these XML files in a custom module, or define the ETL directly using the user interface. Here we create a new configuration that draws from the query we just created above.

  • Select (Admin) > Folder > Management.
  • Click the ETLs tab.
  • Above the Custom ETL Definitions grid, click (Insert new row).
  • Copy and paste the following instead of the default shown in the window:
    <etl xmlns="">
      <name>Demographics >>> Patients (Males)</name>
      <description>Update data for study on male patients.</description>
      <transforms>
        <transform id="males">
          <source schemaName="study" queryName="MaleNC"/>
          <destination schemaName="study" queryName="Patients" targetOption="merge"/>
        </transform>
      </transforms>
      <schedule>
        <poll interval="1h"/>
      </schedule>
    </etl>
  • Click Save.
  • Click the ETL Workspace tab.
  • Notice this new ETL is now listed in the Data Transforms web part.

Run the ETL Process

  • Click Run Now next to the new process name.
  • Refresh in the pipeline window until the job completes, then click the ETL Workspace tab.
  • New records will have been copied to the Patients table, making a total of 43 records (42 if you skipped the step of changing the gender of a participant in the source data during the previous tutorial step).


Congratulations! You've completed the tutorial and created a basic ETL for extracting, transforming, and loading data. Learn more in the ETL Documentation.

Previous Step

ETL: Define an ETL Using XML

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

This topic shows how to create, edit, and delete ETL XML definitions directly in the LabKey Server user interface, which obviates the need to deploy an ETL inside a custom module.

Enable the Data Integration Module

  • In the folder where you want to create an ETL, go to (Admin) > Folder > Management.
  • On the Folder Management page, click the Folder Type tab.
  • Under Modules, place a checkmark next to Data Integration.
  • Click Update Folder to save your changes.
  • Go back to the Folder Management page and notice that the ETLs tab has been added.

Create a New ETL Definition

  • Go to (Admin) > Folder > Management.
  • On the Folder Management page, click the ETLs tab.
  • On the Custom ETL Definitions panel, click the (Insert new row) button.
  • You will be provided with template XML for a new ETL definition.
  • Edit the provided XML to fit your use case:
    • Provide a name.
    • Provide a description.
    • Uncomment the transform element.
    • Replace the default values in the source and destination elements.
  • An example ETL that copies data from some external table to a list:
<etl xmlns="">
  <name>Populate List</name>
  <description>Updates the List data from the external data source.</description>
  <transforms>
    <transform id="step1" type="org.labkey.di.pipeline.TransformTask">
      <description>Copy data to the List</description>
      <source schemaName="external" queryName="SourceTable" />
      <destination schemaName="lists" queryName="MyList" />
      <incrementalFilter className="ModifiedSinceFilterStrategy" timestampColumnName="modified"/>
    </transform>
  </transforms>
  <schedule>
    <poll interval="1h" />
  </schedule>
</etl>


While using the editor, autocomplete powered by CodeMirror makes it easier to enter XML syntax correctly and to remember valid parameter names.

Type a '<' to see XML syntax options for that point in the code:

Type a space to see the list of valid parameters:

Change ETL Names/Save As

When you edit an existing ETL and change the name field then click Save, the name is first checked against all existing ETL names in the folder. If it is not unique, you will see a popup warning "This definition name is already in use in the current folder. Please specify a different name."

Once you click Save with a unique new name, you will be asked if you want to update the existing definition or save as a new definition.

If you click Update Existing, only a single ETL definition remains after the save, incorporating the name change and all other edits.

If you click Save as New, there will be two ETL definitions after the save: the original content from any previous save point, and the new one with the new name and most recent changes.

Use an Existing ETL as a Template

To use an existing ETL as a template for creating a new one, click the Copy From Existing button in the ETL Definition editor.

Choose the location (project or folder) to populate the dropdown for Select ETL Definition. Choose a definition, then click Apply.

The XML definition you chose will be shown in the ETL Definition editor, where you can make further changes before saving. Because ETL definition names must be unique in the folder, the name copied from the template must always be changed. This name change does not prompt the option to update the existing template; an ETL defined using a template always saves as a new ETL definition.

Note that the ETL used as a template is not linked to the new one you have created. Using a template is copying the XML at the time of template use. If edits are made later to the "template," they will not be reflected in the ETLs that used it.

Run the ETL

ETL: User Interface

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

ETL User Interface

The web part Data Transforms lists all of the ETL processes that are available in the current folder.

  • Columns:
    • Name - This column displays the name of the process.
    • Source Module - This column tells you the module where the configuration file resides.
    • Schedule - This column shows you the reload schedule, for example, once every hour.
    • Enabled - This checkbox controls whether the automated schedule is enabled: when unchecked, the ETL process must be run manually.
    • Last Status, Successful Run, Checked - These columns record the latest run of the ETL process.
    • Set Range - (Available only in devMode) Intended for testing purposes during ETL module development. Click Run to set a date or row version window range to use for incremental ETL filters, overriding any persisted or initial values. The Run button is displayed only for ETL processes with a filter strategy of RunFilterStrategy or ModifiedSinceFilterStrategy; it is not displayed for SelectAllFilterStrategy.
    • Last Transform Run Log Error - Shows the last error logged, if any exists.
  • Buttons:
    • Run Now - This button immediately activates the ETL process.
    • Reset State - This button returns the ETL process to its original state, deleting its internal history of which records are, and are not, up to date. There are two options:
      • Reset
      • Truncate and Reset
    • View Processed Jobs - This button shows you a log of all previously run ETL jobs, and their status.

Run an ETL Process Manually

The Data Transforms web part lets you:

  • Run jobs manually. (Click Run Now.)
  • Enable/disable the recurring run schedule, if such a schedule has been configured in the ETL module. (Check or uncheck the column Enabled.)
  • Reset state. (Select Reset State > Reset to return an ETL transform to its initial state, as if it had never been run.)
  • See the latest error raised in the Last Transform Run Log Error column.

Cancel and Roll Back Jobs

While a job is running you can cancel and roll back the changes made by the current step by pressing the Cancel button.

The Cancel button is available on the Job Status panel for a particular job, as shown below:

To roll back a run and delete the rows added to the target by the previous run, view the Data Transforms web part, then select Reset State > Truncate and Reset. Note that rolling back an ETL which outputs to a file will have no effect, that is, the file will not be deleted or changed.

See Run History

The Data Transform Jobs web part provides a detailed history of all executed ETL runs, including the job name, the date and time when it was executed, the number of records processed, the amount of time spent to execute, and links to the log files.

To add this web part to your page, enter Page Admin Mode, scroll down to the bottom of the page, click the dropdown <Select Web Part>, select Data Transform Jobs, and click Add. When added to the page, the web part appears with a different title: "Processed Data Transforms". Click Exit Admin Mode.

Click Run Details for fine-grained details about each run, including a graphical representation of the run.

ETL: Configuration and Schedules

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

Multiple Steps in a Single ETL or Multiple ETLs?

  • Do changes to the source affect multiple target datasets at once? If so consider configuring multiple steps in one ETL definition.
  • Do source changes impact a single target dataset? Consider using multiple ETL definitions, one for each dataset.
  • Are the target queries relational? Consider multiple steps in one ETL definition.

Configuration Options

The following configuration options are offered for customizing ETL processes:


You can set a polling schedule to check the source database for new data and automatically run the ETL process when new data is found. Either specify a time interval, or use a full cron expression to schedule ETLs. When choosing a schedule for running ETLs, consider the timing of other processes, like automated backups, which could cause conflicts with running your ETL.

The schedule below checks every hour for new data:

<schedule><poll interval="1h" /></schedule>

These examples show some cron expressions to schedule running of the job:

<!-- run at 10:15 every day -->
<schedule><cron expression="0 15 10 ? * *"/></schedule>

<!-- run every hour on the hour every day, i.e. 9:00, 10:00, etc. -->
<schedule><cron expression="0 0 * ? * *"/></schedule>

<!-- run on Tuesdays and Thursdays at 3:30 pm -->
<schedule><cron expression="0 30 15 ? * TUE,THU *"/></schedule>

Cron expressions consist of six or seven space-separated fields: seconds, minutes, hours, day-of-month, month, day-of-week, and an optional year, in that order. The wildcard '*' indicates every valid value. The character '?' is used in the day-of-month or day-of-week field to mean 'no specific value', i.e., when the other of the two fields defines the days to run the job.

It is good practice to include a comment stating in plain language what the cron expression means.

To assist you, use a builder for the Quartz cron format. One is available here:

A full description of the cron syntax is available on the Quartz site here.

Target Options

When the data is loaded into the destination database, there are three options for handling cases when the source query returns key values that already exist in the destination:

  • Append: Appends new rows to the end of the existing table. Fails on duplicate primary key values.
  • Merge: Merges data into the destination table. Matches primary key values to determine insert or update. Target tables must have a primary key.
  • Truncate: Deletes the contents of the destination table before inserting the selected data.
For example:

<destination schemaName="vehicle" queryName="targetQuery" targetOption="merge" />

Note: Merge and truncate are only supported for datasets, not lists.

Filter Strategy

Filter strategies define how the ETL process identifies new rows in the source database to be pulled over to the target.

For details see ETL: Filter Strategies.

File Targets

An ETL process can load data to a file, such as a comma-separated values (CSV) file, instead of loading data into a database table. For example, the following ETL configuration element directs output to a tab-separated file named "report.tsv". The rowDelimiter and columnDelimiter attributes are optional; if omitted, you get a standard TSV file.

<destination type="file" dir="etlOut" fileBaseName="report" fileExtension="tsv" />

Transaction Options

Transact Multi-Step ETL

A multi-step ETL can be wrapped in a single transaction on the source and/or destination database side, ensuring that the entire process will proceed on a consistent set of source data, and that the entire operation can be rolled back if any error occurs. This option should be considered only if:

  • Every step in the ETL uses the same source and/or destination scope. Individual steps may use different schemas within these sources.
  • Your desired operation cannot be supported using "modified" timestamps or "transfer id" columns, which are the preferred methods for ensuring data consistency across steps.
Note that transacting the source database is only supported for PostgreSQL. SQL Server does not support the SNAPSHOT isolation on REPEATABLE READ needed to avoid locking issues.

To enable multi-step transactions, use the "transactSourceSchema" and "transactDestinationSchema" attributes on the top-level "etl" element in an ETL xml:

<etl xmlns="" transactSourceSchema="SOURCE_SCHEMA_NAME" transactDestinationSchema="DESTINATION_SCHEMA_NAME">

The specified schema names can be any schema in a datasource of which LabKey is aware, i.e., any schema in the LabKey datasource, or any external schema mapped in an external datasource. Schema names are used instead of datasource names because schema names are known to the ETL author, whereas datasource names can be arbitrary and are set at server setup time.

Disable Transactions

Note that disabling transactions risks leaving the destination or target table in an intermediate state if an error occurs during ETL processing.

ETL steps are, by default, run as transactions. To turn off transactions when running an ETL process, set useTransaction to false on the destination, as shown below:

<destination schemaName="study" queryName="demographics" useTransaction="false" />

Batch Process Transactions

Note that batch processing transactions risks leaving the destination or target table in an intermediate state if an error occurs during ETL processing.

By default, a single ETL job runs as a single transaction, no matter how many rows are processed. You can change this behavior by specifying that a new transaction be committed for every given number of rows processed. In the example below, a new transaction is committed for every 500 rows processed:

<destination schemaName="study" queryName="demographics" bulkLoad="true" batchSize="500" />

Command Tasks

Once a command task has been registered in a pipeline task xml file, you can specify the task as an ETL step.

<transform id="ProcessingEngine" type="ExternalPipelineTask" 

See this example module for an ETL that calls a pipeline job:

Permission to Run

ETL processes are run in the context of a folder. If run manually, they run with the permissions of the initiating user. If scheduled, they will run with the permissions of a "service user" which can be configured by the folder administrator.

Related Topics

ETL: Filter Strategies

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

The filter strategy is how the ETL process identifies new rows in the source database. A filter strategy compares a designated column in the source against values recorded from the previous run, and pulls over only the new or changed rows based on that column. There are three options:

  • SelectAllFilterStrategy: Get all data rows from the source, applying no filter.
  • ModifiedSinceFilterStrategy: Select the latest changes to the source table. Uses a date/timestamp column (specified by timestampColumnName) to identify new/updated records since the last ETL job run. Rows changed since the last run are selected. This is the most commonly used filter strategy.
  • RunFilterStrategy: Check a specified column, typically an increasing integer column (e.g., Run ID), against a given or stored value. For instance, any rows with a higher value than when the ETL process was last run are transformed. Often used for relational data, multi-staged transfer pipelines, or when an earlier upstream process writes a batch of parent-child data to the source. Useful when child records must be transferred at the same time as the parent records.
For example, the strategy below says to check for updated data by consulting the "Date" field.

<incrementalFilter className="ModifiedSinceFilterStrategy" timestampColumnName="Date" />
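For comparison, a RunFilterStrategy filter might look like the sketch below. This is an illustrative fragment, not a definitive reference: the schema, table, and column names (and the exact attribute set) are placeholder assumptions for a setup where a "Transfer" table records each batch.

```xml
<!-- Sketch only: runTableSchema, runTable, pkColumnName, and fkColumnName
     values here are hypothetical examples, not required names. -->
<incrementalFilter className="RunFilterStrategy"
                   runTableSchema="patient" runTable="Transfer"
                   pkColumnName="Rowid" fkColumnName="TransformRun" />
```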

Filter Strategies and Merge Options

  • append - Add new rows to your target table and avoid conflicts/duplicate rows.

Incremental Deletion of Target Rows

When incrementally deleting rows based on a selective filter strategy, use the element deletedRowsSource to correctly track the filtered values for deletion independently of the main query. Even if there are no new rows in the source query, any new records in the deletedRowsSource will still be found and deleted from the target. Using this method, the non-deleted rows keep their row ids, maintaining any links to other objects in the target table.
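As a hedged sketch of this configuration (the schema, query, and key column names below are illustrative assumptions, not required values), the deletion-tracking query is declared as a child of the incremental filter:

```xml
<incrementalFilter className="ModifiedSinceFilterStrategy" timestampColumnName="modified">
  <!-- Illustrative names: point this at the query that records deleted rows -->
  <deletedRowsSource schemaName="external" queryName="DeletedRecords"
                     deletedSourceKeyColumnName="rowId" targetKeyColumnName="rowId"/>
</incrementalFilter>
```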

ETL: Column Mapping

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

Column Mapping

If your source and target tables have different column names, you can configure a mapping between the columns, such that data from one column will be loaded into the mapped column, even if it has a different name. For example, suppose you are working with the following tables:

Source Table Columns | Target Table Columns
ParticipantId | SubjectId
StartDate | Date
Gender | Sex
TreatmentGroup | Treatment
Cohort | Group

Below we add a mapping such that data from "ParticipantId" is loaded into the column "SubjectId". Add column mappings to your ETL configuration using a <columnTransforms> element, with <column> elements to define each name mapping. For example:

<transform id="transform1">
  <source schemaName="study" queryName="Participants"/>
  <destination schemaName="study" queryName="Subjects" targetOption="merge">
    <columnTransforms>
      <column source="ParticipantId" target="SubjectId"/>
      <column source="StartDate" target="Date"/>
      <column source="Gender" target="Sex"/>
      <column source="TreatmentGroup" target="Treatment"/>
      <column source="Cohort" target="Group"/>
    </columnTransforms>
  </destination>
</transform>

Column mapping is supported for both query and file destinations. Mapping one source column onto many destination columns is not supported.

Container Columns

Container columns can be used to integrate data across different containers within LabKey Server. For example, data gathered in one project can be referenced from other locations as if it were available locally. However, ETL processes are limited to running within a single container. You cannot map a target container column to anything other than the container in which the ETL process is run.


Constants

To assign a constant value to a given target column, use a constant in your ETL configuration .xml file. For example, this sample would write "schema1.0" into the sourceVersion column of every row processed:

<column name="sourceVersion" type="VARCHAR" value="schema1.0"/>

If a column named "sourceVersion" exists in the source query, the constant value specified in your ETL xml file is used instead.

Constants can be set at both:

  • The top level of your ETL xml: the constant is applied for every step in the ETL process.
  • At an individual transform step level: the constant is only applied for that step and overrides any global constant that may have been set.
<destination schemaName="vehicle" queryName="etl_target">
  <constants>
    <column name="sourceVersion" type="VARCHAR" value="myStepValue"/>
  </constants>
</destination>
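Putting the two scopes together, a minimal sketch of a definition using both a global and a step-level constant might look like the following. The schema and query names are illustrative placeholders, and the placement of the top-level <constants> element is an assumption based on the step-level form:

```xml
<etl xmlns="">
  <name>Constant Example</name>
  <!-- Global constant: applied for every step in the ETL process -->
  <constants>
    <column name="sourceVersion" type="VARCHAR" value="schema1.0"/>
  </constants>
  <transform id="step1" type="org.labkey.di.pipeline.TransformTask">
    <source schemaName="vehicle" queryName="etl_source"/>
    <!-- Step-level constant: overrides the global value for this step only -->
    <destination schemaName="vehicle" queryName="etl_target">
      <constants>
        <column name="sourceVersion" type="VARCHAR" value="myStepValue"/>
      </constants>
    </destination>
  </transform>
</etl>
```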

Creation and Modification Columns

If the source table includes the following columns, they will be populated in the target table with the same names:

  • EntityId
  • Created
  • CreatedBy
  • Modified
  • ModifiedBy
If the source table includes values for these columns, they will be retained. CreatedBy and ModifiedBy are integer columns that are lookups into the core.users table. When the source table includes a username value for one of these fields, the matching user is found in the core.users table and that user ID value is used. If no matching user is found, a deactivated user is generated on the LabKey side and the column is populated with that new user ID.

If no data is provided for these columns, they will be populated with the time and user information from the running of the ETL process.

DataIntegration Columns

Adding the following data integration ('di') columns to your target table will enable integration with other related data and log information.

Column Name | PostgreSQL Type | MS SQL Server Type | Notes
diModified | TIMESTAMP | DATETIME | Values here may be updated in later data merges.
diModifiedBy | USERID | USERID | Values here may be updated in later data merges.
diCreated | TIMESTAMP | DATETIME | Values here are set when the row is first inserted via an ETL process, and never updated afterwards.
diCreatedBy | USERID | USERID | Values here are set when the row is first inserted via an ETL process, and never updated afterwards.

The value written to diTransformRunId will match the value written to the TransformRunId column in the table dataintegration.transformrun, indicating which ETL run was responsible for adding which rows of data to your target table.

Transformation Java Classes

The ETL pipeline allows Java developers to add a transformation Java class to a particular column. This Java class can validate, transform, or perform some other action on the data values in the column. For details and an example, see ETL: Examples.


ETL: Queuing ETL Processes

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

You can call an ETL task from within another ETL process by using a <taskref> that refers to org.labkey.di.steps.QueueJobTask.

Reference the ETL process you wish to queue up by module name and file name, using the pattern "{MODULE_NAME}/FILE_NAME". For example, to queue up the process MaleNC.xml in the module etlmodule, use the following:

<transform id="QueueTail" type="TaskrefTransformStep">
  <taskref ref="org.labkey.di.steps.QueueJobTask">
    <settings>
      <setting name="transformId" value="etlmodule/MaleNC"/>
    </settings>
  </taskref>
</transform>

An ETL process can also queue itself by omitting the <setting> element:

<transform id="requeueNlpTransfer" type="TaskrefTransformStep">
  <taskref ref="org.labkey.di.steps.QueueJobTask"/>
</transform>

Handling Generated Files

If file outputs are involved (for example, if one ETL process outputs a file, and then queues another process that expects to use the file in a pipeline task), all ETL configurations in the chain must have the attribute loadReferencedFiles="true" in order for the runs to link up properly.

<etl xmlns="" loadReferencedFiles="true">

Standalone vs. Component ETL Processes

ETL processes can be set as either "standalone" or "sub-component":

  • Standalone ETL processes:
    • Appear in the Data Transforms web part
    • Can be run directly via the user or via another ETL
  • Sub-Component ETL processes or tasks:
    • Not shown in the Data Transforms web part
    • Cannot be run directly by the user, but can be run only by another ETL process, as a sub-component of a wider job.
    • Cannot be enabled or run directly via an API call.
To configure as a sub-component, set the ETL "standalone" attribute to false. By default the standalone attribute is true.

<etl xmlns="" standalone="false">

ETL: Stored Procedures

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

Stored Procedures as Source Queries

Instead of extracting data directly from a source query and loading it into a target query, an ETL process can call one or more stored procedures that themselves move data from the source to the target (or the procedures can transform the data in some other way). For example, the following ETL process runs a stored procedure to populate the Patients table.

<?xml version="1.0" encoding="UTF-8"?>
<etl xmlns="">
  <name>Populate Patient Table</name>
  <description>Populate Patients table with calculated and converted values.</description>
  <transform id="ExtendedPatients" type="StoredProcedure">
    <description>Calculates date of death or last contact for a patient, and patient ages at events of interest</description>
    <procedure schemaName="patient" procedureName="PopulateExtendedPatients" useTransaction="true"/>
  </transform>
  <!-- run at 3:30am every day -->
  <schedule><cron expression="0 30 3 * * ?"/></schedule>
</etl>

Special Behavior for Different Database Implementations

ETL: Stored Procedures in MS SQL Server

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

You can call a stored procedure as a transform step to leverage existing database resources.

Example - Normalize Data

The following ETL process uses the stored procedure normalizePatientData to modify the source data.

<?xml version="1.0" encoding="UTF-8"?>
<etl xmlns="">
  <name>Target #1 (Normalize Gender Values - Stored Procedure)</name>
  <description>Runs a stored procedure.</description>
  <transform id="storedproc" type="StoredProcedure">
    <description>Runs a stored procedure to normalize values in the Gender column.</description>
    <procedure schemaName="target1" procedureName="normalizePatientData"/>
  </transform>
</etl>

The stored procedure is shown below.

CREATE procedure [target1].[normalizePatientData] (@transformRunId integer)
AS
BEGIN
  UPDATE Patients SET Gender='Female' WHERE (Gender='f' OR Gender='F');
  UPDATE Patients SET Gender='Male' WHERE (Gender='m' OR Gender='M');
  RETURN 0
END


The <procedure> element can have <parameter> child elements that specify the initial seed values passed in as input/output parameters. Note that the "@" sign prefix for parameter names in the ETL xml configuration is optional.

<procedure … >
  <parameter name="@param1" value="100" override="false"/>
  <parameter name="@param2" value="200" override="false"/>
</procedure>

The output values of all input/output parameters are persisted in the database, and are used as input values for the next pass. These values take precedence over the initial seed values specified in the xml file. To reset and force the use of the value from the xml file, set the optional override attribute to "true".

<procedure schemaName="external" procedureName="etlTestRunBased">
  <parameter name="@femaleGenderName" value="Female" override="false"/>
  <parameter name="@maleGenderName" value="Male" override="false"/>
</procedure>

CREATE procedure [target1].[normalizePatientData] (@transformRunId integer,
  @maleGenderName VARCHAR(25),
  @femaleGenderName VARCHAR(25))
AS
BEGIN
  UPDATE Patients SET Gender=@femaleGenderName WHERE (Gender='f' OR Gender='F');
  UPDATE Patients SET Gender=@maleGenderName WHERE (Gender='m' OR Gender='M');
  RETURN 0
END

Parameters - Special Processing

The following parameters are given special processing.

Parameter | Direction | Datatype | Notes
@transformRunId | Input | int | Assigned the value of the current transform run id.
@filterRunId | Input or Input/Output | int | For RunFilterStrategy, assigned the value of the new transfer/transform to find records for. This is identical to SimpleQueryTransformStep's processing. For any other filter strategy, this parameter is available and persisted for the stored procedure to use otherwise. On first run, it is set to -1.
@filterStartTimestamp | Input or Input/Output | datetime | For ModifiedSinceFilterStrategy with a source query, populated with the IncrementalStartTimestamp value to use for filtering. This is the same as SimpleQueryTransformStep. For any other filter strategy, this parameter is available and persisted for the stored procedure to use otherwise. On first run, it is set to NULL.
@filterEndTimestamp | Input or Input/Output | datetime | For ModifiedSinceFilterStrategy with a source query, populated with the IncrementalEndTimestamp value to use for filtering. This is the same as SimpleQueryTransformStep. For any other filter strategy, this parameter is available and persisted for the stored procedure to use otherwise. On first run, it is set to NULL.
@containerId | Input | GUID/Entity ID | If present, will always be set to the id of the container in which the job is run.
@rowsInserted | Input/Output | int | Should be set within the stored procedure, and will be recorded as for SimpleQueryTransformStep. Initialized to -1. Note: The TransformRun.RecordCount is the sum of rows inserted, deleted, and modified.
@rowsDeleted | Input/Output | int | Should be set within the stored procedure, and will be recorded as for SimpleQueryTransformStep. Initialized to -1. Note: The TransformRun.RecordCount is the sum of rows inserted, deleted, and modified.
@rowsModified | Input/Output | int | Should be set within the stored procedure, and will be recorded as for SimpleQueryTransformStep. Initialized to -1. Note: The TransformRun.RecordCount is the sum of rows inserted, deleted, and modified.
@returnMsg | Input/Output | varchar | If the output value is not empty or null, the string value is written to the output log.
@debug | Input | bit | Convenience to specify any special debug processing within the stored procedure. May consider setting this automatically from the Verbose flag.
Return Code | special | int | All stored procedures must return an integer value on exit. "0" indicates correct processing. Any other value indicates an error condition and the run is aborted.

To write to the ETL log file, use a 'print' statement inside the procedure.
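For example, a line like the following inside the procedure body would write a message to the ETL log; the message text is arbitrary:

```sql
PRINT 'Normalizing Gender values in the Patients table';
```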

Log Rows Modified

Use special parameters to log the number of rows inserted, changed, etc. as follows:

CREATE procedure [target1].[normalizePatientData] (@transformRunId integer
  , @parm1 varchar(25) OUTPUT
  , @gender varchar(25) OUTPUT
  , @rowsInserted integer OUTPUT
  , @rowCount integer OUTPUT
  , @rowsDeleted integer OUTPUT
  , @rowsModified integer OUTPUT
  , @filterStartTimestamp datetime OUTPUT)
AS
BEGIN
  SET @rowsModified = 0
  UPDATE Patients SET Gender='Female' WHERE (Gender='f' OR Gender='F');
  SET @rowsModified = @@ROWCOUNT
  UPDATE Patients SET Gender='Male' WHERE (Gender='m' OR Gender='M');
  SET @rowsModified += @@ROWCOUNT
  RETURN 0
END

Optional Source

A source element is optional for stored procedure transforms. When present, it must be used in combination with the RunFilterStrategy or ModifiedSinceFilterStrategy filter strategy.

<transform id="storedproc" type="StoredProcedure">
  <description>Runs a stored procedure to normalize values in the Gender column.</description>
  <!-- Optional source element -->
  <!-- <source schemaName="study" queryName="PatientsWarehouse"/> -->
  <procedure schemaName="target1" procedureName="normalizePatientData"/>
</transform>


By default, all stored procedures are wrapped in transactions, so that if any part of the procedure fails, any changes already made are rolled back. For debugging purposes, turn off the transaction wrapper by setting useTransaction to "false":

<procedure schemaName="target1" procedureName="normalizePatientData" useTransaction="false">

ETL: Functions in PostgreSQL

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

ETLs can call PostgreSQL functions as part of a transform step.

To call a PostgreSQL function from an ETL process, refer to the function in a transform element of the ETL configuration file. For example, the following ETL process calls "postgresFunction" in the patient schema.

ETL XML Configuration File

<?xml version="1.0" encoding="UTF-8"?>
<etl xmlns="">
  <name>Stored Proc Normal Operation</name>
  <description>Normal operation</description>
  <transform id="callfunction" type="StoredProcedure">
    <procedure schemaName="patient" procedureName="postgresFunction" useTransaction="false">
      <parameter name="inoutparam" value="before"/>
    </procedure>
  </transform>
</etl>

Function and Parameter Requirements

PostgreSQL functions called by an ETL process must meet the following requirements:

  • The PostgreSQL function must be of return type record.
  • Parameter names, including the Special Processing parameters (see table below), are case-insensitive.
  • There can be an arbitrary number of custom INPUT and/or INPUT/OUTPUT parameters defined for the function.
  • There can be at most one pure OUTPUT parameter. This OUTPUT parameter must be named "return_status" and must be of type INTEGER. If present, the return_status parameter must be assigned a value of 0 for successful operation. Values > 0 are interpreted as error conditions.
  • Function overloading of differing parameter counts is not currently supported. There can be only one function (procedure) in the PostgreSQL database with the given schema & name combination.
  • Optional parameters in PostgreSQL are not currently supported. An ETL process using a given function must provide a value for every custom parameter defined in the function.
  • PostgreSQL does not have a "print" statement. Writing to the ETL log can be accomplished with a "RAISE NOTICE" statement, for example:
RAISE NOTICE '%', 'Test print statement logging';
  • The "@" sign prefix for parameter names in the ETL configuration xml is optional (for both SQL Server and PostgreSQL). When IN/OUT parameters are persisted in the dataintegration.transformConfiguration.transformState field, their names are consistent with their native dialect (an "@" prefix for SQL Server, no prefix for PostgreSQL).

Parameters - Special Processing

The following parameters are given special processing.

Note that the output values of INOUT's are persisted to be used as inputs on the next run.

Parameter | Direction | Datatype | Notes
transformRunId | Input | int | Assigned the value of the current transform run id.
filterRunId | Input or Input/Output | int | For RunFilterStrategy, assigned the value of the new transfer/transform to find records for. This is identical to SimpleQueryTransformStep's processing. For any other filter strategy, this parameter is available and persisted for functions to use otherwise. On first run, it is set to -1.
filterStartTimestamp | Input or Input/Output | datetime | For ModifiedSinceFilterStrategy with a source query, populated with the IncrementalStartTimestamp value to use for filtering. This is the same as SimpleQueryTransformStep. For any other filter strategy, this parameter is available and persisted for functions to use otherwise. On first run, it is set to NULL.
filterEndTimestamp | Input or Input/Output | datetime | For ModifiedSinceFilterStrategy with a source query, populated with the IncrementalEndTimestamp value to use for filtering. This is the same as SimpleQueryTransformStep. For any other filter strategy, this parameter is available and persisted for functions to use otherwise. On first run, it is set to NULL.
containerId | Input | GUID/Entity ID | If present, will always be set to the id of the container in which the job is run.
rowsInserted | Input/Output | int | Should be set within the function, and will be recorded as for SimpleQueryTransformStep. Initialized to -1. Note: The TransformRun.RecordCount is the sum of rows inserted, deleted, and modified.
rowsDeleted | Input/Output | int | Should be set within the function, and will be recorded as for SimpleQueryTransformStep. Initialized to -1. Note: The TransformRun.RecordCount is the sum of rows inserted, deleted, and modified.
rowsModified | Input/Output | int | Should be set within the function, and will be recorded as for SimpleQueryTransformStep. Initialized to -1. Note: The TransformRun.RecordCount is the sum of rows inserted, deleted, and modified.
returnMsg | Input/Output | varchar | If the output value is not empty or null, the string value is written to the output log.
debug | Input | bit | Convenience to specify any special debug processing within the function.
return_status | special | int | All functions must return an integer value on exit. "0" indicates correct processing. Any other value indicates an error condition and the run is aborted.

Example PostgreSQL Function

CREATE OR REPLACE FUNCTION patient.postgresFunction
(IN transformrunid integer
, INOUT rowsinserted integer DEFAULT 0
, INOUT rowsdeleted integer DEFAULT 0
, INOUT rowsmodified integer DEFAULT 0
, INOUT returnmsg character varying DEFAULT 'default message'::character varying
, IN filterrunid integer DEFAULT NULL::integer
, INOUT filterstarttimestamp timestamp without time zone DEFAULT NULL::timestamp without time zone
, INOUT filterendtimestamp timestamp without time zone DEFAULT NULL::timestamp without time zone
, INOUT runcount integer DEFAULT 1
, INOUT inoutparam character varying DEFAULT ''::character varying
, OUT return_status integer)
RETURNS record AS
$BODY$
BEGIN

/* Function logic here */

return_status := 0;
RETURN;
END;
$BODY$
LANGUAGE plpgsql;
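A corresponding procedure for SQL Server can be sketched with matching OUTPUT parameters. This is a minimal, hypothetical sketch: the procedure name and body are illustrative, only the parameter contract comes from the table above.

```sql
CREATE PROCEDURE patient.mssqlProcedure
    @transformRunId INT,
    @filterRunId INT = NULL,
    @rowsInserted INT = -1 OUTPUT,
    @rowsDeleted INT = -1 OUTPUT,
    @rowsModified INT = -1 OUTPUT,
    @returnMsg VARCHAR(100) = 'default message' OUTPUT,
    @debug BIT = 0
AS
BEGIN
    -- Transform logic goes here; set the row-count OUTPUT parameters
    -- so the ETL framework can record them for the run.
    SET @rowsInserted = 0;
    SET @rowsDeleted = 0;
    SET @rowsModified = 0;
    RETURN 0; -- 0 indicates success; any other value aborts the run
END
```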

ETL: Check For Work From a Stored Procedure

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

You can set up a stored procedure as a gating procedure within an ETL process by adding a 'noWorkValue' attribute to a 'parameter' element. The stored procedure is used to check whether there is work for the ETL job to do. If the output value of the StagingControl parameter is equal to its noWorkValue, there is no work for the ETL job to do and subsequent transforms will not be run; otherwise, they will run. In the following example, the transform "checkToRun" controls whether the following transform "queuedJob" will run.

<transform id="checkToRun" type="StoredProcedure">
    <procedure schemaName="patient" procedureName="workcheck" useTransaction="false">
        <parameter name="StagingControl" value="1" noWorkValue="-1"/>
    </procedure>
</transform>
<transform id="queuedJob">
    <source schemaName="patient_source" queryName="etl_source" />
    <destination schemaName="patient_target" queryName="Patients" targetOption="merge"/>
</transform>

The noWorkValue can either be a hard-coded string (for example, "-1", shown above), or you can use a substitution syntax to indicate a comparison should be against the input value of a certain parameter.

For example, the following parameter indicates there is no work for the ETL job if the output value of batchId is the same as the output value persisted from the previous run.

<parameter name="batchId" noWorkValue="${batchId}"/>


In the ETL transform below, the gating procedure checks if there is a new ClientStagingControlID to process. If there is, the ETL job goes into the queue. When the job starts, the procedure is run again in the normal job context; the new ClientStagingControlID is returned again. The second time around, the output value is persisted into the global space, so further procedures can use the new value. Because the gating procedure is run twice, don’t use this with stored procedures that have other data manipulation effects! There can be multiple gating procedures, and each procedure can have multiple gating params, but during the check for work, modified global output param values are not shared between procedures.

<transform id="CheckForWork" type="StoredProcedure">
    <description>Check for new batch</description>
    <procedure schemaName="patient" procedureName="GetNextClientStagingControlID">
        <parameter name="ClientStagingControlID" value="-1" scope="global" noWorkValue="${ClientStagingControlID}"/>
        <parameter name="ClientSystem" value="LabKey-nlp-01" scope="global"/>
        <parameter name="StagedTable" value="PathOBRX" scope="global"/>
    </procedure>
</transform>
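A gating procedure of this kind might look like the following PostgreSQL sketch. It is illustrative only: the function name and the patient.stagingcontrol table are hypothetical, not part of LabKey. The key point is that it returns the next control ID through an INOUT parameter, or leaves the input value unchanged when there is no new batch, which is what the ${ClientStagingControlID} noWorkValue comparison detects.

```sql
CREATE OR REPLACE FUNCTION patient.getNextControlIdSketch
(INOUT clientstagingcontrolid integer
, OUT return_status integer)
RETURNS record AS
$BODY$
DECLARE
    nextid integer;
BEGIN
    -- Hypothetical staging table; substitute the real source of control IDs.
    SELECT MIN(controlid) INTO nextid
    FROM patient.stagingcontrol
    WHERE controlid > clientstagingcontrolid;

    IF nextid IS NOT NULL THEN
        clientstagingcontrolid := nextid; -- changed output value: there is work
    END IF;                               -- unchanged value matches noWorkValue: no work

    return_status := 0;
END;
$BODY$
LANGUAGE plpgsql;
```

Because the framework runs a gating procedure twice (once to check for work, once in the job context), a read-only check like this is safe; avoid side effects in gating procedures, as the text above warns.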

ETL: SQL Scripts

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

You can include SQL scripts in your ETL module that will run automatically when the module is deployed, in order to generate target databases for your ETL processes. For step-by-step instructions on running a script, see ETL Tutorial: Create a New ETL Process.

Directory Structure

LabKey Server will automatically run SQL scripts that are packaged inside your module in the following directory structure:

MODULE_NAME
    config
    etls
    queries
    schemas
        dbscripts
            postgres
                SCRIPT_NAME.sql - Script for PostgreSQL.
            mssql
                SCRIPT_NAME.sql - Script for MS SQL Server.

SQL Script Names

Script names are formed from three components: (1) schema name, (2) previous module version, and (3) current module version, according to the following pattern:

SCHEMA-PREVIOUSVERSION-CURRENTVERSION.sql

where SCHEMA is the name of the schema to be generated by the script.

For an initially deployed module that hasn't existed on the server previously, an example script name would be:

patient-0.00-1.00.sql

For more details on naming scripts, especially naming upgrade scripts, see Modules: SQL Scripts.

Schema XML File

LabKey will generate an XML schema file for a table schema by visiting a magic URL of the form:



This script creates a simple table and a stored procedure in the MS SQL Server dialect.

CREATE TABLE target1.Patients
(
    LastName VARCHAR(30),
    FirstName VARCHAR(30),
    MiddleName VARCHAR(30),
    Gender VARCHAR(30),
    PrimaryLanguage VARCHAR(30),
    Email VARCHAR(30),
    Address VARCHAR(30),
    City VARCHAR(30),
    State VARCHAR(30),
    Diagnosis VARCHAR(30)
);
GO

CREATE PROCEDURE [target1].[normalizePatientData] (@transformRunId INTEGER)
AS
BEGIN
    UPDATE target1.Patients SET Gender='Female' WHERE (Gender='f' OR Gender='F');
    UPDATE target1.Patients SET Gender='Male' WHERE (Gender='m' OR Gender='M');
END
GO

These scripts are in the PostgreSQL SQL dialect.

-- schema1 --

CREATE TABLE schema1.patients
(
    patientid character varying(32),
    date timestamp without time zone,
    startdate timestamp without time zone,
    country character varying(4000),
    language character varying(4000),
    gender character varying(4000),
    treatmentgroup character varying(4000),
    status character varying(4000),
    comments character varying(4000),
    CONSTRAINT patients_pk PRIMARY KEY (patientid)
);

CREATE OR REPLACE FUNCTION changecase(searchtext varchar(100), replacetext varchar(100)) RETURNS integer AS $$
BEGIN
    UPDATE schema1.patients
    SET gender = replacetext
    WHERE gender = searchtext;
    RETURN 1;
END;
$$ LANGUAGE plpgsql;
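Once deployed, the changecase function can be invoked from a StoredProcedure transform step or tested manually in a query tool. The calls below are illustrative; the specific code values are assumptions about the source data.

```sql
-- Normalize single-letter gender codes to full words.
SELECT changecase('F', 'Female');
SELECT changecase('M', 'Male');
```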


ETL: Remote Connections

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.

ETL modules can access data through a remote connection to an alternate LabKey Server.

To set up a remote connection, see Manage Remote Connections.

To configure an ETL process to utilize a remote connection, specify the transform type and the remoteSource as shown below:

<transform type="RemoteQueryTransformStep" id="step1">
    <source remoteSource="EtlTest_RemoteConnection" schemaName="study" queryName="etl source" />
    ...
</transform>

A sample ETL configuration file is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<etl xmlns="http://labkey.org/etl/xml">
    <name>Remote Test</name>
    <description>append rows from "remote" etl_source to etl_target</description>
    <transforms>
        <transform type="RemoteQueryTransformStep" id="step1">
            <description>Copy to target</description>
            <source remoteSource="EtlTest_RemoteConnection" schemaName="study" queryName="etl source" />
            <destination schemaName="study" queryName="etl target" targetOption="truncate"/>
        </transform>
    </transforms>
    <incrementalFilter className="ModifiedSinceFilterStrategy" timestampColumnName="modified" />
</etl>

Note that using <deletedRowSource> in an incremental filter does not support a remote connection.


The ModifiedSinceFilterStrategy incremental filter with a specified timestampColumnName, as shown in the sample above, directs this ETL to update only rows that have been modified on the source. Rows that have been modified on the target but not on the source will be skipped by the ETL and not overwritten.


ETL: Logs and Error Handling

Premium Feature — Available in the Professional, Professional Plus, and Enterprise Editions. Learn more or contact LabKey.


Messages and errors from an ETL job are written to a log file named for that job and saved in the File Repository, available at > Go To Module > FileContent. In the File Repository, individual log files are located in the folder named etlLogs.

The absolute paths to these logs follow the pattern below.


for example: