The user who installs LabKey Server becomes the first Site Administrator and has administrative privileges across the entire site. The Administrator invites other users and can grant this administrative access to others as desired.
When administering a production server, it is good practice to also configure and use staging and test servers for testing your applications and upgrades before going live.
Securely sharing research data presents a number of major challenges:
Different groups and individuals require different levels of access to the data. Some groups should be able to see the data, but not change it. Others should be able to see only the data they have submitted themselves but not the entire pool of available data. Administrators should be able to see and change all of the data. Other cases require more refined permission settings.
Protected Health Information (PHI) should have special handling, such as mechanisms for anonymizing or obscuring participant IDs and exam dates.
Administrators should have a way to audit and review all of the activity pertaining to the secured data, so that they can answer questions such as: 'Who has accessed this data, and when?'.
This tutorial shows you how to use LabKey Server to overcome these challenges. You will learn to:
Assign different permissions and data access to different groups
Test your configuration before adding real users
Audit the activity around your data
Provide randomized data to protect PHI
As you go through the tutorial, imagine that you are in charge of a large research project, managing multiple teams, each requiring different levels of access to the collected data. You want to ensure that some teams can see and interact with their own data, but not data from other teams. You will need to (1) organize this data in a sensible way and (2) secure the data so that only the right team members can access the right data.
Suppose you are collecting data from multiple labs for a longitudinal study. The different teams involved will gather their data and perform quality control steps before the data is integrated into the study. You need to ensure that the different teams cannot see each other's data until it has been added to the study. In this tutorial, you will install a sample workspace that provides a framework of folders and data to experiment with different security configurations.
You configure security by assigning different levels of access to users and groups of users (for a given folder). Different access levels, such as Reader, Author, Editor, etc., allow users to do different things with the data in a given folder. For example, if you assign an individual user Reader level access to a folder, then that user will be able to see, but not change, the data in that folder. These different access/permission levels are called roles.
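If you work with the JavaScript client API, you can also inspect permissions programmatically. A minimal sketch, assuming the LABKEY client API is loaded on a page in your folder; the email address is hypothetical and the response shape can vary by server version, so it is simply dumped for inspection:

    // Ask the server which permissions a given user holds in the current folder.
    LABKEY.Security.getUserPermissions({
        userEmail: 'labtech@example.com',   // hypothetical user
        success: function (info) {
            // Dump the response; it reports the user's effective
            // permissions in this container.
            console.log(JSON.stringify(info, null, 2));
        },
        failure: function (error) {
            console.warn(error.exception);
        }
    });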
Set Up Security Workspace
The tutorial workspace is provided as a folder archive file, preconfigured with subfolders and team resources that you will work with in this tutorial. First, install this preconfigured workspace by creating an empty folder and then importing the folder archive file into it.
Log in to your server and navigate to your "Tutorials" project. Create it if necessary.
If you don't already have a server to work on where you can create projects, start here.
If you don't know how to create projects and folders, review this topic.
Create a new folder named "Security Tutorial". Accept all defaults.
Import the downloaded workspace archive into the new folder:
Select (Admin) > Folder > Management and click the Import tab.
Confirm Local zip archive is selected and click Choose File (or Browse) and select the SecurityTutorial.folder.zip you downloaded.
Click Import Folder.
When the pipeline status shows "COMPLETE", click the folder name to navigate to it.
Structure of the Security Workspace
The security workspace contains four folders:
Security Tutorial -- The main parent folder.
Lab A - Child folder for the lab A team, containing data and resources visible only to team A.
Lab B - Child folder for the lab B team, containing data and resources visible only to team B.
Study - Child folder intended as the shared folder visible to all teams.
In the steps that follow we will configure each folder with different access permissions customized for each team.
To see and navigate to these folders in the LabKey Server user interface:
Hover over the project menu to see the contents.
Open the folder node Security Tutorial by clicking the expansion button.
You will see three subfolders inside: Lab A, Lab B, and Study.
Click a subfolder name to navigate to it.
Configure Permissions for Folders
How do you restrict access to the various folders so that only members of each authorized team can see and change the contents? The procedure for restricting access has two overarching steps:
Create user groups corresponding to each team.
Assign the appropriate roles in each folder to each group.
Considering our scenario, we will configure the folders with the following permissions:
Only the Lab A group will be able to see the Lab A folder.
Only the Lab B group will be able to see the Lab B folder.
In the Study folder, Lab A and Lab B groups will have Reader access (so those teams can see the integrated data).
In the Study folder, a specific "Study Group" will have Editor access (intended for those users working directly with the study data).
To perform this procedure, first create the groups:
Navigate to the folder Lab A (click it on the project menu).
Select (Admin) > Folder > Permissions.
Notice that the security configuration page is greyed-out. This is because the default security setting, Inherit permissions from parent, is checked. That is, security for Lab A starts out using the settings of its parent folder, Security Tutorial.
Click the tab Project Groups. Create the following groups by entering the "New group name", then clicking Create New Group.
Lab A Group
Lab B Group
Study Group
You don't need to add any users to the groups; just click Done in the popup window.
Note that these groups are created at the project level, so they will be available in all project subfolders after this point.
Click Save when finished adding groups.
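If you script your configuration, the same project groups can be created with the JavaScript client API. A minimal sketch, assuming the LABKEY client API is available and that "/Tutorials" is your project path (both assumptions; adjust to your server):

    // Create the three tutorial groups at the project level.
    ['Lab A Group', 'Lab B Group', 'Study Group'].forEach(function (name) {
        LABKEY.Security.createGroup({
            containerPath: '/Tutorials',    // groups live at the project level
            groupName: name,
            success: function (group) {
                console.log('Created group "' + name + '"');
            },
            failure: function (error) {
                console.warn('Could not create "' + name + '": ' + error.exception);
            }
        });
    });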
Next assign roles in each folder to these groups:
Click the Permissions tab.
Confirm that the Lab A folder is bold, i.e. "current", in the left-side pane.
Uncheck Inherit permissions from parent.
Notice that the configuration page is activated for setting permissions different from those in the parent.
(The asterisk in the left hand pane indicating permissions are inherited won't disappear until you save changes.)
Locate the Editor role. This role allows users to read, add, update, and delete information in the current folder.
Open the "Select user or group" dropdown for the Editor role, and select the group Lab A Group to add it.
Locate the Reader role and remove the All Site Users and Guests groups, if present. Click the X by each entry to remove it. If you see a warning when you remove these groups, simply dismiss it.
Click Save.
Select the Lab B folder, and repeat the steps:
Uncheck Inherit permissions from parent.
Add "Lab B Group" to the Editor role.
Remove site user and guest groups from the Reader role (if present).
Click Save.
Select the Study folder, and perform these steps:
Uncheck Inherit permissions from parent.
Add "Study Group" to the Editor role.
Remove any site user and guest groups from the Reader role.
Add the groups "Lab A Group" and "Lab B Group" to the Reader role.
Click Save and Finish.
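You can double-check the result programmatically as well. A sketch using LABKEY.Security.getGroupPermissions, with a hypothetical folder path; the exact response shape varies by version, so this simply dumps it:

    // Report the effective group permissions on the Lab A folder to confirm
    // that only Lab A Group (plus administrators) retains access.
    LABKEY.Security.getGroupPermissions({
        containerPath: '/Tutorials/Security Tutorial/Lab A',  // hypothetical path
        includeSubfolders: false,
        success: function (result) {
            console.log(JSON.stringify(result.container, null, 2));
        },
        failure: function (error) {
            console.warn(error.exception);
        }
    });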
In a real world application you would add individual users (and/or other groups) to the various groups. But this is not necessary to test our permissions configuration. Group and role "impersonation" lets you test security behavior before any actual users are added to the groups.
How do you test security configurations before adding any real world users to the system?
LabKey Server uses impersonation to solve this problem. An administrator can impersonate a role, a group, or an individual user. When impersonating, they shift their perspective on LabKey Server, viewing it as if they were logged in as a given role, group, or user. All such impersonations are logged, so that there is no question later who actually performed any action.
Impersonate Groups
To test the application's behavior, impersonate the groups in question, confirming that each group has access only to the appropriate folders.
Navigate to the Lab A folder.
Select (User) > Impersonate > Group, then select Lab A Group and click Impersonate in the popup.
Open the project and folder menu.
Notice that the Lab B folder is no longer visible to you -- while you impersonate, adopting the group A perspective, you don't have the role assignments necessary to see folder B at all.
Click Stop Impersonating.
Then, using the (User) menu, impersonate "Lab B Group."
The server will return with the message "User does not have permission to perform this operation", because you are trying to see the Lab A folder while impersonating the Lab B group. If you don't see this message, you may have forgotten to remove site users or guests as Readers on the Lab A folder.
Which users have logged on to LabKey Server? What data have they seen, and what operations have they performed?
To get answers to these questions, a site administrator can look at the audit log, a comprehensive catalog of user (and system) activity that is automatically generated by LabKey Server.
View the Audit Log
Select (Admin) > Site > Admin Console.
If you do not have sufficient permissions, you will see the message "User does not have permission to perform this operation". (You could either ask your Site Admin for improved permissions, or move to the next step in the tutorial.)
Under Management, click Audit Log.
Click the dropdown and select Project and Folder Events to see a list of recent project and folder events.
Click the dropdown again to view other kinds of activity, for example:
User Events (shows who has logged in and when; also shows impersonation events)
Group Events (shows which groups have been assigned which security roles).
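The audit tables are also exposed as a queryable schema, so a site administrator can pull the same events with the client API. A sketch, assuming the schema and table names below ("auditLog" and "ContainerAuditEvent") match those shown in your Schema Browser:

    // List the 20 most recent project and folder events.
    LABKEY.Query.selectRows({
        schemaName: 'auditLog',
        queryName: 'ContainerAuditEvent',   // confirm the name in your Schema Browser
        maxRows: 20,
        sort: '-Date',                      // newest first
        success: function (data) {
            data.rows.forEach(function (row) {
                console.log(row.Date + ' | ' + row.Comment);
            });
        },
        failure: function (error) {
            console.warn(error.exception);
        }
    });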
Data exported from LabKey Server can be protected by:
Randomizing participant ids so that the original participant ids are obscured.
Shifting date values, such as clinic visits. (Note that dates are shifted per participant, leaving their relative relationships as a series intact, thereby retaining much of the scientific value of the data. See the sketch after this list.)
Holding back data that has been marked as a certain level of PHI (Protected Health Information).
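To make the date-shifting idea concrete, here is a conceptual sketch (illustration only, not LabKey's implementation): each participant gets a single random offset, and every one of that participant's dates moves by that same offset, so intervals within the series are preserved:

    // Shift all of a participant's dates by one per-participant random offset.
    function shiftDates(visits /* array of {participantId, date} */) {
        var offsets = {};   // participantId -> offset in days
        return visits.map(function (v) {
            if (!(v.participantId in offsets)) {
                // one random offset of 1-365 days per participant
                offsets[v.participantId] = Math.floor(Math.random() * 365) + 1;
            }
            var shifted = new Date(v.date.getTime());
            shifted.setDate(shifted.getDate() - offsets[v.participantId]);
            return { participantId: v.participantId, date: shifted };
        });
    }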
In this step we will export data from the study, modifying and obscuring it in the ways described above.
Examine Study Data
First look at the data to be exported.
Navigate to the Security Tutorial > Study folder.
Click the Clinical and Assay Data tab. This tab shows the individual datasets in the study. There are two datasets: "EnrollmentInfo" and "MedicalExam".
Click MedicalExam. Notice the participant ID column, and choose an ID to look for later, such as "r123". When we export this table, we will randomize these IDs, obscuring the identity of the subjects of the study.
Click the Clinical and Assay Data tab.
Click EnrollmentInfo. Notice the dates in the table are almost all from April 2008. When we export this table, we will randomly shift these dates, to obscure when subject data was actually collected. Notice the enrollment date for the participant id you chose to track from the other dataset.
Notice the columns for Gender and Country. We will mark these as different levels of PHI so that we can publish results without them. (Because there is exactly one male patient from Germany in our sample, he would be easy to identify with only this information.)
Mark PHI Columns
We will mark two columns, "Gender" and "Country" as containing different levels of PHI. This gives us the opportunity to control when they are included in export. If we are exporting for users who are not granted access to any PHI, the export would not include the contents of either of these columns. If we exported for users with access to "Limited PHI", the export would include the contents of the column marked that way, but not the one marked "Full PHI."
Click the Manage tab. Click Manage Datasets.
Click EnrollmentInfo and then Edit Definition.
Click the Fields section.
Expand the Gender field.
Click Advanced Settings.
As PHI Level, select "Limited PHI" for this field.
Click Apply in the popup to save the setting.
Repeat for the Country field, selecting "Full PHI".
Scroll down and click Save.
Set up Alternate Participant IDs
Next we will configure how participant ids are handled on export, so that the ids are randomized using a given text and number pattern. Once alternate IDs are specified, they are maintained internally so that different exports and publications from the same study will contain matching alternates.
Click the Manage tab.
Click Manage Alternate Participant IDs and Aliases.
For Prefix, enter "ABC".
Click Change Alternate IDs.
Click OK to confirm: these alternate IDs will not match any previously used alternate IDs.
Click OK to close the popup indicating the action is complete.
Click Done.
Notice that you could also manually specify the alternate IDs by setting up a table of participant aliases that maps each participant to a value you provide.
Export/Publish Anonymized Data
Now we are ready to export or publish this data, using the extra data protections in place.
The following procedure will "Publish" the study, meaning a new child folder will be created and selected data from the study will be randomized and copied to it.
Return to the Manage tab.
Scroll down and click Publish Study.
Complete the wizard, selecting all participants, datasets, and timepoints in the study. For fields not mentioned here, enter anything you like.
On the Publish Options panel, check the following options:
Use Alternate Participant IDs
Shift Participant Dates
You could also check Mask Clinic Names which would protect any actual clinic names in the study by replacing them with a generic label "Clinic."
Under Include PHI Columns, select "Not PHI". This means that all columns tagged "Limited PHI" or higher will be excluded.
Click Finish.
Wait for the publishing process to finish.
Navigate to the new published study folder, a child folder under Study named New Study by default.
On the Clinical and Assay Data tab, look at the published datasets EnrollmentInfo and MedicalExam.
Notice how the real participant ids and dates have been obscured through the prefix, pattern, and shifting we specified.
Notice that the Gender and Country fields have been held back (not included in the published study).
If instead you selected "Limited PHI" as the level to include, you would have seen the "Gender" column but not the "Country" column.
Security for the New Folder
How should you configure the security on this new folder?
The answer depends on your requirements.
If you want anyone with an account on your server to see this "deidentified" data, you would add All Site Users to the Reader role.
If you want only members of the study team to have access, you would add Study Group to the desired role.
Project and folders form the workspaces and container structure of LabKey Server. A LabKey Server installation is organized as a folder hierarchy. The top of the hierarchy is called the "site", the next level of folders are called "projects". A project corresponds to a team or an area of work and
can contain any number of "folders and subfolders" underneath to present each collaborating team with precisely the subset of LabKey tools needed.
The Project and Folder Hierarchy forms the basic organizing container inside LabKey Server. Everything you create or configure in LabKey Server is located in some folder in the hierarchy. The hierarchy is structured like a directory-tree: each folder can contain any number of other folders, forming branching nodes.
An individual installation of LabKey Server, called a site, forms the top of the hierarchy. The containers one level down from the site are called projects. Different projects might correspond to different teams or investigations. Containers within projects are called folders and subfolders. Projects are essentially central, top-level folders with some extra functionality and importance. Often, projects and folders are referred to together simply as folders.
Projects are the centers of configuration in LabKey Server: settings and objects in a project are generally available in its subfolders through inheritance. Think of separate projects as potentially separate web sites. Many things like user groups and look-and-feel are configured at the project level, and can be inherited at the folder level. A new installation of LabKey Server comes with two pre-configured projects: the Home project and the Shared project. The Home project begins as a relatively empty project with a minimal configuration. The Shared project has a special status: resources in the Shared project are available in the Home project and any other projects and folders you create.
Folders can be thought of as pages on a website, and partly as functional data containers. Folders are containers that partition the accessibility of data records within a project. For example, users might have read & write permissions on data within their own personal folders, no permissions on others' personal folders, and read-only permissions on data in the project-level folder. These permissions will normally apply to all records within a given folder.
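This per-folder partitioning also applies to API access: a client API request targets one container path, and the server enforces that folder's permissions on the request just as it does in the UI. A sketch with a hypothetical folder path and list name:

    // Read rows from a list in one specific folder; the caller must have
    // at least Reader access to that folder for the request to succeed.
    LABKEY.Query.selectRows({
        containerPath: '/Tutorials/Security Tutorial/Lab A',  // hypothetical
        schemaName: 'lists',
        queryName: 'Experiments',                             // hypothetical list
        success: function (data) {
            console.log(data.rows.length + ' rows returned');
        },
        failure: function (error) {
            console.warn(error.exception);
        }
    });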
There are a variety of folder types to apply to projects or folders, each preconfigured to support specific functionality. For example, the study folder type is preconfigured for teams working with longitudinal and cohort studies. The assay folder type is preconfigured for working with instrument-derived data. For a catalog of the different folder types, see Folder Types.
The specific functionality of a folder is determined by the modules it enables. Modules are units of add-on functionality containing a characteristic set of data tables and user interface elements. You can extend the functionality of any base folder type by enabling additional modules. Modules are controlled via the Folder Types tab at (Admin) > Folder > Management.
Project and Folder Menu
The project and folder menu lets users click to navigate among folders. See Navigate the Server for more about the user experience and navigation among projects and folders.
Administrators have additional options along the bottom edge of the menu:
Create New Project: Click the leftmost icon along the bottom of the menu to create a new top level project.
Create New Subfolder: Click the create-folder icon to create a new subfolder of the current location.
Tabs are further subdivisions available in projects or folders. Tabs are used to group together different panels, tools, and functionality. Tabs are sometimes referred to as "dashboards", especially when they contain a collection of tools focused on an individual research task, problem, or set of data.
Web Parts are user interface panels that can be placed on tabs. Each web part provides a different data tool, or way to interact with data in LabKey Server. Examples of web parts are: data grids, assay management panels, data pipeline panels, file repositories for browsing and uploading/downloading files, and many more. For a catalog of the different web parts, see Web Part Inventory.
Applications are created by assembling the building blocks listed above. For example, you can assemble a data dashboard application by adding web parts to a tab providing tools and windows on underlying data. For details see Build User Interface.
A screen shot showing an application built from tabs and web parts.
LabKey Server can be structured in a wide variety of ways to suit individual research needs. This topic will help you decide how to structure your site using the available tools and functional building blocks. For background information on how a LabKey Server site is structured, see Project and Folder Basics.
Consider the following factors when deciding whether to structure your work inside of one project or across many projects.
What is the granularity of permissions you will need?
Will all users who can read information also be authorized to edit it?
Will users want to view data in locations other than where it is stored?
Do you want different branding, look and feel, or color schemes in different parts of your work?
Will you want to be able to share resources between containers?
Do you foresee growth or change in the usage of your server?
Projects and Folders
Should I structure my work inside of one project, or many?
Single Project Strategy. In most cases, one project, with one layer of subfolders underneath is sufficient. Using this pattern, you configure permissions on the subfolders, granting focused access to the outside audience/group using them, while granting broader access to the project as a whole for admins and your team. If you plan to build views that look across data stored in different folders, it is generally best to keep this data in folders under the same project. The "folder filter" option for grid views (see Query Scope: Filter by Folder) lets you show data from child folders as long as they are stored in the same project.
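The same folder filter is available to API clients via the containerFilter option. A sketch with a hypothetical list name:

    // Query a list in the current project, including matching rows from
    // its subfolders -- the API equivalent of the grid's folder filter.
    LABKEY.Query.selectRows({
        schemaName: 'lists',
        queryName: 'Samples',   // hypothetical list
        containerFilter: LABKEY.Query.containerFilter.currentAndSubfolders,
        success: function (data) {
            console.log(data.rows.length + ' rows across folders');
        }
    });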
Multiple Project Strategy. Alternatively, you can set up separate projects for each working group (for example, a lab, a contract, or a specific team). This keeps resources more cleanly partitioned between groups. It will be more complex to query data across all of the projects than it would be if it were all in the same project, but using custom SQL queries, you can still create queries that span multiple projects (for details see Queries Across Folders). You can also use linked schemas to query data in another project (see Linked Schemas and Tables).
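As a sketch of the cross-container approach, LabKey SQL lets a query qualify a table with a container path. The path and table names below are illustrative, and the quoted-path syntax is an assumption; see Queries Across Folders for the exact form supported by your version:

    // Run LabKey SQL that reads from a table in a different project.
    LABKEY.Query.executeSql({
        schemaName: 'lists',
        sql: 'SELECT s.SampleId, s.Volume FROM "/Project B/Lab".lists.Samples s',
        success: function (data) {
            console.log(data.rows);
        },
        failure: function (error) {
            console.warn(error.exception);
        }
    });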
User Interface
If you wish different areas of your site to have distinct looks (colors, logos, etc.), make these areas separate projects. Folders do not have independent settings for look-and-feel.
Avoid using separate folders just for navigation or presenting multiple user pages. Use tabs or wiki pages within one folder if you don't need folder security features.
Shared Resources and Inheritance
Many resources (such as assay designs and database schema) located at the project level are available by inheritance in that project's subfolders, reducing duplication and promoting data standardization.
Use the "Shared" project, a project created by default on all sites, for global, site-wide resources.
Flexibility
Site structure is not written in stone. You can relocate any folder, moving it to a new location in the folder hierarchy, either to another folder or another project. Note that you cannot convert a project into a folder or a folder into a project using the drag-and-drop functionality, but you can use export and re-import to promote a folder into a project or demote a project into a folder.
Use caution when moving folders between projects, as some important aspects of the folder are generally not carried across projects. For example, security configuration and assay data dependent on project-level assay designs are not carried over when moving across projects.
LabKey Server lets you create subfolders of arbitrary depth and complexity. However, deep folder hierarchies tend to be harder to understand and maintain than shallow ones. One or two levels of folders below the project is sufficient for most applications.
Security and Permissions
As a general rule, you should structure permissions around groups, not individual users. This helps ensure that you have consistent and clear security policies. Granting roles (= access levels) to individual users one at a time makes it difficult to get a general picture of which sorts of users have which sorts of access, and makes it difficult to implement larger scale changes to your security policies. Before going live, design and test security configurations by impersonating groups, instead of individual users. Impersonation lets you see LabKey Server through the eyes of different groups, giving you a preview of your security configurations. See the security tutorial for details.
You should decide which groups have which levels of access before you populate those groups with individual users. Working with unpopulated groups gives you a safe way to test your permissions before you go live with your data.
Make as few groups as possible to achieve your security goals. The more groups you have, the more complex your policies will be, which often results in confusing and counter-intuitive results.
By default, folders are configured to inherit the security settings of their parent project. You can override this inheritance to control access to particular content using finer-grained permissions settings for folders within the project. For example, you may set up relatively restrictive security settings on a project as a whole, but selected folders within it may be configured to have less restrictive settings, or vice versa, the project may have relatively open access, but folders within it may be relatively closed and restricted.
Configure LDAP authentication to link with your institutional directory server. Contact LabKey if you need help configuring LDAP or importing users from an existing LDAP system.
Take advantage of nested groups. Groups can contain individual users or other groups in any combination. Use the overarching group to provide shallow, general access; use the child groups to provide deeper, specific access.
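Nesting can also be scripted: group members may be users or other groups. A sketch using LABKEY.Security.addGroupMembers, where the numeric ids are placeholders for the ids of your real groups:

    // Add two child groups to an overarching group.
    LABKEY.Security.addGroupMembers({
        containerPath: '/Tutorials',   // project where the groups live (hypothetical)
        groupId: 1001,                 // placeholder id of the overarching group
        principalIds: [1002, 1003],    // placeholder ids of the child groups
        success: function () {
            console.log('Child groups nested.');
        },
        failure: function (error) {
            console.warn(error.exception);
        }
    });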
Linked Schemas and Tables - Provide access to selected data (schemas), without providing access to the entire folder
LabKey URLs - URLs in LabKey Server reflect paths/locations in the container hierarchy
Shared Project - What resources can be shared site wide from this container
Folder Types
When you create a project or folder, you select a Folder Type. The folder type will determine which Modules are available in each folder by default. Modules form the functional units of LabKey Server and provide task-focused features for storing, processing, sharing and displaying files and data. For more information about the modules available, see Community Edition Modules - Descriptions.
To view the available folder types:
Select (Admin) > Folder > Management.
Click the Folder Type tab.
Folder types appear on the left.
You may see additional folder types depending on the modules available on your server.
Each folder type comes with a characteristic set of activated modules. Modules appear on the right - activated modules have checkmarks.
A Collaboration folder is a container for publishing and exchanging information. Available tools include Message Boards, Issue Trackers and Wikis. Depending on how your project is secured, you can share information within your own group, across groups, or with the public.
A Flow folder manages compensated, gated flow cytometry data and generates dot plots of cell scatters. Perform statistical analysis and create graphs for high-volume, highly standardized flow experiments. Organize, archive and track statistics and keywords for FlowJo experiments.
A folder of type MS2 is provided to manage tandem mass spectrometry analyses using a variety of popular search engines, including Mascot, Sequest, and X!Tandem. Use existing analytic tools like PeptideProphet and ProteinProphet.
Panorama folders are used for all workflows supported by Skyline (SRM-MS, filtering or MS2 based projects). Three configurations are available for managing targeted mass spectrometry data, management of Skyline documents, and quality control of instruments and reagents.
A Study folder manages human and animal studies involving long-term observations at distributed sites, including multiple visits, standardized assays, and participant data collection. Modules are provided to analyze, visualize and share results.
Custom
Create a tab for each LabKey module you select. A legacy feature used in older LabKey installations, provided for backward compatibility. Note that any LabKey module can also be enabled in any folder type via Folder Management. Note that in this legacy folder type, you cannot customize the tabs shown - they will always correspond with the enabled modules.
Create a new project or folder using an existing folder as a template. You can choose which parts of the template folder are copied and whether to include subfolders.
Projects and folders can be organized and customized to help you manage your data and provide all the tools your team needs for effective collaboration. You can customize how users find and interact with your content with web parts. To use the features covered in this topic, you will need an administrator role.
Many site level "look and feel" settings can be customized at the project level by selecting (Admin) > Folder > Project Settings anywhere in the project you want to adjust.
Navigate to the folder you want to view or manage.
Click the icon at the bottom of the folder menu, or select (Admin) > Folder > Management to view the Folder Management page.
Review the tabs available.
Folder Tree
The folder tree view shows the layout of your site in projects and folders. You can Create Subfolders as well as Manage Projects and Folders, including folders other than the one you are currently in. Select a folder to manage by clicking it.
Aliases: Define additional names to use for the selected folder.
Create Subfolder: Create a new subfolder of the selected folder.
Delete: Delete the selected project or folder. You will be asked to confirm the deletion.
Move: Relocate the folder to another location in the tree by changing the parent container. You cannot use this feature to make a folder into a project, or vice versa.
Rename: Change the name or display title of a folder.
Validate: Run validation on the selected node of the folder tree.
Folder Type
The available Folder Types are listed in the left hand panel. Selecting one will determine the availability of Modules, listed in the right hand panel, and thus the availability of web parts. You can only change the type of the folder you are currently in.
Module properties are a way to do simple configuration. A property can be set for the whole site, for a project or any other parent in the hierarchy, or in a subfolder.
Only properties for modules that are enabled in the current folder (as shown under the Folder Type tab) will be listed. Not all modules expose properties.
Insert or update the mapping of Concept URI by container, schema, and query.
Notifications
The administrator can set default Email Notification Settings for events that occur within the folder. These will determine how users will receive email if they do not specify their own email preferences. For more information, see: Manage Email Notifications.
Export
A folder archive is a .folder.zip file or a collection of individual files that conforms to the LabKey folder export conventions and formats. Using export and import, a folder can be moved from one server to another or a new folder can be created using a standard template. For more information, see Export / Import a Folder.
Import
You can import a folder archive to populate a folder with data and configuration. Choose whether to import from a local source, such as an exported archive or existing folder on your server, or to import from a server-accessible archive using the pipeline. Select specific objects to import from an archive. For more information, see Import a Folder.
LabKey Server allows you to upload and process your data files, including flow, proteomics, and study-related files. By default, LabKey stores your files in a standard directory structure. A site administrator can override this location for each folder if desired.
Formats
You can define display and parsing formats as well as customize charting restrictions at the site, project, and folder level.
To customize the settings at the folder level use the (Admin) > Folder > Management > Formats tab.
Formats set here will apply throughout the folder. If they are not set here, the project-level setting will apply, and if not set in the project, the site level settings will apply.
Display formats for dates and numbers can further be overridden on a per column basis if desired using the field editor.
The Information tab contains information about the folder itself, including the container ID (EntityId).
Search
The full-text search feature will search content in all folders where the user has read permissions. Unchecking this box will exclude this folder's content unless the search originates from within the folder. For example, you might exclude archived content or work in progress. For more, see Search Administration.
Projects Web Part
On the home page of your server, there is a Projects web part by default, listing all the projects on the server (except the "home" project). You can add this web part to other pages as needed:
Select (Admin) > Page Admin Mode.
Select Projects from the <Select Web Part> menu in the lower left and click Add.
The default web part shows all projects on the server, but you can change what it displays by selecting Customize from the (triangle) menu. Options include:
Specify a different Title or label for the web part.
Change the display Icon Style:
Details
Medium
Large (Default)
Folders to Display determines what is shown. Options:
All Projects
Subfolders. Show the subfolders of the current project or folder.
Specific Folder. When you choose a specific folder, you can specify a container other than the current project or folder and have two more options:
Include Direct Children Only: unless you check this box, all subfolders of the given folder will be shown.
Include Workbooks: workbooks are lightweight folders.
Hide Create Button can be checked to suppress the create button shown in the web part by default.
Subfolders Web Part
The Subfolders web part is available as a preconfigured projects web part showing the current subfolders. One is included in the default Collaboration folder when you create a new one. This web part shows icons for every current subfolder in a given container, and includes a Create New Subfolder button.
Select (Admin) > Page Admin Mode.
Select Subfolders from the <Select Web Part> menu in the lower left and click Add.
You can customize it if needed in the same way as the Projects web part described above.
Projects and folders are used to organize workspaces in LabKey Server. They give a "container" structure for assigning permissions, sharing resources, and can even be used to present different colors or branding to users on a server. To create a new project or folder, you must have administrative privileges.
Create a New Project
To create a new project, click the Create Project icon at the bottom of the project and folder menu and complete the creation wizard. You will then be on the home page of your new project.
Create a New Folder / Create Subfolder
To add a folder, navigate to the location where you want to create a subfolder.
Use the Create Folder icon at the bottom of the project and folder menu.
Provide a Name. By default, the name will also be the folder title.
If you would like to specify an alternate title, uncheck the Use name as title box and a new box for entering the title will appear.
Select a Folder Type. If not otherwise specified, the default "Collaboration" type provides basic tools.
Select how to determine initial Users/Permissions in the new folder. You can change these later if necessary:
Inherit from parent folder (or project).
My User Only.
Click Finish.
If you want to immediately configure permissions you can also click Finish and Configure Permissions.
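Folder creation can also be scripted with the JavaScript client API. A minimal sketch; the parent path, name, and folder type are examples, and the response shape may vary by version:

    // Create a new Collaboration subfolder under the given parent container.
    LABKEY.Security.createContainer({
        containerPath: '/Tutorials',    // parent project or folder (hypothetical)
        name: 'Security Tutorial',
        folderType: 'Collaboration',
        success: function (container) {
            console.log(JSON.stringify(container, null, 2));
        },
        failure: function (error) {
            console.warn(error.exception);
        }
    });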
Create a Folder from a Template
You can create a new folder using an existing folder as a template, selecting which objects to copy to the new folder. For example, you could set up one folder with all the necessary web parts, tabs, and explanatory wikis, then create many working folders as "clones" of the template.
As Folder Type, select the radio button Create from Template Folder.
From the Choose Template Folder pulldown menu, select an existing folder to use as a template.
Use checkboxes to select the Folder objects to copy from the template and whether to include subfolders. Note that when using a study folder as a template, the dataset data is not eligible to be copied to the new folder. To copy dataset data, use the study import process.
Click Next and complete the rest of the wizard as above.
This topic covers how to move, delete, and rename projects and folders. Naming, casing, and creating hidden projects and folders are also covered. All of these options are available on the folder management page, found via (Admin) > Folder > Management.
A folder can be moved within a project, or from one folder to another. Remember that folders under a common parent are peers; while both actions "move" the folder, moving it among its peers only changes the display order on menus and in the folder management tree.
Select (Admin) > Folder > Management.
On the Folder Tree tab (open by default), select the folder to move, then drag and drop it into another location in the folder tree. Hover messages will indicate what action you can expect.
You will be asked to click to confirm the action.
Considerations
When moving a folder from one project to another, there are a few additional considerations:
If your folder inherits configuration or permissions settings from the parent project, be sure to confirm that inherited settings are as intended after the move to the new parent. An alternative is to export and re-import the folder which gives you the option to retain project groups and role assignments. For details, see Export and Import Permission Settings.
If the folder uses assay designs or sample types defined at the project level, it will no longer have access to them after the move.
Because a project is a top level folder that is created with different settings and options than an ordinary folder, you cannot promote a folder to be a project.
Since a project is a top-level folder, you can not "move" it into another project or folder, and drag and drop reordering is not supported.
Change Project Display Order
By default, projects are listed on the project and folder menu in alphabetical order. To use a custom order instead:
On the Folder Tree tab, select any project.
Click Change Display Order.
Click the radio button for Use custom project order.
Select any project and click Move Up or Move Down.
Click Save when finished.
You can also reorder folders within a project using this interface by first selecting a folder instead of a project, then clicking Change Display Order.
Delete a Folder or Project
On the Folder Tree tab, select the folder or project to delete.
Click Delete.
You will see a list of the folder and subfolder contents to review. Carefully review the details of which containers and contents you are about to delete as this action cannot be undone.
Confirm the deletion by clicking Yes, Delete All.
Rename a Folder or Project
You can change the project or folder name or the folder title. The folder name determines the URL path to resources in the folder, so changing the name can break resources that depend on the URL path, such as reports and hyperlinks. If you need to change the folder name, we recommend leaving Alias current name checked to avoid breaking links into the folder, with the two exceptions below:
Renaming a project/folder will break any queries and reports that specifically point to the old project/folder name. For example, a query created by the LABKEY.Query API that specifically refers to the old folder in the container path will be broken, even when aliasing is turned on. Such queries and reports that refer to the old container path will need to be fixed manually (see the sketch after this list).
Renaming a folder will move the @files under the file root to the new path. This means that any links via _webdav to the old folder name will no longer find the file. To avoid broken links, refer to files by paths relative to the container. Full URL links from the base URL to the old name will also resolve correctly, as long as they do not involve the @files fileroot which has now been moved.
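A sketch of the difference, using hypothetical paths and a hypothetical list: hard-coding the container path ties the query to the old name, while omitting containerPath makes the call target whatever container the page is served from:

    // Fragile: breaks if the "Demo" folder is renamed.
    LABKEY.Query.selectRows({
        containerPath: '/Tutorials/Demo',   // hard-coded old path
        schemaName: 'lists',
        queryName: 'Samples',               // hypothetical list
        success: function (data) { console.log(data.rows.length); }
    });

    // More robust: no containerPath, so the request targets the current
    // container regardless of what the folder is currently named.
    LABKEY.Query.selectRows({
        schemaName: 'lists',
        queryName: 'Samples',
        success: function (data) { console.log(data.rows.length); }
    });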
As an alternative to changing the folder name, you can change the title displayed by the folder in page headings. Only page headings are affected by a title change. Navigation menus show the folder name and are unaffected by a folder title change.
On the Folder Tree tab, select a folder or project.
Click Rename.
To change the folder name, enter a new value under Folder Name, and click Save.
To change the folder title, uncheck Same as Name, enter a new value under Folder Title, and click Save.
If you want to ensure links to the current name will still work, check the box to Alias current name.
Changing Folder Name Case
Suppose you want to rename the "Demo" folder to the "demo" folder. To change capitalization, rename the folder in two steps to avoid a naming collision, for example, "Demo" to "foo", then "foo" to "demo".
Hidden Projects or Folders
Hidden folders can help admins hide work in progress or admin-only materials to avoid overwhelming end-users with material that they do not need to see.
Folders and projects whose names begin with "." or "_" are automatically hidden from non-admins in the navigation tree. The folder will still be visible in the navigation tree if it has non-hidden subfolders (i.e., folders where the user has read permissions). To hide subfolders of a hidden folder, the admin could prefix the names of these subfolders with a dot or underscore as well.
Hiding a folder only affects its visibility in the navigation tree, not permissions to the folder. If a user is linked to the folder or enters the URL directly, the user will be able to see and use the folder based on permission settings regardless of the "hidden" nature.
Each folder type has a characteristic set of "modules" enabled by default. Each enabled module provides functionality to the folder: the assay module provides functionality related to experimental data, the study module provides data-integration functionality, etc. You can expand the functionality of a folder by enabling other modules beyond the default set.
You can export and import the contents of a folder, both data and configuration elements, in a folder archive format. A folder archive is a .folder.zip file or a collection of individual files that conform to LabKey folder description conventions and formats. In most cases, a folder archive is created via export from the UI. You can also populate a new folder from a template folder on the current server using the "Create Folder From Template" option on the folder creation page.
Folder archives are useful for migrating work from one server to another, or for populating one or more new folders with all or part of the contents of the archive. It is important to note that you can only go "forward" to new versions. You cannot import an archive from a newer (higher) version of LabKey Server into an older (lower) version.
This topic helps you understand the options available for folder archive export/import. A few common usage scenarios:
Create a folder template for standardizing structure.
Transfer a folder from a staging / testing environment to a production platform.
Export a selected subset of a folder, such as excluding columns tagged as containing protected health information to enable sharing of results without compromising PHI.
You can choose to include the datasets, views, and reports, as well as much of the original folder's configuration. See the Folder objects to export section below for items that can be included in the folder archive.
Not all folder types are supported for export and re-importation into another folder or server. For example, Biologics and Sample Manager folder types are not supported for migration in this manner. Make sure to test re-importation into a safe folder as a precautionary step.
Export
To export a folder, go to (Admin) > Folder > Management and click the Export tab.
Select the objects to export.
You can use the Clear All Objects button to clear all the selections (making it easier to select a minimal set manually).
You can use Reset to restore the default set of selections, i.e., the most typically exported objects (making it easier to start from a larger set).
Choose the options required.
Select how/where to export the archive file(s).
Click Export.
There are additional options available when the folder is a study. For more about exporting and importing a study, see Export/Import/Reload a Study.
Folder objects to export
This is the complete list of objects that can be exported to a folder archive. When each of the following options is checked, the specified content will be included in the archive. Some options will only be shown when relevant content is present.
Folder type and active modules: The options set on the "Folder Type" tab.
Missing value indicators: The settings on the "Missing Values" tab.
Full-text search settings: The settings on the "Search" tab.
Webpart properties and layout: Page layouts and settings.
Sample Status and QC State Settings: Exports QC state definitions including custom states, descriptions, default states for the different import pathways and the default blank QC state.
Learn more about sample status export/import in this topic.
Notification settings: The settings on the "Notifications" tab.
Queries: Shared queries. Note that private queries are not included in the archive.
Grid Views: Shared grid views. Note that private grid views are not included in the archive.
Reports and Charts: Shared reports and charts. Private ones are not included in the archive.
Categories: This option exports the Categories for report and dataset grouping.
External schema definitions: External schemas defined via the Schema Browser.
Experiments, Protocols, and Runs: Check to include experiment data including samples, assay designs and data, and job templates, if any are defined. Configured assay QC states are exported, but assignments of QC states to assay runs are not included.
Sample Types and Data Classes: Include definitions for these entities. If a Sample Type has a setting for "Auto-Link Data to Study", this setting will be included in the export.
Study (Only present in study folders): Learn more in this topic: Export a Study.
Panorama QC Folder Settings (Only present in Panorama QC folders): Learn more in this topic: Panorama QC Folders.
Select Export Options
Whether to Include Subfolders in your archive is optional; this option is only presented when subfolders exist.
You can also choose to exclude protected health information (PHI) at different levels. This exclusion applies to all dataset and list columns, study properties, and specimen data columns that have been tagged at a specific PHI level. By default, all data is included in the exported folder.
Include PHI Columns:
Uncheck to exclude all columns marked as containing any level of PHI.
Check to include some or all PHI, then select the level(s) of PHI to include.
Under Export to:, select the export destination:
Pipeline root export directory, as individual files.
Pipeline root export directory, as a zip file.
Browser as a zip file.
You can place more than one folder archive in a directory if you give them different names.
Character Encoding
Study, list, and folder archives are all written using UTF-8 character encoding for text files. Imported archives are parsed as UTF-8.
Items Not Included in Archives
The objects listed above in the "Folder objects to export" section are the only items that are exported to a folder archive. Here are some examples of LabKey objects that are not included when exporting a folder archive:
Assay Definitions without Runs: An assay definition will only be included in a folder export when at least one run has been imported to the folder using it.
File Properties: The definitions and property values will not be included in the folder archive.
Issue Trackers: Issue tracker definitions and any issue data will not be included in the folder archive.
Messages: Message settings and message data will not be included in the folder archive.
Project Settings and Resources: Custom CSS file, logo images, and other project-level settings are not included in the folder archive.
Query Metadata XML: Metadata XML applied to queries in the Schema Browser is not exported to the folder archive. Metadata applied to fields is exported to the folder archive.
File Watchers: File watcher configurations will not be included in the folder archive.
When migrating a folder into another container using an archive, the items above must be migrated manually.
Import
When you import a folder archive, a new subfolder is not created. Instead the configuration and contents are imported into the current folder, so be sure not to import into the parent folder of your intended location.
To create the imported folder as a subfolder, first create a new empty folder, navigate to it, then import the archive there.
To import a folder archive, go to (Admin) > Folder > Management and click the Import tab.
You can import from your local machine or from a server accessible location.
Import Folder From Local Source
Local zip archive: check this option, then Browse or Choose an exported folder archive to import.
Existing folder: select this option to bypass the step of exporting to an archive and directly import selected objects from an existing folder on the server. Note that this option does not support the import of specimen or dataset data from a study folder.
Both import options offer two further selections:
Validate All Queries After Import: Selected by default. When checked, queries are validated upon import, and any validation failure causes the import job to raise an error. If you are using the check-for-reload action in the custom API, there is a suppress query validation parameter that can be used to achieve the same effect as unchecking this box. During import, any error messages generated are noted in the import log file for easy analysis of potential issues.
Show Advanced Import Options: When this option is checked, after clicking Import Folder, you will have the further opportunity to:
Select specific objects to import
Apply the import to multiple folders
If the folder contains a study, you will have an additional option:
Fail import for undefined visits: when you import a study archive, you can elect to cancel the import if any imported dataset or specimen data belongs to a visit not already defined in the destination study or the visit map included in the imported archive. Otherwise, new visits would be automatically created.
Select Specific Objects to Import
By default, all objects and settings from an import archive will be included. For import from a template folder, all except dataset data and specimen data will be included. If you would like to import a subset instead, check the box to Select specific objects to import. You will see the full list of folder archive objects (similar to those you saw in the export options above); use checkboxes to select which objects to import. Objects not available in the archive or template folder will be disabled and shown in gray for clarity.
This option is particularly helpful if you want to use an existing archive or folder as a structural or procedural template when you create a new empty container for new research.
By default, the imported archive is applied only to the current folder. If you would like to apply this imported archive to multiple folders, check Apply to multiple folders and you will see the list of all folders in the project. Use checkboxes to select all the folders to which you want the imported archive applied.
Note that if your archive includes subfolders, they will not be applied when multiple folders are selected for the import.
This option is useful when you want to generate a large number of folders with the same objects, and in conjunction with the selection of a subset of folder options above, you can control which objects are applied. For instance, if a change in one study needs to be propagated to a large number of other active studies, this mechanism can allow you to propagate that change. The option "Selecting parent folders selects all children" can make it easier to use a template archive for a large number of child folders.
When you import into multiple folders, a separate pipeline job is started for each selected container.
Import Folder from Server-Accessible Archive
Click Use Pipeline to select the server-accessible archive to import.
File Watchers: (Premium Feature) Automate various import actions including reloading a folder from an unzipped archive.
Export and Import Permission Settings
You can propagate security configurations from one environment to another by exporting them from their original environment as part of a folder archive, and importing them to a new one. For example, you can configure and test permissions in a staging environment and then propagate those settings to a production environment in a quick, reliable way.
You can export the following aspects of a security configuration:
Project groups and their members, both user members and subgroup members (for project exports only)
Role assignments to individual users and groups (for folder and project exports)
Export Folder Permissions
To export the role assignments for a given folder:
Navigate to the folder you wish to export.
Select (Admin) > Folder > Management.
Click the Export tab.
Place a checkmark next to Role assignments for users and groups.
Review the other exportable options for your folder -- for details on the options see Export / Import a Folder.
Click Export.
Export Project Permissions
To export the configuration for a given project:
Navigate to the folder you wish to export.
Select (Admin) > Folder > Management.
Click the Export tab.
Select one or both of the options below:
Project-level groups and members (This will export your project-level groups, the user memberships in those groups, and the group to group membership relationships).
Role assignments for users and groups
Review the other exportable options for your folder -- for details on the options see Export / Import a Folder.
Click Export.
Importing Groups and Members
Follow the folder or project import process, choosing the archive and using Show advanced import options, then Select specific objects to import to confirm that the object Project-level groups and members is selected.
When groups and their members are imported, they are created or updated according to the following rules:
Groups and their members are created and updated only when importing into a project (not a folder).
If a group with the same name exists in the target project, its membership is completely replaced by the members listed in the archive.
Members are added to groups only if they exist in the target system. Users listed as group members must already exist as users in the target server (matching by email address). Member subgroups must be included in the archive or already exist on the target (matching by group name).
Importing Role Assignments
Follow the folder import process, choosing the archive and using Show advanced import options, then Select specific objects to import to confirm that the object Role assignments for users and groups is selected.
When role assignments are imported, they are created according to the following rules:
Role assignments are created when importing to projects and folders.
Role assignments are created only if the role and the assignee (user or group) both exist on the target system. A role might not be available in the target if the module that defines it isn't installed or isn't enabled in the target folder.
When the import process encounters users or groups that can't be found in the target system it will continue importing, but it will log warnings to alert administrators.
Administrators can set default Email Notification Settings for some events that occur within the folder, such as file creation, deletion, and message postings. When a folder default is set for a given type of notification, it will apply to any users who do not specify their own email preferences. Some types of notifications are available in digest form.
Note that this does not cover all email notifications users may receive. Report events, such as changes to report content or metadata, are not controlled via this interface. Learn more in this topic: Manage Study Notifications. Other tools, like the issue tracker and assay request mechanisms, can also trigger email notifications not covered by these folder defaults.
Note: Deactivated users and users who have been invited to create an account but have never logged in are never sent email notifications by the server. This is true for all of the email notification mechanisms provided by LabKey Server.
Open Folder Notification Settings
Navigate to the folder.
Select (Admin) > Folder > Management.
Click the Notifications tab.
Folder Default Settings
You can change the default settings for email notifications using the pulldown menus and clicking Update.
For Files
There is no folder default setting for file notifications. An admin can select one of the following options:
No Email: Emails are never sent for file events.
15 minute digest: An email digest of file events is sent every 15 minutes.
Daily digest: An email digest of file events is sent every 24 hours at 12:05am.
For Messages
The default folder setting for message notifications is My conversations. An admin can select from the following options:
No Email: Notifications are never sent when messages are posted.
All conversations: Email is sent for each message posted to the message board.
My conversations: (Default). Email is sent for each conversation the user has participated in (started or replied to).
Daily digest of all conversations: An email digest is sent for all conversations every 24 hours at 12:05am.
Daily digest of my conversations: An email digest is sent including updates for conversations the user participated in. This digest is also sent every 24 hours at 12:05am.
For Sample Manager (Premium Feature)
The Sample Manager application offers workflow management tools that provide notification services. You will only see this option on servers where the sample management module is available.
Sample Manager notifications are generally managed through the application interface and there is no folder default setting. If desired, a server admin can select one of the following options. Each user can always override this setting themselves.
No email: No email will be sent.
All emails: All users will receive email notifications triggered by Sample Manager workflow jobs that are assigned to them or for which they are on the notification list.
Below the folder defaults, the User Settings section includes a table of all users with at least read access to this folder who are eligible to receive notifications by email for message boards and file content events. The current file and message settings for each user are displayed in this table. To edit user notification settings:
Select one or more users using the checkboxes.
Click Update User Settings.
Select For files, For messages, or For samplemanager, depending on which setting you want to change.
In the popup, choose the desired setting from the pulldown, which includes an option to reset the selected users to the folder default setting.
Click Update Settings for X Users. (X is the number of users you selected).
Visitors will be presented with the terms of use page before they can proceed to the content: a page containing a checkbox and any text you have included. The user must select the checkbox and press the submit button before proceeding. If a login is required, they will also be prompted to log in at this point.
Example: _termsOfUse Page
Note: the terms of use mechanism described here does not support user assertions of IRB number or intended activity, nor dynamically constructed terms of use. For these more flexible features, more appropriate to a compliant environment, see Compliance: Terms of Use.
Project Specific Terms of Use
To add a terms of use page scoped to a particular project, create a wiki page at the project-level with the name _termsOfUse (note the underscore). If necessary, you can link to larger documents, such as other wiki pages or attached files, from this page.
To add a project-scoped terms of use page:
Add a wiki page. If you do not see the Wiki web part in the project, enter (Admin) > Page Admin Mode, then add one using the Select Web Part drop down at the bottom of the page. You can remove the web part after adding the page.
Add the _termsOfUse page. Note that this special page can only be viewed or modified within the wiki by a project administrator or a site administrator.
In the Wiki web part, open the triangle menu and select New.
Name the new page _termsOfUse
Text provided in the Title and Body fields will be shown to the user.
To remove the terms of use restriction later, delete the _termsOfUse wiki page from the project.
Site-Wide Terms of Use
A "site-wide" terms of use requires users to agree to terms whenever they attempt to login to any project on the server. When both site-scoped and project-scoped terms of use are present, then the project-scoped terms will override the site-scoped terms, i.e., only the project-scoped terms will be presented to the user, while the site-scoped terms will be skipped.
To add a site-wide terms of use page:
Select (Admin) > Site > Admin Console.
In the Management section, click Site-wide Terms of Use.
You will be taken to the New Page wizard:
Notice the Name of the page is prepopulated with the value "_termsOfUse" -- do not change this.
Add a value for the Title which will appear in the panel above your text.
Add HTML content to the page, using either the Visual or Source tabs. (You can convert this page to a wiki-based page if you wish.) Explain to users what is required of them to utilize this site.
Click Save and Close.
The terms of use page will go into effect after saving the page.
Note: If the text of the terms of use changes after a user has already logged in and accepted the terms, the user will not be required to accept the revised terms again.
To turn off a site-wide terms of use, delete the _termsOfUse page as follows:
Select (Admin) > Site > Admin Console.
In the Management section, click Site-wide Terms of Use.
Click Delete Page.
Confirm the deletion by clicking Delete.
The terms of use page will no longer be shown to users upon entering the site.
Workbooks provide a simple, lightweight container for small-scale units of work -- the sort of work that is often stored in an electronic lab notebook (ELN). They are especially useful when you need to manage a large number of data files, each of which may be relatively small on its own. For instance, a lab might store results and notes for each experiment in a separate workbook. Key attributes of workbooks include:
Searchable with full-text search.
A light-weight folder alternative, workbooks do not appear in the folder tree; instead, they are displayed in the Workbooks web part.
Some per-folder administrative options are not available, such as enabling modules, setting missing value indicators, or configuring security. All of these settings are inherited from the parent folder.
Lists and assay designs stored in the parent folder/project are visible in workbooks. A list may also be scoped to a single workbook.
Create a Workbook
Workbooks are an alternative to folders, added through the Workbooks web part. In addition to the name you give a workbook, it will be assigned an ID number.
Enter (Admin) > Page Admin Mode.
Select Workbooks from the <Select Web Part> drop-down menu, and click Add.
Click Exit Admin Mode.
To create a new workbook, click (Insert New Row) in the new web part.
Specify a workbook Title. The default is your username and the current date.
A Description is optional.
You can leave the Type selector blank for the default, or select a more specific type of workbook depending on the modules available on your server.
Click Create Workbook.
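Workbooks can also be created programmatically. Here is a minimal sketch using the LabKey JavaScript API, assuming the LABKEY client library is loaded; the title and description values are hypothetical placeholders. Because workbooks are child containers, the server assigns the ID number for you.

    LABKEY.Security.createContainer({
        isWorkbook: true,
        title: 'EGFR binding experiment',   // hypothetical title
        description: 'Results and notes',   // optional
        success: function (container) {
            // For workbooks, the container name is the assigned ID number
            console.log('Created workbook: ' + container.name);
        },
        failure: function (error) {
            console.error('Creation failed: ' + error.exception);
        }
    });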
Default Workbook
The default workbook includes the Experiment Runs and Files web parts for managing files and data. The workbook is assigned a number for easy reference, and you can edit both title and description by clicking the pencil icons.
Some custom modules include other types of workbooks with other default web parts.
If additional types are available on your server, you will see a dropdown to select a type when you create a new workbook and the home page may include more editable information and/or web parts.
Navigating Experiments and Workbooks
Since workbooks are not folders, you don't use the folder menu to navigate among them. From the Workbooks web part on the main folder page, you can click the Title of any workbook or experiment to open it. From within a workbook, click its name in the navigation trail to return to the main workbook page.
In an Experiment Workbook, the experiment history of changes includes navigation links to reach experiments by number or name. The experiments toolbar also includes an All Experiments button for returning to the list of all experiments in the containing folder.
Display Workbooks from Subfolders
By default, only the workbooks in the current folder are shown in the Workbooks web part. If you want to roll up a summary including the workbooks that exist in subfolders, select (Grid Views) > Folder Filter > Current folder and subfolders.
You can also show All folders on the server if desired. Note that you will only see workbooks that you have read access to.
List Visibility in Workbooks
Lists defined within a workbook are scoped to the single workbook container and not visible in the parent folder or other workbooks. However, lists defined in the parent folder of a workbook are also available within the workbook, making it possible to have a set of workbooks share a common list if they share a common parent folder. Note that workbooks in subfolders of that parent will not be able to share the list, though they may be displayed in the parent's workbooks web part.
In a workbook, rows can be added to a list defined in the parent folder. From within the workbook, you can only see rows belonging to that workbook. From the parent folder, all rows are visible, including those from all child workbooks. Rows are associated with their container, so by customizing a grid view at the parent level to display the Folder fields, it is possible to determine the workbook or folder to which each row belongs.
The URL for a list item in the parent folder will point to the row in the parent folder even for workbook rows.
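The same behavior is visible through the API. Below is a minimal sketch, assuming the LABKEY JavaScript API and a list named "Reagents" (hypothetical) defined in the parent folder; run from the parent folder, the container filter pulls in rows contributed by workbook children, and the Folder column shows which container each row belongs to.

    LABKEY.Query.selectRows({
        schemaName: 'lists',
        queryName: 'Reagents',            // hypothetical list name
        columns: 'Name,Folder/Name',      // include the container column
        containerFilter: LABKEY.Query.containerFilter.currentAndSubfolders,
        success: function (data) {
            data.rows.forEach(function (row) {
                console.log(row.Name + ' belongs to ' + row['Folder/Name']);
            });
        }
    });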
Other Workbook Types
Some custom modules add the option to select from a dropdown of specialized types when you create a workbook. Options, if available, may include:
Assay Test Workbook
File Test Workbook
Experiment (Expt) Workbook
Study Workbook
Default Workbook
Note that you cannot change the type of a workbook after creating it.
Experiment (Expt) Workbook
The experiment workbook includes:
Workbook Header listing Materials, Methods, Results, and Tags in addition to a Description.
Workbook Summary wiki.
Pipeline Files
Messages
Lab Tools for importing samples and browsing data.
Shared Project
The Shared project is pre-configured as part of the default LabKey Server installation. Resources to be shared site-wide, such as assay designs, can be placed here to be made available in the other projects or folders you create. Resources shared within subfolders of a single project can generally be shared by placing them at the local project level.
Changing the folder type of the Shared project, although allowed by the user interface, can produce unexpected behavior and is not supported.
Wikis are generally available site-wide, depending on permissions. From any container, you can see and use wikis from any other container you have permission to read. A permission-restrictive project can choose to make wikis available site-wide by defining them in the Shared project and accessing them from there.
Some features include definitions from the Shared project when providing user selection options. In those cases there is no hierarchy; the user can select equally among any of the options offered.
For other features, the search for a name match proceeds as follows:
First the current folder is searched for a matching name. If no match:
Look in the parent of the current folder, and if no match there, continue to search up the folder tree to the project level, or wherever the feature itself is defined. If there is still no match:
Look in the Shared project (but not in any of its subfolders).
The topics in this section help administrators learn how to create and configure user interface elements to form data dashboards and web portal pages. Flexible tools and design options let you customize user interfaces to suit a variety of needs.
Add Web Parts - Web parts are user interface panels that you can add to a folder/project page. Each type of web part provides some way for users to interact with your application and data. For example, the "Files" web part provides access to any files in your repository.
Manage Web Parts - Set properties and permissions for a web part.
Projects and Folders - A project or folder provides the container for your application. You typically develop an application by adding web parts and other functionality to an empty folder or project.
JavaScript API - Use the JavaScript API for more flexibility and functionality.
Module-based apps - Modules let you build applications using JavaScript, Java, and more.
Premium Resource: Custom Home Page Examples
This is a premium resource, available only with Premium Editions of LabKey Server. In addition to enhanced documentation resources and example code, Premium Edition subscribers have access to additional features and receive professional support to help them maximize the value of the platform. Learn more about Premium Editions.
Administrators can enter Page Admin Mode to expose the tools they need to make changes to the contents of a page, including web parts and wikis. These tools are hidden to non-admin users, and are also not visible to admins unless they are in this mode.
Many web part menus also have additional options in Page Admin Mode that are otherwise hidden.
Page Admin Mode
Select (Admin) > Page Admin Mode to enter it.
You will see an Exit Admin Mode button in the header bar when in this mode.
To exit this mode, click the button, or select (Admin) > Exit Admin Mode.
Web Parts
Add Web Parts
In page admin mode, you can add new web parts using the selectors at the bottom of the page. The main panel (wider) web parts can be added using the left hand selector. Narrow style web parts, such as table of contents, can be added using the selector on the right. Note that at narrow browser widths, the "narrow" web parts are displayed below (and as wide as) the main panel web parts.
Customize Web Parts
The (triangle) menu for a web part contains these additional options in Page Admin Mode:
Permissions: Control what permissions are required for a user to see this web part.
Move Up: Relocate the web part on the page.
Move Down: Relocate the web part on the page.
Remove From Page: Remove the web part; does not necessarily remove underlying content.
The outline, header bar, and menu can be removed from a web part by selecting Hide Frame from the (triangle) menu. Shown below are three views of the same small "Projects" web part: in admin mode both with and without the frame, then frameless as a non-admin user would see it.
Notice that frameless web parts still show the web part title and pulldown menu in page admin mode, but that these are removed outside of admin mode.
Frameless web parts make it possible to take advantage of Bootstrap UI features such as jumbotrons and carousels.
Tabs
When there is only one tab, it will not be shown in the header, since the user will always be "on" this tab.
When there are multiple tabs, they are displayed across the header bar with the current tab in a highlight color (may vary per color theme).
On narrower screens or browser windows, or when custom menus use more of the header bar, multiple tabs will be displayed on a pulldown menu from the current active tab. Click to navigate to another tab.
Tabs in Page Admin Mode
To see and edit tabs in any container, an administrator enters (Admin) > Page Admin Mode.
To add a new tab, click the + mini tab on the right.
To edit a current tab, pulldown the (triangle) menu:
Hide: Do not display this tab in non-admin mode. It will still be shown in page admin mode, marked with an icon indicating it is hidden.
Delete: Delete this tab. Does not delete the underlying contents.
Move: Opens a submenu letting you move the tab left or right.
Once you've created a project or folder, you can begin building dashboards from panels called Web Parts. Web Parts are the basic tools of the server -- they surface all of its functionality, including analysis and search tools, windows onto your data, queries, any reports built on your data, etc. The list of web parts available depends on which modules are enabled in a given folder/project.
There are two display regions for web parts, each offering a different set: the main, wider panel on the left (where you are reading this wiki) and a narrower right-hand column (on this page, containing search, feedback, and a table of contents). Some web parts, like Search, can be added in either place.
Add a Web Part
Navigate to the location where you want the web part.
Enter (Admin) > Page Admin Mode to show the web part selectors.
Choose the desired web part from the <Select Web Part> drop down box and click Add.
Note: if both selectors are stacked on the right, make your browser slightly wider to show them on separate sides.
The web part you selected will be added below existing web parts. Use the (triangle) menu to move it up the page, or make other customizations.
Click Exit Admin Mode in the upper right to hide the editing tools and see how your page will look to users.
Note: If you want to add a web part that does not appear in the drop down box, choose (Admin) > Folder > Management > Folder Type to view or change the folder type and set of modules enabled.
Anchor Tags for Web Parts
You can create a URL or link to a specific web part by referencing its "anchor". The anchor is the name of the web part. For example, a page in the File Management Tutorial example has several web parts.
The following URL will navigate the user directly to the "Prelim Lab Results" query web part displayed at the bottom of the page. Notice that spaces in names are replaced with "%20" in the URL.
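A hypothetical example (the server name, project, and folder path are placeholders; only the anchor syntax, the web part name appended after #, is the point):

    https://myserver.example.com/labkey/project/Tutorials/File%20Management/begin.view?#Prelim%20Lab%20Results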
Web Parts are user interface panels -- they surface all of the functionality of the server including: analysis and search tools, windows onto your data, queries, any reports built on your data, etc. This topic describes how page administrators can manage them.
Each web part has a (triangle) pulldown menu in the upper right.
An expanded set of options is available when the user is in page administrator mode. If the web part is frameless, the (triangle) menu is only available in page admin mode.
Web Part Controls
The particular control options available vary by type of web part, and visibility depends on the user's role and editing mode. For example, Wiki web parts have a set of options available to editors for quick access to wiki editing features.
When the user does not have any options for customizing a given web part, the menu is not shown.
Page Admin Mode Options
Administrators have additional menu options, usually including a Customize option for changing attributes of the web part itself, such as the name, or in some cases selecting among small/medium/large display options.
To access more options for web parts, administrators select (Admin) > Page Admin Mode.
Permissions: Configure web parts to be displayed only when the user has some required role or permission. For details see Web Parts: Permissions Required to View.
Move Up/ Move Down: Adjust the location of the web part on the page.
Remove From Page: This option removes the web part from the page, but not the underlying data or other content.
Hide Frame: Hide the frame of the web part, including the title, header bar, and outline, when not in page admin mode. For details, see Frameless Web Parts.
The following tables list the available Web Parts, or user interface panels. There is a link to more documentation about each one. The set of web parts available depends on the modules installed on your server and enabled in your project or folder. Learn more about adding and managing web parts in this topic: Add Web Parts.
Capture complex lineage and derivation information, especially when those derivations include bio-engineering systems such as gene-transfected cells and expression systems.
Using tabs within a project or folder can essentially give you new "pages" to help better organize the functionality you need. You can provide different web parts on different tabs to provide tools for specific roles and groups of activities.
Some folder types, such as study, come with specific tabs already defined. Click the tab to activate it and show the content.
With administrative permissions, you can also enter page admin mode to add and modify tabs to suit your needs.
Tab Editing
Tabs are created and edited using (Admin) > Page Admin Mode. When enabled, each tab will have a triangle pulldown menu for editing, and there will also be a + mini tab for adding new tabs.
Hide: Hide tabs from users who are not administrators. Administrators can still access the tab. In page admin mode, a hidden tab is marked with an icon indicating it is hidden. Hiding a tab does not delete or change any content. You could use this feature to develop a new dashboard before exposing it to your users.
Move: Click to open a sub menu with "Left" and "Right" options.
Rename: Change the display name of the tab.
Delete: Tabs you have added will show this option and may be deleted. You cannot delete tabs built into the folder type.
Add a Tab
Enter (Admin) > Page Admin Mode.
Click the + mini tab on the right.
Provide a name and click OK.
Default Display Tab
As a general rule when multiple tabs are present, the leftmost tab is "active" and displayed in the foreground by default when a user first navigates to the folder. Exceptions to this rule are the "Overview" tab in a study folder and the single pre-configured tab created by default in most folder types, such as the "Start Page" in a collaboration folder. To override these behaviors:
"Overview" - When present, the Overview tab is always displayed first, regardless of its position in the tab series. To override this default behavior, an administrator can hide the "Overview" tab and place the intended default in the leftmost position.
"Start Page"/"Assay Dashboard" - Similar behavior is followed for this pre-configured tab that is created with each new folder, for example, "Start Page" for Collaboration folders and "Assay Dashboard" for Assay folders. If multiple tabs are defined, this single pre-configured tab, will always take display precedence over other tabs added regardless of its position left to right. To override this default behavior, hide the pre-configured tab and place whichever tab you want to be displayed by default in the leftmost position.
Note that if only the single preconfigured "Start Page" or "Assay Dashboard" tab is present, it will not be displayed at all, as the user is always "on" this tab.
Custom Tabbed Folders
Developers can create custom folder types, including tabbed folders. For more information, see Modules: Folder Types.
Add Custom Menus
An administrator can add custom menus to offer quick pulldown access to commonly used tools and pages from anywhere within a project. Custom menus will appear in the top bar of every page in the project, just to the right of the project and folder menus. For example, the LabKey Server Documentation is itself part of a project featuring custom menus.
From the project home page, select (Admin) > Folder > Project Settings.
Click the Menu Bar tab.
There are several types of menu web part available. Each web part you add to this page will be available on the header bar throughout the project, ordered left to right as they are listed top to bottom on this page.
The Custom Menu type offers two options, the simplest being to create a menu of folders.
Folders Menu
On the Menu Bar tab, add a Custom Menu web part.
The default Title is "My Menu"; edit it to better reflect your content.
Select the radio button for Folders.
Elect whether to include descendants (subfolders) and whether to limit the choice of root folder to the current project. If you do not check the latter box, you can build the menu from folders anywhere on the site.
Select the Root Folder. Children of that root will be listed as the menu. The root folder itself will not be presented on the menu, so you might use its name as the title of the menu.
The Folder Types pulldown allows you to limit the type of folder on the menu if desired. For example, you might want a menu of workbooks or of flow folders. Select [all] or leave the field blank to show all folders.
Click Submit to save your new menu.
You will now see it next to the project menu, and there will be a matching web part on the Menu Bar tab of Project Settings. The links navigate the user to the named subfolder.
Continue to add a web part on this page for each custom menu you would like. Click Refresh Menu Bar to apply changes without leaving this page.
List or Query Menu
The other option on the creation page for the Custom Menu web part is "Create from List or Query". You might want to offer a menu of labs or use a query returning newly published work during the past week.
On the Menu Bar page, add a Custom Menu web part.
The default Title is "My Menu"; edit it to better reflect your content.
Select Create from List or Query.
Use the pulldowns to select the following, in order. Each selection determines the options available in the next.
Folder: The folder or project where the list or query can be found.
Schema: The appropriate schema for the query, or "lists" for a list.
Query: The query or list name.
View: If more than one view exists for the query or list, select the one you want.
Title Column: Choose what column to use for the menu display. In the "Labs" example shown here, perhaps "Principal Investigator" would be another display option.
Click Submit to create the menu.
Each item on the menu will link to the relevant list or query result.
Resource Menus
There are three built-in custom menu types which will display links to all of the specific resources in the project. There are no customization options for these types.
Study List
If your project contains one or more studies, you can add a quick access menu for reaching the home page for any given study from the top bar.
Return to the Menu Bar tab.
Add a Study List web part. Available studies will be listed in a new menu named "Studies".
If no studies exist, the menu will contain the message "No studies found in project <Project_Name>."
Assay List
If your project contains Assays, you can add a menu listing them, along with a manage button, for easy access from anywhere in your project.
Return to the Menu Bar tab.
Add an AssayList2 web part. Available assays will be listed in a new menu named "Assays".
If no assays exist, the menu will only contain the Manage Assays button.
Samples Menu
The Samples Menu similarly offers the option of a menu giving quick access to all the sample types in the project - the menu shows who uploaded them and when, and clicking the link takes you directly to the sample type.
Return to the Menu Bar tab.
Add a Samples Menu web part. Available sample types will be listed in a new menu named "Samples".
If no samples exist, the menu will be empty.
Define a Menu in a Wiki
By creating a wiki page that is a series of links to other pages, folders, documents, or any destination you choose, you can customize a basic menu of common resources users can access from anywhere in the project.
Create a new Wiki web part somewhere in the project where it will become a menu.
Click Create a new wiki page in the new web part.
Give your new page a unique name (such as "menuTeamA").
Set the Title to the text you want to appear as the menu title, "Team Links" in this example.
Add links to folders and documents you have already created, in the order you want them to appear on the menu. For example, the wiki might contain:
[Overview|overview]
[Alpha Lab Home Page|http://localhost:8080/labkey/project/Andromeda/Alpha%20Lab/begin.view?]
[Staff Phone Numbers|directory]
In this example, we include three menu links: the overview document, the home page for the Alpha Lab, and the staff phone list document in that order.
Save and Close the wiki.
To add your wiki as a custom menu:
Return to (Admin) > Folder > Project Settings. Click the Menu Bar tab.
Select Wiki Menu from the <Add Web Part> pulldown and click Add. An empty wiki menu (named "Wiki") will be added.
In the new web part, select Customize from the (triangle) menu.
Select the location of your menu wiki from the pulldown for Folder containing the page to display. The "Page to display" pulldown will automatically populate with all wikis in that container.
Select the menu wiki you just created, "menuTeamA (Team Links)", from the pulldown for Page to display.
Click Submit to save.
If your menu does not immediately appear in the menu bar, click Refresh Menu Bar.
The team can now use your new menu anywhere in the Andromeda project to quickly access content.
Menu Visibility
By selecting the Permissions link from the pulldown on any menu web part on the Menu Bar tab, you can choose to show or hide the given menu based on the user's permission.
You must enter (Admin) > Page Admin Mode to see the "Permissions" option.
The Required Permission field is a pulldown of all the possible permission role levels. The Check Permission On option allows you to specify where to check for that permission.
If the user does not have the required permission (or higher) on the specified location, they will not see that particular menu.
Define a New Menu Type in a Custom Module
If you have defined a web part type inside a custom module (for details of the example used here see Tutorial: Hello World Module), you can expose this web part as new type of custom menu to use by adding <location name="menu"> to the appropriate .webpart.xml file, for example:
<webpart xmlns="http://labkey.org/data/xml/webpart" title="Hello World Web Part">
    <view name="begin"/>
    <locations>
        <location name="menu"/>
        <location name="body"/>
    </locations>
</webpart>
Your web part will now be available as a menu-type option.
The resulting menu will have the same title as defined in the view.xml file, and contain the contents. In the example from the Hello World Tutorial, the menu "Begin View" will show the text "Hello, World!"
Web Parts: Permissions Required to View
An administrator can restrict the visibility of a web part to only those users who have been granted a particular permission on a particular container. Use this feature to declutter a page or target content for each user, for example, by hiding links to protected resources that the user will not be able to access. In some cases you may want to base the permissions check on the user's permission in a different container. For instance, you might display two sets of instructions for using a particular folder: one for users who can read the contents, another for users who can insert new content into it.
Note that web part permissions settings do not change the security settings already present in the current folder and cannot be used to grant access to the resource displayed in the web part that the user does not already have.
Enter (Admin) > Page Admin Mode.
Open the (triangle) menu for the web part you want to configure and choose Permissions.
In the pop-up, select the Required Permission from the list of available permission levels (roles) a user may have.
Note that the listing is not alphabetical. Fundamental roles like Insert/Update are listed at the bottom.
Use the Check Permission On radio button to control whether the selected permission is required in the current container (the default) or another one.
If you select Choose Folder, browse to select the desired folder.
Click Save.
In the security user interface, administrators typically interact with "roles," which are named sets of permissions. The relationship between roles and permissions is described in detail in these topics:
LabKey Server has a group- and role-based security model. This means that each user of the system belongs to one or more security groups and can be assigned different roles (combinations of permissions) related to resources in the system. When you are considering how to secure your LabKey site or project, you need to think about which users belong to which groups, and which groups have what kind of access to which resources.
A few best practices:
Keep it simple.
Take advantage of the permissions management tools in LabKey.
Use the rule of least privilege: it is easier to expand access later than restrict it.
Prioritize sensible data organization over folder structure.
You may not need to understand every aspect of LabKey security architecture to use it effectively. In general the default security settings are adequate for many needs. However, it's helpful to be familiar with the options so that you understand how users are added, how groups are populated, and how permissions are assigned to groups.
Related Topics
Compliance - (Premium Features) Comply with security and auditing standards, such as FISMA and HIPAA.
Web Application Security - Describes the most important web application security vulnerabilities and how to protect against script injection.
Deprecated: Remote Login API - The remote login/permissions service allows cooperating websites to use LabKey Server for authentication and attach permissions to their own resources based on permissions in LabKey Server.
Configure Permissions
The security of a project or folder depends on the permissions that each group has on that resource. The default security settings are designed to meet common security needs, and you may find that they work for you and you don't need to change them. If you do need to change them, you'll need to understand how permissions settings work and what the different roles mean in terms of the kinds of access granted.
Security settings for Studies provide further refinement of the folder-level permissions covered here, including the option for granular control over access to study datasets within the folder that will override folder permissions. See Manage Study Security for details.
Roles
A role is a named set of permissions that defines what members of a group can do. You secure a project or folder by specifying a role for each group defined for that resource. The privileges associated with the role are conferred on each member of the group. Assigning roles to groups is a good way to simplify keeping track of who has access to what resources, since you can manage group membership in one place.
Setting Permissions
To set permissions, you assign a role to a group or individual user.
Note: Before you can grant roles to users, you must first add the users at the site level. When you type into the "Select user or group..." box, it will narrow the pulldown menu to the matching already defined users, but you cannot add a new account from this page.
Select (Admin) > Folder > Permissions.
Set the scope of the role assignment by selecting the project/folder in the left-hand pane.
For example, you might select a Research Study subfolder.
To grant a role, locate it in the Roles column and then select the user or group from the Select user or group... pulldown.
Click Save or Save and Finish to record any changes you make to permissions.
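To review the result of a change, you can query the server for a container's group permissions. Below is a minimal sketch using the LabKey JavaScript API, assuming the LABKEY client library is loaded; the folder path is a hypothetical placeholder, and the exact shape of the response may vary by server version.

    LABKEY.Security.getGroupPermissions({
        containerPath: '/Tutorials/Security Tutorial',  // hypothetical path
        success: function (data) {
            // data.container describes the requested folder; each group
            // entry carries the permission information granted there
            data.container.groups.forEach(function (group) {
                console.log(group.id + ': ' + group.name);
            });
        }
    });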
Revoke or Change Permissions
Permissions can be revoked by removing role assignments.
To revoke assignment to a role, click the x in the user or group box.
Here we will revoke the "Author" role for the group "Guests" as it was added by mistake.
You can also drag and drop users and groups from one role to another.
Dragging and dropping between roles removes the user or group from the source role and then adds it to the target role. If you want both roles assigned, instead add the user or group to the second role directly.
Inherit Permission Settings
You can control whether a folder inherits permission settings, i.e. role assignments, from its immediate parent.
Check the checkbox Inherit permissions from parent.
When permissions are inherited, the permissions UI is grayed out and the folder appears with an asterisk in the hierarchy panel.
Site-Level Roles
A few specific permissions are available at the site level, allowing admins to assign access to certain features to users who are not full administrators. For a listing and details about these roles, see the Security Roles Reference documentation.
To configure site-level roles:
Select (Admin) > Site > Site Permissions.
Permission Rules
The key things to remember about configuring permissions are:
Permissions are additive. This means that if a user belongs to several groups, they will have the combined (highest) level of permissions granted across all the groups they are in.
Additive permissions can get tricky when you are restricting access for one group. Consider whether other groups also have the correct permissions. For example, if you remove permissions for the ProjectX group, but the Users (logged-in-site users) group has read permissions, then Project X team members will also have read permissions when they are signed in.
Folders can inherit permissions. In general, only site admins automatically receive permissions to access newly-created folders. However, default permission settings have one exception: when the folder creator is not a project or site admin, permissions are inherited from the parent project/folder. This avoids locking the folder creator out of their own new folder.
Visibility of parent folders. If a user can read information in a subfolder, but has no read access to the parent, the name of the parent container will still appear on the folder menu and in the breadcrumb path, but it will not be a clickable navigation link.
Permissions Review Enforcement (Premium Feature)
It is good practice to periodically review project permissions to ensure that the access is always correct. Setting reminders and defining policies outside the system are recommended.
Premium Features Available
Subscribers to the Enterprise Edition of LabKey Server can use the compliance module to enable an automated Project Review Workflow to assist project managers. Learn more in the topic: Project Locking and Review Workflow.
Security groups make managing LabKey's role based permissions easier by letting administrators assign access to a named group rather than needing to manage the permissions of each individual user. Group membership may evolve over time as personnel arrive and leave the groups.
There are three main types of security groups available:
Global Groups: Built-in site-wide groups including "Site Users", "Site Administrators", and "Guests" (those who are not logged into an account). Roles can be granted anywhere on the site for these groups.
Site Groups: Defined by an admin at the site-wide level. Roles can be assigned anywhere on the site for these groups.
Project Groups: Defined by an admin only for a particular project, and can be assigned roles in that project and folders within it.
All users with accounts on LabKey belong to the "Site Users" global group. Any individual user can belong to any number of site and project groups.
Global groups are groups that are built into every LabKey Server at the site level and thus available when configuring permissions for any project. You can also define groups local to an individual project only. The global groups can be accessed by site admins via (Admin) > Site > Site Groups.
The Site Administrators group includes all users who have been added as global administrators. Site administrators have access to every resource on the LabKey site, with a few limited special-use exceptions. Only users who require these global administrative privileges should be added to the Site Administrators group. A project administrator requires a similarly high level of administrative access, but only to a particular project; such a user should be part of the Site Users group (described below) and then added to an administrators group at the project level only.
All LabKey security begins with the first site administrator, the person who installs and configures LabKey Server, and can add others to the Site Administrators group. Any site admin can also add new site users and add those users to groups. See Site Administrator for more information on the role of the site admin.
The Site Administrators group is implicit in all security settings. There's no option to grant or revoke folder permissions to this group under (Admin) > Folder > Permissions.
Developers Group
The Developers group is a site-level security group intended to include users who should be granted the ability to create server-side scripts and code. Membership in the Developers group itself does not confer any permission or access to resources. By default, the Developers group is granted the Platform Developer role, which does confer these abilities.
An administrator can revoke the Platform Developer role from this group if desired, in which case the Developers group exists purely as a convenience for assigning other roles throughout the site.
Membership in the Developers site group is managed on the page (Admin) > Site > Site Developers. The same interface is used as for project groups. Learn more about adding and removing users from groups in this topic: Manage Group Membership.
Note that you cannot impersonate the Developers group or the Platform Developers role directly. As a workaround, impersonate an individual user who has been added to the Developers group or granted the Platform Developer role respectively.
The Site Users Group
The site-level Users group consists of all users who are logged onto the LabKey system, but are not site admins. You don't need to do anything special to add users to the Site Users group; any users with accounts on your LabKey Server will be part of the Site Users group.
The Site Users group is global, meaning that this group automatically has configurable permissions on every resource on the LabKey site.
The purpose of the Site Users group is to provide a way to grant broad access to a specific resource within a project without having to open permissions for an entire project. Most LabKey users will work in one or a few projects on the site, but not in every project.
For instance, you might want to grant Reader permissions to the Site Users group for a specific subfolder containing public documents (procedures, office hours, emergency contacts) in a project otherwise only visible to a select team. Because every logged-in user is a member of the Site Users group, the subfolder will be visible to all logged-in users regardless of their other permissions or roles.
The Guests/Anonymous Group
Anonymous users, or guests, are any users who access your LabKey site without logging in. The Guests group is a global group whose permissions can be configured for every project and folder. It may be that you want anonymous users to be able to view wiki pages and post questions to a message board, but not to be able to view MS2 data. Or you may want anonymous users to have no permissions whatsoever on your LabKey site. An important part of securing your LabKey site or project is to consider what privileges, if any, guests should have.
Permissions for guests can range from no permissions at all, to read permissions for viewing data, to write permissions for both viewing and contributing data. Guests can never have administrative privileges on a project.
Site Groups allow site admins to define and edit site-wide groups of users. In particular, grouping users by the permissions they require and then assigning permissions to the group as a whole can greatly simplify access management. A key advantage is that if membership in the organization changes (new users enter or leave a given group of users) only the group membership needs updating - the permissions will stay with the group and not need to be reassigned in every container.
Site groups have no default permissions but are visible to every project and may be assigned project-level permissions as a group. Using site groups has the advantage of letting admins identify the affiliations of users while viewing the site users table where project group membership is not shown.
The server has built-in site groups described here: Global Groups.
Create a Site Group
View current site groups by selecting (Admin) > Site > Site Groups.
To create a new group, enter the name (here "Experimenters"), then click Create New Group. You may add users or groups and define permissions now, or manage the group later. Click Done to create your group.
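Site groups can also be created programmatically. Below is a minimal sketch using the LabKey JavaScript API, assuming the LABKEY client library is loaded and that the exact response shape may vary by version; passing the root container path ('/') scopes the group to the whole site rather than to a single project.

    LABKEY.Security.createGroup({
        containerPath: '/',           // root path creates a site-level group
        groupName: 'Experimenters',
        success: function (group) {
            console.log('Created group "' + group.name + '" with id ' + group.id);
        }
    });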
Manage Site Groups
Membership in site groups is managed in the same way as membership in project groups. Learn more in this topic: Manage Group Membership.
Click the group name to view the group information box.
Add a single user or group using the pulldown menu.
Remove a user from the group by clicking Remove.
View an individual's permissions via the Permissions button next to his/her email address.
Manage permissions for the group as a whole by clicking the Permissions link at the top of the dialog box.
Click Manage Group to add or remove users in bulk as well as send a customized notification message to newly added users.
Grant Project-Level Permissions to a Site Group
To grant project-level permissions to Site Groups (including the built-in groups Guests and Site Users), select (Admin) > Folder > Permissions from the project or folder. Site groups will be listed among those eligible for assignment to each role.
Project groups are groups of users defined only for a particular project and the folders beneath it. Permissions/roles granted to project groups apply only to that project and not site-wide. Using project groups helps you simplify managing the roles and actions available to sets of users performing the same kinds of tasks. You can define any number of groups for a project and users can be members of multiple groups.
To define groups or configure permissions, you must have administrative privileges on that project or folder.
Create a Project Group
View current project groups by selecting (Admin) > Folder > Permissions and clicking the Project Groups tab. To create a new group, type the name into the box and click Create New Group.
Manage Group Membership
Add Users to Groups
When you create a new group, or click the name of an existing group in the permissions UI, you will open a popup window for managing group information.
In the popup window, you can use the pulldown to add project or site users to your new group right away, or you can simply click Done to create the empty group. Your new group will be available for granting roles and can be impersonated even before adding actual users.
Later, return to the project group list, click the group name to reopen the group information popup. You can now use the pulldown to add members or other groups to the group. Once you've added users, new options are available.
As you start to type the name of a user or group, you'll see the existing users to choose from. If you are a project (or site) administrator, and type the email address of a user who does not already have an account on the site, you can hit return and have the option to add a new account for them.
Remove Users from Groups
To remove a user from a group, open the group by clicking the group name in the permissions UI. In the popup, click Remove for the user to remove.
You can also see a full set of permissions for that user by clicking Permissions for the user, or the set for the group by clicking the Permissions link at the top of the popup.
When finished, click Done.
Add Multiple Users
From the permissions UI, click the name of a group to open the information popup, then click Manage Group in the upper right for even more options including bulk addition and removal of users. You can rename the group using the link at the top, and the full history of this group's membership is also shown at the bottom of this page.
To add multiple users, enter each email on its own line in the Add New Members panel. As you begin to type, autocompletion will show you defined users and groups. By default, email will be sent to the new users; uncheck the box to skip this notification. Include an optional message if desired. Click Update Group Membership.
Remove Multiple Users
To remove multiple users, reopen the same page for managing the group. Notice that group members who are already included in subgroups are highlighted with an asterisk. You can use the Select Redundant Members button to select them all at once.
To remove multiple group members, check their boxes in the Remove column. Then click Update Group Membership.
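These bulk operations are also available programmatically. Here is a minimal sketch using the LabKey JavaScript API, assuming the LABKEY client library is loaded; the group and user ids are hypothetical placeholders. It adds two users to a group, then removes one of them, mirroring the Update Group Membership actions above.

    LABKEY.Security.addGroupMembers({
        groupId: 1025,                 // hypothetical group id
        principalIds: [1101, 1102],    // hypothetical user ids
        success: function () {
            LABKEY.Security.removeGroupMembers({
                groupId: 1025,
                principalIds: [1102],
                success: function () {
                    console.log('Membership updated.');
                }
            });
        }
    });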
Delete
To delete a group, you must first remove all the members. Once the group is empty, you will see a Delete Empty Group button on both the manage page and in the information popup.
Default Project Groups
When you create a new project, you can elect whether to start the security configuration from scratch ("My User Only") or clone the configuration from an existing project. Every new project started from scratch includes a default "Users" group. It is empty when a project is first created, and not granted any permissions by default.
It is common to create an "Administrators" group, either at the site or project level. It's helpful to understand that there is no special status conferred by creating a group of that name. All permissions must be explicitly assigned to groups. A site administrator can configure a project so that no other user has administrative privileges there. What is important is not whether a user is a member of a project's "Administrators" group, but whether any group that they belong to has the administrator role for a particular resource.
Permissions are configured individually for every individual project and folder. Granting a user administrative privileges on one project does not grant them on any other project. Folders may or may not inherit permissions from their parent folder or project.
"All Project Users"
The superset of members of all groups in a project is referred to as "All Project Users". This is the set of users you will see on the Project Users grid, and it is also the default population of an issue tracker's "assigned to" list.
Guests are any users who access your LabKey site without logging in. In other words, they are anonymous users. The Guests group is a global group whose permissions can be configured for every project and folder. It may be that you want anonymous users to be able to view wiki pages and post questions to a message board, but not to be able to view MS2 data. Or you may want anonymous users to have no permissions whatsoever on your LabKey site. An important part of securing your LabKey site or project is to consider what privileges, if any, anonymous users should have.
Permissions for anonymous users can range from no permissions at all, to read permissions for viewing data, to write permissions for both viewing and contributing data. Anonymous users can never have administrative privileges on a project or folder.
Granting Access to Guest Users
You can choose to grant or deny access to guest users for any given project or folder.
To change permissions for guest users, follow these steps:
Go to (Admin) > Folder > Permissions and confirm the desired project/folder is selected.
Add the guest group to the desired roles. For example, if you want to allow guests to read but not edit, then add the Guests group in the Reader section. For more information on the available permissions settings, see Configure Permissions.
Click Save and Finish.
Default Settings
Guest Access to the Home Project
By default, guests have read access to your Home project page, as well as to any new folders added beneath it. You can easily change this by editing folder permissions to uncheck the "inherit permissions from parent" box and removing the Guests group from the Reader role. To ensure that guest users cannot view your LabKey Server site at all, simply remove the group from the Reader role at the "home" project level.
Guest Access to New Projects
New projects by default are not visible to guest users, nor are folders created within them. You must explicitly change permissions for the Guests group if you wish them to be able to view any or all of a new project.
A role is a named set of permissions that defines what a user (or group of users) can do. This topic provides details about site and project/folder scoped roles.
Site Scoped Roles
These roles apply across the entire site. Learn about setting them here.
Site Administrator: The site administrator role is the most powerful role in LabKey Server. They control the user accounts, configure security settings, assign roles to users and groups, create and delete folders, etc. Site administrators are automatically granted nearly every permission in every project or folder on the server. There are some specialized permissions not automatically granted to site admins, such as adjudicator permissions and permission to view PHI data. See Site Administrator.
Application Administrator: This role is used for administrators who should have permissions above Project Administrators but below Site Administrators. It conveys permissions similar to Site Administrator, but excludes activities that are "operational" in nature. For example, they can manage the site, but can't change file/pipeline roots or configure the database connections. For details, see Administrator Role / Permissions Matrix.
Troubleshooter: Troubleshooters may view administration settings but may not change them. Troubleshooters see an abbreviated admin menu that allows them to access the Admin Console. Most of the diagnostic links on the Admin Console, including the Audit Log, are available to Troubleshooters.
See User and Group Details: Allows non-administrators to see email addresses and contact information of other users as well as information about security groups.
See Email Addresses: Allows selected non-administrators to see email addresses.
See Audit Log Events: Only admins and selected non-administrators granted this role may view audit log events and queries.
Email Non-Users: Allows sending email to addresses that are not associated with a LabKey Server user account.
See Absolute File Paths: Allows users to see absolute file paths.
Use SendMessage API: Allows users to use the send message API. This API can be used to author code which sends emails to users (and potentially non-users) of the system.
Platform Developer: The platform developer role allows admins to grant developer access to trusted individuals who can then write and deploy code outside the LabKey security framework. By default, the Developer group is granted this role on a site-wide basis. Learn more in this topic: Platform Developer Role
Project Creator: Allows users to create new projects via the CreateProject API and optionally also grant themselves the Project Administrator role in that new project. Note that creating new projects in the UI is not granted via this role. Only Site Administrators can create new projects from the project and folder menu.
Project Review Email Recipient: (Premium Feature) Project administrators who are assigned this role will receive notification emails about projects needing review. Learn more in this topic: Project Locking and Review Workflow
Module Editor: (Premium Feature) This role grants the ability to edit module resources. Learn more in this topic: Module Editing Using the Server UI
Trusted Analyst: (Premium Feature) This role grants the ability to write code that runs on the server in a sandbox as well as the ability to share that code for use by other users under their own userIDs. For set up details, see Developer Roles.
Analyst: (Premium Feature) This role grants the ability to write code that runs on the server, but not the ability to share that code for use by other users.
Launch and use RStudio Server: (Premium Feature) Allows the user to use a configured RStudio Server.
Developer: Developer is not a role, but a site-level group that users can be assigned to. Roles can then be granted to that group, typically to allow things like creating executable code on the server, adding R reports, etc. For details see Global Groups.
Project and Folder Scoped Roles
Users and groups can be assigned the following roles at the project or folder level. Learn about setting them here.
Project and Folder Administrator: Similar to site admins, project and folder administrators also have broad permissions, but only within a given project or folder. Within their project or folder scope, these admins create and delete subfolders, add web parts, create and edit sample types and assay designs, configure security settings, and manage other project and study resources.
When a new subfolder is created within a project, existing project admin users and groups will be granted the folder admin role in the new folder. The admin creating the folder can adjust that access as needed. Once a folder is created and permissions configured, any subsequent new project admin users or groups will not automatically be granted folder admin to the existing folder.
Editor: The editor role lets the user add new information and modify and delete most existing information. For example, an editor can import, modify, and delete data rows; add, modify, and delete wiki pages; post new messages to a message board and edit existing messages, and so on.
Editor without Delete: This role lets the user add new information and modify some existing information, as described above for the Editor role, but not delete information. For example, an "Editor without Delete" can import and modify data rows, but not delete them.
Author: The author role lets you view data and add new data, but not edit or delete data. Exceptions are Message board posts and Wiki pages: Authors can edit and delete their own posts and pages.
Reader: The reader role lets you read text and data, but generally you can't modify it.
Submitter: The Submitter role is provided to support users adding new information but not editing existing information. Note that this role does not include read access; it must be granted separately if appropriate.
A user with both Submitter and Reader roles can insert new rows into lists.
When used with the issue tracker, a Submitter is able to insert new issue records, but not view or change other records. If the user assigned the Submitter role is not also assigned the Reader role, such an insert of a new issue would need to be performed via the API.
When used in a study with editable datasets, a user with both Submitter and Reader roles can insert new rows but not modify existing rows.
Message Board Contributor: This role lets you participate as an "Author" in message board conversations and Object-Level Discussions. You cannot start new discussions, but can post comments on existing discussions. You can also edit or delete your own comments on message boards.
Shared View Editor: This role lets the user create and edit shared views without having broader Editor access. Shared View Editor includes Reader access, and applies to all available queries or datasets.
Electronic Signer: Signers may electronically sign snapshots of data.
Assay Designer: Assay designers may perform several actions related to creating assay designs.
Storage Editor: This role is required (in addition to "Reader" or higher) to read, add, edit, and delete data related to items in storage, picklists, and jobs. Available for use with Freezer Management in the LabKey Biologics and Sample Manager applications. Learn more in this topic: Freezer Storage Roles
Storage Designer: This role is required (in addition to "Reader" or higher) to read, add, edit, and delete data related to storage locations. Available for use with Freezer Management in the LabKey Biologics and Sample Manager applications. Learn more in this topic: Freezer Storage Roles
Workflow Editor: This role allows users to be able to add, update, and delete picklists and workflow jobs within Sample Manager or Biologics LIMS. It does not include general "Reader" access, or the ability to add or edit any sample, bioregistry, or assay data.
QC Analyst: (Premium Feature) - Perform QC related tasks, such as assigning QC states in datasets and assays. This role does not allow the user to manage QC configurations, which is available only to administrators. For set up details, see Assay QC States - Admin Guide.
PHI-related Roles: (Premium Feature) - For details see Compliance: Security Roles. Note that these roles are not automatically granted to administrators.
The table below shows the individual permissions that make up the common roles assigned in LabKey. There are other roles available for specific use cases; details follow the table.
Roles are listed as columns, individual permissions are listed as rows. A dot indicates that the individual permission is included in the given role. For example, when you set "Update" as the required permission, you are making the web part visible only to Administrators and Editors.
These roles are granted at the site level and grant specific permissions typically reserved for administrators.
Troubleshooter: This role grants a user Read-only access to the admin console, including the audit log.
See Email Addresses: This role grants the ability to see user email addresses, a permission typically reserved for Administrators.
See Audit Log Events: This role grants the ability to see audit log events, a permission typically reserved for Administrators.
Email Non-Users: This role grants the ability to email non-users, a permission typically reserved for Administrators.
See Absolute File Paths: This role grants the ability to see absolute file paths, a permission typically reserved for Administrators.
Use SendMessage API: This role grants the ability to use the send message API, a permission typically reserved for Administrators.
Platform Developer: This role grants the ability to write and deploy code outside the LabKey security framework. Use caution when granting this role.
Project Creator: Allows users to create new projects via the CreateProject API and optionally also grant themselves the Project Administrator role in that new project. Note that creating new projects in the UI is not granted via this role. Only Site Administrators can create new projects from the project and folder menu.
Trusted Analyst: (Premium Feature) This role grants the ability to write code in a sandboxed environment and share it with other users.
Analyst: (Premium Feature) This role grants the ability to write code in a sandboxed environment, but not to share it with other users.
The following tables describe the administrative activities available for the following roles:
Site Administrator
Application Administrator
Troubleshooter
To assign these roles, go to (Admin) > Site > Site Permissions.
The first three tables describe the options available via the (Admin) > Site > Admin Console, on the Settings tab. The last table describes various user- and project-related activities.
Table Legend
all: the Administrator has complete control over all of the setting options.
read-only: the Administrator can see the settings, but not change them.
none: the Administrator cannot see or change the settings.
Admin Console Section: Configuration
Action                    | Site Admin | Application Admin | Troubleshooter
ANALYTICS SETTINGS        | all        | read-only         | read-only
AUTHENTICATION            | all        | read-only         | read-only
CHANGE USER PROPERTIES    | all        | all               | none
EMAIL CUSTOMIZATION       | all        | all               | none
EXPERIMENTAL FEATURES     | all        | none              | none
EXTERNAL REDIRECT HOSTS   | all        | all               | read-only
FILES                     | all        | none              | none
FLOW CYTOMETRY            | all        | none              | none
FOLDER TYPES              | all        | all               | none
LOOK AND FEEL SETTINGS    | all        | read all, change all except System Email Address and Custom Login | read-only
MASCOT SERVER             | all        | none              | none
MISSING VALUE INDICATORS  | all        | all               | none
PROJECT DISPLAY ORDER     | all        | all               | none
SHORT URLS                | all        | all               | none
SITE SETTINGS             | all        | read-only         | read-only
SYSTEM MAINTENANCE        | all        | read-only         | read-only
VIEWS AND SCRIPTING       | all        | read-only         | read-only
Admin Console Section: Management
Action                       | Site Admin | Application Admin | Troubleshooter
AUDIT LOG                    | all        | all               | read-only
ETL - ALL JOB HISTORIES      | all        | all               | none
ETL - RUN SITE SCOPE ETLS    | all        | all               | none
FULL-TEXT SEARCH             | all        | read-only         | read-only
MS2                          | all        | none              | none
ONTOLOGY                     | all        | all               | none
PIPELINE                     | all        | all               | none
PIPELINE EMAIL NOTIFICATIONS | all        | none              | none
PROTEIN DATABASES            | all        | none              | none
SITE-WIDE TERMS OF USE       | all        | all               | none
Admin Console Section: Diagnostics
Action                           | Site Admin | Application Admin | Troubleshooter
ACTIONS                          | all        | all               | read-only
CACHES                           | all        | read-only         | read-only
CHECK DATABASE                   | all        | none              | none
CREDITS                          | read-only  | read-only         | read-only
DATA SOURCES                     | all        | read-only         | read-only
DUMP HEAP                        | all        | all               | read-only
ENVIRONMENT VARIABLES            | all        | read-only         | read-only
LOGGERS                          | all        | none              | none
MEMORY USAGE                     | all        | all               | read-only
PROFILER                         | all        | read-only         | none
QUERIES                          | all        | all               | read-only
RESET SITE ERRORS                | all        | all               | none
RUNNING THREADS                  | all        | all               | read-only
SITE VALIDATION                  | all        | all               | none
SQL SCRIPTS                      | all        | none              | none
SYSTEM PROPERTIES                | all        | read-only         | read-only
TEST EMAIL CONFIGURATION         | all        | all               | none
VIEW ALL SITE ERRORS             | all        | all               | read-only
VIEW ALL SITE ERRORS SINCE RESET | all        | all               | read-only
VIEW PRIMARY SITE LOG FILE       | all        | all               | read-only
User and Project/Folder Management
Action | Site Admin | Application Admin | Project Admin | Folder Admin | Non-Admin User
Create new users | yes | yes | yes | no | no
Delete existing, and activate/de-activate users | yes | yes (1) | no | no | no
See user details, change password, view permissions, see history grid | yes | yes (2) | yes (3) | yes (3) | only for themselves
Edit user details, change user email address | yes | yes (4) | no | no | no
Update set of Site Admins and Site Developers | yes | no | no | no | no
Update set of Application Admins | yes | yes (5) | no | no | no
Update Site Groups (create, delete, update members, rename, export members) | yes | yes (6) | no | no | no
Update Project Groups (create, delete, update members, rename, export members) | yes | yes (6) | yes | no | no
Update Site Permissions | yes | yes | no | no | no
Impersonate users, roles, and groups | yes | yes (7) | yes (8) | yes (8) | no
Create and delete projects | yes | yes (9) | no | no | no
Create and delete project subfolders | yes | yes | yes | yes | no
Update Project Settings | yes | yes (10) | yes (10) | yes (10) | no
Notes:
(1) Application Admins are not allowed to delete or deactivate a Site Admin.
(2) Application Admins are not allowed to reset the password of a Site Admin.
(3) Project/Folder Admins cannot change passwords.
(4) Application Admins are not allowed to edit details or change the email address of a Site Admin.
(5) The current user receives a confirmation message when trying to remove themselves from the Application Admin role.
(6) Application Admins cannot update group memberships for Site Admins or Developers.
(7) Application Admins cannot gain Site Admin permissions by impersonating a Site Admin.
(8) Project Admins can only impersonate users in their project; Folder Admins can only impersonate users in their folder.
(9) Application Admins cannot set a custom file root for the project on the last step of the project creation wizard.
(10) Only Site Admins can set the file root to a custom path.
The following table lists the minimum role required to perform some activity with reports, charts, and grids. For example, to create an attachment, the minimum role required is Author. In general, with "Reader" access to a given folder or dataset, you can create visualizations to help you better understand the data (for example, check for outliers or confirm a conclusion suggested by another), but you cannot share your visualizations or change the underlying data. To create any sharable report or grid view, such as for collaborative work toward publication of results based on that data, "Author" permission is required.
General Guidelines
Guests: Can experiment with LabKey features (time charts, participant reports, etc.) but cannot save any reports/report settings.
Readers: Can save reports but not share them.
Authors: Can save and share reports (not requiring code).
Developers: Extends permission of role to reports requiring code.
The Platform Developer role allows admins to grant developer access to trusted individuals who can then write and deploy code outside the LabKey security framework. By default, the Developer group is granted this role on a site-wide basis. When that code is executed by others, it may run with different permissions than the original developer user had been granted.
The Platform Developer role is very powerful because:
Any Platform Developer can write code that changes data and server behavior.
This code can be executed by users with very high permissions, such as Site Administrators and Full PHI Readers. This means that Platform Developers have lasting and amplified powers that go beyond their limited tenure as composers of code.
Administrators should (1) carefully consider which developers are given the Platform Developer role and (2) have an ongoing testing plan for the code they write. Consider the Trusted Analyst and Analyst roles as an alternative on Premium Editions of LabKey Server.
Grant the Platform Developer Role
To grant the platform developer role, an administrator selects (Admin) > Site > Site Permissions. They can either add the user or group directly to the Platform Developer role, or, if the Developers site group is already granted this role, add the user to that group by clicking the "Developers" box, then adding the user or group there.
Platform Developer Capabilities
The capabilities granted to users with the Platform Developer role include the following. They must also have the Editor role in the folder to use many of these:
APIs:
View session logs
Turn logging on and off
Reports:
Create script reports, including JavaScript reports
Share private reports
Create R reports on data grids
Views:
Customize participant views
Access the Developer tab in the plot editor for including custom scripts in visualizations
Export chart scripts
Schemas and Queries:
Create/edit/delete custom queries in folders where they also have the Editor role
View raw JDBC metadata
JavaScript:
Create and edit announcements with JavaScript
Create and copy text/HTML documents with JavaScript
Create and edit wikis with JavaScript using tags such as <script>, <style>, and <iframe>
Upload HTML files that include JavaScript with tags such as <script>, <style>, and <iframe>
Create tours
Developer Tools:
More verbose display and logging
Developer Links options on the (Admin) menu
Use of the mini-profiler
etc.
Developer Site Group
One of the built in site groups is "Developer". This is not a role, but membership in this group was used to grant access to developers prior to the introduction of the platform developer role. By default, the Developers site group is granted the platform developer role.
Developer Links Menu
Developers have access to additional resources on the (Admin) menu.
Premium Feature — The Trusted Analyst and Analyst roles are available in all Premium Editions of LabKey Server. Learn more or contact LabKey.
Trusted Analyst
The role Trusted Analyst grants the ability to write code that runs on the server in a sandbox.
Sandboxing is a software management strategy that isolates applications from critical system resources. It provides an extra layer of security to prevent harm from malware or other applications. Note that LabKey does not verify the security of a configuration an administrator marks as "sandboxed".
Code written by trusted analysts may be shared with other users and is presumed to be trusted. Admins should assign users to this role with caution as they will have the ability to write scripts that will be run by other users under their own userIds.
Go to (Admin) > Site > Site Permissions and give the Trusted Analyst role to the desired script/code writers.
In the folders where they will write code, also give them the Editor role.
Trusted analysts also have the ability to create/edit/delete custom queries in folders where they also have the Editor role.
Analyst
The role Analyst grants the ability to write code that runs on the server, but not the ability to share that code for use by other users. For example, an analyst can use RStudio if it is configured, but may not write R scripts that will be run by other users under their own userIDs.
A user with only the Analyst role cannot write new SQL queries.
Two roles specific to managing freezer storage of physical samples let Sample Manager and LabKey Biologics administrators independently grant users control over the physical storage details for samples. This topic describes the Storage Editor and Storage Designer roles.
Administrators can assign permission roles in the Sample Manager application by using the Administration option under the user menu, then clicking Permissions. For LabKey Biologics, switch to the LabKey Server interface for your folder and select (Admin) > Folder > Permissions.
Storage Roles for Freezer Management
Both storage roles include the ability to read Sample data, but not the full "Reader" role for other resources in the system (such as assay data, data classes, media, or notebooks). In addition, these roles supplement, but do not replace the usual role levels like "Editor" and "Administrator" which may have some overlap with these storage-specific roles.
Storage roles support the ability to define users who can:
Manage aspects of the physical storage inventory and read sample data, but not read other data (assays, etc), not change sample definitions or assay definitions: Grant a storage role only.
Manage aspects of the physical storage inventory and read all application data, including assays, etc, but not change sample definitions or assay definitions: Grant a storage role plus the "Reader" role.
Manage sample definitions and assay designs, and work with samples and their data in workflow jobs and picklists, but not manage the physical storage inventory. Grant "Editor" or higher but no storage role.
Manage both storage inventories and non-sample data such as assay designs: Grant both the desired storage role and "Editor" or higher.
Storage Editor
The role of "Storage Editor" confers the ability to read, add, edit, and delete data related to items in storage, picklists, and jobs. Note that the storage-related portion of this role is not included in any other access levels, including Administrator.
A user with the Storage Editor role can:
Add, move, check in/out, and discard samples from storage.
Create, update, and delete sample picklists.
Create workflows and add or update sample assignments to jobs.
Update a sample's status.
This role does not include permission to add or edit any storage locations or storage units, nor to read data other than sample data.
Storage Designer
The role of Storage Designer confers the ability to read, add, edit, and delete data related to storage locations and storage units. Administrators also have these abilities.
A user with the Storage Designer role can:
Create, update, and delete storage locations, freezers, and storage units.
This role does not include permission to add samples to storage, check them in or out, update sample status, update picklists or workflow jobs, or read data other than sample data.
In order to access secured resources, a user must have a user account on the LabKey Server installation and log in with their user name and password. User accounts are managed by a user with administrative privileges – either a site administrator, who has admin privileges across the entire site, or a user who has admin permissions on a given project or folder.
To change your information, click the Edit button. The display name defaults to your email address; to avoid security and spam issues, you can set it manually to a name that identifies the user but is not a valid email address. You cannot change your display name to a name already in use on the server. When all changes are complete, click Done.
Add an Avatar
You can add an avatar image to your account information by clicking Edit, then clicking Browse or Choose File for the Avatar field. The image file you upload (.png or .jpg for example) must be at least 256 pixels wide and tall.
Change Password
To change your password click Change Password. You will be prompted to enter the Old Password (your current one) and the new one you would like, twice. Click Set Password to complete the change.
An administrator may also change the user's password, and has an option to force a reset, which will immediately cancel the user's current password and send an email to the user containing a link to the reset password page. When a password is reset by an admin, the user will remain logged in for their current session, but once that session expires, the user must reset their password before they can log in again.
Change Email
The ability for users to change their own email address must first be enabled by an administrator. If not available, this button will not be shown.
To change your email address, click Change Email. You cannot use an email address already in use by another account on the server. Once you have changed your email address, verification from the new address is required within 24 hours or the request will time out. When you verify your new email address you will also be required to enter the old email address and password to prevent hijacking of an unattended account.
When all changes are complete, click Done.
Sign In/Out
You'll find the Sign In/Out menu on the (User) menu. Click to sign in or out as required. Sign in with your email address and password.
Session Expiration
If you try to complete a navigation or other action after your session expires, either due to timeout, signing out in another browser window, or server availability interruption, you will see a popup message indicating the reason and inviting you to reload the page to log back in and continue your action.
The person who installs LabKey Server at their site becomes the first member of the Site Administrators group and has administrative privileges across the entire site. Members of this group can view any project, make administrative changes, and grant permissions to other users and groups. For more information on built in groups, see Global Groups.
When you add any users to the Site Administrators group, they will have full access to your LabKey site.
Most users do not require such broad administrative access to LabKey, and should be added as site users rather than as administrators. Users who require admin access for a particular project can be granted administrative access at the project level only.
Go to (Admin) > Site > Site Admins. You'll see the current membership of the group.
In the Add New Members text box, enter the email addresses for your new site administrators.
Choose whether to send an email and if so, how to customize it.
Once a site administrator has set up LabKey Server, they can start adding new users. There are several ways to add new users to your LabKey installation.
Note that there is an important distinction between adding a user account and granting that user permissions. By default, newly added accounts only have access as part of the "Site: Users" group. Be sure to grant users the access they need, generally by adding them to security groups with the required access.
Users Authenticated by LDAP and Single Sign-On
If your LabKey Server installation has been configured to authenticate users with an LDAP server or single-sign on via SAML or CAS, then you don't need to explicitly add user accounts to LabKey Server.
Every user recognized by the LDAP or single sign-on servers can log into LabKey using their user name and password. If they are not already a member, any user who logs in will automatically be added to the "Site Users" group, which includes all users who have accounts on the LabKey site.
If you are not using LDAP or single sign on authentication, then you must explicitly add each new user to the site, unless you configure self sign-up.
Site Admin Options
If you are a site administrator, you can add new users to the LabKey site by entering their email addresses on the Site Users page:
Select (Admin) > Site > Site Users.
Click Add Users.
Enter one or more email addresses.
Check the box to Clone permissions from an existing user if appropriate, otherwise individually assign permissions next.
Check the box if you want to Send password verification email to all new users. See note below.
Click Add Users.
You'll see a message indicating success and an option to review the new user email.
Click Done when finished.
Note that if you have enabled LDAP authentication on a premium edition of LabKey Server, emails will only be sent to new users whose email domain does not match any of the configured LDAP domains. The configured LDAP domains will be listed in the user interface for this checkbox.
Site admins may also use the pathway described below for adding users via the security group management UI.
Project Admin Options
If you are a project administrator, you can add new users to the LabKey site from within the project. Any users added in this way will also be added to the global "Site Users" group if they are not already included there.
Select (Admin) > Folder > Permissions.
Click the Project Groups tab.
Click the name of the group to which you want to add the user (add a new group if needed).
Type the user's email address in the "Add user or group..." box.
You'll see the list of existing users narrow so that you can select the user if their account has already been created.
If it has not, hit return after typing and a popup message will ask if you want to add the user to the system. Confirm.
To bulk add new site users to a project group, click the group name then click Manage Group.
You'll see a box to add new user accounts here, each on one line, similar to adding site users described above.
The Manage Group page provides the ability to suppress password verification emails; adding users via the permissions UI does not provide this option (emails are always sent to non-LDAP users).
Return to the Permissions tab to define the security roles for that group if needed.
Click Save and Finish when finished.
When an administrator adds a non-LDAP user, the user will receive an email containing a link to a LabKey page where the user can choose their own password. A cryptographically secure hash of the user-selected password is stored in the database and used for subsequent authentications.
Note: If you have not configured an email server for LabKey Server to use to send system emails, you can still add users to the site, but they won't receive an email from the system. You'll see an error indicating that the email could not be sent that includes a link to an HTML version of the email that the system attempted to send. You can copy and send this text to the user directly if you would like them to be able to log into the system.
All users in LabKey must have user accounts at the site level. The site administrator can add and manage registered user accounts via (Admin) > Site > Site Users, as described in this topic. Site user accounts may also be added by site or project administrators from the project permissions interface.
Project Administrators can manage similar information for project users by going to (Admin) > Folder > Project Users. See Manage Project Users for further information.
Edit User Information
To edit information for a user from the site admin table, hover over the row for the user of interest to expose the (Details) link in the first column, then click it to view editable details.
Show Users: Return to the site users table.
Edit: Edit contact information.
Reset Password: Force the user to change their password by clearing the current password and sending an email to the user with a link to set a new one before they can access the site.
Create Password: If LDAP or another authentication provider is enabled on a premium edition of LabKey Server, the user may not have a separate database password. In this case you will see a Create Password button that you can use to send the "Reset Password" email to this user for selecting a new database password. This will provide an alternative authentication mechanism.
Delete Password: If a password was created on the database, but another authentication provider, such as LDAP is in use, you can delete the database password for this user with this button.
Change Email: Edit the email address for the user.
Deactivate: Deactivated users will no longer be able to log in, but their information will be preserved (for example, their display name will continue to be shown in places where they've created or modified content in the past) and they can be re-activated at a later time.
Delete: Permanently delete the user. This action cannot be undone and you must confirm before continuing by clicking Permanently Delete on the next page. See below for some consequences of deletion; you may want to consider deactivating the user instead.
History: Below the user properties you can see the history of logins, impersonations, and other actions for this user.
Users can manage their own contact information when they are logged in, by selecting (User) > My Account from the header of any page.
Customize User Properties
You cannot delete system fields, but can add fields to the site users table, change display labels, change field order, and also define which fields are required.
Select (Admin) > Site > Site Users.
Click Change User Properties.
To mark a field as required, check the Required box (for example, for "LastName").
To rearrange fields, use the six-block handle on the left.
To edit a display label, or change other field properties, click the field's expansion icon to open the editing panel.
To add a new field, such as "MiddleName", as shown below:
Click Add Field.
Enter the Name: MiddleName (no spaces).
Leave the default "Text" Data Type selected.
Click Save when finished.
UID Field for Logins (Optional)
If an administrator configures a text field named "UID", users will be able to use this field when logging in (for either LabKey-managed passwords or LDAP authentications), instead of entering their email address into the login form. This can provide a better user experience when usernames don't align exactly with email addresses. The UID field must be populated for a user in order to enable this alternative.
Manage Permissions
To view the groups that a given user belongs to and the permissions they currently have for each project and folder on the site, click the [permissions] link next to the user's name on the Site Users page.
If your security needs require that certain users only have access to certain projects, you must still create all users at the site level. Use security groups to control user access to specific projects or folders. Use caution as the built in group "Site: Users" will remain available in all containers for assignment to roles.
Deactivate Users
The ability to deactivate a user allows you to preserve a user identity within your LabKey Server even after site access has been withdrawn from the user. Retained information includes all audit log events, group memberships, and individual folder permissions settings.
When a user is deactivated, they can no longer log in and they no longer appear in drop-down lists that contain users. However, records associated with inactive users still display the users' names. If you instead deleted the user completely, the display name would be replaced with a user ID number and in some cases a broken link.
Some consequences of deactivation include:
If the user is the recipient of important system notifications, those notifications will no longer be received.
If the user owned any data reloads (such as of studies or external data), when the user is deactivated, these reloads will raise an error.
Note that any scheduled ETLs will be run under the credentials of the user who checked the "Enabled" box for the ETL. If this account is later deactivated, the admin will be warned if the action will cause any ETLs to be disabled.
Such disabled ETLs will fail to run until an active user account unchecks and rechecks the "Enabled" box for each. Learn more in this topic: ETL: User Interface. A site admin can check the pipeline log to determine whether any ETLs have failed to run after deactivating a user account.
The Site Users and Project Users pages show only active users by default. Inactive users can be shown as well by clicking Include Inactive Users above the grid.
Note that if you View Permissions for a user who has been deactivated, you will see the set of permissions they would have if they were reactivated. The user cannot access this account so does not in fact have those permissions. If desired, you can edit the groups the deactivated user is a member of (to remove the account from the group) but you cannot withdraw folder permissions assigned directly to a deactivated account.
Reactivate Users
To re-activate a user, follow these steps:
Go to the Site Users table at (Admin) > Site > Site Users.
Click Include Inactive Users.
Find the account you wish to reactivate.
Select it, and click Reactivate.
This takes you to a confirmation page. Click the Reactivate button to finish.
Delete Users
When a user leaves your group or should no longer have access to your server, before deciding to delete their account, first consider whether that user ID should be deactivated instead. Deletion is permanent and cannot be undone. You will be asked to confirm the intent to delete the user.
Some consequences of deletion include:
The user's name is no longer displayed with actions taken or data uploaded by that user.
Group membership and permission settings for the deleted user are lost. You cannot 'reactivate' a deleted user to restore this information.
If the user is the recipient of important system notifications, those notifications will no longer be received.
If the user owned any data reloads (such as of studies or external data), when the user is deleted, these reloads will raise an error.
If the user had created any linked or external schemas, these schemas (and all dependent queries and resources) will no longer be available.
Generally, deactivation is recommended for long-time users. The deactivated user can no longer log in or access their account, but account information is retained for audit and admin access.
Site administrators can manage users across the site via the Site Users page. Project administrators can manage users within a single project as described in this topic and in the project groups topic. Folder administrators within the project have some limited options for viewing user accounts.
A project user is defined as any user who is a member of any group within the project. To add a project user, add the user to any Project Group.
Project group membership is related to, but not identical with permissions on resources within a project. There may be users who have permissions to view a project but are not project users, such as site admins or other users who have permissions because of a site group. Likewise, a project user may not actually have any permissions within a project if the group they belong to has not been granted any permissions.
If you use an issue tracker within the project, by default you can assign issues to "All Project Users" - meaning all members of all project groups.
Project Admin Actions
Project admins can view the set of project users (i.e., the set of users who are members of at least one project group) and access each project user's details: profile, user event history, permissions tree within the project, and group events within the project. Project admins can also add users to the site here, but cannot delete or deactivate user accounts.
Select (Admin) > Folder > Project Users.
Note that if you Add Users to the site via this grid, you will use the same process as adding them at the site level, but you will NOT see them listed yet. Before you will see them on this list of project users, you must add them to a project group.
Project admins can impersonate project users within the project, allowing them to see the project just as the member sees it. While impersonating, the admin cannot navigate to any other project (including the Home project). Impersonation is available at (User) > Impersonation.
Folder Administrator Options
A folder administrator can view, but not modify or add users to, the project users table. Folder admins can see the user history, edit permissions settings, and work with project groups.
User authentication is performed through LabKey Server's core database authentication system by default. Authentication means identifying the user to the server. In contrast, user authorization is handled separately, by an administrator assigning roles to users and groups of users.
With Premium Editions of LabKey Server, other authentication methods including LDAP, SAML and CAS single sign-on protocols, and Duo two-factor authentication can also be configured. Premium Editions also support defining multiple configurations of each external authentication method. Learn more about Premium Editions here.
System Default Domain: The domain to use when a user signs in with a username only and not a full email address.
Configurations: There are two tabs for configurations:
Primary: The default primary configuration is Standard database authentication.
On servers where additional authentication methods are enabled, you can use the Add New Primary Configuration dropdown.
Secondary: Use this tab to configure a secondary authentication method for two factor authentication, such as Duo. When configured, the secondary configuration will be used to complete authentication of users who pass a primary method.
Login Form Configurations: These configurations make use of LabKey's login page to collect authentication credentials. Standard database authentication uses this method. If additional configuration methods are added, such as LDAP on a Premium Edition server, LabKey will attempt authenticating against each configuration in the order they are listed. You can drag and drop to reorder them.
Single Sign-On Configurations: Configurations in this section (if any) let LabKey users authenticate against an external service such as a SAML or CAS server. LabKey will render custom logos in the header and on the login page in the order that the configurations are listed. You can drag and drop to reorder them.
Allow Self Sign Up
Self sign up allows users to register for new accounts themselves when using database authentication. When the user registers, they provide their own email address and receive an email to choose a password and sign in. If this option is disabled, administrators must create every user account.
When enabled via the authentication page, users will see a Register button on the login page. Clicking it allows them to enter their email address, verify it, and create a new account.
When self sign up is enabled, users will need to correctly enter a captcha sequence of characters before registering for an account. This common method of 'proving' users are humans is designed to reduce abuse of the self sign up system.
Use caution when enabling this if you have enabled sending email to non-users. With the combination of these two features, someone with bad intent could use your server to send unwanted spam to any email address that someone else attempts to 'register'.
Allow Users to Edit Their Own Email Addresses (Self-Service Email Changes)
Administrators can configure the server to allow non-administrator users to change their own email address (if their password is managed by LabKey Server). To allow non-administrator users to edit their own email address, check the box to Allow users to edit their own email addresses. If this box is not checked, administrators must make any changes to the email address for any user account.
When enabled, users can edit their email address by selecting (User) > My Account. On the user account page, click Change Email.
System Default Domain
The System Default Domain specifies the default email domain for user ids. When a user tries to sign in with an email address having no domain, the specified value will be automatically appended. You can set this property as a convenient shortcut for your users. Leave this setting blank to always require a fully qualified email address.
Multiple Authentication Configurations and Methods
Premium Features — The ability to add other authentication methods and to define multiple configurations of each method is available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
Multiple authentication methods can be configured simultaneously, which provides flexibility, failsafe protections, and a convenient way for different groups to utilize their own authentication systems with LabKey Server. For example, standard database authentication, an LDAP server, and SAML can simultaneously be used.
For each external method of authentication used with Premium Editions of LabKey Server, there can also be multiple distinct configurations defined and selectively enabled. For example, a server might have five available configurations, three of which are enabled.
When multiple configurations are available, LabKey attempts to authenticate the user in the order configurations are listed on the Primary tab, followed by the Secondary tab. You can rearrange the listing order by dragging and dropping using the six-block handles on the left.
If any one configuration accepts the user credentials, the login is successful. If all enabled configurations reject the user's credentials, the login fails. This means that a user can successfully authenticate via multiple methods using different credentials. For example, if a user has both an account on a configured LDAP server and a database password then LabKey will accept either. This behavior allows non-disruptive transitions from database to LDAP authentication and gives users an alternate means in case the LDAP server stops responding or its configuration changes.
When migrating users from LDAP to the database authentication method, you can monitor progress using the "Has Password" field on the Site Users table.
LDAP: (Premium Feature) Configure LDAP servers to authenticate users against an organization's directory server.
SAML: (Premium Feature) Configure a Security Assertion Mark-up Language authentication method.
CAS: (Premium Feature) Authenticate users against an Apereo CAS server.
Duo: (Premium Feature) Require users to provide an additional piece of information to be authenticated.
Auto-create Authenticated Users
If one or more remote authentication methods are enabled, you will see an additional checkbox in the Global Settings. By default, new LabKey Server accounts are automatically created for users who are authenticated by external methods such as LDAP, SAML, or CAS. You can disable this behavior by unchecking the box.
If you disable auto creation of authenticated users, be sure to communicate to your users the process they should follow for creating a LabKey account. Otherwise they will be able to authenticate but will not have an actual LabKey account with which to use the server. As one example process, you might require an email request to a central administrator to create accounts. The administrator would create the account, the activation email would invite the user to join the server, and they would be authenticated via the external configuration.
Enable/Disable and Delete Configurations
You cannot disable or delete standard database authentication. When other configurations are available, you can use the toggle at the top of the settings panel to enable/disable them. Click the (pencil) icon to edit settings. Click the delete icon to delete a configuration.
Standard database authentication is accomplished using secure storage of each user's credentials in LabKey Server. When a user enters their password to log in, it is compared with the stored credential and access is granted if there is a match and otherwise denied.
Administrators may manually create the account using the new user's email address, or enable self-signup. The new user can choose a password and log in securely using that password. The database authentication system stores a representation of each user's credentials in the LabKey database. Specifically, it stores a cryptographically secure hash of a salted version of the user-selected password (which increases security) and compares the hashed password with the hash stored in the core.Logins table. Administrators configure requirements for password strength and the password expiration period following the instructions in this topic.
Configure Standard Database Authentication
Select (Admin) > Site > Admin Console.
Under Configuration, click Authentication.
On the Authentication page, find the section Login Form Configurations on the Primary tab.
For Standard database authentication, click the (pencil) on the right.
In the Configure Database Authentication popup, you have the following options:
Password Strength: Require Weak or Strong passwords.
The rules for each type are shown.
Click the type to use.
Password Expiration: Configure how often users must reset their passwords. Options: never, every twelve months, every six months, every three months, every five seconds (for testing).
Click Apply.
Click Save and Finish.
For details on password configuration options see:
Note: these password configuration options only apply to user accounts authenticated against the LabKey authentication database. The configuration settings chosen here do not affect the configuration of external authentication systems, such as LDAP and CAS single sign-on.
Set Default Domain for Login
If you want to offer users the convenience of automatically appending the email domain to their username at log in, you can provide a default domain. For example, to let a user with the email "justme@labkey.com" log in as simply "justme", you would configure the default domain:
Select (Admin) > Site > Admin Console.
Under Configuration, click Authentication.
Set the System default domain to the value to append to a username login. In our example, the default domain would be "labkey.com".
With this configuration, the user can type either "justme@labkey.com" or "justme" in the Email box at login.
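A minimal sketch of the lookup behavior described above; the function and parameter names are hypothetical, for illustration only:

```python
def qualify_login(login: str, system_default_domain: str = "") -> str:
    """Append the configured default domain when a user omits one at sign-in."""
    if "@" not in login and system_default_domain:
        return f"{login}@{system_default_domain}"
    return login

# Both forms resolve to the same account:
assert qualify_login("justme", "labkey.com") == "justme@labkey.com"
assert qualify_login("justme@labkey.com", "labkey.com") == "justme@labkey.com"
```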
User password requirements can be set to either "weak" or "strong" rules.
Weak rules require only that the password:
Must be at least 6 non-whitespace characters long
Must not match the user's email address
Strong rules require that passwords meet the following criteria (a sketch of these checks appears after the list):
Must be eight or more characters long
Must contain characters from at least three of the following character types:
lowercase letters (a-z)
uppercase letters (A-Z)
digits (0-9)
symbols (! @ # $ % & / < > = ?)
Must not contain a sequence of three or more characters from the user's email address, display name, first name, or last name
Must not match any of the user's 10 previously used passwords
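The following sketch approximates the strong-rule checks listed above. The symbol set is taken from the list; for readability the password-history rule is shown with plain strings, though a real system would compare salted hashes:

```python
SYMBOLS = "!@#$%&/<>=?"

def meets_strong_rules(password: str, email: str, display_name: str,
                       first_name: str, last_name: str,
                       previous_passwords: list[str]) -> bool:
    """Approximate validator for the 'strong' password rules above."""
    if len(password) < 8:
        return False
    character_classes = [
        any(c.islower() for c in password),   # lowercase letters
        any(c.isupper() for c in password),   # uppercase letters
        any(c.isdigit() for c in password),   # digits
        any(c in SYMBOLS for c in password),  # symbols
    ]
    if sum(character_classes) < 3:
        return False
    # Reject any 3+ character sequence drawn from the user's identity fields.
    lowered = password.lower()
    for field in (email, display_name, first_name, last_name):
        field = (field or "").lower()
        if any(field[i:i + 3] in lowered for i in range(len(field) - 2)):
            return False
    # History check, simplified to plain strings for this sketch.
    return password not in previous_passwords[-10:]
```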
Password Expiration
Administrators can also set the password expiration interval. Available expiration intervals are:
Never
Twelve months
Six months
Three months
Every five seconds - for testing purposes
Password Best Practices for LDAP and SSO Users
For installations that run on LDAP or SSO authentication servers, it is recommended that at least one Site Administrator account be associated with LabKey's internal database authenticator as a failsafe. This will help prevent a situation where all users and administrators become locked out of the server should the external LDAP or SSO system fail or change unexpectedly. If there is a failure of the external authentication system, a Site Administrator can sign in using the failsafe database account and create new database authenticated passwords for the remaining administrators and users, until the external authentication system is restored.
To create a failsafe database-stored password:
Select (User) > My Account.
Choose Create Password. (This will create a failsafe password in the database.)
Enter your password and click Set Password.
After setting up a failsafe password in the database, LabKey Server will continue to authenticate against the external LDAP or SSO system, but it will attempt to authenticate using database authentication if authentication using the external system fails.
This topic covers mechanisms for changing user passwords. A user can change their own password, and an administrator can prompt password changes by setting expiration timeframes or by forcing a reset directly.
Any user can reset their own password from their account page.
Select (username) > My Account.
Click Change Password.
Enter your Old Password and the New Password twice.
Click Set Password.
Forgotten Password Reset
If a user has forgotten their password, they can reset it when they are logged out by attempting to log in again.
From the logon screen, click Forgot password.
You will be prompted for the email address you use on your LabKey Server installation (it may already be populated if you have used "Remember my email address" in the past). Click Reset.
If an active account with that email exists, the user will be sent a secure link to a page where they can reset their password.
Password Security
You are mailed a secure link to maintain security of your account. Only an email address associated with an existing account on your LabKey Server will be recognized and receive a link for a password reset. This is done to ensure that only you, the true owner of your email account, can reset your password, not just anyone who knows your email address.
If you need to change your email address, learn more in this topic.
Expiration Related Password Reset
If an administrator has configured passwords to expire on some interval, you may periodically be asked to change your password after entering it on the initial sign-in page.
If signing in takes you to the change password page:
Enter your Old Password and the New Password twice.
Click Set Password.
Administrator Prompted Reset
If necessary, an administrator can force a user to change their password.
Select (Admin) > Site > Site Users.
Click the username for the account of interest. Filter the grid if necessary to find the user.
Click Reset Password.
This will clear the user's current password, send them an email with a 'reset password' link and require them to choose a new password before logging in. Click OK to confirm.
LabKey Server Account Names and Passwords
The name and password you use to log on to your LabKey Server are not typically the same as the name and password you use to log on to your computer itself. These credentials also do not typically correspond to the name and password that you use to log on to other network resources in your organization.
You can ask your administrator whether your organization has enabled single sign-on to make it possible for you to use the same logon credentials on multiple systems.
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
LabKey Server can use your organization's LDAP (Lightweight Directory Access Protocol) server(s) to authenticate users. The permissions a user will have are the permissions given to "Logged in users" in each project or folder. Using LDAP for authentication has a number of advantages:
You don't need to add individual users to LabKey
Users don't need to learn a new ID & password - they use their existing network id and password to log into your LabKey site
By default, if you set up a connection to an LDAP server, any user in the LDAP domain can log on to your LabKey application. You can change this default behavior by disabling the auto-creation of user accounts.
If you are not familiar with your organization's LDAP servers, you will want to recruit the assistance of your network administrator for help in determining the addresses of your LDAP servers and the proper configuration.
LDAP Authentication Process
When configuring LabKey to use any LDAP server you are trusting that the LDAP server is both secure and reliable.
When a user logs into LabKey with an email address ending in the LDAP domain you configure, the following process is followed (a sketch of this logic appears after the list):
LabKey attempts to connect to each LDAP server listed in LDAP Server URLs, in sequence starting with the first server provided in the list.
If a successful connection is made to an LDAP server, LabKey authenticates the user with the credentials provided.
After a successful LDAP connection, no further connection attempts are made against the list of LDAP servers, regardless of whether the user's credentials are accepted or rejected.
If the user's credentials are accepted by the LDAP server, the user is logged on to the LabKey Server.
If the user's credentials are rejected by the LDAP server, then LabKey authenticates the user via database authentication (provided database authentication is enabled).
If the list of LDAP servers is exhausted with no successful connection having been made, then LabKey authenticates the user via database authentication (provided database authentication is enabled).
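Here is a sketch of that connection-and-fallback logic using the third-party Python ldap3 library; the server URLs and the database_auth callback are illustrative assumptions, not LabKey internals:

```python
from ldap3 import Connection, Server
from ldap3.core.exceptions import LDAPException

LDAP_SERVER_URLS = ["ldap://ldap1.example.org:389", "ldap://ldap2.example.org:389"]

def authenticate(principal: str, password: str, database_auth) -> bool:
    """Try each LDAP server in order; fall back to database authentication."""
    for url in LDAP_SERVER_URLS:
        try:
            conn = Connection(Server(url), user=principal, password=password)
            if conn.bind():
                return True  # connected and credentials accepted
            # Connected, but credentials rejected: no further servers are tried.
            return database_auth(principal, password)
        except LDAPException:
            continue  # could not connect: try the next server in the list
    # List exhausted with no successful connection.
    return database_auth(principal, password)
```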
Auto Create Authenticated Users
If a user is authenticated by the LDAP server but does not already have an account on the LabKey Server, the system can create one automatically. This is enabled by default but can be disabled using a checkbox in the global settings of the authentication page.
Configure LDAP Authentication
To add a new LDAP configuration, follow these steps:
Select (Admin) > Site > Admin Console.
Under Configuration, click Authentication.
On the Authentication page, on the Primary tab, select Add New Primary Configuration > LDAP...
In the popup, configure the fields listed below.
After completing the configuration fields, click Test to test your LDAP authentication settings. See below.
Click Finish to save the configuration.
Configuration Status: Click the slider to switch between Enabled and Disabled.
Description: Enter a unique descriptive label for this configuration. If you plan to define multiple configurations for this provider, be sure to use a description that will help you differentiate them.
LDAP Server URLs: Specifies the addresses of your organization's LDAP server or servers.
You can provide a list of multiple servers separated by semicolons.
The general form for the LDAP server address is ldap://servername.domain.org:389, where 389 is the standard port for non-secured LDAP connections.
The standard port for secure LDAP (LDAP over SSL) is 636. If you are using secure SSL connections, Java needs to be configured to trust the SSL certificate, which may require adding certificates to the cacerts file.
LabKey Server attempts to connect to these servers in the sequence provided here: for details see below.
LDAP Domain: LabKey will attempt authentication against the LDAP server for all users signing in with an email address from this domain; for email accounts from other domains, no LDAP authentication is attempted with this configuration. Set this to an email domain (e.g., "labgroup1.org"), or use '*' to attempt LDAP authentication on all email addresses entered, regardless of domain.
Multiple LDAP configurations can be defined to authenticate different domains. All enabled LDAP configurations will be checked and used to authenticate users (see the sketch after this list).
LDAP Principal Template: The LDAP principal template that describes the user attempting to authenticate. The default value is ${email}. Other LDAP servers require different authentication templates so check with your LDAP server administrator for specifics. The template supports substitution syntax; see section below for details.
If you are using LDAP Search, please refer to the LDAP Search section for the correct substitution syntax.
Use SASL authentication: Check the box to use SASL authentication.
Use LDAP Search: The LDAP Search option is rarely needed. It is useful when the LDAP server is configured to authenticate with a user name that is unrelated to the user's email address. Checking this box will add additional options to the popup as described below.
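Relating to the LDAP Domain setting above, here is a small sketch of how the domain value selects applicable configurations; the dictionary field names are hypothetical, for illustration only:

```python
def matching_ldap_configs(email: str, configs: list[dict]) -> list[dict]:
    """Return the enabled LDAP configurations that apply to this email address."""
    domain = email.rsplit("@", 1)[-1].lower()
    return [c for c in configs
            if c["enabled"] and c["ldap_domain"].lower() in ("*", domain)]

configs = [
    {"description": "Lab Group 1", "enabled": True, "ldap_domain": "labgroup1.org"},
    {"description": "Catch-all", "enabled": False, "ldap_domain": "*"},
]
print(matching_ldap_configs("user@labgroup1.org", configs))  # -> first config only
```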
LDAP Security Principal Template
The LDAP security principal template must be set based on the LDAP server's requirements, and must include at least one substitution token so that the authenticating user is passed through to the LDAP server.
Property       | Substitution Value
${email}       | Full email address entered on the login page, for example, "myname@somewhere.org"
${uid}         | User name portion (before the @ symbol) of the email address entered on the login page, for example, "myname"
${firstname}   | The value of the FirstName field in the user's record in the core.SiteUsers table
${lastname}    | The value of the LastName field in the user's record in the core.SiteUsers table
${phone}       | The value of the Phone field in the user's record in the core.SiteUsers table
${mobile}      | The value of the Mobile field in the user's record in the core.SiteUsers table
${pager}       | The value of the Pager field in the user's record in the core.SiteUsers table
${im}          | The value of the IM field in the user's record in the core.SiteUsers table
${description} | The value of the Description field in the user's record in the core.SiteUsers table
Custom fields from the core.SiteUsers table are also available as substitutions based on the name of the field. For example, "uid=${customfield}"
If LDAP Search is configured, the lookup field is also available as a substitution. For example, "uid=${sAMAccountName}"
Here are some sample LDAP security principal templates:
Microsoft Active Directory Server: ${email}
OpenLDAP: cn=${uid},dc=myorganism,dc=org
Sun Directory Server: uid=${uid},ou=people,dc=cpas,dc=org
Note: Different LDAP servers and configurations have different credential requirements for user authentication. Consult the documentation for your LDAP implementation or your network administrator to determine how it authenticates users.
Edit, Enable/Disable, and Delete Configurations
You can define as many LDAP configurations as you require. Be sure to use descriptions that will help you differentiate them. Use the six-block handle on the left to reorder the login form configurations. Enabled configurations will be used in the order they are listed here.
To edit an existing configuration, click the (pencil) icon on the right.
Click the Configuration Status slider in the edit popup to toggle between Enabled and Disabled.
To delete a configuration, click the (delete) icon on the right.
Testing the LDAP Configuration
It is good practice to test your configuration during creation. If you want to reopen the popup to test later, click the (pencil) icon for the configuration to test.
From the LDAP Configuration popup, click Test.
Enter your LDAP Server URL, the exact security principal to pass to the server (no substitution takes place), and the password.
Check the box if you want to use SASL Authentication.
Click Test and an LDAP connection will be attempted.
As discussed above, the LDAP security principal must be in the format required by your LDAP server configuration.
If you're unfamiliar with LDAP or your organization's directory services configuration, consult your network administrator. You may also want to download an LDAP client browser to view and test your LDAP network servers. The Softerra LDAP Browser is a freeware product that you can use to browse and query your LDAP servers; visit the Softerra download page and click the "LDAP Browser #.#" tab.
LDAP Search Option
If your LDAP system uses an additional mapping layer between email usernames and security principal account names, it is possible to configure LabKey Server to search for these account names prior to authentication.
For example, a username that the LDAP server accepts for authentication might look like 'JDoe', while the user's email address is 'jane.doe@labkey.com'. Once this alternate mode is activated, instead of an LDAP template alone, you provide search credentials and a location in which the server can look up the security principal account name, along with a "Lookup field" whose value is then used in the substitution syntax of the LDAP Principal Template.
Check the box for Use LDAP Search. New fields will be added to the edit panel.
Search DN: Distinguished Name of the LDAP user account that will search the LDAP directory. This account must have access to the LDAP server URLs specified for this configuration.
Password: Password for the LDAP user specified as "Search DN".
Search base: Search base to use. This could be the root of your directory or the base that contains all of your user accounts.
Lookup field: User record field name to use for authenticating via LDAP. The value of this field will be substituted into the principal template to generate a DN for authenticating. The principal template can use only this field (e.g., "${sAMAccountName}"), or something more complex like "uid=${sAMAccountName},dc=example,dc=com".
Search template: Filter to apply during the LDAP search. Valid substitution patterns are as described above.
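For instance, a hypothetical search template for an Active Directory server might filter user entries by the email address entered at login:
(&(objectClass=user)(mail=${email}))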
After entering the appropriate values, click Test to validate the configuration.
Click Apply to save the changes.
Click Save and Finish to exit the authentication page.
When this is properly configured and a user attempts to authenticate to the LabKey Server, the server connects to the LDAP server using the "Search DN" credential and "Password". It uses the search base you specified and looks for any LDAP user account associated with the email address provided by the user (applying any filter you provided as the "Search template"). If a matching account is found, LabKey Server makes a separate authentication attempt using the value of the "Lookup field" from the LDAP entry found and the password provided by the user at the login screen.
Troubleshooting
If you experience problems with LDAP authentication after upgrading Java to version 17.0.3, they may be related to Java being stricter about LDAP URLs, as described in the Java release notes. A typical error looks like:
ERROR apAuthenticationProvider 2022-04-27T08:13:32,682 http-nio-80-exec-5 : LDAP authentication attempt failed due to a configuration problem. LDAP configuration: "LDAP Configuration", error message: Cannot parse url: ldap://your.url.with.unexpected.format.2k:389
Workarounds for this issue include using a fully qualified LDAP hostname, using the IP address directly (instead of the hostname), or running Java with the "legacy" option so that the new strictness is not applied.
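For the "legacy" option, the relevant setting is the com.sun.jndi.ldapURLParsing system property introduced alongside this change; a sketch of the JVM option to add to Tomcat's Java startup options (verify against the Java release note for your version):
-Dcom.sun.jndi.ldapURLParsing="legacy"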
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
LabKey Server supports SAML (Security Assertion Markup Language) authentication, acting as a service provider to authenticate against a SAML 2.0 identity provider. You can configure LabKey Server to authenticate against a single SAML identity provider (IdP). LabKey Server supports either plain text or encrypted assertion responses from the SAML identity provider. Note that the nameId attribute in the assertion must match the email address in the user's LabKey Server account.
IdP: Identity Provider. The authenticating SAML server. This may be software (Shibboleth and OpenAM are two open source software IdPs), or hardware (e.g., an F5 BigIp appliance with the APM module). This will be connected to a user store, frequently an LDAP server.
SP: Service Provider. The application or server requesting authentication.
SAML Request: The request sent to the IdP to attempt to authenticate the user.
SAML Response: The response back from the IdP indicating that the user was authenticated. A response contains an assertion about the user, and the assertion contains one or more attributes about the user. At the very least, the nameId attribute is included, which is what identifies the user.
How SAML Authentication Works
From a LabKey sign in page, or next to the Sign In link in the upper right, a user clicks the admin-configured “SAML” logo. LabKey generates a SAML request, and redirects the user's browser to the identity provider's SSO URL with the request attached. If a given SAML identity provider is configured as the default, the user will bypass the sign in page and go directly to the identity provider.
The identity provider (IdP) presents the user with its authentication challenge. This is typically in the form of a login screen, but more sophisticated systems might use biometrics, authentication dongles, or other two-factor authentication mechanisms.
If the IdP verifies the user against its user store, it generates a signed SAML response and redirects the user's browser back to LabKey Server with the response attached.
LabKey Server then verifies the signature of the response, decrypts the assertion if it was optionally encrypted, and verifies the email address from the nameId attribute. At this point, the user is considered authenticated with LabKey Server and directed to the server home page (or to whatever page the user was originally attempting to reach).
Auto Create Authenticated Users
If a user is authenticated by the SAML server but does not already have an account on the LabKey Server, the system can create one automatically. This is enabled by default but can be disabled using a checkbox in the global settings of the authentication page.
Create a New SAML Authentication Configuration
Go to (Admin) > Site > Admin Console.
In the Configuration section, click Authentication.
On the Primary tab of the Configurations panel, select Add New Primary Configuration > SAML...
In the popup, configure the properties as described below.
Click Finish to save.
You can create multiple SAML configurations on the same server.
Note that the configuration settings make use of the encrypted property store, so in order to configure and use SAML, the MasterEncryptionKey must be set in the labkey.xml file. (If it's not set, attempting to open the SAML configuration screen displays an error message directing the administrator to configure the labkey.xml file.)
Configure the properties for SAML authentication:
Configuration Status: Click the slider to toggle between:
Enabled
Disabled
Description: Provide a unique description of this provider.
IdP Signing Certificate: Required. Either drag and drop or click to select a pem file.
Encryption Certificate (Optional): The encryption certificate for the service provider (SP). Use this field and the SP Private Key field (below) if you want the assertion in the SAML response to be encrypted. These two fields work together: they either must both be set, or neither should be set.
SP Private Key (Optional): The private key for the service provider (SP). Use this field and the Encryption Certificate field (above) if you want the assertion in the SAML response to be encrypted. These two fields work together: they either must both be set, or neither should be set.
IdP SSO URL (Required): The target IdP (identity provider) URL for SSO authentication, where the SAML identity provider is found. Obtain this URL from your provider; you may be able to find it by looking for 'Settings' and 'IDP-Initiated SSO' on the provider's site.
Issuer URL (Optional): The issuer of the service provider SAML metadata. Some IdP configurations require this, some do not. If required, it’s probably the base URL for the LabKey Server instance.
NameID Format (Optional): This is the NameId format specified in the SAML request. Options are:
Email Address (Default)
Transient
Unspecified
Force Authorization: If checked, the user is required to log in again via the IdP, even if they are already logged in via an SSO provider.
EntityId: The base server entity id is shown here and can be reconfigured if necessary. See note below.
Assertion Consumer Service (ACS) URL: The ACS URL for this server is a combination of the base server EntityID and "saml-validate.view?configuration=" followed by a "configuration" parameter that will be supplied for you when you save. Edit the configuration after saving to see the final URL. Configure your SAML identity provider to redirect to this URL at the end of the authentication process.
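For example, on a hypothetical server whose base URL is https://labkey.example.org, the final ACS URL would look something like (the configuration number is illustrative):
https://labkey.example.org/saml-validate.view?configuration=104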
Default to this SAML Identity Provider: Check the box to redirect the login page directly to this SAML identity provider instead of requiring the user to click on a logo first.
Validate XML Responses: We strongly recommend validating XML responses from the IdP. Uncheck this only if your network infrastructure blocks the server's access to http://w3.org.
Page Header Logo / Login Page Logo: Provide a logo image to use in the page header or on the login page, or both.
These logos will be presented as quick links to access your authentication system.
Learn more about logo fields for Single Sign-On authentication here: Single Sign-On Logos.
Click Finish in the popup to save.
EntityId / entity_id (Base Server URL)
Note that the Base Server URL is included in the SAML request as the EntityId / entity_id. To control the Base Server URL, use the "Customize Site page" link in the UI or:
Go to (Admin) > Site > Admin Console.
Under Configuration, click Site Settings.
On the Customize Site page, change the Base Server URL as necessary.
Note that changing this setting will affect links in emails sent by the server, as well as any short URLs you generate. For details see Site Settings.
Edit an Existing SAML Provider
Go to (Admin) > Site > Admin Console.
In the Configuration section, click Authentication.
Click the (pencil) next to the target SAML configuration to edit the configuration.
After making changes, click Apply in the popup, then Save and Finish to exit the authentication page.
SAML Functionality Not Currently Supported
Metadata generation - LabKey Server supports only static service provider metadata xml.
Metadata discovery - LabKey Server does not query an IdP for its metadata, nor does the server respond to requests for its service provider metadata.
More complex scenarios for combinations of encrypted or signed requests, responses, assertions, and attributes are not supported. For example, signed assertions with individually encrypted attributes.
Processing other attributes about the user. For example, sometimes a role or permissions are given in the assertion; LabKey Server ignores these if present.
Interaction with an independent service provider is not supported.
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
Central Authentication Service (CAS) is a ticket-based authentication protocol that lets a user sign on to multiple applications while providing their credentials only once to a centralized CAS server (often called a "CAS identity provider"). Enabling CAS authentication lets LabKey Server authenticate users using a CAS identity provider, without users providing their credentials directly to LabKey Server. Learn more in the CAS Protocol Documentation.
You can also configure LabKey Server itself as a CAS Identity Provider, to which other servers can delegate authentication. For details see Configure CAS Identity Provider.
On the Authentication page, in the Configurations section, Primary tab, select Add New Primary Configuration > CAS....
In the popup, enter the following:
Configuration Status: By default, new configurations are Enabled. Click the slider to change it to Disabled.
Description: Enter a unique descriptive label for this configuration. Be sure the description clearly differentiates this configuration from other authentication configurations, especially if you configure multiple CAS configurations. This description will appear in the settings UI and in the audit log every time a user logs in using this configuration.
CAS Server URL: Enter a CAS server URL. The URL should start with "https://" and end with "/cas".
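For example (hypothetical hostname):
https://cas.example.edu/cas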
Default to this CAS Identity Provider: Check the box to make this CAS configuration the default login method.
Invoke CAS /logout API on User Logout: Check the box if you want a user logging out of this server to also invoke the /logout action on the CAS identity provider. Otherwise, logout on a server using CAS does not log the user out of the CAS identity provider itself.
Page Header Logo & Login Page Logo: Logo branding for the login UI. See examples below.
Click Finish.
You will see your new configuration listed under Single Sign-On Configurations. If there are multiple configurations, they will be presented in the order listed here. You can use the six-block handle on the left to drag and drop them to reorder.
Auto Create Authenticated Users
If a user is authenticated by the CAS server but does not already have an account on the LabKey Server, the system can create one automatically. This is enabled by default but can be disabled using a checkbox in the global settings of the authentication page.
Edit/Disable/Delete CAS Configurations
On the list of Single Sign-On Configurations you will see the CAS configuration(s) you have defined, with an enabled/disabled indicator for each.
Use the six-block handle to reorder single sign-on configurations.
Use the (delete) icon to delete a configuration.
To edit, click the (pencil) icon.
After making changes in the configuration popup, including switching the Configuration Status slider to Disabled, click Apply to exit the popup and Save and Finish to close the authentication page.
Single Sign-On Logo
For CAS, SAML, and other single sign-on authentication providers, you can include logo images to be displayed in the Page Header area, on the Login Page, or both. These can be used for organization branding/consistency, signaling to users that single sign-on is available. When the logo is clicked, LabKey Server will attempt to authenticate the user against the SSO server.
If you provide a login page logo but no header logo, for example, your users see only the usual page header menus, but when logging in, they will see your single sign-on logo.
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
LabKey Server can be used as a CAS (Central Authentication Service) identity provider. Web servers that support the CAS authentication protocol (including other LabKey deployments) can be configured to delegate authentication to an instance of LabKey Server. The LabKey CAS identity provider implements the /login, /logout, and /p3/serviceValidate CAS APIs. Learn more in the CAS Protocol Documentation.
Initial Set Up
Ensure that the CAS module is deployed on your LabKey Server. Once the CAS module is deployed, the server can function as an SSO identity provider. You do not need to turn on or enable the feature.
However, other servers that wish to utilize the identity service must be configured to use the correct URL; for details see below.
Get the Identity Provider URL
Go to (Admin) > Site > Admin Console.
Click Settings.
Under Premium Features click CAS Identity Provider.
Copy the URL shown, or click Copy to Clipboard.
Go to the server that will use the identity provider server, and enter the URL. For details see Configure CAS Authentication.
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
LabKey Server can be configured to automatically synchronize with an external LDAP server, so that any users and groups found on the LDAP server are duplicated on LabKey Server. This synchronization is one-way: any changes made within LabKey will not be pushed back to the LDAP server.
There are several options to control the synchronization behavior, including:
Specifying a synchronization schedule
Whether or not LabKey Server creates a user account corresponding to an LDAP user
Whether or not LabKey Server creates groups corresponding to LDAP groups
Deactivating a LabKey account for users with inactive LDAP accounts
Synchronizing based on user and group filters
Field mapping between the LDAP and LabKey user information
Choosing to enforce or disallow the overwriting of user account information in LabKey
Syncing nested groups is not supported. Groups that are members of other groups must be manually configured in LabKey.
Tomcat Configuration
Note that LDAP synchronization is independent of LDAP authentication and requires a separate connection resource added to the labkey.xml file, described below.
To set up an LDAP synchronization connection:
Add a <Resource> to the Tomcat configuration file (labkey.xml).
See the example configuration below for a starting template. Replace ADMIN, ADMIN_PASSWORD, and MYLDAP.MYDOMAIN.COM with values appropriate to your organization's LDAP server.
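A sketch of such a resource entry follows. The resource name, factory class, and attribute names shown here reflect a typical LabKey example but may vary by version; confirm against the documentation for your distribution:
<Resource name="ldap/ConfigFactory" auth="Container"
          type="org.labkey.premium.ldap.LdapConnectionConfigFactory"
          factory="org.labkey.premium.ldap.LdapConnectionConfigFactory"
          host="MYLDAP.MYDOMAIN.COM" port="389"
          principal="ADMIN" credentials="ADMIN_PASSWORD"
          useTls="false" useSsl="false" sslProtocol="SSLv3"/>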
Once the LDAP resource has been added, configure the synchronization behavior as follows:
Go to (Admin) > Site > Admin Console.
Under Premium Features, click LDAP Sync Admin.
The page contains several sections of settings, detailed below.
Connection Settings
To test a connection with an LDAP server, click the Test Connection button.
Search Strings
Use the Search Strings section to control which groups and users are queried on the LDAP server. These settings are optional. Use LDAP syntax to specify search parameters such as "dc=edu" to retrieve .edu addresses.
Base Search String
Group Search String
Group Filter String
User Search String
User Filter String
An example Group Search string:
OU=Groups,OU=Seattle
You can also control which groups to synchronize using the graphical user interface described below. The string settings made here override any groups chosen in the graphical user interface.
Field Mapping
Use Field Mappings to control how LabKey Server fields are populated with user data. The fields on the left refer to LabKey Server fields in the core.Users table. The fields on the right refer to fields in the LDAP server.
Email
Display Name
First Name
Last Name
Phone Number
UID
Sync Behavior
This section configures how LabKey Server responds to data retrieved from the synchronization.
Read userAccountControl attribute to determine if active?: If Yes, then LabKey Server will activate/deactivate users depending on the userAccountControl attribute found in the LDAP server.
When a User is Deleted from LDAP: LabKey Server can either deactivate the corresponding user, or delete the user.
When a Group is Deleted from LDAP: LabKey Server can either delete the corresponding group, or take no action (the corresponding group remains on LabKey).
Group Membership Sync Method: Changes in the LDAP server can either overwrite account changes made in LabKey, or account changes in LabKey can be respected by the sync.
Keep in sync with LDAP changes
Keep LabKey changes (this allows non-LDAP users to be added to groups within LabKey)
Do nothing
Set the LabKey user's information based on LDAP? - If Yes, any changes made in LabKey are overwritten with the email, name, etc. as entered in LDAP.
Page Size: When querying for users to sync, the request will be paged with a default page size of 500. If needed, you can set a different page size here.
Choose What to Sync
Choices made here are overridden by any search string settings you make above.
All Users (subject to filter strings above): Sync all users found on the LDAP system.
All Users and Groups (subject to filter strings above): Sync all users and groups found on the LDAP system.
Sync Only Specific Groups and Their Members: When you select this option, available LDAP groups will be listed on the left. To sync a specific group, copy the group to the right side. Click Reset Group List to clear the selected groups panel.
Schedule
Is Enabled? If enabled, the schedule specified will run. If not enabled, you must sync manually using the Sync Now button below.
Sync Frequency (Hours): Specify the cadence of sync refreshes in hours.
Save and Sync Options
Save All Settings on Page: Click this button to confirm any changes to the sync behavior.
Preview Sync: Provides a popup window showing the results of synchronization. This is a preview only and does not actually make changes on LabKey Server.
Sync Now: Perform a manual, unscheduled sync.
Troubleshooting
If you have a large number of users and groups to be synced between the two servers, and notice that some user/group associations are not being synced, check to see if the page size is larger than the server's maximum page size. The default page size is 500 and can be changed if necessary.
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
Two-Factor Authentication is an additional security layer which requires users to perform a second authentication step after a successful primary authentication (username/password). The user is allowed access only after both primary and secondary authentication are successful.
LabKey Server supports two-factor authentication through integration with Duo Security. Duo Security provides a variety of secondary authentication methods, including verification codes sent over SMS messages, audio phone calls, and hardware tokens. LabKey Server administrators who wish to take advantage of two-factor authentication will need to open a paid account with Duo Security -- although evaluation and testing can be accomplished with a free trial account. Most of the configuration decisions about the nature of your two-factor authentication service occur within the Duo Security account, not within LabKey Server.
Two-factor authentication requires users to provide an additional piece of information to be authenticated. A user might be required to provide a six-digit verification code (sent to the user's cell phone over SMS) in addition to their username/password combination. The second credential/verification code is requested after the user has successfully authenticated with LabKey Server's username/password combination. In the secondary authentication step, the user enters a verification passcode that has been sent to their cell phone via SMS/text message, voice call, or the Duo mobile application.
Duo Security Setup
To set up two-factor authentication, administrator permissions are required. First, sign up for a Duo Administrator account on the Duo Security website. Then:
On the Duo website, select Applications > New Application.
On the Application Type dropdown select "Web SDK" and provide an Application Name of your choice.
Click Create Application.
Once the Duo Application has been created, you will be provided with an Integration Key, Secret Key, and an API Hostname, which you will use to configure LabKey Server.
Under Policy, specify the options for how users will be enrolled in Duo.
Configure Two-Factor Authentication on LabKey Server
Select (Admin) > Site > Admin Console.
Under Configuration, click Authentication.
On the Authentication page, click the Secondary tab in the Configurations panel.
Select Add New Secondary Configuration > Duo 2 Factor...
Note the Configuration Status is Enabled by default. Click the toggle to disable it.
Description: This field is used as the name in the interface. If you will create multiple Duo configurations, make sure this description is unique.
Enter the following values which you acquired in the previous step:
Integration Key
Secret Key
API Hostname
User Identifier: Select how to match user accounts on LabKey Server to the correct Duo user account. Options:
User ID (Default)
User Name: To match by username, the Duo user name must exactly match the LabKey Server display name.
Full Email Address.
Click Finish in the popup to save.
If desired, you can add additional Duo configurations. Multiple enabled configurations will be applied in the order they are listed on the Secondary tab. Enable and disable them as needed to control which is in use at a given time.
Edit Configuration
To edit the configuration:
Select (Admin) > Site > Admin Console.
Under Configuration, click Authentication.
Click the Secondary tab.
Next to the Duo 2 Factor configuration name you want to edit, click the (pencil) icon to open it.
After making any changes needed, click Apply.
Click Save and Finish to exit the authentication page.
Enable/Disable Two-Factor Authentication
When you view the Secondary tab you can see which configurations are enabled. To change the status, open the configuration via the (pencil) and click the Configuration Status slider to change between Enabled and Disabled.
Click Apply to save changes, then click Save and Finish to exit the authentication page.
Delete Duo Configuration
To delete a configuration, locate it on the Secondary tab and click the (delete) icon.
Click Save and Finish to exit the authentication page.
Troubleshooting Disable
The preferred way to disable two-factor authentication is through the web interface as described above. If problems with network connectivity, Duo configuration, billing status, or other similar issues are preventing two-factor authentication, and thereby effectively preventing all users from logging in, server administrators can disable the Duo integration by adding a line to the LabKey configuration file in the Tomcat configuration directory (labkey.xml or ROOT.xml):
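The following sketch shows the kind of entry involved; the parameter name here is an assumption based on LabKey's documented Duo bypass setting, so verify it with LabKey support documentation before relying on it:
<Parameter name="org.labkey.authentication.duo.Bypass" value="true"/>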
A netrc file (.netrc or _netrc) is used to hold credentials necessary to login to your LabKey Server and authorize access to data stored there. The netrc file contains authentication for connecting to one or more machines, often used when working with APIs or scripting languages.
On a Mac, UNIX, or Linux system the netrc file must be a text file named .netrc (dot netrc). The file should be located in your home directory and the permissions on the file must be set so that you are the only user who can read it, i.e. it is unreadable to everyone else. It should be set to at least Read (400), or Read/Write (600).
On a Windows machine, the netrc file must be a text file named _netrc (underscore netrc), and ideally it will also be placed in your home directory (i.e., C:/Users/<User-Name>). You should also create an environment variable called 'HOME' set to the location of your netrc file. The permissions on the file must be set so that you are the only user who can read (or write) it.
The authentication for each machine you plan to connect to is represented by a group of three definitions (machine, login, password). You can have multiple sets of these definitions in a single netrc file to support connecting to multiple systems; a blank line separates each machine entry.
For each machine you want to connect to, the three definitions must be separated by white space (spaces, tabs, or newlines) or commas:
The machine is the part of your server's URL between the protocol designation (http:// or https://) and any port number; for example, both "myserver.trial.labkey.host:8888" and "https://myserver.trial.labkey.host" are incorrect entries for this line, while "myserver.trial.labkey.host" is correct.
The following are both valid netrc entries for a user to connect to an instance where the home page URL looks something like: "https://myserver.trial.labkey.host:8080/labkey/home/project-begin.view?":
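Hypothetical credentials are shown; the machine name omits the protocol and port as described above. The entry may be written on three lines:
machine myserver.trial.labkey.host
login myemail@myorg.org
password mypassword
Or on a single line:
machine myserver.trial.labkey.host login myemail@myorg.org password mypassword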
Avoid leaving extra newlines at the end of your netrc file. Some applications may interpret these as missing additional entries or premature EOFs.
Use API Keys
When API Keys are enabled on your server, you can generate a specific token representing your login credentials on that server and use it in the netrc file. The "login" name used is "apikey" (instead of your email address) and the unique API key generated is used as the password. For example:
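Here the password value is a placeholder; substitute the API key generated on your server:
machine myserver.trial.labkey.host
login apikey
password YOUR_GENERATED_API_KEY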
If you need to locate your netrc file somewhere other than the default location (such as to enable a dockerized scripting engine) you can specify the location in your code.
If you receive "unauthorized" error messages when trying to retrieve data from a remote server you should first check that:
Your netrc file is configured correctly, as described above
You've created a HOME environment variable, as described above
You have an entry for that remote machine
The login credentials are correct.
Additional troubleshooting assistance is provided below.
Port Independence
Note that the netrc file only deals with connections at the machine level and should not include a port number or protocol designation (http:// or https://), meaning both "mymachine.labkey.org:8888" and "https://mymachine.labkey.org" are incorrect. Use only the hostname portion, without the protocol slashes or any colon and port number.
If you see an error message similar to "Failed connect to mymachine.labkey.org:443; Connection refused", remove the port number from your netrc machine definition.
File Location
An error message similar to "HTTP request was unsuccessful. Status code = 401, Error message = Unauthorized" could indicate an incorrect location for your netrc file. In a typical installation, R will look for libraries in a location like \home\R\win-library. If instead your installation locates libraries in \home\Documents\R\win-library, for example, then the netrc file would need to be placed in \home\Documents instead of the \home directory.
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
File uploads, attachments, archives and other content imported through the pipeline or webdav can be scanned for viruses using ClamAV. This topic covers how to configure and use ClamAV antivirus protection.
Provide the Endpoint for your ClamAV installation; the endpoint will depend on where your ClamAV service is running.
The Status message should show the current version and date time. If instead it reads "Connection refused: connect", check that you have provided the correct endpoint and that your service is running.
When ready, click Save. Then click Configure Antivirus scanner to return to the configuration page.
Select the radio button for ClamAV Daemon, then click Save.
Check Uploads for Viruses
When Antivirus protection is enabled, files uploaded via webdav or included as file attachments will be scanned by the configured provider, such as ClamAV. The process is:
The file is uploaded to a protected "quarantine" location.
The registered antivirus provider is sent a request and scans the file.
If the antivirus provider determines the file is bad, it is deleted from the quarantine location and the user is notified.
If the antivirus scan is successful, meaning no virus is detected, the file is uploaded from the quarantine location to the LabKey file system.
When virus checking is enabled, it is transparent to the users uploading virus-free files.
Virus Reporting
If the antivirus provider determines that the file contains a virus, an alert will be shown to the user, either directly as an error message similar to "Unable to save attachments: A virus detected in file: <filename>" or as a popup message.
Developer Notes
If a developer wishes to register and use a different virus checking service, they must do the following:
A site or project administrator can test security settings by impersonating a user, group, or role. Using impersonation to test security can prevent granting inadvertent access to users.
A project administrator's ability to impersonate is limited to the current project; a site administrator can impersonate site-wide.
Impersonating a user is useful for testing a specific user's permissions or assisting a user having trouble on the site.
Select (User) > Impersonate > User. (The Impersonate option opens a submenu listing the available options.)
Select a user from the dropdown and click Impersonate
You are now viewing the site as the user you selected, with only the permissions granted to that user. The username of the user you are impersonating replaces your own username in the header, and you will see a "Stop Impersonating" button.
Impersonate a Group
Impersonating a security group is useful for testing the permissions granted to that group.
Select (User) > Impersonate > Group
Select a group from the dropdown and click Impersonate
You are now impersonating the selected group, which means you have only the permissions granted to that group. You are still logged in as yourself, so your display name appears in the header and you can still edit documents you own (e.g., the reports, messages, wikis, and issues you have created); you are simply temporarily a member of the chosen group.
Is it Possible to Impersonate the "Guests" Group?
Note that the "Guests" group (those users who are not logged into the server) does not appear in the dropdown for impersonating a group. This means you cannot directly impersonate a non-logged in user. But you can see the server through a Guests eyes by logging out of the server yourself. When you log out of the server, you are seeing what the Guests will see. When comparing the experience of logged-in (Users) versus non-logged-in (Guests) users, you can open two different browsers, such as Chrome and Firefox: login in one, but remain logged out in the other.
Impersonate Roles
Impersonating security roles is useful for testing how the system responds to those roles. This is typically used when developing or testing new features, or by administrators who are curious about how features behave.
Select (User) > Impersonate > Roles
Select one or more roles in the list box and click Impersonate
You are now impersonating the selected role(s), which means you receive only the permissions granted to the role(s). As with impersonating a group, you are still logged in as you, so your display name appears in the menu and you can still edit documents you own.
In some cases you'll want to impersonate multiple roles simultaneously. For example, when testing specialized roles such as Specimen Requester or Assay Designer, you would typically add Reader (or another role that grants read permissions), since the specialized roles don't include read permissions themselves.
When impersonating roles, the menu has an additional option: Adjust Impersonation. This allows you to reopen the role impersonation checkbox list, rather than forcing you to stop impersonating and restart to adjust the roles you are impersonating.
Stop Impersonating
To return to your own account, click the Stop Impersonating button in the header, or select (User) > Stop Impersonating.
Project-Level Impersonation
When any admin impersonates a user from the project users page, the administrator sees the perspective of the impersonated user within the current project. All projects that the impersonated user may have access to outside the current project are invisible while in impersonation mode. For example, when impersonating a project-scoped group, a project administrator who navigates outside the project will have limited permissions (having only the permissions that Guests and All Site Users have). Site admins who want to impersonate a user across the entire site can do so from the site users page or the admin console.
A project impersonator sees all permissions granted to the user's site and project groups. However, a project impersonator never receives authorization from the user's global roles (currently site admin and developer) -- they are always disabled.
Logging of Impersonations
The audit log includes an "Impersonated By" column. This column is typically blank, but when an administrator performs an auditable action while impersonating another user, the administrator's display name appears in the "Impersonated By" column.
When an administrator begins or ends impersonating another user, this action itself is logged under "User Events". See Audit Log / Audit Site Activity for more about auditing.
Premium Resource: Best Practices for Security Scanning
Premium Feature — Available in the Enterprise Edition of LabKey Server. Learn more or contact LabKey.
The Compliance modules help your organization meet a broad array of security and auditing standards, such as FISMA, HIPAA, HITECH, FIPS, NIST, and others.
Compliance: Setting PHI Levels on Fields - Mark the PHI level of columns to control export of data. Administrators may control the metadata assignments without viewing the actual PHI data.
Compliant Access via Session Key - Configure the server to obtain a session key when users log in to avoid storing user credentials on the client machine.
Premium Feature — Available in the Enterprise Edition of LabKey Server. Learn more or contact LabKey.
This topic describes how the compliance module supports compliance with regulations like HIPAA, FISMA, and others. Account management, requiring users to sign the appropriate terms of use, preventing unauthorized access to protected patient information, and logging that lets auditors determine which users have accessed which data are all part of a compliant implementation.
When a user signs into a folder where the relevant Compliance features have been activated, they must first declare information about the activity or role they will be performing.
A Role must be provided.
An IRB (Institutional Review Board) number must be provided for many roles.
Users declare the PHI level of access they require for the current task. The declared PHI level affects the data tables and columns that will be shown to the user upon a successful login.
The declarations made above (Role, IRB, and PHI level) determine how a customized Terms of Use document will be dynamically constructed for display to the user. The user must agree to the terms of use before proceeding.
The compliance module lets you annotate each column (for Lists and Datasets) with a PHI level. Possible PHI levels include:
Not PHI - This column is visible for all PHI level declarations.
Limited PHI - Visible for users declaring Limited PHI and above.
Full PHI - Visible for users declaring Full PHI.
Restricted - Visible for users who have been assigned the Restricted PHI role. Note that no declaration made during login allows users to see Restricted columns.
The Query Browser is also sensitive to the user's PHI access level. If the user has selected non-PHI access, the patient tables are shown, but the PHI columns will be hidden or shown with the data blanked out. For instance, if a user selects "Coded/No PHI" during sign on, the user will still be able to access patient data tables, but will never see data in the columns marked at any PHI level.
Search and API
Search results follow the same pattern as accessing data grids: they are tailored to the user's PHI role and declared activity. The same applies to the standard LabKey API (e.g., selectRows(), executeSql()).
Grid View Sharing
When saving a custom grid, you have the option to share it with a target group or user. If any target user does not have access to PHI data in a shared grid/filter, they will be denied access to the entire grid. Grid and filter sharing events are logged.
Export
Export actions respect the same PHI rules as viewing data grids. If you aren't allowed to view the column, you cannot export it in any format.
Audit Logging
The role, the IRB number, the PHI level, and the terms of use agreed to are logged for auditing purposes. Compliance logging is designed to answer questions such as:
Which users have seen a given patient's data? What data was viewed by each user?
Which patients have been seen by a particular user? What data was viewed for each patient?
Which roles and PHI levels were declared by each user? Were those declarations appropriate to their job roles & assigned responsibilities?
Was the data accessed by the user consistent with the user's declarations?
The audit log captures which SQL queries containing PHI have been viewed.
Note that PIVOT and aggregation queries cannot be used with the compliance module's logging of all query access including PHI. For details see Compliance: Logging.
Premium Feature — Available in the Enterprise Edition of LabKey Server. Learn more or contact LabKey.
This checklist provides step-by-step instructions for setting up and using the Compliance and ComplianceActivities modules.
Checklist
Acquire a distribution that includes the compliance modules
Unlike most modules, administrators don't have to explicitly enable the compliance modules in individual folders. The compliance modules are treated as enabled for all folders on a server if they are present in the distribution.
To ensure that the compliance modules are available, go to (Admin) > Site > Admin Console and click Module Information. Confirm that Compliance and ComplianceActivities are included in the list of modules. If not, contact us.
Define settings for accounts, login, session expiration, project locking, and more
Limit unsuccessful login attempts, set account expiration dates.
Premium Feature — Available in the Enterprise Edition of LabKey Server. Learn more or contact LabKey.
This topic covers settings available within the Compliance module. Both the Compliance and ComplianceActivities modules should be enabled on your server and in any projects where you require these features.
You can configure user accounts to expire after a set date. Once the feature is enabled, expiration dates can be set for existing individual accounts. When an account expires, it is retained in the system in a deactivated state. The user can no longer log in or access their account, but records and audit logs associated with that user account remain identifiable and an administrator could potentially reactivate it if appropriate.
To set up expiration dates, first add one or more users, then follow these instructions:
Go to (Admin) > Site > Admin Console.
Under Premium Features, click Compliance Settings.
On the Accounts tab, under Manage Account Expiration, select Allow accounts to expire after a set date.
Click Save.
You can now set expiration dates for user accounts.
Click the link (circled above) to go to the Site Users table. (Or go to (Admin) > Site > Site Users.)
Above the grid of current users, note the Show Temporary Accounts link. This will filter the table to those accounts which are set to expire at some date.
Click the Display Name for a user account you want to set or change an expiration date for.
On the account details page click Edit.
Enter an Expiration Date, using the date format Year-Month-Day. For example, to indicate Feb 16, 2019, enter "2019-02-16".
Click Submit.
Click Show Users, then Show Temporary Accounts and you will see the updated account with the assigned expiration date.
Manage Inactive Accounts
Inactive accounts can be automatically disabled (i.e., login is blocked and the account is officially deactivated) after a set number of days. When an account was last 'active' is determined by the last login date or, if the user has never logged in, the account creation date.
To set the number of days after which accounts are disabled, follow the instructions below:
Select (Admin) > Site > Admin Console.
Under Premium Features, click Compliance Settings.
On the Accounts tab, under Manage Inactive Accounts, select Disable inactive accounts after X days.
Use the dropdown to select when the accounts are disabled. Options include: 1 day, 30 days, 60 days, or 90 days.
Accounts disabled by this mechanism are retained in the system in a deactivated state. The user can no longer log in or access their account, but records and audit logs associated with that user account remain identifiable and an administrator could potentially reactivate it if appropriate.
Audit Log Process Failures
If any of the events that should be stored in the Audit Log aren't processed properly, these settings let you automatically inform administrators of the error in order to immediately address it.
Configure the response to audit processing failures by checking the box. This will trigger notifications in circumstances such as (but not limited to) communication or software errors, unhandled exceptions during query logging, or if audit storage capacity has been reached.
Select (Admin) > Site > Admin Console.
Under Premium Features, click Compliance Settings.
Click the Audit tab.
Under Audit Process Failures, select Response to audit processing failures.
Select the email recipient(s) as:
Primary Site Admin (configured on the Site Settings page) or
All Site Admins (the default)
Click Save.
To control the content of the email, click the link email customization, and edit the notification template named "Audit Processing Failure". For details see Email Template Customization.
Limit Login Attempts
You can decrease the likelihood of an automated, malicious login by limiting the allowable number of login attempts. These settings let you disable logins for a user account after a specified number of attempts have been made. (Site administrators are exempt from this limitation on login attempts.)
To see those users with disabled logins, go to the Audit log, and select User events from the dropdown.
Go to (Admin) > Site > Admin Console.
Under Premium Features, click Compliance Settings.
Click the Login tab.
In the section Unsuccessful Login Attempts, place a checkmark next to Enable login attempts controls.
Also specify:
the number of attempts that are allowed
the time period (in seconds) during which the above number of attempts will trigger the disabling action
the amount of time (in minutes) login will be disabled
Click Save.
Third-Party Identity Service Providers
To restrict the identity service providers to only FICAM-approved providers, follow the instructions below. When the restriction is turned on, non-FICAM authentication providers will be greyed out in the Authentication panel.
Go to (Admin) > Site > Admin Console.
Under Premium Features, click Compliance Settings.
Click the Login tab.
In the section Third-Party Identity Service Providers, place a checkmark next to Accept only FICAM-approved third-party identity service providers.
The list of configured FICAM-approved providers will be shown. You can manage them from the Authentication Configuration page.
Manage Session Invalidation Behavior
When a user is authenticated to access information but the session then becomes invalid, whether through timeout, logout in another window, account expiration, or server unavailability, obscuring the information the user was viewing prevents exposure to unauthorized persons. To configure:
Go to (Admin) > Site > Admin Console.
Under Premium Features, click Compliance Settings.
Click the Session tab.
Select one of:
Show "Reload Page" modal but keep background visible (Default).
Show "Reload Page" modal and blur background.
Click Save.
With background blurring enabled, a user whose session has expired will see a popup for reloading the page, with a message about why the session ended. The background will no longer show any protected information in the browser.
Allow Project Locking
Project locking lets administrators make projects inaccessible to non-administrators, such as after research is complete and papers have been published.
Go to (Admin) > Site > Admin Console.
Under Premium Features, click Compliance Settings.
To support compliance with standards regarding review of users' access rights, a project permissions review workflow can be enabled, enforcing that project managers periodically review the permission settings on their projects at defined intervals. Project review is available when project locking is also enabled. If a project manager fails to review and approve the permissions of a project on the expected schedule, that project will "expire", meaning it will be locked until the review has been completed.
Simply marking fields with a particular PHI level does not restrict access to these fields. To restrict access, administrators must also define how the server handles PHI data with respect to PHI Role assignment and Terms of Use selection. To define PHI data handling, see Compliance: Configure PHI Data Handling.
Note that this system allows administrators to control which fields contain PHI data and how those fields are handled without actually viewing the data in the PHI fields. Access to viewing PHI data is controlled separately and not provided to administrators unless granted explicitly.
Example PHI Levels
The following table provides example PHI-level assignments for fields. These are not recommendations or best practices for PHI assignments.
Restricted PHI: HIV status, Social Security Number, Credit Card Number
Full PHI: Address, Telephone Number, Clinical Billing Info
Limited PHI: ZIP Code, Partial Dates
Not PHI: Heart Rate, Lymphocyte Count
Annotate Fields with PHI Level
To mark the PHI level of individual columns, use the Field Editor.
For Developers: Use XML Metadata
As an alternative to the graphical user interface, you can assign a PHI level to a column in the schema definition XML file.
In the example below, the column DeathOrLastContactDate has been marked as "Limited":
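A minimal sketch, assuming LabKey's tableInfo XML format with a phi element (the table name here is hypothetical):
<tables xmlns="http://labkey.org/data/xml">
  <table tableName="Demographics" tableDbType="TABLE">
    <columns>
      <column columnName="DeathOrLastContactDate">
        <phi>Limited</phi>
      </column>
    </columns>
  </table>
</tables>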
Premium Feature — Available in the Enterprise Edition of LabKey Server. Learn more or contact LabKey.
Users of your application must first sign "Terms of Use" before they enter and see data. The compliance module can be configured to display different Terms of Use documents depending on declarations made by the user before entering the data environment. This topic explains how to configure the various Terms of Use available to users, and how to dynamically produce Terms of Use depending on user declarations at login.
Each 'term' of a terms of use document can be defined separately, enabling you to create a modular document structure one paragraph at a time. Some paragraphs might be common to all users and some specific to a single role and PHI level.
The Terms of Use mechanism described here is intended for compliant environments where users assert their IRB number and intended activity before agreeing to the dynamically constructed Terms of Use. This access gate is applied every time the user navigates to a protected container. Another, simpler feature is available which uses a static Terms of Use signed once upon login for an entire session. You can learn more about this simpler version in this topic: Establish Terms of Use.
Configure Terms of Use
First confirm that both the Compliance and ComplianceActivities modules are present on your server and enabled in your container. Check this on the (Admin) > Folder > Management > Folder Type tab.
Administrators enter the individual elements and paragraphs that are used to dynamically construct a Terms of Use document based on user assertions at login.
You can define Terms of Use in the current folder, or in the parent folder. If defined in a parent folder, the Terms of Use can be inherited in multiple child folders. Terms of Use defined in the parent folder can be useful if you are building multiple child data portals for different audiences, such as individual data portals for different clinics, or different sets of researchers, etc.
The configuration described below shows how to define Terms of Use in a single folder. This configuration can be re-used in child folders if desired.
Go to (Admin) > Folder > Management and click the Compliance tab.
To reuse a pre-existing Terms of Use that already exists in the parent folder, select Inherit Terms of Use from parent.
To configure a new Terms of Use element for the current folder, click Terms of Use.
On the Terms of Use grid, select (Insert data) > Insert New Row
Or select Import Bulk Data to enter multiple terms using an Excel spreadsheet or similar tabular file.
Activity: Activity roles associated with the Terms of Use element. By selecting an activity, terms will only be displayed for the corresponding PHI security role. Note that the Activity dropdown is populated by values in the ComplianceActivities module. Default values for the dropdown are:
RESEARCH_WAIVER - For a researcher with a waiver of HIPAA Authorization/Consent.
RESEARCH_INFORMED - For a researcher with HIPAA Authorization/Consent.
RESEARCH_OPS - For a researcher performing 'operational' activities in the data portal, that is, activities related to maintenance and testing of the data portal itself, but not direct research into the data.
HEALTHCARE_OPS - For non-research operations activities, such as administrative and business-related activities.
QI - For a user performing Quality Improvement/Quality Control of the data portal.
PH - For a user performing Public Health Reporting tasks.
IRB: The Institutional Review Board number under which the user is entering the data environment. Terms with an IRB number set will only be shown for that IRB number.
PHI: If a checkmark is added, this term will be shown only if the user is viewing PHI. To have this term appear regardless of the activity/role or IRB number, leave this unchecked.
Term: Text of the Terms of Use element.
Sort Order: If multiple terms are defined for the same container, activity, IRB, and PHI level, they will be displayed based on the Sort Order number defined here.
For example, a Terms of Use element defined with the activity Research Operations, IRB number 2345, the PHI box checked, and sort order 3 will be displayed to users that assert an activity of Research Operations, an IRB of 2345, and a PHI level of Limited PHI or Full PHI. It will appear as the third paragraph in the dynamically constructed Terms of Use.
Dynamic Terms of Use: Example
Assume that an administrator has set up a series of Terms of Use elements (in practice the actual terms paragraphs would be far more verbose), and that a user makes a particular set of assertions before logging in. The terms applicable to those assertions are concatenated in the order specified, and the completed Terms of Use document is constructed and displayed to the user for approval.
Establish Terms of Use - Set at project-level or site-level based on specially named wikis. Note that this is a separate, non-PHI related mechanism for establishing terms of use before allowing access to the server.
Compliance: Security Roles
Premium Feature — Available in the Enterprise Edition of LabKey Server. Learn more or contact LabKey.
Restricted PHI Reader - May choose to read columns tagged as Restricted PHI, Full PHI and Limited PHI.
Full PHI Reader - May choose to read columns tagged as Full PHI and Limited PHI.
Limited PHI Reader - May choose to read columns tagged as Limited PHI.
Patient Record Reviewer - May read patient records to determine completeness and compliance with PHI rules for the application.
Research Selector - May select the Research with Waiver of HIPAA Authorization/Consent, Research with HIPAA Authorization/Consent and Research Operations role in the application.
Healthcare Operations Selector - May select the Healthcare Operations role in the application.
Quality Improvement/Quality Assurance Selector - May select the Quality Improvement/Quality Assurance role in the application.
Public Health Reporting Selector - May select the Public Health Reporting role in the application.
Note: None of these roles are implicitly granted to any administrators, including site administrators. If you wish to grant administrators access to PHI, you must explicitly grant the appropriate role to the administrator or group.
Administrators can control the compliance features for a given folder by navigating to:
(Admin) > Folder > Management. Click the Compliance tab.
Terms of Use
To utilize any terms of use already present in the parent container (project or folder), check the box: Inherit Terms of Use from parent.
Click Terms of Use to set new terms for this container.
Require Activity/PHI Selection and Terms of Use Agreement
When enabled, users will be presented with a PHI level selection popup screen on login. The appropriate Terms of Use will be presented to them, depending on the PHI selection they make.
Require PHI Roles to Access PHI Columns
Role-based PHI handling prevents users from viewing and managing data higher than their current PHI level. Check the box to enable the PHI related roles. When enabled, all users, including administrators, must be assigned a PHI role to access PHI columns.
You can also control the behavior of a column containing PHI when the user isn't permitted to see it. Options:
Blank the PHI column: The column is still shown in grid views and is available for SQL queries, but will be shown empty.
Omit the PHI column: The column will be completely unavailable to the user.
Note that if your data uses any text choice fields, administrators and data structure editors will be able to see all values available within the field editor, making this a poor field choice for sensitive information.
The default query logging behavior is to log only those queries that access PHI columns.
To open the Audit Log:
Select (Admin) > Site > Admin Console.
Under Management click Audit Log.
The following compliance-related views are available on the dropdown:
Compliance Activity Events - Shows the Terms of Use, IRB, and PHI level declared by users on login.
Logged query events - Shows the SQL query that was run against the data.
Logged select query events - Lists specific columns and identified data relating to explicitly logged queries, such as a list of participant IDs that were accessed, as well as the set of PHI-marked columns that were accessed.
Site Settings events - Logs compliance-related configuration changes to a given folder, that is, changes made on a folder's Compliance tab.
User events - Records login and impersonation events.
To log query events, the server must be able to determine which specific participantIDs' data has been accessed. This means that queries must conclusively identify the participantID whose data was accessed; in situations where the participantID cannot be determined, queries will fail because they cannot complete the required logging.
At the higher Log all query access level, all queries must conform to these expectations. At the Log only query access including PHI columns level, queries that do not incorporate any columns marked as containing PHI have more flexibility.
Query scenarios that can be successfully logged for compliance:
SELECT queries that include the participantID and do not include any aggregation or PIVOT.
SELECT queries in a study where every data row is associated with a specific participantID, and there is no aggregation, whether the participantID is specifically included in the query or not.
SELECT queries with any aggregation (such as MAX, MIN, AVG, etc.) where the participantID column is included in the query and also included in a GROUP BY clause. (See the sketch following these lists.)
Query scenarios that will not succeed when compliance logging is turned on:
SELECT queries with aggregation where the participantID column is not included.
PIVOT queries, which also aggregate data for multiple participants. Learn more below.
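To make these rules concrete, here is a minimal sketch in LabKey SQL, assuming a hypothetical study dataset "Demographics" with ParticipantId and Weight columns:

-- Can be logged: the aggregate is grouped by the participant identifier,
-- so every result row maps to exactly one participant.
SELECT ParticipantId, MAX(Weight) AS MaxWeight
FROM Demographics
GROUP BY ParticipantId

-- Cannot be logged: the aggregate combines multiple participants,
-- so no participantID can be associated with the result row.
SELECT AVG(Weight) AS AvgWeight
FROM Demographics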
Filter Behavior
When using compliance logging, you cannot filter by values in a column containing PHI, because the values themselves are PHI that aren't associated with individual participant IDs.
When you open the filter selector for a PHI column, you will see Choose Filters and can use a filtering expression. If you switch to the Choose Values tab, you will see a warning.
PIVOT Queries and Compliance Logging
PIVOT queries cannot be used with compliance logging of query access. Logging is based on data (and/or PHI) access being checked by row linked to a participant. Because PIVOT queries aggregate data from multiple rows, and thus multiple participants, this access cannot be accurately logged. A pivot query run in a folder with the Compliance module running will raise an error like:
Saved with parse errors: ; Pivot query unauthorized.
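For reference, a minimal sketch in LabKey SQL of the kind of PIVOT query that would be rejected (the table and column names are hypothetical):

-- Rejected under compliance logging: the PIVOT aggregates
-- rows across the pivoted column into a single result row.
SELECT ParticipantId, Analyte, MAX(Value) AS MaxValue
FROM Results
GROUP BY ParticipantId, Analyte
PIVOT MaxValue BY Analyte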
Premium Feature — Available in the Enterprise Edition of LabKey Server. Learn more or contact LabKey.
The PHI Report shows the PHI level for each column in a given schema. This report is only available when the Compliance module is enabled in the current folder.
To generate the report:
Go to the Schema Browser at (Admin) > Go To Module > Query.
In the left pane, select a schema to base the report on.
In the right pane, click PHI Report to generate the report. (If this link does not appear, ensure that the Compliance module is enabled in your folder.)
The report provides information on every column in the selected schema, including:
The column name
The parent table
The assigned PHI level
The column caption
The data type
Like other grids, this data can be filtered and sorted as needed. Note that you will only see the columns at or below the level of PHI you are authorized to view.
Premium Feature — Available in the Enterprise Edition of LabKey Server. Learn more or contact LabKey.
Electronic signatures are designed to help your organization comply with the FDA Code of Federal Regulations (CFR) Title 21 Part 11.
Electronic signatures provide the following data validation mechanisms:
Electronic signatures link the data to a reason for signing, the name of the signatory, and the date/time of the signature.
Electronic signatures are linked to a unique and unrepeatable id number.
Signatories must authenticate themselves with a username and password before signing.
Signing data resembles exporting a data grid, except that when data is signed, the signatory is presented with a pop-up dialog where a reason for signing can be specified. Only the rows in a given data grid that have been selected will be signed.
Upon signing, an Excel document of the signed data is created and added to the database as a record in the Signed Snapshot table. The Signed Snapshot table is available to administrators in the Query Browser in the compliance schema. You can add the Signed Snapshot web part to a page to allow users to download signed documents. The record includes the following information:
The Excel document (i.e., the signed data) is included as an attachment to the record
The source schema of the signed document
The source query of the signed document
The number of rows in the document
The file size of the document
The reason for signing the document
The signatory
The date of signing
Set Up
The electronic signature functionality is part of the compliance module, which must be installed on the server and enabled in your folder before use. When enabled, all data grids that show the (Export) button can also be signed.
Select and Sign Data
To electronically sign data:
Go to the data grid which includes the data you intend to sign.
Select some or all of the rows from the data grid. Only rows that you select will be included in the signed document.
Click (Export/Sign Data).
On the Excel tab, confirm that Export/Sign selected rows is selected and click Sign Data.
In the Sign Data Snapshot pop-up dialog, enter your username, password, and a reason for signing. (Note that if you are already signed in to the server, your username and password will be pre-populated.)
Click Submit.
Upon submission, you will be shown the details page for the record that has been inserted into the Signed Snapshot table.
Download Signed Data
To download a signed snapshot, view the Signed Snapshot table via the Schema Browser.
Select (Admin) > Go To Module > Query.
Select the compliance schema, and click the Signed Snapshot table to open it.
Click View Data to see the snapshots.
An administrator may create a "Query" web part to broaden access to such snapshots to users without schema browser access to this table.
To download, click the name of the desired signed file. All downloads are audited.
Metadata Included in Exported Document
Signature metadata (the signatory, the source schema, the unique snapshot id, etc.) is included when you export the signed document. Metadata can be found in the following locations, depending on the download format:
Text format (TSV, CSV, etc.): The signature metadata is included in the first rows of the document as comments.
Excel format (XLS, XLSX): The signature metadata is included in the document properties. On Windows, go to File > Info > Properties > Advanced Properties. On Mac, go to File > Properties > Custom.
Auditing Electronic Signatures
Electronic signature events are captured in the audit log.
Select (Admin) > Site > Admin Console.
Under Management click Audit Log.
From the dropdown, select Signed snapshots.
The audit log captures:
The user who signed, i.e. created the signed snapshot
The date and time of the event (signing, downloading, or deletion of a signature)
The date and time of the signature itself
The container (folder) and schema in which the signed snapshot resides
The table that was signed
The reason for signing
A comment field where the type of event is described:
Snapshot signed
Snapshot downloaded: The date of download and the user who downloaded it are recorded.
Snapshot deleted: The user who deleted the signed snapshot is recorded.
LabKey Server provides a broad range of tools to help organizations maintain compliance with a variety of regulations including HIPAA, FISMA, CFR Part 11, and GDPR. GDPR compliance can be achieved in a number of different ways, depending on how the client organization chooses to configure LabKey Server.
The core principles of GDPR require that users in the EU are granted the following:
The ability to see what data is collected about them and how it is used
The ability to see a full record of the personal information that a company has about them
The ability to request changes or deletion of their personal data
To comply with the GDPR, client organizations must implement certain controls and procedures, including, but not limited to:
Writing and communicating the privacy policy to users
Defining what "deletion of personal data" means in the context of the specific use case
Your compliance configuration should be vetted by your legal counsel to ensure it complies with your organization's interpretation of GDPR regulations.
Premium Resource Available
Subscribers to premium editions of LabKey Server can learn more about how GDPR compliance was achieved at LabKey in this topic:
Premium Feature — Available in the Enterprise Edition of LabKey Server. Learn more or contact LabKey.
Project locking lets administrators make projects inaccessible to non-administrators, such as after research is complete and papers have been published. A related project permissions review workflow, described below, supports compliance with standards regarding review of users' access rights by requiring project managers to periodically review their projects' permission settings.
Go to (Admin) > Site > Admin Console.
Under Premium Features, click Compliance Settings.
Click the Project Locking & Review tab.
Check the Allow Project Locking box.
Two lists of projects are shown. On the left are projects eligible for locking. On the right are projects that are excluded from locking and review, including the "home" and "Shared" projects. To move a project from one list to the other, click to select it, then use the arrow buttons.
Click Save when finished.
Lock Projects
Once enabled, administrators can control locking and unlocking on the (Admin) > Folder > Permissions page of eligible projects. Click the Project Locking & Review tab. Click Lock This Project to lock it. This locking is immediate; you need not click Save after locking.
When a project is locked, administrators will see a banner message informing them of the lock. Non-administrators will see an error page reading "You are not allowed to access this folder; it is locked, making it inaccessible to everyone except administrators."
To unlock a project, return to the (Admin) > Folder > Permissions > Project Locking & Review tab. Click Unlock This Project.
Project Review Workflow
To support compliance with standards regarding review of users' access rights, a project permissions review workflow can be enabled, enforcing that project managers periodically review the permission settings on their projects at defined intervals. Project review is available when project locking is also enabled; if a project manager fails to review and approve the permissions of a project on the expected schedule, that project will "expire", meaning it will be locked until the review has been completed.
Go to (Admin) > Site > Admin Console.
Under Premium Features, click Compliance Settings.
Click the Project Locking & Review tab.
Check the Allow Project Locking box.
Check the Enable Project Review Workflow box and customize the parameters as necessary.
Within the interface you can customize several variables:
Project Expiration Interval: Set to one of 3, 6, 9, or 12 months. Projects excluded from locking do not expire.
Begin Warning Emails: Set to the number of days before project expiration to start sending email notifications to those assigned the site role "Project Review Email Recipient". The default is 30 days. Negative values, and values greater than the Project Expiration Interval (months × 30 days), will be ignored. Zero is allowed for testing purposes.
Warning Email Frequency: Set to a positive number of days between repeat email notifications to reviewers. The default is 7 days. Negative values, and values greater than the Begin Warning Emails value, will be ignored.
Customize, if needed, the text that will be shown in the project review section just above the Reset Expiration Date button: "By clicking the button below, you assert that you have reviewed all permission assignments in this project and attest that they are correct."
Click Save when finished.
Optionally, click to Customize the project review workflow email template if needed. The default is titled "Review project ^folderName^ before ^expirationDate^" and reads "The ^folderName^ project on the ^organizationName^ ^siteShortName^ website will expire in ^daysUntilExpiration^ days, on ^expirationDate^.<br> Visit the "Project Locking & Review" tab on the project's permissions page at ^permissionsURL^ to review this project and reset its expiration date."
Once enabled, you can access project review on the same (Admin) > Folder > Permissions > Project Locking & Review tab in the eligible folders. Project review email notifications include a direct link to the permissions page.
Note that project review expiration dates and settings are enabled as part of nightly system maintenance. You will not see the changes you save and notifications will not be sent until the next time the maintenance task is run.
Project Review Email Recipient Role
Project administrators tasked with reviewing permissions and attesting to their correctness should be assigned the site role "Project Review Email Recipient" to receive notifications. This role can be assigned by a site administrator.
Select (Admin) > Site > Site Permissions.
Under Project Review Email Recipient, select the project admin(s) who should receive notification emails when projects need review.
Complete Project Review
When an authorized user is ready to review a project's permissions, they open the (Admin) > Folder > Permissions page for the project.
The reviewer should carefully review the permissions, groups, and roles assigned in the project and all subfolders. Once satisfied that everything is correct, they click the Project Locking & Review tab in the project, read the attestation, then click Reset Expiration Date to confirm.
Note that if the project is locked, whether manually or automatically as part of expiration, the administrator will see the same attestation and must review the permissions prior to clicking Unlock This Project.
Depending on the edition you are running and the modules you have configured, the Premium Features section will include some or all of the following features, and possibly others.
Master Patient Index: Integration with Enterprise Master Patient Index allows you to connect LabKey-housed data and a master index record for a patient using their EMPI ID.
Analytics Settings: Add JavaScript to your HTML pages to enable Google Analytics or add other custom script to the head of every page. Additional details are provided in the UI.
Authentication: View, enable, disable and configure authentication providers (e.g. Database, LDAP, CAS, Duo). Configure options like self sign-up and self-service email changes.
Experimental Features: Offers the option to enable experimental features. Proceed with caution as no guarantees are made about the features listed here.
Folder Types: Select which folder types will be available for new project and folder creation. Disabling a folder type here will not change the type of any current folders already using it.
Project Display Order: Choose whether to list projects alphabetically or specify a custom order.
Short URLs: Define short URL aliases for more convenient sharing and reference.
Site Settings: Configure a variety of basic system settings, including the base URL and the frequency of system maintenance and update checking.
System Maintenance: These tasks typically run every night to clear unused data, update database statistics, perform nightly data refreshes, and keep the server running smoothly and quickly. We recommend leaving all system maintenance tasks enabled, but some can be disabled if absolutely necessary. By default these tasks run on a daily schedule; you can change the time of day at which they run if desired. You can also run a task on demand by clicking one of the links. Available tasks may vary by implementation but could include:
Clean Up Archived Modules
Database Maintenance
Defragment ParticipantVisit Indexes
Master Patient Index Synchronization
Purge Expired API Keys
Purge Unused Participants
Report Service Maintenance
Search Service Maintenance
Targeted MS: Manage journal groups used in conjunction with the "publication protocol" implemented for the targetedms (Panorama) module.
Targeted MS Chromatogram Crawler: Crawl containers to find chromatograms.
Views and Scripting: Allows you to configure different types of scripting engines.
Management
Audit Log: View the audit log; many category-specific logs are available.
Full-Text Search: Configure and view both primary and external search indexing.
MS2: Administrative information for the mass spectrometry module.
Notification Service Admin: Enable or disable the notification service at the site level. Active notifications are listed.
Pipeline: Administrative information for the pipeline module.
Site-Wide Terms of Use: Require users to agree to terms of use whenever they attempt to log in to any project on the server.
Diagnostics
Links to diagnostic pages and tests that provide usage and troubleshooting information.
Actions: View information about the time spent processing various HTTP requests.
Attachments: View attachment types and counts, as well as a list of unknown attachments (if any).
Caches: View information about caches within the server.
Check Database: Options for checking the database:
Check table consistency: Click Do Database Check to access admin-doCheck.view, which will check container column references, PropertyDescriptor and DomainDescriptor consistency, schema consistency with tableXML, and consistency of provisioned storage. Some warnings here are benign, such as expectations of a 'name' field in Sample Types (which can now use SampleID instead), or 'orphaned' tables that are taking up space and can be safely deleted.
Validate domains match hard tables: Click Validate to run a background pipeline job looking for domain mismatches.
Get schema XML doc: Select a schema and click to Get Schema Xml.
Credits: Jar and Executable files distributed with LabKey Server modules.
Data Sources: A list of all the data sources defined in labkey.xml that were available at server startup and the external schemas defined in each.
Dump Heap: Write the current contents of the server's memory to a file for analysis.
Environment Variables: A list of all the current environment variables and their values; for example, CATALINA_HOME and JAVA_HOME will be shown.
Loggers: Manage the logging level of different LabKey components here. For example, while investigating an issue you may want to set a particular logging path to DEBUG to cause that component to emit more verbose logging.
Memory Usage: View current memory usage within the server. You can clear caches and run garbage collection from this page.
Pipelines and Tasks: See a list of all registered pipelines and tasks. This list may assist you in troubleshooting pipeline issues. You will also see TaskId information to assist you when building ETL scripts.
Profiler: Configure development tools like stack trace capture and performance profiling.
Queries: View the SQL queries run against the database, how many times they have been run, and other performance metrics.
Reset Site Errors: Reset the start point in the labkey-errors.log file; when you click View All Site Errors Since Reset later, nothing prior to the reset will be included. You will need to confirm this action by clicking OK.
Running Threads: View the current state of all threads running within the server. Clicking this link also dumps thread information to the log file.
Site Validation: Runs any validators that have been registered. (Validators are registered with the class SiteValidationProvider.)
SQL Scripts: Provides a list of the SQL scripts that have run, and have not been run, on the server. Includes a list of scripts with errors, and "orphaned" scripts, i.e., scripts that will never run because another script has the same "from" version but a later "to" version.
Suspicious Activity: Records activities that raise 404 errors, including attempts to access the server from questionable URLs, paths containing "../..", POST requests missing CSRF tokens, and improper encoding or characters.
System Properties: A list of current system properties and their values, for example, devmode = true.
View All Site Errors: View the current contents of the labkey-errors.log file from the <CATALINA_HOME>/logs directory, which contains critical error messages from the main labkey.log file.
View All Site Errors Since Reset: View the contents of labkey-errors.log that have been written since the last time its offset was reset through the Reset Site Errors link.
View Primary Site Log File: View the current contents of the labkey.log file from the <CATALINA_HOME>/logs directory, which contains all log output from LabKey Server.
Server Information Tab
The other tabs in the admin console offer grids of detailed information:
Server Information: Core database configuration and runtime information.
The version of the server is displayed prominently above the core database configuration.
Under Runtime Information, details about component versions, variable settings, and operation mode are shown.
Many of the server properties shown on the Server Information Tab can be substituted into the Header Short Name on the Look and Feel settings page. This can be useful when developing with multiple servers or databases to show key differentiating properties on the page header.
Server and Database Times
On the Server Information tab, under Runtime Information you'll see the current clock time on both the web server and the database server. This can be useful in determining the correct time to check in the logs to find specific events or actions.
Note that if the two times differ by more than 10 seconds, you'll see them displayed in red with an alert message: "Warning: Web and database server times differ by ## seconds!"
The administrator should investigate where both servers are retrieving their clock time and align them. When the server and database times are significantly different, there are likely to be unwanted consequences in audit logging, job synchronization, and other actions.
Error Code Reporting
If you encounter an error or exception, it will include a unique Error Code that can be matched with more details submitted with an error report. When reporting an issue to LabKey support, please include this error code. In many cases LabKey can use this code to track down more information with the error report, as configured in Site Settings.
Premium Feature Available
Premium edition users can also click to export diagnostics from this page, to assist in debugging any errors. Learn more in this topic:
Premium Feature — Available in the Professional and Enterprise Editions of LabKey Server. Learn more or contact LabKey.
This topic explains how to configure a connection between LabKey Server and a Docker container. This connection is used for integrations with sandboxed R engines, RStudio, RStudio Workbench, Jupyter Notebooks, etc.
Under Premium Features click Docker Host Settings.
Enable Docker Settings: select On.
Docker Host: Enter the URI for your docker host.
Docker Certificate Path: Unless you are running in devmode AND using port 2375, you must enter a valid file system path to the Docker certificates. Required for TCP in production mode. Note that LabKey expects certificates with specific names to reside in this directory. The certificate file names should be:
ca.pem
key.pem
cert.pem
Container Port Range: In most cases, leave these values blank.
User File Management: Select Docker Volume for a server running in production mode.
Click Save and then Test Saved Host to check your connectivity.
Supported Integrations
Using Docker, you can facilitate the following integrations; follow the links for more details:
1. One method is to run a socat container that exposes the Docker daemon's Unix socket on a local TCP port. Execute the following, replacing 2375 with the desired port:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock -p 127.0.0.1:2375:2375 bobrik/socat TCP-LISTEN:2375,fork UNIX-CONNECT:/var/run/docker.sock
Next, export the DOCKER_HOST environment variable by adding this (with the appropriate port) to your bash_profile:
export DOCKER_HOST=tcp://localhost:2375
You can now find the Docker daemon running on that port.
2. An alternative method is to install socat on your local machine, then run socat in the terminal specifying the same port as you would like to use in your Docker Host TCP configuration. Example of running socat:
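For example, a typical invocation (an assumption mirroring the containerized command above; the port must match the one in your Docker Host TCP configuration):

socat TCP-LISTEN:2375,fork UNIX-CONNECT:/var/run/docker.sock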
Using the admin console, administrators can control many server configuration options via the Site Settings page. During installation of LabKey Server, you have the option to immediately specify site settings, or you may accept defaults and return to customize them later.
Primary site administrator: Use this dropdown to select the primary site administrator. This user must have the Site Administrator role. The dropdown defaults to the first user assigned Site Administrator on the site. LabKey staff may contact this administrator to assist if the server is submitting exception reports or other information indicating a problem.
URL Settings
Base Server URL: Used to create links in emails sent by the system and also the root of Short URLs. The base URL should contain the protocol (http or https), hostname, and port if required. The webapp context path should never be added. Examples: "https://www.example.com/" or "https://www.labkey.org:9000" (but not "https://www.example.com/labkey").
Use "path first" urls (/home/project-begin.view): See LabKey URLs
Automatically Check for Updates and Report Usage Statistics
Check for updates to LabKey Server and report usage statistics to the LabKey team.
Checking for updates helps ensure that you are running the most recent version of LabKey Server. Reporting anonymous usage statistics helps the LabKey team improve product quality and service for the features you are using. All data is transmitted securely over SSL.
OFF - Do not check for updates or report any usage data.
ON - Check for updates and report system information, usage data, and organization details.
View Usage Statistics
At any time, you can click the View button to display the information that would be reported. Note that this is for your information only and no data will be submitted to LabKey when you view this information locally.
Manually Report Usage Statistics
LabKey uses securely reported usage statistics to improve product features and service, but in some cases you may not want them reported automatically. Click Download to download a usage report that you can transmit to your Account Manager on your own schedule. LabKey can load these manually reported statistics and continue to help improve the product in ways that matter to you.
Automatically Report Exceptions
Reporting exceptions helps the LabKey team improve product quality. All data is transmitted securely over SSL.
There are three levels of exception reporting available. For a complete list of information reported at each level, see Usage/Exception Reporting.
Low level: Include anonymous system and exception information, including the stack trace, build number, server operating system, database name and version, JDBC driver and version, etc.
Medium level: All of the above, plus the exception message and URL that triggered it.
High level: All of the above, plus the user's email address. The user will be contacted only to ask for help in reproducing the bug, if necessary.
After selecting an exception reporting level, click the View button to display the information that would be reported for the given level (except for the actual stack trace). Note that this is for your information only and no data will be submitted to LabKey when you view this sample.
Reporting exceptions to the local server may help your local team improve product quality. Local reporting is always at the high level described above.
You can also Download the exception report and manually transmit it to your LabKey Account Manager to have your data included without enabling automated reporting.
Customize LabKey System Properties
Log memory usage frequency: If you are experiencing OutOfMemoryErrors with your installation, you can enable logging that will help the LabKey development team track down the problem. This will log the memory usage to <CATALINA_HOME>/logs/labkeyMemory.log. This setting is used for debugging, so it is typically disabled and set to 0.
Maximum file size, in bytes, to allow in database BLOBs: LabKey Server stores some file uploads as BLOBs in the database. These include attachments to wikis, issues, and messages. This setting establishes a maximum file upload size to be stored as a BLOB. Users are directed to upload larger files using other means, which are persisted in the file system itself.
The following options to load ExtJS v3 on each page are provided to support legacy applications which rely on Ext3 without explicitly declaring this dependency. The performance impact on LabKey Server may be substantial. See Dependencies on Ext3 for more information.
Require ExtJS v3.4.1 to be loaded on each page: Optional.
Require ExtJS v3.x based Client API be loaded on each page: Optional.
Configure Security
Require SSL connections: Specifies that users may connect to your LabKey site only via SSL (that is, via the https protocol). Learn more here: Installation: Tomcat Configuration
SSL port: Specifies the port over which users can access your LabKey site over SSL. The standard default port for SSL is 443. Note that this differs from the Tomcat default port, which is 8443. Set this value to correspond to the SSL port number you have specified in the <tomcat-home>/conf/server.xml file. Learn more about configuring SSL here: Installation: Tomcat Configuration.
Configure API Keys
Allow API Keys: Enable to make API keys (i.e. tokens) available to logged in users for use in APIs. This enables client code to perform operations under a logged in user's identification without requiring passwords or other credentials to appear in said code. See API Keys for more details.
Expire API Keys: Configure how long generated API keys remain valid. Options include:
Never
7 days
30 days
90 days
365 days
Allow session keys: Enable to make available API keys which are attached only to the user's current logged-in session. See Compliant Access via Session Key for more details.
Note: You can choose to independently enable either API Keys or Session Keys or enable both simultaneously.
Configure Pipeline Settings
Pipeline tools: A list of directories on the web server which contain the executables that are run for pipeline jobs. The list separator is ; (semicolon) on Windows or : (colon) on a Mac. It should include the directories where your TPP and XTandem files reside. The appropriate directory will be entered automatically in this field the first time you run a schema upgrade and the web server finds it blank.
Ribbon Bar Message
Display Message: Whether to display the message defined in Message HTML in a bar at the top of each page.
Message HTML: You can keep a recurring message defined here, such as for upgrade outages, and control whether it appears with the above checkbox. For example:
<b>Maintenance Notice: This server will be offline for an upgrade starting at 8:30pm Pacific Time. The site should be down for approximately one hour. Please save your work before this time. We apologize for any inconvenience.</b>
Put Web Site in Administrative Mode
Admin only mode: If checked, only site admins can log into this LabKey Server installation.
Message to users when site is in admin-only mode: Specifies the message that is displayed to users when this site is in admin-only mode. Wiki formatting is allowed in this message. For example:
This site is currently undergoing maintenance, and only site administrators can log in.
HTTP Security Settings
X-Frame-Options
Controls whether or not a browser may render a server page in a <frame>, <iframe>, or <object>.
Same Origin - Pages may only be rendered in a frame when the frame is in the same domain.
Allow - Pages may be rendered in a frame in all circumstances.
Customize Navigation Options
Check the box if you want to Always include inaccessible parent folders in project menu when child folder is accessible.
Otherwise, i.e. when this box is unchecked, users who have access to a child folder but not the parent container(s) will not see that child folder on the project menu. In such a case, access would be by direct URL link only, such as from an outside source or from a wiki in another accessible project and folder tree.
This option supports security configurations where even the names and/or directory layout of parent containers should be obscured from a user granted access only to a specific subfolder.
This topic details the information included in the different levels of usage and exception reporting available in the site settings. Reporting this information to LabKey can assist us in helping you address problems that may occur.
To set up reporting of usage and exception information to LabKey:
Select (Admin) > Site > Admin Console.
Under Configuration, click Site Settings.
Select ON for Automatically check for updates to LabKey Server and report usage statistics to LabKey.
Select ON, [level] for Automatically report exceptions.
Click Save.
Basic Usage Reporting
When usage reporting is turned ON, the following details are reported back to LabKey, allowing the development team to gain insights into how the software is being used. This includes counts for certain data types, such as assay designs, reports of a specific type, or lists. It may also capture the number of times a certain feature was used in a given time window, such as since the server was last restarted.
This will not capture the names of specific objects like folder names, dataset names, etc., nor the row-level content of those items. It will also not capture metrics at an individual folder level or similar granularity. For example, a metric will not break down the number of lists defined in each folder, but it may capture the number of folders that have lists defined.
OS name
Java version
Enterprise pipeline enabled?
Heap Size
Database platform (SQL Server or Postgres) and version
Folder counts for PHI settings usage (terms of use, PHI activity set, PHI roles required, PHI query logging behavior)
Note that this is a partial list, and the exact set of metrics will depend on the modules deployed and used on a given server. To see the exact set of metrics and their values, use the View button on the Site Settings page.
Exception Reporting
When exception reporting is turned on, the following details are reported back to LabKey:
Low Level Exception Reporting
Error Code
OS name
Java version
Enterprise pipeline enabled?
The max heap size for the JVM
Tomcat version
Database platform (SQL Server or Postgres) and version
JDBC driver and version
Unique ids for server & server session
Distribution name
Configured usage reporting level
Configured exception reporting level
Stack trace
SQL state (when there is a SQL exception)
Web browser
Controller
Action
Module build and version details, when known
Medium Level Exception Reporting
Includes all of the information from "Low" level exception reporting, plus:
Exception message
Request URL
Referrer URL
High Level Exception Reporting
Includes all of the information from "Medium" level exception reporting, plus:
The look and feel of your LabKey Server can be set at the site level, then further customized at the project level as desired. Settings selected at the project-level override the broader site-level settings. For example, each project can have a custom string (such as the project name) included in the emails generated from within that project.
Premium edition subscribers have access to additional customization of the look and feel using page elements including headers, banners, and footers. A Page Elements tab available at both the site- and project-level provides configuration options. Learn more in this topic:
To customize the Look and Feel settings at the site level:
Go to (Admin) > Site > Admin Console.
Under Configuration, click Look and Feel Settings.
Settings on the Properties tab are set and cleared as a group; the settings on the Resources tab are set and cleared individually.
Properties Tab
Customize the Look and Feel of Your LabKey Server Installation
System description: A brief description of your server that is used in emails to users.
Header short name: The name of your server to be used in the page header and in system-generated emails.
This field accepts string substitution tokens for a number of server properties, like database name and version, listed in a tooltip when you hover over the '?'. The current values of these properties are also shown there.
Theme: Specifies the color scheme for your server. Learn more in this topic: Web Site Theme
Show Project and Folder Navigation: Select whether the project and folder menus are visible always, or only for administrators.
Show Application Selection Menu: (Premium Feature) Select whether the application selection menu is visible always, or only for administrators.
Show LabKey Help menu item: Specifies whether to show the built-in "Help" menu, available as (Help) > LabKey Documentation. In many parts of the server this link will take you directly to documentation of that feature.
Enable Object-Level Discussions: Specifies whether to show "Discussion >" links on wiki pages and reports. If object-level discussions are enabled, users must have "Message Board Contributor" permission to participate in them.
Logo link: Specifies the page that the logo in the header links to. By default: ${contextPath}/home/project-start.view. The logo image is provided on the Resources tab.
Support link: Specifies the page where users can request support. By default this is /home/Support. You can add a wiki or other resources there to assist your users.
Support email: Email address to show to users to request support with issues like permissions and logins. (Optional)
Customize Settings Used in System Emails
System email address: Specifies the address which appears in the From field in administrative emails sent by the system.
Organization name: Specifies the name of your organization, which appears in notification emails sent by the system.
Date Parsing Mode (site-level setting only): Select how user-entered and uploaded dates will be parsed: Month-Day-Year (as typical in the U.S.) or Day-Month-Year (as typical outside the U.S.). This setting applies to the entire site. Year-first dates are always interpreted as YYYY-MM-DD.
Note that the overall US or non-US date parsing mode applies site-wide. You cannot have some projects or folders interpret dates in one parsing pattern and others use another. The additional parsing patterns below are intended for refinements to the selected mode.
Additional parsing pattern for dates: This pattern is attempted first when parsing text input for a column that is designated with a date-only data type or annotated with the "Date" meta type. Most standard LabKey date columns use date-time data type instead (see below).
Parsing patterns may also be set at the project or folder level, overriding this site setting.
Additional parsing pattern for date-times: This pattern is attempted first when parsing text input for a column that is designated with a date-time data type or annotated with the "DateTime" meta type. Most standard LabKey date columns use this pattern.
Parsing patterns may also be set at the project or folder level, overriding this site setting.
Customize Column Restrictions
Restrict charting columns by measure and dimension flags. Learn about these flags in this topic: Measure and Dimension Columns
Provide a Custom Login Page
Alternative login page: To provide a customized login page to users, point to your own HTML login page deployed in a module. Specify the page as a string composed of the module name, a hyphen, then the page name in the format: <module>-<page>. For example, to use a login HTML page located at myModule/views/customLogin.html, enter the string 'myModule-customLogin'. By default, LabKey Server uses the login page at modules/core/resources/views/login.html which you can use as a template for your own login page. Learn more in this topic: Modules: Custom Login Page.
Provide a Custom Site Welcome Page
Alternative site welcome page (site-level setting only): Provide a page, either a full LabKey view or a simple HTML resource, to be loaded as the welcome page. The welcome page will be loaded when a user loads the site with no action provided (i.e. https://www.labkey.org). This is often used to provide a splash screen for guests. Note: do not include the contextPath in this string. For example: /myModule/welcome.view to select a view within the module myModule, or /myModule/welcome.html for a simple HTML page in the web directory of your module. Remember that the module containing your view or HTML page must be enabled in the Home project. Learn more in this topic: Modules: Custom Site Welcome Page.
Premium Resource Available
Premium edition subscribers can use the example wikis provided in this topic for guidance in customizing their site landing pages to meet user needs using a simple wiki instead of defining a module view:
Save: Save changes to all properties on this page.
Reset: Reset all properties on this page to default values.
Resources Tab
Header logo (optional): The custom logo image that appears in every page header in the upper left when the page width is greater than 767px. Recommended size: 100px x 30px.
If you are already using a custom logo, click View logo to see it.
Click Reset logo to default to stop using a custom logo.
To replace the logo, click Choose File or Browse and select the new logo.
Responsive logo (optional): The custom logo image to show in the header on every page when the page width is less than 768px. Recommended size: 30px x 30px.
If you are already using a custom logo, click View logo to see it.
Click Reset logo to default to stop using a custom logo.
To replace the logo, click Choose File or Browse and select the new logo.
Favicon (optional): Specifies a "favorite icon" file (*.ico) to show in the favorites menu or bookmarks. Note that you may have to clear your browser's cache in order to display the new icon.
If you are already using a custom icon, click View icon to see it.
Click Reset favorite icon to default to stop using a custom one.
To replace the icon, click Choose File or Browse and select the new icon file.
Stylesheet: Custom CSS stylesheets can be provided at the site and/or project levels. A project stylesheet takes precedence over the site stylesheet. Resources for designing style sheets are available here: CSS Design Guidelines
If you are already using a custom style sheet, click View css to see it.
Click Delete custom stylesheet to stop using it.
To replace the stylesheet, click Choose File or Browse and select the new file.
After making changes or uploading resources to this page, click Save to save the changes.
Project Settings
To customize project settings, which control the Look and Feel at the project level, plus custom menus and file roots:
Navigate to the project home page.
Go to (Admin) > Folder > Project Settings.
The project-level settings on the Properties and Resources tabs nearly duplicate the site-level options, enabling optional overrides at the project level. One additional project-level property is:
Security defaults: When this box is checked, new folders created within this project inherit project-level permission settings by default.
Menu Bar Tab
You can add a custom menu at the project level. See Add Custom Menus for a walkthrough of this feature.
Files Tab
This tab allows you to optionally configure a project-level file root, data processing pipeline, and/or shared file web part. See File Root Options.
Premium Features Available
Premium edition subscribers have access to additional customization of the look and feel using:
Page elements including headers, banners, and footers. A Page Elements tab available at both the site- and project-level provides configuration options. Learn more in this topic: Page Elements
If you also use the Biologics or Sample Manager application, admins will see an additional Look and Feel option for showing or hiding the Application Selection Menu. Learn more in this topic: Product Selection Menu
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
To customize the branding of your LabKey Server, you can use custom headers, banners, and footers to change the look and offer additional options. Each of these elements can be applied site-wide or on a per-project basis. They can also be suppressed, including the default "Powered by LabKey" footer present on all pages in the Community Edition of LabKey Server.
The elements on the page available for customization are the header, banner area, and footer. There are no default headers or banners, but the default footer reads "Powered by LabKey" and links to labkey.com.
An administrator can customize these page elements sitewide via the admin console or on a per project basis via project settings. By default, projects inherit the site setting.
Edit page elements site-wide:
Select (Admin) > Site > Admin Console.
Under Premium Features, click Configure Page Elements.
Edit page elements for a project:
Select (Admin) > Folder > Project Settings.
Click the Page Elements tab.
Define Custom Page Elements
A custom page element is written in HTML and placed in the resources/views directory of a module deployed on your server. Details about defining and deploying custom elements, as well as simple examples, can be found in the developer documentation:
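As a point of reference, a minimal sketch of such an element, placed at <yourModule>/resources/views/_footer.html (the module name, link, and text here are all hypothetical):

<!-- Hypothetical custom footer: centered text with a link -->
<div style="text-align:center;">
  Powered by <a href="https://www.example.org">Example Organization</a>
</div>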
Configure Header
On the Page Elements tab, use the dropdown to configure the header. Options listed will include:
Inherit from site settings ([Site setting shown here]): Shown only at the project-level. This is the default for projects.
No Header Displayed: Suppress any header that is defined.
[moduleNames]: Any module including a resources/views/_header.html file will be listed.
Click Save.
Configure Banner
When you use a custom banner, it replaces the page title (typically the folder name) between the menu bar and your page contents web parts. Banners can be used site-wide or within a given project. In the project case, you have the option to use the banner throughout all child folders or only in the project folder itself.
On the Page Elements panel, use the dropdown to configure the banner. Options listed will include:
Inherit from site settings ([Site setting shown here]): Shown only at the project-level. This is the default for projects.
No Banner Displayed: Suppress any banner that is defined. This is the site default.
[moduleNames]: Any module including a resources/views/_banner.html file will be listed.
Set Display Options: Use the radio buttons to specify one of the options:
Display banner in this project and all child folders.
Display banner only in this project's root folder.
Click Save.
Configure Footer
By default, every page on LabKey Server displays a footer "Powered By LabKey". An administrator can replace it with a custom footer or remove it entirely.
On the Page Elements tab, use the dropdown to configure the footer. Options listed will include:
Inherit from site settings ([Site setting shown here]): Shown only at the project-level. This is the default for projects.
No Footer Displayed: Do not show any footer.
Core: Use the default "Powered by LabKey" footer.
[moduleNames]: Any module including a resources/views/_footer.html file will be listed.
The site or project Theme is a way to control the overall look and feel of your web site with a custom color palette. You can set a site-wide theme, and individual projects can choose a different theme to override it.
Emails sent to users can be customized using templates defined at the site level. A subset of these templates can also be customized at the project or folder level. This topic describes how to customize the templates used to generate system emails.
For your server to be able to send email, you also need to configure SMTP settings in your labkey.xml file. See SMTP Settings for more information. To test these settings:
Select (Admin) > Site > Admin Console.
Under Diagnostics, click Test Email Configuration.
Template Customization
To customize the template, complete the fields:
From Name: The display name for the sender, typically the short site name. The email "From" field will be populated with the address configured via site or project settings.
Reply To Email: If you prefer a different "Reply-To" address, supply it here.
Subject
Message
Substitution Strings
Message template fields can contain a mix of static text and substitution parameters. A substitution parameter is inserted into the text when the email is generated. The syntax is: ^<param name>^
where <param name> is the name of the substitution parameter.
Each message type includes a full list of available substitution parameters with type, description, and current value if known, at the bottom of the email customization page. For example, some strings used in emails for user management:
^currentDateTime^ -- Current date and time in the format: 2017-02-15 12:30
^emailAddress^ -- The email address of the person performing the operation -- see Look and Feel Settings.
^errorMessage^ -- The error message associated with the failed audit processing -- see Compliance.
^homePageURL^ -- The home page of this installation -- see Site Settings.
^supportLink^ -- Page where users can request support.
^systemEmail^ -- The 'from:' address for system notification emails.
^verificationURL^ -- The unique verification URL that a new user must visit in order to confirm and finalize registration. This is auto-generated during the registration process.
The list of parameters available varies based on which email type is selected from the dropdown. There are some specialized parameters providing more than simple substitution. For example, templates for report and dataset notifications include a ^reportAndDatasetList^ parameter which will include a formatted list of all the changes which triggered the notification.
Format Strings
You may also supply an optional format string. If the value of the parameter is not blank, it will be used to format the value in the outgoing email. The syntax is: ^<param name>|<format string>^
For example:
^currentDateTime|The current date is: %1$tb %1$te, %1$tY^
^siteShortName|The site short name is not blank and its value is: %s^
Properties are passed to the email template as their actual type, rather than being pre-converted to strings. Each type has different formatting options. For example, a date field can be formatted in either month-first or day-first order, depending on local style.
Some email templates support using HTML formatting in the message body to facilitate showing text links, lists, etc. For example, the default template for "Message board daily digest" reads in part:
<table width="100%"> <tbody> <tr><td><b>The following new posts were made yesterday in folder: ^folderName^</b></td></tr> ^postList^ </tbody> </table> <br> ...
In the email, this will show the heading in bold and remainder in plain text.
If you want to use a combination of HTML and plain text in an email template, use the delimiter:
--text/html--boundary--
to separate the sections of the template, with HTML first and plain text after. If no delimiter is found, the entire template will be assumed to be HTML.
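For example, a minimal sketch of a combined template body (the text is placeholder content; ^folderName^ is one of the documented substitution parameters):

<b>New posts were made in folder: ^folderName^</b>
--text/html--boundary--
New posts were made in folder: ^folderName^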
Message Board Notifications
For Message board notification emails, there is a default message reading "Please do not reply to this email notification. Replies to this email are routed to an unmonitored mailbox." If that is not true for your message board you may change the template at the site or folder level. You may also choose whether to include the portion of the email footer that explains why the user received the given email and gives the option to unsubscribe. Include the parameter ^reasonFooter^ to include that portion; the text itself cannot be customized.
Folder-Level Email Customizations
A subset of email templates, like those for issue and message board notifications, can also be customized at the project or folder level.
Issue update
Message board daily digest
Message board notification
To access folder-level customizations for any of these templates, the easiest path is to use a Messages web part. You can add one to any folder if you don't already have one.
Viewing the Messages web part, open the (triangle) menu.
Click Email to open the submenu, then Folder Email Template.
The interface and options are the same as described above for site level templates. Only the subset of template types that can be customized at the folder level are listed. Select one.
Customize the template as needed, then click Save.
If you added an empty Messages web part to make these changes, you can remove it from the page after customizing your templates.
The dumbster module includes a Mail Record web part that you can use to test email that would be sent in various scenarios.
The Mail Record presents the sender and recipient email addresses, the date, text of the message, and links to view the headers and versions of the message such as HTML, Text-only, and Raw.
Once SMTP is configured and real email will be sent, be sure to remove the dumbster module from your deployment. Otherwise it will continue to capture any outgoing messages before they are emailed.
When features are under development and not yet ready to be incorporated into the production product, they may be included as experimental features. These features may change, break, or disappear at any time. We make no guarantees about what may happen if you turn on these experimental features. Some experimental features are only available when specific modules are included. Proceed with discretion and please contact LabKey if you are interested in sponsoring further development of features listed here. Enabling or disabling some features will require a restart of the server.
Select (Admin) > Site > Admin Console
Under Configuration, click Experimental Features.
Carefully read the warnings and descriptions before enabling any features. Searching the documentation for more detail is also recommended, though not all experimental features are documented.
Use checkboxes to enable desired experimental features. Uncheck the box to disable the feature.
Block Malicious Clients
Reject requests from clients that appear malicious. Make sure this feature is disabled before running a security scanner.
Client-side Exception Logging to Mothership
Report unhandled JavaScript exceptions to mothership.
Client-side Exception Logging to Server
Report unhandled JavaScript exceptions to the server log.
ELISA Multi-plate, multi-well data support
Allows ELISA assay import of high-throughput data file formats that contain multiple plates and multiple analyte values per well. Learn more in this topic: Enhanced ELISA Assay Support
Generic [details] link in grids/queries
This feature will turn on generating a generic [details] link in most grids.
Grid Lock Left Column
Lock the left column of grids on horizontal scroll, keeping the ID of the sample or entity always visible. Applies to grids in Sample Manager, Biologics, and Freezer Manager applications.
Include Last-Modified header on query metadata requests
For schema, query, and view metadata requests, enabling this will include a Last-Modified header such that the browser can cache the response. The metadata is invalidated when performing actions such as creating a new list or modifying the columns on a custom view.
Media Ingredient read permissions
Enforce media ingredient read permissions.
No Guest Account
When this experimental feature is enabled, there will be no guest account. All users will have to create accounts in order to see any server content.
No Question Marks in URLs
Don't append '?' to the end of URLs unless there are query parameters.
Notebook Custom Fields
Enable custom fields in Electronic Lab Notebooks. Learn more in this topic:
Notifications Inbox
Display a notifications 'inbox' icon in the header bar with a count of notifications; click to show the panel of unread notifications.
Requests Menu in Biologics
Display "Requests" section in menu to all Biologics users.
Resolve Property URIs as Columns on Experiment Tables
If a column is not found on an experiment table, attempt to resolve the column name as a Property URI and add it as a property column.
Sample/Aliquot Selector
Enable the Sample/Aliquot Selector button to show in sample grids within Sample Manager and Biologics.
Use QuerySelect for row insert/update form
This feature switches the query-based select inputs on the row insert/update form to use the React QuerySelect component, allowing a user to view the first 100 options in the select and then use typeahead search to find the other select values.
Use Sub-schemas in Study
Separate study tables into three groups: Datasets, Design, Specimens. User defined queries are not placed in any of these groups.
User Folders
Enable personal folders for users.
UX Assay Data Import
Adds an 'Import Data' button (using plus icon) to the 'Assay List' query view to get to the new UX Assay Data Import page.
Skip Importing Chromatograms
Enable to prevent the server from storing chromatograms in the database for newly imported files; instead load them on demand from .skyd files.
Prefer SKYD Chromatograms
When the server has the information needed to load a chromatogram on demand from a .skyd file, fetch it from the file instead of the database.
Rserve Reports
Use an R Server for R script evaluation instead of running R from a command shell. See LabKey/Rserve Setup Guide for more information.
Check for return URL Parameter Casing as 'returnUrl'
The returnUrl parameter must be capitalized correctly. When you enable this feature, the server will check the casing and throw an error if it is 'returnURL'.
Use Abstraction Results Comparison UI
Use a multi-column view for comparing abstraction results.
Use Last Abstraction Result
Use only the last set of submitted results per person for compare view. Otherwise all submitted iterations will be shown.
Abstraction Comparison Anonymous Mode
Mask person names and randomize ordering for abstraction comparison views.
External Redirect Hosts
For security reasons, LabKey Server restricts the host names that can be used in returnUrl parameters. By default, only redirects to the same LabKey instance are allowed. Other server host names must be specifically granted access to allow them to be automatically redirected.
For more information on the security concern, please refer to the OWASP advisory.
A site administrator can allow hosts based on the server name or IP address, based on how it will be referenced in the returnUrl parameter values.
To add an External Redirect Host URL to the approved list:
Go to (Admin) > Site > Admin Console.
Under Configuration click External Redirect Hosts.
In the Host field enter an approved URL and click Save.
URLs already granted access are added to the list under Existing External Redirect Hosts.
You can directly edit and save the list of existing redirect URLs if necessary.
Note that mixing missing value indicators and out of range indicators on a single field is not recommended. Interactions between them may result in an error.
Missing Value Indicators
Missing Value (MV) indicators allow individual data fields to be flagged if the original data is missing or suspect. This marking may be done by hand or during data import using the MV indicators you define. Note that when you define MVIs, they will be applied to new data that is added; existing data will not be scanned, but missing values can be found using filtering and marked manually with the MVI.
Note: Missing value indicators are not supported for Assay Run fields.
Administrators can customize which MVI values are available at the site or folder level. If no custom MVIs are set for a folder, they are inherited from the parent folder. If no custom values are set in any parent folder, the MV values are read from the site configuration.
Two customizable MV values are provided by default:
Q: Data currently under quality control review.
N: Data in this field has been marked as not usable.
Customize at the Site Level
The MVI values defined at the site level can be inherited or overridden in every folder.
Select (Admin) > Site > Admin Console.
Click Missing Value Indicators in the Configuration section.
See the currently defined indicators, define new ones, and edit descriptions here.
On older servers, the default descriptions may differ from those shown here.
Click Save.
Customization at the Folder Level
Select (Admin) > Folder > Management.
Click the Missing Values tab.
The server defaults are shown - click the "Server default" text to edit site wide settings as described above.
Uncheck Inherit settings to define a different set of MV indicators here. You will see the same UI as at the site level and can add new indicators and/or change the text descriptions here.
View Available Missing Value Indicators in the Schema Browser
To see the set of indicators available:
Select (Admin) > Go To Module > Query.
Open the core schema, then the MVIndicators table.
Click View Data.
Enable Missing Value Indicators
To have indicators applied to a given field, use the "Advanced Settings" section of the field editor:
Open the field for editing using the icon.
Click Advanced Settings.
Check the box for Track reason for missing data values.
Click Apply, then Finish to close the editor.
Mark Data with Missing Value Indicators
To indicate that a missing value indicator should be applied, edit the row. For each field tracking missing values, you'll see a Missing Value Indicator dropdown. Select the desired value. Shown below, the "Hemoglobin" field tracks reasons for missing data. Whether there is a value in the entry field or not, the data owner can mark the field as "missing".
In the grid, users will see only the missing value indicator, and the cell will be marked with a red corner flag. Hover over the value for a tooltip of details, including the original value.
How Missing Value Indicators Work
Two additional columns stand behind any missing-value-enabled field. This allows LabKey Server to display the raw value, the missing value indicator or a composite of the two (the default).
One column contains the raw value for the field, or a blank if no value has been provided. The other contains the missing value indicator if an indicator has been assigned; otherwise it is blank. For example, an integer field that is missing-value-enabled may contain the number "12" in its raw column and "N" in its missing value indicator column.
A composite of these two columns is displayed for the field. If a missing value indicator has been assigned, it is displayed in place of the raw value. If no missing value indicator has been assigned, the raw value is displayed.
Normally the composite view is displayed in the grid, but you can also use custom grid views to specifically select the display of the raw column or the indicator column. Check the box to "Show Hidden Fields" to see them:
ColumnName: shows just the value if there's no MV indicator, or just the MV indicator plus a red corner flag if there is. The tooltip shows the original value.
ColumnNameMVIndicator (a hidden column): shows just the MV indicator, or null if there isn't one.
ColumnNameRawValue (a hidden column): shows just the value itself, or null if there isn't one.
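The precedence between these columns can be summarized in a small sketch (illustrative only, not LabKey's implementation):

# Illustrative only: how the composite column resolves what the grid displays.
def composite_display(raw_value, mv_indicator):
    """Show the MV indicator when one is assigned; otherwise show the raw value."""
    return mv_indicator if mv_indicator is not None else raw_value

print(composite_display(12, "N"))   # displays "N" (the raw value 12 is retained behind it)
print(composite_display(12, None))  # displays 12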
There is no need to mark a primary key field with a MV indicator, because a prohibition against NULL values is already built into the constraints for primary keys.
Out of Range (OOR) Indicators
Out of Range (OOR) indicators give you a way to display and work with values that are outside an acceptable range, when that acceptability is known at the time of import. For example, if you have a machine reading that is useful in a range from 10 to 90, you may not know or care if the value is 5 or 6, just know that it is out of range, and may be output by the machine as "<10".
Note that OOR Indicators are supported only for Datasets and General Assay Designs. They are not supported for Lists or Sample Types.
Enable OOR indicators by adding a string column whose name is formed from the name of your primary value column plus the suffix "OORIndicator". LabKey Server recognizes this syntax and adds two additional columns with the suffixes "Number" and "InRange", giving you choices for display and processing of these values.
Open the View Customizer to select the desired display options:
ColumnName: Shows the out of range indicator (ColumnNameOORIndicator) and primary value (ColumnNameNumber) portions concatenated together ("<10") but sorts/filters on just the primary value portion.
ColumnNameOORIndicator: Shows just the OOR indicator ("<").
ColumnNameNumber: Shows just the primary value ("10"). The type of this column is the same as that of the original "ColumnName" column. It is best practice to use numeric values for fields using out of range indicators, but not required.
ColumnNameInRange: Shows just the primary value, but only if there's no OOR indicator for that row, otherwise its value is null. The type of this column is the same as that of the original "ColumnName" column. This field variation lets you sort, filter, perform downstream analysis, and create reports or visualizations using only the "in range" values for the column.
For example, if your primary value column is an integer named "Reading" then add a second (String) column named "ReadingOORIndicator" to hold the OOR symbol or symbols, such as "<", ">", "<=", etc.:

Reading | ReadingOORIndicator
integer | string
For example, if you insert the following data...

Reading | ReadingOORIndicator
5       | <
22      |
33      |
99      | >
It would be displayed as follows, assuming you show all four related columns:

Reading | ReadingOORIndicator | ReadingNumber | ReadingInRange
<5      | <                   | 5             |
22      |                     | 22            | 22
33      |                     | 33            | 33
>99     | >                   | 99            |
Note that while the "Reading" column in this example displays the concatenation of a string and an integer value, the type in that column remains the original integer type.
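The relationship between the two imported columns and the derived display columns in the example above can be sketched as follows (illustrative only, not LabKey's implementation):

# (Number, OORIndicator) pairs matching the example rows above
rows = [(5, "<"), (22, None), (33, None), (99, ">")]

for number, oor in rows:
    reading = f"{oor}{number}" if oor else str(number)  # composite "Reading" display
    in_range = number if oor is None else None          # "ReadingInRange": null when out of range
    print(reading, oor or "-", number, in_range)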
Short URLs allow you to create convenient, memorable links to specific content on your server, making it easier to publish and share information. Instead of using an outside service like TinyURL or bit.ly, you can define short URLs within LabKey.
For example, say you're working with a team and have discovered something important about some complex data. Here we're looking at some sample data from within the demo study on labkey.org. Instead of directing colleagues to open the dataset, filter for one variable, sort on another, and filter on a third, all of these operations are contained in the full URL.
You could certainly email this URL to colleagues, but for convenience you can define a shortcut handle and publish a link like this, which if clicked takes you to the same place:
Note that the same filters are applied without any action on the part of the user of the short URL, and the full URL is displayed in the browser.
The full version of a short URL always ends with ".url" to distinguish it from other potentially conflicting resources exposed as URLs.
Define Short URLs
Short URLs are relative to the server and port number on which they are defined. The current server location is shown in the UI as circled in the screenshot below. If this is incorrect, you can correct the Base Server URL in Site Settings. Typically a short URL is a single word without special characters.
To define a short URL:
Select (Admin) > Site > Admin Console.
Click Short URLs in the Configuration section.
Type the desired short URL word into the entry window (the .url extension is added automatically).
Paste or type the full destination URL into the Target URL window. Note that the server portion of the URL will be stripped off. You can only create short URLs that link to content on the current server.
Click Submit.
Any currently defined short URLs will be listed in the Existing Short URLs web part. Notice in the above screencap, the server portion was stripped off and the target begins with /home/Demos....
You can click the Test link to try your short URL. You can also directly type the short URL into a new browser tab and you will be taken directly to the target.
Try it now by pasting into a browser: www.labkey.org/important.url
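Because a short URL behaves like an ordinary HTTP redirect, any client can follow it. For example, this Python check (assuming the www.labkey.org/important.url example above is still defined) prints the full target URL:

import requests

# Follow the short URL's redirect to the full target URL
resp = requests.get("https://www.labkey.org/important.url", allow_redirects=True)
print(resp.url)  # the full target URL, filters and sorts included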
Use Update or Delete buttons to manage the existing short URLs.
Security
The short URL can be entered by anyone, but access to the actual target URL and content will be subject to the same permission requirements as if the short URL had not been used.
Configure System Maintenance
System maintenance tasks are typically run every night to clear unused data, update database statistics, perform nightly data refreshes, and keep the server running smoothly and quickly. This topic describes how an administrator can manage maintenance tasks and the nightly schedule.
System maintenance tasks will be run as the database user configured for the labkeyDataSource in the labkey.xml file. Confirm that this user has sufficient permissions for your database.
Configure Maintenance Tasks
To configure system maintenance:
Select (Admin) > Site > Admin Console.
Under Configuration, click System Maintenance.
We recommend leaving all system maintenance tasks enabled, but some of the tasks can be disabled if absolutely necessary. By default, all enabled tasks run on a daily schedule at the time of day you select (see below for notes about Daylight Savings Time). You can also run system maintenance tasks manually, if needed; use the Run all tasks link or click on an individual link to run just that task.
Note that some tasks pictured above may not be available on your server, depending on the version and features enabled.
Disable a Task
To disable a maintenance task, uncheck the box and click Save. Some tasks, such as purging of expired API keys, cannot be disabled and are shown grayed out.
View Pipeline Log
System maintenance runs as a pipeline job and logs progress, information, warnings, and errors to the pipeline log. To view previously run system maintenance pipeline jobs:
Select (Admin) > Site > Admin Console.
Under Management, click Pipeline.
System Maintenance Schedule and Daylight Savings Time
System maintenance triggering is potentially subject to some oddities twice a year, when Daylight Savings Time transitions occur. As an example, in the United States, within time zones/locations that observe Daylight Savings Time, the following problems may occur if system maintenance is scheduled between the hours of 1:00AM and 3:00AM:
1:15AM may occur twice - duplicate firings are possible
2:15AM may never occur - missed firings are possible
Missing or re-running system maintenance twice a year will generally not cause any problems, but if this is a concern then schedule system maintenance outside the DST transition times for your locale.
When using SQL Server, if the user running the maintenance tasks does not have sufficient permissions, an exception will be raised similar to this:
java.sql.SQLException: User does not have permission to perform this action.
The full text of the error will include "EXEC sp_updatestats;" as well.
To resolve this, confirm the database account that LabKey Server is using to connect to SQL Server is a sysadmin or the dbo user. Check the "labkey.xml" or "ROOT.xml" file located under the <CATALINA_HOME>/conf/Catalina/localhost directory to confirm.
As a workaround, you can set up an outside process or external tools to regularly perform the necessary database maintenance, then disable having the server perform these tasks using the System Maintenance option in the admin console.
Some ways to use scripting engines on LabKey Server.
R, Java, Perl, or Python scripts can perform data validation or transformation during assay data upload (see: Transform Scripts).
R scripts can provide advanced data analysis and visualizations for any type of data grid displayed on LabKey Server. For information on using R, see: R Reports. For information on configuring R beyond the instructions below, see: Install and Set Up R.
Select Add > New R Engine from the drop-down menu.
If an engine has already been added and needs to be edited, double-click the engine, or select it and then click Edit.
Fill in the fields necessary to configure the scripting engine in the popup dialog box, for example:
Enter the configuration information in the popup. See below for field details.
Click Submit to save your changes and add the new engine.
Click Done when finished adding scripting engines.
R configuration fields:
Name: Choose a name for this engine, which will appear on the list.
Language: Choose the language of the engine. Example: "R".
File extensions: These extensions will be associated with this scripting engine.
Do not include the . in the extensions you list, and separate multiple extensions with commas.
Example: For R, choose "R,r" to associate the R engine with both uppercase (.R) and lowercase (.r) extensions.
Program Path: Specify the absolute path of the scripting engine instance on your LabKey Server, including the program itself. Remember: the R program is named "R.exe" on Windows and simply "R" on Linux and OSX machines. For example:
/usr/bin/R
Program Command: This is the command used by LabKey Server to execute scripts created in an R view.
Example: For R, you typically use the default command: CMD BATCH --slave. Both stdout and stderr messages are captured when using this configuration. The default command is sufficient for most cases and usually would not need to be modified.
Another possible option is capture.output(source("%s")). This will only capture stdout messages, not stderr. You may use cat() instead of message() to write messages to stdout.
Output File Name: If the console output is written to a file, specify the name here. The substitution syntax ${scriptName} will be replaced with the name (minus the extension) of the script being executed. For example, "${scriptName}.Rout" writes the output of "myscript.R" to "myscript.Rout".
If you are working with assay data, an alternative way to capture debugging information is to enable "Save Script Data" in your assay design: for details see Transform Scripts.
Site Default: Check this box if you want this configuration to be the site default.
Sandboxed: Check this box if you want to mark this configuration as sandboxed.
Use pandoc and rmarkdown: Enable if you have rmarkdown and pandoc installed. If enabled, Markdown v2 will be used to render knitr R reports; if not enabled, Markdown v1 will be used. See R Reports with knitr.
Enabled: Check this box to enable the engine.
Multiple R Scripting Engine Configurations
More than one R scripting engine can be defined on a server site. For example, you might want to use different versions of R in different projects or different sets of R packages in different folders. You can also use different R engines inside the same folder, one to handle pipeline jobs and another to handle reports.
Use a unique name for each configuration you define.
You can mark one of your R engines as the site default, which will be used if you don't specify otherwise in a given context. If you do override the site default in a container, then this configuration will be used in any child containers, unless you specify otherwise.
In each folder where you will use an R scripting engine, you can either:
Use the site default R engine, which requires no intervention on your part.
Or use the R engine configuration used by the parent project or folder.
Or use alternate engines for the current folder. If you choose this option, you must further specify which engine to use for pipeline jobs and which to use for report rendering.
To select an R configuration in a folder, navigate to the folder and select (Admin) > Folder > Management.
Click the R Config tab.
You will see the set of Available R Configurations.
Options:
Use parent R Configuration: (Default) The configuration used in the parent container is shown with a radio button. This will be the site default unless it was already overridden in the parent.
Use folder level R configuration: To use a different configuration in this container, select this radio button.
Reports: The R engine you select here will be used to render reports in this container. All R configurations defined on the admin console will be shown here.
Pipeline Jobs: The R engine you select here will be used to run pipeline jobs and transform scripts. All R configurations defined on the admin console will be shown here.
Click Save.
In the example configuration below, different R engines are used to render reports and to run pipeline jobs.
Sandbox an R Engine
When you define an R engine configuration, you have the option to select whether it is Sandboxed. Sandboxing is a software management strategy that isolates applications from critical system resources. It provides an extra layer of security to prevent harm from malware or other applications. By sandboxing an R engine configuration, you can grant the ability to edit R reports to a wider group of people.
An administrator should only mark a configuration as sandboxed when they are confident that their R configuration has been contained (using docker or another mechanism) and does not expose the native file system directly. LabKey will trust that by checking the box, the administrator has done the appropriate diligence to ensure the R installation is safely isolated from a malicious user. LabKey does not verify this programmatically.
The sandbox designation controls which security roles are required for users to create or edit scripts.
If the box is checked, this engine is sandboxed. A user with either the "Trusted Analyst" or "Platform Developer" role will be able to create new R reports and/or update existing ones using this configuration.
If the box is unchecked, the non-sandboxed engine is considered riskier by the server, so users must have the "Platform Developer" role to create and update using it.
Learn about these security roles in this topic: Developer Roles
Add a Perl Scripting Engine
To add a Perl scripting engine, follow the same process as for an R configuration.
Select (Admin) > Site > Admin Console.
Under Configuration, click Views and Scripting.
Select Add > New Perl Engine.
Enter the configuration information in the popup. See below for field details.
Click Submit.
You can only have a single Perl engine configuration. After one is defined, the option to define a new one will be grayed out. You may edit the existing configuration to make changes as needed.
Perl configuration fields:
Name: Perl Scripting Engine
Language: Perl
Language Version: Optional
File Extensions: pl
Program Path: Provide the path, including the name of the program. For example, "/usr/bin/perl", or on Windows "C:\labkey\apps\perl\perl.exe".
Program Command: Leave this blank
Output File Name: Leave this blank
Enabled
Add a Python Scripting Engine
To add a Python engine:
Select (Admin) > Site > Admin Console.
Under Configuration, click Views and Scripting.
Select Add > New External Engine.
Enter the configuration information in the popup.
Program Command and Output File Name are optional fields.
Program Command is used only if you need to pass additional commands or arguments to Python (or to any other scripting engine). If left blank, the server uses the same default as when running Python from the command line. In most cases, this field can be left blank unless you need to pass in a Python argument. If used, we recommend adding quotes around the value, for example, "${runInfo}". This is especially important if your path to Python has spaces in it.
Output File Name: If the console output is written to a file, the name should be specified here. The substitution syntax ${scriptName} will be replaced with the name (minus the extension) of the script being executed.
If you are working with assay data, an alternative way to capture debugging information is to enable "Save Script Data" in your assay design: for details see Transform Scripts.
Premium Feature — This feature is available with the Professional and Enterprise Editions of LabKey Server. Learn more or contact LabKey.
One or more Proxy Servlets can be configured by a site administrator to act as reverse proxies to external web applications, such as Plotly Dash. A configured proxy servlet receives HTTP requests from a client such as a web browser, and passes them on to the external web application; HTTP responses are then received and passed back to the client.
The primary benefits of using a proxy servlet are:
The external web application can be secured behind a firewall. LabKey Server needs access, but clients do not need their own direct access.
LabKey will securely pass, via HTTP headers, context about the current user to the external web application. This context includes the user's email address, security roles, and an API key that can be used to call LabKey APIs as that user.
To use proxy servlets, you must obtain and deploy the connectors module. Contact your account manager using your support portal or contact LabKey for more information.
To add or edit Proxy Servlet configurations, users must have admin permissions. Users with the "Troubleshooter" site-wide role can see but not edit the configurations.
Configure Proxy Servlets
Administrators configure proxy servlets as follows:
Select (Admin) > Site > Admin Console.
Under Premium Features, click Proxy Servlets.
Enter:
Proxy Name: Proxy names are case insensitive and must be unique. They are validated to be non-blank, not currently in use, and composed of valid characters.
Target URI: Target URIs are validated as legal URIs. The default port for Plotly Dash is 8050, shown below.
Click Add Proxy.
An attempt to add an invalid configuration results in an error message above the inputs.
Once added successfully, the proxy servlet configuration appears in the Existing Proxy Servlets grid. The Test Link takes the form:
<serverURL>/<server_context>/_proxy/<proxy_name>/
Click the Test Link to confirm that the connection is successful.
Use Proxy Servlets
All proxy servlets are rooted at LabKey servlet mapping /_proxy/*, so, for example, the dash configuration above on a localhost server would appear at http://localhost:8080/labkey/_proxy/dash/.
This URL can be accessed directly, in which case the web application's output will be shown full screen, with no LabKey frame. Or an iframe can be used to provide display and interactivity within a LabKey page.
The following headers are provided on all requests made to the web application:

Name | Description
X-LKPROXY-USERID | RowId for the current user's account record in LabKey
X-LKPROXY-EMAIL | User's email address
X-LKPROXY-SITEROLES | Site-level roles granted to the user
X-LKPROXY-APIKEY | Session key linked to the current user's browser session. This API key is valid until the user logs out explicitly or via a session timeout.
X-LKPROXY-CSRF | CSRF token associated with the user's session. Useful for invoking API actions that involve mutating the server.
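As a sketch of how a target application might consume these headers, the minimal Flask app below echoes the proxied identity. Flask is used here for brevity; a Plotly Dash app would read the same headers from its underlying request object. The route and port are assumptions for illustration.

from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def whoami():
    # Headers injected by the LabKey proxy servlet on each proxied request
    user_id = request.headers.get("X-LKPROXY-USERID")
    email = request.headers.get("X-LKPROXY-EMAIL")
    roles = request.headers.get("X-LKPROXY-SITEROLES")
    api_key = request.headers.get("X-LKPROXY-APIKEY")

    # The API key could be passed to a LabKey client library to call back
    # into the server as this user; here we only display the identity.
    return f"Hello {email} (user #{user_id}); site roles: {roles}"

if __name__ == "__main__":
    app.run(port=8050)  # 8050 matches the default Plotly Dash port noted above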
Resolve Pathname Prefix
Developers of target web applications must ensure that the pages that are returned include links that resolve through the proxy name, otherwise, subsequent requests will bypass the proxy. One possible symptom of an incorrect path link would be an error like "The resource from [URL] was blocked due to MIME type ("text/html") mismatch (X-Content-Type-Options: nosniff)".
The framework may need to adjust this path to make links resolve correctly as relative to the proxy. For example, a Python script would need the following section:
app.config.update({
    # Configure the path route relative to LabKey's proxy server path
    'routes_pathname_prefix': './',
    'requests_pathname_prefix': './'
})
In older versions of the proxy servlet, you may have needed to explicitly add the name of the proxy servlet ("dash" in this example) to both the routes_pathname_prefix and requests_pathname_prefix. See this example in the documentation archives. If you see a doubled occurrence of the servlet name ("/dash/dash/") you may need to adjust this section of your Python script to remove the extra "dash" as shown in the example on this page.
Delete Servlet Configurations
To delete a servlet configuration, click Delete for the row in the Existing Proxy Servlets panel.
Note that you cannot directly edit an existing servlet configuration. To change one, delete it and create a new one with the updated target URI.
Events that happen anywhere on an instance of LabKey Server are logged for later use by administrators who may need to track down issues or document what has occurred for compliance purposes. Different types of events are stored in separate log categories on the main Audit Log. In addition, there are other locations where system and module-specific activities are logged.
The main Audit Log is available to site administrators from the Admin Console. Users with the site-wide role "Troubleshooter" can also read the audit log.
Select (Admin) > Site > Admin Console.
Under Management, click Audit Log.
Use the dropdown to select the specific category of logged events to view. See below for detailed descriptions of each option.
The categories for logged events are listed below. Some categories are associated with specific modules that may not be present on your server.
Assay/Experiment events: Assay run import and deletion, assay publishing and recall.
Attachment events: Adding, deleting, and downloading attachments on wiki pages and issues.
Authentication settings events: Information about modifications to authentication configurations and global authentication settings.
Data Transaction events: Basic information about certain types of transactions performed on data, such as insert/update/delete of sample management data.
Dataset events: Inserting, updating, and deleting dataset records. QC state changes.
For dataset updates, the audit log will record only the fields that have changed. Both values, "before" and "after" the merge are recorded in the detailed audit log.
Note that dataset events that occurred prior to upgrading to the 20.11 release will show all key/value pairs, not only the ones that have changed.
Domain events: Data about creation, deletion, and modification of domains: i.e. data structures with sets of fields like lists, datasets, etc.
Domain property events: Records per-property changes to any domain. If a user changed 3 properties in a given domain, the "Domain events" log would record one event for changes to any properties; the "Domain property events" log would show all three events. The comment field of a domain property event shows the values of the attributes of the property that were created or changed, such as PHI level, URL, dimension, etc.
File batch events: Processing batches of files.
File events: Changes to a file repository.
Flow events: Information about keyword changes in the flow module.
Group events: The following group-related events are logged:
Administrator created a group.
Administrator deleted a group.
Administrator added a user or group to a group.
Administrator removed a user or group from a group.
Administrator assigned a role to a user or group.
Administrator unassigned a role from a user or group.
Administrator renamed a group.
Administrator configured a container to inherit permissions from its parent.
Administrator configured a container to no longer inherit permissions from its parent.
Inventory Events: Events related to freezer inventory locations, boxes, and items.
LDAP Sync Events (Premium Feature): The history of LDAP sync events and a summary of changes made.
Link to Study events: Events related to linking assay and/or sample data to a study.
List events: Creating and deleting lists. Inserting, updating, and deleting records in lists.
Details for list update events show both "before" and "after" values of what has changed.
Logged query events: Shows the SQL query that was submitted to the database. See Compliance: Logging.
Logged select query events: Lists specific columns and identified data relating to explicitly logged queries, such as a list of participant IDs that were accessed, as well as the set of PHI-marked columns that were accessed. See Compliance: Logging.
Logged SQL queries (Premium Feature): SQL queries sent to external data sources configured for explicit logging, including the date, the container, the user, and any impersonation information. See SQL Query Logging.
Message events: Message board activity, such as email messages sent.
Pipeline protocol events: Changes to pipeline protocols.
Project and Folder events: Creation, deletion, renaming, and moving of projects and folders. Changes to file roots are also audited.
Query export events: Query exports to different formats, such as Excel, TSV, and script formats.
Query update events: Changes made via SQL queries, such as inserting and updating records using the query. Note that changes to samples are not recorded in the query update events section; they are tracked in sample timeline events instead.
Sample timeline events: Events occurring for individual samples in the system. Creation, update, storage activity, and check in/out are all logged.
Sample Type events: Summarizes events including Sample Type creation and modification.
Samples workflow events: Events related to jobs and tasks created for managing samples in Sample Manager and Biologics.
Search: Text searches requested by users and indexing actions.
Signed Snapshots: Shows the user who signed, i.e. created, the signed snapshot and other details. For a full list, see Electronic Signatures / Sign Data.
Site Settings events: Changes to the site settings made on the "Customize Site" and "Look and Feel Settings" pages.
Study events: Study events, including sharing of R reports with other users.
Study Security Escalations: Audits use of study security escalation.
User events: All user events are subject to a 10-minute timer. For example, the server will skip adding user events to the log if the same user signs in from the same location within 10 minutes of their initial login. If the user waits 10 minutes to log in again, the server will log the event.
User added to the system (via an administrator, self sign-up, LDAP, or SSO authentication).
User verified and chose a password.
User logged in successfully (including the authentication provider used, whether it is database, LDAP, etc).
User logged out.
User login failed (including the reason for the failure, such as the user does not exist, incorrect password, etc.). The IP address from which the attempt was made is logged for failed login attempts. The log only reflects one failed login attempt every 10 minutes; see the Primary Site Log File for more frequent logging. Failed login attempts using API keys are not logged; instead, see the Tomcat access logs.
User changed password.
User reset password.
User login disabled because too many login attempts were made.
Administrator impersonated a user.
Administrator stopped impersonating a user.
Administrator changed a user's email address.
Administrator reset a user's password.
Administrator disabled a user's account.
Administrator re-enabled a user's account.
Administrator deleted a user's account.
Allowing Non-Admins to See the Audit Log
By default, only administrators and troubleshooters can view audit log events and queries. If an administrator would like to grant a non-admin user or group access to read audit log information, they can do so by assigning the role "See Audit Log Events". For details see Security Roles Reference.
Other Logs
Other event-specific logs are available in the following locations:
Site Users History
Go to (Admin) > Site > Site Users, then click History.
All Site Errors
Go to (Admin) > Site > Admin Console > Settings and click View All Site Errors under Diagnostics. Shows the current contents of the labkey-errors.log file from the <CATALINA_HOME>/logs directory, which contains critical error messages from the main labkey.log file.
All Site Errors Since Reset
Go to (Admin) > Site > Admin Console > Settings and click View All Site Errors Since Reset under Diagnostics. View the contents of labkey-errors.log that have been written since the last time its offset was reset through the Reset Site Errors link.
Primary Site Log File
Go to (Admin) > Site > Admin Console > Settings and click View Primary Site Log File under Diagnostics. View the current contents of the labkey.log file from the <CATALINA_HOME>/logs directory, which contains all log output from LabKey Server.
Setting Audit Detail Level
For some table types, you can set the level of auditing detail on a table-by-table basis, determining the level of auditing for insert, update, and delete operations. This ability is supported for:
Lists
Study Datasets
Participant Groups
Sample Types
Inventory (Freezer Management) tables
Workflow
Electronic Lab Notebooks
You cannot use this to set auditing levels for Assay events.
Audit Level Options
Auditing level options include:
NONE - No audit record.
SUMMARY - Audit log reflects that a change was made, but does not mention the nature of the change.
DETAILED - Provides full details on what change was made, including values before and after the change.
When set to detailed, the audit log records the fields changed, and the values before and after. Hover over a row to see a (details) link. Click to see the logged details.
The audit level is set by modifying the metadata XML attached to the table. For details see Query Metadata: Examples.
Write Audit Log to File System
Audit Log messages can be written to the filesystem in addition to being stored in the database. Such additional archiving may be required for compliance with standards and other requirements.
To configure filesystem logging, an administrator edits the log4j2.xml file, which is deployed to the $LABKEY_HOME/build/deploy/labkeywebapp/WEB-INF/classes directory. Note that any other options permitted by log4j can be enabled, including writing the audit log content to the console, using multiple files, etc.
The administrator first creates a log name corresponding to the name of the particular log to record on the filesystem. For example, the GroupAuditEvent log would be written to "labkey.audit.GroupAuditEvent". Within the default log4j2.xml file, there is a template showing how to include the audit logs and the columns to be written to each designated file. See the comments in that file for more detail.
These log messages are not part of any transaction, so even if the writing of the audit event to the database fails, the writing to the filesystem will already have been attempted.
The format of audit messages written to the designated file(s) or other output source is determined by the pattern layout configured for the corresponding appender in log4j2.xml.
Premium Feature — Available in the Professional and Enterprise Editions of LabKey Server. Learn more or contact LabKey.
You can configure external data sources to log details of each SQL query, including:
the user making the query
impersonation information, if any
date and time
the SQL statement used to query the data source
This topic describes how to configure and use SQL Query Logging on an external data source. Note that the labkeyDataSource cannot be configured to log queries in this way. Doing so will cause a warning in the server log at startup; startup will then proceed as normal.
Set Up
To configure an external data source to log queries, add a Parameter element to the labkey.xml file. For example, the following template uses a data source named "mySqlDataSource".
See below for a description of what each pattern code means. Complete documentation on the Tomcat Access Logging can be found at Access Log Valve
Recommended LabKey Server Access Log Format
For production installations of LabKey Server, we recommend the following format. Note that quoting methods may not paste into your console exactly as shown here and below. If you see errors, try using " (double quotes) around the entire string, substituting &quot; for internal double quotes, as shown in the example below:
Reference information below was taken from the Tomcat docs at: Access Log Valve
%a - Remote IP address
%A - Local IP address
%b - Bytes sent, excluding HTTP headers, or ‘-’ if zero
%B - Bytes sent, excluding HTTP headers
%h - Remote host name (or IP address if resolveHosts is false)
%H - Request protocol
%l - Remote logical username from identd (always returns ‘-‘)
%m - Request method (GET, POST, etc.)
%p - Local port on which this request was received
%q - Query string (prepended with a ‘?’ if it exists)
%r - First line of the request (method and request URI)
%s - HTTP status code of the response
%S - User session ID
%t - Date and time, in Common Log Format
%u - Remote user that was authenticated (if any), else ‘-‘
%U - Requested URL path
%v - Local server name
%D - Time taken to process the request, in millis
%T - Time taken to process the request, in seconds
%I - current request thread name (can compare later with stacktraces)
There is also support to write information from the cookie, incoming header, outgoing response headers, the Session or something else in the ServletRequest. It is modeled after the Apache syntax:
%{xxx}i for incoming request headers
%{xxx}o for outgoing response headers
%{xxx}c for a specific request cookie
%{xxx}r xxx is an attribute in the ServletRequest
%{xxx}s xxx is an attribute in the HttpSession
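As an illustration of combining these codes (a generic example pattern, not necessarily the recommended LabKey format referenced above):

%h %l %u %t "%r" %s %b %D %S "%{Referer}i" "%{User-Agent}i"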
Obtaining Source IP Address When Using a Load Balancer
When using a load balancer, such as AWS ELB or ALB, you may notice that your logs are capturing the IP address of the load balancer itself, rather than the user's originating IP address. To obtain the source IP address, first confirm that your load balancer is configured to preserve source IPs:
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
Site administrators can download a zip file bundling diagnostic information about their running LabKey Server. The contents of this file can help LabKey Client Services diagnose server configuration issues and better help you resolve them.
Note: The size of the files included may be very large and take time to compress and download.
Depending on your browser and settings, you may find the downloaded archive is visible on a toolbar at the top or bottom of your browser, or it may be located in a "Downloads" directory on the local machine.
Diagnostic Zip File Contents
LabKey takes your privacy very seriously. In order to serve you better, this file contains detailed information, like log files, that go beyond what is provided in the Admin Console UI. Passwords and other sensitive information are always removed to protect your privacy.
The following information is included in the archive.
Server Information
The contents of /labkey/admin-showAdmin.view. This is the information visible in the Admin Console UI, including what you see on the Server Information tabs.
Log Files
Several sets of log files are included in the archive:
labkey.log, plus any rolled-over log files (labkey.log.1, etc.)
labkey-errors.log, plus any rolled-over log files (labkey-errors.log.1, etc.)
Module Information
The contents of the Module Information tab of the admin console, with each module node expanded to show details, plus the contents of admin-modules.view.
You can see this view in the admin console by selecting Module Information and expanding each module node.
A grid of all modules is available by clicking Module Details.
Usage Metrics
Some modules may post key/value pairs of diagnostic information for use in these reports. These diagnostic metrics will be included in the zip file, but will not be reported to mothership, the exception reporting service built into LabKey.
Thread Dump
A thread dump containing additional potentially useful information is also included. See Collect Debugging Information for more detail.
The Actions option under Diagnostics on the Admin Console allows administrators to view information about the performance of web-based requests of the server. Within the server, an action corresponds to a particular kind of page, such as the Wiki editor, a peptide's detail page, or a file export. It is straightforward to translate a LabKey Server URL to its implementing controller and action. This information can be useful for identifying performance problems within the server.
Summary Tab
The summary tab shows actions grouped within their controller. A module typically provides one or more controllers that encompass its actions and comprise its user interface. This summary view shows how many actions are available within each controller, how many of them have been run since the server started, and the percent of actions within that controller that have been run.
Details Tab
The details tab breaks out the full action-level information for each controller. It shows how many times each action has been invoked since the server started, the cumulative time the server has spent servicing each action, the average time to service a request, and the maximum time to service a request.
Click Export to export all the detailed action information to a TSV file.
Exceptions Tab
This tab will show details about any exceptions that occur.
Caches provide quick access to frequently used data, and reduce the number of database queries and other requests that the server needs to make. Caches are used to improve overall performance by reusing the same data multiple times, at the expense of using more memory. Limits on the number of objects that can be cached ensure a reasonable tradeoff.
Cache Statistics
The Caches option under Diagnostics on the Admin Console allows administrators to view information about the current and previous states of various caches within the server.
The page enumerates the caches that are in use within the server. Each holds a different kind of information, and may have its own limit on the number of objects it can hold and how long they might be stored in the cache.
The Gets column shows how many times that code has tried to get an object from the cache.
The Puts column shows how many times an item has been put in the cache.
Caches that have reached their size limit are indicated separately, and may be good candidates for a larger cache size.
Links at the top of the list let you Clear Caches and Refresh, or simply Refresh the cache statistics. Each cache can also be cleared individually by clicking Clear for its row.
You will see the Names of all the registered loggers, with their current Level on the left, the Parent (if any) and Notes (if any) on the right.
Find a specific logger by typing ahead (any portion of the name) to narrow the list.
Use the Show Level dropdown to narrow the list to all loggers at a given level:
INFO
WARN
ERROR
FATAL
DEBUG
TRACE
ALL
OFF
Change Logging Level
Click the value in the Level column for a specific logger to reveal a dropdown menu for changing its current level. Select a new level and click elsewhere on the page to close the selector. The logger will immediately run at the new level until the server is restarted (or until its level is changed again).
For example, while investigating an issue you may want to set a particular logging path to DEBUG to cause that component to emit more verbose logging. Typically, your Account Manager will suggest the logger(s) and level to set.
Another way to use the adjustability of logging is to lower logging levels to limit the growth of the labkey.log file. For example, if your log is filling with INFO messages from a given logger, and you determine that these messages are benign, 'raising' the threshold for messages being written to the log (such as to ERROR) will reduce overall log size. However, this also means that information that could help troubleshoot an issue will not be retained.
Some loggers provide a brief note about what specific actions or information they will log. Use this information to guide you in raising (or lowering) logging levels as your situation requires.
Troubleshoot with Loggers
If you want enhanced debug logging of a specific operation, the general process will be:
Changing the relevant logging level(s) to "DEBUG" as described above.
Repeat the operation you wanted to have logged at the debugging level.
Check the labkey.log to see all the messages generated by the desired logger(s).
Remember to return your logger to the previous level after resolving the issue; otherwise the debugging messages will continue to fill the labkey.log file.
Scenarios:

Logger | Level | Result in Log
org.apache.jasper.servlet.TldScanner | DEBUG | A complete list of JARs that were scanned but in which no TLDs were found. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
org.labkey.api.files.FileSystemWatcherImpl | DEBUG | Open file system handlers and listeners. For example, whether a file watcher is finding no files, or finding files that have already been 'ingested'.
org.labkey.api.admin.FolderImporterImpl | DEBUG/ALL | Details about folder import, including from file watchers. Will include what type of listener is used for the watcher as well as any events for the configuration.
The Memory Usage page shows information about current memory utilization within the LabKey Server process. This information can be useful in identifying problematic settings or processes. Access it via: (Admin) > Site > Admin Console > Diagnostics > Memory Usage.
Memory Graphs
The top section of the page shows graphs of the various memory spaces within the Java Virtual Machine, including their current utilization and their maximum size. The Heap and Metaspace sections are typically the most likely to hit their maximum size.
Total Heap Memory
Total Non-heap Memory
CodeHeap 'non-nmethods' Non-heap memory
Metaspace Non-heap memory
CodeHeap 'profiled nmethods'
Compressed Class Space Non-heap memory
G1 Eden Space Heap memory
G1 Old Gen Heap memory
G1 Survivor Space Heap memory
CodeHeap 'non-profiled nmethods' Non-heap memory
Buffer pool mapped
Buffer pool direct
Buffer pool mapped - 'non-volatile memory'
Memory Stats
Detailed stats about the pools available (shown above), including their initial size, amount used, amount committed, and max setting. This section also includes a list of system property names and their values.
Loaded Class Count
Unloaded Class Count
Total Loaded Class Count
VM Start Time
VM Uptime
VM Version
VM Classpath
Thread Count
Peak Thread Count
Deadlocked Thread Count
G1 Young Generation GC count
G1 Young Generation GC time
G1 Old Generation GC count
G1 Old Generation GC time
CPU count
Total OS memory
Free OS memory
OS CPU load
JVM CPU load
In-Use Objects
When the server is running with Java asserts enabled (via the -ea parameter on the command-line), the bottom of the page will show key objects that are tracked to ensure that they are not causing memory leaks. This is not a recommended configuration for production servers.
Links
The links at the top of the page allow an administrator to clear caches and garbage collect (gc):
Clear caches, gc and refresh
Gc and refresh
Refresh
Garbage collection will free memory claimed by unused objects. This can be useful to see how much memory is truly in use at a given point in time.
LabKey develops, tests, and deploys using the default Java garbage collector. Explicitly specifying an alternative garbage collector is not recommended.
LabKey Server monitors the queries that it runs against the underlying database. For performance and memory usage reasons, it does not retain every query that has ever run, but it attempts to hold on to the most recently executed queries, the longest-running queries, and the most frequently run queries. In the vast majority of cases, all of the queries of interest are retained and tracked.
This information can be very useful for tracking down performance problems caused by slow-running database queries. You can also use the query profiler to see details about any queries running on your server.
Queries Executed within HTTP Requests: Including Query Count, Query Time, Queries per Request, Query Time per Request, Request Count.
Queries Executed Within Background Threads: Including Query Count and Query Time.
Total Unique Queries
Server Uptime
All statistics here can be reset by clicking Reset All Statistics. Export the information to a TSV file by clicking Export.
Below the statistics, you'll see a listing of the top 270 unique queries with the highest number of invocations. Sort this list of queries by clicking the column headers:

Column Name | Description
Count | The number of times that the server has executed the query since it started, or since statistics were reset.
Total | The aggregate time of all of the invocations of the query, in milliseconds.
Avg | The average execution time, in milliseconds.
Max | The longest execution time for the query, in milliseconds.
SQL | The query itself. Note that this is the actual text of the query that was passed to the database via the JDBC driver. It may contain substitution syntax.
Troubleshoot Performance
To troubleshoot performance of an action, script, or application, particularly if you are not sure which query might be causing a slowdown, you can Reset All Statistics here, then return to execute the action(s) of interest. When you return to this page, the statistics will report a more focused view of the recent actions.
Click the Max column header to see the queries that took the most time. You can also examine any traces to find out more about particular query invocations.
Traces
In addition to performance issues, query profiling can help identify possible problems with many actions and queries. To trace an action, have two browser windows open:
In one, Reset All Statistics on the query page.
In the other, execute only the action you want to trace.
Back in the first browser, refresh and examine the list of queries that just occurred.
Clicking on a link in the Traces column will show a details page. It includes the raw text of the query, as well as one example of actual parameter values that were passed. Note that other invocations of the query may have used other parameter values, and that different parameter values can have significant impact on the runtime performance.
Either panel of trace details can be saved for analysis or troubleshooting by clicking Copy to Clipboard and pasting into a text file.
Below the trace details, you can click Show Execution Plan to get the execution plan of the query as reported by the database itself.
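To make the substitution syntax concrete, here is a minimal JDBC sketch (the connection settings and table name are hypothetical, for illustration only) of the kind of parameterized statement the profiler records. The statement text appears in the grid with its "?" placeholders, while the Traces detail page shows one example of the actual parameter values:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ProfilerQueryExample
{
    public static void main(String[] args) throws SQLException
    {
        // Hypothetical connection settings and table; for illustration only.
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost/labkey", "labkey", "PASSWORD");
             PreparedStatement ps = c.prepareStatement(
                "SELECT ParticipantId, Created FROM study.Demographics WHERE ParticipantId = ?"))
        {
            ps.setString(1, "PT-101");  // one example of an actual parameter value
            try (ResultSet rs = ps.executeQuery())
            {
                while (rs.next())
                    System.out.println(rs.getString(1) + " " + rs.getTimestamp(2));
            }
        }
        // The profiler lists the statement text with its "?" placeholder;
        // the Traces detail page records example parameter values like "PT-101".
    }
}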
Site Validators can be used to check for proper configuration and the existence of certain objects to ensure an application will run properly, such as:
Required schema objects, such as tables and columns
The existence of required fields in tables
The configuration of expected permissions, such as checking whether guests have permission to read the "home" project.
A validator can either be site level, or scoped to run at a specific container level, i.e. folder scoped. Folder scoped validators can be enabled only in folders where they are needed.
To access and run site validators:
Select (Admin) > Site > Admin Console.
Under Diagnostics, click Site Validation.
Implementation
Any validator should implement SiteValidationProvider or, more likely, extend the subclass SiteValidationProviderImpl. The methods getName() and getDescription() provide the name and description for the validator.
The boolean flag isSiteScope() controls whether the validator is site-scoped. The boolean flag shouldRun() controls whether the validator is applicable to a given container.
The method runValidation() returns a SiteValidationResultList of validation messages that will be displayed on the validation page in the admin console.
The messages can be set at different levels of severity: info, warn, or error. Errors will appear in red on the validation page. There are helper methods on SiteValidationResultList to aid in building the list. To build compound messages, SiteValidationResult behaves much like a StringBuilder, with an append() that returns itself.
Steps
Implement SiteValidationProvider
Implement runValidation():
Instantiate a SiteValidationResultList
For each of your validation steps, call SiteValidationResultList.addInfo(), addWarn() or addError()
In your module's doStartup(), call SiteValidationService.registerProvider() to register your validator
Sample Code
An example validator that checks whether any Guest users have read or edit permissions: PermissionsValidator.java.
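To make these steps concrete, here is a minimal sketch of a folder-scoped validator. It uses the class and method names described above (SiteValidationProviderImpl, SiteValidationResultList, SiteValidationService), but the exact method signatures and package paths may vary between LabKey versions, and the checkRequiredTable() helper is hypothetical, so treat it as an outline rather than a drop-in implementation:

// Package paths are approximate; confirm against your LabKey API version.
import org.labkey.api.admin.sitevalidation.SiteValidationProviderImpl;
import org.labkey.api.admin.sitevalidation.SiteValidationResultList;
import org.labkey.api.data.Container;
import org.labkey.api.security.User;

public class ExampleTableValidator extends SiteValidationProviderImpl
{
    @Override
    public String getName()
    {
        return "Example Table Validator";
    }

    @Override
    public String getDescription()
    {
        return "Checks that a required table exists in the folder.";
    }

    @Override
    public boolean isSiteScope()
    {
        return false;  // folder-scoped: run per container rather than site-wide
    }

    @Override
    public SiteValidationResultList runValidation(Container container, User user)
    {
        SiteValidationResultList results = new SiteValidationResultList();

        if (checkRequiredTable(container, "mymodule", "RequiredTable"))
            results.addInfo("Found required table in " + container.getPath());
        else
            results.addError("Missing table 'mymodule.RequiredTable' in " + container.getPath());  // errors appear in red

        return results;
    }

    // Hypothetical helper: replace with a real lookup via the LabKey schema services.
    private boolean checkRequiredTable(Container container, String schemaName, String tableName)
    {
        return true;
    }
}

Registration would then happen in the module's doStartup(), along the lines of SiteValidationService.registerProvider(new ExampleTableValidator()); the exact way to obtain the service instance depends on the LabKey version.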
The data processing pipeline performs long-running, complex processing jobs in the background. Applications include:
Automating data upload
Performing bulk import of large data files
Performing sequential transformations on data during import to the system
Users can configure their own pipeline tasks, such as configuring a custom R script pipeline, or use one of the predefined pipelines, which include study import, MS2 processing, and flow cytometry analysis.
The pipeline handles queuing and workflow of jobs when multiple users are processing large runs. It can be configured to provide notifications of progress, allowing the user or administrator to respond quickly to problems.
For example, an installation of LabKey Server might use the data processing pipeline for daily automated upload and synchronization of datasets, case report forms, and sample information stored at the lab level around the world. The pipeline is also used for export/import of complete studies when transferring them between staging and production servers.
The Data Pipeline grid displays information about current and past pipeline jobs. You can add a Data Pipeline web part to a page, or view the site-wide pipeline grid:
Select (Admin) > Site > Admin Console.
Under Management, click Pipeline.
The pipeline grid shows a line for each current and past pipeline job. Options:
Click Process and Import Data to initiate a new job.
Use Setup to change file permissions, set up a pipeline override, and control email notifications.
(Grid Views), (Charts and Reports), (Export) grid options are available as on other grids.
Select the checkbox for a row to enable Retry, Delete, Cancel, and Complete options for that job.
Click (Print) to generate a printout of the status grid.
Initiate a Pipeline Job
From the pipeline status grid, click Process and Import Data. You will see the current contents of the pipeline root. Drag and drop additional files to upload them.
Navigate to and select the intended file or folder. If you navigate into a subdirectory tree to find the intended files, the pipeline file browser will remember that location when you return to import other files later.
Click Import.
Delete a Pipeline Job
To delete a pipeline job, click the checkbox for the row on the data pipeline grid, and click (Delete). You will be asked to confirm the deletion.
If there are associated experiment runs that were generated, you will have the option to delete them at the same time via checkboxes. In addition, if there are no usages of files in the pipeline analysis directory when the pipeline job is deleted (i.e., files attached to runs as inputs or outputs), the analysis directory will be deleted from the pipeline root. The files are not actually deleted, but moved to a ".deleted" directory that is hidden from the file browser.
Cancel a Pipeline Job
To cancel a pipeline job, select the checkbox for the intended row and click Cancel. The job status will be set to "CANCELLING/CANCELLED" and execution halted.
Use Pipeline Override to Mount a File Directory
You can configure a pipeline override to identify a specific location for the storage of files for usage by the pipeline.
If you or others wish to be notified when a pipeline job succeeds or fails, you can configure email notifications at the site, project, or folder level. Email notification settings are inherited by default, but this inheritance may be overridden in child folders.
In the project or folder of interest, select (Admin) > Go To Module > Pipeline, then click Setup.
Check the appropriate box(es) to configure notification emails to be sent when a pipeline job succeeds and/or fails.
Check the "Send to owner" box to automatically notify the user initiating the job.
Add additional email addresses and select the frequency and timing of notifications.
In the case of pipeline failure, there is a second option to define a list of Escalation Users.
Click Update.
Site and application administrators can also subscribe to notifications for the entire site.
At the site level, select (Admin) > Site > Admin Console.
Under Management, click Pipeline Email Notification.
Customize Notification Email
You can customize the email notification(s) that will be sent to users, with different templates for failed and successful pipeline jobs. Learn more in this topic:
In addition to the standard substitutions available, custom parameters available for pipeline job emails are:
Parameter Name | Type | Format | Description
dataURL | String | Plain | Link to the job details for this pipeline job
jobDescription | String | Plain | The job description
setupURL | String | Plain | URL to configure the pipeline, including email notifications
status | String | Plain | The job status
timeCreated | Date | Plain | The date and time this job was created
userDisplayName | String | Plain | Display name of the user who originated the action
userEmail | String | Plain | Email address of the user who originated the action
userFirstName | String | Plain | First name of the user who originated the action
userLastName | String | Plain | Last name of the user who originated the action
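For instance, a custom notification template body might reference these parameters. The snippet below is a hypothetical sketch assuming the caret-delimited substitution syntax used in LabKey email templates; it is not a shipped template:

Pipeline job "^jobDescription^" finished with status: ^status^.
Originated by ^userDisplayName^ (^userEmail^) at ^timeCreated^.
Job details: ^dataURL^
Notification settings: ^setupURL^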
Escalate Job Failure
Once Escalation Users have been configured, these users can be notified from the pipeline job details view directly using the Escalate Job Failure button. Click the ERROR status from the pipeline job log, then click Escalate Job Failure.
Related Topics
MS2: Pipeline Upload of MS2 Files - You can use the LabKey data pipeline to search and process LC-MS/MS run data stored in an mzXML file. You can also process pepXML files, which are stored results from a search for peptides on an mzXML file against a protein database. Results are displayed by the MS2 module for analysis.
Pipeline Protocols - Pipelines making use of analysis protocols can offer users a selection of upload protocols.
Set a Pipeline Override
The LabKey data processing pipeline allows you to process and import data files with tools we supply, or with tools you build on your own. You can set a pipeline override to allow the data processing pipeline to operate on files in a preferred, pre-existing directory instead of the directory where LabKey ordinarily stores files for a project. Note that you can still use the data processing pipeline without setting up a pipeline override if the system's default locations for file storage are sufficient for you.
A pipeline override is a directory on the file system accessible to the web server where the server can read and write files. Usually the pipeline override is a shared directory on a file server, where data files can be deposited (e.g., after MS/MS runs). You can also set the pipeline override to be a directory on your local computer.
Before you set the pipeline override, you may want to think about how your file server is organized. The pipeline override directory is essentially a window into your file system, so you should make sure that the directories beneath the override directory will contain only files that users of your LabKey system should have permissions to see. On the LabKey side, subfolders inherit pipeline override settings, so once you set the override, LabKey can upload data files from the override directory tree into the folder and any subfolders.
Single Machine Setup
These steps will help you set up the pipeline, including an override directory, for usage on a single computer. For information on setup for a distributed environment, see the next section.
Select (Admin) > Go to Module > Pipeline.
Click Setup. (Note: you must be a Site Administrator to see the Setup option.)
You will now see the "Data Processing Pipeline Setup" page.
Select Set a pipeline override.
Specify the Primary Directory from which your dataset files will be loaded.
Click the Searchable box if you want the pipeline override directory included in site searches. By default, the materials in the pipeline override directory are not indexed.
For MS2 Only, you have the option to include a Supplemental Directory from which dataset files can be loaded. No files will be written to the supplemental directory.
MS2 projects that set a pipeline override can specify a supplemental, read-only directory, which can be used as a repository for your original data files. If a supplemental directory is specified, LabKey Server will treat both directories as sources for input data to the pipeline, but it will create and change files only in the first, primary directory.
Note that UNC paths are not supported for pipeline roots here. Instead, create a network drive mapping configuration via (Admin) > Site > Admin Console > Settings > Configuration > Files. Then specify the letter mapped drive path as the supplemental file location.
Set Pipeline Files Permissions (Optional)
By default, pipeline files are not shared. To allow pipeline files to be downloaded or updated via the web server, check the Share files via web site checkbox. Then select appropriate levels of permissions for members of global and project groups.
When setting up a single machine for MS2 runs, use the Supplemental File Location to have the pipeline read files from an additional data source directory. Other MS2-specific options include:
The FASTA root is the directory where the FASTA databases that you will use for peptide and protein searches against MS/MS data are located. FASTA databases may be located within the FASTA root directory itself, or in a subdirectory beneath it.
To configure the location of the FASTA databases used for peptide and protein searches against MS/MS data:
On the MS2 Dashboard, click Setup in the Data Pipeline web part.
Under MS2 specific settings, click Set FASTA Root.
By default, the FASTA root directory is set to point to a /databases directory nested in the directory that you specified for the pipeline override. However, you can set the FASTA root to be any directory that's accessible by users of the pipeline.
Click Save.
Selecting the Allow Upload checkbox permits users with admin privileges to upload FASTA files to the FASTA root directory. If this checkbox is selected, the Add FASTA File link appears under MS2 specific settings on the data pipeline setup page. Admin users can click this link to upload a FASTA file from their local computer to the FASTA root on the server.
If you prefer to control what FASTA files are available to users of your LabKey Server site, leave this checkbox unselected. The Add FASTA File link will not appear on the pipeline setup page. In this case, the network administrator can add FASTA files directly to the root directory on the file server.
By default, all subfolders will inherit the pipeline configuration from their parent folder. You can override this if you wish.
When you use the pipeline to browse for files, it will remember where you last loaded data for your current folder and bring you back to that location. You can click on a parent directory to change your location in the file system.
Set X! Tandem, Sequest, or Mascot Defaults for Searching Proteomics Data
You can specify default settings for X! Tandem, Sequest or Mascot for the data pipeline in the current project or folder. On the pipeline setup page, click the Set defaults link under X! Tandem specific settings, Sequest specific settings, or Mascot specific settings.
The default settings are stored at the pipeline override in a file named default_input.xml. These settings are copied to the search engine's analysis definition file (named tandem.xml, sequest.xml or mascot.xml by default) for each search protocol that you define for data files beneath the pipeline override. The default settings can be overridden for any individual search protocol. See Search and Process MS2 Data for information about configuring search protocols.
Setup for Distributed Environment
The pipeline that is installed with a standard LabKey installation runs on a single computer. Since the pipeline's search and analysis operations are resource-intensive, the standard pipeline is most useful for evaluation and small-scale experimental purposes.
For institutions performing high-throughput experiments and analyzing the resulting data, the pipeline is best run in a distributed environment, where the resource load can be shared across a set of dedicated servers. Setting up the LabKey pipeline to leverage distributed processing demands some customization as well as a high level of network and server administrative skill. If you wish to set up the LabKey pipeline for use in a distributed environment, contact LabKey.
Pipeline protocols are used to provide additional parameters or configuration information to some types of pipeline imports. One or more protocols can be defined and associated with a given pipeline import process by an administrator and the user can select among them when importing subsequent runs.
As the list of available protocols grows, the administrator can archive outdated protocols, making them no longer visible to users. No data or other artifacts are lost when a protocol is archived; it simply no longer appears in the selection dropdown. The protocol definition itself is also retained so that the archiving process can be reversed, making older protocols available to users again.
Analysis protocols are defined during import of a file and can be saved for future use in other imports. If you are not planning to save the protocol for future use, the name is optional.
In the Data Pipeline web part, click Process and Import Data.
Select the file(s) to import and click Import Data.
In the popup, select the specific import pipeline to associate with the new protocol and click Import.
On the next page, the Analysis Protocol pulldown lists any existing protocols. Select "<New Protocol>" to define a new protocol.
Enter a unique name and the defining parameters for the protocol.
Check the box to "Save protocol for future use."
Continue to define all the protocols you want to offer your users.
You may need to delete the imported data after each definition to allow reimport of the same file and definition of another new protocol.
An example walkthrough of the creation of multiple protocol definitions can be found in the NLP Pipeline documentation.
Manage Protocols
Add a Pipeline Protocols web part. All the protocols defined in the current container will be listed. The Pipeline column shows the specific data import pipeline where the protocol will be available.
Click the name of any protocol to see the saved xml file, which includes the parameter definitions.
Select one or more rows and click Archive to set their status to "Archived". Archived protocols will not be offered to users uploading new data. No existing data uploads (or other artifacts) will be deleted when you archive a protocol. The protocol definition itself is also preserved intact.
Protocols can also be returned to the list available to users by selecting the row and clicking Unarchive.
Use Protocols
A user uploading data through the same import pipeline will be able to select one of the currently available and unarchived protocols from a dropdown.
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
File Watchers monitor directories on the file system and perform specific actions when desired files appear. When new or updated files appear in a monitored directory, a specified pipeline task (or set of tasks) will be triggered. Multiple file watchers can be set up to monitor a single directory for file changes, or each file watcher could watch a different location.
Each File Watcher can be configured to be triggered only when specific file name patterns are detected, such as watching for '.xlsx' files. Use caution when defining multiple file watchers to monitor the same location. If file name patterns are not sufficiently distinct, you may encounter conflicts among file watchers acting on the same files.
When files are detected, by default they are moved (not copied) to the LabKey folder's pipeline root where they are picked up for processing. You can change this default behavior and specify that the files be moved to a different location during configuration of the file watcher.
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
File Watchers let administrators set up the monitoring of directories on the file system and perform specific actions when desired files appear. This topic outlines the process of creating file watch triggers.
The two panels of the Create Pipeline Trigger wizard define a file watcher. Configuration options and on-screen guidance may vary for each task type.
Details
Name: A unique name for the trigger.
Description: A description for the trigger.
Type: Currently supports one value 'pipeline-filewatcher'.
Pipeline Task: The type of filewatcher task you want to create. By default the option you clicked to open this wizard is selected but you can change this selection from available options on the dropdown menu.
Run as username: The file watcher will run as this user in the pipeline. It is strongly recommended that this user have elevated permissions to perform updates, deletes, etc. For example, an elevated "service account" could be used.
Assay Provider: Use this provider for running assay import runs.
Enabled: Turns on detection and triggering.
Click Next to move to the next panel.
Configuration
Location to Watch: File location to watch for uploadable files. Confirm that this directory exists before saving the file watcher configuration.
This can be a path relative to the local container's pipeline root, beginning with "./". Use "." to indicate the pipeline root directory itself (or you could also enter the matching full path). Users with "Folder Admin" or higher can watch locations relative to the container root.
Site and Application Admins can watch an absolute path on the server's file system (beginning with a "/"), or a location outside the local filesystem, such as on the local machine or a networked drive.
Note that if you use the root location here, you should also set a Move to Container location to avoid a potential loop when the system tries to make a copy to the same location "before analysis".
Include Child Folders: A boolean indicating whether to seek uploadable files in subdirectories (currently to a max depth of 3) of the Location to Watch you specified.
File Pattern: A Java regular expression that captures filenames of interest and can extract and use information from the filename to set other properties. We recommend using a regex interpreter, such as https://regex101.com/, to test the behavior of your file pattern. Details are in this topic: File Watcher: File Name Patterns. If no pattern is provided, the default pattern is used:
(^\D*)\.(?:tsv|txt|xls|xlsx)
Note that this default pattern does not match file names that include numbers. For example, "AssayFile123.xls" will not be matched.
Quiet Period (Seconds): Number of seconds to wait after file activity before executing a job (minimum is 1). If you encounter conflicts, particularly when running multiple file watchers monitoring the same location, try increasing the quiet period.
Move to Container: Move the file to this container before analysis. This must be a relative or absolute container path.
If this field is blank, and the watched file is already underneath a pipeline root, then it will not be moved.
If this field is blank but the watched file is elsewhere, it will be moved to the pipeline root of the current container.
You must have at least Folder Administrator permissions in the folder where files are being moved to.
Move to Subdirectory: Move the file to this directory under the destination container's pipeline root.
Leaving this blank will default to the pipeline root.
You must have at least Folder Administrator permissions in the folder where files are being moved to.
Copy File To: Where the file should be copied to before analysis. This can be absolute or relative to the current project/folder. You must have at least Folder Administrator permissions in the folder where the files are being copied to. For example, an absolute file path to the Testing project:
/labkey/labkey/files/Testing/@files
Action: When importing data, the following import behaviors are available. Different options are available depending on the file watcher task and the type of target table. Lists support 'merge' and 'replace' but not 'append', etc.
'Merge' inserts new rows and updates existing rows in the target table.
'Append' adds incoming data to the target table.
'Replace' deletes existing rows in the target table and imports incoming data into the empty table.
Allow Domain Updates: When updating lists and datasets, by default, the target data structure will be updated to reflect new columns in the incoming list or dataset. Any columns missing from the incoming data will be dropped (and their data deleted). To override this behavior, uncheck the Allow Domain Updates box to retain the column set of the existing list or dataset.
Import Lookups by Alternate Key: If enabled, the server will try to resolve lookups by values other than the target's primary key. For details see Populate a List.
Show Advanced Settings: Click the symbol to add custom functions and parameters.
Parameter Function: Include a JavaScript function to be executed during the move. (See example here.)
Add Custom Parameter: These parameters will be passed to the chosen pipeline task for consumption in addition to the standard configuration. Some pipeline tasks have specific custom parameters available. For details, see File Watcher Tasks.
Click Save when finished.
Manage File Watchers
Navigate to the folder where you want to manage file watchers.
Select (Admin) > Folder > Management. Click the Import tab and scroll down.
In a study folder, you can instead click the Manage tab, then click Manage File Watchers.
Click Manage File Watcher Triggers.
All the file watchers defined in the current folder will be listed.
Hover to expose a (pencil) icon for editing.
Select a row and click the (delete) icon to delete.
Developing and Testing File Watchers
When iteratively developing a file watcher, it is useful to reset the timestamp on the source data file so that the file watcher is triggered again. To reset timestamps on all files in a directory, use a Windows command like the following, which rewrites each file in place and updates its modification time:
C:\testdata\datasets> copy /b *.tsv +,,
Common Configuration Issues
Problem: File Watcher Does Not Trigger, But No Error Is Logged
When the user provides a file in the watched directory, nothing happens. The failure is silent, and no error is shown or logged.
If you see this behavior, there are two common culprits:
1. The provided file does not match the file pattern. Confirm that the file name matches the regex file pattern. Use a regex interpreter service such as https://regex101.com/ to test the behavior.
2. The file watcher was initially configured to watch a directory that does not exist. To recover from this: create a new directory to watch, edit Location to Watch to point to this directory, and re-save the file watcher configuration.
Reload Study
To reload a study, use this file watcher, providing an unzipped folder archive containing the study objects as well as the folder.xml file and any other folder objects needed.
Import Samples From Data File
This option supports importing Sample data into a specified Sample Type. A few key options on the Configuration panel are described here.
File Pattern
You can tell the trigger which Sample Type the imported data belongs to by using one of these file name capture methods:
<name>: the text name of the Sample Type, for example "BloodVials".
<id>: The integer system id of the Sample Type, for example, "330". To find the system id, go to the Sample Types web part and click a Sample Type; the URL will show the id as a parameter named 'RowId'.
For example, a File Pattern using the name might look like:
Sample_(?<name>.+)_.(xlsx|tsv|xls)
...which would recognize the following file name as targeting a Sample Type named "BloodVials":
Sample_BloodVials_.xls
If the target Sample Type does not exist, the filewatcher import will fail.
Action: Merge or Append
Import behavior into Sample Types has two options, Merge or Append.
Merge: When an incoming field contains a value, the corresponding value in the Sample Type will be updated. When a field in the imported data has no value (an empty cell), the corresponding value in the Sample Type will be deleted.
Append: The incoming data file will be inserted as new rows in the Sample Type. The operation will fail if there are existing sample ids that match those being imported.
Import Lookups by Alternate Key
Some sample fields, including the built in Status field, are structured as lookups. When a filewatcher encounters a value other than the primary key for such a lookup, it will only resolve if you check the box to Import Lookups by Alternate Key.
For example, if you see an error about the inability to convert the "Available (String)" value, you can either:
Edit your spreadsheet to provide the rowID for each "Status" value, OR
Edit your filewatcher to check the Import Lookups by Alternate Key box.
Import Assay Data from a File
Currently only Standard assay designs are supported, under the General assay provider. Multi-run files and run re-imports are not supported by the file watcher.
The following file formats are supported (note that .txt files are not supported):
.xls, .xlsx, .csv, .tsv, .zip
The following assay data and metadata are supported by the file watcher:
result data
batch properties/metadata
run properties/metadata
plate properties/metadata
If only result data is being imported, you can use a single tabular file.
If additional run metadata is being imported, you can use either a zip file format or an Excel multi-sheet format. In the zip format, the system determines the data type (results, run metadata, etc.) from the names of the files; in the multi-sheet format, it matches based on the sheet names. The sheet names don't need to be in any particular order. The following matching criteria are used:
Data Type | For zipped files, use file name | For multi-sheet Excel, use sheet name
batch properties | batchProperties.(tsv, csv, xlsx, xls) | batchProperties
run properties | runProperties.(tsv, csv, xlsx, xls) | runProperties
results data | results.(tsv, csv, xlsx, xls) | results
plate metadata | plateMetadata.json | not supported
For example, a single multi-sheet Excel file could place the results data and the run properties fields on different sheets, named as in the table above.
The assay provider (currently only General is supported) and protocol can be specified in the file watcher configuration. This is easier to configure than binding to the protocol using a regular expression named capture group.
<name>: the assay protocol name (for example, MyAssay)
<id>: the system id of the target assay (an integer)
For example, this file name pattern:
assay_(?<name>.+)_.(xlsx|tsv|xls|zip)
will interpret the following file name as targeting an assay named "MyAssay":
assay_MyAssay_.xls
If the target assay does not exist, the filewatcher import will fail.
If there is no name capture group in the file pattern and there is a single assay protocol in the container, the system attempts to import into that single assay.
The following example file pattern uses the protocol ID instead of the assay name:
assayProtocol_(?<id>.+)_.(xlsx|tsv|xls|zip)
which will interpret this file as targeting the assay with protocol ID 308:
assayProtocol_308_.tsv
Reload Lists Using Data File
This option is available in any folder type, provided the list module has been enabled. It imports data to existing lists from source files in either Excel (.xls/.xlsx) or TSV (.tsv) formats. It can also infer non-key column changes. Note that this task cannot create a new list definition: the list definition must already exist on the server.
You can reload lists from files in S3 storage by enabling an SQS Queue and configuring cloud storage to use in your local folder. Learn more in this topic:
This option is available in any folder type. It moves and/or copies files around the server without analyzing the contents of those files.
Import/Reload Study Datasets Using Data File
This option is available in a study folder. It loads data into existing study datasets and it infers/creates datasets if they don't already exist. Source data can be in TSV, Excel, or text files.
Import Specimen Data Using Data File
This option is only available in study folders with the specimen module enabled.
This file watcher type accepts specimen data in both .zip and .tsv file formats:
.zip: The specimen archive zip file has a .specimens file extension.
.tsv: An individual specimens.tsv file which will typically be the simple specimen format and contain only vial information. This file will have a # specimens comment at the top.
By default, specimen data imported using this file watcher will be set to replace existing data. To merge instead, set the custom property "mergeSpecimen" to true.
Import flow files to the flow module. This type of file watcher is only available in Flow folders. It supports a process where FCS flow data is deposited in a common location by a number of users. It is important to note that each data export must be placed into a new separate subdirectory of the watched folder. Once a subfolder has been 'processed', adding new files to it will not trigger a flow import.
When the File Watcher finds a new subdirectory of FCS files, they can be placed into a new location under the folder pipeline root based on the current user and date. Example: @pipeline/${username}/${date('YYYY-MM')}. LabKey then imports the FCS data to that container. All FCS files within a single directory are imported as a single experiment run in the flow module.
One key attribute of a flow filewatcher is ensuring that you set a long enough Quiet Period. When the folder is first created, the file watcher waits the specified quiet period before processing files. This interval must be long enough for all of the files to be uploaded; otherwise the file watcher will only import the files that exist at the end of the quiet period. For example, if you set a 1 minute quiet period but have an 18 file FCS folder (such as in our tutorial example), only 14 files might be uploaded at the end of the minute, so only those 14 will be imported into the run. In situations where uploads take considerable time, you may prefer to keep using a manual upload and import process to avoid the possibility of incomplete runs.
Add custom parameters on the Configuration panel by first expanding the Show Advanced Settings section. Click Add Custom Parameter to add each one; use the delete icon to remove a parameter.
allowDomainUpdates
This parameter used in earlier versions has been replaced with the checkbox option to Allow Domain Updates on the Configuration panel for the tasks 'Reload Lists Using Data File' and 'Import/Reload Study Datasets Using Data File'.
When updating lists and datasets, by default, the columns in the incoming data will overwrite the columns in the existing list or dataset. This means that any new columns in the incoming data will be added to the list and any columns missing from the incoming data will be dropped (and their data deleted).
To override this behavior, uncheck the Allow Domain Updates box to retain the column set of the existing list or dataset.
default.action
The "default.action" parameter accepts text values of either : replace or append, the default is replace. This parameter can be used to control the default Action for the trigger, which may also be more conveniently set using the Action options on the Configuration panel.
mergeData
This parameter can be included to merge data, with the value set to either true or false. The default is false (replace); for existing configurations where no parameter was provided, it is interpreted as false/replace.
By default, specimen data imported using the 'Import Specimen Data Using Data File' file watcher will be set to replace existing data. To merge instead, set the property "mergeSpecimen" to true.
skipQueryValidation
'Reload Study' and 'Reload Folder Archive' can be configured to skip query validation by adding a custom parameter to the file watcher named 'skipQueryValidation' and setting it to 'TRUE'. This may be helpful if your file watcher reloads are failing due to unrelated query issues.
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
A script pipeline lets you run scripts and commands in a sequence, where the output of one script becomes the input for the next in the series. The pipeline supports any of the scripting languages that can be configured for the server, including R, JavaScript, Perl, Python, SAS, and others.
With Premium Editions of LabKey Server, script pipelines are available as file watchers, so that they may be run automatically when desired files appear in a watched location.
For example, you might use a "helloworld" module and define two pipelines, named "Generate Matrix and Import into 'myassay'" and "Use R to create TSV file during import".
Next, enable the module containing your pipeline in any folder where you want to be able to use the script pipelines as file watchers.
Once the module is enabled, you will see your script pipelines on the folder management Import tab alongside the predefined tasks for your folder type:
Customize Help Text
The user interface for defining file watchers includes an information box that begins "Fields marked with an asterisk * are required." You can place your own text in that same box after that opening phrase by including a <help> element in your script pipeline definition (*.pipeline.xml file).
<pipeline xmlns="http://labkey.org/pipeline/xml" name="hello" version="0.0">
    <description>Generate Matrix and Import into 'myassay'</description>
    <help>Custom help text to explain this script pipeline.</help>
    <tasks>
        <taskref ref="HelloWorld:task:hello"/>
    </tasks>
</pipeline>
Configure File Watcher
As with other file watchers, give it a name and provide the Details and Configuration information using the Create Pipeline Trigger wizard. See Create a File Watcher for details.
Prevent Usage of Script Pipeline with File Watcher
If you want to disable usage of a specific script pipeline as a file watcher entirely, set the allow attribute of the triggerConfiguration element to false in your *.pipeline.xml file. The syntax would look like:
<pipeline xmlns="http://labkey.org/pipeline/xml" name="hello2" version="0.0">
    <description>Use R to create TSV file during import</description>
    <tasks>
        <taskref ref="HelloWorld:task:hello"/>
    </tasks>
    <triggerConfiguration allow="false"/>
</pipeline>
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
File Watchers let administrators set up the monitoring of directories on the file system and perform specific actions when desired files appear. This topic covers using file name selection patterns to define which files trigger the watcher. Use file patterns in conjunction with the target container path to take action on specific files.
If no File Pattern is supplied, the default pattern is used:
(^\D*)\.(?:tsv|txt|xls|xlsx)
This pattern matches only file names that contain letters and special characters (for example: Dataset_A.tsv). File names which include digits (for example: Dataset_1.tsv) are not matched, and their data will not be loaded.
File Name | File Watcher Behavior
FooStudy_Demographics.tsv | File matched, data loaded into dataset FooStudy_Demographics.
FooStudy_LabResults.tsv | File matched, data loaded into dataset FooStudy_LabResults.
BarStudy_Demographics.tsv | No file match, data will not be loaded.
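Since file patterns are Java regular expressions, you can confirm the digit-exclusion behavior of the default pattern directly with Java's regex classes. A quick sketch using hypothetical file names:

import java.util.regex.Pattern;

public class DefaultPatternCheck
{
    public static void main(String[] args)
    {
        // The default file watcher pattern shown above.
        Pattern defaultPattern = Pattern.compile("(^\\D*)\\.(?:tsv|txt|xls|xlsx)");
        // Hypothetical file names: the digits in the second name prevent a match.
        for (String name : new String[]{"AssayFile.xls", "AssayFile123.xls"})
            System.out.println(name + " matches: " + defaultPattern.matcher(name).matches());
    }
}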
Name Capture Group Pattern
This type of file pattern extracts names or IDs from the source file name and targets an existing dataset of the same name or id. For example, suppose you have a source file with the following name:
dataset_Demographics_.xls
The following file pattern extracts the value <name> from the file name, in this case the string "Demographics" that occurs between the underscore characters, and loads data into an existing dataset with the same name "Demographics".
dataset_(?<name>.+)_.(xlsx|tsv|xls)
Note that you can use the technique above to target datasets that include numbers in their names. For example, using the pattern above, the following behavior will result.
File Name | File Watcher Behavior
dataset_Demographics_.tsv | File matched, data loaded into dataset Demographics.
datasetDemographics.tsv | No file match, data will not be loaded.
dataset_LabResults1_.tsv | File matched, data loaded into dataset LabResults1.
dataset_LabResults2_.tsv | File matched, data loaded into dataset LabResults2.
To target a dataset by its dataset id, rather than its name, use the following regex, where <id> refers to the dataset id. You can determine a dataset's id by navigating to your study's Manage tab and clicking Manage Datasets; the table of existing datasets shows the id for each dataset in the first column.
dataset_(?<id>.+)_.(xlsx|tsv|xls)
If you want to capture a name from a file with any arbitrary text (both letters and numbers) before the underscore and desired name, use:
.*_(?<name>.+).(xlsx|tsv|xls)
This would match as follows:
File Name | File Watcher Behavior
20220102_Demographics.tsv | File matched, data loaded into dataset Demographics.
LatestUpdate_LabResults.xlsx | File matched, data loaded into dataset LabResults.
2022Update_Demographics.xls | File matched, data loaded into dataset Demographics.
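The same approach verifies a capture-group pattern before you save the file watcher. A minimal sketch using the pattern above (the third file name is a hypothetical non-match):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CaptureGroupPatternCheck
{
    public static void main(String[] args)
    {
        // Capture-group pattern from the example above.
        Pattern pattern = Pattern.compile(".*_(?<name>.+).(xlsx|tsv|xls)");
        String[] fileNames = {
            "20220102_Demographics.tsv",
            "LatestUpdate_LabResults.xlsx",
            "NoUnderscoreHere.tsv"  // hypothetical name with no underscore; will not match
        };
        for (String fileName : fileNames)
        {
            Matcher m = pattern.matcher(fileName);
            if (m.matches())
                System.out.println(fileName + " -> matched; target dataset: " + m.group("name"));
            else
                System.out.println(fileName + " -> no match; data will not be loaded");
        }
    }
}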
Suppose you want to create a set of datasets based on Excel and TSV files, and load data into those datasets. To set this up, do the following:
In the File web part, create a directory named 'watched'. (It is important that you do this before saving the file watcher configuration.)
Prepare your Excel/TSV files to match the expectations of your study, especially, time point-style (date or visit), ParticipantId column name, and time column name. The name of your file should not include any numbers, only letters.
Upload the file into the study's File Repository.
Create a trigger to Import/reload study datasets using data file.
Location to Watch: enter 'watched'.
File Pattern: Leave blank. The default file pattern, (^\D*)\.(?:tsv|txt|xls|xlsx), will be used. Note that this pattern will not match file names which include numbers.
When the trigger is enabled, datasets will be created and loaded in your study.
Field | Value
Name | Load MyStudy
Description | Imports datasets to MyStudy
Type | Pipeline file watcher
Pipeline Task | Import/reload study datasets using data file.
Location to Watch | watched
File Pattern | (leave blank)
Move to container | (leave blank)
Move to subdirectory | (leave blank)
Example: Named Capture Group <study>
Consider a set of data with original filenames matching a format like this:
An example filePattern regular expression that would capture such filenames would be:
sample_(.+)_(?<study>.+)\.tsv
Files that match the pattern are acted upon, such as being moved and/or imported to tables in the server. Nothing happens to files that do not match the pattern.
If the regular expression contains named capturing groups, such as the "(?<study>.+)" portion in the example above, then the corresponding value (in this example "study20") can be substituted into other property expressions, such as the Move to container setting shown in the table below.
This substitution allows the administrator to determine the destination folder based on the name, ensuring that the data is uploaded to the correct location.
Field | Value
Name | Load StudyA
Description | Moves and imports datasets to StudyA
Type | Pipeline file watcher
Pipeline Task | Import/reload study datasets using data file.
Location | .
File Pattern | sample_(.+)_(?<study>.+)\.tsv
Move to container | /studies/${study}/@pipeline/import/${now:date}
Example: Capture the Dataset Name
A file watcher that matches .tsv/.xls files with "StudyA_" prefixed to the file name. For example, "StudyA_LabResults.tsv". Files are moved, and the data imported, to the StudyA folder. The <name> capture group determines the name of the dataset, so that "StudyA_LabResults.tsv" becomes the dataset "LabResults".
Field | Value
Name | Load StudyA
Description | Moves and imports datasets to StudyA
Type | Pipeline file watcher
Pipeline Task | Import/reload study datasets using data file.
Location | .
File Pattern | StudyA_(?<name>.+)\.(?:tsv|xls)
Move to container | StudyA
Move to subdirectory | imported
Example: Capture both the Folder Destination and the Dataset Name
To distribute files like the following to different study folders:
This chart summarizes server-side dependency recommendations for past & current releases, and predictions for upcoming releases.
Chart key:
Do not use: not yet available or tested with this version of LabKey
Recommended: fully supported and thoroughly tested with this version of LabKey
Upgrade ASAP: deprecated and no longer supported with this version of LabKey
Do not use: incompatible with this version of LabKey and/or past end of life (no longer supported by the organization that develops the component)
Releases covered by the chart: LabKey 22.3.x (Mar 2022), 22.7.x (Jul 2022), 22.11.x (Nov 2022), 23.3.x (Mar 2023), and 23.7.x (Jul 2023).
Component | Versions charted (newest to oldest)
Java | 18+; 17 (LTS)
Tomcat | 10.x; 9.0.x
PostgreSQL | 15.x; 14.x; 13.x; 12.x; 11.x; 10.x
Microsoft SQL Server (Premium Feature) | 2022; 2019; 2017; 2016; 2014
See the component sections below for the current recommendation for each version.
Browsers
LabKey Server requires a modern browser for many advanced features, and we recommend upgrading your browser(s) to the latest stable release. As a general rule, LabKey Server supports the latest version of the following browsers:
If you experience a problem with a supported browser, search the support forum for the issue and, if it has not been reported, post the details so we're made aware of it.
Java
We strongly recommend using the latest point release of Eclipse Temurin 17 64-bit (currently 17.0.5+8), the community-supported, production-ready distribution of the Java Development Kit produced by the Adoptium Working Group. LabKey performs all development, testing, and deployment using Eclipse Temurin 17 only. Version 17 is a Long Term Support (LTS) release, meaning it can be used for an extended period, though groups should regularly apply point releases to stay current on security and other important bug fixes. LabKey does not support JDK 18 and higher.
You must run the Java 17 JVM with special flags to allow certain libraries to function properly:
--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.desktop/java.awt.font=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED
(Note that standard startup scripts in the most recent Tomcat versions add these flags automatically.) We will upgrade the libraries as soon as they fully support the changes made in JEP 396.
LabKey Server has not been tested with other Java distributions such as Oracle OpenJDK, Oracle commercial Java SE, Amazon Corretto, Red Hat, Zulu, OpenJ9, etc.
Apache Tomcat
LabKey requires Apache Tomcat 9.0.x; we strongly recommend upgrading to the latest point release (currently 9.0.70).
LabKey no longer supports Tomcat 8.5.x and earlier. LabKey does not yet support Tomcat 10.0.x or 10.1.x.
We recommend installing Tomcat using the binary distributions; if a package manager is used, the lib directory may be installed in a different location. Also, the packages sometimes include alternative versions of some components (like JDBC connection pools) that can cause incompatibilities.
We recommend not using the Apache Tomcat Native library; this library can interfere with SSL and prevent server access via LabKey's client libraries.
PostgreSQL
For installations using PostgreSQL as the primary database, we recommend using the latest point release of PostgreSQL 14.x (currently 14.6). We have also tested the recently released PostgreSQL 15.x and have found no major issues with it.
For those who can't transition to 14.x yet, LabKey continues to support PostgreSQL 13.x, 12.x, and 11.x, though here also we strongly recommend installing the latest point release (currently 13.9, 12.13, and 11.18) to ensure you have all the latest security, reliability, and performance fixes. Note that PostgreSQL 10.x has reached end-of-life and is no longer maintained by the PostgreSQL team; if using 10.x, we strongly recommend that you upgrade to a current version immediately. Support for PostgreSQL 10.x will be removed in LabKey Server 22.12.
PostgreSQL provides instructions for how to upgrade your installation, including moving your existing data.
Microsoft SQL Server (Premium Feature)
Premium Editions of LabKey Server have the option of using Microsoft SQL Server databases as the primary database and as external data sources. For these installations, we recommend using Microsoft SQL Server 2019, which we've tested on both Windows and Linux. We have also tested the recently released SQL Server 2022 and found no major issues with it.
LabKey continues to support SQL Server 2017, 2016, and 2014. LabKey does not support SQL Server 2012 or earlier releases.
Linux
LabKey does not recommend a specific version of Linux, but you will need to use one that supports the versions of Tomcat, Java, your database, and any other necessary components for your version of LabKey Server.
Previous Version
To step back to this topic in the documentation archives for the previous release, click here.
Example Hardware/Software Configurations
This topic shows example hardware/software configurations for different LabKey Server installations. These are intended as guidelines only -- your own configuration should be adjusted to suit your particular requirements.
Small Laboratory Installation
The following configuration is appropriate for 10-20 users with small file and table sizes. We assume that the server and database are located on the same machine.
CPUs | 2+ CPUs or virtual CPUs
RAM | 4GB (minimum) to 16GB (recommended)
Disk Storage | 164GB (64GB for file storage, 100GB for database storage)
Software | OS: Linux or Windows (LabKey Server is supported on both; select the operating system best supported by your organization); Java; Tomcat; DB software (PostgreSQL, or MS SQL Server with a Premium Edition of LabKey Server). See Supported Technologies for the specific versions to use.
As usage increases, increase RAM to 8GB at a minimum (and increase the memory allotted to Tomcat and the database accordingly).
Large Multi-project Installation
The following configuration is appropriate for hundreds of users working on multiple projects with large files and data tables.
We recommend placing the web server and the database server on different machines in order to optimize maintenance, update, and backup cadences.
Machine #1: Web Server
CPUs | 4+ CPUs or virtual CPUs
RAM | 8GB (minimum) to 16GB (recommended)
Disk Storage | 64GB (for OS and LabKey binaries); 512GB (for file storage)
Software | OS: Linux or Windows (LabKey Server is supported on both; select the operating system best supported by your organization); Java; Tomcat. See Supported Technologies for the specific versions to use.
Network | 1 GB/s
Machine #2: Database Server
CPUs | 4+ CPUs or virtual CPUs
RAM | 8GB (minimum) to 16GB (recommended)
Disk Storage | 100GB (for database storage)
Software | OS: Linux or Windows (LabKey Server is supported on both; select the operating system best supported by your organization); DB software (PostgreSQL, or MS SQL Server with a Premium Edition). See Supported Technologies for the specific versions to use.
Premium Resource: Reference Architecture / System Requirements
Installation Checklists
These checklists explain how to install LabKey Server along with its prerequisite components. LabKey Server is a Java web application that runs under Apache Tomcat and accesses a relational database, either PostgreSQL or (with Premium Editions) Microsoft SQL Server. LabKey Server can also use a network file share for the data pipeline, and an outgoing (SMTP) mail server for sending system emails. LabKey Server may optionally connect to an LDAP server to authenticate users within an organization.
If you are upgrading a prior installation of LabKey, follow the steps in this topic:
When configuring file systems, be sure not to place your full-text search index on an NFS filesystem or AWS EFS.
Install on Linux: Main Components
This topic explains how to install LabKey on Linux systems, along with its prerequisites: Java, Apache Tomcat, and a database.
This topic assumes installation onto a new, clean Linux machine. If you are installing into an established environment, you can skip steps, or adjust them, as required to use any prerequisites already in place. We also assume that you have super user access to the machine, and that you are familiar with Linux commands and utilities. The example Linux commands below are for Ubuntu 18.04. Adapt these commands as appropriate for your Linux implementation.
Consult the topic Supported Technologies to identify supported and compatible versions of the prerequisite components. Make a list of the latest version numbers (for Java, Tomcat, and a database) that are supported by the version of LabKey Server you wish to install.
Create Folder Structure
For simplicity, we recommend using the directory structure described here, particularly whenever you are creating a new LabKey installation from scratch.
Create the directory structure for the installation.
Use your native Linux install utility to install PostgreSQL (like apt or yum) or go to http://www.postgresql.org/download/ and download the PostgreSQL binary packages or source code. Follow the instructions in the downloaded package to install PostgreSQL.
Create the database and associated user/owner. Using superuser permissions, do the following:
Create an empty database named 'labkey'.
Create a PostgreSQL user named 'labkey'.
Grant the owner role to the labkey user over the database.
Revoke public permissions from the database.
See the following example PostgreSQL commands:
Connect to the DB server as the Postgres Super User using the psql command:
sudo -u postgres psql
Issue the following commands to create the user, database and revoke public permissions from the database. Use the example below, after substituting your chosen PASSWORD. Retain the single quotes around the password value.
create user labkey password 'PASSWORD';
create database labkey with owner labkey;
revoke all on database labkey from public;
\q
The username, password, and db name you choose above will be used to configure Tomcat below.
Find the file labkey.xml in the LabKey Server distribution files. This is the main LabKey Server configuration file and contains a number of settings required by LabKey Server to run.
Open labkey.xml in a text editor.
sudo nano $LABKEY_DIST/labkey.xml
The parameter values you need to change are bracketed by @@...@@:
@@appDocBase@@ - Replace with <LABKEY_HOME>/labkeywebapp. Use the full path: /usr/local/labkey/labkey/labkeywebapp
@@jdbcUser@@ - Replace with the database user created above: 'labkey'.
@@jdbcPassword@@ - Replace with the password created above.
Create the intermediate "Catalina" and "localhost" subdirectories to make the following path:
Note: While it is possible to use CATALINA_OPTS to specify a garbage collector, LabKey does not recommend doing so. LabKey develops, tests, and deploys using the default Java garbage collector. Explicitly specifying an alternative garbage collector is not recommended.
Save and close the file.
Notify the system of the new unit file:
sudo systemctl daemon-reload
Enable automatic startup on system reboot:
sudo systemctl enable tomcat
Start the Server
Start the server:
sudo systemctl start tomcat
After you start the server, point your web browser at the LabKey webapp:
http://<machine_name>:8080/labkey
For a local test machine, go to:
http://localhost:8080/labkey
You will be directed to create the first user account and set properties on your new server.
This topic explains how to install LabKey on a Windows machine, along with its prerequisites: Java, Apache Tomcat, and a database.
This topic assumes installation onto a clean Windows machine which contains none of the prerequisite components. If you are installing into an established environment, you can skip steps, or adjust them, to use any prerequisites already in place. We also assume that you have super-user/administrator access to the machine.
Directory Set Up
Setup the directory structure as below. This is the structure that we use internally for our LabKey servers:
C:\labkey\apps - This is where the prerequisite apps live.
C:\labkey\apps\tomcat
C:\labkey\apps\java
C:\labkey\apps\lib - This holds any special scripts or files, like SSL certificates.
C:\labkey\labkey - This is where LabKey gets installed.
C:\labkey\src\labkey - This is where the downloaded distribution files go before installing.
On the Services panel, stop Tomcat before continuing.
PostgreSQL
This topic explains how to install PostgreSQL as part of an installation of LabKey Server.
If you already have PostgreSQL installed, LabKey Server can use that installed instance, provided the version is supported.
To install PostgreSQL on Windows:
Download and run the Windows PostgreSQL one-click installer. (Obtain the correct version from the supported versions page.) When the wizard prompts you to choose where to install PostgreSQL, point it to the apps subdirectory, i.e. C:\labkey\apps\
Keep track of the PostgreSQL Windows Service account name and password. LabKey Server needs to ask for it so that we can pass it along to the PostgreSQL installer.
Keep track of the database superuser name and password. You'll need these to initially create the LabKey database, the LabKey database user, and grant that user the owner role.
We recommend that you install the graphical tool pgAdmin 4.x for easy database administration. Leave the default settings as they are on the "Installation Options" page to include pgAdmin.
Create the database. Using superuser permissions, do the following:
Create an empty database.
Create a PostgreSQL user named 'labkey'.
Grant the owner role to the labkey user over the database.
Revoke public permissions from the database.
See the following example PostgreSQL commands:
Connect to the DB server as the PostgreSQL superuser using the psql command. On Windows, open the SQL Shell (psql) installed with PostgreSQL, or run the following from the PostgreSQL bin directory:
psql -U postgres
Issue the following commands to create the user and database and to revoke public permissions from the database. In the example below, substitute your chosen password for PASSWORD_HERE (and, if you prefer a different user name, replace labkey throughout). Retain the single quotes around the password value.
create user labkey password 'PASSWORD_HERE';
create database labkey with owner labkey;
revoke all on database labkey from public;
\q
The username and password you choose above will be used in the main configuration file below.
LabKey Server
Unzip LabKey####-####-community-bin.zip
The zip file contains:
bin - Windows-specific binary files required by LabKey Server.
labkeywebapp - The LabKey Server web application.
modules - LabKey Server modules.
pipeline-lib - Jars for the data processing pipeline.
tomcat-lib - Server library jars.
labkey.xml - LabKey Server configuration file.
manual-upgrade.sh - For use with existing installations.
README.txt - A file pointing you to this documentation.
VERSION - A file containing the release number and build number.
The following recommendations and considerations continue the installation process by optimizing the performance and behavior of your server. We recommend that you review and consider your specific server needs and decide how to best configure your system, especially for production servers. Contact your Account Manager for additional guidance when you are using a Premium Edition of LabKey Server.
Tomcat uses port 8080 by default. To load any page served by Tomcat, you must either specify the port number, or configure the Tomcat installation to use a different port. To configure the Tomcat HTTP connector port, edit the server.xml file at:
<CATALINA_HOME>/conf/server.xml
Find the entry that begins with <Connector port="8080" .../> and change the value of the port attribute to the desired number. In most cases, you'll want to change this value to "80", which is the default port number used by web browsers. If you change this value to "80", users will not need to include the port number in the URL to access LabKey Server.
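For example, after the change, the modified entry might resemble the sketch below; leave any other attributes in your file (such as connectionTimeout and redirectPort) as they are:
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />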
Existing Installations of Tomcat
You can run two web servers on the same machine only if they use different port numbers, so if you have a web server running you may need to reconfigure one to avoid conflicts.
If you have an existing installation of Tomcat, you can configure LabKey Server to run on that installation, OR install a separate instance of Tomcat for LabKey Server. In either case, you will need to configure LabKey to use a port that is not in use by another application.
If you receive a JVM_BIND error when you attempt to start Tomcat, it means that the port Tomcat is trying to use is in use by another application. The other application could be another instance of Tomcat, another web server, or some other application. You'll need to configure one of the conflicting applications to use a different port. Note that you may need to reconfigure more than one port setting. For example, in addition to the default HTTP port defined on port 8080, Tomcat also defines a shutdown port at 8005. If you are running more than one instance of Tomcat, you'll need to change the value of the shutdown port for one of them as well.
Security Settings and Considerations
Configure LabKey Server to Run Under SSL/TLS (Recommended)
You can configure LabKey Server to run under TLS (Transport Layer Security), or its predecessor, SSL (Secure Sockets Layer). We recommend that you take this step if you are setting up a production server to run over a network or over the Internet, so that your passwords and data are not passed over the network in clear text.
To configure Tomcat to run LabKey Server under SSL/TLS:
Note that Tomcat's default port is 8443, while the standard port for HTTPS connections recognized by web browsers is 443. To use the standard port, change this port number in the server.xml file.
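As a starting point, a minimal HTTPS connector in server.xml might look like the sketch below. The keystore path and password are placeholders for your own values; creating the keystore is covered in Creating & Installing SSL/TLS Certificates on Tomcat, later in this document:
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true">
    <SSLHostConfig protocols="TLSv1.2,TLSv1.3">
        <!-- Replace the keystore file path and password with your own values -->
        <Certificate certificateKeystoreFile="/usr/local/labkey/apps/lib/keystore.jks"
                     certificateKeystorePassword="changeit" type="RSA" />
    </SSLHostConfig>
</Connector>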
To require that users connect to LabKey Server using a secure (https) connection:
Select (Admin) > Site > Admin Console.
Under Configuration, click Site Settings.
Check Require SSL connections.
Enter the SSL/TLS port number that you configured in the previous step in the SSL Port field.
Note that if you are using a Load Balancer or Proxy "in front of" LabKey, also enabling SSL/TLS may create conflicts as the proxy is also trying to handle the HTTPS traffic. This is not always problematic, as we have had success using Amazon's Application Load Balancer, but we have observed conflicts with versions from Apache, Nginx, and HA-Proxy. Review also the section about WebSockets below.
If you configure SSL/TLS and then cannot access the server, you may need to disable it in the database directly in order to resolve the issue. Learn more in the Disable sslRequired section under Troubleshooting, below.
LabKey Server uses WebSockets to push notifications to browsers. This includes a dialog telling the user they have been logged out (either due to an active logout request, or a session timeout). As WebSockets use the same port as HTTP and HTTPS, there is no additional configuration required if you are using Tomcat directly.
If you have a load balancer or proxy in front of Tomcat, like Apache or NGINX, be sure that it is configured in a way that supports WebSockets. LabKey Server uses a "/_websocket" URL prefix for these connections.
Note that while this is recommended for all LabKey Server installations, it is required for full functionality of the Sample Manager and Biologics products.
When WebSockets are not configured, or improperly configured, you will see a variety of errors related to the inability to connect and to trigger notifications and alerts. LabKey Server administrators will see a banner reading "The WebSocket connection failed. LabKey Server uses WebSockets to send notifications and alert users when their session ends."
Set a Content Security Policy
Content-Security-Policy HTTP headers are a powerful security tool, the successor to several previous mechanisms such as X-Frame-Options and X-XSS-Protection. Unlike those older headers, Content-Security-Policy is set at the Tomcat level and should not be set only for a single application like LabKey Server.
To facilitate setting a Content-Security-Policy, you can make use of the ContentSecurityPolicy filter in the bootstrap jar. This means you'll add a <filter> section specifying your policy to your web.xml file.
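A hypothetical sketch of such a <filter> section follows. The filter class name and parameter name here are illustrative assumptions, not confirmed values; verify the exact names shipped in your distribution's bootstrap jar before using them:
<filter>
    <!-- Class and parameter names below are assumptions; confirm against your distribution -->
    <filter-name>ContentSecurityPolicyFilter</filter-name>
    <filter-class>org.labkey.bootstrap.ContentSecurityPolicyFilter</filter-class>
    <init-param>
        <param-name>policy</param-name>
        <param-value>default-src 'self'; img-src 'self' data:;</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>ContentSecurityPolicyFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>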
Tomcat can be configured to log every HTTP access request sent to the server and store it in the filesystem, providing a comprehensive audit trail of actions including file downloads, API calls, guest user actions, etc. To enable this detailed level of logging, edit the server.xml file to include a Valve component defining the format for the log entries.
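For example, the standard Tomcat AccessLogValve can be added inside the <Host> element of server.xml. The pattern below is the common log format; it can be extended with additional fields as needed:
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />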
The default Tomcat webapps (e.g., examples, docs, manager, and host-manager) do not belong on a production deployment and security scanners will often flag these webapps and their content. LabKey displays an administrator warning banner if it detects these webapps running on a production deployment.
Optional Tomcat Settings
Configure Tomcat Session Timeout (Optional)
Tomcat's session timeout specifies how long a user remains logged in after their last session activity: 30 minutes by default. To increase the session timeout, edit <CATALINA_HOME>/conf/web.xml. Locate the <session-timeout> tag and set the value to the desired number of minutes.
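For example, to extend the timeout to 60 minutes:
<session-config>
    <session-timeout>60</session-timeout>
</session-config>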
Configure Tomcat to Use Gzip (Optional)
You may be able to improve the responsiveness of your server by configuring Tomcat to use gzip compression when it streams data back to the browser.
You can enable gzip in <CATALINA_HOME>/conf/server.xml by adding a few extra attributes to the active <Connector> elements:
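A sketch of a connector with compression enabled follows. On Tomcat 9 the MIME type attribute is spelled compressibleMimeType (older Tomcat versions used compressableMimeType); adjust for your version:
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000" redirectPort="8443"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json" />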
Note that there is a comment in the default file that provides basic instructions for enabling gzip. The snippet above improves on the comment's recommendation by enabling compression on a few extra MIME types.
Configure Tomcat Error Page Handling (Optional)
Tomcat automatically intercepts some invalid requests and rejects them, including malformed URLs. By default, Tomcat raises an error page with a Java exception and stack trace. These stack traces are sometimes flagged by security scanners as potential vulnerabilities, though they are not. To configure Tomcat to more quietly respond to such errors, add the following to your server.xml's <Host> section:
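A typical entry uses Tomcat's ErrorReportValve to suppress the stack trace and server version details:
<Valve className="org.apache.catalina.valves.ErrorReportValve"
       showReport="false" showServerInfo="false" />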
The LabKey Server configuration file contains settings required for LabKey Server to run on Tomcat, including SMTP, encryption, LDAP, and file root settings. By default, it is named labkey.xml, but it may also be named ROOT.xml. This topic describes enhancements and extensions of the server that involve modifications to labkey.xml.
If you are building LabKey Server from source, pick up changes to the configuration file by running "gradle pickMSSQL" or "gradle pickPg" as appropriate for your database.
Configuration File Name Determines Context Path
The name of the configuration file determines the context path in the URL. The context path is a variable part of the URL between the server name and the project and folder path. For example, if you name the file "labkey.xml", you will access the home project with a URL like http://localhost:8080/labkey/...
Learn more about the context path in this topic: LabKey URLs
Securing the LabKey Configuration File
The LabKey configuration file contains user name and password information for your database server, mail server, and network share. For this reason you should secure this file within the file system, so that only designated network administrators can view or change this file.
Modifying Configuration File Settings
You can edit the configuration file with your favorite text or XML editor. You will need to modify the LabKey Server configuration file if you are manually installing or upgrading LabKey Server, or if you want to change any of the following settings.
The appDocBase attribute, which indicates the location of the web application in the file system
Database settings, including server type, server location, username, and password for the database superuser.
SMTP settings, for specifying the mail server LabKey Server should use to send email to users.
Encryption Key - Configure an encryption key for the encrypted property set.
Note that in our template labkey.xml file, the placeholder values for these settings are bracketed with "@@", e.g. "@@jdbcPassword@@", to make them easier to find in the file. Replace each full placeholder, including the bracketing "@@", with the correct value. The final result should resemble:
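password="myDatabasePassword"
(Here "myDatabasePassword" is a hypothetical stand-in for your actual value.)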
The appDocBase attribute of the Context tag must be set to point to the directory where you have extracted or copied the labkeywebapp directory. For example, if the directory where you've copied labkeywebapp is /usr/local/labkey/labkey, you would change the initial value to "/usr/local/labkey/labkey/labkeywebapp".
Database Settings
The username and password attributes must be set to a user name and password with admin rights on your database server. Both attributes are found in the Resource tag named "jdbc/labkeyDataSource". If you are running a local installation of PostgreSQL as your database server, you don't need to make any other changes to the database settings in labkey.xml, since PostgreSQL is the default database choice.
The following is a template resource tag for PostgreSQL:
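The sketch below is representative; it should closely resemble the template shipped in the distribution, but confirm attribute names and values against your own labkey.xml:
<Resource name="jdbc/labkeyDataSource" auth="Container"
    type="javax.sql.DataSource"
    driverClassName="org.postgresql.Driver"
    url="jdbc:postgresql://localhost:5432/labkey"
    username="@@jdbcUser@@"
    password="@@jdbcPassword@@"
    maxTotal="20"
    maxIdle="10"
    maxWaitMillis="120000"
    accessToUnderlyingConnectionAllowed="true" />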
If you are running LabKey Server against Microsoft SQL Server, you should comment out the Resource tag that specifies the PostgreSQL configuration, and add a Resource tag for the Microsoft SQL Server configuration. A template Resource tag for MS SQL Server is available in this topic: Install Microsoft SQL Server.
If you are running LabKey Server against a remote installation of a database server, you will also need to change the url attribute to point to the remote server; by default it refers to localhost.
The maxWaitMillis parameter is provided to prevent server deadlocks. Waiting threads will time out when no connections are available rather than hang the server indefinitely.
GUID Settings
By default, LabKey Servers periodically communicate back to LabKey developers whenever the server has experienced an exception. LabKey rolls up this data and groups it by the GUID of each server. You can override the Server GUID stored in the database with the one specified in labkey.xml. This ensures that the exception reports sent to LabKey Server developers are accurately attributed to the server (staging vs. production) that produced the errors, allowing swift delivery of fixes. For details, see Tips for Configuring a Staging Server.
SMTP Settings
LabKey Server uses an SMTP mail server to send messages from the system, including email to new users when they are given accounts on LabKey. Configuring LabKey Server to connect to the SMTP server is optional; if you don't provide a valid SMTP server, LabKey Server will function normally, except it will not be able to send mail to users.
At installation, you will be prompted to specify an SMTP host, port number, user name, and password, and an address from which automated emails are sent. Note that if you are running Windows and you don't have an SMTP server available, you can set one up on your local computer.
The SMTP settings are found in the Resource tag named "mail/Session".
mail.smtp.host Set to the name of your organization's SMTP mail server.
mail.smtp.user Specifies the user account to use to log onto the SMTP server.
mail.smtp.port Set to the SMTP port reserved by your mail server; the standard mail port is 25. SMTP servers accepting a secure connection may use port 465 instead.
SMTP Authentication and Secure Connections:
Many LabKey installations run an SMTP server on the same machine as the LabKey web server, which is configured for anonymous access from the local host only. Since only local applications can send mail, this ensures some amount of security without the hassle of using a central, authenticated mail server. If you choose instead to use an external authenticated server, you'll need to add the following:
mail.smtp.from This is the full email address you would like to send the mail from. It can be the same as mail.smtp.user, but it doesn't need to be.
mail.smtp.password The password for the SMTP user account.
mail.smtp.starttls.enable When set to "true", configures the connection to use Transport Level Security (TLS).
mail.smtp.socketFactory.class When set to "javax.net.ssl.SSLSocketFactory", configures the connection to use an implementation that supports SSL.
mail.smtp.auth When set to "true", forces the connection to attempt to authenticate using the user/password credentials.
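Putting these together, a Resource tag for an external authenticated SMTP server might resemble the sketch below. The host name, addresses, password, and port are hypothetical placeholders to replace with your own values:
<Resource name="mail/Session" auth="Container"
    type="javax.mail.Session"
    mail.smtp.host="smtp.example.org"
    mail.smtp.user="labkey@example.org"
    mail.smtp.port="465"
    mail.smtp.from="labkey@example.org"
    mail.smtp.password="your_smtp_password"
    mail.smtp.starttls.enable="true"
    mail.smtp.socketFactory.class="javax.net.ssl.SSLSocketFactory"
    mail.smtp.auth="true" />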
When LabKey Server sends administrative emails, as when new users are added or a user's password is reset, the From header carries the address of the logged-in user who made the administrative change. The system also sends emails from the Issue Tracker and Announcements modules; for these you can configure the mail.from attribute so that the sender is an aliased address. Set mail.from to the email address from which you want these emails to appear to the user; this value does not need to correspond to an existing user account. For example, you could set this value to "labkey@mylab.org".
Notes and Alternatives
If you do not configure an SMTP server for LabKey Server to use to send system emails, you can still add users to the site, but they won't receive an email from the system. You'll see an error indicating that the email could not be sent that includes a link to an HTML version of the email that the system attempted to send. You can copy and send this text to the user directly if you would like them to be able to log into the system.
If you are running on Windows and you don't have a mail server available, you can configure the SMTP service included with Internet Information Services (IIS) to act as your local SMTP server. Follow these steps:
From the Start menu, navigate to Control Panel | Add or Remove Programs, and click the Add/Remove Windows Components button on the left toolbar.
Install Internet Information Services (IIS).
From Start | Programs | Administrative Tools, open the Windows Services utility, select World Wide Web Publishing (the name for the IIS service), display the properties for the service, stop the service if it is running, and set it to start manually.
From Start | Programs | Administrative Tools, open the Internet Information Services utility.
Navigate to the Default SMTP Virtual Server on the local computer and display its properties.
Navigate to the Access tab, click Relay, and add the address for the local machine (127.0.0.1) to the list of computers which may relay through the virtual server.
Troubleshooting SMTP
Begin by double checking that your labkey.xml file entries for the SMTP parameters (host, user, port, password) are correct. You can validate this by using them to log into the SMTP server directly.
If you are using Gmail as your email host, you may need to configure your Gmail security settings to allow other applications to access the server.
LabKey Server development source code includes a module named Dumbster that serves as a dummy SMTP server for testing purposes; see the testAutomation GitHub repository. It provides a Mail Record web part, which you can use to test outgoing email without sending actual email. If you see unexpected SMTP configuration values, such as mail.smtp.port set to 53937, when you go to the Test Email Configuration link on the Admin Console, check the Admin Console's list of deployed modules for the Dumbster module. If it is present, you may need to disable the capture of email via the Mail Record web part to enable SMTP to send email. Learn more here.
Encryption Key
LabKey Server deployments can be configured to authenticate and connect to external systems to retrieve data or initiate analyses. In these cases, LabKey must store credentials (user names and passwords) in the primary LabKey database. While your database should be accessible only to authorized users, as an additional precaution, LabKey encrypts these credentials before storing them and decrypts them just before use. This encryption/decryption process uses an "encryption key" that administrators set in labkey.xml; LabKey will refuse to save credentials if an encryption key is not configured.
Replace @@encryptionKey@@ with a randomly generated, strong password, for example, a string of 32 random ASCII characters or 64 random hexadecimal digits. Once a key is specified and used, the server will use it to encrypt and decrypt credentials; changing it will cause those credentials to stop decrypting correctly. Different LabKey Server deployments should use different encryption keys; however, servers that use copies of the same database (for example, most test, staging, and production server combinations) need to use the same encryption key.
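For example, assuming the openssl utility is available on your system, the following command prints 64 random hexadecimal digits suitable for use as a key:
openssl rand -hex 32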
Premium Feature Available
Administrators of servers running a premium edition of LabKey Server can later change the encryption key following the guidance in this topic:
If you wish to synchronize with the user and groups on an LDAP server, and are using a premium edition of LabKey Server, add an appropriate <Resource> tag to the labkey.xml file.
An example configuration to a public LDAP test service:
This topic covers third-party components which are often used in conjunction with LabKey Server to enhance its functionality. All of these components are optional and are not required for the server to work in general, but some are required for specific use cases.
Modify TPP_ROOT to point to the location where you intend to install the binaries. NOTE: This location must be on your server path (i.e. the path of the user running the Tomcat server process).
Run 'make configure all install' from trans_proteomic_pipeline/src
Copy the binaries to the directory you specified in TPP_ROOT above.
Install LabKey with Embedded Tomcat
For users interested in using Docker with LabKey Server, we offer an installer that utilizes Spring Boot's "Embedded Tomcat." This installer is offered as a Beta and is meant for use with our Dockerfile repo.
Background: In an effort to mitigate the challenges of managing the underlying dependencies and to minimize the complexity of configuring new LabKey installations, LabKey has begun the process of migrating to embedded Tomcat. Switching to an embedded Tomcat distribution method allows us to centralize many configuration parameters into an application.properties file, define recommended defaults, and to package and ship Tomcat with the application.
Download and unpack the Embedded Tomcat Beta Installer .tar.gz archive.
Find the .jar file among the unpacked archive.
Move/rename that .jar file into the root of the Dockerfile folder you created by cloning above.
Build the LabKey Embedded Docker Container
Once you have the .jar file in your local Dockerfile repo, follow these steps:
1. Export the minimal required environment variables or edit and source the quickstart_envs.sh
cd ./Dockerfile/
export LABKEY_VERSION="21.9.0"
export LABKEY_CREATE_INITIAL_USER=""
export LABKEY_CREATE_INITIAL_USER_APIKEY=""
...
or
source ./quickstart_envs.sh
2. Run the Make Build command to create the container
make build
Since the makefile supports other features, such as publishing to AWS Elastic Container Registry, you may see related warning messages in the Docker build log if you do not have AWS credentials. These warning messages may be ignored.
Start the LabKey Embedded Docker Container
Once the container has been built, follow these steps to start it:
1. Run the Make Up command to start the container using the makefile docker-compose settings
cd ./Dockerfile/
make up
2. After a few minutes LabKey should be available by opening a browser window and connecting to:
LabKey with Embedded Tomcat is currently experimental. Several features of LabKey Server will not work and need to be implemented via other Docker containers. For example: "R" and Python scripting engines.
Other known limitations include: Full Text Search and connections to external databases will not work as on a traditional distribution.
LabKey Application logs in the example container are redirected to the Docker Host console. This is accomplished via the example log4j2.xml configuration file. This allows consolidation of the logs to be consumed by an external service such as AWS Cloudwatch. In this configuration, the LabKey application and error logs are not viewable in the LabKey Admin Console.
Application Properties File
Configuring LabKey with Embedded Tomcat makes use of application properties. Both Spring Boot's default properties and LabKey specific ones are used. An example application.properties file is available here:
Troubleshoot Server Installation and Configuration
In case of errors or other problems when installing and running LabKey Server, first review installation basics and options linked in the topic: Install LabKey. This topic provides additional troubleshooting suggestions if the instructions in that topic do not resolve the issue.
From time to time, administrators may need to review the LabKey Tomcat logs to troubleshoot system startup or other issues. The LabKey Tomcat logs are located in the <CATALINA_HOME>/logs directory, where CATALINA_HOME is the location where you installed Tomcat. The path might be similar to one of these:
/labkey/apps/tomcat/logs
C:\labkey\apps\apache\apache-tomcat-#.#.##\logs
The logs of interest are:
catalina.out: Contains the log file entries from Tomcat as it is starting the LabKey application.
labkey.log: Once the application starts, logging of INFO, ERROR, WARN and other messages is recorded here.
labkey-errors.log: Contains ERROR messages.
Useful commands:
# monitor the catalina.out log file while LabKey/Tomcat starts
sudo tail -f /usr/local/labkey/apps/tomcat/logs/catalina.out
# monitor the labkey.log file while LabKey/Tomcat starts
sudo tail -f /usr/local/labkey/apps/tomcat/logs/labkey.log
Log Rotation and Storage
To avoid massive log files and retain recent information with some context, log files are rotated (or "rolled over") into new files to keep them manageable. Rotated log files are named with a trailing number and shuffled such that .1 is the most recently rolled-over log file and the highest number (.7 for labkey.log, .3 for labkey-errors.log) is the oldest log of its type.
labkey.log: Rotated when the file reaches 10MB. Creates labkey.log.1, labkey.log.2, etc. Up to 7 archived files (80MB total).
labkey-errors.log: Rotated on server startup or at 100MB. Up to 3 labkey-errors log files are rotated (labkey-errors.log.1, etc.) If the server is about to delete the first labkey-errors file from the current session (meaning it has generated hundreds of megabytes of errors since it started up), it will retain that first log file as labkey-errors-YYYY-MM-DD.log. This can be useful in determining a root cause of the many errors.
To save additional disk space, older log archives can be compressed for storage. A script to regularly store and compress them can be helpful.
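As a minimal sketch, a housekeeping script along these lines could be scheduled via cron. The log and archive paths here are assumptions; adapt them to your installation:
#!/bin/sh
# Assumed paths: adjust LOGS and ARCHIVE for your installation
LOGS=/usr/local/labkey/apps/tomcat/logs
ARCHIVE=/usr/local/labkey/log-archive
mkdir -p "$ARCHIVE"
# Move rotated logs (e.g. labkey.log.3) out of the live directory, then compress them
mv "$LOGS"/labkey*.log.[0-9]* "$ARCHIVE"/ 2>/dev/null
gzip -f "$ARCHIVE"/*.log.[0-9]* 2>/dev/null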
Developer Mode
Running a server in development mode ("devmode") provides enhanced logging, enables the MiniProfiler, and prevents the browser from aggressively caching resources like JavaScript and CSS files. You do not want to run your production server in devmode, but you might set up a shared development server to support future work, or have each developer work on their own local dev machine.
To check whether the server is running in devmode:
Go to (Admin) > Site > Admin Console.
Under Diagnostics, click System Properties.
Check the value of the devmode property.
Set -Ddevmode=true
If you are using a local development machine, include the following in your VM options as part of your IntelliJ configuration:
-Ddevmode=true
If you are not using a development machine, you can set this option by following these steps. The supported version of Tomcat will change over time; replace the # in our examples with the actual version number.
Open a command prompt.
Go to the <CATALINA_HOME>/bin directory, for example:
C:\labkey\apps\apache\apache-tomcat-#.#.##\bin
Execute the tomcat#w.exe program:
tomcat#w.exe //ES//LabKeyTomcat#
The command will open a program window. Click the Java tab.
In the Java Option box, scroll to the bottom of the properties. Add the following property at the bottom of the list:
-Ddevmode=true
Close the program window and restart the server.
JVM Caching
Note that the "caching" JVM system property can also be used to control just the caching behavior, without all of the other devmode behaviors. To disallow the normal caching, perhaps because files are being updated directly on the file system, add -Dcaching=false to the JVM arguments.
Diagnostic Information
Which Version of LabKey Server Is Running?
Find your version number at (Admin) > Site > Admin Console.
At the top of the Server Information panel, you will see the release version.
Learn more about other diagnostic information on the admin console in this topic:
Confirm that you are using the supported versions of the required components, as detailed in the Supported Technologies Roadmap. It is possible to have multiple versions of some software, like Java, installed at the same time. Check that LabKey and other applications, such as Tomcat, are configured to use the correct versions.
For example, if you see an error similar to "...this version of the Java Runtime only recognizes class file versions up to ##.0..." it likely means that Tomcat is running under an older version of the JDK than is supported.
Connection Pool Size
If your server becomes unresponsive, it could be due to the depletion of available connections to the database. Watch for a Connection Pool Size of 8, which is the Tomcat connection pool default size and insufficient for a production server. To see the connection pool size for the LabKey data source, select (Admin) > Site > Admin Console and check the setting of Connection Pool Size on the Server Information panel. The connection pool size for every data source is also logged at server startup.
To set the connection pool size, edit your labkey.xml/ROOT.xml configuration file and change the "maxTotal" setting for your LabKey data source to at least 20. Depending on the number of simultaneous users and the complexity of their requests, your deployment may require a larger connection pool size.
You should also consider changing this setting for external data sources to match the usage you expect. Learn more in this topic: External Schemas and Data Sources.
Deleted or Deactivated Users
Over time, you may have personnel leave the organization. Such user accounts should be deactivated and not deleted. If you encounter problems with accessing linked schemas, external schemas, running ETLs, or similar, check the logs to see if the former user may have "owned" these long term resources.
Properties in Config Files
Check that the labkeyDataSource and any external data source properties in your config file (labkey.xml or ROOT.xml) match those expected of the current component versions. Watch the catalina.out log for warning messages like "Ignoring unknown property..." or "...is being ignored".
For example, older versions of Tomcat used "maxActive" and "maxWait", while newer versions use "maxTotal" and "maxWaitMillis". Error messages like these would be raised if labkey.xml/ROOT.xml used the older versions:
17-Feb-2022 11:33:42.415 WARNING [main] java.util.ArrayList.forEach Name = labkeyDataSource Property maxActive is not used in DBCP2, use maxTotal instead. maxTotal default value is 8. You have set value of "20" for "maxActive" property, which is being ignored.
17-Feb-2022 11:33:42.415 WARNING [main] java.util.ArrayList.forEach Name = labkeyDataSource Property maxWait is not used in DBCP2, use maxWaitMillis instead. maxWaitMillis default value is PT-0.001S. You have set value of "120000" for "maxWait" property, which is being ignored.
Enabling SSL is done by including "SSLEnabled=true" in the server.xml config file for tomcat. If you see this error, you may have tried to set the nonexistent "ssl" property in the labkey.xml or ROOT.xml configuration file.
19-Feb-2022 15:28:29.624 INFO [main] java.util.ArrayList.forEach Name = labkeyDataSource Ignoring unknown property: value of "true" for "ssl" property
Conflicting Applications
If you have problems during installation, retry after shutting down all other running applications. Specifically, you may need to try temporarily shutting down any virus scanning application, internet security applications, or other applications that run in the background to see if this resolves the issue.
Filesystem Permissions
In order for files to be uploaded and logs to be written, the LabKey user account must have the ability to write to the underlying file system locations. For example, if the "Users" group on a Windows installation has read but not write access to the site file root, an error message like this will be shown upon file upload:
Couldn't create file on server. This may be a server configuration problem. Contact the site administrator.
Browser Refresh for UI Issues
If menus, tabs, or other UI features appear to display incorrectly after upgrade, particularly if different browsers show different layouts, you may need to clear your browser cache to clear old stylesheets.
Tomcat Issues
Tomcat Failure to Start
If installation fails to start Tomcat (such as with an error like "The specified service already exists."), you may need to manually stop or delete a failing Tomcat service.
To stop the service on Windows, open Control Panel > Administrative Tools > Services. Select the relevant service, such as LabKey Server Apache Tomcat #.0 and click Stop.
To delete, run the following from the command line as an administrator, substituting the major version number for the #:
sc delete LabKeyTomcat#
SSL/TLS Errors
ERR_CONNECTION_CLOSED during Startup
An error like "ERR_CONNECTION_CLOSED" during startup, particularly after upgrading your version of Tomcat, may indicate that necessary files are missing. Check that you copied the server.xml and web.xml as well as the SSL directory containing the keys. Check the logs for "SEVERE" to find any messages that might look like:
19-Feb-2022 15:28:13.312 SEVERE [main] org.apache.catalina.util.LifecycleBase.handleSubClassException Failed to initialize component [Connector[HTTP/1.1-8443]] org.apache.catalina.LifecycleException: Protocol handler initialization failed ... org.apache.catalina.LifecycleException: The configured protocol [org.apache.coyote.http11.Http11AprProtocol] requires the APR/native library which is not available
Disable sslRequired
If you configure "SSL Required" and then cannot access your server, you will not be able to disable this requirement in the user interface. To resolve this, you can directly reset this property in your database.
Confirm that this query will select the "sslRequired" property row in your database:
SELECT * FROM prop.Properties WHERE Name = 'sslRequired' AND Set = (SELECT Set FROM prop.PropertySets WHERE Category = 'SiteConfig')
You should see a single row with a name of "sslRequired" and a value of "true". To change that value to false, run this update query:
UPDATE prop.Properties SET Value = FALSE WHERE Name = 'sslRequired' AND Set = (SELECT Set FROM prop.PropertySets WHERE Category = 'SiteConfig')
You can then verify with the SELECT query above to confirm the update.
Once the value is changed you'll need to restart the server to force the new value into the cache.
If you are using SQL Server, you'll need to double quote every reference to "Set" because that database considers Set a keyword.
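For example, applying only that change to the UPDATE query above yields:
UPDATE prop.Properties SET Value = FALSE WHERE Name = 'sslRequired' AND "Set" = (SELECT "Set" FROM prop.PropertySets WHERE Category = 'SiteConfig')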
SSL/TLS Handshake Issues
When accessing the server through APIs, including RStudio, rcurl, Jupyter, etc., one or more errors similar to these may be seen either in a client call or via command line access:
SEC_E_ILLEGAL_MESSAGE (0x80090326) - This error usually occurs when a fatal SSL/TLS alert is received (e.g. handshake failed)
or
Peer reports incompatible or unsupported protocol version.
or
Timeout was reached: [SERVER:PORT] Operation timed out after 10000 milliseconds with 0 out of 0 bytes received
This may indicate that your server is set to use a more recent sslProtocol (such as TLSv1.3) than your client tool(s).
Client programs like RStudio, curl, and Jupyter may not have been updated to use the newer TLSv1.3 protocol which has timing differences from the earlier TLSv1.2 protocol. Check to see which protocol version your server.xml is set to accept. To cover most cases, edit the server.xml to accept both TLSv1.2 and TLSv1.3, and make the default TLSv1.2. This line applies those settings:
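One possible form, assuming a Tomcat 9 HTTPS connector that uses an SSLHostConfig element, accepts both protocol versions:
<SSLHostConfig protocols="TLSv1.2,TLSv1.3">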
You may need to remove references to Cygwin from your Windows system path before installing LabKey, due to conflicts with the PostgreSQL installer. The PostgreSQL installer also conflicts with some antivirus or firewalls. (see http://wiki.postgresql.org/wiki/Running_%26_Installing_PostgreSQL_On_Native_Windows for more information).
Restart Installation from Scratch
If you have encountered prior failed installations, don't have any stored data you need to keep, and want to clean up and start completely from scratch, the following process may be useful:
Delete the Tomcat service (if it seems to be causing the failure).
Uninstall PostgreSQL using their uninstaller.
Control Panel > Programs and Features
Select PostgreSQL program.
Click Uninstall.
Delete the entire LabKey installation directory.
Install LabKey again.
Graphviz
If you encounter the following error, or a similar error mentioning Graphviz:
Unable to display graph view: cannot run dot due to an error.
Cannot run program "dot" (in directory "./temp/ExperimentRunGraphs"): CreateProcess error=2, The system cannot find the file specified.
then install Graphviz according to the following instructions:
If you are using a load balancer, including on a server hosted by LabKey, be aware that AWS may shift the specific IP address your server is "on" over time. Note that this includes LabKey's support site, www.labkey.org. This makes it difficult to answer questions like "What IP address am I using?" with any long-term reliability.
In order to maintain a stable set of allowlist traffic sources/destinations, it is more reliable to pin these to the domain (i.e. labkey.org) rather than to the IP address, which may change within the AWS pool.
Support Forum
Users of Premium Editions of LabKey Server can obtain support with installation and other issues by opening a ticket on their private support portal. Your Account Manager will be happy to help resolve the problem.
All users can search for issues resolved through community support in the LabKey Support Forum.
If you don't see your issue listed in the community support forum, you can post a new question.
Supporting Materials
Even if the install seems successful, it is often helpful to submit debugging logs for diagnosis.
If the install failed to complete, please include the install.log and install-debug.log from your selected LabKey install directory.
PostgreSQL logs its installation process separately. If PostgreSQL installation/upgrade fails, please locate and include the PostgreSQL install logs as well.
If you have encountered errors or other problems when installing and starting LabKey Server, first review the topic Troubleshooting: Common Issues. If you're still encountering problems, please review the list below for common errors, messages, and problems.
You can also search the LabKey Community Support Forums for guidance. If you don't already see your issue listed there, please post a new question.
1.
Error
Error on startup, "Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections."
Problem
Tomcat cannot connect to the database.
Likely causes
The database is not running
The database connection URL or user credentials in the Tomcat configuration files are wrong
Tomcat was started before the database finished starting up
Solution
Make sure that database is started and fully operational before starting Tomcat. Check the database connection URL, user name, and password in the <tomcat>/conf/Catalina/localhost/labkey.xml file.
2.
Problem
Error when connecting to LabKey server on Linux: Can't connect to X11 window server or Could not initialize class ButtonServlet.
Solution
Run Tomcat headless. Edit Tomcat's catalina.sh file, and add the following line near the top of the file:
CATALINA_OPTS="-Djava.awt.headless=true"
Then restart tomcat.
3.
Problem
Viewing certain pages results in a specific NoSuchMethodError.
Error
java.lang.NoSuchMethodError:
org.apache.jasper.runtime.JspRuntimeLibrary.releaseTag(Ljavax/servlet/jsp/tagext/Tag;Lorg/apache/tomcat/InstanceManager;Z)V
at org.labkey.jsp.compiled.org.labkey.core.admin.maintenance_jsp._jspx_meth_labkey_005ferrors_005f0(maintenance_jsp.java:159)
at org.labkey.jsp.compiled.org.labkey.core.admin.maintenance_jsp._jspService(maintenance_jsp.java:110)
at org.labkey.api.view.JspView.renderView(JspView.java:170)
at org.labkey.api.view.WebPartView.renderInternal(WebPartView.java:372)
4.
Problem
After upgrading LabKey Server, the following error is shown.
Error
java.lang.NoSuchMethodError: org.labkey.api.settings.HeaderProperties: method 'void <init>()' not found
Solution
This may be the result of a partial upgrade, where some but not all of the binaries get upgraded.
1. Shut down the Tomcat service
2. Delete your modules and labkeyWebapp directories
3. Recopy those directories from the new version you downloaded
4. Restart the Tomcat service
5.
Error
You receive a message "The requested resource () is not available." OR "500: Unexpected server error" and see something like one of the following in the log file:
Problem
SEVERE: Error deploying configuration descriptor labkey.xml
java.lang.IllegalStateException: ContainerBase.addChild: start: LifecycleException:
start: : java.lang.UnsupportedClassVersionError: org/labkey/bootstrap/LabkeyServerBootstrapClassLoader : Unsupported ...
A failure occurred during LabKey Server startup.
java.lang.NoClassDefFoundError: javax/script/ScriptEngineFactory....
A failure occurred during LabKey Server startup.
java.lang.UnsupportedClassVersionError: Bad version number in .class file
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:675) ...
Solution
Confirm that Tomcat is configured to use the correct version of Java, as it is possible to have multiple versions installed simultaneously.
6.
Problem
Fatal Error in Java Runtime Environment
Error
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000000000000, pid=23893, tid=39779
#
# JRE version: Java(TM) SE Runtime Environment (8.0_45-b14) (build 1.8.0_45-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.45-b02 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C 0x0000000000000000
#
# Failed to write core dump. Core dumps have been disabled.
# To enable core dumping, try "ulimit -c unlimited" before starting Java again
Cause
These are typically bugs in the Java Virtual Machine itself.
Solution
Ensuring that you are on the latest patched release for your preferred Java version is best practice for avoiding these errors. If you have multiple versions of Java installed, be sure that JAVA_HOME and other configuration is pointing at the correct location. If you are running through the debugger in IntelliJ, check the JDK configuration: under Project Structure > SDKs check the JDK home path and confirm it points to the newer version.
7.
Problem
If you see the following error when running complex queries on PostgreSQL:
Error
org.postgresql.util.PSQLException: ERROR: failed to build any 8-way joins
Solution
Increase the join collapse limit. Edit postgresql.conf and change the following line:
# join_collapse_limit = 8
to
join_collapse_limit = 10
8.
Problem
Plots are not rendered as expected, or cannot be exported to formats like PDF or PNG. Excel exports fail or generate corrupted xlsx files. Possible Excel export failures include 500 errors or the opening of an empty tab instead of a successful export.
Error
java.lang.InternalError: java.lang.reflect.InvocationTargetException
at java.desktop/sun.font.FontManagerFactory$1.run(FontManagerFactory.java:86)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:312)
at java.desktop/sun.font.FontManagerFactory.getInstance(FontManagerFactory.java:74)
...
Caused by: java.lang.NullPointerException: Cannot load from short array because "sun.awt.FontConfiguration.head" is null
at java.desktop/sun.awt.FontConfiguration.getVersion(FontConfiguration.java:1260)
at java.desktop/sun.awt.FontConfiguration.readFontConfigFile(FontConfiguration.java:223)
...
or
java.lang.InternalError: java.lang.reflect.InvocationTargetException
at java.desktop/sun.font.FontManagerFactory$1.run(FontManagerFactory.java:86)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:312)
at java.desktop/sun.font.FontManagerFactory.getInstance(FontManagerFactory.java:74)
...
Caused by: java.lang.NullPointerException
at java.desktop/sun.awt.FontConfiguration.getVersion(FontConfiguration.java:1262)
at java.desktop/sun.awt.FontConfiguration.readFontConfigFile(FontConfiguration.java:225)
at java.desktop/sun.awt.FontConfiguration.init(FontConfiguration.java:107)
...
Banner message reading "The WebSocket connection failed. LabKey Server uses WebSockets to send notifications and alert users when their session ends. See the Tomcat Configuration for more information."
Error logged to the JS console in the browser: "clientapi.min.js?452956478:1 WebSocket connection to 'wss://server.com/_websocket/notifications' failed."
Unexpected errors in login/logout modals for Sample Manager and Biologics applications.
Problem
WebSocket connection errors are preventing a variety of actions from succeeding.
Try using browser developer tools and checking the Console tab to see the specific errors.
Likely causes
A load balancer or proxy may be blocking websocket connections and needs to have its configuration updated.
On Java 16 or later, pipeline jobs and some other operations fail with errors
Problem
To be compatible with some libraries, such as Jackson, Java 16 and later need to be configured with a command-line argument to allow for successful serialization and deserialization. Errors may take different forms, but include messages like:
java.lang.reflect.InaccessibleObjectException: Unable to make private java.io.File(java.lang.String,java.io.File) accessible: module java.base does not "opens java.io" to unnamed module @169268a7
or
java.lang.reflect.InaccessibleObjectException: Unable to make field private java.lang.String java.lang.Throwable.detailMessage accessible: module java.base does not "opens java.lang" to unnamed module @18f84035
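Solution
Add JVM "--add-opens" arguments to your Tomcat startup options (CATALINA_OPTS or JAVA_OPTS). The exact set of modules to open depends on the errors you see; as a sketch, the two flags below correspond to the example messages above:
--add-opens=java.base/java.io=ALL-UNNAMED
--add-opens=java.base/java.lang=ALL-UNNAMED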
Error
ERROR ModuleLoader 2022-04-07T10:09:07,707 main : Failure occurred during ModuleLoader init.
org.labkey.api.util.ConfigurationException: Can't upgrade from LabKey Server version 20.3; installed version must be 21.0 or greater.
...
Cause
Beginning in January 2022, you can only upgrade from versions less than a year old.
Solution
Following the guidance in this topic, you will need to perform intermediate upgrades of LabKey Server to upgrade from older versions.
13.
Problem
Disk filling with Tomcat temporary files. For example, Excel exports might be leaving one file per export in the "temp/poifiles" folder.
Error
...| comment: Exported to Excel |
ERROR ExceptionUtil 2022-07-19T11:24:20,609 ps-jsse-nio-8443-exec-25 : Unhandled exception: No space left on device
java.io.IOException: No space left on device
Likely causes
The Excel export writer is leaving temporary "poi-sxssf-####.xml" files behind. These can be fairly large and may correlate 1:1 with Excel exports. This issue was reported in version 22.3.5 and will be fixed in version 22.11.0 (or sooner).
Solution
These temporary "poifiles" may be safely deleted after the export is completed. Administrators can periodically clean these up; when this issue is resolved, they will be automatically cleaned up within 10 minutes of an Excel export
Collect Debugging Information
To assist in debugging errors, an administrator can force the LabKey Server process to dump the list of threads and memory to disk. This information can then be reviewed to determine the source of the error.
A thread dump is useful for diagnosing issues where LabKey Server is hung or some requests spin forever in the web browser.
A memory/heap dump is useful for diagnosing issues where LabKey Server is running out of memory, or in some cases where the process is sluggish and consuming a lot of CPU performing garbage collection.
Examining the state of all running threads is useful when troubleshooting issues like high CPU utilization, or requests that are taking longer than expected to process. To dump the state of all running threads, you can use either the UI or the command line.
Using the UI
Go to (Admin) > Site > Admin Console.
Under Diagnostics, click Running Threads
This will show the state of all active threads in the browser, as well as writing the thread dump to the server log file.
Manually via the Command Line
The server monitors the timestamp of a threadDumpRequest file and when it notices a change, it initiates the thread dump process. The content of the file does not matter.
Unix. On Linux and OSX, use the command line to execute the following commands:
Change to the <LABKEY_HOME> directory. For example, if your <LABKEY_HOME> directory is located at /usr/local/labkey/labkey, then the command will be:
cd /usr/local/labkey/labkey
Force the server to dump the state of its threads:
touch threadDumpRequest
Windows. On a Windows Server, do the following:
Open a Command Prompt.
Change to the <LABKEY_HOME> directory. For example, if your <LABKEY_HOME> directory is located at C:\labkey\labkey\, then the command will be:
cd "C:\labkey\labkey"
Force the server to dump the state of its threads. The command will open the threadDumpRequest file in the notepad program.
notepad threadDumpRequest
Place the cursor at the top of the file and hit the "Enter" key twice.
Save the file.
Close notepad.
Location of the File Containing the Thread Dump
The list of threads is dumped to the "labkey.log" file, which is located in the <CATALINA_HOME>/logs directory.
Memory Dump (Heap Dump)
To request LabKey Server dump its memory, you can use either the UI or the command line. By default, it will write the file to the LABKEY_HOME directory. Java can also be configured to dump the heap if the virtual machine runs out of memory.
The heap dump can be useful to determine what is consuming memory in cases where the LabKey Server process is running out of memory. Note that heap dumps can be large: several gigabytes in size is typical. You can tell the server to write to a different location by using a JVM startup argument:
-XX:HeapDumpPath=/path/to/desired/target
Using the UI
Go to (Admin) > Site > Admin Console.
Under Diagnostics, click Dump Heap.
You will see a full path to where the heap was dumped.
Manually via the Command Line
The server monitors the timestamp of a heapDumpRequest file and when it notices a change, it initiates the heap dump process. The content of the file does not matter.
Unix. On Linux and OSX, execute the following commands on the command line:
Navigate to the <LABKEY_HOME> directory.
Force the server to dump its memory:
touch heapDumpRequest
Windows. On a Windows Server, do the following:
Open a Command Prompt
Navigate to the <LABKEY_HOME> directory.
Force the server to dump its memory. This command will open the heapDumpRequest file in the notepad:
notepad heapDumpRequest
Place the cursor at the top of the file and hit the Enter key twice.
Save the file
Close notepad
Location of the File Containing the Memory Dump
The file will be located in the <LABKEY_HOME> directory. The file will have the ".hprof" extension.
Automatically Capturing Heap Dump when Server Runs Out of Memory
The Java virtual machine supports an option that will automatically trigger a heap dump when the server runs out of memory. This is accomplished via this JVM startup parameter:
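-XX:+HeapDumpOnOutOfMemoryError
Combine this with -XX:HeapDumpPath, described above, to control where the dump is written.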
LabKey Server is unable to log errors thrown by PostgreSQL, so when diagnosing some installation and startup errors, it may be helpful to view the Windows event log.
On Windows:
Launch eventvwr.msc
Navigate to Windows Logs > Application.
Search for errors there corresponding to the installation failures, which may assist LabKey support in diagnosing the problem.
If you can't find relevant messages, you may be able to trigger the error to occur again by running net start LabKey_pgsql-9.2 from the command line.
To get a full picture of some problems, it's also useful to have information about running queries and locks. For this, you want both a thread dump (described above) and information about the state of database connections. The latter needs to be obtained directly from SQL Server.
Launch SQL Server Management Console or a similar tool
Open a connection to the LabKey Server database, often named "labkey."
Run and capture the output from the following queries/stored procedures:
sp_lock
sp_who2
SELECT t1.resource_type, t1.resource_database_id, t1.resource_associated_entity_id, t1.request_mode, t1.request_session_id, t2.blocking_session_id FROM sys.dm_tran_locks as t1 INNER JOIN sys.dm_os_waiting_tasks as t2 ON t1.lock_owner_address = t2.resource_address;
This topic provides instructions for updating Tomcat configuration settings, including setting the Java Virtual Machine (JVM) memory configuration on Windows, Linux, and OSX operating systems.
LabKey Server is a Java web application that runs on Tomcat. Many important Tomcat configuration settings can be defined using properties in CATALINA_OPTS or JAVA_OPTS, or set via a utility, depending on your platform. The primary example in this topic is memory configuration, but other flags can also be set using these methods.
The Tomcat server runs within a Java Virtual Machine (JVM). This JVM controls the amount of memory available to LabKey Server. LabKey recommends that the Tomcat web application be configured to have a maximum Java heap size of at least 2GB for a test server, and at least 4GB for a production server.
Locate Tomcat and the Settings File
<CATALINA_HOME>: Installation location of the Apache Tomcat Web Server. If you are following our recommended folder configuration, the location will be (where #.#.## is the specific version installed):
<LABKEY_ROOT>/apps/apache-tomcat-#.#.##
On Linux or OSX, you may also have created a symbolic link /usr/local/tomcat to this location.
Depending on your configuration, there are different places where CATALINA_OPTS (or JAVA_OPTS) may be located. The next sections offer some options for finding where yours are defined.
Method 1: If Tomcat is Running as a Service (Most common)
Locate the tomcat.service file. For example, it might be /etc/systemd/system/tomcat.service (or tomcat_lk.service)
Open the file and look for the CATALINA_OPTS parameter.
If you don't see this in the file for a running server, proceed to check via another method.
Method 2: Check Other Common Locations
Identify where your CATALINA_OPTS (or JAVA_OPTS) are defined, often in a directory with other Tomcat scripts. Depending on your platform and configuration, it could be in several locations including (but not limited to):
/etc/systemd/system
/etc/default
/usr/local/jsvc/
<CATALINA_HOME>/bin/
The filename also might vary and could be something like:
tomcat.service
tomcat_lk.service
Tomcat#.sh
tomcat
setenv.sh
Method 3: If you use JSVC to start/stop LabKey Server
Find the JSVC service script.
On Linux servers, this is usually in the /etc/init.d directory and named either "tomcat" or "tomcat#"
On OSX servers this might be in /usr/local/jsvc/Tomcat#.sh
Open the JSVC service script using your favorite editor and check that it contains the CATALINA_OPTS setting.
Method 4: If you use startup.sh and shutdown.sh to start/stop LabKey Server (Legacy method)
The start script might be located in <CATALINA_HOME>/bin/catalina.sh. Note that directly hardcoding JAVA_OPTS in this file is not recommended, but if you cannot find these options in any location mentioned above, it might have previously been defined here.
Open the catalina.sh script
Above the line reading "# OS specific support. $var _must_ be set to either true or false.", add one of the following:
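For example, for a production server (using the memory settings given in the next section):
JAVA_OPTS="$JAVA_OPTS -Xms4g -Xmx4g -XX:-HeapDumpOnOutOfMemoryError"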
Change the JVM Memory Configuration on Linux and OSX
Once you've located your Tomcat settings file, find the line for setting CATALINA_OPTS and add one of the following settings inside the double quotes.
Stop Tomcat.
For a production server, use:
-Xms4g -Xmx4g -XX:-HeapDumpOnOutOfMemoryError
For a test server, use:
-Xms2g -Xmx2g -XX:-HeapDumpOnOutOfMemoryError
Save the file.
Restart LabKey Server.
Change the JVM Memory Configuration on Windows
Tomcat is usually started as a service on Windows, and it includes a dialog for configuring the JVM. The maximum total memory allocation is configured in its own text box, but other settings are configured in the general JVM options box using the Java command line parameter syntax.
If you changed the name of the LabKey Windows Service, you must use Method 2.
Method 1:
Open Windows Explorer.
Go to the <CATALINA_HOME>/bin directory.
Locate and run the file tomcat#w.exe (where # is the Tomcat version number). Run this as an administrator by right-clicking the .exe file and selecting Run as administrator.
The command will open a program window.
If this produces an error that says "The specified service does not exist on the server", then please go to Method 2.
Go to the Java tab in the new window.
In the Java Options box, scroll to the bottom of the properties, and set the following property:
-XX:-HeapDumpOnOutOfMemoryError
Change the Initial memory pool to 2000 MB for a test server or 4000 MB for a production server.
Change the Maximum memory pool to the same value.
Click OK
Restart LabKey Server.
Method 2:
You will need to use this method if you customized the name of the LabKey Windows Service.
Open a Command Prompt. (Click Start, type "cmd", and press the Enter key.)
Navigate to the <CATALINA_HOME>\bin directory.
Execute the following command, making appropriate changes if you are running a different version of Tomcat. This example is for Tomcat 9:
tomcat9w.exe //ES//LabKeyTomcat9
The command will open a program window.
If this produces an error that says "The specified service does not exist on the server", then see the note below.
Go to the Java tab in the new window.
In the Java Options box, scroll to the bottom of the properties, and set the following property:
-XX:-HeapDumpOnOutOfMemoryError
Change the Initial memory pool to 2GB for a test server or 4GB for a production server.
Change the Maximum memory pool to the same value.
Click the OK button
Restart LabKey Server
NOTE: The text after the //ES// must exactly match the name of the Windows Service that is being used to start/stop your LabKey Server. You can determine the name of your Windows Service by taking the following actions:
Open the Windows Services panel. (Click Start, type "Services", and press the Enter key.)
In the Services panel, find the entry for LabKey Server. It might be called something like Apache Tomcat or LabKey
Double-click on the service to open the properties dialog.
In the command above, replace the text "LabKeyTomcat9" with the text shown next to ServiceName in the Properties dialog.
Check Tomcat Settings
On a running server, an administrator can confirm intended settings after restarting Tomcat.
Select (Admin) > Site > Admin Console.
Under Diagnostics, click System Properties to see all the current settings.
Creating & Installing SSL/TLS Certificates on Tomcat
This topic describes how to create and install an SSL/TLS certificate on a Tomcat server. First we cover the process for creating a self-signed certificate, then the process for obtaining a signed certificate from a Certificate Authority (CA).
Tomcat uses a Java KeyStore (JKS) repository to hold all of the security certificates and their corresponding private keys. This requires the use of the keytool utility that comes with the Java Development Kit (JDK) or the Java Runtime Environment (JRE). Review this topic for current JDK version recommendations: Supported Technologies
The alias is simply a "label" used by Java to identify a specific certificate in the keystore (a keystore can hold multiple certificates). It has nothing to do with the server name or the domain name of the Tomcat service. A lot of examples show "tomcat" as the alias when creating the keystore, but it really doesn’t matter what you call it. Just remember that once you choose an alias, use it consistently.
The common name (CN) is an attribute of the SSL/TLS certificate. Your browser will usually complain if the CN of the certificate and the domain in the URI do not match (but since you’re using a self-signed certificate, your browser will probably complain anyway). HOWEVER, when generating the certificate, the keytool will ask for "your first and last name" when asking for the CN, so keep that in mind. The rest of the attributes are not really that important.
Create a Self-Signed Certificate
Why create a self-signed certificate?
It allows you to learn to create a keystore and certificate, which is good practice for getting an actual SSL/TLS certificate provided by a Certificate Authority.
It allows you to use a certificate right away and make sure it works successfully.
It's free.
How to get started creating your self-signed certificate:
Step 1. Locate the keytool application within your JDK installation. Confirm that your <JAVA_HOME> environment variable points to the current supported version of the JDK and not to another JDK or JRE you may have installed previously.
The keytool will be in the bin directory. For example (where ## is the specific version number):
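For instance, typical locations might be (these paths are illustrative and vary by platform and JDK packaging):

/usr/lib/jvm/jdk-##/bin/keytool
C:\Program Files\Java\jdk-##\bin\keytool.exe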
Step 2. Create the keystore and the certificate. When you use the following commands, be sure to change <your password> to a password of your choosing and add a <keystore location> and <certificate location> that you will remember, likely the same location.
Use the following syntax to build your keystore and your self-signed certificate. Some examples follow.
The path to the keytool
The -genkeypair flag to indicate you are creating a key pair
The -exportcert flag to generate the certificate
The -alias flag and the alias you want to use for the keystore
The -keyalg flag and the algorithm type you want to use for the keystore
The -keysize flag and the value of the certificate encryption size
The -validity flag and the number of days for which you want the certificate to be valid
The -keystore flag and the path to where you want your keystore located
The -file flag and the path where you want the certificate located
The -storepass flag and your password
The -keypass flag and your password
The -ext flag to generate the SAN entry that is required by some modern browsers
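Putting those flags together, a representative pair of commands might look like the following (the alias, paths, and passwords are placeholders; adjust for your environment):

keytool -genkeypair -alias tomcat -keyalg RSA -keysize 4096 -validity 365 -keystore <keystore location>/keystore.jks -storepass <your password> -keypass <your password> -ext san=dns:localhost
keytool -exportcert -alias tomcat -file <certificate location>/my_selfsigned.cer -keystore <keystore location>/keystore.jks -storepass <your password>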
Step 3. The command will then present a series of prompts, asking you to supply a password for the keystore and the identifying attributes of the certificate:
The keystore password and to confirm the password.
The domain you want your SSL/TLS certificate made for.
NOTE: This prompt will literally ask the question
“What is your first and last name? [Unknown]:”
This is NOT your first and last name. This should be the domain you want to make the SSL/TLS certificate for. Since this section is about creating a self-signed certificate, please enter localhost as the domain and press Enter.
The name of your Organizational Unit. This is optional, but most will put down the name of the department the certificate is being used for or the department the certificate is being requested by.
The name of your Organization. This is also optional, but most will put in the name of their company.
The name of your city. This is also optional, but most will put in the city the company or department is located in.
The name of your state/province. This is also optional, but if you do enter this information, DO NOT abbreviate it: spell out the state/province in full. For example: California is acceptable, but not CA.
The two-letter country code. This is optional, but if you’ve already entered the rest above, you should enter the two-letter code. For example: US for United States, UK for the United Kingdom.
Confirmation that the information you entered is correct.
A final prompt for the key password; just press Enter to use the same password as the keystore from earlier.
The steps above will create the new keystore and add the new self-signed certificate to the keystore.
Step 4. Configure the server.xml file to use the new keystore & self-signed certificate.
Now that the keystore and certificate are ready, configure the server.xml file to use them.
Access the server.xml located in the Tomcat directory, under the conf directory.
Activate the HTTPS-Connector listed in the server.xml file. It should look something like the example below. Note: Be sure to change the certificateKeyAlias, certificateKeystoreFile, and certificateKeystorePassword to the ones you used earlier (especially the password).
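A representative Tomcat 9 connector entry is sketched below (attribute values are placeholders matching the keystore created above; confirm against the Tomcat documentation for your version):

<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true">
    <SSLHostConfig>
        <Certificate certificateKeyAlias="tomcat"
                     certificateKeystoreFile="<keystore location>/keystore.jks"
                     certificateKeystorePassword="<your password>"
                     type="RSA" />
    </SSLHostConfig>
</Connector>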
Step 5. (Optional) You can create a working certificate without this step, but for some uses (like running LabKey automated tests in HTTPS) you'll need to add the certificate manually to your OS and to Java so that they know to trust it. To add the certificate to the Trusted Root Certificate Authorities and Trusted Publishers, follow the instructions for your operating system below.
If you're trying to run LabKey's automated tests, edit the test.properties file to change labkey.port to "8443" and labkey.server to "https://localhost".
For OSX:
Find your my_selfsigned.cer file from step #2, go to Applications > Utilities > Keychain Access, and drag the file there.
Now we need to make this certificate trusted. Go to the Certificates line in the Category section in the lower-left corner of this window, find the "localhost" entry, and double-click it. Then expand the Trust section, and change "When using this certificate" to "Always Trust". Click the close button in the upper left and type your password to finish this operation.
Import the same my_selfsigned.cer file into your Java cacerts (which assumes you have not changed your Java's default keystore password of "changeit") by executing this command on the command line:
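The command might look like the following (an illustration assuming Java 9 or later, where keytool supports the -cacerts shortcut; the alias is arbitrary, and on older Java versions point -keystore at <JAVA_HOME>/lib/security/cacerts instead):

sudo keytool -importcert -alias labkey_selfsigned -file <certificate location>/my_selfsigned.cer -cacerts -storepass changeit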
For Windows:
Open a Windows command line and run the command certmgr. This opens the certificate manager for the current user. Perform the following in the certificate manager.
Right-click on Personal/Certificates. Select All Tasks > Import. This should open Certificate Import Wizard.
Current User should be selected. Click Next.
Enter the path to the certificate you created, <certificate location>/my_selfsigned.cer. Click Next.
Select "Place all certificates in the following store". Certificate store should be "Personal".
Review and click Finish.
You should now see your certificate under Personal/Certificates.
Right-click and Copy the certificate.
Paste the certificate into Trusted Root Certification Authorities/Certificates.
Paste the certificate into Trusted Publishers/Certificates.
Close certmgr.
Before running LabKey automated tests, edit the test.properties file: change labkey.port to "8443" and labkey.server to "https://localhost".
Execute the following to add your certificate to Java cacerts:
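For example (same assumptions as the OSX import above; run from an Administrator command prompt, and note the alias is arbitrary):

keytool -importcert -alias labkey_selfsigned -file <certificate location>\my_selfsigned.cer -cacerts -storepass changeit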
Windows may not pick up the certificate right away. A reboot should pick up the new certificate.
Step 6. Restart Tomcat and browse to https://localhost:8443/labkey to confirm you can connect via HTTPS. If you did everything correctly, you should see a grey padlock to the left of the URL, and not a red "https" with a line through it.
Configure LabKey Server to use HTTPS
Be sure to change the LabKey Server settings in your admin console to support HTTPS better. Go to (Admin) > Site > Admin Console > Settings > Configuration > Site Settings, and change the following values (which assume localhost as your server and port 8443):
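The settings to change are (reconstructed here for a localhost deployment; confirm the exact field names on your version's Site Settings page):

Base server URL: https://localhost:8443
Require SSL connections: checked
SSL/TLS port: 8443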
In particular, you'll want to do that second item ("Require SSL connections"), because you will no longer be able to sign in to LabKey properly via HTTP.
Create a Real Certificate
Once you’ve successfully created your own self-signed certificate, the steps to requesting and adding an actual certificate will be significantly easier.
To create your actual certificate, do the following:
1. Create the new keystore
Repeat step 2 in the self-signed section, but this time enter the domain you want the SSL/TLS certificate to be assigned to. For example, if your domain was “mylab.sciencelab.com”, you can run the following:
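For example (a reconstruction; the alias and paths are placeholders):

keytool -genkeypair -alias mylab -keyalg RSA -keysize 4096 -validity 365 -keystore <keystore location>/mylab.sciencelab.com.jks -storepass <your password> -keypass <your password>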
This would create a new dedicated keystore for your new domain. You can opt to use the existing keystore you created, but I prefer to keep that self-signed one separate from the legitimate one.
Go through the same steps as step 3 in the self-signed section, enter in your actual domain name when prompted for “What is your first and last name? [Unknown]:”. Everything else stays the same as before.
2. Create the Certificate Signing Request (CSR):
Now that the keystore is made, you can now make the CSR that you will then send to the certificate provider (such as GoDaddy.com, Comodo.com, or SSLShopper.com)
(Note: You can technically give the CSR any name, but I prefer to use the name of the domain and the extension .csr to keep things orderly)
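A representative command (a sketch; the alias must match the one used when creating the keystore):

keytool -certreq -alias mylab -file mylab.sciencelab.com.csr -keystore <keystore location>/mylab.sciencelab.com.jks -storepass <your password>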
When you generate the CSR, you may be prompted for values similar to those you entered when you created the keystore. Enter all the same respective values.
Once you are finished, open the .csr file using your favorite plain-text editor. You will see a long block of encoded text contained between two marker lines. This is the CSR that you will provide to the certificate provider to order your SSL/TLS certificate.
3. Applying your SSL/TLS certificate:
Once you receive the SSL/TLS certificate from your certificate provider, they may provide you with a few certificates, either 2 certificates (a Root certificate and the certificate for your domain) or 3 certificates (a Root certificate, an intermediate certificate, and the certificate for your domain). Sometimes, you may just get one certificate that has all of those certificates combined. Your certificate provider will provide you with an explanation on what they issued you and instructions on how to use them as well if you are in doubt.
Place the certificate(s) you’ve received in the same directory as the keystore.
If you are provided with a root and/or an intermediate certificate, run the following command:
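A representative command (a sketch; the certificate file name will match whatever your provider issued):

keytool -importcert -trustcacerts -alias root -file <path to root or intermediate certificate> -keystore <keystore location>/mylab.sciencelab.com.jks -storepass <your password>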
Take note that the alias is “root” and not the alias you’re using from before. This is intentional. Do not use the alias you used for the CSR or the keystore for this.
Otherwise, if you only received a single certificate, run the following:
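A representative command (a sketch; note that here the alias must match the alias of the key pair in your keystore):

keytool -importcert -trustcacerts -alias mylab -file <path to your domain certificate> -keystore <keystore location>/mylab.sciencelab.com.jks -storepass <your password>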
Create Special Wildcard and Subject Alternative Names (SAN) Certificates
Sometimes you may need to create a certificate that covers multiple domains. There are two types of additional certificates that can be created:
Wildcard certificates that would cover any subdomain under the main one.
Subject Alternative Name certificates (SAN) that would cover multiple domains.
To create a wildcard certificate, you simply use an asterisk in lieu of a subdomain when creating your keystore and CSR. So for the example of mylab.sciencelab.com, you would use *.sciencelab.com instead, and when requesting your certificate from the provider, you would specifically indicate that you want a wildcard certificate.
To create a SAN certificate, you would insert the additional domains and IPs you wish the certificate to apply to when you run the keytool command.
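For example, you might append an -ext flag like the following to the -genkeypair command (the domain names and IP address are placeholders):

-ext san=dns:mylab.sciencelab.com,dns:otherlab.sciencelab.com,ip:203.0.113.10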
LabKey Server supports a number of options to control its startup behavior, particularly as it relates to first-time installations and upgrades.
By default, the server will start up and provide status via the web interface. Administrators can log in to monitor startup, installation or upgrade progress. If the server encounters an error during the initialization process, it will remain running and provide error information via the web interface.
Three Java system properties can be passed as -D options on the command-line to the Java virtual machine to adjust the behavior, like "-DsynchronousStartup=true". All default to false:
synchronousStartup: Ensures that all modules are upgraded, started, and initialized before Tomcat startup is complete. No HTTP/HTTPS requests will be processed until startup is complete, unlike the usual asynchronous upgrade mode. Administrators can monitor the log files to track progress.
terminateAfterStartup: Allows "headless" install/upgrade where Tomcat terminates after all modules are upgraded, started, and initialized. This flag automatically sets synchronousStartup=true.
terminateOnStartupFailure: Configures the server to shut down immediately if it encounters a fatal error during upgrade, startup, or initialization, with a non-zero exit code. Information will be written to the log files.
R is a statistical programming environment frequently used to analyze and visualize datasets on LabKey Server. This topic describes how to install and configure R.
Click Download R for the OS you are using (Linux, OSX, or Windows).
Click the subcategory base.
Download the installer using the link provided for your platform, for example Download R #.#.# for Windows.
Install using the downloaded file.
Tips:
You don’t need to download the “contrib” folder on the Install site. It’s easy to obtain additional R packages individually from within R.
Details of R installation/admin can be found here.
OS-Specific Instructions: Windows
On Windows, install R in a directory whose path does not include a space character. The R FAQ warns to avoid spaces if you are building packages from sources.
OS-Specific Instructions: Linux
There are different distributions of R for different versions of Linux. Find and obtain the correct package for your version here, along with version specific instructions:
This example shows installation of version 4.0 on an Ubuntu machine. The snippet uses "lsb_release -cs" to detect which Ubuntu release you are running (one of "jammy", "focal", "bionic", ...):
# update indices
sudo apt update -qq
# install two helper packages we need
sudo apt install --no-install-recommends software-properties-common dirmngr
# add the signing key (by Michael Rutter) for these repos
# To verify key, run gpg --show-keys /etc/apt/trusted.gpg.d/cran_ubuntu_key.asc
# Fingerprint: E298A3A825C0D65DFD57CBB651716619E084DAB9
wget -qO- https://cloud.r-project.org/bin/linux/ubuntu/marutter_pubkey.asc | sudo tee -a /etc/apt/trusted.gpg.d/cran_ubuntu_key.asc
# add the R 4.0 repo from CRAN -- adjust 'focal' to 'groovy' or 'bionic' as needed
sudo add-apt-repository "deb https://cloud.r-project.org/bin/linux/ubuntu $(lsb_release -cs)-cran40/"
Additional Notes for Linux
These instructions install R under /usr/local (with the executable installed at /usr/local/bin/R)
Support for the X11 device (including png() and jpeg()) is compiled in R by default.
In order to use the X11, png, and jpeg devices, an X display must be available.
Continue with this topic until you can test the graphical rendering of jpeg() and/or png() in the R View Builder.
Authentication. If you wish to modify a password-protected LabKey Server database through the Rlabkey macros, you will need to set up authentication. See: Create a netrc file.
Permissions. Refer to Configure Permissions for information on how to adjust the permissions necessary to create and edit R Views. Note that only users who have the "Editor" role (or higher) plus either one of the developer roles "Platform Developer" or "Trusted Analyst" can create and edit R reports. Learn more here: Developer Roles.
Batch Mode. Scripts are executed in batch mode, so a new instance of R is started up each time a script is executed. The instance of R is run using the same privileges as the LabKey Server, so care must be taken to ensure that security settings (see above) are set accordingly. Packages must be re-loaded at the start of every script because each script is run in a new instance of R.
Install & Load Additional R Packages
You will likely need additional packages to provide functionality that the basic install does not include. Additional details on CRAN packages are available here. Packages only need to be installed once on your LabKey Server. However, they will need to be loaded at the start of every script when running in batch mode.
How to Install R Packages
Use the R command line or a script (including a LabKey R script) to install packages. For example, use the following to install two useful packages, "GDD" and "Cairo":
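A minimal example, run from the R console (assuming both packages are available from your configured CRAN mirror):

# Install the GDD and Cairo graphics packages in one call:
install.packages(c("GDD", "Cairo"), repos = "https://cran.r-project.org")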
You can also use the R GUI (Packages > Install Packages) to select and install packages.
How to Load
Each package needs to be installed AND loaded. If the installed package is not set up as part of your native R environment (check ‘R_HOME/site-library’), it needs to be loaded every time you start an R session. Typically, when running R from the LabKey interface, you will need to load (but not install) packages at the start of every script because each script is run in a new instance of R.
To load an installed package (e.g., Cairo), call:
library(Cairo)
Recommended Packages
GDD &/or Cairo: If R runs on a headless Linux server, you will likely need at least one extra graphics package. When LabKey R runs on a headless Linux server, it may not have access to the X11 device drivers (and thus fonts) required by the basic graphics functions jpeg() and png(). Installing the Cairo and/or GDD packages will allow your users to output .jpeg and .png formats without using the jpeg() and png() functions. More details on these packages are provided on the Determine Available Graphing Functions page.
You can avoid the use of Cairo and/or GDD by installing a display buffer for your headless server (see below for more info).
Lattice: Optional. This package is the commonly used, sophisticated graphing package for R. It is particularly useful for creating Participant Charts.
Headless Linux Servers Only: Rendering and the X Virtual Frame Buffer
On Linux servers, the png() and jpeg() functions use the device drivers provided by the X-windows display system to do rendering. This is a problem on a headless server where there is generally no display running at all. Your users may need to use graphics packages such as GDD or Cairo to replace the png() and jpeg() functions. See Determine Available Graphing Functions for further details.
If those packages produce the expected rendering, you can skip this section.
If not, as a workaround, you can install the X Virtual Frame Buffer. This allows applications to connect to an X Windows server that renders to memory rather than a display.
When a LabKey Server runs on OSX, R views can only resolve hostnames that are stored in the server's host file.
It appears that OSX security policy blocks spawned processes from accessing the name service daemon for the operating system.
You can use one of two work-arounds for this problem:
Add the hostname (e.g., www.labkey.org) to the hosts file (/etc/hosts). Testing has shown that the spawned process is able to resolve hostnames that are in the host file.
Use the name "localhost" instead of DNS name in the R script.
Rendering and the X Virtual Frame Buffer: Test the graphical rendering of jpeg() and/or png() in the R View builder within LabKey. There are special R graphics packages, like GDD and Cairo, that essentially replace the virtual frame buffer.
If rendering works as expected, you do not need the steps on this page.
However, if rendering is not functional, you may need to configure the X virtual frame buffer. This page walks you through an example installation and configuration of the X virtual frame buffer on Linux. Note that the specific example on this page is somewhat out of date, but the general process here should still apply.
Make sure you have completed the steps to install and configure R. See Install and Set Up R for general setup steps. Linux-specific instructions are included in that topic.
Install Xvfb
If the name of your machine is <YourServerName>, use the following:
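A representative sequence (reconstructed; display :2 matches the DISPLAY setting used later in this topic, and paths may vary):

[root@<YourServerName> ~]# /usr/bin/Xvfb :2 -nolisten tcp &
[root@<YourServerName> ~]# export DISPLAY=:2.0
[root@<YourServerName> ~]# R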
You will see many lines of output. At the ">" prompt, run the capabilities() command. It will tell you whether the X11, JPEG and PNG devices are functioning. The following example output shows success:
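Illustrative output (the exact entries vary by R version; the important values are TRUE for X11, jpeg, and png):

> capabilities()
    jpeg      png     tiff    tcltk      X11 http/ftp  sockets
    TRUE     TRUE     TRUE     TRUE     TRUE     TRUE     TRUE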
Make Configuration Changes to Ensure that Xvfb is Started at Boot-time
You need to make sure that Xvfb runs at all times on the machine or R will not function as needed. There are many ways to do this. This example uses a simple start/stop script and treats it as a service.
The script:
[root@<YourServerName> R-2.6.1]# cd /etc/init.d
[root@<YourServerName> init.d]# vi xvfb

#!/bin/bash
#
# /etc/rc.d/init.d/xvfb
#
# Author: Brian Connolly (LabKey.org)
#
# chkconfig: 345 98 90
# description: Starts Virtual Framebuffer process to enable the
#              LabKey server to use R.
#
#
Note: Any error messages produced by Xvfb will be sent to the file set in $XVFB_OUTPUT.
If you experience problems, these messages can provide further guidance.
The last thing to do is to run chkconfig to finish off the configuration. This creates the appropriate start and kill links in the rc#.d directories. The script above contains a line in the header comments that says "# chkconfig: 345 98 90". This tells the chkconfig tool that the xvfb script should be executed at runlevels 3, 4, and 5, and specifies the start priority (98) and stop priority (90). Adjust these values as appropriate for your system, as shown in the example below.
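For example (assuming the script was saved as /etc/init.d/xvfb):

[root@<YourServerName> init.d]# chmod 755 xvfb
[root@<YourServerName> init.d]# chkconfig --add xvfb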
Now you will need to set the DISPLAY environment variable for the user that runs the Tomcat server. Add the following to that user's .bash_profile. On this server, the Tomcat process is run by the user tomcat:
[root@<YourServerName> ~]# vi ~tomcat/.bash_profile

# Set DISPLAY variable for using LabKey and R.
DISPLAY=:2.0
export DISPLAY
Restart the LabKey Server or it will not have the DISPLAY variable set.
On this server, a start/stop script for Tomcat exists within /etc/init.d, so use that to stop and start the server.
Premium Feature — Available in the Professional and Enterprise Editions of LabKey Server. Learn more or contact LabKey.
As an alternative to installing a local instance of R, or using an Rserve server, LabKey Server can make use of an R engine inside a Docker image.
Multiple Docker-based R engines can be configured, each pointing to different Docker images, each of which may have different versions of R and packages installed.
Docker-based R engines are good candidates for "sandboxed" R engines.
To set up LabKey Server to use Docker-based R engines, complete the following steps:
For development/testing purposes on a Windows machine, you can use a non-secure connection in your Docker Settings: under General, place a checkmark next to Expose daemon on tcp://localhost:2375. Do not do this on a production server; this is not a secure configuration and is appropriate only for local development and testing.
Make and Start a Docker Image with R Included
LabKey provides two template images to start from. One template is for building a Dockerized R engine using the latest released version of R. The other template is for building an older version of R.
To create a Docker image with the latest available version of R, see the following topic. Since this always uses the latest R version, the actual R version will differ based on the latest R version available at the time the build command is run.
To create a Docker image with a specific version of R, see the topic below. Note that this template sets the default R version to 3.4.2, but this can be changed to any R version in the make files. Note that when using this image, R packages are installed from a CRAN snapshot so that package versions are correct for the R version.
When the docker image build process finishes, look for the name of your image in the console, for example:
Successfully tagged labkey/rsandbox-base:3.5.1
Start the image:
docker run labkey/rsandbox-base:3.5.1
Configure LabKey Server to Use the Docker R Engine
Step 1: Set the Base Server URL to something other than localhost:8080. For details, see Site Settings. Example value "http://my.server.com". You can also use the full IP address of a localhost machine if needed.
In the Edit Engine Configuration dialog, enter the Docker Image Name (shown here as "labkey/rsandbox"). If you are using RStudio Integration via Docker, this name will be "labkey/rstudio-base".
The Mount Paths let you (1) map in-container paths to host machine paths and (2) write script outputs to the mapped path on the host machine. For example, the following settings will write the script.R and script.Rout generated files to your host file system at C:\scripts\Rreports:
Mount (rw): host directory: C:\scripts\Rreports
Mount (rw): container directory: /Users/someuser/cache
Mount (rw) paths refer to Read-Write directories, Mount (ro) refer to Read-Only paths.
Extra Variables: Additional environment variables to be passed in when running a container.
Usage example: 'USERID=1000,USER=rstudio' would be converted into '-e USERID=1000 -e USER=rstudio' for a docker run command.
A special variable 'DETACH=TRUE' will force the container to run in detached mode, with '--detach'.
Site Default refers to the default sandboxed R engine.
If you have only one sandboxed R engine, this cannot be unchecked. If you have more than one sandboxed R engine, you can choose one of them to be the Site Default.
Enabled: Check to enable this configuration. You can define multiple configurations and selectively enable each.
This topic will guide you in determining whether you need to install additional graphic functions for use with the R statistical programming environment.
Before reading this section further, figure out whether you need to worry about its contents. Execute the following script in the R script builder:
if(!capabilities(what = "jpeg") || !capabilities(what="X11")) warning("You cannot use the jpeg() function on your LabKey Server"); if(!capabilities(what = "png") || !capabilities(what="X11")) warning("You cannot use the png() function on your LabKey Server");
If this script outputs both warnings, you’ll need to avoid both jpeg() and png() functions. If you do not receive warnings, you can ignore the rest of this section.
Why Don't png() and jpeg() Work? On Unix, jpeg() and png() rely on the x11() device drivers. These are unavailable when R is installed on a "headless" Unix server.
If png() and jpeg() Don't Work, What Are My Options? You have two categories of options:
Ask your admin to install a display buffer on the server such that it can access the appropriate device drivers.
Avoid jpeg() and png(). There are currently three choices for doing so: Cairo(), GDD() and bitmap().
Which Graphics Function Should I Use?
If you are working on a headless server without an installed display buffer, you will need to use Cairo(), GDD() or bitmap(). There are trade-offs for all options. If you use Cairo or GDD, your admin will need to install an additional graphics package. The Cairo package is based upon libraries undergoing continued development and maintenance, unlike the GDD package. Cairo does not require the use of Ghostscript to produce graphics, as does the bitmap() function. However, Cairo() fails to provide all graphics functions on all machines, so you will need to test its capabilities. GDD may provide functions unavailable in Cairo, depending on your machine setup.
Warning: LabKey R usually runs in batch mode, so any call to plot() must be preceded by a call to open the appropriate device (e.g., jpeg() or pdf()) for output. When R runs in its ordinary, interpreted/interactive mode, it opens an appropriate output device for graphics for you automatically. LabKey R does not do this, so you will need to open an output device for graphics yourself. Identifying appropriate devices and function calls is tricky and covered in this section.
Strategy #1: Use the Cairo and/or GDD Packages
You can use graphics functions from the GDD or Cairo packages instead of the typical jpeg() and png() functions.
There are trade-offs between GDD and Cairo. Cairo is being maintained, while GDD is not. GDD enables creation of .gif files, a feature unavailable in Cairo. You will want to check which image formats are supported under your installation of Cairo (this writer's Windows machine cannot create .jpeg images in Cairo). Execute the following function call in the script-builder window to determine the formats supported by Cairo on your machine:
Cairo.capabilities();
The syntax for using these packages is simple. Just identify the “type” of graphics output you desire when calling GDD or Cairo. The substitution parameters used for file variables are not unique to Cairo/GDD and are explained in subsequent sections.
# Load the Cairo package, assuming your Admin has installed it:
library(Cairo);
# Identify which "types" of images Cairo can output on your machine:
Cairo.capabilities();
# Open a Cairo device to take your plotting output:
Cairo(file="${imgout:labkeyl_cairo.png}", type="png");
# Plot a LabKey L:
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab= "L", xlab="LabKey", xlim= c(0, 100), ylim=c(0, 100), main="LabKey in R");
dev.off();
# Load the GDD package, assuming your Admin has installed it:
library(GDD);
# Open a GDD device to take your plotting output:
GDD(file="${imgout:labkeyl_gdd.jpg}", type="jpeg");
# Plot a LabKey L:
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab= "L", xlab="LabKey", xlim= c(0, 100), ylim=c(0, 100), main="LabKey in R");
dev.off();
Strategy #2: Use bitmap()
It is possible to avoid using either GDD or Cairo for graphics by using bitmap(). Unfortunately, this strategy relies on Ghostscript, which reportedly makes it slower and lower fidelity than the other options. Instructions for installing Ghostscript are available here.
Calls to bitmap will specify the type of graphics format to use:
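A sketch of such a call, assuming Ghostscript is installed and on the PATH (the file name is a placeholder; the plot is the same LabKey L used above):

# Open a bitmap device that renders via Ghostscript:
bitmap(file="${imgout:labkeyl_bitmap.jpeg}", type="jpeg");
# Plot a LabKey L:
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab= "L", xlab="LabKey", xlim= c(0, 100), ylim=c(0, 100), main="LabKey in R");
dev.off();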
Some of the initial configuration and startup steps of bootstrapping LabKey Server, i.e. launching a new server on a new database, can be performed automatically by creating a properties file that is applied at startup. Using these startup properties is not required.
Configuration steps which can be provided via the startup properties file include, but are not limited to:
Some site settings, such as file root and base server URL
Script engine definitions, such as automatically enabling R scripting
User groups and permission roles
Custom content for the home page
Using Startup Property Files
Startup property files are named with a .properties extension and placed in the <LABKEY_HOME>/build/deploy/startup directory, where <LABKEY_HOME> is the root of your enlistment. Create this directory in your enlistment if it does not exist.
One or more properties files can be defined and placed in this startup directory. The files are applied in reverse-alphabetical order. Any property defined in two such files will retain the "last" setting applied, i.e. the one in the file that is "alphabetically" first.
To control specific ordering among files, you could name multiple .properties files with sequential numbers, ordered starting with the one you want to take precedence. For instance, you might have:
01_application.properties: applied last
02_other.properties
...
99_default.properties: applied first
In this example, anything specified in "99_default.properties" will be overridden if the same property is also defined in any lower-numbered file. All values set for properties in the file "01_application.properties" will override any settings included in other files.
Startup Properties File Format
The properties file is a list of lines setting various properties. The format of a given line is:
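(The following is a reconstruction; consult the startup properties reference for your LabKey version for the definitive syntax.)

<scope>.<propertyName>;<modifier>=<value>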
For example, to emulate going to the "Look and Feel Settings", then setting the "initial" "systemEmailAddress" to be "username@mydomain.com", you would include this line:
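(Again a reconstruction; the scope name and the "bootstrap" modifier, which applies a value only at initial server startup, are assumptions here.)

LookAndFeelSettings.systemEmailAddress;bootstrap=username@mydomain.com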
Static site-specific content, such as custom splash pages, sitemap files, robots.txt, etc. can be placed in an extraWebapp directory. This directory is located under the main LabKey deployment directory on production servers; on a development machine, it is placed inside the <LabKey_Root>\build\deploy directory, as a peer to the labkeyWebapp and modules directories. The extraWebapp directory is "loaded last" when the server starts, and files placed here will not be deleted or overwritten when your site is upgraded to a new version of LabKey Server.
Robots.txt and Sitemap Files
If your server allows public access, you may wish to customize how external search engines crawl and index your site. Usually this is done through robots.txt and sitemap files.
You can place robots.txt and sitemap files (or other site-specific, static content) into the extraWebapp directory.
Alternatives
For a resource like a site "splash page", another good option is to define your resource in a module. This method lets you version control changes and still deploy consistent versions without manual updates.
This topic describes the steps required for a LabKey Cloud Hosted Server to be able to send emails from a non-LabKey email address, such as from a client's "home" email domain.
To send email from the client email domain, clients must authorize LabKey to send email on their behalf by creating a new text record in their DNS system known as a DKIM (DomainKeys Identified Mail) record.
What is DKIM?
DKIM is an email authentication method designed to detect email spoofing and prevent forged sender email addresses. Learn more on Wikipedia.
Why has LabKey implemented this new requirement?
LabKey takes client security seriously. LabKey Cloud Servers typically do not use client email servers to send email. Further, many clients use LabKey to manage PHI data and thus need to meet strict compliance guidelines. With LabKey using DKIM authorization, clients can be assured that email originating from LabKey systems has been authorized by their organization thus increasing the level of trust that the content of the email is legitimate.
PostMark
How does mail get sent from a LabKey Cloud Hosted Server?
To prevent mail from our servers being filtered as spam and generating support calls when clients can't find messages, LabKey uses a mail service called PostMark.
PostMark confirms through various methods that mail being sent by its servers isn't spam, and can therefore be trusted by recipients.
One part of the configuration requires that every "FROM" email domain sent through the LabKey account has a DKIM record. A DKIM record is like a password that tells PostMark LabKey has permission to send mail from that domain. This prevents domain-spoofing in emails coming from LabKey and sent through PostMark, thus protecting the integrity of both LabKey's and PostMark's reputations.
When LabKey sends a message from one of our cloud servers, it is sent to a specific PostMark email server via a password-protected account. PostMark then confirms the domain is one LabKey has a DKIM record for.
Because PostMark's model is to protect domains, LabKey cannot assign DKIM records to specific hosts, only to domains like labkey.com. As such, mail is sent from our cloud servers as username@domain, as opposed to username@host.domain.
If there's no DKIM for the domain in the email address, PostMark bounces the email from its server and never sends it. If the domain is DKIM approved, the mail is then sent on to the recipient.
Configure DNS Records
To configure DNS records so that mail from a client email address goes through, the following steps must be completed by both LabKey and the client:
The client tells LabKey which domain they want to send email from.
LabKey's DevOps team then configures PostMark to accept mail with that domain in the from address. At this point, PostMark gives LabKey a DKIM record.
LabKey passes the DKIM record to the client for the client to add to their DNS provider.
The client tells LabKey when they've done this and the LabKey DevOps team confirms that the DKIM record is properly configured.
LabKey sends a test message from that domain to ensure the mail is being accepted and sent.
LabKey informs the client that they can then send from their domain.
This entire process can be done in less than a day, provided the client is able to add the DKIM record with quick turnaround.
DKIM Records
What are the ramifications of adding a DKIM record for the client?
Because DKIM records are TXT records specific to PostMark, these records have no impact on the client apart from authorizing their LabKey Cloud Server to send email with their domain name. DKIM records do not impact existing mail configurations that the client is already using. They do not supplant MX records or add to them. For all intents and purposes, this record is invisible to the client -- it is only used by PostMark when mail is sent from a LabKey server with the client's domain in the from field.
Workarounds
Is there any way around needing the client to add a DKIM record?
If the client wants to send mail from their domain from a LabKey Cloud Server, they must add the DKIM record.
If they do not add this record, clients can configure their LabKey Cloud Server to send email from a LabKey domain (e.g. do_not_reply@labkey.com). LabKey has already created DKIM records for its email domains.
Deploying an AWS Web Application Firewall
This topic outlines the process for deploying an AWS Web Application Firewall (WAF) to protect LabKey instances from DDoS (Distributed Denial of Service) and other malicious attacks.
Overview
Public-facing LabKey instances are subject to “internet background radiation” from nefarious actors who seek to compromise systems to gain access to protected data. Typically motivated by financial extortion, these individuals use DDoS and bot networks to attack victims. Fortunately, there are some easy and low-cost tools to protect against many attacks. This document describes how to deploy an AWS Web Application Firewall (WAF) to protect against the OWASP top 10 vulnerabilities and many malicious bot networks.
Prerequisites
Configured Elastic Load Balancer with target group routing to the LabKey EC2 instance
Required AWS Permissions to use CloudFormation, WAF, IAM Policies, S3, Lambda, etc.
Considerations
Many LabKey core features require uploading and downloading files. These activities are difficult to distinguish from malicious ones, because the methods used to upload malicious code are indistinguishable from normal workflows. To address possible false positives, clients have the following options:
Create an "Allow list" of specific IP addresses or IP Address ranges of users originating from within the clients network. (e.g. allow the Public NAT gateway of the client’s network).
If this explicit listing is not feasible due to the expectation of random user IP addresses from various internet locations, consider setting the XSS rule to Count instead of Block (see information below). While this may reduce the effectiveness of the WAF in protecting against XSS attacks, clients still gain the benefit of other WAF features that block known malicious attacker source IPs.
Deployment
Architecture
Deployment Steps
Follow the AWS Tutorial for detailed steps to deploy the WAF using the CloudFormation Template:
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
As part of installing required components, you need to install a database server. You can use either PostgreSQL or Microsoft SQL Server as the primary database with a Premium Edition of LabKey Server. This topic covers the process of setting up a Windows machine to use Microsoft SQL Server as the primary database. Information about other environments is provided below.
If you already have a licensed version of Microsoft SQL Server in a supported version, follow the installation instructions, noting the requirements outlined below in the Express Edition steps.
If you don't have a licensed version of Microsoft SQL Server, you can download a free Express Edition. Note that the Express Edition has database size limitations that generally make it inappropriate for production deployments. You can also use the Developer Edition; details may differ slightly from instructions in this topic.
During installation, configure Microsoft SQL Server to accept both Windows Authentication and SQL Server Authentication, ("Mixed Mode"), and specify a user name and password for the administrative account.
Select the Custom installation option, choose the download location, then click Install to begin.
Once the SQL Server Installation Center wizard begins, choose New SQL Server installation.
Accept the license terms and click Next.
In the SQL Server 2019 Setup wizard, proceed through the steps accepting defaults until the Database Engine Configuration step:
Choose Mixed Mode (SQL Server authentication and Windows authentication).
Keep track of the user name and password; LabKey Server uses them to authenticate to SQL Server. They must be provided in plaintext in labkey.xml or in your mssql.properties file later.
Complete the wizard.
If you've already installed SQL Server without enabling SQL Server Authentication then see How to: Change Server Authentication Mode in the Microsoft SQL Server documentation.
Using SQL Server with a Local Development Machine
Follow the general steps to set up a development machine, with a few exceptions noted below:
1. You can ignore the instructions here around the config file "labkey.xml", which do not apply to setting up a development server.
2. Instead of configuring your pg.properties file, configure the corresponding file for Microsoft SQL Server (mssql.properties), specifying JDBC settings including the URL, port, username, and password.
After you've installed SQL Server, you'll need to configure it to use TCP/IP. Follow these steps:
Launch the SQL Server Configuration Manager.
Under the SQL Server Network Configuration node, select Protocols for <servername>.
In the right pane, right-click on TCP/IP and choose Enable.
Right-click on TCP/IP and choose Properties.
Switch to the IP Addresses tab.
Scroll down to the IPAll section, clear the value next to TCP Dynamic Ports and set the value for TCP Port to 1433 and click OK. By default, SQL Server will choose a random port number each time it starts, but the JDBC driver expects SQL Server to be listening on port 1433.
Click OK
Restart the service by selecting the SQL Server Services node in the left pane, selecting SQL Server <edition name> in the right pane, and choosing Restart from the Action menu (or use the Restart button on the toolbar).
SQL Server Management Studio
Download the SQL Server Management Studio graphical database management tool.
Click the download link to obtain the latest general availability (GA) version of SQL Server Management Studio
Run the downloaded .exe file.
Use Windows Update to install the latest service packs.
Set Up a Login
You may want to set up a new login (in addition to the "sa" system administrator) for LabKey Server to use to connect to SQL Server:
Run SQL Server Management Studio.
Connect to the database.
Under Security > Logins, add a new login, using SQL Server authentication.
Enter the user name and password.
Use this password to configure the data source below.
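Alternatively, the same login can be created with T-SQL (a sketch; the login name and password are placeholders):

CREATE LOGIN labkey WITH PASSWORD = 'StrongPasswordHere';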
LabKey Configuration File
Edit the LabKey Server configuration file (usually named labkey.xml or ROOT.xml) to configure the JDBC driver for Microsoft SQL Server, available with Premium Editions of LabKey Server.
We strongly recommend using Microsoft's JDBC driver, and this documentation provides configuration information for doing so. Support for the jTDS driver will be removed in 23.3.0.
Comment out the Resource tag that specifies the PostgreSQL configuration. This Resource tag can be identified by the driverClassName "org.postgresql.Driver". Use "<!--" and "-->" to comment it out, similar to the following:
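(A sketch; attribute names and values vary by installation.)

<!--
<Resource name="jdbc/labkeyDataSource" auth="Container"
    type="javax.sql.DataSource"
    driverClassName="org.postgresql.Driver"
    url="jdbc:postgresql://localhost:5432/labkey"
    username="postgres" password="..." />
-->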
Use the following template for configuring a MS SQL Server data source. Replace USERNAME, PASSWORD, SERVER_NAME, and DATABASE_NAME to fit the particulars of your target data source. If your SQL Server is not using port 1433 (the default), edit that part of the URL as well.
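A sketch of such a data source tag (the pool-related attribute values are illustrative, and the url shows the parameters discussed in the notes below):

<Resource name="jdbc/labkeyDataSource" auth="Container"
    type="javax.sql.DataSource"
    driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
    url="jdbc:sqlserver://SERVER_NAME:1433;databaseName=DATABASE_NAME;trustServerCertificate=true;applicationName=LabKey Server"
    username="USERNAME" password="PASSWORD"
    maxTotal="20" maxIdle="10" maxWaitMillis="120000"
    validationQuery="SELECT 1" />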
Note: In the url parameter, "trustServerCertificate=true" is needed if SQL Server is using a self-signed certificate that Java hasn't been configured to trust. The applicationName isn't required, but it identifies the client to SQL Server in connection usage and other reports. For production environments, adding the certificate to a trust store enables certificate validation. See the driver documentation for details on these configuration options.
Note: The maxWaitMillis parameter is provided to prevent server deadlocks. Waiting threads will time out when no connections are available rather than hang the server indefinitely.
You may also need to install the PremiumStats CLR functions separately. For details see PremiumStats Install.
EHR users may also need to install LDKNaturalize, following similar methods.
SQL Server Synonyms
LabKey Server supports the use of SQL Server Synonyms. These alternative names function like shortcuts or symlinks, allowing you to "mount" tables and views which actually exist in another schema or database. For more information, see SQL Synonyms.
Installation on Other Platforms
Linux Deployment
Microsoft distributes a native Linux version. Obtain it and follow the documentation available here:
4. Enable Hyper-V by going to Turn Windows features on or off and checking all Hyper-V boxes. Click OK.
5. Make sure Virtualization is enabled by checking your Windows Task Manager. Find it on the Performance tab under the grid for CPU usage. If it is not enabled, check the troubleshooting documentation available from Docker.
6. Restart your PC. Docker should start during this reboot. Follow status from the icon on the taskbar.
7. Use Windows Powershell to test the installation:
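A common smoke test (not LabKey-specific) is to run the hello-world image:

docker run hello-world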
9. Once the test is successful, you can pull the SQL Server image for Linux from the Docker registry. Find the specific featured tag to pull on this page. For example:
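(The tag below is illustrative; pick the current featured tag from the page referenced above.)

docker pull mcr.microsoft.com/mssql/server:2019-latest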
13. View your SQL Server container up and running:
docker ps -a
14. Connect to SQL Server via SQL Server Management Studio using:
Server name: your Server name followed by a comma and the port number
Authentication: Login "sa" with your new strong password
15. You'll now be able to connect via LabKey. Modify mssql.properties with the port number and password.
16. If desired, you can disable the automatic start of Docker by editing the settings.
For troubleshooting assistance with this process, review the Docker documentation.
Using Windows/Domain Authentication
In some organizations, you may want or need to use Windows/Domain Authentication instead of SQL Server Authentication. This option should be considered carefully, as access to LabKey Server will depend on a successful login in the domain/Active Directory. If a problem occurs, such as an expired profile, or if Active Directory is down or having issues, Windows-based authentication to SQL Server will fail.
In order to use Windows/Domain Authentication, you need to add the integratedSecurity parameter to the URL string in your data source tag:
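For example, the url might become (a sketch based on the template above):

url="jdbc:sqlserver://SERVER_NAME:1433;databaseName=DATABASE_NAME;integratedSecurity=true"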
In addition, you need to install the MSSQL JDBC Auth DLL file within the driver package; it is not included by default. You can download the matching version of the full driver from Microsoft directly. The error message you see will guide you to the expected version and specific file name. Find the DLL in the auth > x64 subfolder, download it, and put it into the Windows/System32 directory.
If this DLL is not present, you'll see an error message similar to:
Message: This driver is not configured for integrated authentication. ClientConnectionId:#####
SQLState: 08S01
ErrorCode: 0
com.microsoft.sqlserver.jdbc.SQLServerException: This driver is not configured for integrated authentication. ClientConnectionId:#####
...
java.lang.UnsatisfiedLinkError: Unable to load authentication DLL mssql-jdbc_auth-10.2.1.x64
Switch from jTDS to Microsoft JDBC Driver
We strongly recommend using Microsoft's JDBC driver; support for the jTDS driver will be removed in 23.3.0.
If you are currently using the jTDS driver with a development machine, the next time you pull from develop and run 'gradlew pickMSSQL' you'll start using the new driver. If you need to switch back for some reason, you can use the target 'gradlew pickJtds' to return to using the jTDS driver.
For other instances, you will need to update the data source driver's class and URL in the labkey.xml/ROOT.xml file. See the configuration template above, particularly the driverClassName and url lines. We recommend that you first upgrade your staging (and/or test) instance(s), and upgrade production after testing is completed.
Note that if you are using Windows domain authentication, you will need to obtain and install a different DLL than you needed for the jTDS driver.
Install SAS/SHARE for Integration with LabKey Server
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
Overview
Publishing SAS datasets to your LabKey Server provides secure, dynamic access to datasets residing in a SAS repository. Published SAS data sets appear on LabKey Server as directly accessible datasets. They are dynamic, meaning that LabKey treats the SAS repository as a live database; any modifications to the underlying data set in SAS are immediately viewable on LabKey. The data sets are visible only to those who are authorized to see them.
Authorized users view published data sets using the familiar, easy-to-use grid user interface used throughout LabKey. They can customize their views with filters, sorts, and column lists. They can use the data sets in custom queries and reports. They can export the data in Excel, web query, or TSV formats. They can access the data sets from JavaScript, SAS, R, Python, and Java client libraries.
Several layers keep the data secure. SAS administrators expose selected SAS libraries to SAS/SHARE. LabKey administrators then selectively expose SAS libraries as schemas available within a specific folder. The folder is protected using standard LabKey security; only users who have been granted permission to that folder can view the published data sets.
SAS Setup
Before SAS datasets can be published to LabKey, an administrator needs to do three things:
Set up the SAS/SHARE service on the SAS installation
Set up the SAS/SHARE JDBC driver on the LabKey web server
Define SAS libraries as external schemas within LabKey
Set up the SAS/SHARE server. This server runs as part of the SAS installation (it does not run on the LabKey server itself). SAS/SHARE allows LabKey to retrieve SAS data sets over an internal corporate network. The SAS/SHARE server must be configured and maintained as part of the SAS installation. The LabKey installation must be able to connect to SAS/SHARE; it requires high-speed network connectivity and authentication credentials. SAS/SHARE must be configured to predefine all data set libraries that the LabKey installation needs to access.
Set up the SAS/SHARE JDBC driver. This driver allows LabKey to connect to SAS/SHARE and treat SAS data sets as if they were tables in a relational database. The SAS/SHARE JDBC driver must be installed on the LabKey installation. This requires copying two .jar files into the tomcat/lib directory on LabKey. It also requires adding a new DataSource entry in the labkey.xml file on LabKey containing several connection settings (e.g., SAS/SHARE URL and credentials). See External SAS Data Sources.
Define SAS libraries as external schemas within LabKey. A folder administrator chooses which SAS libraries to publish in a LabKey Server folder via the Schema Administration user interface. If a SAS data source is defined in the labkey.xml file, the "Data Source" drop-down list contains the name of this data source as an option. After selecting the data source, the administrator selects the schema (library) name to publish. After clicking the “Create” button, all data sets in that library are published; in other words, they can be viewed by anyone with read permissions in the folder.
Once defined via the Schema Administration page, a SAS library can be treated like any other database schema (with a couple important exceptions listed below). The query schema browser lists all its data sets as “built-in tables.” A query web part can be added to the folder’s home page to display links to a library’s data sets. Links to key data sets can be added to wiki pages, posted on message boards, or published via email. Clicking any of these links displays the data set in the standard LabKey grid with filtering, sorting, exporting, paging, customizing views, etc. all enabled. Queries that operate on these datasets can be written. The data sets can be retrieved using client APIs (Java, JavaScript, R, and SAS).
Limitations
The two major limitations with SAS data sets are currently:
Like all other external data sources, SAS data sets can be joined to each other but not joined to data in the LabKey database or other data sources.
SAS/SHARE data sources provide read-only access to SAS data sets. You cannot insert, update, or delete data in SAS data sets from LabKey.
GROUP_CONCAT Install
Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.
This topic explains how to install the Microsoft SQL Server GROUP_CONCAT CLR (Common Language Runtime) functions. You may need to install these functions as part of setting up a shared SQL Server installation.
GROUP_CONCAT is a SQL aggregate function (similar to SUM, MIN, or MAX) that combines values from multiple rows into a single string value. For example, executing GROUP_CONCAT on a column with row values "First", "Second", and "Third" produces a single value "First, Second, Third". Some databases, such as MySQL, include this as a built-in function. Microsoft SQL Server does not, so LabKey requires a CLR function that implements the capability.
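For instance, a LabKey SQL query using the function looks like any other aggregate query. A sketch (the table and column names are hypothetical):

SELECT Department, GROUP_CONCAT(EmployeeName) AS Employees
FROM Employees
GROUP BY Department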
This function is typically installed automatically as part of the regular LabKey installation process. However, the process can fail if, for example, the database user does not have permission to install CLR functions. In these cases, a database administrator needs to install the function manually.
Note on permissions: To install CLR functions, the database user must have sysadmin permissions. When installing on SQL Server in an Amazon AWS RDS instance, this permission is not available. Instead, you need to create a DB Parameter group in RDS. This advanced feature enables installation of CLRs.
On a workstation with a connection to the Microsoft SQL Server Database Server:
If the automatic installation has failed, site administrators will see a banner message on the running server reading "The GROUP_CONCAT aggregate function is not installed. This function is required for optimal operation of this server." with two links:
Click Download installation script in the banner message to download the required script, named "groupConcatInstall.sql"
Click View installation instructions in the banner message to open to this topic.
Connect to the Microsoft SQL Server using an account with membership in the sysadmin role.
Execute the downloaded SQL script in the database.
Confirm that group_concat is installed in the core schema.
Restart Tomcat. The changes to the database will be recognized by the server only after a restart.
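As a command-line sketch of the connect/execute/confirm steps above, assuming sqlcmd is available, your server is named MYSERVER, and your LabKey database is named labkey (all names here are assumptions; substitute your own values):
sqlcmd -S MYSERVER -U sa -d labkey -i groupConcatInstall.sql
sqlcmd -S MYSERVER -U sa -d labkey -Q "SELECT name FROM sys.objects WHERE name LIKE '%GROUP_CONCAT%'"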
PremiumStats Install
This topic explains how to install the Microsoft SQL Server PremiumStats CLR functions. You may need to install these functions as part of setting up a SQL Server installation with a Premium Edition of LabKey Server.
PremiumStats is a CLR (Common Language Runtime) assembly with aggregate functions supporting LabKey Premium summary statistics, including median, median absolute deviation, quartiles, and interquartile ranges. Microsoft SQL Server does not support these natively, so LabKey requires a CLR assembly to implement these capabilities.
This assembly and functions are typically installed automatically as part of the regular LabKey installation process. However, the process can fail if, for example, the database user does not have permission to install CLR assemblies. In these cases, a database administrator needs to install the assembly manually.
On a workstation with a connection to the Microsoft SQL Server Database Server:
If the automatic installation has failed, site administrators will see a banner message on the running server reading "The premium aggregate functions are not installed. These functions are required for premium feature summary statistics." with two links.
Click Download installation script in the banner message to download the required script: "premiumAggregatesInstall.sql"
Click View installation instructions in the banner message to open to this topic.
Connect to the Microsoft SQL Server using an account with membership in the sysadmin role.
Execute the downloaded SQL script in the database.
Confirm that PremiumStats is installed in the core schema.
Restart Tomcat. The changes to the database will be recognized by the server only after a restart.
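The equivalent command-line sketch for PremiumStats, under the same assumptions about server and database names as in the GROUP_CONCAT example above:
sqlcmd -S MYSERVER -U sa -d labkey -i premiumAggregatesInstall.sql
sqlcmd -S MYSERVER -U sa -d labkey -Q "SELECT name FROM sys.assemblies WHERE name LIKE '%PremiumStats%'"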
Modules are the functional building blocks of LabKey Server. A folder's functionality is determined by the set of modules that are enabled in that folder. The Folder Type of a project or folder determines an initial set of enabled modules, and additional modules included in the deployment can be enabled as necessary. For details see Enable a Module in a Folder.
Core
The Core module provides central services such as administration, folder management, user management, module upgrade, file attachments, analytics, and portal page management.
Experiment
The Experiment module provides annotation of experiments based on FuGE-OM standards. This module defines the XAR (eXperimental ARchive) file format for importing and exporting experiment data and annotations, and allows user-defined custom annotations for specialized protocols and data.
FileContent
The FileContent module lets you share files on your LabKey Server via the web.
Issues
The Issues module provides a ready-to-use workflow system for tracking tasks and problems across a group.
List
Lists are light-weight data tables, often used to hold utility data that supports an application or project, such as a list of instrument configurations.
Pipeline
The Data Pipeline module uploads experiment data files to LabKey Server. You can track the progress of uploads and view log and output files. These provide further details on the progress of data files through the pipeline, from file conversion to the final location of the analyzed runs.
Search
The Search module offers full-text search of server contents, implemented using Apache Lucene.
Study
The Study module provides a variety of tools for integration of heterogeneous data types, such as demographic, clinical, and experimental data. Cohorts and participant groups are also supported by this module.
Survey
The Survey module supports custom user surveys for collecting user information, feedback, or participant data.
Visualization
Implements the core data visualization features, including box plots, scatter plots, time charts, etc.
Wiki
The Wiki module provides a simple publishing tool for creating and editing web pages on the LabKey site. It includes the Wiki, Narrow Wiki, and Wiki TOC web parts.
LabKey Server modules requiring significant customization by a developer are not included in LabKey distributions. Developers can build these modules from source code in the LabKey repository. Please contact LabKey to inquire about support options.
Using a staging server, a temporary server running against a copy of the production database, is a best practice that helps you confirm the production upgrade will be successful.
It is a known issue that pipeline jobs in progress prior to an upgrade may not resume successfully afterward. The same is true for jobs that hit an error condition before the upgrade and are retried after it. In general, hotfix releases should not have incompatibilities when resuming jobs, but major version transitions will sometimes exhibit this problem. When possible, it is recommended to let jobs complete prior to performing an upgrade.
From time to time, one-time migrations may result in longer upgrade times than expected for certain modules. For example, when the audit log tables' RowId columns were changed from integer to bigint, upgrading the audit log module may have taken an unusually long time and extra disk space.
Follow this checklist to manually upgrade LabKey Server to a new version. The process assumes that you have previously installed LabKey using the recommended directory structure described in Install on Linux: Main Components.
Upgrade Changes for 20.7
The JDBC jars (jtds.jar, postgresql.jar, mysql.jar) are now versioned and distributed inside the module directories like any other third-party jar, making it unnecessary to copy them to the CATALINA_HOME/lib directory during installation and upgrade. When you upgrade to 20.7, delete these JDBC jar files from CATALINA_HOME/lib to avoid conflicts.
Note that each LabKey distribution includes an upgrade script that automates much of the process. Consider using this script before you proceed with the fully manual process described below.
Before upgrade, notify your users in advance that the server will be down for a period of time.
Upgrade All Dependencies
Before upgrading LabKey Server, it is important to upgrade Java, Tomcat, and your database (PostgreSQL or SQL Server) to their recommended versions. For details see: Supported Technologies
If you did not install using the recommended directory structure, this is a reasonable time to make the switch. Using the root directory "/usr/local/labkey" allows you to keep all necessary components in one place.
Download the New LabKey Server Distribution
Navigate to the directory /usr/local/labkey/src, the base directory for downloading and unpacking distributions:
cd /usr/local/labkey/src
Premium Edition users can download their custom distribution from the Server Builds section of their client portal.
The name of the distribution archive includes the version number, build number, and edition. For example, "LabKey20.3.2-65320.7-community-bin.tar.gz" indicates:
version: 20.3.2
build number: 65320.7
edition: community
Unpack the distribution archive. In the example below, substitute the pound symbols #### appropriately:
sudo tar xfz LabKey####-bin.tar.gz
The unpacked bundle will now be located in a directory named after the distribution, for example: /usr/local/labkey/src/LabKey20.3.2-65320.7-community-bin
Locate your LABKEY_HOME directory. The default location is /usr/local/labkey/labkey.
Find your Tomcat home directory, referred to as CATALINA_HOME. The default location is /usr/local/labkey/apps/apache/apache-tomcat-#.#.##.
Confirm the locations of the existing LabKey Server files on your system for each of the following components, in preparation for replacing them with the corresponding LabKey Server files:
LABKEY_HOME/labkeywebapp: The directory containing the LabKey Server web application.
LABKEY_HOME/modules: The directory containing the LabKey Server modules.
LABKEY_HOME/externalModules: The directory containing additional, user-developed LabKey Server modules, if applicable. (Not all installations contain an externalModules directory. If you don't see one, skip this step.)
CATALINA_HOME/lib: The existing LabKey Server libraries and JAR files.
CATALINA_HOME/conf/Catalina/localhost/labkey.xml: The LabKey Server configuration file. This file may be named labkey.xml, LABKEY.xml, or ROOT.xml.
Stop Tomcat and Backup
Shut down the Tomcat web server. Note that you do not need to shut down the database that LabKey Server connects to. To shut down Tomcat, use a command like the following, or the script provided by Apache.
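For example, depending on how Tomcat is managed on your system (the service name "tomcat" below is an assumption; use whichever method applies to your setup):
sudo systemctl stop tomcat
# or, if Tomcat is not managed as a service:
sudo /usr/local/labkey/apps/apache/apache-tomcat-#.#.##/bin/shutdown.sh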
Back up your resources by following these steps. It is critical that you completely replace the /labkeywebapp, /modules, and /externalModules directories below, to avoid including conflicting artifacts in the upgraded server.
Move the following directories to the backup directory: LABKEY_HOME/bin, LABKEY_HOME/labkeywebapp, LABKEY_HOME/modules, and LABKEY_HOME/externalModules (if it exists).
Confirm that the LABKEY_HOME folder no longer contains these four subdirectories.
Copy the following directories to the same backup subfolder: CATALINA_HOME/lib and CATALINA_HOME/conf. You may need to restore files from these copies later, but it is not necessary to fully replace these directories in the upgrade process.
If your installation includes other modules not included in the distribution bundle, recompile these modules against the server version you're installing (or otherwise obtain updated versions of the modules) and add them to $LABKEY_HOME/externalModules.
Copy the contents of LABKEY_DIST/tomcat-lib into CATALINA_HOME/lib. Choose to overwrite any jars with the same names that are already present. Do not delete or move the other files in the CATALINA_HOME/lib folder, as they are required for Tomcat to run. For example:
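(The paths below assume the recommended directory layout; substitute the pound symbols #### appropriately.)
sudo cp /usr/local/labkey/src/LabKey####-bin/tomcat-lib/* /usr/local/labkey/apps/apache/apache-tomcat-#.#.##/lib/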
If you have customized the server's stylesheet, restore your modified stylesheet from the backup directory into the new LABKEY_HOME/labkeywebapp directory.
Ensure that the LABKEY_HOME/bin directory is on your system path, or on the path of the user account that will be starting Tomcat.
Note: This will upgrade the versions of X!Tandem and TPP tools which are currently being used with LabKey Server.
Check for Changes to labkey.xml
Compare the outgoing and incoming labkey.xml (or root.xml) files.
If necessary, merge any other settings you have changed in the outgoing file into the incoming one.
If there are changes, update CATALINA_HOME/conf/Catalina/localhost/labkey.xml.
Note: The name of the LabKey Server configuration file determines the URL address of your LabKey Server application. If you change the name (or case) of this configuration file, any external links to your LabKey Server application will break. For more information, see Installation: LabKey Configuration File.
For example, if your existing LabKey Server installation has been running as the root web application on Tomcat and you want to ensure that your application URLs remain identical after the upgrade, rename the copied labkey.xml to ROOT.xml in the same localhost subfolder location.
Check for Changes to log4j.xml
If you have not customized the log4j.xml file at LABKEY_HOME/labkeywebapp/WEB-INF/classes/log4j.xml, then you can skip this step.
If you have customized the log4j.xml file, then compare your modified file with the incoming log4j.xml file. If necessary, merge your customizations with the incoming version.
Reassert Tomcat User Ownership
If necessary, reassert the Tomcat user's ownership over LABKEY_HOME and CATALINA_HOME. For example:
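(The "tomcat" user and group names below are assumptions; substitute the account that runs Tomcat on your system.)
sudo chown -R tomcat:tomcat /usr/local/labkey /usr/local/labkey/apps/apache/apache-tomcat-#.#.##
Then start Tomcat again, for example with sudo systemctl start tomcat or CATALINA_HOME/bin/startup.sh.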
If you have any problems starting Tomcat, check the Tomcat logs in the CATALINA_HOME/logs directory.
At this point LabKey Server should be up and running.
It is good practice to review Server Information and System Properties on the Admin Console immediately after the upgrade to ensure they are correct.
If you have problems, check the Tomcat logs, and double-check that you have properly named the LabKey Server configuration file and that its values are correct.
LabKey Server ships with a script for upgrading a LabKey Server running on Linux and OSX, or other UNIX-style operating systems. This script, named manual-upgrade.sh, can be used to upgrade your LabKey Server to the latest version.
-l dir: LABKEY_HOME directory to be upgraded. This directory contains the labkeywebapp, modules, pipeline-lib, etc directories for the existing LabKey Server instance. (Required)
-d dir: Upgrade distribution directory: contains labkeywebapp, lib, and manual-upgrade.sh. (default: current working directory)
-c dir: CATALINA_HOME; root of LabKey Apache Tomcat installation.
-u owner: the tomcat user account (default: current user)
--noPrompt: do not require the user to hit enter before proceeding with the install
Web server startup/shutdown method (select one):
--service: use /etc/init.d/tomcat (default)
--systemctl: use /bin/systemctl
--catalina: use CATALINA_HOME/bin/shutdown.sh and CATALINA_HOME/bin/startup.sh
Example
For this example, we will assume the following:
LABKEY_HOME directory: /usr/local/labkey/labkey
Upgrade distribution directory: /usr/local/labkey/src/labkey/LabKey18.1-58484.70-community-bin
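Putting the options together, a typical invocation might look like the following sketch (the CATALINA_HOME path and the "tomcat" user are assumptions; pick the startup/shutdown flag that matches how Tomcat is managed on your system):
cd /usr/local/labkey/src/labkey/LabKey18.1-58484.70-community-bin
sudo ./manual-upgrade.sh -l /usr/local/labkey/labkey -d /usr/local/labkey/src/labkey/LabKey18.1-58484.70-community-bin -c /usr/local/labkey/apps/apache/apache-tomcat-#.#.## -u tomcat --service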
This script does not keep a backup copy of the LabKey Server java files after the upgrade. In order to install a previous version, you will need to have the LabKey Server distribution files for that previous version available on your file system.
You can then simply execute the script again, specifying the previous version's directory containing the uncompressed LabKey Server distribution files.
Note: This will not roll back any database upgrades that have occurred, and may put your server into a bad state.
Backup of LabKey Server Database
This script does not perform a backup of your LabKey Server database.
LabKey recommends that you perform a backup of your LabKey Server database before upgrading your LabKey Server using this script. Learn about backup in these topics:
This topic covers the manual upgrade process for LabKey Server running on a Windows machine.
Upgrade Changes for 20.7
The JDBC jars (jtds.jar, postgresql.jar, mysql.jar) are now versioned and distributed inside the module directories like any other third-party jar, making it unnecessary to copy them to the CATALINA_HOME/lib directory during installation and upgrade. When you upgrade to 20.7, delete these JDBC jar files from CATALINA_HOME/lib to avoid conflicts.
Download the Windows distribution: Download zip (e.g., LabKeyxx.x-xxxx-bin.zip)
Unzip the distribution bundle.
Locate Your Existing LabKey Server Installation
Locate your <LABKEY_HOME> directory, the directory to which you previously installed LabKey Server. A typical location is C:\labkey\labkey.
Find your Tomcat home directory, referred to as <CATALINA_HOME>. A typical location will be C:\labkey\apps\apache\apache-tomcat-x.x.xx.
Confirm the locations of the existing LabKey Server files on your system for each of the following components, in preparation for replacing them with the corresponding LabKey Server files:
<LABKEY_HOME>/bin: Contains the LabKey executables.
<LABKEY_HOME>/labkeywebapp: Contains the LabKey Server web application.
<LABKEY_HOME>/modules: Contains the LabKey Server modules.
<LABKEY_HOME>/externalModules: Contains additional, user-developed LabKey Server modules, if applicable. (Not all installations contain an externalModules directory. If you don't see one, skip this step.)
<CATALINA_HOME>/lib: The LabKey Server libraries and JAR files.
<CATALINA_HOME>/conf/Catalina/localhost/labkey.xml: The LabKey Server configuration file. This file may be named labkey.xml, LABKEY.xml, or ROOT.xml.
Prepare to Copy the New Files
Shut down the Tomcat web server. On Windows, Tomcat is typically run as a Windows service, in which case you should shut down Tomcat using the Services panel. Note that you do not need to shut down the database that LabKey Server connects to.
Open the Services panel, select Apache Tomcat #.#, and click Stop the service.
In the <LABKEY_ROOT>/backup directory, create a new subdirectory to store the backup of your current configuration. For instance, you could use a date, or a number that reflects the outgoing version, such as C:\labkey\backup\v191.
Back up your resources by moving and copying the following directories to the new backup subfolder. Note that it is critical that you completely replace the /labkeywebapp, /modules, and /externalModules directories below.
Move <LABKEY_HOME>/bin
Move <LABKEY_HOME>/labkeywebapp
Move <LABKEY_HOME>/modules
Move <LABKEY_HOME>/externalModules (if it exists)
Copy <CATALINA_HOME>/lib
Copy <CATALINA_HOME>/conf
Copy Files from the New LabKey Server Distribution
Copy the following subdirectories from the unpacked new distribution <LABKEY_ROOT>/src/labkey/LabKeyxx.x-xxxx-bin/ into the appropriate locations:
Copy the /bin directory to <LABKEY_HOME>.
Copy the /labkeywebapp directory to <LABKEY_HOME>.
Copy the /modules directory to <LABKEY_HOME>.
Copy the /externalModules directory (if any) to <LABKEY_HOME>.
Copy the .jar libraries from the /tomcat-lib directory into <CATALINA_HOME>/lib. Choose to overwrite any jars that are already present. Do not delete or move the other files in the <CATALINA_HOME>/lib folder, as they are required for Tomcat to run.
If you have customized the stylesheet for your existing LabKey Server installation, copy your modified stylesheet from the /backup directory into the new <LABKEY_HOME>/labkeywebapp directory.
Restore Third Party Components
If you are using any third party components and libraries, restore them from the backup directory. (Backing up your existing <LABKEY_HOME>/bin directory by moving it to the backup directory causes the loss of any third-party binaries that might have been installed manually.)
Ensure that the <LABKEY_HOME>/bin directory is on your system path, or on the path of the user account that will be starting Tomcat.
Check for Changes in the LabKey Server Configuration File
Compare the outgoing and incoming labkey.xml (or root.xml) files.
If there are changes, backup the outgoing labkey.xml file.
Then merge any settings you have changed in the outgoing file into the incoming one.
Note: The name of the LabKey Server configuration file determines the URL address of your LabKey Server application. If you change the name (or case) of this configuration file, any external links to your LabKey Server application will break. For more information, see Installation: LabKey Configuration File.
For example, if your existing LabKey Server installation has been running as the root web application on Tomcat and you want to ensure that your application URLs remain identical after the upgrade, rename the copied labkey.xml to ROOT.xml in the same localhost subfolder location.
Restart Tomcat and Test
Using the Services panel, start the Tomcat web server. If you have any problems starting Tomcat, check the Tomcat logs in the <CATALINA_HOME>/logs directory.
Navigate to the LabKey Server application with a web browser using the appropriate URL address.
It is good practice to review the module version numbers on the Admin Console immediately after the upgrade to ensure they are correct.
At this point LabKey Server should be up and running. If you have problems, check the Tomcat logs, and double-check that you have properly named the LabKey Server configuration file and that its values are correct.
Premium Resource: Upgrade JDK on AWS Ubuntu Servers
LabKey Releases and Upgrade Support Policy
LabKey provides clients with three types of regular releases, and also creates nightly snapshot builds primarily for internal LabKey use:
Production releases. Every four months LabKey provides a LabKey Server release intended for production use. Production releases are tested thoroughly and receive maintenance updates for approximately six months after initial production release. Production releases are versioned using year and month, for example: 20.3.0 (March 2020 production release), 20.7.0 (July 2020 production release), 20.11.0 (November 2020 production release). Production quality releases of Sample Manager and Biologics are available each month.
Maintenance releases. LabKey issues reliability fixes and minor enhancements via periodic maintenance releases. Production releases have maintenance updates scheduled every two weeks, typically for several months after each production release. Maintenance releases include a non-zero minor version, for example, 20.3.1, 20.7.4. Maintenance releases are cumulative; each includes all the fixes and enhancements included in all previous maintenance releases.
Monthly releases. Every month we provide a LabKey Server release intended for development, testing, and staging servers, but not for production. Our clients can use monthly releases to preview and test new features that LabKey has developed. Monthly releases are versioned with year and month, for example: 20.1.0, 20.4.0, 20.10.0. Sample Manager and Biologics provide monthly releases that are intended for production use.
Snapshot builds. These builds are produced nightly with whatever has been changed each day. They should not be deployed to production servers. They are intended for internal LabKey testing, or for external developers who are running LabKey Server on their workstations.
We strongly recommend that every production deployment runs the most recent production release of LabKey Server at all times. Upgrading regularly ensures that you are operating with all the latest security, reliability, and performance fixes, and provides access to the latest set of LabKey capabilities. LabKey Server contains a reliable, automated system that ensures a straightforward upgrade process.
Recognizing that some organizations can't upgrade immediately after every LabKey production release, upgrades can be skipped for a full year (or longer in some cases):
Every release can directly upgrade every release from that year and the previous year. For example, 22.2 through 23.1 can upgrade servers running any 21.x or previous 22.x release. That provides an upgrade window of 13 - 24 months. Any earlier release (20.x or before) will not be able to upgrade directly to 22.2 or later.
Some earlier releases (e.g., 21.3) support a longer upgrade window; consult the chart below for details.
While we discourage running LabKey Server monthly releases or snapshot (nightly) builds in production environments, we support upgrading monthly and snapshot builds under the same rules (beginning with LabKey 19.1.0).
This upgrade policy provides flexibility for LabKey Server users. Having a window of support for upgrade scenarios allows us to retire old migration code, streamline SQL scripts, and focus testing on the most common upgrade scenarios.
The table below shows the upgrade scenarios supported by past and upcoming releases:
Releases              Can Upgrade From These Releases
23.2.0 - 24.1.0       22.1.0 and later
22.2.0 - 23.1.0       21.1.0 and later
21.11.0 - 22.1.0      19.2.0 and later
21.7.x                19.1.0 and later
21.3.x                19.1.0 and later
20.11.x               19.1.0 and later
The table below shows the upgrade scenarios supported by past releases that followed our previous upgrade policy:
Production Release    Can Upgrade From These Production Releases    Can Upgrade From These Monthly Releases and Snapshot Builds
20.7.x                18.1 and later                                19.1.0 and later
20.3.x                17.3 and later                                19.1.0 and later
19.3.x                17.2 and later                                19.1.0 and later
19.2.x                17.1 and later                                18.3.0 and later
19.1.x                16.3 and later                                18.2 and later
18.3.x                16.2 and later                                18.1 and later
18.2                  16.1 and later                                17.3 and later
18.1                  15.3 and later                                17.2 and later
17.3                  15.2 and later                                17.1 and later
17.2                  15.1 and later                                16.3 and later
17.1                  14.3 and later                                16.2 and later
16.3                  14.2 and later                                16.1 and later
16.2                  14.1 and later                                15.3 and later
16.1                  13.3 and later                                15.2 and later
15.3                  13.2 and later                                15.1 and later
15.2                  13.1 and later                                14.3 and later
15.1                  12.3 and later                                14.2 and later
14.3                  12.2 and later                                14.1 and later
14.2                  12.1 and later                                13.3 and later
14.1                  11.3 and later                                13.2 and later
13.3                  11.2 and later                                13.1 and later
13.2                  11.1 and later                                12.3 and later
13.1                  10.3 and later                                12.2 and later
12.3                  10.2 and later                                12.1 and later
12.2                  10.1 and later                                11.3 and later
12.1                  9.3 and later                                 11.2 and later
11.3                  9.2 and later                                 11.1 and later
11.2                  9.1 and later                                 10.3 and later
11.1                  8.3 and later                                 10.2 and later
10.3                  8.2 and later                                 10.1 and later
If you have questions or find that this policy causes a problem for you, please contact LabKey for assistance.
Backup and Maintenance
Prior to upgrading your installation of LabKey Server, we recommend that you back up your database, as well as other configuration and data files. We also recommend that you regularly perform maintenance tasks on your database. This section provides resources for both backup and maintenance policies.
To protect the data in your PostgreSQL database, you should regularly perform the routine maintenance tasks that are recommended for PostgreSQL users. These maintenance operations include using the VACUUM command to free disk space left behind by updated or deleted rows and using the ANALYZE command to update the statistics used by PostgreSQL for query optimization. See the PostgreSQL documentation for your version for details on these maintenance commands.
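For example, a minimal maintenance pass run as the postgres user (the database name "labkey" and the binary path are assumptions) might look like:
su - postgres -c '/usr/bin/vacuumdb --analyze labkey'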
If you need to take down your LabKey Server for maintenance or due to a serious database problem, you can post a "Site Down" message to notify users who try to access the site. Learn more in this topic:
If you are using our recommended folder configuration, the <LABKEY_ROOT>\backup subdirectory is a good place to store backups locally.
Database
Backup procedures will vary somewhat based on the type and location of your database. These guidelines can get you started.
PostgreSQL
The default LabKey installation uses a PostgreSQL database. PostgreSQL provides commands for three different levels of database backup: SQL dump, file system level backup, and on-line backup.
You can also find backup details in the PostgreSQL documentation for your version.
Site-level File Root. You should back up the contents (files and sub-directories) of the site-level file root. The location of the site-level file root is set at: (Admin) > Site > Admin Console > Configuration > Files.
Navigate to the file root location (for instance, an older installation might have been located in: C:\Program Files (x86)\LabKey Software).
Right-click on the files folder and select Send To > Compressed (zipped) folder to create a zip file of the folder.
Move this zip file to C:\labkey\backup.
Pipeline Files. You should also back up any directories or file shares that you specify as root directories for the LabKey pipeline. In addition to the raw data that you place in the pipeline directory, LabKey will generate files that are stored in this directory. The location of the pipeline root is available at: (Admin) > Go To Module > Pipeline > Setup. (If you do not see this option, you may not be logged in as a site administrator.)
Other File Locations. To see a summary list of file locations: go to (Admin) > Site > Admin Console > Configuration > Files, and then click Expand All. Note the Default column: if a file location has the value false, then you should back up the contents of that location manually.
Note: For some LabKey Server modules, the files (pipeline root or file content module) and the data in the database are very closely linked. Thus, it is important to time the database backup and the file system backup as closely as possible.
Configuration and Log Files
Log Files. Log files are located in <CATALINA_HOME>/logs.
Configuration Files. Configuration files, including labkey.xml, are located in <LABKEY_HOME> or <CATALINA_HOME>/conf/Catalina/localhost.
This page provides a suggested backup plan for an installation of LabKey Server. A backup plan may be built in many ways given different assumptions about an organization's needs. This page provides just one possible solution. You will tailor its suggestions to your LabKey Server implementation and your organization's needs.
General Guidelines
You should back up the following data in your LabKey Server:
Database
Site-level file root
Pipeline root and FileContent module files
LabKey Server configuration and log files
For some LabKey Server modules, the files (Pipeline Root or File Content Module) and the data in the database are very closely linked. Thus, it is important to time the database backup and the file system backup as closely as possible.
Assumptions for Backup Plan
Backup Frequency: For robust enterprise backup, this plan suggests performing incremental and transaction log backups hourly. In the event of a catastrophic failure, researchers will lose no more than 1 hour of work. You will tailor the frequency of all types of backups to your organization's needs.
Backup Retention: For robust enterprise backup, this plan suggests a retention period of 7 years. This will allow researchers to be able to restore the server to any point in time within the last 7 years. You will tailor the retention period to your organization's needs.
Database Backup
Full Backup of Database: Monthly
This should occur on a weekend or during a period of low usage on the server
Differential/Incremental Backup of Database: Nightly
"Differential" means that you back up all changes since the last Full backup
Transaction Log Backups: Hourly
Site-level File Root
Full Backup of Files: Monthly
This should occur on a weekend or during a period of low usage on the server
To determine the site-level file root go to: (Admin) > Site > Admin Console > Settings > Configuration > Files. Back up the contents of this file root.
Make sure to check for any file locations that have overridden the site-level file root. For a summary of file locations, go to (Admin) > Site > Admin Console > Settings > Configuration > Files > Expand All.
Pipeline Root or File Content Module File Backup
Full Backup of Files: Monthly
This should occur on a weekend or during a period of low usage on the server
Incremental Backup of Files: Hourly
LabKey Server configuration and log files
These files are stored in the following locations (<CATALINA_HOME> is your Tomcat installation).
Log Files are located in <CATALINA_HOME>/logs
Configuration files are located in <CATALINA_HOME>
Full Backup of Files: Monthly
This should occur on a weekend or during a period of low usage on the server
Incremental Backup of Files: Nightly
Example Scripts for Backup Scenarios
This topic provides example commands and scripts to help you perform backups of your server for several typical backup scenarios. These examples presume you are using PostgreSQL, which LabKey Server uses by default. They can be customized to your needs.
Perform a Full Backup on a Linux Server, where the PostgreSQL Database is Being Run as the PostgreSQL User
su - postgres -c '/usr/bin/pg_dump --compress=5 --format=c -f /labkey/backups/labkey_database_backup.bak labkey'
Perform a Full Backup of your PostgreSQL Database and All Files Stored in Site-level File Root
Learn more about the site level file root in this topic: File Terminology
The sample Perl script lkDataBackup.pl works on a Linux Server, but can be easily changed to work on other operating systems.
You can easily customize the script to fit your LabKey installation by changing the variables at the top of the file:
$labkeyHome: the directory where you have installed the LabKey binaries. Normally /usr/local/labkey
$labkeyFiles: the site-level file root. By default this is located in the files subdirectory of $labkeyHome
$labkeyBackupDir: the directory where the backup files will be stored
$labkeyDbName: the name of the LabKey database. By default this is named labkey.
The script assumes:
You have Perl installed on your server
You are using the PostgreSQL database and it is installed on the same computer as the LabKey server.
PostgreSQL binaries are on the path.
See the script for more information.
Error and status messages for the script are written to the log file data_backup.log. It will be located in the backup directory.
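If you prefer a shell script to Perl, the following minimal sketch captures the same idea (all paths and the database name are assumptions; adjust them to your installation):
#!/bin/bash
# Dump the LabKey database in custom format, then archive the site-level file root.
LABKEY_FILES=/usr/local/labkey/files   # site-level file root (assumption)
BACKUP_DIR=/labkey/backups             # must be writable by the postgres user
STAMP=$(date +%Y%m%d_%H%M)
su - postgres -c "pg_dump --compress=5 --format=c -f $BACKUP_DIR/labkey_db_$STAMP.bak labkey"
tar czf $BACKUP_DIR/labkey_files_$STAMP.tar.gz -C $LABKEY_FILES .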
Example Backup Script for Windows
Modify this example as needed to suit your versions and environment:
You may also need to modify other files (such as PGPASS.conf) to provide the necessary access.
Note that this example performs two backups: one for the LabKey database, and one for the default postgres database. This default postgres DB backup is not always necessary, but is included in the script.
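A minimal sketch of such a batch script follows (the PostgreSQL path, user, and backup locations are assumptions; the password is expected to come from PGPASS.conf as noted above):
set PGBIN=C:\Program Files\PostgreSQL\#.#\bin
"%PGBIN%\pg_dump" -U postgres --format=c -f C:\labkey\backup\labkey_database_backup.bak labkey
"%PGBIN%\pg_dump" -U postgres --format=c -f C:\labkey\backup\postgres_database_backup.bak postgres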
There are two recommended output formats for a backup of a PostgreSQL database using pg_dump:
1. Plain-text format
Default format for backups
This is simply a text file which contains the commands necessary to rebuild the database.
Can be restored using psql
Note: Individual tables CANNOT be restored using this format. In order to restore an individual table, you will need to restore the entire database first, then copy the table contents from the restored database to the working database. For information about backing up and restoring tables, please consult the PostgreSQL.org docs for your specific version of PostgreSQL.
2. Custom archive format
Most flexible format.
When using this format, the backup is compressed by default. Compression can be disabled using the --compress=0 option.
Must use pg_restore to restore the database
The backup file can be opened and the contents reviewed using pg_restore
Individual tables CAN be restored using this format.
Note: The backup file size (uncompressed) can be roughly 2x the size of the database. This is due to the overhead necessary to allow pg_restore to restore individual tables, etc.
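For reference, minimal pg_dump invocations for the two formats (the database name "labkey" is an assumption):
pg_dump labkey > labkey_backup.sql                # 1. plain-text format
pg_dump --format=c -f labkey_backup.bak labkey    # 2. custom archive format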
Restore Backups
For restoring backups, use psql for plain-text formatted backups and pg_restore for custom-format backups.
To restore a database, we recommend the following actions:
Drop the target database.
Create a new database to replace the database you dropped. This database will be empty. Make sure you're using the same database name.
Run the pg_restore or psql command to perform your database restore to the new, empty database you created.
Example 1: Custom-formatted backup
Example of restoring a database on a local PostgreSQL server, using a custom formatted database backup:
Linux:
Step 1: Run the su command to switch to the postgres user (Note: You will either need to be the root user or run the sudo command with su):
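The following is a sketch of the commands involved; the database name, backup location, and database_server_address are assumptions to replace with your own values:
sudo su - postgres
# Steps 2-3: drop and recreate the database, then restore from the custom-format backup:
dropdb -h database_server_address labkey
createdb -h database_server_address labkey
pg_restore -h database_server_address --dbname=labkey /labkey/backups/labkey_database_backup.bak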
The same commands can be used locally as well; just replace database_server_address with localhost, or run the psql command as the postgres user and connect to the default postgres database using the \c postgres command.
If you are the site admin and need to take down your LabKey Server for maintenance or due to a serious database problem, you can configure a "Site Down" banner message to notify users who try to access the site. Users will see a message you can configure with information about the duration of the outage or next steps.
To post a "Site Down" message, the simplest method is to follow these steps:
1. Shut down Tomcat, which will shut down all current active connections, including database connections.
2. Rename your labkey.xml (or ROOT.xml) file under <catalina-home>/conf/Catalina/localhost to labkey.old (or ROOT.old).
This will stop Tomcat from finding the LabKey web application upon startup.
3. Go to <catalina-home>/webapps/ROOT and create an HTML file with the desired maintenance message inside it.
If LabKey is the only application hosted on your domain, the name "index.html" is recommended, since this ensures that even going to the main domain default page will display your site down message (instead of a 404 error).
The contents can be as simple as:
<p>LabKey is currently down while we upgrade to the latest version. </p>
4. In the <catalina-home>/conf directory, locate and edit the web.xml file, adding the following XML to it:
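A typical addition (assuming the maintenance page from Step 3 is named index.html) looks like:
<error-page>
  <error-code>404</error-code>
  <location>/index.html</location>
</error-page>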
The main domain URL's default index.html page will display the Site Down message you created.
If anyone tries to access any page in that domain, even a previously valid full LabKey URL, they will also see the same site-down message.
LabKey itself will not be running (since the labkey.xml or ROOT.xml file was renamed to an extension that Tomcat does not recognize as an XML context file).
Perform the maintenance work that required posting the site down message.
Remove the Site Down Message
When you are ready to restore the operational server, follow these steps in this order:
Stop Tomcat.
Remove the error-page lines you entered in the web.xml file in Step 4 above.
Remove the HTML file you created in Step 3 above. (Move it to a location outside the <catalina-home> directory structure if you want to use the same message next time.)
Restore the original name of the labkey.old or ROOT.old file in <catalina-home>/conf/Catalina/localhost (to labkey.xml or ROOT.xml).
Restart Tomcat.
Your server will now be back online as before the outage.
Proxy or Load Balancer
If you are using a proxy, load balancer, or other service that forwards HTTP requests from clients to Tomcat, it will typically have the ability to present a page when the primary server is offline. Consult the documentation for your service.
When using LabKey Server in any working lab or institution, using a staging server gives you a test platform for upgrades and any other changes prior to deploying them in your production environment. The staging server typically runs against a snapshot of the production database, giving testers and developers a real-world platform for validation. In this topic, the term "staging" is a catchall for any server that is not your institution's production machine, but is using a copy of the database.
Staging servers are used for many different reasons, including:
Ensuring that an upgrade of LabKey Server does not break any customization.
Testing new modules, views, or queries being developed.
Running usage scenarios on test data prior to launching to a large group of users.
Overview
This topic outlines some recommended settings to change when configuring a staging server. Not all possible changes are listed here, but this subset is the most useful. Doing things like changing the color scheme on the staging server will help users know which environment they are working on and avoid inadvertent changes to the production environment.
The specific SQL statements shown in this topic are for PostgreSQL databases. For MSSQL or other databases, the syntax will be similar but not identical.
If you use SQL statements to make the changes suggested on this page, make them after you restore the database on the staging server but before you start LabKey Server there.
Change the Server GUID
Summary: The Server GUID is stored in the database, but can be overridden with one specified in the LabKey XML configuration file (labkey.xml or ROOT.xml). Assigning your staging server (or any non-production server using a copy of that database) a unique Server GUID ensures that any exception reports received by LabKey developers are accurately attributed to the server (staging vs. production) that produced the errors.
Background: The Server GUID is a Global Unique Identifier for each running server. By default, LabKey Servers periodically communicate information, including details about exceptions raised, back to LabKey developers. LabKey groups this information by GUID of each server.
This GUID name should be unique, and can be as descriptive/obvious as you like, particularly in the case of a deployment involving multiple running servers, such as production, staging, and potentially others for testing, development, or disaster recovery. You can either use a human-readable GUID or let the server assign one for you during initial database creation.
For example, if your organization's production server GUID is "Biotopia Server", your staging server could use "Biotopia Staging Server". If your production server shows a generated ID like "0061580-2096-103a-8de4-a2f0599f", you could append a suffix as in "0061580-2096-103a-8de4-a2f0599f-staging".
When using a staging server on a restored snapshot of the production database, by default it will pick up the same GUID as the production server from that database. This can cause some confusion for LabKey developers when they are researching exception reports and trying to determine fixes for these problems. Changing the Server GUID for the staging server helps LabKey quickly track down exceptions and fix bugs detected on any of your servers.
How-to. Find your current Server GUID on the (Admin) > Admin Console > Server Information tab.
Set Server GUID in Config File
To specify your own GUID string to use, edit the LabKey Server configuration file for your staging server. This file will be named either labkey.xml or ROOT.xml and is located in the configuration directory of your Tomcat installation.
On Windows, this file is located in: <CATALINA_HOME>\conf\Catalina\localhost
On OSX, this file is located in: <CATALINA_HOME>/conf/Catalina/localhost
Add the following text inside the <Context> element of that file, replacing "Unique Server GUID" with the GUID you want to use for your staging server (please don't use "Unique Server GUID"):
<!-- Set new serverGUID -->
<Parameter name="org.labkey.mothership.serverGUID" value="Unique Server GUID"/>
Save the file.
Restart LabKey Server.
Add a Suffix to the Current Server GUID
Instead of editing the config file, you can also use a SQL statement on the database itself to append a suffix to the current GUID value:
UPDATE prop.Properties p SET Value = Value || '-staging' WHERE (SELECT s.Category FROM prop.PropertySets s WHERE s.Set = p.Set) = 'SiteConfig' AND p.Name = 'serverGUID';
Change the Site Settings
Change the Site Settings Manually
Log on to your staging server as a Site Admin.
Select (Admin) > Site > Admin Console.
Under Configuration, click Site Settings.
Change the following:
(Recommended): Base server url: change this to the URL for your staging server.
Optional Settings to change:
Configure Security > Require SSL connections: If you want to allow non-SSL connections to your staging server, uncheck this box.
Configure Security > SSL port number: If your SSL port number has changed. By default Tomcat will run SSL connections on 8443 instead of 443. Change this value if your staging server is using a different port.
Configure Pipeline Settings > Pipeline tools: If your staging server is installed in a different location than your production server, change this to the correct location for staging.
Change the Site Settings via SQL Statements
These commands can be run via psql or pgAdmin.
To change the Base Server URL run the following, replacing "http://testserver.test.com" with the URL of your staging server:
UPDATE prop.Properties p SET Value = 'http://testserver.test.com' WHERE (SELECT s.Category FROM prop.PropertySets s WHERE s.Set = p.Set) = 'SiteConfig' AND p.Name = 'baseServerURL';
To change the Pipeline Tools directory run the following, replacing "/path/to/labkey/bin" with the new path to the Pipeline tools directory:
UPDATE prop.Properties p SET Value = '/path/to/labkey/bin' WHERE p.Name = 'pipelineToolsDirectory';
To change the SSL Port number run this, replacing "8443" with the SSL port configured for your staging server:
UPDATE prop.Properties p SET Value = '8443' WHERE (SELECT s.Category FROM prop.PropertySets s WHERE s.Set = p.Set) = 'SiteConfig' AND p.Name = 'sslPort';
To disable the SSL Required setting:
UPDATE prop.Properties p SET Value = 'false' WHERE (SELECT s.Category FROM prop.PropertySets s WHERE s.Set = p.Set) = 'SiteConfig' AND p.Name = 'sslRequired';
Change the Look and Feel
Change the Look and Feel Manually
Logon to your staging server as a Site Admin.
Select (Admin) > Site > Admin Console.
Under Configuration, click Look and Feel Settings.
Change the following:
System description: Add a prefix, such as the word "TEST" or "Staging", to the text in this field.
Header short name: This is the name shown in the header of every page. Again, prepending "TEST" to the existing name or changing the name entirely will indicate it is the staging server.
Theme: Use the drop-down to change this to a different theme.
NOTE: Following these instructions will change the site-level Look and Feel settings. If you have customized the Look and Feel on individual projects, those project-level settings will override the site settings. To change them in customized projects, go to the Look and Feel settings for each project and make a similar change.
Change the Look and Feel Settings via SQL statements
These commands can be run via psql or pgAdmin.
To change the Header short name for the Site and for all Projects, run this, replacing "LabKey Staging Server" with the short name for your staging server.
UPDATE prop.Properties p SET Value = 'LabKey Staging Server' WHERE (SELECT s.Category FROM prop.PropertySets s WHERE s.Set = p.Set) = 'LookAndFeel' AND p.Name = 'systemShortName';
To change the System description for the Site and for all Projects, run this, replacing "LabKey Staging Server" with the system description for your own staging server:
UPDATE prop.Properties p SET Value = 'LabKey Staging Server' WHERE (SELECT s.Category FROM prop.PropertySets s WHERE s.Set = p.Set) = 'LookAndFeel' AND p.Name = 'systemDescription';
To change the Web Theme, or color scheme, for the Site and for all Projects, run this, replacing "Harvest" with the name of the theme you would like to use on your staging server.
UPDATE prop.Properties p SET Value = 'Harvest' WHERE (SELECT s.Category FROM prop.PropertySets s WHERE s.Set = p.Set) = 'LookAndFeel' AND p.Name = 'themeName';
Other settings (For Advanced Users)
Below are some additional configuration settings that you may find useful.
Deactivate all non-Site Admin users
This is important because it prevents your researchers from accidentally logging into the staging server.
UPDATE core.Principals SET Active = FALSE WHERE Type = 'u' AND UserId NOT IN (SELECT p.UserId FROM core.Principals p INNER JOIN core.Members m ON (p.UserId = m.UserId AND m.GroupId = -1));
Mark all non-complete Pipeline Jobs as ERROR
This ensures that any pipeline jobs that were scheduled to run at the time of the production server backup do not run on the staging server. This is highly recommended if you are using MS2, Microarray, or Flow.
UPDATE pipeline.statusfiles SET status = 'ERROR' WHERE status != 'COMPLETE' AND status != 'ERROR';
Change the Site Wide File Root
Only use this if the site file root on your staging server differs from the one on your production server.
UPDATE prop.Properties p SET Value = '/labkey/labkey/files' WHERE (SELECT s.Category FROM prop.PropertySets s WHERE s.Set = p.Set) = 'SiteConfig' AND p.Name = 'webRoot';
Have the Staging Server start up in Admin Only mode
UPDATE prop.Properties p SET Value = 'true' WHERE (SELECT s.Category FROM prop.PropertySets s WHERE s.Set = p.Set) = 'SiteConfig' AND p.Name = 'adminOnlyMode';
This topic provides an example of a large-scale installation that can be used as a model for designing your own server infrastructure. Testing your site using a near-copy staging and/or test environment is good practice for confirming that your production environment will continue to run trouble-free.
Overview
The Atlas installation of LabKey Server at the Fred Hutch Cancer Research Center provides a good example of how staging, test and production servers can provide a stable experience for end-users while facilitating the rapid, secure development and deployment of new features. Atlas serves a large number of collaborating research organizations and is administered by SCHARP, the Statistical Center for HIV/AIDS Research and Prevention at the Fred Hutch. The staging server and test server for Atlas are located behind the SCHARP firewall, limiting any inadvertent data exposure to SCHARP itself and providing a safer environment for application development and testing.
The SCHARP team runs three nearly-identical Atlas servers to provide separate areas for usage, application development and testing:
Production. Atlas users interact with this server. It runs the most recent, official, stable release of LabKey Server and is updated to the latest version of LabKey every 3-4 months.
Staging. SCHARP developers use this server to develop custom applications and content that can be moved atomically to the production server. Staging typically runs the same version of LabKey Server as production and contains most of the same content and data, mimicking production as closely as possible. This server is upgraded to the latest version of LabKey just before the production server is upgraded, allowing a full test of the upgrade and new functionality in a similar environment.
Test. SCHARP developers use this server to test new LabKey Server features that are still under development and to develop applications against new APIs. This server is updated on an as-needed basis to the latest build of LabKey Server. Just like the staging server, the test server is located behind the SCHARP firewall, enhancing security during testing.
All Atlas servers run on commodity hardware (Intel/Unix) and store data in the open source PostgreSQL database server. They are deployed using virtual hardware to allow administrators to flexibly scale up and add hardware or move to new hardware without rebuilding the system from scratch.