Table of Contents

Documentation Home
   Release Notes 24.11 (November 2024)
   Upcoming Features
     Release Notes 25.3 (March 2025)
Getting Started
   Try it Now: Data Grids
   Trial Servers
     Explore LabKey Server with a trial in LabKey Cloud
       Introduction to LabKey Server: Trial
       Exploring LabKey Collaboration
       Exploring Laboratory Data
       Exploring LabKey Studies
       Exploring LabKey Security
       Exploring Project Creation
       Extending Your Trial
       LabKey Server trial in LabKey Cloud
       Design Your Own Study
     Explore LabKey Biologics with a Trial
     Install LabKey for Evaluation
   Tutorials
     Set Up for Tutorials: Trial
     Set Up for Tutorials: Non-Trial
     Navigation and UI Basics
   LabKey Server Editions
     Training
LabKey Server
   Introduction to LabKey Server
   Navigate the Server
   Data Basics
     LabKey Data Structures
     Preparing Data for Import
     Field Editor
       Field Types and Properties
       Text Choice Fields
       URL Field Property
       Conditional Formats
       String Expression Format Functions
       Date & Number Display Formats
       Lookup Columns
       Protecting PHI Data
     Data Grids
       Data Grids: Basics
       Import Data
       Sort Data
       Filter Data
         Filtering Expressions
       Column Summary Statistics
       Customize Grid Views
       Saved Filters and Sorts
       Select Rows
       Export Data Grid
       Participant Details View
       Query Scope: Filter by Folder
     Reports and Charts
       Jupyter Reports
       Report Web Part: Display a Report or Chart
       Data Views Browser
       Query Snapshots
       Attachment Reports
       Link Reports
       Participant Reports
       Query Reports
       Manage Data Views
         Manage Study Notifications
       Manage Categories
       Manage Thumbnail Images
       Measure and Dimension Columns
     Visualizations
       Bar Charts
       Box Plots
       Line Plots
       Pie Charts
       Scatter Plots
       Time Charts
       Column Visualizations
       Quick Charts
       Integrate with Tableau
     Lists
       Tutorial: Lists
         Step 1: Set Up List Tutorial
         Step 2: Create a Joined Grid
         Step 3: Add a URL Property
       Create Lists
       Edit a List Design
       Populate a List
       Manage Lists
       Export/Import a List Archive
     R Reports
       R Report Builder
       Saved R Reports
       R Reports: Access LabKey Data
       Multi-Panel R Plots
       Lattice Plots
       Participant Charts in R
       R Reports with knitr
       Premium Resource: Show Plotly Graph in R Report
       Input/Output Substitutions Reference
       Tutorial: Query LabKey Server from RStudio
       FAQs for LabKey R Reports
     Premium RStudio Integration
       Connect to RStudio
         Set Up Docker with TLS
       Connect to RStudio Workbench
         Set Up RStudio Workbench
       Edit R Reports in RStudio
       Export Data to RStudio
       Advanced Initialization of RStudio
     SQL Queries
       LabKey SQL Tutorial
       SQL Query Browser
       Create a SQL Query
       Edit SQL Query Source
       LabKey SQL Reference
       Lookups: SQL Syntax
       LabKey SQL Utility Functions
       Query Metadata
         Query Metadata: Examples
       Edit Query Properties
       Trace Query Dependencies
       Query Web Part
       LabKey SQL Examples
         JOIN Queries
         Calculated Columns
         Premium Resource: Display Calculated Columns from Queries
         Pivot Queries
         Queries Across Folders
         Parameterized SQL Queries
         More LabKey SQL Examples
     Linked Schemas and Tables
     Controlling Data Scope
     Ontology Integration
       Load Ontologies
       Concept Annotations
       Ontology Column Filtering
       Ontology Lookup
       Ontology SQL
     Data Quality Control
     Quality Control Trend Reports
       Define QC Trend Report
       Use QC Trend Reports
       QC Trend Report Guide Sets
     Search
       Search Administration
     Integration with Spotfire
       Premium Resource: Embed Spotfire Visualizations
     Integration with AWS Glue
     LabKey Natural Language Processing (NLP)
       Natural Language Processing (NLP) Pipeline
       Metadata JSON Files
       Document Abstraction Workflow
       Automatic Assignment for Abstraction
       Manual Assignment for Abstraction
       Abstraction Task Lists
       Document Abstraction
       Review Document Abstraction
       Review Multiple Result Sets
       NLP Result Transfer
     Premium Resource: Bulk Editing
   Assay Data
     Tutorial: Import Experimental / Assay Data
       Step 1: Assay Tutorial Setup
       Step 2: Infer an Assay Design from Spreadsheet Data
       Step 3: Import Assay Data
       Step 4: Visualize Assay Results
       Step 5: Collect Experimental Metadata
     Tutorial: Assay Data Validation
     Assay Administrator Guide
       Set Up Folder For Assays
       Design a New Assay
         Assay Design Properties
       Design a Plate-Based Assay
         Customize Plate Templates
         Specialty Plate-Based Assays
       Participant/Visit Resolver Field
       Manage an Assay Design
       Export/Import Assay Design
       Assay QC States: Admin Guide
       Improve Data Entry Consistency & Accuracy
       Assay Transform Script
       Link Assay Data into a Study
         Link-To-Study History
       Assay Feature Matrix
     Assay User Guide
       Import Assay Runs
       Multi-File Assay Runs
       Work with Assay Runs
       Assay QC States: User Guide
       Exclude Assay Data
       Re-import Assay Runs
       Export Assay Data
     Assay Terminology
     ELISA Assay
       Tutorial: ELISA Assay
       ELISA Run Details View
       ELISA Assay Reference
       Enhanced ELISA Assay Support
     ELISpot Assay
       Tutorial: ELISpot Assay Tutorial
         Import ELISpot Data
         Review ELISpot Data
       ELISpot Properties and Fields
     Flow Cytometry
       Flow Cytometry Overview
       LabKey Flow Module
         Set Up a Flow Folder
         Tutorial: Explore a Flow Workspace
           Step 1: Customize Your Grid View
           Step 2: Examine Graphs
           Step 3: Examine Well Details
           Step 4: Export Flow Data
           Step 5: Flow Quality Control
         Tutorial: Set Flow Background
         Import a Flow Workspace and Analysis
         Edit Keywords
         Add Sample Descriptions
         Add Statistics to FCS Queries
         Flow Module Schema
         Analysis Archive Format
       Add Flow Data to a Study
       FCS keyword utility
     FluoroSpot Assay
     Luminex
       Luminex Assay Tutorial Level I
         Set Up Luminex Tutorial Folder
         Step 1: Create a New Luminex Assay Design
         Step 2: Import Luminex Run Data
         Step 3: Exclude Analytes for QC
         Step 4: Import Multi-File Runs
         Step 5: Link Luminex Data to Study
       Luminex Assay Tutorial Level II
         Step 1: Import Lists and Assay Archives
         Step 2: Configure R, Packages and Script
         Step 3: Import Luminex Runs
         Step 4: View 4pl Curve Fits
         Step 5: Track Analyte Quality Over Time
         Step 6: Use Guide Sets for QC
         Step 7: Compare Standard Curves Across Runs
       Track Single-Point Controls in Levey-Jennings Plots
       Luminex Calculations
       Luminex QC Reports and Flags
       Luminex Reference
         Review Luminex Assay Design
         Luminex Properties
         Luminex File Formats
         Review Well Roles
         Luminex Conversions
         Customize Luminex Assay for Script
         Review Fields for Script
       Troubleshoot Luminex Transform Scripts and Curve Fit Results
     Mass Spectrometry
     NAb (Neutralizing Antibody) Assays
       Tutorial: NAb Assay
         Step 1: Create a NAb Assay Design
         Step 2: Import NAb Assay Data
         Step 3: View High-Throughput NAb Data
         Step 4: Explore NAb Graph Options
       Work with Low-Throughput NAb Data
       Use NAb Data Identifiers
       NAb Assay QC
       Work with Multiple Viruses per Plate
       NAb Plate File Formats
       Customize NAb Plate Template
       NAb Assay Reference
     Assay Request Tracker
       Premium Resource: Using the Assay Request Tracker
       Premium Resource: Assay Request Tracker Administration
     Experiment Framework
       Experiment Terminology
       Experiment Runs
       Run Groups
       Experiment Lineage Graphs
       Provenance Module: Run Builder
       Life Science Identifiers (LSIDs)
         LSID Substitution Templates
     Record Lab Workflow
       Tutorial: Lab Workflow Folder
         Step 1: Create the User Interface
         Step 2: Import Lab Data
         Step 3: Create a Lookup from Assay Data to Samples
         Step 4: Using and Extending the Lab Workspace
     Reagent Module
   Samples
     Create Sample Type
     Sample Naming Patterns
     Aliquot Naming Patterns
     Add Samples
       Premium Resource: Split Large Sample Upload
     Manage Sample Types and Samples
     Link Assay Data to Samples
     Link Sample Data to Study
     Sample Parents: Derivation and Lineage
     Sample Types: Examples
     Barcode Fields
     Data Classes
       Create Data Class
   Studies
     Tutorial: Longitudinal Studies
       Step 1: Study Dashboards
       Step 2: Study Reports
       Step 3: Compare Participant Performance
     Tutorial: Set Up a New Study
       Step 1: Create Study Framework
       Step 2: Import Datasets
       Step 3: Identify Cohorts
       Step 4: Integrate and Visualize Data
     Install an Example Study
     Study User Guide
       Study Navigation
       Cohorts
       Participant Groups
       Dataset QC States: User Guide
     Study Administrator Guide
       Study Management
         Study Properties
         Manage Datasets
         Manage Visits or Timepoints
         Study Schedule
         Manage Locations
         Manage Cohorts
         Manage Participants
         Participant Aliases
         Manage Study Security
           Configure Permissions for Reports & Views
           Securing Portions of a Dataset (Row and Column Level Security)
         Dataset QC States: Admin Guide
         Manage Study Products
         Manage Treatments
         Manage Assay Schedule
         Study Demo Mode
       Create a Study
       Create and Populate Datasets
         Import Data to a Dataset
           Import From a Dataset Archive
         Dataset Properties
         Study: Reserved and Special Fields
         Dataset System Fields
         Tutorial: Inferring Datasets from Excel and TSV Files
       Visits and Dates
         Create Visits Manually
         Edit Visits or Timepoints
         Import Visit Map
         Import Visit Names / Aliases
         Continuous Studies
         Study Visits and Timepoints FAQ
       Export/Import/Reload a Study
         Export a Study
         Import a Study
         Reload a Study
           Study Object Files and Formats
       Publish a Study
         Publish a Study: Protected Health Information / PHI
       Ancillary Studies
       Refresh Data in Ancillary and Published Studies
       Cohort Blinding
       Shared Datasets and Timepoints
       How is Study Data Stored in LabKey Server?
       Create a Vaccine Study Design
         Vaccine Study: Data Storage
       Premium Resource: LabKey Data Finder
     Electronic Health Records (EHR)
       Premium Resource: EHR: Animal History
       Premium Resource: EHR: Animal Search
       Premium Resource: EHR: Data Entry
       Premium Resource: EHR: Data Entry Development
       Premium Resource: EHR: Lookups
       Premium Resource: EHR: Genetics Algorithms
       Premium Resource: EHR: Administration
       Premium Resource: EHR: Connect to Sample Manager
       Premium Resource: EHR: Billing Module
         Premium Resource: EHR: Define Billing Rates and Fees
         Premium Resource: EHR: Preview Billing Reports
         Premium Resource: EHR: Perform Billing Run
         Premium Resource: EHR: Historical Billing Data
       Premium Resource: EHR: Compliance and Training Folder
       Premium Resource: EHR: Trigger Scripts
     Structured Narrative Datasets
       SND: Packages
       SND: Categories
       SND: Super-packages
       SND: Projects
       SND: Events
       SND: QC and Security
       SND: APIs
       SND: Event Triggers
       SND: UI Development
       Extending SND Tables
       XML Import of Packages
     Enterprise Master Patient Index Integration
     Specimen Tracking (Legacy)
   Panorama: Targeted Mass Spectrometry
     Configure Panorama Folder
     Panorama Data Import
     Panorama Experimental Data Folder
       Panorama: Skyline Document Management
       Panorama: Skyline Replicates View
       Panorama: Protein/Molecule List Details
       Panorama: Skyline Lists
       Panorama: Skyline Annotation Data
       Panorama: Skyline Audit Log
       Panorama: Calibration Curves
       Panorama: Figures of Merit and Pharmacokinetics (PK)
       Panorama: Instruments Summary and QC Links
       Working with Small Molecule Targets
       Panorama: Heat Maps
     Panorama Multi-Attribute Method Folder
       Panorama MAM Reports
       Panorama: Crosslinked Peptides
     Panorama Chromatogram Library Folder
       Using Chromatogram Libraries
       Panorama: Reproducibility Report
     Panorama QC Folders
       Panorama QC Dashboard
       Panorama: Instrument Utilization Calendar
       Panorama QC Plots
       Panorama QC Plot Types
       Panorama QC Annotations
       Panorama: Pareto Plots
       Panorama: iRT Metrics
       Panorama: Configure QC Metrics
       Panorama: Outlier Notifications
       Panorama QC Guide Sets
     Panorama: Chromatograms
     Panorama and Sample Management
   Collaboration
     Files
       Tutorial: File Repository
         Step 1: Set Up a File Repository
         Step 2: File Repository Administration
         Step 3: Search the Repository
         Step 4: Import Data from the Repository
       Using the Files Repository
       View and Share Files
       Controlling File Display via the URL
       Import Data from Files
       Linking Assays with Images and Other Files
       Linking Data Records to Image Files
       File Metadata
       File Administrator Guide
         Files Web Part Administration
         File Root Options
           Troubleshoot Pipeline and Files
         File Terminology
         Transfer Files with WebDAV
       S3 Cloud Data Storage
         AWS Identity Credentials
         Configure Cloud Storage
         Use Files from Cloud Storage
         Cloud Storage for File Watchers
     Messages
       Use Message Boards
       Configure Message Boards
       Object-Level Discussions
     Wikis
       Create a Wiki
       Wiki Admin Guide
         Manage Wiki Pages
         Copy Wiki Pages
       Wiki User Guide
         Wiki Syntax
         Wiki Syntax: Macros
         Markdown Syntax
         Special Wiki Pages
         Embed Live Content in HTML Pages or Messages
           Examples: Web Parts Embedded in Wiki Pages
           Web Part Configuration Properties
         Add Screenshots to a Wiki
     Issue/Bug Tracking
       Tutorial: Issue Tracking
       Using the Issue Tracker
       Issue Tracker: Administration
     Electronic Data Capture (EDC)
       Tutorial: Survey Designer, Step 1
       Tutorial: Survey Customization, Step 2
       Survey Designer: Reference
       Survey Designer: Examples
       REDCap Survey Data Integration
       Medidata / CDISC ODM Integration
       CDISC ODM XML Integration
     Adjudication Module
     Contact Information
     How to Cite LabKey Server
   Development
     Set Up a Development Machine
       Gradle Build Overview
       Build LabKey from Source
       Build from Source (or Not)
       Customize the Build
       Node.js Build Dependency
       Git Ignore Configurations
       Build Offline
       Gradle Cleaning
       Gradle Properties
       Gradle: How to Add Modules
       Gradle: Declare Dependencies
       Gradle Tips and Tricks
       Premium Resource: Artifactory Set Up
       Premium Resource: NPMRC Authentication File
       Create Production Builds
       Set up OSX for LabKey Development
       Troubleshoot Development Machines
       Premium Resource: IntelliJ Reference
     Run in Development Mode
     LabKey Client APIs
       API Resources
       JavaScript API
         Tutorial: Create Applications with the JavaScript API
           Step 1: Create Request Form
           Step 2: Confirmation Page
           Step 3: R Histogram (Optional)
           Step 4: Summary Report For Managers
           Repackaging the App as a Module
         Tutorial: Use URLs to Pass Data and Filter Grids
           Choose Parameters
           Show Filtered Grid
         Tutorial: Visualizations in JavaScript
           Step 1: Export Chart as JavaScript
           Step 2: Embed the Script in a Wiki
           Modify the Exported Chart Script
           Display the Chart with Minimal UI
         JavaScript API Examples
           Premium Resource: JavaScript Security API Examples
         JavaScript Reports
         Export Data Grid as a Script
         Premium Resource: Custom Participant View
         Example: Master-Detail Pages
         Custom Button Bars
           Premium Resource: Invoke JavaScript from Custom Buttons
           Premium Resource: Custom Buttons for Large Grids
         Premium Resource: Type-Ahead Entry Forms
         Premium Resource: Sample Status Demo
         Insert into Audit Table via API
         Programming the File Repository
         Vocabulary Domains
         Declare Dependencies
         Using ExtJS with LabKey
         Naming & Documenting JavaScript APIs
           How to Generate JSDoc
           JsDoc Annotation Guidelines
           Naming Conventions for JavaScript APIs
       Java API
         LabKey JDBC Driver
           Integration with DBVisualizer
         Security Bulk Update via API
       Perl API
       Python API
         Premium Resource: Python API Demo
         Premium Resource: Download a File with Python
       Rlabkey Package
         Troubleshoot Rlabkey
         Premium Resource: Example Code for QC Reporting
       SAS Client API Library
         SAS Setup
         SAS Macros
         SAS Demos
       HTTP Interface
         Examples: Controller Actions / API Test Page
         Example: Access APIs from Perl
       External Tool Access
       API Keys
       External ODBC and JDBC Connections
         ODBC: Configure Windows Access
         ODBC: Configure OSX/Mac Access
         ODBC: External Tool Connections
           ODBC: Using SQL Server Reporting Service (SSRS)
       Compliant Access via Session Key
     Develop Modules
       Tutorial: Hello World Module
       Map of Module Files
       Module Loading Using the Server UI
       Module Editing Using the Server UI
       Example Modules
       Tutorial: File Based Module Resources
         Module Directories Setup
         Module Query Views
         Module SQL Queries
         Module R Reports
         Module HTML and Web Parts
       Modules: JavaScript Libraries
       Modules: Assay Types
         Assay Custom Domains
         Assay Custom Details View
         Loading Custom Views
         Example Assay JavaScript Objects
         Assay Query Metadata
         Customize Batch Save Behavior
         SQL Scripts for Module-Based Assays
         Transform Scripts
           Example Workflow: Develop a Transformation Script
           Example Transformation Scripts (perl)
           Transformation Scripts in R
           Transformation Scripts in Java
           Transformation Scripts for Module-based Assays
           Premium Resource: Python Transformation Script
           Premium Resource: Create Samples with Transformation Script
           Run Properties Reference
           Transformation Script Substitution Syntax
           Warnings in Transformation Scripts
       Modules: Folder Types
       Modules: Query Metadata
       Modules: Report Metadata
       Modules: Custom Header
       Modules: Custom Banner
       Modules: Custom Footer
       Modules: SQL Scripts
         Modules: SQL Script Conventions
       Modules: Domain Templates
       Java Modules
         Module Architecture
         Tutorial: Hello World Java Module
         LabKey Containers
         Implementing Actions and Views
         Implementing API Actions
         Integrating with the Pipeline Module
         Integrating with the Experiment API
         Using SQL in Java Modules
         Database Development Guide
         HotSwapping Java classes
       Modules: Custom Login Page
       Modules: Custom Site Welcome Page
       ETL: Extract Transform Load
         Tutorial: Extract-Transform-Load (ETL)
           ETL Tutorial: Set Up
           ETL Tutorial: Run an ETL Process
           ETL Tutorial: Create a New ETL Process
         ETL: User Interface
         ETL: Create a New ETL
         ETL: Planning
         ETL: Attributes
         ETL: Target Options
         ETL: Column Mapping
         ETL: Transform Types and Tasks
           ETL: Manage Remote Connections
         ETL: Filter Strategies
         ETL: Schedules
         ETL: Transactions
         ETL: Queuing ETL Processes
         ETL: Stored Procedures
           ETL: Stored Procedures in PostgreSQL
           ETL: Check For Work From a Stored Procedure
         ETL: Logs and Error Handling
         ETL: Examples
         ETL: Module Structure
         Premium Resource: ETL Best Practices
       Deploy Modules to a Production Server
       Main Credits Page
       Module Properties
       module.properties Reference
     Common Development Tasks
       Premium Resource: Content Security Policy Development Best Practices
       Trigger Scripts
       Script Pipeline: Running Scripts in Sequence
       LabKey URLs
         URL Actions
       How To Find schemaName, queryName & viewName
       LabKey/Rserve Setup Guide
       Web Application Security
         Cross-Site Request Forgery (CSRF) Protection
         Premium Resource: Fetching CSRF Token
       Premium Resource: Changes in JSONObject Behavior
       Profiler Settings
       Using loginApi.api
       Use IntelliJ for XML File Editing
       Premium Resource: Manual Index Creation
       Premium Resource: LabKey Coding Standards and Practices
       Premium Resource: Best Practices for Writing Automated Tests
       Premium Resource: Server Encoding
       Premium Resource: ReactJS Development Resources
       Premium Resource: Feature Branch Workflow
       Premium Resource: Develop with Git
       Premium Resource: Git Branch Naming
       Premium Resource: Issue Pull Request
     LabKey Open Source Project
       Release Schedule
       Previous Releases
         Previous Release Details
       Branch Policy
       Testing and Code Quality Workflow
       Run Automated Tests
       Tracking LabKey Issues
       Security Issue Evaluation Policy
       Submit Contributions
         CSS Design Guidelines
         Documentation Style Guide
         Check In to the Source Project
     Developer Reference
   Administration
     Tutorial: Security
       Step 1: Configure Permissions
       Step 2: Test Security with Impersonation
       Step 3: Audit User Activity
       Step 4: Handle Protected Health Information (PHI)
     Projects and Folders
       Project and Folder Basics
       Site Structure: Best Practices
       Folder Types
       Project and Folder Settings
         Create a Project or Folder
         Manage Projects and Folders
         Enable a Module in a Folder
         Export / Import a Folder
         Export and Import Permission Settings
         Manage Email Notifications
       Establish Terms of Use
       Workbooks
       Shared Project
     Build User Interface
       Premium Resource: Custom Home Page Examples
       Page Admin Mode
       Add Web Parts
       Manage Web Parts
       Web Part Inventory
       Use Tabs
       Add Custom Menus
       Web Parts: Permissions Required to View
     Security
       Best Practices for System Security
       Configure Permissions
       Security Groups
         Global Groups
         Site Groups
         Project Groups
         Guests / Anonymous Users
       Security Roles Reference
         Role/Permissions Matrix
         Administrator Permissions Matrix
         Matrix of Report, Chart, and Grid Permissions
         Privileged Roles
         Developer Roles
         Storage Roles
         Premium Resource: Add a Custom Security Role
       User Accounts
         My Account
         Add Users
         Manage Users
         Manage Project Users
         Premium Resource: Limit Active Users
       Authentication
         Configure Database Authentication
           Passwords
           Password Reset
         Configure LDAP Authentication
         Configure SAML Authentication
         Configure CAS Authentication
         Configure CAS Identity Provider
         Configure Duo Two-Factor Authentication
         Configure TOTP Two-Factor Authentication
         Create a netrc file
       Virus Checking
       Test Security Settings by Impersonation
       Premium Resource: Best Practices for Security Scanning
     Compliance
       Compliance: Overview
       Compliance: Checklist
       Compliance: Settings
       Compliance: Setting PHI Levels on Fields
       Compliance: Terms of Use
       Compliance: Security Roles
       Compliance: Configure PHI Data Handling
       Compliance: Logging
       Compliance: PHI Report
       Electronic Signatures / Sign Data
       GDPR Compliance
       Project Locking and Review Workflow
     Admin Console
       Site Settings
         Usage/Exception Reporting
       Look and Feel Settings
         Page Elements
         Web Site Theme
       Email Template Customization
       Optional, Deprecated, or Experimental Features
       Manage Missing Value Indicators / Out of Range Values
       Short URLs
       System Maintenance
       Configure Scripting Engines
       Configure Docker Host
       External Hosts
       LDAP User/Group Synchronization
       Proxy Servlets
         Premium Resource: Plotly Dash Demo
       Audit Log / Audit Site Activity
         SQL Query Logging
         Site HTTP Access Logs
         Audit Log Maintenance
       Export Diagnostic Information
       Actions Diagnostics
       Cache Statistics
       Loggers
       Memory Usage
       Query Performance
       Site/Container Validation
     Data Processing Pipeline
       Set a Pipeline Override
       Pipeline Protocols
       File Watchers
         Create a File Watcher
         File Watcher Tasks
         File Watchers for Script Pipelines
         File Watcher: File Name Patterns
         File Watcher Examples
       Premium Resource: Run Pipelines in Parallel
       Enterprise Pipeline with ActiveMQ
         ActiveMQ JMS Queue
         Configure the Pipeline with ActiveMQ
         Configure Remote Pipeline Server
         Troubleshoot the Enterprise Pipeline
     Install LabKey
       Supported Technologies
       Install on Linux
       Set Application Properties
       Premium Resource: Install on Windows
       Common Install Tasks
         Service File Customizations
         Use HTTPS with LabKey
         SMTP Configuration
         Install and Set Up R
           Configure an R Docker Engine
         Control Startup Behavior
         Server Startup Properties
         ExtraWebapp Resources
         Sending Email from Non-LabKey Domains
         Deploying an AWS Web Application Firewall
       Install Third Party Components
       Troubleshoot Installation and Configuration
         Troubleshoot: Error Messages
         Collect Debugging Information
       Example Hardware/Software Configurations
       Premium Resource: Reference Architecture / System Requirements
       LabKey Modules
     Upgrade LabKey
       Upgrade on Linux
       Upgrade on Windows
       Premium Resource: Upgrade JDK on AWS Ubuntu Servers
       LabKey Releases and Upgrade Support Policy
     External Schemas and Data Sources
       External PostgreSQL Data Sources
       External Microsoft SQL Server Data Sources
       External MySQL Data Sources
       External Oracle Data Sources
       External SAS/SHARE Data Sources
       External Redshift Data Sources
       External Snowflake Data Sources
     Premium Feature: Use Microsoft SQL Server
       GROUP_CONCAT Install
       PremiumStats Install
       ETL: Stored Procedures in MS SQL Server
     Backup and Maintenance
       Backup Guidelines
       An Example Backup Plan
       Example Scripts for Backup Scenarios
       Restore from Backup
       Premium Resource: Change the Encryption Key
     Use a Staging Server
   Troubleshoot LabKey Server
Sample Manager
   Use Sample Manager with LabKey Server
   Use Sample Manager with Studies
   LabKey ELN
     ELN: Frequently Asked Questions
LabKey LIMS
   LIMS: Downloadable Templates
   LIMS: Samples
     Print Labels with BarTender
   LIMS: Assay Data
   LIMS: Charts
   LIMS: Storage Management
   LIMS: Workflow
Biologics LIMS
   Introduction to LabKey Biologics
   Release Notes: Biologics
   Biologics: Navigate
   Biologics: Projects and Folders
   Biologics: Bioregistry
     Create Registry Sources
       Register Nucleotide Sequences
       Register Protein Sequences
       Register Leaders, Linkers, and Tags
       Vectors, Constructs, Cell Lines, and Expression Systems
     Registry Reclassification
     Biologics: Terminology
     Protein Sequence Annotations
     CoreAb Sequence Classification
     Biologics: Chain and Structure Formats
     Molecules, Sets, and Molecular Species
       Register Molecules
       Molecular Physical Property Calculator
     Compounds and SMILES Lookups
     Entity Lineage
     Customize the Bioregistry
     Bulk Registration of Entities
     Use the Registry API
   Biologics: Plates
   Biologics: Assay Data
     Biologics: Specialty Assays
     Biologics: Assay Integration
     Biologics: Upload Assay Data
     Biologics: Assay Batches and QC
   Biologics: Media Registration
     Managing Ingredients and Raw Materials
     Registering Mixtures (Recipes)
     Registering Batches
   Biologics Administration
     Biologics: Detail Pages and Entry Forms
     Biologics: Protect Sequence Fields
     Manage Notebook Tags
     Biologics Admin: URL Properties
Premium Resources
   Product Selection Menu
   LabKey Support Portals
   Premium Resource: Training Materials
   Premium Edition Required
Community Resources
   LabKey Terminology/Glossary
   FAQ: Frequently Asked Questions
   System Integration: Instruments and Software
   Demos
   Videos
   Project Highlight: FDA MyStudies Mobile App
   LabKey Webinar Resources
     Tech Talk: Custom File-Based Modules
   Collaborative DataSpace: User Guide
     DataSpace: Learn About
     DataSpace: Find Subjects
     DataSpace: Plot Data
     DataSpace: View Data Grid
     DataSpace: Monoclonal Antibodies
   Documentation Archive
     Release Notes 24.7 (July 2024)
     Release Notes 24.3 (March 2024)
       What's New in 24.3
     Release Notes: 23.11 (November 2023)
       What's New in 23.11
     Release Notes: 23.7 (July 2023)
       What's New in 23.7
     Release Notes: 23.3 (March 2023)
       What's New in 23.3
     Release Notes: 22.11 (November 2022)
       What's New in 22.11
     Release Notes 22.7 (July 2022)
       What's New in 22.7
     Release Notes 22.3 (March 2022)
       What's New in 22.3
     Release Notes 21.11 (November 2021)
       What's New in 21.11
     Release Notes 21.7 (July 2021)
       What's New in 21.7
     Release Notes 21.3 (March 2021)
       What's New in 21.3
     Release Notes 20.11 (November 2020)
       What's New in 20.11
     Release Notes 20.7
       What's New in 20.7
     Release Notes 20.3
       What's New in 20.3
     Release Notes 19.3
       What's New in 19.3
     Release Notes: 19.2
       What's New in 19.2
     Release Notes 19.1
       What's New in 19.1.x
     Release Notes 18.3
       What's New in 18.3
     Release Notes 18.2
       What's New in 18.2
     Release Notes 18.1
       What's New in 18.1
     Release Notes 17.3
     Release Notes 17.2
   Deprecated Docs - Admin-visible only





Release Notes 24.11 (November 2024)


We're delighted to announce the release of LabKey Server version 24.11 (November 2024).

LabKey Server SDMS

Premium Edition Feature Updates

  • Include "Calculation" fields in lists, datasets, sample, source, and assay definitions. (docs)
  • Support for using a Snowflake database as an external data source is now available. (docs)
  • The "External Tool Access" page has been improved to make it easier to use ODBC and JDBC connections to LabKey. (docs)
  • Administrators can generate a report of file root sizes across the site. (docs)
  • Reporting usage statistics to LabKey can be enabled without also enabling upgrade banners. (docs)
  • SAML authentication uses RelayState instead of a configuration parameter, and the configuration interface has been streamlined. Also available in 24.7.7 (docs)
  • Administrators can save incomplete authentication configurations if they are disabled. Also available in 24.7.4 (docs)
  • Sample Manager Feature Updates
  • LabKey LIMS Feature Updates
  • Biologics LIMS Feature Updates
Learn more about Premium Editions of LabKey Server here.

Community Edition Updates

  • Beginning with maintenance release 24.11.4, when the server is configured to use only HTTPS, a Strict-Transport-Security header will be set to prevent future attempts to access the server over HTTP.
  • Linking Sample data to a visit-based study can now be done using the visit label instead of requiring a sequencenum value. (docs)
  • Date, time, and datetime fields are now limited to a set of common display patterns, making it easier for users to choose the desired format. (docs)
  • A new validator will find non-standard date and time display formats. (docs)
  • Use shift-click in the grid view customizer to add all fields from a given node. (docs)
  • When a pipeline job is cancelled, any queries it has initiated will also be cancelled. (docs)
  • Users can include a description for future reference when generating an API key; a usage sketch follows this list. (docs)
  • Folder file root sizes are now calculated and shown to administrators. (docs)
  • When a content security policy is configured, administrators can specify a set of allowed external data sources. (docs)
  • Sending of the Server HTTP header may be disabled using a site setting. (docs)
  • Some "Experimental Features" have been relocated to a "Deprecated Features" page to better describe their status. (docs)
  • When a user changes their database password, all current sessions associated with that login are invalidated. (docs)
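
As a usage illustration, an API key generated this way can authenticate client calls. A minimal Rlabkey sketch follows; the server URL and key value are placeholders:

    library(Rlabkey)

    # Register the key once per session; subsequent Rlabkey calls
    # authenticate with this key instead of a password.
    labkey.setDefaults(
        baseUrl = "https://labkey.example.org",   # placeholder server URL
        apiKey  = "apikey|0123456789abcdef"       # placeholder key from the UI
    )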

Distribution Changes and Upgrade Notes

  • LabKey Server embeds a copy of Tomcat 10. It no longer uses or depends on a separately installed copy of Tomcat.
    • LabKey Cloud subscribers have been upgraded automatically.
    • For users with on-premise installations, the process for upgrading from previous versions using a standalone Tomcat 9 has changed significantly. Administrators should be prepared to make additional changes during the first upgrade to use embedded Tomcat. (docs)
    • Users with on-premise installations who have already upgraded to use embedded Tomcat will follow a much simpler process to upgrade to 24.11. Note that the distribution name itself has changed to drop the "-embedded" string, as all distributions are now embedded. (linux | windows)
    • The process for installing a new LabKey Server has also changed significantly, making it simpler than in past releases. (docs)
  • All specialty assays (ELISA, ELISpot, Microarray, MS2, NAb, Luminex, etc.) are now distributed only to clients actively using them. Administrators will see a warning about unknown modules when they upgrade and can safely remove these unused modules. Please contact your Account Manager if you have questions or concerns.
  • HTTP access logging is now enabled by default and the recommended pattern has been updated. (docs)
    • Users of proxies or load balancers may wish to add this to their accesslog.pattern to capture the IP address of the originating client (see the sketch after this list):
      %{X-Forwarded-For}i
  • MySQL 9.0.0 is now supported. (docs)
  • LabKey Server now supports and recommends the recently released PostgreSQL 17.x. (docs)
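
For illustration, here is a minimal sketch of what the access log settings could look like in an embedded-Tomcat application.properties file. The property names follow the standard Spring Boot conventions for embedded Tomcat; the pattern shown is illustrative rather than the official recommendation, so use the pattern from the linked documentation:

    # Enable HTTP access logging (enabled by default as of 24.11)
    server.tomcat.accesslog.enabled=true
    # Common log format plus the originating client IP when behind a proxy
    server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %{X-Forwarded-For}i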

Deprecated Features

  • Support for Microsoft SQL Server 2014 has been removed. (docs | docs)
  • Some older wiki macros were removed; the list of supported macros is in the documentation.
  • Support for "Advanced Reports" has been deprecated.
  • Support for the "Remote Login API" has been deprecated.
  • Support for bitmask (integer) permissions has been deprecated. Developers can learn more below.
  • The "Vaccine Study Protocol" interface has been deprecated.
  • Future deprecations: These features will be deprecated in the next release. Contact your Account Manager if you have questions or concerns.
    • Additional date, time, and datetime parsing patterns. Selecting a date parsing mode (Month-Day-Year or Day-Month-Year) will still be supported. (docs)
    • The "Advanced import options" for choosing specific objects during folder import, and for applying a folder import to multiple folders, will be removed. (docs)
    • "Assay Progress Reports" in studies. (docs)

Client APIs and Development Notes

  • Version 3.4.0 of Rlabkey is available. (docs)
    • Note: Rlabkey version 3.4.1 has also been released, supporting the consistency improvements in WebDAV URLs made in December 2024. This means version 3.4.1 requires LabKey version 24.12 or higher; earlier versions of LabKey, including 24.11, must use Rlabkey version 3.4.0 or lower. (docs)
  • Client API calls that provide credentials and fail authentication will be rejected outright and immediately return a detailed error message. Previously, a call that failed authentication would proceed as an unauthenticated user, which would fail (unless guests have access to the target resource) with a less informative message. This change is particularly helpful in cases where the credentials are correct but do not meet password complexity requirements or are expired.
  • Version 3.0.0 of our gradlePlugin was released. Its earliest compatible LabKey release is 24.8, and it includes a few changes of note for developers. A few examples are here; more are listed in the gradlePlugin release notes:
    • The AntBuild plugin has been removed, so we no longer have a built-in way to detect modules that are built with Ant instead of Gradle.
    • We removed support for picking up .jsp files from resources directories. Move any .jsp files in a resources directory under the src directory instead.
  • Support for bitmask (integer) permissions has been removed. Class-based permissions replaced bitmask permissions many years ago.
    • Developers who manage .view.xml files should replace <permissions> elements with <permissionClasses> elements; see the sketch following this list. (docs)
    • Code that inspects bitmask permissions returned by APIs should switch to inspecting permission class alternatives.
    • Developers of React-based modules will need to make changes in the entryPoints.js file instead of directly in the .view.xml file.
  • Documentation has been added to assist you in determining which packages and versions are installed for R and Python; see the example following this list. (R info | python info)
  • API Resources
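
To illustrate the .view.xml change mentioned above, here is a minimal before/after sketch. The class name shown is the standard read permission; substitute the permission classes your view actually requires:

    <!-- Before: bitmask-era declaration -->
    <permissions>
        <permission name="read"/>
    </permissions>

    <!-- After: class-based declaration -->
    <permissionClasses>
        <permissionClass name="org.labkey.api.security.permissions.ReadPermission"/>
    </permissionClasses>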

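And as a quick client-side illustration of checking installed R packages and versions (plain base R; the linked topics cover inspecting the server's own installations):

    # Version of a single package, e.g. the Rlabkey client
    packageVersion("Rlabkey")

    # Name and version of every package in the current library
    installed.packages()[, c("Package", "Version")]
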
Sample Manager

The Sample Manager Release Notes list features by monthly version and product edition.


LabKey LIMS

Announcing LabKey LIMS! The LabKey LIMS Release Notes list features in addition to those in the Professional Edition of LabKey Sample Manager.


Biologics LIMS

The Biologics LIMS Release Notes list features in addition to those in LabKey LIMS.


Previous Release Notes: Version 24.7




Upcoming Features

Releases we are currently working on:

Recent Documentation Updates




Release Notes 25.3 (March 2025)


This topic is under construction for the next extended support release. The current release notes are here.

Here's what we're working on for LabKey Server version 25.3 (March 2025).


LabKey Server SDMS

Premium Edition Feature Updates

Learn more about Premium Editions of LabKey Server here.

Community Edition Updates

  • Assay transform scripts can be configured to run when data is imported, updated, or both. (docs)
  • A trendline option has been added to the chart builder for Line charts. (docs)
  • When the server is configured to use only HTTPS, a Strict-Transport-Security header will be set to prevent future attempts to access the server over HTTP; see the sketch after this list for one way to verify it. (docs) Also available in 24.11.4.
  • Study participants can be deleted from all datasets at once. (docs)
  • Set a timeout for read-only HTTP requests, after which long running processes will be killed. (docs)
  • Administrators can register a list of acceptable file extensions for uploads; files with other extensions will be denied. If no list is provided, any extension is allowed.
  • Encoding of WebDAV URLs has been made more consistent; users who have been relying on the previous behavior may need to make changes. (docs | details)
  • Names of data structures, including Sample Types, Source Types, and Assay Designs, may not contain certain special characters or substrings used internally. (docs)
  • The wiki and announcements renderer for Markdown has been replaced. The new renderer is highly compatible with the previous renderer and adds several new features.
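
As one way to verify the Strict-Transport-Security behavior noted above, here is a small R sketch using the httr package; the server URL is a placeholder:

    library(httr)

    # HEAD request against an HTTPS-only server; inspect the response headers
    resp <- HEAD("https://labkey.example.org")   # placeholder URL
    headers(resp)[["strict-transport-security"]]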

Distribution Changes and Upgrade Notes

  • LabKey Server embeds a copy of Tomcat 10. It no longer uses or depends on a separately installed copy of Tomcat. (new installation docs)
    • LabKey Cloud subscribers are upgraded automatically.
    • Users with on-premise installations who have already upgraded to use embedded Tomcat should follow these upgrade instructions. (upgrade on linux | upgrade on windows)
    • Users with on-premise installations that have not already made the additional changes required to use embedded Tomcat will need to follow the migration process in the documentation archives. (migration docs)

Deprecated Features

  • Support for PostgreSQL 12.x has been removed. (supported versions)
  • Support for "Advanced Reports" has been removed.
  • Support for the "Remote Login API" has been removed.
  • Support for bitmask (integer) permissions has been removed. Developers can learn more in the archives.
  • The "Vaccine Study Protocol" interface has been removed.
  • Support for FreezerPro integration has been removed.
  • Support for additional date, time, and datetime parsing patterns has been deprecated.
  • The "Advanced import options" of choosing objects during import of folders, as well as having folder imports applied to multiple folders have been deprecated.
  • "Assay Progress Reports" in studies have been deprecated.
  • Support for SQL Server 2016 has been deprecated.

Client APIs and Development Notes

  • Rlabkey version 3.4.1 has been released, supporting the consistency improvements in WebDAV URLs. Note that version 3.4.1 requires LabKey version 24.12 or higher; earlier versions of LabKey must use Rlabkey version 3.4.0 or lower. (docs)
  • API Resources

Sample Manager

The Sample Manager Release Notes list features by monthly version and product edition.


LabKey LIMS

The LabKey LIMS Release Notes list features in addition to those in the Professional Edition of LabKey Sample Manager.


Biologics LIMS

The Biologics LIMS Release Notes list features in addition to those in LabKey LIMS.


Previous Release Notes: Version 24.11




Getting Started


The LabKey platform provides an integrated data environment for biomedical research, and this section will help you get started. The best way to begin is to let us know your research goals and team needs: request a customized demo to see how LabKey can help.

Topics

More LabKey Solutions




Try it Now: Data Grids


The Data Grid Tutorial is a quick hands-on demonstration you can try without any setup or registration. It shows you just a few of the ways that LabKey Server can help you:
  • Securely share your data with colleagues through interactive grid views
  • Collaboratively build and explore interactive visualizations
  • Drill down into de-identified data for study participants
  • Combine related datasets using data integration tools

Begin the Data Grid Tutorial




Trial Servers


To get started using LabKey products, contact us and tell us more about your research and goals. You can request a customized demo so we can understand how best to meet your needs. In some cases we will encourage you to evaluate and explore with your own data using a custom trial instance. Options include:

LabKey Server Trial

LabKey Server Trial instances contain a core subset of features, and sample content to help get you started. Upload your own data, try tutorials, and even create a custom site tailored to your research and share it with colleagues. Your trial lasts 30 days and we're ready to help you with next steps toward incorporating LabKey Server into your research projects.

Start here: Explore LabKey Server with a trial in LabKey Cloud

Sample Manager Trial

Try the core features of LabKey Sample Manager using our example data and adding your own. Your trial lasts 30 days and we're ready to help you with next steps.

Start here: Get Started with Sample Manager

Biologics LIMS Trial

Try the core features of LabKey Biologics LIMS using our example data and tutorial walkthroughs. Your trial lasts 30 days and we're ready to help you with next steps toward incorporating LabKey Biologics into your work.

Start here: Explore LabKey Biologics with a Trial




Explore LabKey Server with a trial in LabKey Cloud


To get started using LabKey Server and understanding the core functionality, contact us about your research needs and goals. Upon request we will set up a LabKey Cloud-based trial of LabKey Server for you.

You'll receive an email with details about getting started and logging in.

Trial server instances contain a subset of features, and some basic content to get you started. Upload your own data, try tutorials, and even create a custom site tailored to your research and share it with colleagues. Your trial lasts 30 days and we're ready to help you with next steps toward incorporating LabKey Server into your research projects.

Tours & Tutorials

Step by step introductions to key functionality of LabKey Server.




Introduction to LabKey Server: Trial


Welcome to LabKey Server!

This topic helps you get started understanding how LabKey Server works and how it can work for you. It is intended to be used alongside a LabKey Trial Server. You should have another browser window open on the home page of your Trial.

Navigation and User Interface

Projects and Folders

The project and folder hierarchy is like a directory tree and forms the basic organizing structure inside LabKey Server. Everything you create or configure in LabKey Server is located in some folder. Projects are the top-level folders; they behave like other folders but offer some additional configuration options, and each typically represents a separate team or research effort.

The Home project is a special project. On your Trial server, it contains the main welcome banner. To return to the home project at any time, click the LabKey logo in the upper left corner.

The project menu is on the left end of the menu bar and includes the display name of the current project.

Hover over the project menu to see the available projects, and folders within them. Click any project or folder name to navigate there.

Any project or folder with subfolders will show (+) and (-) buttons for expanding and collapsing the list shown. If you are in a subfolder, there will be a clickable 'breadcrumb' trail at the top of the menu for quickly moving up the hierarchy. The menu will scroll when there are enough items, with the current location visible and expanded by default.

The project menu always displays the name of the current project, even when you are in a folder or subfolder. A link with the folder name is shown near the top of such pages, offering an easy one-click return to the main page of the folder.

For more about projects, folders, and navigation, see Project and Folder Basics.

Tabs

Using tabs within a folder can give you new "pages" of user interface to help organize content. For an example of tabs in action, see the Research Study within the Example Project.

When your browser window is too narrow to display tabs arrayed across the screen, they will be collapsed into a pulldown menu showing the current tab name and a chevron. Click the name of a tab on this menu to navigate to it.

For more about adding and customizing tabs, see Use Tabs.

Web Parts

Web parts are user interface panels that can be shown on any folder page or tab. Each web part provides some type of interaction for users with underlying data or other content.

There is a main "wide" column on the left and a narrower column on the right. Each column supports a different set of web parts. By combining and reordering these web parts, an administrator can tailor the layout to the needs of the users.

For a hands-on example to try right now, explore the Collaboration Workspace project on your Trial Server.

To learn more, see Add Web Parts and Manage Web Parts. For a list of the types of web parts available in a full installation of LabKey Server, see the Web Part Inventory.

Header Menus

In the upper right, icon menus offer:

  • (Search): Click to open a site-wide search box.
  • (Admin): Shown only to admins; administrative options available to users granted such access. See below.
  • (Username): Login and security options, plus help links to documentation.

Admin Menu

The "first user" of this trial site will always be an administrator and have access to the menu. If that user adds others, they may or may not have the same menu of options available, depending on permissions granted to them.

  • Site >: Settings that pertain to the entire site.
    • Admin Console: In this Trial edition of LabKey Server, some fields are not configurable and may be shown as read-only. See Admin Console for details about options available in the full installation of LabKey.
    • Site Users, Groups, Permissions: Site-level security settings.
    • Create Project: Creates a new project (top-level folder) on the server.
  • Folder >: Settings for the current folder.
    • Permissions: Security configuration for the current folder.
    • Management: General configuration for the current folder.
    • Project Users and Settings: General configuration for the current project.
  • Page Admin Mode: Used to change page layout and add or remove UI elements.
  • Manage Views, Lists, Assays: Configuration for common data containers.
  • Manage Hosted Server Account: Return to the site from which you launched this trial server.
  • Go To Module >: Home pages for the currently enabled modules.

Security Model

LabKey Server has a group and role-based security model. Whether an individual is authorized to see a resource or perform an action is checked dynamically based on the groups they belong to and roles (permissions) granted to them. Learn more here: Security. Try a walkthrough using your Trial Server here: Exploring LabKey Security

Tools for Working Together

Collaborating with teams within a single lab or around the world is made easier when you share resources and information in an online workspace.

  • Message Boards: Post announcements and carry on threaded discussions. LabKey uses message boards for the Support Forums. Learn more here.

  • Issue Trackers: Track issues, bugs, or other workflow tasks (like assay requests) by customizing an issue tracker. LabKey uses an issue tracker to manage development issues. Learn more here.

  • Wikis: Documents written in HTML, Markdown, Wiki syntax, or plain text; they can include images, links, and live content from data tables. You're reading a Wiki page right now. Learn more here.

  • File Repositories: Upload and selectively share files and spreadsheets of data; connect with custom import methods. You can see an example here. Learn more here.

To learn more and try these tools for yourself, navigate to the Example Project > Collaboration Workspace folder of your Trial Server in one browser window, and open the topic Exploring LabKey Collaboration in another.

Tools for Data Analysis

Biomedical research data comes in many forms, shapes, and sizes. LabKey integrates directly with many types of instruments and software systems, and with customization it can support any type of tabular data.


  • Uploading Data: From dragging and dropping single spreadsheets to connecting a data pipeline to an outside location for incoming data, your options are as varied as your data. Learn about the options here: Import Data.


  • Interpreting Instrument Data: Using assay designs and pipeline protocols, you can direct LabKey Server to correctly interpret complex instrument data during import. Learn more here: Assay Data.


  • Visualizations: Create easy charts and plots backed by live data. Learn more here: Visualizations.

  • Reporting: Generate reports and query snapshots of your data. Use R, JavaScript, and LabKey plotting APIs to present your data in many ways; see the sketch below. Learn more here: Reports and Charts.
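
For instance, R users can pull live data into a local session with the Rlabkey package and chart it there. A minimal sketch; the server URL, folder, schema, and query names below are placeholders:

    library(Rlabkey)

    # Fetch a grid from the server into a data frame
    df <- labkey.selectRows(
        baseUrl    = "https://labkey.example.org",
        folderPath = "/Example Project/Laboratory Data",
        schemaName = "lists",
        queryName  = "Instruments"
    )

    # Quick base-R scatter plot, assuming the first two columns are numeric
    plot(df[[1]], df[[2]])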

To tour some example content and try these tools, navigate to the Example Project > Laboratory Data folder of your Trial Server in one browser window, and open the topic Exploring Laboratory Data in another.

Tools for Research Studies

Study folders organize research data about participants over time. There are many different ways to configure and use LabKey studies. Learn more here: Studies.

  • Study Schedules and Navigation: Dashboards showing at a glance what work is completed and what is scheduled help coordinators manage research data collection. Learn more here: Study Navigation.

  • Participant and Date Alignment: By aligning all of your data based on the participant and date information, you can integrate and compare otherwise disparate test results. Explore the breadth of data for a single study subject, or view trends across cohorts of similar subjects. Learn more here: Study User Guide.


To learn more and try these tools, navigate to the Example Project > Research Study folder of your Trial Server in one browser window, and open the topic Exploring LabKey Studies in another.

What's Next?

Explore the example content on your Trial Server using one of these walkthroughs.

Find out more about what a full installation of LabKey Server can do by reading documentation here:



Exploring LabKey Collaboration


LabKey tools for collaboration include message boards, task trackers, file sharing, and wikis. The default folder you create in LabKey is a "Collaboration" folder which gives you many basic tools for working and sharing data with colleagues.

This topic is intended to be used alongside a LabKey Trial Server. You should have another browser window open to view the Example Project > Collaboration Workspace folder on your trial server.

Tour

The "Collaboration Workspace" folder in the "Example Project" shows three web parts on its main dashboard. Web parts are user interface panels and can be customized in many ways.

  • 1. Learn About LabKey Collaboration Folders: a panel of descriptive information (not part of a default Collaboration folder).
  • 2. Messages: show conversations or announcements in a messages web part
  • 3. Task Tracker: LabKey's issue tracker tools can be tailored to a variety of uses.

To help show some of the collaborative options, this project also includes a few sample users with different roles. You can see a message from the "team lead" and there is also a task called "Get Started" listed as a "Todo".

Try It Now

Messages

A message board is a basic tool for communication; LabKey Message Boards can be customized to many use cases: from announcements to developer support and discussion.

  • Notice the first message, "Hello World".
  • Click View Message or Respond below it.
  • Click Respond.
  • Enter any text you like in the Body field. If you also change the Title, your response will have a new title but the main message thread will retain the existing title.
  • Notice that you can select other options for how your body text is rendered. Options: Plain Text, HTML, Markdown, or Wiki syntax.
  • Notice you could attach a file if you like.
  • Click Submit.
  • You are now viewing the message thread. Notice links to edit or delete. Since you are an administrator on this server, you can edit messages others wrote, which would not be true for most users.
  • You may have also received an email when you posted your response. By default, you are subscribed to any messages you create or comment on. Click unsubscribe to see how you would reset your preferences if you don't want to receive these emails.
  • Click the Collaboration Workspace link near the top of the page to return to the main folder dashboard. These links are shown any time you are viewing a page within a folder.
  • Notice that on the main folder dashboard, the message board does not show the text of your reply, but just the note "(1 response)".
  • Click New to create a new message.
  • You will see the same input fields as when replying; enter some text and click Submit.
  • When you return to the main folder dashboard, you will see your new message.
  • An administrator can control many display aspects of message boards, including the level of detail, order of messages, and even what "Messages" are called.
  • The (triangle) menu in the upper right corner includes several options for customizing.
    • New: Create a new message.
    • View List: See the message board in list format.
    • Admin: Change things like display order, what messages are called, security, and what options users have when adding messages.
    • Email: Control when and how email notifications are sent about messages on this board.
    • Customize: Select whether this web part shows the "full" information about the message, as shown in our example, or just a simple preview.

More details about using message boards can be found in this topic: Use Message Boards.

Task Tracker

LabKey provides flexible tools for tracking tasks with multiple steps performed by various team members. Such tools are generically referred to as "issue trackers"; the example project includes a simple "Task Tracker".

The basic life cycle of any task or issue moves through three states:

  • Open: someone decides something needs to be done; while the task is open, it can be prioritized, reassigned, and updated with additional information
  • Resolved: someone does the thing
  • Closed: the original requestor confirms the solution is correct

Walk through this life cycle using the example task:

  • Navigate to the Collaboration Workspace.
  • Scroll down to the Task Tracker and click the title of the task, Get Started, to open the detail view.
  • The status, assignment, priority, and other information are shown here. You can see that the team leader opened this task and added the first step: assign this task to yourself.
  • Click Update. Use this when you are not "resolving" the issue, merely changing its assignment or adding extra information.
  • Select your username from the Assigned To pulldown.
  • You could also change other information and provide an optional comment about your update.
  • Click Save.
    • Note: You may receive an email when you do this. Email preferences are configurable. Learn more here.
  • The task is now assigned to you. You can see the sequence of changes growing below the current issue properties.
  • Click Collaboration Workspace to return to the main page.
  • Notice the task is assigned to you now.
  • Click New Task to create a new task.
  • Enter your choice of title. Notice that the default status "Open" cannot be changed, but you can set the priority (the priority field is required) and enter other information.
  • Assign the task to the "lab technician" user.
  • Click Save to open the new issue.
  • When you return to the task list on the Collaboration Workspace page, you will see it listed as issue 2.

To show how the resolution process works, use the fact that you are an administrator and can use impersonation to take on another user's identity. Learn more here.

  • Select (Your Username) > Impersonate > User.
  • Choose "lab_technician" and click Impersonate. You are now seeing the page as the lab technician would. For example, you no longer have access to administrator options on the messages web part.
  • Click the title of the new task you assigned to the lab technician to open it.
  • Click Resolve.
  • Notice that by default, the resolved task will be assigned to the original user who opened it - in this case you!
  • The default value for the Resolution field is "Fixed", but you can select other options if appropriate.
  • Enter a comment saying you've completed the task, then click Save.
  • Click Stop Impersonating.
  • Open the task again and close it as yourself.
  • Enter a few more tasks to have more data for viewing.
  • When finished, return to the Collaboration Workspace main page.

The Task Tracker grid offers many options.

  • Search the contents by ID number or search term using the search box.
  • Sort and filter the list of tasks using the header menu for any column. Learn more here.
  • Create custom grid views (ways to view tasks), such as "All assigned to me" or "All priority 1 issues for the current milestone". Use (Grid Views) > Customize Grid and add filters and sorts here. Learn more here.
  • You could also customize the grid to expose other columns if helpful, such as the name of the user who originally opened the issue. Learn more here.

The generic name for these tools is "issue trackers" and they can be customized to many purposes. LabKey uses one internally for bug tracking, and client portals use them to track client questions and work requests.

Learn about creating your own issue tracker in this topic: Tutorial: Issue Tracking.
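
Like any LabKey grid, the task list is also accessible through the client APIs. Here is a minimal sketch using the Rlabkey package; the server URL is hypothetical, and we assume the issue list appears in the "issues" schema under the name "tasks" (the query name matches the name given to the issue list on your server):

```r
library(Rlabkey)

# Hypothetical values: substitute your trial server's URL and the
# actual name of your issue list as it appears in the "issues" schema.
tasks <- labkey.selectRows(
  baseUrl    = "https://mytrial.labkey.host",
  folderPath = "/Example Project/Collaboration Workspace",
  schemaName = "issues",
  queryName  = "tasks",
  colFilter  = makeFilter(c("Status", "EQUAL", "open"))
)
head(tasks)
```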

What Else Can I Do?

Add New Web Parts

Web parts are panels of user interface that display content to your users. As an administrator, you can customize what is shown on the page by using page admin mode. To add a new web part to any folder page:

  • Select > Page Admin Mode. Note that if you do not see this option, make sure you are logged in as an administrator and not impersonating another user.
  • Notice that <Select Web Part> pulldown menus appear at the bottom of the page. There is a wider "main" column on the left and a narrow column on the right; each column supports a different set of web parts.
    • Note: If both pulldown menus are stacked on the right, make your browser slightly wider to show them on separate sides.
  • Select the type of web part you want to create on the desired side. For example, to create a main panel wiki like the welcome panel shown in the Collaboration Workspace folder, select Wiki on the left.
  • Click Add.
  • The new web part will be added at the bottom of the column. While you are in Page Admin Mode, you can reposition it on the page using the (triangle) menu in the web part header. Move up or down as desired.
  • Click Exit Admin Mode in the upper right.

If you later decide to remove a web part from a page, the underlying content is not deleted. The web part only represents the user interface.

For more about web part types and functionality, see Add Web Parts.

Add a Wiki

Wiki documents provide an easy way to display content. They can contain any text or visual information and can be formatted in HTML, Markdown, or Wiki syntax; for plain text, use the "Wiki" format and simply include no formatting syntax. To create our first wiki, we use a wiki web part.

  • Add a Wiki web part on the left side of the page. If you followed the instructions above, you already did so.
  • Click Create a new wiki page. Note that if a page named "default" already exists in the folder, the new web part will display it. In this case, create a new one by selecting New from the (triangle) menu in the header of the web part.
  • The Name must be unique in the folder.
  • The Title is displayed at the top of the page.
  • Choosing a Parent page lets you organize many wikis into a hierarchy. The table of contents on the right of the page you are reading now shows many examples.
  • Enter the Body of the wiki. To change what formatting is used, use the Convert to... button in the upper right. A short guide to the formatting you select is shown at the bottom of the edit page.
  • Notice that you can attach files and elect whether to show them listed at the bottom of the wiki page.
  • Click Save & Close.
  • Scroll down to see your new wiki web part.
  • To reopen for editing, click the (pencil) icon.

More details about creating and using wikis can be found in this topic: Wikis.

Add a List

A list is a simple table of data. For example, you might store a list of labs you work with.

  • Select > Manage Lists.
  • You will see a number of preexisting lists related to the task tracker.
  • Click Create New List.
  • Name: Enter a short name (such as "labs"). It must be unique in the folder.
  • Review the list properties available; for this first example, leave them unchanged.
  • Click the Fields section header to open it.
  • Click Manually Define Fields.
  • In the Name box of the first field, enter "LabName" (no spaces) to create a key column for our list. Leave the data type "Text".
  • After doing this, you can set the Key Field Name in the blue panel. Select LabName from the dropdown (the field you just added).
  • Expand the field to see the options available. Even more can be found by clicking Advanced Settings.
  • Click Add Field and enter the name of each additional column you want in your list. In this example, we have added an address and contact person, both text fields. Notice the "Primary Key" designation is specified in the field details for LabName; this field cannot be deleted and you cannot change its data type.
  • Click Save to create the "labs" list. You will see it in the grid of "Available Lists".

To populate this new empty list:

  • Click the list name ("labs") in the grid. There is no data to show.
  • Select (Insert data) > Insert new row.
  • Enter any values you like.
  • Click Submit.
  • Repeat the process to add a few more rows.
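
You can also populate and read the list programmatically. A minimal sketch with the Rlabkey package, assuming a hypothetical trial-server URL and the "labs" list created above (with LabName, Address, and ContactPerson fields):

```r
library(Rlabkey)

# Hypothetical trial-server URL and folder path; substitute your own.
baseUrl    <- "https://mytrial.labkey.host"
folderPath <- "/Example Project/Collaboration Workspace"

# Insert one row; column names must match the fields of the "labs" list.
newRow <- data.frame(
  LabName       = "Anderson Lab",
  Address       = "123 Main St",
  ContactPerson = "B. Anderson",
  stringsAsFactors = FALSE
)
labkey.insertRows(baseUrl = baseUrl, folderPath = folderPath,
                  schemaName = "lists", queryName = "labs",
                  toInsert = newRow)

# Read the list back as a data frame.
labs <- labkey.selectRows(baseUrl = baseUrl, folderPath = folderPath,
                          schemaName = "lists", queryName = "labs")
print(labs)
```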

Now your collaboration workspace shows quick contact information on the front page.

Learn more about creating and using lists here and in the List Tutorial.

Learn more about editing fields and their properties here: Field Editor

Add a File Repository

  • Add a Files web part on the left side of the page, as described above.
  • Drag and drop one or more files into the web part to upload them.
  • Each file can then be downloaded by other users of the collaboration workspace.

See the Tutorial: File Repository for a walkthrough of using file repositories.

More Tutorials

Other tutorials using "Collaboration" folders that you can run on your LabKey Trial Server:

To avoid overwriting this Example Project content with new tutorial content, you could create a new "Tutorials" project to work in. See Exploring Project Creation for a walkthrough.

Explore More on your LabKey Trial Server




Exploring Laboratory Data


Laboratory data can be organized and analyzed using LabKey Assay folders. This topic walks you through the basics using a simple Excel spreadsheet standing in for typically more complex instrument output.

This topic is intended to be used alongside a LabKey Trial Server. You should have another browser window open to view the Example Project > Laboratory Data folder.

Tour

The "Laboratory Data" folder is an Assay folder. You'll see four web parts in our example:

  • 1. Learn About LabKey Assay Folders: a panel of descriptive information (not part of a default Assay folder).
  • 2. Assay List: a list of assays defined in the folder. Here we have "Blood Test Data."
  • 3. Blood Test Data Results: A query web part showing a grid of data.
  • 4. Files: a file repository where you can browse files in this container or upload new ones.

The folder includes a simple assay design created from the "Standard Assay Type" representing some blood test results. A single run (spreadsheet) of data has been imported using it. Continue below to try out the assay tools.

Try It Now

This section helps you try some key features of working with laboratory data.

Assay List

Using an Assay List web part, you can browse existing assay designs and, as an administrator, add new ones. An assay design tells the server how to interpret uploaded data. Click the name of a design in this web part to see the run(s) that have been uploaded using it.

  • Click the name Blood Test Data in the Assay List to see the "Runs" imported using it.
  • In this case, a single run (spreadsheet) "BloodTest_Run1.xls" has been imported. Click the Assay ID (in this case the file name) to see the results.

This example shows blood data for 3 participants over a few different dates. White blood counts (WBC), mean corpuscular volume (MCV), and hemoglobin (HGB) levels have been collected in an Excel spreadsheet.

To see the upload process, we can simulate importing the existing run. We will cancel the import before any reimporting actually happens.

  • Click Re-Import Run above the grid.
  • The first page shows:
    • Assay Properties: These are fixed properties, like the assay design name, that are read only for all imports using this design.
    • Batch Properties: These properties will apply to a set of files uploaded together. For example, here they include specifying how the data is identified (the Participant/Visit setting) and selecting a target study to use if any data is to be linked. Select /Example Project/Research Study (Research Study).
  • Click Next.
  • On the next page, notice the batch properties are now read only.
  • Below them are Run Properties to specify; these could be entered differently for each run in the assay, such as the "Assay ID". If not provided, the name of the file is used, as in our example.
  • Here you also provide the data from the assay run, either by uploading a data file (or reusing the one we already uploaded) or entering your own new data.
  • Click Show Expected Data Fields. You will see the list of fields expected to be in the spreadsheet. When you design the assay, you specify the name, type, and other properties of expected columns.
  • Click Download Spreadsheet Template to generate a blank template spreadsheet. It will be named "data_<datetimestamp>.xls". You could open and populate it in Excel, then upload it to this server as a new run. Save this template file and use it when you get to the Files section below.
  • Since we are just reviewing the process right now, click Cancel after reviewing this page.

Understand Assay Designs

Next, review what the assay design itself looks like. You need to be an administrator to perform this step.

  • From the Blood Test Data Runs page, select Manage Assay Design > Edit assay design to open it.

The sections correspond to how a user will set properties as they import a run. Click each section heading to open it.

  • Assay Properties: Values that do not vary per run or batch.
  • Batch Fields: Values set once for each batch of runs.
  • Run Fields: Values set per run; in this case none are defined, but as you saw when importing the file, some built-in properties like "Assay ID" are defined per run.
  • Results Fields: This is the heart of the assay design and where you would identify what columns (fields) are expected in the spreadsheet, what their data type is and whether they are required.
  • Expand a row to see or set other properties of each column. With the panel open, click Advanced Settings for even more settings and properties, including things like whether a value can be used as a "measure" in reporting, as shown here for the WBC field. Learn more about using the field editor in this topic: Field Editor.
  • Review the contents of this page, but make no changes and click Cancel.
  • Return to the main folder page by clicking the Laboratory Data link near the top of the page.

Blood Test Data Results

The assay results are displayed in a web part on the main page. Click the Laboratory Data link if you navigated away, then scroll down. Explore some general features of LabKey data grids using this web part:

Column Header Menus: Each column header has a menu of options to apply to the grid based on the values in that column.


Sort:

  • Click the header of the WBC column and select Sort Descending.
  • Notice the (sort) icon that appears in the column header.


Filter:

  • Click the header of the Participant ID column and select Filter... to open a popup.
  • Use the checkboxes to "Choose Values". For this example, uncheck the box for "202".
    • The "Choose Values is only available when there are a limited number of distinct values.
    • Switching to the "Choose Filters" tab in this popup would let you use filter expressions instead.
  • Click OK to apply the filter.
    • Notice the (filter) icon shown in the column header. There is also a new entry in the filter panel above the grid; you can clear this filter by removing the "Participant ID <> 202" entry there.
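
The same sort and filter can be applied when retrieving this data through the APIs. A sketch using Rlabkey, with a hypothetical server URL; the field name "ParticipantID" is an assumption based on the standard assay fields:

```r
library(Rlabkey)

# Hypothetical server URL; "assay.General.<design name>" is the schema
# pattern for Standard assay designs, and "Data" holds the result rows.
results <- labkey.selectRows(
  baseUrl    = "https://mytrial.labkey.host",
  folderPath = "/Example Project/Laboratory Data",
  schemaName = "assay.General.Blood Test Data",
  queryName  = "Data",
  colFilter  = makeFilter(c("ParticipantID", "NOT_EQUAL", "202")),
  colSort    = "-WBC",          # leading "-" sorts descending
  colNameOpt = "fieldname"      # return field names like WBC, MCV
)
head(results)
```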


Summary Statistics:

  • Click the header of the MCV column and select Summary Statistics... to open a popup.
  • Select the statistics you would like to show (the values are previewed in the popup).
    • Premium Feature: Many summary statistics shown are only available with premium editions of LabKey Server. Learn more here.
  • In this case, Mean and Standard Deviation might be the most interesting. Click Apply.
  • Notice the new row at the bottom of the grid showing the statistics we selected for this column.
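
To reproduce these statistics outside the UI, you can summarize the data frame retrieved in the previous sketch (assuming the MCV field name carried through):

```r
# Same statistics as the UI, computed on the data frame from above.
mean(results$MCV, na.rm = TRUE)
sd(results$MCV, na.rm = TRUE)
```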


The view of the grid now includes a sort, a filter, and a summary statistic.

Notice the message "This grid view has been modified" was added when you added the summary statistic. The grid you are now seeing is not the original default. You have not changed anything about the underlying imported data table, merely how you see it in this grid.


Saving: To save this changed grid view, making it persistent and sharable with others, follow these steps:

  • Click Save next to the grid modification message.
  • Select Named and give the new grid a name (such as "Grid with Statistics").
  • Check the box for "Make this grid view available to all users."
  • Click Save.
  • Notice that the grid now has Grid with Statistics shown above it.
  • To switch between grid views, use the (Grid Views) menu in the grid header. Switch between "default" and "Grid with Statistics" to see the difference.
  • Return to the "default" view before proceeding with this walkthrough.

Visualizations and Reports:

There is not enough data in this small spreadsheet to make meaningful visualizations or reports, but you can see how the options work:

  • Column Visualizations: Each column header menu has a set of visualizations available that can be displayed directly in the grid view. Options vary based on the data type. For example, from the "WBC" column header menu, choose Box and Whisker.


  • Quick Charts: Choose Quick Chart from the header menu of any column to see a quick best guess visualization of the data from that column. The default is typically a box and whisker plot.
  • While similar to the column visualization above, a Quick Chart is a stand-alone chart that is named and saved separately from the grid view.
  • It also opens in the "Chart Wizard", described next, which offers many customization options.
  • Note that a chart like this is backed by the "live" data in the source table. If you change the underlying data (either by editing or by adding additional rows to the table) the chart will also update automatically.

  • Plot Editor: More types of charts and many more configuration options are available using the common plot editor, the "Chart Wizard".
  • If you are already viewing a plot, you can change the plot type immediately by clicking Edit and then Chart Type.
  • If you navigated elsewhere, you can open the plot editor by returning to the grid view and selecting (Charts) > Create Chart above the grid.
  • Select the type of chart you want using the options along the left edge.

Learn more about creating and customizing each chart type in the documentation. You can experiment with this small data set or use the more complex data when Exploring LabKey Studies.
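
If you prefer scripting, a rough equivalent of the box-and-whisker quick chart can be drawn from the data frame retrieved earlier; note that, unlike a saved LabKey chart, it will not update automatically with the live data:

```r
# A rough scripted equivalent of the WBC box-and-whisker quick chart.
boxplot(results$WBC, ylab = "WBC", main = "WBC - Box Plot")
```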

Files

The Files web part lists the single assay run as well as the log file from when it was uploaded.

To explore the abilities of the file browser, we'll create some new assay data to import. You can also click here to download ours if you want your data to match our screenshots without creating your own: data_run2.xls

  • Open the template you downloaded earlier. It is an Excel file named "data_<datetimestamp>.xls"
  • Enter some values. Some participant ID values to use are 101, 202, 303, 404, 505, and 606, though any integer values will be accepted. You can ignore the VisitID column and enter any dates you like. The WBC and MCV columns are integers, HGB is a double. The more rows of data you enter, the more "new" results you will have available to "analyze."
  • Save the spreadsheet with the name "data_run2.xls" if you want to match our instructions.

  • Return to the Example Project > Laboratory Data folder.
  • Drag and drop the "data_run2.xls" file into the Files web part to upload it.
  • Select it using the checkbox and click Import Data.
  • Scroll down to find the "Import Text or Excel Assay" category.
  • Select Use Blood Test Data - the assay design we predefined. Notice you also have the option to create a new "Standard" assay design instead if you wanted to change the way it is imported.
  • Click Import.
  • You will see the Assay Properties and be able to enter Batch Properties; select "Example Project/Research Study" as the Target Study.
  • Click Next.
  • On the Run Properties and Data File page, notice your "data_run2.xls" file listed as the Run Data. You don't need to make any changes on this page.
  • Click Save and Finish.
  • Now you will see the runs grid, with your new run added to our original sample.
  • Click the file name to see the data you created.
  • Notice the (Grid Views) menu includes the custom grid view "Grid with Statistics" we created earlier - select it.
  • Notice the grid also shows a filter "Run = 2" (the specific run number may vary) because you clicked the run name to view only results from that run. Remove this filter using the filter panel above the grid.
  • You now see the combined results from both runs.
  • If you saved any visualizations earlier using the original spreadsheet of data, view them now from the (Charts/Reports) menu, and notice they have been updated to include your additional data.
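
To confirm the new run programmatically, you can query the design's "Runs" table as well (same hypothetical URL and schema assumptions as the earlier sketch):

```r
# List the imported runs; one row per spreadsheet imported with this design.
runs <- labkey.selectRows(
  baseUrl    = "https://mytrial.labkey.host",
  folderPath = "/Example Project/Laboratory Data",
  schemaName = "assay.General.Blood Test Data",
  queryName  = "Runs"
)
head(runs)
```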

What Else Can I Do?

Define a New Assay

LabKey can help you create a new assay design from a spreadsheet of your own. Choose something with a few columns of different types, or simply add a few more columns to the "data_run2.xls" spreadsheet you used earlier. If you have existing instrument data that is in tabular format, you can also use that to complete this walkthrough.

Note that LabKey does have a few reserved names in the assay framework, including "container, created, createdBy, modified, and modifiedBy", so if your spreadsheet contains these columns you may encounter mapping errors if you try to create a new assay from it. There are ways to work around this issue, but for this getting-started tutorial, try renaming the columns or using another sample spreadsheet.

Here we use the name "mydata.xls" to represent your own spreadsheet of data.

  • Navigate to the main page of the Laboratory Data folder.
  • Drag and drop your "mydata.xls" file to the Files web part.
  • Select the file and click Import Data.
  • In the popup, choose "Create New Standard Assay Design" and click Import.
  • Give your new assay design a Name, such as "MyData".
  • Notice the default Location is the current project. Select instead the "Current Folder (Laboratory Data)".
  • The Columns for Assay Data section shows the columns imported from your spreadsheet and the data types the server has inferred from the contents of the first few rows of your spreadsheet.
  • If you see a column here that you do not want to import, simply uncheck it. If you want to edit column properties, you can do that using the (triangle) button next to the column name.
  • You can change the inferred data types as necessary. For example, if you have a column that happens to contain whole number values, the server will infer it is of type "Integer". If you want it to be "Number (Double)" instead, select that type after clicking the (caret) button next to the type. If a column happens to be empty in the first few rows, the server will guess "Text (String)" but you can change that as well.
  • Column Mapping below these inferred fields is where you would map things like Participant ID and Date information. For instance, if you have a column called "When" that contains the date information, you can tell the server that here. It is not required that you provide mappings at all.
  • When you are satisfied with how the server will interpret your spreadsheet, scroll back up and click Begin Import.
  • You will be simultaneously creating the assay design (for this and future uses) and importing this single run. Enter batch properties if needed, then click Next.
  • Enter run properties if needed, including an Assay ID if you want to use a name other than your file name.
  • Click Show Expected Data Fields to review how your data will be structured. Notice that if you didn't provide mappings, new columns will be created for Specimen, Participant, Visit ID, and Date.
  • Click Save and Finish.

  • You now see your new run in a grid, and the new assay design has been created.
  • Click the filename (in the Assay ID column) to see your data.
  • Click column headers to sort, filter, or create quick visualizations.
  • Select Manage Assay Design > Edit assay design to review the assay design.
  • Click Laboratory Data to return to the main folder page.
  • Notice your new assay design is listed in the Assay List.
  • You can now upload and import additional spreadsheets with the same structure using it. When you import .xls files in the future, your own assay design will be one of the import options offered.

Understand Link to Study

You may have noticed the Linked to Research Study column in the Blood Test Data Results web part. The sample assay design is configured to link data automatically to the Example Project/Research Study folder on your Trial Server.

"Linking" data to a study does not copy or move the data. What is created is a dynamic link so that the assay data can be integrated with other data in the study about the same participants. When data changes in the original container, the "linked" set in the study is also updated.

Click the View Link to Study History link in the Blood Test Data Results web part.

You will see at least one link event from the original import when our sample data was loaded and linked during the startup of your server (shown as being created by the "team lead"). If you followed the above steps and uploaded a second "data_run2.xls" spreadsheet, that will also be listed, created and linked by you.

  • Click View Results to see the result rows.
  • Click one of the links in the Linked to Research Study column. You see what appears to be the same grid of data, but notice by checking the project menu or looking at the URL that you are now in the "Research Study" folder. You will also see the tabs present in a study folder.
  • Click the Clinical and Assay Data tab.
  • Under Assay Data you will see the "Blood Test Data" linked dataset. In the topic Exploring LabKey Studies we will explore how that data can now be connected to other participant information.
  • Using the project menu, return to the Laboratory Data folder.
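
Because linking creates a dataset in the target study, the linked data is also queryable through that folder's "study" schema. A sketch, assuming a hypothetical server URL and that the linked dataset kept the assay design's name:

```r
library(Rlabkey)

# The linked dataset lives in the target study's "study" schema,
# under the assay design's name (an assumption for this example).
linked <- labkey.selectRows(
  baseUrl    = "https://mytrial.labkey.host",
  folderPath = "/Example Project/Research Study",
  schemaName = "study",
  queryName  = "Blood Test Data"
)
head(linked)
```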

More Tutorials

Other tutorials using "Assay" folders that you can run on your LabKey Trial Server:

To avoid overwriting this Example Project content with new tutorial content, you could create a new "Tutorials" project to work in. See Exploring Project Creation for a walkthrough.

Explore More on your LabKey Trial Server




Exploring LabKey Studies


Using LabKey Studies, you can integrate and manage data about participants over time. Cohort and observational studies are a typical use case. Learn more in this topic.

This topic is intended to be used alongside a LabKey Trial Server. You should have another browser window open to view the Example Project > Research Study folder.

Tour

The "Research Study" folder contains a simple fictional study. There are 5 tabs in the default LabKey study folder; the main landing page is the Overview tab where you will see three web parts:

  • 1. Learn About LabKey Study Folders: a panel of descriptive information (not part of a default Study folder)
  • 2. Study Data Tools: commonly used tools and settings.
  • 3. Study Overview: displays study properties and quick links to study navigation and management options.

Each tab encapsulates related functions within the study, making it easier to find what you need. Click a tab name to navigate to it. Returning to the Research Study folder returns you to the Overview tab.

Participants: View and filter the participant list by cohort; search for information about individuals.

Clinical and Assay Data: Review data and visualizations available; create new joined grids, charts, and reports.

The Data Views web part lists several datasets, joined views, and reports and charts.

  • Hover over a name to see a preview and some metadata about it.
  • Click a name or use the (Details) link to open an item.
  • The Access column indicates whether special permissions have been set for an item. Remember that only users granted "Read" access to this folder can see any content here, so seeing "public" means only that it is shared with all folder users. Other options are "private" (only visible to the creator) and custom (shared with specific individuals). See Configure Permissions for Reports & Views for more information.

Datasets:

    • Demographic datasets contain a single row per participant for the entire study.
    • Clinical datasets can contain many rows per participant, but only one per participant and date combination.
    • Assay datasets, typically results from instrument tests, can contain many rows per participant and date.

Manage: Only administrators can access this dashboard for managing the study.

Try It Now

This example study includes a simplified research scenario. Let's explore some key feature areas on each tab:

Display Study Properties

The Study Overview web part shows properties and introductory information about your study.

  • Click the Overview tab.
  • Click the (pencil) icon on the Study Overview web part.
  • Review and change study properties:
    • Label: By default, the folder name, here "Research Study," is used. You can edit it to be more descriptive, such as "HIV Study".
    • Investigator: Enter your own name to personalize this example.
    • Grant/Species: If you enter the grant name and/or species under study, they will be displayed. These fields also enable searches across many study folders to locate related research. Enter a grant name to see how it is shown.
    • Description/Render Type: Our example shows some simple HTML formatted text. You can include as much or as little information here as you like. Select a different Render Type to use Markdown, Wiki syntax, or just plain text.
    • Protocol Documents: Attach documents if you like - links to download will be included in the web part.
    • Timepoint Type: Studies can use several methods of tracking time; this decision is fixed at the time of study creation and cannot be modified here. See Visits and Dates to learn more.
    • Start/End Date: See and change the timeframe of your study if necessary. Note that participants can also have individual start dates. Changing the start date for a study in progress should be done with caution.
    • Subject Noun: By default, study subjects are called "participants" but you can change that here to "subject," "mouse," "yeast," or whatever noun you choose. Try changing these nouns to "Subject" and "Subjects".
    • Subject Column Name: Enter the name of the column in your datasets that contains IDs for your study subjects. You do not need to change this field to match the subject nouns you use.
  • When finished, click Submit and see how this information is displayed. Notice the "Participants" tab name is changed.
  • Reopen the editor and change the subject noun back to "Participant[s]" to restore the original tab and tool names for this walkthrough.

Track Overall Progress

  • Return to the Overview tab if you navigated away.
  • Click the Study Navigator link or the small graphic in the Study Overview web part.
  • The Study Navigator shows you at a glance how much data is available in your study and when it was collected.
    • Rows represent datasets, columns represent timepoints.
  • Use the Participant's current cohort dropdown to see collection by cohort.
  • Use the checkboxes to switch between seeing counts of participants or rows or both.
  • Click the number at the intersection of any row and column to see the data. For example, Lab Results in month two look like this:

View Participant Data

Within a study you can dive deep for all the information about a single participant of interest.

  • Click the Participants tab.
  • If you know the participant ID, you can use the search box to find their information.
  • The Participant List can be quickly filtered using the checkboxes on the left.
  • Use the Filter box to narrow the list if you know part of the participant ID.
  • Hover over a label to see the group member IDs shown in bold. Click a label to select only that filter option in that category. Here we see there are 8 participants enrolled receiving ARV treatment.
  • Click any participant ID to see all the study data about that participant.
  • The details of any dataset can be expanded and collapsed.
  • Click the Search For link above the report to search the entire site for other information about that participant. In this case, you will also see results from the "Laboratory Data" folder in this project.
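
You can retrieve the same per-participant slice through the APIs by filtering a dataset on the participant ID. A sketch with Rlabkey, using a hypothetical server URL and a placeholder participant ID:

```r
library(Rlabkey)

# "PT-101" is a placeholder; use a participant ID that exists in your study.
one_participant <- labkey.selectRows(
  baseUrl    = "https://mytrial.labkey.host",
  folderPath = "/Example Project/Research Study",
  schemaName = "study",
  queryName  = "Lab Results",
  colFilter  = makeFilter(c("ParticipantId", "EQUAL", "PT-101"))
)
head(one_participant)
```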

Integrate Data Aligned by Participant and Date

Study datasets can be combined to give a broad picture of trends within groups over time.

  • Click the Clinical and Assay Data tab.
  • There are three primary datasets here: Demographics, Physical Exam, and Lab Results.
  • Click "Joined View: Physical Exam and Demographics" to open an integrated custom grid.
This grid includes columns from two datasets. To see how it was created:
  • Select (Grid Views) > Customize Grid.
  • Scroll down on the Available Fields side to Datasets and expand that node. Listed are the two other datasets.
  • Expand the "Demographics" node.
  • Scroll down and notice the checked fields like Country and Group Assignment which appear in our joined view.
  • Scroll down on the Selected Fields side to see these fields shown.
  • You can use checkboxes to add more fields to what is shown in the grid, and drag and drop to rearrange columns in your view. For any selected field, you can also edit its display title or delete it from the view.
  • Click View Data when finished to see your changes.
  • Notice the message indicating you have unsaved changes. Click Revert to discard them.
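
A similar join can be expressed in LabKey SQL through the API. A sketch using Rlabkey's labkey.executeSql; the dataset and column names are assumptions based on this example study:

```r
library(Rlabkey)

# Column and dataset names here are assumptions based on this example study.
joined <- labkey.executeSql(
  baseUrl    = "https://mytrial.labkey.host",
  folderPath = "/Example Project/Research Study",
  schemaName = "study",
  sql = '
    SELECT pe.ParticipantId, pe.Date, pe.SystolicBloodPressure, d.Country
    FROM "Physical Exam" pe
    JOIN Demographics d ON pe.ParticipantId = d.ParticipantId
  '
)
head(joined)
```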

Customize Visualizations and Reports

In Exploring Laboratory Data we show you how to create charts and plots based on single columns or tables. With the integration of diverse data in a study, you can easily create visualizations and reports across many tables, backed by live data. Our example study includes a few examples.

  • On the Clinical and Assay Data tab, click Bar Chart: Blood Pressure by Cohort and Treatment.
  • Here we see plotted a measure from the "Physical Exam" dataset (Systolic Blood Pressure) against cohort and treatment group data from the "Demographics" dataset.
  • Click Edit in the upper right to open the plot for editing.
  • Click Chart Type to open the chart wizard.
  • Columns on the right can be dragged and dropped into the plot attribute boxes in the middle.
  • Select a different plot type using the options to the left. The plot editor will make a best effort to retain plot attribute column selections, though not all attributes apply to each chart type.
  • Click Box.
    • Notice that the X and Y axis selections are preserved.
    • Drag the column "Study: Cohort" to the "Color" box.
    • Drag the column "Gender" to the "Shape" box.
  • Click Apply to see the new box plot. At present, only the outliers make use of the color and shape selections.
  • Click Chart Layout.
    • Give the plot a new title, like "Systolic - Box Plot"
    • Change Show Points to All.
    • Check the box to Jitter Points; otherwise points will be shown in a single column.
    • Scroll down to see the controls for plot line and fill colors.
    • Choose a different Color Palette from which the point colors will be selected. Shown here, "Dark".
  • Click Apply to see the revised box plot.
  • Click Save As to save this as a new visualization and preserve the original bar chart.
  • Give the new report a name and click Save in the popup.

Manage Your Study

On the Manage tab, you can control many aspects of your study. For example, click Manage Cohorts to review how cohorts were assigned and what the other options are:

  • Assignment Mode: Cohorts can be simple (fixed) or advanced, meaning they can change during the study.
  • Assignment Type: You can manually assign participants to cohorts, or have assignments made automatically based on a dataset.
  • Automatic Participant/Cohort Assignment: Choose the dataset and column to use for assigning cohorts.

Optional:
  • You can experiment by changing which column is used to define cohorts: For instance, choose "Country" and click Update Assignments.
  • Notice the new entries under Defined Cohorts and new assignments in the Participant-Cohort Assignments web part.
  • Click the Participants tab.
  • Now you see under Cohorts that both the original cohorts and the new ones are listed. Using the hover behavior, notice that the original "Group 1" and "Group 2" cohorts are now empty and participants can quickly be filtered by country.
  • Go back in your browser window or click the Manage tab and Manage Cohorts to return to the cohort page.
  • Under Defined Cohorts you can click Delete Unused, then return to the Participants tab to see they are gone.
  • Click the Clinical and Assay Data tab. Click Bar Chart: Blood Pressure by Cohort and Treatment and you will see it has been automatically updated to reflect the new cohort division by country.

  • Restore the original cohorts before moving on:
    • On the Manage tab, click Manage Cohorts.
    • Restore the original assignments based on the Demographics/Group Assignment field.
    • Click Update Assignments.
    • Under Defined Cohorts, click Delete Unused.

What Else Can I Do?

Manage Dataset Security

Access to read and edit information in folders is generally controlled by the LabKey role-based security model. Within a study, you gain the additional option of dataset-level security.

  • On the Manage tab, click Manage Security.
  • Review the Study Security Type: Datasets can be read-only or editable, under either type of security:
    • Basic security: folder permissions determine access to all datasets
    • Custom security: folder permissions can be set by dataset for each group with folder access
  • Change the type to Custom security with editable datasets. Notice the warning message that this can change who can view and modify data.
  • Click Update Type.
  • When you change to using 'custom' security, two additional web parts are added to the page:
    • Study Security: Use radio buttons to grant access to groups. Click Update after changing. In the screenshot below, we're changing the "Study Team" and "Example Team Members" groups to "Per Dataset" permissions.
    • Per Dataset Permissions: Once any group is given per-dataset permissions using the radio buttons, you will have the option to individually set permission levels for each group for each dataset.
  • Click Save when finished. To revert to the configuration before this step, set all datasets for "Example Team Members" to Read and all for "Study Team" to Edit.

Learn About Protecting PHI

When displaying or exporting data, Protected Health Information (PHI) that could be used to identify an individual can be protected in several ways.

Alternate Participant IDs and Aliases

If you want to share data without revealing participant IDs, you can use a system of alternates or aliases so that you can still show a consistent ID for any set of data, but it is not the actual participant ID.

  • On the Manage tab, click Manage Alternate Participant IDs and Aliases.
  • Alternate participant IDs use a consistent prefix of your choosing plus a randomized number with however many digits you specify.
  • Dates are also offset by a random amount for each participant. Visit-date information could potentially isolate the individual, so this option obscures that without losing the elapsed time between visits which might be relevant to your study.
  • Participant Aliases lets you specify a dataset containing specific aliases to use. For instance, you might use a shared set of aliases across all studies to connect related results without positively identifying individuals.

Mark Data as PHI

There are four levels to consider regarding protection of PHI, based on how much information a user will be authorized to see. For PHI protection to be enforced, you must BOTH mark data as PHI and implement viewing and export restrictions on your project. Data in columns can be marked as:

  • Restricted: The most sensitive information not shared with unauthorized users, even those authorized for Full PHI.
  • Full PHI: Only shared when users have permission to see all PHI. The user can see everything except restricted information.
  • Limited PHI: Some information that is less sensitive can be visible here.
  • Not PHI: All readers can see this information.

Learn more in this topic: Protecting PHI Data.

To mark a column's PHI level:
  • On the Clinical and Assay Data tab, open the dataset of interest. Here we use the Demographics dataset.
  • Above the grid, click Manage.
  • Click Edit Definition.
  • Click the Fields section to open it.
  • Find the field of interest, here "Status of Infection," and expand it.
  • Click Advanced Settings.
  • In the popup, you will see the current/default PHI Level: "Not PHI".
  • Make another selection from the dropdown to change; in this example, we choose "Full PHI".
  • Click Apply.
  • Adjust PHI levels for as many fields as necessary.
  • Scroll down and click Save.
  • Click View Data to return to the dataset.

Note that these column markings are not sufficient to protect data from view to users with read access to the folder. The levels must be enforced with UI implementation, perhaps including users declaring their PHI level and agreeing to a custom terms of use. For more information on a LabKey implementation of this behavior, see Compliance Features.

To export or publish data at a given PHI level:
  • From the Manage tab, click Export Study.
  • On the export page, notice one of the Options is Include PHI Columns. Select what you want included in your export:
    • Restricted, Full, and Limited PHI (default): Include all columns.
    • Full and Limited PHI: Exclude only the Restricted PHI columns.
    • Limited PHI: Exclude the Full PHI and Restricted columns.
    • Uncheck the checkbox to exclude all PHI columns.
  • If you marked the demographics column as "Full PHI" above, select "Limited PHI" to exclude it. You can also simply uncheck the Include PHI Columns checkbox.
  • Click Export.
  • Examine the downloaded archive and observe that the file named "ARCHIVE_NAME.folder.zip/study/datasets/dataset5001.tsv" does not include the data you marked as PHI.

Publish the Study

Publishing a LabKey study allows you to select all or part of your results and create a new published version.

  • Click the Manage tab.
  • Click Publish Study (at the bottom).
  • The Publish Study Wizard will guide you through selecting what to publish.
    • By default, the new study will be called "New Study" and placed in a subfolder of the current study folder.
    • Select the participants, datasets, timepoints, and other objects to include. On the Datasets step, you can elect to have the study refresh data if you like, either manually or nightly.
    • The last page of the publish wizard offers Publish Options including obscuring information that could identify individuals and opting for the level of PHI you want to publish.
  • Click Finish at the end of the wizard to create the new study folder with the selected information.

Explore the new study, now available on the project menu.

More Tutorials

Other tutorials using "Study" folders that you can run on your LabKey Trial Server:

To avoid overwriting this Example Project content with new tutorial content, you could create a new "Tutorials" project to work in. See Exploring Project Creation for a walkthrough.

Explore More on your LabKey Trial Server




Exploring LabKey Security


Learn more about how LabKey manages security through roles and groups in this topic.

This topic is intended to be used alongside a LabKey Trial Server. You should have another browser window open to view the Example Project folder. This walkthrough also assumes you are the original creator of the trial server and are an administrator there, giving you broad access site-wide.

Tour

Our Example Project contains three subfolders, intended for different groups of users:

  • Collaboration Workspace: The entire team communicates here and shares project-wide information.
  • Laboratory Data: A lab team performs tests and uploads data here, perhaps performing basic quality control.
  • Research Study: A team of researchers is exploring a hypothesis about HIV.

LabKey's security model is based on assignment of permission roles to users, typically in groups.

Project Groups

Groups at the project level allow you to subdivide your project team into functional subgroups and grant permissions on resources to the group as a whole in each folder and subfolder. While it is possible to assign permissions individually to each user, it can become unwieldy to maintain in a larger system.

  • Navigate to the Example Project. You can be in any subfolder.
  • Select > Folder > Permissions.
  • Click the tab Project Groups.
  • There are 5 predefined project groups. See at a glance how many members are in each.
  • Click the name to see the membership. Click Example Team Members and see the list of all our example users.
  • Click Done in the popup.
  • Click the Lab Team to see that the two members are the team lead and the lab technician.
  • Click the Study Team to see that the two members are the team lead and the study researcher.

Next we'll review how these groups are assigned different permissions within the project's subfolders.

Permission Roles

Permission roles grant different types of access to a resource. Read, Edit, and Admin are typical examples; there are many more permission roles available in LabKey Server. Learn more here.

  • If you navigated away after the previous section, select > Folder > Permissions.
  • Click the Permissions tab.
  • In the left column list of folders, you will see the entire project hierarchy. The folder you are viewing is shown in bold. Click Example Project to see the permissions in the project itself.
  • The "Admin Team" is both project and folder administrator, and the "Example Team Members" group are Editors in the project container.

  • Click Collaboration Workspace and notice that the "Example Team Members" are editors here, too.
  • Click Laboratory Data. In this folder, the "Lab Team" group has editor permissions, and the "Example Team Members" group only has reader permission.
    • Note that when users are members of multiple groups, as with our sample "lab_technician@local.test", they hold the sum of the permissions granted through those groups. This lab technician has read access through "Example Team Members" membership, but also editor access through "Lab Team" membership, so that user will be able to edit contents here.
  • To see the user membership of any group, click the group name in the permissions UI.
  • To see all permissions granted to a given user, click the Permissions link in the group membership popup.
  • This example lab technician can edit content in the example project, the collaboration workspace folder, and the laboratory data folder. They can read but not edit content in the research study folder.

  • Close any popups, then click Cancel to exit the permissions editing UI.

Try It Now

Impersonation

Using impersonation, an admin can see what another user would be able to see and do on the server.

  • Navigate to the Example Project/Research Study folder using the project menu.
  • Notice that as yourself, the application admin on this server, you can see a (pencil) icon in the header of the Study Overview web part. You would click it to edit study properties. You also see the Manage tab.
  • Select (User) > Impersonate > User.
  • Select "lab_technician@local.test" from the dropdown and click Impersonate.
  • Now you are seeing the content as the lab technician would: with permission to read but not edit, as we saw when reviewing permissions above.
  • Notice the (pencil) icon and Manage tab are no longer visible. You also no longer have some of the original options on the menu in the header.
  • Click Stop Impersonating to return to your own "identity".

Impersonate a Group

Impersonation of a group can help you better understand permissions and access. In particular, when configuring access and deciding what groups should include which users, group impersonation can be very helpful.

  • Navigate to the Example Project/Research Study folder, if you navigated away.
  • Select (User) > Impersonate > Group.
  • Choose the "Study Team" and click Impersonate.
  • Hover over the project menu and notice that the only folder in "Example Project" that members of the "Study Team" can read is this "Research Study" folder.
  • Click Stop Impersonating.
  • To see an error, select (User) > Impersonate > Group, choose the "Lab Team" and click Impersonate. This group does not have access to this folder, so the error "User does not have permission to perform this operation" is shown.
  • Click Stop Impersonating.

Impersonate a Role

You can also directly impersonate roles like "Reader" and "Submitter" to see what access those roles provide.

To learn more about impersonation, see Test Security Settings by Impersonation.

Audit Logging

Actions on LabKey Server are extensively logged for audit and security review purposes. Impersonation is among the events logged. Before you complete this step, be sure you have stopped impersonating.

  • Select > Site > Admin Console.
  • Under Management, click Audit Log.
  • Using the pulldown, select User Events.
  • You will see the impersonation events you just performed. Notice that for impersonations of individual users, these are paired events - both the action taken by the user to impersonate, and the action performed "on" the user who was impersonated.
  • Explore other audit logs to see other kinds of events tracked by the server.

What Else Can I Do?

The Security Tutorial walks you through more security features. You can create a new project for tutorials on your trial server, then run the security tutorial there.

Learn More

Explore More on your LabKey Trial Server




Exploring Project Creation


Once you've learned about LabKey and explored your LabKey Trial Server, you can start creating your own projects and custom applications.

This topic assumes you are using a LabKey Trial Server and have it open in another browser window.

As a first project, let's create a "Tutorials" project in which you can run some LabKey tutorials.

Create a Project

To open the project creation wizard you can:

  • Click the "Create" button on the Trial Server home page.
  • OR: Select > Site > Create Project.
  • OR: Click the "Create Project" icon at the bottom of the project menu as shown:

The project creation wizard includes three steps.

  1. Give the project a Name and choose the folder type.
  2. Configure users and permissions (or accept the defaults and change them later).
  3. Choose optional project settings (or configure them later).
  • Select > Site > Create Project.
  • Step 1: Create Project:
    • Enter the Name: "Tutorials". Project names must be unique on the server.
    • Leave the box checked to use this as the display title. If you unchecked it, you could enter a different display title.
    • Folder Type: Leave the default selection of "Collaboration". Other folder types available on your server are listed; hover to learn more about any type. Click Folder Help for the list of folder types available in a full LabKey installation.
    • Click Next.

  • Step 2: Users / Permissions: Choose the initial security configuration. As the admin, you can also change it later.
    • The default option "My User Only" lets you set up the project before inviting additional users.
    • The other option "Copy From Existing Project" is a helpful shortcut when creating new projects to match an existing one.
    • Leave "My User Only" selected and click Next.


  • Step 3: Project Settings:
    • On a LabKey Trial Server, you cannot Choose File Location, so this option is grayed out.
    • Advanced Settings are listed here for convenience, but you do not need to set anything here.
    • Simply click Finish to create your new project.
  • You'll now see the landing page of your new project and can start adding content.

Add Some Content

Let's customize the landing page and make it easier to use.

  • In the Wiki web part, click Create a new wiki page to display in this web part.
  • Leave the Name as "default"
  • Enter the Title "Welcome"
  • In the body field, enter: "Feel free to make a new subfolder and run a tutorial in it."
  • Click Save & Close.
  • Customize the page layout:
    • Select > Page Admin Mode.
    • In the web part header menu for your new "Welcome" wiki, select (triangle) > Move Up.
    • In the web part header menu for both "Messages" and "Pages", select (triangle) > Remove From Page.
  • Click Exit Admin Mode.

You can now use this project as the base for tutorials.

Subfolders Web Part

Most LabKey tutorials begin with the creation of a new subfolder of a specific type, which you can add by clicking Create New Subfolder here. There are no subfolders to display yet, but once you add a few, this web part will look something like this, giving you a one-click way to return to a tutorial folder later.

Tutorials for your LabKey Trial Server

Try one of these:

Share With Others

Now that you have created a new project and learned more about LabKey Server, consider sharing with a colleague. To invite someone to explore the Trial Server you created, simply add them as a new user:

  • Select > Site > Site Users.
  • Click Add Users above the grid of existing site users.
  • Enter one or more email addresses, each on its own line.
  • If you want to grant the new user(s) the same permissions on the server as another user, such as yourself, check the box for Clone permissions from user: and enter the user ID. See what permissions will be cloned by clicking Permissions.
  • To grant different permissions to the new user, do not check this box; simply click Add Users and configure permission settings for each project and folder individually.
  • A notification email will be sent inviting the new user unless you uncheck this option. You can also see this email by clicking the 'here' link that will appear in the UI.
  • Click Done.
  • You will see the new user(s) listed in the grid. Click the Permissions link for each one in turn and see the permissions that were granted. To change these in any given project or folder, navigate to it and use > Folder > Permissions. See Configure Permissions for more information.

What Else Can I Do?

Change the Color Scheme

The color scheme, or theme, is a good way to customize the look of a given project. To see what the other themes look like, see Web Site Theme.

To change what your Tutorials project looks like:

  • Navigate to the Tutorials project, or any subfolder of it.
  • Select > Folder > Project Settings.
  • Under Theme, select another option.
  • Scroll down and click Save.
  • Return to the folder and see the new look.
  • Switch to the home project (click the LabKey logo in the upper left) and see the site default still applies there.
  • To restore the original look, return to the Tutorials project and reset the "Leaf" theme.

You can also change the theme for the site as a whole by selecting > Site > Admin Console. Click Settings, then under Configuration, choose Look and Feel Settings. The same options are available for the Theme setting.

What's Next?

You can create more new projects and folders on your LabKey Trial Server to mock up how it might work for your specific solution. Many features available in a full LabKey installation are not shown in the trial version; you can learn more in our documentation.

Feel free to contact us and we can help you determine if LabKey is the right fit for your research.




Extending Your Trial


When your LabKey Server Hosted Trial is nearing its expiration date, you will see a banner message in the server offering you upgrade options. If you need a bit more time to explore, you can extend your trial beyond the initial 30 days.

  • From within your trial server, select > Manage Hosted Server Account.
  • Click Extend.

Related Topics




LabKey Server trial in LabKey Cloud


Your LabKey Server trial provides a cloud-based environment where you can explore key features of the LabKey platform. Explore a preloaded example project, upload your own data, and try features only available in premium editions. Your trial lasts 30 days and we're ready to help you with next steps toward incorporating LabKey Server into your research projects.

Learn the Basics

Try these step-by-step topics to get started.

Explore More...

Premium Features

Your 30-day trial includes several premium features. See them in action prior to subscribing:

Data Analysis Options

Try built-in data analysis options, including:

Development with LabKey Server




Design Your Own Study


This topic is intended to be used with a LabKey Trial Server.

LabKey provides a customizable framework for longitudinal study data. A study tracks information about subjects over time. You can explore a read-only example here:

This topic assumes you have your own time series data about some study subjects, and helps you create a study where you can upload and analyze that data. To do so, you will use a trial version of LabKey Server. It takes a few minutes to set up and you can use it for 30 days.

Plan Your Study Framework

You may already have data you plan to upload. If not, we will provide example files to get you started.

Identify your Study Subjects

The subjects being studied, "participants" by default, can be humans, animals, trees, cells, or anything else you want to track.

Look at a dataset you plan to use. How are the subjects under study identified? What column holds the identifying information?

Decide How To Track Time

Date Based: Some studies track time using dates. The date of each data collection is entered, and the study is configured to break these into month-long timepoints, labeled "Month 1", "Month 2", etc.

Visit Based: You can also track time in a visit-by-visit manner. If the elapsed time between events is not relevant but you have a fixed sequence of visit events, a visit-based study may be appropriate. Visits are identified by sequence numbers.

Continuous: A continuous study is used when there is no strong concept of visits or fixed buckets of time. Data is entered and tracked by date without using timepoints.

Security

Identify your security requirements and how you plan to use the data in your study. Who will need access to what data? Will you want to be able to assign access 'per dataset' or is folder-level security sufficient?

When you select your Security Mode, use "basic" if folder-level security is sufficient and "custom" otherwise. Decide whether you want datasets to be editable by non-admins after import. Learn more about study security in this topic: Manage Study Security

Create Your Study

  • Once your trial server has been created, you will receive an email notification with a direct link to it.
  • Click the Create Project button on the home page.
  • Enter Name "Tutorials", leave the default folder type and click Next, then Next and then Finish to create the new project with all the default settings.
  • In the new project, click Create New Subfolder.
  • Enter the Name you want. In our example, it is named "My New Study".
  • Select the folder type Study and click Next.
  • Click Finish to create the folder.
  • Click Create Study.

  • Under Look and Feel Properties:
    • Study Label: Notice your folder name has the word "Study" added; you can edit to remove redundancy here if you like.
    • Subject Noun (Singular and Plural): If you want to use a word other than "Participant/Participants" in the user interface, enter other nouns here.
    • Subject Column Name: Look at the heading in your dataset for the column containing identifiers. Edit this field to match that column.

  • Under Visit/Timepoint Tracking:
    • Select the type of time tracking you will use: Dates, Assigned Visits, or Continuous.
    • If needed for your selection, enter Start Date and/or Default Timepoint Duration.

  • Under Security:
    • Select whether to use basic or custom security and whether to make datasets editable or read-only.
    • To maximize the configurability of this study, choose Custom security with editable datasets.

  • Click Create Study.
  • You will now be on the Manage tab of your own new study. This tab exists in the read-only exploration study we provided, but you cannot see it there. It is visible only to administrators.

Upload Your Data

Add Demographic Data

Begin with your demographic dataset. Every study is required to have one, and it contains one row per study participant (or subject). It does not have to be named "Demographics.xls", and you do not have to name your dataset "Demographics"; that is just the convention used in LabKey examples and tutorials.

  • On the Manage tab of your new study, click Manage Datasets.
  • Click Create New Dataset.
  • Give the dataset a short name (such as "Demographics"). This name does not have to match your filename.
  • Under Data Row Uniqueness, select Participants Only (demographic data).
  • Click the Fields section to open it.
  • Drag your demographic data spreadsheet into the Import or infer fields from file target area.
  • The column names and data types will be inferred. You can change types or columns as needed.
  • In the Column Mapping section, be sure that your mappings are as intended: Participant (or Subject) column and either "Date" for date-based or "Sequence Num" for visit-based studies.
    • Note: All your study data is assumed to have these two columns.
  • Notice that the Import data from this file upon dataset creation section is enabled, and preview the first few lines of your data.
  • Click Save to create and populate the dataset.
  • Click View Data to see it in a grid.

You can now click the Participants/Subjects tab in your study and see the list gleaned from your spreadsheet. Each participant has a built-in participant report that will contain (so far) their demographic data.

Add More Datasets

Repeat the above process for the other data from your study, except that under Data Row Uniqueness, non-demographic datasets use the default Participants and Visits. For example, you might have some sort of Lab Results for a set of subjects. A dataset like this has one row per participant and visit/date combination, which enables studying results over time.

For each data spreadsheet:
  • Select Manage > Manage Datasets > Create New Dataset.
  • Name each dataset, select the data row uniqueness, upload the file, confirm the fields, and import.

Once you have uploaded all your data, click the Clinical and Assay Data tab to see the list of datasets.

Join your Data

With as few as two datasets, one of them demographic, you can begin to integrate your data.

  • Click the Clinical and Assay Data tab.
  • Click the name of a non-demographic dataset, such as one containing "Lab Results". You will see your data.
  • Select (Grid Views) > Customize Grid.
  • You will see a Datasets node under Available Fields.
  • Click the (expand) icon to expand the node.
  • Click the (expand) icon for the Demographics dataset.
  • Check one or more boxes to add Demographics columns to your grid view.
  • Click View Grid to see the joined view. Click Save to save it either as the default or as a named grid view.

Learn more about creating joined grids in this topic: Customize Grid Views

Related Topics




Explore LabKey Biologics with a Trial


To get started using LabKey Biologics LIMS, you can request a trial instance of Biologics LIMS. Go here to tell us more about your needs and request your trial.

Trial instances contain some example data to help you explore using LabKey Biologics for your own research data. Your trial lasts 30 days and we're ready to help you understand how LabKey Biologics can work for you.

Biologics Tours

Tour key functionality of LabKey Biologics with your trial server following this topic: Introduction to LabKey Biologics

Documentation

Learn more in the documentation here:



Install LabKey for Evaluation


To get started using LabKey products, you can contact us to tell us more about your research and goals. You can request a customized demo so we can understand how best to meet your needs. In some cases we will encourage you to evaluate and explore with your own data using a custom trial instance. Options include:

LabKey Server Trial

LabKey Server Trial instances contain a core subset of features, and sample content to help get you started. Upload your own data, try tutorials, and even create a custom site tailored to your research and share it with colleagues. Your trial lasts 30 days and we're ready to help you with next steps toward incorporating LabKey Server into your research projects.

Start here: Explore LabKey Server with a trial in LabKey Cloud

Sample Manager Trial

Try the core features of LabKey Sample Manager using our example data and adding your own. Your trial lasts 30 days and we're ready to help you with next steps.

Start here: Get Started with Sample Manager

Biologics LIMS Trial

Try the core features of LabKey Biologics LIMS using our example data and tutorial walkthroughs. Your trial lasts 30 days and we're ready to help you with next steps toward incorporating LabKey Biologics into your work.

Start here: Explore LabKey Biologics with a Trial




Tutorials


Tutorials provide a "hands on" introduction to the core features of LabKey Server, giving step-by-step instructions for building solutions to common problems. They are listed roughly from simple to more complex, and you can pick and choose only those that interest you.

In order to run any tutorial, you will need:

If you are using a free trial of LabKey Server, this video will show you around:

This icon indicates whether the tutorial can be completed on a trial server.

New User Tutorials

Study Tutorials

Assay Tutorials

Tutorial: Import Experimental / Assay Data - Import, manage, and analyze assay data.
Tutorial: NAb Assay - Work with NAb experiment data from 96-well or 384-well plates.
Tutorial: ELISA Assay - Import and analyze ELISA experiment data.
Tutorial: ELISpot Assay - Import and analyze ELISpot experiment data.
Discovery Proteomics Tutorial - Storage and analysis for high-throughput proteomics and tandem mass spec experiments.
Tutorial: Import a Flow Workspace - Learn about using LabKey for management, analysis, and high-throughput processing of flow data.
Tutorial: Set Flow Background - Learn about setting metadata and using backgrounds with flow data.
Luminex Assay Tutorial Level I - Manage, quality control, analyze, share, integrate and export Luminex immunoassay results.
Luminex Assay Tutorial Level II - Use advanced features for quality control and analysis.
Expression Matrix Assay Tutorial - Try an example expression matrix assay.

Developer Tutorials

Biologics Tutorials




Set Up for Tutorials: Trial


This topic covers the quickest and easiest way to set up to run LabKey tutorials using a trial of LabKey Server. If you want to run tutorials not supported in the trial environment, or already have access to LabKey Server, see Set Up for Tutorials: Non-Trial.

Tutorials you can run using a free trial of LabKey Server are marked with this badge.

LabKey Trial Server

To run the LabKey Tutorials, you need three things:

1. An account on a running LabKey Server instance. After contacting us about your specific research needs and goals, we may set up a LabKey Trial Server for you:

2. A basic familiarity with navigation, folder creation, and utilities like web parts. Use this topic alongside your trial instance:

3. A tutorial workspace project where you are an administrator and can create new folders.

  • On the home page of your trial server, click Create.
  • Enter the Name "Tutorials".
    • If you plan to share this trial server with other users, consider using "Tutorials-Username" so you can each have your own workspace.
  • Leave the default folder type selection, "Collaboration," and click Next.
  • Leave the default permission selection, "My User Only," and click Next.
  • Skip the advanced settings and click Finish.
  • (Optional): To enhance this project, you can add some custom content, making it easier to use.

Finished

Congratulations, you can now begin running tutorials in this workspace on your trial server.

I'm Ready for the Tutorial List




Set Up for Tutorials: Non-Trial


This topic explains how to set up for the LabKey Tutorials on a non-trial instance of the server. If you have access to a LabKey trial server and the tutorial you want will work there, use this topic instead: Set Up for Tutorials: Trial.

To run any LabKey Tutorial you need:

  1. An account on a running LabKey Server instance.
  2. A tutorial workspace project on that server where you are an administrator and can create new folders.
  3. A basic familiarity with navigation, folder creation, and utilities like web parts.
If running tutorials on a trial instance of LabKey Server does not meet your needs, the other options for creating a tutorial workspace are:

Existing Installations

1. & 2. If your organization is already using LabKey Server, contact your administrator about obtaining administrator access to a project or folder you can use. For example, they might create and assign you a "Tutorials-username" project or subfolder. They will provide you with the account information for signing in and the URL of the location to use.

LabKey Server installations in active use may have specialized module sets or other customizations that cause the UI to look different from the tutorial instructions. It's also possible that you will not have the same level of access that you would have on a local demo installation.

3. Learn the navigation and UI basics in this topic:

The location given to you by your administrator is your tutorial workspace where you can create a subfolder for each tutorial that you run.

Full Local Installation

1. If none of the above are suitable, you may need to complete a full manual installation of LabKey Server. Detailed instructions are provided here:

2. You will be the site administrator on your own local server. To create a tutorials project:
  • Select (Admin) > Site > Create Project.
  • Enter the name "Tutorials" and choose folder type "Collaboration".
  • Accept all project wizard defaults.

3. Learn the navigation and UI basics in this topic:

Finished

Congratulations, you can now log in, navigate to your workspace, and begin running tutorials.

Learn More

I'm Ready for the Tutorial List




Navigation and UI Basics


Welcome to LabKey Server!

This topic helps you get started using LabKey Server, understanding the basics of navigation and the user interface.

If you are using a LabKey Trial Server, use this topic instead: LabKey Server trial in LabKey Cloud.

Projects and Folders

The project and folder hierarchy is like a directory tree and forms the basic organizing structure inside LabKey Server. Everything you create or configure in LabKey Server is located in some folder. Projects are the top level folders, with all the same behavior, plus some additional configuration options; they typically represent a separate team or research effort.

The Home project is a special project. It is the default landing page when users log in and cannot be deleted. You can customize the content here to suit your needs. To return to the home project at any time, click the LabKey logo in the upper left corner.

The project menu is on the left end of the menu bar and includes the display name of the current project.

Hover over the project menu to see the available projects, and folders within them. Click any project or folder name to navigate there.

Any project or folder with subfolders will show expand/collapse buttons for adjusting the list shown. If you are in a subfolder, there will be a clickable 'breadcrumb' trail at the top of the menu for quickly moving up the hierarchy. The menu will scroll when there are enough items, with the current location visible and expanded by default.

The project menu always displays the name of the current project, even when you are in a folder or subfolder. A link with the Folder Name is shown near the top of page views like the following, offering easy one-click return to the main page of the folder.

For more about projects, folders, and navigation, see Project and Folder Basics.

Tabs

Using tabs within a folder can give you new "pages" of user interface to help organize content. LabKey study folders use tabs as shown here:

When your browser window is too narrow to display tabs arrayed across the screen, they will be collapsed into a pulldown menu showing the current tab name and a (chevron). Click the name of the tab on this menu to navigate to it.

For more about adding and customizing tabs, see Use Tabs.

Web Parts

Web parts are user interface panels that can be shown on any folder page or tab. Each web part provides some type of interaction for users with underlying data or other content.

There is a main "wide" column on the left and a narrower column on the right. Each column supports a different set of web parts. By combining and reordering these web parts, an administrator can tailor the layout to the needs of the users.

To learn more, see Add Web Parts and Manage Web Parts. For a list of the types of web parts available in a full installation of LabKey Server, see the Web Part Inventory.

Header Menus and URLs

In the upper right, icon menus offer:

  • (Search): Click to open a site-wide search box.
  • (Admin/Settings): Administrative options, shown only to users granted such access. See Admin Console for details about the options available.
  • (User): Login and security options; help links to documentation and support forums.


Watch the URL at the top of the page as you navigate LabKey Server and explore features. Many elements are encoded in the URL, and programmatic access by building URLs is possible with APIs. Learn more here: LabKey URLs.
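
For illustration, the sketch below uses the LABKEY.ActionURL.buildURL helper from the LabKey JavaScript API to construct such a URL programmatically; the container path, schema, and query names are hypothetical examples.

// Build a URL for the "executeQuery" action of the "query" controller in a
// hypothetical "/Tutorials" container. The resulting string encodes the
// controller, action, container path, and parameters.
var url = LABKEY.ActionURL.buildURL(
    'query',          // controller
    'executeQuery',   // action
    '/Tutorials',     // container path (hypothetical)
    {schemaName: 'lists', 'query.queryName': 'Reagents'}  // parameters (hypothetical)
);
console.log(url);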

Security Model

LabKey Server has a group and role-based security model. Whether an individual is authorized to see a resource or perform an action is checked dynamically based on the groups they belong to and roles (permissions) granted to them. Learn more here: Security.

What's Next?




LabKey Server Editions


  • Community Edition: Free to download and use forever. Best suited for technical enthusiasts and evaluators in non-mission-critical environments. LabKey provides documentation and a community forum to help users support each other.
  • Premium Editions: Paid subscriptions that provide additional functionality to help teams optimize workflows, manage complex projects, and explore multi-dimensional data. Premium Editions also include professional support services for the long-term success of your informatics solutions.
For a complete list of features available in each LabKey Server Edition, see the LabKey Server Edition Comparison.

Topics




Training


Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.

User and Administrator Training

LabKey's user and administrator training course, LabKey Fundamentals, is included with Premium Editions of LabKey Server. It provides an introduction to the following topics:

  • LabKey Server Basics: Explains the basic anatomy/architecture of the server and its moving parts. It outlines the basic structures of folders and data containers, and the modules that process requests and craft responses. Best practices for configuring folders are included. The role of Administrators is also described.
  • Security: Describes LabKey Server's role-based security model and how to use it to protect your data resources. General folder-level security is described, as well as special security topics, such as dataset-level security and Protected Health Information (PHI) features. Practical security information is provided, such as how to set up user accounts, assign groups and roles, follow best practices, and test security configurations using impersonation.
  • Collaboration: Explains how to use the Wiki, Issues, and Messages modules. Branding and controlling the look-and-feel of your server are also covered.
  • Data Storage: Files and Database: Explains the two basic ways that LabKey Server can hold data: (1) as files and (2) as records in a database. Topics include: full-text search, converting tabular data files into database tables, special features of the LabKey database (such as 'lookups'), the role of SQL queries, adding other databases as external data sources.
  • Instrument Data: Explains how LabKey Server models and captures instrument-derived data, including how to create a new assay "design" from scratch, or how to use a prepared assay design. Special assay topics are covered, such as transform scripts, creating new assay design templates ("types") from simple configuration files, and how to replace the default assay user interface.
  • Clinical/Research Study Data Management: Explains how to integrate heterogeneous data, such as instrument, clinical, and demographic data, especially in the context of longitudinal/cohort studies.
  • Reports: Explains the various ways to craft reports on your data, including R reports, JavaScript reports, and built-in visualizations, such as Time Charts, Box Plots, and Scatter Plots.
  • Samples: Explains how to manage sample data and link it to assay and clinical data.
  • Advanced Topics: A high-level overview of how to extend LabKey Server. The Starter Edition includes support for users writing custom SQL, R scripts, and ETLs. The Professional Edition provides support for users extending LabKey Server with JavaScript/HTML client applications and user-created file modules.

Developer Training

LabKey's developer training is included in the Professional and Enterprise Editions of LabKey Server. It is tailored to your project's specific needs and can cover:

  • Server-to-server integrations
  • Client APIs
  • Assay transform scripts
  • Remote pipeline processing servers
  • Custom LabKey-based pipelines
  • Module development assistance

Related Topics

Premium Resources Available

Subscribers to premium editions of LabKey Server can get a head start with video and slide deck training resources available in this topic:


Learn more about premium editions




LabKey Server


LabKey Server is, at its core, an open-source software platform designed to help research organizations integrate, analyze, and share complex biomedical data. Adaptable to varying research protocols, analysis tools, and data sharing requirements, LabKey Server couples the flexibility of a custom solution with enterprise-level scalability to support scientific workflows.

Introduction

Solutions




Introduction to LabKey Server


This topic is for absolute beginners to LabKey Server. It explains what LabKey Server is for, how it works, and how to build solutions using its many features.

What is LabKey Server?

LabKey Server's features can be grouped into three main areas:

1. Data Repository

LabKey Server lets you bring data together from multiple sources into one repository. These sources can be physically separated in different systems, such as data in Excel spreadsheets, different databases, REDCap, etc. Or the data sources can be separated "morphologically", having different shapes. For example, patient questionnaires, instrument-derived assay data, medical histories, and sample inventories all have different data shapes, with different column names and different data types. LabKey Server can bring all of this data together to form one integrated whole that you can browse and analyze together.

2. Data Showcase

LabKey Server lets you securely present and highlight data over the web. You can present different profiles of your data to different audiences. One profile can be shown to the general public with no restrictions, while another profile can be privately shared with selected individual colleagues. LabKey Server lets you collaborate with geographically separated teams, or with your own internal team members. In short, LabKey Server lets you create different relationships between data and audiences, where some data is for general viewing, other data is for peer review, and yet other data is for group editing and development.

3. Electronic Laboratory

LabKey Server provides many options for analyzing and inquiring into data. Like a physical lab that inquires into materials and natural systems, LabKey Server makes data itself the object of inquiry. This side of LabKey Server helps you craft reports and visualizations, confirm hypotheses, and generally provide new insights into your data, insights that wouldn't be possible when the data is separated in different systems and invisible to other collaborators.

The LabKey Server Platform

LabKey Server is a software platform, as opposed to an application. Applications have fixed use cases targeted on a relatively narrow set of problems. As a platform, LabKey Server is different: it has no fixed use cases; instead, it provides a broad range of tools that you configure to build your own solutions. In this respect, LabKey Server is more like a car parts warehouse and not like any particular car. Building solutions with LabKey Server is like building new cars using the car parts provided. To build new solutions you assemble and connect different panels and analytic tools to create data dashboards and workflows.

The following illustration shows how LabKey Server takes in different varieties of data, transforms them into reports and insights, and presents them to different audiences.

How Does LabKey Server Work?

LabKey Server is a web server, and all web servers are request-response machines: they take in requests over the web (typically as URLs through a web browser) and then craft responses which are displayed to the user.

Modules

Modules are the main functionaries in the server. They interpret requests, craft responses, and contain all of the web parts and application logic. The responses can take many different forms:

  • a web page in a browser
  • an interactive grid of data
  • a report or visualization of underlying data
  • a file download
  • a long-running calculation or algorithm
LabKey Server uses a PostgreSQL database as its primary data store. You can attach other external databases to the server. Learn about external data sources here:

LabKey Server offers non-disruptive integration with your existing systems and workflows. You can keep your existing data systems in place, using LabKey Server to augment them, or you can use LabKey Server to replace your existing systems. For example, if you use REDCap to collect patient data and an external repository to hold medical histories, LabKey Server can synchronize and combine the data in these systems, so you can build a more complete picture of your research results without disrupting the workflows you have already built.

The illustration below shows the relationships between web browsers, LabKey Server, and the underlying databases. The modules shown are not a complete set; many other modules are included in LabKey Server.

User Interface

You configure your own user interface by adding panels, aka "web parts", each with a specific purpose in mind. Some example web parts:

  • The Wiki web part displays text and images to explain your research goals and provide context for your audience. (The topic you are reading right now is displayed in a Wiki web part.)
  • The Files web part provides an area to upload, download, and share files with colleagues.
  • The Query web part displays interactive grids of data.
  • The Report web part displays the results of an R- or JavaScript-based visualization.
Group web parts on separate tabs to form data dashboards.

The illustration below shows a data dashboard formed from tabs and web parts.

Folders and Projects

Folders are the "blank canvases" of LabKey Server, the workspaces where you organize dashboards and web parts. Folders are also important in terms of securing your data, since you grant access to audience members on a folder-by-folder basis. Projects are top level folders: they function like folders, but have a wider scope. Projects also form the center of configuration inside the server, since any setting made inside a project cascades into the sub-folders by default.

Security

LabKey uses "role-based" security to control who has access to data. You assign roles, or "powers", to each user who visits your server. Their role determines how much they can see and do with the data. The available roles include: Administrator (they can see and do everything), Editors, Readers, Submitters, and others. Security is very flexible in LabKey Server. Any security configuration you can imagine can be realized: whether you want only a few select individual to see your data, or if you want the whole world to see your data.

The server also has extensive audit logs built in. The audit logs record:

  • Who has logged in and when
  • Changes to a data record
  • Queries performed against the database
  • Server configuration changes
  • File upload and download events
  • And many other activities

The Basic Workflow: From Data Import to Reports

To build solutions with LabKey Server, follow this basic workflow: import or synchronize your data, apply analysis tools and build reports on top of the data, and finally share your results with different audiences. Along the way you will add different web parts and modules as needed. To learn the basic steps, start with the tutorials, which provide step-by-step instructions for using the basic building blocks available in the server.

Ready to See More?




Navigate the Server


This topic covers the basics of navigating your LabKey Server, and the projects and folders it contains, as a user without administrator permissions.

Main Header Menus

The project menu is on the left end of the menu bar and includes the display name of the current project.

In the upper right, icon menus offer:

  • (Search): Click to open a site-wide search box.
  • (Product): (Premium Feature) Click to switch between products available on your server.
  • (Admin): Administrative options available to users granted such access.
  • (User): Login and security options; context-sensitive help.

Product Selection Menu (Premium Feature)

LabKey offers several products, all of which may be integrated on a single Premium Edition of LabKey Server. When more than one application is installed on your server, you can use the menu to switch between them in a given container.

Learn more in this topic: Product Selection Menu

Project and Folder Menu

Hover over the project menu to see the available projects and folders within them. Note that only locations in which you have at least "Read" access are shown. If you have access to a subfolder, but not to the parent, the name of the parent folder or project will still be included on the menu for reference.

Any project or folder with subfolders will show expand/collapse buttons for adjusting the list shown. The menu will scroll when there are enough items, and the current location will be visible and expanded by default.

  • (Permalink URL): Click for a permalink to the current location.
  • Administrators have additional options shown in the bottom row of this menu.

Navigation

Click the name of any project or folder in the menu to navigate to that location. When you are in a folder or subfolder, you will see a "breadcrumb" path at the top of the projects menu, making it easy to step back up one or more levels. Note that if you have access to a subfolder, but not to one or more of its parent folders, the parent folder(s) will still be displayed in the menu (and on the breadcrumb path), but those locations will not be clickable links.

Notice that from within a folder or subfolder, the project menu still displays the name of the project. A link with the Folder Name is shown next to the title, offering easy one-click return to the main page of the folder.

Context Sensitive Help

Click the (User) icon in the upper right to open the user menu. Options for help include:

Related Topics




Data Basics


LabKey Server lets you explore, interrogate and present your biomedical research data online and interactively in a wide variety of ways. The topics in this section give you a broad overview of data basics.

Topics

An example online data grid:

A scatter plot visualization of the same data:

Related Topics




LabKey Data Structures


This topic is under construction for the 25.3 (March 2025) release. For the previous documentation of this feature, click here.

LabKey Server offers a wide variety of ways to store and organize data. Different data structure types offer specific features, which make them more or less suited for specific scenarios. You can think of these data structures as "table types", each designed to capture a different kind of research data.

This topic reviews the data structures available within LabKey Server, and offers guidance for choosing the appropriate structure for storing your data.

Where Should My Data Go?

The primary deciding factors when selecting a data structure will be the nature of the data being stored and how it will be used. Information about lab samples should likely be stored as a sample type. Information about participants/subjects/animals over time should be stored as datasets in a study folder. Less structured data may import into LabKey Server faster than highly constrained data, but integration may be more difficult. If you do not require extensive data integration or specialized tools, a more lightweight data structure, such as a list, may suit your needs.

The types of LabKey Server data structures appropriate for your work depend on the research scenarios you wish to support. As a few examples:

  • Management of Simple Tabular Data. Lists are a quick, flexible way to manage ordinary tables of data, such as lists of reagents.
  • Integration of Data by Time and Participant for Analysis. Study datasets support the collection, storage, integration, and analysis of information about participants or subjects over time.
  • Analysis of Complex Instrument Data. Assays help you to describe complex data received from instruments, generate standardized forms for data collection, and query, analyze and visualize collected data.
These structures are often used in combination. For example, a study may contain a joined view of a dataset and an assay with a lookup into a list for names of reagents used.

Universal Table Features

All LabKey data structures support the following features:

  • Interactive, Online Grids
  • Data Validation
  • Visualizations
  • SQL Queries
  • Lookup Fields
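
For example, all of these structures can be read programmatically with the LabKey JavaScript client API. The following is a minimal sketch assuming a list named "Reagents" exists in the current folder; the list name is a hypothetical example.

// Retrieve rows from a (hypothetical) list named "Reagents" in the current folder.
LABKEY.Query.selectRows({
    schemaName: 'lists',
    queryName: 'Reagents',
    success: function (data) {
        console.log('Retrieved ' + data.rows.length + ' rows');
    },
    failure: function (errorInfo) {
        console.error(errorInfo.exception);
    }
});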

Lists

Lists are the simplest and least constrained data type. They are generic, in the sense that the server does not make any assumptions about the kind of data they contain. Lists are not entirely freeform; they are still tabular data and have primary keys, but they do not require participant IDs or time/visit information. There are many ways to visualize and integrate list data, but some specific applications will require additional constraints.

List data can be imported in bulk as part of a TSV, or as part of a folder, study, or list archive. Lists also allow row-level insert/update/delete.
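
Row-level operations are also available through the JavaScript client API. A minimal sketch, assuming a hypothetical list named "Reagents" with Name and Supplier fields:

// Insert a single row into a (hypothetical) list named "Reagents".
LABKEY.Query.insertRows({
    schemaName: 'lists',
    queryName: 'Reagents',
    rows: [{Name: 'Tris Buffer', Supplier: 'Acme'}],  // hypothetical fields
    success: function (result) {
        console.log('Inserted ' + result.rows.length + ' row(s)');
    }
});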

Lists are scoped to a single folder and its child workbooks (if any).

Assays

Assays capture data from individual experiment runs, which usually correspond to an output file from some sort of instrument. Assays have an inherent batch-run-results hierarchy. They are more structured than lists, and support a variety of specialized structures to fit specific applications. Participant IDs and time information are required.

Specific assay types are available, which correspond to particular instruments and offer defaults specific to the given instrument. The results schema can range from a single, fixed table to many interrelated tables. All assay types allow administrators to configure fields at the run and batch level. Some assay types allow further customization at other levels. For instance, the Luminex assay type allows admins to customize fields at the analyte level and the results level. There is also a general purpose assay type, which allows administrators to completely customize the set of result fields.

Usually assay data is imported from a single data file at a time, into a corresponding run. Some assay types allow for API import as well, or have customized multi-file import pathways. Assay result data may also be integrated into a study by aligning participant and time information, or by sample ID.

Assay designs are scoped to the container in which they are defined. To share assay designs among folders or subfolders, define them in the parent folder or project, or to make them available site-wide, define them in the Shared project. Run and result data can be stored in any folder in which the design is in scope.

Datasets

Clinical Datasets are designed to capture the variable characteristics of an organism over time, like blood pressure, mood, weight, and cholesterol levels. Anything you measure at multiple points in time will fit well in a Clinical Dataset.

Datasets are always part of a study. They have two required fields:

  • ParticipantId (this name may vary) - Holds the unique identifier for the study subject.
  • Date or Visit (the name may vary) - Either a calendar date or a number.
There are different types of datasets with different cardinality, also known as data row uniqueness:
  • Demographic: Zero or one row for each subject. For example, each participant has only one enrollment date.
  • “Standard”/"Clinical": Can have multiple rows per subject, but zero or one row for each subject/timepoint combination. For example, each participant has exactly one weight measurement at each visit.
  • “Extra key”/"Assay": Can have multiple rows for each subject/timepoint combination, but have an additional field providing uniqueness of the subject/timepoint/arbitrary field combination. For example, many tests might be run on each blood sample collected for each participant at each visit.
Datasets have special abilities to automatically join/lookup to other study datasets based on the key fields, and to easily create intelligent visualizations based on these sorts of relationships.

A dataset can be backed by assay data that has been copied to the study. Behind the scenes, this consists of a dataset with rows that contain the primary key (typically the participant ID) of the assay result data, which is looked up dynamically.

Non-assay datasets can be imported in bulk (as part of a TSV paste or a study import), and can also be configured to allow row-level inserts/updates/deletes.

Datasets are typically scoped to a single study in a single folder. In some contexts, however, shared datasets can be defined at the project level and have rows associated with any of its subfolders.

Datasets have their own study security configuration, where groups are granted access to datasets separately from their permission to the folder itself. Permission to the folder (e.g., the Reader role) is a necessary prerequisite for dataset access, but is not necessarily sufficient.

A special type of dataset, the query snapshot, can be used to extract data from some other sources available in the server, and create a dataset from it. In some cases, the snapshot is automatically refreshed after edits have been made to the source of the data. Snapshots are persisted in a physical table in the database (they are not dynamically generated on demand), and as such they can help alleviate performance issues in some cases.

Custom Queries

A custom query is effectively a non-materialized view in a standard database. It consists of LabKey SQL, which is exposed as a separate, read-only query/table. Every time the data in a custom query is used, it will be re-queried from the database.

In order to run the query, the current user must have access to the underlying tables it is querying against.

Custom queries can be created through the web interface in the schema browser, or supplied as part of a module.
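
The same LabKey SQL that backs a custom query can be tried ad hoc through the JavaScript client API before saving it in the schema browser. A minimal sketch, assuming a hypothetical list named "Reagents" with a Supplier field:

// Run LabKey SQL against the "lists" schema; the table and column are hypothetical.
LABKEY.Query.executeSql({
    schemaName: 'lists',
    sql: 'SELECT Reagents.Supplier, COUNT(*) AS ReagentCount FROM Reagents GROUP BY Reagents.Supplier',
    success: function (data) {
        console.log(data.rows);
    }
});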

Sample Types

Sample types allow administrators to create multiple sets of samples in the same folder, each of which has a different set of customizable fields.

Sample types are created by pasting in a TSV of data and identifying one, two, or three fields that comprise the primary key. Subsequent updates can be made via TSV pasting (with options for how to handle samples that already exist in the set), or via row-level inserts/updates/deletes.

Sample types support the notion of one or more parent sample fields. When present, this data will be used to create an experiment run that links the parent and child samples to establish a derivation/lineage history. Samples can also have "parents" of other dataclasses, such as a "Laboratory" data class indicating where the sample was collected.

One sample type per folder can be marked as the “active” set. Its set of columns will be shown in Customize Grid when doing a lookup to a sample table. Downstream assay results can be linked to the originating sample type via a "Name" field; for details see Samples.

Sample types are resolved based on the name. The order of searching for the matching sample type is: the current folder, the current project, and then the Shared project. See Shared Project.

DataClasses

DataClasses can be used to capture complex lineage and derivation information, for example, the derivations used in bio-engineering systems. Examples include:

  • Reagents
  • Gene Sequences
  • Proteins
  • Protein Expression Systems
  • Vectors (used to deliver Gene Sequences into a cell)
  • Constructs (= Vectors + Gene Sequences)
  • Cell Lines
You can also use dataclasses to track the physical and biological Sources of samples.

Similarities with Sample Types

A DataClass is similar to a Sample Type or a List, in that it has a custom domain. DataClasses are built on top of the exp.Data table, much like Sample Types are built on the exp.Materials table. Using the analogy syntax:

SampleType : exp.Material :: DataClass : exp.Data

Rows from the various DataClass tables are automatically added to the exp.Data table, but only the Name and Description columns are represented in exp.Data. The various custom columns in the DataClass tables are not added to exp.Data. A similar behavior occurs with the various Sample Type tables and the exp.Materials table.

Also like Sample Types, every row in a DataClass table has a unique name, scoped across the current folder. Unique names can be provided (via a Name or other ID column) or generated using a naming pattern.

For more information, see Data Classes.

Domains

A domain is a collection of fields. Lists, Datasets, SampleTypes, DataClasses, and the Assay Batch, Run, and Result tables are backed by a LabKey-internal data type known as a Domain. A Domain has:

  • a name
  • a kind (e.g. "List" or "SampleType")
  • an ordered set of fields along with their properties.
Each Domain type provides specialized handling for the domains it defines. The number of domains defined by a data type varies; for example, Assays define multiple domains (batch, run, etc.), while Lists and Datasets define only one domain each.

The fields and properties of a Domain can be edited interactively using the field editor or programmatically using the JavaScript LABKEY.Domain APIs.
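
For example, the following minimal sketch reads a domain with the JavaScript API; it assumes the config-object form of LABKEY.Domain.get and a hypothetical list named "Reagents".

// Fetch the domain (field definitions) of a hypothetical list named "Reagents".
LABKEY.Domain.get({
    schemaName: 'lists',
    queryName: 'Reagents',
    success: function (domain) {
        // Each field object carries properties such as name and rangeURI.
        domain.fields.forEach(function (field) {
            console.log(field.name + ' (' + field.rangeURI + ')');
        });
    }
});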

Also see Modules: Domain Templates.

Domain/Data Structure Names

Data structures (Domains) like Sample Types, Source Types, Assay Designs, etc. must have unique names and avoid specific special characters, particularly if they are to be used in naming patterns or API calls. Names must follow these rules:

  • Must not be blank
  • Must start with a letter or a number character.
  • Must contain only valid Unicode characters (no control characters).
  • May not contain any of these characters:
    <>[]{};,`"~!@#$%^*=|?\
  • May not contain 'tab', 'new line', or 'return' characters.
  • May not contain space followed by dash followed by a character.
    • i.e. these are allowed: "a - b" or "a-b"
    • these are not allowed: "a -b"
For domains that support naming expressions (Sample Types, Sources), these special substitution strings are not allowed to be used as names:
AliquotedFrom
~DataInputs
DataInputs
Inputs
~MaterialInputs
MaterialInputs
batchRandomId
containerPath
contextPath
sampleCount
rootSampleCount
dailySampleCount
dataRegionName
genId
monthlySampleCount
now
queryName
randomId
schemaName
schemaPath
selectionKey
weeklySampleCount
withCounter
yearlySampleCount
folderPrefix

Names are not allowed to contain the following substrings. These are used as substitution operators internally:

:passThrough
:htmlEncode
:jsString
:urlEncode
:encodeURIComponent
:encodeURI
:first
:rest
:last
:trim
:date
:dailySampleCount
:weeklySampleCount
:yearlySampleCount
:monthlySampleCount
:defaultValue
:minValue
:number
:prefix
:suffix
:join
:withCounter

File Import Column Names (aka Parent/Source aliases):

  • Must not contain any of the following characters:
    /:<>$[]{};,`"~!@#$%^*=|?\
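
For illustration only, the sketch below consolidates a few of the rules above in JavaScript. It is an approximation, not the server's actual validation: the leading-character check is simplified to ASCII, and the reserved substitution strings listed above are not checked.

// Illustrative only: approximate check of several naming rules listed above.
function isValidDomainName(name) {
    if (!name || name.trim() === '') return false;               // must not be blank
    if (!/^[A-Za-z0-9]/.test(name)) return false;                // must start with a letter or number (ASCII simplification)
    if (/[<>\[\]{};,`"~!@#$%^*=|?\\]/.test(name)) return false;  // disallowed special characters
    if (/[\t\n\r]/.test(name)) return false;                     // no tab, new line, or return characters
    if (/ -\S/.test(name)) return false;                         // no space+dash+character
    return true;
}

console.log(isValidDomainName('Blood Samples'));  // true
console.log(isValidDomainName('Samples [2024]')); // false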

External Schemas

External schemas allow an administrator to expose the data in a "physical" database schema through the web interface, and programmatically via APIs. They assume that some external process has created the schemas and tables, and that the server has been configured to connect to the database.

Administrators have the option of exposing the data as read-only, or as insert/update/delete. The server will auto-populate standard fields like Modified, ModifiedBy, Created, CreatedBy, and Container for all rows that it inserts or updates. The standard bulk import options (TSV, etc.) are supported.

External schemas are scoped to a single folder. If an exposed table has a "Container" column, it will be filtered to only show rows whose values match the EntityId of the folder.

The server can connect to a variety of external databases, including Oracle, MySQL, SAS, Postgres, and SQLServer. The schemas can also be housed in the standard LabKey Server database.

The server does not support cross-database joins. It can do lookups (based on single-column foreign keys learned via JDBC metadata, or on XML metadata configuration), but only within a single database, regardless of whether it is the standard LabKey Server database or not.

Learn more in this topic:

Linked Schemas

Linked schemas allow you to expose data in a target folder that is backed by some other data in a different source folder. These linked schemas are always read-only.

This provides a mechanism for showing different subsets of the source data in a different folder, where the user might not have permission (or need) to see everything else available in the source folder.

The linked schema configuration, set up by an administrator, can include filters such that only a portion of the data in the source schema/table is exposed in the target.

Learn more in this topic:

Related Topics




Preparing Data for Import


This topic explains how to best prepare your data for import so you can meet any requirements set up by the target data structure.

LabKey Server provides a variety of different data structures for different uses: Assay Designs for capturing instrument data, Datasets for integrating heterogeneous clinical data, Lists for general tabular data, etc. Some of these data structures place strong constraints on the nature of the data to be imported, for example Datasets make uniqueness constraints on the data; other data structures, such as Lists, make few assumptions about incoming data.

Design Choices: Column Names and Field Types

Choose Column Data Types

When deciding how to import your data, consider the data type of columns to match your current and future needs. Considerations include:

  • Available Types: Review the list at the top of this topic.
  • Type Conversions are possible after data is entered, but only within certain compatibilities. See the table of available type changes here.
  • Number type notes:
    • Integer: A 4-byte signed integer that can hold values ranging from -2,147,483,648 to +2,147,483,647.
    • Decimal (Floating Point): An 8-byte double precision floating point number that can hold very large and very small values. Values can range approximately from 1E-307 to 1E+308 with a precision of at least 15 digits. As with most standard floating point representations, some values cannot be converted exactly and are stored as approximations. It is often helpful to set a display format on Decimal fields that specifies a fixed or maximum number of decimal places, to avoid displaying approximate values.

General Advice: Avoid Mixed Data Types in a Column

LabKey tables (Lists, Datasets, etc.) are implemented as database tables. So your data should be prepared for insertion into a database. Most importantly, each column should conform to a database data type, such as Text, Integer, Decimal, etc. Mixed data in a column will be rejected when you try to upload it.

Wrong

The following table mixes Boolean and String data in a single column.

ParticipantId | Preexisting Condition
P-100 | True, Edema
P-200 | False
P-300 | True, Anemia

Right

Split out the mixed data into separate columns:

ParticipantId | Preexisting Condition | Condition Name
P-100 | True | Edema
P-200 | False |
P-300 | True | Anemia

General Advice: Avoid Special Characters in Column Headers

Column names should avoid special characters such as !, @, #, $, etc. Column names should contain only letters, numbers, spaces, and underscores; and should begin only with a letter or underscore. We also recommend underscores instead of spaces.

Wrong

The following table has special characters in the column names.

Participant # | Preexisting Condition?
P-100 | True
P-200 | False
P-300 | True

Right

The following table removes the special characters and replaces spaces with underscores.

Participant_Number | Preexisting_Condition
P-100 | True
P-200 | False
P-300 | True

Data Column Aliasing

Use data column aliasing to work with non-conforming data, meaning the provided data has different column names or different value IDs for the same underlying thing. Examples include:

  • A lab provides assay data which uses different participant ids than those used in your study. Using different participant ids is often desirable and intentional, as it provides a layer of PHI protection for the lab and the study.
  • Excel files have different column names for the same data; for example, some files have the column "Immune Rating" and others have the column "Immune Score". You can define an arbitrary number of these import aliases to map to the same column in LabKey.
  • The source files have a variety of names for the same visit id, for example, "M1", "Milestone #1", and "Visit 1".

Import to Unrecognized Fields

When importing data, if there are unrecognized fields in your spreadsheet, meaning fields that are not included in the data structure definition, they will be ignored. In some situations, such as when importing sample data, you will see a warning banner explaining that this is happening:

Data Format Considerations

Handling Backslash Characters

If you have a TSV or CSV file that contains backslashes in any text field, the upload will likely not preserve the backslash, either substituting another value or dropping it completely. This occurs because the server treats backslashes as escape characters; for example, \n will insert a new line, \t will insert a tab, etc.

To import text fields that contain backslashes, wrap the values in quotes, either by manual edit or by changing the save settings in your editor. For example:

Test\123
should be:
"Test\123"

Note that this does not apply to importing Excel files. Excel imports are handled differently, and text data within each cell is processed as though it were quoted.

Clinical Dataset Details

Datasets are intended to capture measurement events on some subject, like a blood pressure measurement or a viral count, at some point in time. Datasets are therefore required to have two columns:

  • a subject id
  • a time point (either in the form of a date or a number)
Also, a subject cannot have two different blood pressure readings at a given point in time, so datasets reflect this fact by having uniqueness constraints: each record in a dataset must have a unique combination of subject ID plus time point.

Wrong

The following dataset has duplicate subject id / timepoint combinations.

ParticipantId | Date | SystolicBloodPressure
P-100 | 1/1/2000 | 120
P-100 | 1/1/2000 | 105
P-100 | 2/2/2000 | 110
P-200 | 1/1/2000 | 90
P-200 | 2/2/2000 | 95

Right

The following table removes the duplicate row.

ParticipantId | Date | SystolicBloodPressure
P-100 | 1/1/2000 | 120
P-100 | 2/2/2000 | 110
P-200 | 1/1/2000 | 90
P-200 | 2/2/2000 | 95

Demographic Dataset Details

Demographic datasets have all of the constraints of clinical datasets, plus one more: a given subject identifier cannot appear twice in a demographic dataset.

Wrong

The following demographic dataset has a duplicate subject id.

ParticipantId | Date | Gender
P-100 | 1/1/2000 | M
P-100 | 1/1/2000 | M
P-200 | 1/1/2000 | F
P-300 | 1/1/2000 | F
P-400 | 1/1/2000 | M

Right

The following table removes the duplicate row.

ParticipantId | Date | Gender
P-100 | 1/1/2000 | M
P-200 | 1/1/2000 | F
P-300 | 1/1/2000 | F
P-400 | 1/1/2000 | M

Date Parsing Considerations

Whether to parse user-entered or imported date values as Month-Day-Year (as typical in the U.S.) or Day-Month-Year (as typical outside the U.S.) is set at the site level. For example, 11/7/2020 is either July 11 or November 7; a value like 11/15/2020 is only valid as Month-Day-Year (US parsing).

If you attempt to import date values with the "wrong" format, they will be interpreted as strings, so you will see errors similar to "Could not convert value '11/15/20' (String) for Timestamp field 'FieldName'."

If you need finer grained control of the parsing pattern, such as to include specific delimiters or other formatting, you can specify additional parsing patterns at the site, project, or folder level.

Use Import Templates

For the most reliable method of importing data, first obtain a template for the data you are importing. Most data structures will include a Download Template button when you select any bulk import method, such as importing from a file.

Use the downloaded template as a basis for your import file. It will include all possible columns and will exclude unnecessary ones. You may not need to populate every column of the template when you import data.


Premium Feature Available

Subscribers to Sample Manager, LabKey LIMS, and Biologics LIMS have access to download templates in even more places. Learn more in this Sample Manager documentation topic:


Learn more about Sample Manager here

Import Options: Add, Update and Merge Data

When bulk importing data (via > Import bulk data) the default Import Option is Add rows, for adding new rows only. If you include data for existing rows, the import will fail.

To update data for existing rows, select the option to Update rows. Note that update is not supported for Lists with auto-incrementing integer keys or Datasets with system-managed third keys.

If you want to merge existing data updates with new rows, select Update rows, then check the box to Allow new rows during update. Note that merge is not supported for Lists with auto-incrementing integer keys or Datasets with system-managed third keys.

Learn more about updating and merging data in these topics:

Data Import Previews

In some contexts, such as creating a list definition and populating it at the same time, or importing samples from a file into Sample Manager or LabKey Biologics, you will see a few lines "previewing" the data you are importing.

These data previews shown in the user interface do not apply field formatting from either the source spreadsheet or the destination data structure.

In particular, when you are importing Date and DateTime fields, they are always previewed in ISO format (yyyy-MM-dd hh:mm) regardless of source or destination formatting. The Excel format setting is used to infer that this is a date column, but is not carried into the previewer or the imported data.

For example, if you are looking at a spreadsheet in Excel, you may see the value with specific date formatting applied, but this is not how Excel actually stores the date value in the field. In the image below, the same 'DrawDate' value is shown with different Excel formatting applied. Date values are a numeric offset from a built-in start date. To see the underlying value stored, you can view the cell in Excel with 'General' formatting applied (as shown in the fourth line of the image below), though note that 'General' formatted cells will not be interpreted as date fields by LabKey. LabKey uses any 'Date' formatting to determine that the field is of type 'DateTime', but then all date values are shown in ISO format in the data previewer, i.e. here "2022-03-10 00:00".
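For instance, in Excel's default 1900-based date system, the date 2022-03-10 is stored as the serial number 44630; that number, not the formatted string, is what the cell actually contains.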

After import, display format settings on LabKey Server will be used to display the value as you intend.

OOXML Documents Not Supported

Excel files saved in the "Strict Open XML Spreadsheet" format have the .xlsx extension; however, this format is not supported by POI, the library LabKey uses for parsing .xlsx files. When you attempt to import this format into LabKey, you will see the error:

There was a problem loading the data file. Unable to open file as an Excel document. "Strict Open XML Spreadsheet" versions of .xlsx files are not supported.

As an alternative, open such files in Excel and save as the ordinary "Excel Workbook" format.

Learn more about this lack of support in this Apache issue.

Related Topics




Field Editor


This topic is under construction for the 25.3 (March 2025) release. For the previous documentation of this feature, click here.

This topic covers using the Field Editor, the user interface for defining and editing the fields that represent columns in a grid of data. The set of fields, also known as the schema or "domain", describes the shape of data. Each field, or column, has a main field type and can also have various properties and settings to control display and behavior of the data in that column.

The field editor is a common tool used by many different data structures in LabKey. The method for opening the field editor varies by data structure, as can the types of field available. Specific details about fields of each data type are covered in the topic: Field Types and Properties.

Topics:

Open the Field Editor

How you open the field editor depends on the type of data structure you are editing.

Data Structure      | Open for Editing Fields
List                | Open the list and click Design in the grid header
Dataset             | In the grid header, click Manage, then Edit Definition, then the Fields section
Assay Design        | Select Manage Assay Design > Edit Assay Design, then click the relevant Fields section
Sample Type         | Click the name of the type and then Edit Type
Data Class          | Click the name of the class and then Edit
Study Properties    | Go to the Manage tab and click Edit Additional Properties
Specimen Properties | Under Specimen Repository Settings, click Edit Specimen/Vial/Specimen Event Fields
User Properties     | Go to > Site > Site Users and click Change User Properties
Issue Definitions   | Viewing the issue list, click Admin
Query Metadata      | Go to > Go To Module > Query, select the desired schema and query, then click Edit Metadata

If you are learning to use the Field Editor and do not yet have a data structure to edit, you can get started by creating a simple list as shown in the walkthrough of this topic.

  • In a folder where you can practice new features, select > Manage Lists.
  • Click Create New List.
  • Give the list a name and click the Fields section to open it.

Default System Fields for Sample Types and Data Classes

Both Sample Types and Data Classes show Default System Fields at the top of the Fields panel. Other data types do not show this section.

  • Some system fields, such as Name, cannot be disabled and are always required. Checkboxes are inactive when you cannot edit them.
  • Other fields, including the MaterialExpDate (Expiration Date) column for Sample Types, can be adjusted using checkboxes, similar to the Description field for Data Classes, shown below.
  • Enabled: Uncheck the box in this column to disable a field. This does not prevent the field from being created, and the name is still reserved, but it will not be shown to users.
    • Note that if you disable a field, the data in it is not deleted; you could later re-enable the field and recover any past data.
  • Required: Check this box to make a field required.

Click the collapse icon to close this section and move on to any Custom Fields, as for any data type.

Create Custom Fields

You can create custom fields either by importing (or inferring) them from a file or by defining them manually, as described below. Both options are available when the set of fields is empty. Once you have defined some fields by either method, the manual editor is the only option for adding more fields.

Import or Infer Fields from File

  • When the set of fields is empty, you can import (or infer) new fields from a file.
    • The range of file formats supported depends on the data structure. Some can infer fields from a data file; all support import from a JSON file that contains only the field definitions.
    • When a JSON file is imported, all valid properties for the given fields will be applied. See below.
  • Click to select or drag and drop a file of an accepted format into the panel.
  • The fields inferred from the file will be shown in the manual field editor, where you may fine tune them or save them as is.
    • Note that if your file includes columns for reserved fields, they will not be shown as inferred. Reserved field names vary by data structure and will always be created for you.

Manually Define Fields

  • After clicking Manually Define Fields, the panel will change to show the manual field editor. In some data structures, you'll see a banner about selecting a key or associating with samples, but for this walkthrough of field editing, these options are ignored.
  • Depending on the type of data structure, and whether you started by importing some fields, you may see a first "blank" field ready to be defined.
  • If not, click Add Field to add one.
  • Give the field a Name. If you enter a field name with a space in it, or other special character, you will see a warning. It is best practice to use only letters, numbers and underscores in the actual name of a field. You can use the Label property to define how to show this field name to users.
    • SQL queries, R scripts, and other code are easiest to write when field names contain only letters, numbers, and underscores, and start with a letter or underscore.
    • If you include a dot . in your field name, you may see unexpected behavior since that syntax is also used as a separator for describing a lookup field. Specifically, participant views will not show values for fields where the name includes a .
  • Use the drop down menu to select the Data Type.
    • The data types available vary based on the data structure. Learn which structures support which types here.
    • Each data type can have a different collection of properties you can set.
    • Once you have saved fields, you can only make limited changes to the type.
  • Use the Required checkbox if you want to require that the field have a value in every row.

Edit Fields

To edit fields, reopen the editor and make the changes you need. If you attempt to navigate away with unsaved changes you will have the opportunity to save or discard them. When you are finished making changes, click Save.

Once you have saved a field or set of fields, you can change the name and most options and other settings. However, you can only make limited changes to the type of a field. Learn about specific type changes in this section.

Rearrange Fields

To change field order, drag and drop the rows using the six-block handle on the left.

Delete Fields

To remove one field, click the remove icon on the right. It is available in both the collapsed and expanded view of each field and turns red when you hover.

To delete one or more fields at once, use the selection checkboxes to select the fields you want to delete. You can use the box at the top of the column to select all fields; once any are selected, you will also see a Clear button. Click Delete to delete the selected fields.

You will be asked to confirm the deletion and reminded that all data in a deleted field will be deleted as well. Deleting a field cannot be undone.

Click Save when finished.

Edit Field Properties and Options

Each field has a data type and can have additional properties defined. The properties available vary based on the field type. Learn more in this topic: Field Types and Properties

To open the properties for a field, click the expand icon (it will become a handle for closing the panel). For example, the panel for a text field looks like:

Fields of all types have Name and Linking Options, described below.

Most field types also have a section of Type-specific Field Options. The details of these options, as well as the specific kinds of conditional formatting and validation available for each field type, are covered in the topic: Field Types and Properties.

Name and Linking Options

All types of fields allow you to set the following properties:

  • Description: An optional text description. This will appear in the hover text for the field you define. XML schema name: description.
  • Label: Different text to display in column headers for the field. This label may contain spaces. The default label is the Field Name with camelCasing indicating separate words. For example, the field "firstName" would by default be labelled "First Name". If you wanted to show the user "Given Name" for this field instead, you would add that string in the Label field.
  • Import Aliases: Define alternate field names to be used when importing from a file to this field. This option offers additional flexibility of recognizing an arbitrary number of source data names to map to the same column of data in LabKey. Multiple aliases may be separated by spaces or commas. To define an alias that contains spaces, use double-quotes (") around it.
  • URL: Use this property to change the display of the field value within a data grid into a link. Multiple formats are supported, which allow ways to easily substitute and link to other locations in LabKey. The ${ } syntax may be used to substitute another field's value into the URL. Learn more about using URL Formatting Options.
  • Ontology Concept: (Premium Feature) In premium editions of LabKey Server, you can specify an ontology concept this field represents. Learn more in this topic: Concept Annotations

Conditional Formatting and Validation Options

Most field types offer conditional formatting. String-based fields offer regular expression validation. Number-based fields offer range expression validation. Learn about options supported for each type of field here.

Note that conditional formatting is not supported in Sample Manager, unless used with a Premium Edition of LabKey Server. The LabKey LIMS and Biologics LIMS applications both support conditional formats.

Create Conditional Format Criteria

Conditional formats change how data is displayed depending on the value of the data. Learn more in this topic:

To add a conditional format for a field where it is supported:
  • Click Add Format to open the conditional format editor popup.
  • Specify one or two Filter Type and Filter Value pairs.
  • Select Display Options for how to show fields that meet the formatting criteria:
    • Bold
    • Italic
    • Strikethrough
    • Text/Fill Colors: Choose from the picker (or type into the #000000 area) to specify a color to use for either or both the text and fill.
    • You will see a cell of Preview Text for a quick check of how the colors will look.
  • Add an additional format to the same field by clicking Add Formatting. A second panel will be added to the popup.
  • When you are finished defining your conditional formats, click Apply.

Create Regular Expression Validator

  • Click Add Regex to open the popup.
  • Enter the Regular Expression that this field's value will be evaluated against.
    • All regular expressions must be compatible with Java regular expressions as implemented in the Pattern class.
    • You can test your expression using a regex interpreter, such as https://regex101.com/.
  • Description: Optional description.
  • Error Message: Enter the error message to be shown to the user when the value fails this validation.
  • Check the box for Fail validation when pattern matches field value to reverse the validation: with this box unchecked (the default), values that do not match the pattern fail validation; with it checked, values that match the pattern fail.
  • Name: Enter a name to identify this validator.
  • You can use Add Regex Validator to add a second validator. The first panel will close and show the validator name you gave. You can reopen that panel using the (pencil) icon.
  • Click Apply when your regex validators for this field are complete.
  • Click Save.
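For example (a hypothetical field and pattern, shown only as a sketch), a "BadgeId" field required to contain two uppercase letters, a hyphen, and four digits could use this Java-compatible regular expression, with the reverse-validation box left unchecked so that non-matching values fail:

^[A-Z]{2}-\d{4}$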

Create Range Expression Validator

  • Click Add Range to open the popup.
  • Enter the First Condition that this field's value will be evaluated against. Select a comparison operator and enter a value.
  • Optionally enter a Second Condition.
  • Description: Optional description.
  • Error Message: Enter the error message to be shown to the user when the value fails this validation.
  • Name: Enter a name to identify this validator.
  • You can use Add Range Validator to add a second validator. The first panel will close and show the validator name you gave. You can reopen that panel using the (pencil) icon.
  • Click Apply when your range validators for this field are complete.
  • Click Save.
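For example (hypothetical values), a body temperature field might use a First Condition of "Is Greater Than or Equal To" 30 and a Second Condition of "Is Less Than or Equal To" 45, with an error message such as "Temperature must be between 30 and 45 degrees C."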

Advanced Settings for Fields

Open the editing panel for any field and click Advanced Settings to access even more options:

  • Display Options: Use the checkboxes to control how and in which contexts this field will be available.
    • Show field on default view of the grid
    • Show on update form when updating a single row of data
    • Show on insert form when inserting a single row of data
    • Show on details page for a single row

  • Default Value Options: Automatically supply default values when a user is entering information. Not available for Sample Type fields or for Assay Result fields.
    • Default Type: How the default value for the field is determined. Options:
      • Last entered: (Default) If a default value is provided (see below), it will be entered and editable for the user's first use of the form. During subsequent uploads, the user will see their last entered value.
      • Editable default: An editable default value will be entered for the user. The default value will be the same for every user for every upload.
      • Fixed value: Provides a fixed default value that cannot be edited.
    • Default Value: Click Set Default Values to set default values for all the fields in this section.
      • Default values are scoped to the individual folder. In cases where field definitions are shared, you can manually edit the URL to set default values in the specific project or folder.
      • Note that setting default values will navigate you to a new page. Save any other edits to fields before leaving the field editor.

  • Miscellaneous Options:
    • PHI Level: Use the drop down to set the Protected Health Information (PHI) level of data in this field. Note that setting a field as containing PHI does not automatically provide any protection of that data.
    • Exclude from "Participant Date Shifting" on export/publication: (Date fields only) If the option to shift/randomize participant date information is selected during study folder export or publishing of a study, do not shift the dates in this field. Use caution when protecting PHI to ensure you do not exempt fields you intend to shift.
    • Make this field available as a measure: Check the box for fields that contain data to be used for charting and other analysis. These are typically numeric results. Learn more about using Measures and Dimensions for analysis.
    • Make this field available as a dimension: (Not available for Date fields) Check the box for fields to be used as 'categories' in a chart. Dimensions define logical groupings of measures and are typically non-numerical, but may include numeric type fields. Learn more about using Measures and Dimensions for analysis.
    • Make this field a recommended variable: Check the box to indicate that this is an important variable. These variables will be displayed as recommended when creating new participant reports.
    • Track reason for missing data values: Check this box to enable the field to hold special values to indicate data that has failed review or was originally missing. Administrators can set custom Missing Value indicators at the site and folder levels. Learn more about using Missing Value Indicators.
    • Require all values to be unique: Check this box to add a uniqueness constraint to this field. You can only set this property if the data currently in the field is unique. Supported for Lists, Datasets, Sample Types, and Data Classes.

View Fields in Summary Mode

In the upper right of the field editor, the Mode selection lets you choose:

  • Detail: You can open individual field panels to edit details.
  • Summary: A summary of set values is shown; limited editing is possible in this mode.

In Summary Mode, you see a grid of fields and properties, and a simplified interface. Scroll for more columns. Instead of having to expand panels to see things like whether there is a URL or formatting associated with a given field, the summary grid makes it easier to scan and search large sets of fields at once.

You can add new fields, delete selected fields, and export fields while in summary mode.

Export Sets of Fields (Domains)

Once you have defined a set of fields (domain) that you want to be able to save or reuse, you can export some or all of the fields by clicking (Export). If you have selected a subset of fields, only the selected fields will be exported. If you have not selected fields (or have selected all fields) all fields will be included in the export.

A Fields_*.fields.json file describing your fields as a set of key/value pairs will be downloaded. All properties that can be set for a field in the user interface will be included in the exported file contents. For example, the basic text field named "FirstField" we show above looks like:

[
  {
    "conditionalFormats": [],
    "defaultValueType": "FIXED_EDITABLE",
    "dimension": false,
    "excludeFromShifting": false,
    "hidden": false,
    "lookupContainer": null,
    "lookupQuery": null,
    "lookupSchema": null,
    "measure": false,
    "mvEnabled": false,
    "name": "FirstField",
    "propertyValidators": [],
    "recommendedVariable": false,
    "required": false,
    "scale": 4000,
    "shownInDetailsView": true,
    "shownInInsertView": true,
    "shownInUpdateView": true,
    "isPrimaryKey": false,
    "lockType": "NotLocked"
  }
]

You can save this file to import a matching set of fields elsewhere, or edit it offline to reimport an adjusted set of fields.

For example, you might use this process to create a dataset in which all fields were marked as measures. Instead of having to open each field's advanced properties in the user interface, you could find and replace the "measure" setting in the JSON file and then create the dataset with all fields set as measures.
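A minimal sketch of that edit, using the "FirstField" export shown above: in each field object, change

"measure": false,

to

"measure": true,

then import the edited .fields.json file when creating the new dataset.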

Note that importing fields from a JSON file is only supported when creating a new set of fields. You cannot apply property settings to existing data with this process.

Related Topics




Field Types and Properties


The set of fields that make up a data structure like a list, dataset, assay, etc. can be edited using the Field Editor interface. You will find instructions about using the field editor and using properties, options, and settings common to all field types in the topic: Field Editor

This topic outlines the field formatting and properties specific to each data type.

Field Types Available by Data Structure

Fields come in different types, each intended to hold a different kind of data. Once defined, there are only limited ways you can change a field's type, based on the ability to convert existing data to the new type. To change a field type, you may need to delete and recreate it, reimporting any data.

The following table shows which fields are available in which kind of table/data structure. Notice that Datasets do not support Attachment fields. For a workaround technique, see Linking Data Records to Image Files.

Field Type                      | Dataset         | List | Sample Type | Assay Design | Data Class
Text (String)                   | Yes             | Yes  | Yes         | Yes          | Yes
Text Choice                     | Yes             | Yes  | Yes         | Yes          | Yes
Multi-Line Text                 | Yes             | Yes  | Yes         | Yes          | Yes
Boolean                         | Yes             | Yes  | Yes         | Yes          | Yes
Integer                         | Yes             | Yes  | Yes         | Yes          | Yes
Decimal (Floating Point)        | Yes             | Yes  | Yes         | Yes          | Yes
DateTime                        | Yes             | Yes  | Yes         | Yes          | Yes
Date                            | Yes             | Yes  | Yes         | Yes          | Yes
Time                            | Yes             | Yes  | Yes         | Yes          | Yes
Calculation                     | Yes             | Yes  | Yes         | Yes          | Yes
Flag                            | Yes             | Yes  | Yes         | Yes          | Yes
File                            | Yes             | No   | Yes         | Yes          | No
Attachment                      | No (workaround) | Yes  | No          | No           | Yes
User                            | Yes             | Yes  | Yes         | Yes          | Yes
Subject/Participant (String)    | Yes             | Yes  | Yes         | Yes          | Yes
Lookup                          | Yes             | Yes  | Yes         | Yes          | Yes
Sample                          | Yes             | Yes  | Yes         | Yes          | Yes
Ontology Lookup                 | Yes             | Yes  | Yes         | Yes          | Yes
Visit Date/Visit ID/Visit Label | No              | No   | Yes         | No           | No
Unique ID                       | No              | No   | Yes         | No           | No

The SMILES field type is only available in the Compounds data class when using LabKey Biologics LIMS.

Changes Allowed by Field Type

Once defined, there are only limited ways you can change a field's data type, based on the ability to safely convert existing data to the new type. This table lists the changes supported (not all types are available in all data structures and contexts):

Current Type                 | May Be Changed To:
Text (String)                | Text Choice, Multi-Line Text, Flag, Lookup, Ontology Lookup, Subject/Participant
Text Choice                  | Text, Multi-Line Text, Flag, Lookup, Ontology Lookup, Subject/Participant
Multi-Line Text              | Text, Text Choice, Flag, Subject/Participant
Boolean                      | Text, Multi-Line Text
Integer                      | Decimal, Lookup, Text, Multi-Line Text, Sample, User
Decimal (Floating Point)     | Text, Multi-Line Text
DateTime                     | Text, Multi-Line Text, Visit Date, Date (the time portion is dropped), Time (the date portion is dropped)
Date                         | Text, Multi-Line Text, Visit Date, DateTime
Time                         | Text, Multi-Line Text
Calculation                  | Cannot be changed
Flag                         | Text, Text Choice, Multi-Line Text, Lookup, Ontology Lookup, Subject/Participant
File                         | Text, Multi-Line Text
Attachment                   | Text, Multi-Line Text
User                         | Integer, Decimal, Lookup, Text, Multi-Line Text, Sample
Subject/Participant (String) | Flag, Lookup, Text, Text Choice, Multi-Line Text, Ontology Lookup
Lookup                       | Text, Multi-Line Text, Integer, Sample, User
Sample                       | Text, Multi-Line Text, Integer, Decimal, Lookup, User
Ontology Lookup              | Text, Text Choice, Multi-Line Text, Flag, Lookup, Subject/Participant
Visit Date                   | Text, Multi-Line Text, DateTime
Visit ID                     | Text, Multi-Line Text, Decimal
Visit Label                  | Text, Text Choice, Multi-Line Text, Lookup, Ontology Lookup, Subject/Participant
Unique ID                    | Flag, Lookup, Multi-Line Text, Ontology Lookup, Subject/Participant, Text, Text Choice

If you cannot make your desired type change using the field editor, you may need to delete and recreate the field entirely, reimporting any data.

Validators Available by Field Type

This table summarizes which formatting and validators are available for each type of field.

Field Type                      | Conditional Formatting | Regex Validators | Range Validators
Text (String)                   | Yes                    | Yes              | No
Text Choice                     | Yes                    | No               | No
Multi-Line Text                 | Yes                    | Yes              | No
Boolean                         | Yes                    | No               | No
Integer                         | Yes                    | No               | Yes
Decimal (Floating Point)        | Yes                    | No               | Yes
DateTime                        | Yes                    | No               | Yes
Date                            | Yes                    | No               | Yes
Time                            | Yes                    | No               | No
Calculation                     | Yes                    | No               | No
Flag                            | Yes                    | Yes              | No
File                            | Yes                    | No               | No
Attachment                      | Yes                    | No               | No
User                            | Yes                    | No               | Yes
Subject/Participant (String)    | Yes                    | Yes              | No
Lookup                          | Yes                    | No               | Yes
Sample                          | Yes                    | No               | No
Ontology Lookup                 | Yes                    | Yes              | No
Visit Date/Visit ID/Visit Label | Yes                    | No               | No
Unique ID                       | Yes                    | No               | No

Type-Specific Properties and Options

The basic properties and validation available for different types of fields are covered in the main field editor topic. Details for specific types of fields are covered here.

Text, Multi-Line Text, and Flag Options

Fields of type Text, Multi-Line Text, and Flag have the same set of properties and formats available:

When setting the Maximum Text Length, consider that when you make a field UNLIMITED, it is stored in the database as 'text' instead of the more common 'varchar(N)'. When the length of a string exceeds a certain amount, the text is stored outside the row itself, giving the illusion of infinite capacity. Because of the differences between 'text' and 'varchar' columns, these are some good general guidelines:
  • UNLIMITED is good if:
    • You need to store large text (a paragraph or more)
    • You don't need to index the column
    • You will not join on the contents of this column
    • Examples: blog comments, wiki pages, etc.
  • No longer than... (i.e. not-UNLIMITED) is good if:
    • You're storing smaller strings
    • You'll want them indexed, i.e. you plan to search on string values
    • You want to do a SQL select or join on the text in this column
    • Examples: usernames, filenames, etc.

Text Choice Options

A Text Choice field lets you define a set of values that will be presented to the user as a dropdown list. This is similar to a lookup field, but does not require a separate list created outside the field editor. Click Add Values to open a panel where you can add the choices for this field.

Learn more about defining and using Text Choice fields in this topic: Text Choice Fields.

Note that because administrators can always see all the values offered for a Text Choice field, these fields are not a good choice for storing PHI or other sensitive information. Consider instead using a lookup to a protected list.

Boolean Options

  • Boolean Field Options: Format for Boolean Values: Use boolean formatting to specify the text to show when a value is true and false. Text can optionally be shown for null values. For example, "Yes;No;Blank" would output "Yes" if the value is true, "No" if false, and "Blank" for a null value.
  • Name and Linking Options
  • Conditional Formatting and Validation Options: Conditional formats are available.

Integer and Decimal Options

Integer and Decimal fields share similar number formatting options, shown below. When considering which type to choose, keep in mind the following behavior:

  • Integer: A 4-byte signed integer that can hold values ranging -2,147,483,648 to +2,147,483,647.
  • Decimal (Floating Point): An 8-byte double precision floating point number that can hold very large and very small values. Values can range approximately 1E-307 to 1E+308 with a precision of at least 15 digits. As with most standard floating point representations, some values cannot be converted exactly and are stored as approximations. It is often helpful to set a display format on Decimal fields specifying a fixed or maximum number of decimal places, to avoid displaying approximate values.
Both Integer and Decimal columns have these formatting options available:

Date, Time, and Date Time Options

Three different field types are available for many types of data structure, letting you choose how best to represent the data needed.

  • Date Time: Both date and time are included in the field. Fields of this type can be changed to either "Date-only" or "Time-only" fields, though this change will drop the data in the other part of the stored value.
  • Date: Only the date is included. Fields of this type can be changed to be "Date Time" fields.
  • Time: Only the time portion is represented. Fields of this type cannot be changed to be either "Date" or "Date Time" fields.
  • Date and Time Options:
    • Use Default: Check the box to use the default format set for this folder.
    • Format for Date Times: Uncheck the default box to enable a drop down where you can select the specific format to use for dates/times in this field. Learn more about using Date and Time formats in LabKey.
  • Name and Linking Options
  • Conditional Formatting and Validation Options: Conditional formats are available for all three types of date time fields. Range validators are available for "Date" and "Date Time" but not for "Time" fields.

Calculation (Premium Feature)

Calculation fields are available with the Professional and Enterprise Editions of LabKey Server, the Professional Edition of Sample Manager, LabKey LIMS, and Biologics LIMS.

A calculation field lets you include SQL expressions using values in other fields in the same row to provide calculated values. The Expression provided must be valid LabKey SQL and can use the default system fields, custom fields, constants, operators, and functions. Examples:

Operation                                                  | Example
Addition                                                   | numericField1 + numericField2
Subtraction                                                | numericField1 - numericField2
Multiplication                                             | numericField1 * numericField2
Division by a value known never to be zero                 | numericField1 / nonZeroField1
Division by a value that might be zero                     | CASE WHEN numericField2 <> 0 THEN (numericField1 / numericField2 * 100) ELSE NULL END
Subtraction of dates/datetimes (ex: difference in days)    | TIMESTAMPDIFF('SQL_TSI_DAY', CURDATE(), MaterialExpDate)
Relative date (ex: 1 week later)                           | TIMESTAMPADD('SQL_TSI_WEEK', 1, dateField)
Conditional calculation based on another field             | CASE WHEN FreezeThawCount < 2 THEN 'Viable' ELSE 'Questionable' END
Conditional calculation based on a text match              | CASE WHEN ColorField = 'Blue' THEN 'Abnormal' ELSE 'Normal' END
Text value for every row (ex: to use with a URL property)  | 'clickMe'
Text concatenation (use fields and/or strings)             | City || ', ' || State

Once you've provided the expression, use Click to validate to confirm that your expression is valid.

The data type of your expression will be calculated automatically. In some cases, you may want to use casts to influence this type determination. For example, if you divide two integer values, the result will also be of type integer, which can yield unexpected, apparently "truncated" results.
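For example (illustrative field names), casting one operand to DOUBLE makes the division produce a decimal result instead of a truncated integer:

CAST(numericField1 AS DOUBLE) / numericField2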

Once the type of the calculation is determined, you'll see additional formatting options for the calculated type of the field; shown above are the date and time options.

File and Attachment Options

File and Attachment fields are only available in specific scenarios, and have display, thumbnail, and storage differences. For both types of field, you can configure the behavior when a link is clicked, so that the file or attachment either downloads or is shown in the browser.


  • File
    • The File field type is only available for certain types of table, including datasets, assay designs, and sample types.
    • When a file has been uploaded into this field, it displays a link to the file; for image files, an inline thumbnail is shown.
    • The uploaded file is stored in the file repository, in the assaydata folder in the case of an assay.
    • For Standard Assays, the File field presents special behavior for image files; for details see Linking Assays with Images and Other Files.
    • If you are using a pipeline override, note that it will override any file root setting, so the "file repository" in this case will be under your pipeline override location.
  • Attachment
    • The Attachment field type is similar to File, but only available for lists and data classes (including Sources for samples).
    • This type allows you to attach documents to individual records in a list or data class.
    • For instance, an image could be associated with a given row of data in an attachment field, and would show an inline thumbnail.
    • The attachment is not uploaded into the file repository, but is stored as a BLOB field in the database.
    • By default, the maximum attachment size is 50MB, but this can be changed in the Admin Console using the setting Maximum file size, in bytes, to allow in database BLOBs. See Site Settings.
Learn about using File and Attachment fields in Sample Manager and LabKey Biologics in this topic:

Choose Download or View in Browser for Files and Attachments

For both File and Attachment fields, you can set the behavior when the link is clicked from within the LabKey Server interface. Choose Show in Browser or Download.

Note that this setting will not control the behavior of attachments and files in Sample Manager or Biologics LIMS.

Inline Thumbnails for Files and Attachments

When a field of type File or Attachment is an image, such as a .png or .jpg file, the cell in the data grid will display a thumbnail of the image. Hovering reveals a larger version.

When you export a grid containing these inline images to Excel, the thumbnails remain associated with the cell itself.

Bulk Import into the File Field Type

You can bulk import data into the File field type in LabKey Server, provided that the files/images are already uploaded to the File Repository, or to the pipeline override location if one is set. For example, suppose you already have a set of images in the File Repository, as shown below.

You can load these images into a File field, if you refer to the images by their full server path in the File Repository. For example, the following shows how an assay upload might refer to these images by their full server path:

ImageName | ImageFile
10001.png | http://localhost:8080/labkey/_webdav/Tutorials/List%20Tutorial/%40files/NIMH/Images/10001.png
10002.png | http://localhost:8080/labkey/_webdav/Tutorials/List%20Tutorial/%40files/NIMH/Images/10002.png
10003.png | http://localhost:8080/labkey/_webdav/Tutorials/List%20Tutorial/%40files/NIMH/Images/10003.png
10004.png | http://localhost:8080/labkey/_webdav/Tutorials/List%20Tutorial/%40files/NIMH/Images/10004.png

On import, the Assay grid will display the image thumbnail as shown below:

User Options

Fields of this type point to registered users of the LabKey Server system, found in the table core.Users and scoped to the current container (folder).

Subject/Participant Options

This field type is only available for Study datasets and for Sample Types that will be linked to study data. The Subject/Participant ID is a concept URI, containing metadata about the field. It is used in assay and study folders to identify the subject ID field. There is no special built-in behavior associated with this type. It is treated as a string field, without the formatting options available for text fields.

Lookup Options

You can populate a field with data via lookup into another table. This is similar to the text choice field, but offers additional flexibility and longer lists of options.

Open the details panel and select the folder, schema, and table where the data values will be found. Users adding data for this field will see a dropdown populated with that list of values. Typing ahead will scroll the list to the matching value. When the number of available values exceeds 10,000, the field will be shown as a text entry field.

Use the checkbox to control whether the user entered value will need to match an existing value in the lookup target.

  • Lookup Definition Options:
    • Select the Target Folder, Schema, and Table from which to look up the value. Once selected, the value will appear in the top row of the field description as a direct link to the looked-up table.
    • Lookup Validator: Ensure Value Exists in Lookup Target. Check the box to require that any value is present in the lookup's target table or query.
  • Name and Linking Options
  • Conditional Formatting Options: Conditional formats are available.
A lookup operates as a foreign key (<fk>) in the XML schema generated for the data structure. An example of the XML generated:
<fk>
  <fkDbSchema>lists</fkDbSchema>
  <fkTable>Languages</fkTable>
  <fkColumnName>LanguageId</fkColumnName>
</fk>

Note that lookups into lists with auto-incrementing keys may not export/import properly because the rowIds are likely to be different in every database.

Learn more about Lookup fields in this topic:

Sample Options

  • Sample Options: Select where to look up samples for this field.
    • Note that this lookup will only be able to reference samples in the current container.
    • You can choose All Samples to reference any sample in the container, or select a specific sample type to filter by.
    • This selection will be used to validate and link incoming data, populate lists for data entry, etc.
  • Name and Linking Options
  • Conditional Formatting Options: Conditional formats are available.

Ontology Lookup Options (Premium Feature)

When the Ontology module is loaded, the Ontology Lookup field type connects user input with preferred vocabulary lookup into loaded ontologies.

Visit Date/Visit ID/Visit Label Options

Integration of Sample Types with Study data is supported using the Visit Date, Visit ID, and Visit Label field types, which provide time alignment, in conjunction with a Subject/Participant field providing participant alignment. Learn more in this topic: Link Sample Data to Study

Unique ID Options (Premium Feature)

A field of type "Unique ID" is read-only and used to house barcode values generated by LabKey for Samples. Learn more in this topic: Barcode Fields

Related Topics




Text Choice Fields


A Text Choice field lets you define a set of values that will be presented to the user as a dropdown list. This is similar to a lookup field, but does not require a separate list created outside the field editor. Using such controlled vocabularies can both simplify user entry and ensure more consistent data.

This topic covers the specifics of using the Text Choice Options for the field. Details about the linking and conditional formatting of a text choice field are provided in the shared field property documentation.

Create and Populate a Text Choice Field

Open the field editor for your data structure, and locate or add a field of type Text Choice.

Click Add Values to open a panel where you can add the choices for this field. Choices may be multi-word.

Enter each value on a new line and click Apply.
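For example, a hypothetical "Severity" field might offer these choices:

Low
Medium
High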

Manage Text Choices

Once a text choice field contains a set of values, the field summary panel will show at least the first few values for quick reference.

Expand the field details to see and edit this list.

  • Add Values lets you add more options.
  • Delete a value by selecting, then clicking Delete.
  • Select a value in the text choice options to edit it. For example, you might change the spelling or phrasing without needing to edit existing rows.
  • Click Apply to save your change to this value. You will see a message indicating updates to existing rows will be made.
  • Click Save for the data structure when finished editing text choices.

In-Use and Locked Text Choices

Any values that are already in use in the data will be marked with an icon indicating that they cannot be deleted, though the text of the value may still be edited.

Values that are in use in any read-only data (i.e. assay data that cannot be edited) will be marked with an icon indicating that they cannot be deleted or edited.

PHI Considerations

When a text choice field is marked as PHI, the usual export and publish study options will be applied to data in the field. However, users who are able to access the field editor will be able to see all of the text choices available, though without any row associations. If you are also using the compliance features letting you hide PHI from unauthorized users, this could create unintended visibility of values that should be obscured.

For more complete protection of protected health information, use a lookup to a protected list instead.

Use a Text Choice Field

When a user inserts into or edits the value of the Text Choice field, they can select one of the choices, or leave it blank if it is not marked as a required field. Typing ahead will narrow longer lists of choices. Entering free text is not supported. New choices can only be added within the field definition.

Change Between Text and Text Choice Field Types

Fields of the Text and Text Choice types can be switched to the other type without loss of data. Such changes have a few behaviors of note:

  • If you change a Text Choice field to Text, you will lose any values on the list of options for the field that are not in use.
  • If you change a Text field to Text Choice, all distinct values in that column will be added to the values list for the field. This option creates a handy shortcut for when you are creating new data structures including text choice fields.

Importing Text Choices: A Shortcut

If you have an existing spreadsheet of data (such as for a list or dataset) and want a given field to become a text choice field, you have two options.

  • The more difficult option is to create the data structure with the field of type Text Choice, copy and paste all the distinct values from the dataset into the drop-down options for the field, then import your data.
  • An easier option is to first create the structure and import the data selecting Text as the type for the field you want to be a dropdown. Once imported, edit the data structure to change the type of that field to Text Choice. All distinct values from the column will be added to the list of value options for you.

Related Topics




URL Field Property


Setting the URL property of a field in a data grid turns the display value into a link to other content. The URL property setting is the target address of the link. The URL property supports different options for defining relative or absolute links, and also supports using values from other fields in constructing the URL.

Link Format Types

Several link format types for URL property are supported: local, relative, and full path links on the server as well as external links. Learn more about the context, controller, action, and path elements of LabKey URLs in this topic: LabKey URLs.

Local Links

To link to content in the current LabKey folder, use the controller and action name, but no path information.

<controller>-<action>

For example, to view a page in a local wiki:

wiki-page.view?name=${PageName}

A key advantage of this format is that the list or query containing the URL property can be moved or copied to another folder with the target wiki page, in this case, and it will still continue to work correctly.

You can optionally prepend a / (slash) or ./ (dot-slash), but they are not necessary. You could also choose to format with the controller and action separated by a / (slash). The following format is equivalent to the above:

./wiki/page.view?name=${PageName}

Relative Links

To point to resources in subfolders of the current folder, prepend the local link format with path information, using the . (dot) for the current location:

./<subfoldername/><controller>-<action>

For example, to link to a page in a subfolder, use:

./${SubFolder}/wiki-page.view?name=${PageName}

You can also use .. (two dots) to link to the parent folder, enabling path syntax like "../siblingfolder/resource" to refer to a sibling folder. Use caution when creating complex relative URL paths, as they make references harder to follow if resources move. A full path may be clearer when linking to a distant location.

Full Path Links on the Same Server

A full path link points to a resource on the current LabKey Server, useful for:

  • linking common resources in shared team folders
  • when the URL is a WebDAV link to a file that has been uploaded to the current server
The local path begins with a / (forward slash).

For example, if a page is in a subfolder of "myresources" under the home folder, the path might be:

/home/myresources/subfolder/wiki-page.view?name=${Name}

External Links

To link to a resource on an external server or any website, include the full URL link. If the external location is another labkey server, the same path, controller, and action path information will apply:

http://server/path/page.html?id=${Param}

Substitution Syntax

If you would like to have a data grid contain a link including an element from elsewhere in the row, the ${ } substitution syntax may be used. This syntax inserts a named field's value into the URL. For example, in a set of experimental data where one column contains a Gene Symbol, a researcher might wish to quickly compare her results with the information in The Gene Ontology. Generating a URL for the GO website with the Gene Symbol as a parameter will give the researcher an efficient way to "click through" from any row to the correct gene.

An example URL (in this case for the BRCA gene) might look like:

http://amigo.geneontology.org/amigo/search/ontology?q=brca

Since the search parameter value (q above) is the only part of the URL that changes in each row of the table, the researcher can set the URL property on the GeneSymbol field to use a substitution marker like this:

http://amigo.geneontology.org/amigo/search/ontology?q=${GeneSymbol}

Once defined, the researcher can simply click "BRCA" in that column to follow the URL with the search parameter applied.

Multiple such substitution markers can be used in a single URL property string, and the field referenced by the markers can be any field within the query.
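For example (an illustrative external URL with hypothetical field names), two markers can be combined in a single link address:

http://server/search?gene=${GeneSymbol}&species=${Species}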

Substitutions are allowed in any part of the URL, either in the main path, or in the query string. For example, here are two different formats for creating links to an article in wikipedia, here using a "CompanyName" field value:

  • as part of the path: https://en.wikipedia.org/wiki/${CompanyName}
  • as a parameter value: https://en.wikipedia.org/w/index.php?title=${CompanyName}

Built-in Substitution Markers

The following substitutions markers are built-in and available for any query/dataset. They help you determine the context of the current query.

Marker            | Description | Example Value
${schemaName}     | The schema where the current query lives. | study
${schemaPath}     | The schema path of the current query. | assay.General.MyAssayDesign
${queryName}      | The name of the current query. | Physical Exam
${dataRegionName} | The data region for the current query. | Dataset
${containerPath}  | The LabKey Server folder path, starting with the project. | /home/myfolderpath
${contextPath}    | The Tomcat context path. | /labkey
${selectionKey}   | Unique string used by selection APIs as a key when storing or retrieving the selected items for a grid. | $study$Physical Exam$$Dataset

Link Display Text

The display text of the link created from a URL property is just the value of the current record in the field which contains the URL property. So in the Gene Ontology example, since the URL property is defined on the Gene_Symbol field, the gene symbol serves as both the text of the link and the value of the search_query parameter in the link address. In many cases you may want to have a constant display text for the link on every row. This text could indicate where the link goes, which would be especially useful if you want multiple such links on each row.

In the example above, suppose the researcher wants to be able to look up the gene symbol in both Gene Ontology and EntrezGene. Rather than defining the URL Property on the Gene_Symbol field itself, it would be easier to understand if two new fields were added to the query, with the value in the fields being the same for every record, namely "[GO]" and "[Entrez]". Then set the URL property on these two new fields to

for the GOlink field:

http://amigo.geneontology.org/cgi-bin/amigo/search.cgi?search_query=${Gene_Symbol}&action=new-search

for the Entrezlink field:

http://www.ncbi.nlm.nih.gov/gene/?term=${Gene_Symbol}

The resulting query grid will look like:

Note that if the two new columns are added to the underlying list, dataset, or schema table directly, the link text values would need to be entered for every existing record. Changing the link text would also be tedious. A better approach is to wrap the list in a query that adds the two fields as constant expressions. For this example, the query might look like:

SELECT TestResults.SampleID,
       TestResults.TestRun,
       TestResults.Gene_Symbol,
       TestResults.ResultValueN,
       '[GO]' AS GOlink,
       '[Entrez]' AS Entrezlink
FROM TestResults

Then in the Edit Metadata page of the Schema Browser, set the URL properties on these query expression fields:
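A sketch of what that metadata XML might look like (the table name here is illustrative, reusing the fields from the query above):

<tables xmlns="http://labkey.org/data/xml">
  <table tableName="TestResultsWithLinks" tableDbType="NOT_IN_DB">
    <columns>
      <column columnName="GOlink">
        <url>http://amigo.geneontology.org/amigo/search/ontology?q=${Gene_Symbol}</url>
      </column>
      <column columnName="Entrezlink">
        <url>http://www.ncbi.nlm.nih.gov/gene/?term=${Gene_Symbol}</url>
      </column>
    </columns>
  </table>
</tables>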

URL Encoding Options

You can specify the type of URL encoding for a substitution marker, in case the default behavior doesn't work for the URLs you need. This flexibility makes it possible to have one column display the text while a second column contains the entire href value, or only part of it.

The fields referenced by ${ } substitution markers might contain any sort of text, including special characters such as question marks, equal signs, and ampersands. If these values were copied straight into the link address, the resulting address would be interpreted incorrectly. To avoid this problem, LabKey Server encodes text values before copying them into the URL; in encoding, characters such as ? are replaced by their character code, %3F. By default, LabKey encodes all special character values except '/' from substitution markers. If you know that a field referenced by a substitution marker needs no encoding (perhaps because it has already been encoded) or needs different encoding rules, you can specify encoding options inside the ${ } syntax, as described in the topic String Expression Format Functions.

Links Without the URL Property

If the data field value contains an entire url starting with an address type designator (http:, https:, etc), then the field value is displayed as a link with the entire value as both the address and the display text. This special case could be useful for queries where the query author could create a URL as an expression column. There is no control over the display text when creating URLs this way.

Linking To Other Tables

To link two tables, so that records in one table link to filtered views of the other, start with a filtered grid view of the target table, filtering on the target fields of interest. For example, the following URL filters on the fields "WellLocation" and "WellType":

/mystudy/study-dataset.view?datasetId=5018&Dataset.WellLocation~eq=AA&Dataset.WellType~eq=XX

Parameterize by adding substitution markers within the filter. For example, assume that source and target tables have identical field names, "WellLocation" and "WellType":

/mystudy/study-dataset.view?datasetId=5018&Dataset.WellLocation~eq=${WellLocation}&Dataset.WellType~eq=${WellType}

Finally, set the parameterized URL as the URL property of the appropriate column in the source table.

Related Topics

For an example of UI usage, see Step 3: Add a URL Property, which includes an interactive example: hover over a link in the Department column to see the URL, and click to view a list filtered to display the "technicians" in that department.

For examples of SQL metadata XML usage, see: JavaScript API Demo Summary Report and the JavaScript API Tutorial.




Conditional Formats


This topic is under construction for the 25.3 (March 2025) release. For the previous documentation of this feature, click here.
Conditional formats change how data is displayed depending on the value of the data. For example, if temperature goes above a certain value, you can highlight those values using orange. If the value is below a certain level those could be blue. Bold, italic, and strikethrough text can also be used.

Conditional formats are available in LabKey Server, LabKey LIMS, and Biologics LIMS. They are defined as properties of fields using the Field Editor.

Specify a Conditional Format

To specify a conditional format, open the field editor, and click to expand the field of interest. Under Create Conditional Format Criteria, click Add Format.

In the popup, identify the condition(s) under which you want the conditional format applied. Specifying a condition is similar to specifying a filter. You need to include a First Condition. If you specify a second one, both will be AND-ed together to determine whether a single conditional format is displayed.

Only the value that is being formatted is available for the condition checks. That is, you cannot use the value in column A to apply a conditional format to column B.

Next, you can specify Display Options, meaning how the field should be formatted when that condition is met.

Display options are:

  • Bold
  • Italic
  • Strikethrough
  • Colors: Select Text and/or Fill colors. Click a block to choose it, or type to enter a hex value or RGB values. You'll see a box of preview text on the right.

Click Apply to close the popup, then Save. When you view the table, you'll see your formatting applied.

Multiple Conditional Formats

Multiple conditional formats are supported in a single column. Before applying the format, you can click Add Formatting to add another. Once you have saved an active format, use Edit Formats to reopen the popup and click Add Formatting to specify another conditional format. This additional condition can have a different type of display applied.

Each format you define will be in a panel within the popup and can be edited separately.

If a value fulfills multiple conditions, then the first condition satisfied is applied, and conditions lower on the list are ignored.

For example, suppose you have specified two conditional formats on one field:

  • If the value is 40 degrees or greater, then display in bold text.
  • If the value is 38 degrees or greater, then display in italic text.
Although the value 40 fulfills both conditions, only the first condition it satisfies is applied, resulting in bold display.

Example: Conditional Formats for Temperature

In the following example, values out of the normal human body temperature range are highlighted with color if too high and shown in italics if too low. In this example, we use the Physical Exam dataset that is included with the importable example study.

  • In a grid view of the Physical Exam dataset, click Manage.
  • Click Edit Definition.
  • Select a field (such as temperature in this example), expand it, and click Add Format under "Create Conditional Format Criteria".
    • For First Condition, choose "Is Greater Than", enter 37.8.
    • Check Bold.
    • From the Fill Color drop down, choose orange for this example.
    • This format option is shown above.
  • Click Add Formatting in the popup before clicking Apply.
    • For First Condition of this second format, choose "Is Less Than", enter 36.1.
    • Check the box for Italics.
    • Choose a blue Fill Color.
  • Click Apply, then Save.
  • Click View Data to return to the data grid.

Now temperature values above 37.8 degrees are in bold on orange cells and those below 36.1 are displayed in italics with a blue background.

When you hover over a formatted value in the LabKey Server interface, a pop up dialog will appear explaining the rule behind the format. Note that these popups are not available in the LIMS applications.

View Conditional Formats in LIMS Applications

In LabKey LIMS and Biologics LIMS, you use the same mechanism to set formatting as described above. Instead of filling the entire cell, background colors are displayed as a lozenge as shown here.

The conditional formatting is also shown in individual details views for an entity:

Related Topics




String Expression Format Functions


Reference

The following string formatters can be used when building URLs, or creating naming patterns for samples, sources and members of other DataClasses.

Name | Synonym | Input Type | Description | Example

General
defaultValue(string) |  | any | Use the string argument value as the replacement value if the token is not present or is the empty string. | ${field:defaultValue('missing')}
passThrough | none | any | Don't perform any formatting. | ${field:passThrough}

URL Encoding
encodeURI | uri | string | URL encode all special characters except ',/?:@&=+$#', like JavaScript encodeURI(). | ${field:encodeURI}
encodeURIComponent | uricomponent | string | URL encode all special characters, like JavaScript encodeURIComponent(). | ${field:encodeURIComponent}
htmlEncode | html | string | HTML encode. | ${field:htmlEncode}
jsString |  | string | Escape carriage return, linefeed, and <>"' characters and surround with single quotes. | ${field:jsString}
urlEncode | path | string | URL encode each path part, preserving the path separator. | ${field:urlEncode}

String
join(string) |  | collection | Combine a collection of values together, separated by the string argument. | ${field:join('/'):encodeURI}
prefix(string) |  | string, collection | Prepend the string argument if the value is non-null and non-empty. | ${field:prefix('-')}
suffix(string) |  | string, collection | Append the string argument if the value is non-null and non-empty. | ${field:suffix('-')}
trim |  | string | Remove any leading or trailing whitespace. | ${field:trim}

Date
date(string) |  | date | Format a date using a format string or one of the constants from Java's DateTimeFormatter. If no format value is provided, the default format is 'BASIC_ISO_DATE'. | ${field:date}, ${field:date('yyyy-MM-dd')}

Number
number |  | format | Format a number using Java's DecimalFormat. | ${field:number('0000')}

Array
first |  | collection | Take the first value from a collection. | ${field:first:defaultValue('X')}
rest |  | collection | Drop the first item from a collection. | ${field:rest:join('_')}
last |  | collection | Drop all items from the collection except the last. | ${field:last:suffix('!')}

Examples

Function | Applied to... | Result
${Column1:defaultValue('MissingValue')} | null | MissingValue
${Array1:join('/')} | [apple, orange, pear] | apple/orange/pear
${Array1:first} | [apple, orange, pear] | apple
${Array1:first:defaultValue('X')} | [(null), orange, pear] | X
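
Formatters can also be chained, as in naming patterns for samples and sources. For instance, the following pattern is a hypothetical sketch; "Lot" and "CollectionDate" are assumed field names, and it combines two of the functions in the reference table above:

S-${Lot:defaultValue('X')}-${CollectionDate:date('yyyyMMdd')}

A sample collected on August 14, 2024 with a blank Lot field would be named "S-X-20240814".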



Date & Number Display Formats


LabKey Server provides flexible display formatting for dates, times, datetimes, and numbers, so you can control how these values are shown to users. Set up formats that apply to the entire site, an entire project, a single folder, or even just to an individual field in one table.

Note that display formatting described in this topic is different from date and time parsing, which determines how the server interprets date/time strings.

Overview

You can customize how dates, times, datetimes, and numbers are displayed on a field-by-field basis, or set these formats at the folder, project, or site level. The server decides which format to use for a particular field by looking first at the properties for that field. If no display format is set at the field level, it checks the container tree, starting with the folder and moving up the folder hierarchy to the site level. In detail, the decision process goes as follows:

  • The server checks to see if there is a field-level format set on the field itself. If it finds a field-specific format, it uses that format. If no format is found, it looks to the folder-level format. (To set a field-specific format, see Set Formats on a Per-Field Basis.)
  • If a folder-level format is found, it uses that format. If no folder-level format is found, it looks in the parent folder, then that parent's parent folder, etc. until the project level is reached and it looks there. (To set a folder-level default format, see Set Folder Display Formats)
  • If a project-level format is found, it uses that format. If no project-level format is found, it looks to the site-level format. (To set a project-level default format, see Set Project Display Formats.)
  • To set the site-level format, see Set Formats Globally (Site-Level). Note that if no site-level format is specified, the server will default to these formats:
    • Date format: yyyy-MM-dd
    • Time format: HH:mm
Date, time, and date-time formats are selected from a set of built-in options. A standard Java format string specifies how numbers are displayed.

Set Site-Wide Display Formats

An admin can set formats at the site level by managing look and feel settings.

  • Select (Admin) > Site > Admin Console.
  • Under Configuration, click Look and Feel Settings.
  • Scroll down to Customize date, time, and number display formats.

Set Project Display Formats

An admin can standardize display formats at the project level so that values display consistently throughout the project; these settings need not match the site-wide defaults.

  • Navigate to the target project.
  • Select (Admin) > Folder > Project Settings.
  • On the Properties tab, scroll down to Customize date, time, and number display formats.
  • Check the "Inherited" checkbox to use the site-wide default, or uncheck to set project-specific formats:
    • Select desired formats for date, date-time, and time-only fields.
    • Enter desired format for number fields.
  • Click Save.

Set Folder Display Formats

An admin can standardize display formats at the folder level so that values display consistently throughout the folder; these settings need not match either project or site settings.

  • Navigate to the target folder.
  • Select (Admin) > Folder > Management.
  • Click the Formats tab.
  • Check the "Inherited" checkbox to use the project-wide format, or uncheck to set folder-specific formats:
    • Select desired formats for date, date-time, and time-only fields.
    • Enter desired format for number fields.
  • Click Save.

Set Formats on a Per-Field Basis

To set a format for an individual field, edit the properties of that field.

  • Open the data fields for editing; the method depends on the type. See: Field Editor.
  • Expand the field you want to edit.
  • Date, time, and datetime fields include a Use Default checkbox; when you uncheck it, you can use the Format for Date Times selectors to choose among supported date and time formats for this field alone.
  • Number fields include a Format for Numbers box. Enter the desired format string.
  • Click Save.

Date, Time, and DateTime Display Formats

Date, Time, and DateTime display formats are selected from a set of standard options, giving you flexibility for how users will see these values. DateTime fields combine one of each format, with the option of choosing "<none>" as the Time portion.

Date formats available:

Format Selected | Display Result
yyyy-MM-dd | 2024-08-14
yyyy-MMM-dd | 2024-Aug-14
yyyy-MM | 2024-08
dd-MM-yyyy | 14-08-2024
dd-MMM-yyyy | 14-Aug-2024
dd-MMM-yy | 14-Aug-24
ddMMMyyyy | 14Aug2024
ddMMMyy | 14Aug24
MM/dd/yyyy | 08/14/2024
MM-dd-yyyy | 08-14-2024
MMMM dd yyyy | August 14 2024

Time formats available:

Format Selected | Display Result
HH:mm:ss | 13:45:15
HH:mm | 13:45
HH:mm:ss.SSS | 13:45:15.000
hh:mm a | 01:45 PM

Number Format Strings

Format strings for Number (Double) fields must be compatible with the format accepted by the Java class DecimalFormat. A valid DecimalFormat is a pattern specifying a prefix, numeric part, and suffix. For more information, see the Java documentation. The following table provides an abbreviated guide to pattern symbols:

Symbol | Location | Localized? | Meaning
0 | Number | Yes | Digit
# | Number | Yes | Digit, zero shows as absent
. | Number | Yes | Decimal separator or monetary decimal separator
- | Number | Yes | Minus sign
, | Number | Yes | Grouping separator
E | Number | Yes | Exponent for scientific notation, for example: 0.#E0

Examples

The following examples apply to Number (Double) fields and show the value 85 displayed with various format strings.

Format String | Display Result
<no string> | 85.0
0 | 85
0000 | 0085
.00 | 85.00
000.000 | 085.000
000,000 | 000,085
-000,000 | -000,085
0.#E0 | 8.5E1

Related Topics




Lookup Columns


By combining data from multiple tables in one grid view, you can create integrated grids and visualizations with no duplication of data. Lookup Columns give you the ability to link data from two different tables by reference. Another name for this type of connection is a "foreign key".

Once a lookup column pulls in data from another table, you can then display values from any column in that target (or "looked up") table. You can also take advantage of a lookup to simplify user data entry by constraining entered values for a column to a fixed set of values in another list.

Set Up a Lookup Field

Suppose you want to display values from the Languages list, such as translator information, alongside other data from the Demographics dataset. You would add a lookup column to the Demographics dataset that uses values from the Languages list.

To join these tables, an administrator adds a lookup column to the Demographics dataset definition using this interface:

  • Go to the dataset or list where you want to show the data from the other source. Here, Demographics.
  • Click Manage, then Edit Definition.
  • Click Fields.
  • Expand the field you want to be a lookup (here "language").
    • If the field you want doesn't already exist, click Add Field to add it.
  • From the dropdown under Data Type, select Lookup.
  • In the Lookup Definition Options, select the Target Folder, Schema, and Table to use. For example, the lists schema and the Languages table, as shown below.
  • Scroll down and click Save.
  • In either the expanded or collapsed view, you can click the name of the target of the lookup to open it directly.
  • The lookup column is now available for grid views and SQL queries on the Demographics table.

This can also be accomplished using a SQL query, as described in Lookups: SQL Syntax.
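
For comparison, the same join can be expressed in LabKey SQL using lookup "dot" notation through the new field. This is a minimal sketch assuming the example tables above; LanguageName and TranslatorName are assumed column names on the Languages list:

SELECT Demographics.ParticipantId,
  Demographics.Language.LanguageName,
  Demographics.Language.TranslatorName
FROM Demographics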

Create a Joined Grid View

Once you have connected tables with a lookup, you can create joined grids merging information from any of the columns of both tables. For example, a grid showing which translators are needed for each cohort would make it possible to schedule their time efficiently. Note that the original "Language" column itself need not be shown.

  • Go to the "Demographics" data grid.
  • Select (Grid Views) > Customize Grid.
  • Click the expand icon to open the lookup column and see the fields it contains. (The icon will change to a collapse icon, as shown below.)
  • Select the fields you want to display using checkboxes.
  • Save the grid view.

Learn more about customizing grid views in this topic:

Default Display Field

For lookup fields where the target table has an integer primary key, the server will use the first text field it encounters as the default display column. For example, suppose the Language field is an integer lookup to the Languages table, as below.

In this case, the server uses Language Name as the default display field because it is the first text field it finds in the looked-up table. You can see this in the details of the lookup column shown in the example above. The names "English", etc., are displayed even though the lookup is to an integer key.

Display Alternate Fields

To display other fields from the looked up table, go to (Grid Views) > Customize View, expand the lookup column, and select the fields you want to display.

You can also use query metadata to achieve the same result; see Query Metadata: Examples.

Validating Lookups: Enforcing Lookup Values on Import

When you are importing data into a table that includes a lookup column, you can have the system enforce the lookup values, i.e. any imported values must appear in the target table. An error will be displayed whenever you attempt to import a value that is not in the lookup's target table.

To set up enforcement:

  • Go to the field editor of the dataset or list with the lookup column.
  • Select the lookup column in the Fields section.
  • Expand it.
  • Under Lookup Validator, check the box for Ensure Value Exists in Lookup Target.
  • Click Save.

Note that pre-existing data is not retroactively validated by turning on the lookup validator. To ensure pre-existing data conforms to the values in the lookup target table, either review entries by hand or re-import to confirm values.

Suppress Linking

By default the display value in a lookup field links to the target table. To suppress the links, and instead display the value/data as bare text, edit the XML metadata on the target table. For example, if you are looking up to MyList, add <tableUrl></tableUrl> to its metadata, as follows. This will suppress linking on any table that has a lookup into MyList.

<tables xmlns="http://labkey.org/data/xml">
  <table tableName="MyList" tableDbType="NOT_IN_DB">
    <tableUrl></tableUrl>
    <columns>
    </columns>
  </table>
</tables>

A related option is to use a SQL annotation directly in the query to not "follow" the lookup and display the column as if there were no foreign key defined on it. Depending on the display value of the lookup field, you may see a different value than if you used the above suppression of the link. Learn more here: Query Metadata.

Related Topics




Protecting PHI Data


In many research applications, Protected Health Information (PHI) is collected and available to authorized users, but must not be shared with unauthorized users. This topic covers how to mark columns (fields) as different levels of PHI and how you can use these markers to control access to information.

Administrators can mark columns as Restricted PHI, Full PHI, Limited PHI, or Not PHI. Marking a field with a particular PHI level does not by itself restrict access to that field, and in some cases the setting has special behavior, as detailed below.

Note that while you can set the PHI level for assay data fields, assays do not support using PHI levels to restrict access to specific fields in the ways described here. Control of access to assay data should be accomplished by using folder permissions and only selectively copying non-PHI data to studies.

Mark Column at PHI Level

There are four levels of PHI setting available:

  • Restricted PHI: Most protected
  • Full PHI
  • Limited PHI: Least protected
  • Not PHI (Default): Not protected

To mark a field at one of these levels:
  • Open the Field Editor for the data structure you are marking.
  • Click the Fields section if it is not already open.
  • Expand the field you want to mark.
  • Click Advanced Settings.
  • From the PHI Level dropdown, select the level at which to mark the field. Shown here, the "country" field is being marked as "Limited PHI".
  • Click Apply.
  • Continue to mark other fields in the same data structure as needed.
  • Click Save when finished.

Developers can also use XML metadata to mark a field as containing PHI.

Once your fields are marked, you can use this information to control export and publication of studies and folders.

Some Columns Cannot be Marked as PHI

There are certain fields which cannot be annotated as PHI because to do so would interfere with system actions like study alignment and usability. For example, the ParticipantID in a study cannot be marked as containing PHI.

Instead, in order to protect Participant identifiers that are considered PHI, you can:

Export Without PHI

When you export a folder or study, you can select the level(s) of PHI to include. By default, all columns are included, so to exclude any columns, you must make a selection as follows.

  • Select (Admin) > Folder > Management.
  • Click the Export tab.
  • Select the objects you want to export.
  • Under Options, choose which levels of PHI you want to include.
    • Note that assay data does not support PHI field settings; if assay data is selected, all of it will be included regardless of the PHI level you select here.
  • Uncheck the Include PHI Columns box to exclude all columns marked as PHI.
  • Click Export.

The exported archive can now be shared with users who can have access to the selected level of PHI.

This same option can be used when creating a new folder from a template, allowing you to create a new similar folder without any PHI columns.

Publish Study without PHI

When you publish a study, you create a copy in a new folder, using a wizard to select the desired study components. On the Publish Options panel, you can select the PHI you want to include. The default is to publish with all columns.

  • In a study, click the Manage tab.
  • Click Publish Study.
  • Complete the study wizard selecting the desired components.
  • On the Publish Options panel, under Include PHI Columns, select the desired levels to include.
  • Click Finish.

The published study folder will only contain the columns at the level(s) you included.

Issues List and Limited PHI

An issue tracking list ordinarily shows all fields to all users. If you would like to have certain fields only available to users with permission to insert or update issues (Submitter, Editor, or Admins), you can set fields to "Limited PHI".

For example, a development bug tracker could have a "Limited PHI" field indicating the estimated work needed to resolve it. This field would then be hidden from readers of the list of known issues but visible to the group planning to fix them.

Learn more about customizing issue trackers in this topic: Issue Tracker: Administration

Use PHI Levels to Control UI Visibility of Data


Premium Features Available

Subscribers to the Enterprise Edition of LabKey Server can use PHI levels to control display of columns in the user interface. Learn more in this topic:


Learn more about premium editions

Note that if your data uses any text choice fields, administrators and data structure editors will be able to see all values available within the field editor, making this a poor field choice for sensitive information.

Related Topics




Data Grids


Data grids display data from various data structures as a table of columns and rows. LabKey Server provides sophisticated tools for working with data grids, including sorting, filtering, reporting, and exporting.

Take a quick tour of the basic data grid features here: Data Grid Tour.

Data Grid Topics

Related Topics




Data Grids: Basics


Data in LabKey Server is viewed in grids. The underlying data table, list, or dataset stored in the database contains more information than is shown in any given grid view. This topic will help you learn the basics of using grids in LabKey to view and work with any kind of tabular data.
Note: If you are using Sample Manager, LabKey LIMS, or Biologics LIMS, there are some variations in the appearance and features available for data grids. Learn more in this topic: Grid Basics in Sample Manager and LIMS Products

Anatomy of a Data Grid

The following image shows a typical data grid.

  • Data grid title: The name of the data grid.
  • QC State filter: Within a LabKey study, when one or more dataset QC states are defined, the button bar will include a menu for filtering among them. The current status of that filter is shown above the grid.
  • Folder location: The folder location. Click to navigate to the home page of the folder.
  • Grid view indicator: Indicates which view, or perspective, of the data is currently shown. Custom views are created to show a joined set of tables or highlight particular columns in the data. Every dataset has a "default" view that can also be customized if desired.
  • Filter bar: When filters are applied, they appear here. Click the X to remove a filter. When more than one is present, a Clear all button is also shown.
  • Button bar: Shows the different tools that can be applied to your data. A triangle indicates a menu of options.
  • Column headers: Click a column header for a list of options.
  • Data records: Displays the data as a 2-dimensional table of rows and columns.
  • Page through data: Control how much data is shown per page and step through pages.

Column Header Options

See an interactive example grid on labkey.org.

Button Bar

The button bar tools available may vary with different types of data grid. Study datasets can provide additional functionality, such as filtering by cohort, that is not available for lists. Assay and proteomics data grids provide many additional features.

Hover over icon buttons to see a tooltip with the text name of the option. Common buttons and menus include:

  • (Filter): Click to open a panel for filtering by participant groups and cohorts.
  • (Grid Views): Pull-down menu for creating, selecting, and managing various grid views of this data.
  • (Charts/Reports): Pull-down menu for creating and selecting charts and reports.
  • (Export): Export the data grid as a spreadsheet, text, or in various scripting formats.
  • (Insert Data): Insert a new single row, or bulk import data.
  • (Delete): Delete one or more rows selected by checkbox. This option is disabled when no rows are selected.
  • Design/Manage: With appropriate permissions, change the structure and behavior of the given list ("Design") or dataset ("Manage").
  • Groups: Select which cohorts, participant groups, or treatment groups to show. Also includes options to manage cohorts and create new participant groups.
  • Print: Generate a printable version of the dataset in your browser.

Page Through Data

In the upper right corner, you will see how your grid is currently paginated. In the above example, there are 484 rows in the dataset and they are shown 100 to a "page", so the first page is rows 1-100 of 484. To step through pages, in this case 100 records at a time, click the arrow buttons. To adjust paging, hover over the row count message (here "1-100 of 484") and it will become a button. Click to open the menu:

  • Show First/Show Last: Jump to the first or last page of data.
  • Show All: Show all rows in one page.
  • Click Paging to see the currently selected number of rows per page. Notice the > caret indicating there is a submenu to open. On the paging submenu, click another option to change pagination.

Examples

The following links show different views of a single data grid:

Related Topics




Import Data


LabKey provides a variety of methods for importing data into a data grid. Before importing, identify the type of data structure in which your data will be stored; each data structure has its own specific process for designing a schema and then importing data, though the general import process is similar among many of them. Specific import instructions for many types are available here:
Note: When data is imported as a TSV, CSV, or text file, it is parsed using UTF-8 character encoding.



Sort Data


This page explains how to sort data in a table or grid. The basic sort operations are only applied for the current viewer and session; they are not persisted by default, but can be saved as part of a custom grid view.

Sort Data in a Grid

To sort data in a grid view, click on the header for a column name. Choose either:

  • Sort Ascending: Show the rows with lowest values (or first values alphabetically) for this column at the top of the grid.
  • Sort Descending: Show the rows with the highest values at the top.

Once you have sorted your dataset using a particular column, a triangle icon will appear in the column header, pointing up for an ascending sort or down for a descending sort.

You can sort a grid view using multiple columns at a time as shown above. Sorts are applied in the order they are added, so the most recent sort will show as the primary way the grid is sorted. For example, to sort by date, with same-date results sorted by temperature, you would first sort on temperature, then on date.

Note that LabKey sorting is case-sensitive.

Clear Sorts

To remove a sort on an individual column, click the column caption and select Clear Sort.

Advanced: Understand Sorting URLs

The sort specifications are included on the page URL. You can modify the URL directly to change the sorted columns, the order in which they are sorted, and the direction of the sort. For example, the following URL sorts the Physical Exam grid first by ascending ParticipantId, and then by descending Temp_C:
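
(The following URL is illustrative, based on the demo study used in the filtering examples later in this topic; the dataset ID will vary.)

https://www.labkey.org/home/Demos/HIV%20Study%20Tutorial/study-dataset.view?datasetId=5003&Dataset.sort=ParticipantId%2C-Temp_C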

Note that the minus ('-') sign in front of the Temp_C column indicates that the sort on that column is performed in descending order. No sign is required for an ascending sort, but it is acceptable to explicitly specify the plus ('+') sign in the URL.

The %2C code that separates the column names is the URL encoding of a comma.

Related Topics




Filter Data


You can filter data displayed in a grid to reduce the amount of data shown, or to exclude data that you do not wish to see. By default, these filters only apply for the current viewer and session, but filters may be saved as part of a grid view if desired.

Filter Column Values

  • Click on a column name and select Filter.

Filter by Value

In many cases, the filter popup will open on the Choose Values tab by default. Here, you can directly select one or more individual values using checkboxes. Click a label to select only that single value; add or remove additional values by clicking their checkboxes.

This option is not the default in a few circumstances:

Filtering Expressions

Filtering expressions available in dropdowns vary by datatype and context. Possible filters include, but are not limited to:

  • Presence or absence of a value for that row
  • Equality or inequality
  • Comparison operators
  • Membership or lack of membership in a named, semicolon-separated set
  • Starts with and contains operators for strings
  • Between (inclusive) or Not Between (exclusive) two comma separated values
  • Equals/contains one of (or does not equal/contain one of) a provided list that can be semicolon or new line separated.
For a full listing, see Filtering Expressions.

  • Switch to the Choose Filters tab, if available.
  • Specify a filtering expression (such as "Is Greater Than"), and value (such as "57") and click OK.

You may add a second filter if desired - the second filter is applied as an AND with the first. Both conditions must be true for a row to be included in the filtered results.

Once you have filtered on a column, the filter icon appears in the column header. Current filters are listed above the grid, and can be removed by simply clicking the X in the filter panel.

When there are multiple active filters, you can remove them individually or use the link to Clear All that will be shown.

Notes:
  • Leading spaces on strings are not stripped. For example, consider a list filter like Between (inclusive) which takes two comma-separated terms. If you enter range values as "first, second", rows with the value "second" (without the leading space) will be excluded. Enter "first,second" to include such rows.
  • LabKey filtering is case-sensitive.

Filter Value Variables

Using filter value variables can help you use context-sensitive information in a filter. For example, use the variable "~me~" (including the tildes) on columns showing user names from the core.Users table to filter based on the current logged in user.

Additional filter value variables are available that will not work in the grid header menu, but will work in the grid view customizer or in the URL. For example, relative dates can be specified using filter values like "-7d" (7 days ago) or "5d" (5 days from now) in a saved named grid view. Learn more here: Saved Filters and Sorts
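
For example, a saved grid view could filter a date column to the trailing week with a relative-date parameter. The line below is an illustrative URL fragment; "Dataset" is the data region name used in the URL examples later in this topic, and "dategte" is the date greater-than-or-equal filter operator:

&Dataset.date~dategte=-7d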

Persistent Filters

Some filters on some types of data are persistent (or "sticky") and will remain applied on subsequent views of the same data. For example, some types of assays have persistent filters for convenience; these are listed in the active filter bar above the grid.

Use Faceted Filtering

When applying multiple filters to a data grid, the options shown as available in the filter popup will respect prior filters. For example, if you first filter our sample demographics dataset by "Country" and select only "Uganda", then if you open a second filter on "Primary Language" you will see only "French" and "English" as options - our sample data includes no patients from Uganda who speak German or Spanish. The purpose is to simplify the process of filtering by presenting only valid filter choices. This also helps you avoid unintentionally empty results.

Understand Filter URLs

Filtering specifications can be included on the page URL. A few examples follow.

This URL filters the example "PhysicalExam" dataset to show only rows where weight is greater than 80kg. The column name, the filter operator, and the criterion value are all specified as URL parameters. The dataset is specified by ID, "5003" in this case:

https://www.labkey.org/home/Demos/HIV%20Study%20Tutorial/study-dataset.view?datasetId=5003&Dataset.weight_kg~gt=80

Multiple filters on different columns can be combined, and filters also support selecting multiple values. In this example, we show all rows for two participants with a specific data entry date:

https://www.labkey.org/home/Demos/HIV%20Study%20Tutorial/study-dataset.view?datasetId=5003&Dataset.ParticipantId~in=PT-101%3BPT-102&Dataset.date~dateeq=2020-02-02

To specify that a grid should be displayed using the user's last filter settings, set the .lastFilter URL parameter to true, as shown:

https://www.labkey.org/home/Demos/HIV%20Study%20Tutorial/study-dataset.view?.lastFilter=true
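
Filter and sort parameters can also be combined on a single URL. For example, this illustrative URL shows only rows with weight over 80kg, sorted heaviest first:

https://www.labkey.org/home/Demos/HIV%20Study%20Tutorial/study-dataset.view?datasetId=5003&Dataset.weight_kg~gt=80&Dataset.sort=-weight_kg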

Study: Filter by Participant Group

Within a study dataset, you may also filter a data grid by participant group. Click the (Filter) icon above the grid to open the filter panel. Select checkboxes in this panel to further filter your data. Note that filters are cumulatively applied and listed in the active filter bar above the data grid.

Related Topics




Filtering Expressions


Filtering expressions available for columns, or when searching for subjects of interest, vary by the datatype of the column; not all expressions are relevant or available in all contexts. In the following table, the "Arguments" column indicates how many data values, if any, should be provided for comparison with the data being filtered.

Expression | Arguments | Example Usage | Description
Has Any Value | (none) | | Returns all values, including null
Is Blank | (none) | | Returns blank values
Is Not Blank | (none) | | Returns non-blank values
Equals | 1 | | Returns values matching the value provided
Does Not Equal | 1 | | Returns non-matching values
Is Greater Than | 1 | | Returns values greater than the provided value
Is Less Than | 1 | | Returns values less than the provided value
Is Greater Than or Equal To | 1 | | Returns values greater than or equal to the provided value
Is Less Than or Equal To | 1 | | Returns values less than or equal to the provided value
Contains | 1 | | Returns values containing the provided value
Does Not Contain | 1 | | Returns values not containing the provided value
Starts With | 1 | | Returns values which start with the provided value
Does Not Start With | 1 | | Returns values which do not start with the provided value
Between, Inclusive | 2, comma separated | -4,4 | Returns values between or matching the two values provided
Not Between, Exclusive | 2, comma separated | -4,4 | Returns values which are not between and do not match the two values provided
Equals One Of | 1 or more, semicolon or new line separated | a;b;c | Returns values matching any one of the values provided
Does Not Equal Any Of | 1 or more, semicolon or new line separated | a;b;c | Returns values not matching any of the values provided
Contains One Of | 1 or more, semicolon or new line separated | a;b;c | Returns values which contain any one of the values provided
Does Not Contain Any Of | 1 or more, semicolon or new line separated | a;b;c | Returns values which do not contain any of the values provided
Is In Subtree | 1 | | (Premium Feature) Returns values that are in the ontology hierarchy 'below' the selected concept
Is Not In Subtree | 1 | | (Premium Feature) Returns values that are not in the ontology hierarchy 'below' the selected concept

Boolean Filtering Expressions

Expressions available for data of type boolean (true/false values):

  • Has Any Value
  • Is Blank
  • Is Not Blank
  • Equals
  • Does Not Equal

Date Filtering Expressions

Date and DateTime data can be filtered with the following expressions:

  • Has Any Value
  • Is Blank
  • Is Not Blank
  • Equals
  • Does Not Equal
  • Is Greater Than
  • Is Less Than
  • Is Greater Than or Equal To
  • Is Less Than or Equal To

Numeric Filtering Expressions

Expressions available for data of any numeric type, including integers and double-precision numbers:

  • Has Any Value
  • Is Blank
  • Is Not Blank
  • Equals
  • Does Not Equal
  • Is Greater Than
  • Is Less Than
  • Is Greater Than or Equal To
  • Is Less Than or Equal To
  • Between, Inclusive
  • Not Between, Exclusive
  • Equals One Of
  • Does Not Equal Any Of

String Filtering Expressions

String type data, including text and multi-line text data, can be filtered using the following expressions:

  • Has Any Value
  • Is Blank
  • Is Not Blank
  • Equals
  • Does Not Equal
  • Is Greater Than
  • Is Less Than
  • Is Greater Than or Equal To
  • Is Less Than or Equal To
  • Contains
  • Does Not Contain
  • Starts With
  • Does Not Start With
  • Between, Inclusive
  • Not Between, Exclusive
  • Equals One Of
  • Does Not Equal Any Of
  • Contains One Of
  • Does Not Contain Any Of

Related Topics




Column Summary Statistics


Summary statistics for a column of data can be displayed directly on the grid, giving at-a-glance information like count, min, max, etc. of the values in that column.
Premium Feature — An enhanced set of additional summary statistics is available in all Premium Editions of LabKey Server. Learn more or contact LabKey.

Add Summary Statistics to a Column

  • Click a column header, then select Summary Statistics.
  • The popup will list all available statistics for the given column, including their values for the selected column.
  • Check the box for all statistics you would like to display.
  • Click Apply. The statistics will be shown at the bottom of the column.

Display Multiple Statistics

Multiple summary statistics can be shown at one time for a column, and each column can have its own set. Here is a compound set of statistics on another dataset:

Statistics Available

The list of statistics available varies based on the edition of LabKey Server you are running and on the column datatype. Not all functions are available for all column types; only meaningful aggregates are offered. For instance, boolean columns show only the count fields, and date columns do not include sums or means. Calculations ignore blank values, but note that values of 0 or "unknown" are not blank values.

All calculations use the current grid view and any filters you have applied. Remember that the grid view may be shown across several pages. Column summary statistics are for the dataset as a whole, not just the current page being viewed. The number of digits displayed is governed by the number format set for the container, which defaults to rounding to the thousandths place.

Summary statistics available in the Community edition include:

  • Count (non-blank): The number of values in the column that are not blank, i.e. the total number of rows for which there is data available.
  • Sum: The sum of the values in the column.
  • Mean: The mean, or average, value of the column.
  • Minimum: The lowest value.
  • Maximum: The highest value.
Additional summary statistics available in Premium editions of LabKey Server include:
  • Count (blank): The number of blank values.
  • Count (distinct): The number of distinct values.
  • Median: Orders the values in the column, then finds the midpoint. When there are an even number of values, the two values at the midpoint are averaged to obtain a single value.
  • Median Absolute Deviation (MAD): The median of the set of absolute deviations of each value from the median.
  • Standard Deviation (of mean): For each value, take the difference between the value and the mean, then square it. Average the squared deviations to find the variance. The standard deviation is the square root of the variance.
  • Standard Error (of mean): The standard deviation divided by the square root of the number of values.
  • Quartiles:
    • Lower (Q1) is the midpoint between the minimum value and the median value.
    • Upper (Q3) is the midpoint between the median value and the maximum value. Both Q1 and Q3 are shown.
    • Interquartile Range: The difference between Q3 and Q1 (Q3 - Q1).
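
As a quick worked example following the definitions above: for the eight values 2, 4, 4, 4, 5, 5, 7, 9, the mean is 5 and the median is 4.5; the squared deviations from the mean sum to 32, so the variance is 32/8 = 4, the standard deviation is 2, and the standard error is 2/√8 ≈ 0.71.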

Save Summary Statistics with Grid View

Once you have added summary statistics to a grid view, you can use (Grid Views) > Customize Grid to save the grid with these statistics displayed. For example, this grid in our example study shows a set of statistics on several columns (scroll to the bottom):

Related Topics




Customize Grid Views


A grid view is a way to see tabular data in the user interface. Lists, datasets, assay results, and queries can all be represented in a grid view. The default set of columns displayed is not always what you need. This topic explains how to create custom grid views in the user interface, showing selected columns in the order you wish. You can edit the default grid view, and also save as many named grid views as needed.

Editors, administrators, and users granted "Shared View Editor" access can create and share customized grid views with other users.

Customize a Grid View

To customize a grid, open it, then select (Grid Views) > Customize Grid.

The tabs let you control:

You can close the grid view customizer by clicking the X in the upper right.

Note that if you are using LabKey Sample Manager or Biologics LIMS and want to customize grids there, you'll find instructions in this topic:

Adjust Columns Displayed

On the Columns tab of the grid view customizer, control the fields included in your grid and how they are displayed in columns:

  • Available Fields: Lists the fields available in the given data structure.
  • Selected Fields: Shows the list of fields currently selected for display in the grid.
  • Delete: Deletes the current grid view. You cannot delete the default grid view.
  • Revert: Returns the grid to its state before you customized it.
  • View Grid: Click to preview your changes.
  • Save: Click to save your changes as the default view or as a new named grid.
Actions you can perform here are detailed below.

Add Columns

  • To add a column to the grid, check the box for it in the Available Fields pane.
  • The field will be added to the Selected Fields pane.
  • This is shown for the "Hemoglobin" field in the above image.

Notice the checkbox for Show hidden fields. Hidden fields might contain metadata about the grid displayed or interconnections with other data. Learn more about common hidden fields for lists and datasets in this topic:

Expand Nodes to Find More Fields

When the underlying table includes elements that reference other tables, they will be represented as an expandable node in the Available Fields panel. For example:

  • Fields defined to look up values in other tables, such as the built-in "Created By" field shown above, which can be expanded to show more information from the Users table.
  • In a study, there is built in alignment around participant and visit information; see the "Participant Visit" node.
  • In a study, all the other datasets are automatically joined through a special field named "Datasets". Expand it to see all other datasets that can be joined to this one. Combining datasets in this way is the equivalent of a SQL SELECT query with one or more inner joins.
  • Click the expand and collapse buttons to reveal or hide columns from other datasets and lookups.
  • Greyed out items cannot be displayed themselves, but can be expanded to find fields to display.

Add All Columns for a Node

If you want to add all the columns in a given node, hold down shift when you click the checkbox for the top of the node. This will select (or unselect) all the columns in that node.

Note that this will only add one "layer" of the columns under a node. It also does not apply to any grayed out "special" nodes like the "Datasets" node in a study.

Reorder Columns

To reorder the columns, drag and drop the fields in the Selected Fields pane. Columns will appear left to right as they are listed top to bottom. Changing the display order for a grid view does not change the underlying data table.

Change Column Display Name

Hover over any field in the Selected Fields panel to see a popup with more information about the key and datatype of that field, as well as a description if one has been added.

To change the column display title, click the (Edit title) icon. Enter the new title and click OK. Note that this does not change the underlying field name, only how it is displayed in the grid.

Remove Columns

To remove a column, hover over the field in the Selected Fields pane, and click (Remove column) as shown in the image above.

Removing a column from a grid view does not remove the field from the dataset or delete any data.

You can also remove a column directly from the grid (without opening the view customizer) by clicking the column header and selecting Remove Column. After doing so, you will see the "grid view has been modified" banner and be able to revert, edit, or save this change.

Save Grid Views

When you are satisfied with your grid, click Save. You can save your version as the default grid view, or as a new named grid view.

  • Click Save.

In the popup, select:

  • Grid View Name:
    • "Default grid view for this page": Save as the grid view named "Default".
    • "Named": Select this option and enter a title for your new grid view. This name cannot be "Default" and if you use the same name as an existing grid view, your new version will overwrite the previous grid by that name.
  • Shared: By default a customized grid is private to you. If you have the "Editor" role or "Shared View Editor" role (or higher) in the current folder, you can make a grid available to all users by checking the box Make this grid view available to all users.
  • Inherit: If present, check the box to make this grid view available in child folders.
In this example, we named the grid "My Custom Grid View" and it was added to the (Grid Views) pulldown menu.

Saved grid views appear on the (Grid Views) pulldown menu. On this menu, the "Default" grid is always first, then any grids saved privately by the current user (shown with a lock icon), then all the shared grid views. If a customized version of the "Default" grid view has been saved but not shared by the current user, it will also show a lock icon, but can still be edited (or shared) by that user.

When the list of saved grid views is long, a Filter box is added. Type to narrow the list making it easier to find the grid view you want.

In a study, named grid views will be shown in the Data Views web part when "Queries" are included. Learn more in this topic: Data Views Browser.

Save Filtered and Sorted Grids

You can further refine your saved grid views by including saved filters and sorts with the column configuration you select. You can define filters and sorts directly in the view customizer, or get started from the user interface using column menu filters and sorts.

Learn about saving filters and sorts with custom grid views in this topic:

Note: When saving or modifying grid views of assay data, be aware that run filters may have been saved with your grid view. For example, if you customize the default grid view while viewing a single run of data, then import a new run, you may not see the new data because of the previous filtering to the other single run. To resolve this issue:
  • Select (Grid Views) > Customize Grid.
  • Click the Filters tab.
  • Clear the Row ID filter by clicking the 'X' on its line.
  • Click Save, confirm Default grid view for this page is selected.
  • Click Save again.

Include Column Visualizations and Summary Statistics

When you save a default or named grid, any Column Visualizations or Summary Statistics in the current view of the grid will be saved as well. This lets you include quick graphical and statistical information when a user first views the data.

For example, this grid in our example study includes both column visualizations and summary statistics:

Reset the Default Grid View

Every data structure has a grid view named "Default" but it does not have to be the default shown to users who view the structure.

  • To set the default view to another named grid view, select (Grid Views) > Set Default.
  • The current default view will be shown in bold.
  • Click Select for the grid you prefer from the list available. The newly selected one will now be bold.
  • Click Done.

Revert to the Original Default Grid View

  • To revert any customizations to the default grid view, open it using (Grid Views) > Default.
  • Select (Grid Views) > Customize Grid.
  • Click the Revert button.

Views Web Part

To create a web part listing all the customized views in your folder, an administrator can create an additional web part:

  • Enter (Admin) > Page Admin Mode.
  • In the lower left, select Views from the Select Web Part menu.
  • Click Add.
  • The web part will show saved grid views, reports, and charts sorted by categories you assign. Here we see the new grid view we just created.

Inheritance: Making Custom Grids Available in Child Folders

In some cases, listed in the table below, custom grids can be "inherited" or made available in child folders. This generally applies to cases like queries that can be run in different containers, not to data structures that are scoped to a single folder. Check the box to Make this grid view available in child folders.

The following table types support grid view inheritance:

Table Type | Supports Inheritance into Child Folders?
Query | Yes (Also see Edit Query Properties)
Linked Queries | Yes (Also see Edit Query Properties)
Lists | No
Datasets | No
Query Snapshots | No
Assay Data | Yes
Sample Types | Yes
Data Classes | Yes

Troubleshooting

Why Can't I Add a Certain Column?

In a study, why can't I customize my grid to show a particular field from another dataset?

Background: To customize your grid view of a dataset by adding columns from another dataset, it must be possible to join the two datasets. The columns used for a dataset's key influence how this dataset can be joined to other tables. Certain datasets have more than one key column (in other words, a "compound key"). In a study, there are three types of datasets, distinguished by how you set Data Row Uniqueness when you create the dataset:

  • Demographic datasets use only the ParticipantId column as a key. This means that only one row can be associated with each participant in such a dataset.
  • Clinical (or standard) datasets use Participant/Visit (or Timepoint) pairs as a compound key. This means that there can be many rows per participant, but only one per participant at each visit or date.
  • Datasets with an additional key field, such as assay or sample data linked into the study. In these compound-key datasets, each participant can have multiple rows associated with each individual visit; rows are uniquely differentiated by another key column, such as a RowId.
Consequences: When customizing the grid for a table, you cannot join in columns from a table with more key columns. For example, if you are looking at a demographics dataset in a study, you cannot join to a clinical dataset because the clinical dataset can have multiple rows per participant, i.e. has more columns in its key. There isn't a unique mapping from a participant in the 'originating' dataset to a specific row of data in the dataset with rows for 'participant/visit' pairs. However, from a clinical dataset, you can join either to demographic or other clinical datasets.

Guidance: To create a grid view combining columns from datasets of different 'key-levels', start with the dataset with more columns in the key. Then select a column from the table with fewer columns in the key. There can be a unique mapping from the compound key to the simpler one - some columns will have repeated values for several rows, but rows will be unique.

Show CreatedBy and/or ModifiedBy Fields to Users

In LabKey data structures, there are built in lookups to store information about the users who create and modify each row. You can add this data to a customized grid by selecting/expanding the CreatedBy and/or ModifiedBy fields and checking the desired information.

However, the default lookup for such fields is to the core.SiteUsers table, which is accessible only to site administrators; others see only the row for their own account. If you wish to be able to display information about other users to a non-admin user, you need to customize the lookup to point to the core.Users table instead. That table has the same columns but holds only rows for the users in the current project, restricting how much information is shared about site-wide usage to non-admins. From the schema descriptions:

  • core.SiteUsers: Contains all users who have accounts on the server regardless of whether they are members of the current project or not. The data in this table are available only to site administrators. All other users see only the row for their own account.
  • core.Users: Contains all users who are members of the current project. The data in this table are available only to users who are signed-in (not guests). Guests see no rows. Signed-in users see the columns UserId, EntityId, and DisplayName. Users granted the 'See User and Group Details' role see all standard and custom columns.
To adjust this lookup so that non-admins can see CreatedBy details:
  • Open (Admin) > Go To Module > Query.
    • If you don't have access to this menu option, you would need higher permissions to make this change.
  • Select the schema and table of interest.
  • Click Edit Metadata.
  • Scroll down and click Edit Source.
  • Add the following to the <columns> section:

    <column columnName="CreatedBy">
      <fk>
        <fkDbSchema>core</fkDbSchema>
        <fkTable>Users</fkTable>
        <fkColumnName>UserId</fkColumnName>
      </fk>
    </column>
  • Click Save & Finish.
Now anyone with the "Reader" role (or higher) will be able to see the display name of the user who created and/or modified a row when it's included in a custom grid view.

How do I recover from a broken view?

It is possible to get into a state where you have a "broken" view, and the interface can't return you to the editor to correct the problem. You may see a message like:

View has errors
org.postgresql.util.PSQLException: … details of error

Suggestions:

  • It may work to log out and back in again, particularly if the broken view has not been saved.
  • Depending on your permissions, you may be able to access the view manager by URL to delete the broken view. The Folder Administrator role (or higher) is required. Navigate to the container and edit the URL to replace "/project-begin.view?" with:
    /query-manageViews.view?
    This will open a troubleshooting page where admins can delete or edit customized views. On this page you will see grid views that have been edited and saved, including a line with no "View Name" if the Default grid has been edited and saved. If you delete this row, your grid will 'revert' any edits and restore the original Default grid.
  • If you are still having issues with the Default view on a grid, try accessing a URL like the following, replacing <my_schema> and <my_table> with the appropriate values:
    /query-executeQuery.view?schemaName=<my_schema>&query.queryName=<my_table>

Customize Grid and Other Menus Unresponsive

If you find grid menus to be unresponsive in a LabKey Study dataset, i.e. you can see the menus drop down but clicking options has no effect, double-check that there are no apostrophes (single quote marks) in the definition of any cohorts defined in that study.

Learn more here.

Related Topics




Saved Filters and Sorts


When you are looking at a data grid, you can sort and filter the data as you wish, but those sorts and filters only persist for your current session on that page. Using the .lastFilter parameter on the URL can preserve the last filter, but otherwise these sorts and filters are temporary.

To create a persistent filter or sort, you can save it as part of a custom grid view. Users with the "Editor" or "Shared View Editor" role (or higher) can share saved grids with other users, including the saved filters and sorts.

Learn the basics of customizing and saving grid views in this topic:

Define a Saved Sort

  • Navigate to the grid you'd like to modify.
  • Select (Grid Views) > Customize Grid
  • Click the Sort tab.
  • Check a box in the Available Fields panel to add a sort on that field.
  • In the Selected Sorts panel, specify whether the sort order should be ascending or descending for each sort applied.
  • Click Save.
  • You may save as a new named grid view or as the default.
  • If you have sufficient permissions, you will also have the option to make it available to all users.

You can also create a saved sort by first sorting your grid directly using the column headers, then opening the grid customizer panel to convert the local sort to a saved one.

  • In the grid view with the saved sort applied above, sort on a second column; in this example we chose 'Lymphs'.
  • Open (Grid Views) > Customize Grid.
  • Click the Sort tab. Note that it shows (2), meaning two sorts are now defined. Until you save the grid view with this additional sort included, it will remain temporary, as sorts usually are.
  • Drag and drop if you want to change the order in which sorts are applied.
  • Remember to Save your grid to save the additional sort.

Define a Saved Filter

The process for defining saved filters is very similar. You can filter locally first or directly define saved filters.

  • An important advantage of using the saved filters interface is that when filtering locally, you are limited to two filters on a given column. Saved filters may include any number of separate filtering expressions for a given column, which are all ANDed together.
  • Another advantage is that there are some additional filtering expressions available here that are not available in the column header filter dialog.
    • For example, you can filter by relative dates using -#d syntax (# days ago) or #d (# days from now) using the grid customizer but not using the column header.
  • Select (Grid Views) > Customize Grid.
  • Click the Filter tab.
  • In the left panel, check boxes for the column(s) on which you want to filter.
  • Drag and drop filters in the right panel to change the filtering order.
  • In the right panel, specify filtering expressions for each selected column. Use pulldowns and value entry fields to set expressions, add more using the (Add) buttons.
  • Use the delete buttons to remove individual filtering expressions.
  • Save the grid, selecting whether to make it available to other users.

Apply Grid Filter

When viewing a data grid, you can enable and disable all saved filters and sorts using the Apply Grid Filter checkbox in the (Grid Views) menu. All defined filters and sorts are applied at once using this checkbox - you cannot pick and choose which to apply. If this menu option is not available, no saved filters or sorts have been defined.

Note that clearing the saved filters and sorts by unchecking this box does not change how they are saved, it only clears them for the current user and session.

Interactions Among Filters and Sorts

Users can still perform their own sorting and filtering as usual when looking at a grid that already has a saved sort or filter applied.

  • Sorting: Sorting a grid view while you are looking at it overrides any saved sort order. In other words, the saved sort can control how the data is first presented to the user, but the user can re-sort any way they wish.
  • Filtering: Filtering a grid view which has one or more saved filters results in combining the sets of filters with an AND. That is, new local filters are applied to the already-filtered data. This can produce unexpected results when the saved filter(s) exclude data the user expects to see. Note that these saved filters are not listed in the filters bar above the data grid, but they can all be disabled by unchecking the (Grid Views) > Apply Grid Filter checkbox.

Related Topics




Select Rows


When you work with a grid of data, such as a list or dataset, you often need to select one or more rows. For example, you may wish to visualize a subset of data or select particular rows from an assay to link into a study. Large data grids are often viewed as multiple pages, adding selection options.

Topics on this page:

Select Rows on the Current Page of Data

  • To select any single row, click the checkbox at the left side of the row.
  • To unselect the row, uncheck the same checkbox.
  • The box at the top of the checkbox column allows you to select or unselect all rows on the current page at once.

Select Rows on Multiple Pages

Using the arrow buttons in the top right of the grid, you can page forward and back in your data and select as many rows as you like, singly or by page, using the same checkbox selection methods as on a single page. The selection message will update showing the total tally of selected rows.

To change the number of rows per page, select the row count message ("1 - 100 of 677" in the above screencap) to open a menu. Select Paging and make another selection for rows per page. See Page Through Data for more.

Selection Buttons

Selecting the box at the top of the checkbox column also adds a bar above your grid indicating the number of rows selected on the current page, along with additional selection buttons.

  • Select All Selectable Rows: Select all rows in the dataset, regardless of pagination.
  • Select None: Unselect all currently selected rows.
  • Show All: Show all rows as one "page" to simplify sorting and selection.
  • Show Selected: Show only the rows that are selected in a single page grid.
  • Show Unselected: Show only the rows that are not selected in a single page grid.
Using the Show Selected option helps you keep track of selections in large datasets. It is also needed for some actions that apply only to selected rows on the current page.

Related Topics




Export Data Grid


LabKey provides a variety of methods for exporting the rows of a data grid. You can export into formats that can be consumed by external applications (e.g., Excel) or into scripts that can regenerate the data grid. You can also choose whether to export the entire set of data or only selected rows. Your choice of export format determines whether you get a static snapshot of the data, or a dynamic reflection that updates as the data changes. The Excel and TSV formats supply static snapshots, while scripts allow you to display dynamically updated data.

Premium Feature Available

Subscribers to the Enterprise Edition of LabKey Server can export data with an e-Signature as described in this topic:


Learn more about premium editions

(Export) Menu

  • Click the (Export) button above any grid view to open the export panel.
  • Use tabs to choose between Excel, Text and Script exports, each of which carries a number of appropriate options for that type.

After selecting your options, described below, and clicking the Export button, you will briefly see visual feedback that the export is in progress.

Export Column Headers

Both Excel and Text exports allow you to choose whether Column Headers are exported with the data, and if so, what format is used. Options:

  • None: Export the data table with no column headers.
  • Caption: (Default) Include a column header row using the currently displayed column captions as headers.
  • Field Key: Use the column name with FieldKey encoding. While less display friendly, these keys are unambiguous and canonical and will ensure clean export and import of data into the same dataset.

Export Selected Rows

If you select one or more rows using the checkboxes on the left, you will activate the Export Selected Rows checkbox in either Excel or Text export mode. When selected, your exported Excel file will only include the selected rows. Uncheck the box to export all rows. For additional information about selecting rows, see Select Rows.

Filter Data Before Export

Another way to export a subset of data records is to filter the grid view before you export it.

  • Filter Data. Clicking a column header in a grid will open a dialog box that lets you filter and exclude certain types of data.
  • Create or select a Custom Grid View. Custom Grids let you store a selected subset as a named grid view.
  • View Data From One Visit. You can use the Study Navigator to view the grid of data records for a particular visit for a particular dataset. From the Study Navigator, click on the number at the intersection of the desired visit column and dataset row.

Export to Excel

When you export your data grid to Excel, you can use features within that software to access, sort and present the data as required. If your data grid includes inline images they will be exported in the cell in which they appear in the grid.

Export to Text

Select the Text tab to export the data grid in a text format. Select tab, comma, colon, or semicolon from the Separator pulldown and single or double from the Quote pulldown. The extension of your exported text file will correspond to the separator you have selected (e.g., "tsv" for tab separators).

LabKey Server uses the UTF-8 character encoding when exporting text files.

Export to Script

You can export the current grid to script code that can be used to access the data from any of the supported client libraries. See Export Data Grid as a Script.

The option to generate a Stable URL for the grid is also included on the (Export) > Script tab.
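For illustration, a script export for the JavaScript client library typically wraps a LABKEY.Query.selectRows call against the grid's schema and query. The following is a minimal sketch, not the literal generated output; the schema, query, and filter names here are hypothetical:

// Retrieve the same rows the grid displays; saved or applied
// grid filters are reproduced in the exported script.
LABKEY.Query.selectRows({
    schemaName: 'lists',        // hypothetical schema
    queryName: 'Demographics',  // hypothetical query behind the grid
    filterArray: [LABKEY.Filter.create('Country', 'Germany', LABKEY.Filter.Types.EQUAL)],
    success: function (data) {
        console.log('Retrieved ' + data.rows.length + ' rows');
    }
});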

Related Topics





Participant Details View


The default dataset grid displays data for all participants. To view data for an individual participant, click on the participant ID in the first column of the grid. The Participant Details View can also be customized to show graphical or other custom views of the data for a single study subject.

Participant Details View

The participant details view lists all of the datasets that contain data for the current participant, as shown in the image below.

  • Previous/Next Participant: Page through participants. Note that this option is only provided when you navigate from a dataset listing other participants. Viewing a single participant from the Participants tab does not include these options.
  • (expand) button: Expand dataset details.
  • (collapse) button: Collapse dataset details.

Add Charts

Expand dataset details by clicking the (expand) button or the name of the dataset of interest. Click Add Chart to add a visualization to the participant view details. The dropdown will show the charts defined for this dataset.

After you select a chart from the dropdown, click the Submit button that will appear.

Once you create a chart for one participant in a participant view, the same chart is displayed for every participant, with that participant's data.

You can add multiple charts per dataset, or different charts for each dataset. To define new charts to use in participant views, use the plot editor.

Notes:

1. Charts are displayed at a standard size in the default participant details view; custom height and width, if specified in the chart definition, are overridden.

2. Time charts displaying data by participant group can be included in a participant details view; however, the data is filtered to show only the individual participant's data. Disregard the legend showing group names for these trend lines.

Customize Participant Details View

You can alter the HTML used to create the default participant details page and save alternative ways to display the data using the Customize View link.

  • You can make small styling or other adjustments to the "default" script. An example is below.
  • You can leverage the LabKey APIs to tailor your custom page; see the sketch after these steps.
  • You can also add the participant.html file via a module. Learn more about file based modules in this webinar: Tech Talk: Custom File-Based Modules

Click Save to refresh and see the preview below your script. Click Save and Finish when finished.
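As a sketch of the API approach, a custom participant page could retrieve rows for a given subject using the JavaScript client API. The dataset name and participant ID below are hypothetical placeholders, not part of the default script:

LABKEY.Query.selectRows({
    schemaName: 'study',
    queryName: 'Demographics',  // hypothetical dataset name
    // 'PT-101' is a placeholder; a real page would use the participant being viewed
    filterArray: [LABKEY.Filter.create('ParticipantId', 'PT-101', LABKEY.Filter.Types.EQUAL)],
    success: function (data) {
        // Log a simple summary; a real page would merge this into the participant HTML
        if (data.rows.length > 0) {
            console.log('Demographics for this participant:', data.rows[0]);
        }
    }
});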

Example 1: Align Column of Values

In the standard participant view, the column of values for expanded demographics datasets will be centered over the grid of visits for the clinical datasets. In some cases, this will cause the demographic field values to be outside the visible page. To change where they appear, you can add the following to the standard participant details view (under the closing </script> tag):

<style>
.labkey-data-region tr td:first-child {width: 300px}
</style>
Use a wider "width" value if you have very long field names in the first column.

Example 2: Tabbed Participant View (Premium Resource)

Another way a developer can customize the participant view is by using HTML with JavaScript to present a tabbed view of participant data. Click the tabs to see the data for a single participant in each category of datasets; step through to see data for other participants.


Premium Resource Available

Subscribers to premium editions of LabKey Server can use the example code in this topic to create views like the tabbed example above:


Learn more about premium editions

Troubleshooting

If you see field names but no values in the default participant view, particularly for demographic datasets, check to be sure that your field names do not include special characters. If you want to present fields to users with special characters in the names, you can use the "Label" of the field to do so.

Related Topics




Query Scope: Filter by Folder


For certain LabKey queries, including assay designs, issue trackers, and survey designs, but not including study datasets, the scope of the query can be set with a folder filter to include:
  • all data on the site
  • all data for a project
  • only data located in particular folders
  • in some cases, combinations of data including /Shared project data
For example, a query defined at the project level can return data located in individual subfolders. Scope is controlled using the "Filter by Folder" option on the (Grid Views) menu in the web part.

This allows you to organize your data in folders that are convenient to you at the time of data collection (e.g., folders for individual labs or lab technicians). Then you can perform analyses independently of the folder-based organization of your data. You can analyze data across all folders, or just a branch of your folder tree.

You can set the scope through either the (Grid Views) menu or through the client API. In all cases, LabKey security settings remain in force, so users only see data in folders they are authorized to see.

Folder Filters in Grid Interface

To filter by folder through the user interface:

  • Select (Grid Views) > Filter by Folder.
  • Choose one of the options:
    • Current folder
    • Current folder and subfolders
    • All folders (on the site)

Some types of grid may have different selections available. For example, Sample Types, Data Classes, and Lists offer the following option to support cross-folder lookups:

  • Current folder, project, and Shared project

Folder Filters in the JavaScript API

The LabKey client APIs give developers even finer-grained control over scope.

The containerFilter config property, available on many methods such as LABKEY.Query.selectRows, determines which folders are accessed through the query.

For example, executeSQL allows you to use the containerFilter parameter to run custom queries across data from multiple folders at once. Such a query might show the count of NAb runs available in each lab’s subfolder when folders are organized by lab.

Possible values for the containerFilter are:

  • allFolders
  • current
  • currentAndFirstChildren
  • currentAndParents
  • currentAndSubfolders
  • currentPlusProject
  • currentPlusProjectAndShared
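As a sketch of how these fit together (the schema and query names here are hypothetical), a selectRows call scoped to the current folder and its subfolders might look like:

LABKEY.Query.selectRows({
    schemaName: 'assay.General.MyAssay',      // hypothetical assay schema
    queryName: 'Runs',
    containerFilter: 'currentAndSubfolders',  // one of the values listed above
    success: function (data) {
        console.log(data.rows.length + ' runs found in this folder and its subfolders');
    }
});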

Folder Filters with the JDBC Driver

The LabKey JDBC Driver also supports the containerFilter parameter for scoping queries by folder. Learn more in this topic:

Container Filter in LabKey SQL

Within a LabKey SQL FROM clause, you can include a containerFilter to control scope of an individual table within the query. Learn more in this topic:

Related Topics




Reports and Charts


The right visualization can communicate information about your data efficiently. You can create different types of report and chart to view, analyze and display data using a range of tools.

Built-in Chart and Report Types

Basic visualization types available to non-administrative users from the (Charts) > Create Chart menu and directly from data grid column headers are described in this topic:

Additional visualization and report types:

Display Visualizations

Reports and visualizations can be displayed and managed as part of a folder, project or study.

Manage Visualizations

External Visualization Tools (Premium Integrations)

LabKey Server can serve as the data store for external reporting and visualization tools such as RStudio, Tableau, Access, Excel, etc. See the following topics for details.

Related Topics




Jupyter Reports


Premium Feature — Available in the Professional and Enterprise Editions of LabKey Server. Learn more or contact LabKey.

This topic describes how to import reports from Jupyter Notebooks and use them with live data in LabKey Server. By configuring a Jupyter scripting engine, you can add a Jupyter Reports option on the menu of data grids.

In previous versions, this feature was enabled by configuring a Docker image and a "Docker Report Engine". You can still use the same Docker image configuration and use the resulting "localhost" URL for the "Jupyter Report Engine".

Once integrated, users can access the Jupyter report builder to import .ipynb files exported from Jupyter Notebooks. These reports can then be displayed and shared in LabKey.

Jupyter Notebooks URL

The Jupyter scripting configuration requires the URL of the service endpoint. Depending on where your Jupyter service is running, it could be something like "http://localhost:3031/" or "http://service-prefix-jupyter.srvc.int.jupyter:3031".

If you are connecting to a docker image, find the Remote URL by going to the Settings tab of the Admin Console. Under Premium Features, click Docker Host Settings. The "Docker Host" line shows your Remote URL.

The URL "http://noop.test:3031" can be used to configure a fake 'do nothing' service. This is only useful for testing.

Add Jupyter Report Engine on LabKey Server

  • Select (Admin) > Site > Admin Console.
  • Under Configuration, click Views and Scripting.
  • Select Add > New Jupyter Report Engine. In the popup, you'll see the default options and can adjust them if needed:
    • Language: Python
    • Language Version: optional
    • File extension: ipynb (don't include the . before the extension)
    • Remote URL: enter the URL of your service endpoint here
    • Enabled: checked
  • Click Submit to save.

You can now see the Jupyter Reports menu option in the Data Views web part and on the grid menu where you created it.

Obtain Report .ipynb from Jupyter Notebooks

Within Jupyter Notebooks, you can now open and author your report. You can use the LabKey Python API to access data and craft the report you want. One way to get started is to export the desired data from LabKey in Python script form, then use that script to obtain the data via the API.

Use this script as the basis for your Jupyter Notebook report.

When ready, you'll "export" the report by simply saving it as an .ipynb file. To save to a location where you can easily find it for importing to LabKey, choose 'Save As' and specify the desired location.

Create Jupyter Report on LabKey Server

From any data grid, or from the Data Views web part, you can add a new Jupyter Report. You'll see a popup asking whether you want to Import From File or Start with Blank Report; both options are described below.

Import From File

Click Import From File in the popup and browse to select the .ipynb file to open. You'll see your report text on the Source tab of the Jupyter Report Builder.

Click Save to save this report on LabKey. Enter a report name and click OK. Now your report will be accessible from the menu.

Start with Blank Report

The blank report option offers a basic wrapper for building your report.

Jupyter Report Builder

The Report Builder offers tabs for:

  • Report: See the report.
  • Data: View the data on which the report is based.
  • Source: Script source, including Save and Cancel for the report. Options are also available here:
    • Make this report available to all users
    • Show source tab to all users
    • Make this report available in child folders
  • Help: Details about the report configuration options available.

Help Tab - Report Config Properties

When a Jupyter Report is executed, a config file is generated and populated with properties that may be useful to report authors in their script code. The file is written in JSON to report_config.json. A helper utility, ReportConfig.py, is included in the nbconfig image. The class contains functions that will parse the generated file and return configured properties to your script. An example of the code you could use in your script:

from ReportConfig import get_report_api_wrapper, get_report_data, get_report_parameters

# Print the report data and parameters that LabKey passed in via report_config.json
print(get_report_data())
print(get_report_parameters())

This is an example of a configuration file and the properties that are included.

{
  "baseUrl": "http://localhost:8080",
  "contextPath": "/labkey",
  "scriptName": "myReport.ipynb",
  "containerPath": "/my studies/demo",
  "parameters": [
    ["pageId", "study.DATA_ANALYSIS"],
    ["reportType", "ReportService.ipynbReport"],
    ["redirectUrl", "/labkey/my%20studies/demo/project-begin.view?pageId=study.DATA_ANALYSIS"],
    ["reportId", "DB:155"]
  ],
  "version": 1
}

Export Jupyter Report from LabKey Server

The Jupyter report in LabKey can also be exported as an .ipynb file for use elsewhere. Open the report, choose the Source tab, and under Jupyter Report Options, click Export.

Related Topics




Report Web Part: Display a Report or Chart


Displaying a report or chart alongside other content helps you highlight visualizations of important results. There are a number of ways to do this, including:

Display a Single Report

To display a report on a page:

  • Enter (Admin) > Page Admin Mode.
  • Click Add Web Part in the lower left, select Report, and click Add.
  • On the Customize Report page, enter the following parameters:
    • Web Part Title: This is the title that will be displayed in the web part.
    • Report or Chart: Select the report or chart to display.
    • Show Tabs: Some reports may be rendered with multiple tabs showing.
    • Visible Report Sections: Some reports contain multiple sections, such as: images, text, console output. If a list is offered, you can select which section(s) to display by selecting them. If you are displaying an R Report, the sections are identified by the section names from the source script.
  • Click Submit.


Change Report Web Part Settings

You can reopen the Customize Report page later to change the name or how it appears.

  • Select Customize from the (triangle) menu.
  • Make the desired changes, then click Submit.

Options available (not applicable to all reports):

  • Show Tabs: Some reports may be rendered with multiple tabs showing. Select this option to only show the primary view.
  • Visible Report Sections: Some reports contain multiple sections such as: images, text, console output. For these you can select which section(s) to display by selecting them from the list.

Related Topics




Data Views Browser


The Data Views web part displays a catalog of available reports and custom named grid views. This provides a convenient dashboard for selecting among the available ways to view data in a given folder or project. In a Study the Data Views web part also includes datasets. It is shown on the Clinical and Assay Data tab by default, but can be added to other tabs or pages as needed.

By default, the Data Views web part lists all the custom grid views, reports, etc. you have permission to read. If you would like to view only the subset of items you created yourself, click the Mine checkbox in the upper right.

Add and Manage Content

Users with sufficient permission can add new content and make some changes using the menu in the corner of the web part. Administrators have additional options, detailed below.

  • Add Report: Click for a submenu of report types to add.
  • Add Chart: Use the plot editor to add new visualizations.
  • Manage Views: Manage reports and custom grids, including the option to delete multiple items at once. Your permissions will determine your options on this page.
  • Manage Notifications: Subscribe to receive notifications of report and dataset changes.

Toggle Edit Mode

Users with Editor (or higher) permissions can edit metadata about items in the data browser.

  • Click the pencil icon in the web part border to toggle edit mode. Individual pencil icons show which items have metadata you can edit here.
  • When active, click the pencil icon for the desired item.
  • Edit Properties, such as status, author, visibility to others, etc.
  • If you want to move the item to a different section of the web part, select a different Category.
  • If there are associated thumbnails and mini-icons, you can customize them from the Images tab. See Manage Thumbnail Images for more information.
  • Click Save.

Notice that there are three dates associated with reports: the creation date, the date the report itself was last modified, and the date the content of the report was last modified.

Data Views Web Part Options

An administrator adds the Data Views web part to a page:

  • Enter (Admin) > Page Admin Mode.
  • Select Data Views from the <Select Web Part> pulldown in the lower left.
  • Click Add.

The menu in the upper corner of the web part gives admins a few additional options:

  • Manage Datasets: Create and manage study datasets.
  • Manage Queries: Open the query schema browser.
  • Customize: Customize this web part.
  • Permissions: Control what permissions a user must have to see this web part.
  • Move Up/Down: Change the sequence of web parts on the page. (Only available when an admin is in page admin mode).
  • Remove From Page: No longer show this web part - note that the underlying data is not affected by removing the web part. (Only available when an admin is in page admin mode).
  • Hide Frame: Remove the header and border of the web part. (Only available when an admin is in page admin mode).

Customize the Data Views Browser

Administrators can also customize display parameters within the web part.

Select Customize from the triangle pulldown menu to open the customize panel:

You can change the following:

  • Name: the heading of the web part (the default is "Data Views").
  • Display Height: adjust the size of the web part. Options:
    • Default (dynamic): by default, the data views browser is dynamically sized to fit the number of items displayed, up to a maximum of 700px.
    • Custom: enter the desired web part height. Must be between 200 and 3000 pixels.
  • Sort: select an option:
    • By Display Order: (Default) The order items are returned from the database.
    • Alphabetical: Alphabetize items within categories; categories are explicitly ordered.
  • View Types: Check or uncheck boxes to control which items will be displayed. Details are below.
  • Visible Columns: Check and uncheck boxes to control which columns appear in the web part.
  • Manage Categories: Click to define and use categories and subcategories for grouping.
  • To close the Customize box, select Save or Cancel.

Show/Hide View Types

In the Customize panel, you can check or uncheck the View Types to control what is displayed.

  • Datasets: All study datasets, unless the Visibility property is set to Hidden.
  • Queries: Customized named grid views on any dataset.
    • Note that SQL queries defined in the UI or included in modules are not shown when this option is checked.
    • A named grid view created on a Hidden dataset will still show up in the data views browser when "queries" are shown.
    • Named grid views are categorized with the dataset they were created from.
    • You can also create a custom XML-based query view in a file-based module using a .qview.xml file. To show it in the Data Views web part, use the showInDataViews property and enable the module in the study where you want to use it.
  • Queries (inherited): This category will show grid views defined in a parent container with the inherited box checked.
  • Reports: All reports in the container.
    • To show a query in the data views web part, create a Query Report based on it.
    • It is good practice to name such a report the same as the query and include details in the description field for later retrieval.

Related Topics




Query Snapshots


A query snapshot captures a data query at a moment in time. Even if the original source data is updated, the snapshot remains fixed unless it is refreshed, either manually or on an automatic schedule. If you choose automatic refresh, the system listens for changes to the original data and updates the snapshot within the time interval you select.

Snapshotting data in this fashion is only available for:

  • Study datasets
  • Linked datasets from assays and sample types
  • User-defined SQL queries
Queries exposed by linked schemas are not available for snapshotting.

Create a Query Snapshot

  • Go to the query, grid, or dataset you wish to snapshot.
  • Select (Charts/Reports) > Create Query Snapshot.
  • Name the snapshot (the default name appends the word "Snapshot" to the name of the grid you are viewing).
  • Specify Manual or Automatic Refresh. For automatic refresh, select the frequency from the dropdown. When data changes, the snapshot will be updated within the selected interval of time. Options:
    • 30 seconds
    • 1 minute
    • 5 minutes
    • 10 minutes
    • 30 minutes
    • 1 hour
    • 2 hours
  • If you want to edit the properties or fields in the snapshot, click Edit Dataset Definition to use the Dataset Designer before creating your snapshot.
    • Be sure to save (or cancel) your changes in the dataset designer to return to the snapshot creation UI.
  • Click Create Snapshot.

View a Query Snapshot

Once a query snapshot has been created, it is available in the data browser and at (Admin) > Manage Study > Manage Datasets.

Edit a Query Snapshot

The fields for a query snapshot, as well as the refresh policy and frequency, can be edited starting from the grid view.

  • Select (Grid Views) > Edit Snapshot.
  • You can see the name and query source here, but cannot edit them.
  • If you want to edit the fields in the snapshot (such as to change a column label), click Edit Dataset Definition to use the Dataset Designer. You cannot change the snapshot name itself. Be sure to save (or cancel) your changes in the dataset designer to return to the snapshot editor.
  • Change the Snapshot Refresh settings as needed.
  • Click Update Snapshot to manually refresh the data now.
  • Click Save to save changes to refresh settings.
  • Click Done to exit the edit interface.

The Edit Snapshot interface also provides more options:

  • Delete Snapshot
  • Show History: See the history of this snapshot.
  • Edit Dataset Definition: Edit fields and their properties for this snapshot. Note that you must either save or cancel your changes in the dataset designer in order to return to the snapshot editor.

Troubleshooting

If a snapshot with the desired name already exists, you may see an error similar to:

ERROR ExceptionUtil 2022-05-13T16:28:45,322 ps-jsse-nio-8443-exec-10 : Additional exception info:
org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "uq_querydef"
Detail: Key (container, schema, name)=(78b1db6c-60cb-1035-a43f-28cd6c37c23c, study, Query_Snapshot_Name) already exists.

If you believe this snapshot does not exist, but are unable to find it in the user interface or schema browser, it may need to be deleted directly from the database. Contact your Account Manager if you need assistance.

Related Topics




Attachment Reports


Attachment reports enable you to upload and attach stand-alone documents, such as PDF, Word, or Excel files. This gives you a flexible way to interconnect your information.

You can create a report or visualization using a statistical or reporting tool outside of LabKey, then upload the report directly from your local machine, or point to a file elsewhere on the server.

Add an Attachment Report

To upload an attachment report, follow these steps:

  • Create the desired report and save it to your local machine.
  • Open the (triangle) menu in the Data Views web part.
  • Select Add Report > Attachment Report.
  • Provide the name, date, etc. for the report.
  • Upload the report from your local machine (or point to a document already on the server).

Once the file is uploaded it will be shown in the data browser. If you specify that it is to be shared, other users can view and download it.

If the report was saved in the external application with an embedded JPEG thumbnail, LabKey Server can in some cases extract that and use it as a preview in the user interface. See Manage Thumbnail Images for more information.

Related Topics




Link Reports


Add links to external resources using a Link Report. This flexible option allows you to easily connect your data to other resources.

Create a Link Report

  • Open the (triangle) menu in the Data Views web part.
  • Select Add Report > Link Report.
  • Complete the form. Link to an external or internal resource. For example, link to an external website or to a page within the same LabKey Server.

Related Topics




Participant Reports


With a LabKey study, creating a participant report lets you show data for one or more individual participants for selected measures. Measures from different datasets in the study can be combined in a single report. A filter panel lets you dynamically change which participants appear in the report.

Create a Participant Report

  • In a study, select (Admin) > Manage Views.
  • Choose Add Report > Participant Report.
  • Click Choose Measures, select one or more measures, then click Select.
  • Enter a Report Name and optional Report Description.
  • Select whether this report will be Viewable By all readers or only yourself.
  • When you first create a report, you will be in "edit mode" and can change your set of chosen measures. Below the selection panel, you will see partial results.
    • Toggle the edit panel by clicking the (pencil) icon at the top of the report to see more results; you may reopen it at any time to further edit or save the report.
  • Click the Filter Report (chevron) button to open the filter panel to refine which participants appear in the report.
  • Select desired filters using the radio buttons and checkboxes. You may hide the filter panel with the chevron button; if you instead click the X to close it entirely, a Filter Report link will appear on the report menu bar.
  • Click the Transpose button to flip the columns and rows in the generated tables, so that columns are displayed as rows and vice versa.
  • When you have created the report you want, click Save.
  • Your new report will appear in the Data Views web part.

Export to Excel File

  • Select Export > To Excel.

Add the Participant Report as a Web Part

  • Enter (Admin) > Page Admin Mode.
  • Select Report from the <Select Web Part> pulldown in the lower left and click Add.
  • Name the web part, and select the participant report you created above.
  • Click Submit.
  • Click Exit Admin Mode.

Related Topics




Query Reports


Query reports let you package a query or grid view as a report. This can enable sharing the report with a different audience or presenting the query as of a specific data cut date. A user needs the "Author" role or higher to create a query report. The report also requires that the target query already exists.

Create a Query Report

  • Select (Admin) > Manage Views.
  • Select Add Report > Query Report.
Complete the form, providing:
  • Name (Required): The report name.
    • Consider naming the report for the query (or grid view) it is "reporting" to make it clearer to the user.
  • Author: Select from all project users listed in the dropdown.
  • Status: Choose one of "None, Draft, Final, Locked, Unlocked".
  • Data Cut Date: Specify a date if desired.
  • Category: If you are creating a report in a study, you can select an existing category.
  • Description: Optional text description.
    • Consider including details like the origin of the report (schema/query/view) so that you can easily search for them later.
  • Shared: Check the box if you want to share this report with other users.
  • Schema (Required): The schema containing the desired query. This choice will populate the Query dropdown.
  • Query (Required): Select a query from those in the selected schema. This will populate the View dropdown.
  • View: If there are multiple views defined on the selected query, you'll be able to choose one here.
  • Click Save when finished.

You can customize the thumbnail and mini-icon displayed with your Query Report. Learn more here.

Use and Share Query Reports

Your report is now available for display in a web part or wiki, or sharing with others. Query Reports can be assigned report-specific permissions in a study following the instructions in this topic.

In a study, your report will appear in the Data Views web part under the selected category (or as "Uncategorized"). If you want to hide a Query Report from this web part, you can edit the view metadata to set visibility to "Hidden".

Manage Query Reports

Metadata including the name, visibility, data cut date, category and description of a Query Report can all be edited through the manage views interface.

To see the details of the schema, query, and view used to create the report, you can view the Report Debug Information page. Navigate to the report itself; you will see a URL like the following, where ### is the ID number for your report:

/reports-renderQueryReport.view?reportId=db%3A###

Edit the URL to replace renderQueryReport.view with reportInfo.view, like so:

/reports-reportInfo.view?reportId=db%3A###

Related Topics




Manage Data Views


Reports, charts, datasets, and customized data grids are all ways to view data in a folder and can be displayed in a Data Views web part. Within a study, this panel is displayed on the Clinical and Assay Data tab by default, and can be customized or displayed in other places as needed. This topic describes how to manage these "data views".

Manage Views

Select (Admin) > Manage Views in any folder. From the Data Views web part, you can also select it from the (triangle) pulldown menu.

The Manage Views page displays all the views, queries, and reports available within a folder. This page allows editing of metadata as well as deletion of multiple items in one action.

  • A row of links is provided for adding, managing, and deleting views and attributes like categories and notifications.
  • Filter by typing part of the name, category, type, author, etc. in the box above the grid.
  • By default you will see all queries and reports you can edit. If you want to view only items you created yourself, click the Mine checkbox in the upper right.
  • Hover over the name of an item on the list to see a few details, including the type, creator, status, and an optional thumbnail.
  • Click on the name to open the item.
  • Click a Details link to see more metadata details.
  • Notice the (pencil) icons to the right of charts, reports, and named grids. Click one to edit the metadata for the item.
  • When managing views within a study, you can click an active link in the Access column to customize permissions for the given visualization. "Public" in this column refers to the item being readable by users with at least Read access to the container, not to the public at large unless that is separately configured. For details see Configure Permissions for Reports & Views.

View Details

Hover over a row to view the source and type of a visualization, with a customizable thumbnail image.

Clicking the Details icon for a report or chart opens the Report Details page with the full list of current metadata. The details icon for a query or named view will open the view itself.

Modification Dates

There are two modification dates associated with each report, allowing you to differentiate between report property and content changes:

  • Modified: the date the report was last modified.
    • Name, description, author, category, thumbnail image, etc.
  • Content Modified: the date the content of the report was modified.
    • Underlying script, attachment, link, chart settings, etc.
The details of what constitutes content modification are report-specific:
  • Attachment Report:
    • Report type (local vs. server) changed
    • Server file path updated
    • New file attached
  • Box Plot, Scatter Plot, Time Chart:
    • Report configuration change (measure selection, grouping, display, etc.)
  • Link Report:
    • URL changed
  • Script Reports including JavaScript and R Reports:
    • Change to the report code (JavaScript, R, etc.)
  • Flow Reports (Positivity and QC):
    • Change to any of the filter values
The following report types do not change the ContentModified date after creation: Crosstab View, DataReport, External Report, Query Report, Chart Reports, Chart View.

Edit View Metadata

Click the pencil icon next to any row to edit metadata to provide additional information about when, how, and why the view or report was created. You can also customize how the item is displayed in the data views panel.

View Properties

Click the pencil icon to open a popup window for editing visualization metadata. On the Properties tab:

  • Modify the Name and Description fields.
  • Select Author, Status, and Category from pulldown lists of valid values. For more about categories, see Manage Categories.
  • Choose a Data Cut Date from the calendar.
  • Check whether to share this report with all users with access to the folder.
  • Click Save.

The Images tab is where you modify thumbnails and mini-icons used for the report.

You could also delete this visualization from the Properties tab by clicking Delete. This action is confirmed before the view is actually deleted.

View Thumbnails and Mini-icons

When a visualization is created, a default thumbnail is auto-generated and a mini-icon based on the report type is associated with it. You can see and update these on the Images tab. Learn more about using and customizing these images in this topic:

Reorder Reports and Charts

To rearrange the display order of reports and charts, click Reorder Reports and Charts. Users without administrator permissions will not see this button or be able to access this feature.

Click the heading "Reports and Charts" to toggle sorting alphabetically ascending or descending. You can also drag and drop to arrange items in any order.

When the organization is correct, click Done.

File-based reports can be moved within the dialog box, but the ordering will not actually change until you make changes to their XML.

Delete Views and Reports

Select any row by clicking an area that is not a link. You can use Shift and Ctrl to multi-select several rows at once. Then click Delete Selected. You will be prompted to confirm the list of the items that will be deleted.

Manage Study Notifications

Users can subscribe to a daily digest of changes to reports and datasets in a study. Learn more in this topic: Manage Study Notifications

Related Topics




Manage Study Notifications


If you want to receive email notifications when the reports and/or datasets in a study are updated, you can subscribe to a daily digest of changes. These notifications are similar to email notifications for messages and file changes at the folder level, but allow finer control of which changes trigger notification.

Manage Study Notifications

  • Select Manage Notifications from the (triangle) menu in the Data Views web part.
    • You can also select (Admin) > Manage Views, then click Manage Notifications.
  • Select the desired options:
    • None. (Default)
    • All changes: Your daily digest will list changes and additions to all reports and datasets in this study.
    • By category: Your daily digest will list changes and additions to reports and datasets in the subscribed categories.
    • By dataset: Your daily digest will list changes and additions to subscribed datasets. Note that reports for those datasets are not included in this option.
  • Click Save.

You will receive your first daily digest of notifications at the next system maintenance interval, typically overnight. By default, the notification includes the list of updated reports and/or datasets including links to each one.

Subscribe by Category

You can subscribe to notifications of changes to both reports and datasets by grouping them into categories of interest. If you want to allow subscription to notifications for a single report, create a singleton subcategory for it.

Select the By Category option, then click checkboxes under Subscribe for the categories and subcategories you want to receive notifications about.

Subscribe by Dataset

To subscribe only to notifications about specific datasets, select the By dataset option and use checkboxes to subscribe to the datasets of interest.

Note that reports based on these datasets are not included when you subscribe by dataset.

Notification Triggers

The following table describes which data view types and which changes trigger notifications:

| | Data Insert/Update/Delete | Design Change | Sharing Status Change | Category Change | Display Order Change |
| --- | --- | --- | --- | --- | --- |
| Datasets (including linked Datasets) | Yes | Yes | Yes | Yes | Yes |
| Query Snapshot | Yes (notification occurs when the snapshot is refreshed) | Yes | Yes | Yes | Yes |
| Report (R, JavaScript, etc.) | No | Yes | Yes | Yes | Yes |
| Chart (Bar, Box Plot, etc.) | No | Yes | Yes | Yes | Yes |

Datasets:

  • Changes to both the design and the underlying data will trigger notifications.
Reports:
  • Reports must be both visible and shared to trigger notifications.
  • Only changes to the definition or design of the report, such as changes to the R or JS code, will trigger notifications.
Notifications are generated for all items when their sharing status is changed, their category is changed, or when their display order is changed.

Customize Email Notification

By default, the notification includes the list of updated reports and/or datasets including links to each one. The email notification does not describe the nature of the change, only that some change has occurred.

The template for these notifications may be customized at the site-level, as described in Email Template Customization.

Related Topics




Manage Categories


In the Data Views web part, reports, visualizations, queries, and datasets can be sorted by categories and subcategories that an administrator defines. Users may also subscribe to notifications by category.

Categories can be defined from the Manage Categories page, during dataset creation or editing, or while linking data into a study from an assay or sample type.

Define Categories

Manage Categories Interface

  • In your study, select (Admin) > Manage Views.
  • Click Manage Categories to open the categories pop-up.

Click New Category to add a category; click the X to delete one, and drag and drop to reorganize sections of reports and grid views.

To see subcategories, select a category in the popup. Click New Subcategory to add new ones. Drag and drop to reorder. Click Done in the category popup when finished.

Dataset Designer Interface

During creation or edit of a dataset definition, you can add new categories by typing the new category into the Category box. Click Create option "[ENTERED_NAME]" to create the category and assign the current dataset to it.

Assign Items to Categories

To assign datasets, reports, charts, etc. to categories and subcategories, click the (pencil) icon on the manage views page, or in "edit" mode of the data browser. The Category menu will list all available categories and subcategories. Make your choice and click Save.

To assign datasets to categories, use the Category field on the dataset properties page. Existing categories will be shown on the dropdown.

Hide a Category

Categories are only shown in the data browser or dataset properties designer when there are items assigned to them. Assigning all items to a different category or marking them unassigned will hide the category. You can also mark each item assigned within it as "hidden". See Data Views Browser.

Categorize Linked Assay and Sample Data

When you link assay or sample data into a study, a corresponding Dataset is created (or appended to if it already exists).

During the manual link to study process, or when you add auto-linking to the definition of the assay or sample type, you can Specify Linked Dataset Category.

  • If the study dataset you are linking to already exists and already has a category assignment, this setting will not override the previous categorization. You can manually change the dataset's category directly.
  • If you are creating a new linked dataset, it will be added to the category you specify.
  • If the category does not exist, it will be created.
  • Leave blank to use the default "Uncategorized" category. You can manually assign or change a category later.
Learn more about linking assays and samples to studies in these topics:

Related Topics




Manage Thumbnail Images


When a visualization is created, a default thumbnail is automatically generated and a mini-icon based on the report or chart type is associated with it. These are displayed in the data views web part. You can customize both to give your users a better visual indication of what the given report or chart contains. For example, rather than have all of your R reports show the default R logo, you could provide different mini-icons for different types of content that will be more meaningful to your users.

Attachment Reports offer the additional option to extract the thumbnail image directly from some types of documents, instead of using an auto-generated default.

View and Customize Thumbnails and Mini-icons

To view and customize images:
  • Enter Edit Mode by clicking the pencil icon in the data views browser or on the manage views page.
  • Click the pencil icon for any visualization to open the window for editing metadata.
  • Click the Images tab. The current thumbnail and mini-icon are displayed, along with the option to upload different ones from your local machine.
    • A thumbnail image will be scaled to 250 pixels high.
    • A mini-icon will be scaled to 18x18 pixels.
  • The trash can button deletes the default generated thumbnail image, replacing it with a generic image.
  • If you have customized the thumbnail, the trash can button deletes the custom image and restores the default generated thumbnail.
  • Click Save to save any changes you make.

You may need to refresh your browser after updating thumbnails and icons. If you later change and resave the visualization, or export and reimport it with a folder or study, the custom thumbnails and mini-icons will remain associated with it unless you explicitly change them again.

Extract Thumbnails from Documents

An Attachment Report is created by uploading an external document. Some documents can have embedded thumbnails included, and LabKey Server can in some cases extract those thumbnails to associate with the attachment report.

The external application, such as Word, Excel, or PowerPoint, must have the "Save Thumbnail" option set to save the thumbnail of the first page as an extractable jpeg image. When the Open Office XML format file (.docx, .pptx, .xlsx) for an attachment report contains such an image, LabKey Server will extract it from the uploaded file and use it as the thumbnail.

Thumbnails in older binary formats (.doc, .ppt, .xls), and images in other formats such as EMF or WMF, will not be extracted; instead the attachment report will use the default auto-generated thumbnail image.

Related Topics




Measure and Dimension Columns


Measures are values that functions work on, generally numeric values such as instrument readings. Dimensions are qualitative properties and categories, such as cohort, that can be used in filtering or grouping those measures. LabKey Server can be configured such that only specific columns marked as data "measures" or "dimensions" are offered for charting. When this option is disabled, all numeric columns are available for charting and any column can be included in filtering.

This topic explains how to mark columns as measures and dimensions, as well as how to enable and disable the restriction.

Definitions:

  • Dimension: A column of non-numerical categories that can be included in a chart, such as for grouping into box plots or bar charts. Cohort and country are examples of dimensions.
  • Measure: A column with numerical data. Instrument readings, viral loads, and weight are all examples of measures.
Note: Text columns that include numeric values can also be marked as measures. For instance, a text column that includes a mix of integers and some entries of "<1" to represent values below the lower limit of quantitation (LLOQ) could be plotted ignoring the non-numeric entries. The server will make a best effort to convert the data to numeric values and will display a message about the number of values that cannot be converted.

If your server restricts charting to only measures and dimensions, you have two options: (1) either mark the desired column as a measure/dimension or (2) turn off the restriction.

Mark a Column as a Measure or Dimension

Use the Field Editor, Advanced Settings to mark columns. The method for opening the field editor varies based on the data structure type, detailed in the topic: Field Editor.

  • Open the field editor.
  • Expand the field you want to mark.
  • Click Advanced Settings.
  • Check the box or boxes desired:
    • "Make this field available as a measure"
    • "Make this field available as a dimension"
  • Click Apply, then Finish to save the changes.

Turn On/Off Restricting Charting to Measures and Dimensions

Note that you must have administrator permissions to change these settings. You can control this option at the site or project level.

  • To locate this control at the site level:
    • Select (Admin) > Site > Admin Console.
    • Under Configuration, click Look and Feel Settings.
  • To locate this control at the project level:
    • Select (Admin) > Folder > Project Settings.
  • Confirm that you are viewing the Properties tab (it opens by default).
  • Scroll down to Restrict charting columns by measure and dimension flags.
  • Check or uncheck the box as desired.
  • Click Save.

Related Topics




Visualizations



You can visualize, analyze and display data using a range of plotting and reporting tools. The right kind of image can illuminate scientific insights in your results far more easily than words and numbers alone. The topics in this section describe how to create different kinds of chart using the common plot editor.

Plot Editor

When viewing a data grid, select (Charts) > Create Chart to open the plot editor, where you can create new bar charts, box plots, line plots, pie charts, scatter plots, and time charts.

Column Visualizations

To generate a quick visualization of a given column in a dataset, select an option from the column header.

Open Saved Visualizations

Once created and saved, visualizations will be re-generated by re-running their associated scripts on live data. You can access saved visualizations either through the (Reports or Charts) pulldown menu on the associated data grid, or directly by clicking on the name in the Data Views web part.

Related Topics




Bar Charts


A bar plot is a visualization comparing measurements of numeric values across categories. The relative heights of the bars show how the measured value compares across groups, such as cohorts in a study.

Create a Bar Chart

  • Navigate to the data grid you want to visualize.
  • Select (Charts) > Create Chart to open the editor. Click Bar (it is selected by default).
  • The columns eligible for charting from your current grid view are listed.
  • Select the column of data to use for separating the data into bars and drag it to the X Axis Categories box.
  • Only the X Axis Categories field is required to create a basic bar chart. By default, the height of each bar shows the count of rows matching each value in the chosen category, such as the number of participants from each country.
  • To use a different metric for bar height, select another column and drag it to the box for the Y Axis column. Notice that you can select the aggregate method to use. By default, SUM is selected and the label reads "Sum of [field name]". If you change it to "Mean", the Y axis label updates automatically.
  • Click Apply.

Bar Chart Customizations

  • To remove various values from your chart, such as if your data includes a large number of "blank" values:
    • Click View Data.
    • Click the relevant column header, then select Filter.
    • Click the checkbox for "Blank" to deselect it.
    • Click OK in the popup.
    • Click View Chart to return to the chart which is re-calculated without the data you filtered out.

To make a grouped bar chart, we'll add data from another column.

  • Click Chart Type to reopen the creation dialog.
  • Drag a column to the Split Categories By selection box.
  • Click Apply to see grouped bars. The "Split" category is now shown along the X axis with a colored bar for each value in the "X Axis Categories" selection chosen earlier. A legend shows the color map.
  • Further customize your visualization using the Chart Type and Chart Layout links in the upper right.
  • Chart Type reopens the creation dialog allowing you to:
    • Change the "X Axis Categories" column (hover and click the X to delete the current election).
    • Remove or change the Y Axis metric, the "Split Categories By" column, or the aggregation method.
    • You can also drag and drop columns between selection boxes to change how each is used.
    • Note that you can also click another chart type on the left to switch how you visualize the data with the same axes when practical.
    • Click Apply to update the chart with the selected changes.

Change Layout

  • Chart Layout offers the ability to change the look and feel of your chart.

There are 3 tabs:

    • General:
      • Provide a Title to show above your chart. By default, the dataset name is used; at any time you can return to this default by clicking the (refresh) icon for the field.
      • Provide a Subtitle to print under the chart title.
      • Specify the width and height.
      • You can also customize the opacity, line width, and line color for the bars.
      • Select one of three palettes for bar fill colors: Light, Dark, or Alternate. The array of colors is shown.
    • Margins (px): If the default chart margins cause axis labels to overlap, or you want to adjust them for other reasons, you can specify them explicitly in pixels. Specify any one or all of the top, bottom, left, and right margins. See an example here.
    • X-Axis/Y-Axis:
      • Label: Change the display labels for the axis (notice this does not change which column provides the data). Click the (refresh) icon to restore the original label based on the column name.
      • For the Y-axis, the Range shown can also be specified - the default is Automatic across charts. Select Automatic Within Chart to have the range based only on this chart. You can also select Manual and specify the min and max values directly.
  • Click Apply to update the chart with the selected changes.

Save and Export Charts

  • When your chart is ready, click Save.
  • Name the chart, enter a description (optional), and choose whether to make it viewable by others. You will also see the default thumbnail which has been auto-generated, and can choose whether to use it. As with other charts, you can later attach a custom thumbnail if desired.

Once you have created a bar chart, it will appear in the Data Browser and on the (charts) menu for the source dataset. You can manage metadata about it as described in Manage Data Views.

Export Chart

Hover over the chart to reveal export option buttons in the upper right corner:

Export your completed chart by clicking an option:

  • PDF: generate a PDF file.
  • PNG: create a PNG image.
  • Script: pop up a window displaying the JavaScript for the chart which you can then copy and paste into a wiki. See Tutorial: Visualizations in JavaScript for a tutorial on this feature.


Related Topics




Box Plots


A box plot, or box-and-whisker plot, is a graphical representation of the range of variability for a measurement. The central quartiles (25% to 75% of the full range of values) are shown as a box, and there are line extensions ('whiskers') representing the outer quartiles. Outlying values are typically shown as individual points.

Create a Box Plot

  • Navigate to the data grid you want to visualize.
  • Select (Charts) > Create Chart to open the editor. Click Box.
  • The columns eligible for charting from your current grid view are listed.
  • Select the column to use on the Y axis and drag it to the Y Axis box.

Only the Y Axis field is required to create a basic single-box plot, but there are additional options.

  • Select another column and choose how to use this column:
    • X Axis Categories: Create a plot with multiple boxes along the x-axis, one per value in the selected column.
    • Color: Display values in the plot with a different color for each column value. Useful when displaying all points or displaying outliers as points.
    • Shape: Change the shape of points based on the value in the selected column. 5 different shapes are available.
  • Here we make it the X-Axis Category and click Apply to see a box plot for each cohort.
  • Click View Data to see, filter, or export the underlying data.
  • Click View Chart to return. If you applied any filters, you would see them immediately reflected in the plot.

Box Plot Customizations

  • Customize your visualization using the Chart Type and Chart Layout links in the upper right.
  • Chart Type reopens the creation dialog allowing you to:
    • Change any column selection (hover and click the X to delete the current selection). You can also drag and drop columns between selection boxes to change positions.
    • Add new columns, such as to group points by color and shape. Don't forget to change the layout as described below to fully see these changes.
    • Click Apply to see your changes and switch dialogs.
  • Chart Layout offers options to change the look of your chart, including these changes to make our color and shape distinctions clearer:
    • Set Show Points to All.
    • Check Jitter Points to spread the points out horizontally.
    • Click Apply to update the chart with the selected changes.
  • Below we see a plot with all data shown as points, jittered to spread them out and show the different colors and shapes of points. Notice the legend in the upper right. Hover over any point for details about it.
  • You may also notice that the outline of the overall box plot has not changed from the basic fill version shown above. This enhanced chart is giving additional information without losing the big picture of the basic plot.

Change Layout

  • Chart Layout offers the ability to change the look and feel of your chart.

There are 4 tabs:

  • General:
    • Provide a Title to show above your plot. By default, the dataset name is used, and you can return to this default at any time by clicking the (refresh) icon.
    • Provide a Subtitle to show below the title.
    • Specify the width and height.
    • Elect whether to display single points for all data, only for outliers, or not at all.
    • Check the box to jitter points.
    • You can also customize the colors, opacity, width and fill for points or lines.
    • Margins (px): If the default chart margins cause axis labels to overlap, or you want to adjust them for other reasons, you can specify them explicitly in pixels. Specify any one or all of the top, bottom, left, and right margins. See an example here.
  • X-Axis:
    • Label: Change the display label for the X axis (notice this does not change which column provides the data). Click the icon to restore the original label based on the column name.
  • Y-Axis:
    • Label: Change the display label for the Y axis as for the X axis.
    • Scale Type: Choose log or linear scale for the Y axis.
    • Range: Let the range be determined automatically or specify a manual range (min/max values) for the Y axis.
  • Developer: Only available to users that have the "Platform Developers" site role.
    • A developer can provide a JavaScript function that will be called when a data point in the chart is clicked.
    • Provide Source and click Enable to enable it.
    • Click the Help tab for more information on the parameters available to such a function.
    • Learn more below.
  • Click Apply to update the chart with the selected changes.

Save and Export Plots

  • When your chart is ready, click Save.
  • Name the plot, enter a description (optional), and choose whether to make it viewable by others. You will also see the default thumbnail which has been auto-generated. You can elect None. As with other charts, you can later attach a custom thumbnail if desired.

Once you have created a box plot, it will appear in the Data Browser and on the (charts) menu for the source dataset. You can manage metadata about it as described in Manage Data Views.

Export Chart

Hover over the chart to reveal export option buttons in the upper right corner:

Export your completed chart by clicking an option:

  • PDF: generate a PDF file.
  • PNG: create a PNG image.
  • Script: pop up a window displaying the JavaScript for the chart which you can then copy and paste into a wiki. See Tutorial: Visualizations in JavaScript for a tutorial on this feature.

Rules Used to Render the Box Plot

The following rules are used to render the box plot. Hover over a box to see a pop-up.

  • Min/Max whiskers mark the highest and lowest data points still within 1.5 times the interquartile range (IQR) of the box.
  • Q1 marks the lower quartile boundary.
  • Q2 marks the median.
  • Q3 marks the upper quartile boundary.
  • Values outside of the range are considered outliers and are rendered as dots by default. The options and grouping menus offer you control of whether and how single dots are shown.
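
For example, if Q1 = 10 and Q3 = 20, then IQR = Q3 - Q1 = 10, so the whiskers extend to the lowest and highest observations within the range [Q1 - 1.5 × IQR, Q3 + 1.5 × IQR] = [-5, 35]; any value outside that range is rendered as an outlier dot.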

Developer Extensions

Developers (users with the "Platform Developers" role) can extend plots that display points to run a JavaScript function when a point is clicked. For example, it might show a widget of additional information about the specific data that point represents. Supported for box plots, scatter plots, line plots, and time charts.

To use this function, open the Chart Layout editor and click the Developer tab. Provide Source in the window provided, click Enable, then click Save to close the panel.

Click the Help tab to see the following information on the parameters available to such a function.

Your code should define a single function to be called when a data point in the chart is clicked. The function will be called with the following parameters:

  • data: the set of data values for the selected data point. Example:
    {
    YAxisMeasure: {displayValue: "250", value: 250},
    XAxisMeasure: {displayValue: "0.45", value: 0.45000},
    ColorMeasure: {value: "Color Value 1"},
    PointMeasure: {value: "Point Value 1"}
    }
  • measureInfo: the schema name, query name, and measure names selected for the plot. Example:
    {
    schemaName: "study",
    queryName: "Dataset1",
    yAxis: "YAxisMeasure",
    xAxis: "XAxisMeasure",
    colorName: "ColorMeasure",
    pointName: "PointMeasure"
    }
  • clickEvent: information from the browser about the click event (i.e. target, position, etc.)
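
Below is a minimal sketch of such a function, assuming the example measure names shown above; it simply reports the clicked point's coordinates and source query.

    function (data, measureInfo, clickEvent) {
        // Look up the clicked point's values using the measure names
        // passed in measureInfo (these match the example above).
        var x = data[measureInfo.xAxis].value;
        var y = data[measureInfo.yAxis].value;
        alert('Clicked (' + x + ', ' + y + ') from ' +
            measureInfo.schemaName + '.' + measureInfo.queryName);
    }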

Video

Related Topics




Line Plots


This topic is under construction for the 25.3 (March 2025) release. Trendlines are available beginning in 25.1. For the previous documentation of this feature, click here.

A line plot tracks the value of a measurement across a horizontal scale, typically time. It can be used to show trends for a single individual, or to compare an individual against other individuals or groups in the same plot.

Create a Line Plot

  • Navigate to the data grid you want to visualize. We will use the Lab Results dataset from the example study for this walkthrough.
  • Select (Charts) > Create Chart. Click Line.
  • The columns eligible for charting from your current grid view are listed.
  • Select the X Axis column by drag and drop, here "Date".
  • Select the Y Axis column by drag and drop, here "White Blood Count".
  • Leave the other fields unset/at their defaults for now.
  • Only the X and Y Axes are required to create a basic line plot. Other options will be explored below.
  • Click Apply to see the basic plot.

This basic line chart plots a point for every "Y-axis" value measured for each "X-axis" value, as in a scatter plot, then draws a line between them. When values for all participants are mixed together, the plot isn't necessarily useful. Next, we might want to separate by participant to see if any patterns emerge for individuals.

You may also notice that the labels for tick marks along the X axis overlap the "Date" label. We will fix that below after making other plot changes.

Line Plot Customizations

Customize your visualization using the Chart Type and Chart Layout links in the upper right.

  • Chart Type reopens the creation dialog allowing you to:
    • Change the X or Y Axis column (hover and click the X to delete the current selection).
    • Select a Series column (optional). The series measure is used to split the data into one line per distinct value in the column.
    • Change the type of Trendline if desired. Learn more below.
    • Note that you can also click another chart type on the left to switch how you visualize the data with the same axes when practical.
  • For this walkthrough, drag "Participant ID" to the Series box.
  • Click Apply.

Now the plot draws a line between values for the same subject, but the result is unusably dense. Let's filter to a subset of interest.

  • Click View Data to see and filter the underlying data.
  • Click the ParticipantID column header and select Filter.
    • Click the "All" checkbox in the popup to unselect all values. Then, check the boxes for the first 3 participants.
    • Click OK.
  • Click View Chart to return. Now there are 3 lines showing values for the 3 participants.

Add a Second Y Axis

To plot more data, you can add a second Y axis and display it on the right.

  • Click Chart Type to reopen the editor.
  • Drag the "CD4" column to the Y Axis box. Notice it becomes a second panel and does not replace the prior selection (Lymphs).
  • Click the (circle arrow) to set the Y Axis Side for this measure to be on the right.
  • Click Apply.
  • You can see the trend line for each measure for each participant in a single plot.

Change Chart Layout

The Chart Layout button offers the ability to change the look and feel of your chart.

There are four tabs:

  • General:
    • Provide a title to display on the plot. The default is the name of the source data grid.
    • Provide a subtitle to display under the title.
    • Specify a width and height.
    • Control the point size and opacity, as well as choose the default color, if no "Series" column is set.
    • Control the line width.
    • Hide Data Points: Check this box to display a simple line instead of showing shaped points for each value.
    • Number of Charts: Select whether to show a single chart, or a chart per measure, when multiple measures are defined.
    • Margins (px): If the default chart margins cause axis labels to overlap, or you want to adjust them for other reasons, you can specify them explicitly in pixels. Specify any one or all of the top, bottom, left, and right margins here.
  • X-Axis:
    • Label: Change the display label for the X axis (notice this does not change which column provides the data). Click the icon to restore the original label based on the column name.
  • Y-Axis:
    • Label: Change the display label for the Y axis as for the X axis.
    • Scale Type: Choose log or linear scale for the Y axis.
    • Range: For the Y-axis, the default is Automatic across charts. Select Automatic Within Chart to have the range based only on this chart. You can also select Manual and specify the min and max values directly.
  • Developer: Only available to users that have the "Platform Developers" site role.
    • A developer can provide a JavaScript function that will be called when a data point in the chart is clicked.
    • Provide Source and click Enable to enable it.
    • Click the Help tab for more information on the parameters available to such a function.
    • Learn more in this topic.

Adjust Chart Margins

When there are enough values on an axis that the values overlap the label, or if you want to adjust the margins of your chart for any reason, you can use the chart layout settings to set them. In our example, the date display is too long for the default margin (and overlaps the label) so before publishing, we can improve the look.

  • Observe the example chart where the date displays overlap the label "Date".
  • Open the chart for editing, then click Chart Layout.
  • Scroll down and set the bottom margin to 85 in this example.
    • You can also adjust the other margins as needed.
    • Note that plot defaults and the length of labels can both vary, so the specific setting your plot will need may not be 85.
  • Click Apply.
  • Click Save to save with the revised margin settings.

Trendline Options

The trendline shown in a line plot defaults to being point-to-point, and is adjustable to other options in some situations. Trendline options are available when creating a line plot in the LabKey Server interface, and conditionally available in the LabKey LIMS and Biologics LIMS applications when a numeric field is selected for the X axis.

Reopen the Chart Type editor to see if the option is available and select the desired trendline if so. Any data can use the first three basic types. The non-linear trendline options are only available for tables and queries in the assay schema.

  • Point-to-Point (default)
  • Linear Regression
  • Polynomial
  • Nonlinear 3PL
  • Nonlinear 3PL (Alternate)
  • Nonlinear 4PL
  • Nonlinear 4PL (Alternate)
  • Nonlinear 5PL

The same data presented with four different trendline options:

  • When a trendline type is selected, it will apply to each distinct series, or to all data points if no series variable is selected.
  • Nonlinear trendline options will conditionally show asymptote min/max inputs, when available.
  • Hovering over a trendline will show the stats and curve fit parameters.
  • Saving a line chart with a trendline type set will retain that selection and show it when the chart is rendered.
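
For reference, the nonlinear options are logistic curve fits. The standard four-parameter logistic (4PL) takes the form below (the exact parameterization LabKey uses may differ):

    y = d + (a - d) / (1 + (x / c)^b)

where a and d are the asymptotes at zero and infinite dose, c is the inflection point (EC50), and b is the slope factor. The 3PL variants fix one asymptote, and the 5PL variant adds an asymmetry exponent.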

Save and Export Plots

  • When your plot is finished, click Save.
  • Name the chart, enter a description (optional), and choose whether to make it viewable by others. You will also see the default thumbnail which has been auto-generated, and can choose whether to use it. As with other charts, you can later attach a custom thumbnail if desired.

Once you have saved a line plot, it will appear in the Data Browser and on the (charts) menu for the source dataset. You can manage metadata about it as described in Manage Data Views.

Export Chart

Hover over the chart to reveal export option buttons in the upper right corner:

Export your completed chart by clicking an option:

  • PDF: generate a PDF file.
  • PNG: create a PNG image.
  • Script: pop up a window displaying the JavaScript for the chart which you can then copy and paste into a wiki. See Tutorial: Visualizations in JavaScript for a tutorial on this feature.

Related Topics




Pie Charts


A pie chart shows the relative size of selected categories as different sized wedges of a circle or ring.

Create a Pie Chart

  • Navigate to the data grid you want to visualize.
  • Select (Charts) > Create Chart to open the editor. Click Pie.
  • The columns eligible for charting from your current grid view are listed.
  • Select the column to visualize and drag it to the Categories box.
  • Click Apply. The size of the pie wedges will reflect the count of rows for each unique value in the column selected.
  • Click View Data to see, filter, or export the underlying data.
  • Click View Chart to return. If you applied any filters, you would see them immediately reflected in the chart.

Pie Chart Customizations

  • Customize your visualization using the Chart Type and Chart Layout links in the upper right.
  • Chart Type reopens the creation dialog allowing you to:
    • Change the Categories column selection.
    • Note that you can also click another chart type on the left to switch how you visualize the data using the same selected columns when practical.
    • Click Apply to update the chart with the selected changes.

Change Layout

  • Chart Layout offers the ability to change the look and feel of your chart.
  • Customize any or all of the following options:
    • Provide a Title to show above your chart. By default, the dataset name is used.
    • Provide a Subtitle. By default, the categories column name is used. Note that changing this label does not change which column is used for wedge categories.
    • Specify the width and height.
    • Select a color palette. Options include Light, Dark, and Alternate. Mini squares showing the selected palette are displayed.
    • Customize the inner and outer radii of the pie chart to size the graph and, if desired, include a hollow center space.
    • Elect whether to show percentages within the wedges, choose the display color for them, and decide whether to hide those annotations when wedges are narrow. The default is to hide percentages under 5%.
    • Use the Gradient % slider and color to create a shaded look.
  • Click Apply to update the chart with the selected changes.

Save and Export Charts

  • When your chart is ready, click Save.
  • Name the chart, enter a description (optional), and choose whether to make it viewable by others. You will also see the default thumbnail which has been auto-generated, and can choose whether to use it. As with other charts, you can later attach a custom thumbnail if desired.

Once you have created a pie chart, it will appear in the Data Browser and on the (charts) menu for the source dataset. You can manage metadata about it as described in Manage Data Views.

Export Chart

Hover over the chart to reveal export option buttons in the upper right corner:

Export your completed chart by clicking an option:

  • PDF: generate a PDF file.
  • PNG: create a PNG image.
  • Script: pop up a window displaying the JavaScript for the chart which you can then copy and paste into a wiki. See Tutorial: Visualizations in JavaScript for a tutorial on this feature.

Videos

Related Topics




Scatter Plots


Scatter plots represent the relationship between two different numeric measurements. Each dot is positioned based on the values of the columns selected for the X and Y axes.

Create a Scatter Plot

  • Navigate to the data grid you want to visualize.
  • Select (Charts) > Create Chart. Click Scatter.
  • The columns eligible for charting from your current grid view are listed.
  • Select the X Axis column by drag and drop.
  • Select the Y Axis column by drag and drop.
  • Only the X and Y Axes are required to create a basic scatter plot. Other options will be explored below.
  • Click Apply to see the basic plot.
  • Click View Data to see, filter, or export the underlying data.
  • Click View Chart to return. If you applied any filters, you would see them immediately reflected in the plot.
  • Customize your visualization using the Chart Type and Chart Layout links in the upper right.
  • Chart Type reopens the creation dialog allowing you to:
    • Change the X or Y Axis column (hover and click the X to delete the current selection).
    • Add a second Y Axis column (see below) to show more data.
    • Optionally select columns for grouping of points by color or shape.
    • Note that you can also click another chart type on the left to switch how you visualize the data with the same axes and color/shape groupings when practical.
    • Click Apply to update the chart with the selected changes.
  • Here we see the same scatter plot data, with colors varying by cohort and points shaped based on treatment group. Notice the key in the upper right.

Change Layout

The Chart Layout button offers the ability to change the look and feel of your chart.

There are four tabs:

  • General:
    • Provide a title to display on the plot. The default is the name of the source data grid.
    • Provide a subtitle to display under the title.
    • Specify a width and height.
    • Choose whether to jitter points.
    • Control the point size and opacity, as well as choose the default color palette. Options: Light (default), Dark, and Alternate. The array of colors is shown under the selection.
    • Number of Charts: Select either "One Chart" or "One Per Measure".
    • Group By Density: Select either "Always" or "When number of data points exceeds 10,000."
    • Grouped Data Shape: Choose either hexagons or squares.
    • Density Color Palette: Options are blue & white, heat (yellow/orange/red), or select a single color from the dropdown to show in graded levels. These palettes override the default color palette and other point options in the left column.
    • Margins (px): If the default chart margins cause axis labels to overlap, or you want to adjust them for other reasons, you can specify them explicitly in pixels. Specify any one or all of the top, bottom, left, and right margins. See an example here.
  • X-Axis/Y-Axis:
    • Label: Change the display labels for the axis (notice this does not change which column provides the data). Click the icon to restore the original label based on the column name.
    • Scale Type: Choose log or linear scale for each axis.
    • Range: Let the range be determined automatically or specify a manual range (min/max values) for each axis.
  • Developer: Only available to users that have the "Platform Developers" site role.
    • A developer can provide a JavaScript function that will be called when a data point in the chart is clicked.
    • Provide Source and click Enable to enable it.
    • Click the Help tab for more information on the parameters available to such a function.
    • Learn more in this topic.

Add Second Y Axis

You can add more data to a scatter plot by selecting a second Y axis column. Reopen a chart for editing, click Chart Type, then drag another column to the Y Axis field. The two selected fields will both have panels. On each you can select the side for the Y Axis using the arrow icons.

For this example, we've removed the color and shape columns to make it easier to see the two axes in the plot. Click Apply.

If you set Chart Layout > Number of Charts to "One Per Measure", you will see two separate charts, still respecting the Y Axis sides you set.

Example: Heat Map

Displaying a scatter plot as a heatmap is done by changing the layout of a chart. Very large datasets are easier to interpret as heatmaps, grouped by density (also known as point binning).

  • Click Chart Layout and change Group By Density to "Always".
  • Select Heat as the Density Color Palette and leave the default Hexagon shape selected.
  • Click Apply to update the chart with the selected changes. Shown here, the number of charts was reset to one, and only a single Y axis is included.
  • Notice that when binning is active, a warning message will appear reading: "The number of individual points exceeds XX. The data is now grouped by density which overrides some layout options." XX is 10,000 by default, or 1 if you selected "Always" as we did. Click Dismiss to remove that message from the plot display.

Save and Export Plots

  • When your plot is finished, click Save.
  • Name the plot, enter a description (optional), and choose whether to make it viewable by others. You will also see the default thumbnail which has been auto-generated, and can choose whether to use it. As with other charts, you can later attach a custom thumbnail if desired.

Once you have saved a scatter plot, it will appear in the Data Browser and on the (charts) menu for the source dataset. You can manage metadata about it as described in Manage Data Views.

Export Plot

Hover over the chart to reveal export option buttons in the upper right corner:

Export your completed chart by clicking an option:

  • PDF: generate a PDF file.
  • PNG: create a PNG image.
  • Script: pop up a window displaying the JavaScript for the chart which you can then copy and paste into a wiki. See Tutorial: Visualizations in JavaScript for a tutorial on this feature.

Video

Related Topics




Time Charts


Time charts provide rich time-based visualizations for datasets and are available in LabKey study folders. In a time chart, the X-axis shows a calculated time interval or visit series, while the Y-axis shows one or more numerical measures of your choice.

Note: Only properties defined as measures in the dataset definition can be plotted on time charts.

With a time chart you can:

  • Individually select which study participants, cohorts, or groups appear in the chart.
  • Refine your chart by defining data dimensions and groupings.
  • Export an image of your chart to a PDF or PNG file.
  • Export your chart to JavaScript (for developers only).

Note: In a visit-based study, visits are a way of measuring sequential data gathering. To create a time chart of visit-based data, you must first create an explicit ordering of visits in your study. Time charts are not supported for continuous studies, because they contain no calculated visits/intervals.

Create a Time Chart

  • Navigate to the dataset, view, or query of interest in your study. In this example, we use the Lab Results dataset in the example study.
  • Select (Charts) > Create Chart. Click Time.
  • Whether the X-axis is date based or visit-based is determined by the study type. For a date-based study:
    • Choose the Time Interval to plot: Days, Weeks, Months, Years.
    • Select the desired Interval Start Date from the pulldown menu. All eligible date fields are listed.
  • At the top of the right panel is a dropdown from which you select the desired dataset or query. Time charts are only supported for datasets/queries in the "study" schema which include columns designated as 'measures' for plotting. Queries must also include both the 'ParticipantId' and 'ParticipantVisit' columns to be listed here.
  • The list of columns designated as measures available in the selected dataset or query is shown in the Columns panel. Drag the desired selection to the Y-Axis box.
    • By default the axis will be shown on the left; click the right arrow to switch sides.
  • Click Apply.
  • The time chart will be displayed.
  • Use the checkboxes in the Filters panel on the left:
    • Click a label to select only that participant.
    • Click a checkbox to add or remove that participant from the chart.
  • Click View Data to see the underlying data.
  • Click View Chart(s) to return.

Time Chart Customizations

  • Customize your visualization using the Chart Type and Chart Layout links in the upper right.
  • Chart Type reopens the creation dialog allowing you to:
    • Change the X Axis options for time interval and start date.
    • Change the Y Axis to plot a different measure, or plot multiple measures at once. Time charts are unique in allowing cross-query plotting. You can select measures from different datasets or queries within the same study to show on the same time chart.
      • Remove the existing selection by hovering and clicking the X. Replace with another measure.
      • Add a second measure by dragging another column from the list into the Y-Axis box.
      • For each measure you can specify whether to show the Y-axis for it on the left or right.
      • Open and close information panels about time chart measures by clicking on them.
    • Click Apply to update the chart with the selected changes.

Change Layout

  • Chart Layout offers the ability to change the look and feel of your chart.

There are at least 4 tabs:

  • On the General tab:
    • Provide a Title to show above your chart. By default, the dataset name is used.
    • Specify the width and height of the plot.
    • Use the slider to customize the Line Width.
    • Check the boxes if you want to either Hide Trend Line or Hide Data Points to get the appearance you prefer. When you check either box, the other option becomes unavailable.
    • Number of Charts: Choose whether to show all data on one chart, or separate by group, or by measure.
    • Subject Selection: By default, you select participants from the filter panel. Select Participant Groups to enable charting of data by groups and cohorts using the same checkbox filter panel. Choose at least one charting option for groups:
      • Show Individual Lines: show plot lines for individual participant members of the selected groups.
      • Show Mean: plot the mean value for each participant group selected. Use the pull down to select whether to include range bars when showing mean. Options are: "None, Std Dev, or Std Err".
  • On the X-Axis tab:
    • Label: Change the display label shown on the X-axis. Note that changing this text will not change the interval or range plotted. Use the Chart Type settings to change what is plotted.
    • Range: Let the range be determined automatically or specify a manual range (min/max values).
  • There will be one Y-Axis tab for each side of the plot if you have elected to use both the left and right Y-axes. For each side:
    • Label: Change the display label for this Y-axis. Note that changing this text will not change what is plotted. Click the icon to restore the original label based on the column name.
    • Scale Type: Choose log or linear scale for each axis.
    • Range: Let the range be determined automatically or specify a manual range (min/max values) for each axis.
    • For each Measure using that Y-axis:
      • Choose an Interval End Date. The pulldown menu includes eligible date columns from the source dataset or query.
      • Choose a column if you want to Divide Data Into Series by another measure.
      • When dividing data into series, choose how to display duplicate values (AVG, COUNT, MAX, MIN, or SUM).
  • Developer: Only available to users that have the "Platform Developers" site role.
    • A developer can provide a JavaScript function that will be called when a data point in the chart is clicked.
    • Provide Source and click Enable to enable it.
    • Click the Help tab for more information on the parameters available to such a function.
    • Learn more in this topic.
  • Click Apply to update the chart with the selected changes. In this example, we now plot data by participant group. Note that the filter panel now allows you to plot trends for cohorts and other groups. This example shows a plot combining trends for two measures, lymphs and viral load, for two study cohorts.

Save Chart

  • When your chart is ready, click Save.
  • Name the chart, enter a description (optional), and choose whether to make it viewable by others. You will also see the default thumbnail which has been auto-generated, and can choose whether to use it. As with other charts, you can later attach a custom thumbnail if desired.
  • Click Save.

Once you have created a time chart, it will appear in the Data Browser and on the charts menu for the source dataset.

Data Dimensions

By adding dimensions for a selected measure, you can further refine the time chart. You can group data for a measure on any column in your dataset that is defined as a "data dimension". To define a column as a data dimension:

  • Open a grid view of the dataset of interest.
  • Click Manage.
  • Click Edit Definition.
  • Click the Fields section to open it.
  • Expand the field of interest.
  • Click Advanced Settings.
  • Place a checkmark next to Make this field available as a dimension.
  • Click Apply.
  • Click Save.

To use the data dimension in a time chart:

  • Click View Data to return to your grid view.
  • Create a new time chart, or select one from the (Charts) menu and click Edit.
  • Click Chart Layout.
  • Select the Y-Axis tab for the side of the plot you are interested in (if both are present).
    • The pulldown menu for Divide Data Into Series By will include the dimensions you have defined.
  • Select how you would like duplicate values displayed. Options: Average, Count, Max, Min, Sum.
  • Click Apply.
  • A new section appears in the filters panel where you can select specific values of the new data dimension to further refine your chart.

Export Chart

Hover over the chart to reveal export option buttons in the upper right corner:

Export your completed chart by clicking an option:

  • PDF: generate a PDF file.
  • PNG: create a PNG image.
  • Script: pop up a window displaying the JavaScript for the chart which you can then copy and paste into a wiki. See Tutorial: Visualizations in JavaScript for a tutorial on this feature.

Related Topics




Column Visualizations


Click a column header to see a list of Column Visualizations, small plots and charts that apply to a single column. When selected, the visualization is added to the top of the data grid. Several can be added at a time, and they are included within a saved custom grid view. When you come back to the saved view, the Column Visualizations will appear again.
  • Bar Chart - Histogram displayed above the grid.
  • Box & Whisker - Distribution box displayed above the grid.
  • Pie Chart - Pie chart displayed above the grid.

Visualizations are always 'live', reflecting updates to the underlying data and any filters added to the data grid.

To remove a chart, hover over the chart and click the 'X' in the upper right corner.

Available visualization types are determined by data type as well as whether the column is a Measure and/or a Dimension.

  • The box plot option is shown for any column marked as a Measure.
  • The bar and pie chart options are shown for any column marked as a Dimension.
Column visualizations are simplified versions of standalone charts of the same types. Click any chart to open it within the plot editor which allows you to make many additional customizations and save it as a new standalone chart.

Bar Chart

A histogram of the Weight column.

Box and Whisker Plot

A basic box plot report. You can include several column visualizations above a grid simultaneously.

Pie Chart

A pie chart showing prevalence of ARV Regimen types.

Filters are also applied to the visualizations displayed. If you filter to exclude 'blank' ARV treatment types, the pie chart will update.

Related Topics




Quick Charts


Quick Charts provide a quick way to assess your data without deciding first what type of visualization you will use. A best guess visualization for the data in a single column is generated and can be refined from there.

Create a Quick Chart

  • Navigate to a data grid you wish to visualize.
  • Click a column header and select Quick Chart.

Depending on the content of the column, LabKey Server makes a best guess at the type and arrangement of chart to use as a starting place. A numeric column in a cohort study, for example, might be quickly charted as a box and whisker plot using cohorts as categories.

Refine the Chart

You can then alter and refine the chart in the following ways:

  • View Data: Toggle to the data grid, potentially to apply filters to the underlying data. Filters are reflected in the plot upon re-rendering.
  • Export: Export the chart as a PDF, PNG, or Script.
  • Help: Documentation links.
  • Chart Type: Click to open the plot editor. You can change the plot type, and the options for chart layout settings will update accordingly. In the plot editor, you can also incorporate data from other columns.
  • Chart Layout: Click to customize the look and feel of your chart; options available vary based on the chart type. See the individual chart type pages for descriptions of the options.
  • Save: Click to open the save dialog.

Related Topics




Integrate with Tableau


Premium Feature — Available in the Professional and Enterprise Editions of LabKey Server. Learn more or contact LabKey.
Users can integrate their LabKey Server with Tableau Desktop via an Open Database Connectivity (ODBC) integration. Data stored in LabKey can be dynamically queried directly from the Tableau application to create reports and visualizations.

Configure LabKey

LabKey must first be configured to accept external analytics integrations using an ODBC connection. At all times, LabKey security settings govern who can see which data; any user must have at least the Reader role to access data from Tableau.

Learn more about setting up the ODBC connection in LabKey in these topics:

Configure Tableau

Once configured, load LabKey data into Tableau following the instructions in this section:

Use LabKey Data in Tableau

Within Tableau, once you have loaded the data table you wish to use, you can see the available measures and dimensions listed. Incorporate them into the visualizations by dragging and dropping. Instructions for using Tableau Desktop can be found on their website.

In this example, the Physical Exam and Demographics joined view is being used to create a plot showing the CD4 levels over time for two Treatment Groups, those treated and not treated with ARV regimens.

Tableau also offers the ability to easily create trend lines. Here an otherwise cluttered scatter plot is made clearer using trend lines:

You can build a wide variety of visualizations and dashboards in Tableau.

Video Demonstration

This video covers the ways in which LabKey Server and Tableau Desktop are great partners for creating powerful visualizations.

Related Topics




Lists


A List is a user-defined table that can be used for a variety of purposes:
  • As a data analysis tool for spreadsheet data and other tabular-format files, such as TSVs and CSVs.
  • As a place to store and edit data entered by users via forms or editable grids.
  • To define vocabularies, which can be used to constrain choices during completion of fields in data entry forms.
  • As read-only resources that users can search, filter, sort, and export.
The design, or schema, of a list is the set of fields (columns and types), including the identification of the primary key. Lists can be linked via lookups and joins to draw data from many sources. Lists can be indexed for search, including optional indexing of any attachments added to fields. Populated lists can be exported and imported as archives for easy transfer between folders or servers.

Topics

List Web Parts

You need to be an administrator to create and manage lists. You can directly access the list manager by selecting (Admin) > Manage Lists. To make the set of lists visible to other users, and to create a one-click shortcut for admins to manage lists, add a Lists web part to your project or folder.

Lists Web Part

  • Enter > Page Admin Mode.
  • Choose Lists from the <Select Web Part> pulldown at the bottom of the page.
  • Click Add.
  • Click Exit Admin Mode.

List-Single Web Part

To display the contents of a single list, add a List - Single web part, name it and choose the list and view to display.




Tutorial: Lists


This tutorial introduces you to Lists, the simplest way to represent tabular data in LabKey. While straightforward, lists support many advanced tools for data analysis. Here you will learn about the power of lookups, joins, and URL properties for generating insights into your results.

This tutorial can be completed using a free 30-day trial version of LabKey Server.

Lists can be simple one-column "lists" or many-column tables with a set of values for each "item" on the list. A list must always have a unique primary key; if your data doesn't have one naturally, you can use an auto-incrementing integer key to guarantee uniqueness.

Lists offer many data connection and analysis opportunities we'll begin to explore in this tutorial:

  • Lookups can help you display names instead of code numbers, present options to users adding data, and interconnect tables.
  • Joins help you view and present data from related lists together in shared grids without duplicating information.
  • URL properties can be used to filter a grid view of a list OR help you link directly to information stored elsewhere.
Completing this tutorial as written requires administrative permissions, which you will have if you create your own server trial instance in the first step. The features covered are not limited to admin users.

Tutorial Steps

First Step




Step 1: Set Up List Tutorial


In this first step, we set up a folder to work in, learn to create lists, then import a set of lists to save time.

Set Up

  • Log in to your server and navigate to your "Tutorials" project. Create it if necessary.
    • If you don't already have a server to work on where you can create projects, start here.
    • If you don't know how to create projects and folders, review this topic.
  • Create a new subfolder named "List Tutorial". Accept all defaults.

Create a New List

When you create a new list, you define the properties and structure of your table.

  • In the Lists web part, click Manage Lists.
  • Click Create New List.
  • In the List Properties panel, name the list "Technicians".
  • Under Allow these Actions, notice that Delete, Upload, Export & Print are all allowed by default.
  • You could adjust settings like how your list is indexed in the Advanced Settings popup. Leave these settings at their defaults for now.

  • You'll see the set of fields that define the columns in your list.
  • In the blue banner, you must select a field to use as the primary key. Names could be non-unique, and even badge numbers might be reassigned, so select Auto integer key from the dropdown and notice that a "Key" field is added to your list.
  • Click Save.
  • You'll see the empty "frame" for your new list with column names but "No data to show."
  • Now the "Technicians" list will appear in the Lists web part when you return to the main folder page.

Populate a List

You can populate a list one row at a time or in bulk by importing a file (or copying and pasting the data).

  • If you returned to the List Tutorial main page, click the Technicians list name to reopen the empty frame.
  • Click (Insert data) and select Insert new row.
  • Enter your own name, make up a "Department" and use any number as your badge number.
  • Click Submit.
  • Your list now has one row.
  • Download this file: Technicians.xls
  • Click (Insert data) and select Import bulk data.
  • Click the Upload file panel to open it, then click Browse (or Choose File) and choose the "Technicians.xls" file you downloaded.
    • Alternatively, you could open the file and copy and paste its contents (including the headers) into the copy/paste text panel instead.
  • Click Submit.

You will now see a list of some "Technicians" with their department names. Rows can also be inserted programmatically, as sketched below. After that aside, we'll add a few more lists to the folder using a different list creation option.
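
If you prefer scripting, the LabKey JavaScript API can perform the same insert. This is a minimal sketch; the list and field names assume the tutorial's "Technicians" list, and the example values are hypothetical.

    LABKEY.Query.insertRows({
        schemaName: 'lists',
        queryName: 'Technicians',
        rows: [{
            Name: 'Pat Doe',             // hypothetical example values
            Department: 'Documentation',
            Badge: 1234
        }],
        success: function (result) {
            console.log('Inserted ' + result.rows.length + ' row(s)');
        }
    });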

Import a List Archive

A list archive can be exported from existing LabKey lists in a folder, or manually constructed following the format. It provides a bulk method for creating and populating sets of lists.

  • Click here to download the Tutorial.lists.zip archive. Do not unzip it.
  • Select (Admin) > Manage Lists.
  • Click Import List Archive.
  • Click Choose File and select Tutorial.lists.zip from where you downloaded it.
  • Click Import List Archive.

Now you will see several additional lists have been added to your folder. Click the names to review, and continue the tutorial using this set of lists.

Related Topics

Start Over | Next Step (2 of 3)




Step 2: Create a Joined Grid


You can interconnect lists with each other by constructing "lookups", then use those connections to present joined grids of data within the user interface.

Connect Lists with a Lookup

  • Click the Technicians list from the Lists web part.

The grid shows the list of technicians and departments, including yourself. You can see the Department column displays a text name, but is currently unlinked. We can connect it to the "Departments" list uploaded in the archive. Don't worry that the department you typed is probably not on that list.

  • Click Design to open the list design for editing.
  • Click the Fields section to open it.
  • For the Department field, click the Text selector and choose the Data Type Lookup.
  • The details panel for the field will expand. Under Lookup Definition Options:
    • Leave the Target Folder set to the current folder.
    • Set the Target Schema to "lists".
    • On the Target Table dropdown, select "Departments (String)".
  • Click Save.

Now the list shows the values in the Department column as links. If you entered something not on the list, it will not be linked, but instead be plain text surrounded by brackets.

Click one of the links to see the "looked up" value from the Departments list. Here you will see the fields from the "Departments" list: The contact name and phone number.

You may have noticed that while there are several lists in this folder now, you did not see them all on the dropdown for setting the lookup target. The field was previously a "Text" type field, containing some text values, so only lists with a primary key of that "Text" type are eligible to be the target when we change it to be a lookup.

Create a Joined Grid

What if you want to present the details without your user having to click through? You can easily "join" these two lists as follows.

  • Select > Manage Lists.
  • Click Technicians.
  • Select (Grid Views) > Customize Grid.
  • In the Available Fields panel, notice the field Department now shows an (expansion) icon. Click it.
  • Place checkmarks next to the Contact Name and Contact Phone fields (to add them to the Selected Fields panel).
  • Click View Grid.
  • Now you see two additional columns in the grid view.
  • Above the grid, click Save.
  • In the Save Custom Grid View popup menu, select Named and name this view DepartmentJoinedView.
  • Click Save in the popup.

You can switch between the default view of the Technicians list and this new joined grid on the (Grid Views) menu.
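
The same join is also available programmatically: lookup columns can be traversed with a '/' separator in the column list. Here is a sketch using the LabKey JavaScript API, where the column names assume the tutorial lists:

    LABKEY.Query.selectRows({
        schemaName: 'lists',
        queryName: 'Technicians',
        // Traverse the Department lookup to pull columns from the Departments list
        columns: 'Name,Department,Department/ContactName,Department/ContactPhone',
        success: function (result) {
            result.rows.forEach(function (row) {
                console.log(row.Name + ': ' + row['Department/ContactName']);
            });
        }
    });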

Use the Lookup to Assist Input

Another reason you might use a lookup field is to help your users enter data.

  • Hover over the row you created first (with your own name).
  • Click the (pencil) icon that will appear.
  • Notice that instead of the original text box, the entry for "Department" is now done with a dropdown.
    • You may also notice that you are only editing fields for the local list - while the grid showed contact fields from the department list, you cannot edit those here.
  • Select a value and click Submit.

Now you will see the lookup also "brought along" the contact information to be shown in your row.

In the next step, we'll explore using the URL attribute of list fields to make more connections.

Previous Step | Next Step (3 of 3)




Step 3: Add a URL Property


In the previous step, we used lookups to link our lists to each other. In this step, we explore two ways to use the URL property of list fields to create other links that might be useful to researchers using the data.

Create Links to Filtered Results

It can be handy to generate an active filtering link in a list (or any grid of data). For example, here we use a URL property to turn the values in the Department column into links to a filtered subset of the data. When you click one value, you get a grid showing only rows where that column has the same value.

  • If you navigated away from the Technicians list, reopen it.
  • When you click a value in the "Department" column, notice that you currently go to the contact details. Go back to the Technicians list.
  • Click the column header Department to open the menu, then select Filter....
  • Click the label Executive to select only that single value.
  • Click OK to see the subset of rows.
  • Notice the URL in your browser, which might look something like this - the full path and your list ID number may vary, but the filter you applied (Department = Executive) is encoded at the end.
    http://localhost:8080/labkey/Tutorials/List%20Tutorial/list-grid.view?listId=277&query.Department~eq=Executive
  • Clear the filter using the (x) icon in the filter bar, or by clicking the column header Department and then clicking Clear Filter.
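
The same filter expressed in the URL can also be applied through the JavaScript API; a sketch, again assuming the tutorial's "Technicians" list:

    LABKEY.Query.selectRows({
        schemaName: 'lists',
        queryName: 'Technicians',
        // Equivalent of query.Department~eq=Executive in the URL
        filterArray: [
            LABKEY.Filter.create('Department', 'Executive', LABKEY.Filter.Types.EQUAL)
        ],
        success: function (result) {
            console.log(result.rows.length + ' Executive row(s)');
        }
    });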

Now we'll modify the design of the list to turn the values in the Department column into custom links that filter to just the rows for the value that we click.

  • Click Design.
  • Click the Fields section to expand it.
  • Scroll down and expand the Department field.
  • Copy and paste this value into the URL field:
/list-grid.view?name=Technicians&query.Department~eq=${Department}
    • This URL starts with "/" indicating it is local to this container.
    • The filter portion of this URL replaces "Executive" with the substitution string "${Department}", meaning the value of the Department column. (If we were to specify "Executive", clicking any Department link in the list would filter the list to only show the executives!)
    • The "listId" portion of the URL has been replaced with "name=Technicians." This allows the URL to work even if exported to another container where the listId might be different.
  • Scroll down and click Save.

Now notice that when you click a value in the "Department" column, you get the filtering behavior we just defined.

  • Click Documentation in any row and you will see the list filtered to display only rows for that value.

Learn more about ways to customize URLs in this topic: URL Field Property

Create Links to Outside Resources

A column value can also become a link to a resource outside the list, and even outside the server. All the values in a column could link to a fixed destination (such as to a protocol document or company web page) or you can make row-specific links to files where a portion of the link URL matches a value in the row such as the Badge Number in this example.

For this example, we've stored some images on our support site, so that you can try out syntax for using both a full URL to reference a non-local destination AND the use of a field value in the URL. In this case, images are stored as <badge number>.png; in actual use you might have locally stored slide images or other files of interest named by subjectId or another column in your data.

Open this link in a new browser window:

You can directly edit the URL, substituting the other badge IDs used in the Technicians list you loaded (701, 1802, etc.) where you see 104 in the above.

Here is a generalized version, using substitution syntax for the URL property, for use in the list design.

https://www.labkey.org/files/home/Demos/ListDemo/sendFile.view?fileName=%40files%2F${Badge}.png&renderAs=IMAGE

  • Click the List Tutorial link near the top of the page, then click the Technicians list in the Lists web part.
  • Click Design.
  • Click the Fields section to expand it.
  • Expand the Badge field.
  • Into the URL property for this field, paste the generalized version of the link shown above.
  • Click Save.
  • Observe that clicking one of the Badge Number values will open the image with the same name.
    • If you edit your own row to set your badge number to "1234" you will have an image as well. Otherwise clicking a value for which there is no pre-loaded image will raise an error.

Congratulations

You've now completed the list tutorial. Learn more about lists and customizing the URL property in the related topics.

Related Topics

Previous Step




Create Lists


A list is a basic way of storing tabular information. LabKey lists are flexible, user-defined tables that can have as many columns as needed. Lists must have a primary key field that ensures rows are uniquely identified.

The list design is the set of columns and types, which forms the structure of the list, plus the identification of the primary key field, and properties about the list itself.

Create New List and Set Basic Properties

  • In the project or folder where you want the list, select > Manage Lists.
  • Click Create New List.
  • Name the list, i.e. "Technicians" in this example.
    • The name must be unique, must start with a letter or number character, and cannot contain special characters or some reserved substrings listed here.
    • If the name is taken by another list in the container, project, or /Shared project, you'll see an error and need to choose another name. You may also want to use that shared definition instead of creating a new list definition.
  • Adding a Description is optional, but can give other users more information about the purpose of this list.
  • Use the checkboxes to decide whether to Allow these Actions for the list:
    • Delete
    • Upload
    • Export & Print
  • Continue to define fields and other settings before clicking Save.

Set Advanced Properties

  • Click Advanced Settings to set more properties (listed below the image).
  • Default Display Field: Once some fields have been defined, use this dropdown to select the field that should be displayed by default when this list is used as a lookup target.
  • Discussion Threads let people create discussions associated with this list. Options:
    • Disable discussions (Default)
    • Allow one discussion per item
    • Allow multiple discussions per item
  • Search Indexing Options (by default, no options are selected):
    • Index entire list as a single document
    • Index each item as a separate document
    • Index file attachments
    • Learn more about indexing options here: Edit a List Design
  • Click Apply when finished.

Continue to define the list fields before clicking Save.

Define List Fields and Set Primary Key

The fields in the list define which columns will be included. There must be a primary key column to uniquely identify the rows. The key can either be an integer or text field included with your data, OR you can have the system generate an auto-incrementing integer key that will always be unique. Note that if you select the auto-incrementing integer key, you will not have the ability to merge list data.

You have three choices for defining fields:

Manually Define Fields

  • Click the Fields section to open the panel.
  • Click Manually Define Fields (under the drag and drop region).
  • Key Field Name:
    • If you want to use an automatically incrementing integer key, select Auto integer key. You can rename the default Key field that will be added.
    • If you want to use a different field (of Integer or Text type), first define the fields, then select from this dropdown.
  • Use Add Field to add the fields for your list.
    • Specify the Name and Data Type for each column.
    • Check the Required box to make providing a value for that field mandatory.
    • Open a field to define additional properties using the expansion icon.
    • Remove a field if necessary by clicking the delete icon.
  • Details about adding fields and editing their properties can be found in this topic: Field Editor.
  • Scroll down and click Save when you are finished.

Infer Fields from a File

Instead of creating the list fields one-by-one, you can infer the list design from the column headers of a sample spreadsheet. When you first click the Fields section, the default option is to import or infer fields. Note that inferring fields is only offered during initial list creation and cannot be done when editing a list design later. If you start manually defining fields and decide to infer instead, delete the manually defined fields and the inferral option will return. Note that you cannot delete an auto integer key field; if you have already chosen that key type, you will need to restart list creation from scratch.

  • Click here to download this file: Technicians.xls
  • Select > Manage Lists and click Create New List.
  • Name the list, i.e. "Technicians2" so you can compare it to the list created above.
  • Click the Fields section to open it.
  • Drag and drop the downloaded "Technicians.xls" file into the target area.
  • The fields will be inferred and added automatically.
    • Note that if your file includes columns for reserved fields, they will not be shown as inferred. Reserved fields will always be created for you.
  • Select the Key Field Name - in this case, select Auto integer key to add a new field to provide our unique key.
  • If you need to make changes or edit properties of these fields, follow the instructions above or in the topic: Field Editor.
    • In particular, if your field names include any special characters (including spaces) you should adjust the inferral to give the field a more 'basic' name and move the original name to the Label and Import Aliases field properties. For example, if your data includes a field named "CD4+ (cells/mm3)", you would put that string in both Label and Import Aliases but name the field "CD4" for best results.
  • Below the fields section, you will see Import data from this file upon list creation?
  • By default the contents of the spreadsheet you used for inferral will be imported to this list when you click Save.
  • If you do not want to do that, uncheck the box.
  • Scroll down and click Save.
  • Click "Technicians2" to see that the field names and types are inferred forming the header row, but no data was imported from the spreadsheet.

Export/Import Field Definitions

In the top bar of the list of fields, you see an Export button. You can click to export field definitions in a JSON format file. This file can be used to create the same field definitions in another list, either as is or with changes made offline.

To import a JSON file of field definitions, use the infer from file method, selecting the .fields.json file instead of a data-bearing file. Note that importing or inferring fields will overwrite any existing fields; it is intended only for new list creation.

You'll find an example of using this option in the first step of the List Tutorial.

Learn more about exporting and importing sets of fields in this topic: Field Editor

Shortcut: Infer Fields and Populate a List from a Spreadsheet

If you want to both infer the fields to design the list and populate the new list with the data from the spreadsheet, follow the field inference process above, but leave the box checked in the Import Data section as shown below. The first three rows are shown in the preview.

  • Click Save and the entire spreadsheet of data will be imported as the list is created.
Note that data previews do not apply field formatting defined in the list itself. For example, Date and DateTime fields are always shown in ISO format (yyyy-MM-dd hh:mm) regardless of the source data or the destination list's formatting, which will be applied after import. Learn more in this topic.

Related Topics




Edit a List Design


Editing the list design allows you to change the structure (columns) and functions (properties) of a list, whether or not it has been populated with data. To see and edit the list design, click Design above the grid view of the list. You can also select > Manage Lists and click Design next to the list name. If you do not see these options, you do not have permission to edit the given list.

List Properties

The list properties contain metadata about the list and enable various actions including export and search.

The properties of the Technicians list in the List Tutorial Demo look like this:

  • Name: The displayed name of the list.
    • The name must be unique, must start with a letter or number character, and cannot contain special characters or some reserved substrings listed here.
  • Description: An optional description of the list.
  • Allow these Actions: These checkboxes determine what actions are allowed for the list. All are checked by default.
    • Delete
    • Upload
    • Export and Print
  • Click Advanced Settings to see additional properties in a popup:
    • Default Display Field: Identifies the field (i.e., the column of data) that is used when other lists or datasets do lookups into this list. You can think of this as the "lookup display column." Select a specific column from the dropdown or leave the default "<AUTO>" selection, which uses this process:
      • Use the first non-lookup string column (this could be the key).
      • If there are no string fields, use the primary key.
    • Discussion Threads: Optionally allow discussions to be associated with each list item. Such links will be exposed as a "discussion" link on the details view of each list item. Select one of:
      • Disable discussions (Default)
      • Allow one discussion per item
      • Allow multiple discussions per item
    • Search Indexing Options. Determines how the list data, metadata, and attachments are indexed for full-text searching. Details are below.

List Fields

Click the Fields section to add, delete, or edit the fields of your list. Click the expansion icon to edit details and properties for each field. Learn more in the topic: Field Editor.

Example. The field editor for the Technicians list in the List Tutorial Demo looks like this:

Customize the Order of List Fields

By default, the order of fields in the default grid view is used to order the fields in the insert, edit, and details views for a list. All fields that are not in the default grid are appended to the end. To see the current order, click Insert New for an existing list.

To change the order of fields, drag and drop them in the list field editor using the six-block handle on the left. You can also modify the default grid for viewing columns in different orders. Learn more in the topic: Customize Grid Views.

List Metadata and Hidden Fields

In addition to the fields you define, there is list metadata associated with every list. To see and edit it, use the schema browser. Select > Developer Links > Schema Browser. Click lists and then select the specific list. Click Edit Metadata.

List metadata includes the following fields in addition to any user defined fields.

Name | Datatype | Description
Created | date/time | When the list was created
CreatedBy | int (user) | The user who created the list
Modified | date/time | When the list was last modified
ModifiedBy | int (user) | The user who modified the list
container (Folder) | lookup | The folder or project where the list is defined
lastIndexed | date/time | When the list was last indexed
EntityId | text | A unique identifier for this list

There are several built in hidden fields in every list. To see them, open (Grid Views) > Customize Grid and check the box for Show Hidden Fields.

Name | Datatype | Description
Last Indexed | date/time | When this list was last indexed.
Key | int | The key field (if not already shown).
Entity Id | text | The unique identifier for the list itself.

Full-Text Search Indexing

You can control how your list is indexed for search depending on your needs. Choose one or more of the options on the Advanced List Settings popup under the Search Indexing Options section. Clicking the expansion icon for the first two options adds additional options in the popup:

When indexing either the entire list or each item separately, you also specify how to display the title in search results and which fields to index. Note that you may choose to index both the entire list and each item, potentially specifying different values for each of these options. When you specify which fields in the list should be indexed, do not include fields that contain PHI or PII: full-text search results could expose this information.

Index entire list as a single document

  • Document Title: Any text you want displayed and indexed as the search result title. There are no substitution parameters available for this title. Leave the field blank to use the default title.
  • Select one option for the metadata/data:
    • Include both metadata and data: Not recommended for large lists with frequent updates, since updating any item will cause re-indexing of the entire list.
    • Include data only: Not recommended for large lists with frequent updates, since updating any item will cause re-indexing of the entire list.
    • Include metadata only (name and description of list and fields). (Default)
  • Select one option for indexing of PHI (protected health information):
    • Index all non-PHI text fields
    • Index all non-PHI fields (text, number, date, and boolean)
    • Index using custom template: Choose the exact set of fields to index and enter them as a template in the box when you select this option. Use substitution syntax like, for example: ${Department} ${Badge} ${Name}.

Index each item as a separate document

  • Document Title: Any text you want displayed and indexed as the search result title (ex. ListName - ${Key} ${value}). Leave the field blank to use the default title.
  • Select one option for indexing of PHI (protected health information):
    • Index all non-PHI text fields
    • Index all non-PHI fields (text, number, date, and boolean)
    • Index using custom template: Choose the exact set of fields to index and enter them as a template in the box when you select this option. Use substitution syntax like, for example: ${Department} ${Badge} ${Name}.

Related Topics




Populate a List


Once you have created a list, there are a variety of options for populating it, designed to suit different kinds of lists and varying complexity of data entry. Note that you can also simultaneously create a new list and populate it from a spreadsheet. This topic covers populating an existing list. In each case, you can open the list by selecting (Admin) > Manage Lists and clicking the list name. If your folder includes a Lists web part, you can click the list name directly there.

Insert Single Rows

One option for simple lists is to add a single row at a time:

  • Select (Insert data) > Insert new row.
  • Enter the values for each column in the list.
  • Click Submit.
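
If you prefer to script single-row inserts rather than use the UI, the Rlabkey package's labkey.insertRows can add rows to a list. A minimal sketch, assuming a hypothetical server URL and folder path, and the "Technicians" list used in this topic:

library(Rlabkey)
# A single new row as a data frame; column names must match the list's fields.
newRow <- data.frame(
    Name = "Ada Lovelace",
    Department = "Computing",
    Badge = 202
)
labkey.insertRows(
    baseUrl = "https://myserver.example.com",  # hypothetical server URL
    folderPath = "/MyProject",                 # hypothetical folder path
    schemaName = "lists",
    queryName = "Technicians",
    toInsert = newRow
)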

Import Bulk Data

You can also import multiple rows at once by uploading a file or copy/pasting text. To ensure the format is compatible, particularly for complex lists, you can first download a template, then populate it with your data prior to upload.

Copy/Paste Text

  • Select (Insert data) > Import bulk data.
  • Click Download Template to obtain a template for your list that you can fill out.
  • Because this example list has an incrementing-integer key, you won't see the update and merge options available for some lists.
  • Copy and paste the spreadsheet contents into the text box, including the header row. Using our "Technicians" list example, you can copy and paste this spreadsheet:
Name | Department | Badge
Abraham Lincoln | Executive | 104
Homer | Documentation | 701
Marie Curie | Radiology | 88

  • Click Submit.
  • The pasted rows will be added to the list.

Upload File

Another way to upload data is to directly upload an .xlsx, .xls, .csv, or .txt file containing data.

  • Again select (Insert data) > Import bulk data.
  • Using Download Template to create the file you will populate can ensure the format will match.
  • Click the Upload file (.xlsx, .xls, .csv, .txt) section heading to open it.
  • Because this example list has an incrementing-integer key, you won't see the update and merge options available for some lists.
  • Use Browse or Choose File to select the File to Import.
  • Click Submit.

Update or Merge List Data

If your list has an integer or text key, you have the option to merge list data during bulk import. Both the copy/paste and file import options let you select whether you want to Add rows or Update rows (optionally checking Allow new rows for a data merge).

Update and merge options are not available for lists with an auto-incrementing integer key. These lists always create a new row (and increment the key) during import, so you cannot include matching keys in your imported data.
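
Updates can also be scripted. The Rlabkey package's labkey.updateRows matches rows by the list's key field. A minimal sketch, reusing the hypothetical server and folder from the insert example above and assuming a list keyed on an integer "Badge" field:

library(Rlabkey)
# Update the Name of the row whose key value (Badge) is 104.
changedRow <- data.frame(
    Badge = 104,
    Name = "A. Lincoln"
)
labkey.updateRows(
    baseUrl = "https://myserver.example.com",  # hypothetical
    folderPath = "/MyProject",                 # hypothetical
    schemaName = "lists",
    queryName = "Technicians",
    toUpdate = changedRow
)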

Import Lookups By Alternate Key

When importing data into a list, either by copy/paste or from a file, you can use the checkbox to Import Lookups by Alternate Key. This allows lookup target rows to be resolved by values other than the target's primary key. It will only be available for lookups that are configured with unique column information. For example, tables in the "samples" schema (representing Samples) use the RowId column as their primary key, but their Name column is guaranteed to be unique as well. Imported data can use either the primary key value (RowId) or the unique column value (Name). This is only supported for single-column unique indices. See Add Samples.

View the List

Your list is now populated. You can see the contents of the list by clicking on the name of the list in the Lists web part. An example:

Hovering over a row will reveal icon buttons in the first column:

Edit List Rows

Click the (pencil) icon to edit a row in a list. You'll see the same entry fields as when you insert a row, populated with the current values. Make changes as needed and click Submit.

Changes to lists are audited under List Events in the main audit log. You can also see the history for a specific list by selecting (Admin) > Manage Lists and clicking View History for the list in question. Learn more here: Manage Lists.

Related Topics




Manage Lists


A list is a flexible, user-defined table. To manage all the lists in a given container, an administrator can select (Admin) > Manage Lists, or click Manage Lists in the Lists web part.

Manage Lists

An example list management page from a study folder:

  • (Grid Views): Customize how this grid of lists is displayed and create custom grid views.
  • (Charts/Reports): Add a chart or report about the set of lists.
  • (Delete): Select one or more lists using the checkboxes to activate deletion. Both the data and the list design are removed permanently from your server.
  • (Export): Export to Excel, Text, Script, or RStudio (when configured).
  • Create New List
  • Import List Archive
  • Export List Archive: Select one or more lists using the checkboxes and export as an archive.
  • (Print): Print the grid of lists.

Shared List Definitions

The definition of a list can be in a local folder, in the parent project, or in the "/Shared" project. In any given folder, if you select (Grid Views) > Folder Filter, you can choose the set of lists to show. By default, the grid folder filter will show lists in the "Current Folder, Project, and Shared Project".

Folder filter options for List definitions:

  • Current folder
  • Current folder and subfolders
  • Current folder, project, and Shared project (Default)
  • All folders
When you add data to a list, it will be added in the local container, not to any shared location. The definition of a list can be shared, but the data is not, unless you customize the grid view to use a folder filter to expose a wider scope.

Folder filter options for List data:

  • Current folder (Default)
  • Current folder, subfolders, and Shared project
  • Current folder, project, and Shared project
  • All folders

Manage a Specific List

Actions for each list shown in the grid:

  • Design: Click to view or edit the design, i.e. the set of fields and properties that define the list, including allowable actions and indexing. Learn more in this topic: Edit a List Design.
  • View History: See a record of all list events and design changes.
  • Click the Name of the list to see all contents of the list shown as a grid. Options offered for each list include:
    • (Grid Views): Create custom grid views of this list.
    • (Charts/Reports): Create charts or reports of the data in this list.
    • (Insert data): Single row or bulk insert into the list.
    • (Delete): Select one or more rows to delete.
    • (Export): Export the list to Excel, text, or script.
    • Click Design to see and edit the set of fields and properties that define the list.
    • Click Delete All Rows to empty the data from the list without actually deleting the list structure itself.
    • (Print): Print the list data.

Add Additional Indices

In addition to the primary key, you can define another field in a list as a key or index, i.e. requiring unique values. Use the field editor Advanced Settings for the field and check the box to Require all values to be unique.


Premium Resource Available

Subscribers to premium editions of LabKey Server can learn how to add an additional index to a list in this topic:


Learn more about premium editions

View History

From the > Manage Lists page, click View History for any list to see a summary of audit events for that particular list. You'll see both:

  • List Events: Changes to the content of the list.
  • List Design Changes: Changes to the structure of the list.
For changes to data, you will see the Comment "An existing list record was modified". If you hover over a row, then click its (details) link, you will see the details of what was modified.

Related Topics




Export/Import a List Archive


You can copy some or all of the lists in a folder to another folder or another server using export and import. Exporting a list archive packages up selected lists into a list archive: a .lists.zip file that conforms to the LabKey list export format. The process is similar to study export/import/reload. Information on the list serialization format is covered as part of Study Object Files and Formats.

Export

To export lists in a folder to a list archive, you must have administrator permission to all selected lists. On the Manage Lists page, you may see lists from the parent project as well as the /Shared project.

  • In the folder that contains lists of interest, select > Manage Lists.
  • Use the checkboxes to select the lists of interest.
    • If you want to export all lists in the current container, filter the Folder column to show only the local lists, or use > Folder Filter > Current Folder.
    • Check the box at the top of the column to select all currently shown lists.
  • Click Export List Archive.
  • All selected lists are exported into a zip archive.

Import

To import a list archive:

  • In the folder where you would like to import the list archive, select > Manage Lists.
  • Select Import List Archive.
  • Click Choose File or Browse and select the .zip file that contains your list archive.
  • Click Import List Archive.
  • You will see the imported lists included on the Available Lists page.

Note: Existing lists will be replaced by lists in the archive with the same name; this could result in data loss and cannot be undone.

If you exported an archive containing any lists from other containers, such as the parent project or the /Shared project, new local copies of those lists (including their data) will be created when you import the list archive.

Auto-Incrementing Key Considerations

Exporting a list with an auto-incrementing key may result in different key values on import. If you have lookup lists, make sure they use an integer or string key instead of an auto-incrementing key.

Related Topics




R Reports


You can leverage the full power of the R statistical programming environment to analyze and visualize datasets on LabKey Server. The results of R scripts can be displayed in LabKey reports that reflect live data updated every time the script is run. Reports may contain text, tables, or charts created using common image formats such as jpeg, png and gif. In addition, the Rlabkey package can be used to insert, update and/or delete data stored on a LabKey Server using R, provided you have sufficient permissions to do so.

An administrator must install and configure R on LabKey Server and grant access to users to create and run R scripts on live datasets. Loading of additional packages may also be necessary, as described in the installation topic. Configuration of multiple R engines on a server is possible, but within any folder only a single R engine configuration can be used.

Topics

Related Topics


Premium Resource Available

Subscribers to premium editions of LabKey Server can learn more with the example code in this topic:


Learn more about premium editions




R Report Builder


This topic describes how to build reports in the R statistical programming environment to analyze and visualize datasets on LabKey Server. The results of R scripts can be displayed in LabKey reports that reflect live data updated every time the script is run.
Permissions: Creating R Reports requires that the user have both the "Editor" role (or higher) and developer access (one of the roles "Platform Developer" or "Trusted Analyst") in the container. Learn more here: Developer Roles.

Create an R Report from a Data Grid

R reports are ordinarily associated with individual data grids. Choose the dataset of interest and further filter the grid as needed. Only the portion of the dataset visible within this data grid becomes part of the analyzed dataset.

To use the sample dataset we describe in this tutorial, please complete Tutorial: Set Up a New Study if you have not already done so. Alternatively, you may simply add the PhysicalExam.xls demo dataset to an existing study to complete the tutorial. You may also work with your own dataset, in which case the steps and screenshots will differ.

  • View the "Physical Exam" dataset in a LabKey study.
  • If you want to filter the dataset and thus select a subset or rearrangement of fields, select or create a custom grid view.
  • Select (Charts/Reports) > Create R Report.

If you do not see the "Create R Report" menu, check to see that R is installed and configured on your LabKey Server. You also need to have the correct permissions to create R Reports. See Configure Scripting Engines for more information.

Create an R Report Independent of any Data Grid

R reports do not necessarily need to be associated with individual data grids. You can also create an R report that is independent of any grid:

  • Select (Admin) > Manage Views.
  • Select Add Report > R Report.

R reports associated with a grid automatically load the grid data into the object "labkey.data". R reports created independently of grids do not have access to labkey.data objects. R reports that pull data from additional tables (other than the associated grid) must use the Rlabkey API to access the other table(s). For details on using Rlabkey, see Rlabkey Package. By default, R reports not associated with a grid are listed under the Uncategorized heading in the list on the Manage Views page.

Review the R Report Builder

The R report builder opens on the Source tab, which looks like this. Enter the R script for execution or editing into the Script Source box. Notice the options available below the source entry panel, described below.

Report Tab

When you select the Report tab, you'll see the resulting graphics and console output for your R report. If the pipeline option is not selected, the script will be run in batch mode on the server.

Data Tab

Select the Data tab to see the data on which your R report is based. This can be a helpful resource as you write or refine your script.

Source Tab

When your script is complete and report is satisfactory, return to the Source tab, scroll down, and click Save to save both the script and the report you generated.

A saved report will look similar to the results in the design view tab, minus the help text. Reports are saved on the LabKey Server, not on your local file system. They can be accessed through the Reports drop-down menu on the grid view of your dataset, or directly from the Data Views web part.

The script used to create a saved report becomes available to source() in future scripts. Saved scripts are listed under the “Shared Scripts” section of the LabKey R report builder.
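
For example, if you had previously saved a report whose script was named "Coin Flip.R" (a hypothetical name), and checked it under Shared Scripts, a new report's script could reuse it:

# "Coin Flip.R" is a hypothetical saved script name; check it under
# "Shared Scripts" before sourcing it here.
source("Coin Flip.R");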

Help Tab

This Syntax Reference list provides a quick summary of the substitution parameters for LabKey R. See Input/Output Substitutions Reference for further details.

Additional Options

On the Source Tab you can expand additional option sections. Not all options are available to all users, based on permission roles granted.

Options

  • Make this report available to all users: Enables other users to see your R report and source() its associated script if they have sufficient permissions. Only those with read privileges to the dataset can see your new report based on it.
  • Show source tab to all users: This option is available if the report itself is shared.
  • Make this report available in child folders: Make your report available in data grids in child folders where the schema and table are the same as this data grid.
  • Run this report in the background as a pipeline job: Execute your script asynchronously using LabKey's Pipeline module. If you have a big job, running it on a background thread will allow you to continue interacting with your server during execution.
If you choose the asynchronous option, you can see the status of your R report in the pipeline. Once you save your R report, you will be returned to the original data grid. From the Reports drop-down menu, select the report you just saved. This will bring up a page that shows the status of all pending pipeline jobs. Once your report finishes processing, you can click on “COMPLETE” next to your job. On the next page you’ll see "Job Status." Click on Data to see your report.

Note that reports are always generated from live data by re-running their associated scripts. This makes it particularly important to run computationally intensive scripts as pipeline jobs when their associated reports are regenerated often.

Knitr Options

  • Select None, HTML, or Markdown to control how knitr processes the report source
  • For Markdown, you can also opt to Use advanced rmarkdown output_options.
    • Check the box to provide customized output_options to be used.
    • If unchecked, rmarkdown will use the default output format:
      html_document(keep_md=TRUE, self_contained=FALSE, fig_caption=TRUE, theme=NULL, css=NULL, smart=TRUE, highlight='default')
  • Add a semi-colon delimited list of JavaScript, CSS, or library dependencies if needed.
Report Thumbnail
  • Choose to auto-generate a default thumbnail if desired. You can later edit the thumbnail or attach a custom image. See Manage Views.
Shared Scripts
  • Once you save an R report, its associated script becomes available to execute using source(“<Script Name>.R”) in future scripts.
  • Check the box next to the appropriate script to make it available for execution in this script.
Study Options
  • Participant Chart: A participant chart shows measures for only one participant at a time. Select the participant chart checkbox if you would like this chart to be available for review participant-by-participant.
  • Automatically cache this report for faster reloading: Check to enable.
Click Save to save settings, or Save As to save without disturbing the original saved report.

Example

Regardless of where you have accessed the R report builder, you can create your first R report, which is data independent. This sample was adapted from the R help files.

  • Paste the following into the Source tab of the R report builder.
options(echo=TRUE);
# Execute 100 Bernoulli trials;
coin_flip_results = sample(c(0,1), 100, replace = TRUE);
coin_flip_results;
mean(coin_flip_results);
  • Click the Report tab to run the source and see your results, in this case the coin flip outcomes.

Add or Suppress Console Output

The options covered below can be included directly in your R report. There are also options related to console output in the scripting configuration for your R engine.

Echo to Console

By default, most R commands do not generate output to the console as part of your script. To enable output to console, use the following line at the start of your scripts:

options(echo=TRUE);

Note that when the results of functions are assigned, they are also not printed to the console. To see the output of a function, assign the output to a variable, then just call the variable. For further details, please see the FAQs for LabKey R Reports.
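
A minimal illustration of this behavior:

options(echo=TRUE);
m <- mean(c(1, 2, 3));  # the assignment itself prints nothing
m;                      # referencing the variable echoes its value: 2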

Suppress Console Output

To suppress output to the console, hiding it from users viewing the script, first remove the echo statement shown above. You can also include sink to redirect any outputs to 'nowhere' for all or part of your script.

To suppress output, on Linux/Mac/Unix, use:

sink("/dev/null")

On Windows use:

sink("NUL")

When you want to restart output to the console within the script, use sink again with no argument:

sink()
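
For example, a sketch that hides output for only part of a script (Linux/Mac path shown; substitute "NUL" on Windows):

sink("/dev/null");                     # begin hiding console output
summary(labkey.data);                  # this output is hidden from report viewers
sink();                                # restore console output
mean(labkey.data$pulse, na.rm=TRUE);   # this result is displayed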

Related Topics




Saved R Reports


Saved R reports may be accessed from the source data grid or from the Data Views web part. This topic describes how to manage saved R reports and how they can be shared with other users (who already have access to the underlying data).

Performance Note

Once saved, reports are generated by re-running their associated scripts on live data. This ensures users always have the most current views, but it also requires computational resources each time the view is opened. If your script is computationally intensive, you can set it to run in the background so that it does not overwhelm your server when selected for viewing. Learn more in this topic: R Report Builder.

Edit an R Report Script

Open your saved R report by clicking the name in the data views web part or by selecting it from the (Charts and Reports) menu above the data grid on which it is based. This opens the R report builder interface on the Data tab. Select the Source tab to edit the script and manage sharing and other options. Click Save when finished.

Share an R Report

Saved R Reports can be kept private to the author, or shared with other users, either with all users of the folder, or individually with specific users. Under Options in the R report builder, use the Make this report available to all users checkbox to control how the report is shared.

  • If the box is checked, the report will be available to any users with "Read" access (or higher) in the folder. This access level is called "public" though that does not mean shared with the general public (unless they otherwise have "Read" access).
  • If the box is unchecked, the report is "private" to the creator, but can still be explicitly shared with other individual users who have access to the folder.
  • An otherwise "private" report that has been shared with individual users or groups has the access level "custom".
When sharing a report, you are indicating that you trust the recipient(s), and your recipients confirm that they trust you when they accept it. Sharing of R reports is audited and can be tracked in the "Study events" audit log.

Note that if a report is "public", i.e. it was made available to all users, you can still use this mechanism to email a copy of it to a trusted individual, but that will not change the access level of the report overall.

  • Open an R Report from the Data Views web part and click (Share Report).
  • Enter the Recipients email addresses, one per line.
  • The default Message Subject and Message Body are shown. Both can be customized as needed.
  • The Message Link is shown; you can click Preview Link to see what the recipient will see.
  • Click Submit to share the report. You will be taken to the permissions page.
  • On the Report and View Permissions page, you can see which groups and users already had access to the report.
    • Note that you will not see the individuals you are sharing the report with unless the access level of it was "custom" or "private" prior to sharing it now.
  • Click Save.

Recipients will receive a notification with a link to the report, so that they may view it. If the recipient has the proper permissions, they will also be able to edit and save their own copy of the report. If the author makes the source tab visible, recipients of a shared report will be able to see the source as well as the report contents. Note that if the recipient has a different set of permissions, they may see a different set of data. Modifications that the original report owner makes to the report will be reflected in the link as viewed by the recipient.

When an R report was private but has been shared, the data browser will show access as "custom". Click custom to open the Report Permissions page, where you can see the list of groups and users with whom the report was shared.

Learn more about report permissions in this topic: Configure Permissions for Reports & Views

Delete an R Report

You can delete a saved report by first clicking the pencil icon at the top of the Data Views web part, then click the pencil to the left of the report name. In the popup window, click Delete. You can also multi-select R reports for deletion on the Manage Views page.

Note that deleting a report eliminates its associated script from the "Shared Scripts" list in the R report interface. Make sure that you don’t delete a script that is called (sourced) by other scripts you need.

Related Topics




R Reports: Access LabKey Data


Access Your Data as "labkey.data"

LabKey Server automatically reads your chosen dataset into a data frame called labkey.data using Input Substitution.

A data frame can be visualized as a list with unique row names and columns of consistent lengths. Column names are converted to all lower case, spaces or slashes are replaced with underscores, and some special characters are replaced with words (e.g., "CD4+" becomes "cd4_plus_"). You can see the column names for the built-in labkey.data frame by calling:

options(echo=TRUE);
names(labkey.data);

Just like any other data.frame, data in a column of labkey.data can be referenced by the column's name, converted to all lowercase and preceded by a $:

labkey.data$<column name>

For example, labkey.data$pulse; provides all the data in the Pulse column. Learn more about column references below.

Note that the examples in this section frequently include column names. If you are using your own data or a different version of LabKey example data, you may need to retrieve column names and edit the code examples given.

Use Pre-existing R Scripts

To use a pre-existing R script with LabKey data, try the following procedure:

  • Open the R Report Builder:
    • Open the dataset of interest ("Physical Exam" for example).
    • Select > Create R Report.
  • Paste the script into the Source tab.
  • Identify the LabKey data columns that you want to be represented by the script, and load those columns into vectors. The following loads the Systolic Blood Pressure and Diastolic Blood Pressure columns into the vectors x and y:
x <- labkey.data$diastolicbp;
y <- labkey.data$systolicbp;

png(filename="${imgout:myscatterplot}", width = 650, height = 480);
plot(x,
y,
main="Scatterplot Example",
xlab="X Axis ",
ylab="Y Axis",
pch=19);
abline(lm(y~x), col="red") # regression line (y~x);
  • Click the Report tab to see the result:

Find Simple Means

Once you have loaded your data, you can perform statistical analyses using the functions/algorithms in R and its associated packages. For example, calculate the mean Pulse for all participants.

options(echo=TRUE);
names(labkey.data);
labkey.data$pulse;
a <- mean(labkey.data$pulse, na.rm= TRUE);
a;

Find Means for Each Participant

The following simple script finds the average values of a variety of physiological measurements for each study participant.

# Get means for each participant over multiple visits;

options(echo=TRUE);
participant_means <- aggregate(labkey.data, list(ParticipantID = labkey.data$participantid), mean, na.rm = TRUE);
participant_means;

We use na.rm as an argument to aggregate in order to calculate means even when some values in a column are NA.
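
A quick illustration of why na.rm matters:

mean(c(1, NA, 3));              # returns NA because of the missing value
mean(c(1, NA, 3), na.rm=TRUE);  # returns 2, ignoring the NA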

Create Functions in R

This script shows an example of how functions can be created and called in LabKey R scripts. Before you can run this script, the Cairo package must be installed on your server. See Install and Set Up R for instructions.

Note that the second line of this script creates a "data" copy of the input data frame, but removes all participant records that contain an NA entry. NA entries are common in study datasets and can complicate display results.

library(Cairo);
data= na.omit(labkey.data);

chart <- function(data)
{
plot(data$pulse, data$pulse);
};

filter <- function(value)
{
sub <- subset(labkey.data, labkey.data$participantid == value);
#print("the number of rows for participant id: ")
#print(value)
#print("is : ")
#print(sub)
chart(sub)
}

names(labkey.data);
Cairo(file="${imgout:a}", type="png");
layout(matrix(c(1:4), 2, 2, byrow=TRUE));
strand1 <- labkey.data[,1];
for (i in strand1)
{
#print(i)
value <- i
filter(value)
};
dev.off();

Access Data in Another Dataset (Select Rows)

You can use the Rlabkey library's selectRows to specify the data to load into an R data frame, including labkey.data, or a frame named something else you choose.

For example, if you use the following, you will load some example fictional data from our public demonstration site that will work with the above examples.

library(Rlabkey)
labkey.data <- labkey.selectRows(
baseUrl="https://www.labkey.org",
folderPath="/home/Demos/HIV Study Tutorial/",
schemaName="study",
queryName="PhysicalExam",
viewName="",
colNameOpt="rname"
)

Convert Column Names to Valid R Names

Include colNameOpt="rname" to have the selectRows call provide "R-friendly" column names. This converts column names to lower case and replaces spaces or slashes with underscores. Note that this may differ from the column name transformations in the built-in labkey.data frame, which also substitutes words for some special characters (e.g., "CD4+" becomes "cd4_plus_"), so during report development you'll want to check using names(labkey.data); to be sure your report references the expected names.

Learn more in the Rlabkey Documentation.

Select Specific Columns

Use the colSelect option to specify the set of columns you want to add to your data frame. Make sure there are no spaces between the commas and column names.

In this example, we load some fictional example data, selecting only a few columns of interest.

library(Rlabkey)
labkey.data <- labkey.selectRows(
baseUrl="https://www.labkey.org",
folderPath="/home/Demos/HIV Study Tutorial/",
schemaName="study",
queryName="Demographics",
viewName="",
colSelect="ParticipantId,date,cohort,height,Language",
colFilter=NULL,
containerFilter=NULL,
colNameOpt="rname"
)

Display Lookup Target Columns

If you load the above example, and then execute: labkey.data$language; you will see all the data in the "Language" column.

Remember that in an R data frame, columns are referenced in all lowercase, regardless of casing in LabKey Server. For consistency in your selectRows call, you can also define the colSelect list in all lowercase, but it is not required.

If "Language" were a lookup column referencing a list of Languages with accompanying translators, this would return a series of integers or whatever the key of the "Language" column is.

You could then access a column that is not the primary key in the lookup target, typically a more human-readable display value. Using this example, if the "Translator" list included "LanguageName", "TranslatorName", and "TranslatorPhone" columns, you could use syntax like this in your selectRows call:

library(Rlabkey)
labkey.data <- labkey.selectRows(
baseUrl="https://www.labkey.org",
folderPath="/home/Demos/HIV Study Tutorial/",
schemaName="study",
queryName="Demographics",
viewName="",
colSelect="ParticipantId,date,cohort,height,Language,Language/LanguageName,Language/TranslatorName,Language/TranslatorPhone",
colFilter=NULL,
containerFilter=NULL,
colNameOpt="rname"
)

You can now retrieve human-readable values from within the "Language" list by converting everything to lowercase and substituting an underscore for the slash. Executing labkey.data$language_languagename; will return the list of language names.

Access URL Parameters and Data Filters

While you are developing your report, you can acquire any URL parameters as well as any filters applied on the Data tab by using labkey.url.params.

For example, if you filter the "systolicBP" column to values over 100, then use:

print(labkey.url.params)

...your report will include:

$`Dataset.systolicBP~gt`
[1] "100"

Write Result File to File Repository

The following report, when run, creates a result file in the server's file repository. Note that fileSystemPath is an absolute file path. To get the absolute path, see Using the Files Repository.

fileSystemPath = "/labkey/labkey/MyProject/Subfolder/@files/"
filePath = paste0(fileSystemPath, "test.tsv");
write.table(labkey.data, file = filePath, append = FALSE, sep = "\t", qmethod = "double", col.names=NA);
print(paste0("Success: ", filePath));

Related Topics




Multi-Panel R Plots


The scripts on this page take the analysis techniques introduced in R Reports: Access LabKey Data one step further, still using the Physical Exam sample dataset. This page covers a few more strategies for finding means, then shows how to graph these results and display least-squares regression lines.

Find Mean Values for Each Participant

Finding the mean value for physiological measurements for each participant across all visits can be done in various ways. Here, we cover three alternative methods.

For all methods, we use "na.rm=TRUE" as an argument so that null (NA) values are ignored when we calculate means.

Aggregate each physiological measurement for each participant across all visits; this produces an aggregated list with two columns for participantid:

data_means <- aggregate(labkey.data, list(ParticipantID =
labkey.data$participantid), mean, na.rm = TRUE);
data_means;

Aggregate only the pulse column and display two columns: one listing participant IDs and the other listing the mean value of the pulse column for each participant:

aggregate(list(Pulse = labkey.data$pulse),
list(ParticipantID = labkey.data$participantid), mean, na.rm = TRUE);

Again, aggregate only the pulse column, but here the results are displayed as rows instead of two columns:

participantid_factor <- factor(labkey.data$participantid);
pulse_means <- tapply(labkey.data$pulse, participantid_factor,
mean, na.rm = TRUE);
pulse_means;

Create Single Plots

Next we use R to create plots of some other physiological measurements included in our sample data.

All scripts in this section use the Cairo package. To convert these scripts to use the png() function instead, eliminate the call "library(Cairo)", change the function name "Cairo" to "png", change the "file" argument to "filename", and eliminate the "type="png"" argument entirely.
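
For example, a sketch of the same device call written both ways (the name "example_figure" is hypothetical):

# Cairo version:
library(Cairo);
Cairo(file="${imgout:example_figure}", type="png");
# ...plotting commands...
dev.off();

# png() equivalent:
png(filename="${imgout:example_figure}");
# ...plotting commands...
dev.off();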

Scatter Plot of All Diastolic vs All Systolic Blood Pressures

This script plots diastolic vs. systolic blood pressures without regard for participantIDs. It specifies the "ylim" parameter for plot() to ensure that the axes used for this graph match the next graph's axes, easing interpretation.

library(Cairo);
Cairo(file="${imgout:diastol_v_systol_figure.png}", type="png");
plot(labkey.data$diastolicbloodpressure, labkey.data$systolicbloodpressure,
main="R Report: Diastolic vs. Systolic Pressures: All Visits",
ylab="Systolic (mm Hg)", xlab="Diastolic (mm Hg)", ylim =c(60, 200));
abline(lsfit(labkey.data$diastolicbloodpressure, labkey.data$systolicbloodpressure));
dev.off();

The generated plot, where the identity of participants is ignored, might look like this:

Scatter Plot of Mean Diastolic vs Mean Systolic Blood Pressure for Each Participant

This script plots the mean diastolic and systolic blood pressure readings for each participant across all visits. To do this, we use "data_means," the mean value for each physiological measurement we calculated earlier on a participant-by-participant basis.

data_means <- aggregate(labkey.data, list(ParticipantID = 
labkey.data$participantid), mean, na.rm = TRUE);
library(Cairo);
Cairo(file="${imgout:diastol_v_systol_means_figure.png}", type="png");
plot(data_means$diastolicbloodpressure, data_means$systolicbloodpressure,
main="R Report: Diastolic vs. Systolic Pressures: Means",
ylab="Systolic (mm Hg)", xlab="Diastolic (mm Hg)", ylim =c(60, 200));
abline(lsfit(data_means$diastolicbloodpressure, data_means$systolicbloodpressure));
dev.off();

This time, the plotted regression line for diastolic vs. systolic pressures shows a non-zero slope. Looking at our data on a participant-by-participant basis provides insights that might be obscured when looking at all measurements in aggregate.

Create Multiple Plots

There are two ways to get multiple images to appear in the report produced by a single script.

Single Plot Per Report Section

The first and simplest method of putting multiple plots in the same report places separate graphs in separate sections of your report. Use separate pairs of device on/off calls (e.g., png() and dev.off()) for each plot you want to create. Make sure that the ${imgout:} parameters are unique. Here's a simple example:

png(filename="${imgout:labkeyl_png}");
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab= "L", xlab="LabKey",
xlim= c(0, 100), ylim=c(0, 100), main="LabKey in R: Report Section 1");
dev.off();

png(filename="${imgout:labkey2_png}");
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab= "L", xlab="LabKey",
xlim= c(0, 100), ylim=c(0, 100), main="LabKey in R: Report Section 2");
dev.off();

Multiple Plots Per Report Section

There are various ways to place multiple plots in a single section of a report. Two examples are given here, the first using par() and the second using layout().

Example: Four Plots in a Single Section: Using par()

This script demonstrates how to put multiple plots on one figure to create a regression panel layout. It uses standard R libraries for the arrangement of plots, and Cairo for creation of the plot image itself. It creates a single graphics file but partitions the ‘surface’ of the image into multiple sections using the mfrow and mfcol arguments to par().

library(Cairo);
data_means <- aggregate(labkey.data, list(ParticipantID =
labkey.data$participantid), mean, na.rm = TRUE);
Cairo(file="${imgout:multiplot.png}", type="png")
op <- par(mfcol = c(2, 2)) # 2 x 2 pictures on one plot
c11 <- plot(data_means$diastolicbloodpressure, data_means$weight,
xlab="Diastolic Blood Pressure (mm Hg)", ylab="Weight (kg)",
mfg=c(1, 1))
abline(lsfit(data_means$diastolicbloodpressure, data_means$weight))
c21 <- plot(data_means$diastolicbloodpressure, data_means$systolicbloodpressure,
xlab="Diastolic Blood Pressure (mm Hg)",
ylab="Systolic Blood Pressure (mm Hg)", mfg= c(2, 1))
abline(lsfit(data_means$diastolicbloodpressure, data_means$systolicbloodpressure))
c12 <- plot(data_means$diastolicbloodpressure, data_means$pulse,
xlab="Diastolic Blood Pressure (mm Hg)",
ylab="Pulse Rate (Beats/Minute)", mfg= c(1, 2))
abline(lsfit(data_means$diastolicbloodpressure, data_means$pulse))
c22 <- plot(data_means$diastolicbloodpressure, data_means$temp,
xlab="Diastolic Blood Pressure (mm Hg)",
ylab="Temperature (Degrees C)", mfg= c(2, 2))
abline(lsfit(data_means$diastolicbloodpressure, data_means$temp))
par(op); #Restore graphics parameters
dev.off();

Example: Three Plots in a Single Section: Using layout()

This script uses the standard R libraries to display multiple plots in the same section of a report. It uses the layout() command to arrange multiple plots on a single graphics surface that is displayed in one section of the script's report.

The first plot shows blood pressure and weight progressing over time for all participants. The lower scatter plots graph blood pressure (diastolic and systolic) against weight.

library(Cairo);
Cairo(file="${imgout:a}", width=900, type="png");
layout(matrix(c(3,1,3,2), nrow=2));
plot(weight ~ systolicbloodpressure, data=labkey.data);
plot(weight ~ diastolicbloodpressure, data=labkey.data);
plot(labkey.data$date, labkey.data$systolicbloodpressure, xaxt="n",
col="red", type="n", pch=1);
points(systolicbloodpressure ~ date, data=labkey.data, pch=1, bg="light blue");
points(weight ~ date, data=labkey.data, pch=2, bg="light blue");
abline(v=labkey.data$date[3]);
legend("topright", legend=c("bpsys", "weight"), pch=c(1,2));
dev.off();

Related Topics




Lattice Plots


The "lattice" R package provides presentation-quality, multi-plot graphics. This page supplies a simple script to demonstrate the use of Lattice graphics in the LabKey R environment.

Before you can use the Lattice package, it must be installed on your server. You will load the lattice package at the start of every script that uses it:

library("lattice");

Display a Volcano

The Lattice Documentation on CRAN provides a Volcano script to demonstrate the power of Lattice. The script below has been modified to work on LabKey R:

library("lattice");  

p1 <- wireframe(volcano, shade = TRUE, aspect = c(61/87, 0.4),
light.source = c(10,0,10), zlab=list(rot=90, label="Up"),
ylab= "North", xlab="East", main="The Lattice Volcano");
g <- expand.grid(x = 1:10, y = 5:15, gr = 1:2);
g$z <- log((g$x^g$gr + g$y^2) * g$gr);

p2 <- wireframe(z ~ x * y, data = g, groups = gr,
scales = list(arrows = FALSE),
drape = TRUE, colorkey = TRUE,
screen = list(z = 30, x = -60));

png(filename="${imgout:a}", width=500);
print(p1);
dev.off();

png(filename="${imgout:b}", width=500);
print(p2);
dev.off();

The report produced by this script will display two graphs that look like the following:

Related Topics




Participant Charts in R


You can use the Participant Chart checkbox in the R Report Builder to create charts that display your R report results on a participant-by-participant basis. If you wish to create a participant chart in a test environment, install the example study and use it as a development sandbox.

Create and View Simple Participant Charts

  • In the example study, open the PhysicalExam dataset.
  • Select (Charts/Reports) > Create R Report.
  • On the Source tab, begin with a script that shows data for all participants. Paste the following in place of the default content.
png(filename="${imgout:a}", width=900);
plot(labkey.data$systolicbp, labkey.data$date);
dev.off();
  • Click the Report tab to view the scatter plot data for all participants.
  • Return to the Source tab.
  • Scroll down and click the triangle to open the Study Options section.
  • Check Participant Chart.
  • Click Save.
  • Name your report "Participant Systolic" or another name you choose.

The participant chart option subsets the data handed to an R script by filtering on a participant ID. You can later step through per-participant charts using this option. The labkey.data dataframe may contain one or more rows of data, depending on the content of the dataset you are working with. Next, reopen the R report:

  • Return to the data grid of the "PhysicalExam" dataset.
  • Select (Charts/Reports) > Participant Systolic (or the name you gave your report).
  • Click Previous Participant.
  • You will see Next Participant and Previous Participant links that let you step through charts for each participant:

Advanced Example: Create Participant Charts Using Lattice

You can create a panel of charts for participants using the lattice package. If you select the participant chart option on the source tab, you will be able to see each participant's panel individually when you select the report from your data grid.

The following script produces lattice graphs for each participant showing systolic blood pressure over time:

library(lattice);
png(filename="${imgout:a}", width=900);
plot.new();
xyplot(systolicbp ~ date| participantid, data=labkey.data,
type="a", scales=list(draw=FALSE));
update(trellis.last.object(),
strip = strip.custom(strip.names = FALSE, strip.levels = TRUE),
main = "Systolic over time grouped by participant",
ylab="Systolic BP", xlab="");
dev.off();

The following script produces lattice graphics for each participant showing systolic and diastolic blood pressure over time (points instead of lines):

library(lattice);
png(filename="${imgout:b}", width=900);
plot.new();

xyplot(systolicbp + diastolicbp ~ date | participantid,
data=labkey.data, type="p", scales=list(draw=FALSE));
update(trellis.last.object(),
strip = strip.custom(strip.names = FALSE, strip.levels = TRUE),
main = "Systolic & Diastolic over time grouped by participant",
ylab="Systolic/Diastolic BP", xlab="");
dev.off();

After you save these two R reports with descriptive names, you can go back and review individual graphs participant-by-participant. Use the (Reports) menu available on your data grid.

Related Topics




R Reports with knitr


The knitr visualization package can be used with R in either HTML or Markdown pages to create dynamic reports. This topic will help you get started with some examples of how to interweave R and knitr.

Topics

Install R and knitr

  • If you haven't already installed R, follow these instructions: Install R.
  • Open the R graphical user interface. On Windows, a typical location would be: C:\Program Files\R\R-3.0.2\bin\i386\Rgui.exe
  • Select Packages > Install package(s).... Select a mirror site, and select the knitr package.
  • OR enter the following: install.packages('knitr', dependencies=TRUE)
    • Select a mirror site and wait for the knitr installation to complete.

Develop knitr Reports

  • Go to the dataset you wish to visualize.
  • Select (Charts/Reports) > Create R Report.
  • On the Source tab, enter your HTML or Markdown page with knitr code. (Scroll down for example pages.)
  • Specify which source to process with knitr. Under knitr Options, select HTML or Markdown.
  • Select the Report tab to see the results.

Advanced Markdown Options

If you are using rmarkdown v2 and check the box to "Use advanced rmarkdown output_options (pandoc only)", you can enter a list of param=value pairs in the box provided. Enter only the param=value pairs (the inner portion of the example below); they will be enclosed in an "output_options=list()" call.

output_options=list(
  param1=value1,
  param2=value2
)

Supported options include those in the "html_document" output format. Learn more in the rmarkdown documentation.

Sample param=value options you can include:

  • css: Specify a custom stylesheet to use.
  • fig_width and fig_height: Control the size of figures included.
  • fig_caption: Control whether figures are captioned.
  • highlight: Specify a syntax highlighting style, such as pygments, monochrome, haddock, or default. NULL will prevent syntax highlighting.
  • theme: Specify the Bootstrap theme to apply to the page.
  • toc: Set to TRUE to include a table of contents.
  • toc_float: Set to TRUE to float the table of contents to the left.
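
For instance, to add a floating table of contents and wider figures (hypothetical values), you would enter just the pairs:

toc=TRUE,
toc_float=TRUE,
fig_width=8
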
If the box is unchecked, or if you check the box but provide no param=value pairs, rmarkdown will use the default output format:
html_document(
keep_md=TRUE,
self_contained=FALSE,
fig_caption=TRUE,
theme=NULL,
css=NULL,
smart=TRUE,
highlight="default")

Note that pandoc is only supported for rmarkdown v2, and some formats supported by pandoc are not supported here.

If you notice report issues, such as graphs showing as small thumbnails, or HTML not rendering as expected in R reports, you may need to upgrade your server's version of pandoc.

R/knitr Scripts in Modules

R script knitr reports are also available as custom module reports. The script file must have either a .rhtml or .rmd extension, for HTML or markdown documents, respectively. For a file-based module, place the .rhtml/.rmd file in the same location as .r files, as shown below. For module details, see Map of Module Files.

MODULE_NAME
  reports/
    schemas/
      SCHEMA_NAME/
        QUERY_NAME/
          MyRScript.r -- R report
          MyRScript.rhtml -- R/knitr report
          MyRScript.rmd -- R/knitr report

Declaring Script Dependencies

To fully utilize the report designer (called the "R Report Builder" in the LabKey user interface), you can declare JavaScript or CSS dependencies for knitr reports. This ensures that the dependencies are downloaded before R scripts are run on the "reports" tab in the designer. If these dependencies are not specified then any JavaScript in the knitr report may not run correctly in the context of the script designer. Note that reports that are run in the context of the Reports web part will still render correctly without needing to explicitly define dependencies.

Reports can either be created via the LabKey Server UI in the report designer directly or included as files in a module. Reports created in the UI are editable via the Source tab of the designer. Open Knitr Options to see a text box where a semi-colon delimited list of dependencies can be entered. Dependencies can be external (via HTTP) or local references relative to the labkeyWebapp path on the server. In addition, the name of a client library may be used. If the reference does not have a .js or .css extension then it will be assumed to be a client library (somelibrary.lib.xml). The .lib.xml extension is not required. Like local references, the path to the client library is relative to the labkeyWebapp path.

File based reports in a module cannot be edited in the designer although the "source" tab will display them. However you can still add a dependencies list via the report's metadata file. Dependencies can be added to these reports by including a <dependencies> section underneath the <R> element. A sample metadata file:

<?xml version="1.0" encoding="UTF-8"?>
<ReportDescriptor xmlns="http://labkey.org/query/xml">
<label>My Knitr Report</label>
<description>Relies on dependencies to display in the designer correctly.</description>
<reportType>
<R>
<dependencies>
<dependency path="http://external.com/jquery/jquery-1.9.0.min.js"/>
<dependency path="knitr/local.js"/>
<dependency path="knitr/local.css"/>
</dependencies>
</R>
</reportType>
</ReportDescriptor>

The metadata file must be named <reportname>.report.xml and be placed alongside the report of the same name under (modulename/resources/reports/schemas/...).

HTML Example

To use this example:

  • Install the R package ggplot2
  • Install the Demo Study.
  • Create an R report on the dataset "Physical Exam"
  • Copy and paste the knitr code below into the Source tab of the R Report Builder.
  • Scroll down to the Knitr Options node, open the node, and select HTML.
  • Click the Report tab to see the knitr report.
<table>
<tr>
<td align='center'>
<h2>Scatter Plot: Blood Pressure</h2>
<!--begin.rcode echo=FALSE, warning=FALSE
library(ggplot2);
opts_chunk$set(fig.width=10, fig.height=6)
end.rcode-->
<!--begin.rcode blood-pressure-scatter, warning=FALSE, message=FALSE, echo=FALSE, fig.align='center'
qplot(labkey.data$diastolicbp, labkey.data$systolicbp,
main="Diastolic vs. Systolic Pressures: All Visits",
ylab="Systolic (mm Hg)", xlab="Diastolic (mm Hg)", ylim =c(60, 200), xlim=c(60,120), color=labkey.data$temp);
end.rcode-->
</td>
<td align='center'>
<h2>Scatter Plot: Body Temp vs. Body Weight</h2>
<!--begin.rcode temp-weight-scatter, warning=FALSE, message=FALSE, echo=FALSE, fig.align='center'
qplot(labkey.data$temp, labkey.data$weight,
      main="Body Temp vs. Body Weight: All Visits",
      xlab="Body Temp (C)", ylab="Body Weight (kg)",
      xlim=c(35, 40), color=labkey.data$height);
end.rcode-->
</td>
</tr>
</table>

The rendered knitr report displays the two scatter plots side by side.

Markdown v2

Administrators can enable Markdown v2 when enlisting an R engine through the Views and Scripting Configuration page. When enabled, Markdown v2 will be used when rendering knitr R reports; if not enabled, Markdown v1 is used.

Markdown v2 requires independent installation of the following on the machine running the R engine: the rmarkdown R package and pandoc.

This then enables the Rmarkdown v2 syntax for R reports. The system does not currently perform any verification of the user's setup. If the configuration is enabled when enlisting the R engine but the packages are not properly set up, report rendering will fail.
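One way to check your own setup from the R console (a hedged diagnostic; LabKey does not run this verification for you) is to confirm that the rmarkdown package loads and can locate pandoc:

install.packages("rmarkdown")   # once per R installation
library(rmarkdown)
rmarkdown::pandoc_available()   # TRUE if a usable pandoc is on the PATH
rmarkdown::pandoc_version()     # shows the detected pandoc version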

Syntax differences are noted here: http://rmarkdown.rstudio.com/authoring_migrating_from_v1.html

Markdown v1 Example

# Scatter Plot: Blood Pressure
# The chart below shows data from all participants

```{r setup, echo=FALSE}
# set global chunk options: images will be 7x5 inches
opts_chunk$set(fig.width=7, fig.height=5)
```

```{r graphic1, echo=FALSE}
plot(labkey.data$diastolicbp, labkey.data$systolicbp,
     main="Diastolic vs. Systolic Pressures: All Visits",
     ylab="Systolic (mm Hg)", xlab="Diastolic (mm Hg)", ylim=c(60, 200));
abline(lsfit(labkey.data$diastolicbp, labkey.data$systolicbp));
```

Another example

# Scatter Plot: Body Temp vs. Body Weight
# The chart below shows data from all participants.

```{r graphic2, echo=FALSE}
plot(labkey.data$temp, labkey.data$weight,
     main="Temp vs. Weight",
     xlab="Body Temp (C)", ylab="Body Weight (kg)", xlim=c(35, 40));
```


Premium Resource Available

Subscribers to premium editions of LabKey Server can learn how to incorporate a plotly graph with the example code in the topic Premium Resource: Show Plotly Graph in R Report.


Learn more about premium editions




Premium Resource: Show Plotly Graph in R Report


Related Topics

  • R Reports with knitr
  • Proxy Servlets: Another way to use plotly with LabKey data.



Input/Output Substitutions Reference


An R script uses input substitution parameters to generate the names of input files and to import data from a chosen data grid. It then uses output substitution parameters either to place image/data files directly in your report or to include download links to these files. Substitutions take the form ${param}, where 'param' is the substitution parameter. You can find the substitution syntax directly in the R Report Builder on the Help tab.

Input and Output Substitution Parameters

Valid substitutions:

input_data: <name>
The input dataset, a tab-delimited table. LabKey Server automatically reads your input dataset (a tab-delimited table) into the data frame called labkey.data. If you want tighter control over the method of data upload, you can perform the data table upload yourself. The 'input_data:' prefix indicates that the substitution refers to the data file for the grid; the <name> portion can be set to any non-empty value:

# ${input_data:inputTsv}
labkey.data <- read.table("inputTsv", header=TRUE, sep="\t");
labkey.data

imgout: <name>
An image output file (such as jpg, png, etc.) that will be displayed as a Section of a View on LabKey Server. The 'imgout:' prefix indicates that the output file is an image, and the <name> substitution identifies the unique image produced after you call dev.off(). The following script displays a .png image in a View:

# ${imgout:labkeyl.png}
png(filename="labkeyl.png")
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab="L", xlab="LabKey",
     xlim=c(0, 100), ylim=c(0, 100), main="LabKey in R")
dev.off()

tsvout: <name>
A TSV text file that is displayed on LabKey Server as a section within a report. No downloadable file is created. For example:

# ${tsvout:tsvfile}
write.table(labkey.data, file="tsvfile", sep="\t",
            qmethod="double", col.names=NA)

txtout: <name>
A text file that is displayed on LabKey Server as a section within a report. No downloadable file is created. For example:

# ${txtout:csvfile}
write.csv(labkey.data, file="csvfile")

pdfout: <name>
A PDF output file that can be downloaded from LabKey Server. The 'pdfout:' prefix indicates that the expected output is a PDF file. The <name> substitution identifies the unique file produced after you call dev.off().

# ${pdfout:labkeyl.pdf}
pdf(file="labkeyl.pdf")
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab="L", xlab="LabKey",
     xlim=c(0, 100), ylim=c(0, 100), main="LabKey in R")
dev.off()

psout: <name>
A PostScript output file that can be downloaded from LabKey Server. The 'psout:' prefix indicates that the expected output is a PostScript file. The <name> substitution identifies the unique file produced after you call dev.off().

# ${psout:labkeyl.eps}
postscript(file="labkeyl.eps", horizontal=FALSE, onefile=FALSE)
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab="L", xlab="LabKey",
     xlim=c(0, 100), ylim=c(0, 100), main="LabKey in R")
dev.off()

fileout: <name>
A file output that can be downloaded from LabKey Server, and may be of any file type. For example, use fileout in the place of tsvout to let users download a TSV instead of seeing it within the page:

# ${fileout:tsvfile}
write.table(labkey.data, file="tsvfile", sep="\t",
            qmethod="double", col.names=NA)

htmlout: <name>
A text file that is displayed on LabKey Server as a section within a View. The output differs from the txtout: replacement in that no HTML escaping is done. This is useful when you have a report that produces HTML output. No downloadable file is created:

txt <- paste("<i>Click on the link to visit LabKey:</i>
    <a target='blank' href='https://www.labkey.org'>LabKey</a>")
# ${htmlout:output}
write(txt, file="output")

svgout: <name>
An SVG file that is displayed on LabKey Server as a section within a View. htmlout can be used to render SVG outputs as well; however, using svgout will generate a more appropriate thumbnail image for the report. No downloadable file is created:

# ${svgout:output.svg}
svg("output.svg", width=4, height=3)
plot(x=1:10, y=(1:10)^2, type='b')
dev.off()

Implicit Variables

Each R script contains implicit variables that are inserted before your source script. Implicit variables are R data types and may contain information that can be used by the source script.

Implicit variables:

labkey.data
The data frame into which the input dataset is automatically read. The code to generate the data frame is:

# ${input_data:inputFileTsv}
labkey.data <- read.table("inputFileTsv", header=TRUE, sep="\t",
                          quote="", comment.char="")

Learn more in R Reports: Access LabKey Data.

labkey.url.path
The path portion of the current URL, which omits the base context path, action, and URL parameters. For the URL http://localhost:8080/home/test/study-begin.view, the path portion would be: /home/test/

labkey.url.base
The base portion of the current URL. For the URL http://localhost:8080/home/test/study-begin.view, the base portion would be: http://localhost:8080/

labkey.url.params
The list of parameters on the current URL and in any data filters that have been applied. The parameters are represented as a list of key/value pairs.

labkey.user.email
The email address of the current user.
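As a minimal illustrative sketch (the token name context.txt is arbitrary, chosen for this example), the implicit variables can be echoed into a report section using the txtout: substitution described above:

# ${txtout:context.txt}
# Write the implicit context variables into a plain-text report section.
out <- c(paste("User:", labkey.user.email),
         paste("Base URL:", labkey.url.base),
         paste("Folder path:", labkey.url.path))
write(out, file="context.txt")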

Using Regular Expressions with Replacement Token Names

Sometimes it can be useful to have flexibility when binding token names to replacement parameters, for example when a script generates file artifacts but does not know the file names in advance. Using the syntax regex() in place of a token name (where LabKey Server controls the token-name-to-file mapping) results in the following actions:

  • Any script-generated files not mapped to a replacement will be evaluated against the file's name using the regex.
  • If a file matches the regex, it will be assigned to the replacement and rendered accordingly.

<replacement>:regex(<expression>)
The following example will find all files generated by the script with the extension '.gct'. If any are found, they will be assigned and rendered to the replacement parameter (in this case as a download link):

#${fileout:regex(.*?(\.gct))}
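As a hedged sketch of how this might be used (the file names below are illustrative), a script can emit several .gct files without knowing the names in advance; each file matching the regex above is picked up and rendered as a download link:

# Illustrative only: write two .gct files; both match the regex declared above.
for (name in c("resultA.gct", "resultB.gct")) {
    write.table(labkey.data, file=name, sep="\t", row.names=FALSE)
}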

Cairo or GDD Packages

You may need to use the Cairo or GDD graphics packages in place of jpeg() and png() if your LabKey Server runs on a "headless" Unix server. Make sure that the appropriate package is installed in R and loaded by your script before calling either of these functions.

GDD() and Cairo() examples. If you are using GDD or Cairo, you might use scripts like the following instead:

library(Cairo);
Cairo(file="${imgout:labkeyl_cairo.png}", type="png");
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab="L", xlab="LabKey",
     xlim=c(0, 100), ylim=c(0, 100), main="LabKey in R");
dev.off();

library(GDD);
GDD(file="${imgout:labkeyl_gdd.jpg}", type="jpeg");
plot(c(rep(25,100), 26:75), c(1:100, rep(1, 50)), ylab="L", xlab="LabKey",
     xlim=c(0, 100), ylim=c(0, 100), main="LabKey in R");
dev.off();
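If you are unsure whether your server's R installation can produce PNG images natively, one quick check from the R console (a hedged diagnostic, not part of the original example) uses base R's capabilities():

capabilities("png")   # FALSE if this R build lacks native png support
capabilities("X11")   # typically FALSE on headless servers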


Tutorial: Query LabKey Server from RStudio


This tutorial shows you how to pull data directly from LabKey Server into RStudio for analysis and visualization.

Tutorial Steps:

Install RStudio

  • If necessary, install R version 3.0.1 or later.
  • If necessary, install RStudio Desktop on your local machine.

Install Rlabkey Package

  • Open RStudio.
  • In the Console, enter the following:
install.packages("Rlabkey")
  • Follow any prompts to complete the installation.
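To confirm the package installed correctly (an optional check, not part of the original steps), load it and print its version:

library(Rlabkey)            # loads without error when installed correctly
packageVersion("Rlabkey")   # prints the installed version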

Query Public Data