Premium Feature — Available with all Premium Editions of LabKey Server. Learn more or contact LabKey.

This topic provides guidance for planning and developing your ETLs. ETLs are defined in and run on your LabKey instance, but may interact with and even run remote processes on other databases.

Procedure for Development

An ETL is controlled by a structured XML definition that directs its actions. As you develop your own custom ETLs, a tiered approach is recommended: the later steps are easier to add and confirm once you are confident that the earlier steps work as expected.

  1. Identify the source data. Understand its structure, location, and access requirements.
  2. Create (or identify) the destination, or target, table. Define the location and final requirements for the data you will be loading.
  3. Create an ETL to perform your transformations, truncating the target each time (the simplest pathway); a minimal example follows this list.
  4. If your ETL should run on a schedule, add that schedule to the definition.
  5. Add additional options, such as a timestamp-based filter strategy.
  6. Switch to a target data merge if desired.
  7. Add handling for rows deleted from the source.
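As an example of step 3, a minimal ETL definition might look like the following sketch. All schema, query, and step names here are placeholders; the truncate option drops the target's existing rows on each run:

    <etl xmlns="http://labkey.org/etl/xml">
        <name>PatientCopy</name>
        <description>Copy patient data, truncating the target each run.</description>
        <transforms>
            <transform id="copyPatients" type="org.labkey.di.pipeline.TransformTask">
                <source schemaName="sourceSchema" queryName="sourceQuery"/>
                <destination schemaName="targetSchema" queryName="targetQuery" targetOption="truncate"/>
            </transform>
        </transforms>
    </etl>

Once this basic copy runs correctly, layer in the schedule, filter strategy, merge, and delete handling described in the later steps.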
In addition, outlining the overall goal and scope of your ETL up front will help you get started with the right strategy:

Source Considerations

The source data may be:

  • In the same LabKey folder or project where the ETL will be defined and run.
  • In a different folder or project on the same LabKey server.
  • Accessible via an external schema or a linked schema, such that it appears local even though it actually lives on another server or database.
  • Accessible by remote connection to another server or database.
  • Either a single table or multiple tables in any of the above locations.
In the <source> element of each transform, you specify the schemaName and the queryName within that schema. If your source data is not on the same server, you can use an external data source configuration or a remote connection to access it. If the source data is on the same LabKey server but not in the same LabKey folder as your ETL and destination, you can use a linked schema to access it. Check the query browser to be sure you can see your data source: go to (Admin) > Go To Module > Query, find the schema, and make sure it includes the queryName you will use.
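For example, if you have configured a linked schema (hypothetically named "linkedsource") to expose a query from another folder, the source element references it just like a local schema:

    <source schemaName="linkedsource" queryName="Patients"/>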

Destination Considerations

The destination, or target, of the ETL describes where and how you want your data to end up. Options include:

  • Tables or files.
  • Single or multiple ETL destinations.
  • Within the same LabKey server or on an external server.
  • A particular order of import to multiple destinations.
If your destination is a table, the <destination> element requires the schemaName and the queryName within that schema. If your destination is not on the same server as the ETL, you will need to use an external data source configuration to access it. If your destination is on the same server but in a different LabKey folder, create the ETL in the same folder as the destination and use a linked schema to access the source data. Check the query browser to be sure you can see your destination: go to (Admin) > Go To Module > Query, find the schema, and make sure it includes the queryName you will use.

For table destination ETLs, you also specify how any existing data in the destination query should be treated. Options are explored in the topic: ETL: Target Options.

  • Append: Add new rows to the existing contents.
  • Merge: Update existing rows when the load contains new data for them, leave other existing rows unchanged, and add new rows.
  • Truncate: Drop the existing contents and replace them with the newly loaded contents.
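In the ETL XML, this choice is expressed with the targetOption attribute on the <destination> element. The sketch below uses placeholder names and shows a merge; the nested <deletedRowsSource> element is one way to handle rows deleted from the source (verify the exact attributes against the ETL XML schema for your server version):

    <destination schemaName="study" queryName="Patients" targetOption="merge">
        <deletedRowsSource schemaName="study" queryName="PatientsDeleted"
                           deletedSourceKeyColumnName="RowId" targetKeyColumnName="RowId"/>
    </destination>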
The most common way to target multiple destinations is to write a multi-step ETL. Each step is one <transform>, representing one source and one destination. If you are simply replicating the same data across multiple destinations, the multi-step ETL would have one step per destination, each with the same source. If you are splitting the source data across multiple destinations, the most common approach is to write queries (within LabKey) or views (on an external server) against the source data to shape it to match each destination; each query or view then becomes the source for one step of the multi-step ETL. Often the destination tables have foreign key relationships between them, so the order of the steps (which run top to bottom) is important, as in the sketch below.
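A sketch of a two-destination, multi-step ETL (all names hypothetical). Because steps run top to bottom, the parent table is loaded before the child table that references it:

    <transforms>
        <transform id="loadPatients" type="org.labkey.di.pipeline.TransformTask">
            <source schemaName="sourceSchema" queryName="PatientsShaped"/>
            <destination schemaName="study" queryName="Patients" targetOption="merge"/>
        </transform>
        <transform id="loadVisits" type="org.labkey.di.pipeline.TransformTask">
            <source schemaName="sourceSchema" queryName="VisitsShaped"/>
            <destination schemaName="study" queryName="Visits" targetOption="merge"/>
        </transform>
    </transforms>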

Transformation Needs

  • What kinds of actions need to be performed on your data between the source and target?
  • Do you have existing scripting or procedures running on other databases that you want to leverage? If so, consider having an ETL call stored procedures on those databases, as sketched below.
Review the types of transformation tasks available before settling on an approach.
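For example, a transform step can invoke a stored procedure instead of copying a query. The schema and procedure names below are placeholders:

    <transform id="populateSummary" type="StoredProcedure">
        <description>Run an existing procedure on the external database.</description>
        <procedure schemaName="external" procedureName="PopulateSummary" useTransaction="true"/>
    </transform>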

Decisions and Tradeoffs

There are often many ways to accomplish the same end result. This section covers some considerations that may help you decide how to structure your ETLs.

Multiple Steps in a Single ETL or Multiple ETLs?

  • Do changes to the source affect multiple target datasets at once? If so, consider configuring multiple steps in one ETL definition.
  • Do source changes impact a single target dataset? Consider using multiple ETL definitions, one for each dataset.
  • Are the target queries relational? Consider multiple steps in one ETL definition.
  • Do you need steps to always run in a particular order? Use multiple steps in a single ETL, particularly if one step is long running and needs to occur first. Separate ETLs may run in varying order depending on many performance-affecting factors.
  • Should the entire series of steps run in a single transaction? If so, then use multiple steps in a single ETL.

ETLs Across LabKey Containers

ETLs are constructed as operations in a destination folder, pulling information from a remote or linked source location, or from the local container itself. If the source location is also on your LabKey Server but in a different container, there are two ways to accomplish this with your ETL:

  1. Create a linked schema for the source table in the destination folder. Then your ETL is created in the destination folder and simply provides this linked schema and query name as the source.
  2. Make your LabKey Server a remote connection to itself. You can then access the source folder over the "remote connection", providing the other container path there.
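With the second approach, the transform step uses the remote query transform type and names the remote connection. The connection, schema, and query names below are hypothetical:

    <transform id="pullFromOtherContainer" type="RemoteQueryTransformStep">
        <source schemaName="study" queryName="Demographics" remoteSource="MyServer_RemoteConnection"/>
        <destination schemaName="study" queryName="Demographics" targetOption="merge"/>
    </transform>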

Accessibility and Permissions

  • How does the user who will run the ETL currently access the source data?
  • Will the developer who writes the ETL also run (or schedule) it?
ETL processes are run in the context of a folder.
  • If run manually, they run with the permissions of the initiating user, i.e. the user who clicks Run.
  • If scheduled, they will run as the user who enables them, i.e. the user who checks the enabled box.
  • To run or enable ETLs as a "service user", an administrator can impersonate that service user account, or sign in under it.

Repeatability

  • Do you need your ETL to run on a regular schedule?
  • Is it a one-time migration for a specific table or will you want to make a series of similar migrations?
  • Do you expect the ETL to start from scratch each time (i.e. act on the entire source table), or do you need it to act only on modifications since the last time it was run? If the latter, use an incremental filter strategy, as sketched below.
  • Will the ETL only run in a single folder, or will you want to make it available site-wide? If the latter, consider including it in a module that can be accessed site-wide.
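For a scheduled, incremental ETL, the definition might include elements like these (the timestamp column name is a placeholder; ModifiedSinceFilterStrategy acts only on rows modified since the last run, and the poll interval shown runs the ETL hourly):

    <incrementalFilter className="ModifiedSinceFilterStrategy" timestampColumnName="Modified"/>
    <schedule>
        <poll interval="1h"/>
    </schedule>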
